# JanusAQP: Efficient Partition Tree Maintenance for Dynamic Approximate
Query Processing
Xi Liang (University of Chicago), Stavros Sintos (University of Chicago), and Sanjay Krishnan (University of Chicago)
###### Abstract.
Approximate query processing over dynamic databases, i.e., under
insertions/deletions, has applications ranging from high-frequency trading to
internet-of-things analytics. We present JanusAQP, a new dynamic AQP system,
which supports SUM, COUNT, AVG, MIN, and MAX queries under insertions and
deletions to the dataset. JanusAQP extends static partition tree synopses,
which are hierarchical aggregations of datasets, into the dynamic setting.
This paper contributes new methods for: (1) efficient initialization of the
data synopsis in the presence of incoming data, (2) maintenance of the data
synopsis under insertions/deletions, and (3) re-optimization of the
partitioning to reduce the approximation error. JanusAQP reduces the error of
a state-of-the-art baseline by more than 60% using only 10% of its storage
cost. JanusAQP can process more than 100K updates per second in a single-node
setting while keeping query latency at the millisecond level.
## 1. Introduction
Approximate query processing (AQP) studies principled ways to sacrifice query
result accuracy for faster or more resource-efficient execution
(chaudhuri2017approximate, ; garofalakis2001approximate, ). AQP systems
generally employ reduced-size summaries, or “synopses”, of large datasets that
are faster to process. The simplest of such synopsis structures are histograms
and samples (cormode2011synopses, ; liang2021combining, ; agarwal2013blinkdb,
; lazaridis2001progressive, ), but many others have been proposed in the
literature. More complex synopses are more accurate for specific types of
queries (walenz2019learning, ), specific data settings (poepselland, ), or
even are learned with machine learning models (yang2019deep, ;
hilprecht2019deepdb, ; ma2021learned, ). AQP is particularly interesting and
challenging in a dynamic data setting, where a dataset is continuously
modified with insertions and deletions (garofalakis2001approximate, ;
olma2019taster, ; acharya1999aqua, ). In this setting, hereafter denoted
DAQP, any synopsis data structure has to be continuously maintained online.
As an example use-case, consider a database aggregating per-stock order data
for the NASDAQ exchange (nasdaqbv, ). Suppose that we would like to build a
low-latency SQL interface for approximate aggregate queries over the past
seven days of order data. On a typical day, there are 25M new orders that
correspond to trades that are placed by brokers (up to 70,000 orders in any
given second). A decent fraction of these orders are eventually canceled or
prematurely terminated, for a variety of financial reasons. Thus, this
database is highly dynamic with a large volume of new insertions (new orders)
and a small but significant number of deletions (canceled orders). This paper
explores such scenarios with similar motivating applications in internet-of-
things monitoring and enterprise stream processing.
Simple synopses like 1D histograms and uniform samples are easy to maintain
dynamically. However, such structures are often inaccurate in high-dimensional
data and selective query workloads. More complex synopsis structures, e.g.,
(yang2019deep, ; liang2021combining, ) can be optimized for a particular
instance (dataset and query workload), but are generally harder to maintain
online. For example, recently proposed learned synopses require expensive
retraining procedures which limit insertion/deletion throughput (yang2019deep,
; hilprecht2019deepdb, ; ma2021learned, ). Even classical stratified samples
may have to be periodically re-optimized and re-balanced based on query and
workload shifts (agarwal2013blinkdb, ). These expensive (re-)initialization
procedures can significantly hurt insertion throughput, and accordingly,
almost all existing AQP systems focus on the static data warehousing
setting (a notable exception being the AQUA project (acharya1999aqua, ) from
20 years ago). Unfortunately, the existing techniques that _are_ designed for
dynamic data, such as sketches and mergeable summaries (gan2020coopstore, ;
agarwal2012mergeable, ; poepselland, ), often cannot handle arbitrary
deletions or aggregation queries with arbitrary predicates easily. Thus, it is
understood that most synopsis data structures have at least one of the
following pitfalls in our desired dynamic setting: throughput, drift, or
generality (chaudhuri2017approximate, ).
Figure 1. JanusAQP manages a collection of DPT synopses by maintaining them
online while periodically re-optimizing partitioning and sample allocation.
This paper explores the DAQP problem and studies ways that we can mitigate the
pitfalls of prior approaches with a flexible synopsis data structure that can
continuously re-optimize itself. We present JanusAQP, a new DAQP system, which
supports SUM, COUNT, AVG, MIN, and MAX queries with predicates under arbitrary
insertions and deletions to the dataset. The main data structure in JanusAQP
is a dynamic extension of our recently published work (liang2020fast, ;
liang2021combining, ), which we call a Dynamic Partition Tree (DPT). DPT is a
two-layer synopsis structure that consists of a: (1) hierarchical partitioning
of a dataset into a tree, and (2) a uniform sample of data for each of the
leaf partitions (effectively a stratified sample over the leaves). An
optimizer determines the best partitioning conditions and sample allocations
to meet a user’s performance goals. For each partition (node in the tree), we
calculate the SUM, COUNT, MIN, and MAX values of the partition. Any desired
SUM, COUNT, AVG, MIN, and MAX query can be efficiently decomposed into two
parts with the structure: a combination of the partial aggregates where the
predicate fully covers a partition in the tree, and an approximate part where
the predicate partially covers a leaf node (and can be estimated with a
sample). More importantly, this structure is essentially a collection of
materialized views and samples, which can be maintained incrementally.
A core contribution of JanusAQP is online synopsis optimization. JanusAQP
continuously monitors the accuracy of all of its DPT synopses to account for
data and workload drift. When a synopsis is no longer accurate, it triggers a
re-optimization procedure that resamples and repartitions the data. This re-
optimization problem is both a significant algorithmic and systems challenge.
From an algorithmic perspective, JanusAQP needs an efficient way to determine
the optimal partitioning conditions in dynamic data. We propose an efficient
algorithm based on a dynamic range tree index that finds a partitioning that
controls the minimax query error (up to an approximation factor). From a
systems perspective, re-optimization poses a bit of a logistical challenge.
New data will arrive as the new synopsis data structure is being constructed.
We design an efficient multi-threaded catch-up processing algorithm that
synchronizes new data and historical data without sacrificing the statistical
rigor of the estimates.
Our prototype version of JanusAQP is integrated with the message-broker
framework, Apache Kafka. Insertions, deletions, and user queries are processed
as Kafka topics allowing for a multi-threaded DAQP server. In our experiments,
we show that DPT is significantly more accurate than other baselines and
state-of-the-art systems such as reservoir sampling, stratified reservoir
sampling, and the DeepDB system (hilprecht2019deepdb, ). It also achieves a
throughput of nearly 200K records per second while serving sub-millisecond
query latencies.
Overall, JanusAQP has the following benefits over previously known indexes
for approximate query processing. It handles dynamic updates efficiently (in
contrast to the static indexes PASS (liang2021combining, ) and VerdictDB
(park2018verdictdb, )); it provides theoretical guarantees on the confidence
intervals (in contrast to machine-learning-based indexes such as DeepDB
(hilprecht2019deepdb, )); its query procedure accesses only a small synopsis
of the data and never touches the original dataset, so the communication
throughput is low (in contrast to other tree-based indexes such as
(joshi2008materialized, ; jurgens1998r, )); and its estimation error is
always low without making any assumption about the spatial/value-domain
distribution of the data (in contrast to (lazaridis2001progressive, )). We
highlight all benefits/differences in the related work.
## 2. Background
We first introduce the core concepts behind the synopses used in this work.
### 2.1. Dynamic Approximate Query Processing
We assume an initial database table $\mathcal{D}^{(0)}$. This table
$\mathcal{D}^{(0)}$ is continuously modified through a stream of insertions
and deletions of tuples. As a design principle, we assume that insertions are
common but deletions are rare. With each insertion or deletion operation, the
table evolves over time with a new state at each time step $i$:
$\mathcal{D}^{(0)},\mathcal{D}^{(1)},\ldots,\mathcal{D}^{(i)},\mathcal{D}^{(i+1)},\ldots$
A synopsis is a data structure that summarizes the evolving table. For each
$\mathcal{D}^{(i)}$, there is a corresponding synopsis $\Sigma^{(i)}$:
$\Sigma^{(0)},\Sigma^{(1)},\ldots,\Sigma^{(i)},\Sigma^{(i+1)},\ldots$
In DAQP, the problem is to answer queries as best as possible from only the
$\Sigma^{(i)}$. For a query $q$, the estimation error is defined as the
difference between the estimated result (using the synopsis) and the true
result (using the current database state):
$\textsf{Error}(q,\Sigma^{(i)})=|q(\mathcal{D}^{(i)})-q(\Sigma^{(i)})|$
We further assume that there is sufficient cold/archival storage to store the
current state of the table $\mathcal{D}^{(i)}$. This data can be accessed in
an offline way for initialization, re-optimization, and logging purposes but
not for query processing.
Figure 2. The core data structure in JanusAQP is based on the PASS data
structure (liang2021combining, ) that summarizes a dataset with a tree of
aggregates at different levels of resolution (granularity of partitioning).
Associated with the leaf nodes are stratified samples. The two-stage synopsis
structure can be optimally partitioned to minimize error.
There are a few notable differences from the “streaming” setting. First, most
data streaming models do not support arbitrary record deletion, i.e., as
studied in (poepselland, ). We find that in many use-cases limited support for
deletion is needed due to records that are invalidated through an out-of-band,
asynchronous data process like fraud detection or financial auditing. Next,
most streaming settings enforce a single pass over the data with limited
overall memory. We do not make this assumption and allow for archival storage
and slow access to old data. This is a more realistic AQP setting where all
data are stored, however, there is limited working memory for a fast,
approximate query answering service.
### 2.2. Related work
There is significant research in histograms and their variants that is highly
relevant to this project (jagadish1998optimal, ; koudas2000optimal, ;
jagadish2001global, ). V-Optimal histograms construct buckets to minimize the
cumulative variance (jagadish1998optimal, ). There are works on multi-
dimensional histograms (lazaridis2001progressive, ), and histograms in the
streaming/dynamic setting (guha2006approximation, ; gilbert2002fast, ). Like
histograms, JanusAQP constructs partitions over attribute domains and
aggregates within the partition. However, we contribute different partition
optimization criteria than typically used in histograms and novel techniques
based on geometric data structures to scale partitioning into higher
dimensions. Furthermore, our system works in the general dynamic setting,
unlike (gilbert2002fast, ) where the number of total items must remain the
same. Another related area of research is mergeable summaries, which compute
a partition of the data and optimize sampling at a data-partition level
(rong2020approximate, ; liang2020fast, ; agarwal2012mergeable, ;
gan2020coopstore, ; poepselland, ). The DPT used in JanusAQP behaves much
like a mergeable summary but supports a far greater breadth of downstream queries.
Furthermore, some prior work mostly focuses on a streaming setting without
support for deletion (poepselland, ). Similarly, sketches (cormode2011sketch,
; cormode2011synopses, ) have been used to find a summary of data to answer
approximately a variety of queries efficiently. However, they also do not
handle arbitrary range queries using space independent of the size of the full
database. Mergeable summaries and sketches usually focus on optimizing
different types of problems such that frequency queries, percentile queries,
etc. This paper shows how to operationalize a general DAQP system for
aggregation queries with both systems and algorithmic contributions relating
to the design of dynamic synopses and their continuous optimization. Our
system can handle arbitrary updates and can estimate any arbitrary predicate
query with provable confidence intervals.
In databases, a number of tree-based indexes, such as the improved $R^{*}$
tree (jurgens1998r, ), have been used to support range aggregation queries
efficiently. The space of such indexes is (super-)linear with respect to the
input items. The query procedures need to have access to the entire tree-index
that contains the entire dataset. That leads to high communication throughput
or high I/O cost compared to JanusAQP, where queries are executed on a small
synopsis of the data stored on a local machine or in RAM with zero
communication throughput. In another line of work, tree-based data structures
are used to return a set of $k$ uniform samples in a query range. More
specifically, in (joshi2008materialized, ; wang2015spatial, ) the authors
construct indexes such that given a query range $Q$ and a parameter $k$, they
return $k$ uniform samples from the input items that lie inside $Q$. These
samples can be used to estimate any aggregation query in the range query $Q$.
There are several issues with these indexes in our setting. First, the design
of the index in (joshi2008materialized, ) makes their structure inherently
static, so it cannot be maintained efficiently. Furthermore, the estimation
error in both indexes is the same as the error in the simple uniform random
sampling scheme. In Section 6, we show that the error of our new index on real
data sets is always less than half of the error in uniform random sampling, so
our new index always outperforms these range sampling indexes. In addition,
the communication throughput or the I/O operations during a query procedure of
these indexes depend on $N$, i.e. the size of the input set, so they cannot be
used on big data. Finally, the dynamic tree structure in
(lazaridis2001progressive, ) can store a synopsis of data in a tree-based
index and use only this synopsis/index to return estimates of range
aggregation queries, which is also the case in our system JanusAQP. However,
there are two main differences. The index in (lazaridis2001progressive, )
returns a good estimation only if an assumption about the spatial/value-domain
distribution of the data is made, while JanusAQP uses stratified sampling and
it always returns unbiased estimators with small error without assuming any
distribution over the data. Furthermore, while their partition tree in
(lazaridis2001progressive, ) can handle dynamic updates, its
structure/partitioning remains unchanged. In our index we maintain a near-
optimal partitioning over the updates. As we show in Section 6.7, running
experiments on real data, re-partitioning is essential in order to maintain a
small error.
Dynamic AQP problems have been discussed in prior work
(garofalakis2001approximate, ), however, most existing systems have focused on
a static data warehousing setting (agarwal2013blinkdb, ). The Aqua system
(acharya1999aqua, ) did consider the maintenance of its synopsis data
structures under updates. However, these synopses were relatively simple:
only samples and histograms. Furthermore, we discuss systems issues, such as
catch-up processing, that were not discussed in (acharya1999aqua, ) or any
subsequent work (gibbons2002fast, ).
Many new AQP techniques use machine learning. The basic ideas have existed
for a while, e.g., (jermaine2003robust, ; jin2006new, ). More recently,
comprehensive solutions have emerged that train from a past query workload
(park2017database, ) or directly build a probabilistic model of the entire
database (hilprecht2019deepdb, ; yang2019deep, ). We show that these systems
are not optimized for a dynamic setting. Even when they can be updated
efficiently with warm-start training, their throughput is much lower than
JanusAQP.
### 2.3. Partition Trees for AQP
We propose a new dynamic data synopsis and optimization strategy that is an
extension of our previous work (liang2021combining, ). In particular, we
proposed a system called PASS (which we call SPT for “static partition tree”).
SPT synopses are related to works such as (lazaridis2001progressive, ) in the
data cube literature and hybrid AQP techniques (peng2018aqp++, ). We showed
that with appropriate optimization of the partitioning conditions, an SPT
could achieve state-of-the-art accuracy in AQP problems.
#### 2.3.1. Construction
An SPT is a synopsis data structure used for answering aggregate queries over
relational data. To use SPT, the user defines an _aggregation column_
(numerical attribute to aggregate) and a _set of predicate columns_ (columns
over which filters will be applied). An SPT consists of two pieces: (1) a
hierarchical aggregation of a dataset, and (2) a uniform sample of data for
each of the leaf partitions (effectively a stratified sample over the leaves).
The system returns a synopsis that can answer SUM, COUNT, AVG, MIN, and MAX
aggregates over the aggregation column filtered by the predicate columns.
Figure 2 illustrates a partition tree synopsis over toy stock-order data.
To understand how this structure is useful, let us overview some of its formal
properties. A _partition_ of a dataset $\mathcal{D}$ is a decomposition of
$\mathcal{D}$ into disjoint parts $\mathcal{D}_{1},...,\mathcal{D}_{B}$. Each
$\mathcal{D}_{i}$ has an associated partitioning condition, a predicate that
when applied to the full dataset as a filter retrieves the full partition.
Partitions naturally form a hierarchy and can be further subdivided into even
more partitions, which can then be subdivided further. A _static partition
tree_ $\mathcal{T}$ is a tree with $B$ nodes (where each node corresponds to a
partition) with the following invariants: (1) every child is a subset of its
parent, (2) all siblings are disjoint, and (3) the union of all siblings
equals the parent.
In an SPT, each node of the tree is associated with SUM, COUNT, MIN, and MAX
statistics over the items in $\mathcal{D}$ that lie inside the node. SPT
synopses have a flexible height to trade off accuracy vs. storage. In shorter
trees, the leaf nodes of an SPT cover large subsets of the data; in deeper
trees, they cover smaller ones. Note how each layer of the tree in Figure 2 aggregates the
lower layer over coarser-and-coarser aggregation conditions (first by “sector”
and then by “order type”).
This structure works well when the queries align with partition boundaries.
For example, a user aggregating total orders by “order type” in Figure 2 would
get an exact answer with no approximation. The challenge is to answer queries
with predicates that partially intersect partitions. Due to the tree
invariants, the set of partial intersections can be fully determined at the
leaf nodes. To estimate the contributions of these partial intersections, an
SPT associates a uniform sample of tuples _within that partition_ for each
leaf node.
#### 2.3.2. Query Processing
Using an SPT, a user can estimate the result of a query as follows.
Essentially, the query processing algorithm identifies “fully covered” nodes
that are contained in the query predicate and “partially covered” ones that
overlap in some way. Exact statistics from the “fully covered” nodes can be
used, while estimates can be used to determine the contribution of “partially
covered” ones. We present SUM, COUNT, AVG for brevity, but it is also possible
to get estimates for MIN and MAX.
Step 1: Frontier Lookup. Given a query predicate $q$, traverse the tree top-
down to retrieve two sets of nodes (partitions): $R_{cover}$ (nodes that fully
cover the predicate) and $R_{partial}$ (nodes that partially intersect the
predicate). Nodes that do not intersect the predicate can be ignored.
Step 2: Partial Aggregation. For each partition in $R_{cover}$, we can compute
an exact “partial aggregate” for the tuples in those partitions. For a
SUM/COUNT query $q$: $agg=\sum_{R_{i}\in R_{cover}}SUM(R_{i})$; for an AVG
query, we weight each partition’s average by its relative size:
$agg=\sum_{R_{i}\in R_{cover}}AVG(R_{i})\frac{N_{i}}{N_{q}}$, where
$AVG(R_{i})=SUM(R_{i})/N_{i}$, $N_{i}$ is
the size of the partition $R_{i}$, $N_{q}$ is the total size in all relevant
partitions of query $q$, and $SUM(R_{i})=\sum_{t\in R_{i}\cap\mathcal{D}}t.a$
is the sum of the aggregation values of all tuples in the partition $R_{i}$.
Step 3: Sample Estimation. Each partition in $R_{partial}$ is a leaf node
with an associated stratified sample. Within each stratified sample, we use
standard AQP techniques to estimate that partition’s contribution to the final
query result (agarwal2013blinkdb, ). For completeness, we include those
calculations here. Suppose a partition $R_{i}$ has a set $S_{i}$ of $m_{i}$
samples and there are $N_{i}$ total tuples in $R_{i}$. We can formulate COUNT,
SUM, AVG as calculating an average over transformed attributes:
$f(S_{i})=\frac{1}{m_{i}}\sum_{t\in S_{i}}\phi_{q}(t)$, where
$\phi_{q}(\cdot)$ expresses all the necessary scaling to translate the samples
in query $q$ into an average query population. In particular, if we define
$Predicate(t,q)=1$ if tuple $t$ satisfies the predicate of query $q$, and $0$
otherwise, we have
* COUNT: $\phi_{q}(t)=Predicate(t,q)\cdot N_{i}$
* SUM: $\phi_{q}(t)=Predicate(t,q)\cdot N_{i}\cdot t.a$
* AVG: $\phi_{q}(t)=Predicate(t,q)\cdot\frac{m_{i}}{\sum_{t\in S_{i}}Predicate(t,q)}\cdot t.a$
We run such a calculation for each partition that is partially covered. These
results are combined with a weighted combination like before. For SUM/COUNT
queries it is: $samp=\sum_{R_{i}\in R_{partial}}f(S_{i})$. And for AVG
queries, it is: $samp=\sum_{R_{i}\in
R_{partial}}f(S_{i})\cdot\frac{N_{i}}{N_{q}}$. $N_{i}$ and $N_{q}$ can be
exactly retrieved from the statistics computed for each partition.
Step 4: Final Estimate. The results can be found by taking a sum of the two
parts: $result=samp+agg.$ For this result estimate, confidence intervals can
be calculated using standard stratified sampling formulas.
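To make Steps 1–4 concrete, the following minimal Python sketch estimates a SUM query over a flat list of one-dimensional leaf partitions. All class and field names are illustrative, not JanusAQP's actual API, and for simplicity the predicate ranges over the aggregation attribute itself:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LeafPartition:
    lo: float                 # partition boundaries (1-D for simplicity)
    hi: float
    sum_: float               # exact SUM statistic of the partition
    count: int                # exact COUNT statistic (N_i)
    sample: List[float] = field(default_factory=list)  # stratified sample of t.a values

def estimate_sum(leaves: List[LeafPartition], q_lo: float, q_hi: float) -> float:
    """Estimate SUM(a) WHERE q_lo <= a <= q_hi over a flat list of leaves."""
    agg, samp = 0.0, 0.0
    for R in leaves:
        if q_lo <= R.lo and R.hi <= q_hi:      # fully covered: use exact statistics
            agg += R.sum_
        elif R.hi < q_lo or R.lo > q_hi:       # disjoint: ignore
            continue
        else:                                  # partially covered: sample estimate
            m_i = len(R.sample)
            if m_i == 0:
                continue
            # phi_q(t) = Predicate(t, q) * N_i * t.a, averaged over the sample
            phi = [R.count * t if q_lo <= t <= q_hi else 0.0 for t in R.sample]
            samp += sum(phi) / m_i
    return agg + samp                          # Step 4: result = samp + agg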
PASS and JanusAQP comparison. As we noted, JanusAQP extends PASS to the
dynamic setting. However, there are key differences between the two systems,
since PASS is an inherently static index that cannot handle dynamic updates.
The main differences and novelties of JanusAQP compared to PASS are the
following: i) PASS computes a static partitioning that does not change after
insertions and deletions of items. In JanusAQP, we propose algorithms
(Subsection 5.4) that automatically check whether a re-partitioning is needed
after dynamic updates. As we show empirically in Section 6, the
re-partitioning method is very important and leads to much lower errors than
PASS with a fixed partition tree. ii) Even if re-partitioning were allowed in
PASS, the algorithms we proposed in (liang2021combining, ) do not run
efficiently in the dynamic setting. Here we propose dynamic indexes and
algorithms, with theoretical guarantees, that run much faster than the
algorithms in PASS. In Section 6, we show that our new dynamic indexes can
construct a new partitioning far faster than the partitioning algorithm of
PASS. iii) Even with our new dynamic algorithms, PASS has no mechanism to
compute the exact statistics of the nodes after a re-partitioning happens,
and no mechanism to handle updates while the re-partitioning is executed. In
this paper, we introduce a novel multi-threaded approach called the
_catch-up_ phase: JanusAQP can improve the estimators in the nodes of the DPT
after a re-partitioning while handling new dynamic updates and new queries.
iv) Last but not least, we implement JanusAQP on Apache Kafka, so it can be
used by real database systems; there is no system implementation of PASS.
## 3. System Architecture
In this section, we describe the JanusAQP architecture.
### 3.1. Construction and Optimization API
First, we overview how users construct synopsis data structures in JanusAQP.
Unlike systems like BlinkDB (agarwal2013blinkdb, ), JanusAQP does not use a
single synopsis to answer all queries. Much like index construction in a
database, users choose which attributes to include in the synopsis structure.
Each synopsis can answer query templates of the following form:
```sql
SELECT SUM/COUNT/AVG/MIN/MAX(A) FROM D
WHERE Rectangle(D.c1,...,D.cd)
```
where $A$ is an aggregation attribute and $c_{1},...,c_{d}$ are predicate
attributes used in some rectangular predicate region (a conjunction of $>,<,=$
clauses). The dimensionality of a synopsis is the number of predicate
attributes $d$. To construct a synopsis, the user must define the following
basic inputs:
* Aggregation Attribute. An attribute $A$ that is the primary metric for aggregation.
* Predicate Attributes. A collection of $d$ columns $c_{1},...,c_{d}$ that are used to filter the data prior to aggregation.
* Memory Constraint. The maximum amount of space that the synopsis can take.
* Query Processing Constraint. The maximum bytes of data that the system should process in answering a query.
* Historical Data Limit. How much historical data to include in the synopsis, i.e., the earliest time-step of data included in the system.
JanusAQP contains an optimizer that integrates these constraints into a solver
that produces an optimized synopsis (one with low error). Beyond these basic
knobs that are relevant to most AQP systems, there are two other
considerations discussed in this paper.

Catch-Up Processing. Constructing a
synopsis will require some amount of computational time. While incremental
maintenance might be efficient, constructing the initial synopsis $S^{(0)}$
from the initial database state $\mathcal{D}^{(0)}$ might be very expensive if
there is a significant amount of initial data. However, as the initial
$S^{(0)}$ is being constructed new data will arrive, and the system will
require additional processing to catch up. JanusAQP optimizes the catch-up
process using a multi-threaded system and approximate internal statistics for
the partition tree. This process minimizes the amount of time where the system
is unable to process new data or queries. The user decides how much
processing to expend during catch-up: the quicker the system must be ready,
the higher the error will be.
Throughput. The maximum data throughput is the maximum rate of insertions and
deletions that the system can support. Throughput depends on the complexity of
the synopsis used.
### 3.2. Data and Query API
For processing queries and data, we adopt the PSoup architecture where both
queries and data are streams (chandrasekaran2003psoup, ). JanusAQP supports
three types of requests: insertion of a new tuple, deletion of an existing
tuple, and querying of the existing tuples. Thus, there are three Kafka topics:
insert(tuple), delete(tuple), and execute(query).
The use of Kafka, with its timing and delivery guarantees, simplifies the
query processing semantics. The system will process the incoming stream of
queries in order. Each query will have an arrival time $i$, which is the
current database state at the time at which the query is issued. Therefore, we
define $q^{(i)}_{j}$ as the $j$th query in the sequence that arrives at
database state $i$. Query results should reflect all of the data that has
arrived until the time point $i$.
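As a rough illustration of this design, the following sketch consumes the three topics with the kafka-python client. The `synopsis` object and the JSON serialization are assumptions made for illustration, not JanusAQP's actual implementation:

```python
import json
from kafka import KafkaConsumer  # kafka-python client

def serve(synopsis, brokers="localhost:9092"):
    """Route the three JanusAQP request topics to a synopsis object
    (a sketch; `synopsis` is any object exposing insert/delete/query)."""
    consumer = KafkaConsumer(
        "insert", "delete", "execute",
        bootstrap_servers=brokers,
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    for msg in consumer:  # requests are processed in arrival order
        if msg.topic == "insert":
            synopsis.insert(msg.value)
        elif msg.topic == "delete":
            synopsis.delete(msg.value)
        else:  # "execute": answer from the synopsis only
            synopsis.query(msg.value)
```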
### 3.3. Summary and Algorithmic Contributions
To summarize, the usage of JanusAQP can be thought of as a life cycle. (1.
Initialization) The user triggers synopsis construction through an API call.
(2. Catch-Up) The system constructs the synopsis online while managing new
data arriving into the system. (3. Query/Data Processing) Then, JanusAQP is
ready to process requests of insertions, deletions, and queries. (4. Re-
Initialization) As JanusAQP processes more updates, the data or query workload
could drift, requiring re-partitioning. This procedure re-enters the
initialization phase.
Throughout the rest of the paper, we present technical contributions
throughout this synopsis life cycle:
* Dynamic Partition Trees (Section 4). The core data structure in JanusAQP is called a dynamic partition tree (DPT). This data structure aggregates data at multiple levels of resolution and associates some of the aggregates with stratified samples. The size of the tree (i.e., depth and width) can be changed to trade off accuracy against increased storage and reduced throughput. The sampling rate at the leaf nodes can be adjusted to trade off query latency against accuracy.
* Warm-Start Deployment (Section 4.3). Next, we present a multi-threaded technique that allows DPT synopses to be deployed in dynamic environments, accounting for new data that might arrive during their construction. This component crucially allows for either partial or full online re-partitioning and re-balancing of the data structure.
* Minimum Variance Partitioning (Section 5). We describe a new, efficient algorithm for selecting the DPT tree structure that minimizes the worst-case estimation error.
## 4. Dynamic Partition Trees
We discuss how Dynamic Partition Trees (DPT) are constructed, how they answer
queries, and how they are maintained under updates. Structurally, a DPT is
essentially the same data structure as an SPT; however, the way that the
partition statistics and samples are represented differ to allow for
incremental maintenance. Figure 3 summarizes the basic update process.
Table 1. Table of basic notation.

| Symbol | Meaning | Symbol | Meaning |
|---|---|---|---|
| $\mathcal{D}$ | Full database | $H_{i}$ | $H\cap R_{i}$ |
| $N$ | $\lvert\mathcal{D}\rvert$ | $m_{i}$ | $\lvert S_{i}\rvert$ |
| $S$ | Set of reservoir samples | $h_{i}$ | $\lvert H_{i}\rvert$ |
| $H$ | Set of catch-up samples | $m$ | $\lvert S\rvert$ |
| $R_{i}$ | Partition/bucket/rectangle | $t$ | Tuple in $\mathcal{D}$ |
| $\lvert R_{i}\rvert$ | $\lvert R_{i}\cap S\rvert$ | $t.a$ | Aggregation value of tuple $t$ |
| $N_{i}$ | $\mathcal{D}\cap R_{i}$ | $\mathcal{T}$ | Partition tree in DPT |
| $S_{i}$ | $S\cap R_{i}$ | | |
Figure 3. The DPT update process for an insertion or deletion. (1) A set of
samples is maintained using a reservoir sampling algorithm. (2) The leaf node
statistics are incrementally updated. (3) The updated statistics from the leaf
node propagate to the parents. (4) Updated statistics from the parents
propagate all the way to the root.
### 4.1. Incrementally Maintaining Nodes
Each node defines a partition and contains statistics (the SUM, COUNT, MIN,
and MAX aggregates) of the data contained in that partition. The key challenge
is to keep these statistics up-to-date in the presence of insertions and
deletions. When an insertion or deletion arrives, an entire path of nodes from
the leaf to the root will have to be updated.
DPT Nodes: First, we discuss how we represent the statistics in a DPT node.
Since the SUM and COUNT are easy to incrementally maintain under both
insertions and deletions, we simply store a single SUM and COUNT value for
each aggregation attribute. The MIN and MAX values are harder to incrementally
maintain. To store the MIN and MAX values, we store the top-k and the bottom-k
values in a MIN/MAX heap respectively. The top value of these heaps is equal
to the MIN and MAX of all the data in the node.
Insert New Record: When a new record is inserted, we first test each leaf
node to see if the record is contained in the node. Once we find the
appropriate leaf node, we increment the SUM and COUNT statistics accordingly.
Finally, we push the new aggregation values onto the heap. If the heap exceeds
the size limit $k$, then the bottom value on the heap is removed.
Delete Existing Record: When an existing record is deleted, we first test
each leaf node to see if the record is contained in the node. Once we find
the appropriate leaf node, we decrement the SUM and COUNT statistics
accordingly. Finally, if that aggregation value is contained in the heap it is
removed from the heap.
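A minimal sketch of this per-node maintenance logic follows, assuming bounded top-k/bottom-k heaps of size $k$ per node (names and the heap size are illustrative; the MIN/MAX reads scan the small heaps, whereas a real implementation would use an order-maintaining structure):

```python
import heapq

class NodeStats:
    """Per-node statistics with bounded bottom-k/top-k heaps for MIN/MAX."""
    def __init__(self, k: int = 64):
        self.k = k
        self.sum_ = 0.0
        self.count = 0
        self.bottom_k = []   # k smallest values, kept as a max-heap via negation
        self.top_k = []      # k largest values, kept as a min-heap

    def insert(self, a: float) -> None:
        self.sum_ += a
        self.count += 1
        heapq.heappush(self.bottom_k, -a)
        if len(self.bottom_k) > self.k:
            heapq.heappop(self.bottom_k)   # evict the largest of the k smallest
        heapq.heappush(self.top_k, a)
        if len(self.top_k) > self.k:
            heapq.heappop(self.top_k)      # evict the smallest of the k largest

    def delete(self, a: float) -> None:
        self.sum_ -= a
        self.count -= 1
        if -a in self.bottom_k:            # remove from the heap if present
            self.bottom_k.remove(-a)
            heapq.heapify(self.bottom_k)
        if a in self.top_k:
            self.top_k.remove(a)
            heapq.heapify(self.top_k)

    def current_min(self) -> float:
        return -max(self.bottom_k)   # smallest kept value (O(k) scan in this sketch)

    def current_max(self) -> float:
        return max(self.top_k)       # largest kept value (O(k) scan in this sketch)
```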
### 4.2. Maintaining Stratified Samples
Next, we describe how to maintain the samples associated with leaf nodes. We
use a modified version of the well-known technique of _reservoir-sampling_
(vitter1985random, ) under updates (gibbons2002fast, ). The details of how we
implement this are interesting. Conceptually, each leaf node is associated
with a physically disjoint sample of just that partition, i.e., a stratified
sample. Instead of physical strata, we implement virtual partitions of a
single global sample. This global sample can be maintained using a reservoir
sampling algorithm and makes it easier to control the overall size of the
synopsis under insertions/deletions as well as simplifies concurrency control.
Sample Representation: The DPT maintains a “pooled” sample (all the relevant
samples in a single data structure). This set of samples has a target size of
$2m$ tuples. At the construction time, we choose a set $S$ of $2m$ uniform
random samples from $\mathcal{D}$. The update procedure ensures that there are
always between $m\leq|S|\leq 2m$ samples. The leaf nodes index into this
“pooled” sample selecting only the relevant data to their corresponding
partitions.
Insert New Record: Suppose we insert a new tuple $t$. If $|S|<2m$ we add $t$
in $S$. If $|S|=2m$, we choose $t$ with probability
$\frac{|S|}{|\mathcal{D}|}$. If it is selected, then $t$ replaces a point of
$S$ chosen uniformly at random.
Delete Existing Record: Next, suppose that we delete a tuple $t$ from
$\mathcal{D}$. If $t\notin S$ we do not do anything. If $t\in S$ then we check
the cardinality of $S$. If $|S|>m$ then we only remove $t$ from $S$. If
$|S|=m$, then we discard the set $S$ and re-sample $2m$ items from
$\mathcal{D}$. As shown in (gibbons2002fast, ), this procedure always
maintains a set of uniform random samples. Using a simple dynamic binary
search tree of space $O(m)$, we can update the samples $S$ stored in
$\mathcal{T}$ in $O(\text{height}(\mathcal{T}))$ time.
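The insertion/deletion logic above can be sketched as follows; `resample_from_archive` is a hypothetical callback standing in for re-reading archival storage:

```python
import random

class PooledReservoir:
    """Maintains between m and 2m uniform samples of D under insertions and
    deletions (a sketch of the scheme in Section 4.2)."""
    def __init__(self, m: int, initial_samples: list, n: int):
        self.m = m
        self.S = list(initial_samples)   # starts with 2m uniform samples of D
        self.n = n                       # current |D|

    def insert(self, t) -> None:
        self.n += 1
        if len(self.S) < 2 * self.m:
            self.S.append(t)
        elif random.random() < len(self.S) / self.n:
            # t replaces a point of S chosen uniformly at random
            self.S[random.randrange(len(self.S))] = t

    def delete(self, t, resample_from_archive) -> None:
        self.n -= 1
        if t not in self.S:
            return
        if len(self.S) > self.m:
            self.S.remove(t)
        else:
            # sample set hit the lower bound m: rebuild from archival storage
            self.S = resample_from_archive(2 * self.m)
```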
Figure 4. JanusAQP synopses can be re-initialized online using a multi-
threaded implementation to minimize unavailability
### 4.3. Re-initialization and Catch-Up
As we noted before, repeated deletes on the same leaf partition can degrade
the accuracy of the synopsis. As we will see in the next section, it is also
possible for repeated insertions to degrade the accuracy as well. In such
cases, re-initialization of the DPT may be needed where the data structure is
re-built and re-optimized over existing data.
Enabling periodic re-initialization is crucial for reliable long-term
deployment but is challenging because new data will not simply stop arriving
during the re-initialization period. As the dataset size grows, the amount of
time needed for re-initialization will grow as well. We employ a multi-
threaded approach to minimize any period of unavailability for processing new
data arrivals as well as new queries (Figure 4). When re-initialization is
triggered, the main processing thread initiates the construction of a new DPT
synopsis and the following steps are performed:
1. Optimization Phase (In Parallel)
    * The partition optimization algorithm analyzes the data in the pooled reservoir sample to determine the optimal new partitioning criteria. It returns a new empty DPT with no node statistics.
    * In parallel with Step 1, the old synopsis is maintained under all insertions and deletions that happen during the optimization algorithm. Queries can still be answered with the old synopsis.
2. (Blocking) Approximate node statistics are populated into the new synopsis using the pooled reservoir sample $S$ (note that this will reflect any data that arrived during the optimization phase). This is the only blocking step in the re-initialization routine; new data and queries will have to wait until completion.
3. The old synopsis is discarded.
4. The system resamples a uniform sample of data from archival storage to be the new pooled reservoir sample. Queries and results can still be processed on the new synopsis even without a sample.
5. Random samples of historical data are used to improve the node statistics in the background until a user-specified “catch-up” time.
This process is the key difference between an SPT and a DPT: after catch-up,
the node statistics may be inexact. However, this old data is
propagated in a random order, which means that the SUM,COUNT,AVG values in
each node will be unbiased estimates of their full data statistics. The
duration of the catch-up phase can be chosen by the user. For example, in our
experiments, the catch-up phase does not stop until we get
$0.1\cdot|\mathcal{D}|$ samples. It is worth noting that queries close to the
beginning of the catch-up phase will have a higher error; queries towards the
middle or the end of the catch-up phase will have a smaller error.
In Section 5.4, we describe how to trigger re-initialization. Furthermore,
there is only one step, (2), where the synopsis is unavailable to process
queries and data (with a duration of hundreds of milliseconds in our
experiments).
### 4.4. Answering Queries With a DPT
The basic structure of the result estimator is the same as before, especially
for the $R_{partial}$ partitions. However, there are a few key changes due to
the nature of the catch-up phase. In SPT, for each partition in $R_{cover}$,
we can compute an exact “partial aggregate” for the tuples in those partitions
and combine the partial aggregates. In a DPT, this process changes to account
for the estimates we get from the catch-up samples. Overall, the estimate for
a partition $R_{i}\in R_{cover}$ consists of: i) an estimate using the
catch-up samples $H$ and the formulas of Section 2.3, ii) the exact
statistics of the newly inserted tuples in $R_{i}$, and iii) the exact
statistics of the deleted tuples in $R_{i}$ (recall that the quantities in
ii) and iii) are stored and maintained as described in Section 4.1). By
taking the sum of i) and ii) and subtracting iii), we get an unbiased
estimate for partition $R_{i}$.
Let $H$ be the set of catch-up samples and $H_{i}\subseteq H$ be the subset of
$H$ that lie in a partition $R_{i}$, and $h_{i}=|H_{i}|$. All basic notations
are defined in Table 1. The formulas for estimating COUNT and SUM queries in
both $R_{cover}$, $R_{partial}$ from Section 2.3 contain the factor
$\frac{N_{i}}{m_{i}}$ or $\frac{N_{i}}{h_{i}}$, while the formulas for
estimating the AVG contain the factor $\frac{N_{i}}{N_{q}}$. In DPT we do not
have the exact values for $N_{i}$. Instead, we use an estimate of the size of
the partition $R_{i}$ denoted by $\hat{N_{i}}$. In particular we use the
catch-up samples $H$ to estimate $\hat{N_{i}}=\frac{h_{i}}{h}N$.
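A sketch of the resulting estimator for a fully covered partition (SUM case; attribute names are illustrative, not JanusAQP's actual API):

```python
from dataclasses import dataclass

@dataclass
class CoverNode:
    h_i: int             # number of catch-up samples that landed in R_i
    catchup_sum: float   # sum of t.a over those catch-up samples
    inserted_sum: float  # exact SUM of tuples inserted after (re-)initialization
    deleted_sum: float   # exact SUM of tuples deleted after (re-)initialization

def covered_sum_estimate(node: CoverNode, N: int, h: int) -> float:
    """Unbiased SUM estimate for a fully covered partition R_i (sketch).
    N = |D| and h = |H| are maintained globally; N_i is estimated as
    N_i_hat = (h_i / h) * N, per Section 4.4."""
    if node.h_i == 0:
        catchup_part = 0.0
    else:
        n_i_hat = node.h_i / h * N
        catchup_part = (n_i_hat / node.h_i) * node.catchup_sum  # = (N/h) * catchup_sum
    return catchup_part + node.inserted_sum - node.deleted_sum
```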
#### 4.4.1. Confidence Intervals
While the estimators do not significantly change from an SPT to a DPT, the
confidence intervals are calculated very differently. This is because there
are now two sources of errors: estimation errors due to the stratified samples
and estimation errors in the node statistics. Both these sources of errors
have to be integrated into a single measure of uncertainty. Assuming that all
partitions are large enough, the central limit theorem can be used to
asymptotically bound the estimation error for SUM/COUNT/AVG queries.
Informally, the central limit theorem states that this asymptotic error is
proportional to the square root of the ratio of the estimate variance and the
number of samples used, $\propto\sqrt{\frac{var(est_{i})}{m_{i}}}$. We simply
have to match terms to this formula for all sample estimates and all node
estimates because both are derived from samples.
Error in Node Estimates. First, let us account for all the uncertainty due to
catch-up. Recall that $H$ is the set of catch-up samples we have considered so
far and $H_{i}\subseteq H$ is the samples in partition $R_{i}$ with
$h_{i}=|H_{i}|$. We note that we do not store the set $H$ or the subsets
$H_{i}$; instead, we only use the new catch-up samples to continuously improve
the statistics we store in the nodes. Using the notation in the previous
section, we can calculate the catch-up variance $\nu_{c}$:
$\nu_{c}(q)=\sum_{R_{i}\in
R_{cover}}w_{i}^{2}\frac{var(\phi_{q}(H_{i}))}{h_{i}}$
where $w_{i}=\frac{\hat{N_{i}}}{\hat{N_{q}}}$ for AVG queries and $w_{i}=1$
for SUM/COUNT queries. Calculating $\phi_{q}(H_{i})$ is straightforward. We
simply store additional information that allows us to efficiently calculate
the variance. For any node $i$ of $\mathcal{T}$ we store $h_{i}$, $\sum_{t\in
H_{i}}t.a^{2}$, $\sum_{t\in H_{i}}t.a$.
Error in Sample Estimates. For a partition $R_{i}\in R_{partial}$, let
$S_{i}\subseteq\mathcal{D}$ be the set of samples in $S$ that lie in partition
$R_{i}$ and let $m_{i}=|S_{i}|$. Like the catch-up variance, we can calculate
the sample estimate variance $\nu_{s}$:
$\nu_{s}(q)=\sum_{R_{i}\in
R_{partial}}w_{i}^{2}\frac{var(\phi_{q}(S_{i}))}{m_{i}}$
We can calculate an overall confidence interval as:
$\pm z\cdot\sqrt{\nu_{c}(q)+\nu_{s}(q)}$
where $z$ is a normal scaling factor corresponding to the desired confidence
level, e.g., $z=1.96$ for 95%. As before,
$w_{i}=\frac{\hat{N_{i}}}{\hat{N_{q}}}$ for AVG queries and $w_{i}=1$ for
SUM/COUNT queries. In Appendix B, we show analytically all formulas for
computing the variance under different types of queries.
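Putting the two variance terms together, a minimal sketch of the confidence-interval computation (the attribute names are illustrative placeholders for the per-node statistics described above):

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class PartStats:
    w: float        # w_i: N_i_hat / N_q_hat for AVG queries, 1 for SUM/COUNT
    phi_var: float  # var(phi_q(.)) over the partition's samples
    n: int          # sample count: h_i for covered nodes, m_i for partial nodes

def confidence_interval(cover: List[PartStats], partial: List[PartStats],
                        z: float = 1.96) -> float:
    """Half-width of the CI: z * sqrt(nu_c(q) + nu_s(q)) (sketch; the
    per-partition variances are computed from the stored sums as in
    Section 4.4.1)."""
    nu_c = sum(p.w ** 2 * p.phi_var / p.n for p in cover if p.n > 0)
    nu_s = sum(p.w ** 2 * p.phi_var / p.n for p in partial if p.n > 0)
    return z * math.sqrt(nu_c + nu_s)
```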
Communication throughput and I/O operations. The query procedure of our system
only accesses the partition tree stored in a local machine. Hence the
communication throughput (or the I/O operations to access the full data) in
the query procedure is zero.
## 5. Optimal DPT Partitioning
We next describe a new partitioning algorithm designed for the dynamic
setting.
### 5.1. Preliminaries and Problem Setup
The partitioning algorithm analyzes the pooled reservoir sample of data to
determine how best to partition the dataset. The goal of the partitioning
algorithm is to find a partitioning such that the subsequent queries issued to
the DPT have low-error. Surprisingly enough, the partitioning algorithm does
not need an exact query workload to perform this optimization. It simply needs
a focus aggregation function (e.g., SUM, COUNT, AVG) and finds a partitioning
that minimizes the worst-case query error for sufficiently large predicates.
Given a set of $O(m)$ samples $S$, the goal is to construct a data structure
that supports the following operations. (i) Insert or delete a sample from $S$
efficiently, and (ii) when a partitioning request comes, it creates a near-
optimum partition tree $\mathcal{T}$ in $o(m)$ time.
In order to find a near-optimum partition tree we define the following
optimization problem. Let $Q$ be a set of possible aggregate queries with a
predicate. And, let $\Theta$ be the set of all DPT synopses defined by
rectangular partitioning conditions over $k$ partitions. The main optimization
objective is to minimize the maximum error over the query workload:
(1) $\min_{\mathcal{T}\in\Theta}\max_{q\in Q}\textsf{Error}(q,\mathcal{T})$
The error is defined as the length of the confidence interval, as defined in
the previous section. Since the catch-up variance is usually much smaller
than the sample estimate variance, we focus on minimizing the maximum length
of the confidence interval with respect to the sample estimate variance
$\nu_{s}(\cdot)$. For simplicity, when we say variance we always mean the
sample estimate variance.
The above problem still seems challenging because queries can intersect
partitions and cut a tree in arbitrary ways, but our recent work shows an
important simplification (liang2021combining, ) (under mild technical
conditions about the size of partitions). Instead of looking over all possible
queries to minimize the maximum error, one only needs to focus on single
partitions to ensure they do not have “high-variance” sub-partitions. Indeed
by considering only these sub-partitions we can still get a
$\sqrt{k}$-approximation for COUNT and SUM queries over the optimum partition
considering all queries (for $k$ leaves). The approximation factor improves to
$\sqrt{2}$ for $d=1$. For AVG queries, the error of the optimum partition
under this simplification is the same as the maximum error over every
possible query.
The error of a query $q$ inside a leaf node (partition) $R_{i}$ is defined
(expanding the equations from the definition of $\nu_{s}(\cdot)$) as
$$\frac{N_{i}^{2}}{m_{i}^{3}}\left[m_{i}\sum_{t\in q}t.a^{2}-\left(\sum_{t\in q}t.a\right)^{2}\right],\qquad\frac{1}{m_{i}|q\cap S|^{2}}\left[m_{i}\sum_{t\in q}t.a^{2}-\left(\sum_{t\in q}t.a\right)^{2}\right]$$
for SUM/COUNT and AVG queries, respectively.
Thus, the optimization problem reduces to finding partitions that do not
contain a high-variance “rectangle” of data. Unfortunately, all algorithms in
(liang2021combining, ) are designed only for the static case, and their
running time is always super-linear, $\Omega(m)$, in the number of samples.
Hence, those algorithms cannot be used to satisfy the requirements of the
dynamic setting.
One of the core subroutines of all our new partitioning algorithms is the
following. Given a rectangle $R$, the goal is to find a rectangular query
within $R$ with maximum variance among all possible queries in $R\cap S$. For
now, we assume that we have a dynamic index $\mathcal{M}$ with near-linear
space such that given a query rectangle $R$, it returns a query $q$ within $R$
with $\nu_{s}(q)\geq\frac{1}{\gamma}\mathcal{V}(R)$, for a parameter
$\gamma>1$, in $O(M)$ time, where $\mathcal{V}(R)$ is the variance of the
maximum variance rectangular query in $R$. Let $\mathcal{M}(R)$ be the
variance of the query returned by the index $\mathcal{M}$. We describe this
index in more detail in Subsection 5.3.
### 5.2. Partitioning for $d=1$
Now, we discuss how to solve the partitioning optimization problem in one
dimension. We present results for SUM and AVG queries. COUNT can be thought of
as a special case of SUM with binary data. The basic trick is to search over a
discretized set of possible variance values. For each value $e$, we try to
construct a partitioning of $k$ partitions such that in each bucket the length
of the longest confidence interval of a query is at most $e$. By
systematically reducing $e$ in each iteration, we control the worst-case
error.
Bounding the Error. The first step is to calculate the bounds for the maximum
length of the largest possible confidence interval among queries that
intersect one partition. We assume that the aggregation value of any item in
$\mathcal{D}$ is bounded by a maximum value $\mathcal{U}$ and a minimum non-
zero value $\mathcal{L}$. We allow items to take zero values since this is
often the case in real datasets but no item with positive value less than
$\mathcal{L}$ or larger than $\mathcal{U}$ exists. We assume that
$\mathcal{U}=O(\textrm{poly}(N))$ and
$\mathcal{L}=\Omega(1/\textrm{poly}(N))$. In Appendix C.2 we show that the
length of the longest confidence interval is also bounded by
$O(\textrm{poly}(N))$ and $\Omega(1/\textrm{poly}(N))$.
Description of Algorithm. We describe the partitioning algorithm for SUM
queries. The procedure is identical for AVG queries. For a parameter
$\rho\in\mathbb{R}$ with $\rho>1$, let $E=\\{\rho^{t}\mid
t\in\mathbb{Z},\frac{\mathcal{L}}{\sqrt{2}}\leq\rho^{t}\leq
N\mathcal{U}\\}\cup\\{0\\}$, be the discretization of the range defined by the
lower and upper bound of the longest confidence interval (as defined in the
previous paragraph). We run a binary search on the values of $E$. For each
value $e\in E$ we consider, we try to construct a partitioning of $k$
partitions such that in each partition the length of the longest confidence
interval of a query is at most $e$. If there exists such a partitioning we
continue the binary search with values $e^{\prime}<e$. If there is no such
partitioning, we continue the binary search with values $e^{\prime}>e$. At
the end of the binary search, we return the last partitioning that we were able to
compute.
It remains to describe how to check if a partitioning with $k$ buckets
(intervals) with maximum length confidence interval at most $e$ exists. A high
level description of the algorithm is the following.
1. For $i=1$ to $k$:
    1. Let $b_{i}$ be the $i$-th bucket, with left endpoint $t_{a}$.
    2. Binary search on samples $t_{j}$ to find the maximum bucket $b_{i}$ with error at most $e$:
        1. If $\sqrt{\mathcal{M}([t_{a},t_{j}])}\leq e$, continue the search with values $>j$;
        2. else, continue the search with values $<j$.
2. If the partitioning contains all samples, construct $\mathcal{T}$ using the $b_{i}$ as its leaf nodes. Otherwise, $\mathcal{T}=\emptyset$.
We start with the leftmost sample, say $t_{1}$, which is the left boundary of
the first bucket. In order to find its right boundary we run a binary search
on the samples $S$. Let $t_{j}$ be one of the right boundaries we check in the
binary search, and let $b_{1}=[t_{1},t_{j}]$. If
$\sqrt{\mathcal{M}(b_{1})}\leq e$ then we continue the binary search with a
sample at the right side of $t_{j}$ (larger bucket). Otherwise, we continue
the binary search with a sample at the left side of $t_{j}$ (smaller bucket).
When we find the maximal bucket with longest confidence interval at most $e$
we continue with the second bucket repeating the same process for at most $k$
buckets. In the end, if all samples in $S$ are contained in $k$ buckets then
we return that there exists a partitioning (with $k$ buckets) with maximum
variance at most $e$. If we cannot cover all samples in $k$ buckets then we
return that there is no partitioning with $k$ buckets and maximum variance at
most $e$.
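A sketch of this feasibility check (the inner loop of the binary search over $E$) follows. Here `max_ci(i, j)` is a hypothetical oracle standing in for $\sqrt{\mathcal{M}([t_{i},t_{j}])}$ and is assumed monotone in the bucket size:

```python
from typing import Callable, List, Optional, Tuple

def feasible_partitioning(n: int, k: int, e: float,
                          max_ci: Callable[[int, int], float]
                          ) -> Optional[List[Tuple[int, int]]]:
    """Try to cover samples t_0..t_{n-1} (sorted in 1-D) with at most k
    buckets whose longest confidence interval is at most e (sketch)."""
    buckets, left = [], 0
    for _ in range(k):
        if left >= n:
            break
        lo, hi, best = left, n - 1, left
        while lo <= hi:                       # maximal right endpoint via binary search
            mid = (lo + hi) // 2
            if max_ci(left, mid) <= e:
                best, lo = mid, mid + 1       # bucket still OK: try to grow it
            else:
                hi = mid - 1                  # error too large: shrink
        buckets.append((left, best))
        left = best + 1
    return buckets if left >= n else None     # None: no k-bucket partitioning exists
```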
Correctness. In Appendix C.2 we use the monotonic property of the longest
confidence interval (the bigger the bucket the larger the error) and we show
$\sqrt{\mathcal{V}(b^{\prime})}\leq\sqrt{\gamma\mathcal{M}(b^{\prime})}\leq\sqrt{\gamma}e^{\prime}\leq\rho\sqrt{\gamma}\sqrt{\mathcal{V}(b^{*})}$,
where $b^{\prime}$ is the bucket with the longest confidence interval in the
returned partitioning, $e^{\prime}$ is the smallest value in $E$ such that
$\sqrt{\mathcal{V}(b^{*})}\leq e^{\prime}$, and $b^{*}$ is the bucket of
optimum partitioning with the largest confidence interval. For $d=1$ we have
that $\gamma=4$ for SUM and AVG queries, so we get a partitioning
where the maximum error is within $2\rho\sqrt{2}$ of the optimum error for SUM
queries and within $2\rho$ of the optimum error for AVG queries.
Running time. Since $\mathcal{L},\mathcal{U}$ are polynomially bounded in
$N$, we have that $|E|=O(\log_{\rho}N)$ and it can be constructed in
$O(\log_{\rho}N)$ time. The binary search over $E$ takes at most
$O(\log\log_{\rho}N)$ steps. We can decide if there exists a partitioning with
error $e$ in $O(kM\log m)$ time. Overall, the running time of our algorithm is
$O(kM\log m\log\log_{\rho}N)$. If $\rho$ is a constant, for example $\rho=2$,
then the running time is $O(kM\log m\log\log N)$. In Appendix C.2 we show
that in one dimension $M=O(\log m)$ for SUM and AVG queries. Notice that,
ignoring the $\log$ factors, the running time depends only linearly on the
number of buckets $k$, and the approximation factor is constant.
### 5.3. Partitioning in Higher Dimensions
#### 5.3.1. Indexing To Find Maximum Variance
Now, we describe the core index $\mathcal{M}$ that we use in all our
partitioning algorithms for any dimension $d\geq 1$. The exact description of
the index depends on the type of aggregation queries we focus on.
For SUM and COUNT queries, we propose a simple index to find the query with
the largest variance in a query rectangle. In particular, we build a dynamic
range tree on $S$. Given a query rectangle $R$, we split it into two smaller
rectangles $R_{1},R_{2}$ such that $|R_{1}\cap S|=|R_{2}\cap S|=|R\cap S|/2$.
Using a dynamic range tree (de1997computational, ) we return the rectangle
$R_{i}$ (either $R_{1}$ or $R_{2}$) with the largest variance. We can show
that $\nu_{s}(R_{i})\geq\frac{1}{4}\mathcal{V}(R)$. The running time and the
update time is $O(\log^{d}m)$.
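A one-dimensional sketch of this oracle follows; a real implementation would answer the two half-bucket sums with the dynamic range tree instead of recomputing them:

```python
from typing import List

def approx_max_variance(bucket_vals: List[float], N_i: int) -> float:
    """4-approximation of the max-variance SUM query inside a 1-D bucket
    (sketch of the split-in-half oracle of Section 5.3.1). `bucket_vals`
    are the aggregation values of the samples in the bucket, sorted by the
    predicate attribute."""
    m_i = len(bucket_vals)
    if m_i == 0:
        return 0.0

    def sum_query_var(q_vals: List[float]) -> float:
        # per-bucket SUM variance term from Section 5.1:
        # (N_i^2 / m_i^3) * [m_i * sum(a^2) - (sum a)^2]
        s1 = sum(q_vals)
        s2 = sum(v * v for v in q_vals)
        return (N_i ** 2 / m_i ** 3) * (m_i * s2 - s1 ** 2)

    mid = m_i // 2                       # split into two halves of equal sample count
    halves = [bucket_vals[:mid], bucket_vals[mid:]]
    return max(sum_query_var(h) for h in halves if h)
```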
For AVG queries, the algorithm proposed in (liang2021combining, ) cannot be
extended to the dynamic case. Hence we propose a new dynamic index with a
better approximation factor. Similarly to (liang2021combining, ), we assume
that every valid query that is contained in a bucket of the partitioning must
contain at least $2\delta m$ samples (for a small parameter $\delta<1$),
otherwise the estimation is not accurate. For simplicity, we use the notation
$\tilde{O}(\cdot)$ to hide $\log(m)$ factors. In Appendix C.1 we show the
following crucial observation: for any rectangle $q$ inside a query rectangle
$R$ with $|q\cap S|=\delta m$ that maximizes $\sum_{t\in q\cap S}t.a^{2}$, it
holds that $\nu_{s}(q)\geq\frac{1}{4}\mathcal{V}(R)$. Hence, we build a
dynamic index so that given a query rectangle $R$ it returns a rectangle that
contains $\delta m$ samples and the sum of squares of their aggregate values
is close to the maximum sum.
We build a dynamic range tree $T^{\prime}$ over the samples $S$, storing the
number of samples in each node of the tree. Furthermore, we build another
empty dynamic range tree $T$. We will use $T$ to store weighted rectangles (as
points in $2d$) that contain at most $\delta m$ samples. More specifically, we
store in $T$ the _canonical_ rectangles of $T^{\prime}$ that contain at most
$\delta m$ samples. Notice that there are $\tilde{O}(m)$ nodes in $T^{\prime}$
hence $T$ uses $\tilde{O}(m)$ space. When we have an insertion or deletion in
$T^{\prime}$ there are only $\tilde{O}(1)$ nodes/rectangles that are updated,
hence we can update both $T^{\prime}$ and $T$ in $\tilde{O}(1)$ time. Given a
query rectangle $R$ we use $T$ to find a rectangular query $q^{*}$ with the
largest sum inside $R$ in $\tilde{O}(1)$ time. From the definition of a range
tree, for any rectangle there is a partitioning of $\log^{d+1}m$ canonical
rectangles from $T^{\prime}$. Hence we can show that
$\nu_{s}(q^{*})\geq\frac{1}{4\log^{d+1}m}\mathcal{V}(R)$. The exact
complexities depend on the dynamic range tree structure we use; our data
structure has roughly $O(m\log^{3d}m)$ space, $O(\log^{3d}m)$ update time, and
$O(\log^{2d}m)$ query time.
#### 5.3.2. Partitioning
We construct a partitioning by building a k-d tree using the dynamic procedure
$\mathcal{M}$ as we described above. The construction is similar to the
construction in (liang2021combining, ). However, they construct a
near-optimum k-d tree in $O(km)$ time (ignoring $\log$ factors). Here, we use our
improved index $\mathcal{M}$ to construct a k-d tree faster (in roughly $O(k)$
time) with better approximation guarantees.
The high level description of the algorithm is the following.
1. (1)
Max Heap $C$ containing partition $R_{1}$ covering all items in $\mathcal{D}$
2. (2)
For $j=2$ to $k$
1. (a)
Extract the partition $R_{i}$ with maximum $\mathcal{M}(R_{i})$ from $C$
2. (b)
Create a partitioning of $R_{i}$ of two partitions $R_{i_{1}}$, $R_{i_{2}}$ by
splitting on the median of $R_{i}$
3. (c)
Insert $\mathcal{M}(R_{i_{1}})$, $\mathcal{M}(R_{i_{2}})$ in $C$
4. (d)
Set $R_{i_{1}}$, $R_{i_{2}}$ as children of $R_{i}$ in $\mathcal{T}$
We can show that such a tree construction returns a partitioning which is near
optimal with respect to the optimum partition tree construction following the
same splitting criterion: split on the median of the leaf node with the
largest maximum variance query. Overall we construct a data structure that can
be updated in $O(\textrm{polylog}m)$ time. For a (re-)partition activation
over a set $S$ of $m$ samples, we can construct a new $\mathcal{T}$ with the
following guarantees: For COUNT/SUM queries, $\mathcal{T}$ can be constructed
in $O(k\log^{d}m)$ time with approximation factor $2\sqrt{k}$. For AVG
queries, $\mathcal{T}$ can be constructed in $O(k\log^{2d}m)$ time with
approximation factor $2\log^{(d+1)/2}m$. In all cases we can construct near-
optimum partitions in $\tilde{O}(k)$ time.
### 5.4. Re-Partitioning Triggers
Assume that the current partitioning is $\mathcal{R}$ and let
$\mathcal{M}(\mathcal{R})$ be the (approximate) maximum variance query with
respect to the current set of samples $S$. JanusAQP first checks the number of
samples in each bucket (leaf node) of the current $\mathcal{T}$. If there is a
leaf node $i$ associated with partition $R_{i}$ such that
$|S_{i}|\ll\frac{1}{\alpha}\log m$ (where $\alpha$ is the sampling rate), then
there are not enough samples in $R_{i}$ to build robust estimators, so we need
to find a new partitioning. Even if the number of samples in each bucket is
large, our system might still trigger a re-partitioning: For a partition
$R_{i}$ in the leaf node layer of $\mathcal{T}$, let
$\mathcal{M}_{i}=\mathcal{M}(R_{i})$ be the (approximate) maximum variance at
the moment we constructed $\mathcal{T}$. Let $\beta>1$ be a parameter that
controls the maximum allowable change of the variance; it can either be set by
the user or default to $\beta=10$. Assume that an update occurred in the leaf
node associated with the partition $R_{i}$. After the update we get
$\mathcal{M}_{i}^{\prime}=\mathcal{M}(R_{i})$. If
$\frac{1}{\beta}\mathcal{M}_{i}\leq\mathcal{M}_{i}^{\prime}\leq\beta\mathcal{M}_{i}$,
then the new maximum variance in partition $R_{i}$ is not very different than
before, so we do not trigger a re-partition. Otherwise, the maximum variance
in $R_{i}$ has changed by a factor larger than $\beta$ from the initial
variance $\mathcal{M}_{i}$. In this case a re-partitioning might find a new
tree with smaller maximum error. We compute a new partitioning
$\mathcal{R}^{\prime}$ and hence a new tree $\mathcal{T}$. If
$\mathcal{M}(\mathcal{R}^{\prime})<\frac{1}{\beta}\mathcal{M}(\mathcal{R})$
then we activate a re-partition restarting the catch-up phase over the new
tree $\mathcal{T}$. On the other hand, if
$\mathcal{M}(\mathcal{R}^{\prime})\geq\frac{1}{\beta}\mathcal{M}(\mathcal{R})$
then our current partitioning $\mathcal{R}$ is good enough so we can still use
it. Of course, the user can also manually trigger re-partitioning. For
example, the user can choose to re-partition once every hour, day, or after
$\tau$ insertions and deletions have occurred. In Appendix C.4, we also
describe how JanusAQP can execute either partial or full re-partitioning.
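For illustration, the trigger logic above can be sketched as follows; the attribute names (`num_samples`, `M_at_build`, `M_now`) and the helpers `build_partition` and `M` are hypothetical stand-ins rather than JanusAQP's actual API.

```python
import math

def should_repartition(leaves, alpha, beta, m):
    """Check the two re-partitioning triggers over the current leaf nodes."""
    min_samples = math.log(m) / alpha  # minimum sample count for robust estimators
    for leaf in leaves:
        if leaf.num_samples < min_samples:            # trigger 1: too few samples
            return True
        lo, hi = leaf.M_at_build / beta, leaf.M_at_build * beta
        if not (lo <= leaf.M_now <= hi):              # trigger 2: variance drift > beta
            return True
    return False

def maybe_repartition(tree, build_partition, M, beta):
    """Adopt a candidate re-partitioning only if it clearly reduces the error."""
    candidate = build_partition(tree.samples)
    if M(candidate) < M(tree) / beta:  # new max variance improves by > factor beta
        return candidate               # restart the catch-up phase on the new tree
    return tree
```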
## 6\. Experiments
We run our experiments on a Linux machine with an Intel Core i7-8700 3.2GHz
CPU and 16GB RAM.
### 6.1. Setup
To set up each experiment, we select a single aggregate attribute and one or
more predicate attributes. We generate query workloads of 2000 queries by
uniformly sampling from rectangular range queries over the predicates. We then
initialize a JanusAQP instance with a user-specified sample rate, a catch-up
ratio and a number of leaf nodes of the partition tree to compare with other
baselines (these parameters directly control the Throughput, Query Latency,
and Storage Size).
#### 6.1.1. Datasets
Intel Wireless dataset. The Intel Wireless dataset (intelwireless_, ) contains
3 million rows of sensor data collected in the Berkeley Research lab in 2004.
Each row contains measurements such as humidity, temperature, light, and
voltage, as well as the date and time each record was collected.
New York Taxi Records dataset. The New York City Taxi Trip Records dataset
(nyctaxi, ) contains 7.7 million rows of yellow and green taxi trip records
collected in January 2019. Each record contains information about the trip
including pickUpDateTime, dropOffDateTime, tripDistance, dropOffLocation,
passengerCount, etc.
NASDAQ ETF Prices dataset. The NASDAQ Exchange Traded Fund (ETF) Prices
dataset (nasdaq, ) contains 2166 ETFs traded on the NASDAQ exchange from April
1986 to April 2020. There are 4 million entries in the dataset, and each entry
contains the date, the volume of transactions of an ETF on that date, and 4
prices: the price of an ETF when the market opens and closes, and the highest
and the lowest of its daily price range.
#### 6.1.2. Metrics and ground truth
In terms of performance, we report the wall-clock latency and the throughput,
i.e. number of requests (query/data) processed per second. To measure the
accuracy of the system, unless otherwise specified, we report the 95
percentile of the relative error which is the difference between ground truth
and estimated query result divided by the ground truth. We define the ground
truth to be w.r.t all the tuples available when the query arrives, i.e. the
true results reflect all insertions and deletions up to its arrival point.
With this setup, the results indeed depend on the sequence of requests that
are processed by the system. To make sure our experiments are deterministic,
we fix this sequence up-front and ensure they are the same for each baseline.
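For concreteness, the accuracy metric can be computed as in the following sketch (NumPy is our illustrative choice, not a system dependency):

```python
import numpy as np

def p95_relative_error(estimates, ground_truths):
    """95th-percentile relative error over a query workload."""
    est = np.asarray(estimates, dtype=float)
    gt = np.asarray(ground_truths, dtype=float)
    rel_err = np.abs(gt - est) / np.abs(gt)  # assumes nonzero ground truths
    return np.percentile(rel_err, 95)
```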
#### 6.1.3. Baselines
We evaluate the following baselines. All of these baselines are tuned to
roughly control for query latency.
Reservoir Sampling (RS) and Stratified Reservoir Sampling (SRS). We construct
a uniform sample of the entire data set, which is maintained using the
reservoir sampling algorithm (vitter1985random, ). We use a variant of RS
first designed for the AQUA system that handles both insertions and deletions
(gibbons2002fast, ) (due to its age, a direct comparison with AQUA itself was
not feasible). Unless otherwise noted, we use a 1% sample of the data. For
stratified reservoir sampling, the strata are constructed using an equal-depth
partitioning algorithm. A sketch of the reservoir insertion step follows.
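As a reminder of the mechanism, here is a minimal sketch of the classic insertion step of Algorithm R (vitter1985random, ); the deletion-aware bookkeeping of (gibbons2002fast, ) is omitted.

```python
import random

def reservoir_insert(reservoir, size_limit, item, n_seen):
    """Insert one streamed item into a uniform reservoir sample.
    `n_seen` counts the items observed so far, including `item`."""
    if len(reservoir) < size_limit:
        reservoir.append(item)
    else:
        j = random.randrange(n_seen)  # uniform index in [0, n_seen)
        if j < size_limit:            # keep `item` with probability size_limit/n_seen
            reservoir[j] = item
```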
DeepDB. We also compare with a machine learning-based baseline called
DeepDB (hilprecht2019deepdb, ). DeepDB achieves state-of-the-art AQP results in
the static setting, and we chose it as a baseline since it has limited support
for dynamic data. In our baseline, DeepDB trains on 10% of the data. We set
this to be equivalent to the “catch-up” sampling in DPT.
Dynamic Partition Tree-Only (DPT). We compare with a baseline of only using a
single DPT synopsis without online optimization. This synopsis is constructed
once and then used for the duration of the experiment. Unless otherwise noted,
there are 128 leaf nodes in a balanced binary tree, the leaf nodes are
associated with 1% samples of their respective strata, and the catch-up
sampling rate is 10% of the data.
JanusAQP. Finally, we evaluate the full-featured JanusAQP system. This
includes a DPT and also performs re-partitioning if needed. Unless otherwise
noted, there are 128 leaf nodes in a balanced binary tree, the leaf nodes are
associated with 1% samples of their respective strata, and the catch-up
sampling rate is 10% of the data.
The storage costs of the baselines on the NYC Taxi dataset given the typical
setting (128 leaf nodes, 10% catch-up rate, and 1% sample rate) are the
following: the reservoir sampling baseline takes about 5MB, JanusAQP and DPT
take about 6MB, and a DeepDB baseline trained with 10% of the data takes about
60MB.
### 6.2. Accuracy
Approach | Intel (%) | NYC (%) | ETF (%) | Intel (ms/query) | NYC (ms/query) | ETF (ms/query)
---|---|---|---|---|---|---
JanusAQP | 0.67 / 0.62 / 0.33 | 0.48 / 0.22 / 0.2 | 5 / 4.3 / 2.3 | 0.19 / 0.31 / 0.63 | 0.27 / 0.57 / 0.97 | 0.14 / 0.28 / 0.46
DeepDB | 1.5 / 1.7 / 0.8 | 4.7 / 4.7 / 4.7 | - / - / - | 0.6 / 0.6 / 0.6 | 0.6 / 0.6 / 0.6 | 0.6 / 0.6 / 0.6
RS | 2.1 / 1.6 / 1.3 | 3.4 / 2.1 / 0.94 | 16 / 9.8 / 8.6 | 2.5 / 6.3 / 13.2 | 4.7 / 14.2 / 30.6 | 2.58 / 6.8 / 13
SRS | 1.3 / 1.3 / 1.2 | 2.4 / 1.2 / 0.95 | 10 / 8.2 / 8 | 3.1 / 6 / 10.7 | 4.6 / 14.7 / 25.3 | 2.66 / 5.2 / 12.7
Table 2. Median relative error (%) of 2000 random SUM queries and average
query latency (ms/query) over three datasets. Each cell lists the values when
20% / 50% / 90% of the rows have been inserted.
We first evaluate the end-to-end performance of JanusAQP and the baselines on
a 1d problem (1 predicate attribute). For the NYC Taxi dataset, we use the
pickUpTime attribute as the predicate attribute and the tripDistance attribute
as the aggregate attribute; for the ETF dataset, we use the volume attribute
as the predicate attribute and the close attribute as the aggregate attribute;
for the Intel Wireless dataset, we use the time and light attributes as
predicate and aggregate attribute respectively.
We start with 10% of the data in Kafka which is used by the baselines for
initialization (simulating historical data). We incrementally add 10% more
data in increments (simulating new data arrival). After every 10% increment,
we re-train the model for DeepDB and re-initialize the DPT used by JanusAQP.
Due to space limitations, we report results when 20%, 50%, and 90% of the rows
from each dataset are inserted into the system. The median relative error and
the corresponding average query latency can be found in Table 2.
We can see that JanusAQP has the overall best accuracy while controlling for
query latency. We note that the accuracy of DeepDB is stable as a function of
progress. This is because as a learned model DeepDB has a roughly fixed
resolution of the data (it does not increase the number of parameters as more
data is inserted). We omit the results of DeepDB on the ETF dataset in Table 2
due to a very large error ($>$1000%) for SUM queries, although the error of
COUNT queries is reasonable. These findings are consistent with results from
(liang2021combining, ). The accuracy of RS and SRS improves at the cost of
higher query latency.
### 6.3. Performance
Next, we evaluate the throughput and re-optimization cost of JanusAQP. We
populate Kafka with the first $p$ percent of the NYC Taxi dataset ($p$ varies
from 10 to 90). Like before, we initialize JanusAQP on the first 10% of data
and then incrementally add increments of 10% more. In this experiment, we
construct a mixed update workload of both insertions and deletions.
On the left plot of Figure 5, we show the throughput of handling insertions
and deletions using a pool of 12 threads. We can see that the performance of
JanusAQP is quite stable and does not change with the size of the existing
data or the amount of data that has been processed. For each insertion and
deletion, we simply find the target node in $O(\log k)$ time and modify its
summary (see the sketch after this paragraph). Even though a larger reservoir
size increases the overhead of manipulating the samples for reservoir
sampling, the increased overhead is unnoticeable. This is because the stratum
stored in each node is $\frac{1}{k}$ of the reservoir, each stratum is
independent of the others, and race conditions only happen when two workers
are working on the same node.
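A sketch of this per-update path is shown below; the node fields and the `child_containing` helper are hypothetical illustrations of the summary maintenance, not JanusAQP's actual internals.

```python
def apply_update(root, value, weight):
    """Route one insertion (weight=+1) or deletion (weight=-1) down the
    partition tree, updating the aggregate summary of each node on the path."""
    node = root
    while node is not None:
        node.count += weight
        node.total += weight * value
        node.total_sq += weight * value * value
        node = node.child_containing(value)  # returns None at a leaf
```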
On the right plot of Figure 5, we show the re-optimization time cost in
seconds for JanusAQP and DeepDB. The cost to initialize JanusAQP increases
with the number of tuples stored in Kafka, but it is still much cheaper than
DeepDB. It is worth noting that the re-optimization cost of DeepDB is the cost
of re-training instead of incremental training. This is mostly due to the
constraints of the API exposed by DeepDB, and we observe that re-training a
model with $2n$ samples is faster than training a model with $n$ samples and
then incrementally training on another $n$ samples. The results suggest that
complex, learned synopses are not ideal in the dynamic setting.
Figure 5. We evaluate the throughput of JanusAQP when handling insertions and
deletions in multi-threaded mode. We also compare the re-optimization cost
with DeepDB.
### 6.4. Handling Deletions
To demonstrate how JanusAQP handles deletions of data, we construct a JanusAQP
instance with the first 50% of each dataset, then we delete the last $p$% of
the data of the first 50% ($p$ varies from 1% to 9%). After JanusAQP processes
all the deletions, a query workload of 2000 random queries is evaluated and we
record the median relative error of the 2000 queries. We use the data that
remains in the system to compute the ground truth, e.g., for $p=1\%$, the
ground truth is computed with the first 49% of each dataset.
Results can be found in Figure 6. We notice that the relative error is
relatively stable when we vary the deletion percentage. This is because the
tuples that are being deleted are uniformly distributed over the predicate
attributes of the query workloads, i.e., a deletion occurs in each leaf node
of the DPT with roughly the same probability; therefore, the DPT without
re-optimization works reasonably well. In another experiment, we artificially
generate deletions that are skewed to demonstrate a scenario where
re-optimization is needed; details can be found in Sec. 6.7.
Figure 6. Median relative error of JanusAQP varying the amount of deletions
from 1% to 9% over three datasets.
### 6.5. The Catch-up Phase
In this experiment, we want to understand how the catch-up phase can impact
the accuracy and performance of the entire system.
#### 6.5.1. Accuracy
We use the entire Intel wireless dataset as the existing data. We compare a
set of JanusAQP (128, $c$, 1%) instances where the catch-up goal $c$ varies
from 1% to 10% with a step of 1%. When each JanusAQP instance reaches the
catch-up goal, we use it to evaluate the same set of 2000 random queries
generated using the light attribute as the aggregate attribute and the time
attribute as the predicate attribute.
The results can be found in the left plot of Figure 7. As a reference, we also
show the accuracy of an RS baseline with a 1% sample rate. We notice that
JanusAQP (128,1%,1%) has no advantage over the RS baseline because neither
the samples nor the summaries built during catch-up could provide better
accuracy. As we increase the catch-up ratio, we see an improvement in
accuracy because the quality of the summaries built by the catch-up phase
improves. Compared with the expensive offline pre-processing used in
(liang2021combining, ), we believe the catch-up phase is a better alternative
that provides another knob to tune the tradeoff between accuracy and cost.
Figure 7. Varying the catch-up goal from 1% to 10% of the data, we evaluate
the accuracy of JanusAQP (left plot) and the time cost of the catch-up phase
(right plot).
#### 6.5.2. Overhead
The overhead of the catch-up phase comes from two sources: the loading and
processing of the samples. We distinguish and measure the two types of
overhead in terms of their time cost. Data loading time measures the time
spent on calling the Kafka poll() API, transferring the data, and ETL
operations that are necessary to prepare the data for JanusAQP to process. It
is worth noting that the data loading cost is an essential cost that occurs in
all systems and is usually less relevant to the core design of the system and
more relevant to the design of interfaces. For example, with a different
interface, instead of dealing with strings from Kafka that can be expensive to
parse, the system could use Protocol Buffers (protobuff, ) for more efficient
data exchange, or even offload some of the ETL duties to the client side as
described in (ding2021ciao, ). On the other hand, the data processing time is
the time taken by JanusAQP to analyze the data and accordingly modify the
internal data structures that will be used for query processing.
Results can be found in the right plot of Figure 7. We can see that the data
processing with a single thread takes less than 1.5 seconds for a catch-up
ratio of 10%, which is equivalent to a throughput of 160,000 tuples processed
per second. Furthermore, the data loading cost is much higher than the data
processing cost, and we believe the data loading cost can be further improved
with more engineering effort and techniques such as client-assisted data
loading (ding2021ciao, ).
### 6.6. Multi-dimensional Query Templates
Next, we investigate the performance of JanusAQP with multi-dimensional
queries on the NASDAQ ETF Prices dataset. We randomly generate 2000 queries
from a 5-D query template that uses the volume attribute as the target
attribute, the date attribute and the 4 price attributes as predicate
attributes. We perform the same workflow as in Section 6.2. We first
compare the median relative error of JanusAQP (256,10%,1%) with DeepDB; the
results can be found in the left plot of Figure 8. We notice that the accuracy
of JanusAQP is better than DeepDB's, but the relative error increases for
both. This is because multi-dimensional queries are usually more selective.
Also, because the queries are generated using the entire dataset, we notice
that many of the ground truths computed over the first 20% of the data are 0s.
Therefore, in this experiment, we start with 30% of the data. On the right
plot of Figure 8, we find that the re-optimization cost of JanusAQP is lower
than DeepDB's but more expensive than in the 1D setting. While the increase in
dimensionality can indeed make it more expensive to process the samples
fetched during catch-up, we believe the re-optimization cost can be further
improved with more engineering effort.
Figure 8. We compare the median relative error and the re-optimization cost of
JanusAQP with DeepDB on multi-dimensional queries.
### 6.7. Re-partitioning
Next, we consider microbenchmarks that evaluate the re-partitioning
optimizations in JanusAQP. In the first experiment, using the NYC Taxi
dataset, JanusAQP performs a periodic re-partitioning after every 10% of
insertions. For comparison, the DPT baseline does not perform any
re-partitioning, and we evaluate the accuracy of both. We deliberately skew
the insertions by sorting on pickUpDateTime so that new insertions hit a small
number of partitions. The results are illustrated in Figure 9 (left): the
relative error of DPT increases drastically due to a partition tree that
becomes more and more imbalanced with new insertions. With periodic
re-partitioning, JanusAQP keeps the accuracy at a controlled level.
Figure 9. We compare the accuracy of JanusAQP and DPT in two scenarios that
cause imbalanced partition trees.
In the second experiment, we use pickupTimeOfDay as the predicate
attribute. Because the dataset is randomly distributed over the
pickupTimeOfDay attribute, the insertions are not skewed as in the previous
setting. To demonstrate a situation where a re-partition is triggered by
deletions, we randomly choose 10% of the nodes, randomly delete half of the
samples that belong to these nodes, and then insert the next 10% of the data.
After the insertion, a re-partition is triggered for JanusAQP. For comparison,
we use a DPT baseline that does not perform any re-partitioning. We perform
the same operations on the leaf nodes of the DPT baseline and then evaluate
the same set of queries. The results can be found in the right plot of
Figure 9: the relative error of DPT increases due to the imbalanced partition
tree, while the error of JanusAQP drops because of the re-partitioning.
### 6.8. A More Efficient Partitioning Algorithm
In Section 5, we propose a binary search-based (BS-based) partitioning
algorithm for 1 dimension that is much more efficient. In this experiment, we
compare the accuracy and time cost of the BS-based algorithm with the dynamic
programming-based partitioning algorithm used by PASS on the Intel Wireless
dataset. We implement the BS-based algorithm in Python in our PASS code base
for a fair comparison. We measure the time cost in seconds of each
partitioning algorithm given different numbers of partitions, and we also
compare the median relative error of each PASS variant over 2000 randomly
generated queries.
Metric | Algorithm | 16 | 32 | 64 | 128
---|---|---|---|---|---
Partition Time (s) | DP | 16 | 22 | 382 | 6349
Partition Time (s) | BS | 0.3 | 0.3 | 0.4 | 1.6
Median RE (CNT) | DP | 0.2% | 0.1% | 0.05% | 0.04%
Median RE (CNT) | BS | 0.6% | 0.4% | 0.1% | 0.1%
Median RE (SUM) | DP | 0.2% | 0.1% | 0.07% | 0.05%
Median RE (SUM) | BS | 1% | 0.9% | 0.2% | 0.2%
Median RE (AVG) | DP | 0.2% | 0.1% | 0.08% | 0.05%
Median RE (AVG) | BS | 1% | 0.7% | 0.2% | 0.15%
Table 3. We compare our new binary search-based (BS) partitioning algorithm
with the dynamic programming-based (DP) algorithm proposed in
(liang2021combining, ) on the Intel dataset, varying the number of partitions
from 16 to 128.
The results can be found in Table 3. We vary the number of partitions from 16
to 128; as we increase the number of partitions, the sample size used by the
algorithms also increases. We notice that the time cost of the DP-based
algorithm increases drastically with the number of partitions, while the time
cost of the BS-based algorithm increases only slightly. (Because we use a
larger sample size than in (liang2021combining, ), the time cost of the DP
algorithm increases and its accuracy improves relative to what we reported in
(liang2021combining, ).) On the accuracy side, the DP-based algorithm does
lead to a lower error, but the BS-based algorithm also achieves good accuracy.
Overall, we believe the BS-based algorithm is more scalable than the DP-based
algorithm and provides a favorable trade-off between cost and accuracy.
## References
* [1] Nasdaq bookviewer, 2021.
* [2] S. Acharya, P. B. Gibbons, and V. Poosala. Aqua: A fast decision support system using approximate query answers. In In Proc. of 25th Intl. Conf. on Very Large Data Bases. Citeseer, 1999.
* [3] P. K. Agarwal, G. Cormode, Z. Huang, J. Phillips, Z. Wei, and K. Yi. Mergeable summaries. In Proceedings of the 31st ACM SIGMOD-SIGACT-SIGAI symposium on Principles of Database Systems, pages 23–34, 2012.
* [4] S. Agarwal, B. Mozafari, A. Panda, H. Milner, S. Madden, and I. Stoica. Blinkdb: queries with bounded errors and bounded response times on very large data. In Proceedings of the 8th ACM European Conference on Computer Systems, pages 29–42, 2013.
* [5] J. L. Bentley and J. B. Saxe. Decomposable searching problems i. static-to-dynamic transformation. Journal of Algorithms, 1(4):301–358, 1980.
* [6] S. Chandrasekaran and M. J. Franklin. Psoup: a system for streaming queries over streaming data. The VLDB Journal, 12(2):140–156, 2003.
* [7] S. Chaudhuri, B. Ding, and S. Kandula. Approximate query processing: No silver bullet. In Proceedings of the 2017 ACM International Conference on Management of Data, pages 511–519, 2017.
* [8] G. Cormode. Sketch techniques for approximate query processing. Foundations and Trends in Databases. NOW publishers, 2011.
* [9] G. Cormode, M. Garofalakis, P. J. Haas, C. Jermaine, et al. Synopses for massive data: Samples, histograms, wavelets, sketches. Foundations and Trends® in Databases, 4(1–3):1–294, 2011.
* [10] M. De Berg, M. Van Kreveld, M. Overmars, and O. C. Schwarzkopf. Computational Geometry: Algorithms and Applications. Springer, 3rd edition, 2008.
* [11] C. Ding, D. Tang, X. Liang, A. J. Elmore, and S. Krishnan. Ciao: An optimization framework for client-assisted data loading. In 2021 IEEE 37th International Conference on Data Engineering (ICDE), pages 1979–1984. IEEE, 2021.
* [12] J. Erickson. Static-to-dynamic transformations. http://jeffe.cs.illinois.edu/teaching/datastructures/notes/01-statictodynamic.pdf.
* [13] E. Gan, P. Bailis, and M. Charikar. Coopstore: Optimizing precomputed summaries for aggregation. Proceedings of the VLDB Endowment, 13(11).
* [14] E. Gan, P. Bailis, and M. Charikar. Coopstore: Optimizing precomputed summaries for aggregation. Proceedings of the VLDB Endowment, 13(12):2174–2187, 2020.
* [15] M. N. Garofalakis and P. B. Gibbons. Approximate query processing: Taming the terabytes. In VLDB, volume 10, pages 645927–672356, 2001.
* [16] P. B. Gibbons, Y. Matias, and V. Poosala. Fast incremental maintenance of approximate histograms. ACM Transactions on Database Systems (TODS), 27(3):261–298, 2002.
* [17] A. C. Gilbert, S. Guha, P. Indyk, Y. Kotidis, S. Muthukrishnan, and M. J. Strauss. Fast, small-space algorithms for approximate histogram maintenance. In Proceedings of the thiry-fourth annual ACM symposium on Theory of computing, pages 389–398, 2002.
* [18] Google. Google protocol buffer, 2021.
* [19] S. Guha, N. Koudas, and K. Shim. Approximation and streaming algorithms for histogram construction problems. Trans. on Datab. Syst., 31(1):396–438, 2006.
* [20] B. Hilprecht, A. Schmidt, M. Kulessa, A. Molina, K. Kersting, and C. Binnig. Deepdb: Learn from data, not from queries! VLDB Endowment, 2019.
* [21] H. Jagadish, H. Jin, B. C. Ooi, and K.-L. Tan. Global optimization of histograms. ACM SIGMOD Record, 30(2):223–234, 2001.
* [22] H. V. Jagadish, N. Koudas, S. Muthukrishnan, V. Poosala, K. C. Sevcik, and T. Suel. Optimal histograms with quality guarantees. In VLDB, volume 98, pages 24–27, 1998.
* [23] C. Jermaine. Robust estimation with sampling and approximate pre-aggregation. In Proceedings 2003 VLDB Conference, pages 886–897. Elsevier, 2003.
* [24] R. Jin, L. Glimcher, C. Jermaine, and G. Agrawal. New sampling-based estimators for OLAP queries. In 22nd International Conference on Data Engineering (ICDE’06), pages 18–18. IEEE, 2006.
* [25] S. Joshi and C. Jermaine. Materialized sample views for database approximation. IEEE Transactions on Knowledge and Data Engineering, 20(3):337–351, 2008.
* [26] M. Jurgens and H.-J. Lenz. The r/sub a/*-tree: an improved r*-tree with materialized data for supporting range queries on olap-data. In Proceedings Ninth International Workshop on Database and Expert Systems Applications (Cat. No. 98EX130), pages 186–191. IEEE, 1998.
* [27] N. Koudas, S. Muthukrishnan, and D. Srivastava. Optimal histograms for hierarchical range queries. In Proceedings of the nineteenth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, pages 196–204, 2000.
* [28] I. Lazaridis and S. Mehrotra. Progressive approximate aggregate queries with a multi-resolution tree structure. Acm sigmod record, 30(2):401–412, 2001.
* [29] X. Liang, Z. Shang, S. Krishnan, A. J. Elmore, and M. J. Franklin. Fast and reliable missing data contingency analysis with predicate-constraints. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, pages 285–295, 2020.
* [30] X. Liang, S. Sintos, Z. Shang, and S. Krishnan. Combining aggregation and sampling (nearly) optimally for approximate query processing. In Proceedings of the 2021 International Conference on Management of Data, pages 1129–1141, 2021.
* [31] Q. Ma, A. M. Shanghooshabad, M. Almasi, M. Kurmanji, and P. Triantafillou. Learned approximate query processing: Make it light, accurate and fast. In CIDR, 2021.
* [32] M. Olma, O. Papapetrou, R. Appuswamy, and A. Ailamaki. Taster: self-tuning, elastic and online approximate query processing. In 2019 IEEE 35th International Conference on Data Engineering (ICDE), pages 482–493. IEEE, 2019.
* [33] O. Onyshchak. Stock market dataset. https://www.kaggle.com/dsv/1054465, 2020.
* [34] M. H. Overmars and J. van Leeuwen. Worst-case optimal insertion and deletion methods for decomposable searching problems. Information Processing Letters, 12(4):168–173, 1981.
* [35] Y. Park, B. Mozafari, J. Sorenson, and J. Wang. Verdictdb: Universalizing approximate query processing. In Proceedings of the 2018 International Conference on Management of Data, pages 1461–1476, 2018.
* [36] Y. Park, A. S. Tajik, M. Cafarella, and B. Mozafari. Database learning: Toward a database that becomes smarter every time. In Proceedings of the 2017 ACM International Conference on Management of Data, pages 587–602, 2017.
* [37] J. Peng, D. Zhang, J. Wang, and J. Pei. Aqp++ connecting approximate query processing with aggregate precomputation for interactive analytics. In Proceedings of the 2018 International Conference on Management of Data, pages 1477–1492, 2018.
* [38] W. H. Peter Bodik et al. Intel wireless dataset. http://db.csail.mit.edu/labdata/labdata.html, 2004.
* [39] R. Poepsel-Lemaitre, M. Kiefer, J. von Hein, J.-A. Quiané-Ruiz, and V. Markl. In the land of data streams where synopses are missing, one framework to bring them all. 2021.
* [40] K. Rong, Y. Lu, P. Bailis, S. Kandula, and P. Levis. Approximate partition selection for big-data workloads using summary statistics. arXiv preprint arXiv:2008.10569, 2020.
* [41] N. Taxi and L. Commission. New york city taxi trip records dataset. https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page, 2019\.
* [42] J. S. Vitter. Random sampling with a reservoir. ACM Transactions on Mathematical Software (TOMS), 11(1):37–57, 1985.
* [43] B. Walenz, S. Sintos, S. Roy, and J. Yang. Learning to sample: Counting with complex queries. Proceedings of the VLDB Endowment, 13(3):390–402, 2019.
* [44] L. Wang, R. Christensen, F. Li, and K. Yi. Spatial online sampling and aggregation. Proceedings of the VLDB Endowment, 9(3):84–95, 2015.
* [45] Z. Yang, E. Liang, A. Kamsetty, C. Wu, Y. Duan, X. Chen, P. Abbeel, J. M. Hellerstein, S. Krishnan, and I. Stoica. Deep unsupervised cardinality estimation. arXiv preprint arXiv:1905.04278, 2019.
## Appendix A Sampling from Kafka-like Systems
Random sampling is a key building block of JanusAQP because it is used in many
components: we use random samples to build the leaf layer of the partition
tree and to initialize the reservoirs that are later used to answer queries.
During the catch-up phase, we also use random samples to construct the
summaries that are stored in the leaf nodes. Random sampling affects both the
accuracy and the performance of the entire JanusAQP system: a biased sample
hurts the accuracy of the system, and expensive sampling operations lead to
higher latency and lower throughput.
Designing a random sampler for message brokers like Kafka can be a non-trivial
task because the API of such systems usually does not provide random access to
the data. To retrieve data from a Kafka topic, a Kafka consumer has to send an
offset to the server indicating the location it wants to access data from.
Therefore, a naive random sampler can be implemented by using the poll() API
with a random offset.
However, such a naive implementation can be expensive because each poll could
retrieve a batch of thousands of tuples that are contiguous (and therefore
biased). To guarantee unbiased sampling, we have to keep only a small portion
of each batch and discard most of the tuples, set another random offset, and
repeat the process until we have collected enough samples.
To build a scalable, efficient, and unbiased random sampler for Kafka, we need
more control over the polling process besides the offset; more specifically,
we want to control the size of each poll. We propose two sampling methods:
Sequential Sampler. A sequential sampler retrieves the data in a sequential
manner. In each poll, a random sample is drawn from the batch and the rest of
the tuples are discarded. Sequential samplers work best when the size of the
dataset is medium to large because the entire dataset is transferred from
Kafka to the sampler. One overhead of this approach is therefore the network
traffic of transferring the entire dataset; another drawback is that the
random sample is only available after the sampler has retrieved the entire
dataset, and this high latency might not be acceptable in some situations.
Singleton Sampler. In each poll, a singleton sampler requests one tuple from a
random offset; it repeats until enough samples have been collected. Singleton
samplers minimize the network traffic at the cost of server-side overhead due
to more frequent use of the API. The main advantage of a singleton sampler is
that it offers lower latency because the random sample is built incrementally:
a small random sample can be made available early.
In general, we observe that, for light-weight sampling tasks or tasks that
require low latency, a singleton sampler works best. In scenarios where the
dataset size is medium to large and the expected latency is acceptable, the
sequential sampler might be preferred for its more consistent performance.
There might be an interesting middle ground for sampling from Kafka-like
systems that offers better efficiency and latency, and we plan to investigate
it in future work. A sketch of the singleton sampler follows.
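To illustrate, here is a minimal sketch of a singleton sampler using the kafka-python client on a single-partition topic; the topic name, server address, and single-partition assumption are placeholders.

```python
import random
from kafka import KafkaConsumer, TopicPartition

def singleton_sample(topic, n, bootstrap="localhost:9092"):
    """Collect n tuples by polling one record at a time from random offsets."""
    consumer = KafkaConsumer(bootstrap_servers=bootstrap, enable_auto_commit=False)
    tp = TopicPartition(topic, 0)          # assumes a single-partition topic
    consumer.assign([tp])
    lo = consumer.beginning_offsets([tp])[tp]
    hi = consumer.end_offsets([tp])[tp]    # range of offsets currently stored
    sample = []
    while len(sample) < n:
        consumer.seek(tp, random.randrange(lo, hi))            # random offset
        batch = consumer.poll(timeout_ms=1000, max_records=1)  # one tuple per poll
        for records in batch.values():
            sample.extend(records[:1])
    consumer.close()
    return sample
```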
#### A.0.1. Experiments
In this experiment, we study the performance of the two sampling strategies
discussed in Appendix A with the Intel wireless dataset. We implement samplers
with different poll sizes and measure the time cost of collecting 1 million
tuples from Kafka. Essentially, this experiment measures the overhead
introduced by transferring the data and calling the Kafka API.
Results can be found in Table 4, where the singleton sampler is represented by
the row with pollSize equal to 1. Rows with pollSize larger than 1 are
sequential samplers, which retrieve the entire dataset and sample from each
poll.
pollSize | nPolls | total(ms) | ms/poll | EquivSingletonSR
---|---|---|---|---
1 | 1000000 | 19000 | 0.019 | —
10 | 10000 | 4000 | 0.04 | 0.21
100 | 10000 | 2000 | 0.02 | 0.11
1000 | 1000 | 1500 | 1.5 | 0.08
10000 | 100 | 1400 | 14 | 0.075
100000 | 10 | 1700 | 17 | 0.09
Table 4. We use a singleton sampler (pollSize=1) and sequential samplers
(pollSize>1) to sample 1 million tuples. Given the latency of each sequential
sampler, we derive an equivalent sample rate of the singleton sampler
(EquivSingletonSR) indicating the sample rate above which the sequential
sampler takes less time to collect all requested random samples. If the
expected total latency is acceptable, sequential samplers might be preferred
in these scenarios.
Because we have to wait total(ms) for a sequential sampler to collect the
samples, we calculate for each sequential sampler the sample rate above which
it achieves a lower total(ms). For example, if our sample rate is 10%, which
is larger than the 7.5% achieved by the best sequential sampler with a
pollSize of 10000, the singleton sampler will take more time than the
sequential sampler to complete the sampling process. The sequential samplers
might then be preferred if the estimated latency of total(ms) is acceptable.
In JanusAQP, because the sample rate we use during initialization is no larger
than 1%, we always use a singleton sampler during initialization, i.e., to
collect the samples that build the partition tree and initialize the strata.
For the catch-up phase, if our catch-up rate is larger than 10% and we are
dealing with a dataset of medium to large size with an acceptable latency, we
prefer a sequential sampler because we can complete the catch-up with better
accuracy; otherwise, a singleton sampler is preferred for the low latency it
offers, and we keep it running in the background.
## Appendix B Variance Estimator Details
Using algebra we have the following formulas: For a COUNT/SUM query $q$ (for
COUNT we assume that $t.a=1$ for any tuple $t$),
$w_{i}\cdot mean(\phi_{q}(S_{i}))=\frac{1}{m_{i}}\sum_{t\in
S_{i}}\phi(t)=\frac{N_{i}}{m_{i}}\sum_{t\in S_{i}\cap q}t.a$ $w_{i}\cdot
mean(\phi_{q}(H_{i}))=\frac{1}{h_{i}}\sum_{t\in
H_{i}}\phi(t)=\frac{N_{i}}{h_{i}}\sum_{t\in H_{i}}t.a$
$w_{i}^{2}\frac{var(\phi_{q}(S_{i}))}{m_{i}}=\frac{N_{i}^{2}}{m_{i}^{3}}\left[m_{i}\sum_{t\in
S_{i}\cap q}t.a^{2}-\left(\sum_{t\in S_{i}\cap q}t.a\right)^{2}\right]$
$w_{i}^{2}\frac{var(\phi_{q}(H_{i}))}{h_{i}}=\frac{N_{i}^{2}}{h_{i}^{3}}\left[h_{i}\sum_{t\in
H_{i}}t.a^{2}-\left(\sum_{t\in H_{i}}t.a\right)^{2}\right].$
For an AVG query $q$ we have
$w_{i}\cdot mean(\phi_{q}(S_{i}))=\frac{N_{i}}{|S_{i}\cap
q|\cdot\sum_{i\in\mathcal{I}_{q}}N_{i}}\sum_{t\in S_{i}\cap q}t.a$ $w_{i}\cdot
mean(\phi_{q}(H_{i}))=\frac{N_{i}}{h_{i}\cdot\sum_{i\in\mathcal{I}_{q}}N_{i}}\sum_{t\in
H_{i}}t.a,$
where $\mathcal{I}_{q}$ is the set of all leaf nodes intersected (either
partially or fully) by the query $q$.
$w_{i}^{2}\frac{var(\phi_{q}(S_{i}))}{m_{i}}=\frac{w_{i}^{2}}{m_{i}\cdot|S_{i}\cap
q|^{2}}\left[m_{i}\sum_{t\in S_{i}\cap q}t.a^{2}-\left(\sum_{t\in S_{i}\cap
q}t.a\right)^{2}\right]$
$w_{i}^{2}\frac{var(\phi_{q}(H_{i}))}{h_{i}}=\frac{w_{i}^{2}}{h_{i}^{3}}\left[h_{i}\sum_{t\in
H_{i}}t.a^{2}-\left(\sum_{t\in H_{i}}t.a\right)^{2}\right]$
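As a concrete illustration, the per-leaf SUM terms above can be computed as in the following sketch (the function and variable names are ours, for illustration only):

```python
def sum_leaf_terms(leaf_values_in_q, N_i, m_i):
    """Per-leaf SUM terms: w_i * mean(phi_q(S_i)) and w_i^2 * var(phi_q(S_i)) / m_i.
    `leaf_values_in_q` holds the values t.a of the leaf's samples inside q."""
    s1 = sum(leaf_values_in_q)                 # sum of aggregate values
    s2 = sum(a * a for a in leaf_values_in_q)  # sum of squared values
    mean_term = (N_i / m_i) * s1
    var_term = (N_i ** 2 / m_i ** 3) * (m_i * s2 - s1 * s1)
    return mean_term, var_term
```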
## Appendix C Partition algorithms
### C.1. Maximum variance under updates
An important procedure in all our algorithms is to find the maximum error of a
query that lies completely in a leaf node or, more generally, in a rectangle.
We explain how to do this for COUNT, SUM, and AVG queries. In
particular, we describe a stronger result: We construct a dynamic data
structure over a set of samples $S$ with efficient update time such that given
a query rectangle, it returns an approximation of the variance of the query
with the maximum variance in the query rectangle. This result leads to very
efficient dynamic algorithms for checking the maximum variance and re-
constructing a new partition, as we will see in the next subsections. Let
$\mathcal{M}(R)$ be the value of the variance returned by our approximation
algorithm in a rectangle $R$.
COUNT queries. For COUNT queries it is known [30] that the query with the
maximum variance in a rectangle $R$ contains exactly $|R\cap S|/2$ samples.
Hence, we construct a dynamic range tree $T$ over $S$ with space
$O(m\log^{d}m)$. $T$ can be constructed in $O(m\log^{d}m)$ time and can be
updated in $O(\log^{d}m)$ time. Given a query rectangle $R$, we run a binary
search using $T$ to find two rectangles that each contain $|R\cap S|/2$ items.
The query runs in $O(\log^{d}m)$ time.
SUM queries. For SUM queries it is known [30] that we can get a
$\frac{1}{4}$-approximation of the maximum variance query inside $R$ with the
following simple approach: Find two non-intersecting rectangles $R_{1},R_{2}$
such that $R_{1}\cup R_{2}=R$, and $|R_{1}\cap S|=|R_{2}\cap S|=|R\cap S|/2$.
Then, they compare $\sum_{t\in R_{1}\cap S}t.a^{2}$ with $\sum_{t\in R_{2}\cap
S}t.a^{2}$ and return the variance of the rectangle with the largest sum of
squares. The variance of this rectangle is a $\frac{1}{4}$-approximation of
the maximum variance in $R$. Range trees work for any aggregation function so
we can also use them to compute the sum of the values squared in a rectangle
or the variance of a rectangle. Hence, we can use a dynamic range tree as we
had in the COUNT case returning a $\frac{1}{4}$-approximation of the maximum
variance. This data structure has exactly the same complexities as the data
structure for COUNT queries.
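A sketch of this SUM procedure, assuming hypothetical range-tree helpers `split_half` (splits $R$ into two rectangles with equal sample counts), `sum_sq`, and `variance`:

```python
def approx_max_var_sum(range_tree, R):
    """Return a 1/4-approximation of the maximum-variance SUM query inside R."""
    R1, R2 = range_tree.split_half(R)  # |R1 ∩ S| = |R2 ∩ S| = |R ∩ S| / 2
    # Keep the half with the larger sum of squared aggregate values.
    best = R1 if range_tree.sum_sq(R1) >= range_tree.sum_sq(R2) else R2
    return range_tree.variance(best)   # within a factor of 4 of the maximum
```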
AVG queries. For AVG queries the offline algorithms of [30] cannot be
efficiently extended to the dynamic case we are interested in (it is not known
how to achieve $\textrm{polylog}\,m$ update time). Here we propose a new
dynamic data structure for efficiently finding an approximation of the maximum
variance AVG query in a query rectangle. Unlike the data structures in [30],
the new data structure not only handles updates efficiently but also improves
the approximation factor for any dimension $d$.
For a set of samples $S$ we construct a dynamic range tree $T^{\prime}$. We
also initialize an empty dynamic data structure $T$ that stores weighted
rectangles. Given a query rectangle $R$, it returns the rectangle with the
highest weight that lies completely inside $R$. For example $T$ can be a
dynamic range tree in $2d$ dimensions storing each rectangle as a $2d$ point
by creating a point from its two main opposite corners. Notice that every
$d$-th level node $u$ of $T^{\prime}$ corresponds to a rectangle $R_{u}$. If
$|R_{u}\cap S|\leq\delta m$ then we add $R_{u}$ in $T$ with weight
$S(R_{u})=\sum_{t\in R_{u}\cap S}t.a^{2}$. If $\delta m\leq|R_{u}\cap S|\leq
2\delta m$ we split $R_{u}$ into two rectangles $R_{u_{1}},R_{u_{2}}$ such
that $R_{u_{1}}\cup R_{u_{2}}=R_{u}$ and $|R_{u_{1}}\cap S|=|R_{u_{2}}\cap
S|=|R_{u}\cap S|/2$. We add $R_{u_{1}},R_{u_{2}}$ in $T$ with weights
$S(R_{u_{1}})=\sum_{t\in R_{u_{1}}\cap S}t.a^{2}$, $S(R_{u_{2}})=\sum_{t\in
R_{u_{2}}\cap S}t.a^{2}$. The tree $T^{\prime}$ can be constructed in
$\tilde{O}(m)$ time and it has $\tilde{O}(m)$ space. In $T$ we might insert
$\tilde{O}(m)$ rectangles so it has $\tilde{O}(m)$ space and can be
constructed in $\tilde{O}(m)$ time. For any insertion or deletion of a sample
in $S$, $T^{\prime}$ can be updated in $\tilde{O}(1)$ time by modifying at
most $\tilde{O}(1)$ nodes. For each modified node we update accordingly the
corresponding rectangle in $T$ in $\tilde{O}(1)$ time. Furthermore, after an
update we traverse the $d$-th level of $T^{\prime}$ from the updated leaf
nodes to their roots inserting or removing rectangles from $T$ accordingly
based on the number of points they contain. Using [5, 34, 12] we can propose a
simple dynamic data structure with an amortized update time guarantee that can
be extended to a worst-case guarantee by standard techniques [12]. Overall our
data structure can be constructed in $O(m\log^{3d}m)$ time, has
$O(m\log^{3d}m)$ space, and can be updated in $O(\log^{3d+1}m)$ time.
Given a query rectangle $R$ such that $|R\cap S|>2\delta m$ (as in [30]),
we show how to return an approximation of the maximum variance query
efficiently. We search $T$ using the query rectangle $R$ and we get a set of
$\tilde{O}(1)$ canonical nodes that contain all rectangles completely inside
$R$. From the canonical subsets we get the rectangle $q^{\prime}$ inside
rectangle $R$ with the largest weight. If $|q^{\prime}\cap S|<\delta m$ then
using $T^{\prime}$ we run binary search over all dimensions until we find an
expansion of $q^{\prime}$ that contains exactly $\delta m$ samples. This can
be done in $\tilde{O}(1)$ time. Without loss of generality assume that
$q^{\prime}$ contains exactly $\delta m$ samples. Using $T^{\prime}$ we
measure the variance of $q^{\prime}$ in $R$ in $\tilde{O}(1)$ time. In the end
we return the variance of $q^{\prime}$ as the approximation of the maximum
variance AVG query in the query rectangle $R$. The query procedure takes
$\tilde{O}(1)$ time.
###### Lemma C.1.
It holds that $\nu_{s}(q^{\prime})\geq\frac{1}{4\log^{d+1}m}\mathcal{V}(R)$.
###### Proof.
Let $q$ be the AVG query (a rectangle) with the maximum variance in $R$. It is
known from [30] that $q$ contains at most $2\delta m$ and at least $\delta m$
samples. Since $|R\cap S|\geq 2\delta m$ we have that
$|R\cap S|\sum_{t\in q^{\prime}\cap S}t.a^{2}-\left(\sum_{t\in q^{\prime}\cap
S}t.a\right)^{2}\geq\frac{|R\cap S|}{2}\sum_{t\in q^{\prime}\cap S}t.a^{2},$
following from Lemma A.2 in [30]. Next, notice that a query procedure on
$T^{\prime}$ with the query range $q$ would give a set of $\log^{d}m$
canonical rectangles that cover $q$ where each of them contains at most
$2\delta m$ samples. We note that for each of these canonical rectangles in
$T^{\prime}$ there are at most two rectangles in $T$ containing the same
items. Let $X$ be the set of rectangles in $T$ corresponding to the canonical
rectangles in $T^{\prime}$. From its definition it holds that $|X|\leq
2\log^{d+1}m$. All of these rectangles in $X$ lie completely inside $R$ so the
query procedure will consider them to find the rectangle with the largest
weight. Hence it holds that
$\sum_{t\in q^{\prime}\cap S}t.a^{2}\geq\max_{x\in X}\sum_{t\in x\cap
S}t.a^{2}\geq\frac{1}{2\log^{d+1}m}\sum_{t\in q\cap S}t.a^{2}.$
Overall we have (writing $|R|=|R\cap S|$ for simplicity),
$\displaystyle\nu_{s}(q^{\prime})=\frac{1}{|R|\cdot|q^{\prime}\cap S|^{2}}\left[|R\cap S|\sum_{t\in q^{\prime}\cap S}t.a^{2}-\left(\sum_{t\in q^{\prime}\cap S}t.a\right)^{2}\right]$
$\displaystyle\geq\frac{1}{|R|\cdot|q^{\prime}\cap S|^{2}}\cdot\frac{|R\cap S|}{2}\sum_{t\in q^{\prime}\cap S}t.a^{2}$
$\displaystyle\geq\frac{1}{|R|\cdot|q^{\prime}\cap S|^{2}}\cdot\frac{1}{4\log^{d+1}m}|R\cap S|\sum_{t\in q\cap S}t.a^{2}$
$\displaystyle\geq\frac{|q\cap S|^{2}}{|q^{\prime}\cap S|^{2}}\cdot\frac{1}{4\log^{d+1}m}\cdot\frac{1}{|R|\cdot|q\cap S|^{2}}\left[|R\cap S|\sum_{t\in q\cap S}t.a^{2}-\left(\sum_{t\in q\cap S}t.a\right)^{2}\right]$
$\displaystyle\geq\frac{1}{4\log^{d+1}m}\mathcal{V}(R),$
where the last inequality uses $|q\cap S|\geq|q^{\prime}\cap S|$.
∎
If $d=1$ we can modify the data structure so that it gives an approximation
factor $4$.
Overall, given a set $S$ of $m$ points in $d$ dimensions we can construct a
data structure of space $O(m\cdot\textrm{polylog}(m))$ in
$O(m\cdot\textrm{polylog}(m))$ time with update time $O(\textrm{polylog}(m))$
such that given a query rectangle it finds an approximation of the maximum
variance COUNT, SUM or AVG query inside $R$ in $O(\textrm{polylog}(m))$ time.
### C.2. Partition for $d=1$
For COUNT queries the optimum partition in $1$D consists of equal-size buckets
(intervals), so we can find the new partition in $O(k\log m)$ time by
maintaining the order of the samples $S$ under insertions and deletions using
a balanced binary search tree where the samples are stored in the leaf nodes.
Such a tree can be updated in $O(\log m)$ time, while the order of the samples
on the real line is the same as the order of the leaf nodes from left to
right. When we have to (re-)construct the partition, we find the right
endpoint of each bucket using the binary search tree. Overall, we need
$O(\log m)$ time to update the tree and $O(k\log m)$ time to construct a new
partition, as in the sketch below.
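A minimal sketch, using a sorted container as a stand-in for the balanced binary search tree (the sortedcontainers package is an illustrative choice):

```python
from sortedcontainers import SortedList  # O(log m) updates with indexed access

def count_partition_1d(samples, k):
    """Right endpoints of k equal-count buckets over a SortedList of 1D samples."""
    m = len(samples)
    # The boundary of bucket j is (roughly) the (j * m / k)-th smallest sample.
    return [samples[min(m - 1, (j * m) // k)] for j in range(1, k + 1)]

# Usage: boundaries = count_partition_1d(SortedList(values), k)
```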
Next, we focus on SUM and AVG queries.
Bounding the error. We first show a lemma that bounds the maximum length of
the largest possible confidence interval among queries that intersect one
bucket of the partition. We assume that the value of any item in $\mathcal{D}$
is bounded by a maximum value $\mathcal{U}$ and a minimum non-zero value
$\mathcal{L}$. We allow items to take zero values since this is often the case
in real datasets but no item with positive value less than $\mathcal{L}$ or
larger than $\mathcal{U}$ exists. We assume that
$\mathcal{U}=O(\textrm{poly}(N))$ and
$\mathcal{L}=\Omega(1/\textrm{poly}(N))$.
###### Lemma C.2.
Let $R$ be any rectangle and let $\mathcal{V}_{S}(R),\mathcal{V}_{A}(R)>0$ be
the variance of the SUM and AVG query respectively with the maximum variance
in $R$. Then it holds that
$\frac{\mathcal{L}}{\sqrt{2}}\leq\sqrt{\mathcal{V}_{S}(R)}\leq N\mathcal{U}$
and
$\frac{\mathcal{L}}{\sqrt{2}N}\leq\sqrt{\mathcal{V}_{A}(R)}\leq\sqrt{N}\mathcal{U}$.
###### Proof.
Without loss of generality let $q$ be the SUM or AVG query with the maximum
variance in $R$.
First we focus on SUM queries. Let $N_{R}=|R\cap\mathcal{D}|$ be the number of
total tuples in $R$. Unless $\nu_{s}(q)=0$, from [30], we know that there
exists a query $q^{\prime}$ with $|q^{\prime}|=|R|/2$ such that
$\nu_{s}(q)\geq\frac{N_{R}^{2}}{|R|^{3}}\frac{|R|}{2}\sum_{t\in
q^{\prime}}t.a^{2}$, where $\sum_{t\in q^{\prime}}t.a^{2}>0$. We also have
$\sum_{t\in q^{\prime}}t.a^{2}\geq\mathcal{L}^{2}$ (and $w_{u}=1$) leading to
$\sqrt{\mathcal{V}_{S}(R)}\geq\frac{N_{R}}{\sqrt{2}|R|}\mathcal{L}\geq\frac{\mathcal{L}}{\sqrt{2}}$.
Furthermore, we have
$\nu_{s}(q)\leq\frac{N_{R}^{2}}{|R|^{2}}|R|^{2}\mathcal{U}^{2}\leq
N^{2}\mathcal{U}^{2}$ leading to $\sqrt{\mathcal{V}_{S}(R)}\leq N\mathcal{U}$.
Next, we consider AVG queries. Unless $\nu_{s}(q)=0$, from [30], we know that
there exists a query $q^{\prime}$ with $|q^{\prime}|=\delta m\leq|R|/2$ such
that
$\nu_{s}(q^{\prime})\geq\frac{1}{|R|\delta^{2}m^{2}}\frac{|R|}{2}\sum_{t\in
q^{\prime}}t.a^{2}$, where $\sum_{t\in q^{\prime}}t.a^{2}>0$. We also have
$\sum_{t\in q^{\prime}}t.a^{2}\geq\mathcal{L}^{2}$ (and $w_{u}=1$) leading to
$\sqrt{\mathcal{V}_{A}(R)}\geq\frac{1}{\sqrt{2}\delta
m}\mathcal{L}\geq\frac{\mathcal{L}}{\sqrt{2}N}$. Furthermore, we have
$\nu_{s}(q)\leq\frac{|R|}{|q|^{2}}\mathcal{U}^{2}\leq|R|\mathcal{U}^{2}\leq
N\mathcal{U}^{2}$ leading to
$\sqrt{\mathcal{V}_{A}(R)}\leq\sqrt{N}\mathcal{U}$. ∎
Since, $\mathcal{U},\mathcal{L}$ are bounded by a polynomial with respect to
$N$, we have that the length of the longest confidence interval is bounded by
$O(\textrm{poly}(N))$ and $\Omega(1/\textrm{poly}(N))$, i.e.
$\Omega(1/\textrm{poly}(N))\leq\sqrt{\mathcal{V}_{S}(R)},\sqrt{\mathcal{V}_{A}(R)}\leq
O(\textrm{poly}(N)).$
Description of algorithm. We describe the partition algorithm for SUM queries.
The procedure is identical for AVG queries, and we highlight the differences
at the end of this section. For a parameter $\rho\in\mathbb{R}$ with $\rho>1$,
let $E=\\{\rho^{t}\mid
t\in\mathbb{Z},\frac{\mathcal{L}}{\sqrt{2}}\leq\rho^{t}\leq
N\mathcal{U}\\}\cup\\{0\\}$, be the discretization of the range
$[\frac{\mathcal{L}}{\sqrt{2}},N\mathcal{U}]$, i.e., the lower and upper bound
of the longest confidence interval (assuming queries completely inside one
bucket), by the multiplicative parameter $\rho$. For an interval $b$, let
$\mathcal{M}(b)$ be the approximation of the query with the maximum variance
in bucket $b$ (supporting updates) as described in Section C.1. We run a
binary search on the values of $E$. For each value $e\in E$ we consider, we
try to construct a partition of $k$ buckets such that in each bucket the
length of the longest confidence interval is at most $e$. If there exists such
a partition we continue the binary search with values $e^{\prime}<e$. If there
is no such a partition we continue the binary search with values
$e^{\prime}>e$. In the end of the binary search we return the last partition
that we were able to compute.
It remains to describe how to check if a partition with $k$ buckets
(intervals) with maximum length confidence interval at most $e$ exists. We
start with the leftmost sample, say $t_{1}$, which is the left boundary of the
first bucket. In order to find its right boundary we run a binary search on
the samples $S$. Let $t_{j}$ be one of the right boundaries we check in the
binary search, and let $b_{1}=[t_{1},t_{j}]$. If $\sqrt{\mathcal{M}(b_{1})}<e$
then we continue the binary search with a sample at the right side of $t_{j}$
(larger bucket). Otherwise, we continue the binary search with a sample at the
left side of $t_{j}$ (smaller bucket). When we find the maximal bucket with
longest confidence interval at most $e$ we continue with the second bucket
repeating the same process for at most $k$ buckets. In the end, if all samples
in $S$ are contained in $k$ buckets then we return that there exists a
partition (with $k$ buckets) with maximum variance at most $e$. If we cannot
cover all samples in $k$ buckets then we return that there is no partition
(with $k$ buckets) with maximum variance at most $e$.
The same algorithm also works for AVG queries. The only difference is that $E$
should be defined with respect to the upper and lower bounds of the longest
confidence interval, as shown in Lemma C.2. A sketch of the whole procedure
follows.
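The following Python sketch puts the two nested binary searches together; `M(lo, hi)` is an assumed oracle approximating the maximum variance of queries inside the bucket `samples[lo:hi]`, and `L`, `U`, `N` are the value bounds and dataset size from Lemma C.2.

```python
import math

def bs_partition(m, k, M, L, U, N, rho=2.0):
    """Binary-search-based 1D partitioning (sketch): return the smallest
    discretized error e such that k greedy buckets with max confidence
    interval <= e cover all m (sorted) samples."""
    lo_e = L / math.sqrt(2.0)                       # lower bound from Lemma C.2
    steps = int(math.log((N * U) / lo_e, rho)) + 2  # discretize [L/sqrt(2), N*U]
    E = [lo_e * rho ** t for t in range(steps)]

    def feasible(e):
        """Greedily grow maximal buckets with error <= e; True if k suffice."""
        start, buckets = 0, 0
        while start < m and buckets < k:
            lo, hi = start + 1, m                   # largest valid right endpoint
            while lo < hi:
                mid = (lo + hi + 1) // 2
                if math.sqrt(M(start, mid)) <= e:
                    lo = mid
                else:
                    hi = mid - 1
            start, buckets = lo, buckets + 1
        return start >= m

    lo, hi = 0, len(E) - 1
    while lo < hi:                                  # smallest feasible e in E
        mid = (lo + hi) // 2
        if feasible(E[mid]):
            hi = mid
        else:
            lo = mid + 1
    return E[lo]
```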
Correctness. Before we start with the correctness proof of our algorithm we
recall that in [30] we showed that under a mild assumption, for two buckets
$b_{i},b_{j}$ if $b_{i}\subseteq b_{j}$ then
$\sqrt{\mathcal{V}(b_{i})}\leq\sqrt{\mathcal{V}(b_{j})}$, namely the length of
the longest confidence interval in $b_{i}$ is smaller than the length of the
longest confidence interval in $b_{j}$. This is the monotonic property of the
longest confidence interval.
We assume that $\mathcal{M}(b_{i})$ computes a
$\frac{1}{\gamma}$-approximation of the maximum variance in $b_{i}$, i.e.,
$\mathcal{M}(b_{i})\geq\frac{1}{\gamma}\mathcal{V}(b_{i})$. Let
$\mathcal{R}^{*}$ be the optimum partition and let $b^{*}$ be the bucket that
contains the query with the longest confidence interval in $\mathcal{R}^{*}$.
First, we notice that if $e\geq\sqrt{\mathcal{V}(b^{*})}$ then we always find
a partition with longest confidence interval at most $e$. We can show it by
induction on the right boundaries of the buckets (intervals) and the monotonic
property of confidence intervals. For the base case, let
$b_{1}^{*}=[t_{1},t_{2}]$ be the first bucket of partition $\mathcal{R}^{*}$.
The procedure $\mathcal{M}$ always underestimates the maximum variance in an
interval so the binary search in our procedure will consider the right
boundary to be greater than $t_{2}$. Let $t_{i}$ be the right boundary of the
$i$-th bucket in $\mathcal{R}^{*}$ and let us assume that the $i$-th bucket in
our procedure has a right boundary $t_{j}\geq t_{i}$. We consider the
$(i+1)$-th bucket in $\mathcal{R}^{*}$ with boundaries $[t_{i+1},t_{r}]$. We
show that the $(i+1)$-th bucket in our procedure has a right boundary at least
$t_{r}$. Let $[t_{a},t_{b}]$ be the boundaries of the $(i+1)$-th bucket in our
procedure. We have $t_{a}\geq t_{i+1}$. If $t_{a}=t_{i+1}$ then $t_{b}\geq
t_{r}$ as in the basis case. If $t_{a}>t_{i+1}$ then because of the monotonic
property of the confidence intervals and the fact that the $\mathcal{M}$
procedure underestimates the maximum variance we also have that $t_{b}\geq
t_{r}$. Let $e^{\prime}$ be the smallest value in $E$ such that
$\sqrt{\mathcal{V}(b^{*})}\leq e^{\prime}$. Because of the previous
observation our algorithm always returns at least a valid partition for an
$e\leq e^{\prime}$. For every bucket $b$ of this partition,
$\sqrt{\mathcal{M}(b)}\leq e$. Let $b^{\prime}$ be the bucket in the returned
partition containing the query with the longest confidence interval in the
partition. We have,
$\sqrt{\mathcal{V}(b^{\prime})}\leq\sqrt{\gamma\mathcal{M}(b^{\prime})}\leq\sqrt{\gamma}e\leq\sqrt{\gamma}e^{\prime}\leq\rho\sqrt{\gamma}\sqrt{\mathcal{V}(b^{*})}$.
From Section C.1 we have that $\gamma=4$ for SUM and AVG queries. So
we get a partition where the maximum error is within $2\rho\sqrt{2}$ of the
optimum error for SUM queries and within $2\rho$ of the optimum error for AVG
queries.
Running time. We assume that $\mathcal{M}(\cdot)$ can be computed in $M$ time.
Since, $\mathcal{L},\mathcal{U}$ are polynomially bounded on $N$ we have that
$|E|=O(\log_{\rho}N)$ and it can be constructed in $O(\log_{\rho}N)$ time. The
binary search over $E$ takes at most $O(\log\log_{\rho}N)$ steps. For each
value $e\in E$ of the binary search we check if there is a partition with $k$
buckets and longest confidence interval at most $e$. For each possible bucket
we run a binary search over the samples $S$ and we run the procedure
$\mathcal{M}$ to get an approximation of the maximum variance. Hence, we can
decide if there exists a partition with confidence interval $e$ in $O(kM\log
m)$ time. Overall, our algorithm takes $O(kM\log m\log\log_{\rho}N)$ time. If
$\rho$ is a constant, for example $\rho=2$, then the running time is $O(kM\log
m\log\log N)$. From Section C.1 we have that in $1$ dimension $M=O(\log m)$
for SUM and $M=O(\log^{2}m)$ for AVG queries. Notice that, ignoring the $\log$
factors, the running time depends only linearly on the number of buckets $k$.
### C.3. Partition for higher dimensions
We construct a partition by building a k-d tree using the dynamic procedures
$\mathcal{M}$ as shown in Section C.1. Using the results of [30] we could
construct a near-optimum k-d tree in $O(km)$ time, ignoring the $\log$
factors. Here, we use our new results from Section C.1 to construct a k-d tree
faster (in roughly $O(k)$ time) with better approximation guarantees.
We start by constructing a dynamic data structure from Section C.1 over the
initial set of samples $S$. Assume that after a number of insertions and
deletions in $S$ we want to (re)construct the tree structure $\mathcal{T}$
over $S$. We construct a partition tree on $S$ using ideas from the balanced
k-d tree construction. We pre-define an ordering of the dimensions. Each node
$u$ of the tree is associated with a rectangle $R_{u}$. We build the tree in
$k$ iterations in a top-down manner starting from the root $v$ such that
$\mathcal{D}\subset R_{v}$. In any iteration we store and maintain the
approximate maximum variance queries of every leaf node in a max heap $C$. In
the end of the $i$-th iteration we have a tree of $i$ leaf nodes. Let $u$ be
the leaf node with the maximum $\mathcal{M}(R_{u})$ value in $C$. We remove
its value from $C$. We find the median coordinate with respect to the next
dimension in the ordering (in this branch of the tree) among the samples
$R_{u}\cap S$. We split $R_{u}$ on the median into two rectangles
$R_{u_{1}},R_{u_{2}}$ such that $R_{u_{1}}\cup R_{u_{2}}=R_{u}$ and we
construct the children $u_{1},u_{2}$ of the parent node $u$. Using the
algorithms from Section C.1 we compute
$\mathcal{M}(R_{u_{1}}),\mathcal{M}(R_{u_{2}})$ and we insert their values in
the max-heap $C$. We continue in the same way until we construct a tree
$\mathcal{T}$ with $k$ leaf nodes (buckets).
As we showed in [30], such a tree construction returns a partition which is
near optimal with respect to the optimum partition tree construction following
the same splitting criterion: split on the median of the leaf node with the
largest (real) maximum variance query. In our case we do not always split the
node with the real largest error since we use the approximation function
$\mathcal{M}(\cdot)$.
Our data structure from Section C.1 can be updated in $\tilde{O}(1)$ time per
insertion or deletion. Given a (re-)partition activation query over a set $S$
of $m$ samples, we can construct a new $\mathcal{T}$ with the following
guarantees: For SUM queries, $\mathcal{T}$ can be constructed in
$O(k\log^{d}m)$ time with approximation factor $2\sqrt{k}$. For COUNT queries
we get the same construction time $O(k\log^{d}m)$, but the tree we construct
is optimum (with respect to the partition tree with the same split criterion).
For AVG queries, $\mathcal{T}$ can be constructed in $O(k\log^{2d}m)$ time
with approximation factor $2\log^{(d+1)/2}m$. In all cases we can construct
near-optimum partitions in $\tilde{O}(k)$ time.
### C.4. Re-Partitioning Triggers
A key contribution of JanusAQP is continuous re-optimization of the
partitioning. We describe how JanusAQP tracks the variances of the current
partitions and decides when to re-partition. We also propose two ways to re-
partition: partial or full re-partitioning.
Assume that the current partitioning is $\mathcal{R}$ and let
$\mathcal{M}(\mathcal{R})$ be the (approximate) maximum variance query with
respect to the current set of samples $S$. The automatic procedure first
checks the number of samples in each bucket (leaf node) of the current
$\mathcal{T}$. If there is a leaf node $i$ associated with partition $R_{i}$
such that $|S_{i}|\ll\frac{1}{\alpha}\log m$ (recall that $\alpha$ is the
sampling rate), then there are not enough samples in $R_{i}$ to build robust
estimators. Hence, we need to find a new re-partition of $S$. Even if the
number of samples in each bucket is large, our system might still trigger a
re-partition: for a partition $R_{i}$ in the leaf node layer of $\mathcal{T}$ let
$\mathcal{M}_{i}=\mathcal{M}(R_{i})$ be the (approximate) maximum variance at
the moment we constructed $\mathcal{T}$. Let $\beta>1$ be a parameter that
controls the maximum allowable change in the variance. It can either be set
by the user or default to $\beta=10$. Assume that an update
occurred in the leaf node associated with the partition $R_{i}$. After the
update we run the function $\mathcal{M}_{i}^{\prime}=\mathcal{M}(R_{i})$ and
we update $\mathcal{M}(\mathcal{R})$ if needed. If
$\frac{1}{\beta}\mathcal{M}_{i}\leq\mathcal{M}_{i}^{\prime}\leq\beta\mathcal{M}_{i}$
then the new maximum variance in partition $R_{i}$ is not very different from
before, so we do not trigger a re-partition. Otherwise, the maximum variance in
bucket $b_{i}$ changed by a factor larger than $\beta$ from the initial
variance $\mathcal{M}_{i}$. In this case a re-partition might find a new tree
with smaller maximum error. We compute a new partitioning
$\mathcal{R}^{\prime}$ and hence a new tree $\mathcal{T}$. If
$\mathcal{M}(\mathcal{R}^{\prime})<\frac{1}{\beta}\mathcal{M}(\mathcal{R})$
then we activate a re-partition restarting the catch-up phase over the new
tree $\mathcal{T}$. On the other hand, if
$\mathcal{M}(\mathcal{R}^{\prime})\geq\frac{1}{\beta}\mathcal{M}(\mathcal{R})$
then our current partitioning $\mathcal{R}$ is good enough (its worst error is
close to the optimum one) so we can still use it. Of course, the user can also
manually trigger re-partitioning. For example, the user can choose to re-
partition once every hour, day, or after $\tau$ insertions and deletions have
occurred.
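A sketch of this trigger logic in Python; the attributes on `tree` and its leaves and the `build_candidate` callback are illustrative names rather than JanusAQP's actual interface.

```python
import math

def maybe_repartition(tree, M, build_candidate, beta, alpha, m):
    """Check the two re-partition triggers after an update. `M` is the
    approximate max-variance oracle and `build_candidate` constructs a
    candidate new partitioning; leaf attributes are illustrative."""
    threshold = math.log(m) / alpha
    for leaf in tree.leaves:
        # Trigger 1: too few samples in the bucket for robust estimators.
        if len(leaf.samples) < threshold:
            return build_candidate()
        # Trigger 2: the bucket's variance drifted by a factor > beta.
        m_new = M(leaf.region)
        if not (leaf.M_i / beta <= m_new <= beta * leaf.M_i):
            candidate = build_candidate()
            # Adopt the candidate only if its worst error is much smaller.
            if candidate.max_variance < tree.max_variance / beta:
                return candidate
    return tree  # the current partitioning is still good enough
```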
Next, we propose two ways to re-partition the index. In particular, the user
can select either partial re-partitioning or full re-partitioning. Full
re-partitioning is straightforward: using the algorithms from the previous
subsections we can construct a new partitioning and a new tree structure in
near-linear time with respect to the number of samples. Hence, we focus on
partial re-partitioning.
Instead of re-partitioning the entire space we can only re-partition the area
around the “problematic” leaf node. Let $b_{i}$ be this leaf node with high
error or small number of samples. In order to define the neighboring area
around $b_{i}$ we propose either a predefined way or an automatic way. In both
cases, the neighboring area is defined by a parameter $\psi$, the number of
levels above $b_{i}$ up to which the tree needs to be updated. In the
predefined way, the parameter $\psi$ is known in advance. We find the node
$u$ defined as the ancestor of the leaf node $b_{i}$ that lies $\psi$ levels
above $b_{i}$. Let $\mathcal{T}_{u}$ be the subtree with root node $u$ and let
$l_{u}$ be the number of leaf nodes in $\mathcal{T}_{u}$. Using the algorithms
from the previous subsections we find a near-optimum partition starting from
node $u$ with $l_{u}$ leaf nodes. The running time is near-linear with respect
to the samples stored in $\mathcal{T}_{u}$. In the automatic way, we do not know
the parameter $\psi$ upfront so we try different values of $\psi$ running a
binary search on the levels of the tree until we find a partition with low
enough error. For each value of $\psi$ we try, we run the same partial
re-partitioning algorithm as in the predefined case, starting from the node
$u$ currently considered by the binary search.
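A sketch of the automatic variant, assuming hypothetical helpers `leaf.ancestor(psi)` and `rebuild_subtree(u)` (the latter runs the construction of Section C.3 on the samples of $\mathcal{T}_{u}$ with the same number of leaves):

```python
def partial_repartition(tree, leaf, error_target, rebuild_subtree):
    """Binary search over psi, the level of the ancestor of `leaf` whose
    subtree we rebuild, stopping at the smallest psi whose rebuilt
    subtree meets `error_target`. All names are illustrative."""
    lo, hi = 1, leaf.depth                  # candidate levels above the leaf
    best = None
    while lo <= hi:
        psi = (lo + hi) // 2
        u = leaf.ancestor(psi)              # node psi levels above the leaf
        candidate = rebuild_subtree(u)
        if candidate.max_variance <= error_target:
            best = (u, candidate)           # feasible: try a smaller area
            hi = psi - 1
        else:
            lo = psi + 1                    # infeasible: widen the area
    return best
```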
Generally, partial re-partitioning is faster than full re-partitioning, since
it suffices to find a better partitioning in a small area of the
space. Furthermore, in partial re-partitioning we can still keep all the
current estimations in all nodes of $\mathcal{T}\setminus\mathcal{T}_{u}$,
i.e., the nodes of the tree that are not changed. Hence, the error of queries
after a partial re-partitioning is also lower than the error of the queries
immediately after a full re-partitioning. However, in both cases we need to
restart the catch-up phase over the new tree in order to get good estimators
for the nodes that were changed by the partial re-partitioning. Recall that we
cannot get samples from a particular area (ideally, samples stored in the
leaf nodes of $\mathcal{T}_{u}$), hence we run the catch-up phase getting
samples from the entire space. We finally note that while the catch-up phase
considers samples from the entire space, we only use these samples to improve
the estimators in the nodes that are still under-represented, i.e., the nodes
whose catch-up phase time threshold has not yet elapsed.
# Task-specific Inconsistency Alignment for Domain Adaptive Object Detection
Liang Zhao Limin Wang ✉
State Key Laboratory for Novel Software Technology, Nanjing University, China
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Detectors trained with massive labeled data often exhibit dramatic performance
degradation in particular scenarios with a data distribution gap. To
alleviate this problem of domain shift, conventional wisdom typically
concentrates solely on reducing the discrepancy between the source and target
domains via attached domain classifiers, yet ignoring the difficulty of such
transferable features in coping with both classification and localization
subtasks in object detection. To address this issue, in this paper, we propose
Task-specific Inconsistency Alignment (TIA), by developing a new alignment
mechanism in separate task spaces, improving the performance of the detector
on both subtasks. Specifically, we add a set of auxiliary predictors for both
classification and localization branches, and exploit their behavioral
inconsistencies as finer-grained domain-specific measures. Then, we devise
task-specific losses to align such cross-domain disagreement of both subtasks.
By optimizing them individually, we are able to well approximate the category-
and boundary-wise discrepancies in each task space, and therefore narrow them
in a decoupled manner. TIA demonstrates superior results on various scenarios
to the previous state-of-the-art methods. It is also observed that both the
classification and localization capabilities of the detector are sufficiently
strengthened, further demonstrating the effectiveness of our TIA method. Code
and trained models are publicly available at https://github.com/MCG-NJU/TIA.
✉: Corresponding author.
## 1 Introduction
Object detection [13, 28, 14, 25] demands massive annotated data, a demand
that, for either economic or technical reasons, is hard to fulfill in some
scenarios. An alternative is to transfer
knowledge from a source domain depicting general or synthetic scenes to the
target domain describing particular scenes of interest. Yet, as a consequence
of the domain shift [33], the performance of the detector would typically
suffer dramatic degradation. A practical strategy to cope with this dilemma is
to adopt Unsupervised Domain Adaptation (UDA). Generally, by narrowing the
pixel- or feature-level divergence between the source and target domains, a
detector trained on the labeled source domain can then generalize well to the
unlabeled target domain. This classic strategy of domain alignment, which
originated from cross-domain classification [11, 37, 20, 30, 21, 40],
establishes a solid foundation for downstream domain adaptive detection [4,
29, 32, 16, 3, 8].
Figure 1: Example images from PASCAL VOC [10] $\rightarrow$ Clipart [18];
panels show (a) Vanilla [28], (b) UMT [8], and (c) our TIA. Compared with the
vanilla detector [28], both UMT [8] and TIA identify more foreground objects
(Row 1), yet deliver lower and higher quality bounding boxes (Row 2),
respectively.
Often, as an extension of domain adaptive classifiers, existing domain
adaptive detectors focus solely on decreasing the generalization error of
their classifiers. Yet, they tend to ignore the potential improvement of their
localization errors [4, 19]. As shown in Fig. 1, compared to the vanilla
detector, the state-of-the-art domain adaptive detector (_i.e_. UMT [8])
correctly identifies and classifies more foreground objects, but delivers
relatively lower quality bounding boxes for them. One
possible reason is that, by applying domain alignment via an external binary
classifier, the resulting transferable (_i.e_. cross-domain invariant) features
grown in the classification space might be harmful for localization in the
regression space. Intuitively, the regression space is usually continuous and
sparse and has no obvious decision boundaries, hence significantly differs
from the classification space.
Motivated by this observation, we argue that the transferable features induced
by previous adaptive detectors fail to cope well with both classification and
localization subtasks. Therefore, this paper, for the first time, explicitly
develops feature alignment in separate task spaces, in order to seek
consistent performance gains on both the classification and localization branches.
Prevalent two-stage detectors generate a single coupled region of interest
(ROI) feature for both subtasks, hindering us from directly applying
conventional alignment for each task’s feature separately. To overcome this
issue, we resort to building multiple auxiliary classifiers and localizers and
introduce their behavioral inconsistencies to constitute two task-specific
discriminators. In this way, we are able to realize a new decoupled and fine-
grained feature alignment by optimizing them separately.
Specifically, we design a general Task-specific Inconsistency Alignment (TIA)
module to exploit the inconsistency among these new auxiliary predictors and
apply it to both subtasks of detectors. Therein, two task-specific losses are
devised so that the behavioral disagreement among predictors can be better
perceived and easily optimized. In particular, for classification, we
establish a stable approximation to the diversity of auxiliary classifiers’
decision boundaries with the aid of Shannon Entropy (SE), for effectively
shrinking the cross-domain category-wise discrepancies. Meanwhile for
localization, in consideration of the continuity and sparsity of the
regression space, we leverage the Standard Deviation (SD) practically to
harvest the ambiguity of various localizers’ predictions at each boundary.
This allows the boundary-wise perception of localizers to be efficiently
promoted. Overall, by maximining these two losses, we are able to directly
perform inconsistency alignments independently in fully decoupled task spaces,
thereby consistently advancing the transferability of features for both
classification and localization tasks.
In summary, our contributions are threefold: (1) We empirically observe
that the resulting features guided by existing feature alignment methods fail
to improve the performance of both classification and localization tasks in
domain adaptive object detection. To the best of our knowledge, we are the
first to address this dilemma by developing domain adaptation into these two
branches and directly performing alignment in these two task spaces (not
feature space) independently. (2) To effectively perform alignment in task
spaces, we propose to build a set of auxiliary predictors and use their
behavioral inconsistency for cross-domain alignment. These new inconsistency
measures are task-specific and finer-grained, thus expected to better capture
the domain difference. (3) Exhaustive experiments have been conducted on
various domain shift scenarios, demonstrating superior performance of our
framework over state-of-the-art domain adaptive detectors. As shown in Fig. 1
(c), our TIA makes significant progress in both tasks.
## 2 Related Work
Unsupervised Domain Adaptation (UDA). In light of the basic assumption [1],
extensive domain adaptation methods have been proposed [11, 37, 20, 30, 21, 40],
aiming at learning transferable features to shrink the discrepancy across
domains. Recently, several methods [20, 30, 21, 40] have embraced the
consensus regularization [26] strategy derived from semi-supervised learning.
Generally, multiple classifiers with varying initializations are introduced
and the inconsistency among their outputs is viewed as an indicator for
measuring the divergence between domains. In this way, [20] reduces this
disagreement and diversifies the constructed multiple feature embeddings at
the same time. [30] then simplifies this procedure, by iteratively maximizing
and minimizing the disagreement. On top of them, [21] introduces the
Wasserstein metric for mining the natural notion of dissimilarity among
predictions, while [42, 40] extend the form of [30] and explore in detail the
scoring disagreement in the multi-class case. These methods are further
generalized to downstream domain adaptation tasks, including semantic
segmentation [27, 46] and keypoint detection [47, 19]. In contrast, object
detection is a more challenging task in that it is structurally complex and
requires the simultaneous optimization of two unparalleled subtasks. Hence,
our TIA delves into the task-specific alignment and investigates in depth how
to accurately bound and then reduce both the category-wise disparities and
boundary-wise ambiguity within individual task spaces.
Figure 2: Framework overview. Best viewed in color. We develop the high-level
feature alignment into separate task spaces, by applying the proposed Task-
specific Inconsistency Alignment module to both the classification (green
part) and localization (blue part) branches of the baseline detector (gray
part). In each branch, the behavioral inconsistency of multiple auxiliary
predictors is optimized via the corresponding inconsistency-aware loss, for
essentially bridging the category-wise or boundary-wise margins between
domains.
UDA for Object Detection. Along the lines of domain adaptive classifiers, the
focus of domain adaptive detectors is mostly on bridging the pixel or feature-
level divergence between the two domains. Many methods [18, 17, 3, 8, 43]
leverage the labeled target-like images generated by CycleGAN [48] to pursue a
pixel-level consistency. Yet far more methods [4, 29, 32, 16, 3, 8] are
devoted to incrementally reinforcing a feature-level consistency. Nearly all
of them explicitly integrate domain-adversarial neural network [11] into the
detector, thereby accomplishing feature alignment with simple domain
classifiers. [4] initially carries out domain alignment on both backbone
features (image-level) and ROI features (instance-level). After that, massive
methods [29, 32, 16, 3, 8] continuously strengthen these two alignments, and
further improve the performance of the detector with multi-scale [16],
contextual [29, 3], spatial attention [22], category attention [38] and cross-
domain topological relations [2] information. In addition, [44] and [43]
concentrate on enhancing the cross-domain performance of the region proposal
network (RPN) to generate high-quality ROIs, whereby the former enforces
collaborative training with [30] and self-training on RPN and region proposal
classifier, yet the latter constructs a set of learnable RPN prototypes for
alignment. Problematically, almost all existing domain adaptive detectors
specialize in regulating decision boundaries of classifiers within detectors,
yet ignore the behavioral anomalies of their localizers. In contrast, our TIA
first takes this problem into account and develops the general feature
alignment into independent task spaces, leading to guaranteed accuracy for
each label predictor.
## 3 Methodology
Following the regular settings of unsupervised domain adaptation, we define a
labeled source domain $\mathcal{D}_{s}$ and an unlabeled target domain
$\mathcal{D}_{t}$. Our goal is to establish a knowledge transfer from
$\mathcal{D}_{s}$ to $\mathcal{D}_{t}$ for object detection, with a favorable
generalization over the target domain being guaranteed. In this section, we
present technical details of the proposed framework, whose overall architecture
is illustrated in Fig. 2. We first briefly review the baseline model (left
gray part) and, on top of it, thoroughly describe the proposed task-specific
inconsistency alignment (right blue and green parts). In the end, some
theoretical insights will be raised to explain how our method functions to
improve the transferability of both subtasks within the detector.
### 3.1 Baseline Model
Our framework is implemented on the basis of the popular two-stage detector
Faster R-CNN [28], and the gray areas in Fig. 2 represent the detector’s core
structure. Images from both domains are first fed into the backbone to yield
image-level features, followed by RPN to deliver plentiful proposals, which
are then aggregated with the backbone features through ROI Align [14] to
generate a certain number of ROIs. With the two ROI predictors on the right of
FCs, the total detection loss can be formally defined as
$\mathcal{L}_{det}=\mathcal{L}_{rpn}+\mathcal{L}_{roi}.$ (1)
To pursue the semantic consistency for subsequent modules, we adhere to the
mainstream practice of aligning features on the source and target domains, at
both mid-to-upper layers of the backbone (_i.e_. image-level) and ROI layer
(_i.e_. instance-level). Similar to [4, 32, 3], all these feature alignments
are realized by adversarial training, in terms of the domain-adversarial
neural network (DANN) [11]. Specifically, features are conveyed via a Gradient
Reversal Layer (GRL) to the discriminator $D_{k}$ for distinguishing their
domain labels. The objective is as follows:
$\footnotesize\mathcal{L}_{da}=\sum_{k=1}^{K}\bigg{(}\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}{L}\Big{(}D_{k}\big{(}f_{k,i}\big{)},d_{k,i}^{s}\Big{)}+\frac{1}{n_{t}}\sum_{i=1}^{n_{t}}{L}\Big{(}D_{k}\big{(}f_{k,i}\big{)},d_{k,i}^{t}\Big{)}\bigg{)},$
(2)
where ${L}$ is normally a binary cross-entropy loss, $f_{k,i}$ denotes the
$i$-th feature output from the $k$-th layer and $d_{k,i}$ indicates its
corresponding domain label, $n_{s}$ and $n_{t}$ refer to the total number of
features within a mini-batch in source and target domains, respectively, and
$K$ represents the total number of feature alignments. As the above domain
adaptation loss is minimized, the sign of the gradient back-propagated from
the discriminator to the generator (_e.g_. the backbone) is inverted by the
GRL, guiding the generator to deliver cross-domain invariant features so as to
confuse the discriminator and maximize the loss. The overall objective of the baseline
model can be formulated as
$\mathcal{L}=\mathcal{L}_{det}+\lambda_{1}\mathcal{L}_{da},$ (3)
where $\lambda_{1}$ is the trade-off parameter.
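For concreteness, a standard PyTorch realization of the GRL (the usual community implementation of DANN's layer, not necessarily the exact code used here):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated, scaled gradient backward."""
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing back into the feature generator.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage: domain_logits = D(grad_reverse(features)). Minimizing the domain
# loss then trains D normally while pushing the generator to maximize it,
# yielding domain-invariant features.
```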
Following [3, 8], we further interpolate the input to encourage the pixel-
level consistency. Specifically, we augment the source domain, by mixing
original source images with the target-like source images generated using
CycleGAN [48]. In summary, we build a very competitive baseline model with
feature-level and pixel-level consistency.
### 3.2 Task-specific Inconsistency Alignment
Conventional object detectors yield a single ROI feature after FCs for both
tasks of classification and localization, making it difficult to apply
previous feature alignment in this coupled space. An intuitive way to perform
task-specific alignment is to simply duplicate FCs and then align their
outputs to each predictor in a DANN [11] manner. However, as discussed in Sec.
5.1, such an alternative poorly decouples the task spaces and leads to
insufficient alignments. More importantly, it still suffers from the lack of
task-specific treatment, especially for localization task.
Following [30, 42, 40], we propose the Task-specific Inconsistency Alignment
to directly shrink the task-specific divergence between source and target
domains. This module can be applied to both the classification and
localization heads independently, as illustrated in the blue and green
regions. Rather than externally attaching additional discriminators, we use a
set of auxiliary predictors to estimate the inconsistency of each domain. By
aligning them, our method can not only yield an easier approximation to domain
distance, but also come up with a more natural and direct solution to perform
alignment in each task space independently for detectors with multiple
prediction heads.
Auxiliary predictors. The core of our idea is employing multiple auxiliary
predictors to construct an alignment mechanism between domains. Therefore,
apart from the primitive classifier $C^{p}$ and localizer $L^{p}$, two
additional sets of auxiliary classifiers $C^{a}$ and localizers $L^{a}$
composed of $N$ classifiers $C^{a}_{i}(1\leq i\leq N)$ and $M$ localizers
$L^{a}_{j}(1\leq j\leq M)$ respectively, are constructed on top of FCs. To
ensure high prediction accuracy, they are all trained with labeled source
data, as are the primary predictors, with the objective:
$\footnotesize\mathcal{L}_{roi}=\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}\bigg{(}\sum_{j=1}^{N}\mathcal{L}^{cls}\Big{(}C_{j}^{a}\big{(}\hat{r}_{i}\big{)},y_{i}^{s}\Big{)}+\sum_{j=1}^{M}\mathcal{L}^{loc}\Big{(}L_{j}^{a}\big{(}\hat{r}_{i}\big{)},b_{i}^{s}\Big{)}\bigg{)},$
(4)
where $\hat{r}_{i}$ represents the higher-level feature of ROI patches $r_{i}$
processed by FCs, $y_{i}$ and $b_{i}$ indicate the corresponding category
label and bounding box, respectively. For $\mathcal{L}^{cls}$ and
$\mathcal{L}^{loc}$, the traditional cross-entropy and smooth-L1 losses are
used. Notably, the gradients of these auxiliary predictors are detached when
back-propagating to avoid affecting the training of primitive predictors. In
addition, to use these auxiliary predictors to perform inconsistency alignment
between the source and target domains, GRLs are inserted between the FCs and
these predictors to adversarially train the proposed task-specific
inconsistency-aware losses.
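A sketch of how these auxiliary heads could be wired up in PyTorch, reusing the `grad_reverse` helper sketched in Sec. 3.1; the feature dimension, class count, and single-linear-layer heads are illustrative assumptions rather than the released implementation:

```python
import torch
import torch.nn as nn

class AuxiliaryHeads(nn.Module):
    """N auxiliary classifiers and M auxiliary localizers built on top of
    the shared FC features (illustrative shapes)."""
    def __init__(self, feat_dim=2048, num_classes=21, N=8, M=4):
        super().__init__()
        self.classifiers = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(N)])
        self.localizers = nn.ModuleList(
            [nn.Linear(feat_dim, 4) for _ in range(M)])

    def forward(self, feats, for_alignment=False):
        # Supervised training (Eq. 4): detach so the auxiliary heads do not
        # perturb the primary predictors' features. Alignment: route the
        # features through a GRL (grad_reverse, see the Sec. 3.1 sketch)
        # so the FC generator receives reversed gradients.
        x = grad_reverse(feats) if for_alignment else feats.detach()
        cls_scores = torch.stack([c(x) for c in self.classifiers])  # (N, B, C)
        loc_preds = torch.stack([l(x) for l in self.localizers])    # (M, B, 4)
        return cls_scores, loc_preds
```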
#### 3.2.1 Inconsistency Alignment Mechanism
Previous DANN-based methods [11] rely merely on attached binary discriminators
to optimize task-agnostic losses. In contrast, our method optimizes fine-
grained category- and boundary-wise multi-class losses [9, 41] for
inconsistency alignment between domains, by means of discriminators composed
of various auxiliary predictors. By nature, our objective, the alignment to
the inconsistency of auxiliary predictors’ behavior (_e.g_. decision
boundaries of classifiers), essentially characterizes a more precise
estimation to the margins across domains [40]. To better perceive this
disagreement and perform alignment, we construct an integral and adversarial,
single-stage training mechanism with GRL, to cope with detectors that are too
sophisticated to perform multi-stage iterative optimization like [30].
Specifically, we initially detect the behavioral inconsistency of auxiliary
predictors trained on the source domain over the target domain, and maximize
the proposed task-specific inconsistency-aware loss $\mathcal{L}_{ia}^{task}$.
With GRL, the gradients back-propagated to generator (_i.e_. FCs) are reversed
hence the loss is actually minimized for generator. In this adversarial
training, the framework reaches a dynamic equilibrium in which the predictors
are diversified to better discriminate the discrepancy between domains, yet
the generator yields sufficiently transferable features to discourage the
judgments of these predictors. In addition, the behavioral consistency over
the source domain of auxiliary predictors is also leveraged in a similar way.
We maximize the consistency-aware loss (the negative of
$\mathcal{L}_{ia}^{task}$), so as to simultaneously diversify the source
domain distribution and strengthen the predictors’ capabilities. The entire
domain adaptation objective can be described as follows:
$\footnotesize\begin{split}\mathcal{L}^{task}_{da}=&-\frac{1}{n_{t}}\sum_{i=1}^{n_{t}}\mathcal{L}_{ia}^{task}\Big{(}P_{1}^{a}\big{(}\hat{r_{i}}\big{)},P_{2}^{a}\big{(}\hat{r_{i}}\big{)},...,P_{N}^{a}\big{(}\hat{r_{i}}\big{)}\Big{)}\\\
&-\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}(-\mathcal{L}_{ia}^{task})\Big{(}P_{1}^{a}\big{(}\hat{r_{i}}\big{)},P_{2}^{a}\big{(}\hat{r_{i}}\big{)},...,P_{N}^{a}\big{(}\hat{r_{i}}\big{)}\Big{)}.\end{split}$
(5)
where $task\in\\{cls,loc\\}$ and $P\in\\{C,L\\}$; the specific inconsistency
measures will be explained in the next subsections.
Figure 3: Illustration of the effect of maximining $\mathcal{L}_{ia}^{cls}$
over the target domain in a toy example with two classes and three auxiliary
classifiers. Best viewed in color. (a) Initially, the behavior of classifiers
is basically consistent with similar decision boundaries; after executing a
maximin optimization, we find that: (b) the decision boundaries of the
classifiers are mutually exclusive, making the probability distribution on
each category sharper and the entropy lower, hence maximizing the loss; (c)
the disparity between generated features on class A is shrunk, flattening the
probability distribution and increasing the entropy, hence minimizing the
loss.
#### 3.2.2 Classification-specific Loss
The first question is how to capture this behavioral disagreement among
decision boundaries of auxiliary classifiers. Different distances including L1
[30], Kullback-Leibler (KL) [40], and Sliced Wasserstein Discrepancy (SWD)
[21] have been utilized to measure the discrepancy between outputs of a pair
of classifiers, but they are hard to generalize to handle multi-classifier
situations. For the score distribution constituted by auxiliary
classifications on each category, a simple assessment of its sharpness or
flatness is expected. Considering the stability of the optimization and also
inspired by [6, 34], we bound it with Shannon Entropy (SE). Concretely, for a
probability matrix $\textbf{M}\in\textbf{R}^{N\times C}$ of auxiliary
predictions, each of its column vectors $\textbf{m}_{i}\in\textbf{R}^{N}(1\leq
i\leq C)$ represents the predicted probabilities of all classifiers on a
particular class $i$. We can calculate an entropy vector
$\textbf{p}\in\textbf{R}^{C}$ — each of whose elements is the entropy
calculated from the corresponding softmaxed $\textbf{m}_{i}$ — to describe the
category-wise variations among various decision boundaries of multiple
auxiliary classifiers. Formally, the SE-driven classification-specific
inconsistency-aware loss $\mathcal{L}_{ia}^{cls}$ is defined as follows:
$\small\mathcal{L}_{ia}^{cls}=-\textbf{p}\cdot\textbf{q}=-\sum_{i=1}^{C}{\big{(}\sum_{j=1}^{N}{-\hat{m}_{ij}\log{\hat{m}_{ij}}}\big{)}\cdot\big{(}\frac{1}{N}\sum_{j=1}^{N}{m_{ij}}\big{)}},$
(6)
where $\hat{\textbf{m}}_{i}=\mathrm{softmax}(\textbf{m}_{i})$, and q indicates
the average probability vector. It is notable that the inner product operation
between entropy vector and average probability vector is crucial, as weighting
the entropy by the confidence of distinct classes keeps the attention on the
proper category.
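A possible PyTorch realization of Eq. (6) for a single ROI (batching elided); the small clamp inside the logarithm is a numerical guard added here, not part of the formula:

```python
import torch
import torch.nn.functional as F

def cls_inconsistency_loss(probs: torch.Tensor) -> torch.Tensor:
    """Eq. (6). `probs` is the (N, C) matrix M of the N auxiliary
    classifiers' class probabilities for one ROI."""
    m_hat = F.softmax(probs, dim=0)       # softmax over classifiers, per class
    entropy = -(m_hat * m_hat.clamp_min(1e-12).log()).sum(dim=0)  # p in R^C
    avg_conf = probs.mean(dim=0)          # q in R^C
    return -(entropy * avg_conf).sum()    # L = -p . q
```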
Since the optimization of the inconsistency over target domain is our main
objective, we take this process as an example, as depicted in Fig. 3. After
solving a maximin game on $\mathcal{L}_{ia}^{cls}$, the behavior of auxiliary
classifiers changes first, driving the probability distribution over each
category toward a sharper and more deterministic direction. In this case,
the decision boundaries of classifiers are diversified, as shown in Fig. 3
(b). Meanwhile, the generated target domain features are shifted towards the
source domain features, flattening the probability distribution. In this
context, features are aligned by category in the classification space, so that
greater transferability and discriminability are achieved at the same time, as
illustrated in Fig. 3 (c).
#### 3.2.3 Localization-specific Loss
The second question lies in, how to catch the behavioral disagreement across
various localizers in the regression space. Unlike classification, the
regression space usually exhibits continuity and sparsity, and the predicted
locations are normally heterogeneously clustered in certain regions, rendering
it challenging to properly assess the dispersion of the predictions. Some
domain adaptation methods [19, 47] dealing with keypoint detection consider
that, shrinking the regression space by transformations contributes to
alleviating the negative impact of sparsity on the adversarial learning of
localizers. Besides, the recently proposed [24, 23], in which the ambiguity of
multiple localizers’ predictions on object boundaries is exploited for
detecting anomalous bounding boxes, regard the top-k values along with their
mean value as a robust representation of the ambiguity.
Practically, in this work, we choose the most straightforward statistic, the
standard deviation (SD), to measure the behavioral inconsistency
among auxiliary localization results. This choice is attributed to two
reasons. First, two-stage detectors since R-CNN [13] have already well
constrained the regression space by linear transformations. Second, the
L2-norm within SD is more sensitive to outliers, which are crucial for
representing the behavioral inconsistency of localizers. The SD-driven
localization-specific inconsistency-aware loss $\mathcal{L}_{ia}^{loc}$ can be
formulated as
$\footnotesize\mathcal{L}^{loc}_{ia}=\frac{1}{4\cdot\sqrt{M}}\sum_{i=1}^{4}{\left\|\textbf{m}_{i}-\frac{1}{M}\sum_{j=1}^{M}{m_{ij}}\right\|_{2}},$
(7)
where $\textbf{m}_{i}\in\textbf{R}^{M}$ denotes the $i$-th column vector of
the prediction matrix $\textbf{M}\in\textbf{R}^{M\times 4}$ constructed by the
$M$ auxiliary localizers, and $\left\|\cdot\right\|_{2}$ indicates the L2-norm.
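Correspondingly, a possible PyTorch realization of Eq. (7) for a single ROI:

```python
import torch

def loc_inconsistency_loss(preds: torch.Tensor) -> torch.Tensor:
    """Eq. (7). `preds` is the (M, 4) matrix of the M auxiliary
    localizers' regressed offsets for the 4 boundaries of one ROI."""
    M = preds.shape[0]
    centered = preds - preds.mean(dim=0, keepdim=True)  # m_i - mean, per boundary
    per_boundary = centered.norm(dim=0)                 # L2-norm over localizers
    return per_boundary.sum() / (4 * M ** 0.5)          # SD averaged over 4 sides
```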
Method | aero | bcycle | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | hrs | bike | prsn | plnt | sheep | sofa | train | tv | mAP
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
DAF [4] | 38.0 | 47.5 | 27.7 | 24.8 | 41.3 | 41.2 | 38.2 | 11.4 | 36.8 | 39.7 | 12.7 | 12.7 | 31.9 | 47.8 | 55.6 | 46.3 | 12.1 | 25.6 | 51.1 | 45.5 | 34.7
SWDA [29] | 26.2 | 48.5 | 32.6 | 33.7 | 38.5 | 54.3 | 37.1 | 18.6 | 34.8 | 58.3 | 12.5 | 12.5 | 33.8 | 65.5 | 54.5 | 52.0 | 9.3 | 24.9 | 54.1 | 49.1 | 38.1
SCL [32] | 44.7 | 50.0 | 33.6 | 27.4 | 42.2 | 55.6 | 38.3 | 19.2 | 37.9 | 69.0 | 30.1 | 26.3 | 34.4 | 67.3 | 61.0 | 47.9 | 21.4 | 26.3 | 50.1 | 47.3 | 41.5
HTCN [3] | 33.6 | 58.9 | 34.0 | 23.4 | 45.6 | 57.0 | 39.8 | 12.0 | 39.7 | 51.3 | 20.1 | 20.1 | 39.1 | 72.8 | 61.3 | 43.1 | 19.3 | 30.1 | 50.2 | 51.8 | 40.3
SAP [22] | 27.4 | 70.8 | 32.0 | 27.9 | 42.4 | 63.5 | 47.5 | 14.3 | 48.2 | 46.1 | 31.8 | 17.9 | 43.8 | 68.0 | 68.1 | 49.0 | 18.7 | 20.4 | 55.8 | 51.3 | 42.2
UMT [8] | 39.6 | 59.1 | 32.4 | 35.0 | 45.1 | 61.9 | 48.4 | 7.5 | 46.0 | 67.6 | 21.4 | 29.5 | 48.2 | 75.9 | 70.5 | 56.7 | 25.9 | 28.9 | 39.4 | 43.6 | 44.1
DBGL [2] | 28.5 | 52.3 | 34.3 | 32.8 | 38.6 | 66.4 | 38.2 | 25.3 | 39.9 | 47.4 | 23.9 | 17.9 | 38.9 | 78.3 | 61.2 | 51.7 | 26.2 | 28.9 | 56.8 | 44.5 | 41.6
Source Only | 35.6 | 52.5 | 24.3 | 23.0 | 20.0 | 43.9 | 32.8 | 10.7 | 30.6 | 11.7 | 13.8 | 6.0 | 36.8 | 45.9 | 48.7 | 41.9 | 16.5 | 7.3 | 22.9 | 32.0 | 27.8
Baseline | 31.9 | 56.3 | 33.4 | 26.3 | 40.2 | 53.3 | 42.7 | 17.9 | 42.3 | 59.1 | 15.5 | 23.6 | 35.1 | 85.2 | 63.2 | 46.3 | 22.0 | 28.4 | 51.0 | 48.2 | 41.1
TIACLS | 38.3 | 51.0 | 38.3 | 33.2 | 43.0 | 65.7 | 43.8 | 22.2 | 43.3 | 57.1 | 20.9 | 23.7 | 38.9 | 89.4 | 64.2 | 53.8 | 38.2 | 25.0 | 52.4 | 50.5 | 44.7
TIALOC | 37.5 | 55.8 | 35.3 | 32.2 | 45.6 | 63.1 | 44.1 | 15.6 | 44.4 | 62.1 | 15.1 | 26.3 | 38.5 | 74.3 | 65.3 | 46.9 | 30.7 | 27.2 | 55.5 | 48.9 | 43.2
TIA | 42.2 | 66.0 | 36.9 | 37.3 | 43.7 | 71.8 | 49.7 | 18.2 | 44.9 | 58.9 | 18.2 | 29.1 | 40.7 | 87.8 | 67.4 | 49.7 | 27.4 | 27.8 | 57.1 | 50.6 | 46.3
Table 1: Experimental results (%) of Real-to-Artistic scenario, PASCAL VOC
$\rightarrow$ Clipart.
#### 3.2.4 Overall Objective
Combined with the baseline model, the final objective of the proposed
framework becomes
$\mathcal{L}=\mathcal{L}_{det}+\lambda_{1}\mathcal{L}_{da}+\lambda_{2}\mathcal{L}^{cls}_{da}+\lambda_{3}\mathcal{L}^{loc}_{da},$
(8)
where $\lambda_{1}$, $\lambda_{2}$ and $\lambda_{3}$ are trade-off parameters
for balancing various loss components.
### 3.3 Theoretical Insights
Tracing the roots, extensive unsupervised domain adaptation methods are
motivated by the theoretical analysis in [1], which states the following:
###### Theorem 1
Let $\mathcal{H}$ be the hypothesis space and let
$\langle\mathcal{D}_{s},f_{s}\rangle$ and
$\langle\mathcal{D}_{t},f_{t}\rangle$ be the two domains consisting of a pair
of distribution $\mathcal{D}$ and labeling function $f$. Hence for any
$h\in\mathcal{H}$:
$\small\varepsilon_{t}(h,f_{t})\leq\varepsilon_{s}(h,f_{s})+\frac{1}{2}d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_{s},\mathcal{D}_{t})+\lambda^{\ast},$
(9)
where $\varepsilon_{s}$ (resp. $\varepsilon_{t}$) denotes the disagreement (_i.e_.
error) between the labeling function $f_{s}$ (resp. $f_{t}$) and hypothesis
$h$ over the source (resp. target) domain, $d_{\mathcal{H}\Delta\mathcal{H}}$
denotes the $\mathcal{H}\Delta\mathcal{H}$ divergence between domains,
$\lambda^{\ast}$ indicates the error of an ideal hypothesis $h^{\ast}$.
Most of existing cross-domain detectors continue the practice in DANN [11] and
are dedicated to approximating the optimal $\mathcal{H}$-divergence (including
$\mathcal{H}\Delta\mathcal{H}$-divergence) by minimizing the Jensen-Shannon
divergence [35]. Then, for the two labeling functions (a classifier $f^{c}$
and a localizer $f^{l}$) possessed by all detectors, we have
$\small\begin{split}\varepsilon_{t}(h,f_{t}^{c})\leq\varepsilon_{s}(h,f_{s}^{c})+\frac{1}{2}d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_{s},\mathcal{D}_{t})+\lambda^{\ast},\\\
\varepsilon_{t}(h,f_{t}^{l})\leq\varepsilon_{s}(h,f_{s}^{l})+\frac{1}{2}d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_{s},\mathcal{D}_{t})+\lambda^{\ast}.\end{split}$
(10)
In this context, by narrowing a single divergence, the target errors of both
labeling functions are restricted, which is, however, hard to achieve: the
large differences between the classification and regression spaces make it
difficult for a single hypothesis to be consistent with both functions
simultaneously, and we also empirically found that the target domain error of
the localizer is often poorly bounded. In response to this problem, our
framework actually decouples the optimization of the above divergence and, by
specifying the hypothesis for each labeling function, consistently reduces the
two target errors. Specifically, we have
$\small\begin{split}\varepsilon_{t}(h_{1},f_{t}^{c})\leq\varepsilon_{s}(h_{1},f_{s}^{c})+\frac{1}{2}d_{MCSD}^{cls}(\mathcal{D}_{s},\mathcal{D}_{t})+\lambda^{\ast},\\\
\varepsilon_{t}(h_{2},f_{t}^{l})\leq\varepsilon_{s}(h_{2},f_{s}^{l})+\frac{1}{2}d_{MCSD}^{loc}(\mathcal{D}_{s},\mathcal{D}_{t})+\lambda^{\ast},\end{split}$
(11)
where $d_{MCSD}^{cls}$ (resp. $d_{MCSD}^{loc}$) indicates the classification
(resp. localization)-specific Multi-Class Scoring Disagreement [40]
divergence, which is narrowed when maximining our proposed
$\mathcal{L}^{cls}_{da}$ (resp. $\mathcal{L}^{loc}_{da}$).
## 4 Experiments
### 4.1 Experimental Setup
Following the default settings in [4, 29], in all experiments, the input image
is first resized to have a shorter side length of 600, and then fed into the
Faster R-CNN[28] with ROI Align [14]. We train the model using the SGD
optimizer with an initial learning rate of 0.001, divided by 10 every 50k
iterations. The batch size is set to 2, one image from the source domain and
one from the target domain. For experiments on Normal-to-Foggy and
Cross-Camera, VGG16 [36] pretrained on ImageNet [7] is employed as the
detection backbone, and 70k iterations are trained in total. For
Real-to-Artistic, we use the
pretrained ResNet101 [15] instead and train a total of 120k iterations. The
numbers of auxiliary classifiers (N) and localizers (M) are set to 8 and 4,
and the trade-off parameters $\lambda_{1}$, $\lambda_{2}$, and $\lambda_{3}$
are given as 1.0, 1.0 and 0.01, respectively. We report mean average precision
(mAP) with a threshold of 0.5 for evaluation.
Various state-of-the-art domain adaptive detectors are introduced for
comparison, including DAF [4], SWDA [29], MAF [16], SCL [32], HTCN [3], CST
[44], SAP [22], RPNPA [43], UMT [8], DBGL [2], MeGA [38]. For all these
methods, we cite the results in their original paper. To verify the
effectiveness of our method, we report the performance of the Baseline model
and our TIA sequentially. We also train the Faster R-CNN using only the source
images, as well as only the annotated target images, and their performance on
different scenarios is uniformly referred as Source Only, Target Only,
respectively.
### 4.2 Real to Artistic
In this scenario, we specialize in the migration from trivial real to stylized
artistic domains. Typically, to simulate this adaptation, we use both
VOC2007-trainval and VOC2012-trainval in PASCAL VOC [10] to construct natural
source domain, and Clipart [18] to represent artistic target domain, according
to [18, 29, 3]. Clipart, which shares 20 categories with PASCAL VOC and totals
1k images, is employed for both training (without labels) and evaluation.
Tab. 1 shows the results of adaptation from PASCAL VOC to Clipart. It can be
observed that our approach outperforms the previous state-of-the-art by a
notable margin (+2.2%), achieving a mAP of 46.3%. Notably, the increase in
localization accuracy delivers a consistent improvement over all classes,
enabling the highest mean AP even though only a few categories reach the
highest per-class AP.
The overall results showcase that a finer-grained feature alignment towards
high-level abstract semantic inconsistency is essential, especially in such
completely dissimilar scenes. Also, considering the cross-domain label shifts
in the class distribution and the spatial distribution of bounding boxes, our
way of shrinking both the category-wise and boundary-wise discrepancies
explains the superiority of TIA.
Method | bus | bcycle | car | cycle | person | rider | train | truck | mAP
---|---|---|---|---|---|---|---|---|---
DAF [4] | 35.3 | 27.1 | 40.5 | 20.0 | 25.0 | 31.0 | 20.2 | 22.1 | 27.6
SWDA [29] | 36.2 | 35.3 | 43.5 | 30.0 | 29.9 | 42.3 | 32.6 | 24.5 | 34.3
MAF [16] | 39.9 | 33.9 | 43.9 | 29.2 | 28.2 | 39.5 | 33.3 | 23.8 | 34.0
SCL [32] | 41.8 | 36.2 | 44.8 | 33.6 | 31.6 | 44.0 | 40.7 | 30.4 | 37.9
HTCN [3] | 47.4 | 37.1 | 47.9 | 32.3 | 33.2 | 47.5 | 40.9 | 31.6 | 39.8
CST [44] | 45.6 | 36.8 | 50.1 | 30.1 | 32.7 | 44.4 | 25.4 | 21.7 | 35.9
SAP [22] | 46.8 | 40.7 | 59.8 | 30.4 | 40.8 | 46.7 | 37.5 | 24.3 | 40.9
RPNPA [43] | 43.6 | 36.8 | 50.5 | 29.7 | 33.3 | 45.6 | 42.0 | 30.4 | 39.0
UMT [8] | 56.5 | 37.3 | 48.6 | 30.4 | 33.0 | 46.7 | 46.8 | 34.1 | 41.7
MeGA [38] | 49.2 | 39.0 | 52.4 | 34.5 | 37.7 | 49.0 | 46.9 | 25.4 | 41.8
Source Only | 22.3 | 26.5 | 34.3 | 15.3 | 24.1 | 33.1 | 3.0 | 4.1 | 20.3
Baseline | 33.0 | 45.7 | 47.9 | 33.3 | 45.5 | 36.0 | 35.0 | 37.0 | 39.2
TIA | 52.1 | 38.1 | 49.7 | 37.7 | 34.8 | 46.3 | 48.6 | 31.1 | 42.3
Target Only | 53.1 | 36.4 | 52.8 | 36.0 | 36.2 | 46.5 | 40.2 | 34.0 | 41.9
Table 2: Experimental results (%) of Normal-to-Foggy scenario, Cityscapes
$\rightarrow$ Foggy Cityscapes.
### 4.3 Normal to Foggy
The capability to accommodate various weather conditions becomes a new
expectation for detectors. In this experiment, we use Cityscapes [5] and Foggy
Cityscapes [31] as the source and target domains, respectively, to perform a
transfer from regular scenes to foggy scenes. Cityscapes comprises 3,475
images, of which 2,975 are training set and the remaining 500 are validation
set. Foggy Cityscapes is built on Cityscapes and rendered with the physical
model of haze; thus both are identical in scenes and annotations. Results are
reported on the validation set of Foggy Cityscapes.
According to Tab. 2, our proposed framework TIA obtains the highest mAP
(42.3%) over all compared methods, and in particular, our method outperforms
the Target Only (+0.4%) for the first time. These results demonstrate the
importance of aligning task-specific inconsistency. Additionally, taking into
account that the benchmark is close to saturation, the performance improvement
we achieve relative to the state-of-the-art method (+0.5%) is quite
considerable.
Method | KITTI $\rightarrow$ City | KITTI $\leftarrow$ City
---|---|---
DAF [4] | 38.5 | 64.1
SWDA [29] | 37.9 | 71.0
MAF [16] | 41.0 | 72.1
SCL [32] | 41.9 | 72.7
HTCN [3] | 42.1 | 73.2
CST [44] | 43.6 | -
SAP [22] | 43.4 | 75.2
RPNPA [43] | - | 75.1
MeGA [38] | 43.0 | 75.5
Source Only | 30.2 | 53.5
Baseline | 42.4 | 73.0
TIA | 44.0 | 75.9
Table 3: Experimental results (%) of Cross-Camera scenario, KITTI
$\leftrightarrow$ Cityscapes.
### 4.4 Cross Camera
The domain gap derived from camera differences constitutes a shackle that
limits applications of many deep learning algorithms. In this part, we adopt
KITTI [12], which contains 7,481 images, and Cityscapes as the source and
target domains, and perform transfer in both adaptation directions. In line with
the protocol of [4], we only evaluate detection performance on their common
category, car.
The car-detection AP of various adaptive detectors is reported in Tab. 3.
Our method achieves new state-of-the-art results of 44.0% and 75.9% in
both adaptations, and also improves by +1.6% and +2.9%, respectively, relative
to the Baseline, manifesting once again the effectiveness and generalization
of our approach.
Classification | Localization | mAP
---|---|---
- | - | 41.1
DANN [11] | DANN [11] | 42.6
L1 | L1 | 43.2
KL | L1 | 43.7
SWD [21] | SWD [21] | 44.4
$\mathcal{L}_{ia}^{cls}$ (6) | $\mathcal{L}_{ia}^{loc}$ (7) | 46.3
Table 4: Ablation study on the effect on subtasks.
Figure 4: Ablation study on the effect of number of auxiliary predictors $N$,
$M$.
Figure 5: Error analysis of the most confident detections. (SO refers to
Source Only.)
## 5 Analysis
### 5.1 Ablation Study
Effect on subtasks. Tab. 1 also demonstrates the effectiveness of TIA on both
subtasks of classification and localization. As represented by TIACLS and
TIALOC, our proposed classification- and localization-specific inconsistency
alignment bring consistent improvements (+3.6% and +2.1%). These results
indicate that aligning the inconsistency in each task space is effective in
enhancing both the category-wise and boundary-wise transferability. Moreover,
because of the orthogonality of classification and localization, boundary-wise
alignment always leads to congruent improvement for the detector,
revealing the significance of learning a cross-domain localizer.
Effect of inconsistency losses. The superiority of our proposed
$\mathcal{L}_{ia}^{cls}$ (6) and $\mathcal{L}_{ia}^{loc}$ (7) is validated by
Tab. 4, on the PASCAL VOC $\rightarrow$ Clipart benchmark. For compared models
from third to fifth rows, the number of auxiliary classifiers ($N$) and
localizers ($M$) is fixed to 2. In comparison to the practical L1 loss used in
MCD [30], KL divergence used in [40], the recently proposed SWD [21], and the
intuitive alternative as mentioned in Sec. 3.2, our losses better capture the
disagreement and ambiguity of the behavior in auxiliary classifiers and
localizers, thus suggesting more precise measures to cross-domain
discrepancies. In particular, the use of the L1 loss in our framework (shown in
the third row) can be regarded as a suitable alternative to MCD [30], since,
experimentally, MCD itself fails to work on detection. Therefore, the improvement of
our TIA w.r.t. it reveals both the stability of our optimization and the
superiority of the proposed losses. Additionally, the way of simply repeating
FCs and then aligning their output features to each predictor based on DANN
[11] (shown in the second row) brings a limited performance gain. This is
reasonable since it actually shares substantial layers before FCs, resulting
in poor decoupling of the feature space and inadequate task-specific
alignment. Meanwhile, it also lacks task-specific treatment and hence still
suffers from the rising of localization error in the classification space of
the domain discriminator.
Effect of number of auxiliary predictors $N$, $M$. To reveal more clearly the
impact of the number of auxiliary predictors, _i.e_. $N$ or $M$, on their
corresponding tasks, we fix one to 0 and exponentially vary the other from 0
to 32; Fig. 4 depicts their impact. Obviously, $N$ contributes more to the
overall performance than $M$, suggesting a rational representation of
category-wise differences is of greater significance. In addition, in sparse
regression spaces, the growth in the number of localizers is of limited
benefit in capturing the inconsistency. We speculate that this is because
the localization results are typically heterogeneously clustered in certain
regions.
### 5.2 Error Analysis
To demonstrate that our framework is capable of promoting the discriminability
of features towards both classification and localization tasks, we analyze the
detection errors of Source Only [28], DAF [4], SWDA [29], HTCN [3], UMT [8]
and our TIA on the most confident detections for the Cityscapes $\rightarrow$
Foggy Cityscapes task. In line with [4], we categorize the detections into 3 types:
Correct (IoU with GT $\geq$ 0.5), MisLocalization (0.5 $>$ IoU with GT $\geq$
0.3) and Background (IoU with GT $<$ 0.3). For each class, we select the top-K
predictions where K is the number of ground-truth bounding boxes in this
class, and report the mean percentage of each type across all categories. As
shown in Fig. 5, in comparison with previous mainstream cross-domain
detectors, not only does our TIA clearly improve the number of correct
detections (green color) and reduce the number of false positives, but more
importantly, it lowers mislocalization error relative to Source Only for the
first time (blue color). This proves that our TIA boosts the transferability
of features while also enhancing their awareness towards both tasks,
especially the localization task.
## 6 Conclusions and Limitations
In this paper, we propose a new method TIA, by developing fine-grained feature
alignment in separate task spaces in terms of inconsistency, sufficiently
strengthening both the classification and localization capabilities of the
detector. Extensive experiments demonstrate the effectiveness of TIA.
Nevertheless, the issues of label shift and training stability inherent in
domain adaptation still limit TIA, and research in these regards will become
future work.
## Acknowledgments
This work is supported by National Natural Science Foundation of China
(No.62076119, No.61921006), Program for Innovative Talents and Entrepreneur in
Jiangsu Province, and Collaborative Innovation Center of Novel Software
Technology and Industrialization.
## References
* [1] Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine learning, 79(1):151–175, 2010.
* [2] Chaoqi Chen, Jiongcheng Li, Zebiao Zheng, Yue Huang, Xinghao Ding, and Yizhou Yu. Dual bipartite graph learning: A general approach for domain adaptive object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2703–2712, 2021.
* [3] Chaoqi Chen, Zebiao Zheng, Xinghao Ding, Yue Huang, and Qi Dou. Harmonizing transferability and discriminability for adapting object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8869–8878, 2020.
* [4] Yuhua Chen, Wen Li, Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Domain adaptive faster r-cnn for object detection in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3339–3348, 2018.
* [5] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3213–3223, 2016.
* [6] Shuhao Cui, Shuhui Wang, Junbao Zhuo, Liang Li, Qingming Huang, and Qi Tian. Towards discriminability and diversity: Batch nuclear-norm maximization under label insufficient situations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3941–3950, 2020.
* [7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009.
* [8] Jinhong Deng, Wen Li, Yuhua Chen, and Lixin Duan. Unbiased mean teacher for cross-domain object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4091–4101, 2021.
* [9] Ürün Dogan, Tobias Glasmachers, and Christian Igel. A unified view on multi-class support vector classification. J. Mach. Learn. Res., 17(45):1–32, 2016.
* [10] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2):303–338, 2010.
* [11] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The journal of machine learning research, 17(1):2096–2030, 2016.
* [12] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In 2012 IEEE conference on computer vision and pattern recognition, pages 3354–3361. IEEE, 2012.
* [13] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 580–587, 2014.
* [14] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961–2969, 2017.
* [15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [16] Zhenwei He and Lei Zhang. Multi-adversarial faster-rcnn for unrestricted object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6668–6677, 2019.
* [17] Han-Kai Hsu, Chun-Han Yao, Yi-Hsuan Tsai, Wei-Chih Hung, Hung-Yu Tseng, Maneesh Singh, and Ming-Hsuan Yang. Progressive domain adaptation for object detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 749–757, 2020.
* [18] Naoto Inoue, Ryosuke Furuta, Toshihiko Yamasaki, and Kiyoharu Aizawa. Cross-domain weakly-supervised object detection through progressive domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5001–5009, 2018.
* [19] Junguang Jiang, Yifei Ji, Ximei Wang, Yufeng Liu, Jianmin Wang, and Mingsheng Long. Regressive domain adaptation for unsupervised keypoint detection. arXiv preprint arXiv:2103.06175, 2021.
* [20] Abhishek Kumar, Prasanna Sattigeri, Kahini Wadhawan, Leonid Karlinsky, Rogerio Feris, William T Freeman, and Gregory Wornell. Co-regularized alignment for unsupervised domain adaptation. arXiv preprint arXiv:1811.05443, 2018.
* [21] Chen-Yu Lee, Tanmay Batra, Mohammad Haris Baig, and Daniel Ulbricht. Sliced wasserstein discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10285–10295, 2019.
* [22] Congcong Li, Dawei Du, Libo Zhang, Longyin Wen, Tiejian Luo, Yanjun Wu, and Pengfei Zhu. Spatial attention pyramid network for unsupervised domain adaptation. In European Conference on Computer Vision, pages 481–497. Springer, 2020.
* [23] Xiang Li, Wenhai Wang, Xiaolin Hu, Jun Li, Jinhui Tang, and Jian Yang. Generalized focal loss v2: Learning reliable localization quality estimation for dense object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11632–11641, 2021.
* [24] Xiang Li, Wenhai Wang, Lijun Wu, Shuo Chen, Xiaolin Hu, Jun Li, Jinhui Tang, and Jian Yang. Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection. arXiv preprint arXiv:2006.04388, 2020.
* [25] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980–2988, 2017.
* [26] Ping Luo, Fuzhen Zhuang, Hui Xiong, Yuhong Xiong, and Qing He. Transfer learning from multiple source domains via consensus regularization. In Proceedings of the 17th ACM conference on Information and knowledge management, pages 103–112, 2008.
* [27] Yawei Luo, Liang Zheng, Tao Guan, Junqing Yu, and Yi Yang. Taking a closer look at domain shift: Category-level adversaries for semantics consistent domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2507–2516, 2019.
* [28] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. arXiv preprint arXiv:1506.01497, 2015.
* [29] Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada, and Kate Saenko. Strong-weak distribution alignment for adaptive object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6956–6965, 2019.
* [30] Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3723–3732, 2018.
* [31] Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision, 126(9):973–992, 2018.
* [32] Zhiqiang Shen, Harsh Maheshwari, Weichen Yao, and Marios Savvides. Scl: Towards accurate domain adaptive object detection via gradient detach based stacked complementary losses. arXiv preprint arXiv:1911.02559, 2019.
* [33] Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of statistical planning and inference, 90(2):227–244, 2000.
* [34] Rui Shu, Hung H Bui, Hirokazu Narui, and Stefano Ermon. A dirt-t approach to unsupervised domain adaptation. arXiv preprint arXiv:1802.08735, 2018.
* [35] Changjian Shui, Qi Chen, Jun Wen, Fan Zhou, Christian Gagné, and Boyu Wang. Beyond $\mathcal{H}$-divergence: Domain adaptation theory with jensen-shannon divergence. arXiv preprint arXiv:2007.15567, 2020.
* [36] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
* [37] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7167–7176, 2017.
* [38] Vibashan VS, Vikram Gupta, Poojan Oza, Vishwanath A Sindagi, and Vishal M Patel. Mega-cda: Memory guided attention for category-aware unsupervised domain adaptive object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4516–4526, 2021.
* [39] Yifan Wu, Ezra Winston, Divyansh Kaushik, and Zachary Lipton. Domain adaptation with asymmetrically-relaxed distribution alignment. In International Conference on Machine Learning, pages 6872–6881. PMLR, 2019.
* [40] Yabin Zhang, Bin Deng, Hui Tang, Lei Zhang, and Kui Jia. Unsupervised multi-class domain adaptation: Theory, algorithms, and practice. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
* [41] Yuchen Zhang, Tianle Liu, Mingsheng Long, and Michael Jordan. Bridging theory and algorithm for domain adaptation. In International Conference on Machine Learning, pages 7404–7413. PMLR, 2019.
* [42] Yabin Zhang, Hui Tang, Kui Jia, and Mingkui Tan. Domain-symmetric networks for adversarial domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5031–5040, 2019.
* [43] Yixin Zhang, Zilei Wang, and Yushi Mao. Rpn prototype alignment for domain adaptive object detector. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12425–12434, 2021.
* [44] Ganlong Zhao, Guanbin Li, Ruijia Xu, and Liang Lin. Collaborative training between region proposal localization and classification for domain adaptive object detection. In European Conference on Computer Vision, pages 86–102. Springer, 2020.
* [45] Han Zhao, Remi Tachet Des Combes, Kun Zhang, and Geoffrey Gordon. On learning invariant representations for domain adaptation. In International Conference on Machine Learning, pages 7523–7532. PMLR, 2019.
* [46] Zhedong Zheng and Yi Yang. Rectifying pseudo label learning via uncertainty estimation for domain adaptive semantic segmentation. International Journal of Computer Vision, 129(4):1106–1120, 2021.
* [47] Xingyi Zhou, Arjun Karpur, Chuang Gan, Linjie Luo, and Qixing Huang. Unsupervised domain adaptation for 3d keypoint estimation via view consistency. In Proceedings of the European conference on computer vision (ECCV), pages 137–153, 2018.
* [48] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, pages 2223–2232, 2017.
## Appendix A More Implementation Details
Discriminator $D_{1}$
---
Conv 1 × 1 × 256, stride 1, pad 0
ReLU
Conv 1 × 1 × 128, stride 1, pad 0
ReLU
Conv 1 × 1 × 1, stride 1, pad 0
Sigmoid
Discriminator $D_{2}$ and $D_{3}$
---
Conv 3 × 3 × 512, stride 2, pad 1
Batch Normalization, ReLU, Dropout
Conv 3 × 3 × 128, stride 2, pad 1
Batch Normalization, ReLU, Dropout
Conv 3 × 3 × 128, stride 2, pad 1
Batch Normalization, ReLU, Dropout
Average Pooling
Fully connected 128 × 2
Discriminator $D_{4}$
---
Conv 3 × 3 × 512, stride 2, pad 1
ReLU
Conv 3 × 3 × 128, stride 2, pad 1
ReLU
Conv 3 × 3 × 128, stride 2, pad 1
ReLU
Average Pooling
Fully connected 128 × 2
Table 5: Architecture of discriminators.
In this section, we present more details about the Baseline model. As
mentioned in the main text (Sec. 3.1), for higher semantic consistency, we
adhere to the mainstream practice of aligning features between the source and
target domains at both the mid-to-upper layers of the backbone (_i.e_.
image-level) and the ROI layer (_i.e_. instance-level), with the help of the
Gradient Reversal Layer (GRL) [11]. Concretely, consistent with [32], the
features output from the last three blocks of VGG16 [36], or the last three
layers of ResNet101 [15], are fed into separate discriminators ($D_{1}$,
$D_{2}$ and $D_{3}$; their concrete architectures are shown in Tab. 5)
connected via a GRL to determine the domain to which the features belong.
After that, three image-level domain adaptation losses are calculated as
follows:
$\mathcal{L}_{da}^{img1}=\frac{1}{n_{s}\cdot H\cdot W}\sum_{i=1}^{n_{s}}\sum_{w=1}^{W}\sum_{h=1}^{H}{D_{1}(x_{i})^{2}_{wh}}+\frac{1}{n_{t}\cdot H\cdot W}\sum_{i=1}^{n_{t}}\sum_{w=1}^{W}\sum_{h=1}^{H}{(1-D_{1}(x_{i})_{wh})^{2}},$ (12)
$\mathcal{L}_{da}^{img2}=\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}{\mathcal{L}_{ce}(D_{2}(x_{i}^{\prime}),d_{i}^{s})}+\frac{1}{n_{t}}\sum_{i=1}^{n_{t}}{\mathcal{L}_{ce}(D_{2}(x_{i}^{\prime}),d_{i}^{t})},$ (13)
$\mathcal{L}_{da}^{img3}=\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}{\mathcal{L}_{fl}(D_{3}(x_{i}^{\prime\prime}),d_{i}^{s})}+\frac{1}{n_{t}}\sum_{i=1}^{n_{t}}{\mathcal{L}_{fl}(D_{3}(x_{i}^{\prime\prime}),d_{i}^{t})},$ (14)
where $x_{i}$, $x_{i}^{\prime}$ and $x_{i}^{\prime\prime}$ denote the
features output from the last three blocks of the backbone for the $i$-th
training image, $d_{i}$ indicates the corresponding domain label, and
$n_{s}$ and $n_{t}$ refer to the total number of images within a mini-batch
in the source and target domains, respectively. Here, $\mathcal{L}_{ce}$
denotes the cross-entropy loss, while $\mathcal{L}_{fl}$ denotes the focal
loss, with its $\gamma$ set to 5 following [29]. Likewise, the alignment of
high-level feature patches (ROIs) is also employed. With the discriminator
$D_{4}$ illustrated in Tab. 5, the instance-level loss is formally given by
$\mathcal{L}_{da}^{ins}=\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}\mathcal{L}_{ins}(D_{4}(r_{i}),d_{i}^{s})+\frac{1}{n_{t}}\sum_{i=1}^{n_{t}}\mathcal{L}_{ins}(D_{4}(r_{i}),d_{i}^{t}),$ (15)
where $r_{i}$ denotes the $i$-th ROI and $d_{i}$ indicates the corresponding
domain label. As for $\mathcal{L}_{ins}$, we use the cross-entropy loss for
the Normal-to-Foggy and Cross-Camera scenarios and the focal loss for the
Real-to-Artistic scenario, with $\gamma$ also set to 5.
In conclusion, the overall training objective of Baseline becomes:
$\mathcal{L}=\mathcal{L}_{det}+\lambda_{1}(\mathcal{L}_{da}^{img1}+\mathcal{L}_{da}^{img2}+\mathcal{L}_{da}^{img3}+\mathcal{L}_{da}^{ins}),$ (16)
where $\lambda_{1}$ is set to 1.0. Additionally, we concatenate the
image-level features processed by the previous three discriminators with the
high-level ROI representation after the FC layers, in a manner similar to
[29, 32], to achieve greater training stability.
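To make the pipeline above concrete, the following is a minimal PyTorch sketch, our illustration rather than the authors' released code, of the gradient reversal layer [11], the pixel-wise discriminator $D_{1}$ of Tab. 5, and the least-squares image-level loss of Eq. (12); the input channel width `in_ch` depends on the backbone and is an assumption here.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient Reversal Layer [11]: identity in the forward pass,
    sign-flipped gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()

class PixelDiscriminator(nn.Module):
    """Discriminator D1 from Tab. 5: three 1x1 convs ending in a sigmoid,
    producing a per-pixel domain map. `in_ch` is an assumption that must
    match the width of the backbone features."""
    def __init__(self, in_ch=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 256, kernel_size=1), nn.ReLU(),
            nn.Conv2d(256, 128, kernel_size=1), nn.ReLU(),
            nn.Conv2d(128, 1, kernel_size=1), nn.Sigmoid(),
        )

    def forward(self, feat):
        # The GRL flips gradients so the backbone learns domain-invariant features.
        return self.net(GradReverse.apply(feat))

def img1_loss(d_src, d_tgt):
    """Least-squares image-level loss of Eq. (12): source pixels are
    pushed toward 0 and target pixels toward 1."""
    return (d_src ** 2).mean() + ((1.0 - d_tgt) ** 2).mean()
```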
## Appendix B Additional Ablation Study
For the localization-specific inconsistency alignment module, we further
investigate the effect of different measures of dispersion. To reveal their
impact on the localization branch more clearly, we remove the classification
branch. The results on the Real-to-Artistic scenario are displayed in Tab. 6.
They show that (1) a measure closer to the original scale of the predictions
is preferred, and (2) the L2-based standard deviation delivers the most
appropriate and precise estimate of the behavioral uncertainty among the
diverse localizers.
Measurement | mAP
---|---
Mean absolute deviation | 42.7
Variance | 41.8
Standard deviation | 43.2
Table 6: Ablation study on different measures of dispersion.
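For completeness, the three dispersion measures compared in Tab. 6 can be sketched as follows; the array layout of the localizers' outputs is a hypothetical choice for illustration.

```python
import numpy as np

def dispersion(preds, measure="std"):
    """Dispersion across localizers (axis 0), as compared in Tab. 6.
    `preds` with shape (num_localizers, num_boxes, 4) is a hypothetical
    layout of the diverse localizers' regression outputs."""
    if measure == "mad":              # mean absolute deviation
        return np.abs(preds - preds.mean(axis=0)).mean(axis=0)
    if measure == "var":              # variance (squared scale)
        return preds.var(axis=0)
    return preds.std(axis=0)          # standard deviation (L2-based)
```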
## Appendix C Visualization
We provide detection results of the vanilla detector (_i.e_. Source Only
[28]), state-of-the-art adaptive detectors (_e.g_. HTCN [3] and UMT [8]), and
our framework TIA. Fig. 6 illustrates the comparison of detections on the
PASCAL VOC [10] $\rightarrow$ Clipart [18] benchmark. We observe that our
proposed TIA outperforms both Source Only and UMT [8] and produces more
accurate detection results, _i.e_., more foreground objects are identified
(Rows 1–2), and higher-quality bounding boxes are provided along with
accurate categorization (Rows 3–5). Qualitative results on the Cityscapes [5]
$\rightarrow$ Foggy Cityscapes [31] benchmark, shown in Fig. 7, also
demonstrate the superiority of our TIA. For example, in the first row, for
the two cars on the left, the bounding box given by HTCN is relatively
off-target, whereas our method presents more compact boundaries than Source
Only's.
## Appendix D Limitations
The discrepancy between source and target domains in the label space, _i.e_.,
label shift, substantially affects the design philosophy and severely limits
the performance of existing domain adaptive detectors. In this section, we
provide an in-depth analysis of how label shift limits our TIA on each
dataset benchmark.
The benchmarks used in Normal-to-Foggy (Cityscapes [5] $\rightarrow$ Foggy
Cityscapes [31]) and Real-to-Artistic (PASCAL VOC [10] $\rightarrow$ Clipart
[18]) are essentially appropriate and they allow a good evaluation of the
performance of various domain adaptive detectors. Specifically, the former
case is ideal, since it shares an identical label space between the source and
target domains, while the latter one has its label shift diluted due to the
scale of the source domain. In this context, it is observed that, our
framework exceeds the upper bound indicated by Target Only on the former
benchmark and easily achieves state-of-the-art performance on the latter
benchmark.
It is quite different in the Cross-Camera scenario. We find that the label
shift of the benchmark (KITTI [12] $\leftrightarrow$ Cityscapes) employed in
this scenario is dominated by the imbalance in the foreground-background
ratio, namely the inconsistency in the average number of objects between the
source and target domain data. In fact, the average numbers of instances of
Cityscapes and KITTI are 9.1 and 3.8, respectively. This directly leads to two
serious problems. On the one hand, we observe that the Source Only model
undergoes a severe overfitting issue during training, which means that we
underestimate the lower bound of the benchmark; on the other hand, it imposes
higher demands on the cross-domain performance of the RPN, which directly
undermines the effectiveness of existing mainstream approaches that focus on
feature alignment for it.
In summary, two arguments are made. First, existing methods are highly
inefficient in coping with label shift. In light of [39], although domain
alignment alone reduces the divergence between domains (the second term in
Theorem 1), it can lead to arbitrary increases in $\lambda^{\ast}$ (the third
term in Theorem 1); hence, the target errors of detectors ultimately cannot
be well guaranteed. For this reason, taking into account the detectors'
empirical predictions on the target domain, that is, the behavior of label
predictors, is gradually emerging as a necessity. Moreover, compared to
classification tasks, the label shift in the object detection task is
considerably more complicated. It is no longer limited to differences in
category proportions, but is more widely distributed in spatial differences
in the scale, position, etc. of bounding boxes. These two facts drive the
proposal of TIA from a different angle.
Second, given that the label shift can be neither well estimated nor
truly eliminated, we argue that there is a gap between the true upper bound
and the present upper bound specified by Target Only, according to [45]. Under
such circumstances, the close performance of the domain adaptive detectors in
the Cross-Camera benchmark can be reasonably explained.
Figure 6: Illustration of the detection results on the PASCAL VOC
$\rightarrow$ Clipart benchmark (panels: (a) Source Only, (b) UMT [8], (c)
our TIA). Compared to Source Only, UMT's localization performance is worse,
while ours is better.
Figure 7: Illustration of the detection results on the Cityscapes
$\rightarrow$ Foggy Cityscapes benchmark (panels: (a) Source Only, (b) HTCN
[3], (c) our TIA). Our TIA identifies more objects and delivers more accurate
bounding boxes.
# How Social Rewiring Preferences Bridge Polarized Communities
Henrique M. Borges (<EMAIL_ADDRESS>), Departamento de Física, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
Vítor V. Vasconcelos (<EMAIL_ADDRESS>), Computational Science Lab, Informatics Institute, University of Amsterdam; POLDER, Institute for Advanced Study, University of Amsterdam; Center for Urban Mental Health, University of Amsterdam, Amsterdam, The Netherlands
Flávio L. Pinheiro (<EMAIL_ADDRESS>), NOVA Information Management School (NOVA IMS), Universidade Nova de Lisboa, Lisbon, Portugal
###### Abstract
Recently, social debates have been marked by increased polarization of social
groups. Such polarization not only implies that groups cannot reach a
consensus on fundamental questions but also materializes in more modular
social spaces/networks that further amplify the risks of polarization in less
polarizing topics. How can network adaptation bridge different communities
when individuals reveal homophilic or heterophilic social rewiring
preferences? Here, we consider information diffusion processes that capture a
continuum from simple to complex contagion processes. We use a computational
model to understand how fast and to what extent individual rewiring
preferences bridge initially weakly connected communities and how likely it is
for them to reach a consensus. We show that homophilic and heterophilic
rewiring have different impacts depending on the type of opinion spread.
First, in the case of complex opinion diffusion, we show that even polarized
social networks can reach a population-wide consensus without reshaping their
underlying network. When polarized social structures amplify opinion
polarization, heterophilic rewiring preferences play a key role in creating
bridges between communities and facilitating a population-wide consensus.
Second, in the case of simple opinion diffusion, homophilic rewiring
preferences are more capable of fostering consensus and avoiding a co-
existence (dynamical polarization) of opinions. Hence, across a broad profile
of simple and complex opinion diffusion processes, only a mix of heterophilic
and homophilic rewiring preferences avoids polarization and promotes
consensus.
Social Contagion; Adaptive Social Networks; Opinion Dynamics; Polarization
## I Introduction
In the past decades, social media platforms have been at the center stage of
social debate and occupy a major role in our social dynamics. These platforms
have also amplified our tendency to form polarized groups whose segregation
and clustering of views prevents them from reaching consensus even on the most
fundamental societal questions [1]. As such, it is not surprising that much
research has been conducted to understand the phenomena of social polarization
better [2, 3, 4, 5, 6, 7]. While past works focused on identifying underlying
mechanisms that can lead to social polarization, both in opinion composition
[8, 9, 10, 11] and in respect to the structural organization of communities
[12, 13, 14, 15, 16, 17], few works have looked into how dynamical processes
on already structurally polarized populations can amplify or mitigate the
degree of structural polarization of a community. Here, we study how the co-
evolution of an information diffusion process—that interpolates between simple
and complex contagion processes—and the network structure of an initially
polarized social network can lead to the reshaping of social structures and
build environments that are more suitable for the formation of consensus.
Empirical evidence suggests that distinct types of information spread
differently [18, 19, 20, 21, 22, 23, 24], but that there is a positive and
direct relationship between the probability that an individual adopts new
information and the number of friends that already hold it [25, 26, 27, 28,
29, 21, 22]. In that context, information diffusion models can be divided into
simple contagion (social learning) or complex contagion (social influence)
processes. Formally, in simple contagion processes, the probability that
information is transmitted is directly proportional to the fraction of
neighbors with such information [18, 30, 19, 20, 31]. In contrast, under
complex contagion, adoption is typically modeled using a threshold function,
where the probability of transmission is one if the fraction of neighbors with
that information exceeds a given threshold and zero otherwise [32, 33, 22,
34]. However, empirical evidence supports the view that a heterogeneous
distribution of thresholds better describes populations [35, 25], leading to
the proposal of more general models [36, 37].
Figure 1: Opinion dynamics in static social networks. Panel a shows the
fixation times, measured for fully connected communities (well-mixed, black
dashed lines) and structured populations (different topologies in orange,
purple, and green lines). These results are averaged over $1.0\times 10^{4}$
independent simulations starting from a configuration with equal abundances of
opinions. Vertical lines separate the different dynamical regions described in
the main text, and gray areas indicate the mismatch between well-mixed and
structured populations. This figure corresponds to $p=0$, matches Figure 3a
from Ref. [36], and sets up a baseline scenario in the absence of rewiring.
Panels b and c illustrate the possible outcomes of structural and dynamical
polarization. The former is characterized by a scenario in which polarization
occurs due to structural lock-ins. In the latter, polarization results from
agents’ inability to reach a consensus due to co-existence-like dynamics.
While, from an information diffusion perspective, polarization can be
characterized by a population that cannot reach a consensus (i.e., the
majority of the population cannot align towards the same opinion),
structurally speaking, a polarized population can be described by a modular
network structure with dense within- and sparse between-community connections.
Modular structures emphasize the amplification of group differences in terms
of complex information (i.e., complex contagion or social influence) but do
not affect the dissemination of simple information (i.e., simple contagion or
social learning) [38, 39]. To break such structural lock-ins, populations must
reshape their connections. In that sense, two adaptive network mechanisms
stemming from individual choices are homophily—the degree to which individuals
desire similarity between social contacts—and heterophily—desiring difference.
Coupling the agents’ dynamic states and connections leads to a feedback loop
where the network structure and individuals’ opinions affect each other. Past
works proposed models that combine opinion dynamics with homophilic and
heterophilic network dynamics [40, 41, 42], but they have not addressed the
interplay between the type of information diffusion and the rewiring mechanism
taking place.
Here, we study the feedback between the dynamics of individual opinions and
network structure in contemporary (polarized) social networks and ask to what
extent heterophilic and homophilic individual rewiring preferences can lead to
the bridging of initially polarized communities. We focus on potential future
debates, which will exhibit a range of diffusion properties, and test how
different rewiring preferences influence the potential for consensus formation
in competitive opinion dynamics. Furthermore, we show how the resulting
network structures put populations at risk of polarization in future social
debates. Across a broad profile of simple and complex opinion diffusion
processes, only a mix of heterophilic and homophilic rewiring preferences
avoids polarization and promotes consensus.
## II Materials and Methods
Let us consider a finite but large population of $Z\gg 1$ individuals, where
each agent is characterized by one of two contrasting opinions, $A$ or $B$.
At any given moment, the population contains a fraction $x=n^{A}/Z$ of $A$s
and $1-x=n^{B}/Z$ of $B$s. Moreover, individuals are embedded in a complex
network of social relationships, where each node corresponds to an
individual, and links capture who influences whom.
We study the case of a co-evolving population in which individuals can update
their opinions and adapt their ties. We consider a stochastic one-step process
in which, at each time step, one of two events takes place. With probability
$p$, individuals attempt to rewire a social tie, and, with probability $1-p$,
their opinion. In both cases, the decision depends on the composition of
individuals’ neighborhoods.
Figure 2: Schematic illustration of the co-evolutionary model used in this
manuscript. The proposed model combines a competitive opinion diffusion
process that co-evolves with a network dynamics process that can follow
homophily (individuals have a preference to be connected with individuals of
the same opinion) or heterophily (individuals have a preference to be
connected with individuals of opposite opinion).
### II.1 Opinion Dynamics
During an opinion update step, an agent $i$ is selected at random and updates
its opinion $X\in\\{A,B\\}$ to $Y\in\\{A,B\\}$ according to
$p_{i}^{X\to Y}=\bigg{(}\frac{n^{Y}_{i}}{z_{i}}\bigg{)}^{\alpha_{XY}},$ (1)
where $z_{i}$ is the degree of the individual $i$, $n^{Y}_{i}$ is the number
of $i$’s neighbors with opinion $Y$, and $\alpha_{XY}\geq 0$ is the complexity
of opinion $Y$ when learned by an individual with opinion $X$. When
$\alpha_{AB}=\alpha_{BA}=1$, the model reduces to the voter model. It is
convenient to reparametrize the complexities into polar coordinates, such that
$\alpha_{AB}=1+r\sin\theta$ and $\alpha_{BA}=1+r\cos\theta$. Hence, with a
single parameter $\theta$, we can explore the four dynamical regions of
interest in fully connected populations:
* •
A dominance, for $\pi/2\leq\theta<\pi$: This region does not have any internal
fixed point and $x^{*}=0$ is unstable and $x^{*}=1$ is stable. Opinion A will
dominate the population.
* •
B dominance, for $3\pi/2\leq\theta<2\pi$: This region does not have any
internal fixed point and $x^{*}=0$ is stable and $x^{*}=1$ is unstable.
Opinion B will dominate the population. In Evolutionary Game Theory (EGT),
this and A dominance exhibit dynamics akin to the Prisoner’s Dilemma and
Harmony Game [43, 44, 45].
* •
Polarization, for $\pi\leq\theta<3\pi/2$: this region is characterized by a
single stable internal fixed point that leads to the polarization of opinions,
which is identified by the constant co-existence of both opinions and the
inability of the population to reach a population-wide consensus due to a
dynamical lock. In EGT, this outcome is dynamically similar to a 2-person
Snowdrift Game [46].
* •
Consensus, for $0\leq\theta<\pi/2$: this region has a single unstable internal
fixed point resulting in coordination dynamics and a population-wide
consensus, which only depends on the initial abundance of opinions. In EGT,
this outcome is dynamically similar to a 2-person Stag-Hunt Game [47].
In the Polarization and Consensus regions, the internal fixed point position
is independent of $r$ and depends only on the ratios of complexities [36]
according to $x^{\ast}=\cot\theta=(\alpha_{BA}-1)/(\alpha_{AB}-1)$. For the
remainder of the manuscript, we shall consider the space spanned by $r=1/2$
and $0\leq\theta\leq 2\pi$. Figure 1 compares the fixation times (time to
consensus) obtained for three different network structures across the four
regions of interest. Except for modular networks, qualitatively, the expected
time to reach consensus in structured populations is consistent with the well-
mixed scenario. In structured populations, the Polarization region is reduced.
Moreover, fixation times peak in the Consensus region in modular population
structures. Such scenarios result from each community reaching a different
local consensus and then being unable to converge to a population-wide
consensus due to imposed structural lock-ins, a well-known result in the
context of complex contagion [38].
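The update rule of Eq. (1) can be sketched as follows, assuming an adjacency-list representation of the network; variable names and data layout are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def opinion_update(i, op, nbrs, r=0.5, theta=np.pi / 4):
    """One opinion-update step following Eq. (1).
    op[i] is 'A' or 'B'; nbrs[i] lists i's neighbors (adjacency lists)."""
    alpha = {('A', 'B'): 1 + r * np.sin(theta),   # complexity of B for an A
             ('B', 'A'): 1 + r * np.cos(theta)}   # complexity of A for a B
    X = op[i]
    Y = 'B' if X == 'A' else 'A'
    z_i = len(nbrs[i])
    n_Y = sum(op[j] == Y for j in nbrs[i])
    if rng.random() < (n_Y / z_i) ** alpha[(X, Y)]:
        op[i] = Y   # adopt the contrasting opinion
```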
### II.2 Rewiring Preferences
We consider two different families of rewires based on how individuals’
networks are assessed: Homophilic or Heterophilic updates, in which
individuals rewire a connection if their neighborhood is too dissimilar or
similar to them, respectively. As such, during a link update step, a random
individual $i$ breaks a random tie with a probability given by:
$p_{i_{Hom}}^{X}=\bigg{(}\frac{n_{i}^{Y}}{z_{i}}\bigg{)}^{\beta_{X}}$ (2a) or $p_{i_{Het}}^{X}=\bigg{(}1-\frac{n_{i}^{Y}}{z_{i}}\bigg{)}^{\beta_{X}},$ (2b)
where $\beta_{X}$ accounts for the tolerance of an agent with opinion
$X\in\\{A,B\\}$ regarding the composition of its neighborhood in the
homophilic (2a) and heterophilic (2b) cases, respectively. If a link is broken, then $i$ creates
a new tie with a random friend of a neighbor. This ensures that the network
remains connected. This evaluation procedure is similar in spirit to the
Schelling model [48], especially taking into account a heterogeneous-
thresholds interpretation [36].
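A link update step following Eqs. (2a) and (2b) can be sketched in the same representation; skipping nodes that are already neighbors when choosing the new tie is our assumption to avoid duplicate links.

```python
import numpy as np

rng = np.random.default_rng(1)

def rewire(i, op, nbrs, beta=(0.5, 0.5), homophilic=True):
    """One link-update step following Eq. (2a) (homophilic) or (2b)
    (heterophilic). A broken tie is replaced by a tie to a random friend
    of a neighbor, which keeps the network connected."""
    X = op[i]
    Y = 'B' if X == 'A' else 'A'
    z_i = len(nbrs[i])
    frac_Y = sum(op[j] == Y for j in nbrs[i]) / z_i
    b = beta[0] if X == 'A' else beta[1]
    p_break = frac_Y ** b if homophilic else (1.0 - frac_Y) ** b
    if rng.random() < p_break:
        old = nbrs[i][rng.integers(z_i)]    # random tie to break
        j = nbrs[i][rng.integers(z_i)]      # neighbor whose friends we scan
        candidates = [k for k in nbrs[j] if k != i and k not in nbrs[i]]
        if candidates:
            new = candidates[rng.integers(len(candidates))]
            nbrs[i].remove(old); nbrs[old].remove(i)
            nbrs[i].append(new); nbrs[new].append(i)
```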
Figure 3: Opinion dynamics with Homophilic and Heterophilic rewiring
preferences in a Consensus regime ($\theta=\pi/4$). Panel a) shows the
fixation times as a function of the rewiring rate, $p$. Panel b) shows the
fraction of times a population ends in an opinion polarization state as a
function of the rewiring rate, $p$. Panel c) shows how the modularity of the
initial structure decays as a function of $p$. Panels d) and e), show the
distribution of the fixation times until reaching consensus for different
rewiring rates, $p$, for a homophilic (d) and heterophilic (e) rewiring
preferences and $\beta=(0.5,0.5)$. In panels a), b), and c) symbols indicate
different values of the strictness of rewiring decisions ($\beta$) and colors
the different rewiring preferences (homophilic in blue and heterophilic
populations in red). Results are the average over $10^{3}$ independent
simulations, each with an upper bound of $2.5\times 10^{6}$ iterations, for an
initial random distribution of equal proportions of _A_ s and _B_ s, on
modular networks with $N=1000$ nodes, and average degree $\langle
z_{i}\rangle=4$.
The tolerance coefficient, $\beta_{X}$, follows a similar definition of
$\alpha_{XY}$. As such, it allows us to interpolate between distinct
scenarios. Lower values of $\beta_{X}$ are associated with harsher
evaluations, which makes it more likely for their neighborhood to change,
compared to an individual with an identical neighborhood but a less stringent
assessment (larger values of $\beta_{X}$). Moreover, individuals of different
opinions can have different tolerance levels, $\beta_{X}$. As such, we define
the pair of tolerance-to-rewiring coefficients as
$\beta=(\beta_{A},\beta_{B})$.
### II.3 Simulations
We consider the case of a population whose initial structure is modular and
defined by two weakly connected communities. We generated these networks by
randomly linking $q=20$ nodes from two independently generated Barabási-Albert
[49] networks with $N/2$ nodes each. We considered $N=10^{3}$ and an average
degree of $\langle z_{i}\rangle=4$ unless specified otherwise.
Each simulation starts with an equal proportion of _A_ s and _B_ s. It is also
assumed that all individuals have either a homophilic or heterophilic
evaluation of their neighborhood. While consensus is eventually reached in
finite populations, the time taken can be exceedingly long. For that reason,
we set an upper bound of $M_{\text{iter}}=2.5\times 10^{6}$ iterations, which
we take as the maximum time to consensus. We present the average out of
$10^{3}$ independent simulations for each parameter set. To capture different
future polarizing topics, we consider a scenario in which opinions are
randomly distributed in the population and another where opinions are
associated with specific modules of the network.
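The initial structure can be generated with networkx as sketched below; we read "randomly linking $q=20$ nodes" as adding $q$ random inter-community edges, and $m=2$ attachment links per node give an average degree close to $\langle z_{i}\rangle=4$.

```python
import random
import networkx as nx

def initial_modular_network(N=1000, m=2, q=20, seed=0):
    """Two Barabasi-Albert communities of N/2 nodes joined by q random
    inter-community edges (our reading of the construction in the text)."""
    random.seed(seed)
    g1 = nx.barabasi_albert_graph(N // 2, m, seed=seed)
    g2 = nx.barabasi_albert_graph(N // 2, m, seed=seed + 1)
    G = nx.disjoint_union(g1, g2)   # second community relabeled to N/2..N-1
    added = 0
    while added < q:
        u = random.randrange(N // 2)
        v = random.randrange(N // 2, N)
        if not G.has_edge(u, v):
            G.add_edge(u, v)
            added += 1
    return G
```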
## III Consensus Regime
Let us start by considering the properties of the diffusion process that lie
in the structural polarization region for static networks when the modularity
of the networks breaks their ability to reach a population-wide consensus,
i.e., $\theta=\pi/4$. Figure 3a shows the average fixation time as a function
of the rewiring rate $p$, and Figure 3b shows the fraction of simulations in
which populations end polarized in terms of opinions. Besides considering
populations with heterophilic (red) and homophilic (blue) rewiring
preferences, we also look into the strictness of the tolerance-to-rewire
coefficient (symbols), $\beta=(\beta_{A},\beta_{B})$.
Figure 4: Opinion dynamics with Homophilic and Heterophilic rewiring
preferences in a Consensus regime ($\theta=\pi/4$) with initial opinion
polarized configurations. In this series of results, each network community
starts with a local consensus on a different opinion. Panel a shows the
fixation time as a function of the rewiring rate, $p$. Panel b shows the
fraction of times a population ends in an opinion polarization state as a
function of the rewiring rate, $p$. Panel c shows how the modularity of the
initial structure decays as a function of $p$. Panels d) and e), show the
distribution of the fixation times until reaching consensus for different
rewiring rates, $p$, for a homophilic (d) and heterophilic (e) rewiring
preferences and $\beta=(0.5,0.5)$. In panels a), b), and c) symbols indicate
different values of the strictness of rewiring decisions ($\beta$) and colors
the different rewiring preferences (homophilic in blue and heterophilic
populations in red). Results are the average over $10^{3}$ independent
simulations, each with $M_{\text{iter}}=2.5\times 10^{6}$ iterations, on
modular networks with $N=10^{3}$ nodes, an average degree of $\langle
z_{i}\rangle=4$.
For low rewiring rates ($p<10^{-3}$), we observe a flat fixation time (see
Figure 3a). In that range, the rewiring dynamics are not impactful enough to
generate timely structural changes in the population structure. As such,
population outcomes can be divided into two cases: the fraction of simulations
in which population-wide consensus is reached in a relatively short time
($\approx 10^{5}$ iterations) since both communities reach the same consensus
independently and those simulations in which each community reaches a
different local consensus and for which the simulations stop at the designated
$M_{\text{iter}}$ iterations. Hence, the initial plateau observed is not the
maximum number of iterations but an average of the combination of times from
those two scenarios.
In the consensus regime, for intermediate values of rewiring rates
($10^{-3}<p<10^{-1}$), the nature of the assessment (homophily vs heterophily,
color) has a more significant impact on the results than the strictness of the
assessment (tolerance level, symbols). Homophilic rewiring preferences show
decreased fixation times with lower tolerance-to-rewire coefficients
($\beta$), whereas heterophilic rewiring leads to shorter fixation times
overall.
The rewiring rate determines if structurally polarized populations can
reshuffle their social structure and bridge communities in due time and, thus,
foster a population-wide consensus (see Figure 3b). As such, the average time
required to reach a consensus decreases monotonically with $p$. However, as
shown in Figures 3d and 3e, the average time will continue to have two
distinct contributions: one that represents the scenario where both modules
reach consensus ($\approx 10^{-5}$) and another corresponding to situations
where consensus is achieved through the rewiring of links and the break of the
initial polarized social structure.
It is possible to track the degree by which rewiring reshapes the original
network by tracking how the network modularity [50, 51, 52] decays with $p$
(see Figure 3c). We compute the modularity assuming the two initial
communities as the network partitions. For low values of rewiring rate ($p$),
the final networks can still keep their initial modular structure intact.
However, when the rewiring probability is large, the network structure begins
to lose its distinctive modular properties, initially slowly and then
abruptly. In fact, it is possible to observe a sharp transition in the
relative modularity of the network at a critical rewiring probability (see
Figure 3c). This critical point marks a transition between a ‘modular-like
network phase,’ below the critical rewiring rate $p_{c}$, and a ‘randomly
rewired network phase,’ for $p>p_{c}$, where the final networks lose the
initial modular character into that of a completely mixed structure with the
number of edges between each pair of nodes (within one of the original
communities) being equivalent to that of a network that has undergone random
rewiring and shares the same degree distribution with this final network.
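The modularity of the final networks with respect to the two initial communities can be computed as sketched below; the relative modularity reported in Figure 3c would divide this value by that of the initial, un-rewired network (our reading).

```python
import networkx as nx

def initial_partition_modularity(G, N):
    """Newman modularity [50, 51, 52] of the two initial communities,
    used to track how rewiring erodes the modular structure."""
    communities = [set(range(N // 2)), set(range(N // 2, N))]
    return nx.algorithms.community.modularity(G, communities)
```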
Overall, the dynamics can be separated into two distinct phases: an initial
fast convergence to population-wide consensus and a second case that lasts
longer and in which the population is first stuck in a polarized opinion state
along the community structure of the network and then, through link rewiring,
is able to build the necessary bridges to reach consensus. For that reason, we
investigate what occurs when the population starts from an initial condition
of an opinion-polarized population with local consensus along the network’s
community structure. Since each community makes up half of the population, we
start with the same abundance of opinions as before. Figures 4a and 4b show
that for $p<p_{c}$, fixation times are longer for this initial set-up than
for a random initial set-up and, for $p\gtrsim 0$, the average fixation time
is $M_{\text{iter}}$; thus, all populations end in a polarized state,
effectively removing the scenario where the network achieves fast
population-wide consensus (see Figures 4d and 4e). Moreover, while Figures 4a
and 4b display a similar trend for the curves under analysis in comparison to
those from Figures 3a and 3b, we now see that consensus is only possible
after a significant decay of the initial modularity of the network (see
Figure 4c), a task in which heterophilic rewiring preferences are more
efficient than homophilic ones.
These results show that the rewiring process alone does not guarantee that
the population reaches consensus: to achieve it, the rewiring process must
occur with a sufficiently high frequency and be of the adequate
type—heterophilic or homophilic—to lead to the desired outcome. Further, the
time to reach consensus differs even when the outcome is the same for both
types.
Figure 5: Opinion dynamics with Homophilic and Heterophilic rewiring
preferences in a Polarization regime ($\theta=11\pi/10$). Panel a shows the
fixation time as a function of the rewiring rate, $p$. Panel b shows the
fraction of times a population ends in an opinion polarization state as a
function of the rewiring rate, $p$. Panel c shows how the modularity of the
initial structure decays as a function of $p$. Panels d) and e), show the
parameter range in which the observed dynamical pattern is aligned with the
dynamical polarization of well-mixed populations. In panels a), b), and c)
symbols indicate different values of the strictness of rewiring decisions
($\beta$), and colors indicate the rewiring preferences (homophilic in blue
and heterophilic populations in red). Results are the average over $10^{3}$
independent simulations, each with an upper bound of $2.5\times 10^{6}$
iterations, on modular networks with $N=10^{3}$ nodes, an average degree of
$\langle z_{i}\rangle=4$.
## IV Dynamic Polarization Regime
Let us turn our attention to the regime in which individuals easily switch to
rare opinions, a regime associated with dynamical polarization in the static,
well-mixed scenario. Non-complete social networks restrict the range of
parameters in which dynamical polarization occurs, facilitating the formation
of consensus. Although the initial polarized structure of the population
plays a less relevant role in this case, it is important to understand to
what extent rewiring can affect the chance of consensus and the time to reach
it.
Similarly to the structural polarization case, Figures 5a-c suggest that the
nature of the assessment (colors) has a much more significant impact on the
results than the strictness (symbols) of the assessment itself. However,
homophilic rewiring preferences are visibly more sensitive to the strictness
of the assessment, especially for intermediate values of $p$.
It is possible to recognize that the larger the rewiring rate, the easier it
becomes for heterophilic populations to remain polarized, as evidenced in the
increasing value of both the fraction of final polarized networks, Fig. 5b,
and the average time to reach consensus, Fig. 5a. Most importantly, however,
the rewiring probability can deeply change the dynamical pattern obtained by
the final networks populated by homophilic individuals.
The time to reach consensus and the fraction of polarized populations, Figures
5a and 5b, tend to increase with $p$. This increase can be attributed to the
lower probability of losing active links (i.e., links that can promote a
change in opinion), resulting in a delay in fixation time. However, in
populations with homophilic rewiring preferences, unlike heterophilic,
fixation times start decreasing for large values of $p$, followed by a
decrease in the fraction of populations that end in polarization. This
unexpected non-linear behavior for homophilic rewiring preferences arises
because, once the rewiring rates of homophilic individuals pass a critical
point, rewiring outpaces the opinion dynamics and thus fosters the emergence
of compact clusters of like-minded individuals, in which the lack of
variability of opinions in a neighborhood limits the chances of opinions
spreading or, in other words, of opinion updates. The same is not observed
for heterophilic rewiring preferences, where individuals constantly rewire
their links, looking to surround themselves with others of different
opinions.
Moreover, the impact of the rewiring dynamics on the initially modular
structure of the social network is also worth analyzing. As in the first
case, the relative modularity decreases with increased rewiring probability
(Figure 5c), but it exhibits a clearer S-shaped behavior and decays faster
with $p$. This non-linear relationship suggests that, depending on the
rewiring rate ($p$), the social network will either remain strongly modular
(slow network adaptation) or lose its modularity (at intermediate rates for
heterophilic rewiring and at fast rates for homophilic rewiring).
Finally, regarding the results obtained for well-mixed populations, we see
that, when rewiring dynamics is considered, each of the studied rewiring
preferences leads to different dynamical responses: homophilic rewiring
preferences maintain or decrease the range of complexity parameters in which
dynamical polarization is observed (Figure 5d), but heterophilic rewiring
preferences expand the range of parameters and in the limit of very fast
rewiring rates match the well-mixed scenario.
## V Conclusions
This study delves into the dynamics of consensus formation in polarized social
networks through a coevolutionary model that integrates competing opinions
with adaptive network dynamics. Our research focuses on the interplay between
homophilic and heterophilic rewiring preferences across a range of competing
processes to enhance the understanding of social mechanisms that either
facilitate or impede recovery from social polarization. In scenarios where
information needs significant reinforcement to outcompete alternatives, our
findings reveal two distinct pathways to consensus: a rapid one through
independent community consensus and a slower one driven by rewiring dynamics.
We demonstrate that heterophilic preferences are more effective in bridging
communities for consensus in complex-information-diffusion contexts due to
their ability to diversify opinion spaces. Conversely, in contagions requiring
minimal reinforcement, homophilic rewiring emerges as more adept at fostering
consensus and mitigating polarization risks, displaying a non-linear
relationship with the rewiring rate. This finding suggests that homophilic
preferences may be more resilient to polarization in environments where
information bits are easily interchangeable.
Our research extends beyond the existing literature by examining how different
rewiring preferences can mitigate or amplify polarization. This approach
contrasts with studies that predominantly emphasize homophilic tendencies in
social networks, suggesting a more nuanced role for heterophilic interactions
in bridging divided communities.
The implications of our findings are significant for policymakers and
designers of social or organizational networks, online and offline. By
promoting a diversity of rewiring preferences, societies can enhance their
resilience to the impacts of social polarization across a spectrum of simple
to complex contagion processes. Identifying the specific complexity of the
contagion of critical issues could further refine strategic approaches.
Expanding the proposed coevolutionary model to include more than two competing
opinions and network communities is a natural next step as the dynamics become
increasingly complex in such scenarios. Investigating different rewiring
mechanisms, formulating optimal strategies for opinion dissemination, and
designing targeted social interventions to control the spread of particular
viewpoints are crucial areas for future exploration. Motivated by specific
datasets, these extensions will allow us to fully capture the complexities of
specific real-world social interactions and individual decision-making
processes. Furthermore, developing new metrics to quantify social polarization
among competing contagion processes, rather than standalone issues, will
enable a deeper understanding of these complex polarization patterns. This
expansion of research will provide a more comprehensive understanding of
social dynamics and guide the development of effective strategies to address
the challenges posed by social polarization.
### V.1 Acknowledgments
FLP acknowledges the financial support provided by FCT Portugal under the
project UIDB/04152/2020 – Centro de Investigação em Gestão de Informação
(MagIC). VVV acknowledges funding from ENLENS under the project “The Cost of
Large-Scale Transitions: Introducing Effective Targeted Incentives.”
## References
* [1] Vítor V Vasconcelos, Sara M Constantino, Astrid Dannenberg, Marcel Lumkowsky, Elke Weber, and Simon Levin. Segregation and clustering of preferences erode socially beneficial coordination. Proceedings of the National Academy of Sciences, 118(50):e2102153118, 2021.
* [2] Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis, and Michael Mathioudakis. Quantifying controversy on social media. ACM Transactions on Social Computing, 1(1):1–27, 2018.
* [3] Uthsav Chitra and Christopher Musco. Analyzing the impact of filter bubbles on social network polarization. In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 115–123, 2020.
* [4] Dennis Jacob and Sven Banisch. Polarization in social media: A virtual worlds-based approach. Journal of Artificial Societies and Social Simulation, 26(3), 2023.
* [5] Emily Kubin and Christian von Sikorski. The role of (social) media in political polarization: a systematic review. Annals of the International Communication Association, 45(3):188–206, 2021.
* [6] Pablo Barberá. Social media, echo chambers, and political polarization. Social media and democracy: The state of the field, prospects for reform, 34, 2020.
* [7] Cass R Sunstein. Is social media good or bad for democracy. SUR-Int’l J. on Hum Rts., 15:83, 2018.
* [8] Balazs Kozma and Alain Barrat. Consensus formation on adaptive networks. Physical Review E, 77(1):016102, 2008.
* [9] Cecilia Nardini, Balázs Kozma, and Alain Barrat. Who’s talking first? consensus or lack thereof in coevolving opinion formation models. Physical Review Letters, 100(15):158701, 2008.
* [10] Tyll Krueger, Janusz Szwabiński, and Tomasz Weron. Conformity, anticonformity and polarization of opinions: insights from a mathematical model of opinion dynamics. Entropy, 19(7):371, 2017.
* [11] Alina Sîrbu, Dino Pedreschi, Fosca Giannotti, and János Kertész. Algorithmic bias amplifies opinion fragmentation and polarization: A bounded confidence model. PloS one, 14(3):e0213246, 2019.
* [12] Richard Durrett, James P Gleeson, Alun L Lloyd, Peter J Mucha, Feng Shi, David Sivakoff, Joshua ES Socolar, and Chris Varghese. Graph fission in an evolving voter model. Proceedings of the National Academy of Sciences, 109(10):3682–3687, 2012.
* [13] Kazutoshi Sasahara, Wen Chen, Hao Peng, Giovanni Luca Ciampaglia, Alessandro Flammini, and Filippo Menczer. Social influence and unfollowing accelerate the emergence of echo chambers. Journal of Computational Social Science, 4(1):381–402, 2021.
* [14] Yi Yu, Gaoxi Xiao, Guoqi Li, Wee Peng Tay, and Hao Fatt Teoh. Opinion diversity and community formation in adaptive networks. Chaos: An Interdisciplinary Journal of Nonlinear Science, 27(10), 2017.
* [15] Antonio F Peralta, Matteo Neri, János Kertész, and Gerardo Iñiguez. Effect of algorithmic bias and network structure on coexistence, consensus, and polarization of opinions. Physical Review E, 104(4):044312, 2021.
* [16] Fernando P Santos, Yphtach Lelkes, and Simon A Levin. Link recommendation algorithms and dynamics of polarization in online social networks. Proceedings of the National Academy of Sciences, 118(50):e2102141118, 2021.
* [17] David JP O’Sullivan, Gary J O’Keeffe, Peter G Fennell, and James P Gleeson. Mathematical modeling of complex contagion on clustered networks. Frontiers in Physics, 3:71, 2015.
* [18] William Goffman and V Newill. Generalization of epidemic theory. Nature, 204(4955):225–228, 1964.
* [19] Daryl J Daley and David G Kendall. Epidemics and rumours. Nature, 204(4963):1118–1118, 1964.
* [20] Damon Centola. The spread of behavior in an online social network experiment. Science, 329(5996):1194–1197, 2010.
* [21] Daniel A Sprague and Thomas House. Evidence for complex contagion models of social contagion from observational data. PloS one, 12(7):e0180802, 2017.
* [22] Bjarke Mønsted, Piotr Sapieżyński, Emilio Ferrara, and Sune Lehmann. Evidence of complex contagion of information in social media: An experiment using twitter bots. PloS one, 12(9):e0184148, 2017.
* [23] Alessandro Vespignani. Modelling dynamical processes in complex socio-technical systems. Nature Physics, 8(1):32–39, 2012.
* [24] Sinan Aral, Lev Muchnik, and Arun Sundararajan. Distinguishing influence-based contagion from homophily-driven diffusion in dynamic networks. Proceedings of the National Academy of Sciences, 106(51):21544–21549, 2009.
* [25] Márton Karsai, Gerardo Iniguez, Kimmo Kaski, and János Kertész. Complex contagion process in spreading of online innovation. Journal of The Royal Society Interface, 11(101):20140694, 2014.
* [26] Eytan Bakshy, Itamar Rosenn, Cameron Marlow, and Lada Adamic. The role of social networks in information diffusion. In Proceedings of the 21st International Conference on World Wide Web, pages 519–528, 2012.
* [27] Lars Backstrom, Dan Huttenlocher, Jon Kleinberg, and Xiangyang Lan. Group formation in large social networks: membership, growth, and evolution. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 44–54, 2006.
* [28] Eytan Bakshy, Brian Karrer, and Lada A Adamic. Social influence and the diffusion of user-created content. In Proceedings of the 10th ACM Conference on Electronic Commerce, pages 325–334, 2009.
* [29] Meeyoung Cha, Alan Mislove, and Krishna P Gummadi. A measurement-driven analysis of information propagation in the flickr social network. In Proceedings of the 18th International Conference on World Wide Web, pages 721–730, 2009.
* [30] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 137–146, 2003.
* [31] Thomas W Valente. Social network thresholds in the diffusion of innovations. Social Networks, 18(1):69–89, 1996.
* [32] Mark Granovetter. Threshold models of collective behavior. American Journal of Sociology, 83(6):1420–1443, 1978.
* [33] Damon Centola. How behavior spreads: The science of complex contagions, volume 3. Princeton University Press Princeton, NJ, 2018.
* [34] Sune Lehmann and Yong-Yeol Ahn. Complex spreading phenomena in social systems. Springer, 2018.
* [35] Márton Karsai, Gerardo Iñiguez, Riivo Kikas, Kimmo Kaski, and János Kertész. Local cascades induced global contagion: How heterogeneous thresholds, exogenous effects, and unconcerned behaviour govern online adoption spreading. Scientific Reports, 6(1):1–10, 2016.
* [36] Vítor V Vasconcelos, Simon A Levin, and Flávio L Pinheiro. Consensus and polarization in competing complex contagion processes. Journal of the Royal Society Interface, 16(155):20190196, 2019.
* [37] Nikolaj Horsevad, David Mateo, Robert E Kooij, Alain Barrat, and Roland Bouffanais. Transition from simple to complex contagion in collective decision-making. Nature Communications, 13(1):1442, 2022.
* [38] Damon Centola and Michael Macy. Complex contagions and the weakness of long ties. American Journal of Sociology, 113(3):702–734, 2007.
* [39] Lilian Weng, Filippo Menczer, and Yong-Yeol Ahn. Virality prediction and community structure in social networks. Scientific Reports, 3(1):1–6, 2013.
* [40] Petter Holme and Mark EJ Newman. Nonequilibrium phase transition in the coevolution of networks and opinions. Physical Review E, 74(5):056108, 2006.
* [41] Daichi Kimura and Yoshinori Hayakawa. Coevolutionary networks with homophily and heterophily. Physical Review E, 78(1):016103, 2008.
* [42] Federico Vazquez, Víctor M Eguíluz, and Maxi San Miguel. Generic absorbing transition in coevolution dynamics. Physical Review Letters, 100(10):108702, 2008.
* [43] Anatol Rapoport and Albert M Chammah. Prisoner's dilemma: A study in conflict and cooperation, volume 165. University of Michigan Press, 1965.
* [44] Amir N Licht. Games commissions play: 2x2 games of international securities regulation. Yale J. Int’l L., 24:61, 1999.
* [45] Ross Cressman. Evolutionary dynamics and extensive form games, volume 5. MIT Press, 2003.
* [46] Michael Doebeli and Christoph Hauert. Models of cooperation based on the prisoner’s dilemma and the snowdrift game. Ecology Letters, 8(7):748–766, 2005.
* [47] Brian Skyrms. The stag hunt and the evolution of social structure. Cambridge University Press, 2004.
* [48] Thomas C Schelling. Micromotives and macrobehavior. WW Norton & Company, 1978.
* [49] Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. Science, 286(5439):509–512, 1999.
* [50] Mark EJ Newman. Mixing patterns in networks. Physical Review E, 67(2):026126, 2003.
* [51] Mark EJ Newman. Finding and evaluating community structure in networks. Physical Review E, 69(2):026113, 2004.
* [52] Mark EJ Newman. Modularity and community structure in networks. Proceedings of the National Academy of Sciences, 103(23):8577–8582, 2006.
# A generalization of the probability that the commutator of two group
elements is equal to a given element
Ahmad M.A. Alghamdi, Department of Mathematical Sciences, Faculty of Applied Sciences, Umm Alqura University, P.O. Box 14035, Makkah, 21955, Saudi Arabia (<EMAIL_ADDRESS>)
Francesco G. Russo, IISSS “Axel Munthe”, viale Axel Munthe, 80074, Anacapri (Naples), Italy (<EMAIL_ADDRESS>)
###### Abstract.
The probability that the commutator of two group elements is equal to a given
element has been introduced in literature few years ago. Several authors have
investigated this notion with methods of the representation theory and with
combinatorial techniques. Here we illustrate that a wider context may be
considered and show some structural restrictions on the group.
###### Key words and phrases:
Commutativity degree, relative $n$-th nilpotency degree, probability of
commuting pairs, characters.
Mathematics Subject Classification 2010: 20P05; 20D60
## 1. Different formulations of the commutativity degree
Given two elements $x$ and $y$ of a group $G$, several authors studied the
probability that a randomly chosen commutator $[x,y]$ of $G$ satisfies a
prescribed property. P. Erdős and P. Turán [6] began to investigate the case
$[x,y]=1$, noting some structural restrictions on $G$ from bounds of
statistical nature. Their approach involved combinatorial techniques, which
were developed successively in [2, 3, 4, 5, 7, 9, 10, 12, 13, 15, 17] and
extended to the infinite case in [8, 13, 18]. On the other hand, P. X.
Gallagher [11] investigated the case $[x,y]=1$ using character theory and
opened another line of research, illustrated in [3, 4, 12, 16, 19]. The
literature shows that it is possible to vary the condition on $[x,y]$ by
involving arbitrary words, which need not be the commutator word $[x,y]$.
From now on, all the groups we consider are finite.
Given two subgroups $H$ and $K$ of $G$ and two integers $n,m\geq 1$, we define
(1.1) $\mathrm{p}^{(n,m)}_{g}(H,K)=\frac{|\\{(x_{1},\ldots,x_{n},y_{1},\ldots,y_{m})\in H^{n}\times K^{m}\ |\ [x_{1},\ldots,x_{n},y_{1},\ldots,y_{m}]=g\\}|}{|H|^{n}\ |K|^{m}}$
as the probability that a randomly chosen commutator of weight $n+m$ of
$H\times K$ is equal to a given element of $G$. Denoting
(1.2) $\mathcal{A}=\\{(x_{1},\ldots,x_{n},y_{1},\ldots,y_{m})\in H^{n}\times
K^{m}\ |\ [x_{1},\ldots,x_{n},y_{1},\ldots,y_{m}]=g\\},$
we have $|\mathcal{A}|=|H|^{n}\cdot|K|^{m}\cdot\mathrm{p}^{(n,m)}_{g}(H,K)$. The case
$n=m=1$ can be found in [4] and is called the generalized commutativity degree of
$G$. For $n=m=1$ and $H=K=G$,
(1.3) $\mathrm{p}^{(1,1)}_{g}(G,G)=\mathrm{p}_{g}(G)=\frac{|\\{(x,y)\in G^{2}\
|\ [x,y]=g\\}|}{|G|^{2}}$
is the probability that the commutator of two group elements of $G$ is equal
to a given element of $G$ in [16].
It is well known (see for instance [1, Excercise 3, p. 183]) that the function
$\psi(g)=|\\{(x,y)\in G\times G\ |\ [x,y]=g\\}|$ is a character of $G$ and we
have $\psi={\underset{\chi\in\mathrm{Irr}(G)}{\sum}}\frac{|G|}{\chi(1)}\chi$,
where $\mathrm{Irr}(G)$ denotes the set of all irreducible complex characters
of $G$. However, the authors exploited this fact in [16, Theorem 2.1], writing
(1.3) as
(1.4) $\mathrm{p}_{g}(G)=\frac{1}{|G|}{\underset{\chi\in\mathrm{Irr}(G)}{\sum}}\frac{\chi(g)}{\chi(1)}.$
For terminology and notations in character theory we refer to [14].
Now for $g=1$,
(1.5)
$\mathrm{p}^{(1,1)}_{1}(G,G)=\mathrm{p}_{1}(G)=\mathrm{d}(G)=\frac{|\\{(x,y)\in
G^{2}\ |\ [x,y]=1\\}|}{|G|^{2}}=\frac{|{\rm Irr}(G)|}{|G|}$
is the probability of commuting pairs of $G$ (or, briefly, the commutativity
degree of $G$), largely studied in [2, 3, 4, 5, 7, 9, 10, 11, 12, 13, 15, 17,
19]. In particular,
(1.6) $\mathrm{p}^{(n,1)}_{1}(G,G)=\frac{|\\{(x_{1},\ldots,x_{n},x_{n+1})\in
G^{n+1}\ |\
[x_{1},\ldots,x_{n},x_{n+1}]=1\\}|}{|G|^{n+1}}=\mathrm{d}^{(n)}(G),$
is the $n$-th nilpotency degree of $G$ studied in [2, 7, 9, 17, 18], and
(1.7) $\mathrm{p}^{(n,1)}_{1}(H,G)=\frac{|\\{(x_{1},\ldots,x_{n},y)\in
H^{n}\times G\ |\ [x_{1},\ldots,x_{n},y]=1\\}|}{|H|^{n}\
|G|}=\mathrm{d}^{(n)}(H,G)$
is the relative $n$-th nilpotency degree of $H$ in $G$, studied in [7, 9, 17,
18]. We may express (1.7) not necessarily with $g=1$; assuming that $H$ is
normal in $G$, [4, Equation (4) and Theorem 4.2] imply
(1.8) $\mathrm{p}^{(1,1)}_{g}(H,G)=\frac{|\\{(x,y)\in H\times G\ |\
[x,y]=g\\}|}{|H|\
|G|}=\frac{1}{|H||G|}{\underset{\chi\in\mathrm{Irr}(G)}{\sum}}\frac{|H|\langle\chi_{H},\chi_{H}\rangle}{\chi(1)}\chi(g),$
where $\chi_{H}$ denotes the restriction of $\chi$ to $H$ and
$\langle\cdot,\cdot\rangle$ the usual inner product. Our purpose is to study (1.1),
extending the previous contributions in [2, 4, 7, 16, 17]. The main results of
the present paper are in Section 3, in which the general considerations of
Section 2 are applied.
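Since (1.3) is defined by direct counting, it can be checked on small groups. The following minimal Python sketch, our illustration and not part of the paper, brute-forces $\mathrm{p}_{g}(G)$ for $G=S_{3}$; the output agrees with the character formula (1.4), which gives $1/2$ at the identity, $1/4$ at each $3$-cycle and $0$ at the transpositions.

```python
from itertools import permutations, product

def compose(p, q):                 # (p o q)(i) = p(q(i)); tuples as maps
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def comm(x, y):
    # [x, y] = x^{-1} y^{-1} x y; the per-g counts agree with the
    # x y x^{-1} y^{-1} convention, since (x, y) -> (x^{-1}, y^{-1})
    # is a bijection between the two solution sets.
    return compose(compose(inverse(x), inverse(y)), compose(x, y))

G = list(permutations(range(3)))   # the symmetric group S_3, |G| = 6
for g in G:
    p_g = sum(comm(x, y) == g for x, y in product(G, G)) / len(G) ** 2
    print(g, p_g)   # 1/2 at the identity, 1/4 at each 3-cycle, 0 otherwise
```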
## 2. Technical properties and some computations
We begin with two elementary observations on (1.1).
###### Remark 2.1.
If $\mathcal{S}=\\{[x_{1},\ldots,x_{n},y_{1},\ldots,y_{m}]\ |\
x_{1},\ldots,x_{n}\in H;y_{1},\ldots,y_{m}\in K\\}$, then
$\mathrm{p}^{(n,m)}_{g}(H,K)=0$ if and only if $g\not\in\mathcal{S}$. On
another hand, $\mathrm{p}^{(n,m)}_{1}(H,K)=1$ if and only if
$[\underbrace{H,\ldots,H}_{n-\mathrm{times}},\underbrace{K,\ldots,K}_{m-\mathrm{times}}]=[_{n}H,_{m}K]=1$.
###### Remark 2.2.
The equation (1.1) assigns by default the map
(2.1) $\mathrm{p}^{(n,m)}_{g}:(x_{1},\ldots,x_{n},y_{1},\ldots,y_{m})\in
H^{n}\times K^{m}\mapsto\mathrm{p}^{(n,m)}_{g}(H,K)\in[0,1],$
which is a probability measure on $H^{n}\times K^{m}$, satisfying a series of
standard properties such as being multiplicative, symmetric and monotone.
The fact that (2.1) is multiplicative is described by the next result.
###### Proposition 2.3.
Let $E$ and $F$ be two groups such that $e\in E$, $f\in F$, $A,B\leq E$ and
$C,D\leq F$. Then
$\mathrm{p}^{(n,m)}_{(e,f)}(A\times C,B\times
D)=\mathrm{p}^{(n,m)}_{e}(A,B)\cdot\mathrm{p}^{(n,m)}_{f}(C,D).$
###### Proof.
It is enough to note that
$[([a_{1},\ldots,a_{n}],[c_{1},\ldots,c_{n}]),([b_{1},\ldots,b_{m}],[d_{1},\ldots,d_{m}])]=([[a_{1},\ldots,a_{n}],[b_{1},\ldots,b_{m}]],[[c_{1},\ldots,c_{n}],[d_{1},\ldots,d_{m}]]).$
∎
Proposition 2.3 remains true for finitely many factors instead of only two;
this can be checked by a routine computation, so the proof is
omitted. The fact that (2.1) is symmetric is described by the next result.
###### Proposition 2.4.
With the notations of (1.1),
$\mathrm{p}^{(n,m)}_{g}(H,K)=\mathrm{p}^{(n,m)}_{g^{-1}}(K,H)$. Moreover, if
$H$, or $K$, is normal in $G$, then
$\mathrm{p}^{(n,m)}_{g}(H,K)=\mathrm{p}^{(n,m)}_{g}(K,H)=\mathrm{p}^{(n,m)}_{g^{-1}}(H,K)$.
###### Proof.
The commutator rule $[x,y]^{-1}=[y,x]$ implies the first part of the result.
Now let $H$ be normal in $G$, $n\leq m$ and
$\mathcal{B}=\\{(y_{1},\ldots,y_{m},x_{1},\ldots,x_{n})\in K^{m}\times H^{n}\
|\ [y_{1},\ldots,y_{m},x_{1},\ldots,x_{n}]=g\\}$. The map
$\varphi:(x_{1},\ldots,x_{n},y_{1},\ldots,y_{m})\in\mathcal{A}\mapsto(y^{-1}_{1},y^{-1}_{2},\ldots,y^{-1}_{n},y^{-1}_{n+1},\ldots,y^{-1}_{m},y_{1}x_{1}y^{-1}_{1},y_{2}x_{2}y^{-1}_{2},\ldots,y_{n}x_{n}y^{-1}_{n})\in\mathcal{B}$
is bijective and so the remaining equalities follow. A similar argument can be
applied, when the assumption $H$ is normal in $G$ is replaced by $K$ is normal
in $G$. ∎
The fact that (2.1) is monotone is more delicate to prove, since this is a
situation in which we may find upper bounds for (1.1). Details are given later
on. Now we will get another expression for (1.1). With the notations of (1.1),
$\mathrm{Cl}_{K}([x_{1},\ldots,x_{n}])$ denotes the $K$-conjugacy class of
$[x_{1},\ldots,x_{n}]\in H$.
###### Proposition 2.5.
With the notations of (1.1),
(2.2) $\mathrm{p}_{g}^{(n,m)}(H,K)=\frac{1}{|H|^{n}\
|K|^{m}}\underset{\underset{g^{-1}[x_{1},\ldots,x_{n}]\in\mathrm{Cl}_{K}([x_{1},\ldots,x_{n}])}{x_{1},\ldots,x_{n}\in
H}}{\sum}|C_{K}([x_{1},\ldots,x_{n}])|^{m}.$
###### Proof.
It is straightforward to check that
(2.3)
$C_{K^{m}}([x_{1},\ldots,x_{n}])=\underbrace{C_{K}([x_{1},\ldots,x_{n}])\times\ldots\times
C_{K}([x_{1},\ldots,x_{n}])}_{m-\mathrm{times}}.$
In particular,
$|C_{K^{m}}([x_{1},\ldots,x_{n}])|=|C_{K}([x_{1},\ldots,x_{n}])|^{m}$.
Now write $\mathcal{A}=\underset{[x_{1},\ldots,x_{n}]\in
H}{\bigcup}\\{[x_{1},\ldots,x_{n}]\\}\times T_{[x_{1},\ldots,x_{n}]},$ where
$T_{[x_{1},\ldots,x_{n}]}=\\{(y_{1},\ldots,y_{m})\in K^{m}\ |\
[x_{1},\ldots,x_{n},y_{1},\ldots,y_{m}]=g\\}$. Obviously,
$T_{[x_{1},\ldots,x_{n}]}\not=\emptyset$ if and only if
$g^{-1}[x_{1},\ldots,x_{n}]\in\mathrm{Cl}_{K}([x_{1},\ldots,x_{n}])$. Let
$T_{[x_{1},\ldots,x_{n}]}\not=\emptyset$. Then
$|T_{[x_{1},\ldots,x_{n}]}|=|C_{K^{m}}([x_{1},\ldots,x_{n}])|$, because the
map $\psi:[y_{1},\ldots,y_{m}]\mapsto
g\overline{[y_{1},\ldots,y_{m}]}^{{}^{-1}}[y_{1},\ldots,y_{m}]$ is bijective,
where $\overline{[y_{1},\ldots,y_{m}]}$ is a fixed element of
$T_{[x_{1},\ldots,x_{n}]}$. We deduce that
(2.4) $\begin{array}[]{lcl}|\mathcal{A}|=\sum_{[x_{1},\ldots,x_{n}]\in
H}|T_{[x_{1},\ldots,x_{n}]}|=\underset{\underset{g^{-1}[x_{1},\ldots,x_{n}]\in\mathrm{Cl}_{K}([x_{1},\ldots,x_{n}])}{x_{1},\ldots,x_{n}\in
H}}{\sum}|C_{K^{m}}([x_{1},\ldots,x_{n}])|\vspace{0.3cm}\\\
=\underset{\underset{g^{-1}[x_{1},\ldots,x_{n}]\in\mathrm{Cl}_{K}([x_{1},\ldots,x_{n}])}{x_{1},\ldots,x_{n}\in
H}}{\sum}|C_{K}([x_{1},\ldots,x_{n}])|^{m}\end{array}$
and the result follows. ∎
Special cases of Proposition 2.5 are listed below.
###### Corollary 2.6.
In Proposition 2.5, if $m=1$ and $G=K$, then
(2.5) $\mathrm{p}_{g}^{(n,1)}(H,G)=\frac{1}{|H|^{n}\
|G|}\underset{\underset{g^{-1}[x_{1},\ldots,x_{n}]\in\mathrm{Cl}_{G}([x_{1},\ldots,x_{n}])}{x_{1},\ldots,x_{n}\in
H}}{\sum}|C_{G}([x_{1},\ldots,x_{n}])|.$
###### Corollary 2.7 (See [4], Theorem 2.3).
In Proposition 2.5, if $m=n=1$, then
(2.6) $\mathrm{p}_{g}^{(1,1)}(H,K)=\frac{1}{|H|\
|K|}\underset{\underset{g^{-1}x\in\mathrm{Cl}_{K}(x)}{x\in
H}}{\sum}|C_{K}(x)|.$
In particular, if $G=K$, then $\mathrm{p}_{g}^{(1,1)}(H,G)=\frac{1}{|H|\
|G|}\underset{\underset{g^{-1}x\in\mathrm{Cl}_{G}(x)}{x\in
H}}{\sum}|C_{G}(x)|$.
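Since both sides of (2.6) involve only conjugacy classes and centralizers, the
identity can be checked numerically on small groups. Below is a minimal
brute-force sanity check on the symmetric group $S_{3}$, written in Python;
the commutator convention $[x,y]=xyx^{-1}y^{-1}$ is an assumption made here,
being the one consistent with the condition $g^{-1}x\in\mathrm{Cl}_{K}(x)$ in (2.6).

```python
# Brute-force check of Corollary 2.7, equation (2.6), on G = K = S3 with a
# subgroup H isomorphic to S2. Permutations are tuples; composition is
# (a*b)(i) = a(b(i)).
from itertools import permutations
from fractions import Fraction

G = list(permutations(range(3)))


def mul(a, b):
    return tuple(a[b[i]] for i in range(len(b)))


def inv(a):
    out = [0] * len(a)
    for i, ai in enumerate(a):
        out[ai] = i
    return tuple(out)


def comm(x, y):
    # Assumed convention: [x, y] = x y x^{-1} y^{-1}.
    return mul(mul(x, y), mul(inv(x), inv(y)))


def p_direct(H, K, g):
    # Left-hand side of (2.6): fraction of pairs (x, y) with [x, y] = g.
    hits = sum(1 for x in H for y in K if comm(x, y) == g)
    return Fraction(hits, len(H) * len(K))


def p_formula(H, K, g):
    # Right-hand side of (2.6): sum of |C_K(x)| over those x in H with
    # g^{-1} x in Cl_K(x), divided by |H| |K|.
    total = 0
    for x in H:
        cl_K = {mul(mul(y, x), inv(y)) for y in K}
        if mul(inv(g), x) in cl_K:
            total += sum(1 for y in K if mul(y, x) == mul(x, y))
    return Fraction(total, len(H) * len(K))


H = [p for p in G if p[2] == 2]  # the subgroup fixing the point 2
for g in G:
    assert p_direct(H, G, g) == p_formula(H, G, g)
print(p_direct(G, G, tuple(range(3))))  # d(S3) = 1/2
```

The final line recovers $\mathrm{d}(S_{3})=1/2$, in agreement with (1.5),
since $|\mathrm{Irr}(S_{3})|=3$ and $|S_{3}|=6$.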
###### Corollary 2.8 (See [7], Proof of Lemma 4.2).
In Proposition 2.5, if $m=1$ and $G=K$, then
(2.7) $\mathrm{p}_{1}^{(n,1)}(H,G)=\mathrm{d}^{(n)}(H,G)=\frac{1}{|H|^{n}\
|G|}\underset{x_{1},\ldots,x_{n}\in H}{\sum}|C_{G}([x_{1},\ldots,x_{n}])|.$
###### Corollary 2.9.
In Proposition 2.5, if $C_{K}([x_{1},\ldots,x_{n}])=1$, then
(2.8)
$\mathrm{p}^{(n,m)}_{1}(H,K)=\frac{1}{|H|^{n}}+\frac{1}{|K|^{m}}-\frac{1}{|H|^{n}\
|K|^{m}}.$
[4, Proposition 3.4] follows from Corollary 2.9, when $m=n=1$.
###### Remark 2.10.
Equation (1.7) shows that the study of $\mathrm{p}^{(n,1)}_{1}(H,G)$ is
equivalent to that of $\mathrm{d}^{(n)}(H,G)$. This is illustrated in Corollary
2.8 and noted here for the first time. Therefore much information from [2,
7, 9, 17] and [3, 4, 16] can be connected. It is relevant to point out
that these concepts were treated independently and with different methods in
recent years.
Let $\chi$ be a character of $G$ and $\theta$ be a character of $H\leq G$. The
Frobenius Reciprocity Law [14, Lemma 5.2] gives a link between the restriction
$\chi_{H}$ of $\chi$ to $H$ and the induced character $\theta^{G}$ of
$\theta$. Therefore
$\langle\chi,\theta^{G}\rangle_{G}=\langle\chi_{H},\theta\rangle_{H}.$ Write
this number as
$e_{(\chi,\theta)}=\langle\chi,\theta^{G}\rangle_{G}=\langle\chi_{H},\theta\rangle_{H}.$
If $e_{(\chi,\theta)}=0$, then $\theta$ does not appear in $\chi_{H}$ and so
$\chi$ does not appear in $\theta^{G}$. Recall from [14] that, if
$e_{(\chi,\theta)}\neq 0$, then $\chi$ covers $\theta$ (or also $\theta$
belongs to the constituents of $\chi_{H}$). In particular, if
$\theta=\chi_{H}$, then
$e_{(\chi,\chi_{H})}=\langle\chi,(\chi_{H})^{G}\rangle_{G}=\langle\chi_{H},\chi_{H}\rangle_{H}.$
From a classic relation (see [14, Lemma 2.29]),
$e_{(\chi,\chi_{H})}=\langle\chi,(\chi_{H})^{G}\rangle_{G}=\langle\chi_{H},\chi_{H}\rangle_{H}\leq|G:H|\
\langle\chi,\chi\rangle_{G}=|G:H|e_{(\chi,\chi)}$ and the equality holds if
and only if $\chi(x)=0$ for all $x\in G-H$. In particular, if
$\chi\in\mathrm{Irr}(G)$, then $\langle\chi_{H},\chi_{H}\rangle_{H}=|G:H|\
\mathrm{if\ and\ only\ if}\ \chi(x)=0,$ for all $x\in G-H.$ Therefore the
following result is straightforward.
###### Corollary 2.11.
With the notations of (1.1), $\mathrm{p}_{g}^{(1,1)}(H,G)\leq|G:H|\
\mathrm{p}_{1}(G)$ and the equality holds if and only if all the characters
vanish on $G-H$.
At this point, [4, Theorem 4.2] becomes
(2.9) $\zeta(g)=|H|\
\underset{\chi\in\mathrm{Irr}(G)}{\sum}\frac{e_{(\chi_{H},\chi_{H})}}{\chi(1)}\cdotp\chi(g)=|\\{(x,y)\in
H\times G\ |\
[x,y]=g\\}|=\underset{\underset{g^{-1}x\in\mathrm{Cl}_{G}(x)}{x\in
H}}{\sum}|C_{G}(x)|,$
where $\zeta(g)$ is the number of solutions $(x,y)\in H\times G$ of the
equation $[x,y]=g$. Note that (2.9) and [1, Exercise 3, p. 183] give a shorter
argument for proving that $\zeta(g)$ is a character of $G$ than the one used
in [4, Corollary 4.3]. Equation (1.8) becomes
(2.10) $\mathrm{p}_{g}^{(1,1)}(H,G)=\frac{\zeta(g)}{|H|\ |G|}.$
For the general case that $n>1$, $m>1$ and $G=K$,
(2.11)
$\mathrm{p}_{g}^{(n,m)}(H,G)=\frac{\zeta^{(n,m)}(g)}{|G|^{m}}=\frac{1}{|G|^{m}}\Big{(}\underset{\underset{g^{-1}[x_{1},\ldots,x_{n}]\in\mathrm{Cl}_{G}([x_{1},\ldots,x_{n}])}{x_{1},\ldots,x_{n}\in
H}}{\sum}|C_{G}([x_{1},\ldots,x_{n}])|^{m}\Big{)},$
where
(2.12)
$\zeta^{(n,m)}(g)=\underset{\underset{g^{-1}[x_{1},\ldots,x_{n}]\in\mathrm{Cl}_{G}([x_{1},\ldots,x_{n}])}{x_{1},\ldots,x_{n}\in
H}}{\sum}|C_{G}([x_{1},\ldots,x_{n}])|^{m}$
is the number of solutions $(x_{1},\ldots,x_{n},y_{1},\ldots,y_{m})\in
H^{n}\times G^{m}$ of $[x_{1},\ldots,x_{n},y_{1},\ldots,y_{m}]=g$.
###### Remark 2.12.
Computational evidence suggests that $\zeta^{(n,m)}(g)$ is a
character of $G$.
Now we may prove upper bounds for (1.1) and find that (2.1) is monotone.
###### Proposition 2.13.
With the notations of (1.1), if $H\leq K$, then
$\mathrm{p}^{(n,m)}_{g}(H,G)\geq\mathrm{p}^{(n,m)}_{g}(K,G).$ The equality
holds if and only if $\mathrm{Cl}_{H}(x)=\mathrm{Cl}_{K}(x)$ for all $x\in G$.
###### Proof.
We note that $\frac{1}{|K|}\leq\frac{1}{|H|}$ and then
$\frac{1}{|K|^{n}}\leq\frac{1}{|H|^{n}}$. By Proposition 2.5,
(2.13)
$\begin{array}[]{lcl}|G|^{m}\cdot\mathrm{p}_{g}^{(n,m)}(K,G)=\frac{1}{|K|^{n}}\underset{\underset{g^{-1}[x_{1},\ldots,x_{n}]\in\mathrm{Cl}_{G}([x_{1},\ldots,x_{n}])}{x_{1},\ldots,x_{n}\in
K}}{\sum}|C_{G}([x_{1},\ldots,x_{n}])|\vspace{0.3cm}\\\
\leq\frac{1}{|H|^{n}}\underset{\underset{g^{-1}[x_{1},\ldots,x_{n}]\in\mathrm{Cl}_{G}([x_{1},\ldots,x_{n}])}{x_{1},\ldots,x_{n}\in
K}}{\sum}|C_{G}([x_{1},\ldots,x_{n}])|\end{array}$
In particular, the last relation is true for $x_{1},\ldots,x_{n}\in H\leq K$,
and continuing
(2.14)
$=\frac{1}{|H|^{n}}\underset{\underset{g^{-1}[x_{1},\ldots,x_{n}]\in\mathrm{Cl}_{G}([x_{1},\ldots,x_{n}])}{x_{1},\ldots,x_{n}\in
H}}{\sum}|C_{G}([x_{1},\ldots,x_{n}])|=|G|^{m}\cdot p_{g}^{(n,m)}(H,G).$
The rest of the proof is clear. ∎
The next result shows an upper bound, which generalizes [7, Theorem 4.6].
###### Proposition 2.14.
With the notations of (1.1), if $N$ is a normal subgroup of $G$ such that
$H\leq N$, then
$\mathrm{p}^{(n,m)}_{g}(H,G)\leq\mathrm{p}^{(n,m)}_{g}\Big{(}\frac{H}{N},\frac{G}{N}\Big{)}.$
Moreover, if $N\cap[_{n}H,_{m}G]=1$, then the equality holds.
###### Proof.
We have
$\begin{array}[]{lcl}|H|^{n}\ |G|^{m}\
\mathrm{p}^{(n,m)}_{g}(H,G)=|\mathcal{A}|\vspace{0.3cm}\\\
=|\\{(x_{1},\ldots,x_{n},y_{1},\ldots,y_{m})\in H^{n}\times G^{m}\ |\
[x_{1},\ldots,x_{n},y_{1},\ldots,y_{m}]\cdot g^{-1}=1\\}|\vspace{0.3cm}\\\
=|\\{(x_{1},\ldots,x_{n},y_{1},\ldots,y_{m})\in H^{n}\times G^{m}\ |\
[x_{1},\ldots,x_{n},y_{1},\ldots,y_{m},g^{-1}]=1\\}|\vspace{0.3cm}\\\
=\sum_{x_{1}\in H}\ldots\sum_{x_{n}\in H}\sum_{y_{1}\in G}\ldots\sum_{y_{m}\in
G}|C_{G}([x_{1},\ldots,x_{n},y_{1},\ldots,y_{m}])|\vspace{0.3cm}\\\
=\sum_{x_{1}\in H}\ldots\sum_{x_{n}\in H}\sum_{y_{1}\in G}\ldots\sum_{y_{m}\in
G}\frac{|C_{G}([x_{1},\ldots,x_{n},y_{1},\ldots,y_{m}])N|\cdot|C_{N}([x_{1},\ldots,x_{n},y_{1},\ldots,y_{m}])|}{|N|}\vspace{0.3cm}\\\
\leq\sum_{x_{1}\in H}\ldots\sum_{x_{n}\in H}\sum_{y_{1}\in
G}\ldots\sum_{y_{m}\in
G}|C_{G/N}([x_{1}N,\ldots,x_{n}N,y_{1}N,\ldots,y_{m}N])|\vspace{0.3cm}\\\
\cdot|C_{N}([x_{1},\ldots,x_{n},y_{1},\ldots,y_{m}])|\vspace{0.3cm}\\\
=\sum_{S_{1}\in H/N}\sum_{x_{1}\in S_{1}}\ldots\sum_{S_{n}\in
H/N}\sum_{x_{n}\in S_{n}}\sum_{T_{1}\in G/N}\sum_{y_{1}\in
T_{1}}\ldots\sum_{T_{m}\in G/N}\sum_{y_{m}\in T_{m}}\vspace{0.3cm}\\\
|C_{G/N}([S_{1},\ldots,S_{n},T_{1},\ldots,T_{m}])|\cdot|C_{N}([x_{1},\ldots,y_{m}])|\vspace{0.3cm}\\\
=\Big{(}\sum_{S_{1}\in H/N}\ldots\sum_{S_{n}\in H/N}\sum_{T_{1}\in
G/N}\ldots\sum_{T_{m}\in
G/N}|C_{G/N}([S_{1},\ldots,S_{n},T_{1},\ldots,T_{m}])|\Big{)}\vspace{0.3cm}\\\
\cdot\Big{(}\sum_{x_{1}\in S_{1}}\ldots\sum_{x_{n}\in S_{n}}\sum_{y_{1}\in
T_{1}}\ldots\sum_{y_{m}\in
T_{m}}|C_{N}([x_{1},\ldots,y_{m}])|\Big{)}\vspace{0.3cm}\\\
\leq|N|^{n+m}\sum_{S_{1}\in H/N}\ldots\sum_{S_{n}\in H/N}\sum_{T_{1}\in
G/N}\ldots\sum_{T_{m}\in G/N}\vspace{0.3cm}\\\
|C_{G/N}([S_{1},\ldots,S_{n},T_{1},\ldots,T_{m}])|\vspace{0.3cm}\\\
=\Big{|}\frac{H}{N}\Big{|}^{n}\ \Big{|}\frac{G}{N}\Big{|}^{m}\
p^{(n,m)}_{g}\Big{(}\frac{H}{N},\frac{G}{N}\Big{)}\ |N|^{n+m}=|H|^{n}\
|G|^{m}\
\mathrm{p}^{(n,m)}_{g}\Big{(}\frac{H}{N},\frac{G}{N}\Big{)}.\end{array}$
The condition of equality in the above relations is satisfied exactly when
$N\cap[_{n}H,_{m}G]=1$. The result follows.∎
###### Corollary 2.15.
A special case of Proposition 2.14 is
$\mathrm{p}_{g}(G)\leq\mathrm{p}_{g}(G/N)$.
###### Corollary 2.16 (See [7], Theorem 4.6).
In Proposition 2.14, if $m=1$ and $g=1$, then
$\mathrm{d}^{(n)}(H,G)\leq\mathrm{d}^{(n)}(H/N,G/N)$.
## 3\. Some upper and lower bounds
A relation among (1.1)–(1.8) is described below.
###### Theorem 3.1.
With the notations of (1.1),
$\mathrm{p}^{(n,m)}_{g}(G,G)\leq\mathrm{p}^{(n,m)}_{g}(H,K)\leq\mathrm{p}^{(n,m)}_{1}(H,K)\leq\mathrm{p}^{(n,m)}_{1}(H,G)\leq\mathrm{p}^{(n,m)}_{1}(H,H).$
###### Proof.
From Proposition 2.13,
$\mathrm{p}^{(n,m)}_{g}(G,G)\leq\mathrm{p}^{(n,m)}_{g}(G,H)$. From Proposition
2.5,
(3.1) $\mathrm{p}^{(n,m)}_{g}(H,K)=\frac{1}{|H|^{n}\
|K|^{m}}\underset{\underset{g^{-1}[x_{1},\ldots,x_{n}]\in\mathrm{Cl}_{K}([x_{1},\ldots,x_{n}])}{x_{1},\ldots,x_{n}\in
H}}{\sum}|C_{K}([x_{1},\ldots,x_{n}])|^{m}$
and for $g=1$ we get
(3.2) $\leq\frac{1}{|H|^{n}\ |K|^{m}}\underset{x_{1},\ldots,x_{n}\in
H}{\sum}|C_{K}([x_{1},\ldots,x_{n}])|^{m}=\mathrm{p}^{(n,m)}_{1}(H,K),$
where in the last passage still Proposition 2.5 is used. From
$C_{K}([x_{1},\ldots,x_{n}])\subseteq C_{G}([x_{1},\ldots,x_{n}])$, we deduce
(3.3) $\leq\underset{x_{1},\ldots,x_{n}\in
H}{\sum}|C_{G}([x_{1},\ldots,x_{n}])|^{m}=\mathrm{p}^{(n,m)}_{1}(H,G).$
Applying Proposition 2.4,
$\mathrm{p}^{(n,m)}_{1}(H,G)=\mathrm{p}^{(n,m)}_{1}(G,H)$ and so
$\mathrm{p}^{(n,m)}_{1}(G,H)\leq\mathrm{p}^{(n,m)}_{1}(H,H)$ by Proposition
2.13. ∎
###### Corollary 3.2.
With the notations of (1.1), if $Z(G)=1$, then
$\mathrm{p}^{(n,1)}_{g}(H,K)\leq\frac{2^{n}-1}{2^{n}}.$
###### Proof.
It follows from Theorem 3.1 and [7, Theorem 5.3]. ∎
Another significant restriction is the following.
###### Theorem 3.3.
With the notations of (1.1), let $p$ be the smallest prime divisor of $|G|$.
Then
* (i)
$\mathrm{p}^{(n,m)}_{g}(H,K)\leq\frac{2p^{n}+p-2}{p^{m+n}}$;
* (ii)
$\mathrm{p}^{(n,m)}_{g}(H,K)\geq\frac{(1-p)|Y_{H^{n}}|+p|H^{n}|}{|H^{n}|\
|K^{m}|}-\frac{(|K|+p)|C_{H}(K)|^{n}}{|H^{n}|\ |K^{m}|}$;
where $Y_{H^{n}}=\\{[x_{1},\ldots,x_{n}]\in H^{n}\ |\
C_{K}([x_{1},\ldots,x_{n}])=1\\}$.
###### Proof.
If $[_{n}H,_{m}K]=1$, then $C_{H^{n}}(K^{m})=H^{n}$ and $Y_{H^{n}}$ equals
$H^{n}$ or an empty set according as $K^{m}$ is trivial or nontrivial. Assume
that $[_{n}H,_{m}K]\not=1$. Then $Y_{H^{n}}\cap
C_{H^{n}}(K^{m})=Y_{H^{n}}\cap(C_{H}(K^{m})\times\ldots\times
C_{H}(K^{m}))=Y_{H^{n}}\cap(C_{H}(K)\times C_{H}(K)\times\ldots\times
C_{H}(K))=Y_{H^{n}}\cap(C_{H}(K))^{n}=\emptyset$ and
(3.4) $\begin{array}[]{lcl}\underset{x_{1},\ldots,x_{n}\in
H}{\sum}|C_{K^{m}}([x_{1},\ldots,x_{n}])|=\underset{x_{1},\ldots,x_{n}\in
H}{\sum}|C_{K}([x_{1},\ldots,x_{n}])|^{m}\vspace{0.3cm}\\\
=\underset{x_{1},\ldots,x_{n}\in
Y_{H^{n}}}{\sum}|C_{K}([x_{1},\ldots,x_{n}])|^{m}+\underset{x_{1},\ldots,x_{n}\in
C_{H^{n}}(K)}{\sum}|C_{K}([x_{1},\ldots,x_{n}])|^{m}\vspace{0.3cm}\\\
+\underset{x_{1},\ldots,x_{n}\in H^{n}-(Y_{H^{n}}\cup
C_{H^{n}}(K))}{\sum}|C_{K}([x_{1},\ldots,x_{n}])|^{m}\vspace{0.3cm}\\\
=|Y_{H^{n}}|+|K|\ |C_{H}(K)|^{n}+\underset{x_{1},\ldots,x_{n}\in
H^{n}-(Y_{H^{n}}\cup
C_{H^{n}}(K))}{\sum}|C_{K}([x_{1},\ldots,x_{n}])|^{m}.\end{array}$
Since $p^{m}\leq|C_{K}([x_{1},\ldots,x_{n}])|^{m}\leq\frac{|K^{m}|}{p^{m}}$,
$|Y_{H^{n}}|\leq|H^{n}|$ and
$p^{n}\leq|C_{H}(K)|^{n}\leq\frac{|H^{n}|}{p^{n}}$,
(3.5) $\leq|Y_{H^{n}}|+|K|\
|C_{H}(K)|^{n}+(|H^{n}|-(|Y_{H^{n}}|+|C_{H}(K)|^{n}))\cdot\frac{|K^{m}|}{p^{m}}$
and then
(3.6)
$\begin{array}[]{lcl}\mathrm{p}^{(n,m)}_{g}(H,K)\leq\frac{|Y_{H^{n}}|}{|H^{n}|\
|K^{m}|}+\frac{|K|\ |C_{H}(K)|^{n}}{|H^{n}|\
|K^{m}|}+\frac{1}{p^{m}}-\frac{|Y_{H^{n}}|}{p^{m}\
|H^{n}|}-\frac{|C_{H}(K)|^{n}}{p^{m}\ |H^{n}|}\vspace{0.3cm}\\\
\leq\frac{1}{p^{m}}+\frac{1}{p^{m+n-1}}+\frac{1}{p^{m}}-\frac{1}{p^{m+n}}-\frac{1}{p^{m+n}}=\frac{2p^{n}+p-2}{p^{m+n}}.\end{array}$
Hence (i) follows. On the other hand, we may continue in the other direction
(3.7) $\geq|Y_{H^{n}}|+|K|\ |C_{H}(K)|^{n}+p\
(|H^{n}|-(|Y_{H^{n}}|+|C_{H}(K)|^{n}))$
and then
(3.8) $\mathrm{p}^{(n,m)}_{g}(H,K)\geq\frac{(1-p)|Y_{H^{n}}|}{|H^{n}|\
|K^{m}|}+\frac{p}{|K^{m}|}-\frac{(|K|+p)|C_{H}(K)|^{n}}{|H^{n}|\ |K^{m}|}.$
Then (ii) follows. ∎
The bound in Theorem 3.3 (i) differs slightly from the bound in [4,
Corollary 3.9], where it is proved that
$\mathrm{p}^{(1,1)}_{g}(H,K)\leq\frac{2p-1}{p^{2}}$ and in particular
$\mathrm{p}^{(1,1)}_{g}(H,K)\leq\frac{3}{4}$. We conclude the following
structural restriction.
###### Corollary 3.4.
In Theorem 3.3, if $\mathrm{p}^{(n,m)}_{g}(H,K)=\frac{2p^{n}+p-2}{p^{m+n}}$,
then
(3.9)
$|H:C_{H}(K)|\leq\Big{(}\frac{p^{n+1}-p^{3}-\frac{p^{2}}{2}+p}{2p^{2}+p-2}\Big{)}^{\frac{1}{n}}.$
###### Proof.
Looking at (3.6) and the proof of Theorem 3.3 (i), we deduce
(3.10)
$\begin{array}[]{lcl}\frac{2p^{n}+p-2}{p^{m+n}}\leq\frac{|Y_{H^{n}}|}{|H^{n}|\
|K^{m}|}+\frac{|K||C_{H}(K)|^{n}}{|H^{n}|\
|K^{m}|}+\frac{1}{p^{m}}\leq\frac{1}{p^{m}}+\frac{1}{p^{m-1}}\
\Big{|}\frac{C_{H}(K)}{H}\Big{|}^{n}+\frac{1}{p^{m}}\vspace{0.3cm}\\\
=\frac{1}{p^{m-1}}\Big{(}\frac{2}{p}+|\frac{C_{H}(K)}{H}|^{n}\Big{)}\end{array}$
and then
$\frac{2p^{n}+p-2}{p^{n+1}}\leq\frac{2}{p}+\Big{|}\frac{C_{H}(K)}{H}\Big{|}^{n}$.
We conclude that
$\frac{p^{n+1}}{2p^{n}+p-2}\geq\frac{p}{2}+\Big{|}\frac{H}{C_{H}(K)}\Big{|}^{n}$
and so
(3.11)
$\frac{p^{n+1}}{2p^{n}+p-2}-\frac{p}{2}=\frac{p^{n+1}-p^{3}-\frac{p^{2}}{2}+p}{2p^{2}+p-2}\geq\Big{|}\frac{H}{C_{H}(K)}\Big{|}^{n}.$
The result follows, once we extract the $n$-th root. ∎
## Acknowledgement
The second author is grateful to the colleagues of the Ferdowsi University of
Mashhad for some helpful comments in the period in which the present work has
been written.
## References
* [1] J. Alperin and B. Bell, Groups and Representations, Springer, 1995, New York.
* [2] K. Chiti, M. R. R. Moghaddam and A. R. Salemkar, $n$–isoclinism classes and $n$–nilpotency degree of finite groups, Algebra Colloq. 12 (2005), 225–261.
* [3] A. K. Das and R. K. Nath, On solutions of a class of equations in a finite group, Commun. Algebra 37 (2009), 3904–3911.
* [4] A. K. Das and R. K. Nath, On the generalized relative commutative degree of a finite group, Int. Electr. J. Algebra 7 (2010), 140–151.
* [5] H. Doostie and M. Maghasedi, Certain classes of groups with commutativity degree $d(G)<1/2$, Ars Combinatoria 89 (2008), 263–270.
* [6] P. Erdős and P. Turán, On some problems of statistical group theory, Acta Math. Acad. Sci. Hung. 19 (1968), 413–435.
* [7] A. Erfanian, P. Lescot and R. Rezaei, On the relative commutativity degree of a subgroup of a finite group, Comm. Algebra 35 (2007), 4183–4197.
* [8] A. Erfanian and R. Rezaei, On the commutativity degree of compact groups, Arch. Math. (Basel) 93 (2009), 345–356.
* [9] A. Erfanian, R. Rezaei and F. G. Russo, Relative $n$-isoclinism classes and relative $n$-th nilpotency degree of finite groups, e-print, Cornell University, 2010, arXiv:0003310 [math.GR].
* [10] I.V. Erovenko and B. Sury, Commutativity degree of wreath products of finite abelian groups, Bull. Austral. Math. Soc. 77 (2008), 31–36.
* [11] P. X. Gallagher, The number of conjugacy classes in a finite group, Math. Z. 118 (1970), 175–179.
* [12] R. M. Guralnick and G. R. Robinson, On the commuting probability in finite groups, J. Algebra 300 (2006), 509–528.
* [13] W. H. Gustafson, What is the probability that two group elements commute? Amer. Math. Monthly 80 (1973), 1031–1034.
* [14] I. M. Isaacs, Character Theory of Finite Groups, Dover Publ., New York, 1994.
* [15] P. Lescot, Isoclinism classes and commutativity degrees of finite groups, J. Algebra 177 (1995), 847–869.
* [16] M. R. Pournaki and R. Sobhani, Probability that the commutator of two group elements is equal to a given element, J. Pure Appl. Algebra 212 (2008), 727–734.
* [17] R. Rezaei and F. G. Russo, $n$-th relative nilpotency degree and relative $n$-isoclinism classes, e-print, Cornell University, 2010, arXiv:1003.2297v1 [math.GR].
* [18] R. Rezaei and F. G. Russo, Bounds for the relative $n$-th nilpotency degree in compact groups, e-print, Cornell University, 2009, arXiv:0910.4716v1 [math.GR].
* [19] D. J. Rusin, What is the probability that two elements of a finite group commute?, Pacific J. Math. 82 (1979), 237–247.
# VTOL Failure Detection and Recovery by Utilizing Redundancy
Mohammadreza Mousaei1, Azarakhsh Keipour2, Junyi Geng3 and Sebastian Scherer4
1,3,4 Robotics Institute, Carnegie Mellon University, Pittsburgh, PA
[mmousaei, junyigen<EMAIL_ADDRESS>Robotics Institute, Carnegie
Mellon University, Pittsburgh, PA<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Offering vertical take-off and landing (VTOL) capabilities and the ability to
travel great distances are crucial for Urban Air Mobility (UAM) vehicles.
These capabilities make hybrid VTOLs the clear front-runners among UAM
platforms.
On the other hand, concerns regarding the safety and reliability of autonomous
aircraft have grown in response to the recent growth in aerial vehicle usage.
As a result, monitoring the aircraft status to report any failures and
recovering to prevent the loss of control when a failure happens are becoming
increasingly important. Hybrid VTOLs can withstand some degree of actuator
failure due to their intrinsic redundancy. Their aerodynamic performance,
design, modeling, and control have all been addressed in the previous studies.
However, research on their potential fault tolerance is still a less
investigated field.
In this workshop, we will present a summary of our work on aircraft fault
detection and the recovery of our hybrid VTOL. First, we will go over our
real-time aircraft-independent system for detecting actuator failures and
abnormal behaviors. Then, in the context of our custom tiltrotor VTOL aircraft
design, we talk about our optimization-based control allocation system, which
utilizes the vehicle’s configuration redundancy to recover from different
actuation failures. Finally, we explore the ideas of how these parts can work
together to provide a fail-safe system. We present our simulation and real-
life experiments.
## I Introduction
Fixed-rotor unmanned aerial vehicles (UAVs), such as multirotors, have
vertical take-off and landing (VTOL) capabilities [1, 2]. However, they are
inefficient for long-range flight. On the other hand, fixed-wings are far more energy-efficient but
lack VTOL capability [3]. Hybrid VTOL UAVs combine VTOL capabilities with
efficient long-range flight and are the clear front-runners for Urban Air
Mobility (UAM).
To securely incorporate UAVs into the airspace for real-world UAM
applications, the aircraft should have a degree of tolerance to different
hardware failures. While the controller is generally designed to be as robust
to failures as possible [4, 5], failures may still happen. Hence, there is a
need to detect and identify the failures and recover from them.
Figure 1: Our custom VTOL tiltrotor platform [6].
The majority of the fault detection methods are heavily model-dependent [7, 8,
9, 10]. On the other hand, signal processing-based techniques [11, 12, 13] do
not require the aircraft’s model and instead detect the faults by analyzing
the signals from the aircraft.
On the failure recovery side, depending on the configuration of the aircraft,
there are a variety of ways to deal with actuator failures [14, 15, 16]. In
the case of hybrid VTOL UAVs, although there has been a recent increase in
research [17, 18, 19, 20, 21, 22], fault tolerance has received only limited
attention [23]. These aircraft offer configuration redundancy; however, little
investigation has been done into using this redundancy to recover from
actuator failures.
This workshop summarizes our works on aircraft fault detection [24, 25] and
failure recovery of our hybrid VTOL design [6]. First, we describe our real-
time RLS-based method for detecting actuator failures and anomalies in
aircraft behavior. Instead of relying on the complete model of the aircraft,
this approach models the connection between correlated input-output signal
pairs and estimates a model online. The generated model is then used for real-
time fault detection of virtually any aircraft without prior training. Then,
we discuss tolerance to numerous types of actuator failures in the context of
our custom tiltrotor VTOL aircraft design (Figure 1). The aircraft is more
resilient to actuator failures thanks to our designed dynamic control
allocation, which takes advantage of system redundancy. The aircraft can thus
recover from a collection of actuator failures in different flight phases by
solving a constraint optimization problem. We present our simulation and real-
world experiments performed to validate our methods. Finally, we explore the
idea of how the detection and recovery systems can create a complete pipeline
from detection to identification and recovery and discuss the possible future
directions to achieve the safety requirement for UAM applications.
## II Our Work
In this section, we first briefly describe our fault detection system, which
can be used for almost any aircraft [24]. Then we describe our custom VTOL
system and discuss the fault recovery approach implemented in the context of
this system [6].
Figure 2: The flowchart of the fault detection method [24].
The aircraft dynamics are nonlinear and cannot be described by a linear model.
However, instead of modeling the whole aircraft dynamics, the detection module
only needs to describe the interaction between two signals. Because
many signal pairs are linearly connected to one another, we may use a linear
model to estimate their connection. The aircraft dynamics were modeled using
an Autoregressive Exogenous (ARX) Time-Domain Transfer Function model. Using
our model, having past states and the current input is adequate to estimate
the current output.
Assuming a true unknown ARX model for the relationship of input-output
signals, the estimation algorithm aims to compute a prediction model which
converges to the true model given enough samples. Recursive estimation methods
aim to compute a new model estimate by a simple update to the current model
when a new observation becomes available.
Our approach [24] uses the Recursive Least Squares (RLS) method, which is an
online optimization method that recursively finds the coefficients to minimize
a weighted linear least-squares cost function related to the input signals
[26]. Compared to most other methods, RLS exhibits fast convergence [27].
However, this benefit comes at the cost of higher computational complexity.
Once the ARX Time-Domain Transfer Function is estimated for the input-output
pair, the output is predicted from the input using this model at each step.
The error between the prediction and the measured output is utilized to update
the model and calculate its Z-score. A high Z-score indicates a fault in the
system. The algorithm can be written in the form of Fig. 2.
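As a concrete illustration of the procedure in Fig. 2, the following is a
minimal sketch of the RLS-estimated ARX predictor with Z-score thresholding,
assuming numpy; the model orders, forgetting factor, and threshold are
illustrative choices, not the exact parameters used on the aircraft.

```python
import numpy as np


class RLSAnomalyDetector:
    """Minimal RLS-based ARX predictor with Z-score anomaly flagging."""

    def __init__(self, na=2, nb=2, lam=0.995, z_threshold=3.0):
        n = na + nb
        self.na, self.nb, self.lam = na, nb, lam
        self.theta = np.zeros(n)       # ARX coefficients, estimated online
        self.P = 1e3 * np.eye(n)       # inverse correlation matrix
        self.z_threshold = z_threshold
        self.mean, self.M2, self.count = 0.0, 0.0, 0  # error statistics

    def step(self, y_hist, u_hist, y):
        """y_hist: last na outputs (newest first); u_hist: last nb inputs;
        y: current measured output. Returns (fault_flag, z_score)."""
        phi = np.concatenate([np.asarray(y_hist)[:self.na],
                              np.asarray(u_hist)[:self.nb]])
        err = y - phi @ self.theta                  # prediction error
        # Welford running statistics of the prediction error.
        self.count += 1
        delta = err - self.mean
        self.mean += delta / self.count
        self.M2 += delta * (err - self.mean)
        var = self.M2 / self.count if self.count > 1 else 1.0
        z = abs(err - self.mean) / np.sqrt(var + 1e-12)
        # Standard RLS update with exponential forgetting.
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta = self.theta + k * err
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return z > self.z_threshold, z
```

In use, one detector instance would be fed each correlated input-output signal
pair (e.g., commanded versus measured roll) sample by sample, and a sustained
Z-score above the threshold flags a fault.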
We published our dataset consisting of tens of flights with various actuator
failures on a real aircraft [25]. It has been used in ours and several other
works, including for failure identification with short identification times
[28].
Based on the methods mentioned above, the failure can be detected and
identified in near real-time. Given a correctly detected and identified
failure, the next step is recovery from the failure.
For our tiltrotor VTOL, we developed an optimization-based dynamic control
allocation method in the PX4 flight control stack that can respond to the
configuration changes of the vehicle, such as an actuator failure [6]. Our
designed control diagram is shown in Fig. 3.
Figure 3: An overview of the VTOL recovery method [6].
This control allocation module aims to allocate the desired control setpoint
to the actuators depending on the current state and aircraft configuration. To
execute this allocation, we define a constrained optimization problem.
We first linearize the vehicle dynamics around the equilibrium points. This
linearization leads to the control allocation matrix that maps our actuation
change from the equilibrium points to the resulting forces and moments. To
assign certain actuation for the desired set of forces and moments, the most
straightforward technique is to use the pseudo-inverse of the control
allocation matrix to obtain the least-norm solution.
An issue with the least-norm solution is that it might be out of actuation
limits, which means that the desired values cannot be allocated to the
actuators after trimming. This issue could cause the vehicle to deviate from
the required forces and moments. Another problem is that consecutive solutions
might be far apart, leading to jittery motor performance. So, even if the
least-norm solution is feasible and within the range of actuation, it is not
usable in practice.
To resolve the issues of the least-norm solution, we utilize the fact that our
system is over-actuated, and there exists a non-zero null space that could be
utilized to adjust the least-norm solution while still producing the desired
set of forces and torques. We achieve this by combining the null space and the
least-norm solutions. The least-norm solution achieves the net wrench, whereas
the null space solution generates zero wrenches but remaps the inputs within
the actuator limits while minimizing actuation change between consecutive
steps. The final optimization problem is expressed as follows:
(1)
$\min_{\boldsymbol{\lambda}}\ J(\mathbf{u}_{\mathrm{sp}})\quad\mathrm{s.t.}\quad\mathbf{u}_{\mathrm{min},i}\leq\mathbf{u}_{\mathrm{sp},i}\leq\mathbf{u}_{\mathrm{max},i},$
where $\mathbf{u}_{\mathrm{sp}}$ is the overall control setpoint and $J(\cdot)$
is the objective function minimizing the actuator change from the trimmed
condition,
(2)
$J=(\mathbf{u}_{\mathrm{sp}}-\mathbf{u}_{\mathrm{sp},\mathrm{trim}})^{\top}\mathbf{R}(\mathbf{u}_{\mathrm{sp}}-\mathbf{u}_{\mathrm{sp},\mathrm{trim}}),$
where $\mathbf{R}$ is the weight matrix considering the contribution from
different actuators. The inequality constraints of problem (Eq. 1) ensure that
the output is within actuator limits, where $i$ represents the $i^{\text{th}}$
actuator. Simultaneously, minimizing the cost function ensures that the
actuation change is minimized while the commands are kept near the equilibrium
point.
In the event of an actuation failure, we assume the failure has been detected
and identified. With the knowledge of the actuation failure, we zero out or
delete the corresponding column, which represents the failed actuator in the
control allocation matrix. This provides the control allocation for the
failure scenario. We subtract that from the desired wrench since the failed
actuator still produces some wrench. The control allocation problem then
becomes similar to that in the healthy vehicle. The only difference is that
the problem dimension has been reduced by the number of failed actuators.
Therefore, we formulate a similar constrained optimization problem for the
vehicle with actuation failure.
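The sketch below illustrates, in Python with numpy/scipy, the allocation logic
just described: a least-norm solution from the pseudo-inverse of the control
allocation matrix, adjusted within its null space to stay near the trim point
and within actuator limits, with a failed actuator handled by zeroing out its
column. The matrices and limits are illustrative placeholders, not the actual
PX4 implementation of [6].

```python
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import minimize


def allocate(B, wrench_sp, u_trim, u_min, u_max, R, failed=()):
    """Allocate a desired wrench change (around trim) to actuator setpoints.
    wrench_sp is assumed to already account for any residual wrench that the
    failed actuators keep producing, as described above."""
    B = np.array(B, dtype=float)
    for j in failed:
        B[:, j] = 0.0                     # failed actuator: zero out its column
    u_ln = np.linalg.pinv(B) @ wrench_sp  # least-norm solution
    N = null_space(B)                     # basis of the null space of B
    if N.size == 0:                       # no redundancy left to exploit
        return np.clip(u_trim + u_ln, u_min, u_max)

    def cost(lam):
        # Eq. (2): weighted deviation from trim; any lam keeps B @ du fixed.
        du = u_ln + N @ lam
        return du @ R @ du

    cons = [{"type": "ineq", "fun": lambda lam: u_trim + u_ln + N @ lam - u_min},
            {"type": "ineq", "fun": lambda lam: u_max - (u_trim + u_ln + N @ lam)}]
    sol = minimize(cost, np.zeros(N.shape[1]), constraints=cons, method="SLSQP")
    return u_trim + u_ln + N @ sol.x
```

Because every null-space adjustment produces zero net wrench, the returned
setpoint still realizes the commanded forces and moments whenever the
constraints are feasible.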
TABLE I: Actuator failure cases in different flight phases [6].

Flight Phase | Failure Description | Cause
---|---|---
Multirotor | Lock of one tilt in hover | Broken servo
Multirotor | Single motor failure in hover | Motor flaw/propeller loss
Fixed-wing | Single motor failure in cruise | Motor flaw/propeller loss
Fixed-wing | Lock of one elevator in cruise | Broken servo
Fixed-wing | Lock of one aileron in cruise | Broken servo
## III Experiments and Results
### III-A Failure Detection
TABLE II: Failure detection statistics [24].

Failure Type | # of tests | Flight Time (s) | Avg. Detection Time (s) | Max Detection Time (s) | Accuracy (%)
---|---|---|---|---|---
Engine | 7 | 665 | 2.28 | 3.37 | 100
Rudder | 3 | 171 | 0.21 | 0.25 | -
Elevator | 2 | 181 | 0.36 | 0.36 | -
No Failure | 5 | 262 | - | - | -
Total | 22 | 1735 | 2.02 | 5.6 | 86.36
Our failure detection system is developed and tested using C++ and ROS in
Ubuntu on an Nvidia Jetson TX2 onboard computer; the ArduPilot autopilot runs
on a Pixhawk flight controller. The flight test platform is a modified Carbon Z
T-28, a fixed-wing UAV with a wingspan of 2 meters and a central electric
engine.
The statistics from 22 flight tests with various types of failures are
presented in Table II. Different performance metrics are used to evaluate our
technique. Out of 22 tests, we found two False Positive (FP) and two False
Negative (FN) detections, achieving $86.36$ percent accuracy, $88.23$ percent
precision, and $88.23$ percent recall (sensitivity), with a total of
19 correct sequences. A more detailed description of the tests and results is
available in [24] and our dataset article [25] explains the assessment
measures utilized for our calculations in further detail.
Fig. 4 shows how the system detects the engine failure from a pair of
commanded/measured signals (roll error in this case). After the initial
stabilization phase, the Z-score of the prediction error tends to be
significantly less than the set threshold for the anomaly. However, when the
failure happens, the prediction does not match the measurement anymore, and
the spike in the Z-score indicates that an anomaly has happened. The figure
also shows how the prediction error variance stabilizes with the stabilization
of the predictor model.
Figure 4: Z-Score vs Roll input for an engine failure flight test showing the
detection of the fault from the sudden increase in Z-score [24].
### III-B Failure Recovery
We model the tiltrotor VTOL in Gazebo based on our design in [6]. The dynamic
control allocation is developed in PX4 autopilot, which can directly run on
real aircraft. The constrained optimization is solved using ALGLIB [29],
which is a numerical analysis and data processing library.
#### III-B1 Motor Failure in Hover
When the vehicle takes off and hovers before transition, we completely turn
down one of the motors to test this failure. The allocated actuator command is
shown in Fig. 5. When the front-right motor fails, it is evident that all
other effective actuators adjust and compensate for the thrust reduction. It
is worth noting that the tilt angle associated with the failed motor
immediately drops to zero, which is understandable given that the defective
rotor can no longer create any applicable torques. After about 10 seconds, the
actuator commands converge. After recovery, the aircraft is still controllable
and can follow the planned waypoints.
Figure 5: Actuator commands for a motor failure in hover [6].
We compare our approach to one in which the controller is unaware of the motor
failure. The aircraft attitude and flight path are depicted in Fig. 6 for two
scenarios: with and without being alerted of the system failure. Without the
knowledge of the failure, the system is still attempting to allocate a control
wrench to the failed actuator. The vehicle instantly enters an extreme turn,
eventually loses control, and crashes.
Figure 6: Aircraft attitude and path for a motor failure in hover [6].
#### III-B2 Motor Failure During Cruise Flight
Motor failure during the cruise flight is shown in Fig. 7. The dynamical
control allocation allows the aircraft to quickly adapt to the failure and
resume normal flight. It is worth noting that the optimization solution
maintains the complete tilts of all the rotors (all the rotors are facing
forward). The vehicle can recover from the failure by altering the motor speed
and control surfaces. In fact, in high-speed cruise flight, relying more on
motor speed and control surfaces benefits the aircraft because an abrupt tilt
change could be detrimental to the structure. Compared to the case in which
the system is unaware of the failure, it is evident that the failure knowledge
enables the aircraft to maintain a straight route, whereas being oblivious to
the failure causes significant path deviation.
Figure 7: Actuator commands and flight path for a motor failure during cruise
[6].
#### III-B3 Other Failures
The list of failures tested with successful recovery is shown in Table I.
Tilt angle failure in the hover phase is simulated by injecting a sudden tilt
servo lock at a fixed position (about 60∘ tilt). Even with a 60∘ tilt angle
change, the system can recover quickly after a big sudden disturbance by
dynamically reallocating the rest of the actuators.
The elevator failure during cruise flight is injected with the control surface
locked at 6∘. In this scenario, the rear two rotors tilt back, and the front
two tilt slightly forward over 90∘ to compensate for the continual pitch up
moment generated by the locked elevator while maintaining the force balanced.
Finally, the aileron failure is injected during the cruise flight. When the
aileron is locked at 15∘, the other healthy control surfaces adjust to
compensate for the lower roll control authority in order to sustain cruise
flight.
Ref. [6] provides a more detailed description of the failure recovery
experiments and the test results.
## IV Conclusions and Future Work
This workshop provided an overview of our failure detection and recovery
progress. We described our method for real-time fault detection using a
recursive least square algorithm and presented the real-world tests
highlighting its precision and recall of over 88 percent. Then, in the context of our
custom tiltrotor VTOL, we explained our optimization-based control allocation
approach for failure recovery. Finally, we showcased the extensive
experimental results that validate our failure recovery capability under
various actuator failures.
We suggest the following potential future work directions to move towards
safer UAM vehicles:
* •
Extending the detection system by monitoring multiple input vs. multiple
output signals,
* •
Using a nonlinear model for failure detection to provide more precise
predictions,
* •
Performing real flight tests to validate the described failure recovery
approach further,
* •
Developing a failure identification system to identify the type of the
detected failure,
* •
Integrating the developed detection and recovery systems with an
identification system to form a complete detection, identification, and
recovery pipeline.
## Acknowledgment
The authors would like to thank Dongwei Bai for his support and help with the
mechanical build of the vehicle.
## References
* [1] A. Keipour, M. Mousaei, A. T. Ashley, and S. Scherer, “Integration of fully-actuated multirotors into real-world applications,” _arXiv preprint arXiv:2011.06666_ , 2020.
* [2] A. Keipour, G. A. S. Pereira, R. Bonatti, R. Garg, P. Rastogi, G. Dubey, and S. Scherer, “Visual servoing approach for autonomous UAV landing on a moving vehicle,” _arXiv:2104.01272_ , 2021. [Online]. Available: https://arxiv.org/abs/2104.01272
* [3] S. Schopferer, J. S. Lorenz, A. Keipour, and S. Scherer, “Path planning for unmanned fixed-wing aircraft in uncertain wind conditions using trochoids,” in _2018 International Conference on Unmanned Aircraft Systems (ICUAS)_ , June 2018, pp. 503–512.
* [4] N. Hegde, V. George, C. Gurudas Nayak, and K. Kumar, “Transition flight modeling and robust control of a VTOL unmanned quad tilt-rotor aerial vehicle,” _Indonesian Journal of Electrical Engineering and Computer Science_ , vol. 18, no. 3, pp. 1252–1261, Jan. 2020\.
* [5] J. A. Guerrero, R. Lozano, G. Romero, D. Lara-Alabazares, and K. C. Wong, “Robust control design based on sliding mode control for hover flight of a mini tail-sitter unmanned aerial vehicle,” in _2009 35th Annual Conference of IEEE Industrial Electronics_ , 2009, pp. 2342–2347.
* [6] M. Mousaei, J. Geng, A. Keipour, D. Bai, and S. Scherer, “Design, modeling and control for a tilt-rotor vtol uav in the presence of actuator failure,” 2022\. [Online]. Available: https://arxiv.org/abs/2205.05533
* [7] M. Kuric, B. Lacevic, N. Osmic, and A. Tahirovic, “Rls-based fault-tolerant tracking control of multirotor aerial vehicles,” in _2017 IEEE International Conference on Advanced Intelligent Mechatronics (AIM)_ , July 2017, pp. 1148–1153.
* [8] X. Qi, D. Theilliol, J. Qi, Y. Zhang, and J. Han, “A literature review on fault diagnosis methods for manned and unmanned helicopters,” in _2013 International Conference on Unmanned Aircraft Systems (ICUAS)_ , May 2013, pp. 1114–1118.
* [9] X. Qi, J. Qi, D. Theilliol, Y. Zhang, J. Han, D. Song, and C. Hua, “A review on fault diagnosis and fault tolerant control methods for single-rotor aerial vehicles,” _Journal of Intelligent & Robotic Systems_, vol. 73, no. 1, pp. 535–555, Jan 2014. [Online]. Available: https://doi.org/10.1007/s10846-013-9954-z
* [10] A. Ansari and D. S. Bernstein, “Aircraft sensor fault detection using state and input estimation,” in _2016 American Control Conference (ACC)_ , July 2016, pp. 5951–5956.
* [11] Z. Birnbaum, A. Dolgikh, V. Skormin, E. O’Brien, D. Muller, and C. Stracquodaine, “Unmanned aerial vehicle security using behavioral profiling,” in _2015 International Conference on Unmanned Aircraft Systems (ICUAS)_ , June 2015, pp. 1310–1319.
* [12] Z. Birnbaum, A. Dolgikh, V. Skormin, E. O’Brien, and D. Muller, “Unmanned aerial vehicle security using recursive parameter estimation,” _Journal of Intelligent & Robotic Systems_, vol. 84, no. 1, pp. 107–120, Dec 2016. [Online]. Available: https://doi.org/10.1007/s10846-015-0284-1
* [13] W. Han, Z. Wang, and Y. Shen, “Fault estimation for a quadrotor unmanned aerial vehicle by integrating the parity space approach with recursive least squares,” _Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering_ , vol. 232, no. 4, pp. 783–796, 2018. [Online]. Available: https://doi.org/10.1177/0954410017691794
* [14] M. W. Mueller and R. D’Andrea, “Relaxed hover solutions for multicopters: Application to algorithmic redundancy and novel vehicles,” _The International Journal of Robotics Research_ , vol. 35, no. 8, pp. 873–889, 2016\.
* [15] E. Baskaya, M. Hamandi, M. Bronz, and A. Franchi, “A novel robust hexarotor capable of static hovering in presence of propeller failure,” _IEEE Robotics and Automation Letters_ , vol. 6, no. 2, pp. 4001–4008, 2021.
* [16] T. Stastny and R. Siegwart, “Nonlinear model predictive guidance for fixed-wing UAVs using identified control augmented dynamics,” in _2018 International Conference on Unmanned Aircraft Systems (ICUAS)_. IEEE, 2018, pp. 432–442.
* [17] R. C. Busan, P. C. Murphy, D. B. Hatke, and B. M. Simmons, “Wind tunnel testing techniques for a tandem tilt-wing, distributed electric propulsion VTOL aircraft,” in _AIAA SciTech 2021 Forum_ , 2021, p. 1189.
* [18] X. Lyu, H. Gu, Y. Wang, Z. Li, S. Shen, and F. Zhang, “Design and implementation of a quadrotor tail-sitter VTOL UAV,” in _2017 IEEE international conference on robotics and automation (ICRA)_. IEEE, 2017, pp. 3924–3930.
* [19] A. Kamal and A. Ramirez-Serrano, “Conceptual design of a highly-maneuverable transitional VTOL UAV with new maneuver and control capabilities,” in _AIAA Scitech 2020 Forum_ , 2020, p. 1733.
* [20] G. Ducard and M.-D. Hua, “Modeling of an unmanned hybrid aerial vehicle,” in _2014 IEEE Conference on Control Applications (CCA)_. IEEE, 2014, pp. 1011–1016.
* [21] J. Zhang, P. Bhardwaj, S. A. Raab, S. Saboo, and F. Holzapfel, “Control allocation framework for a tilt-rotor vertical take-off and landing transition aircraft configuration,” in _2018 Applied Aerodynamics Conference_ , 2018, p. 3480.
* [22] L. Bauersfeld, L. Spannagl, G. J. Ducard, and C. H. Onder, “Mpc flight control for a tilt-rotor VTOL aircraft,” _IEEE Transactions on Aerospace and Electronic Systems_ , vol. 57, no. 4, pp. 2395–2409, 2021.
* [23] S. Fuhrer, S. Verling, T. Stastny, and R. Siegwart, “Fault-tolerant flight control of a VTOL tailsitter UAV,” in _2019 International Conference on Robotics and Automation (ICRA)_. IEEE, 2019, pp. 4134–4140.
* [24] A. Keipour, M. Mousaei, and S. Scherer, “Automatic real-time anomaly detection for autonomous aerial vehicles,” in _2019 International Conference on Robotics and Automation (ICRA)_. IEEE, 2019, pp. 5679–5685. [Online]. Available: https://ieeexplore.ieee.org/document/8794286
* [25] ——, “ALFA: A dataset for UAV fault and anomaly detection,” _The International Journal of Robotics Research_ , vol. 40, no. 2-3, pp. 515–520, 2021\. [Online]. Available: https://journals.sagepub.com/doi/10.1177/0278364920966642
* [26] M. H. Hayes, _Statistical digital signal processing and modeling_. John Wiley & Sons, 2009.
* [27] E. Eweda and O. Macchi, “Convergence of the rls and lms adaptive filters,” _IEEE Transactions on Circuits and Systems_ , vol. 34, no. 7, pp. 799–803, July 1987.
* [28] M. W. Ahmad, M. U. Akram, R. Ahmad, K. Hameed, and A. Hassan, “Intelligent framework for automated failure prediction, detection, and classification of mission critical autonomous flights,” _ISA Transactions_ , 2022. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0019057822000209
* [29] S. Bochkano. (1999) Alglib. Accessed: 2022-03-01. [Online]. Available: www.alglib.net
# Performance Evaluation of IEEE 802.11bf Protocol in the sub-7 GHz Band
Anirudha Sahoo1, Tanguy Ropitault2,3, Steve Blandino2,3 and Nada Golmie1
1National Institute of Standards and Technology, Gaithersburg, Maryland, USA
2Associate, National Institute of Standards and Technology, Gaithersburg,
Maryland, USA
3Prometheus Computing LLC, Bethesda, Maryland, USA
Email: {anirud, tanguy.ropitault, steve.blandino<EMAIL_ADDRESS>
###### Abstract
Changes in Wi-Fi signal, using Wi-Fi sensing, have been used to detect
movements in the environment and have led to development of many related
applications. However, there has not been a standard way to do this until the
IEEE 802.11bf standard development activity was taken up recently by the IEEE.
Wi-Fi sensing is an overhead to data communication. While the IEEE 802.11bf
standard has been designed with careful attention to the overhead and its
impact on data communication, no study has quantified them.
Therefore, in this paper, we evaluate the performance of the IEEE 802.11bf protocol
with different system configurations corresponding to different sensing loads
and the impact of sensing on data communication in those configurations. We
outline some of the key findings from our simulation experiments which may be
useful in practical operating configurations of an IEEE 802.11bf network.
## I Introduction
Using Wi-Fi sensing, changes in the Wi-Fi radio channel can be used to detect
movements in the environment, enabling a wide range of applications such as
human presence detection, localization, and fall detection [1]. Since Wi-Fi
networks are widely deployed, this paradigm can make the aforementioned
diverse set of applications available to users and eliminate the need for
different kinds of sensors for different applications.
Although a lot of work has been reported in the literature about Wi-Fi based
sensing [1, 2, 3, 4, 5, 6, 7, 8], lack of standardization has limited the
proliferation of Wi-Fi sensing based applications. Therefore, the Task Group
IEEE 802.11bf (TGbf) started the development of an amendment to the IEEE
802.11 standard in September 2020 to standardize Wi-Fi based sensing [9] which
will be known as IEEE 802.11bf. The IEEE 802.11bf standard defines Wireless
Local Area Network (WLAN) sensing procedures, both in the sub-7 GHz [9, 10]
and above 45 GHz [9, 11] bands. (U.S. Government work, not subject to U.S.
Copyright.)
Integrating sensing with data communication in a Wi-Fi network is quite
attractive since it allows for more efficient use of spectrum and hardware.
However, for sensing, the system may need to allocate a part of its radio
resources to send dedicated sounding frames and other sensing related
information, reducing resources available for regular data communication.
Thus, sensing may become an overhead for data communication. Hence, TGbf has
given careful attention to the design of WLAN sensing procedure to limit the
sensing overhead and its impact on communication performance. But, to the best
of our knowledge, there is no performance study available to quantify the
sensing performance and the impact of sensing on communication. Therefore, in
this paper, we evaluate WLAN sensing performance and the impact of WLAN
sensing on communication using performance metrics defined by us.
In the IEEE 802.11bf sensing protocol for sub-7 GHz, the actual sensing
measurements are done during, what are called, sensing measurement exchanges
(SMEs) and those SMEs account for most of the sensing overhead. So, in this
performance study, we focus on the SME part of the protocol. To perform an
SME, the initiator of sensing, which could be the Access Point (AP) or a Wi-Fi
Station (STA), has to get access to the channel and obtain a Transmission
Opportunity (TxOP). It may use Enhanced Distributed Channel Access (EDCA) or
Point coordination function (PCF) Interframe Space (PIFS) to get a TxOP. The
IEEE 802.11bf protocol also defines a periodically occurring sensing window
within which the SMEs have to be performed. The duration and periods of
sensing windows are configurable. Our simulation based performance study
examines both the EDCA and PIFS based access methods at different sensing load
in the system when the AP is the initiator of sensing. The sensing load in the
system is varied by varying the number of sensing STAs, the number of sensing
applications, the number of transmit and receive antennae and the sensing
window duration. We present an extensive set of simulation results at
different system configurations corresponding to different sensing loads. We
then highlight some important and interesting findings from our simulation
results which, we believe would be useful in configuring IEEE 802.11bf systems
in the sub-7 GHz band.
The main contributions of this work are as follows.
* •
To the best of our knowledge, this is the first work to present extensive
simulation of the IEEE 802.11bf protocol.
* •
This work provides quantitative insights into performance of the IEEE 802.11bf
protocol in terms of defined performance metrics.
* •
Our simulation exposes limitations of WLAN sensing using EDCA based access
when sensing reports need to be sent from sensing STAs to the AP.
* •
The results presented in this work provide guidance and insights into
practical operating configurations of an IEEE 802.11bf network.
## II Related Work
There has been a lot of research work on Wi-Fi reported in the literature. In
[3], the authors present a passive Wi-Fi radar system for human sensing by
exploiting high data rate OFDM signals and periodic Wi-Fi beacon signals.
Change in Received Signal Strength (RSS) in a commercial off-the-shelf (COTS)
Wi-Fi device held on a person’s chest is used to design a respiratory
monitoring system in [2]. Changes in Wi-Fi signal strength have been studied
to detect hand gestures around a user’s mobile device in [4]. In [6], the
authors have implemented an end-to-end system to monitor human respiratory
motion using Wi-FI Channel State Information (CSI). They propose a deep
learning based processing algorithm called BreatheSmart that analyzes the
changes in amplitude and phase of CSI data to detect respiratory motion. In
[7], a four antenna passive bistatic indoor radar configuration is set up
using an IEEE 802.11ax Wi-Fi system to track multiple human targets based on range,
Doppler, and angle-of-arrival measurements. A prototype of Wi-Fi based passive
radar system for localization and tracking of moving targets using range,
doppler and direction of arrival is presented in [8]. The above mentioned
research works are focused on methodologies or algorithms for the concerned
applications, but do not deal with estimating the Wi-Fi sensing related
overhead of the system. A fairly comprehensive survey of Wi-Fi sensing with
CSI is presented in [1].
To standardize Wi-Fi sensing process, the TGbf is developing a standard which
will be known as IEEE 802.11bf [9]. This standard defines the mechanisms and
protocols to provide channel state information in the sub-7 GHz band and radar
based information (e.g., range, doppler, beam azimuth) above 45 GHz band.
Since the Wi-Fi sensing protocol is an overhead to Wi-Fi data communication,
it is important to study the performance of Wi-Fi sensing and its impact on
data communication in different configurations. To the best of our knowledge,
there is no such study available in the literature. This paper is the first to
report performance analysis of IEEE 802.11bf protocol in various
configurations.
## III Overview of IEEE 802.11bf Sensing Procedure
An IEEE 802.11bf capable STA and AP exchange their sensing capabilities during
the association process. WLAN sensing in the sub-7 GHz band, referred to as
Sensing Procedure, starts out with the establishment of a sensing measurement
session between a sensing initiator and a sensing responder at which time the
operating parameters of the session are determined. Examples of operating
parameters include bandwidth, role of the STAs (sensing transmitter or sensing
receiver), timer values etc. The actual sensing measurements are performed in
SMEs. SMEs can be trigger based (TB) or non-trigger based (NTB). Since TB SME
is envisioned to be the most common deployment scenario, in this study, we
focus on TB SME. In a TB SME an AP is the sensing initiator and one or more
non-AP STAs are the sensing responders. A TB SME can have up to four phases as
shown in Fig. 1. In the polling phase, an AP (the sensing initiator) sends a
Sensing Polling Trigger frame to the sensing responder STAs to participate in
the SME during a sensing availability window (SAW). A SAW has two parameters:
SAW duration and SAW period. SAW durations occur periodically and the period
is determined by SAW period. An AP and the STAs may participate in SMEs only
in a SAW duration. In the Null Data Packet Announcement (NDPA) Sounding phase
the AP is the sensing transmitter and one or more STAs are the sensing
receivers. The AP sends a NDPA frame followed by a Null Data Packet (NDP)
frame to the receiver STAs. The STAs measure the channel state using the
received NDP frame. In the Trigger Frame (TF) Sounding phase, the AP acts as
the sensing receiver and the STAs as sensing transmitters. The AP sends a TF
to the sensing receiver STAs, which then send NDP frames (which are
multiplexed in the spatial domain) to the AP. The AP measures the channel
state using the received NDP frames. Reporting phase is present, if sensing
report (mainly consisting of CSI) is required to be sent from the STAs to the
AP. Orthogonal Frequency Division Multiple Access (OFDMA) mechanism is used
for reporting, for which the AP allocates Resource Units (RUs) to the STAs.
Only NDPA sounding phase may necessitate a Reporting phase, if reporting was
enabled as part of operational parameters during sensing measurement session
set up. For more details on TB SME, please refer to [9].
Figure 1: Different phases in a TB Sensing Measurement Exchange
An AP starts an SME by obtaining a TxOP after a SAW period starts. It may
obtain the TxOP by EDCA or PIFS mechanism. We refer to them as EDCA access and
PIFS access respectively. If it uses EDCA access, then due to contention, the
actual SAW duration available for SME sometimes may be shorter than the
configured SAW duration. But when PIFS access is used, AP gets priority access
to the channel and hence, gets almost the entire SAW duration for sensing.
### III-A Overhead Calculation
The Sensing Procedure is an overhead for data communication. Majority of the
overhead is incurred in the SME part of the protocol. So, in this study, we
concentrate on the SME of the protocol. In an SME, the NDPA sounding phase and
Reporting phase account for most of the overhead. Hence, our overhead
calculation involves only those two phases. Note that in an NDPA sounding
phase, the AP acts as a sensing transmitter and one or more STAs act as
sensing receivers. For the NDPA sounding phase, we take the number of bytes in
NDPA and NDP frame structure as overhead [9]. The NDPA frame structure is
presented in Fig. 9.58 in [12] and the STA Info field format used in NDPA
frame is shown in Fig. 9-61da in [13]. The NDP format shown in Fig. 27.46a in
[13] is used for computation of NDP overhead. For the reporting overhead, we
use the CSI size computation used in Equation (9-5f) in [9]:
$\begin{split}CSI\,size=&\lceil 1.5\times N_{tx}\times
N_{rx}\rceil+\frac{N_{tx}\times N_{rx}\times N_{b}\times N_{sc}}{4}\\\
&+2\times N_{rx},\end{split}$ (1)
where $N_{tx}$ is the number of transmit antennas, $N_{rx}$ is the number of
receive antennas, $N_{b}$ is the number of bits used for quantization of each
CSI value, $N_{sc}$ is the number of subcarriers reported in CSI. The bytes
transmitted as part of NDPA, NDP, and reporting frame will be referred to by a
general term called sensing information bytes throughout this paper.
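As a concrete check of Equation (1): with the $2\times 2$ sensing antenna
configuration used later in our simulations (Configuration 1) and the values
$N_{b}=8$ and $N_{sc}=250$ from Table I, each CSI report is 2010 bytes. A
minimal Python sketch:

```python
import math


def csi_size_bytes(n_tx, n_rx, n_b, n_sc):
    # Direct transcription of Equation (1) (Equation (9-5f) in [9]).
    return (math.ceil(1.5 * n_tx * n_rx)
            + (n_tx * n_rx * n_b * n_sc) / 4
            + 2 * n_rx)


print(csi_size_bytes(2, 2, 8, 250))  # -> 2010.0 bytes per STA per report
```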
## IV Simulation Experiments
### IV-A Simulation Setup
In our simulation setup, in terms of network topology, we assume that there is
one AP and a variable number of STAs associated with the AP. Our simulation
assumes that all messages are received correctly by the receiver, i.e., there
is no message error due to interference.
We assume that each sensing application runs on every STA in the network and
that the AP requires CSI report from every STA in the network for a given
sensing application. Due to resource limitation, if a complete report cannot
be sent from a STA, the STA still sends a partial report. Although in
practice, this will not happen, we resort to this method to highlight the
sensing overhead and the missed sensing that such cases lead to. When there is
no sensing activity in the network, the STAs send data traffic using EDCA with
full bandwidth. We assume that each STA always has at least a TxOP worth of
data to be sent. The TxOP duration was set to its maximum value of 5.484 ms
[14]. The AP only participates in sensing and does not send any data traffic.
Performance evaluation of systems at high load is usually more interesting.
Since the SME part of the Sensing Procedure incurs most of the overhead and
hence leads to a high sensing load, it is the focus of our simulation. As
mentioned in Section III-A, we compute overhead based on the NDPA sounding and
Reporting phases of an SME. During the Reporting phase, the AP allocates one
RU (RU sizes are given in Table II) to each STA, which the STA uses to send
sensing reports using OFDMA. We implemented our simulation code by
incorporating IEEE 802.11bf features into the software available at [15],
which was used in the simulation study reported in [14].
### IV-B Performance Metrics
We define the following performance metrics in our evaluation; a computational sketch follows the list.
* •
Percentage Sensing Overhead (PSO): This is the percentage of the total
simulation duration spent on exchanging sensing-related messages.
* •
Percentage Sensing Missed (PSM): In every SAW period, sensing is considered
complete if all sensing messages, for all applications, were sent within the
SAW duration. If no sensing messages were sent (completely missed) or only a
part of the sensing messages were sent (partially missed), we count the SAW
period as sensing missed. The percentage of SAW periods in which sensing is
missed is defined as the Percentage Sensing Missed (PSM).
* •
Data Throughput: This is the total number of data bits sent by all the STAs
divided by the simulation time.
* •
Percent Available Window Duration (PAWD): This is defined as the percentage of
the SAW duration actually available for sensing-related tasks. Note that when
EDCA access is used, part of the SAW duration may be lost due to contention;
in such situations, the PAWD falls below 100%.
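The following sketch shows how these metrics could be computed from per-SAW-period simulation records; the record fields and function name are hypothetical, chosen for illustration.

```python
def compute_metrics(saw_records, sim_duration_s, data_bits_sent):
    """Compute PSO, PSM, throughput, and PAWD from per-SAW-period records.

    Each record is assumed to carry the time spent on sensing messages,
    whether all sensing messages were sent, and the actually available
    vs. configured SAW duration (all durations in seconds).
    """
    pso = 100 * sum(r["sensing_time_s"] for r in saw_records) / sim_duration_s
    psm = 100 * sum(not r["sensing_complete"] for r in saw_records) / len(saw_records)
    throughput_bps = data_bits_sent / sim_duration_s
    pawd = 100 * (sum(r["available_saw_s"] for r in saw_records)
                  / sum(r["configured_saw_s"] for r in saw_records))
    return pso, psm, throughput_bps, pawd
```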
TABLE I: Simulation Parameters Common to All Configurations

Parameter | Value
---|---
Sensing availability window period | 1 (= 100 TU = 102.4 ms)
TxOP duration | 5.484 ms
Number of antennas in the AP | 8
Number of antennas in each STA | 2
AP bandwidth | 80 MHz
STA bandwidth | 80 MHz
Maximum number of subcarriers | 996
Subcarrier grouping ($N_{g}$) | 4
Number of subcarriers reported in CSI ($N_{sc}$) | 250
Number of bits used for quantization of each CSI value ($N_{b}$) | 8
EDCA transmission in a TxOP | Payload = 10 Ethernet packets of 1500 bytes each in an A-MPDU
MCS | 6
Simulation duration | 10000 seconds
TABLE II: Subcarrier Allocation vs. Number of STAs

Number of STAs | Subcarriers per STA (size of RU allocation per STA)
---|---
1 | 996
2 | 484
[3 - 4] | 242
[5 - 9] | 106
[10 - 16] | 52
### IV-C Simulation Experiment Design
Three parameters determine the sensing load in an IEEE 802.11bf network: (i)
the number of sensing STAs, (ii) the number of sensing applications, and (iii)
the number of transmit and receive antennas involved in sensing. Hence, for
this performance study, we increased the sensing load in the system by
increasing the value of one of these parameters while keeping the other two
constant. This led us to run our experiments in the three configurations
described below.
* •
Configuration 1: In this configuration, the sensing load is increased by
increasing the number of STAs (nSTA) involved in sensing at different SAW
durations. The number of applications is fixed at 4 and the sensing
transmitter and receiver antenna configuration is set to 2x2. Note that a 2x2
sensing transmitter and receiver antenna configuration implies that, for each
STA, the AP (sensing transmitter) uses two of its eight antennas and each STA
(sensing receiver) uses both of its two antennas. Thus, the AP can engage with
up to four STAs in an SME for sensing.
* •
Configuration 2: In this configuration, the sensing load is increased by
increasing the number of sensing applications (numapp) in the system at
different nSTA values. The SAW duration is fixed at 127 (note: SAW duration 1
= 100 $\mu$s), which corresponds to its maximum possible value of 12.7 ms. The
sensing transmitter and receiver antenna configuration is set to 2x2.
* •
Configuration 3: The sensing load is increased by involving more transmit and
receive antennas, referred to as the sensing transmitter and receiver antenna
(STRA) configuration, at different nSTA values. The number of applications is
fixed at 4 and the SAW duration is fixed at 127 (12.7 ms).
Simulation parameters common to all configurations are shown in Table I.
### IV-D Experiment Results
#### IV-D1 Configuration 1
Fig. LABEL:fig:edca_nsta_pso shows how PSO changes as nSTA increases with EDCA
access. Generally, PSO decreases as nSTA increases, because there is more
contention for obtaining a TxOP for sensing and hence less duration is
available for sensing. However, PSO increases from nSTA = 4 to 5 and from
nSTA = 9 to 10 for SAW durations $>$ 10. At these nSTA transition points the
size of the RU assigned to each STA goes down (see Table II); hence, more time
is needed to send a given number of sensing information bytes, thereby
increasing the overhead. SAW duration 10 (1 ms) is very short relative to the
SAW period of 102.4 ms. Hence, the overhead is very low in this case, and at
high nSTA, due to high contention, PSO goes down to almost zero.
As seen in Fig. LABEL:fig:pifs_nsta_pso, with PIFS access, PSO remains
unchanged as long as the RU size per STA does not change. Unlike with EDCA
access, there is no variability in the actual SAW duration available for
sensing, since no contention is involved. The report size per STA does not
change either, since the number of applications is constant in this
configuration. Hence, PSO remains constant in the intervals where the RU size
per STA does not change. But when the RU size per STA decreases (e.g., from
nSTA = 9 to 10), the duration needed to send the sensing report goes up and
hence PSO goes up. SAW duration 10 is too short, which limits the number of
sensing information bytes sent to a constant value across the nSTA values;
hence, PSO does not change. Note that the PSO values for SAW durations 90 and
127 are identical throughout: for these two SAW durations PSM is 0%
throughout (see Fig. LABEL:fig:pifs_nsta_psm), so the amount of sensing
information bytes sent is the same for both. For SAW duration 50, PSO is
identical to those of SAW durations 90 and 127 until nSTA = 9, since PSM is 0%
until that point; beyond it, PSM goes up to 100%. These misses are, however,
partial missed sensing, and PSO beyond nSTA = 9 is just 0.04% lower than those
of SAW durations 90 and 127, so its PSO looks almost identical to them after
nSTA = 9. This indicates that SAW duration 50 fell only slightly short of the
duration needed to send all the sensing information bytes.
With EDCA access, as nSTA increases, PSM increases (see Fig.
LABEL:fig:edca_nsta_psm). Due to more contention, the actual SAW duration
available for sensing decreases; hence, more sensing is missed. SAW duration
10 is so short for EDCA that $100\,\%$ of sensing is always missed. Except for
nSTA = 1, none of the configurations gives $0\,\%$ PSM, which is important for
sensing application performance. For SAW duration 10, even though PSM is
$100\,\%$, there is overhead, which is due to partial missed sensing.
SAW duration 10 is too short even for PIFS access; hence $100\,\%$ of sensing
is missed (see Fig. LABEL:fig:pifs_nsta_psm). But SAW durations 90 and 127
give $0\,\%$ PSM throughout. SAW duration 50 is not long enough beyond
nSTA = 9 to send all the sensing information bytes, due to the decrease in RU
size per STA.
With EDCA access, Fig. LABEL:fig:edca_nsta_thrpt shows that throughput goes
down when sensing is on (compared to no sensing). As nSTA increases,
throughput decreases due to more contention and collisions. Also, as the SAW
duration increases, throughput decreases because more time is used for
sensing. For SAW duration 10 and nSTA $\geq$ 3, throughput almost equals that
of the no-sensing case because the actually available sensing duration becomes
very short due to higher TxOP contention.
As shown in Fig. LABEL:fig:pifs_nsta_thrpt, with PIFS access, throughput is
lower than in the respective EDCA cases due to higher sensing overhead (and
less missed sensing). The throughputs for SAW durations 50, 90 and 127 are
almost equal, since the sensing overheads for these cases are almost equal.
With EDCA access, PAWD generally decreases as nSTA increases due to the
increase in contention (see Fig. LABEL:fig:config1_pawd). As expected, the
higher the SAW duration, the higher the PAWD. PAWD is always $100\,\%$ or very
close to $100\,\%$ for PIFS access (not shown in a graph).
#### IV-D2 Configuration 2
With EDCA access, PSO generally increases as numapp increases (Fig.
LABEL:fig:edca_numapp_pso). At high numapp (e.g., 6 and 8), for nSTA = 12 and
nSTA = 16, PSO remains flat, because the number of sensing report bytes that
can be sent in the SAW duration is limited by the RU size allocated to the
STA. This can also be explained through the PSM graph in Fig.
LABEL:fig:edca_numapp_psm, where between numapp = 6 and 8, PSM becomes
$100\,\%$ for nSTA = 12 and 16. We notice that the overhead of nSTA = 12 is
higher than that of nSTA = 16, which is counterintuitive. With nSTA = 16, less
duration is available for sensing due to more contention. Hence, nSTA = 12
gets more sensing opportunities and incurs higher overhead. We also notice
that for nSTA = 8, the overhead goes beyond that of nSTA = 12 and 16 at
numapp = 8. nSTA = 8 has a larger RU size than nSTA = 12 and 16 and hence
could send more sensing information bytes.
In the case of PIFS access, overhead consistently increases as numapp and nSTA
increase (see Fig. LABEL:fig:pifs_numapp_pso). This can be explained by
observing PSM (see Fig. LABEL:fig:pifs_numapp_psm): there is no missed
sensing; hence, overhead increases with numapp and with nSTA.
Fig. LABEL:fig:edca_numapp_psm shows that with EDCA access, as numapp
increases, PSM increases. At some nSTA values, the jump is more drastic at
certain numapp. For example, for nSTA = 12 and 16, as numapp increases from 4
to 6, the report size increases such that with the allocated RUs a full report
cannot be sent even for one application. Hence, PSM increases drastically to
$100\,\%$. For nSTA = 1 and 4, the numapp increase does not affect PSM due to
the low report size.
In the case of PIFS access (see Fig. LABEL:fig:pifs_numapp_psm), there is no
missed sensing in any configuration, since PIFS gives priority access to the
channel and SAW duration 127 is long enough to send all sensing information
bytes.
With EDCA access (see Fig. LABEL:fig:edca_numapp_thrpt), for a given nSTA the
throughput goes down slowly as numapp increases, since it is only affected by
the report size increase. But for a given numapp, as nSTA increases,
throughput drops much more, due to higher contention and collisions as well as
the report size increase. For nSTA = 12 and 16, as numapp increases from 6 to
8, throughput remains flat because the PSO in this case does not change (see
Fig. LABEL:fig:edca_numapp_pso).
From Fig. LABEL:fig:pifs_numapp_thrpt, we observe that with PIFS access the
throughput decrease is steeper than with EDCA access as numapp increases,
since PIFS access incurs $0\,\%$ PSM and higher sensing overhead than EDCA
access.
Fig. LABEL:fig:config2_pawd shows the PAWD performance for EDCA access. Since
the SAW duration is 12.7 ms and a TxOP is 5.484 ms, sensing can have up to
three TxOPs. When nSTA is small (1 and 4), increasing numapp does not change
the duration and the number of TxOPs required to complete sensing; hence, PAWD
remains almost constant. But at large nSTA and large numapp (e.g., nSTA = 12
and numapp = 6), more TxOPs are required to finish the sensing operation.
Since each TxOP is subject to contention, PAWD comes down. PAWD is always
$100\,\%$ or close to $100\,\%$ for PIFS access (not shown).
#### IV-D3 Configuration 3
Figure 3: Available Window Duration vs. number of STAs with EDCA Access
(Configuration 3)
With EDCA access, as STRA increases, PSO generally increases (see Fig.
LABEL:fig:edca_antenna_pso). For nSTA = 12 and 16, PSO remains flat from
STRA = 4x2 to 8x2, because the number of sensing information bytes that can be
sent in the SAW duration is limited by the RUs assigned to the STAs. This is
similar to what was seen in Configuration 2. Between nSTA = 12 and 16, the
overhead of nSTA = 12 is higher than that of nSTA = 16, which is again similar
to Configuration 2, and the same explanation applies. We also notice that for
nSTA = 8, the overhead goes beyond that of nSTA = 12 and 16 at STRA 8x2.
nSTA = 8 has a larger RU size than nSTA = 12 and 16; hence, nSTA = 8 could
send more sensing information bytes, which results in higher PSO.
With PIFS access, as shown in Fig. LABEL:fig:pifs_antenna_pso, PSO goes up as
STRA increases. As STRA increases, the report size also increases, which leads
to more overhead. The overheads for nSTA = 12 and 16 are the same throughout,
since the RU size per STA and PSM are the same for them.
With EDCA access, as STRA increases, PSM generally also increases (see Fig.
LABEL:fig:edca_antenna_psm). Since nSTA = 8 has a larger RU size, it does not
hit $100\,\%$ until STRA = 8x2. nSTA = 12 and 16 have a smaller RU size; hence
they hit $100\,\%$ at the lower STRA = 4x2. For a given STRA configuration, as
nSTA increases, more sensing is missed due to the larger report size and more
contention to claim a TxOP.
For PIFS access, as shown in Fig. LABEL:fig:pifs_antenna_psm, PSM is mostly
$0\,\%$, except for STRA = 8x2 with nSTA = 12 and 16, when the report size
becomes too large for the RUs assigned to the STAs and PSM hits $100\,\%$.
From Fig. LABEL:fig:edca_antenna_thrpt, for EDCA access, we notice that
throughput generally decreases as the STRA configuration increases, due to the
increase in sensing overhead. Also, as nSTA increases for a given STRA
configuration, throughput decreases due to more contention as well as higher
sensing overhead. For nSTA = 12 and 16 between STRA = 4x2 and 8x2, throughput
is flat because the corresponding PSO is also flat.
In Fig. LABEL:fig:pifs_antenna_thrpt it can be observed that, with PIFS
access, throughput drops more than in the EDCA access case as the STRA
configuration increases, because of more sensing overhead. Between STRA = 4x2
and 8x2 the throughput drop is more drastic, which matches the corresponding
drastic increase in sensing overhead.
PAWD performance with EDCA access, shown in Fig. 3, is very similar to that in
Configuration 2; hence, the same explanation holds. For PIFS access, PAWD is
always 100% or close to 100%.
#### IV-D4 Discussion
From the above discussion of our simulation results, we highlight the
following key points. Since keeping PSM at $0\,\%$ is important for the
performance of a sensing application, EDCA access is not a suitable option, as
it can lead to missed sensing in almost all cases; PIFS-based access should
therefore be used for sensing. A very short SAW duration (e.g., 10) is not a
good choice even at very low sensing load, since it leads to missed sensing.
In fact, with PIFS access, it is better to set the SAW duration to its maximum
value of 127 to avoid missed sensing, since the performance impact of sensing
on data communication in terms of PSO and throughput is almost identical to
that at smaller SAW durations for which there is no missed sensing. With PIFS
access, when numapp = 4 and nSTA = 16, the overhead is about $5\,\%$ and the
throughput drops by about 8 Mbps (or about $5\,\%$) compared to the “no
sensing” case. Considering that this is a very high sensing load situation,
the overhead and throughput drop may be acceptable. Another important point is
that the RU size changes at discrete points (with respect to nSTA), and at
those change points performance can change suddenly or seem counterintuitive.
These results also show that a system can be designed with an upper limit on
the sensing overhead: the AP in such a system would allow the sensing load to
increase (by admitting more applications, stations, or antennas) until the
sensing overhead limit is reached.
## V Conclusion
IEEE 802.11bf is a relatively new standard for Wi-Fi sensing. While
integrating sensing with communication in a Wi-Fi network leads to more
efficient use of spectrum and hardware, it also contributes to communication
overhead. Although the TGbf has carefully designed the IEEE 802.11bf protocol
to limit this overhead, no performance analysis of the protocol and its impact
on data communication is available in the literature. In this paper, we
evaluate the performance of the protocol and its impact on data communication
using performance metrics that we define. Our simulation results show that,
when the NDPA sounding phase with reporting is enabled, EDCA access is not
suitable for sensing, since it can lead to missed sensing. Also, a very short
SAW duration (e.g., 10) is not a good choice even at low sensing load, since
it leads to missed sensing. A good rule of thumb is to use PIFS access with
the SAW duration set to a large value (e.g., its maximum of 127), which
ensures $0\,\%$ PSM in almost all cases.
## References
* [1] Y. Ma, G. Zhou, and S. Wang, “WiFi sensing with channel state information: A survey,” _ACM Computing Surveys (CSUR)_ , vol. 52, no. 3, pp. 1–36, 2019.
* [2] H. Abdelnasser, K. A. Harras, and M. Youssef, “Ubibreathe: A ubiquitous non-invasive wifi-based breathing estimator,” in _Proceedings of the 16th ACM international symposium on mobile ad hoc networking and computing_ , 2015, pp. 277–286.
* [3] W. Li, R. J. Piechocki, K. Woodbridge, C. Tang, and K. Chetty, “Passive wifi radar for human sensing using a stand-alone access point,” _IEEE Transactions on Geoscience and Remote Sensing_ , vol. 59, no. 3, pp. 1986–1998, 2020.
* [4] H. Abdelnasser, M. Youssef, and K. A. Harras, “Wigest: A ubiquitous wifi-based gesture recognition system,” in _2015 IEEE conference on computer communications (INFOCOM)_. IEEE, 2015, pp. 1472–1480.
* [5] S. Arshad, C. Feng, Y. Liu, Y. Hu, R. Yu, S. Zhou, and H. Li, “Wi-chase: A wifi based human activity recognition system for sensorless environments,” in _2017 IEEE 18th International Symposium on A World of Wireless, Mobile and Multimedia Networks (WoWMoM)_. IEEE, 2017, pp. 1–6.
* [6] S. Mosleh, J. B. Coder, C. G. Scully, K. Forsyth, and M. O. A. Kalaa, “Monitoring respiratory motion with Wi-Fi CSI: Characterizing performance and the BreatheSmart algorithm,” _IEEE Access_ , pp. 1–1, 2022.
* [7] L. Storrer, H. C. Yildirim, M. Crauwels, E. I. P. Copa, S. Pollin, J. Louveaux, P. De Doncker, and F. Horlin, “Indoor tracking of multiple individuals with an 802.11ax Wi-Fi-based multi-antenna passive radar,” _IEEE Sensors Journal_ , vol. 21, no. 18, pp. 20 462–20 474, 2021.
* [8] P. Falcone, F. Colone, A. Macera, and P. Lombardo, “Localization and tracking of moving targets with WiFi-based passive radar,” in _2012 IEEE Radar Conference_ , 2012, pp. 0705–0709.
* [9] “IEEE p802.11bf™/d3.0 draft standard for information technology— telecommunications and information exchange between systems local and metropolitan area networks— specific requirements part 11: Wireless LAN medium access control (MAC) and physical layer (PHY) specifications amendment 2: Enhancements for wireless LAN sensing,” 2023.
* [10] T. Ropitault, S. Blandino, A. Sahoo, and N. Golmie, “IEEE 802.11bf: Enabling the widespread adoption of wi-fi sensing,” accepted in IEEE Communications Standards Magazine: https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=935175, [Online; accessed September 28, 2023].
* [11] S. Blandino, T. Ropitault, C. R. da Silva, A. Sahoo, and N. Golmie, “IEEE 802.11 bf DMG sensing: Enabling high-resolution mmwave wi-fi sensing,” _IEEE Open Journal of Vehicular Technology_ , vol. 4, pp. 342–355, 2023.
* [12] “Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications,” 802.11 Working Group of the LAN/MAN Standards Committee of the IEEE Computer Society, Dec. 2020.
* [13] “IEEE p802.11az™/d7.0 draft standard for information technology— telecommunications and information exchange between systems local and metropolitan area networks— specific requirements part 11: Wireless LAN medium access control (MAC) and physical layer (PHY) specifications amendment 4: Enhancements for positioning sensing,” 2022.
* [14] Y. Daldoul, D.-E. Meddour, and A. Ksentini, “Performance evaluation of OFDMA and MU-MIMO in 802.11ax networks,” _Computer Networks_ , vol. 182, p. 107477, 2020.
* [15] “802.11ax lightsim,” https://github.com/yousri-daldoul/802.11ax-lightsim, accessed April 2023.
# An empirical comparison and characterisation of nine popular clustering
methods
Christian Hennig,
Dipartimento di Scienze Statistiche “Paolo Fortunati”
Università di Bologna
Bologna, Via delle belle Arti, 41, 40126, Italy
<EMAIL_ADDRESS>
###### Abstract
Nine popular clustering methods are applied to 42 real data sets. The aim is
to give a detailed characterisation of the methods by means of several cluster
validation indexes that measure various individual aspects of the resulting
clusters such as small within-cluster distances, separation of clusters,
closeness to a Gaussian distribution etc. as introduced in Hennig (2019). 30
of the data sets come with a “true” clustering. On these data sets the
similarity of the clusterings from the nine methods to the “true” clusterings
is explored. Furthermore, a mixed effects regression relates the observable
individual aspects of the clusters to the similarity with the “true”
clusterings, which in real clustering problems is unobservable. The study
gives new insight not only into the ability of the methods to discover “true”
clusterings, but also into properties of clusterings that can be expected from
the methods, which is crucial for the choice of a method in a real situation
without a given “true” clustering.
Keywords: cluster benchmarking; internal cluster validation; external cluster
validation; mixed effects model
MSC2010 classification: 62H30
## 1 Introduction
This work compares cluster analysis methods empirically on 42 real data sets.
30 of these data sets come with a given “true” classification. The principal
aim is to explore how different clustering methods produce solutions with
different data analytic characteristics, which can help a user choosing an
appropriate method for the research question of interest. This does not
require the knowledge of a “true” clustering. The performance of the methods
regarding recovery of the “truth” is reported, but is not the main focus.
Cluster analysis plays a central role in modern data analysis and is applied
in almost every field where data arise, be it finance, marketing, genetics,
medicine, psychology, archaeology, social and political science, chemistry,
engineering, or machine learning. Cluster analysis can have well-defined
research aims such as species delimitation in biology, or be applied in a
rather exploratory manner to learn about potentially informative structure in
a data set, for example when clustering the districts of a city. New cluster
analysis methods are regularly developed, often for new data formats, but also
to fix apparent defects of already existing methods. One reason for this is
that cluster analysis is difficult, and all methods, or at least those with
which enough experience has been collected, are known to “fail” in certain,
even fairly regular and non-pathological, situations, where “failing” is often
taken to mean that a certain pre-specified “true” clustering in data is not
recovered.
A key problem with clustering is that there is no unique and generally
accepted definition of what constitutes a cluster. This is not an accident,
but rather part of the nature of the clustering problem. In real applications
there can be different requirements for a “good” clustering, and different
clusterings can qualify as “true” on the same data set. For example, crabs can
be classified according to species, or as male or female; paintings can be
classified according to style of the painter or according to the motif; a data
set of customers of a company may not show any clusters that are clearly
separated from each other, but may be very heterogeneous, and the company may
be interested in having homogeneous subgroups of customers in order to better
target their campaigns, but the data set may allow for different groupings of
similar quality; in many situations with given “true” classes, such as
companies that go bankrupt in a given period vs. those that do not, there is
no guarantee that these “true” classes correspond to patterns in the data that
can be found at all. One could even argue that in a data set that comes with a
supposedly “true” grouping a clustering that does not coincide with that
grouping is of more scientific interest than reproducing what is already
known.
Rather than being generally “better” or “worse”, different cluster analysis
methods can be seen as each coming with their own implicit definition of what
a cluster is, and when cluster analysis is to be applied, the researchers have
to decide which cluster concepts are appropriate for the application at hand.
Cluster analysis can have various aims, and these aims can be in conflict with
each other. For example, clusters that are well separated by clear density
gaps may involve quite large within-cluster distances, which may be tolerable
in some applications but unacceptable in others. Clusters that can be well
represented by cluster centroids may be different from those that correspond
to separable Gaussian distributions with potentially different covariance
matrices, which in some applications are interpreted as meaningfully different
data subsets. See Ackerman et al. (2010); von Luxburg et al. (2012); Hennig
(2015b, a) for the underlying “philosophy” of clustering.
The starting point of this work is the collection of cluster validation
indexes presented in Hennig (2019). These are indexes defined in order to
provide a multivariate characterisation of a clustering, individually
measuring aspects such as between-cluster separation, within-cluster
homogeneity, or representation of the overall dissimilarity structure by the
clustering. They are applied here in order to give general information about
how the characteristics of clusterings depend on the clustering method.
Many cluster validation indexes have been proposed in the literature, often in
order to pick an optimal clustering in a given situation, e.g., by comparing
different numbers of clusters, see Halkidi et al. (2015) for an overview. Most
of them (such as the Average Silhouette Width, Kaufman and Rousseeuw (1990))
attempt to assess the quality of a clustering overall by defining a compromise
of various aspects, particularly within-cluster homogeneity and between-
cluster separation. Following Hennig (2019) and Akhanli and Hennig (2020), the
present work deviates from this approach by keeping different aspects separate
in order to inform the user in a more detailed way what a given clustering
achieves.
A number of benchmark studies for cluster analysis have already been
published. Most of them focus on evaluating the quality of clusterings by
comparing them to given “true” clusterings. This has been done for
artificially generated data (e.g., Milligan (1980); Brusco and Steinley
(2007); Steinley and Brusco (2011); Saracli et al. (2013); Rodriguez et al.
(2019); see Milligan (1996) for an overview of earlier work), for real data,
mostly focusing on specific application areas or types of data (e.g., de Souto
et al. (2008); Kou et al. (2014); Boulesteix and Hatz (2017); Liu et al.
(2019)), or a mixed collection of real and artificial data, sometimes
generating artificial data from models closely derived from a real application
(e.g., Meila and Heckerman (2001); Maulik and Bandyopadhyay (2002);
Dimitriadou et al. (2004); Arbelaitz et al. (2013); Javed et al. (2020)). An
exception is Jain et al. (2004), where different clustering methods were
mapped according to the similarity of their clusterings on various data sets
(something similar is done here, see Section 3.1). Anderlucci and Hennig
(2014) contrasted recovery of a “true” classification in artificial data sets
with the requirement of having homogeneous clusters.
All of these studies attempt to provide a neutral comparison of clustering
methods, which is to be distinguished from the large number of studies, using
real and artificial data, that have been carried out by method developers in
order to demonstrate that their newly proposed method compares favourably with
existing methods. Due to selection effects, the results of such work, although
of some value in their own right, cannot be taken as objective indicators of
the quality of methods (Boulesteix et al. (2013); Hennig (2018)). The study
presented here is meant to be neutral; I have not been involved in the
development of any of the compared methods, and have no specific interest to
portray any of them as particularly good or bad. Note that “true” neutrality
can never be secured and is probably never given; for example, I have been
active promoting my own “philosophy” of clustering (e.g., Hennig (2015a)) and
may be suspected to favour results that are in line with the idea that the
success of clustering methods strongly depends on the application. However, no
selections have been made depending on results (Boulesteix (2015)); the 42
data sets from which results are reported are all that were involved.
Section 2 explains the design of the study, i.e., the clustering methods, the
data sets, and the validation indexes. Section 3 presents the results,
starting with the characterisation of the methods in terms of the internal
indexes, then results regarding the recovery of the “true” clusters, and
ultimately connecting “true” cluster recovery with the characteristics of the
clustering solutions using a mixed effects regression model. A discussion
concludes the paper.
## 2 Study design
For the study design, recommendations for benchmark studies as given, e.g., in
Boulesteix (2015); Van Mechelen et al. (2018) have been taken into account.
One important issue is a definition of the scope of the study. There is an
enormous amount of clustering methods, and clustering is applied to data of
very different formats. It is not even remotely possible to cover everything
that could potentially be of interest. Therefore the present study constrains
its scope in the following way:
* •
Only clustering methods for $p$-dimensional Euclidean data with $p\geq 2$ that
can be treated as continuous are used. Methods that work with dissimilarities
are run using the Euclidean distance.
* •
Accordingly, data sets contain numerical variables only. Some data sets
include discrete variables, which are treated as admissible for the study if
they carry numerical information and take at least three different values
(variables taking a small number of values, particularly three or four, are
very rare in the study).
* •
The number of clusters is always treated as fixed. Only methods that allow
fixing the number of clusters are used; methods to estimate the number of
clusters are not involved. For data sets with a given “true” clustering, the
corresponding number of clusters was taken. For data sets without such
information, a number of clusters was chosen subjectively considering data
visualisation and, where possible, subject matter information.
* •
The included clustering methods were required to have an R-implementation that
can be used in a default way without additional tuning in order to allow for a
comparison that is not influenced by different tuning flexibilities.
* •
No statistical structure (such as time series or regression clustering) is
taken into account, and neither is any automatic dimension reduction involved
as part of any method. All data is treated as plain $p$-dimensional Euclidean.
* •
Methods are only admissible for the study if they always produce crisp
partitions: every observation is classified as belonging to one and only one
cluster (as is also the case in the given “true” clusterings).
### 2.1 Clustering methods
The involved clustering methods are all well established and widely used, as
far as my knowledge goes. They represent the major classes of clustering
methods listed in Hennig and Meila (2015) with the exception of density-based
clustering, which was excluded because standard density-based methods such as
DBSCAN (Ester et al. (1996)) do not accept the number of clusters as input and
often do not produce partitions. Another popular method that was not involved
was Ward’s method, as this is based on the same objective function as
$K$-means and can be seen as just another technique to optimise this function
locally (Everitt et al. (2011)). On the other hand, including mixtures of t-
and skew t-distributions means that mixture model-based clustering is strongly
represented. The motivation for this is that the other included methods are
not meant to fit distributional shapes including outliers and skewness, which
may be widespread in practice; alternatives would be methods that have the
ability to not include observations classified as “outliers” in any cluster,
but this is beyond the scope of the present study. Here are the included
methods.
K-means
as implemented in the R-function `kmeans` using the algorithm by Hartigan and
Wong (1979).
Partitioning Around Medoids (clara)
(Kaufman and Rousseeuw (1990)) as implemented in the R-function `claraCBI`
(therefore abbreviated “clara” in the results) in R-package `fpc` (Hennig
(2020)), which calls function `pam` in R-package `cluster` (Maechler et al.
(2019)) using (unsquared) Euclidean distances.
Gaussian mixture model (mclust)
fitted by Maximum Likelihood using the EM-algorithm, where the best of various
covariance matrix models is chosen by the Bayesian Information Criterion (BIC)
(Fraley and Raftery (2002)) as implemented in the R-function `mclustBIC` in
R-package `mclust` (Scrucca et al. (2016)).
Mixture of skew t-distributions (emskewt)
fitted by Maximum Likelihood using the EM-algorithm (Lee and McLachlan
(2013)), including fully flexible estimation of the degrees of freedom and the
shape matrix, as implemented in the function `EmSkew` with parameter
`distr="mst"` in the R-package `EMMIXskew` (Wang et al. (2018)).
Mixture of t-distributions (teigen)
fitted by Maximum Likelihood using the EM-algorithm (McLachlan and Peel
(2000)), where the best of various covariance matrix models is chosen by the
BIC (Andrews and McNicholas (2012)) as implemented in the R-function `teigen`
in R-package `teigen` (Andrews et al. (2018)).
Single linkage hierarchical clustering
as implemented in the R-function `hclust` and the dendrogram cut at the
required number of clusters to produce a partition, as is done also for the
other hierarchical methods. See Everitt et al. (2011) for an explanation and
historical references for all involved hierarchical methods.
Average linkage hierarchical clustering
as implemented in the R-function `hclust`.
Complete linkage hierarchical clustering
as implemented in the R-function `hclust`.
Spectral clustering
(Ng et al. (2001)) as implemented in the R-function `specc` in R-package
`kernlab` (Karatzoglou et al. (2004)).
The functions were mostly run using the default settings. In some cases, e.g.,
`hclust`, parameters had to be provided in order to determine which exact
method was used. Some amendments were required. In particular, all methods
were run in such a way that they would always deliver a valid partition as a
result. See Appendix A1 for more computational detail.
### 2.2 Data sets
The data sets used in this study are a convenience sample, collected from
mostly well known benchmark data sets in widespread use together with some
data sets that I have come across in my work. 21 data sets are from the UCI
repository (Dua and Graff (2017)), further ones are from Kaggle,
`www.openml.org`, example data sets of R-packages, open data accompanying
books and research papers, and some were collected by myself or provided to me
by collaborators and advisory clients with permission to use them. Details
about the data sets are given in Appendix A2.
There were some criteria on top of those stated above according to which data
sets have been selected, which define the scope of the study. There was a
target number of collecting at least 30 data sets with and at least 10 data
sets without given “true” classes; ultimately there are 30 data sets with and
12 data sets without true classes. The aim was to cover a large range of
application areas, although due to the availability of data sets, this has not
been perfectly achieved. 17 of the data sets come from the related areas of
biology, genetics, medicine, and chemistry. Eight are from the social
sciences, two from finance, eight can be classified as engineering including
typical pattern recognition tasks, the remaining seven data sets come from
miscellaneous areas.
As some of the clustering methods cannot handle data with a smaller number of
observations $n$ than the number of variables $p$ within clusters, all data
sets have $p$ substantially smaller than $n$. The calibration of validation
indexes requires repeated computations based on $n\times n$ distance matrices
(see Section 2.4); for this reason the biggest data set has $n=4601$, and
generally data sets with $n<3000$ were preferred. The maximum $p$ is 72. $p=1$
is excluded, as it could not be handled by two methods. The maximum number of
“true” clusters $K$ is 100. The aim was to achieve a fairly even
representation of $p$ and $K$ up to 10, with more than 10 instances for these
values, although there are apparently far more data sets in benchmark use with
$K=2$ than with larger $K$. Data sets without missing
values were preferred, but some data sets with a very small number of missing
values were admitted. In these cases mean imputation was used. Tables 1, 2,
and 3 show the distributions of $n$, $p$, and $K$, respectively, over the data
sets.
The variables were scaled to mean 0 and variance 1 before clustering, except
for data sets in which the variables have compatible units of measurement and
there seems to be a subject matter justification to make their impact for
clustering proportional to the standard deviation. See Appendix A2 for details
on the preprocessing for some data sets.
Table 1: Numbers of observations for the 42 data sets.

Observations | Number of data sets
---|---
$n\leq 100$ | 5
$100<n\leq 200$ | 6
$200<n\leq 300$ | 8
$300<n\leq 500$ | 5
$500\leq n<1000$ | 7
$1000\leq n<2000$ | 6
$n>2000$ | 5
Table 2: Numbers of variables for the 42 data sets.

Variables | Number of data sets
---|---
$p=2$ | 2
$p=4$ | 5
$p=5$ | 5
$6\leq p\leq 8$ | 6
$9\leq p\leq 11$ | 11
$12\leq p\leq 20$ | 6
$21\leq p\leq 50$ | 4
$p>50$ | 3
Table 3: Numbers of clusters for the 30 data sets with given “true” clusterings, and for the 12 data sets without “true” clusterings, as chosen by the author.

Number of clusters | With “true” clustering | Without “true” clustering
---|---|---
$K=2$ | 8 | 1
$K=3$ | 3 | 3
$K=4$ | 3 | 1
$K=5$ | 2 | 6
$6\leq K\leq 7$ | 5 | 1
$8\leq K\leq 11$ | 6 | 0
$K>11$ | 3 | 0
An issue with the “representativity” of these data sets for real clustering
problems is that the availability of “true” clusterings constitutes a
difference from the real unsupervised problems to which clustering is usually
benchmarking clustering algorithms. In particular, several such data sets have
been constructed in order to have all clusters represented by the same number
of observations. This is the case for eight of the 30 data sets with “true”
clusterings used here (seven of these have exactly equal cluster sizes). This
is not possible for unsupervised problems in practice. Such data sets will
favour methods that tend to produce clusters of about equal sizes.
### 2.3 Internal validation indexes
Internal validation indexes are used here with the aim of measuring various
aspects of a clustering that can be seen as desirable, depending on the
specific application. It is then investigated to what extent the different
clustering methods work well according to these aspects. Hennig (2015a) lists
and discusses a number of aspects that can be relevant. Hennig (2019) and
Akhanli and Hennig (2020) formalised many of these aspects, partly using
already existing indexes, partly introducing new ones. Here the indexes used
in the present study are listed. For more background and discussion, including
possible alternatives, see Hennig (2019) and Akhanli and Hennig (2020). The
indexes attempt to formalise clustering aspects in a direct intuitive manner,
without making reference to specific models (unless it is of interest whether
data look like generated by a particular probability model, see below). The
indexes as defined here do not allow comparison between or aggregation over
different data sets. In order to do this, they need to be calibrated, which is
treated in Section 2.4.
The data set is denoted as ${\cal D}=\{x_{1},\ldots,x_{n}\}$. Here the
observations $x_{1},\ldots,x_{n}$ are assumed to be $\in\mathbb{R}^{p}$, and
$d(x,y)$ is the Euclidean distance between $x$ and $y$, although the indexes
can be applied to more general types of data and distances. A clustering is a
set ${\cal C}=\{C_{1},\ldots,C_{K}\}$ with $C_{j}\subseteq{\cal D},\
j=1,\ldots,K$. For $j=1,\ldots,K$, $n_{j}=|C_{j}|$ is the number of objects in
$C_{j}$. ${\cal C}$ is assumed to be a partition, i.e., $j\neq k\Rightarrow
C_{j}\cap C_{k}=\emptyset$ and $\bigcup_{j=1}^{K}C_{j}={\cal D}$. Let
$\gamma:\ \{1,\ldots,n\}\mapsto\{1,\ldots,K\}$ be the assignment function,
i.e., $\gamma(i)=j\Leftrightarrow x_{i}\in C_{j}$.
Average within-cluster distances
(avewithin; aw; Akhanli and Hennig (2020)). This index measures homogeneity in
the sense of small distances within clusters. Smaller values are better.
$I_{avewithin}(\mathcal{C})=\frac{1}{n}\sum_{k=1}^{K}\frac{1}{n_{k}-1}\sum_{x_{i}\neq x_{j}\in C_{k}}d(x_{i},x_{j}).$
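For concreteness, a minimal NumPy sketch of this index from a precomputed distance matrix (not the paper's R implementation); reading the inner sum as running over unordered within-cluster pairs is an assumption of this sketch.

```python
import numpy as np

def avewithin(D: np.ndarray, labels: np.ndarray) -> float:
    """I_avewithin from a pairwise distance matrix D and cluster labels.

    Smaller values are better; singleton clusters are skipped to avoid
    division by zero (an assumption of this sketch).
    """
    labels = np.asarray(labels)
    total = 0.0
    for k in np.unique(labels):
        idx = np.where(labels == k)[0]
        nk = len(idx)
        if nk < 2:
            continue
        sub = D[np.ix_(idx, idx)]
        total += np.triu(sub, 1).sum() / (nk - 1)  # unordered pairs in C_k
    return total / len(labels)
```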
Representation of cluster members by centroids.
In some applications cluster centroids are used in order to represent the
clustered objects, and an important aim is that this representation is good
for all cluster members. This is directly formalised by the objective
functions of $K$-means (sum of squared distances from the cluster mean) and
Partitioning Around Medoids (sum of distances from the cluster medoid). Both
of these criteria have been used as internal validation indexes in the present
study; however, results are not presented because, over all results, both of
them turn out to have a correlation larger than 0.95 with $I_{avewithin}$,
Maximum diameter
(maxdiameter; md). In some applications there may be a stricter requirement
that large distances within clusters cannot be tolerated, rather than having
only the distance average small. This can be formalised by
$I_{maxdiameter}(\mathcal{C})=\max_{C\in\mathcal{C};\,x_{i},x_{j}\in C}d(x_{i},x_{j}).$
Smaller values are better.
Widest within-cluster gap
(widestgap; wg; Hennig (2019)). Another interpretation of cluster homogeneity
is that there should not be different parts of the same cluster that are
separated from each other. This can be formalised by
$I_{widestgap}({\cal C})=\max_{C\in{\cal C},\,D,E:\ C=D\cup E}\min_{x\in D,y\in E}d(x,y).$
Smaller values are better.
Separation index
(sindex; si; Hennig (2019)). This index measures whether clusters are
separated in the sense that the closest distances between clusters are large.
For every object $x_{i}\in C_{k}$, $i=1,\ldots,n$, $k\in\{1,\ldots,K\}$, let
$d_{k:i}=\min_{x_{j}\notin C_{k}}d(x_{i},x_{j})$. Let $d_{k:(1)}\leq\ldots\leq
d_{k:(n_{k})}$ be the values of $d_{k:i}$ for $x_{i}\in C_{k}$ ordered from
smallest to largest, and let $[pn_{k}]$ be the largest integer $\leq pn_{k}$.
Here $p$ is a parameter tuning what proportion of observations counts as
“close to the border” of a cluster with another; $p=0.1$ is used. Then,
$I_{sindex}(\mathcal{C};p)=\frac{1}{\sum_{k=1}^{K}[pn_{k}]}\sum_{k=1}^{K}\sum_{i=1}^{[pn_{k}]}d_{k:(i)}.$
Larger values are better.
Analogously to the maximum diameter, the minimum separation, i.e., the minimum
distance between any two clusters may also be of interest. In the present
study, this has a correlation of 0.93 with $I_{sindex}$, and results for the
minimum separation are omitted for reasons of redundancy.
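A NumPy sketch of the separation index follows; guarding $[pn_{k}]$ with a minimum of one point is an assumption made here for very small clusters.

```python
import numpy as np

def sindex(D: np.ndarray, labels: np.ndarray, p: float = 0.1) -> float:
    """Separation index: mean of the [p*n_k] smallest distances from each
    cluster's members to the nearest point outside the cluster.
    Larger values are better; assumes at least two clusters."""
    labels = np.asarray(labels)
    num, den = 0.0, 0
    for k in np.unique(labels):
        inside = labels == k
        # Distance from each member of cluster k to its nearest outside point.
        d_border = D[np.ix_(inside, ~inside)].min(axis=1)
        m = max(1, int(p * inside.sum()))  # [p * n_k], at least one point
        num += np.sort(d_border)[:m].sum()
        den += m
    return num / den
```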
Pearson-version of Hubert’s $\Gamma$
(pearsongamma; pg; Hubert and Schultz (1976)). This index measures to what
extent the clustering corresponds to or represents the distance structure in
the data. Let ${\bf d}={\rm vec}\left([d(x_{i},x_{j})]_{i<j}\right)$ be the
vector of pairwise distances. Let ${\bf c}={\rm
vec}\left([c_{ij}]_{i<j}\right)$, where $c_{ij}=1(\gamma(i)\neq\gamma(j))$ and
$1(\bullet)$ denotes the indicator function, be the vector of “clustering
induced dissimilarities”. With $r$ denoting the sample Pearson correlation,
$I_{Pearson\Gamma}({\cal C})=r({\bf d},{\bf c}).$
Larger values are better. This is one version of a family of indexes
introduced in Hubert and Schultz (1976), sometimes referred to as “Hubert’s
$\Gamma$”.
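A direct NumPy sketch of this index:

```python
import numpy as np

def pearson_gamma(D: np.ndarray, labels: np.ndarray) -> float:
    """Pearson correlation between pairwise distances and the 0/1 indicator
    of a pair belonging to different clusters. Larger values are better."""
    labels = np.asarray(labels)
    rows, cols = np.triu_indices_from(D, k=1)
    d = D[rows, cols]
    c = (labels[rows] != labels[cols]).astype(float)
    return float(np.corrcoef(d, c)[0, 1])
```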
Density mode index
(dmode; dm). An intuitive idea of a cluster is that it is associated with a
density mode, and that the density goes down toward the cluster border. This
is formalised by the “dmode” index. It is based on a simple kernel density
estimator $h$ that assigns a density value $h(x)$ to every observation. Let
$q_{d,p}$ be the $p$-quantile of the vector of dissimilarities ${\bf d}$,
e.g., for $p=0.1$, the 10% smallest dissimilarities are $\leq q_{d,0.1}$.
Define the kernel and density as
$\kappa(d)=\left(1-\frac{1}{q_{d,p}}d\right)1(d\leq q_{d,p}),\qquad
h(x)=\sum_{i=1}^{n}\kappa(d(x,x_{i})).$
The following algorithm constructs a sequence of neighbouring observations
from the mode in such a way that the density should always go down, and
penalties are incurred if the density goes up. It also constructs a set $T$
that collects information about high dissimilarities between high density
observations used below. $I_{densdec}$ collects the penalties.
Initialisation
$I_{d1}=0$, $T=\emptyset$. For $j=1,\ldots,K$:
Step 1
$S_{j}=\\{x\\}$, where $x=\mathop{\rm arg\,max}\limits_{y\in C_{j}}h(y)$.
Step 2
Let $R_{j}=C_{j}\setminus S_{j}$. If $R_{j}=\emptyset$: $j=j+1$, if $j\leq K$
go to Step 1, if $j+K=1$ then go to Step 5. Otherwise:
Step 3
Find $(x,y)=\mathop{\rm arg\,min}\limits_{(z_{1},z_{2}):z_{1}\in
R_{j},z_{2}\in S_{j}}d(z_{1},z_{2})$. $S_{j}=S_{j}\cup\\{x\\}$,
$T=T\cup\\{\max_{z\in R_{j}}h(z)d(x,y)\\}$.
Step 4
If $h(x)>h(y):\ I_{d1}=I_{d1}+(h(x)-h(y))^{2}$, back to Step 2.
Step 5
$I_{densdec}({\cal C})=\sqrt{\frac{I_{d1}}{n}}.$
It is possible that there is a large gap between two observations with high
density, which does not incur penalties in $I_{densdec}$ if there are no low-
density observations in between. This can be picked up by
$I_{highdgap}({\cal C})=\max T.$
These two indexes, which are both better for smaller values, were defined in
Hennig (2019), but they can be seen as contributing to the measurement of the
same aspect, with $I_{highdgap}$ just adding information missed by
$I_{densdec}$. An aggregate version, which is used here, can be defined as
$I_{dmode}({\cal C})=0.75I_{densdec}^{*}({\cal C})+0.25I_{highdgap}^{*}({\cal C}),$
where $I_{densdec}^{*}$ and $I_{highdgap}^{*}$ are suitably calibrated
versions of $I_{densdec}$, $I_{highdgap}$, respectively, see Section 2.4. The
weights 0.75 and 0.25 in the definition of $I_{dmode}$ can be interpreted as
the relative impact of the two sub-indexes.
Cluster boundaries cutting through density valleys
(denscut; dc; Hennig (2019)). A complementary aspect of the idea that clusters
are associated with high density regions is that cluster boundaries should run
through density valleys rather than density mountains. The “denscut”-index
penalises a high contribution of points from different clusters to the density
values in a cluster (measured by $h_{o}$ below).
$\mbox{For }x_{i},\ i=1,\ldots,n:\ h_{o}(x_{i})=\sum_{k=1}^{n}\kappa(d(x_{i},x_{k}))1(\gamma(k)\neq\gamma(i)).$
A penalty is incurred if for observations with a large density $h(x)$ there is
a large contribution $h_{o}(x)$ to that density from other clusters:
$I_{denscut}({\cal C})=\frac{1}{n}\sum_{j=1}^{K}\sum_{x\in
C_{j}}h(x)h_{o}(x).$
Smaller values are better.
Entropy
(en; Shannon (1948)). Although not normally listed as a primary aim of
clustering, in many applications very small clusters are not very useful, and
cluster sizes should optimally be close to uniform. This is measured by the
well-known entropy:
$I_{entropy}(\mathcal{C})=-\sum_{k=1}^{K}\frac{n_{k}}{n}\log(\frac{n_{k}}{n}).$
Large values are good.
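A one-line NumPy computation, shown for completeness:

```python
import numpy as np

def entropy(labels) -> float:
    """Entropy of the cluster size distribution; larger values are better."""
    _, counts = np.unique(np.asarray(labels), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())
```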
Gaussianity of clusters
(kdnorm; nor; Coretto and Hennig (2016)). Due to the Central Limit Theorem and
a widespread belief that the Gaussian distribution approximates many real
random processes, it may be of interest in its own right to have clusters that
are approximately Gaussian. The index $I_{kdnorm}$ is defined, following
Coretto and Hennig (2016), as the Kolmogorov distance between the empirical
distribution of within-cluster squared Mahalanobis distances to the cluster
means and a $\chi^{2}_{p}$-distribution, which is the distribution of squared
Mahalanobis distances in perfectly Gaussian clusters.
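A sketch using SciPy's Kolmogorov–Smirnov statistic; pooling the squared Mahalanobis distances over clusters and using a pseudoinverse for numerical stability are assumptions of this illustration.

```python
import numpy as np
from scipy import stats

def kdnorm(X: np.ndarray, labels: np.ndarray) -> float:
    """Kolmogorov distance between pooled within-cluster squared Mahalanobis
    distances and chi-squared(p); smaller values are better."""
    labels = np.asarray(labels)
    p = X.shape[1]
    d2 = []
    for k in np.unique(labels):
        Xk = X[labels == k]
        diff = Xk - Xk.mean(axis=0)
        Sinv = np.linalg.pinv(np.cov(Xk, rowvar=False))
        d2.extend(np.einsum("ij,jk,ik->i", diff, Sinv, diff))
    return float(stats.kstest(d2, "chi2", args=(p,)).statistic)
```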
Coefficient of variation of distances to within-cluster neighbours
(cvnnd; cvn; Hennig (2019)). Another within-cluster distributional shape of
potential interest is uniformity, where clusters are characterised by a
uniform within-cluster density level. This can be characterised by the
coefficient of variation (CV) of the dissimilarities to the $k$th nearest
within-cluster neighbour $d^{k}_{w}(x)$ ($k=2$ is used here; this $k$ is the
neighbour order, not the cluster index). Define for $j=1,\ldots,K$, assuming
$n_{j}>k$:
$m(C_{j};k)=\frac{1}{n_{j}}\sum_{x\in C_{j}}d^{k}_{w}(x),\qquad{\rm
CV}(C_{j})=\frac{\sqrt{\frac{1}{n_{j}-1}\sum_{x\in
C_{j}}(d^{k}_{w}(x)-m(C_{j};k))^{2}}}{m(C_{j};k)}.$
Using this,
$I_{cvdens}({\cal C})=\frac{\sum_{j=1}^{K}n_{j}{\rm
CV}(C_{j})1(n_{j}>k)}{\sum_{j=1}^{K}n_{j}1(n_{j}>k)}.$
Smaller values are better.
Average Silhouette Width
(asw; Kaufman and Rousseeuw (1990)). This is a popular internal validation
index that deviates somewhat from the “philosophy” behind the collection of
indexes presented here, because it attempts to balance two aspects of cluster
quality, namely homogeneity and separation. It has been included in the study
anyway, because it also uses an intuitive direct formalisation of clustering
characteristics of interest. For $i=1,\ldots,n$, define the “silhouette width”
$s_{i}=\frac{b_{i}-a_{i}}{\max\left\{a_{i},b_{i}\right\}}\in[-1,1],$
where $l_{i}=\gamma(i)$,
$a_{i}=\frac{1}{n_{l_{i}}-1}\sum_{x_{j}\in C_{l_{i}}}d(x_{i},x_{j}),\
b_{i}=\min_{h\neq l_{i}}\frac{1}{n_{h}}\sum_{x_{j}\in C_{h}}d(x_{i},x_{j}).$
The Average Silhouette Width is then defined as
$I_{asw}(\mathcal{C})=\frac{1}{n}\sum_{i=1}^{n}s_{i}.$
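If scikit-learn is available, $I_{asw}$ should coincide with its `silhouette_score`; a quick check on synthetic data:

```python
import numpy as np
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
labels = (X[:, 0] > 0).astype(int)  # an arbitrary two-cluster labelling
print(silhouette_score(X, labels))  # average silhouette width I_asw
```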
### 2.4 Calibrating the indexes
For aggregating the indexes introduced in Section 2.3 over different data sets
and to compare the performance of a clustering method over the indexes in
order to characterise it, it is necessary to calibrate the values of the
indexes, so that they become comparable. This is done as in Hennig (2019);
Akhanli and Hennig (2020). The idea is to generate a large number $m$ of
“random clusterings” $\mathcal{C}_{R1},\ldots,\mathcal{C}_{Rm}$ on the data.
Denote the clusterings of the $q=9$ methods from Section 2.1 by
${\mathcal{C}}_{1},\ldots,\mathcal{C}_{q}$. For a given data set ${\cal D}$
and index $I$, first change $I$ to $-I$ if smaller values are better according
to the original definition of $I$, so that for all calibrated indexes larger
values are better. Then use these clusterings to standardise $I$:
$\displaystyle m(I,{\cal D})=\frac{1}{m+q}\left(\sum_{i=1}^{m}I(\mathcal{C}_{Ri})+\sum_{i=1}^{q}I(\mathcal{C}_{i})\right),$
$\displaystyle s^{2}(I,{\cal D})=\frac{1}{m+q-1}\left(\sum_{i=1}^{m}\left[I(\mathcal{C}_{Ri})-m(I,{\cal D})\right]^{2}+\sum_{i=1}^{q}\left[I(\mathcal{C}_{i})-m(I,{\cal D})\right]^{2}\right),$
$\displaystyle I^{*}(\mathcal{C}_{i})=\frac{I(\mathcal{C}_{i})-m(I,{\cal D})}{s(I,{\cal D})},\quad i=1,\ldots,q.$
$I^{*}$ is therefore scaled so that its values can be interpreted as
expressing the quality (larger is better) compared to what the collection of
clusterings
$\mathcal{C}_{R1},\ldots,\mathcal{C}_{Rm},{\mathcal{C}}_{1},\ldots,\mathcal{C}_{q}$
achieves on the same data set. The approach depends on the definition of the
random clusterings. These should generate enough random variation in order to
work as a tool for calibration, but they also need to be reasonable as
clusterings, because if all random clusterings are several standard deviations
away from the “proper” clusterings, the exact distance may not be very
meaningful. They also need to be fast to generate, as many of them will be
required in order to calibrate index values of every single data set.
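A compact sketch of this standardisation; the function signature is an illustrative assumption.

```python
import numpy as np

def calibrate(method_values, random_values, larger_is_better=True):
    """Standardise the q methods' index values against the pooled values of
    the m random clusterings and the q methods (Section 2.4)."""
    v = np.asarray(method_values, dtype=float)
    r = np.asarray(random_values, dtype=float)
    if not larger_is_better:      # flip sign so that larger is better
        v, r = -v, -r
    pooled = np.concatenate([r, v])
    m_ = pooled.mean()            # m(I, D) over m + q values
    s_ = pooled.std(ddof=1)       # s(I, D) with denominator m + q - 1
    return (v - m_) / s_          # calibrated I* for the q methods
```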
Four different algorithms are used for generating the random clusterings; for
details see Akhanli and Hennig (2020). For clusterings with $K$ clusters,
these are (a sketch of the first algorithm follows the list):
Random $K$-centroids:
Draw $K$ observations from ${\cal D}$. Assign every observation to the nearest
centroid.
Random nearest neighbour:
Draw $K$ observations as starting points for the $K$ clusters. At every stage,
of the observations that are not yet clustered, assign the observation $x$ to
the cluster of its nearest already clustered neighbour, where $x$ is the
observation that has the smallest distance to this neighbour.
Random farthest neighbour:
As random nearest neighbour, but $x$ is the observation that has the smallest
distance to the minimum farthest cluster member.
Random average distances:
As random nearest neighbour, but $x$ is the observation that has the smallest
average distance to the closest cluster.
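A minimal NumPy sketch of the random $K$-centroids algorithm:

```python
import numpy as np

def random_k_centroids(X: np.ndarray, K: int, rng=None) -> np.ndarray:
    """Draw K observations as centroids and assign every observation to the
    nearest centroid (Euclidean distance); returns cluster labels."""
    rng = np.random.default_rng(rng)
    centroids = X[rng.choice(len(X), size=K, replace=False)]
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)
```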
Experience shows that these methods generate a range of clusterings that have
sufficient variation in characteristics and are mostly reasonably close to the
proper clustering methods (as can be seen in Akhanli and Hennig (2020) as well
as from the results of the present study). Here, 50 random clusterings from
each algorithm are generated, i.e., $m=200$. All results in Section 3 are
given in terms of calibrated indexes $I^{*}$.
### 2.5 External validation indexes
“Truth” recovery is measured by external validation indexes that quantify the
similarity between two clusterings on the same data, here the “true” one and a
clustering generated by one of the clustering methods.
Probably the most popular external validation index is the Adjusted Rand Index
(ARI; Hubert and Arabie (1985)). This index is based on the relative number of
pairs of points that are in the same cluster in both clusterings or in
different clusters in both clusterings, adjusted for the number of clusters
and the cluster sizes in such a way that its expected value under random
cluster labels with the same number and sizes of clusters is 0. The maximum
value is 1 for perfect agreement. Values can be negative, but already a value
of 0 can be interpreted as indicating that the two clusterings have nothing to
do with each other.
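The ARI is readily available in scikit-learn:

```python
from sklearn.metrics import adjusted_rand_score

truth = [0, 0, 0, 1, 1, 1]   # "true" classes
found = [1, 1, 0, 0, 0, 0]   # a method's clustering (labels are arbitrary)
print(adjusted_rand_score(truth, found))  # 1 = identical, ~0 = unrelated
```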
In some work, the ARI has been criticised, often in the framework of an
axiomatic approach where it can be shown that it violates some axioms taken to
be desirable, e.g., Meila (2007); Amigo et al. (2009). Alternative indexes
have been proposed that fulfill the presented axioms. Meila (2007) introduced
the Variation of Information (VI), which is a proper metric between
partitions. This means that, as opposed to the ARI, smaller values are better.
In Section 3, the negative VI is considered so that for all considered indexes
larger values are better. The VI is defined by comparing the entropies of the
two clusterings with the so-called “mutual information”, which is based on the
entropy of the intersections between two clusters from the two different
clusterings. If the two clusterings are the same, the entropy of the
intersections between clusters is the same as the entropy of the original
clusterings, meaning that the VI is zero, its minimum value.
Amigo et al. (2009) show that their axioms hold for an index called BCubed,
first proposed in Bagga and Baldwin (1998). This index is based on observation-wise
concepts of “precision” and “recall”, i.e., what percentage of observations in
the same cluster are from the same “true” class, and what percentage of
observations in a different cluster is “truly” different. It takes values
between 0 and 1, 1 corresponding to a perfect agreement. See Meila (2015) for
further discussion and some more alternatives.
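As a hedged illustration, the indexes can be computed in R, for instance with
`adjustedRandIndex` from `mclust` and `cluster.stats` from `fpc` (which, with
`compareonly = TRUE`, returns the corrected Rand index and the VI); the toy
label vectors and dummy data below are made up for the example.

```r
library(mclust)   # adjustedRandIndex()
library(fpc)      # cluster.stats()
c1 <- c(1, 1, 2, 2, 3, 3)          # one clustering of six observations
c2 <- c(1, 1, 2, 3, 3, 3)          # another clustering of the same points
adjustedRandIndex(c1, c2)          # ARI; 1 = perfect agreement
set.seed(1)
X <- matrix(rnorm(12), ncol = 2)   # dummy data, only needed for the interface
cluster.stats(dist(X), c1, alt.clustering = c2,
              compareonly = TRUE)  # corrected Rand index and Meila's VI
```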
## 3 Results
Three issues are addressed:
* •
How can the clusters produced by the methods be characterised in terms of the
external validation indexes?
* •
How do the methods perform regarding the recovery of the “true” clusterings?
* •
Can the recovery of the “true” clusterings be related to the internal
validation indexes?
### 3.1 Characterisation of the methods in terms of the internal indexes
The methods can be characterised by the distribution of values of the
calibrated internal validation indexes, highlighting the dominating features
of the clusterings that they produce. In order to do this, parallel coordinate
plots will be used that show the full results including how results belonging
to the same data set depend on each other.
I decided against running null hypothesis tests due to issues of multiple
testing and model assumptions; the plots allow a good assessment of the extent
to which differences between methods are meaningful, dominated by random
variation, or borderline. Although the values of the calibrated indexes could
be compared across indexes, being relative to the ensemble of clusterings from
the methods and the random clusterings, what is shown are images that compare
the different clustering methods for each index, as the comparison of the
clustering methods gives information additional to the performance relative to
the random clusterings.
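A minimal sketch of such a display in base R could look as follows, assuming a
matrix `vals` of calibrated values for one index with data sets in rows and
methods in columns (names made up for the example).

```r
# One grey line per data set across the methods, plus a thick red line
# for the method averages, as in Figures 1-6.
matplot(t(vals), type = "l", lty = 1, col = "grey",
        xlab = "clustering method", ylab = "calibrated index value")
lines(colMeans(vals), col = "red", lwd = 3)
```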
Figure 1: Calibrated values of $I_{avewithin}^{*}$ and $I_{maxdiameter}^{*}$.
Values belonging to the same data set are connected by lines. The thick red
line gives the average values.
Average within-cluster distances
(left side of Figure 1): The two centroid-based methods $K$-means and clara
achieve the best results. The Gaussian and $t$-mixture are about at the same
level as spectral clustering; complete linkage and the mixture of skew
$t$-distributions are worse. Average linkage is behind these, and single
linkage is the worst by some distance.
Results regarding representation of the data by centroids are not shown and
look largely the same. The only additional distinctive feature is that
$K$-means is better than clara looking at squared Euclidean distances to the
centroid, whereas clara is better for unsquared distances. This was to be
expected, as it corresponds to what $K$-means and clara, respectively, attempt
to optimise.
Maximum diameter
(right side of Figure 1): Unsurprisingly, complete linkage is best; at each
step it merges clusters so that the maximum diameter is the smallest possible,
although it is not optimal for every single data set (the hierarchical scheme
will not normally produce a global optimum). Average linkage is second best,
followed by $K$-means, clara, and single linkage, which somewhat surprisingly
avoids large distances within clusters more than spectral clustering and the
three mixture models. Another potential surprise is that the Gaussian mixture
does not do better than the $t$-mixture in this respect; a flexible covariance
matrix can occasionally allow for very large within-cluster distances.
Figure 2: Calibrated values of $I_{widestgap}^{*}$ and $I_{sindex}^{*}$.
Values belonging to the same data set are connected by lines. The thick red
line gives the average values.
Widest within-cluster gap
(left side of Figure 2): The three linkage methods are best at avoiding large
within-cluster gaps, with single linkage in the first place, which will not
join sets between which there is a large gap. The two centroid-based methods
follow, however differences between them, the three mixture models, and
spectral clustering look small compared to the variance, and dominated by
outliers. The skew $t$-mixture produces very large within-cluster gaps for a
number of data sets. With strong skewness there can be large distances in a
tail of a cluster.
Separation index
(right side of Figure 2): Single linkage achieves the best results here. Its
clustering process keeps separated subsets in distinct clusters (often one-
point clusters with strongly separated outliers). The two other linkage
methods follow. Complete linkage is sometimes portrayed as totally
prioritising within-cluster homogeneity over separation, but in fact regarding
separation it does better than spectral clustering, which is still a bit
better than the centroid-based and the mixture models, between which
differences look insignificant.
Figure 3: Calibrated values of $I_{Pearson\Gamma}^{*}$ and $I_{dmode}^{*}$.
Values belonging to the same data set are connected by lines. The thick red
line gives the average values.
Pearson-$\Gamma$
(left side of Figure 3): The average results for the methods regarding the
representation of the distance structure by the clustering vary relatively
little compared to the variation over data sets. Average linkage is overall
best, and the skew $t$-mixture worst, even if the latter has good results in
some data sets. Single linkage does occasionally very well, but also worse
than the others for a number of data sets.
Density mode index
(right side of Figure 3): Results here are dominated by variation between data
sets as well. Interestingly, the methods based on mixtures of unimodal
distributions do not do best here, but rather clara and spectral clustering.
Once more the mixture of skew $t$-distributions does worst, with outliers in
both directions.
Figure 4: Calibrated values of $I_{denscut}^{*}$ and $I_{entropy}^{*}$. Values
belonging to the same data set are connected by lines. The thick red line
gives the average values.
Density cutting
(left side of Figure 4): Due to its focus on cluster separation, single
linkage is best at avoiding cutting through density mountains. The skew $t$-
and $t$-mixtures have the strongest tendency to put cluster boundaries in high
density areas, but differences between methods are not large.
Entropy
(right side of Figure 4): clara yields the highest average entropy followed by
$K$-means, but differences between these and the three mixture models do not
seem significant. This runs counter to the idea, sometimes found in the
literature, that $K$-means favours similar cluster sizes more than mixtures,
or even implicitly assumes them. The other four methods have a clear tendency
to produce less balanced clusters, particularly single linkage, but also
average and complete linkage, and to a lesser extent spectral clustering.
Figure 5: Calibrated values of $I_{kdnorm}^{*}$ and $I_{cvdens}^{*}$. Values
belonging to the same data set are connected by lines. The thick red line
gives the average values.
Gaussianity
(left side of Figure 5): Although the Gaussian mixture produces on average the
most Gaussian-looking clusters, as was to be expected, the differences between
all nine methods look largely insignificant. The Gaussian mixture has positive
and negative outliers, the skew $t$-mixture only negative ones.
CV of distances to within-cluster neighbours
(right side of Figure 5): Despite one lower outlier, the Gaussian mixture
tends to produce the largest cvnnd, i.e., the lowest within-cluster CVs. It
probably helps that large variance clusters can bring together observations
that have large distances between each other and to the rest. clara and the
$t$-mixture produce the lowest cvnnd values. Differences between the other
methods are rather small.
Figure 6: Calibrated values of the ASW. Values belonging to the same data set
are connected by lines. The thick red line gives the average values.
Average silhouette width
(left side of Figure 6): Average linkage is a method that explicitly balances
separation and homogeneity, and consequently it achieves the best ASW values.
$K$-means achieves higher values than complete linkage, but the remaining
methods do worse than the linkage methods. ASW had been originally proposed
for use with clara (Kaufman and Rousseeuw (1990)), but clara does not produce
particularly high ASW values, even if they are better than those of the
mixture models and spectral clustering.
These results characterise the clustering methods as follows:
kmeans
clearly favours within-cluster homogeneity over separation. It does not favour
entropy as strongly as some literature suggests; in this respect it is in line
with clara and the mixture models, ahead of the remaining methods. It should
be noted that entropy is treated here as a potentially beneficial feature of a
clustering, whereas some literature makes it seem like a defect of kmeans that
such solutions are favoured (as far as this in fact happens).
clara
has largely similar characteristics to kmeans. It is slightly worse regarding
the representation of the distance structure and the ASW. It is slightly
better regarding clusters with density decrease from the mode. This may have
to do with the fact that the density goes down faster from the mode for the
multivariate Laplace distribution (where the log-likelihood sums up unsquared
distances) than for the Gaussian distribution (which corresponds to squared
distances).
mclust
produces clusters with the highest Gaussianity, but only by a rather
insignificant distance. It is best regarding uniformity as measured by cvnnd.
The reason for this is probably its ability to build clusters with large
within-cluster variation collecting observations that have large distances to
all or most other points, whereas other methods either need to isolate such
observations in one-point clusters, or integrate them in clusters with denser
cores. Mixtures of $t$- and skew $t$-distributions could in principle also
produce large variance clusters, but the shapes of $t$- and skew
$t$-distributions make it easier to integrate outlying observations with
denser regions.
mclust often tolerates large within-cluster distances, whereas its clusters
are not on average better separated than those from $K$-means. On the other
hand, its cluster sizes are not significantly less well balanced. Its ability
to produce clusters with strongly different within-cluster variance makes it
less suitable regarding Pearson-$\Gamma$ and the ASW, which treat distances in
the same way in all clusters.
emskewt
looks bad on almost all internal indexes. It is not particularly bad regarding
recovery of the “true” clusters though, see Section 3.2. This means that the
current collection of internal indexes does not capture favourable
characteristics of skewly distributed clusters appropriately; it also means
that emskewt is not an appropriate method for finding clusters with the
characteristics that are formalised by the internal indexes.
teigen
has a profile that is by and large very similar to the one of mclust, apart
from being slightly better regarding the maximum diameter, and slightly worse
regarding Gaussianity and uniformity.
single linkage
has a very distinct profile. It is best regarding separation, avoiding wide
within-cluster gaps, and cluster boundaries through density valleys, and worst
by some distance regarding within-cluster homogeneity and entropy.
average linkage
has similar strengths and weaknesses as single linkage, but not as extreme. It
is the best method regarding Pearson-$\Gamma$ and the ASW, both of which
balance homogeneity and separation and measure therefore how much the
clustering is in line with the distance structure.
complete linkage
is best regarding the maximum diameter. In most other respects it stands
between single and average linkage on one side and the centroid- and mixture-
based methods on the other side.
spectral
is another method that provides a compromise between the rather separation-
oriented single and average linkage on one side and the rather homogeneity-
oriented centroid- and mixture-based methods. Its maximum cluster diameter is
rather high on average. Its mode index value is good if not clearly different
from the one of clara. Its mid-range entropy value may look attractive in
applications in which a considerable degree of imbalance in the cluster sizes
may seem realistic, but the tendency of the linkage methods to produce one-
point clusters should be avoided.
The multivariate characterisation of the clustering methods also allows them
to be mapped, using a principal components analysis (PCA). Results of this are shown
in Figure 7. On the left side, PCs are shown using every index value for every
data set as a separate value, i.e., 42*11 variables. The first two PCs carry
30.9% and 16.6% of the variance, respectively. On the right side, the PCA is
performed on 11 variables that give average index values over all data sets.
While this reduces information, it allows the indexes to be shown as axes in a
biplot. The first two PCs here carry 50.0% and 19.7% of the variance,
respectively. After rotation, the maps are fairly similar. Using the more
detailed data set information, spectral seems much closer to kmeans and clara
than to mclust and teigen, but the apparent similarity to the latter ones
using average index values is an effect of dimension reduction; involving
information from the third PC (not shown), the similarity structure is more
similar to that of the plot using all 42*11 variables. The biplot on the right
side shows the opposite tendencies of separation on one hand and entropy and
average within distances on the other hand when characterising the methods,
with indexes such as maximum diameter, density mode, Pearson-$\Gamma$, and the
ASW opening another dimension, rather corresponding to kmeans, average, and
complete linkage. Qualitative conclusions from these maps agree roughly with
those in Jain et al. (2004), where more clustering algorithms, but fewer data
sets, were involved.
Figure 7: Clustering methods mapped on first two principal components from
using all data sets separately (left side), and from using mean values over
the data sets (right side).
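A hedged sketch of the right-hand map, assuming a $9\times 11$ matrix `idx` of
index values averaged over data sets (methods in rows, indexes in columns):

```r
pca <- prcomp(idx)               # calibrated values, no extra scaling assumed
summary(pca)$importance[2, 1:2]  # proportion of variance of PC1 and PC2
biplot(pca)                      # methods as points, indexes as arrows
```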
The study data also allow investigation of the values of the internal indexes
computed for the “true” clusterings. These are shown in Figure 8. Only the
entropy and Gaussianity are clearly above the mean zero of the random
clustering ensemble (which includes the solutions from the proper clustering
methods as a small minority), and also above the mean for the clustering
methods. The clustering methods are on average all above zero, which should be
expected, because these are meant to be desirable features of a good
clustering, and as such should be better for the proper clustering methods
than for the random ones. The methods achieve the highest average for the ASW,
which makes sense as this attempts to measure general clustering quality. The
fact that index values are mostly below zero for the “true” clusterings can be
interpreted in such a way that many given “true” clusterings are data
analytically wanting. The high values for entropy are probably artificial, due
to a biased choice of data sets. The high values for Gaussianity, however,
could suggest that there is a tendency in some real clusters, i.e.,
homogeneous subpopulations, to approximate the Gaussian distribution. A
possible explanation is that in a crisp clustering of a data set produced by a
clustering method, tails of a within-cluster distribution tend to be cut off
in the direction of other clusters, whereas “true” clusters tend to have some
proper overlap (clearly separated clusters are in my experience indeed rare in
real data), which is in line with the low values of the separation and denscut
(cluster boundaries running through density valleys) index. This probably also
affects the ASW and Pearson-$\Gamma$.
Figure 8: The boxplots show the distributions of the internal indexes computed
on the “true” clusterings. The red line shows the average index values
produced by the clustering methods.
### 3.2 Recovery of “true” clusterings
The quality of the recovery of the “true” clusterings is measured by the ARI,
BCubed, and the VI. Figure 9 shows the ARI-values achieved by the different
clustering methods. On average, there is a clear advantage of the centroid-
and mixture-based methods compared with the linkage methods (single linkage is
clearly the worst), and spectral clustering is in between. Every method
achieves good results on some data sets, but the linkage methods produce an
ARI around zero on many data sets. Differences between kmeans, clara, mclust,
emskewt, and teigen do not seem significant and are clearly dominated by
variation. On some data sets all methods produce very low values, and no
method achieves an ARI larger than 0.5 on more than half of the data sets. The
mean ARI is 0.28, the mean ARI of the best clusterings for every data set is
0.46. Interpreting these numbers, it has to be kept in mind that the given
“true” clustering does not necessarily qualify as the best clustering of the
data from a data analytic point of view; some of these are neither homogeneous
nor separated. Furthermore there may be meaningful clusters in the data that
differ from those declared as “truth”. A better recovery does not necessarily
mean that a method delivers the most useful clustering that can be found. On
the other hand, some given “true” clusterings correspond to clearly visible
patterns in the data, and at least some methods manage to find them. Overall,
the variation is quite high.
The picture changes strongly looking at the results regarding BCubed and
particularly VI, see Figure 10. BCubed still shows single linkage as the
weakest method, but otherwise differences look hardly significant, and
according to the VI, the average quality of the methods is almost uniform.
Figure 9: Adjusted Rand Index values by method. Values belonging to the same
data set are connected by lines. The thick red line gives the average values.
Figure 10: BCubed and negative Variation of Information values by method.
Values belonging to the same data set are connected by lines. The thick red
line gives the average values.
Further exploratory analysis (not shown) reveals that better values of the
external indexes are systematically associated with lower data dimension $p$
and lower sample size $n$, the latter probably because of confounding with the
correlated dimension. There was no clear interaction with the methods, and no
clear pattern regarding the number of clusters $k$.
Table 4 shows how often the different methods come out as the best according
to the indexes. This portrays mclust as very successful at recovering the
“truth”. Spectral clustering is hardly ever on top, but it has values very
close to the best for a number of data sets. Given that emskewt looks so bad
regarding the internal indexes in Section 3.1, its performance regarding the
external indexes looks surprisingly good. The most striking difference between
the indexes is that single linkage is not the best method for a single data
set with respect to the ARI, but it is the best for 11 data sets with
respect to the VI. This is explored in the following.
Table 4: Number of times that a method comes out best according to the three
external indexes.
Index | kmeans | clara | mclust | emskewt | teigen | single | average | complete | spectral
---|---|---|---|---|---|---|---|---|---
ARI | 3 | 4 | 8 | 5 | 5 | 0 | 3 | 1 | 1
BCubed | 2 | 2 | 7 | 5 | 3 | 4 | 4 | 2 | 1
VI | 2 | 1 | 6 | 3 | 3 | 11 | 2 | 1 | 1
Figure 11: Pairs plot of ARI, BCubed, and VI.
Figure 11 shows how the three indexes are related to each other over all nine
clustering methods applied to the 30 data sets with “true” clusterings. VI and
BCubed have a correlation $\rho$ of -0.94, but the ARI is correlated
substantially more weakly with both, $\rho=0.75$ with BCubed and $\rho=-0.57$ with
VI. BCubed can therefore be seen as a compromise between the two. Exploring
what causes the differences between ARI and VI, Figure 11 shows that the major
issue is that the VI can produce fairly good values close
to zero for some situations in which the ARI is around zero, indicating
unrelated clusterings, or only slightly better. Generally these situations
tend to occur where one clustering is very imbalanced, mostly with one or more
one-point clusters, whereas the other one (more often the “true” one) is not.
The VI involves cluster-wise percentages of points occurring together in the
same cluster in the other clustering, and therefore assesses one-point
clusters favourably, whereas the random labels model behind ARI indicates that
what happens with the object in a one-point cluster in another (potentially
“true”) clustering is random and therefore not meaningful as long as it
appears in a substantially bigger cluster there.
For example, consider the data set “22 - Wholesale” (see Appendix A2).
According to the VI, the single linkage clustering is optimal (VI$=0.64$), but
this has an ARI-value of about 0. It is second best according to BCubed with a
value of 0.72. Table 5 shows how this is related to the “true” clustering. In
favour of this clustering it can be said that single linkage cluster 2 is
“pure” regarding the truth; however, it is clear that any random clustering
that fixes one cluster size as 1 will be about equally good. This is a rather
extreme case; however, most of the assessment differences between ARI and VI
(and to a lesser extent BCubed) are of a similar kind. This makes the ARI look
like the more appropriate index here.
Table 5: Contingency table of “true” clustering and single linkage clustering
for data set “22 - Wholesale”.
Truth | Single linkage cluster 1 | Single linkage cluster 2
---|---|---
1 | 297 | 1
2 | 142 | 0
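This assessment can be reproduced from Table 5 by expanding the contingency
table into label vectors (`adjustedRandIndex` is from the `mclust` package):

```r
truth  <- rep(c(1, 1, 2), c(297, 1, 142))  # "true" clustering (298 vs 142)
single <- rep(c(1, 2, 1), c(297, 1, 142))  # single linkage (439 vs 1)
mclust::adjustedRandIndex(truth, single)   # approximately 0, slightly negative
```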
### 3.3 Relating “true” cluster recovery to the internal indexes
Figure 12: Correlation matrix of internal and external validation indexes
It is of interest whether the internal index values, which are observable in a
real situation, can explain to some extent the performance regarding the
“true” cluster recovery. A tool to assess this is a linear regression with an
external index as response, and the internal indexes as explanatory variables.
There is dependence between the different clusterings on the same data set,
and this can be appropriately handled using a random data set effect.
An important issue is that the internal indexes are correlated, which can make
the interpretation of the regression results difficult. Figure 12 shows the
correlation structure among the internal indexes, ARI and -VI (BCubed is not
taken into account in this section due to the high correlation with VI). The
order of indexes in Figure 12 was determined by a hierarchical clustering
using correlation dissimilarity; however, -VI and ARI were put on top due to
their different role in the regression, and the ASW was put at the bottom. The
ASW is not involved in the regression, as it is defined in order to compromise
between homogeneity and separation, which themselves are represented by other
internal indexes. It is involved in Figure 12 because its correlation to the
other indexes may be of interest anyway. One thing that can be seen is that it
is fairly strongly correlated to a number of other indexes, particularly
maximum diameter, Pearson-$\Gamma$, and the separation index, but rather
weakly to the average within-cluster distances meant to formalise homogeneity.
Considerable correlation occurs between the average within-cluster distances
and the entropy. Both of these are the internal indexes with the highest
correlation to the ARI. This is a problem for interpretation because this
means that entropy and homogeneity are confounded when explaining recovery
success. Furthermore, both, entropy in particular, are strongly negatively
correlated with separation, which may explain the negative correlation between
separation and the ARI. There is no further high ($>0.2$) correlation between
either -VI or ARI and other internal indexes. It is obvious that the ARI is
more closely connected to entropy and homogeneity, whereas the -VI is more
positively connected to separation. There are a number of further correlations
among the internal indexes; separation, the density mode and cut indexes,
Pearson-$\Gamma$, the maximum cluster diameter, and the absence of large
within-cluster gaps are all positively connected. The Gaussianity index and
the nearest neighbours CV are correlated 0.24 to each other; all their other
correlations are lower.
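The ordering of the indexes in Figure 12 can be sketched as follows (assuming
a matrix `vals` with one column per index, including ARI and -VI; the manual
repositioning of ARI, -VI, and the ASW described above is omitted):

```r
co  <- cor(vals)             # correlations between the indexes
dis <- as.dist(1 - co)       # correlation dissimilarity
ord <- hclust(dis)$order     # ordering from a hierarchical clustering
round(co[ord, ord], 2)       # reordered correlation matrix
```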
Table 6: Mixed-effects regression results regressing ARI and -VI,
respectively, on the internal indexes excluding the ASW.
Indexes | Coefficient (ARI) | $t$ (ARI) | $p$ (ARI) | Coefficient (-VI) | $t$ (-VI) | $p$ (-VI)
---|---|---|---|---|---|---
Intercept | .324 | 6.91 | .000 | -1.54 | -10.11 | .000
avewithin | -.019 | -1.34 | .181 | 0.03 | 0.88 | .377
maxdiameter | -.025 | -4.03 | .000 | 0.01 | 0.64 | .520
widestgap | .014 | 2.00 | .047 | -0.00 | -0.21 | .814
sindex | -.010 | -1.65 | .101 | 0.05 | 3.84 | .000
pearsongamma | .020 | 2.43 | .016 | -0.04 | -1.86 | .064
dmode | .009 | 0.89 | .374 | 0.05 | 1.92 | .056
denscut | .000 | 0.03 | .978 | -0.05 | -1.80 | .074
entropy | .088 | 4.69 | .000 | 0.00 | 0.01 | .990
kdnorm | .024 | 3.51 | .001 | 0.02 | 1.44 | .151
cvnnd | -.006 | -0.86 | .388 | -0.01 | -0.48 | .633
random eff. (data set) | | | .000 | | | .000
Table 6 gives the results of two regression analyses, with ARI and -VI as
responses, with a random data set effect. This has been obtained using the
function `lme` from the R package `nlme`, Pinheiro and Bates (2000). $p$-values
are interpreted in an exploratory manner, as they are not precise. However, the
null hypotheses of zero effect of a variable, given that all other variables
are in the model, are of interest here.
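A hedged sketch of such a call (the data frame `res`, with one row per
combination of method and data set, and its column names are assumptions of
the example):

```r
library(nlme)   # provides lme()
m <- lme(ARI ~ avewithin + maxdiameter + widestgap + sindex + pearsongamma +
           dmode + denscut + entropy + kdnorm + cvnnd,
         random = ~ 1 | dataset,   # random intercept per data set
         data = res)
summary(m)$tTable   # coefficients with t- and p-values, cf. Table 6
```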
The ARI regression has maximum diameter, entropy, and Gaussianity as highly
significant effects; Pearson-$\Gamma$ is clearly significant at 5%-level.
widestgap is borderline significant, which is potentially not meaningful given
the number of tests.
The interpretation of entropy (which clearly has the largest $t$-value) is
problematic for two reasons. Firstly, due to correlation, its coefficient may
partly carry information due to avewithin. Secondly, eight data sets have
artificially balanced classes, which may favour entropy among good
clusterings. The regression was re-run excluding those data sets (not shown),
yielding by and large the same significances including entropy, but its
$t$-value fell to 2.75. Even in this scenario it cannot be excluded that the
“sample” of data sets with known “true” clusters favours entropy artificially.
Gaussianity seems to be a valuable predictor for recovery of “true” classes.
The maximum diameter has a negative coefficient, meaning that on average and
controlled for all other indexes, a larger (therefore worse) maximum cluster
diameter went with a better “truth” recovery regarding the ARI. It is however
clearly correlated with Pearson-$\Gamma$ and widestgap, which have positive
effects.
Despite a positive relationship between ARI and -VI, the results of the VI-
regression are very different, mainly because -VI can achieve high values for
clusterings with very low entropy even if the “true” clustering is balanced.
This means that there is no bias in favour of entropy by the data set sample;
rather the VI seems biased against entropy by definition, see above. The only
clearly significant index for -VI is the separation index, with a positive
coefficient, which was not significant in the ARI-regression.
Plots of the fitted values of both regressions against their response variable
(not shown) look satisfactorily linear. In principle, the regressions could be
used to predict the ARI or VI for data sets with unknown “truth” from the
observable internal indexes, but this will not work very well, due the strong
data set effect.
Overall these results do not allow clear cut conclusions, due to correlation,
issues with the representativity of the data sets, and the very different
patterns observed for ARI and VI. The character of the “true” clusterings may
just be so diverse that no general statement about which clustering
characteristics allow for good recovery can be made. Preferring the ARI as
external index, the only safely interpretable significance seems to be the one
of Gaussianity, due to its low correlation with other indexes. Separation
seems to help in terms of the VI, but this includes favouring clusterings that
separate outliers as one-point clusters, arguably an issue with the VI.
## 4 Discussion
The aim of this study is to characterise the clustering methods in terms of
the internal indexes, to learn about the recovery of “true” clusterings, both
regarding the methods, and regarding characteristics that could be connected
to recovery.
Regarding the characterisation of the clustering methods, the right side of
Figure 7 is probably the most expressive, locating the clustering methods relative
to the internal indexes. Some indexes do not separate the methods very
strongly. Single linkage stands out as being quite different to most other
methods in many respects. On the other hand, the centroid-based methods, the
mixture-based methods and spectral clustering have much in common; one
surprising result is that $K$-means does not favour balanced cluster sizes
particularly strongly, compared to the mixture-based methods. Another result
is that single and complete linkage are not opposite extremes; rather, on most
characteristics, complete linkage is closer to single linkage (with average
linkage in between) than to the centroid-based and mixture-based methods.
Gaussian mixture-based clustering stands out more by
its good value regarding uniformity (cvnnd) than regarding Gaussianity of the
clusters.
Regarding the recovery of “true” clusterings, there is large variation between
the data sets. According to the ARI and BCubed, the Gaussian mixture is the
best for the largest number of data sets. Single linkage does badly regarding
the ARI. Differences between the other methods are not that pronounced, and
all of them did best in some data sets. This includes the skew $t$-mixture,
which does not look good according to the internal indexes but better
regarding the external indexes. There is currently no index, at least in the
collection used here, that formalises in which sense such a mixture can yield
a good clustering. This is a topic for further work. According to the VI (and
to some extent BCubed), single linkage does much better, but this rather
indicates a problem with the indexes than a good performance of single
linkage.
Explaining the “true” cluster recovery by the internal indexes does not
deliver very clear results, except that Gaussianity seems to help, which is
sometimes achieved by the Gaussian mixture, but only insignificantly more
often than by some other methods. A critical interpretation could be that
quality according to the internal indexes does not really measure what is
important for recovery. On the other hand one could argue that this shows the
heterogeneity of “true” clusterings, and that there is no “one fits it all
approach”, neither for clustering, nor for measuring clustering quality. The
given “true” clusterings are of such a nature that their recovery cannot be
reliably predicted from observable cluster characteristics.
Some problems were exposed with the non-representativity of the data sets,
with “true” clusterings, and with the VI (and somewhat less extreme the
BCubed) index. These problems are not exclusive to the present study, and it
can be hoped that these issues are on the radar whenever such benchmark
studies are run. These problems affect analyses involving the “true
clusterings” in particular. There is no reason to believe that the results
regarding the internal validation indexes are biased for these reasons.
## Appendix
### A1: Computational details
The following amendments were made to the clustering functions listed in
Section 2.1:
Mixture models:
Crisp partitions have always been enforced by assigning observations to the
cluster with maximum posterior probability of membership.
kmeans:
This was run with parameter `runs=10`, governing the number of random
initialisations. The default value is `runs=1`, which yields very unstable
results.
emskewt
The function `EmSkew` would occasionally produce errors or invalid results. It
is run inside a wrapper function that enforces a solution in the following
way: For each covariance matrix model (the shape of a skew $t$-distribution is
defined by the covariance matrix of an involved Gaussian mixture, see Lee and
McLachlan (2013), although this is not the covariance matrix of the resulting
skew $t$-distribution), starting from (1) the fully flexible model, 5 attempts
(different random initialisations) are made to find a solution. If all
attempts for a model fail, a less flexible model is tried out, in the order
(2) diagonal covariance matrices, (3) flexible but equal covariance matrices,
(4) equal diagonal covariance matrices, (5) equal spherical covariance
matrices, until a valid solution is found. If none of these is successful, the
same routine is carried out with a mixture of skew normal distributions, and
if this does not yield a valid solution either, `mclustBIC` is called with
default settings.
teigen
The function `teigen` would occasionally produce errors or invalid results. It
is run inside a wrapper function that enforces a solution in the following
way: If no valid solution is found, the wrapper-function for `EmSkew` as
explained above is called, but with `distr="mvt"`, fitting a multivariate
t-distribution.
specc
The function `specc` would occasionally produce errors or invalid results. It
is run inside a wrapper function. 10 attempts (different random
initialisations) are made to find a solution. If they all fail, all
observations are assigned to cluster 1. While this approach may seem unfair
for spectral clustering in comparison to `EmSkew`, which ultimately calls
`mclust` and can as such still produce a reasonable clustering, the motivation
is that a Gaussian mixture model can be seen as a constrained version of a
mixture of skew t-distributions, whereas spectral clustering has no
straightforward constrained version that can guarantee a valid solution.
In principle there can be situations in which also `mclustBIC` fails to
deliver a valid solution, however such a situation did not occur in the study.
Exhausting all attempts, both `specc` and `EmSkew` failed twice before
resorting to a one-cluster solution or `mclustBIC`, respectively, and `teigen`
failed 5 times; in all of these cases `EmSkew` with `distr="mvt"` delivered a
valid solution.
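The wrappers above share a common retry-with-fallback pattern; a generic
hedged sketch (function and argument names are made up for illustration, not
taken from the study code):

```r
# Run a clustering attempt up to 'attempts' times; on repeated failure,
# fall back to a more constrained model or a trivial solution.
try_cluster <- function(fit_once, attempts = 10, fallback) {
  for (i in seq_len(attempts)) {
    out <- try(fit_once(), silent = TRUE)
    if (!inherits(out, "try-error")) return(out)
  }
  fallback()
}
# e.g., try_cluster(function() my_specc(X, K), 10,
#                   function() rep(1, nrow(X)))  # one-cluster fallback
```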
### A2: More details on data sets
Tables 7 and 8 give a list of the data sets used in the study.
Table 7: Overview of data sets used in the study. As “Source”, the source is
given from which the data set was retrieved for the study, which in some cases
is not the original source (most data sets retrieved from www.openml.org and
many from R-packages are from UCI). Missing references: (i) Turing Institute,
Glasgow, (ii) www.bundestag.de, (iii) maps.met.police.uk/tables.htm
Number | Name | $n$ | $p$ | $K$ | “Truth” given | Source | Reference
---|---|---|---|---|---|---|---
1 | Crabs | 200 | 5 | 4 | Yes | R-MASS | Campbell and Mahon (1974)
Morphological measurements of crabs, two species, two sexes
2 | Dortmund | 170 | 5 | 5 | No | See reference | Sommerer and Weihs (2005)
Various characteristics of the districts of the city of Dortmund
3 | Iris | 150 | 4 | 3 | Yes | R-datasets | Anderson (1935)
Measurements on 50 flowers from each of 3 species of iris
4 | Vowels | 990 | 10 | 11 | Yes | See reference | Hastie et al. (2001)
Recognition of British English vowels
5 | Bats | 2677 | 72 | 8 | Yes | V. Zamora-Gutierrez | Zamora-Gutierrez et al. (2016)
Acoustic identification of Mexican bat species
6 | USArrests | 50 | 4 | 2 | No | R-datasets | McNeil (1977)
Arrests per 100,000 residents for various crimes in US states 1973
7 | OliveOil | 572 | 8 | 9 | Yes | R-pdfcluster | Forina et al. (1983)
Chemical decomposition of Italian olive oils from 9 regions
8 | OldFaithful | 299 | 2 | 3 | No | R-MASS | Azzalini and Bowman (1990)
Duration and waiting times for eruptions of Old Faithful geyser
9 | Tetragonula | 236 | 4 | 9 | Yes | R-prabclus | Franck et al. (2004)
Genetic information on 9 species of tetragonula bees
10 | Thyroid | 215 | 6 | 3 | Yes | R-mclust | Coomans et al. (1983)
Results of five laboratory tests diagnosing thyroid gland patients
11 | Spam | 4601 | 57 | 2 | Yes | R-kernlab | Hastie et al. (2001)
Email spam classification from word and character frequencies
12 | Wisconsin | 569 | 30 | 2 | Yes | UCI | Street et al. (1993)
Diagnosis of breast cancer, measurements of features of image
13 | Yeast | 1484 | 8 | 10 | Yes | UCI | Horton and Nakai (1996)
Discriminative features for protein Localization Sites in cells
14 | Vehicle | 846 | 18 | 4 | Yes | R-mlbench | (i)
Recognising vehicle type from silhouettes
15 | Letters | 2000 | 16 | 26 | Yes | R-mlbench | Frey and Slate (1991)
Recognising handwritten letters from pixel displays
16 | Bundestag | 299 | 5 | 5 | No | R-flexclust | (ii)
German Bundestag election results 2009 of 5 major parties by constituency
17 | Finance | 889 | 4 | 2 | Yes | R-Rmixmod | du Jardin and Séverin (2010)
Predicting firm bankruptcy from four financial ratios
18 | BankNotes | 200 | 6 | 2 | Yes | R-mclust | Flury and Riedwyl (1988)
Identifying counterfeit Swiss bank notes from measurements
19 | StoneFlakes | 79 | 8 | 3 | No | Thomas Weber | Weber (2009)
Measurements on prehistoric stone tools
20 | Leaf | 340 | 14 | 30 | Yes | UCI | Silva et al. (2013)
Shape and consistency measurements on leafs from 30 plant species
21 | London | 32 | 9 | 4 | No | See reference | (iii)
Relative numbers of various crimes in the boroughs of London 2014
Table 8: Overview of data sets used in the study (part 2). As “Source”, the
source is given from which the data set was retrieved for the study, which in
some cases is not the original source (most data sets retrieved from
www.openml.org and many from R-packages are from UCI). Missing references: (i)
Deepraj Baidya, (ii) Dukascopy Historical Data Feed, (iii) www.decathlon2000.com
Number | Name | $n$ | $p$ | $K$ | “Truth” given | Source | Reference
---|---|---|---|---|---|---|---
22 | Wholesale | 440 | 7 | 2 | Yes | `www.openml.org` | Abreu (2011)
Spending on various product categories by clients of wholesale distributor
23 | Heart | 200 | 13 | 5 | Yes | `www.openml.org` | Detrano et al. (1989)
Diagnosing different stages of heart disease by diagnostic measurements
24 | MachineKnow | 403 | 5 | 5 | No | `www.openml.org` | Kahraman et al. (2013)
Students’ knowledge status about the subject of Electrical DC Machines
25 | PlantLeaves | 1599 | 64 | 100 | Yes | `www.openml.org` | Yan et al. (2013)
Plant species classification by texture detected from leaf images
26 | RNAYan | 90 | 2 | 7 | Yes | Bioconductor | Yan et al. (2013)
RNA sequencing data distinguishing cell types in human embryonic development
27 | RNAKolo | 704 | 5 | 3 | Yes | Bioconductor | Kolodziejczyk et al. (2015)
RNA sequencing data on mouse embryonic stem cell growth
28 | Cardiotocography | 2126 | 23 | 10 | Yes | `www.openml.org` | Ayres-de Campos et al. (2000)
Classification of cardiotocograms into pattern classes
29 | Stars | 240 | 4 | 6 | Yes | Kaggle | (i)
Predict star types from features of stars
30 | Kidney | 203 | 11 | 2 | Yes | R-teigen | Dua and Graff (2017)
Presence or absence of chronic kidney disease from diagnostic features
31 | BreastTissue | 106 | 9 | 4 | Yes | `www.openml.org` | Jossinet (1996)
Classes of breast carcinoma diagnosed by impedance measurements
32 | FOREX | 1832 | 10 | 2 | Yes | `www.openml.org` | (ii)
Historical price data EUR/JPY for predicting direction next day
33 | SteelPlates | 1941 | 24 | 7 | Yes | `www.openml.org` | Buscema (1998)
Classification of steel plates faults from various measurements
34 | BostonHouse | 506 | 13 | 5 | No | `www.openml.org` | Harrison and Rubinfeld (1978)
Multivariate characterisation of different areas of Boston
35 | Ionosphere | 351 | 32 | 2 | Yes | `www.openml.org` | Sigillito et al. (1989)
Radar data to distinguish free electron patterns from noise in ionosphere
36 | Glass | 214 | 9 | 6 | Yes | R-MASS | Venables and Ripley (2002)
Identify type of glass from chemical analysis
37 | CustomerSat | 1811 | 10 | 5 | Yes | R-bayesm | Rossi et al. (2005)
Responses to a satisfaction survey for a product
38 | Avalanches | 394 | 9 | 5 | No | Margherita Maggioni | Maggioni (2004)
Avalanche frequencies by size and other factors for mapping release areas
39 | Decathlon | 2580 | 10 | 6 | No | R-GDAdata | (iii)
Points per event of decathlon athletes
40 | Alcohol | 125 | 10 | 5 | Yes | UCI | Adak et al. (2020)
Five types of alcohol classified by QCM sensor data
41 | Augsburg | 95 | 11 | 3 | No | See reference | Theus and Urbanek (2009)
Tax data for districts of the city of Augsburg before and after Thirty Years
War
42 | ImageSeg | 2310 | 16 | 7 | Yes | `www.openml.org` | Dua and Graff (2017)
$3\times 3$ pixel regions of outdoor images classified as object
A zip file with all data sets in the form in which they were analysed in the
present study (i.e., after all pre-processing) is planned to be provided as
online supplement of the published version of this article. Where a “true”
clustering is given this is in the first variable. All variables were scaled
to zero mean and unit variance before clustering, except where stated in the
following list, which gives information about data pre-processing where this
was applied.
2 Dortmund
The original data set has 203 variables, many of which are not of much
substantial interest, with several linear dependencies. The version used here
is described in Coretto and Hennig (2016).
4 Vowels
The original data set is split into test and training data for supervised
classification. Both are used together here.
5 Bats
The data set used here is a preliminary version of what is analysed in Zamora-
Gutierrez et al. (2016) that was provided to me for testing purposes by
Veronica Zamora-Gutierrez. A small number of missing values were imputed by
mean imputation.
7 OliveOil
The original data set contains classification by 9 regions, which are
subclasses of 3 macro areas. The regions were used as “true” clustering.
9 Tetragonula
This data set originally contains categorical genetic information. The version
used here was generated by running a four-dimensional multidimensional scaling
on genetic distances as proposed by Hausdorf and Hennig (2010). The resulting
data were not scaled before clustering; the original scales represent the
original distances.
15 Letters
This data set originally has 20,000 observations, which is too big for
handling a full distance matrix. Only the first 2,000 have been used.
16 Bundestag
The data set was not scaled before clustering. The unscaled version represents
comparable voter percentages.
19 StoneFlakes
A small number of missing values were imputed by mean imputation.
21 London
The data were retrieved from the website `maps.met.police.uk/tables.htm` in
December 2015. The website has been reorganised in the meantime and the
original data are probably no longer available there, however more recent data
of the same kind is available. Only the major crime categories were used,
divided by the total number of offences; the total number of offences was used
as a variable divided by the number of inhabitants. After constructing these
features, variables were scaled (the number of serious crimes is very low, and
not scaling the relative numbers would have strongly reduced their influence
on clustering).
24 MachineKnow
A classification variable is provided, but this was not used as “true”
clustering here, because according to the documentation this was constructed
from the data by a machine learning algorithm, and does not qualify as “ground
truth”.
26 RNAYan
This is originally a data set with $p\gg n$. Unscaled principal components
were used as explained in Batool and Hennig (2020) in line with some
literature cited there.
27 RNAKolo
This is originally a data set with $p\gg n$. Unscaled principal components
were used as explained in Batool and Hennig (2020) in line with some
literature cited there.
28 Cardiotocography
A variable with less than four distinct values has been removed.
33 SteelPlates
Three variables with less than four distinct values have been removed.
34 BostonHouse
This was originally a regression problem. The original response variable
“housing price” is used together with the other variables for clustering here.
A binary variable has been removed.
35 Ionosphere
Two binary variables have been removed.
38 Avalanches
On top of the first six variables, which give geographical information, the
original data has frequencies for avalanches of ten different sizes,
categorised by what percentage of the release areas is covered. This
information has been reduced to the three variables “number of avalanches”,
“mean coverage” and “variance of coverages”.
39 Decathlon
Only data from the year 2000 onward are used, in order to generate a data set
of manageable size. Variables were not scaled, because the decathlon points
system is meant to make the original points values comparable.
41 Augsburg
For four count variables the meaning of missing values was “not existing”, and
these were set to zero. Some other missing values were imputed by mean
imputation.
42 ImageSeg
Three variables with less than five distinct values have been removed.
## References
* Abreu (2011) Abreu, N. (2011). Analise do perfil do cliente recheio e desenvolvimento de um sistema promocional. Master’s thesis, Lisbon: ISCTE-IUL.
* Ackerman et al. (2010) Ackerman, M., S. Ben-David, and D. Loker (2010). Towards property-based classification of clustering paradigms. In Advances in Neural Information Processing Systems (NIPS), pp. 10–18.
* Adak et al. (2020) Adak, M. F., P. Lieberzeit, P. Jarujamrus, and N. Yumusak (2020). Classification of alcohols obtained by QCM sensors with different characteristics using ABC-based neural network. Engineering Science and Technology, an International Journal 23, 463–469.
* Akhanli and Hennig (2020) Akhanli, S. E. and C. Hennig (2020). Comparing clusterings and numbers of clusters by aggregation of calibrated clustering validity indexes. Statistics and Computing 30(5), 1523–1544.
* Amigo et al. (2009) Amigo, E., J. Gonzalo, J. Artiles, and F. Verdejo (2009). A comparison of extrinsic clustering evaluation metrics based on formal constraints. Information retrieval 12, 461–486.
* Anderlucci and Hennig (2014) Anderlucci, L. and C. Hennig (2014). Clustering of categorical data: a comparison of a model-based and a distance-based approach. Communications in Statistics - Theory and Methods 43, 704–721.
* Anderson (1935) Anderson, E. (1935). The irises of the Gaspe Peninsula. Bulletin of the American Iris Society 59, 2–5.
* Andrews and McNicholas (2012) Andrews, J. L. and P. D. McNicholas (2012). Model-based clustering, classification, and discriminant analysis via mixtures of multivariate t-distributions. Statistics and Computing 22(5), 1021–1029.
* Andrews et al. (2018) Andrews, J. L., J. R. Wickins, N. M. Boers, and P. D. McNicholas (2018). teigen: An R package for model-based clustering and classification via the multivariate $t$ distribution. Journal of Statistical Software 83(7), 1–32.
* Arbelaitz et al. (2013) Arbelaitz, O., I. Gurrutxaga, J. Muguerza, J. M. Pérez, and I. Perona (2013). An extensive comparative study of cluster validity indices. Pattern Recognition 46(1), 243–256.
* Ayres-de Campos et al. (2000) Ayres-de Campos, D., J. Bernardes, A. Garrido, J. Marques-de Sa, and L. Pereira-Leite (2000). Sisporto 2.0: a program for automated analysis of cardiotocograms. Journal of maternal-fetal medicine 9, 311–318.
* Azzalini and Bowman (1990) Azzalini, A. and A. W. Bowman (1990). A look at some data on the Old Faithful geyser. Applied Statistics 39, 357–365.
* Bagga and Baldwin (1998) Bagga, A. and B. Baldwin (1998). Entity-based cross-document coreferencing using the vector space model. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics (COLING-ACL 98), pp. 79–85. ACL, Stroudsburg PE.
* Batool and Hennig (2020) Batool, F. and C. Hennig (2020). Clustering with the average silhouette width. arXiv:1910.11339 [stat], Computational Statistics and Data Analysis (accepted for publication).
* Boulesteix (2015) Boulesteix, A.-L. (2015). Ten simple rules for reducing overoptimistic reporting in methodological computational research. PLoS Computational Biology 11, e1004191.
* Boulesteix and Hatz (2017) Boulesteix, A.-L. and M. Hatz (2017). Benchmarking for clustering methods based on real data: A statistical view. In Data Science: Innovative Developments in Data Analysis and Clustering, pp. 73–82. Springer, Berlin.
* Boulesteix et al. (2013) Boulesteix, A.-L., S. Lauer, and M. J. A. Eugster (2013). A plea for neutral comparison studies in computational sciences. PLoS ONE 8, e61562.
* Brusco and Steinley (2007) Brusco, M. J. and D. Steinley (2007). A comparison of heuristic procedures for minimum within-cluster sums of squares partitioning. Psychometrika 72, 583–600.
* Buscema (1998) Buscema, M. (1998). Metanet: The theory of independent judges. Substance Use & Misuse 33, 439–461.
* Campbell and Mahon (1974) Campbell, N. A. and R. J. Mahon (1974). A multivariate study of variation in two species of rock crab of genus Leptograpsus. Australian Journal of Zoology 22, 417–425.
* Coomans et al. (1983) Coomans, D., M. Broeckaert, M. Jonckheer, and D. L. Massart (1983). Comparison of multivariate discriminant techniques for clinical data - application to the thyroid functional state. Methods of Information in Medicine 22, 93–101.
* Coretto and Hennig (2016) Coretto, P. and C. Hennig (2016). Robust improper maximum likelihood: tuning, computation, and a comparison with other methods for robust Gaussian clustering. Journal of the American Statistical Association 111, 1648–1659.
* de Souto et al. (2008) de Souto, M. C., I. G. Costa, D. S. de Araujo, T. B. Ludermir, and A. Schliep (2008). Clustering cancer gene expression data: a comparative study. BMC Bioinformatics 9, 497.
* Detrano et al. (1989) Detrano, R., A. Janosi, W. Steinbrunn, M. Pfisterer, J. Schmid, S. Sandhu, K. Guppy, S. Lee, and V. Froelicher (1989). International application of a new probability algorithm for the diagnosis of coronary artery disease. American Journal of Cardiology 64, 304–310.
* Dimitriadou et al. (2004) Dimitriadou, E., M. Barth, C. Windischberger, K. Hornik, and E. Moser (2004). A quantitative comparison of functional MRI cluster analysis. Artificial Intelligence in Medicine 31, 57–71.
* du Jardin and Séverin (2010) du Jardin, P. and E. Séverin (2010). Dynamic analysis of the business failure process: A study of bankruptcy trajectories. In Proceedings of the 6th Portuguese Finance Network Conference, Ponta Delgada, Azores, 1 July 2010.
* Dua and Graff (2017) Dua, D. and C. Graff (2017). UCI machine learning repository.
* Ester et al. (1996) Ester, M., H.-P. Kriegel, J. Sander, and X. Xu (1996). A density-based algorithm for discovering clusters in large spatial databases with noise. In E. Simoudis, J. Han, and U. M. Fayyad (Eds.), KDD 96: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, pp. 226–231. AAAI Press, Menlo Park CA.
* Everitt et al. (2011) Everitt, B. S., S. Landau, M. Leese, and D. Stahl (2011). Cluster Analysis, 5th ed. Wiley, New York.
* Flury and Riedwyl (1988) Flury, B. and H. Riedwyl (1988). Multivariate Statistics: A practical approach. Chapman & Hall, London.
* Forina et al. (1983) Forina, M., C. Armanino, S. Lanteri, and E. Tiscornia (1983). Classification of olive oils from their fatty acid composition. In H. Martens and H. Russwurm (Eds.), Food Research and Data Analysis, pp. 189–214. Applied Science Publ., Barking.
* Fraley and Raftery (2002) Fraley, C. and A. E. Raftery (2002). Model-based clustering, discriminant analysis and density estimation. Journal of the American Statistical Association 97, 611–631.
* Franck et al. (2004) Franck, P., E. Cameron, G. Good, J.-Y. Rasplus, and B. P. Oldroyd (2004). Nest architecture and genetic differentiation in a species complex of Australian stingless bees. Molecular Ecology 13, 2317–2331.
* Frey and Slate (1991) Frey, P. W. and D. J. Slate (1991). Letter recognition using Holland-style adaptive classifiers. Machine Learning 6, 161–182.
* Halkidi et al. (2015) Halkidi, M., M. Vazirgiannis, and C. Hennig (2015). Method-independent indices for cluster validation and estimating the number of clusters. In C. Hennig, M. Meila, F. Murtagh, and R. Rocci (Eds.), Handbook of Cluster Analysis, pp. 595–618. CRC Press.
* Harrison and Rubinfeld (1978) Harrison, D. and D. L. Rubinfeld (1978). Hedonic prices and the demand for clean air. Journal of Environmental Economics & Management 5, 81–102.
* Hartigan and Wong (1979) Hartigan, J. A. and M. A. Wong (1979). Algorithm AS 136: A K-means clustering algorithm. Applied Statistics 28, 100–108.
* Hastie et al. (2001) Hastie, T., R. Tibshirani, and J. H. Friedman (2001). The Elements of Statistical Learning. Springer, New York.
* Hausdorf and Hennig (2010) Hausdorf, B. and C. Hennig (2010). Species Delimitation Using Dominant and Codominant Multilocus Markers. Systematic Biology 59(5), 491–503.
* Hennig (2015a) Hennig, C. (2015a). Clustering strategy and method selection. In C. Hennig, M. Meila, F. Murtagh, and R. Rocci (Eds.), Handbook of Cluster Analysis, pp. 703–730. CRC Press.
* Hennig (2015b) Hennig, C. (2015b). What are the true clusters? Pattern Recognition Letters 64, 53–62.
* Hennig (2018) Hennig, C. (2018). Some thoughts on simulation studies to compare clustering methods. Archives of Data Science, Series A (Online First) 5(1), 1–21.
* Hennig (2019) Hennig, C. (2019). Cluster validation by measurement of clustering characteristics relevant to the user. In C. H. Skiadas and J. R. Bozeman (Eds.), Data Analysis and Applications 1: Clustering and Regression, Modeling - Estimating, Forecasting and Data Mining, pp. 1–24. ISTE Ltd., London.
* Hennig (2020) Hennig, C. (2020). fpc: Flexible Procedures for Clustering. R package version 2.2-8.
* Hennig and Meila (2015) Hennig, C. and M. Meila (2015). Cluster analysis: An overview. In C. Hennig, M. Meila, F. Murtagh, and R. Rocci (Eds.), Handbook of Cluster Analysis, pp. 1–19. CRC Press.
* Horton and Nakai (1996) Horton, P. and K. Nakai (1996). A probabilistic classification system for predicting the cellular localization sites of proteins. In D. J. States, P. Agarwal, T. Gaasterland, L. Hunter, and R. F. Smith (Eds.), Proceedings of the Fourth International Conference for Intelligent Systems for Molecular Biology, pp. 109–115. AAAI Press, Menlo Park CA.
* Hubert and Arabie (1985) Hubert, L. and P. Arabie (1985). Comparing partitions. Journal of Classification 2(2), 193–218.
* Hubert and Schultz (1976) Hubert, L. J. and J. Schultz (1976). Quadratic assignment as a general data analysis strategy. British Journal of Mathematical and Statistical Psychology 29, 190–241.
* Jain et al. (2004) Jain, A. K., A. Topchy, M. H. C. Law, and J. M. Buhmann (2004). Landscape of clustering algorithms. In Proceedings of the 17th International Conference on Pattern Recognition (ICPR04), Vol. 1, pp. 260–263. IEEE Computer Society Washington, DC.
* Javed et al. (2020) Javed, A., B. S. Lee, and D. M. Rizzo (2020). A benchmark study on time series clustering. Machine Learning with Applications 1, 100001.
* Jossinet (1996) Jossinet, J. (1996). Variability of impedivity in normal and pathological breast tissue. Medical and Biological Engineering and Computing 34, 346–350.
* Kahraman et al. (2013) Kahraman, H. T., S. Sagiroglu, and I. Colak (2013). Developing intuitive knowledge classifier and modeling of users’ domain dependent data in web. Knowledge Based Systems 37, 283–295.
* Karatzoglou et al. (2004) Karatzoglou, A., A. Smola, K. Hornik, and A. Zeileis (2004). kernlab – an S4 package for kernel methods in R. Journal of Statistical Software 11(9), 1–20.
* Kaufman and Rousseeuw (1990) Kaufman, L. and P. J. Rousseeuw (1990). Finding groups in data: an introduction to cluster analysis, Volume 344. Wiley, New York.
* Kolodziejczyk et al. (2015) Kolodziejczyk, A. A., J. K. Kim, J. C. Tsang, T. Ilicic, J. Henriksson, K. N. Natarajan, A. C. Tuck, X. Gao, M. Bühler, P. Liu, J. C. Marioni, and S. A. Teichmann (2015). Single cell RNA-sequencing of pluripotent states unlocks modular transcriptional variation. Cell Stem Cell 17(4), 471–485.
* Kou et al. (2014) Kou, G., Y. Peng, and G. Wang (2014). Evaluation of clustering algorithms for financial risk analysis using MCDM methods. Information Sciences 275, 1–12.
* Lee and McLachlan (2013) Lee, S. X. and G. J. McLachlan (2013). On mixtures of skew normal and skew t-distributions. Advances in Data Analysis and Classification 7, 241–266.
* Liu et al. (2019) Liu, X., W. Song, B. Y. Wong, T. Zhang, S. Yu, G. N. Lin, and X. Di (2019). A comparison framework and guideline of clustering methods for mass cytometry data. Genome Biology 20, 297.
* Maechler et al. (2019) Maechler, M., P. Rousseeuw, A. Struyf, M. Hubert, and K. Hornik (2019). cluster: Cluster Analysis Basics and Extensions. R package version 2.1.0.
* Maggioni (2004) Maggioni, M. (2004). Avalanche release areas and their influence on uncertainty in avalanche hazard mapping. Ph. D. thesis, Universität Zürich.
* Maulik and Bandyopadhyay (2002) Maulik, U. and S. Bandyopadhyay (2002). Performance evaluation of some clustering algorithms and validity indices. IEEE Transactions on Pattern Analysis and Machine Intelligence 24(12), 1650–1654.
* McLachlan and Peel (2000) McLachlan, G. J. and D. Peel (2000). Finite Mixture Models. Wiley, New York.
* McNeil (1977) McNeil, D. R. (1977). Interactive Data Analysis. Wiley, New York.
* Meila (2007) Meila, M. (2007). Comparing clusterings—an information based distance. Journal of Multivariate Analysis 98(5), 873 – 895.
* Meila (2015) Meila, M. (2015). Criteria for comparing clusterings. In C. Hennig, M. Meila, F. Murtagh, and R. Rocci (Eds.), Handbook of Cluster Analysis, pp. 619–635. CRC Press.
* Meila and Heckerman (2001) Meila, M. and D. Heckerman (2001). An experimental comparison of model-based clustering methods. Machine Learning 42, 9–29.
* Milligan (1980) Milligan, G. W. (1980). An examination of the effect of six types of error perturbation on fifteen clustering algorithms. Psychometrika 45, 325–342.
* Milligan (1996) Milligan, G. W. (1996). Clustering validation: results and implications for applied analyses. In P. Arabie, L. J. Hubert, and G. D. Soete (Eds.), Clustering and Classification, pp. 341–375. World Scientific, Singapore.
* Ng et al. (2001) Ng, A. Y., M. I. Jordan, and Y. Weiss (2001). On spectral clustering: Analysis and an algorithm. In T. Dietterich, S. Becker, and Z. Ghahramani (Eds.), Advances in Neural Information Processing Systems 14 (NIPS 2001), pp. 1–8. NIPS.
* Pinheiro and Bates (2000) Pinheiro, J. C. and D. M. Bates (2000). Mixed-Effects Models in S and S-PLUS. Springer, New York.
* Rodriguez et al. (2019) Rodriguez, M. Z., C. H. Comin, D. Casanova, O. M. Bruno, D. R. Amancio, L. Costa, and F. A. Rodrigues (2019). Clustering algorithms: A comparative approach. PloS one 14, e0210236.
* Rossi et al. (2005) Rossi, P. E., G. M. Allenby, and R. McCulloch (2005). Bayesian Statistics and Marketing. Wiley, New York.
* Saracli et al. (2013) Saracli, S., N. Dogan, and I. Dogan (2013). Comparison of hierarchical cluster analysis methods by cophenetic correlation. Journal of Inequalities and Applications (electronic publication) 203.
* Scrucca et al. (2016) Scrucca, L., M. Fop, T. B. Murphy, and A. E. Raftery (2016). mclust 5: clustering, classification and density estimation using Gaussian finite mixture models. The R Journal 8(1), 289–317.
* Shannon (1948) Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal 27(3), 379–423.
* Sigillito et al. (1989) Sigillito, V. G., S. P. Wing, L. V. Hutton, and K. B. Baker (1989). Classification of radar returns from the ionosphere using neural networks. Johns Hopkins APL Technical Digest 10, 262–266.
* Silva et al. (2013) Silva, P. F. B., A. R. S. Marçal, and R. M. A. da Silva (2013). Evaluation of features for leaf discrimination. In M. Kamel and A. Campilho (Eds.), Image Analysis and Recognition, Berlin, Heidelberg, pp. 197–204. Springer Berlin Heidelberg.
* Sommerer and Weihs (2005) Sommerer, E.-O. and C. Weihs (2005). Introduction to the contest “social milieus in dortmund”. In Classification - the Ubiquitious Challenge, pp. 667–673. Springer, Berlin.
* Steinley and Brusco (2011) Steinley, D. and M. J. Brusco (2011). Evaluating the performance of model-based clustering: Recommendations and cautions. Psychological Methods 16, 63–79.
* Street et al. (1993) Street, W. N., W. H. Wolberg, and O. L. Mangasarian (1993). Nuclear feature extraction for breast tumor diagnosis. In IS&T/SPIE 1993 International Symposium on Electronic Imaging: Science and Technology, Volume 1905, San Jose, CA, pp. 861–870.
* Theus and Urbanek (2009) Theus, M. and S. Urbanek (2009). Interactive Graphics for Data Analysis. CRC/Chapman & Hall, Boca Raton FL.
* Van Mechelen et al. (2018) Van Mechelen, I., A.-L. Boulesteix, R. Dangl, N. Dean, I. Guyon, C. Hennig, F. Leisch, and D. Steinley (2018, October). Benchmarking in cluster analysis: A white paper. arXiv:1809.10496 [stat].
* Venables and Ripley (2002) Venables, W. N. and B. D. Ripley (2002). Modern Applied Statistics with S. Springer, New York.
* von Luxburg et al. (2012) von Luxburg, U., R. Williamson, and I. Guyon (2012). Clustering: Science or art? JMLR Workshop and Conference Proceedings 27, 65–79.
* Wang et al. (2018) Wang, K., A. Ng, and G. McLachlan. (2018). EMMIXskew: The EM Algorithm and Skew Mixture Distribution. R package version 1.0.3.
* Weber (2009) Weber, T. (2009). The lower/middle palaeolithic transition - is there a lower/middle palaeolithic transition? Preistoria Alpina 44, 1–6.
* Yan et al. (2013) Yan, L., M. Yang, H. Guo, L. Yang, J. Wu, R. Li, P. Liu, Y. Lian, X. Zheng, J. Yan, J. Huang, M. Li, X. Wu, L. Wen, K. Lao, R. Li, J. Qiao, and F. Tang (2013). Single-cell rna-seq profiling of human preimplantation embryos and embryonic stem cells. Nature structural & molecular biology 20, 1131–1139.
* Zamora-Gutierrez et al. (2016) Zamora-Gutierrez, V., C. Lopez-Gonzalez, M. C. MacSwiney Gonzalez, B. Fenton, G. Jones, E. K. V. Kalko, S. J. Puechmaille, V. Stathopoulos, and K. E. Jones (2016). Acoustic identification of mexican bats based on taxonomic and ecological constraints on call design. Methods in Ecology and Evolution 7(9), 1082–1091.
|
# O$|$R$|$P$|$E - A Data Semantics Driven Concurrency Control Mechanism with
Run-time Adaptation
Tim Lessner (freiheit.com technologies gmbh, Hamburg, Germany), Fritz Laux (Reutlingen University, Reutlingen, Germany), and Thomas M Connolly (University of the West of Scotland, Paisley, UK)
###### Abstract
This paper presents a concurrency control mechanism that does not follow a ‘one concurrency control mechanism fits all needs’ strategy. With the presented mechanism, a transaction runs under several concurrency control mechanisms, and the appropriate one is chosen based on the accessed data. For
this purpose, the data is divided into four classes based on its access type
and usage (semantics). Class $O$ (the optimistic class) implements a first-
committer-wins strategy, class $R$ (the reconciliation class) implements a
first-n-committers-win strategy, class $P$ (the pessimistic class) implements
a first-reader-wins strategy, and class $E$ (the escrow class) implements a
first-n-readers-win strategy. Accordingly, the model is called O$|$R$|$P$|$E.
The selected concurrency control mechanism may be automatically adapted at
run-time according to the current load or a known usage profile. This run-time
adaptation allows O$|$R$|$P$|$E to balance the commit rate and the response
time even under changing conditions. O$|$R$|$P$|$E outperforms the Snapshot
Isolation concurrency control in terms of response time by a factor of
approximately 4.5 under heavy transactional load (4000 concurrent
transactions). As a consequence, the degree of concurrency is 3.2 times higher.
###### Index Terms:
Transaction processing; multimodel concurrency control; optimistic concurrency
control; snapshot isolation; performance analysis; run-time adaptation.
## I Introduction
The drawbacks of existing concurrency control (CC) mechanisms are well known: pessimistic concurrency control (PCC) is likely to block transactions and is prone to deadlocks, while optimistic concurrency control (OCC) may experience a sudden decrease in the commit rate if contention increases. Snapshot Isolation (SI) better supports query processing, since transactions generally operate on snapshots, and also prevents read anomalies; but depending on the implementation of SI, either pessimistic or optimistic, it is subject to the previously mentioned drawbacks of PCC or OCC. Semantics-based CC (SCC) remedies some problems of PCC and OCC: it performs well under contention, reduces the blocking time, and better supports disconnected operations. However, its applicability is limited, since data and transactions have to comply with specific properties such as the commutativity of operations. In addition to the previously mentioned drawbacks, neither PCC nor OCC nor SCC supports long-lived and disconnected data processing. These properties, however, are essential to achieve scalability in Web-based and loosely coupled applications. Another challenge is that in real-life scenarios the data usage profile often changes over time (e.g., stock refill in the morning, selling goods during business hours, housekeeping during closing hours), which calls for a dynamic CC mechanism.
This paper extends a mechanism presented in [1] and originally introduced in
[2] that combines OCC, PCC, and SCC and steps away from the ‘one concurrency
control mechanism fits all needs’ strategy. Instead, the CC mechanism is
chosen depending on the data usage. While the original O$|$R$|$P$|$E model
assigns the appropriate CC-mechanism statically, this paper addresses a
dynamic adaptation of the CC-mechanism in response to sudden changes of the system load. To address scalability, the mechanism was designed with a focus on long-lived and disconnected data processing.
Consider, for example, the wholesale scenario as presented in the TPC-C [3].
With PCC using shared and exclusive locks, the likelihood of deadlocks
increases for hot spot fields such as the stock’s quantity or the account’s
debit or credit. If transactions are long-lived, PCC is even worse since
deadlocks manifest during write time and a significant amount of work is
likely to be lost [4] [2]. With OCC, deadlocks cannot occur. However, hot-spot fields like an account’s debit or credit would experience many version validation failures under high load, causing transactions to restart. As with PCC, validation failures manifest during the write-phase of a transaction, and a significant amount of work is likely to be lost. Neither PCC nor OCC can ensure that modifications attempted during a transaction’s read-phase will prevail during the write-phase. Whereas PCC is prone to deadlocks, OCC suffers from its optimistic nature itself.
O$|$R$|$P$|$E resolves these drawbacks by allowing data to be classified into CC classes. For example, customer data such as the address or password can be controlled by a PCC that uses exclusive locks only [5]. Such a rigorous measure ensures ownership of data and should be used if the modified data belongs to a single transaction. For example, account data or master data should not be modified concurrently, and given the importance of this data a rigorous isolation is justified. The debit or credit of an account can be classified in CC class $R$, which guarantees no lost updates and no constraint violations. Such a guarantee is often sufficient for hot-spot fields. Class $E$ can be used to access an item’s stock, for example. Class $E$ is able to handle use cases such as reservations. It should be used if a guarantee is required during the read-phase that the changes will succeed during the write-phase. Class $O$ is the default class. It avoids blocking, and under normal load it represents a good trade-off between commit rate and abort rate.
Section II defines these four CC classes with the different data access strategies used by our mechanism. In the case of a conflict, class $O$ implements a first-committer-wins strategy, class $R$ implements a first-n-committers-win strategy, class $P$ implements a first-reader-wins strategy, and class $E$ implements a first-n-readers-win strategy. The number $n$ is determined by the semantics of the accessed data, e.g., by database constraints. According to the classes, the mechanism is called O$|$R$|$P$|$E. The “$|$” indicates the demarcation between the data classes.
Section III proves the correctness of the model. Section IV briefly describes the prototype implementation. Section V tests the prototype with various workloads under static class assignment; the results highlight an advantage of O$|$R$|$P$|$E: it gives an application the flexibility to choose the most suitable CC mechanism, thereby significantly increasing the commit rate and outperforming optimistic SI. The run-time adaptation mechanism and its adaptation rules are presented in Section VI, and Section VII evaluates the prototype under adaptation; the results are discussed and the behavior is illustrated with time diagrams. Section VIII summarizes related work and compares it to our model. Finally, the paper draws some conclusions and provides an outlook to future work in Section IX.
## II Model
The model relies on disconnected transactions and 4 CC classes, which are
defined in the following.
### II-A Transaction
To support long-lived and disconnected data processing, which both support scalability, O$|$R$|$P$|$E models a transaction as a disconnected transaction $\tau$ with separate read- and write-phases, i.e., no further read after the first write operation (see Definition 1, taken from [2]). To disallow blind writes, O$|$R$|$P$|$E guarantees that in addition to the value of a data field, its version has to be read, too.
###### Definition 1:
Disconnected Transaction:
1. Let $ta$ be a flat transaction that is defined as a pair $ta=(OP,<)$ where $OP$ is a finite set of steps of the form $r(x)$ or $w(x)$ and ${<}\,(\subseteq OP\times OP)$ is a partial order.
2. A disconnected transaction $\tau=(TA^{R},TA^{W})$ consists of two disjoint sets of transactions: $TA^{R}=\{ta^{R}_{1},\ldots,ta^{R}_{i}\}$ to read and $TA^{W}=\{ta^{W}_{1},\ldots,ta^{W}_{j}\}$ to write the proposed modifications back.
3. A transaction has to read any data item $x$ before being allowed to modify $x$ (no blind writes).
4. If a transaction only reads data, it has to be labeled as read-only.
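To make Definition 1 concrete, the following is a minimal Java sketch (our own illustration, not the paper's prototype code; all names are hypothetical) of a disconnected transaction: the read-phase records item versions, and the write-phase later submits the complete write-set at once.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a disconnected transaction (Definition 1):
// separate read- and write-phase, no reads after the first write.
class DisconnectedTransaction {
    // Versions recorded during the read-phase; reading the version is
    // mandatory so that blind writes are impossible (item 3).
    private final Map<String, Long> readSet = new HashMap<>();
    // Proposed modifications, submitted as one batch in the write-phase.
    private final Map<String, Object> writeSet = new HashMap<>();

    void read(String item, long version) {
        readSet.put(item, version);
    }

    void write(String item, Object newValue) {
        if (!readSet.containsKey(item)) {
            throw new IllegalStateException("blind write on " + item);
        }
        writeSet.put(item, newValue);
    }

    // Item 4: a transaction that never writes is labeled read-only.
    boolean isReadOnly() {
        return writeSet.isEmpty();
    }
}
```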
### II-B CC Classes
Class $O$ is the default class and is implemented by an optimistic SI
mechanism, which is advantageous since reads do not block writes and non-
repeatable or phantom phenomena do not happen. However, SI is not fully
serializable [6] [7].
As stated, the drawback of optimistic mechanisms prevails if load increases, because many transactions may abort during their validation at commit time. An abort at commit time is expensive, because a significant amount of work might be lost, a circumstance that is particularly critical for long-lived transactions (see [2]).
Regarding the strategy, optimistic SI follows a “first-committer-wins” semantics, which reveals another drawback of $O$: the lack of an option allowing a transaction to explicitly run as the owner of some data. Consider, for example, the private data of a user such as their password or address. A validation failure should be prevented by all means, since it would mean that at least two transactions try to concurrently update private data. Although technically this is a reasonable state, for this kind of data a pessimistic approach that acquires all locks at read time is more appropriate. Such a mechanism follows a “first-reader-wins” (ownership) semantics and directly leads to class $P$. The acquisition of exclusive locks at read time prevents deadlocks during write time. To prevent deadlocks altogether, strict sequential access with preclaiming (all locks are acquired before the first read) or sorted read-sets are possible mechanisms. Which mechanism is chosen to prevent or resolve deadlocks is unimportant for the correctness of O$|$R$|$P$|$E (see Section III). Preclaiming has its drawbacks concerning how early a lock has to be acquired. Sorted read-sets may be unfeasible due to limitations of the storage layer or the chosen index structure. The prototype (see Section IV) uses a Wait-For-Graph to prevent deadlocks during the read-phase of a transaction. Also, during our experiments (see Section V) the number of deadlocks was very small, because data classified in $P$ should have no concurrent modifications by definition.
The decision whether a data item is classified as $O$ or $P$ is based on the following properties [2]:
1. Mostly read ($mr$): Is the data item mostly read? If yes, there is no need for restrictive measures and the data item should be classified for optimistic validation; a low conflict probability is assumed.
2. Frequently written ($fw$): $fw$ is the opposite of $mr$.
3. Unknown ($un$): Neither $mr$ nor $fw$ applies, i.e., it is unknown whether an item is mostly read or mostly written, or access is approximately even.
4. Ownership ($ow$): Should accessing a data item explicitly cause the transaction to own this item for its lifetime?
###### Example 1:
Classify data items in class $O$ and $P$ (taken from [2]).
This example is based on the TPC-C [3] benchmark and its “New-Order” transaction. Note that an additional table Account has been introduced to keep track of a customer’s bookings (columns debit and credit). It also defines an overdraft limit (column limit). The following tables are used in our example: Customer (id, name, surname), Stock (StockId, ItemId, quantity), Account (AcctNo, debit, credit, limit), and Item (ItemId, name, unit, price). Table I shows an initial classification.
Attributes name, surname, and id of a customer are expected to be mostly read, but a transaction that modifies them should definitely be the owner. The id of a customer, like all ids, is expected to be modified rarely. If the id is modified, ownership is required. In principle, all business keys should be classified in $P$, because they are owned by the application provider (see Rule 1, 1)).
Stock.quantity is expected to be modified frequently ($fw$), and to prevent the situation where an item was marked as available during the read-phase but is no longer available at commit time due to concurrent transactions, it is also marked as $ow$. For the time being, however, quantity will be classified as an ambiguity (see also Rule 1, 3)), which will be discussed below.
The Account.credit and Account.debit of a customer’s account might be accessed
frequently depending on a customer’s activity and $un$ is a good choice.
However, since multiple transactions might concurrently update the balance,
and an owner is hardly identifiable, $\neg ow$ is chosen. So, it is also an
ambiguity (see Rule 1, 3)).
The Account.limit is the overdraft limit of a customer and expected to be
mostly read, hence, $mr$ is a good choice. Since it is neither owned by the
customer nor by others, $\neg ow$ is a good choice (see Rule 1, 2)).
Assuming the application is a high frequency trading application, Item.Price
might quickly become a bottleneck. An exact prediction is not possible though,
hence, $un$ is a good choice. Property $ow$ would not be a good choice,
because transactions of different components ($dc$) might simultaneously
calculate the price (see Rule 1, 3)).
TABLE I: Classification of Example 1.

$x$ | $mr$ | $fw$ | $un$ | $ow$ | CC class
---|---|---|---|---|---
Customer.name | 1 | 0 | 0 | 1 | $P$
Customer.surname | 1 | 0 | 0 | 1 | $P$
Customer.id | 1 | 0 | 0 | 1 | $P$
Stock.StockId | 1 | 0 | 0 | 1 | $P$
Stock.ItemId | 1 | 0 | 0 | 1 | $P$
Stock.quantity | 0 | 1 | 0 | 1 | A
Account.debit | 0 | 0 | 1 | 0 | A
Account.credit | 0 | 0 | 1 | 0 | A
Account.limit | 0 | 0 | 1 | 0 | A
Item.name | 1 | 0 | 0 | 1 | $P$
Item.unit | 1 | 0 | 0 | 1 | $P$
Item.price | 0 | 0 | 1 | 0 | A
The ambiguities $A$ of Example 1 (see class $A$ in Table I) highlight that classes $O$ and $P$ and their properties are not sufficient. In particular, hot-spot items such as Stock.quantity would benefit from a CC mechanism that allows many winners and resolves the drawbacks of OCC and PCC.
Laux and Lessner [8] propose the usage of a mechanism that reconciles conflicts (class $R$). Their approach is an optimistic variant of O’Neil’s [9] Transactional Escrow Method (TEM). Both approaches exploit the commutativity of write operations. If operations commute, it is irrelevant which operation is applied first, as long as the final state can be calculated (see [8] [2] for further details) and no constraint is violated.
Unlike TEM, the reconciliation mechanism requires a dependency function. Consider, for example, two transactions that update an account and both read an initial amount of 10€; one credits 20€ and the other debits 10€. Once both have committed, it is relevant that no constraint was violated at any time and that the final amount is 20€. Usually, a database would write the new state for each transaction, causing a lost update. A dependency function would actually add or subtract the amount (the delta!) and would always take the latest state as input. In other words, reconciliation replays the operation in case of a conflict. However, this is only possible if no further user input is required. In the example above this means the user wants to credit 20€ (or debit 10€) independent of the account’s amount, as long as no constraint is violated! Another requirement is that each dependency function has to be compensatable (see also [2]).
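A minimal Java sketch of such a dependency function, under the assumptions of the account example above (the names are ours, not from the paper): the delta is replayed against the latest committed state instead of overwriting it with an absolute value.

```java
// Hypothetical sketch of the dependency function d(x, xRead, xNew) = x + (xNew - xRead).
@FunctionalInterface
interface DependencyFunction {
    // x: latest committed state, xRead: value the transaction read,
    // xNew: value the transaction intended to write.
    double apply(double x, double xRead, double xNew);
}

class ReconciliationDemo {
    static final DependencyFunction ADDITIVE = (x, xRead, xNew) -> x + (xNew - xRead);

    public static void main(String[] args) {
        double committed = 10.0;             // initial amount: 10 EUR
        double t1Read = 10.0, t1New = 30.0;  // t1 credits 20 EUR
        double t2Read = 10.0, t2New = 0.0;   // t2 debits 10 EUR
        // t1 commits first; t2's conflicting write is reconciled by replaying
        // its delta against the latest committed state.
        committed = ADDITIVE.apply(committed, t1Read, t1New); // 30.0
        committed = ADDITIVE.apply(committed, t2Read, t2New); // 20.0
        System.out.println(committed);       // prints 20.0: no lost update
    }
}
```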
The reconciliation mechanism [8] follows a “first-n-committers-win” semantics
and the number of winners $n$ is solely determined by constraints. The
correctness of the mechanism is proven in [8], which also introduces “Escrow
Serializability”, a notion for semantic correctness.
TEM grants guarantees to transactions during their read-phase. For example, a
reservation system is able to grant guarantees to a transaction about the
desired number of tickets as long as tickets are available. The consequence is
that transactions need to know their desired update in advance (see [9] for
further details).
Whereas TEM [9] is pessimistic (constraint validation during the read phase)
and works for numerical data only, Reconciliation [8] is optimistic
(constraint validation during the write phase) and works for any data as long
as a dependency function is known. The proof that $E$, like $R$, is escrow
serializable can be found in [2].
The decision whether an item is a member of $R$ or $E$ is based on the following properties:
1. $con$: Does a constraint exist for this data item?
2. $num$: Is the type of the data item numeric?
3. $com$: Are operations on this data item commutative?
4. $dep$: Is a dependency function known for an operation modifying the data item?
5. $in$: Is user input independence given for an operation modifying the data item?
6. $gua$: Is a guarantee needed that a proposed modification will succeed?
###### Rule 1:
Derivation of CC classes for data item $x$
1. $ow\rightarrow$ classify $x$ in $P$ (identify $P$).
2. $\neg ow\wedge mr\rightarrow$ classify $x$ in $O$ (identify $O$).
3. All other combinations of $ow$ and $mr$: classify $x$ in $A$ (ambiguity).
4. $com\rightarrow$ classify $x$ in $E\vee R$:
   (a) $(con\wedge num\wedge com\wedge gua)\rightarrow$ classify $x$ in $E$ (identify $E$).
   (b) $(in\wedge dep\wedge com)\rightarrow$ classify $x$ in $R$ (identify $R$).
5. $x\in A\rightarrow$ item $x$ will eventually be in $O$.
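Rule 1 can be read as a simple decision procedure. The following compact Java sketch is one possible encoding (our own, not part of the paper's prototype); it orders the checks so that the $E$/$R$ resolution of ambiguities (see Example 2 below) precedes the $P$/$O$ assignment, which reproduces the classifications of Tables I and II.

```java
// Hypothetical encoding of Rule 1 as a decision procedure.
enum CCClass { O, R, P, E }

class RuleOneClassifier {
    static CCClass classify(boolean mr, boolean ow,
                            boolean con, boolean num, boolean com,
                            boolean dep, boolean in, boolean gua) {
        // Rule 1, 4): commutative items can be resolved to E or R; checked
        // first so hot-spot fields such as Stock.quantity land in E.
        if (com) {
            if (con && num && gua) return CCClass.E; // Rule 1, 4a)
            if (in && dep) return CCClass.R;         // Rule 1, 4b)
        }
        if (ow && mr) return CCClass.P;              // Rule 1, 1), cf. Table I
        if (mr && !ow) return CCClass.O;             // Rule 1, 2)
        return CCClass.O;                            // Rule 1, 3)+5): ambiguity defaults to O
    }
}
```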
###### Example 2 (Classification of data items in $R$ and $E$):
The ambiguities of Table I are the input for this example. Table II shows the
result of the classification of these ambiguities.
`Stock.quantity` has a constraint $value>0$ and is numeric. The dependency function $dep$ is known, too. As stated above, a dependency function performs a context-dependent write. For example, dependency function $d$ would be $d(x,x_{read},x_{new})=x+(x_{new}-x_{read})$. User input independence $in$ is not given: if placing the order fails at the end, a replay would also fail, so class $R$ is not an option. Since an order requires a guarantee that the requested amount of items remains available, Rule 1, 4a) applies.
`Account.credit` and `Account.debit` are classified as $R$. Property $dep$ is
known, because operations are either additions or subtractions. Property $in$
is given, because the account has to be updated if the order is placed and no
constraint is violated. As the updates follow a dependency function they can
be reconciled and should not raise an exception. Again, only a constraint
violation such as an overdraft can cause the abort. Rule 1, 4b) applies.
`Item.price` depends on a variety of parameters including the last price itself. As a result, a price update might not be commutative. `Item.price` therefore remains ambiguous and stays in $O$, because $O$ is the default class. Rule 1, 5) applies.
TABLE II: Illustrative classification of ambiguities of Example 1.
$x$ | $con$ | $com$ | $num$ | $dep$ | $in$ | $gua$ | CC class
---|---|---|---|---|---|---|---
Stock.quantity | 1 | 1 | 1 | 1 | 0 | 1 | $E$
Account.credit | 1 | 1 | 1 | 1 | 1 | 0 | $R$
Account.debit | 1 | 1 | 1 | 1 | 1 | 0 | $R$
Item.price | 0 | 0 | 1 | 1 | 0 | 0 | $O$
## III Correctness
A transaction potentially runs under four different CC mechanisms. Due to the CC classes’ individual semantics, each class also has a different notion of a conflict. In any case, two read operations are never in conflict, because read operations do not alter the database state and hence are commutative [10].
Usually, a conflict is given if two operations access the same data item, the corresponding transactions overlap in their execution time, and at least one operation writes the data item [5].
correct definition of a conflict, for $R$ and $E$ it is not, because both can
resolve certain write conflicts. The resolution of conflicts is a key aspect
and advantage of SCC, and SCC questions the seriousness of a conflict. In
other words, the meaning of a read-write or write-write conflict is
interpreted. For $R$ and $E$ only a constraint violation is a conflict.
Moreover, the state read by an operation is assumed to be irrelevant,
otherwise commutativity is not given. It follows that any final serialization
graph $SG-R$ and $SG-E$ for class $R$ and $E$ is non-cyclic because potential
conflicts are reconciled (see [2] for a thorough discussion).
For $P$, the common definition of a conflict is correct. If a transaction wants to modify an item $p$ (let $p\in P$), it has to acquire a lock on $p$ during its read-phase to become the exclusive owner. If not, the transaction performs a blind write, which is disallowed according to Definition 1. Hence, no write in $P$ can encounter a concurrent write or read, because if a transaction writes $p$ it has to be the exclusive owner of $p$.
Consider the following (incorrect) schedule, for example ($disc_{i}$ and
$disc_{j}$ denote the disconnect phase of transaction $i$ (resp. $j$) and let
$o\in O$ and $p\in P$):
$r_{i}(o),\,r_{j}(p),\,r_{j}(o),\,disc_{j},\,w_{j}(o),\,c_{j},\,r_{i}(p),\,disc_{i},\,w_{i}(p),\,c_{i}$ (1)
In this schedule transaction $i$ reads $o$ before $j$ modifies $o$ and
transaction $j$ reads $p$ ($r_{j}(p)$) before $i$ writes $p$ ($w_{i}(p)$).
Usually, the ordering of transaction operations is visualized by a precedence graph as in Figure 1.
###### Definition 2 (Serialization Graph (SG)):
Let $S$ be a schedule of transactions. The Serialization Graph (aka Conflict
Graph) is a precedence graph where each node represents a transaction and each
directed edge between two transactions represents a precedence of conflicting
operations [11] [12] on a data item.
It is well known that a transaction schedule is conflict serializable if and
only if the SG is acyclic [11] [13]. If the SG of a transaction schedule
includes a cycle then no equivalent serial schedule exists and, therefore,
this schedule is not serializable [11].
The above Schedule 1 leads to the following cyclic SG of Figure 1.
Figure 1: The cyclic serialization graph from Schedule (1).
Transaction $i$ precedes $j$ in class $O$ and $j$ precedes $i$ in $P$. Having
opposite orders, i.e., $i\rightarrow j$ in one, but $j\rightarrow i$ in
another class violates serializability, because globally $i$ precedes $j$,
which in turn precedes $i$.
A transaction that reads a data item in $O$ has to validate the value at
write-time, even if the write is only for an item $p\in P$. The operation
$w_{i}(p)$ causes a validation failure on item $o$ because transaction $i$ has
read a value of $o$ that transaction $j$ has meanwhile updated. This is a
conflict between transactions $i$ and $j$ in $O$ and produces a validation
failure. Commit $c_{i}$ is wrong in the schedule above and would never happen
in O$|$R$|$P$|$E. Hence, the above schedule looks as follows in O$|$R$|$P$|$E:
$r_{i}(o),\,r_{j}(p),\,r_{j}(o),\,disc_{j},\,w_{j}(o),\,c_{j},\,r_{i}(p),\,disc_{i},\,w_{i}(p),\,a_{i}$ (2)
Even a deadlock in $P$ cannot create a cyclic graph between $O$ and $P$,
because at least a write is required to create a conflict in $P$. However,
since all deadlocks can only happen during the read phase of a transaction, no
conflict cycle involving a deadlock can happen in $P$.
Based on these initial findings it is possible to state Theorem 1. The
corresponding proof exploits that for $R$, $P$, and $E$ the corresponding
serialization graphs are non-cyclic.
###### Theorem 1:
Let $SG-G$ be the global serialization graph, which is the union of $SG-O$,
$SG-R$, $SG-P$, and $SG-E$. The global serialization graph $SG-G$ is non-
cyclic if $SG-O$ is non-cyclic.
###### Proof by contradiction.
Assume that $ta_{i}$ is serialized before $ta_{j}$ $(i\rightarrow j)$ in $SG-O$. In $P$, no other transaction can access an item in $P$ if transaction $ta_{i}$ has read this item. This is the consequence of the exclusive locks acquired during the read-phase in class $P$. The same argument applies to $ta_{j}$ as well, so it is impossible to have a serialization order $j\rightarrow i$ in $P$. Since $i$ and $j$ can be arbitrarily exchanged, there is a contradiction if $i\rightarrow j$ exists in one class and $j\rightarrow i$ in another. $SG-R$ and $SG-E$ are negligible because any conflict is finally reconciled and both serialization graphs are non-cyclic. ∎
###### Corollary 1:
$SG-O$ sets the global serialization order for $P$.
If a $ta$ does not modify data in $O$, then $P$ sets the order. If a $ta$ does
not modify data in $P$, then $R$ sets the order, because it is prone to
validation conflicts as opposed to $E$ that already has a guarantee to
succeed.
## IV Prototype Reference Implementation of O|R|P|E
The prototype of O$|$R$|$P$|$E is not a full database system: compared to a fully operational database, the backup and recovery functions are missing. Neither function influences the CC mechanism functionally; there is only a negative effect on performance while a backup or recovery is running. This applies in a similar way to any database management system with a single CC mechanism.
It was implemented in the Java programming language, and Figure 2 illustrates its architecture. A client API provides access to the data, and depending on the operation’s type, read or write, the operation is executed by a dedicated pool. The “Reads” and “Writes” pools represent a read- and a write-lane. In addition, a pool to handle termination (commit and abort) has been implemented. The Reads and Writes pools handle all incoming and outgoing operations, and the classification has been placed directly into the index. Depending on an item’s classification, the corresponding CC mechanism is plugged in. This placement allows the CC mechanism to be decided with a single read operation, which imposes a negligible overhead. Once an item has been read or written, the additional “read-callback” and “write-callback” pools deliver the results back to the clients. A WFG (Wait-for-Graph) pool is used to handle access to the WFG. Deadlocks may occur during the read-phase of a transaction if the transaction accesses data items in class $P$. Deadlocks can only occur in class $P$ during the read-phase, because lock acquisition is not globally ordered.
Having separate pools and callbacks to handle incoming and outgoing operations means that the prototype supports disconnected transactions, because the entire communication is asynchronous. Figure 3 illustrates the message flow within the prototype. A read operation is passed to the “Reads” pool. Each read is executed asynchronously and the complete read-set is sent back to the client via a dedicated callback pool. To support asynchronous writes, a write operation is passed to the “Writes” pool, and once all writes have been applied the write-set is sent back to the client. Clients always send their complete write-set.
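The following Java sketch (an assumed skeleton with hypothetical names, not the actual prototype source) illustrates this pool-and-callback structure: reads and writes run on dedicated executors, and results are delivered asynchronously through separate callback executors, so a client never blocks between its read- and write-phase.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Hypothetical skeleton of the pool architecture of Figures 2 and 3.
class PoolSkeleton {
    private final ExecutorService reads = Executors.newFixedThreadPool(4);
    private final ExecutorService writes = Executors.newFixedThreadPool(4);
    private final ExecutorService readCallback = Executors.newSingleThreadExecutor();
    private final ExecutorService writeCallback = Executors.newSingleThreadExecutor();

    // Reads execute asynchronously; the complete read-set is delivered back
    // to the client via the dedicated read-callback pool.
    void submitReads(Map<String, Object> request, Consumer<Map<String, Object>> client) {
        CompletableFuture.supplyAsync(() -> executeReads(request), reads)
                         .thenAcceptAsync(client, readCallback);
    }

    // Clients always send their complete write-set as one batch.
    void submitWrites(Map<String, Object> writeSet, Consumer<Map<String, Object>> client) {
        CompletableFuture.supplyAsync(() -> executeWrites(writeSet), writes)
                         .thenAcceptAsync(client, writeCallback);
    }

    private Map<String, Object> executeReads(Map<String, Object> rs) { return rs; }   // placeholder
    private Map<String, Object> executeWrites(Map<String, Object> ws) { return ws; }  // placeholder
}
```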
Data is kept solely in memory and no data is written to disk unless the
operating system needs to swap data to disk due to memory limitations. The
only output to disk is to write logging events that are used for performance
evaluation. Other functionality that has been implemented includes:
* CC mechanisms $O$, $R$, $P$, and $E$,
* support for constraints,
* support for item selects, range-selects, updates, and inserts (the deletion of an item is implemented as an update that invalidates the data item),
* a WFG implementation.
Figure 2: Architecture of the prototype.
Figure 3: Message flow of the prototype.
## V Performance Study with Static Data Class Assignment
The performance study has been carried out based on the prototype presented in
the previous section (Section IV). As benchmark, the TPC-C++ benchmark [7] has
been chosen, because we also conducted a study comparing O$|$R$|$P$|$E with
Serializable SI, which is beyond the scope of this paper.
The data used for this study is similar to that of Examples 1 and 2. Each data item was statically assigned to a CC class as shown in Table III. Aspects of a dynamic assignment and its performance effects will be studied in the next section.
The performance study measures the response time (resp.-time), the abort rate (ab-rate), the commits per second, and the degree of concurrency (deg. conc.). The degree of concurrency is the quotient of the estimated serial execution time over the elapsed time of the experiment. In addition, the arrival rate $\lambda$ of new transactions has been varied to find the optimum (minimized abort rate and response time, maximized degree of concurrency). This optimum $\lambda$ has been taken to conduct fair and calibrated comparisons. Each experiment has been repeated three times and the mean value is reported. Values refer to the execution of a transaction mix (“deck”) consisting of 42 New Order, 42 Payment, 4 Delivery, 4 Credit Check, 4 Update Stock Level, and 4 Read Stock Level transactions (see [7] [3] [2]).
TABLE III: TPC-C: classification of data items.

Item | CC Class | operation
---|---|---
Customer | $P$ | read
CustomerCredit | $P$ | update
CustomerBalance | $R$ | read
Customer | $P$ | read
CustomerBalance | $R$ | update
Customer | $P$ | read
CustomerCredit | $P$ | read
StockQuantity | $E$ | update
Customer | $P$ | read
CustomerBalance | $R$ | update
WarehouseYTD | $R$ | update
DistrictYTD | $R$ | update
StockQuantity | $E$ | read only
StockQuantity | $E$ | update
Figure 4 illustrates the abort rate and degree of concurrency for SI under
full contention and shows the drawbacks of optimistic SI: the higher the
number of concurrent transactions, the higher the abort rate. Also, the system
starts thrashing if the degree of concurrency drops below one, which is the
point where a serial execution outperforms a concurrent one. Table IV shows that
for SI and O$|$R$|$P$|$E with the same $\lambda$ (tests #1-6 and #10-15) the
response-time increases with larger $\lambda$, which is expected and normal
behavior. The direct comparison reveals that O$|$R$|$P$|$E has a $3-38$ times
better response time, which shows that SI is over-strained for a workload of
$\lambda\geq 200$. For $\lambda=1000$ tas/sec the response time is about 3
times higher for SI and the degree of concurrency is only half compared to
O$|$R$|$P$|$E. A good degree of concurrency with a low abort rate is given by
$\lambda=133$ (see Table IV #3).
Figure 4: TPC-C++, optimistic SI (class $O$), abort rate and degree of
concurrency.
Figure 5 shows the response-time and degree of concurrency for O$|$R$|$P$|$E
for increasing $\lambda$. Unlike SI, O$|$R$|$P$|$E has no aborts caused by
serialization or validation conflicts due to the classification of hot-spot
data items in $R$ or $E$, which prevents $ww$-conflicts. As shown by Figure 5,
O$|$R$|$P$|$E reaches its best degree of concurrency at $\lambda=1000$ transactions per second, achieving 227 commits per second (see Table IV, #15).
Figure 5: Response time and degree of concurrency for increasing $\lambda$ for O$|$R$|$P$|$E.
Figure 6: TPC-C++, SI and O$|$R$|$P$|$E: response-time and degree of concurrency for $\lambda=133$ (SI) and $\lambda=1000$ (O$|$R$|$P$|$E).
The comparison of O$|$R$|$P$|$E and SI uses $\lambda=133$ (Table IV #3 and #7-9) for SI and $\lambda=1000$ (Table IV #15-18) for O$|$R$|$P$|$E. For SI, $\lambda=133$ was considered the best trade-off with respect to the degree of concurrency; for O$|$R$|$P$|$E, $\lambda=1000$ was considered the best trade-off.
Figure 6 illustrates the degree of concurrency and the response time for data of class $O$ with SI and O$|$R$|$P$|$E if both use the $\lambda$ that reflects the best trade-off. As the figure shows, SI has a better response-time for 1000, 2000,
and 3000 concurrent transactions, but then suddenly undergoes thrashing and
the response-time grows exponentially. However, O$|$R$|$P$|$E shows a moderate
and stable increase of the response-time even for 4000 concurrent
transactions.
With a workload of $2000$ transactions the degree of concurrency is $3.41$ for
O$|$R$|$P$|$E versus $1.87$ for SI. The average response time is only $388$
msec for SI and $1551$ msec for O$|$R$|$P$|$E. It would be wrong to conclude
that SI has a better performance than O$|$R$|$P$|$E because for a comparison
$\lambda$ has to be taken into account. In the test O$|$R$|$P$|$E had a 7.5
times higher transaction arrival rate than SI ($\lambda=1000$ as opposed to
$\lambda=133$ for SI). At $4000$ concurrent transactions O$|$R$|$P$|$E
outperforms SI in terms of response time by a factor of $3.7$ (see Figure 6)
and the degree of concurrency is $2.6$ times better. Hence, under high
contention O$|$R$|$P$|$E has the lowest abort rate and considering the trade-
off between concurrency and response time, O$|$R$|$P$|$E outperforms SI
significantly. Furthermore, its abort rate is nearly independent of the
contention.
TABLE IV: Measured values of experiments #1-18 (response time in msec).

# | tas | $\lambda$ | resp.-time | ab. rate | commits/second | deg. conc.
---|---|---|---|---|---|---
SI | | | | | |
1 | 1000 | 80 | 43 | 2% | 71 | 1.39
2 | 1000 | 100 | 84 | 3% | 80 | 1.57
3 | 1000 | 133 | 309 | 5% | 82 | 1.63
4 | 1000 | 200 | 1640 | 20% | 62 | 1.50
5 | 1000 | 400 | 2091 | 26% | 61 | 1.57
6 | 1000 | 1000 | 2464 | 27% | 62 | 1.61
7 | 2000 | 133 | 388 | 9% | 90 | 1.87
8 | 3000 | 133 | 522 | 8% | 91 | 1.89
9 | 4000 | 133 | 23416 | 46% | 22 | 0.79
O$|$R$|$P$|$E | | | | | |
10 | 1000 | 80 | 5 | 4% | 69 | 1.01
11 | 1000 | 100 | 5 | 4% | 85 | 1.24
12 | 1000 | 133 | 8 | 4% | 108 | 1.58
13 | 1000 | 200 | 14 | 4% | 150 | 2.19
14 | 1000 | 400 | 213 | 4% | 217 | 3.18
15 | 1000 | 1000 | 724 | 4% | 227 | 3.32
16 | 2000 | 1000 | 1551 | 4% | 234 | 3.41
17 | 3000 | 1000 | 3704 | 4% | 184 | 2.69
18 | 4000 | 1000 | 4968 | 5% | 174 | 2.55
## VI Run-time Adaptation
The attempt to manually classify data may finally result in ambiguous classifications, where the default class $O$ applies (see Rule 1, 5)). But high contention can quickly cause performance issues for data classified in $O$. Even though class $P$ is more expensive, because $P$ requires locking during the read-phase, it will lead to better performance in this situation, as the locking queues the transactions and processes them successfully.
An automatic and dynamic adaptation of the classification when transactional
load or data usage changes would make the initial classification less critical
and O$|$R$|$P$|$E could choose the optimal CC-mechanism based on the current
situation.
A solution for automatic run-time adaptation is presented in this section. It re-classifies a data item of default class $O$ to class $P$ if the commit rate drops below an adjustable threshold. With this measure the commit rate increases again, at the price of a longer response time. When the transactional load decreases and the commit rate exceeds the threshold again, the item switches back to its original class $O$.
Data originally classified in $P$ will not be re-classified to $O$ when the load is low. This is not feasible, because an item initially in $P$ has to remain in $P$ due to the item’s ownership semantics. An adaptation at run-time that results in $O$ would contradict the ownership semantics, since a transaction would no longer request locks during its read-phase, which is mandatory to comply with the ownership semantics (see Rule 1, 1)).
At first glance, an adaptation from $E$ to $R$ seems reasonable if the probability of an invariant violation (PIV) is low. It would save additional overhead, because invariant conditions do not have to be validated at read-time in $R$, as they are in $E$. However, this is only a good decision if contention is low. Taking this decision under high workload will result in a much longer response time, because the response time for class $R$ grows much faster than for class $E$. With high contention, the probability of constraint violations increases, but its exact determination is application dependent. Classifying a data item in $E$ is only justified if an aborted transaction is more costly than retrying the transaction, i.e., the transaction needs a guarantee to succeed, which leads to class $E$ from the beginning (Rule 1, 4a)).
### VI-A Adaptation Criteria
The run-time adaptation is based on the commit rate $cr$. To measure and analyze $cr$, a statistical model for the transactional system is necessary. According to [14] [15] [16], a transactional system is modeled as an open system whose transactional arrival rate is a Poisson process. The inter-arrival times of transactions are assumed to be independent, which has the advantage that the conflict rate (the term conflict is stated more precisely below) can be modeled around a single variable $\lambda$ that represents the number of arrivals in relation to the time window. A Poisson process has a conflict probability mass function $PC_{x}(X=k)$ given by Equation (3):
$PC_{x}(X=k)=\frac{\lambda^{k}}{k!}e^{-\lambda}$ (3)
For example, if on average $100$ transactions arrive within one Time Window (TW), the probability that $k=50$ transactions access item $x$ within a TW is given by Formula (3). The arrival rate $\lambda$ is stated in relation to time, for example, per second; i.e., a transaction that accesses $x$ during that second encounters other conflicting concurrent transactions with probability $PC_{x}(X\geq 2)=\sum_{k=2}^{\infty}\frac{\lambda^{k}}{k!}e^{-\lambda}$.
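As a quick sanity check (our own worked example, not from the original text), this tail probability has a closed form: $PC_{x}(X\geq 2)=1-PC_{x}(X=0)-PC_{x}(X=1)=1-e^{-\lambda}(1+\lambda)$. For instance, if on average $\lambda=2$ transactions per TW access $x$, then $PC_{x}(X\geq 2)=1-3e^{-2}\approx 0.59$, i.e., conflicting concurrent access is more likely than not.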
Figure 7: Arrivals (workload) and time windows.
Figure 7 illustrates the usage of TWs as well as the arrivals (workload) in relation to time. The workload is, however, not constant over the lifetime of a transaction. Assuming a constant workload ignores that the workload, and hence $\lambda$, might suddenly change, in particular if transactions are long-running. Measuring the number of transactions terminating or committing during a time window is a means to detect and react to sudden changes in the workload, an idea borrowed from [14]. The length of the TW defines the sample rate and its sensitivity.
The commit rate $cr$ is used as an indicator for the performance of the optimistic CC mechanism of class $O$. If $cr$ drops below a threshold, there are more aborts due to validation failures, and class $P$ would be a better choice to increase $cr$.
$cr:=\frac{\#\text{committed tas}/\text{TW}}{(\#\text{terminated tas}-\#\text{re-class. aborts})/\text{TW}}$ (4)
For each TW the commit rate $cr$ is calculated as the number of committed transactions divided by all terminated transactions except those that were aborted due to a re-classification. The commit rate $cr$ is identical to the effective commit rate $cr_{\text{eff}}$ (see Definition 5) if no adaptation occurs. Formula (4) is apparently insensitive to the length of the TW, but a longer TW tends to compute a smoother $cr$ and saves measuring overhead. We used a TW of $100$ msec, which delivered a good trade-off for the prototype implementation.
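As an illustration (hypothetical counter names, not the prototype's code), Formula (4) can be sampled per TW roughly as follows:

```java
// Hypothetical per-TW sampler for the commit rate of Formula (4).
class CommitRateSampler {
    private int committed;     // #committed tas in the current TW
    private int terminated;    // #terminated tas (commits + aborts) in the TW
    private int reclassAborts; // aborts caused only by re-classification

    void onCommit() { committed++; terminated++; }

    void onAbort(boolean causedByReclassification) {
        terminated++;
        if (causedByReclassification) reclassAborts++;
    }

    // Called at the end of each TW (e.g., every 100 msec).
    double sampleAndReset() {
        int denom = terminated - reclassAborts;
        // No countable terminations in this TW: treat as a healthy window.
        double cr = denom == 0 ? 1.0 : (double) committed / denom;
        committed = terminated = reclassAborts = 0;
        return cr;
    }
}
```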
The adaptation policy is given by Rule 2, which uses a threshold $\gamma$ for the target commit rate and a hysteresis $\delta$ to avoid constant switching (thrashing) between both classes. When a data item is re-assigned during an active transaction, the transaction is aborted if the change is from $O$ to $P$. In the opposite case, the transaction can continue without conflicts, because the write-phase will succeed since the data item is already exclusively locked for that transaction.
###### Rule 2:
General Adaptation $O\rightarrow P$
Let $cr$ be the commit rate, $\delta$ the hysteresis, and $\gamma$ the target
commit rate. Adaptation is according to the following rules:
1. When $cr$ decreases and $O$ is the current class for an item $x$: if $cr<\gamma-\delta$ then $P$ is the new classification of $x$.
2. When $cr$ increases and $P$ is the current class for an item $x$: if $cr>\gamma+\delta$ then $O$ is the new classification of $x$.
3. Reclassification during a transaction:
   a) If a $ta$ reads at a time when $O$ is the current class, but will write at a time when $P$ is the current class, $ta$ is aborted (non-avoidable crash) to maintain consistency.
   b) If a $ta$ reads at a time when the data item is in $P$ and writes when it is in $O$, the success of the write is guaranteed because the data is exclusively locked since read-time.
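Rule 2 amounts to a small hysteresis controller. A hedged Java sketch (with assumed names; the real prototype may differ) of items 1) and 2):

```java
// Hypothetical sketch of Rule 2: hysteresis-based switching between O and P.
enum ItemClass { O, P }

class RuleTwoAdaptor {
    private final double gamma; // target commit rate, e.g. 0.9
    private final double delta; // hysteresis, e.g. 0.05

    RuleTwoAdaptor(double gamma, double delta) {
        this.gamma = gamma;
        this.delta = delta;
    }

    // Called with the commit rate sampled at the end of each TW.
    ItemClass adapt(ItemClass current, double cr) {
        if (current == ItemClass.O && cr < gamma - delta) return ItemClass.P; // Rule 2, 1)
        if (current == ItemClass.P && cr > gamma + delta) return ItemClass.O; // Rule 2, 2)
        return current; // within the hysteresis band: keep the current class
    }
}
```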
Adaptation solely relies on the commit rate $cr$. The arrival rate $\lambda$, and hence the conflict probability, is not measured, which would be much more difficult; this supports the decision to simply assume a Poisson distribution for the transaction arrivals.
Figure 8: Example run-time adaptation scenario with decreasing $cr$ in TW
$(t_{1},t_{2})$, reclassification at $t_{2}$ to $P$ and increasing $cr$ in TW
$(t_{2},t_{3})$ and $(t_{3},t_{4})$ and switch back to class $O$ at $t_{4}$.
Figure 8 illustrates how the adaptation works if the commit rate decreases and
later increases again. During the first TW ($t_{2}-t_{1}$) the commit rate
$cr$ drops to $1/8$ because only one out of 8 transactions was successful. Two
transactions ($ta_{9},ta_{10}$) have not terminated yet.
At the end of epoch 1 the commit rate is compared to $\gamma-\delta$, and as $cr$ is below the threshold, data item $x$ is re-classified to $P$. Transaction $ta_{9}$ will later abort due to a constraint violation, and $ta_{10}$ has to abort because of the re-classification to $P$. Now, for the following transactions the locking mechanism for $P$ applies. One consequence is that $ta_{11}$, $ta_{12}$, and $ta_{13}$ execute mostly sequentially. The commit rate grows in the following TW to $3/4$, but this is not sufficient to switch $x$ back to class $O$. During the third TW ($t_{4}-t_{3}$) the commit rate rises to $cr=2/2>\gamma+\delta$ and the (initial) optimistic CC (class $O$) is re-established.
The following history describes the example of Figure 8 more formally:
$H=\underbrace{r_{1}(x),r_{2}(x),r_{3}(x),\ldots,r_{10}(x),w_{1}(x),c_{1},w_{2}(x),a_{2},w_{3}(x),a_{3},\ldots}_{\text{commit rate decreases}},\ \text{adapt to }P,\ a_{10},a_{9},\ \underbrace{l_{11}(x),r_{11}(x),w_{11}(x),c_{11},l_{12}(x),r_{12}(x),w_{12}(x),c_{12},l_{13}(x),r_{13}(x),w_{13}(x),c_{13},\ldots}_{\text{commit rate increases}}$
The history $H$ shows in the first phase 10 transactions $ta_{1},ta_{2},\dots,ta_{10}$ accessing $x$. They first read $x$ ($r_{1}(x),r_{2}(x),r_{3}(x),\ldots,r_{10}(x)$) and then try to write $x$ ($w_{1}(x),w_{2}(x),\ldots$). In the given scenario only $ta_{1}$ can commit ($c_{1}$); all others have to abort ($a_{2},a_{3},\ldots$) because too many transactions try to concurrently update $x$. This leads to a sudden decrease in the commit rate, $cr=1/8$, because only $ta_{1}$ was successful and $ta_{9}$ and $ta_{10}$ have not yet updated $x$, i.e., their updates are still pending. If we assume a threshold $\gamma$ of $0.8$ and a hysteresis $\delta$ of $0.1$, then $cr<\gamma-\delta$, which triggers the adaptation according to Rule 2, 1).
After the adaptation has been carried out, $ta_{10}$ has to abort (Rule 2, 3a)) if it tries to update $x$. The abort $a_{10}$ appears in the history after the adaptation, even though item $x$ is now classified in $P$. Transaction $ta_{10}$ has to abort because it has not locked $x$ before reading $x$ ($r_{10}(x)$). If $ta_{10}$ did not abort, it would risk a lost update, because $ta_{10}$ would overwrite the last committed state, since $P$ performs no version validation. Even with version validation, $ta_{10}$ would be very likely to abort, because the probability of a validation failure is high in this situation.
Let us assume that transaction $ta_{9}$ accesses other data besides $x$ and its validation fails due to a constraint violation. This leads to an abort of $ta_{9}$. The distinction of the abort reason is important here, as it determines whether the abort is counted for the commit rate.
After the adaptation to $P$, newly arriving transactions apply a locking scheme for data item $x$, which is indicated by $l_{11},l_{12},\ldots$. The commit rate increases again because transactions $ta_{11},ta_{12},ta_{13}$ succeed and commit ($c_{11},c_{12},c_{13}$). In fact, all following transactions succeed except those which violate a constraint.
If we choose the Time Window TW to start just before $ta_{11}$ arrives, the commit rate $cr$ rises with each committed transaction. Class $O$ is not reestablished at the end of this TW, despite the next three transactions succeeding, because $cr=3/4\leq\gamma+\delta=0.9$. The class assignment remains unchanged, and the following TW ($t_{4}-t_{3}$) will reestablish class $O$ because $cr=2/2$.
The adaptation mechanism proposed in Rule 2 maximizes the commit rate, as seen in the previous example. But due to the restrictive locking policy the response time increases, as the execution tends to be serial; in the worst case, under enduring contention, the growth is exponential. But what if the maximum response time is limited, for example, by Service Level Agreements (SLAs), and penalties apply for exceeding the maximum acceptable response time? The SLA penalties may outweigh the costs for aborts.
In this case, maximizing the commit rate as the only criterion is not a good strategy, since it increases costs. To prevent unacceptable response times, a barrier (denoted as $\beta$) is used that regulates the adaptation; i.e., once $\beta$ is reached, re-classification to $O$ takes place despite a low commit rate, and the abort rate starts to increase, which in turn leads to shorter response times for the remaining successful transactions. The concrete value of $\beta$ is application dependent. Its general purpose is to minimize costs, i.e., if the abort costs are lower than the costs for exceeding the response time, more aborts are acceptable until the ratio turns over.
Application-specific requirements that set $\beta$ are out of the paper’s
scope, but to allow applications to limit the adaptation, $\beta$ is
incorporated in O$|$R$|$P$|$E (see Rule 3). Applications can now set $\beta$
to limit the response time and, at run-time, continuously monitor and adapt
the achieved commit rate as well as the response time as measured by the
applications themselves. Further, applications can increase $\beta$ at run-
time appropriately. This way, applications can determine their own equilibrium
between commit rate and response-time.
The challenge is the estimation of the expected mean response time $rt_{\text{est}}$, which implies predicting the workload. As stated above, this is complicated, if not impossible, in a general and dynamic way. O$|$R$|$P$|$E circumvents this problem and measures the time between a read and the corresponding write if the current classification is $P$. Furthermore, adaptation no longer calculates $cr$ at the end of the current TW; instead, each termination (commit or abort) triggers the adaptation. A useful fixed TW is difficult to choose: if the TW is too short, the overhead is considerable and degrades performance; if the TW is too long, the adaptation is too slow.
To estimate the future workload, the terminating transaction snapshots the lock queue’s size if $P$ is the current class. The current queue size together with the average time between read and write gives a good indication of the expected workload. Because the transaction has to notify all waiting transactions about the ongoing unlock and is already the current owner of the lock queue, there is no need for further synchronization, and the overhead is considerably low, but of course exists. It is a price that has to be paid for run-time adaptation.
Finally, the number of notified transactions multiplied by the average time between a read and the corresponding write is used as an approximation for $rt_{\text{est}}$. The rationale is that if $q$ transactions are waiting to execute and the mean time between read and write is ø$(mt)$, then for newly arriving transactions $rt_{\text{est}}$ is expected to be $rt_{\text{est}}=\text{ø}(mt)\times(q+1)$ because of the mostly sequential execution. Following this approach, O$|$R$|$P$|$E can balance commit rate and response time.
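As a small worked example (our own numbers, for illustration only): if ø$(mt)=200$ msec and $q=9$ transactions are waiting in the lock queue, a newly arriving transaction can expect $rt_{\text{est}}=200\times(9+1)=2000$ msec; with a barrier of, say, $\beta=1500$ msec, Rule 3 below would therefore switch the item back to $O$ once the commit rate is also low.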
Transaction termination triggers adaptation; however, it is important to note that the adaptation is not executed as part of a transaction. This prevents the situation where a failed adaptation would also cause the transaction to abort.
###### Rule 3:
Adaptation $O\rightarrow P$ with barrier
Let $cr$ be the commit rate, $\delta$ the hysteresis, $\gamma$ the target
commit rate, and $\beta$ the response time barrier. Adaptation is according to
the following rules:
1. ($O\rightarrow P$): If $O$ is the current class for an item $x$ and $cr<\gamma-\delta$ and $rt_{\text{est}}<\beta$, then $P$ is the new classification of $x$.
2. ($P\rightarrow O$): If $P$ is the current class for an item $x$ and $cr$ is low ($cr<\gamma-\delta$) and $rt_{\text{est}}>\beta$, then $O$ is the new classification of $x$.
3. ($P\rightarrow O$): If $P$ is the current class for an item $x$ and $cr$ is high ($cr>\gamma+\delta$), then $O$ is the new classification of $x$.
4. Reclassification during a transaction:
   a) If a $ta$ reads at a time when $O$ is the current class, but is about to write at a time when $P$ is the current class, $ta$ is aborted (non-avoidable crash) to maintain consistency.
   b) If a $ta$ reads at a time when the data item is in $P$ and writes when it is in $O$, the success of the write is guaranteed because the data is exclusively locked since read-time.
Rule 3, 1) ensures that the commit rate stays sufficiently high as long as the response time is low. If the response time exceeds the limit $\beta$ and $cr$ is (still) low, then Rule 3, 2) switches back to $O$. Rule 3, 3) ensures that when the commit rate is high, the default CC mechanism of class $O$ is chosen. For all other situations the classification remains unchanged.
Rule 3, 4) is the same as before: it ensures that a reclassification can take place during ongoing transactions. Reclassification is now triggered by two parameters, the commit rate $cr$ and the estimated response time $rt_{\text{est}}$.
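Extending the earlier Rule 2 sketch with the barrier gives roughly the following (again with assumed names; ItemClass is the enum from the Rule 2 sketch, and the real prototype may differ):

```java
// Hypothetical sketch of Rule 3: the barrier beta bounds the estimated
// response time rtEst = ø(mt) * (q + 1) and overrides the pure
// commit-rate policy of Rule 2.
class RuleThreeAdaptor {
    private final double gamma, delta, beta;

    RuleThreeAdaptor(double gamma, double delta, double beta) {
        this.gamma = gamma;
        this.delta = delta;
        this.beta = beta;
    }

    // meanReadWriteTime: ø(mt) in msec; queued: lock-queue size q observed
    // by the terminating transaction.
    ItemClass adapt(ItemClass current, double cr, double meanReadWriteTime, int queued) {
        double rtEst = meanReadWriteTime * (queued + 1); // mostly serial execution
        if (current == ItemClass.O && cr < gamma - delta && rtEst < beta)
            return ItemClass.P;                          // Rule 3, 1)
        if (current == ItemClass.P && cr < gamma - delta && rtEst > beta)
            return ItemClass.O;                          // Rule 3, 2)
        if (current == ItemClass.P && cr > gamma + delta)
            return ItemClass.O;                          // Rule 3, 3)
        return current;
    }
}
```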
## VII Performance under Adaptation
The performance study uses the implementation of O$|$R$|$P$|$E described in Section IV. Even though it is not a full database implementation with all features (no backup and no recovery functionality), it is sufficient for measuring the performance of O$|$R$|$P$|$E in different situations. Since backup and recovery are normally inactive, there is no impact on the concurrency mechanism. Therefore, the performance measurements would also be valid for a fully featured database system. Clearly, if backup or recovery were active, this would impair performance; this would also apply to our prototype.
The study analyzes different workload profiles, given as a sequence of workloads with a life-span of one second each. The workload is held constant for one second (called an epoch). The arrival rate $\lambda$ for the workload ranges from $6.66$ tas/sec up to a heavy overload of over $300$ tas/sec. These values have been chosen to show both the behavior of the overloaded system with frequent aborts and the behavior under moderate workload with a stable commit rate.
During one epoch (1 sec) the commit rate is measured 10 times (sample rate
$sr=10$/sec). For simplicity, all transactions read and write only one data
item, i.e., the worst case is simulated where an item in $O$ suddenly becomes
a bottleneck. The time unit in all simulations is milliseconds if not stated
otherwise.
To obtain a preliminary understanding, the first experiments study short
living transactions with no disconnect time during three epochs. Afterwards,
long living transactions with a random disconnect time $dt$ between 100 and
1000 milliseconds are analyzed over seven epochs. A disconnect time $dt$
within these bounds simulates typical situations.
Finally, barrier $\beta$ is enabled for the next set of experiments. The
setup of long living transactions and seven epochs is always the same except
for the response time barrier $\beta$, which varies between $1000$ and
$15000$ msec. We study the effects on commit rate $cr$ and response time
$rt$. Each experiment was executed three times.
### VII-A Short Living Transactions with Three Epochs
Table V lists our test scenarios and summarizes the results. The right column
of the table refers to the corresponding figures for a detailed analysis. The
four tests use a different arrival rate $\lambda$ for each epoch (one second
interval), as marked in the Epochs column. The first two test scenarios do
not trigger a concurrency control adaptation; they demonstrate the base
performance without adaptation. In Tests #3 and #4 the workload is increased
to trigger adaptation.
TABLE V: Results for three epochs with different workload, $\gamma=0.9$, $\delta=5$%, and $dt=0$.
Test# | Epochs | ø$(cr)$ | $\sigma(cr)$ | ø$(rt)$ [ms] | Figure
---|---|---|---|---|---
1 | 9,14,19 | 1.00 | 0.00 | 3.6 | 9 (a)
2 | 153,176,176 | 0.89 | 0.16 | 2 | 9 (b)
3 | 10,19,178 | 0.96 | 0.06 | 3.7 | 9 (c)
4 | 168,310,309 | 0.90 | 0.05 | 2824 | 9 (d)
The average response time ø$(rt)$ is very high for test scenario #4. This is
the result of an increasing overload, which quickly triggers adaptation at the
beginning of the second epoch (see Figure 9 (d)). This leads to a mostly
sequential execution of the transactions, which explains the very high ø$(rt)$
and the high ø$(cr)$ at the same time. This increase of $cr$ is typical for
scenarios after adaptation to $P$ has taken place. It continues until the
upper bound $\gamma+\delta$ is reached. Then the adaptation switches back to
class $O$.
As the later tests indicate, it would be better to add an additional
criterion for the re-adaptation from $P\rightarrow O$. If the workload is
still high (wait queue $>1$) the data should remain in $P$ until the workload
is low again before going back to $O$. This measure could avoid multiple re-
adaptations that produce an unstable system behavior during a sudden
transition of the workload from heavy overload to low workload.
Figure 9 shows the commit rate $cr$, lower and upper bounds (set by
$\gamma\pm\delta$), and the accumulated number of aborts and commits of the
four test scenarios.
Figure 9: Various short workloads to demonstrate Run-time Adaptation; (a) low
$9-19$ tas/sec, (b) high $\approx 160$ tas/sec, (c) increasing load $10-180$
tas/sec, (d) increasing overload $170-310$ tas/sec, $\gamma=90\%$,
$\delta=5\%$, $sr=10/sec$, and $dt=0$.
Test #1 has a low workload in all three epochs. The load starts with 9
tas/sec, continues in epoch #2 with 14 tas/sec, and in the last epoch the
workload rises to 19 tas/sec. The transactions are executed as they arrive
and no concurrently interleaving transactions occur. As expected, no
adaptation takes place. From the corresponding Figure 9 (a) it can be seen
that the commit rate is 1 and no aborts occur. After $3.1$ sec (31 time
units) all transactions have successfully terminated and the number of
commits remains constant. Test #1 is the only scenario without contention but
surprisingly does not have the shortest $rt$. The reason for this is that a
commit is more expensive than an abort for an optimistic CC. Compared to the
other tests, Test #1 has no aborts and a commit rate of 100%.
For Test #2 the load is high ($\approx 160$ tas/sec) and nearly constant for
three seconds. The load is heavy and contention is present as can be seen from
the number of aborts and the decreasing commit rate. Figure 9 (b) shows that
the commit rate does not fall below the re-classification limit, hence no
adaptation occurs. The data remains in class $O$ and the optimistic CC has low
overhead which results in a short response-time of only $2$ msec.
Part (c) of Figure 9 (Test #3) shows the results for an increasing workload
where finally, in the third epoch, the adaptation is triggered. The workload
starts with $10-19$ tas/sec for two seconds and continues with $178$ tas/sec
for the third epoch. The commit rate drops below the minimum threshold
($\gamma-\delta$) at the blue vertical line ($2.2$ sec after start). The CC-
mechanism immediately switches to locking and the number of aborts decreases
(the accumulated abort graph makes a sharp bend to a lower gradient). During
the third epoch the workload is slightly higher than the system can
immediately execute. This can be seen from the slowly growing gap between the
accumulated transaction arrivals (tas) and the accumulated committed
transactions (co). The average response time ø$(rt)$ stays low since during
the first two seconds the transactions were executed under $O$ with short
$rt$.
It is interesting to compare Tests #2 and #3. Test #2 has a constantly high
workload, but not high enough to trigger the adaptation; hence, the data
remains in $O$. This is the reason for the very short response time. Test #3
initially has a low workload, but in Epoch #3 the workload just exceeds the
threshold and adaptation to $P$ applies. This leads to a higher $rt$ even
though the average workload is below that of Test #2.
Also, a start with low load (Tests #1 and #3) reduces the response-time
because all transactions of the first epoch are executed under optimistic CC
with a short $rt$.
Test #4 produces a heavy and increasing overload which triggers adaptation at
the end of epoch 1. The gap between committed and arrived transactions grows
until the arrival ends after 3 seconds. The adaptation to $P$ allows the
commit rate to increase until, after 10 sec, all queued transactions have
terminated. After the arrival of transactions has stopped, the system needs
7 sec to process the queued transactions before it recovers. This explains
the high mean response time ø$(rt)$.
It can be noted that run-time adaptation under heavy workload achieves an
average commit rate ø$(cr)$ of approximately $90\%$, which was preset by
$\gamma$. The price for improving $cr$ is clearly a longer response-time $rt$
which grows to $2.8$ seconds for continuous overload in test-case #4.
The commit rate $cr$ is the basis for adaptation. When $cr$ drops below the
lower bound $\gamma-\delta$ the adaptation is triggered and $cr$ increases
again. The commit rate $cr$ increases until the upper bound $\gamma+\delta$ is
reached which again triggers re-classification.
Summarizing, for sudden increases and decreases of $cr$, adaptation ensures a
good response-time and a high commit rate if transactions are short lived
($dt=0$) and the system is not permanently overloaded. If contention
constantly remains high, adaptation has severe effects on the response-time.
### VII-B Long Living Transactions, Seven Epochs, and $\beta$ disabled
Long living transactions are characterized by a certain time interval between
the read phase and the write phase where no data access occurs. Some authors
[17] [18] [19] [20] [21] [22] call this interval “think time”: a typical
transaction reads and displays data, then the user thinks about it, and
finally modifies or adds some values. We prefer to call this time “disconnect
time”, because Web based transactional systems tend to logically disconnect
from the database during this period.
For the tests a disconnect time $dt$ from $100-1000$ msec was randomly
chosen. Each test consisted of seven epochs with different workloads.
Workload W1 starts with $\lambda=7-14$ tas/sec and raises the workload in
epochs $3-7$ from 80 tas/sec continuously to 106 tas/sec. Workload W2
stresses the system with an increasing overload from $66-460$ tas/sec.
The detailed workload profiles are as follows:
* W1 $=(7,14,80,87,93,100,106)$ and
* W2 $=(66,132,200,265,332,400,460)$.
Each number denotes the transactions arriving during the respective Epoch of
one second each. The tests were executed with two target commit rates
$\gamma=0.9$ and $0.7$. Table VI shows a summary of the results.
TABLE VI: Results of seven epochs with workload W1$=(7,14,80,87,93,100,106)$, W2$=(66,132,200,265,332,400,460)$, $\gamma=(0.9,0.7)$, random disconnect time $dt=100-1000$ ms, and barrier $\beta$ disabled.
Workload | $\gamma$ | ø$(rt)$ [ms] | #Tas | ø$(cr_{\text{eff}})$ | Figure
---|---|---|---|---|---
W1 | 90% | 4561 | 487 | 82% | 10
W2 | 90% | 24104 | 1845 | 82% | 11
W1 | 70% | 927 | 483 | 57% | –
W2 | 70% | 18957 | 1845 | 46% | 12
Adaptation from $O\rightarrow P$ causes a systematic abort of pending
transactions originating in $O$. To take these aborts into account, the
effective commit rate is defined as:
$cr_{\text{eff}}:=\frac{\mbox{\# committed tas}}{\mbox{\# terminated tas}}$
(5)
The effective commit rate $cr_{\text{eff}}$ measures, as the name suggests,
the performance of the system as seen by the user, whereas the previously
defined commit rate $cr$ is used to trigger adaptation, because this
indicator is more sensitive to the workload. In our tests the effective
commit rate $cr_{\text{eff}}$ reached 82% for the first and $\approx 50$% for
the second value of $\gamma$. Note that without adaptation all experiments
would have a commit rate of only 1 to 3 percent due to the long living nature
of the transactions and the higher conflict potential. This is also the
reason why the performance in this test scenario is lower than in the
previous subsection without disconnect time.
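For illustration, both rates can be maintained with plain counters. The sketch below is ours and assumes each termination event is tagged with whether the abort was caused by the $O\rightarrow P$ adaptation itself; as noted later in the discussion of Figure 14, $cr$ excludes such aborts while $cr_{\text{eff}}$ includes them.

```python
# Illustrative bookkeeping for cr (drives adaptation) and cr_eff (eq. (5)).
class CommitRates:
    def __init__(self):
        self.committed = 0
        self.terminated = 0      # all terminated tas, denominator of cr_eff
        self.terminated_cc = 0   # excludes adaptation-induced aborts, for cr

    def on_terminate(self, committed, adaptation_abort=False):
        self.terminated += 1
        if not adaptation_abort:
            self.terminated_cc += 1
        if committed:
            self.committed += 1

    def cr(self):                # more sensitive to the workload
        return self.committed / max(self.terminated_cc, 1)

    def cr_eff(self):            # performance as seen by the user
        return self.committed / max(self.terminated, 1)
```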
W1 has the shortest response-time due to the comparatively low workload. In
Epoch 3 the high workload ($80$ tas/sec) quickly lets $cr$ drop below the
lower boundary $\gamma-\delta=0.85$ (see Figure 10). The adaptation to $P$ is
triggered and in the following epochs $cr$ rises again until the upper
boundary is reached. The data is reclassified in $O$ after 11 epochs and
again $cr$ drops, but it recovers faster than before, because the arrival of
new transactions stopped after 7 epochs and after 12 epochs all pending
transactions have terminated.
Figure 10: Run-time adaptation for W1 $=(7,14,80,87,93,100,106)$,
$\gamma=0.9$, random disconnect time $dt=100-1000$ ms, and barrier $\beta$
disabled.
The adaptation profile for workload W2 (permanent contention) shown in Figure
11 is similar to W1. Due to the heavy workload starting in Epoch 1, the
adaptation is already triggered at the end of Epoch 1. The permanent overload
leads to a significantly longer mean response-time due to locking and queuing
in $P$.
Figure 11: Run-time adaptation for W2 $=(66,132,200,265,332,400,460)$,
$\gamma=0.9$, random disconnect time $dt=100-1000$ ms, and barrier $\beta$
disabled.
For workload W2 (permanent contention, second row), the mean response-time is
significantly longer due to the queuing effect under $P$. Taking the same
workload with a target commit rate of $\gamma=70$%, the adaptation behavior
shows an instability (Figure 12). After adaptation to $P$, the upper boundary
for $cr$ is reached very quickly during the third epoch (time = 27 units =
2.7 sec) and the data is reclassified again in $O$ (Rule 2, 2)), with the
result that the commit rate $cr$ drops to 40%. After this decrease, the
system recovers slowly and reaches the upper boundary again in epoch 14. At
this point the arrival of transactions has already stopped but the remaining
(queued) transactions cause another jitter for $cr$.
Figure 12: Run-time adaptation for W2 $=(66,132,200,265,332,400,460)$,
$\gamma=0.7$, random disconnect time $dt=100-1000$ ms, and barrier $\beta$
disabled.
The reason for this oscillating effect is that Rule 2, 2) does not look at
the number of queued transactions; it only takes the criterion
$cr>\gamma+\delta$ to re-classify the data in $O$ again. But in this
situation all pending transactions except one will fail due to concurrency
violation. This lets the commit rate $cr$ drop as low as 40%.
It now takes longer for the adaptation mechanism to reach the upper boundary
because many transactions have already aborted and accordingly more
transactions have to commit to raise $cr$. The upper boundary is reached
after 14 sec, when the arrival of transactions has already stopped.
Summarizing, despite a sudden increase in contention, adaptation keeps the
commit rate stable even if transactions are long living. If contention
remains high, the response-time grows since $P$ queues transactions. With a
low $\gamma$ the mechanism tends to become unstable and an oscillating
behavior can be noticed. Having $\gamma$ close to 100% is recommended since
adaptation is triggered earlier. To prevent an excessive increase in
response-time, $\beta$ has to be enabled, as discussed in the next section.
### VII-C Long Living Transactions, Seven Epochs, and $\beta$ enabled
The following experiments study the effects on the workloads of the previous
subsection if barrier $\beta$ is enabled and $\gamma$ is high (= 90%), as
recommended before. Table VII summarizes the results and shows barrier
$\beta$, mean $cr_{\text{eff}}$, and the mean response-time ø$(rt)$ for
workloads W1 and W2. It further links to Figures 14 and 15, which show sample
graphs of individual experiment runs.
TABLE VII: Results of seven epochs with workload W1$=(7,14,80,87,93,100,106)$, W2$=(66,132,200,265,332,400,460)$, $\gamma=0.9$, random disconnect time $dt=100-1000$ ms, and barrier $\beta$ enabled.
Workload | $\beta$ | ø$(rt)$ [ms] | ø$(cr_{\text{eff}})$ | Figure
---|---|---|---|---
W1 | 1000 | 187 | 17% | Figure 14 (a)
W1 | 3000 | 343 | 18% | Figure 14 (b)
W1 | 5000 | 355 | 29% | –
W1 | 8000 | 1960 | 36% | Figure 14 (c)
W1 | 15000 | 3758 | 39% | –
W2 | 1000 | 136 | 3% | Figure 15 (a)
W2 | 3000 | 248 | 16% | Figure 15 (b)
W2 | 5000 | 1219 | 18% | –
W2 | 8000 | 1172 | 25% | Figure 15 (c)
W2 | 15000 | 2625 | 31% | –
As Table VII shows, each workload was executed with different values
($1000,3000,5000,8000,15000$) for $\beta$. All experiments show that the mean
response-time is bounded by $\beta$: the very long response-times of 19 or 24
seconds with workload W2 (see Table VI of the previous subsection's
experiments) no longer occur. The table also shows that the value of $\beta$
cannot be used to infer the actual mean response-time. However, for an
increasing $\beta$, the response-time and the commit rate increase, i.e.,
$\beta$ correlates with these values.
Barrier $\beta$ does not directly correspond to the maximum response-time as
given, for example, by Service Level Agreements (SLA). The response time
depends on the workload and is directly influenced by the transactions'
arrival rate. The distribution of the response time additionally depends on
the concurrency model. For a queuing system like the concurrency model of
class $P$, a Poisson arrival process is assumed. The response time $rt$ is
calculated as wait time $wt$ in the queue plus transaction processing time
$pt$. Even in the simplest queuing system, the M/M/1, with Poisson arrivals
and one service process, only statements about the mean response time
ø$(rt)$ can be made. To estimate the expected response time
$rt_{\text{est}}$, the arrival and service rates are necessary. But in the
present case both rates change heavily. If the arrival rate changed only due
to statistical variation, no adaptation would be necessary. But if a
systematic change happens, e.g., because the data access type changes, the
original class assignment is no longer suitable. Adaptation changes the
service time and hence the service rate as well. The service time $st$ in the
case of $P$ is the time between read and write. The only indicators for the
estimated response time are the wait queue length $|Q_{w}|$ and the past
average ø$(st)$. This leads to Formula (6):
$rt_{\text{est}}:=\o(st)\times(|Q_{w}|+1)$ (6)
The calculation takes into account the transactions that are already queued
for execution and the average time to process a transaction. The processing
time includes a possible waiting time due to locking.
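Read as code, Formula (6) is a one-liner; the sketch below is ours and also encodes the convention, stated in the discussion of Figure 14 below, that the estimate is set to $0$ when no wait queue exists, i.e. outside class $P$:

```python
# rt_est per Formula (6): mean_st is the past average service time ø(st)
# (read-to-write interval), q_len the current wait queue length |Q_w|.
def rt_est(mean_st, q_len, in_class_P):
    if not in_class_P:
        return 0.0               # no wait queue outside P: estimate is 0
    return mean_st * (q_len + 1)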
The SLA defines a limit for the response time $rt$, and in the case of an SLA
violation a penalty has to be paid. There is a trade-off between losing
transactions and having excessive response times. Assuming an average price
of $r$ for each lost transaction and a penalty of $p$ for every transaction
exceeding the response time limit $\beta$, the trade-off is given at the
intersection of two cost functions that depend on the commit rate $cr$ and
the number of transactions $tas$:
$\displaystyle ca:=r\times(1-cr)\times tas$ (7)
$\displaystyle cp:=p\times tas_{rt>\beta}(cr)\times tas$ (8)
If the functions are normalized with the number of transactions $tas$, then
Figure 13 shows the principal graph for this trade-off. The break-even point
for this normalized example is given at commit rate $cr=0.72$. In practice,
the database system will measure the actual number of aborts, and the
application should monitor these values and calculate the break-even based on
the costs for SLA violations and failed transactions.
Figure 13: Example trade off between aborts and response time in terms of
costs.
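A small numerical sketch (ours) of this break-even computation: the abort cost follows (7), while the fraction of late transactions $tas_{rt>\beta}(cr)$ is workload-specific, so the monotone curve and the prices $r$ and $p$ below are made-up placeholders, chosen only so that the break-even lands near the $cr=0.72$ of the normalized example.

```python
# Break-even sketch for Figure 13, normalized by the number of transactions.
# r, p and the late-transaction curve frac_late are illustrative assumptions.
r, p = 1.0, 2.0                    # assumed price per abort / per SLA penalty

def cost_aborts(cr):               # ca/tas, eq. (7): cost of lost transactions
    return r * (1.0 - cr)

def frac_late(cr):                 # hypothetical stand-in for tas_{rt>beta}(cr)
    return cr ** 6

def cost_penalty(cr):              # cp/tas, eq. (8): cost of SLA violations
    return p * frac_late(cr)

lo, hi = 0.0, 1.0                  # bisect for the intersection of the curves
for _ in range(60):
    mid = (lo + hi) / 2
    if cost_aborts(mid) > cost_penalty(mid):
        lo = mid
    else:
        hi = mid
print(f"break-even commit rate: {lo:.2f}")      # ~0.72 with these choices
```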
In the case of a fast changing workload it is difficult to estimate the
workload profile. If the calculation is based on the past workload, the system
may not react fast enough to sudden changes of the arrival rate.
The situation is more promising if a workload profile is known in advance.
This is often the case if employees have clear routines during their workday.
Assume, for example, the following tasks: order processing in the morning,
stock administration after lunch, and master data management from 5 pm to 6
pm. In this scenario data access to product data in the morning and afternoon
will be classified $O$ while the product data will be re-classified to $P$
from 5 - 6 pm.
Figure 14: Run-time adaptation profile for target commit rate $\gamma=0.9$:
(a) workload W1 with $\beta=1000$, (b) workload W1 with $\beta=3000$, and (c)
workload W1 with $\beta=8000$.
Figure 15: Run-time adaptation profile for target commit rate $\gamma=0.9$:
(a) workload W2 with $\beta=1000$, (b) workload W2 with $\beta=3000$, and (c)
workload W2 with $\beta=8000$.
Figures 14 and 15 illustrate the run-time adaptation profile if $\beta$ is
set. The estimated response time $rt_{\text{est}}$ is shown on the right
vertical axis. The left ordinate shows the commit rate and the target
boundaries. The horizontal axis shows the transactional time, which is given
by a sequence of time-ordered events. The time interval from one event to the
next is not constant and hence the time scale is not linear.
In Figure 14 (a) the commit rate $cr$ is $1$ during the first two seconds when
the workload is low. When the overload begins after two seconds the commit
rate $cr$ drops quickly below the lower bound $\gamma-\delta=0.85$ and
adaptation to $P$ takes place. The effective commit rate $cr_{\text{eff}}$
(green line in Figures 14 and 15) always stays below $cr$ because $cr$ does
not count aborts due to the adaptation $O\rightarrow P$, but $cr_{\text{eff}}$
does. After adaptation to $P$ the system stabilizes the commit rate $cr$ as
shown by the red graph. This appears in all test runs and can be seen more
clearly when we have a higher response time limit $\beta$ as in part (b) and
(c) of Figure 14.
In the case of $\beta=8000$ (part (c)) the commit rate increases until the
estimated response time exceeds the preset limit $\beta$. If
$rt_{\text{est}}>\beta$, the re-adaptation to $O$ is triggered by Rule 3, 2)
because $cr$ is still below the lower bound. The result is that pending
transactions abort and subsequently the commit rate decreases. Run-time
estimation works for $P$ only because a wait queue $Q_{w}$ is needed for
Formula (6). If there is no wait queue, then $rt_{\text{est}}$ is set to $0$.
Hence, as soon as $rt_{\text{est}}$ exceeds $\beta$ the system switches to
$O$ and $rt_{\text{est}}$ drops to 0, which explains the sawtooth shape of
$rt_{\text{est}}$.
A low limit for the response time as in Figure 14 (a) causes a low $cr$ and
many transactions run in $O$, which can only be observed indirectly by the low
$rt_{\text{est}}$. If $\beta$ increases (Figure 14 (b) and (c)), the number of
aborts reduces because the system remains longer in $P$, which causes more
waiting transactions which in turn cause more and higher peaks in the
$rt_{\text{est}}$.
For workload W2, Figures 15 (a) - (c) illustrate the workload profiles for
$\beta=1000$, $\beta=3000$, and $\beta=8000$. The graph of $rt_{\text{est}}$
for $\beta=1000$ shows regularly appearing peaks of longer duration caused by
the permanent contention. This effect nearly disappears for larger values of
$\beta$ $(\geq 8000)$ because after adaptation to $P$ many more transactions
are allowed to queue up and commit later. This is indicated by higher and
shorter $rt_{\text{est}}$ peaks, which move to the beginning of the test run.
As a result the $cr_{\text{eff}}$ is slightly higher if $\beta$ is high.
When the workload ends after 7 seconds and $rt_{\text{est}}$ drops below
$\beta$, Rule 3, 1) applies and the concurrency class switches to $P$, which
lets $cr$ and $cr_{\text{eff}}$ rise until all transactions have terminated.
Part (c) of Figures 14 and 15 shows an effect of instability. This happens
after the arrival of transactions has ended and before all transactions have
terminated. The system has switched to $P$ because $rt_{\text{est}}$ was
below the limit $\beta$, and now the high number of remaining transactions in
the queue leads to $rt_{\text{est}}>\beta=8000$; the re-adaptation to $O$
lets $rt_{\text{est}}$ drop below $\beta$, which again triggers Rule 3, 1)
and forces the data to class $P$. The oscillation between $P$ and $O$
continues until most transactions have terminated and the queue is short
enough to keep $rt_{\text{est}}$ below the limit $\beta$.
In part (a) and (b) this effect shows up in a moderate form during the
workload but not after its termination because the lower $\beta$ does not
allow many transactions to be queued and delayed for a longer time.
Summarizing, the usage of $\beta$ keeps the mean response-time bounded, but
compared to having $\beta=\infty$, a higher abort rate is the price that has
to be paid. The exact determination of $\beta$ demands a continuous
adjustment and has to be carried out by the applications. In particular, in
the case of a mixed workload, a greater $\beta$ causes short peaks in
$rt_{\text{est}}$ since more transactions are allowed to commit in $P$. A
lower $\beta$ causes longer peaks since many transactions wait and their
abort is not yet known; they continue in $O$ and abort at write-time at the
earliest.
Generally, it is important to know that O$|$R$|$P$|$E classifies hot spot
items (HSIs) in classes $R$ and $E$, if possible. This is the better choice
if the semantics of the data allow this classification. Adaptation is only
provided to handle a sudden, but impermanent, increase of the contention for
items classified in the default class $O$. Permanent contention is likely to
cause any system to become overloaded. O$|$R$|$P$|$E is at least able to
protect itself by trading off response-time and commit rate.
## VIII Related Work
This paper extends the findings of [1] and is based on the Ph.D. thesis [2]
of the main author, which introduces O$|$R$|$P$|$E. A vast amount of work [5]
[11] has been carried out in the field of transaction management and CC, but
so far no attempt has been undertaken to use a combination of CC mechanisms
according to the data usage (semantics). Most authors use the semantics of a
transaction to divide it into sub-transactions, thus achieving a finer
granularity that hopefully exhibits fewer conflicts. Some authors [23] use
the semantics of the data to build a compatibility set, while others try to
reduce conflicts using multiversions [24] [25]. The reconciliation mechanism
was introduced in [8] and is an optimistic variant of “The escrow
transactional method” [9]. Escrow relies on guarantees given to the
transaction before the commit time, which is only possible for a certain
class of transactions, e.g. transactions with commutative operations.
Optimistic concurrency control was introduced by [26] but did not gain much
consideration in practice until SI, introduced by [27], was implemented in an
optimistic way. SI in general gained much attention through [6] [7], and also
in practice [28]. Its strength lies in applications that have to deal with
many concurrent queries but have only a moderate rate of updating
transactions. O$|$R$|$P$|$E, however, is designed for high-performance update
transaction processing, especially with data hot spots.
## IX Conclusion and Outlook
The paper presented a multimodel concurrency control mechanism that breaks
with the _one concurrency mechanism fits all needs_. The concurrency mechanism
is chosen according to the access semantic of the data. Four concurrency
control classes are defined and rules guide the developer with the manual
classification. When the access semantic is unknown the default class $O$ with
an optimistic snapshot isolation mechanism is chosen. For those data the model
is extended to dynamically change the class assignment if the performance
suggests a pessimistic mechanism $P$. The simulations with the prototype
demonstrated that the mechanism is working and tests with the TPC-C++
benchmark resulted in a 3 to 4 times superior performance. The adaptation
mechanism provides a response time guaranty to comply with Service Level
Agreements for the price of a lower commit rate.
The tests revealed an instability in the form of an oscillating adaptation.
This occurs only under an abrupt change of the workload from overload to an
inactive system. However, a refinement of the adaptation rule could possibly
avoid the oscillation when the re-classification from $P\rightarrow O$ is
executed. This could be achieved if the re-classification is only triggered
when the wait queue is small or empty.
A dynamic algorithm for an automatic classification of data would be
desirable and would relieve the developer from manual classification. The
same mechanism could then be used to dynamically adapt the data according to
a changed usage profile.
Also, comprehensive performance tests that consider replication and online
backup, and a study of run-time adaptation under real-life conditions, are
still missing.
## References
* [1] T. Lessner, F. Laux, and T. M. Connolly, “O$|$R$|$P$|$E - a data semantics driven concurrency control mechanism,” in _DBKDA 2015, The Seventh International Conference on Advances in Databases, Knowledge, and Data Applications_ , 2015, pp. 147 – 152.
* [2] T. Lessner, “O$|$R$|$P$|$E - a high performance semantic transaction model for disconnected systems,” Ph.D. dissertation, University of the West of Scotland, 2014.
* [3] _TPC BENCHMARK C, Standard Specification, Revision 5.11_ , Transaction Processing Performance Council Std., February 2010.
* [4] A. Thomasian, “Concurrency control: methods, performance, and analysis,” _ACM Comput. Surv._ , vol. 30, no. 1, pp. 70–119, Mar. 1998.
* [5] J. Gray and A. Reuter, _Transaction Processing: Concepts and Techniques_. Morgan Kaufmann, 1993.
* [6] A. Fekete, D. Liarokapis, E. O’Neil, P. O’Neil, and D. Shasha, “Making snapshot isolation serializable,” _ACM Trans. Database Syst._ , vol. 30, no. 2, pp. 492–528, Jun. 2005.
* [7] M. J. Cahill, U. Röhm, and A. D. Fekete, “Serializable isolation for snapshot databases,” _ACM Trans. Database Syst._ , vol. 34, no. 4, pp. 20:1–20:42, Dec. 2009.
* [8] F. Laux and T. Lessner, “Transaction processing in mobile computing using semantic properties,” in _Proceedings of the 2009 First International Conference on Advances in Databases, Knowledge, and Data Applications_ , ser. DBKDA ’09. IEEE Computer Society, 2009, pp. 87–94.
* [9] P. E. O’Neil, “The escrow transactional method,” _ACM Transactions On Database Systems_ , vol. 11, pp. 405–430, December 1986.
* [10] M. Kifer, A. Bernstein, and P. M. Lewis, _Database Systems: An Application Oriented Approach, Complete Version (2nd Edition)_. Boston, MA, USA: Addison-Wesley Longman Publishing Co., Inc., 2005.
* [11] G. Weikum and G. Vossen, _Transactional Information Systems: Theory, Algorithms, and the Practice of Concurrency Control and Recovery_. Morgan Kaufmann, 2002.
* [12] H. Garcia-Molina, J. D. Ullman, and J. Widom, _Database systems - the complete book (2. ed.)_. Pearson Education, 2009.
* [13] Abraham Silberschatz, Henry F. Korth, and S. Sudarshan, _Database System Concepts (sixth edition)_. McGraw-Hill, 2011.
* [14] T. Kraska, M. Hentschel, G. Alonso, and D. Kossmann, “Consistency rationing in the cloud: pay only when it matters,” _Proc. VLDB Endow._ , vol. 2, pp. 253–264, August 2009. [Online]. Available: http://portal.acm.org/citation.cfm?id=1687627.1687657
* [15] D. Gómez Ferro and M. Yabandeh, “A critique of snapshot isolation,” in _Proceedings of the 7th ACM european conference on Computer Systems_ , ser. EuroSys ’12. New York, NY, USA: ACM, 2012, pp. 155–168. [Online]. Available: http://doi.acm.org/10.1145/2168836.2168853
* [16] R. Osman and W. J. Knottenbelt, “Database system performance evaluation models: A survey,” _Performance Evaluation_ , vol. 69, no. 10, pp. 471 – 493, 2012. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0166531612000442
* [17] F. Laux and M. Laiho, “Sql access patterns for optimistic concurrency control,” in _Proceedings of the 2009 Computation World: Future Computing, Service Computation, Cognitive, Adaptive, Content, Patterns_ , ser. COMPUTATIONWORLD ’09. Washington, DC, USA: IEEE Computer Society, 2009, pp. 254–258. [Online]. Available: http://dx.doi.org/10.1109/ComputationWorld.2009.63
* [18] A. Adya, R. Gruber, B. Liskov, and U. Maheshwari, “Efficient optimistic concurrency control using loosely synchronized clocks,” in _Proceedings of the 1995 ACM SIGMOD International Conference on Management of Data, San Jose, California, May 22-25, 1995._ , M. J. Carey and D. A. Schneider, Eds. ACM Press, 1995, pp. 23–34. [Online]. Available: http://doi.acm.org/10.1145/223784.223787
* [19] B. Ding, L. Kot, A. J. Demers, and J. Gehrke, “Centiman: elastic, high performance optimistic concurrency control by watermarking,” in _Proceedings of the Sixth ACM Symposium on Cloud Computing, SoCC 2015, Kohala Coast, Hawaii, USA, August 27-29, 2015_ , S. Ghandeharizadeh, S. Barahmand, M. Balazinska, and M. J. Freedman, Eds. ACM, 2015, pp. 262–275. [Online]. Available: http://doi.acm.org/10.1145/2806777.2806837
* [20] J. Huang, J. A. Stankovic, K. Ramamritham, and D. F. Towsley, “Experimental evaluation of real-time optimistic concurrency control schemes,” in _17th International Conference on Very Large Data Bases, September 3-6, 1991, Barcelona, Catalonia, Spain, Proceedings._ , G. M. Lohman, A. Sernadas, and R. Camps, Eds. Morgan Kaufmann, 1991, pp. 35–46. [Online]. Available: http://www.vldb.org/conf/1991/P035.PDF
* [21] R. Agrawal, M. J. Carey, and M. Livny, “Concurrency control performance modeling: Alternatives and implications,” _ACM Trans. Database Syst._ , vol. 12, no. 4, pp. 609–654, 1987. [Online]. Available: http://doi.acm.org/10.1145/32204.32220
* [22] F. Laux, M. Laiho, and T. Lessner, “Implementing row version verification for persistence middleware using sql access patterns,” _International Journal on Advances in Software, issn 1942-2628_ , vol. 3, no. 3 & 4, pp. 407 – 423, 2010. [Online]. Available: http://www.iariajournals.org/software/
* [23] H. Garcia-Molina, “Using semantic knowledge for transaction processing in a distributed database,” _ACM Trans. Database Syst._ , vol. 8, no. 2, pp. 186–213, Jun. 1983.
* [24] S. H. Phatak and B. Nath, “Transaction-centric reconciliation in disconnected client-server databases,” _Mob. Netw. Appl._ , vol. 9, no. 5, pp. 459–471, 2004.
* [25] P. Graham and K. Barker, “Effective optimistic concurrency control in multiversion object bases,” in _ISOOMS ’94: Proceedings of the International Symposium on Object-Oriented Methodologies and Systems_ , ser. Lecture Notes in Computer Science, E. Bertino and S. D. Urban, Eds., vol. 858\. London, UK: Springer-Verlag, 1994, pp. 313–328.
* [26] H. T. Kung and J. T. Robinson, “On optimistic methods for concurrency control,” _ACM Trans. Database Syst._ , vol. 6, no. 2, pp. 213–226, Jun. 1981.
* [27] H. Berenson, P. Bernstein, J. Gray, J. Melton, E. O’Neil, and P. O’Neil, “A critique of ansi sql isolation levels,” _SIGMOD Rec._ , vol. 24, no. 2, pp. 1–10, May 1995.
* [28] D. R. K. Ports and K. Grittner, “Serializable snapshot isolation in postgresql,” _Proc. VLDB Endow._ , vol. 5, no. 12, pp. 1850–1861, Aug. 2012\.
$\displaystyle\log Z_{\rm
PI}=\frac{\pi\ell^{3}}{2\epsilon^{3}}-\frac{\pi\nu^{2}\ell}{4\epsilon}+\frac{\pi\nu^{3}}{6}-\sum_{k=0}^{2}\frac{\nu^{k}}{k!}\,\frac{{\rm
Li}_{3-k}(e^{-2\pi\nu})}{(2\pi)^{2-k}}\,.$ (143)
The corresponding Euclidean energy $U_{\rm PI}=\rho_{\rm PI}\,\pi\ell^{2}$
(135) is given by
$\displaystyle 2\pi\ell\,U_{\rm PI}=V\rho_{\rm
PI}=-\frac{\pi\ell^{3}}{2\epsilon^{3}}+\frac{\pi(\nu^{2}+\tfrac{2}{3}\eta)\ell}{4\epsilon}-\frac{\pi}{6}(\nu^{2}+\eta)\nu\coth(\pi\nu)$
(144)
where $V={\rm vol}(S^{3}_{\ell})=2\pi^{2}\ell^{3}$. For minimal coupling
$\xi=0$ (i.e. $\eta=1$), $U^{\rm fin}_{\rm PI}$ equals $U^{\rm fin}_{\rm
bulk}$ (49), but not for $\xi\neq 0$. For general $d,\xi$, $U_{\rm PI}^{\rm
fin}$ is given by (48) with the overall factor $m^{2}$ taken to be the mass
$m^{2}$ appearing in the action rather than $m_{\rm eff}^{2}$, in agreement
with Dowker:1975tf ; Candelas:1975du or (6.178)-(6.180) of
birrell_davies_1982 .
The entropy $S_{\rm PI}=\log Z_{\rm PI}+2\pi\ell\,U_{\rm PI}$ (136) is
$\displaystyle S_{\rm
PI}=\frac{\pi\eta}{6}\Bigl{(}\frac{\ell}{\epsilon}-\nu\coth(\pi\nu)\Bigr{)}-\sum_{k=0}^{3}\frac{\nu^{k}}{k!}\,\frac{{\rm
Li}_{3-k}(e^{-2\pi\nu})}{(2\pi)^{2-k}}\,\,,$ (145)
where we used $\coth(\pi\nu)=1+2\,{\rm Li}_{0}(e^{-2\pi\nu})$ (51). Since
$Z_{\rm PI}=Z_{\rm bulk}$ in general for scalars and $U_{\rm PI}=U_{\rm bulk}$
for minimally coupled scalars, $S_{\rm PI}=S_{\rm bulk}$ for minimally coupled
scalars. Indeed, after conversion to Pauli-Villars regularization, (145)
equals (52) if $\eta=1$. As a check on the results, the first law $dS_{\rm
PI}=Vd\rho_{\rm PI}$ (137) can be verified explicitly.
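Since all quantities are in closed form, such checks are easy to run numerically. The sketch below (ours; the values of $m$, $\eta$, $\epsilon$, $\ell$ are arbitrary samples) evaluates (143)-(145), confirms the thermodynamic relations (135)-(136) for $d=2$, and verifies the first law (137) by direct differentiation:

```python
# Numerical check of (143)-(145) and the first law (137) for the d=2 scalar.
from mpmath import mp, mpf, pi, exp, sqrt, coth, factorial, polylog, diff

mp.dps = 30
m, eta, eps = mpf('1.3'), mpf(1), mpf('0.01')    # arbitrary sample values

nu = lambda l: sqrt((m*l)**2 - eta)              # nu = sqrt((m l)^2 - eta)
V = lambda l: 2*pi**2*l**3                       # vol(S^3_ell)

def logZ(l):                                     # eq. (143)
    n = nu(l)
    s = sum(n**k/factorial(k) * polylog(3-k, exp(-2*pi*n))/(2*pi)**(2-k)
            for k in range(3))
    return pi*l**3/(2*eps**3) - pi*n**2*l/(4*eps) + pi*n**3/6 - s

def Vrho(l):                                     # eq. (144): V rho = 2 pi l U
    n = nu(l)
    return (-pi*l**3/(2*eps**3) + pi*(n**2 + 2*eta/3)*l/(4*eps)
            - pi/6*(n**2 + eta)*n*coth(pi*n))

def S(l):                                        # eq. (145)
    n = nu(l)
    s = sum(n**k/factorial(k) * polylog(3-k, exp(-2*pi*n))/(2*pi)**(2-k)
            for k in range(4))
    return pi*eta/6*(l/eps - n*coth(pi*n)) - s

l, rho = mpf(2), lambda x: Vrho(x)/V(x)
print(Vrho(l) + l*diff(logZ, l)/3)               # (135) with d=2: ~ 0
print(logZ(l) + Vrho(l) - S(l))                  # (136): ~ 0
print(diff(S, l) - V(l)*diff(rho, l))            # first law (137): ~ 0
```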
In the $m\ell\to\infty$ limit, $S_{\rm
PI}\to\frac{\pi}{6}\eta(\epsilon^{-1}-m)\ell$, reproducing the well-known
scalar one-loop Rindler entropy correction computed by a Euclidean path
integral on a conical geometry Susskind:1994sm ; Callan:1994py ; Kabat:1995eq
; Kabat:1995jq ; Larsen:1995ax ; Solodukhin:2011gn . Note that $S_{\rm PI}<0$
when $\eta<0$. Indeed as reviewed in the Rindler context in appendix E.5,
$S_{\rm PI}$ does not have a statistical mechanical interpretation on its own.
Instead it must be interpreted as a correction to the large positive classical
gravitational horizon entropy. We discuss this in the de Sitter context in
section 8.
A pleasant feature of the sphere computation is that it avoids replicated or
conical geometries: instead of varying a deficit angle, we vary the sphere
radius $\ell$, preserving manifest $SO(d+2)$ symmetry, and allowing
straightforward exact computation of the Euclidean entropy directly from
$Z_{\rm PI}(\ell)$, for arbitrary field content.
#### Free 3D massive spin $s$
Recall from (90) that for a $d=2$ massive spin-$s\geq 1$ field of mass $m$,
the bulk part of $\log Z_{\rm PI}$ is twice that of a $d=2$ scalar (143) with
$\nu=\sqrt{(m\ell)^{2}-\eta}$, $\eta=(s-1)^{2}$, while the edge part is
$-s^{2}$ times that of a $d=0$ scalar, as in (139), with the important
difference however that $\nu=\sqrt{(m\ell)^{2}-\eta}$ instead of $\nu=m\ell$.
Another important difference with (139) is that in the case at hand, (135)
stipulates $V\rho_{\rm PI}=2\pi\ell\,U_{\rm
PI}=-\frac{1}{d+1}\ell\partial_{\ell}\log Z_{\rm PI}$ with $d=2$ instead of
$d=0$. As a result, for the bulk contribution, we can just copy the scalar
formulae (144) and (145) for $U_{\rm PI}$ and $S_{\rm PI}$ setting
$\eta=(s-1)^{2}$, while for the edge contribution we get something rather
different from the harmonic oscillator energy and entropy (139):
$\displaystyle V\rho_{\rm PI}$
$\displaystyle=2\times(144)-s^{2}\bigl{(}-\tfrac{\pi}{3}\tfrac{1}{\epsilon}\ell+\tfrac{\pi}{3}\bigl{(}\nu^{2}+\eta\bigr{)}\nu^{-1}\coth(\pi\nu)\bigr{)}$
(146) $\displaystyle S_{\rm PI}$
$\displaystyle=2\times(145)-s^{2}\bigl{(}\tfrac{2\pi}{3}(\tfrac{1}{\epsilon}\ell-\nu)+\tfrac{\pi}{3}\eta\nu^{-1}\coth(\pi\nu)+{\rm
Li_{1}}(e^{-2\pi\nu})+\tfrac{2\pi}{3}\nu\,{\rm Li}_{0}(e^{-2\pi\nu})\bigr{)}$
(147)
The edge contribution renders $S_{\rm PI}$ negative for all $\ell$. In
particular, in the $m\ell\to\infty$ limit, $S_{\rm
PI}\to\tfrac{\pi}{3}\bigl{(}(s-1)^{2}-2s^{2}\bigr{)}\,\bigl{(}\epsilon^{-1}-m\bigr{)}\ell\to-\infty$:
although the bulk part gives a large positive contribution for $s\geq 2$, the
edge part gives an even larger negative contribution. Going in the opposite
direction, to smaller $m\ell$, we hit the $d=2$, $s\geq 1$ unitarity bound at
$\nu=0$, i.e. at $m\ell=\sqrt{\eta}=s-1$. Approaching this bound, the bulk
contribution remains finite, while the edge part diverges, again negatively.
For $s=1$, $S_{\rm PI}\to\log(m\ell)$, due to the ${\rm Li}_{1}(e^{-2\pi\nu})$
term, while for $s\geq 2$, more dramatically, we get a pole $S_{\rm
PI}\to-\frac{s^{2}(s-1)}{6}\bigl{(}m\ell-(s-1)\bigr{)}^{-1}$, due to the
$\eta\nu^{-1}\coth(\pi\nu)$ term. Below the unitarity bound, i.e. when
$\ell<(s-1)/m$, $S_{\rm PI}$ becomes complex. To be consistent as a
perturbative low-energy effective field theory valid down to some length scale
$l_{s}$, massive spin-$s\geq 2$ particles on dS3 must satisfy
$m^{2}>(s-1)^{2}/l_{s}^{2}$.
#### Massless spin 2
From the results and examples in section 5.3, $\log Z_{\rm PI}^{(1)}=\log
Z_{\rm PI,div}^{(1)}+\log Z_{\rm PI,fin}^{(1)}-\frac{(d+3)\pi}{2}\,i$,
$\displaystyle\log Z_{\rm
PI,fin}^{(1)}(\ell)=-\frac{D_{d}}{2}\log\frac{A(\ell)}{4G_{\rm
N}}+\alpha^{(2)}_{d+1}\log\frac{\ell}{L}+K_{d+1}$ (148)
where $D_{d}=\dim{\rm so}(d+2)=\frac{(d+2)(d+1)}{2}$,
$A(\ell)=\Omega_{d-1}\ell^{d-1}$, and $\alpha^{(2)}_{d+1}=0$ for even $d$ and
given by (116) for odd $d$. $L$ is an arbitrary length scale canceling out of
the sum of finite and divergent parts, and $K_{d+1}$ is an exactly computable
numerical constant. Explicitly for $d=2,3,4$, from (120):
$\begin{array}[]{l|l|l}d&\log Z_{\rm PI,div}^{(1)}&\log Z_{\rm
PI,fin}^{(1)}\\\ \hline\cr
2&0-\frac{9\pi}{2}\frac{1}{\epsilon}\ell&-3\log(\frac{\pi}{2G_{\rm
N}}\ell)+5\log(2\pi)\\\
3&\frac{8}{3}\frac{1}{\epsilon^{4}}\ell^{4}-\frac{32}{3}\frac{1}{\epsilon^{2}}\ell^{2}-\frac{571}{45}\log(\frac{2e^{-\gamma}}{\epsilon}L)&-5\log(\frac{\pi}{G_{\rm
N}}\ell^{2})-\frac{571}{45}\log(\frac{1}{L}\ell)-\log(\frac{8\pi}{3})+\frac{715}{48}-\frac{47\,\zeta^{\prime}(-1)}{3}+\frac{2\,\zeta^{\prime}(-3)}{3}\\\
4&\frac{15\pi}{8}\frac{1}{\epsilon^{5}}\ell^{5}-\frac{65\pi}{24}\frac{1}{\epsilon^{3}}\ell^{3}-\frac{105\pi}{16}\frac{1}{\epsilon}\ell&-\frac{15}{2}\log(\frac{\pi^{2}}{2G_{\rm
N}}\ell^{3})+\log(12)+\frac{27}{2}\log(2\pi)+\frac{65\,\zeta(3)}{48\,\pi^{2}}+\frac{5\,\zeta(5)}{16\,\pi^{4}}\end{array}$
(149)
The one-loop energy and entropy (135)-(136) are split accordingly. The finite
parts are
$\displaystyle S_{\rm PI,fin}^{(1)}=\log Z_{\rm PI,fin}^{(1)}+V\rho_{\rm
fin}^{(1)}\,,\qquad V\rho_{\rm
fin}^{(1)}=\tfrac{1}{2}\tfrac{d-1}{d+1}\,D_{d}-\tfrac{1}{d+1}\alpha_{d+1}^{(2)}\,,$
(150)
where as always $2\pi\ell\,U=V\rho$ with $V=\Omega_{d+1}\ell^{d+1}$. For
$d=2,3,4$:
$\begin{array}[]{l|l|l|l}d&V\rho_{\rm div}^{(1)}&V\rho_{\rm fin}^{(1)}&S_{\rm
PI,div}^{(1)}\\\ \hline\cr
2&0+\frac{3\pi}{2}\frac{1}{\epsilon}\ell&1&-3\pi\frac{1}{\epsilon}\ell\\\
3&-\frac{8}{3}\frac{1}{\epsilon^{4}}\ell^{4}+\frac{16}{3}\frac{1}{\epsilon^{2}}\ell^{2}&\frac{5}{2}+\frac{571}{180}&-\frac{16}{3}\frac{1}{\epsilon^{2}}\ell^{2}-\frac{571}{45}\log(\frac{2e^{-\gamma}}{\epsilon}L)\\\
4&-\frac{15\pi}{8}\frac{1}{\epsilon^{5}}\ell^{5}+\frac{13\pi}{8}\frac{1}{\epsilon^{3}}\ell^{3}+\frac{21\pi}{16}\frac{1}{\epsilon}\ell&\frac{9}{2}&-\frac{13\pi}{12}\frac{1}{\epsilon^{3}}\ell^{3}-\frac{21\pi}{4}\frac{1}{\epsilon}\ell\end{array}$
(151)
Like their quasicanonical bulk counterparts, the Euclidean quantities obtained
here are UV-divergent, and therefore ill-defined from a low-energy effective
field theory point of view. However, if the metric itself, i.e. gravity, is
dynamical, these UV-sensitive terms can be absorbed into standard
renormalizations of the gravitational coupling constants, rendering the
Euclidean thermodynamics finite and physically meaningful. We turn to this
next.
## 8 Quantum gravitational thermodynamics
In section 7 we considered the Euclidean thermodynamics of effective field
theories on a fixed background geometry. In general the Euclidean partition
function and entropy depend on the choice of background metric; more
specifically on the background sphere radius $\ell$. Here we specialize to
field theories which include the metric itself as a dynamical field, i.e. we
consider gravitational effective field theories. We denote $Z_{\rm PI}$,
$\rho_{\rm PI}$ and $S_{\rm PI}$ by ${\cal Z}$, $\varrho$ and ${\cal S}$ in
this case:
$\displaystyle{\cal Z}=\mbox{\Large$\int$}{\cal
D}g\,\cdots\,e^{-S_{E}[g,\ldots]}\,,\qquad S_{E}[g,\ldots]=\frac{1}{8\pi
G}\int\\!\\!\sqrt{g}\,\bigl{(}\Lambda-\tfrac{1}{2}R+\cdots\bigr{)}\,.$ (152)
The geometry itself being dynamical, we have $\partial_{\ell}{\cal Z}=0$, so
(135)-(136) reproduce (1):
$\displaystyle\varrho=0\,,\quad{\cal S}=\log{\cal Z}\,,$ (153)
We will assume $d\geq 2$, but it is instructive to first consider $d=0$, i.e.
1D quantum gravity coupled to quantum mechanics on a circle. Then ${\cal
Z}=\int\frac{d\beta}{2\beta}\,{\rm Tr}\,e^{-\beta H}$, where $\beta$ is the
circle size and $H$ is the Hamiltonian of the quantum mechanical system
shifted by the 1D cosmological constant. To implement the conformal factor
contour rotation of Gibbons:1978ac implicit in (153), we pick an integration
contour $\beta=2\pi\ell+iy$ with $y\in{\mathbb{R}}$ and $\ell>0$ the
background circle radius. Then ${\cal Z}=\pi i\,{\cal N}(0)$ where ${\cal
N}(E)$ is the number of states with $H<E$. This being $\ell$-independent
implies $\varrho=0$. A general definition of microcanonical entropy is $S_{\rm
mic}(E)=\log{\cal N}(E)$. Thus, modulo the content-independent $\pi i$ factor
in ${\cal Z}$, ${\cal S}=\log{\cal Z}$ is the microcanonical entropy at zero
energy in this case.
Of course $d=0$ is very different from the general-$d$ case, as there is no
classical saddle of the gravitational action, and no horizon. For $d\geq 2$
and $\Lambda\to 0$, the path integral has a semiclassical expansion about a
round sphere saddle of radius $\ell_{0}\propto 1/\sqrt{\Lambda}$, and ${\cal
S}$ is dominated by the leading tree-level horizon entropy (2). As in the AdS-
Schwarzschild case reviewed in E.5.1, the microscopic degrees of freedom
accounting for the horizon entropy, assuming they exist, are invisible in the
effective field theory. A natural analog of the dual large-$N$ CFT partition
function on $S^{1}\times S^{d-1}$ microscopically computing the AdS-
Schwarzschild free energy may be some dual large-$N$ quantum mechanics coupled
to 1D gravity on $S^{1}$ microscopically computing the dS static patch
entropy. These considerations suggest interpreting ${\cal S}=\log{\cal Z}$ as
a macroscopic approximation to a microscopic microcanonical entropy, with the
semiclassical/low-energy expansion mapping to some large-$N$ expansion.
The one-loop corrected ${\cal Z}$ is obtained by expanding the action to
quadratic order about its sphere saddle. The Gaussian $Z_{\rm PI}^{(1)}$ was
computed in previous sections. Locality and dimensional analysis imply that
one-loop divergences are $\propto\int R^{n}$ with $2n\leq d+1$. Picking
counterterms canceling all (divergent and finite) local contributions of this
type in the limit $\ell_{0}\propto 1/\sqrt{\Lambda}\to\infty$, we get a well-
renormalized ${\cal S}=\log{\cal Z}$ to this order. Proceeding along these
lines would be the most straightforward path to the computational objectives
of this section. However, when pondering comparisons to microscopic models,
one naturally wonders what the actual physics content of the computation is.
This in turn leads to small puzzles and bigger questions, such as:
1. A natural guess would have been that the one-loop correction to the
entropy ${\cal S}$ is given by a renormalized version of the Euclidean
entropy $S_{\rm PI}^{(1)}$ (136). However (153) says it is given by a
renormalized version of the free energy $\log Z_{\rm PI}^{(1)}$. In the
examples given earlier, these two look rather different. Can these
considerations be reconciled?
2. Besides local UV contributions absorbed into renormalized coupling
constants determining the tree-level radius $\ell_{0}$, there will be
nonlocal IR vacuum energy contributions (pictorially, Hawking radiation in
equilibrium with the horizon), shifting the radius from $\ell_{0}$ to
$\bar{\ell}$ by gravitational backreaction. The effect would be small,
$\bar{\ell}=\ell_{0}+O(G)$, but since the leading-order horizon entropy is
$S(\ell)\propto\ell^{d-1}/G$, we have $S(\bar{\ell})=S(\ell_{0})+O(1)$, a
shift at the one-loop order of interest. The horizon entropy term in (153) is
${\cal S}^{(0)}=S(\ell_{0})$, apparently not taking this shift into account.
Can these considerations be reconciled?
3. At any order in the large-$\ell_{0}$ perturbative expansion,
UV-divergences can be absorbed into a renormalization of a finite number of
renormalized coupling constants, but for the result to be physically
meaningful, these must be defined in terms of low-energy physical
“observables”, invariant under diffeomorphisms and local field redefinitions.
In asymptotically flat space, one can use scattering amplitudes for this
purpose. These are unavailable in the case at hand. What replaces them?
To address these and other questions, we follow a slightly less direct path,
summarized below and explained in more detail, including examples, in
appendix I.
Free energy/quantum effective action for volume. We define an off-shell free
energy/quantum effective action $\Gamma(V)=-\log Z(V)$ for the volume, the
Legendre transform of the off-shell entropy/moment-generating function
$S(\rho)$ (footnote 16: Non-metric fields in the path integral are left
implicit. Note “off-shell” = on-shell for c.c.
$\Lambda^{\prime}=\Lambda-8\pi G\,\rho$.):
$\displaystyle S(\rho)\equiv\log\mbox{\Large$\int$}{\cal
D}g\,e^{-S_{E}[g]+\rho\int\\!\sqrt{g}}\,,\qquad\log Z(V)\equiv
S-V\rho\,,\qquad
V=\partial_{\rho}S=\bigl{\langle}\,\mbox{$\int\\!\sqrt{g}$}\,\bigr{\rangle}_{\rho}\,.$
(154)
At large $V$, the geometry semiclassically fluctuates about a round sphere.
Parametrizing the mean volume $V$ by a corresponding mean radius $\ell$ as
$V(\ell)\equiv\Omega_{d+1}\ell^{d+1}$, we have
$\displaystyle
Z(\ell)=\mbox{\Large$\int$}_{\\!\\!\\!\text{tree}}\,d\rho\,\mbox{\Large$\int$}{\cal
D}g\,e^{-S_{E}[g]+\rho(\int\\!\sqrt{g}-V(\ell))}\,,$ (155)
where $\int_{\text{tree}}d\rho$ means saddle point evaluation, i.e.
extremization. The Legendre transform (154) is the same as (137), so we get
thermodynamic relations of the same form as (135)-(137):
$\displaystyle dS=Vd\rho\,,\quad d\log
Z=-\rho\,dV\,,\qquad\rho=-\tfrac{1}{d+1}\ell\partial_{\ell}\log
Z\,/\,V\,,\quad S=\bigl{(}1-\tfrac{1}{d+1}\ell\partial_{\ell}\bigr{)}\log
Z\,.$ (156)
On-shell quantities are obtained at $\rho=0$, i.e. at the minimum $\bar{\ell}$
of the free energy $-\log Z(\ell)$:
$\varrho=\rho(\bar{\ell})=0\,,\qquad{\cal S}=S(\bar{\ell})=\log
Z(\bar{\ell})\,,\qquad\bigl{\langle}\,\mbox{$\int\\!\sqrt{g}$}\,\bigr{\rangle}=\Omega_{d+1}\bar{\ell}^{d+1}\,.$
(157)
Tree level. At tree level, (155) evaluates to
$\displaystyle\log Z^{(0)}(\ell)=-S_{E}[g_{\ell}]\,,\qquad\mbox{$g_{\ell}$ =
round $S^{d+1}$ metric of radius $\ell$}\,,$ (158)
readily evaluated for any action using
$R_{\mu\nu\rho\sigma}=(g_{\mu\rho}g_{\nu\sigma}-g_{\mu\sigma}g_{\nu\rho})/\ell^{2}$,
taking the general form
$\displaystyle\log Z^{(0)}=\frac{\Omega_{d+1}\ell^{d+1}}{8\pi
G}\bigl{(}-\Lambda+\tfrac{d(d+1)}{2}\,\ell^{-2}+z_{1}\,l_{s}^{2}\,\ell^{-4}+z_{2}\,l_{s}^{4}\,\ell^{-6}+\cdots\bigr{)}\,.$
(159)
The $z_{n}$ are $R^{n+1}$ coupling constants and $l_{s}\ll\ell$ is the length
scale of UV-completing physics. The off-shell entropy and energy density are
obtained from $\log Z^{(0)}$ as in (156).
$S^{(0)}=\frac{\Omega_{d-1}\ell^{d-1}}{4G}\bigl{(}1+s_{1}\,l_{s}^{2}\,\ell^{-2}+\cdots\bigr{)},\qquad\rho^{(0)}=\frac{1}{8\pi
G}\bigl{(}\Lambda-\tfrac{d(d-1)}{2}\,\ell^{-2}+\rho_{1}\,l_{s}^{2}\,\ell^{-4}+\cdots\bigr{)}$
(160)
where $s_{n},\rho_{n}\propto z_{n}$ and we used
$\Omega_{d+1}=\frac{2\pi}{d}\Omega_{d-1}$. The on-shell entropy and radius are
given by
$\displaystyle{\cal
S}^{(0)}=S^{(0)}(\ell_{0})\,,\qquad\rho^{(0)}(\ell_{0})=0\,,$ (161)
either solved perturbatively for $\ell_{0}(\Lambda)$ or, more conveniently,
viewed as parametrizing $\Lambda(\ell_{0})$.
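Before moving on, a quick symbolic check (ours) that the Einstein term in (159) reproduces the leading horizon entropy in (160) via the relations (156); the cosmological-constant term drops out of the entropy, as it must:

```python
# Verify S^(0) = Omega_{d-1} l^{d-1}/(4G) from the Einstein part of (159),
# using S = (1 - l d/dl /(d+1)) log Z and Omega_{d+1} = (2 pi/d) Omega_{d-1}.
import sympy as sp

l, G, Lam, d, Om = sp.symbols('l G Lambda d Omega_dm1', positive=True)
Om_dp1 = 2*sp.pi/d * Om                          # Omega_{d+1}
logZ0 = Om_dp1*l**(d + 1)/(8*sp.pi*G) * (-Lam + d*(d + 1)/(2*l**2))
S0 = sp.simplify(sp.expand(logZ0 - l*sp.diff(logZ0, l)/(d + 1)))
print(S0)                           # -> Omega_dm1*l**(d-1)/(4*G), Lambda-free
```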
One loop. At one-loop order, (155) is a by-construction tadpole-free Gaussian
path integral, (500):
$\displaystyle\log Z=\log Z^{(0)}+\log Z^{(1)}\,,\qquad\log Z^{(1)}=\log
Z^{(1)}_{\rm PI}+\log Z_{\rm ct}\,,$ (162)
with $Z_{\rm PI}^{(1)}$ as computed in sections 4-5 and $\log Z_{\rm
ct}(\ell)=-S_{E,\rm ct}[g_{\ell}]$ a polynomial counterterm. We define
renormalized coupling constants as the coefficients of the $\ell^{d+1-2n}$
terms in the $\ell\to\infty$ expansion of $\log Z$, and fix $\log Z_{\rm ct}$
by equating tree-level and renormalized coefficients of the polynomial part,
which amounts to the renormalization condition
$\displaystyle\lim_{\ell\to\infty}\partial_{\ell}\log Z^{(1)}=0\,,$ (163)
in even $d+1$ supplemented by $\log Z_{\rm
ct}(0)\equiv-\alpha_{d+1}\log(2e^{-\gamma}L/\epsilon)$, implying
$L\partial_{L}\log Z^{(0)}=\alpha_{d+1}$.
Example: 3D Einstein gravity + minimally coupled scalar (I.4.1), putting
$\nu\equiv\sqrt{m^{2}\ell^{2}-1}$,
$\displaystyle\log
Z^{(1)}=-3\log\frac{2\pi\ell}{4G}+5\log(2\pi)\,\,-\,\,\sum_{k=0}^{2}\frac{\nu^{k}}{k!}\,\frac{{\rm
Li}_{3-k}(e^{-2\pi\nu})}{(2\pi)^{2-k}}+\frac{\pi\nu^{3}}{6}\,\,-\,\,\frac{\pi
m^{3}\ell^{3}}{6}+\frac{\pi m\ell}{4}\,.$ (164)
The last two terms are counterterms. The first two are nonlocal graviton
terms. The scalar part is $O(1/m\ell)$ for $m\ell\gg 1$ but goes nonlocal at
$m\ell\sim 1$, approaching $-\log(m\ell)$ for $m\ell\ll 1$.
Defining $\rho^{(1)}$ and $S^{(1)}$ from $\log Z^{(1)}$ as in (156), and the
quantum on-shell $\bar{\ell}=\ell_{0}+O(G)$ as in (157), the quantum entropy
can be expressed in two equivalent ways, (507)-(508):
$\displaystyle{A}:\,\,{\cal
S}=S^{(0)}(\bar{\ell})+S^{(1)}(\bar{\ell})+\cdots\,,\qquad{B}:\,\,{\cal
S}=S^{(0)}(\ell_{0})+\log Z^{(1)}(\ell_{0})+\cdots$ (165)
where the dots denote terms neglected in the one-loop approximation. This
simultaneously answers questions 1 and 2 on our list, reconciling intuitive
$({A})$ and (153)-based (${B}$) expectations. To make this physically obvious,
consider the quantum static patch as two subsystems, geometry (horizon) +
quantum fluctuations (radiation), with total energy
$\propto\rho=\rho^{(0)}+\rho^{(1)}=0$. If $\rho^{(0)}=0$, the horizon entropy
is $S^{(0)}(\ell_{0})$. But here we have $\rho=0$, so the horizon entropy is
actually $S^{(0)}(\bar{\ell})=S^{(0)}(\ell_{0})+\delta S^{(0)}$, where by the
first law (156), $\delta S^{(0)}=V\delta\rho^{(0)}=-V\rho^{(1)}$. Adding the
radiation entropy $S^{(1)}$ and recalling $\log Z^{(1)}=S^{(1)}-V\rho^{(1)}$
yields ${\cal S}={A}={B}$. Thus ${A}={B}$ is just the usual small+large =
system+reservoir approximation, the horizon being the reservoir, and the
Boltzmann factor $e^{-V\rho^{(1)}}=e^{-\beta U^{(1)}}$ in $Z^{(1)}$ accounting
for the reservoir’s entropy change due to energy transfer to the system.
Viewing the quantum contributions as (Hawking) radiation has its picturesque
merits and correctly conveys their nonlocal/thermal character, e.g. ${\rm
Li}(e^{-2\pi\nu})\sim e^{-\beta m}$ for $m\ell\gg 1$ in (164), but might
incorrectly convey a presumption of positivity of $\rho^{(1)}$ and $S^{(1)}$.
Though positive for minimally coupled scalars (fig. 21), they are in fact
negative for higher spins (figs. 22, 23), due to edge and group volume
contributions. Moreover, although the negative-energy backreaction causes the
horizon to grow, partially compensating the negative $S^{(1)}$ by a positive
$\delta S^{(0)}=-V\rho^{(1)}$, the former still wins: ${\cal
S}^{(1)}\equiv{\cal S}-{\cal S}^{(0)}=S^{(1)}-V\rho^{(1)}=\log Z^{(1)}<0$.
Computational recipe and examples. For practical purposes, (B) is the more
useful expression in (165). Together with (161) computing ${\cal S}^{(0)}$,
the exact results for $Z^{(1)}_{\rm PI}$ obtained in previous sections (with
$\gamma_{0}=\sqrt{2\pi/{\cal S}^{(0)}}$, see (167) below), and the
renormalization prescription outlined above, it immediately gives
$\displaystyle{\cal S}={\cal S}^{(0)}+{\cal S}^{(1)}+\cdots\,,\qquad{\cal
S}^{(0)}=S^{(0)}(\ell_{0})\,,\qquad{\cal S}^{(1)}=\log Z^{(1)}(\ell_{0})$
(166)
in terms of the renormalized coupling constants, for general effective field
theories of gravity coupled to arbitrary matter and gauge fields.
For 3D gravity, this gives ${\cal S}={\cal S}^{(0)}-3\log{\cal
S}^{(0)}+5\log(2\pi)+O(1/{\cal S}^{(0)})$. We work out and plot several other
concrete examples in appendix I.4: 3D Einstein gravity + scalar (I.4.1, fig.
21), 3D massive spin $s$ (I.4.2, fig. 22), 2D scalar (I.4.3), 4D massive spin
$s$ (I.4.4, fig. 23), and 3D,4D,5D gravity (including higher-order curvature
corrections) (I.4.5). Table 12 in the introduction lists a few more sample
results.
Local field redefinitions, invariant coupling constants and physical
observables
Although the higher-order curvature corrections to the tree-level dS entropy
${\cal S}^{(0)}=S^{(0)}(\ell_{0})$ (160) seem superficially similar to
curvature corrections to the entropy of black holes in asymptotically flat
space Wald:1993nt ; Iyer:1994ys , there are no charges or other asymptotic
observables available here to endow them with physical meaning. Indeed, they
have no intrinsic low-energy physical meaning at all, as they can be removed
order by order in the $l_{s}/\ell$ expansion by a metric field redefinition,
bringing the entropy to pure Einstein form (2). In $Z^{(0)}(\ell)$ (159), this
amounts to setting all $z_{n}\equiv 0$ by a redefinition
$\ell\to\ell\sum_{n}c_{n}\ell^{-2n}$ (490). The value of ${\cal
S}^{(0)}=\max_{\ell\gg l_{s}}\log Z^{(0)}(\ell)$ remains of course unchanged,
providing the unique field-redefinition invariant combination of the coupling
constants $G,\Lambda\mbox{(or $\ell_{0}$)},z_{1},z_{2},\ldots$.
Related to this, as discussed in I.4.5, caution must be exercised when porting
the one-loop graviton contribution in (112) or (148): $G_{\rm N}$ appearing in
$\gamma_{0}=\sqrt{8\pi G_{\rm N}/A}$ is the algebraically defined Newton
constant (109), as opposed to $G$ defined by the Ricci scalar coefficient
$\frac{1}{8\pi G}$ in the low-energy effective action. The former is field-
redefinition invariant; the latter is not. In Einstein frame ($z_{n}=0$) the
two definitions coincide, hence in a general frame
$\displaystyle\gamma_{0}=\sqrt{2\pi/{\cal S}^{(0)}}\,.$ (167)
Since $\log{\cal S}^{(0)}=\log\frac{A}{4G}+\log(1+O(l_{s}^{2}/\ell_{0}^{2}))$,
this distinction matters only at $O(l_{s}^{2}/\ell_{0}^{2})$, however.
In $d=2$, ${\cal S}^{(0)}$ is in fact the only invariant gravitational
coupling: because the Weyl tensor vanishes identically, any 3D parity-
invariant effective gravitational action can be brought to Einstein form by a
field redefinition. In the Chern-Simons formulation of H.2, ${\cal
S}^{(0)}=2\pi\kappa$. In $d\geq 3$, the Weyl tensor vanishes on the sphere,
but not identically. As a result, there are coupling constants not picked up
by the sphere’s ${\cal S}^{(0)}=-S_{E}[g_{\ell_{0}}]$. Analogous ${\cal
S}^{(0)}_{M}\equiv-S_{E}[g_{M}]$ for different saddle geometries $g_{M}$,
approaching Einstein metrics in the limit $\Lambda\propto\ell_{0}^{-2}\to 0$,
can be used instead to probe them, and analogous ${\cal S}_{M}\equiv\log{\cal
Z}_{M}$ expanded about $g_{M}$ provide quantum observables. Section I.5
provides a few more details, and illustrates extraction of unambiguous linear
combinations of the 4D one-loop correction for 3 different $M$.
This provides the general picture we have in mind as the answer, in principle,
to question 3 on our list below (153): the tree-level ${\cal S}^{(0)}_{M}$ are
the analog of tree-level scattering amplitudes, and the analog of quantum
scattering amplitudes are the quantum ${\cal S}_{M}$.
#### Constraints on microscopic models
For pure 3D gravity, ${\cal S}^{(0)}=\frac{2\pi}{4G}\bigl{(}\ell_{0}+s_{1}\ell_{0}^{-1}+s_{2}\,\ell_{0}^{-3}+\cdots\bigr{)}$, and to one-loop order we have (531):
$\displaystyle{\cal S}={\cal S}^{(0)}-3\log{\cal
S}^{(0)}+5\log(2\pi)+\cdots\,.$ (168)
Granting171717This does not affect the one-loop conclusions below, but does affect the $c_{n}$. One could leave $l$ general. (442) with $l=0$, and taking into account that $G\equiv SO(4)$ here while $G\equiv SU(2)\times SU(2)$ there, the all-loop expansion of pure 3D gravity reads
${\cal S}={\cal S}_{0}+\log\left|\sqrt{\tfrac{4}{2+i\,{\cal
S}_{0}/{2\pi}}}\,\sin\bigl{(}\tfrac{\pi}{2+i\,{{\cal
S}_{0}}/{2\pi}}\bigr{)}\right|^{2}={\cal S}_{0}-3\log{\cal
S}_{0}+5\log(2\pi)+\mbox{$\sum_{n}$}\,c_{n}\,{\cal S}_{0}^{-2n}$ (169)
where ${\cal S}_{0}\equiv{\cal S}^{(0)}$ to declutter notation. Note all
quantum corrections are strictly nonlocal, i.e. no odd powers of $\ell_{0}$
appear, reflected in the absence of odd powers of $1/{\cal S}_{0}$.
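As a quick numerical sanity check of (169), outside the derivation proper, one can verify that the residual after subtracting ${\cal S}_{0}-3\log{\cal S}_{0}+5\log(2\pi)$ falls off as $1/{\cal S}_{0}^{2}$ with no $1/{\cal S}_{0}$ term; a minimal sketch, using Python with mpmath (our tooling choice):

```python
# Check the large-S0 behavior of the exact all-loop result (169): the residual
# after subtracting S0 - 3 log S0 + 5 log(2 pi) should fall off as 1/S0^2.
import mpmath as mp

mp.mp.dps = 30

def S_exact(S0):
    z = 2 + 1j * S0 / (2 * mp.pi)
    w = mp.sqrt(4 / z) * mp.sin(mp.pi / z)
    return S0 + mp.log(abs(w) ** 2)

for S0 in [mp.mpf(10) ** k for k in (2, 3, 4)]:
    resid = S_exact(S0) - (S0 - 3 * mp.log(S0) + 5 * mp.log(2 * mp.pi))
    # third column should approach the constant c_1 of the 1/S0^2 correction
    print(float(S0), float(resid), float(resid * S0 ** 2))
```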
Though outside the scope of this paper, let us illustrate how such results may
be used to constrain microscopic models identifying large-$\ell_{0}$ and
large-$N$ expansions in some way. Say a modeler posits a model consisting of
$2N$ spins $\sigma_{i}=\pm 1$ with $H\equiv\sum_{i}\sigma_{i}=0$. The
microscopic entropy is $S_{\rm mic}=\log{2N\choose N}=2\log 2\cdot
N-\frac{1}{2}\log(\pi N)+\sum_{n}c^{\prime}_{n}N^{1-2n}$. There is a unique
identification of ${\cal S}_{0}$ bringing this in a form with the same
analytic/locality structure as (169), to wit, ${\cal S}_{0}=\log 4\cdot
N+\sum_{n}c^{\prime}_{n}N^{1-2n}$, resulting in
$S_{\rm mic}\,=\,{\cal S}_{0}-\tfrac{1}{2}\log{\cal
S}_{0}+\log(\tfrac{\pi}{2\log
2})+\mbox{$\sum_{n}$}\,c_{n}^{\prime\prime}\,{\cal S}_{0}^{-2n}\,,$ (170)
where $c_{1}^{\prime\prime}=-\frac{1}{8}\log
2,c_{2}^{\prime\prime}=\frac{3}{64}(\log 2)^{2}+\frac{1}{48}(\log
2)^{3},\ldots$, failing to match (169) already at one loop. The model is ruled out.
A slightly more sophisticated modeler might posit $S_{\rm mic}=\log d(N)$,
where $d(N)$ is the $N$-th level degeneracy of a chiral boson on $S^{1}$. To
leading order $S_{\rm mic}\approx 2\pi\sqrt{N/6}\equiv K$. Beyond, $S_{\rm
mic}=K-a^{\prime}\log K+b^{\prime}+\sum_{n}c_{n}^{\prime}K^{-n}+O(e^{-K/2})$,
where $a^{\prime}=2$, $b^{\prime}=\log(\pi^{2}/6\sqrt{3})$ and
$c_{n}^{\prime}$ given by 10.1112/plms/s2-17.1.75 . Identifying ${\cal
S}_{0}=K+\sum_{n}c^{\prime}_{2n-1}K^{-(2n-1)}$ brings this to the form (169),
yielding $S_{\rm mic}={\cal S}_{0}-a^{\prime}\log{\cal
S}_{0}+b^{\prime}+\sum_{n}c_{n}^{\prime\prime}{\cal S}_{0}^{-2n}+O(e^{-{\cal
S}_{0}/2})$, with $c_{1}^{\prime\prime}=-\frac{5}{2}$,
$c_{2}^{\prime\prime}=\frac{37}{12}$, $\ldots$ — ruled out.
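Both rule-outs are easy to reproduce numerically from exact data. A minimal Python sketch (our tooling choice: math.lgamma for the exact binomial, and sympy's npartitions for the chiral boson degeneracy, identifying $d(N)$ with the integer partition count $p(N)$), extracting the coefficient $a$ in $S_{\rm mic}={\cal S}_{0}-a\log{\cal S}_{0}+b+\cdots$ and comparing with the gravitational one-loop value $a=3$ of (169):

```python
# Extract the log coefficient a from exact large-N data for the two toy models
# above; the gravitational prediction of (169) is a = 3.
import math
from sympy import npartitions

def log_coefficient(pairs):
    # two-point solve of  S_mic - S0 = -a*log(S0) + b  for a
    (s1, m1), (s2, m2) = pairs
    return -((m2 - s2) - (m1 - s1)) / (math.log(s2) - math.log(s1))

# spin model: S_mic = log C(2N, N), with the identification S0 ~ N log 4
spin = [(N * math.log(4), math.lgamma(2 * N + 1) - 2 * math.lgamma(N + 1))
        for N in (10**5, 10**7)]
print("spin model:   a =", log_coefficient(spin))    # -> 1/2, not 3

# chiral boson: S_mic = log p(N), with the identification S0 ~ 2 pi sqrt(N/6)
boson = [(2 * math.pi * math.sqrt(N / 6), math.log(npartitions(N)))
         for N in (10**5, 4 * 10**5)]
print("chiral boson: a =", log_coefficient(boson))   # -> 2, not 3
```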
We actually did not need the higher-loop corrections at all to rule out the
above models. In higher dimensions, or coupled to more fields, one-loop
constraints moreover become increasingly nontrivial, evident in (12). For pure
5D gravity (531),
${\cal S}={\cal S}^{(0)}-\frac{15}{2}\log{\cal
S}^{(0)}+\log(12)+\frac{27}{2}\log(2\pi)+\frac{65\,\zeta(3)}{48\,\pi^{2}}+\frac{5\,\zeta(5)}{16\,\pi^{4}}\,.$
(171)
It would be quite a miracle if a microscopic model managed to match this.
## 9 dS, AdS$\pm$, and conformal higher-spin gravity
Vasiliev higher-spin gravity theories Vasiliev:1990en ; Vasiliev:2003ev ;
Bekaert:2005vh have infinite spin range and an infinite-dimensional higher-
spin algebra, ${\mathfrak{g}}={\rm hs}({\rm so}(d+2))$, leading to divergences
in the one-loop sphere partition function formula (112) untempered by the UV
cutoff. In this section we take a closer look at these divergences. We
contrast the situation to AdS with standard boundary conditions (AdS$+$),
where the issue is entirely absent, and we point out that, on the other hand,
for AdS with alternate HS boundary conditions (AdS$-$) as well as conformal
higher-spin (CHS) theories, similar issues arise. We end with a discussion of
their significance.
### 9.1 dS higher-spin gravity
Nonminimal type A Vasiliev gravity on dSd+1 has a tower of massless spin-$s$
fields for all $s\geq 1$ and a $\Delta=d-2$ scalar. We first consider $d=3$.
The total bulk and edge characters are obtained by summing (102) and adding
the scalar, as we did for the bulk part in (63):
$\displaystyle\chi_{\rm
bulk}\,=\,2\cdot\biggl{(}\frac{q^{1/2}+q^{3/2}}{(1-q)^{2}}\biggr{)}^{2}-\frac{q}{(1-q)^{2}}\,,\qquad\chi_{\rm
edge}=2\cdot\biggl{(}\frac{q^{1/2}+q^{3/2}}{(1-q)^{2}}\biggr{)}^{2}\,.$ (172)
Quite remarkably, the bulk and edge contributions almost exactly cancel:
$\displaystyle\chi_{\rm bulk}-\chi_{\rm edge}=-\frac{q}{(1-q)^{2}}\,.$ (173)
For $d=4$ however, we see from (102) that due to the absence of overall
$q^{s}$ suppression factors, the total bulk and edge characters each diverge
separately by an overall multiplicative factor:
$\displaystyle\chi_{\rm
bulk}=\sum_{s}(2s+1)\cdot\frac{2\,q^{2}}{(1-q)^{4}}\,,\qquad\chi_{\rm
edge}=\sum_{s}\tfrac{1}{6}s(s+1)(2s+1)\cdot\frac{2\,q}{(1-q)^{2}}\,.$ (174)
This pattern persists for all $d\geq 4$, as can be seen from the explicit form
of bulk and edge characters in (359), (388), (390). For any $d$, there is
moreover an infinite-dimensional group volume factor in (112) to make sense
of, involving a divergent factor $(\ell^{d-1}/G_{\rm N})^{\dim G/2}$ and the
volume of an object of unclear mathematical existence Monnier:2014tfa .
Before we continue the discussion of what, if anything, to make of this, we
consider AdS$\pm$ and CHS theories within the same formalism. Besides
independent interest, this will make clear the issue is neither intrinsic to
the formalism, nor to de Sitter.
### 9.2 AdS$\pm$ higher-spin gravity
#### AdS characters for standard and alternate HS boundary conditions
Standard boundary conditions on massless higher spin fields $\varphi$ in
AdSd+1 lead to quantization such that spin-$s$ single-particle states
transform in a UIR of ${\rm so}(2,d)$ with primary dimension
$\Delta_{\varphi}=\Delta_{+}=s+d-2$. Higher-spin Euclidean AdS one-loop
partition functions with these boundary conditions were computed in
Giombi:2013fka ; Giombi:2014iua ; Gunaydin:2016amv ; Giombi:2016pvg ;
Skvortsov:2017ldz . In Giombi:2013yva , the Euclidean one-loop partition
function for alternate boundary conditions ($\Delta_{\varphi}=\Delta_{-}=2-s$)
was considered. In the EAdS$+$ case, the complications listed under (96) are
absent, but for EAdS$-$ close analogs do appear.
EAdS path integrals can be expressed as character integrals Sun:2020ame ;
Basile:2018zoy ; Basile:2018acb , in a form exactly paralleling the formulae
and bulk/edge picture of the present work Sun:2020ame .181818In this picture,
EAdS is viewed as the Wick-rotated AdS-Rindler wedge, with dSd static patch
boundary metric, as in Keeler:2014hba ; Keeler:2016wko . The bulk character is
$\chi\equiv{\rm tr}_{G}\,q^{iH}$, with $H$ the Rindler Hamiltonian, not the
global AdS Hamiltonian. Its $q$-expansion counts quasinormal modes of the
Rindler wedge. The one-loop results are interpreted as corrections to the
gravitational thermodynamics of the AdS-Rindler horizon Sun:2020ame ;
Keeler:2014hba ; Keeler:2016wko . The AdS analog of the dS bulk and edge
characters (85) for a massive spin-$s$ field $\varphi$ with
$\Delta_{\varphi}=\Delta_{\pm}$ is Sun:2020ame
$\displaystyle\chi^{\rm AdS\pm}_{\rm bulk,\varphi}\equiv
D_{s}^{d}\,\frac{q^{\Delta_{\pm}}}{(1-q)^{d}}\,,\qquad\chi^{\rm AdS\pm}_{\rm
edge,\varphi}\equiv D_{s-1}^{d+2}\,\frac{q^{\Delta_{\pm}-1}}{(1-q)^{d-2}}\,,$
(175)
where $\Delta_{-}=d-\Delta_{+}$. Thus, as functions of $q$,
$\displaystyle\chi_{\varphi}^{\rm dS}=\chi^{\rm AdS+}_{\varphi}+\chi^{\rm
AdS-}_{\varphi}\,.$ (176)
The AdS analog of (97) for a massless spin-$s$ field $\phi_{s}$ with gauge
parameter field $\xi_{s^{\prime}}$ is
$\displaystyle\hat{\chi}_{s}^{{\rm AdS}\pm}\equiv\chi^{\rm
AdS\pm}_{\phi}-\chi^{\rm AdS\pm}_{\xi}\,,$ (177)
where $\Delta_{\phi,+}=s^{\prime}+d-1$, $\Delta_{\xi,+}=s+d-1$,
$s^{\prime}\equiv s-1$. More explicitly, analogous to (98),
$\displaystyle\hat{\chi}^{\rm AdS+}_{{\rm bulk},s}$
$\displaystyle=\frac{D_{s}^{d}\,q^{s^{\prime}+d-1}-D_{s^{\prime}}^{d}\,q^{s+d-1}}{(1-q)^{d}}\,,$
$\displaystyle\hat{\chi}^{\rm AdS+}_{{\rm edge},s}$
$\displaystyle=\frac{D^{d+2}_{s-1}\,q^{s^{\prime}+d-2}-D^{d+2}_{s^{\prime}-1}\,q^{s+d-2}}{(1-q)^{d-2}}$
(178) $\displaystyle\hat{\chi}^{\rm AdS-}_{{\rm bulk},s}$
$\displaystyle=\frac{D_{s}^{d}\,q^{1-s^{\prime}}-D_{s^{\prime}}^{d}\,q^{1-s}}{(1-q)^{d}}\,,$
$\displaystyle\hat{\chi}^{\rm AdS-}_{{\rm edge},s}$
$\displaystyle=\frac{D_{s-1}^{d+2}\,q^{-s^{\prime}}-D_{s^{\prime}-1}^{d+2}\,q^{-s}}{(1-q)^{d-2}}\,.$
(179)
The presence of non-positive powers of $q$ in $\chi^{\rm AdS-}$ has a similar
path integral interpretation as in the dS case summarized in section 5.2. The
necessary negative mode contour rotation and zeromode subtractions are again
implemented at the character level by flipping characters. In particular the
proper $\chi_{s}$ to be used in the character formulae for EAdS$\pm$ are
$\displaystyle\chi_{s}^{\rm AdS-}=\bigl{[}\hat{\chi}_{s}^{\rm
AdS-}\bigr{]}_{+}\,,\qquad\chi_{s}^{\rm AdS+}=\bigl{[}\hat{\chi}_{s}^{\rm
AdS+}\bigr{]}_{+}=\hat{\chi}_{s}^{\rm AdS+}\,,$ (180)
with $[\hat{\chi}]_{+}$ defined as in (100). The omission of Killing tensor
zeromodes for alternate boundary conditions must be compensated by a
division by the volume of the residual gauge group $G$ generated by the
Killing tensors. Standard boundary conditions on the other hand kill these
Killing tensor zeromodes: they are not part of the dynamical, fluctuating
degrees of freedom. The group $G$ they generate acts nontrivially on the
Hilbert space as a global symmetry group.
#### AdS$+$
For standard boundary conditions, the character formalism reproduces the
original results of Giombi:2013fka ; Giombi:2014iua ; Gunaydin:2016amv ;
Giombi:2016pvg ; Skvortsov:2017ldz by two-line computations Sun:2020ame . We
consider some examples:
For nonminimal type A Vasiliev with $\Delta_{0}=d-2$ scalar boundary
conditions, dual to the free $U(N)$ model, using (178) and the scalar
$\chi_{0}=q^{d-2}/(1-q)^{d}$, the following total bulk and edge characters are
readily obtained:
$\displaystyle\chi_{\rm bulk}^{{\rm AdS}+}=\sum_{s=0}^{\infty}\chi_{{\rm
bulk},s}^{{\rm
AdS}+}=\biggl{(}\frac{q^{\frac{d}{2}-1}+q^{\frac{d}{2}}}{(1-q)^{d-1}}\biggr{)}^{2}\,,\quad\chi_{\rm
edge}^{{\rm AdS}+}=\sum_{s=0}^{\infty}\chi_{{\rm edge},s}^{{\rm
AdS}+}=\biggl{(}\frac{q^{\frac{d}{2}-1}+q^{\frac{d}{2}}}{(1-q)^{d-1}}\biggr{)}^{2}\,.$
(181)
The total bulk character takes the singleton-squared form expected from the
Flato-Fronsdal theorem Flato:1978qz . More interestingly, the edge characters
sum up to exactly the same. Thus the generally negative nature of edge
“corrections” takes on a rather dramatic form here:
$\displaystyle\chi_{\rm tot}^{{\rm AdS}+}=\chi_{\rm bulk}^{{\rm
AdS}+}-\chi_{\rm edge}^{{\rm AdS}+}=0\qquad\Rightarrow\qquad\log Z_{\rm
PI}^{{\rm AdS}+}=0\,.$ (182)
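These spin sums are elementary to verify order by order in $q$. A minimal sympy sketch (our tooling choice) for $d=3$, where $D_{s}^{3}=2s+1$ and the claimed bulk and edge totals both equal $q(1+q)^{2}/(1-q)^{4}=\bigl((q^{1/2}+q^{3/2})/(1-q)^{2}\bigr)^{2}$; truncating the spin sum is exact to the chosen order, since the spin-$s$ terms start at order $q^{s}$:

```python
# Order-by-order check of the spin sums (181) for d = 3: bulk and edge totals
# should both equal q(1+q)^2/(1-q)^4, so chi_tot = 0 as in (182).
from sympy import symbols, binomial, series

q = symbols('q')
d, order = 3, 12

def D(s, dim):   # rank-s symmetric traceless SO(dim) tensors, cf. (209)
    return binomial(s + dim - 1, dim - 1) - binomial(s + dim - 3, dim - 1)

bulk = q**(d - 2) / (1 - q)**d      # the Delta = d-2 scalar
edge = 0
for s in range(1, order + 2):       # massless spin-s characters, eq. (178)
    bulk += (D(s, d)*q**(s + d - 2) - D(s - 1, d)*q**(s + d - 1)) / (1 - q)**d
    edge += (D(s - 1, d + 2)*q**(s + d - 3) - D(s - 2, d + 2)*q**(s + d - 2)) / (1 - q)**(d - 2)

target = q*(1 + q)**2 / (1 - q)**4
assert series((bulk - target).cancel(), q, 0, order).removeO() == 0
assert series((edge - target).cancel(), q, 0, order).removeO() == 0
```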
As $Z^{{\rm AdS}+}_{\rm bulk}$ has a Rindler bulk ideal gas interpretation
analogous to the static patch ideal gas of section 2 Sun:2020ame , the exact
bulk-edge cancelation on display here is reminiscent of analogous one-loop
bulk-edge cancelations expected in string theory according to the qualitative
picture reviewed in appendix E.5.2.
For minimal type A, dual to the free $O(N)$ model, the sum yields an
expression which after rescaling of integration variables $t\to t/2$ is
effectively equivalent to the ${\rm so}(2,d)$ singleton character, which is
also the ${\rm so}(1,d)$ character of a conformally coupled ($\nu=i/2$) scalar
on $S^{d}$. Using (74), this means $Z_{\rm PI}^{{\rm AdS}+}$ equals the sphere
partition function on $S^{d}$, immediately implying the $N\to N-1$
interpretation of Giombi:2013fka ; Giombi:2014iua ; Gunaydin:2016amv ;
Giombi:2016pvg ; Skvortsov:2017ldz .
For nonminimal type A with $\Delta_{0}=2$ scalar boundary conditions, dual to
an interacting U(N) CFT, the cancelation is almost exact but not quite:
$\displaystyle\chi_{\rm tot}^{{\rm
AdS}+}=\frac{\sum_{k=2}^{d-3}\,q^{k}}{(1-q)^{d-1}}\,.$ (183)
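Indeed, relative to the $\Delta_{0}=d-2$ case (182) only the scalar character changes, so $\chi_{\rm tot}^{{\rm AdS}+}=\chi_{0,\Delta_{0}=2}-\chi_{0,\Delta_{0}=d-2}=\frac{q^{2}-q^{d-2}}{(1-q)^{d}}=\frac{q^{2}(1+q+\cdots+q^{d-5})}{(1-q)^{d-1}}$, which is (183).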
#### AdS$+$ higher-spin swampland
In the above examples it is apparent that although the spin-summed $\chi_{\rm
bulk}$ has increased effective UV-dimensionality $d^{\rm bulk}_{\rm
eff}=2d-2$, as if we summed KK modes of a compactification manifold of
dimension $d-2$, the edge subtraction collapses this back down to a net
$d_{\rm eff}=d-1$, one less than the original $d$. Correspondingly, the UV-
divergences of $Z^{(1)}_{\rm PI}$ are not those of a $d+1$ dimensional bulk-
local theory, but rather of a $d$-dimensional boundary-local theory. In fact
this peculiar property appears necessary for quantum consistency, in view of
the non-existence of a nontrivially interacting local bulk action
Sleight:2017pcz . It appears to be true for all AdS$+$ higher spin theories
with a known holographic dual Sun:2020ame , but not for all classically
consistent higher-spin theories. Thus it appears to be some kind of AdS
higher-spin “swampland” criterion:
$\displaystyle\mbox{AdS${}_{d+1}$ HS theory has holographic
dual}\qquad\Rightarrow\qquad d_{\rm eff}=d-1\,.$ (184)
Higher-spin theories violating this criterion do exist. Theories with a tower
of massless spins $s\geq 2$ and an a priori undetermined number $n$ of real
scalars can be constructed in AdS3 Vasiliev:1999ba ; Gaberdiel:2010pz .
Assuming all integer spins $s\geq 2$ are present, the total character sums up
to
$\displaystyle\chi_{\rm
tot}=\frac{2\,q^{2}}{(1-q)^{2}}-\frac{4\,q}{(1-q)^{2}}+\sum_{i=1}^{n}\frac{q^{\Delta_{i}}}{(1-q)^{3}}\,.$
(185)
As $t\to 0$, this diverges as $\chi_{\rm tot}\sim(n-2)/t^{2}+O(1/t)$. To satisfy
(184), the number of scalars must be $n=2$. This is inconsistent with the
$n=4$ AdS3 theory originally conjectured in Gaberdiel:2010pz to be dual to a
minimal model CFT2, but consistent with the amended conjecture of Chang:2011mz
; Gaberdiel:2012ku ; Gaberdiel:2012uj .
#### AdS$-$
For alternate boundary conditions, one ends up with a massless higher-spin
character formula similar to (112). The factor $\gamma^{\dim G}$ in (112) is
consistent with $\log Z^{{\rm AdS}-}_{\rm PI}\propto(G_{\rm
N})^{\frac{1}{2}\sum_{s}N^{\rm KT}_{s-1}}$ found in Giombi:2013yva . (176)
implies the massless AdS$\pm$ and dS bulk and edge characters are related as
$\displaystyle\boxed{\chi_{s}^{{\rm AdS}-}=\chi_{s}^{\rm dS}-\chi_{s}^{{\rm
AdS}+}}$ (186)
hence we can read off the appropriate flipped $\chi_{s}^{{\rm
AdS}-}=[\hat{\chi}_{s}^{{\rm AdS}-}]_{+}$ characters from our earlier explicit
results (388) and (390) for $\chi_{s}^{\rm dS}$. Just like in the dS case, the
final result involves divergent spin sums when the spin range is infinite.
### 9.3 Conformal higher-spin gravity
#### Conformal HS characters
Conformal (higher-spin) gravity theories FRADKIN1985233 have (higher-spin
extensions of) diffeomorphisms and local Weyl rescalings as gauge symmetries.
If one does not insist on a local action, a general way to construct such
theories is to view them as induced theories, obtained by integrating out the
degrees of freedom of a conformal field theory coupled to a general background
metric and other background fields. In particular one can consider a free
$U(N)$ CFTd in a general metric and higher-spin source background. For even
$d$, this results in a local action, which at least at the free level can be
rewritten as a theory of towers of partially massless fields with standard
kinetic terms Tseytlin:2013jya ; Tseytlin:2013fca . Starting from this
formulation of CHS theory on $S^{d}$ (or equivalently dSd), using our general
explicit formulae for partially massless higher-spin field characters (388)
and (390), and summing up the results, we find
$\displaystyle\boxed{\chi_{s}^{\rm CdS_{d}}=\chi_{s}^{{\rm
AdS_{d+1}}-}-\chi_{s}^{{\rm AdS_{d+1}}+}=\chi_{s}^{{\rm
dS_{d+1}}}-2\,\chi_{s}^{{\rm AdS_{d+1}}+}}$ (187)
where $\chi_{s}^{\rm CdS_{d}}$ are the CHS bulk and edge characters and the
second equality uses (186). Since we already know the explicit dS and AdS HS
bulk and edge characters, this relation also provides the explicit CHS bulk
and edge characters. For example
$\begin{array}{l|l|l|l}
d & s & \chi^{\rm CdS_{d}}_{{\rm bulk},s}\cdot(1-q)^{d} & \chi^{\rm CdS_{d}}_{{\rm edge},s}\cdot(1-q)^{d-2}\\
\hline
2 & \geq 2 & -4q^{s}(1-q) & -2\bigl{(}s^{2}q^{s-1}-(s-1)^{2}q^{s}\bigr{)}\\
3 & \geq 1 & 0 & 0\\
3 & 0 & -q(1-q) & 0\\
4 & \geq 0 & 2(2s\!+\!1)\,q^{2}+2s^{2}q^{s+3}-2(s\!+\!1)^{2}q^{s+2} & \frac{s(s+1)(2s+1)}{3}\,q+\frac{(s-1)s^{2}(s+1)}{6}\,q^{s+2}-\frac{s(s+1)^{2}(s+2)}{6}\,q^{s+1}\\
5 & \geq 0 & \frac{(s+1)(2s+1)(2s+3)}{3}\,q^{2}(1-q) & \frac{s(s+1)(s+2)(2s+1)(2s+3)}{30}\,q(1-q)
\end{array}$
(188)
The bulk $SO(1,d)$ $q$-characters $\chi^{\rm CdS_{d}}_{{\rm bulk},s}$ computed
from (187) agree with the ${\rm so}(2,d)$ $q$-characters obtained in
Beccaria:2014jxa . Edge characters were not derived in Beccaria:2014jxa , as
they have no role in the thermal $S^{1}\times S^{d-1}$ CHS partition functions
studied there.191919A priori the interpretation of the bulk characters in
(188) and those in Beccaria:2014jxa is different. Their mathematical equality
is a consequence of the enhanced ${\rm so}(2,d)$ symmetry allowing to map
$S^{d}\to{\mathbb{R}}\times S^{d-1}$.
The one-loop Euclidean path integral of the CHS theory on $S^{d}$ is given by
(112) using the bulk and edge CHS characters $\chi_{s}^{\rm CdS_{d}}$ and with
$G$ the CHS symmetry group generated by the conformal Killing tensors on
$S^{d}$ (counted by $D^{d+3}_{s-1,s-1}$). The coefficient of the log-divergent
term, the Weyl anomaly of the CHS theory, is extracted as usual, by reading
off the coefficient of the $1/t$ term in the small-$t$ expansion of the
integrand in (112), or more directly from the “naive” integrand
$\frac{1}{2t}\frac{1+q}{1-q}\,\hat{\chi}$. For example for conformal $s=2$
gravity on $S^{2}$ coupled to $D$ massless scalars, also known as bosonic
string theory in $D$ spacetime dimensions, we have $\dim
G=\sum_{\pm}D^{4}_{1,\pm 1}=6$, generating $G=SO(1,3)$, and from the above
table (188),
$\displaystyle\chi_{\rm
tot}=D\cdot\frac{1+q}{1-q}-\frac{4q^{2}}{1-q}+2(4q-q^{2})\,.$ (189)
The small-$t$ expansion of the integrand in (112) for this case is
$\displaystyle\frac{1}{2t}\frac{1+q}{1-q}\bigl{(}\chi_{\rm
tot}-12\bigr{)}\to\frac{2(D-2)}{t^{3}}+\frac{D-26}{3\,t}+\cdots\,,$ (190)
reassuringly informing us the critical dimension for the bosonic string is
$D=26$. Adding a massless $s=\frac{3}{2}$ field, we get 2D conformal
supergravity. For half-integer conformal spin $s$, $\chi_{\rm
bulk}=-4q^{s}/(1-q)$ and $\chi_{\rm
edge}=-2\bigl{(}(s-\frac{1}{2})(s+\frac{1}{2})q^{s-1}-(s-\frac{3}{2})(s-\frac{1}{2})q^{s}\bigr{)}$.
Furthermore adding $D^{\prime}$ massless Dirac spinors, the total fermionic
character is
$\displaystyle\chi_{\rm tot}^{\rm
fer}=D^{\prime}\cdot\frac{2\,q^{1/2}}{1-q}-\frac{4\,q^{3/2}}{1-q}+4\,q^{1/2}\,.$
(191)
The symmetry algebra has $\sum_{\pm}D^{4}_{\frac{1}{2},\pm\frac{1}{2}}=4$
fermionic generators, contributing negatively to $\dim G$ in (112). Putting
everything together,
$\displaystyle\frac{1}{2t}\,\frac{1+q}{1-q}\bigl{(}\chi^{\rm bos}_{\rm
tot}-2(6-4)\bigr{)}-\frac{1}{2t}\,\frac{\sqrt{q}}{1-q}\,\chi^{\rm fer}_{\rm
tot}\to\frac{2(D-D^{\prime})}{t^{3}}+\frac{2D+D^{\prime}-30}{6\,t}+\cdots\,,$
(192)
from which we read off that supersymmetry plus conformal symmetry requires $D^{\prime}=D=10$.
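Both expansions are mechanical to verify; a minimal sympy sketch (our tooling choice), transcribing only the quoted characters (189) and (191):

```python
# Verify the small-t expansions (190) and (192): the 1/t coefficients fix the
# bosonic and superstring critical dimensions.
from sympy import symbols, exp, series, solve

t, D, Dp = symbols('t D Dp')
q = exp(-t)

chi_bos = D*(1 + q)/(1 - q) - 4*q**2/(1 - q) + 2*(4*q - q**2)            # (189)
chi_fer = Dp*2*exp(-t/2)/(1 - q) - 4*exp(-3*t/2)/(1 - q) + 4*exp(-t/2)   # (191)

bos = series((1 + q)/(1 - q)*(chi_bos - 12)/(2*t), t, 0, 1).removeO()
print(solve(bos.coeff(t, -1), D))          # [26]: bosonic critical dimension

full = series((1 + q)/(1 - q)*(chi_bos - 2*(6 - 4))/(2*t)
              - exp(-t/2)/(1 - q)*chi_fer/(2*t), t, 0, 1).removeO()
print(solve([full.coeff(t, -3), full.coeff(t, -1)], [D, Dp]))   # {D: 10, Dp: 10}
```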
More systematically, the Weyl anomaly $\alpha_{d,s}$ can be read off by
expanding $\frac{1}{2t}\frac{1+q}{1-q}\,\hat{\chi}^{\rm CS^{d}}$ with
$\hat{\chi}^{\rm CS^{d}}=\hat{\chi}^{{\rm AdS_{d+1}}-}-\hat{\chi}^{{\rm
AdS_{d+1}}+}$ given by (178)-(179) for integer $s$. For example,
$\begin{array}{l|l}
d & -\alpha_{d,s}\\
\hline
2 & \frac{2(6s^{2}-6s+1)}{3}\\
4 & \frac{s^{2}(s+1)^{2}(14s^{2}+14s+3)}{180}\\
6 & \frac{(s+1)^{2}(s+2)^{2}(22s^{6}+198s^{5}+671s^{4}+1056s^{3}+733s^{2}+120s-50)}{151200}\\
8 & \frac{(s+1)(s+2)^{2}(s+3)^{2}(s+4)(150s^{8}+3000s^{7}+24615s^{6}+106725s^{5}+261123s^{4}+351855s^{3}+225042s^{2}+31710s-14560)}{2286144000}
\end{array}$
(193)
This reproduces the $d=2,4,6$ results of Tseytlin:2013fca ; Tseytlin:2013jya
and generalizes them to any $d$.
#### Physics pictures
Cartoonishly speaking, the character relation (187) translates to one-loop
partition function relations of the form $Z^{{\rm CS}^{d}}\sim Z^{{\rm
EAdS_{d+1}}-}/Z^{{\rm EAdS_{d+1}}+}$ and $Z^{S^{d+1}}\sim Z^{{\rm
CS}^{d}}\bigl{(}Z^{{\rm EAdS_{d+1}}+}\bigr{)}^{2}$. The first relation can
then be understood as a consequence of the holographic duality between AdSd+1
higher-spin theories and free CFTd vector models Tseytlin:2013fca ;
Giombi:2013yva ; Tseytlin:2013jya , while the second relation can be
understood as an expression at the Gaussian/one-loop level of
$Z^{S^{d+1}}\sim\int{\cal D}\sigma\,\bigl{|}\psi_{\rm
HH}(\sigma)\bigr{|}^{2}$, where $\psi_{\rm HH}(\sigma)=\psi_{\rm
HH}(0)\,e^{-\frac{1}{2}\sigma K\sigma+\cdots}$ is the late-time dS Hartle-
Hawking wave function, related by analytic continuation to the EAdS partition
function with boundary conditions $\sigma$ Maldacena:2002vr . The factor
$\bigl{(}Z^{{\rm EAdS}_{d+1}+}\bigr{)}^{2}$ can then be identified with the
bulk one-loop contribution to $|\psi_{\rm HH}(0)|^{2}$, and $Z^{{\rm
cnf}\,S^{d}}$ with $\int{\cal D}\sigma\,e^{-\sigma K\sigma}$, along the lines
of Giombi:2013yva . Along the lines of footnote 8, perhaps another
interpretation of the spin-summed relation (187) exists within the picture of
Alishahiha:2004md .
### 9.4 Comments on infinite spin range divergences
Let us return now to the discussion of section 9.1. Above we have seen that
for EAdS$+$, summing spin characters leads to clean and effortless computation
of the one-loop partition function. The group volume factor is absent because
the global higher-spin symmetry algebra ${\mathfrak{g}}$ generated by the
Killing tensors is not gauged. The character spin sum converges, and no
additional regularization is required beyond the UV cutoff at $t\sim\epsilon$
we already had in place. The underlying reason for this is that in AdS$+$, the
minimal energy of a particle is bounded below by its spin, hence a UV cutoff
is effectively also a spin cutoff. In contrast, for dS, AdS$-$ and CHS
theories alike, ${\mathfrak{g}}$ is gauged, leading to the group volume
division factor, and moreover, for $d\geq 4$, the quasinormal mode levels (or
energy levels for CHS on ${\mathbb{R}}\times S^{d-1}$) are infinitely
degenerate, not bounded below by spin, leading to character spin sum
divergences untempered by the UV cutoff. The geometric origin of quasinormal
modes decaying as slowly as $e^{-2T/\ell}$ for every spin $s$ in $d\geq 4$ was
explained below (361).
One might be tempted to use some form of zeta function regularization to deal
with divergent sums $\sum_{s}\chi_{s}$ such as (174), which amounts to
inserting a convergence factor $\propto e^{-\delta s}$ and discarding the
divergent terms in the limit $\delta\to 0$. This might be justified if the
discarded divergences were UV, absorbable into local counterterms, but that is
not the case here. The divergence is due to low-energy features, the infinite
multiplicity of slow-decaying quasinormal modes, analogous to the divergent
thermodynamics of an ideal gas in a box with an infinite number of different
massless particle species. Zeta function regularization would give a finite
result, but the result would be meaningless.
As discussed at the end of section 6, the Vasiliev-like202020“Vasiliev-like”
is meant only in a superficial sense here. The higher-spin algebras are rather
different Joung:2014qya . limit of the 3D HSn higher-spin gravity theory,
$n\to\infty$ with $l=0$ and ${\cal S}^{(0)}$ fixed, is strongly coupled as a
3D QFT. Unsurprisingly, the one-loop entropy “correction” ${\cal S}^{(1)}=\log
Z^{(1)}$ diverges in this limit: writing the explicit expression for the
maximal-entropy vacuum $R={\bf n}$ in (12) as a function of $\dim
G=2(n^{2}-1)$, one gets ${\cal S}^{(1)}=\dim G\cdot\log\bigl{(}\dim
G/\sqrt{{\cal S}^{(0)}}\bigr{)}+\cdots\to\infty$. The higher-spin
decomposition (459) might inspire an ill-advised zeta function regularization
along the lines of $\dim
G=2\sum_{r=1}^{\infty}2r+1=4\,\zeta(-1)+2\,\zeta(0)=-\frac{4}{3}$. This gives
${\cal S}^{(1)}=\frac{2}{3}\log{\cal S}^{(0)}+c$ with $c$ a computable
constant — a finite but meaningless answer. In fact, using (127), the all-loop
quantum correction to the entropy can be seen to vanish in the limit under
consideration, as illustrated in fig. 4. As discussed around (128), there are
more interesting $n\to\infty$ limits one can consider, taking ${\cal
S}^{(0)}\to\infty$ together with $n$. In these cases, the weakly-coupled
description is not a 3D QFT, but a topological string theory.
Although these and other considerations suggest massless higher-spin theories
with infinite spin range cannot be viewed as weakly-coupled field theories on
the sphere, one might wonder whether certain quantities might nonetheless be
computable in certain (twisted) supersymmetric versions. We did observe some
hints in that direction. One example, with details omitted, is the following.
First consider the supersymmetric AdS5 higher-spin theory dual to the 4D
${\cal N}=2$ supersymmetric free $U(N)$ model, i.e. the $U(N)$ singlet sector
of $N$ massless hypermultiplets, each consisting of two complex scalars and a
Dirac spinor. The AdS5 bulk field content is obtained from this following
Basile:2018dzi . In their notation, the hypermultiplet corresponds to the
${\rm so}(2,4)$ representation ${\rm Di}+2\,{\rm Rac}$. Decomposing $({\rm
Di}+2\,{\rm Rac})\otimes({\rm Di}+2\,{\rm Rac})$ into irreducible ${\rm
so}(2,4)$ representations gives the AdS5 free field content: four $\Delta=2$
and two $\Delta=3$ scalars, one $\Delta=3$, $S=(1,\pm 1)$ 2-form field, six
towers of massless spin-$s$ fields for all $s\geq 1$, one tower of massless
$S=(s,\pm 1)$ fields for all $s\geq 2$, one $\Delta=\frac{5}{2}$ Dirac spinor,
and four towers of massless spin $s=k+\frac{1}{2}$ fermionic gauge fields for
all $k\geq 1$. Consider now the same field content on $S^{5}$. The bulk and
edge characters are obtained paralleling the steps summarized in section 5.2,
generalized to the present field content using (92) and (93). Each individual
spin tower gives rise to a badly divergent spin sum similar to (174). However,
a remarkable conspiracy of cancelations between various bosonic and fermionic
bulk and edge contributions in the end leads to a finite, unambiguous net
integrand:212121The spin sums are performed by inserting a convergence factor
such as $e^{-\delta s}$, but the end result is finite and unambiguous when
taking $\delta\to 0$, along the lines of $\lim_{\delta\to
0}\sum_{s\in\frac{1}{2}{\mathbb{N}}}\,(-1)^{2s}(2s+1)\,e^{-\delta
s}=\frac{1}{4}$.
$\displaystyle\int\frac{dt}{2t}\biggl{(}\frac{1+q}{1-q}\,\chi_{\rm tot}^{\rm
bos}-\frac{2\sqrt{q}}{1-q}\,\chi_{\rm tot}^{\rm
fer}\biggr{)}=-\frac{3}{4}\int\frac{dt}{2t}\,\frac{1+q}{1-q}\,\frac{q}{(1-q)^{2}}\,.$
(194)
Note that the effective UV dimensionality is reduced by two in this case.
An analogous construction for $S^{4}$ starting from the 3D ${\cal N}=2$ $U(N)$
model, gives two $\Delta_{\pm}=1,2$ scalars, a $\Delta=\frac{3}{2}$ Dirac
spinor and two massless spin-$1,\frac{3}{2},2,\frac{5}{2},\ldots$ towers, as
in Sezgin:2012ag ; Hertog:2017ymy . The fermionic bulk and edge characters
cancel and the bosonic part is twice (173). In this case we moreover get a
finite and unambiguous $\dim G=\lim_{\delta\to
0}\sum_{s\in\frac{1}{2}{\mathbb{N}}}^{\infty}\,(-1)^{2s}\,2\,D^{5}_{s-1,s-1}\,e^{-\delta
s}=\frac{1}{4}$.
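For concreteness, the regularization used in these sums reduces to an elementary limit: with $u=e^{-\delta/2}$, $\sum_{s\in\frac{1}{2}{\mathbb{N}}}(-1)^{2s}(2s+1)e^{-\delta s}=\sum_{k\geq 0}(-1)^{k}(k+1)u^{k}=1/(1+u)^{2}\to\frac{1}{4}$. A minimal numerical sketch (mpmath, our choice):

```python
# Abel-type regularization of the divergent spin sum: closed form 1/(1+u)^2
# versus the explicit partial sum, both -> 1/4 as delta -> 0.
import mpmath as mp

for delta in ('0.1', '0.01', '0.001'):
    u = mp.exp(-mp.mpf(delta) / 2)
    closed = 1 / (1 + u)**2
    partial = sum((-1)**k * (k + 1) * u**k for k in range(100000))
    print(delta, float(closed), float(partial))
```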
The above observations are tantalizing, but leave several problems unresolved,
including what to make of the supergroup volume ${\rm vol}\,G$. Actually
supergroups present an issue of this kind already with a finite number of
generators, as their volume is generically zero. In the context of supergroup
Chern-Simons theory this leads to indeterminate $0/0$ Wilson loop expectation
values Mikhaylov:2014aoa . In this case the indeterminacy is resolved by a
construction replacing the Wilson loop by an auxiliary worldline quantum
mechanics Mikhaylov:2014aoa . Perhaps in this spirit, getting a meaningful
path integral on the sphere in the present context may require inserting an
auxiliary “observer” worldline quantum mechanics, with a natural action of the
higher-spin algebra on its phase space, allowing to soak up the residual gauge
symmetries.
One could consider other options, such as breaking the background isometries,
models with a finite-dimensional higher-spin algebra Boulanger:2011qt ;
Joung:2015jza ; Manvelyan:2013oua ; Brust:2016gjy , models with an
$\alpha^{\prime}$-like parameter breaking the higher-spin symmetries, or
models of a different nature, perhaps along the lines of Anninos:2017eib , or
bootstrapped bottom-up. We leave this, and more, to future work.
###### Acknowledgements.
We are grateful to Simone Giombi, Sean Hartnoll, Chris Herzog, Austin Joyce,
Igor Klebanov, Marcos Mariño, Ruben Monten, Beatrix Mühlmann, Bob Penna,
Rachel Rosen and Charlotte Sleight for discussions, explanations and past
collaborations inspiring this work. FD and DA are particularly grateful to Eva
Depoorter and Carolina Martes for near-infinite patience. FD, AL and ZS were
supported in part by the U.S. Department of Energy grant DE-SC0011941. DA is funded by the Royal Society under the grant The Atoms of a de Sitter Universe.
## Appendix A Harish-Chandra characters
### A.1 Definition of $\chi$
A central ingredient in this work is the Harish-Chandra group character for
unitary representations $R$ of Lie groups $G$,
$\displaystyle\tilde{\chi}_{R}(g)\equiv{\rm tr}\,R(g)\,,\qquad g\in G\,.$
(195)
More rigorously this should be viewed as a distribution to be integrated
against smooth test functions $f(g)$ on $G$. The smeared operators
$\int[dg]\,f(g)R(g)$ are trace-class operators, and $\tilde{\chi}_{R}(g)$ is
always a locally integrable function on $G$, analytic away from its poles
bams/1183520006 ; bams/1183525024 .
The group of interest to us is $SO(1,d+1)$, the isometry group of global
dSd+1, generated by $M_{IJ}$ as defined under (289). The representations of
$SO(1,d+1)$ were classified and their characters explicitly computed in
10.3792/pja/1195522333 ; 10.3792/pja/1195523378 ; 10.3792/pja/1195523460 . For
a recent review and an extensive dictionary between fields and
representations, see Basile:2016aen .
For our purposes in this work we only need to consider characters restricted
to group elements of the form $g=e^{-itH}$, where $H=M_{0,d+1}$ generates
global $SO(1,1)$ transformations acting as time translations $T\to T+t$ on the
southern static patch (fig. 5):
$\displaystyle\chi(t)\equiv{\rm tr}\,e^{-itH}\,.$ (196)
For example for a spin-0 UIR corresponding to a scalar field of mass
$m^{2}=\Delta(d-\Delta)$, as we will explicitly compute below, this takes the
form
$\displaystyle\chi(t)={\rm
tr}\,e^{-itH}=\frac{e^{-t\Delta}+e^{-t\bar{\Delta}}}{|1-e^{-t}|^{d}}\,,\qquad\bar{\Delta}\equiv
d-\Delta\,.$ (197)
Putting $\Delta=\frac{d}{2}+i\nu$, we get $m^{2}=(\frac{d}{2})^{2}+\nu^{2}$,
so $m^{2}>0$ if $\nu\in{\mathbb{R}}$ (principal series) or $\nu=i\mu$ with
$|\mu|<\frac{d}{2}$ (complementary series). Since
$\bar{\Delta}=d-\Delta=\frac{d}{2}-i\nu$, this implies $\chi(t)=\chi(t)^{*}$,
as follows more generally from $H^{\dagger}=H$. The absolute value signs
moreover ensure $\chi(t)=\chi(-t)$ for all $d$. The latter property holds for
any $SO(1,d+1)$ representation:
$\displaystyle\chi(-t)=\chi(t)\,.$ (198)
This follows from the fact that the $SO(1,1)$ boost generator $H=M_{0,d+1}$
can be conjugated to a boost $-H$ in the opposite direction by a 180-degree
rotation: $-H=uHu^{-1}$ for e.g. $u=e^{i\pi M_{d,d+1}}$, implying
$\chi(-t)={\rm tr}\,e^{iHt}={\rm tr}\,u\,e^{-iHt}u^{-1}={\rm
tr}\,e^{-iHt}=\chi(t)$.
### A.2 Computation of $\chi$
Here we show how characters $\chi(t)={\rm tr}\,e^{-itH}$ can be computed by
elementary means. The full characters $\chi(t,\phi)={\rm
tr}\,e^{-itH+i\phi\cdot J}$ can be computed similarly, but we will focus on
the former.
#### Simplest example: $d=1$, $s=0$
We first consider a $d=1$, spin-0 principal series representation with
$\Delta=\frac{1}{2}+i\nu$, $\nu\in{\mathbb{R}}$. This corresponds to a massive
scalar field on dS2 with mass $m^{2}=\frac{1}{4}+\nu^{2}$. This unitary
irreducible representation of $SO(1,2)$ can be realized on the Hilbert space
of square-integrable wave functions $\psi(\varphi)$ on $S^{1}$, with standard
inner product. The circle can be thought of as the future conformal boundary
of global dS2 in global coordinates (cf. (290)), which for dS2 becomes
$ds^{2}=(\cos\vartheta)^{-2}(-d\vartheta^{2}+d\varphi^{2})$. Kets
$|\varphi\rangle$ can be thought of as states produced by a boundary conformal field222222${\cal O}(\varphi)$ arises from the bulk scalar $\phi(\vartheta,\varphi)$ as $\phi(\frac{\pi}{2}-\epsilon,\varphi)\sim{\cal O}(\varphi)\,\epsilon^{\Delta}+\bar{\cal O}(\varphi)\,\epsilon^{\bar{\Delta}}$ in the infinite future $\epsilon\to 0$. ${\cal O}(\varphi)$ of dimension $\Delta=\frac{1}{2}+i\nu$ acting on an $SO(1,2)$-invariant global vacuum state $|{\rm vac}\rangle$ such as the global Euclidean vacuum:
$\displaystyle|\varphi\rangle\equiv{\cal O}(\varphi)|{\rm
vac}\rangle\,,\qquad\langle\varphi|\varphi^{\prime}\rangle=\delta(\varphi-\varphi^{\prime})\,.$
(199)
This pairing is $SO(1,2)$ invariant. Normalizable states $|\psi\rangle$ are
then superpositions
$\displaystyle|\psi\rangle=\int_{-\pi}^{\pi}d\varphi\,\psi(\varphi)\,|\varphi\rangle\,,\qquad\langle\psi|\psi\rangle=\int_{-\pi}^{\pi}d\varphi\,|\psi(\varphi)|^{2}<\infty\,.$
(200)
In conventions in which $H$, $P$ and $K$ are hermitian, the Lie algebra of
${\rm so(1,2)}$ is
$\displaystyle[H,P]=iP\,,\qquad[H,K]=-iK\,,\qquad[K,P]=2iH\,,$ (201)
the action of these generators on kets $|\varphi\rangle$ in the above
representation is
$\displaystyle H|\varphi\rangle$
$\displaystyle=i\bigl{(}\sin\varphi\,\partial_{\varphi}+\Delta\cos\varphi\bigr{)}|\varphi\rangle$
(202) $\displaystyle P|\varphi\rangle$
$\displaystyle=i\bigl{(}(1+\cos\varphi)\partial_{\varphi}-\Delta\sin\varphi\bigr{)}|\varphi\rangle$
$\displaystyle K|\varphi\rangle$
$\displaystyle=i\bigl{(}(1-\cos\varphi)\partial_{\varphi}+\Delta\sin\varphi\bigr{)}|\varphi\rangle\,.$
Note that this implies that the action of, for example, $H$ on wave functions
$\psi(\varphi)$ is given by $H|\psi\rangle=\int d\varphi\,{\cal
H}\psi(\varphi)\,|\varphi\rangle$ where ${\cal
H}\psi(\varphi)=-i\bigl{(}\sin\varphi\,\partial_{\varphi}+\bar{\Delta}\cos\varphi\bigr{)}\psi(\varphi)$,
with $\bar{\Delta}=1-\Delta=\frac{1}{2}-i\nu$. One gets simpler expressions
after conformally mapping this to planar boundary coordinates
$x=\tan\frac{\varphi}{2}$, that is to say changing basis from
$|\varphi\rangle_{S^{1}}$ to $|x\rangle_{\mathbb{R}}$, $x\in{\mathbb{R}}$,
where
$\displaystyle|x\rangle_{{\mathbb{R}}}\equiv\bigl{(}\tfrac{\partial\varphi}{\partial
x}\bigr{)}^{\Delta}\bigl{|}\varphi(x)\bigr{\rangle}_{S^{1}}=\bigl{(}\tfrac{2}{1+x^{2}}\bigr{)}^{\Delta}\,\bigl{|}2\arctan
x\bigr{\rangle}_{S^{1}}\,,\qquad\langle
x|x^{\prime}\rangle=\delta(x-x^{\prime})\,.$ (203)
Then $H,P,K$ take the familiar planar dilatation, translation and special
conformal form:
$\displaystyle H|x\rangle=i(x\partial_{x}+\Delta)|x\rangle\,\qquad
P|x\rangle=i\partial_{x}|x\rangle\,,\qquad
K|x\rangle=i(x^{2}\partial_{x}+2\Delta x)|x\rangle\,.$ (204)
In particular this makes exponentiation of $H$ easy:
$\displaystyle e^{-itH}|x\rangle=e^{t\Delta}|e^{t}x\rangle\,.$ (205)
However one has to keep in mind that planar coordinates miss a point of the
global boundary, here the point $\varphi=\pi$. This will actually turn out to
be important in the computation of the character. Let us first ignore this
though and compute
$\displaystyle\chi(t)|_{\rm planar}=\int dx\,\langle
x|e^{-itH}|x\rangle=e^{t\Delta}\int
dx\,\delta\bigl{(}x-e^{t}x\bigr{)}=e^{t\Delta}\int
dx\,\frac{\delta(x)}{|1-e^{t}|}=\frac{e^{t\Delta}}{|1-e^{t}|}\,.$
We see that the computation localizes at the point $x=0$, singled out because
it is a fixed point of $H$. Actually there is another fixed point, which we
missed here because it is exactly the point at infinity in planar coordinates.
This is clear from the global version (202): one fixed point of $H$ is at
$\varphi=0$, which maps to $x=0$ and was picked up in the above computation,
while the other fixed point is at $\varphi=\pi$, which maps to $x=\infty$ and
so was missed.
This is easily remedied though. The most straightforward way is to repeat the
computation in the global boundary basis $|\varphi\rangle$, which is sure not
to miss any fixed points. It suffices to consider an infinitesimally small
neighborhood of the fixed points. For $\varphi=y\to 0$, we get $H\approx
i(y\partial_{y}+\Delta)$, which coincides with the planar expression, while
for $\varphi=\pi+y$ with $y\to 0$, we get $H\approx-i(y\partial_{y}+\Delta)$,
which is the same except with the opposite sign. Thus we obtain
$\displaystyle\chi(t)=\int
d\varphi\,\langle\varphi|e^{-itH}|\varphi\rangle=\frac{e^{t\Delta}}{|1-e^{t}|}+\frac{e^{-t\Delta}}{|1-e^{-t}|}\,=\frac{e^{-t\Delta}+e^{-t\bar{\Delta}}}{|1-e^{-t}|}\,,$
(206)
where $\bar{\Delta}=1-\Delta=\frac{1}{2}-i\nu$, reproducing (197) for $d=1$.
For the complementary series $0<\Delta<1$, we have $\Delta^{*}=\Delta$ instead
of $\Delta^{*}=\bar{\Delta}\equiv 1-\Delta$, so the conjugation properties of
$H$, $P$, $K$ are different. As a result they are no longer hermitian with
respect to the inner product (199), but rather with respect to
$\langle\varphi|\varphi^{\prime}\rangle\propto\bigl{(}1-\cos(\varphi-\varphi^{\prime})\bigr{)}^{-\Delta}$.
However we can now define a “shadow” bra $(\varphi|\propto\int
d\varphi^{\prime}\bigl{(}1-\cos(\varphi-\varphi^{\prime})\bigr{)}^{-\bar{\Delta}}\langle\varphi^{\prime}|$
satisfying $(\varphi|\varphi^{\prime}\rangle=\delta(\varphi-\varphi^{\prime})$
and compute the trace as $\chi(t)=\int
d\varphi\,(\varphi|e^{-itH}|\varphi\rangle$. The computation then proceeds in
exactly the same way, with the same result (206).
#### General dimension and spin
The generalization to $d>1$ is straightforward. Again the trace only picks up
contributions from fixed points of $H$. The fixed point at the origin in
planar coordinates contributes $e^{t\Delta}\int
d^{d}x\,\delta^{d}(x-e^{t}x)=\frac{e^{t\Delta}}{|1-e^{t}|^{d}}\,,$ while the
fixed point at the other pole of the global boundary sphere gives a
contribution of the same form but with $t\to-t$. Together we get
$\displaystyle\chi_{0,\Delta}(t)=\frac{e^{-t\Delta}+e^{-t\bar{\Delta}}}{|1-e^{-t}|^{d}}\,,$
(207)
where $\bar{\Delta}=d-\Delta$.
For massive spin-$s$ representations, the basis merely gets some additional
$SO(d)$ spin labels, and the trace picks up a corresponding degeneracy factor,
so232323 Here $\Delta=\frac{d}{2}+i\nu$ with either $\nu\in{\mathbb{R}}$
(principal series) or $\nu=i\mu$ with $|\mu|<\frac{d}{2}$ for $s=0$ and
$|\mu|<\frac{d}{2}-1$ for $s\geq 1$ (complementary series). For $s=0$ the mass
is $m^{2}=(\frac{d}{2})^{2}+\nu^{2}=\Delta(d-\Delta)$ while for $s\geq 1$ it
is given by (79):
$m^{2}=(\tfrac{d}{2}+s-2)^{2}+\nu^{2}=(\Delta+s-2)(d-\Delta+s-2)$.
$\displaystyle\chi_{s,\Delta}(t)=D_{s}^{d}\,\frac{e^{-t\Delta}+e^{-t\bar{\Delta}}}{|1-e^{-t}|^{d}}\,,$
(208)
where $\bar{\Delta}=d-\Delta$ as before, and $D_{s}^{d}$ is the spin
degeneracy, for example $D_{s}^{3}=2s+1$. More generally for $d>2$ it is the
number of totally symmetric traceless tensors of rank $s$:
$\displaystyle D^{d}_{s}=\mbox{\large$\binom{s+d-1}{d-1}-\binom{s+d-3}{d-1}$}$
(209)
(For $d=2$ we get spin $\pm s$ irreps of $SO(2)$ with $D^{2}_{\pm s}=1$.
However both of these appear when quantizing a Fierz-Pauli spin-$s$ field.)
Explicit low-$d$ spin-$s$ degeneracies are listed in (283).
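In code form, (209) and the quoted low-$d$ values read (a trivial Python check, valid for $d>2$):

```python
# Spin degeneracies (209): number of rank-s symmetric traceless SO(d) tensors.
from math import comb

def D(s, d):
    return comb(s + d - 1, d - 1) - comb(s + d - 3, d - 1)

print([D(s, 3) for s in range(6)])   # [1, 3, 5, 7, 9, 11] = 2s+1
print([D(s, 4) for s in range(6)])   # [1, 4, 9, 16, 25, 36] = (s+1)^2
```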
The most general massive unitary representation of $SO(1,d+1)$ is labeled by
an irrep $S=(s_{1},\ldots,s_{r})$ of $SO(d)$ (cf. appendix D.1) and
$\Delta=\frac{d}{2}+i\nu$, $\nu\in{\mathbb{R}}$ (principal series) or
$\Delta=\frac{d}{2}+\mu$, $|\mu|<\mu_{\rm max}(S)\leq\frac{d}{2}$
(complementary series) 10.3792/pja/1195522333 ; 10.3792/pja/1195523378 ;
10.3792/pja/1195523460 ; Basile:2016aen . The spin-$s$ case discussed above
corresponds to $S=(s_{1},0,\ldots,0)$. The character for general $S$ is
$\displaystyle\chi_{S,\Delta}(t)=D_{S}^{d}\,\frac{e^{-t\Delta}+e^{-t\bar{\Delta}}}{|1-e^{-t}|^{d}}\,,$
(210)
where the generalized spin degeneracy factor $D_{S}^{d}$ is the dimension of
the $SO(d)$ irrep $S$, explicitly given for general $S$ in appendix D.1.
#### Massless and partially massless representations
(Partially) massless representations correspond to higher-spin gauge fields
and are in the exceptional or discrete series. These representations and their
characters $\chi(t)$ are considerably more intricate. We give the general
expression and examples in appendix G.1 for the massless case. Guided by our
path integral results of section 5, we are led to a simple recipe for
constructing these characters from their much simpler “naive” counterparts,
spelled out in (100). This generalizes straightforwardly to the partially
massless case, leading to the explicit general-$d$ formula (388).
### A.3 Importance of picking a globally regular basis
Naive evaluation of the character trace $\chi(t)={\rm tr}\,e^{-itH}$ by
diagonalization of $H$ results in nonsense. In this section we explain why,
emphasizing the importance of using a basis on which finite $SO(1,d+1)$
transformations act in a globally regular way.
#### Failure of computation by diagonalization of $H$
Naively, one might have thought the easiest way to compute $\chi(t)={\rm
tr}\,e^{-itH}$ would be to diagonalize $H$ and sum over its eigenstates. The
latter are given by $|\omega\sigma\rangle$, where $H=\omega\in{\mathbb{R}}$,
$\sigma$ labels $SO(d)$ angular momentum quantum numbers, and
$\langle\omega\sigma|\omega^{\prime}\sigma^{\prime}\rangle=\delta(\omega-\omega^{\prime})\,\delta_{\sigma\sigma^{\prime}}$.
However this produces a nonsensical result,
$\displaystyle\chi(t)={\rm
tr}\,e^{-itH}\,\,\stackrel{{\scriptstyle\text{naive}}}{{=}}\,\,\int
d\omega\sum_{\sigma}\langle\omega\sigma|e^{-itH}|\omega\sigma\rangle=2\pi\,\sum_{\sigma}\delta(0)\,\delta(t)\,\qquad\mbox{(naive)}\,,$
(211)
not even remotely resembling the correct $\chi(t)$ as computed earlier in A.2.
Our method of computation there also illuminates why this naive computation
fails. To make this concrete, let us go back to the $d=1$ scalar example with
$\Delta=\frac{1}{2}+i\nu$, $\nu\in{\mathbb{R}}$. Recalling the action of $H$
on wave functions $\psi(\varphi)$ mentioned below (202), it is straightforward
to find the wave functions $\psi_{\omega\sigma}(\varphi)$ of the eigenkets
$|\omega\sigma\rangle=\int_{-\pi}^{\pi}d\varphi\,\psi_{\omega\sigma}(\varphi)\,|\varphi\rangle$
of $H$:
$\displaystyle\psi_{\omega\sigma}(\varphi)=\frac{\Theta(\sigma\sin\varphi)}{\sqrt{2\pi}}\,|\sin\varphi|^{-\bar{\Delta}}\left|\tan\frac{\varphi}{2}\right|^{i\omega}\,,\qquad\omega\in{\mathbb{R}}\,,\quad\sigma=\pm
1\,.$ (212)
where $\Theta$ is the step function. Alternatively we can first conformally
map $S^{1}$ to the “cylinder” ${\mathbb{R}}\times S^{d-1}={\mathbb{R}}\times
S^{0}$ parametrized by $(T,\Omega)$, $T\in{\mathbb{R}}$,
$\Omega\in\\{-1,+1\\}=S^{0}$, that is to say change basis
$|\varphi\rangle_{S^{1}}\to|T\Omega\rangle_{{\mathbb{R}}\times S^{0}}$.242424
Explicitly $T=\log|\tan\frac{\varphi}{2}|$, $\Omega={\rm sign}\,\varphi$,
which analogously to the global $\to$ planar map (203) yields
$\bigl{|}T\pm\bigr{\rangle}_{{\mathbb{R}}\times S^{0}}=(\cosh
T)^{-\Delta}\bigl{|}\pm 2\arctan e^{T}\bigr{\rangle}_{S^{1}}$, satisfying
$\langle
T\Omega|T^{\prime}\Omega^{\prime}\rangle=\delta(T-T^{\prime})\,\delta_{\Omega\Omega^{\prime}}$
and $H|T\Omega\rangle=i\partial_{T}|T\Omega\rangle$. Then $H$ generates
translations of $T$, so the wave functions of $|\omega\sigma\rangle$ in this
basis are simply
$\displaystyle\psi_{\omega\sigma}(T,\Omega)=\frac{1}{\sqrt{2\pi}}\,\delta_{\Omega,\sigma}\,e^{i\omega
T}\,.$ (213)
The cylinder is the conformal boundary of the future wedge, $F$ in fig. 14
(which actually splits in two wedges at $\Omega=\pm 1$ in the case of dS2),
and the $|\omega\pm\rangle$ are the states obtained by the usual free field
quantization corresponding to the natural modes $\phi_{\omega\pm}(T,r)$ in
this patch.
It is now clear why the naive computation (211) of $\chi(t)$ in the basis
$|\omega\sigma\rangle$ fails to produce the correct result: the wave functions
$\psi_{\omega\sigma}(\varphi)$ are singular precisely at the fixed points
$\varphi=0,\pi$ of $H$ (top corners of Penrose diagram in fig. 5), which are
exactly the points at which the character trace computation of section A.2
localizes. A closely related failure would be met in the basis $|T\Omega\rangle$: $H$ acts by translating $T$, seemingly without fixed
points, oblivious to their actual presence at $T=\pm\infty$. In other words,
despite their lure as being the bases in which the action of $H$ is maximally
simple, $|T\Omega\rangle$ or its Fourier dual $|\omega\sigma\rangle$ are in
fact the worst possible choice one could make to compute the trace.
Similar observations hold in higher dimensions. The wave functions
diagonalizing $H$ take the form $\psi_{\omega\sigma}(T,\Omega)\propto
e^{i\omega T}Y_{\sigma}(\Omega)$ in ${\mathbb{R}}\times S^{d-1}$ cylinder
coordinates. Transformed to global $S^{d}$ coordinates, these are singular
precisely at the fixed points of $H$, excluded from the cylinder, making this
frame particularly ill-suited for computing ${\rm tr}\,e^{-itH}$.
#### Globally regular bases
More generally, to ensure correct computation of the full Harish-Chandra group
character $\chi_{R}(g)={\rm tr}\,R(g)$, $g\in SO(1,d+1)$, we must use a basis
on which finite $SO(1,d+1)$ transformations $g$ act in a globally nonsingular
way. This is the case for a global dSd+1 boundary basis
$|\bar{\Omega}\rangle_{S^{d}}$, $\bar{\Omega}\in S^{d}$, generalizing the
$d=1$ global $S^{1}$ basis $|\varphi\rangle_{S^{1}}$, but not for a planar
basis $|x\rangle_{{\mathbb{R}}^{d}}$ or a cylinder basis
$|T\Omega\rangle_{{\mathbb{R}}\times S^{d-1}}$. Indeed generic $SO(d+1)$
rotations of the global $S^{d}$ move the poles of the sphere, thus mapping
finite points to infinity in planar or cylindrical coordinates. This singular
behavior is inherited by the corresponding Fourier dual bases
$|p\rangle\propto\int d^{d}x\,e^{ipx}|x\rangle$ and
$|\omega\sigma\rangle\propto\int dT\,d\Omega\,e^{i\omega
T}Y_{\sigma}(\Omega)\,|T\Omega\rangle$. From a bulk point of view these are
the states obtained by standard mode quantization in the planar patch resp.
future wedge. The singular behavior is evident here from the fact that these
patches have horizons that are moved around by global $SO(d+1)$ rotations.
Naively computing $\chi(g)$ in these frames will in general give incorrect
results. More precisely the result will be wrong unless the fixed points of
$g$ lie at finite position on the corresponding conformal boundary patch.
On the other hand the normalizable dual basis $|\bar{\sigma}\rangle=\int
d\bar{\Omega}\,Y_{\bar{\sigma}}(\bar{\Omega})\,|\bar{\Omega}\rangle$ inherits
the global regularity of $|\bar{\Omega}\rangle_{S^{d}}$. Here
$Y_{\bar{\sigma}}(\Omega)$ is a basis of spherical harmonics on $S^{d}$, with
$\bar{\sigma}$ labeling the global $SO(d+1)$ angular momentum quantum numbers,
and
$\langle\bar{\sigma}|\bar{\sigma}^{\prime}\rangle=\delta_{\bar{\sigma}\bar{\sigma}^{\prime}}$.
(From the bulk point of view this is essentially the basis obtained by
quantizing the natural mode functions of the global dSd+1 metric in table
290.) Although in practice much harder than computing $\chi(t)=\int
d\bar{\Omega}\,\langle\bar{\Omega}|e^{-iHt}|\bar{\Omega}\rangle$ as in section
A.2, computing
$\displaystyle\chi(t)={\rm
tr}\,e^{-itH}=\sum_{\bar{\sigma}}\langle\bar{\sigma}|e^{-itH}|\bar{\sigma}\rangle$
(214)
gives in principle the correct result. Note that this suggests a natural UV
regularization of $\chi(t)$ for $t\to 0$, by cutting off the global $SO(d+1)$
angular momentum. For example for a scalar on dS3 with $SO(3)$ angular
momentum cutoff $L$, this would be
$\displaystyle\chi_{L}(t)\equiv\sum_{\ell=0}^{L}\langle\ell m|e^{-itH}|\ell
m\rangle\,.$ (215)
## Appendix B Density of states and quasinormal mode resonances
The review in appendix A focuses mostly on mathematical and computational
aspects of the Harish-Chandra character $\chi(t)={\rm tr}\,e^{-itH}$. Here we
focus on its physics interpretation, in particular the density of states
$\rho(\omega)$ obtained as its Fourier transform. We define this in a general
and manifestly covariant way using Pauli-Villars regularization in section 2.
Here we will not be particularly concerned with general definitions or
manifest covariance, taking a more pedestrian approach. At the end we briefly
comment on an “S-matrix” interpretation and a possible generalization of the
formalism including interactions.
In B.1, we contrast the spectral features encoded in the characters of unitary
representations of the ${\rm so}(1,d+1)$ isometry algebra of global dSd+1 with
the perhaps more familiar characters of unitary representations of the ${\rm
so}(2,d)$ isometry algebra of AdSd+1: in a sentence, the latter encodes bound
states, while the former encodes scattering states. In B.2 we explicitly
compare $\rho(\omega)$ obtained as the Fourier transform of $\chi(t)$ for dS2
to the coarse-grained eigenvalue density obtained by numerical diagonalization
of a model discretized by global angular momentum truncation, and confirm the
results match at large $N$. In B.3 we identify the poles of $\rho(\omega)$ in
the complex $\omega$ plane as scattering resonances/quasinormal modes, counted
by the power series expansion of the character. As a corollary this implies
the relation $Z_{\rm PI}=Z_{\rm bulk}$ of (68) can be viewed as a precise
version of the formal quasinormal mode expansion of $\log Z_{\rm PI}$ proposed
in Denef:2009kn .
### B.1 Characters and the density of states: dS vs AdS
We begin by highlighting some important differences in the spectrum encoded in
the characters of unitary ${\rm so}(1,d+1)$ representations furnished by
global dSd+1 single-particle Hilbert spaces and the characters of unitary
${\rm so}(2,d)$ representations furnished by global AdSd+1 single-particle
Hilbert spaces. Although the discussion applies to arbitrary representations,
for concreteness we consider the example of a scalar of mass
$m^{2}=(\frac{d}{2})^{2}+\nu^{2}$ on dSd+1. Its character as computed in (207)
is
$\displaystyle\chi_{\rm dS}(t)\equiv{\rm
tr}\,e^{-itH}=\frac{e^{-\Delta_{+}t}+e^{-\Delta_{-}t}}{|1-e^{-t}|^{d}}\,,\qquad\Delta_{\pm}=\tfrac{d}{2}\pm
i\nu\,,\qquad t\in{\mathbb{R}}.$ (216)
where ${\rm tr}$ traces over the global single-particle Hilbert space and we
recall $H=M_{0,d+1}$ is a global $SO(1,1)$ boost generator, which becomes a
spatial momentum operator in the future wedge and the energy operator in the
southern static patch (cf. fig. 14c). This is to be contrasted with the
familiar character of the unitary lowest-weight representation of a scalar of
mass $m^{2}=-(\frac{d}{2})^{2}+\mu^{2}$ on global AdSd+1 with standard
boundary conditions:
$\displaystyle\chi_{\rm AdS}(t)\equiv{\rm
tr}\,e^{-itH}=\frac{e^{-i\Delta_{+}t}}{(1-e^{-it})^{d}}\,,\qquad\Delta_{+}=\tfrac{d}{2}+\mu\,,\qquad{\rm
Im}\,t<0\,.$ (217)
Here the ${\rm so}(2)$ generator $H$ is the energy operator in global AdSd+1.
Besides the occurrence of both $\Delta_{\pm}$ in (216), another notable
difference is the absence of factors of $i$ in the exponents.
The physics content of $\chi_{\rm AdS}$ is clear: $\chi_{\rm
AdS}(-i\beta)={\rm tr}\,e^{-\beta H}$ is the single-particle partition
function at inverse temperature $\beta$ for a scalar particle trapped in the
global AdS gravitational potential well. Equivalently for ${\rm Im}\,t<0$, the
expansion
$\displaystyle\chi_{\rm
AdS}(t)=\sum_{\lambda}N_{\lambda}\,e^{-it\lambda}\,,\qquad\lambda=\Delta_{+}+n\,,\quad
n\in{\mathbb{N}}\,,$ (218)
counts normalizable single-particle states of energy $H=\lambda$, or
equivalently global normal modes of frequency $\lambda$. The corresponding
density of single-particle states is
$\displaystyle\rho_{\rm
AdS}(\omega)=\int_{-\infty}^{\infty}\frac{dt}{2\pi}\,\chi_{\rm
AdS}(t)\,e^{i\omega t}=\sum_{\lambda}N_{\lambda}\,\delta(\omega-\lambda)\,.$
(219)
Figure 10: Density of states $\rho_{\Lambda}(\omega)$ for dS3 scalars with
$\Delta=1+2i$, $\Delta=\frac{1}{2}$, $\Delta=\frac{1}{10}$, and UV cutoff
$\Lambda=100$, according to (222). The red dotted line represents the term
$2\Lambda/\pi$. The peak visible at $\Delta=\frac{1}{10}$ is due to a
resonance approaching the real axis, as explained in section B.3.
For dS, we can likewise expand the character as in (218). For $t>0$,
$\displaystyle\chi_{\rm
dS}(t)=\sum_{\lambda}N_{\lambda}\,e^{-it\lambda}\,,\qquad\lambda=-i(\Delta_{\pm}+n)=-i(\tfrac{d}{2}+n)\pm\nu\,,\quad
n\in{\mathbb{N}}\,.$ (220)
However $\lambda$ is now complex, so evidently $N_{\lambda}$ does not count
physical eigenstates of the hermitian operator $H$. Rather, as further
discussed in section B.3, it counts resonances, or quasinormal modes. The
density of physical states with $H=\omega\in{\mathbb{R}}$ is formally given by
$\displaystyle\rho_{\rm
dS}(\omega)=\int_{-\infty}^{\infty}\frac{dt}{2\pi}\,\chi_{\rm
dS}(t)\,e^{i\omega t}=\int_{0}^{\infty}\frac{dt}{2\pi}\,\chi_{\rm
dS}(t)\bigl{(}e^{i\omega t}+e^{-i\omega t}\bigr{)}\,,$ (221)
where $\omega$ can be interpreted as the momentum along the $T$-direction of
the future wedge ($F$ in fig. 14 and table 290). Alternatively for $\omega>0$
it can be interpreted as the energy in the southern static patch, as discussed
in section 2.2. A manifestly covariant Pauli-Villars regularization of the
above integral is given by (41). For our purposes here a simple
$t>\Lambda^{-1}$ cutoff suffices. For example for dS3,
$\displaystyle\rho_{\rm dS_{3},\Lambda}(\omega)$
$\displaystyle\equiv\int_{\Lambda^{-1}}^{\infty}\frac{dt}{2\pi}\,\frac{e^{-(1+i\nu)t}+e^{-(1-i\nu)t}}{(1-e^{-t})^{2}}\bigl{(}e^{i\omega
t}+e^{-i\omega t}\bigr{)}$ (222)
$\displaystyle=\frac{2\Lambda}{\pi}-\frac{1}{2}\sum_{\pm}(\omega\pm\nu)\coth\bigl{(}\pi(\omega\pm\nu)\bigr{)}\,.$
Some examples are illustrated in fig. 10. In contrast to AdS, $\rho_{\rm
dS}(\omega)$ is continuous. Indeed energy eigenkets $|\omega\sigma\rangle$ of
the static patch form a continuum of scattering states, coming out of and
going into the horizon, instead of the discrete set of bound states one gets
in the global AdS potential well. Note that although the above $\rho_{\rm
dS_{3},\Lambda}(\omega)$ formally goes negative in the large-$\omega$ limit,
it is positive within its regime of validity, that is to say for
$\omega,\nu\ll\Lambda$.
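The integral (222) and its closed form are also easy to compare directly; a minimal mpmath sketch (the parameter values are illustrative assumptions):

```python
# Numerical comparison of the cutoff integral (222) with its closed form for a
# dS3 scalar; agreement holds up to O(1/Lambda).
import mpmath as mp

nu, omega, Lam = mp.mpf(2), mp.mpf('1.3'), mp.mpf(200)

def integrand(t):
    char = (mp.exp(-(1 + 1j*nu)*t) + mp.exp(-(1 - 1j*nu)*t)) / (1 - mp.exp(-t))**2
    return mp.re(char * 2*mp.cos(omega*t)) / (2*mp.pi)

numeric = mp.quad(integrand, [1/Lam, 1, 10, mp.inf])
closed = 2*Lam/mp.pi - sum((omega + s*nu)*mp.coth(mp.pi*(omega + s*nu))
                           for s in (1, -1))/2
print(float(numeric), float(closed), float(numeric - closed))
```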
### B.2 Coarse-grained density of states in globally truncated model
Figure 11: Density of states for a $\Delta=\frac{1}{2}+i\nu$ scalar with
$\nu=2$ in dS2. The red dots show the local eigenvalue density
$\bar{\rho}_{N}(\omega)$, (225), of the truncated model with global angular
momentum cutoff $N=2000$, obtained by numerical diagonalization. The blue line
shows $\rho(\omega)$ obtained as the Fourier transform of $\chi(t)$,
explicitly (223) with $e^{-\gamma}\Lambda\approx 4000$. The plot on the right
zooms in on the IR region. The peaks are due to the proximity of quasinormal
mode poles in $\rho(\omega)$, discussed in B.3.
For a $\Delta=\frac{1}{2}+i\nu$ scalar on dS2, the density of states
regularized as in (222) is
$\displaystyle\rho(\omega)=\frac{2}{\pi}\log(e^{-\gamma}\Lambda)-\frac{1}{2\pi}\sum_{\pm,\pm}\psi\bigl{(}\tfrac{1}{2}\pm
i\nu\pm i\omega)\bigr{)}\,,$ (223)
where $\gamma$ is the Euler constant, $\psi(x)=\Gamma^{\prime}(x)/\Gamma(x)$
is the digamma function, and the sum is over the four different combinations
of signs. To ascertain that it makes physical sense to identify this as the density of states, we would like to compare it to a model with a discretized spectrum of eigenvalues $\omega$.
An efficient discretization — which does not require solving bulk equations of
motion and is quite natural from the point of view of dS-CFT approaches to de
Sitter quantum gravity Maldacena:2002vr ; Strominger:2001pn ; Anninos:2011ui
— is obtained by truncating the global dS$_{d+1}$ angular momentum $SO(d+1)$ of the
single-particle Hilbert space, considering instead of $H$ a finite-dimensional
matrix
$\displaystyle
h_{\bar{\sigma}\bar{\sigma}^{\prime}}\equiv\langle\bar{\sigma}|H|\bar{\sigma}^{\prime}\rangle\,,$
(224)
where $\bar{\sigma}$ are $SO(d+1)$ quantum numbers, as in (214). For dS2 this
is $SO(2)$ and $\bar{\sigma}=n\in{\mathbb{Z}}$, truncated e.g. by $|n|\leq N$.
The matrix $h$ is sparse and can be computed either directly using
$|n\rangle\propto\int d\varphi\,e^{in\varphi}|\varphi\rangle$ and the explicit
form of $H$ given in (202), or algebraically.
The algebraic way goes as follows. A normalizable basis $|n\rangle$ of the
global dS2 scalar single-particle Hilbert space can be constructed from the
$SO(1,2)$ conformal algebra (201), using a basis of generators $L_{0}$,
$L_{\pm}$ related to $H$, $K$ and $P$ as $L_{0}=\frac{1}{2}(P+K)$,
$L_{\pm}=\frac{1}{2}(P-K)\pm iH$. Then $L_{0}$ is the global angular momentum
generator $i\partial_{\phi}$ along the future boundary $S^{1}$ and $L_{\pm}$
are its raising and lowering operators. In some suitable normalization of the
$L_{0}$ eigenstates $|n\rangle$, we have $L_{0}|n\rangle=n|n\rangle$,
$L_{\pm}|n\rangle=(n\pm\Delta)|n\pm 1\rangle$. Cutting off the single-particle
Hilbert space at $-N<n\leq N$,\footnote{The asymmetric choice here allows us to use the simple coarse-graining prescription (225) and keep this discussion short. A symmetric choice $|n|\leq N$ would lead to an enhanced ${\mathbb{Z}}_{2}$ symmetry and two families of eigenvalues distinguished by their ${\mathbb{Z}}_{2}$ parity, inducing persistent microstructure in the level spacing. The most efficient way to proceed then is to compute $\bar{\rho}_{N,\pm}(\omega)$ as the inverse level spacing for these two families separately and then add the contributions together as interpolated functions. For dS3 with $SO(3)$ cutoff $\ell\leq N$ one similarly gets $2N+1$ families of eigenvalues, labeled by $SO(2)$ angular momentum $m$, and one can proceed analogously. Alternatively, one can compute $\bar{\rho}_{N}(\omega)$ directly by binning and counting, but this requires larger $N$.} the operator $H=\frac{i}{2}(L_{-}-L_{+})$ acts as a sparse $2N\times 2N$ matrix on the truncated basis $|n\rangle$.
A minimally coarse-grained density of states can then be defined as the
inverse spacing of its eigenvalues $\omega_{i}$, $i=1,\ldots,2N$, obtained by
numerical diagonalization:
$\displaystyle\bar{\rho}_{N}(\omega_{i})\equiv\frac{2}{\omega_{i+1}-\omega_{i-1}}.$
(225)
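As a rough illustration of this procedure (our own sketch, not the authors' code), one can build the truncated matrix algebraically from $L_{\pm}|n\rangle=(n\pm\Delta)|n\pm 1\rangle$ and extract $\bar{\rho}_{N}$; we discard the tiny imaginary parts of the eigenvalues produced by the truncation, consistent with the real spectrum seen in fig. 11:

```python
import numpy as np

N, nu = 200, 2.0            # the text uses N = 2000; smaller N for a quick run
Delta = 0.5 + 1j * nu
n = np.arange(-N + 1, N + 1)            # basis labels, -N < n <= N
dim = 2 * N

H = np.zeros((dim, dim), dtype=complex)
for k in range(dim - 1):
    H[k, k + 1] = 0.5j * (n[k + 1] - Delta)   # (i/2) L_- lowers n by one
    H[k + 1, k] = -0.5j * (n[k] + Delta)      # -(i/2) L_+ raises n by one

omega = np.sort(np.linalg.eigvals(H).real)    # imaginary parts are negligible
rho_bar = 2.0 / (omega[2:] - omega[:-2])      # eq. (225), evaluated at omega[1:-1]
```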
The continuum limit corresponds to $N\to\infty$ in the discretized model, and
to $\Lambda\to\infty$ in (223). To compare to (223), we adjust $\Lambda$, in
the spirit of renormalization, to match the density of states at some scale
$\omega$, say $\omega=0$. The results of this comparison for $\nu=2$, $N=2000$ are shown in fig. 11. They match remarkably well in the regime where they should, i.e. well below the UV cutoff scale.
Figure 12: Comparison of $d=1$ character $\chi(t)$ defined in (216) (blue) to
the coarse-grained discretized character $\bar{\chi}_{N,\delta}(t)$ defined in
(226) (red), with $\delta=0.1$ and other parameters as in fig. 11. The plot on the right shows a wider range of $t$; the plot in the middle a smaller range of $t$ but a larger range of $\chi$.
We can make a similar comparison directly at the (UV-finite) character level.
The discrete character is $\sum_{i}e^{-i\omega_{i}t}$, which is a wildly
oscillating function. At first sight this seems very different from the
character $\chi(t)={\rm tr}\,e^{-iHt}$ in (221). However to properly compare
the two, we should coarse grain this at a small but finite resolution
$\delta$. We do this by convolution with a Gaussian kernel, that is to say we
consider
$\displaystyle\bar{\chi}_{N,\delta}(t)\equiv\frac{1}{\sqrt{2\pi}\delta}\int_{-\infty}^{\infty}dt^{\prime}\,e^{-(t-t^{\prime})^{2}/2\delta^{2}}\,\sum_{i}e^{-i\omega_{i}t^{\prime}}=\sum_{i}e^{-it\omega_{i}-\delta^{2}\omega_{i}^{2}/2}\,.$
(226)
A comparison of $\bar{\chi}_{N,\delta}$ to $\chi$ is shown in fig. 12 for
$\delta=0.1$. The match is nearly perfect for $|t|$ not too large and not too
small. For small $t$, $\bar{\chi}_{N,\delta}(t)$ caps off at a finite value, the number of eigenvalues with $|\omega_{i}|\lesssim 1/\delta$, while
$\chi(t)\sim 1/|t|\to\infty$. The approximation gets better here when $\delta$
is made smaller. For larger values of $t$, $\bar{\chi}_{N,\delta}(t)$ starts
showing some oscillations again. These can be eliminated by increasing
$\delta$, at the cost of accuracy at smaller $t$. In the $N\to\infty$ limit,
the discretized approximation gets increasingly better over increasingly large
intervals of $t$, with $\lim_{\delta\to
0}\lim_{N\to\infty}\bar{\chi}_{N,\delta}(t)=\chi(t)$.
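A minimal sketch of this smearing (reusing `omega` and `nu` from the previous snippet; the continuum curve written below is the $d=1$ scalar character implied by the expansion (220)):

```python
import numpy as np

delta = 0.1
t = np.linspace(0.05, 20.0, 400)
chi_bar = np.array([np.exp(-1j * ti * omega - 0.5 * (delta * omega)**2).sum()
                    for ti in t])                      # eq. (226)
chi_cont = 2 * np.exp(-t / 2) * np.cos(nu * t) / (1 - np.exp(-t))
# chi_bar.real should track chi_cont for t neither too small nor too large
```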
Note that there is no reason to expect that an arbitrary discretization scheme will converge to $\chi(t)$ or $\rho(\omega)$. For example, it is not clear that a brick-wall discretization along the lines described in section E.3 would. On the other
hand, the convergence of the above global angular momentum cutoff scheme to
the continuum $\chi(t)$ was perhaps to be expected, given (214) and the
discussion preceding it.
### B.3 Resonances and quasinormal mode expansion
Figure 13: Plot of $|\rho(\omega)|$ in complex $\omega$-plane corresponding to
the dS3 examples of fig. 10, that is $\Delta_{\pm}=\{1+2i,1-2i\}$, $\{\frac{1}{2},\frac{3}{2}\}$, $\{0.1,1.9\}$, and $2\Lambda/\pi\approx 64$. Lighter is larger, with plot range $58\text{ (black)}<|\rho|<67\text{ (white)}$. Resonance poles are visible at $\omega=\mp i(\Delta_{\pm}+n)$,
$n\in{\mathbb{N}}$.
Substituting the expansion (220) of the dS character,
$\displaystyle\chi(t)=\sum_{\lambda}N_{\lambda}\,e^{-it\lambda}\qquad(t>0)\,,$
(227)
into (221), $\rho(\omega)=\frac{1}{2\pi}\int_{0}^{\infty}dt\,\chi(t)\,(e^{i\omega t}+e^{-i\omega t})$, and using $\int_{0}^{\infty}dt\,e^{-it(\lambda\mp\omega)}=\frac{1}{i(\lambda\mp\omega)}$ (convergent since ${\rm Im}\,\lambda<0$ for the $\lambda$ in (220)), we can formally express the density of states as
$\displaystyle\rho(\omega)=\frac{1}{2\pi i}\sum_{\lambda}N_{\lambda}\Bigl{(}\frac{1}{\lambda-\omega}+\frac{1}{\lambda+\omega}\Bigr{)}\,.$ (228)
From this we read off that $\rho(\omega)$ analytically continued to the
complex plane has poles at $\omega=\pm\lambda$ which for massive
representations means $\omega=\mp i(\Delta_{\pm}+n)$. This can also be checked
from explicit expressions such as the dS3 scalar density of states (222),
illustrated in fig. 13. These values of $\omega$ are precisely the frequencies
of the (anti-)quasinormal field modes in the static patch, that is to say
modes with purely ingoing/outgoing boundary conditions at the horizon, regular
in the interior. If we think of the normal modes as scattering states, the
quasinormal modes are to be thought of as scattering resonances. Indeed the
poles of $\rho(\omega)$ are related to the poles/zeros of the static patch
$S$-matrix $S(\omega)$, cf. (229) below. Thus we see the coefficients
$N_{\lambda}$ in (227) count resonances (or quasinormal modes), rather than
states (or normal modes) as in AdS. This expresses at the level of characters
the observations made in Ng:2012xp . It holds for any $SO(1,d+1)$
representation, including massless representations, as explored in more depth
in Sun:2020sgn (see also appendix G.1). Some corresponding quasinormal mode
expansions of bulk thermodynamic quantities are given in (56) and (59), and
related there to the quasinormal mode expansion of Denef:2009kn for scalar
and spinor path integrals.
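These pole locations are easy to exhibit numerically from the closed form (222); the following sketch (parameters chosen to mimic fig. 13) evaluates $|\rho(\omega)|$ on a complex grid:

```python
import numpy as np

Lambda, nu = 100.0, 2.0
x = np.linspace(-4.0, 4.0, 401)
y = np.linspace(-4.0, -0.01, 200)       # lower half plane, avoiding the real axis
w = x[None, :] + 1j * y[:, None]

rho = 2 * Lambda / np.pi - 0.5 * ((w + nu) / np.tanh(np.pi * (w + nu))
                                  + (w - nu) / np.tanh(np.pi * (w - nu)))
# |rho| blows up near w = -i(Delta_pm + n); for nu = 2: w = 2 - i, -2 - i, 2 - 2i, ...
```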
#### “S-matrix” formulation
The appearance of resonance poles in the analytically continued density of
states is well-known in quantum mechanical scattering off a fixed potential
$V$. They are directly related to the poles/zeros in the S-matrix $S(\omega)$
at energy $\omega$ through the relation\footnote{An elementary exposition can be found e.g. in chapter 39 of chaosbook.}
$\displaystyle\rho(\omega)-\rho_{0}(\omega)=\frac{1}{2\pi
i}\,\frac{d}{d\omega}\,{\rm tr}\log S(\omega)\,,$ (229)
where $\rho_{0}(\omega)$ is the density of states at $V=0$.
Using the explicit form of the dS2 dimension-$\Delta$ scalar static patch mode
functions (316) $\phi^{\Delta}_{\omega\ell}(r,T)$, expanding these for
$r=:\tanh X\to 1$ as
$\displaystyle\phi^{\Delta}_{\omega\ell}(r)\to
A^{\Delta}_{\ell}(\omega)\,e^{-i\omega(T+X)}+B^{\Delta}_{\ell}(\omega)\,e^{-i\omega(T-X)}\,,$
(230)
and defining $S^{\Delta}_{\ell}(\omega)\equiv
B^{\Delta}_{\ell}(\omega)/A^{\Delta}_{\ell}(\omega)$, one can check that
$\rho^{\Delta}(\omega)$ as obtained in (223) satisfies
$\displaystyle\rho^{\Delta}(\omega)-\rho_{0}(\omega)=\frac{1}{2\pi
i}\,\frac{d}{d\omega}\sum_{\ell=0,1}\log S^{\Delta}_{\ell}(\omega)\,,$ (231)
where $\rho_{0}(\omega)=\frac{1}{\pi}(\psi(i\omega)+\psi(-i\omega))+{\rm
const.}$ does not depend on $\Delta$. This can be viewed as a rough analog of
(229), although the interpretation of $\rho_{0}(\omega)$ in the present
setting is not clear to us. Similar observations can be made in higher
dimensions.
In PhysRev.187.345 , a general (flat space) $S$-matrix formulation of
statistical mechanics for interacting QFTs was developed. In this formulation,
the canonical partition function is expressed as
$\displaystyle\log Z-\log Z_{0}=\frac{1}{2\pi i}\int dE\,e^{-\beta
E}\,\frac{d}{dE}\,\bigl{[}{\rm Tr}\log S(E)\bigr{]}_{c}\,,$ (232)
where the subscript $c$ indicates restriction to connected diagrams (where
“connected” is defined with the rule that particle permutations are
interpreted as interactions PhysRev.187.345 ). Combined with the above
observations, this hints at a possible generalization of our free QFT results
to interacting theories.
## Appendix C Evaluation of character integrals
The most straightforward way of UV-regularizing character integrals is to
simply cut off the $t$-integral at some small $t=\epsilon$. However to compare
to the standard heat kernel (or spectral zeta function) regularization for
Gaussian Euclidean path integrals Vassilevich:2003xt , it is useful to have
explicit results in the latter scheme. In this appendix we give an efficient
and general recipe to compute the exact heat kernel-regularized one-loop
Euclidean path integral, with regulator $e^{-\epsilon^{2}/4\tau}$ as in (66),
requiring only the unregulated character formula as input. For concreteness we
consider the scalar case in the derivation, but because the scalar character
$\chi_{0}(t)$ provides the basic building block for all other characters
$\chi_{S}(t)$, the final result will be applicable in general. We spell out
the derivation in some detail, and summarize the final result together with
some examples in section C.2. Application to the massless higher-spin case is
discussed in section C.3, where we work out the exact one-loop Euclidean path
integral for Einstein gravity on $S^{4}$ as an example. In section C.4 we
consider different regularizations, such as the simple $t>\epsilon$ cutoff.
### C.1 Derivation
As shown in section 3, the scalar Euclidean path integral regularized as
$\displaystyle\log
Z_{\epsilon}=\int_{0}^{\infty}\frac{d\tau}{2\tau}\,e^{-\frac{\epsilon^{2}}{4\tau}}\,F_{D}(\tau)\,,\qquad
F_{D}(\tau)\equiv{\rm Tr}\,e^{-\tau
D}=\sum_{n}D_{n}^{d+2}\,e^{-\left(n+\frac{d}{2}+i\nu\right)\left(n+\frac{d}{2}-i\nu\right)\tau}\,,$
(233)
where $D=-\nabla^{2}+\frac{d^{2}}{4}+\nu^{2}$, can be written in character
integral form as
$\displaystyle\log Z_{\epsilon}$
$\displaystyle=\int_{\epsilon}^{\infty}\frac{dt}{2\sqrt{t^{2}-\epsilon^{2}}}\sum_{n}D_{n}^{d+2}\Bigl{(}e^{-(n+\frac{d}{2})t-i\nu\sqrt{t^{2}-\epsilon^{2}}}+e^{-(n+\frac{d}{2})t+i\nu\sqrt{t^{2}-\epsilon^{2}}}\Bigr{)}$
(234)
$\displaystyle=\int_{\epsilon}^{\infty}\frac{dt}{2\sqrt{t^{2}-\epsilon^{2}}}\,\,\frac{1+e^{-t}}{1-e^{-t}}\,\frac{e^{-\frac{d}{2}t-i\nu\sqrt{t^{2}-\epsilon^{2}}}+e^{-\frac{d}{2}t+i\nu\sqrt{t^{2}-\epsilon^{2}}}}{(1-e^{-t})^{d}}\,.$
(235)
Putting $\epsilon=0$ we recover the formal (UV-divergent) character formula
$\displaystyle\log Z_{\epsilon=0}$
$\displaystyle=\int_{0}^{\infty}\frac{dt}{2t}\,F_{\nu}(t)\,,$ $\displaystyle
F_{\nu}(t)$
$\displaystyle\equiv\sum_{n}D_{n}^{d+2}\Bigl{(}e^{-(n+\frac{d}{2}+i\nu)t}+e^{-(n+\frac{d}{2}-i\nu)t}\Bigr{)}=\frac{1+e^{-t}}{1-e^{-t}}\,\frac{e^{-(\frac{d}{2}+i\nu)t}+e^{-(\frac{d}{2}-i\nu)t}}{(1-e^{-t})^{d}}\,.$
(236)
To evaluate (235), we split the integral into UV and IR parts, each of which
can be evaluated in closed form in the limit $\epsilon\to 0$.
#### Separation into UV and IR parts
The separation of the integral in UV and IR parts is analogous to the usual
procedure in heat kernel regularization, where one similarly separates out the
UV part of the $\tau$ integral by isolating the leading terms in the $\tau\to
0$ heat kernel expansion
$\displaystyle F_{D}(\tau):={\rm Tr}\,e^{-\tau
D}\,\to\,\sum_{k=0}^{d+1}\alpha_{k}\,\tau^{-(d+1-k)/2}=:F_{D}^{\rm
uv}(\tau)\,.$ (237)
Introducing an infinitesimal IR cutoff $\mu\to 0$, we may write $\log
Z_{\epsilon}=\log Z^{\rm uv}_{\epsilon}+\log Z^{\rm ir}$ where
$\displaystyle\log Z^{\rm
uv}_{\epsilon}\equiv\int_{0}^{\infty}\frac{d\tau}{2\tau}\,e^{-\frac{\epsilon^{2}}{4\tau}}\,F_{D}^{\rm
uv}(\tau)\,e^{-\mu^{2}\tau}\,,\quad\log Z^{\rm
ir}\equiv\int_{0}^{\infty}\frac{d\tau}{2\tau}\bigl{(}F_{D}(\tau)-F_{D}^{\rm
uv}(\tau)\bigr{)}\,e^{-\mu^{2}\tau}\,.$ (238)
Dropping the UV regulator in the IR integral is allowed because all UV
divergences have been removed by the subtraction. The factor
$e^{-\mu^{2}\tau}$ serves as an IR regulator needed for the separate integrals
when $F^{\rm uv}$ has a term $\frac{\alpha_{d+1}}{2\tau}\neq 0$, that is to
say when $d+1$ is even. The resulting $\log\mu$ terms cancel out of the sum at
the end. Evaluating this using the specific UV regulator of (233) gives
$\displaystyle\log
Z_{\epsilon}=\frac{1}{2}\zeta_{D}^{\prime}(0)+\alpha_{d+1}\log\bigl{(}\tfrac{2}{e^{\gamma}\epsilon}\bigr{)}+\frac{1}{2}\sum_{k=0}^{d}\alpha_{k}\,\Gamma\bigl{(}\tfrac{d+1-k}{2}\bigr{)}\left(\tfrac{2}{\epsilon}\right)^{d+1-k}\,,$
(239)
where $\zeta_{D}(z)={\rm
Tr}\,D^{-z}=\frac{1}{\Gamma(z)}\int\frac{d\tau}{\tau}\,\tau^{z}\,{\rm
Tr}\,e^{-\tau D}$ is the zeta function of $D$ and $\alpha_{d+1}=\zeta_{D}(0)$.
We can apply the same idea to the square-root regulated character formula
(235) for $Z_{\epsilon}$. The latter is obtained from the simpler integrand of
the formal character formula (236) for $Z_{\epsilon=0}$ by dividing it by
$r(\epsilon,t)\equiv\sqrt{t^{2}-\epsilon^{2}}/t$ and replacing $\nu$ by $\nu
r(\epsilon,t)$:
$\displaystyle\log
Z_{\epsilon=0}=\int_{0}^{\infty}\frac{dt}{2t}\,F_{\nu}(t)\qquad\Rightarrow\qquad\log
Z_{\epsilon}=\int_{\epsilon}^{\infty}\frac{dt}{2rt}\,F_{r\nu}(t)\,,\qquad
r\equiv\frac{\sqrt{t^{2}-\epsilon^{2}}}{t}\,.$ (240)
Note that $0<r<1$ for all $t>\epsilon$, $r\sim{\cal O}(1)$ for $t\sim\epsilon$
and $r\to 1$ for $t\gg\epsilon$. Therefore, given the $t\to 0$ behavior of the
integrand in the formal character formula for $Z_{\epsilon=0}$,
$\displaystyle\frac{1}{2t}F_{\nu}(t)\to\frac{1}{t}\sum_{k=0}^{d+1}b_{k}(\nu)\,t^{-(d+1-k)}=:\frac{1}{2t}F_{\nu}^{\rm
uv}(t)\,,\qquad b_{k}(\nu)=\sum_{\ell=0}^{k}b_{k\ell}\,\nu^{\ell}\,,$ (241)
we get the $t\sim\epsilon\to 0$ behavior of the integrand for the exact
$Z_{\epsilon}$:
$\displaystyle\frac{1}{2rt}\,F_{r\nu}(t)\to\frac{1}{2rt}\,F^{\rm
uv}_{r\nu}(t)=\frac{1}{rt}\sum_{k,\ell}b_{k\ell}\,\nu^{\ell}\,r^{\ell}\,t^{-(d+1-k)}\,.$
(242)
Thus we can separate $\log Z_{\epsilon}=\log\tilde{Z}^{\rm
uv}_{\epsilon}+\log\tilde{Z}^{\rm ir}$, with
$\displaystyle\log\tilde{Z}^{\rm
uv}_{\epsilon}\equiv\int_{\epsilon}^{\infty}\frac{dt}{2rt}\,F^{\rm
uv}_{r\nu}(t)\,e^{-\mu t}\,,\qquad\log\tilde{Z}^{\rm
ir}\equiv\int_{0}^{\infty}\frac{dt}{2t}\,\bigl{(}F_{\nu}(t)-F^{\rm
uv}_{\nu}(t)\bigr{)}e^{-\mu t}\,.$ (243)
Again the limit $\mu\to 0$ is understood. We were allowed to put $\epsilon=0$
in the IR part because it is UV finite.
#### Evaluation of UV part
Using the expansion (242), the UV part can be evaluated explicitly as
$\displaystyle\log\tilde{Z}^{\rm uv}_{\epsilon}=\frac{1}{2}\sum_{\ell,k\leq
d}b_{k\ell}\,B\bigl{(}\tfrac{d+1-k}{2},\tfrac{\ell+1}{2}\bigr{)}\,\nu^{\ell}\,\epsilon^{-(d+1-k)}-\sum_{\ell}b_{d+1,\ell}\bigl{(}H_{\ell}-\tfrac{1}{2}H_{\ell/2}+\log(\tfrac{e^{\gamma}\,\epsilon\,\mu}{2})\bigr{)}\nu^{\ell}$
(244)
where $B(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$ is the Euler beta
function and $H_{x}=\gamma+\frac{\Gamma^{\prime}(1+x)}{\Gamma(1+x)}$ which for
integer $x$ is the $x$-th harmonic number
$H_{x}=1+\frac{1}{2}+\cdots+\frac{1}{x}$. For example for $d=3$, we get
$\displaystyle\log\tilde{Z}^{\rm
uv}_{\epsilon}=\tfrac{4}{3}\,\epsilon^{-4}-\tfrac{4\nu^{2}+1}{12}\,\epsilon^{-2}-\bigl{(}\tfrac{\nu^{4}}{9}+\tfrac{\nu^{2}}{24}\bigr{)}-\bigl{(}\tfrac{\nu^{4}}{12}+\tfrac{\nu^{2}}{24}-\tfrac{17}{2880}\bigr{)}\log\bigl{(}\tfrac{e^{\gamma}\epsilon\mu}{2}\bigr{)}\,.$
(245)
This gives an explicit expression for the part of $\log Z$ denoted ${\rm
Pol}(\Delta)$ in Denef:2009kn , without having to invoke an independent
computation of the heat kernel coefficients. Indeed, turning this around, by
comparing (244) to (239), we can express the heat kernel coefficients
$\alpha_{k}$ explicitly in terms of the character coefficients $b_{k,\ell}$.
In particular the Weyl anomaly coefficient is simply given by the coefficient
$b_{d+1}=\sum_{\ell}b_{d+1,\ell}\nu^{\ell}$ of the $1/t$ term in the integrand
of the formal character formula (236). More generally,
$\displaystyle\alpha_{k}=\sum_{\ell}\frac{\Gamma(\frac{\ell+1}{2})}{2^{d+1-k}\Gamma(\frac{d+1-k+\ell+1}{2})}\,b_{k\ell}\,\nu^{\ell}\,.$
(246)
For example for $d=3$, this becomes $\alpha_{0}=\frac{1}{12}b_{00}$,
$\alpha_{2}=\frac{1}{2}b_{20}+\frac{\nu^{2}}{6}b_{22}$ and $\alpha_{4}=b_{4}$.
From the small-$t$ expansion $\frac{1}{2t}F_{\nu}(t)\to\sum_{k}b_{k}\,t^{k-5}$
in (236) we read off $b_{0}=2$, $b_{2}=-\frac{1}{12}-\nu^{2}$ and
$b_{4}=-\frac{17}{2880}+\frac{1}{24}\nu^{2}+\frac{1}{12}\nu^{4}$. Thus
$\alpha_{0}=\frac{1}{6}$, $\alpha_{2}=-\frac{1}{24}-\frac{1}{6}\nu^{2}$ and
$\alpha_{4}=-\frac{17}{2880}+\frac{1}{24}\nu^{2}+\frac{1}{12}\nu^{4}$.
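These coefficients can be extracted mechanically; a small sympy sketch (our own check) reproduces the $d=3$ values just quoted:

```python
import sympy as sp

t, nu = sp.symbols('t nu', positive=True)
# (1/2t) F_nu(t) for a d = 3 scalar, with e^{-i nu t} + e^{i nu t} = 2 cos(nu t)
F = ((1 + sp.exp(-t)) / (1 - sp.exp(-t))
     * 2 * sp.exp(-sp.Rational(3, 2) * t) * sp.cos(nu * t)
     / (1 - sp.exp(-t))**3)
expansion = sp.series(F / (2 * t), t, 0, 0).removeO()
for k in (0, 2, 4):
    print(k, sp.simplify(expansion.coeff(t, k - 5)))
# expect b0 = 2, b2 = -1/12 - nu**2, b4 = -17/2880 + nu**2/24 + nu**4/12
```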
#### Evaluation of IR part
As we explain momentarily, the IR part can be evaluated as
$\displaystyle\log\tilde{Z}^{\rm
ir}=\frac{1}{2}\zeta_{\nu}^{\prime}(0)+b_{d+1}\log\mu\,,\qquad\zeta_{\nu}(z)\equiv\frac{1}{\Gamma(z)}\int_{0}^{\infty}\frac{dt}{t}\,t^{z}\,F_{\nu}(t)\,,$
(247)
where like for the spectral zeta function $\zeta_{D}(z)$, the “character zeta
function” $\zeta_{\nu}(z)$ is defined by the above integral for $z$
sufficiently large and by analytic continuation for $z\to 0$. This zeta
function representation of $\log Z^{\rm ir}$ follows from the following
observations. If we define $\zeta^{\rm
ir}_{\nu}(z)\equiv\frac{1}{\Gamma(z)}\int_{0}^{\infty}\frac{dt}{t}\,t^{z}\bigl{(}F_{\nu}(t)-F_{\nu}^{\rm
uv}(t)\bigr{)}\,e^{-\mu t}$, then since the integral remains finite for $z\to
0$, while $\Gamma(z)\sim 1/z$ and $\partial_{z}(1/\Gamma(z))\to 1$, we
trivially have $\frac{1}{2}\partial_{z}\zeta^{\rm
ir}_{\nu}(z)|_{z=0}={\log\tilde{Z}^{\rm ir}}$. Moreover for $z$ sufficiently
large we have in the limit $\mu\to 0$ that $\frac{1}{2}\zeta^{\rm
uv}(z)\equiv\frac{1}{\Gamma(z)}\int_{0}^{\infty}\frac{dt}{2t}\,t^{z}F^{\rm
uv}_{\nu}(t)\,e^{-\mu t}=b_{d+1}\mu^{-z}$, so upon analytic continuation we
have ${1/2}\partial_{z}\zeta^{\rm uv}(z)|_{z=0}=-b_{d+1}\log\mu$, and (247)
follows.
In contrast to the spectral zeta function, the character zeta function can
straightforwardly be evaluated in terms of Hurwitz zeta functions. Indeed,
denoting $\Delta_{\pm}=\frac{d}{2}\pm i\nu$, we have
$F_{D}(t)=\sum_{n}Q(n)\,e^{-t(n+\Delta_{+})(n+\Delta_{-})}$ where the spectral
degeneracy $Q(n)$ is some polynomial in $n$, and
$\zeta_{D}(z)=\sum_{n=0}^{\infty}Q(n)\,\bigl{(}(n+\Delta_{+})(n+\Delta_{-})\bigr{)}^{-z}$,
which is quite tricky to evaluate, whereas
$F_{\nu}(t)=\sum_{n}Q(n)\bigl{(}e^{-t(n+\Delta_{+})}+e^{-t(n+\Delta_{-})}\bigr{)}$,
and we can immediately express the associated character zeta function as a
finite sum of Hurwitz zeta functions
$\zeta(z,\Delta)=\sum_{n=0}^{\infty}(n+\Delta)^{-z}$:
$\displaystyle\zeta_{\nu}(z)=\sum_{\pm}\sum_{n=0}^{\infty}Q(n)(n+\Delta_{\pm})^{-z}=\sum_{\pm}Q(\hat{\delta}-\Delta_{\pm})\,\zeta(z,\Delta_{\pm})\,.$
(248)
Here $\hat{\delta}$ is the unit $z$-shift operator acting as
$\hat{\delta}^{n}\zeta(z,\Delta)=\zeta(z-n,\Delta)$; for example if
$Q(n)=n^{2}$ we have
$Q(\hat{\delta}-\Delta)\,\zeta(z,\Delta)=(\hat{\delta}^{2}-2\Delta\hat{\delta}+\Delta^{2})\zeta(z,\Delta)=\zeta(z-2,\Delta)-2\Delta\zeta(z-1,\Delta)+\Delta^{2}\zeta(z,\Delta)$.
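The shift-operator prescription is straightforward to implement. The following sketch (assuming sympy and mpmath; `mp.zeta(s, a, 1)` denotes $\partial_{s}\zeta(s,a)$) evaluates $Q(\hat{\delta}-\Delta)\,\zeta^{\prime}(0,\Delta)$ for a given polynomial $Q$ and real $\Delta$:

```python
import sympy as sp
import mpmath as mp

def Q_shift_zeta_prime(Q, Delta):
    """Q(delta_hat - Delta) zeta'(0, Delta), with the rule
    delta_hat**n zeta'(0, Delta) -> zeta'(-n, Delta)."""
    x = sp.symbols('x')
    poly = sp.Poly(sp.expand(Q(x - Delta)), x)
    total = mp.mpf(0)
    for (n,), c in poly.terms():
        total += float(c) * mp.zeta(-n, float(Delta), 1)  # c * zeta'(-n, Delta)
    return total

# the text's example Q(n) = n^2, here at Delta = 2:
# zeta'(-2, 2) - 4 zeta'(-1, 2) + 4 zeta'(0, 2)
print(Q_shift_zeta_prime(lambda m: m**2, 2))
```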
### C.2 Result and examples
#### Result
Altogether we conclude that given a formal character integral formula
$\displaystyle\log Z_{\rm PI}=\int_{0}^{\infty}\frac{dt}{2t}\,F_{\nu}(t)\,,$
(249)
for a field corresponding to a dS$_{d+1}$ irrep of dimension $\frac{d}{2}+i\nu$,
with IR and UV expansions
$\displaystyle
F_{\nu}(t)=\sum_{\Delta}\sum_{n=0}^{\infty}P_{\Delta}(n)\,e^{-(n+\Delta)t}\,,\qquad\frac{1}{2t}F_{\nu}(t)=\frac{1}{t}\sum_{k=0}^{d+1}b_{k}(\nu)\,t^{-(d+1-k)}\,+{\cal
O}(t^{0})\,,$ (250)
where $b_{k}(\nu)=\sum_{\ell}b_{k\ell}\,\nu^{\ell}$, we obtain the exact
$Z_{\rm PI}$ with heat kernel regulator $e^{-\epsilon^{2}/4\tau}$ as
$\boxed{\begin{aligned} \log Z_{{\rm
PI},\epsilon}=&\frac{1}{2}\sum_{\Delta}P_{\Delta}(\hat{\delta}-\Delta)\,\zeta^{\prime}(0,\Delta)-\sum_{\ell=0}^{d+1}b_{d+1,\ell}\bigl{(}H_{\ell}-\tfrac{1}{2}H_{\ell/2}\bigr{)}\nu^{\ell}+b_{d+1}(\nu)\log(2e^{-\gamma}/\epsilon)\\!\\\
&+\frac{1}{2}\sum_{k=0}^{d}\sum_{\ell=0}^{k}b_{k\ell}\,B\bigl{(}\tfrac{d+1-k}{2},\tfrac{\ell+1}{2}\bigr{)}\,\nu^{\ell}\,\epsilon^{-(d+1-k)}\,.\end{aligned}}$
(251)
Here $B(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$,
$H_{x}=\gamma+\frac{\Gamma^{\prime}(1+x)}{\Gamma(1+x)}$, which for integer $x$
is the $x$-th harmonic number $H_{x}=1+\frac{1}{2}+\cdots+\frac{1}{x}$, and
$\hat{\delta}$ is the unit shift operator acting on the first argument of the
Hurwitz zeta function $\zeta(z,\Delta)$: the polynomial
$P_{\Delta}(\hat{\delta}-\Delta)$ is to be expanded in powers of
$\hat{\delta}$, setting
$\hat{\delta}^{n}\zeta^{\prime}(0,\Delta)\equiv\zeta^{\prime}(-n,\Delta)$.
Finally the heat kernel coefficients are
$\displaystyle\alpha_{k}=\sum_{\ell}\frac{\Gamma(\frac{\ell+1}{2})}{2^{d+1-k}\Gamma(\frac{d+1-k+\ell+1}{2})}\,b_{k\ell}\,\nu^{\ell}\,.$
(252)
If we are only interested in the finite part of $\log Z$, only the first three terms in (251) matter. Note that the third term and the second term ${\cal M}_{\nu}\equiv\sum_{\ell}b_{d+1,\ell}\bigl{(}H_{\ell}-\tfrac{1}{2}H_{\ell/2}\bigr{)}\nu^{\ell}$ are in general nonvanishing for even $d+1$. By comparing (251) to (239), say in
the scalar case discussed earlier, we see that
$\zeta_{D}^{\prime}(0)=\zeta_{\nu}^{\prime}(0)+2{\cal M}_{\nu}$. Thus $2{\cal
M}_{\nu}$ can be thought of as correcting the formal factorization
$\sum_{n}\log(n+\Delta_{+})(n+\Delta_{-})=\sum_{n}\log(n+\Delta_{+})+\sum_{n}\log(n+\Delta_{-})$
in zeta function regularization. For this reason ${\cal M}_{\nu}$ is called
the multiplicative “anomaly”, as reviewed in Dowker:2014xca . The above thus
generalizes the explicit formulae in Dowker:2014xca for ${\cal M}_{\nu}$ to
fields of arbitrary representation content.
#### Examples
1. A scalar on $S^{2}$ ($d=1$) with $\Delta_{\pm}=\frac{1}{2}\pm i\nu$ has $F_{\nu}(t)=\frac{1+e^{-t}}{1-e^{-t}}\frac{e^{-\Delta_{+}t}+e^{-\Delta_{-}t}}{1-e^{-t}}$ so the IR and UV expansions are $F_{\nu}(t)=\sum_{\pm}\sum_{n=0}^{\infty}(2n+1)e^{-(\Delta_{\pm}+n)t}$ and $\frac{1}{2t}\,F_{\nu}(t)=\frac{2}{t^{3}}+\frac{\frac{1}{12}-\nu^{2}}{t}+{\cal O}(t^{0})$. Therefore according to (251)
$\displaystyle\log Z_{\rm PI,\epsilon}=$
$\displaystyle\sum_{\Delta=\frac{1}{2}\pm
i\nu}\Bigl{(}\zeta^{\prime}(-1,\Delta)-(\Delta-\tfrac{1}{2})\zeta^{\prime}(0,\Delta)\Bigr{)}+\nu^{2}+\bigl{(}\tfrac{1}{12}-\nu^{2}\bigr{)}\log\bigl{(}2\,e^{-\gamma}/\epsilon\bigr{)}+\frac{2}{\epsilon^{2}}\,.$
(253)
The heat kernel coefficients are obtained from (252) as $\alpha_{0}=1$ and
$\alpha_{2}=\frac{1}{12}-\nu^{2}$.
2. For a scalar on $S^{3}$, $F_{\nu}(t)=\sum_{\pm}\sum_{n=0}^{\infty}(n+1)^{2}e^{-(\Delta_{\pm}+n)t}$, $\frac{1}{2t}F_{\nu}(t)\to\frac{2}{t^{4}}-\frac{\nu^{2}}{t^{2}}+{\cal O}(t^{0})$, so
$\displaystyle\log Z_{\rm PI,\epsilon}=$
$\displaystyle\sum_{\pm}\Bigl{(}\tfrac{1}{2}\zeta^{\prime}(-2,1\pm i\nu)\mp
i\nu\zeta^{\prime}(-1,1\pm i\nu)-\tfrac{1}{2}\nu^{2}\zeta^{\prime}(0,1\pm
i\nu)\Bigr{)}-\frac{\pi\nu^{2}}{4\epsilon}+\frac{\pi}{2\epsilon^{3}}\,.$ (254)
The heat kernel coefficients are $\alpha_{0}=\frac{\sqrt{\pi}}{4}$,
$\alpha_{2}=-\frac{\sqrt{\pi}}{4}\nu^{2}$. In particular for a conformally
coupled scalar, i.e. $\Delta=\frac{1}{2},\frac{3}{2}$ or equivalently
$\nu=i/2$, we get for the finite part the familiar result $\log Z_{\rm
PI}=\frac{3\zeta(3)}{16\pi^{2}}-\frac{\log(2)}{8}$. For $\Delta=1$, i.e.
$\nu=0$, we get $\log Z_{\rm PI}=-\frac{\zeta(3)}{4\pi^{2}}$. Notice that the finite part looks quite different from (50) obtained by contour integration. Nevertheless they are in fact the same function; a numerical check of the conformally coupled case is sketched after these examples.
3. A more interesting example is the massive spin-$s$ field on $S^{4}$ with $\Delta_{\pm}=\frac{3}{2}\pm i\nu$. In this case, (83) combined with (329) or equivalently (84) gives
$F_{\nu}=F_{\rm bulk}-F_{\rm edge}$ with
$\displaystyle F_{\rm bulk}(t)$ $\displaystyle=\sum_{\Delta=\frac{3}{2}\pm
i\nu}\sum_{n=-1}^{\infty}D^{3}_{s}D^{5}_{n}\,e^{-(n+\Delta)t}=D_{s}^{3}\,\frac{1+e^{-t}}{1-e^{-t}}\frac{e^{-(\frac{3}{2}+i\nu)t}+e^{-(\frac{3}{2}-i\nu)t}}{(1-e^{-t})^{3}}\,,\qquad\,$
(255) $\displaystyle F_{\rm edge}(t)$
$\displaystyle=\sum_{\Delta=\frac{1}{2}\pm
i\nu}\sum_{n=-1}^{\infty}D^{5}_{s-1}D^{3}_{n+1}\,e^{-(n+\Delta)t}=D_{s-1}^{5}\,\frac{1+e^{-t}}{1-e^{-t}}\frac{e^{-(\frac{1}{2}+i\nu)t}+e^{-(\frac{1}{2}-i\nu)t}}{(1-e^{-t})}\,,$
(256)
where $D_{p}^{3}=2p+1$, $D^{5}_{p}=\frac{1}{6}(2p+3)(p+2)(p+1)$. In particular
note that with $g_{s}\equiv D_{s}^{3}=2s+1$, we have
$D_{s-1}^{5}=\frac{1}{24}g_{s}(g_{s}^{2}-1)$. The small-$t$ expansions are
$\displaystyle\tfrac{1}{2t}F_{\rm bulk}(t)$ $\displaystyle\to
g_{s}\Bigl{(}2\,t^{-5}-\bigl{(}\nu^{2}+\tfrac{1}{12}\bigr{)}t^{-3}+\bigl{(}\tfrac{\nu^{4}}{12}+\tfrac{\nu^{2}}{24}-\tfrac{17}{2880}\bigr{)}t^{-1}+{\cal
O}(t^{0})\Bigr{)}$ (257) $\displaystyle\tfrac{1}{2t}F_{\rm edge}(t)$
$\displaystyle\to\tfrac{1}{24}g_{s}(g_{s}^{2}-1)\Bigl{(}2\,t^{-3}+\bigl{(}\tfrac{1}{12}-\nu^{2}\bigr{)}t^{-1}+{\cal
O}(t^{0})\Bigr{)}\,.$ (258)
Thus the exact partition function for a massive spin-$s$ field is
$\displaystyle\log Z_{\rm PI,\epsilon}$
$\displaystyle=g_{s}\sum_{\Delta=\frac{3}{2}\pm
i\nu}\Bigl{(}\tfrac{1}{6}\zeta^{\prime}(-3,\Delta)\mp\tfrac{1}{2}i\nu\zeta^{\prime}(-2,\Delta)-\bigl{(}\tfrac{1}{2}\nu^{2}+\tfrac{1}{24}\bigr{)}\zeta^{\prime}(-1,\Delta)\pm
i\bigl{(}\tfrac{1}{24}\nu+\tfrac{1}{6}\nu^{3}\bigr{)}\zeta^{\prime}(0,\Delta)\Bigr{)}$
$\displaystyle\quad-\tfrac{1}{24}g_{s}(g_{s}^{2}-1)\sum_{\Delta=\frac{1}{2}\pm
i\nu}\Bigl{(}\zeta^{\prime}(-1,\Delta)\mp
i\nu\zeta^{\prime}(0,\Delta)\Bigr{)}\,-\tfrac{1}{24}g_{s}^{3}\nu^{2}-\tfrac{1}{9}g_{s}\nu^{4}$
(259)
$\displaystyle\quad+\Bigl{(}g_{s}^{3}\bigl{(}\tfrac{1}{24}\nu^{2}-\tfrac{1}{288}\bigr{)}+g_{s}\bigl{(}\tfrac{1}{12}\nu^{4}-\tfrac{7}{2880}\bigr{)}\Bigr{)}\log(2\,e^{-\gamma}/\epsilon)-\bigl{(}\tfrac{1}{12}g_{s}^{3}+\tfrac{1}{3}g_{s}\nu^{2}\bigr{)}\epsilon^{-2}+\tfrac{4}{3}g_{s}\epsilon^{-4}\,.$
Finally the heat kernel coefficients are
$\displaystyle\alpha_{0}=\tfrac{1}{6}g_{s}\,,\qquad\alpha_{2}=-\tfrac{1}{24}g_{s}^{3}-\tfrac{1}{6}g_{s}\nu^{2}\,,\qquad\alpha_{4}=g_{s}^{3}\bigl{(}\tfrac{1}{24}\nu^{2}-\tfrac{1}{288}\bigr{)}+g_{s}\bigl{(}\tfrac{1}{12}\nu^{4}-\tfrac{7}{2880}\bigr{)}\,.$
(260)
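As announced in example 2, the identification of the finite part of (254) at $\nu=i/2$ with $\frac{3\zeta(3)}{16\pi^{2}}-\frac{\log 2}{8}$ can be checked numerically; a minimal mpmath sketch:

```python
import mpmath as mp

nu = 0.5j                                    # conformally coupled: nu = i/2
lhs = mp.mpc(0)
for s in (+1, -1):
    a = complex(1 + s * 1j * nu)             # zeta argument 1 +/- i nu = 1/2 or 3/2
    lhs += (0.5 * mp.zeta(-2, a.real, 1)
            - s * 1j * nu * mp.zeta(-1, a.real, 1)
            - 0.5 * nu**2 * mp.zeta(0, a.real, 1))
rhs = 3 * mp.zeta(3) / (16 * mp.pi**2) - mp.log(2) / 8
print(lhs, rhs)                              # both ~ -0.06381
```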
#### Single-mode contributions
Contributions from single path integral modes and contributions of single
quasinormal modes are of use in some of our derivations and applications.
These are essentially special cases of the above general results, but for
convenience we collect some explicit formulae here:
$\bullet$ Path integral single-mode contributions: For our choice of heat-
kernel regulator $e^{-\epsilon^{2}/4\tau}$, the contribution to $\log Z_{{\rm
PI},\epsilon}$ from a single bosonic eigenmode with eigenvalue $\lambda$ is
$\displaystyle
I_{\lambda}=\int_{0}^{\infty}\frac{d\tau}{2\tau}\,e^{-{\epsilon^{2}}/{4\tau}}\,e^{-\tau\lambda}=K_{0}(\epsilon\sqrt{\lambda})\to-\frac{1}{2}\log\frac{\lambda}{M^{2}}\,,\qquad
M\equiv\frac{2e^{-\gamma}}{\epsilon}\,.$ (261)
Different regulator insertions lead to a similar result in the limit
$\epsilon\to 0$, with $M=c/\epsilon$ for some regulator-dependent constant
$c$. A closely related formula is obtained for the contribution from an
individual term in the sum (234) or equivalently in the IR expansion of (250),
which amounts to computing (249) with $F_{\nu}(t)\equiv e^{-\rho t}$,
$\rho=a\pm i\nu$. The small-$t$ expansion is
$\frac{1}{2t}F_{\nu}(t)=\frac{1}{2t}+{\cal O}(t^{0})$, so the UV part is given
by the log term in (251) with coefficient $\frac{1}{2}$, and the IR part is
$\frac{1}{2}\zeta_{\nu}^{\prime}(0)=-\frac{1}{2}\log\rho$ as in (247). Thus
$\displaystyle I^{\prime}_{\rho}=\int_{0}^{\infty}\frac{dt}{2t}\,e^{-\rho
t}\to-\frac{1}{2}\log\frac{\rho}{M}\,,\qquad
M=\frac{2e^{-\gamma}}{\epsilon}\,,$ (262)
where the integral is understood to be regularized as in (234),
$I^{\prime}_{\rho}=\int_{\epsilon}^{\infty}\frac{dt}{2\sqrt{t^{2}-\epsilon^{2}}}\,e^{-ta-i\nu\sqrt{t^{2}-\epsilon^{2}}}$,
left implicit here. The similarities between (261) and (262) are of course no
accident, since in our setup, the former splits into the sum of two integrals
of the latter type: writing $\lambda=a^{2}+\nu^{2}=(a+i\nu)(a-i\nu)$, we have
$I_{\lambda}=I^{\prime}_{a+i\nu}+I^{\prime}_{a-i\nu}$.
$\bullet$ Quasinormal mode contributions: Considering a character quasinormal mode expansion $\chi(t)=\sum_{r}N_{r}\,e^{-r|t|}$ as in (14), the IR contribution from a single bosonic/fermionic QNM is
$\displaystyle\int_{0}^{\infty}\frac{dt}{2t}\frac{1+e^{-t}}{1-e^{-t}}\,e^{-r\,t}\biggr{|}_{\rm
IR}=\log\frac{\Gamma(r+1)}{\mu^{r}\sqrt{2\pi
r}}\,,\qquad-\int_{0}^{\infty}\frac{dt}{2t}\frac{2e^{-t/2}}{1-e^{-t}}\,e^{-r\,t}\biggr{|}_{\rm
IR}=-\log\frac{\Gamma(r+\frac{1}{2})}{\mu^{r}\sqrt{2\pi}}$ (263)
$\bullet$ Harmonic oscillator: The character of a $d=0$ scalar of mass $\nu$
is $\chi(t)=e^{-i\nu t}+e^{i\nu t}$, hence
$\displaystyle\log Z_{\rm
PI,\epsilon}=\int_{0}^{\infty}\frac{dt}{2t}\,\frac{1+e^{-t}}{1-e^{-t}}\,\bigl{(}e^{-i\nu
t}+e^{i\nu
t}\bigr{)}=\frac{\pi}{\epsilon}-\log\bigl{(}e^{\pi\nu}-e^{-\pi\nu}\bigr{)}\,.$
(264)
The finite part gives the canonical bosonic harmonic oscillator thermal
partition function ${\rm Tr}\,e^{-\beta
H}=\sum_{n}e^{-\beta\nu(n+\frac{1}{2})}=\bigl{(}e^{\beta\nu/2}-e^{-\beta\nu/2}\bigr{)}^{-1}$
at $\beta=2\pi$. The fermionic version is
$\displaystyle\log Z_{\rm
PI,\epsilon}=-\int_{0}^{\infty}\frac{dt}{2t}\,\frac{2e^{-t/2}}{1-e^{-t}}\,\bigl{(}e^{-i\nu
t}+e^{i\nu
t}\bigr{)}=-\frac{\pi}{\epsilon}+\log\bigl{(}e^{\pi\nu}+e^{-\pi\nu}\bigr{)}\,.$
(265)
### C.3 Massless case
Here we give a few more details on how to use (251) to explicitly evaluate
$Z_{\rm PI}$ in the massless case, and work out the exact $Z_{\rm PI}$ for
Einstein gravity on $S^{4}$ as an example.
Our final result for the massless one-loop $Z_{\rm PI}=Z_{G}\cdot Z_{\rm
char}$ is given by (112):
$\displaystyle Z_{\rm PI}=i^{-P}\,\frac{\gamma^{\rm dimG}}{{\rm
vol}{(G)}_{{\rm c}}}\cdot\exp\int^{\times}\frac{dt}{2t}\,F\,,\qquad
F=\frac{1+q}{1-q}\Bigl{(}\bigl{[}\hat{\chi}_{\rm
bulk}\bigr{]}_{+}-\bigl{[}\hat{\chi}_{\rm edge}\bigr{]}_{+}-2\dim
G\Bigr{)}\,,$ (266)
where for $s=2$ gravity $\gamma=\sqrt{\frac{8\pi G_{\rm N}}{A_{d-1}}}$, $P=d+3$, $G=SO(d+2)$, and ${\rm vol}{(G)}_{{\rm c}}$ is given by (287).
$\bullet$ UV part: As always, the coefficient of the log-divergent term simply
equals the coefficient of the $1/t$ term in the small-$t$ expansion of the
integrand in (266). For the other UV terms in (251) (including the
“multiplicative anomaly”), a problem might seem to be that we need a
continuously variable dimension parameter $\Delta=\frac{d}{2}+i\nu$, whereas
massless fields, and our explicit formulae for
$\hat{\chi}\to[\hat{\chi}]_{+}$, require fixed integer dimensions. This
problem is easily solved, as the UV part can actually be computed from the
original naive character formula (364):
$\displaystyle\log Z_{\rm PI}\bigr{|}_{\rm
UV}=\int\frac{dt}{2t}\,\hat{F}\Bigr{|}_{\rm
UV}\,,\qquad\hat{F}=\frac{1+q}{1-q}\bigl{(}\hat{\chi}_{\rm
bulk}-\hat{\chi}_{\rm edge}\bigr{)}\,.$ (267)
Indeed since $\hat{F}\to F=\{\hat{F}\}_{+}$ in (366) affects just a finite
number of terms $c_{k}q^{k}\to c_{k}q^{-k}$, it does not alter the small-$t$
(UV) part of the integral. Moreover
$\hat{\chi}_{s}=\hat{\chi}_{s,\nu_{\phi}}-\hat{\chi}_{s,\nu_{\xi}}$, where
$\hat{\chi}_{s,\nu}$ is a massive spin-$s$ character. Thus the UV part may be
obtained simply by combining the results of (251) for general $\nu$ and $s$,
substituting the values $\nu_{\phi}$, $\nu_{\xi}$ set by (95).
$\bullet$ IR part: The IR part is the $\zeta^{\prime}$ part of (251), obtained
from the $q$-expansion of $F(q)$ in (266). This can be found in general by
using
$\displaystyle\frac{1+q}{1-q}\,\frac{q^{\Delta}}{(1-q)^{k}}=\sum_{n=0}^{\infty}P(n)\,q^{n+\Delta}\,,\qquad
P(n)=D_{n}^{k+2}\,,$ (268)
with $D_{n}^{k+2}$ the polynomial given in (209). For $k=0$, (263) is useful.
In particular, using the $\int^{\times}$ prescription (372), the IR
contribution from the last term in (266) is obtained by considering the $r\to
0$ limit of the bosonic formula in (263):
$\displaystyle\int^{\times}\frac{dt}{2t}\,\frac{1+q}{1-q}\bigl{(}-2\dim
G\bigr{)}\biggr{|}_{\rm IR}=\dim G\cdot\log(2\pi)\,.$ (269)
#### Example: Einstein gravity on $S^{4}$
As a simple application, let us compute the exact one-loop Euclidean path
integral for pure gravity on $S^{4}$. In this case $G=SO(5)$, $\dim G=10$,
$d=3$ and $s=2$. From (95) we read off $i\nu_{\phi}=\frac{3}{2}$,
$i\nu_{\xi}=\frac{5}{2}$, and from (102) we get
$\displaystyle\chi_{\rm bulk}=\bigl{[}\hat{\chi}_{\rm
bulk}\bigr{]}_{+}=\frac{10\,q^{3}-6\,q^{4}}{(1-q)^{3}}\,,\qquad\chi_{\rm
edge}=\bigl{[}\hat{\chi}_{\rm
edge}\bigr{]}_{+}=\frac{10\,q^{2}-2\,q^{3}}{1-q}\,.$ (270)
The small-$t$ expansion of the integrand in (266) is
$\frac{1}{2t}F=4\,t^{-5}-\frac{47}{3}\,t^{-3}-\frac{571}{45}\,t^{-1}+O(t^{0})$.
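This expansion is easily verified symbolically from (270); a short sympy sketch (our own check):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
q = sp.exp(-t)
chi_bulk = (10 * q**3 - 6 * q**4) / (1 - q)**3
chi_edge = (10 * q**2 - 2 * q**3) / (1 - q)
F = (1 + q) / (1 - q) * (chi_bulk - chi_edge - 2 * 10)  # 2 dim G = 20
print(sp.series(F / (2 * t), t, 0, 0))
# 4/t**5 - 47/(3*t**3) - 571/(45*t) + O(1)
```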
The coefficient of the log-divergent part of $\log Z_{\rm PI}$ is the
coefficient of $t^{-1}$:
$\displaystyle\log Z_{\rm PI}|_{\rm
log\,div}=-\frac{571}{45}\,\log\bigl{(}2e^{-\gamma}\epsilon^{-1}\bigr{)}\,,$
(271)
in agreement with Christensen:1979iy . The complete heat-kernel regularized UV
part of (251) can be read off directly from our earlier results for massive
spin-$s$ in $d=3$ as
$\displaystyle\log Z_{\rm PI}\bigr{|}_{\rm UV}$ $\displaystyle=\log Z_{\rm
PI}(s=2,\nu=\tfrac{3}{2}i)\bigr{|}_{\rm UV}\,-\,\log Z_{\rm
PI}(s=1,\nu=\tfrac{5}{2}i)\bigr{|}_{\rm UV}$
$\displaystyle=\frac{8}{3}\,\epsilon^{-4}-\frac{32}{3}\,\epsilon^{-2}-\frac{571}{45}\log\bigl{(}2e^{-\gamma}\epsilon^{-1}\bigr{)}+\frac{715}{48}\,.$
(272)
Here ${\cal M}=\frac{715}{48}$ is the “multiplicative anomaly” term. The
integrated heat kernel coefficients are similarly obtained from (260):
$\alpha_{0}=\frac{1}{3}$, $\alpha_{2}=-\frac{16}{3}$,
$\alpha_{4}=-\frac{571}{45}$.
The IR ($\zeta^{\prime}$) contributions from bulk and edge characters are
obtained from the expansions
$\displaystyle\frac{1+q}{1-q}\bigl{(}\chi_{\rm bulk}-\chi_{\rm
edge}\bigr{)}=\sum_{n}P_{\rm
b}(n)\,\bigl{(}10\,q^{3+n}-6\,q^{4+n}\bigr{)}-\sum_{n}P_{\rm
e}(n)\,\bigl{(}10\,q^{2+n}-2\,q^{3+n}\bigr{)}\,,$ (273)
where $P_{\rm b}(n)=D^{5}_{n}=\frac{1}{6}(n+1)(n+2)(2n+3)$, $P_{\rm
e}(n)=D^{3}_{n}=2n+1$. According to (251) this gives a contribution to $\log
Z_{\rm char}|_{\rm IR}$ equal to
$\displaystyle 5\,P_{\rm b}(\hat{\delta}-3)\,\zeta^{\prime}(0,3)-3\,P_{\rm
b}(\hat{\delta}-4)\,\zeta^{\prime}(0,4)-5\,P_{\rm
e}(\hat{\delta}-2)\,\zeta^{\prime}(0,2)+P_{\rm
e}(\hat{\delta}-3)\,\zeta^{\prime}(0,3)\,,$ (274)
where the polynomials are to be expanded in powers of $\hat{\delta}$, putting
$\hat{\delta}^{n}\zeta^{\prime}(0,\Delta)\equiv\zeta^{\prime}(-n,\Delta)$.
Working this out and adding the contribution (269), we find
$\displaystyle\log Z_{\rm char}\bigr{|}_{\rm IR}=-\log
2-\frac{47}{3}\,\zeta^{\prime}(-1)+\frac{2}{3}\,\zeta^{\prime}(-3)\,.$ (275)
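As a numerical cross-check (reusing `Q_shift_zeta_prime` and its imports from the sketch in appendix C.1), the combination (274) plus the $10\log(2\pi)$ of (269) indeed reproduces (275):

```python
import sympy as sp
import mpmath as mp

Pb = lambda m: sp.Rational(1, 6) * (m + 1) * (m + 2) * (2 * m + 3)  # P_b = D^5_n
Pe = lambda m: 2 * m + 1                                            # P_e = D^3_n

ir = (5 * Q_shift_zeta_prime(Pb, 3) - 3 * Q_shift_zeta_prime(Pb, 4)
      - 5 * Q_shift_zeta_prime(Pe, 2) + Q_shift_zeta_prime(Pe, 3)
      + 10 * mp.log(2 * mp.pi))
ref = (-mp.log(2) - mp.mpf(47) / 3 * mp.zeta(-1, 1, 1)
       + mp.mpf(2) / 3 * mp.zeta(-3, 1, 1))
print(ir, ref)   # both ~ 1.90203
```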
Combining this with the UV part and reinstating $\ell$, we get\footnote{This splits as $\log Z_{\rm char}=10\,\log(2\pi)+\log Z_{\rm bulk}-\log Z_{\rm edge}$ where $\log Z_{\rm bulk}=\frac{8\ell^{4}}{3\epsilon^{4}}-\frac{8\ell^{2}}{3\epsilon^{2}}-\frac{331}{45}\log\frac{2e^{-\gamma}\ell}{\epsilon}+\frac{475}{48}-\frac{23}{3}\zeta^{\prime}(-1)+\frac{2}{3}\zeta^{\prime}(-3)-5\log(2\pi)$ and $\log Z_{\rm edge}=\frac{8\ell^{2}}{\epsilon^{2}}+\frac{16}{3}\log\frac{2e^{-\gamma}\ell}{\epsilon}-5+8\zeta^{\prime}(-1)+\log 2+5\log(2\pi)$.}
$\displaystyle\log Z_{\rm char}=$
$\displaystyle\frac{8}{3}\,\frac{\ell^{4}}{\epsilon^{4}}-\frac{32}{3}\,\frac{\ell^{2}}{\epsilon^{2}}-\frac{571}{45}\log\frac{2e^{-\gamma}L}{\epsilon}$
$\displaystyle-\frac{571}{45}\log\frac{\ell}{L}+\frac{715}{48}-\log
2-\frac{47}{3}\zeta^{\prime}(-1)+\frac{2}{3}\zeta^{\prime}(-3)\,,$ (276)
where $L$ is an arbitrary length scale introduced to split off a finite part:
$\displaystyle\log Z_{\rm char}^{\rm
fin}=-\frac{571}{45}\log(\ell/L)+\frac{715}{48}-\log
2-\frac{47}{3}\zeta^{\prime}(-1)+\frac{2}{3}\zeta^{\prime}(-3)\,.$ (277)
To compute the group volume factor $Z_{G}$ in (266), we use (287) for
$G=SO(5)$ to get ${\rm vol}{(G)}_{{\rm c}}=\frac{2}{3}(2\pi)^{6}$, and
$\gamma=\sqrt{8\pi G_{\rm N}/4\pi\ell^{2}}$. Finally, $i^{-P}=i^{-(d+3)}=-1$.
Thus we conclude that the one-loop Euclidean path integral for Einstein
gravity on $S^{4}$ is
$\displaystyle Z_{\rm PI}=-\frac{(8\pi G_{\rm N}/4\pi\ell^{2})^{5}\,Z_{\rm
char}}{\frac{2}{3}(2\pi)^{6}}\,,$ (278)
where $Z_{\rm char}$ is given by (276).
#### Example: Einstein gravity on $S^{5}$
For $S^{5}$ an analogous (actually simpler) computation gives $Z_{\rm
PI}=i^{-7}Z_{G}Z_{\rm char}$ with
$\displaystyle\log Z_{\rm char}$
$\displaystyle=\frac{15\,\pi}{8}\,\frac{\ell^{5}}{\epsilon^{5}}-\frac{65\,\pi}{24}\,\frac{\ell^{3}}{\epsilon^{3}}-\frac{105\,\pi}{16}\,\frac{\ell}{\epsilon}+\frac{65\,\zeta(3)}{48\,\pi^{2}}+\frac{5\,\zeta(5)}{16\,\pi^{4}}+15\log(2\pi)$
(279) $\displaystyle\log Z_{G}$ $\displaystyle=\frac{15}{2}\log\frac{8\pi
G_{\rm N}}{2\pi^{2}\ell^{3}}-\log\frac{(2\pi)^{9}}{12}\,.$
### C.4 Different regularization schemes
If we simply cut off the character integral at $t=\epsilon$, we get the
following instead of (251):
$\displaystyle\log Z_{\epsilon}=$
$\displaystyle\frac{1}{2}\sum_{\Delta}P_{\Delta}(\hat{\delta}-\Delta)\,\zeta^{\prime}(0,\Delta)+b_{d+1}(\nu)\log(e^{-\gamma}/\epsilon)+\sum_{k=0}^{d}\frac{b_{k}(\nu)}{d+1-k}\,\epsilon^{-(d+1-k)}\,,$
(280)
with $b_{k}(\nu)$ defined as before,
$\frac{1}{2t}F_{\nu}(t)=\sum_{k=0}^{d+1}b_{k}(\nu)\,t^{-(d+2-k)}\,+{\cal
O}(t^{0})$. Unsurprisingly, this differs from (251) only in its UV part, more
specifically in the terms polynomial in $\nu$, including the “multiplicative
anomaly” term discussed below (252). The transcendental ($\zeta^{\prime}$)
part and the $\log\epsilon$ coefficient remain unchanged. This remains true in
any other regularization.
If we stick with heat-kernel regularization but pick a different regulator
$f(\tau/\epsilon^{2})$ instead of $e^{-\epsilon^{2}/4\tau}$ (e.g. the
$f=(1-e^{-\tau\Lambda^{2}})^{k}$ PV regularization of section 2) or use zeta
function regularization, more is true: the same finite part is obtained for
any choice of $f$ provided logarithmically divergent terms (arising in even
$d+1$) are expressed in terms of $M$ defined as in (261) with
$e^{-\epsilon^{2}/4\tau}\to f$. The relation $M(\epsilon)$ will depend on $f$,
but nothing else.
In dimensional regularization, some polynomial terms in $\nu$ will be
different, including the “multiplicative anomaly” term. Of course no physical
quantity will be affected by this, as long as self-consistency is maintained.
In fact any regularization scheme (even (280)) will lead to the same
physically unambiguous part of the one-loop corrected dS entropy/sphere
partition function of section 8. However to go beyond this, e.g. to extract
more physically unambiguous data by comparing different saddles along the
lines of (536) and (539), a portable covariant regularization scheme, like
heat-kernel regularization, must be applied consistently to each saddle. A
sphere-specific ad-hoc regularization as in (280) is not suitable for such
purposes.
## Appendix D Some useful dimensions, volumes and metrics
### D.1 Dimensions of representations of $SO(K)$
General irreducible representations of $SO(K)$ with $K=2r$ or $K=2r+1$ are
labeled by $r$-row Young diagrams or more precisely a set
$S=(s_{1},\ldots,s_{r})$ of highest weights ordered from large to small, which
are either all integer (bosons) or all half-integer (fermions). When $K=2r$,
$s_{r}$ can be either positive or negative, distinguishing the chirality of
the representation. For various applications in this paper we need the
dimensions $D^{K}_{S}$ of these $SO(K)$ representations $S$. The Weyl
dimension formula gives a general expression for the dimensions of irreducible
representations of simple Lie groups. For $SO(K)$ it reads
$\bullet$ $K=2r$:
$\displaystyle D^{K}_{S}={\cal N}_{K}^{-1}\prod_{1\leq i<j\leq
r}\bigl{(}\ell_{i}+\ell_{j}\bigr{)}\bigl{(}\ell_{i}-\ell_{j}\bigr{)}\,,\qquad\ell_{i}\equiv
s_{i}+\tfrac{K}{2}-i$ (281)
with ${\cal N}_{K}$ independent of $S$, hence fixed by $D^{K}_{0}=1$, i.e.
${\cal N}_{K}=\prod_{1\leq i<j\leq r}(K-i-j)(j-i)$.
$\bullet$ $K=2r+1$:
$\displaystyle D^{K}_{S}={\cal N}_{K}^{-1}\prod_{1\leq i\leq
r}(2\ell_{i})\prod_{1\leq i<j\leq
r}\bigl{(}\ell_{i}+\ell_{j}\bigr{)}\bigl{(}\ell_{i}-\ell_{j}\bigr{)}\,,\qquad\ell_{i}\equiv
s_{i}+\tfrac{K}{2}-i\,,$ (282)
where ${\cal N}_{K}$ is fixed as above: ${\cal N}_{K}=\prod_{1\leq i\leq
r}(K-2i)\prod_{1\leq i<j\leq r}(K-i-j)(j-i)$.
For convenience we list here some low-dimensional explicit expressions:
$\begin{array}[]{l|l|l|l}K&D_{s}^{K}&D_{n,s}^{K}&D^{K}_{k+\frac{1}{2},{\bf\frac{1}{2}}}\\\
\hline\cr 2&1&&1\\\ 3&2s+1&&2{k+1\choose 1}\\\
4&(s+1)^{2}&\left(n-s+1\right)\left(n+s+1\right)&2{k+2\choose 2}\\\
5&\frac{(s+1)(s+2)(2s+3)}{6}&\frac{\left(2n+3\right)\left(n-s+1\right)\left(n+s+2\right)\left(2s+1\right)}{6}&4{k+3\choose
3}\\\
6&\frac{(s+1)(s+2)^{2}(s+3)}{12}&\frac{\left(n+2\right){}^{2}\left(n-s+1\right)\left(n+s+3\right)\left(s+1\right){}^{2}}{12}&4{k+4\choose
4}\\\
7&\frac{(s+1)(s+2)(s+3)(s+4)(2s+5)}{120}&\frac{\left(n+2\right)\left(n+3\right)\left(2n+5\right)\left(n-s+1\right)\left(n+s+4\right)\left(s+1\right)\left(s+2\right)\left(2s+3\right)}{720}&8{k+5\choose
5}\\\
8&\frac{(s+1)(s+2)(s+3)^{2}(s+4)(s+5)}{360}&\frac{\left(n+2\right)\left(n+3\right){}^{2}\left(n+4\right)\left(n-s+1\right)\left(n+s+5\right)\left(s+1\right)\left(s+2\right){}^{2}\left(s+3\right)}{4320}&8{k+6\choose
6}\\\ \end{array}$ (283)
Here $(k+\frac{1}{2},{\bf\frac{1}{2}})$ means
$(s_{1},\ldots,s_{r})=(k+\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2})$, i.e.
the spin $s=k+\frac{1}{2}$ representation.
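The Weyl dimension formulae (281), (282) are simple to implement; the following sketch spot-checks a few entries of the table (283):

```python
from fractions import Fraction

def dim_SO(K, S):
    """Weyl dimension formula (281)/(282) for SO(K) highest weights S."""
    r = K // 2
    def prod_l(weights):
        l = [Fraction(w) + Fraction(K, 2) - (i + 1) for i, w in enumerate(weights)]
        p = Fraction(1)
        for i in range(r):
            for j in range(i + 1, r):
                p *= (l[i] + l[j]) * (l[i] - l[j])
        if K % 2 == 1:
            for li in l:
                p *= 2 * li
        return p
    S = list(S) + [0] * (r - len(S))
    return prod_l(S) / prod_l([0] * r)   # normalization N_K fixed by D^K_0 = 1

assert dim_SO(3, [2]) == 5                             # D^3_s = 2s + 1
assert dim_SO(5, [2]) == 14                            # (s+1)(s+2)(2s+3)/6
assert dim_SO(4, [3, 1]) == (3 - 1 + 1) * (3 + 1 + 1)  # D^4_{n,s}
```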
For general $d\geq 3$, we can use (209) and (329) to compute
$\displaystyle
D^{K}_{s}=\mbox{\large$\binom{s+K-1}{K-1}-\binom{s+K-3}{K-1}$}\,,\qquad
D^{K}_{n,s}=D^{K}_{n}D^{K-2}_{s}-D^{K}_{s-1}D^{K-2}_{n+1}\,.$ (284)
Denoting 1 repeated $m$ times by $1^{m}$, e.g.
$(5,1^{2})=(5,1,1)={\tiny\Yvcentermath 1\yng(5,1,1)}$, we furthermore have
$\displaystyle
D_{1^{p}}^{d}=\mbox{\large$\binom{d}{p}$}\quad(p<\tfrac{d}{2}),\qquad
D^{2p}_{1^{p-1},\pm 1}=\mbox{\large$\tfrac{1}{2}\binom{2p}{p}$}\,,\qquad
D^{d+2}_{n,s,1^{m}}=D^{d+2}_{n}D^{d}_{s,1^{m}}-D^{d+2}_{s-1}D^{d}_{n+1,1^{m}}\,.$
(285)
### D.2 Volumes
The volume of the unit sphere $S^{n}$ is
$\displaystyle\Omega_{n}\equiv{\rm
vol}(S^{n})=\frac{2\,\pi^{\frac{n+1}{2}}}{\Gamma\bigl{(}\frac{n+1}{2}\bigr{)}}\,=\,\frac{2\pi}{n-1}\cdot\Omega_{n-2}$
(286)
The volume of $SO(d+2)$ with respect to the invariant group metric normalized
such that minimal $SO(2)$ orbits have length $2\pi$ is
$\displaystyle{\rm vol}\bigl{(}SO(d+2)\bigr{)}_{{\rm c}}=\prod_{k=2}^{d+2}{\rm
vol}(S^{k-1})=\prod_{k=2}^{d+2}\frac{2\pi^{\frac{k}{2}}}{\Gamma(\frac{k}{2})}\,.$
(287)
This follows from the fact that the unit sphere $S^{n-1}=SO(n)/SO(n-1)$, which
implies ${\rm vol}(SO(n))_{{\rm c}}={\rm vol}(S^{n-1})\,{\rm
vol}(SO(n-1))_{{\rm c}}$ in the assumed normalization.
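A short numerical sketch of (286), (287); it reproduces the value ${\rm vol}(SO(5))_{{\rm c}}=\frac{2}{3}(2\pi)^{6}$ used in the $S^{4}$ gravity example of appendix C.3:

```python
import math

def vol_sphere(n):                      # eq. (286)
    return 2 * math.pi ** ((n + 1) / 2) / math.gamma((n + 1) / 2)

def vol_SO_c(n):                        # eq. (287) with d + 2 = n
    v = 1.0
    for k in range(2, n + 1):
        v *= vol_sphere(k - 1)
    return v

print(vol_SO_c(5), (2 / 3) * (2 * math.pi) ** 6)   # both = 128 pi^6 / 3
```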
The volume of $SU(N)$ with respect to the invariant metric derived from the
matrix trace norm on the Lie algebra ${\rm su}(N)$ viewed as traceless
$N\times N$ matrices is (see e.g. Ooguri:2002gx )
$\displaystyle{\rm vol}\bigl{(}SU(N)\bigr{)}_{{\rm
Tr}_{N}}\,=\,\sqrt{N}\prod_{k=2}^{N}\frac{(2\pi)^{k}}{\Gamma(k)}\,=\,\sqrt{N}\,\frac{(2\pi)^{\frac{1}{2}(N-1)(N+2)}}{{\tt
G}(N+1)}\,.$ (288)
### D.3 de Sitter and its Wick rotations to the sphere
Figure 14: Penrose diagrams of dS$_{d+1}$ and $S^{d+1}$ with coordinates (290), (292). Each point corresponds to an $S^{d-1}$, contracted to zero size at thin-line boundaries. a: Global dS$_{d+1}$ in slices of constant $\bar{T}$. b: Wick rotation of global dS$_{d+1}$ to $S^{d+1}$. c: S/N = southern/northern static patch, F/P =
future/past wedge; slices of constant $T$ (gray) and $r$ (blue/red) = flows
generated by $H$. Yellow dot = horizon $r=1$. d: Wick-rotation of static patch
$S$ to $S^{d+1}$; slices of constant $\tau$ and constant $r$.
Global dS$_{d+1}$ has a convenient description as a hyperboloid embedded in
${\mathbb{R}}^{1,d+1}$,
$\displaystyle X^{I}X_{I}\equiv\eta_{IJ}X^{I}X^{J}\equiv-
X_{0}^{2}+X_{1}^{2}+\cdots+X_{d+1}^{2}=\ell^{2}\,,\qquad
ds^{2}=\eta_{IJ}dX^{I}dX^{J}\,.$ (289)
Below we set $\boxed{\ell\equiv 1}$. The isometry group is $SO(1,d+1)$, with
generators $M_{IJ}=X_{I}\partial_{J}-X_{J}\partial_{I}$. Various coordinate
patches are shown in fig. 14a,c, with coordinates and metric given by
$\small\begin{array}[]{|l|l|l|l|}\hline\cr\text{co}&\text{embedding
}(X^{0},\ldots,X^{d+1})&\text{coordinate range}&\text{metric
}ds^{2}=\eta_{IJ}dX^{I}dX^{J}\\\ \hline\cr
G&(\sinh\bar{T},\cosh\bar{T}\,\bar{\Omega})&\bar{T}\in{\mathbb{R}},\,\bar{\Omega}\in
S^{d}&-d\bar{T}^{2}+\cosh^{2}\bar{T}\,d\bar{\Omega}^{2}\\\
S&(\sqrt{1-r^{2}}\sinh T,r\Omega,\sqrt{1-r^{2}}\cosh T)&T\in{\mathbb{R}},0\leq
r<1,\,\Omega\in
S^{d-1}&-(1-r^{2})dT^{2}+\frac{dr^{2}}{1-r^{2}}+r^{2}d\Omega^{2}\\\
F&(\sqrt{r^{2}-1}\cosh T,r\Omega,\sqrt{r^{2}-1}\sinh
T)&T\in{\mathbb{R}},\,r>1,\,\Omega\in
S^{d-1}&-\frac{dr^{2}}{r^{2}-1}+(r^{2}-1)dT^{2}+r^{2}d\Omega^{2}\\\
\hline\cr\end{array}$ (290)
$N$ is obtained from $S$ by $X^{d+1}\to-X^{d+1}$,
and $P$ from $F$ by $X^{0}\to-X^{0}$. The southern static patch $S$ is the
part of de Sitter causally accessible to an inertial observer at the south
pole of the global spatial $S^{d}$. The metric in this patch is static, with
the observer at $r=0$ and a horizon at $r=1$. The $SO(1,1)$ generator
$H=M_{0,d+1}$ generates translations of the static patch time $T$; its flow lines are the constant-$r$ curves shown in fig. 14c.
### B.3 Proof of Proposition 18: Regularity and Connectives
###### Proposition 84.
1. 1.
If ${\mathbf{N}}$ is regular then $\shpos{\mathbf{N}}$ is regular.
2. 2.
If ${\mathbf{P}}$ is regular then $\shneg{\mathbf{P}}$ is regular.
###### Proof.
1. 1.
Following Proposition 75:
* •
By internal completeness, the trivial views of $\shpos{\mathbf{N}}$ are of the
form $\kappa_{\blacktriangledown}{\mathbb{t}}$ where ${\mathbb{t}}$ is a
trivial view of ${\mathbf{N}}$. Since ${\mathbf{N}}$ is regular
${\mathbb{t}}\in V_{{\mathbf{N}}}$. Hence by Proposition 78,
$\kappa_{\blacktriangledown}{\mathbb{t}}\in V_{{\mathbf{\shpos N}}}$.
* •
Since $V_{{\mathbf{N}}}$ is stable by shuffle, so is $V_{{\mathbf{\shpos
N}}}=\kappa_{\blacktriangledown}V_{{\mathbf{N}}}^{x}$ where
$\kappa_{\blacktriangledown}$ is a positive action.
* •
For all paths $\kappa_{\blacktriangle}\mathpzc{s}$,
$\kappa_{\blacktriangle}\mathpzc{t}\in V_{{\mathbf{(\shpos
N)^{\perp}}}}=\kappa_{\blacktriangle}V_{{\mathbf{N^{\perp}}}}^{x}$ such that
$\kappa_{\blacktriangle}\mathpzc{s}\shuffle\kappa_{\blacktriangle}\mathpzc{t}$
is defined, $\mathpzc{s}$ and $\mathpzc{t}$ start necessarily by the same
positive action and $\mathpzc{s}\shuffle\mathpzc{t}\subseteq
V_{{\mathbf{N^{\perp}}}}^{x}$ because $V_{{\mathbf{N^{\perp}}}}$ (thus also
$V_{{\mathbf{N^{\perp}}}}^{x}$) is stable by $\shuffle$, hence
$\kappa_{\blacktriangle}\mathpzc{s}\shuffle\kappa_{\blacktriangle}\mathpzc{t}=\kappa_{\blacktriangle}(\mathpzc{s}\shuffle\mathpzc{t})\subseteq
V_{{\mathbf{(\shpos N)^{\perp}}}}$.
2. 2.
If ${\mathbf{P}}$ is regular then ${\mathbf{P}}^{\perp}$ is too. Then by
previous point $\shpos{\mathbf{P}}^{\perp}$ is regular, therefore so is
$(\shpos{\mathbf{P}}^{\perp})^{\perp}$. By Lemma 77, this means that
$\shneg{\mathbf{P}}$ is regular.
∎
###### Proposition 85.
If ${\mathbf{M}}$ and ${\mathbf{N}}$ are regular then
${\mathbf{M}}\oplus{\mathbf{N}}$ is regular.
###### Proof.
Similar to Proposition 84 (1), using the same remark as in the proof of Proposition 79. ∎
In order to show that $\otimes$ preserves regularity, consider first the following definitions and lemma. We call a positive-ended P-visible aj-sequence a quasi-path. The shuffle $\mathpzc{s}\shuffle\mathpzc{t}$ of two negative quasi-paths $\mathpzc{s}$ and $\mathpzc{t}$ is the set of paths $\mathpzc{u}$ formed with actions from $\mathpzc{s}$ and $\mathpzc{t}$ such that $\mathpzc{u}{\upharpoonright}\mathpzc{s}=\mathpzc{s}$ and $\mathpzc{u}{\upharpoonright}\mathpzc{t}=\mathpzc{t}$.
###### Lemma 86.
Let $\mathpzc{s}$ and $\mathpzc{t}$ be negative quasi-paths. If
$\mathpzc{s}\shuffle\mathpzc{t}\neq\emptyset$ then $\mathpzc{s}$ and
$\mathpzc{t}$ are paths.
###### Proof.
We prove the result by contradiction. Let us suppose that there exists a
triple $(\mathpzc{s},\mathpzc{t},\mathpzc{u})$ such that $\mathpzc{s}$ and
$\mathpzc{t}$ are two negative quasi-paths,
$\mathpzc{u}\in\mathpzc{s}\shuffle\mathpzc{t}$ is a path, and at least one of
$\mathpzc{s}$ or $\mathpzc{t}$ does not satisfy O-visibility, say
$\mathpzc{s}$: there exists a negative action $\kappa^{-}$ and a prefix
$\mathpzc{s}_{0}\kappa^{-}$ of $\mathpzc{s}$ such that the action $\kappa^{-}$
is justified in $\mathpzc{s}_{0}$ but $\mathrm{just}(\kappa^{-})$ does not
appear in ${\llcorner\mathpzc{s}_{0}\lrcorner}$.
We choose the triple $(\mathpzc{s},\mathpzc{t},\mathpzc{u})$ such that the
length of $\mathpzc{u}$ is minimal with respect to all such triples. Without
loss of generality, we can assume that $\mathpzc{u}$ and $\mathpzc{s}$ are of
the form $\mathpzc{u}=\mathpzc{u}_{0}\kappa^{-}\maltese$ and
$\mathpzc{s}=\mathpzc{s}_{0}\kappa^{-}\maltese$ respectively. Indeed, if this
is not true, $\mathpzc{u}$ has a strict prefix of the form
$\mathpzc{u}_{0}\kappa^{-}$; in this case we can replace
$(\mathpzc{s},\mathpzc{t},\mathpzc{u})$ by the triple $(\mathpzc{s}_{0}\kappa^{-}\maltese,\,\mathpzc{u}_{0}{\upharpoonright}\mathpzc{t},\,\mathpzc{u}_{0}\kappa^{-}\maltese)$, which satisfies all the constraints, and where the length of $\mathpzc{u}_{0}\kappa^{-}\maltese$ is less than or equal to the length of $\mathpzc{u}$.
Let $\kappa^{+}=\mathrm{just}(\kappa^{-})$. $\mathpzc{u}$ is necessarily of
the form
$\mathpzc{u}=\mathpzc{u}_{1}\alpha^{-}\mathpzc{u}_{2}\alpha^{+}\kappa^{-}\maltese$
where $\alpha^{-}$ justifies $\alpha^{+}$ and $\kappa^{+}$ appears in
$\mathpzc{u}_{1}$, indeed:
* •
$\kappa^{+}$ does not appear immediately before $\kappa^{-}$ in $\mathpzc{u}$,
otherwise it would also be the case in $\mathpzc{s}$, contradicting the fact
that $\kappa^{-}$ is not O-visible in $\mathpzc{s}$.
* •
The action $\alpha^{+}$ which is immediately before $\kappa^{-}$ in
$\mathpzc{u}$ is justified by an action $\alpha^{-}$, and $\kappa^{+}$ appears
before $\alpha^{-}$ in $\mathpzc{u}$, otherwise $\kappa^{+}$ would not appear
in ${\llcorner\mathpzc{u}_{0}\lrcorner}$ and that would contradict
O-visibility of $\mathpzc{u}$.
Let us show by contradiction something that will be useful for the rest of
this proof: in the path $\mathpzc{u}$, all the actions of $\mathpzc{u}_{2}$
(which cannot be initial) are justified in $\alpha^{-}\mathpzc{u}_{2}$. If it
is not the case, let $\mathpzc{u}_{1}\alpha^{-}\mathpzc{u}^{\prime}_{2}\beta$
be the longest prefix of $\mathpzc{u}$ such that $\beta$ is an action of
$\mathpzc{u}_{2}$ justified in $\mathpzc{u}_{1}$, and let $\beta^{\prime}$ be
the following action (necessarily in $\mathpzc{u}_{2}\alpha^{+}$), thus
$\beta^{\prime}$ is justified in $\alpha^{-}\mathpzc{u}_{2}$. If
$\beta^{\prime}$ is positive (resp. negative) then $\beta$ is negative (resp.
positive), thus
${\ulcorner\mathpzc{u}_{1}\alpha^{-}\mathpzc{u}^{\prime}_{2}\beta\urcorner}={\ulcorner\mathpzc{u}^{\prime}_{1}\urcorner}$ (resp. ${\llcorner\mathpzc{u}_{1}\alpha^{-}\mathpzc{u}^{\prime}_{2}\beta\lrcorner}={\llcorner\mathpzc{u}^{\prime}_{1}\lrcorner}$) where $\mathpzc{u}^{\prime}_{1}$ is the prefix of $\mathpzc{u}_{1}$ ending on $\mathrm{just}(\beta)$. But then ${\ulcorner\mathpzc{u}_{1}\alpha^{-}\mathpzc{u}^{\prime}_{2}\beta\urcorner}$ (resp. ${\llcorner\mathpzc{u}_{1}\alpha^{-}\mathpzc{u}^{\prime}_{2}\beta\lrcorner}$) does not contain $\mathrm{just}(\beta^{\prime})$: this contradicts the fact that $\mathpzc{u}$ is a path, since P-visibility (resp. O-visibility) is not satisfied.
Now define $\mathpzc{u}^{\prime}=\mathpzc{u}_{1}\kappa^{-}\maltese$, $\mathpzc{s}^{\prime}=\mathpzc{u}^{\prime}{\upharpoonright}\mathpzc{s}$ and $\mathpzc{t}^{\prime}=\mathpzc{u}^{\prime}{\upharpoonright}\mathpzc{t}$, and remark that:
* •
$\mathpzc{u}^{\prime}$ is a path, indeed, O-visibility for $\kappa^{-}$ is
still satisfied since
${\llcorner\mathpzc{u}_{1}\alpha^{-}\mathpzc{u}_{2}\alpha^{+}\kappa^{-}\lrcorner}={\llcorner\mathpzc{u}_{1}\lrcorner}\alpha^{-}\alpha^{+}\kappa^{-}$
and
${\llcorner\mathpzc{u}_{1}\kappa^{-}\lrcorner}={\llcorner\mathpzc{u}_{1}\lrcorner}\kappa^{-}$
both contain $\kappa^{+}$ in ${\llcorner\mathpzc{u}_{1}\lrcorner}$.
* •
$\mathpzc{s}^{\prime}$ and $\mathpzc{t}^{\prime}$ are quasi-paths, since $\mathpzc{s}^{\prime}$ is of the form $\mathpzc{s}^{\prime}=\mathpzc{s}_{1}\kappa^{-}\maltese$ where $\mathpzc{s}_{1}=\mathpzc{u}_{1}{\upharpoonright}\mathpzc{s}$ is a prefix of $\mathpzc{s}$ containing $\kappa^{+}=\mathrm{just}(\kappa^{-})$, and $\mathpzc{t}^{\prime}=\mathpzc{u}^{\prime}{\upharpoonright}\mathpzc{t}=\mathpzc{u}_{1}{\upharpoonright}\mathpzc{t}$ is a prefix of $\mathpzc{t}$.
* •
$\mathpzc{u}^{\prime}\in\mathpzc{s}^{\prime}\shuffle\mathpzc{t}^{\prime}$.
* •
$\mathpzc{s}^{\prime}$ is not a path: Note that $\mathpzc{s}$ is of the form
$\mathpzc{s}_{1}\mathpzc{s}_{2}\kappa^{-}\maltese$ where
$\mathpzc{s}_{1}=\mathpzc{u}_{1}\!\upharpoonright\!\mathpzc{s}$ and
$\mathpzc{s}_{2}=\alpha^{-}\mathpzc{u}_{2}\alpha^{+}\!\upharpoonright\!\mathpzc{s}$.
By hypothesis, $\mathpzc{s}$ is not a path because $\kappa^{+}$ does not
appear in ${\llcorner\mathpzc{s}_{1}\mathpzc{s}_{2}\lrcorner}$. But
${\llcorner\mathpzc{s}_{1}\mathpzc{s}_{2}\lrcorner}$ is of the form
${\llcorner\mathpzc{s}_{1}\lrcorner}\mathpzc{s}_{2}^{\prime}$, since all the
actions of $\mathpzc{s}_{2}$ are hereditarily justified by the first
(necessarily negative) action of $\mathpzc{s}_{2}$, indeed: we have proved
that, in $\mathpzc{u}$, all the actions of $\mathpzc{u}_{2}$ (in particular
those of $\mathpzc{s}_{2}$) were justified in $\alpha^{-}\mathpzc{u}_{2}$.
Thus $\kappa^{+}$ does not appear in ${\llcorner\mathpzc{s}_{1}\lrcorner}$,
which means that O-visibility is not satisfied for $\kappa^{-}$ in
$\mathpzc{s}^{\prime}=\mathpzc{s}_{1}\kappa^{-}\maltese$.
Hence the triple
$(\mathpzc{s}^{\prime},\mathpzc{t}^{\prime},\mathpzc{u}^{\prime})$ satisfies
all the conditions. This contradicts the minimality of $\mathpzc{u}$. ∎
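For later use, let us record the form in which Lemma 86 is invoked below (our paraphrase, matching its applications in the proof of Proposition 87; the precise statement appears with the lemma itself, and the shuffle may be prefixed by an initial action such as $\kappa_{\bullet}$):
$\mathpzc{u}\in\mathpzc{s}\shuffle\mathpzc{t}\ \text{a path, with}\ \mathpzc{s},\mathpzc{t}\ \text{quasi-paths}\quad\Longrightarrow\quad\mathpzc{s}\ \text{and}\ \mathpzc{t}\ \text{are paths}.$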
###### Proposition 87.
If ${\mathbf{M}}$ and ${\mathbf{N}}$ are regular, then
${\mathbf{M}}\otimes{\mathbf{N}}$ is regular.
###### Proof.
Following Proposition 75, we will prove that the positive-ended trivial views
of ${\mathbf{M}}\otimes{\mathbf{N}}$ are visitable in
${\mathbf{M}}\otimes{\mathbf{N}}$, and that $V_{{\mathbf{M\otimes N}}}$ and
$V_{{\mathbf{(M\otimes N)^{\perp}}}}$ are stable by shuffle.
Every trivial view of ${\mathbf{M}}\otimes{\mathbf{N}}$ is of the form
$\kappa_{\bullet}{\mathbb{t}}$. It follows from internal completeness
(incarnated form) that $\kappa_{\bullet}{\mathbb{t}}$ is a trivial view of
${\mathbf{M}}\otimes{\mathbf{N}}$ iff ${\mathbb{t}}$ is a trivial view either
of ${\mathbf{M}}^{x}$ or of ${\mathbf{N}}^{y}$. As ${\mathbf{M}}$ (resp.
${\mathbf{N}}$) is regular, positive-ended trivial views of ${\mathbf{M}}^{x}$
(resp. ${\mathbf{N}}^{y}$) are in $V_{{\mathbf{M}}}^{x}$ (resp.
$V_{{\mathbf{N}}}^{y}$). Thus by Proposition 82, positive-ended trivial views
of ${\mathbf{M}}\otimes{\mathbf{N}}$ are in $V_{{\mathbf{M\otimes N}}}$.
From Proposition 82, and from the fact that $\shuffle$ is associative and
commutative, we also have that $V_{{\mathbf{M\otimes N}}}$ is stable by
shuffle.
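As a purely combinatorial illustration of the shuffle (our example; the actual definition further constrains which interleavings are legal, according to the polarities of the actions involved), the interleavings of $\alpha_{1}\alpha_{2}$ and $\beta_{1}\beta_{2}$ preserving the relative order of each sequence are
$\alpha_{1}\alpha_{2}\beta_{1}\beta_{2},\ \alpha_{1}\beta_{1}\alpha_{2}\beta_{2},\ \alpha_{1}\beta_{1}\beta_{2}\alpha_{2},\ \beta_{1}\alpha_{1}\alpha_{2}\beta_{2},\ \beta_{1}\alpha_{1}\beta_{2}\alpha_{2},\ \beta_{1}\beta_{2}\alpha_{1}\alpha_{2},$
and it is the associativity and commutativity of this interleaving operation that the stability argument above exploits.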
Let us prove that $V_{{\mathbf{M\otimes N}}}$ is stable by anti-shuffle. Let
$\mathpzc{t},\mathpzc{u}\in V_{{\mathbf{M\otimes N}}}$ and let
$\mathpzc{s}\in\mathpzc{t}\text{\rotatebox[origin={c}]{180.0}{$\shuffle$}}\mathpzc{u}$,
we show that $\mathpzc{s}\in V_{{\mathbf{M\otimes N}}}$ by induction on the
length of $\mathpzc{s}$. Notice first that, from Proposition 82, there exist
paths $\mathpzc{t}_{1},\mathpzc{u}_{1}\in V_{{\mathbf{M}}}^{x}$ and
$\mathpzc{t}_{2},\mathpzc{u}_{2}\in V_{{\mathbf{N}}}^{y}$ such that
$\mathpzc{t}\in\kappa_{\bullet}(\mathpzc{t}_{1}\shuffle\mathpzc{t}_{2})$ and
$\mathpzc{u}\in\kappa_{\bullet}(\mathpzc{u}_{1}\shuffle\mathpzc{u}_{2})$. In
the case where $\mathpzc{s}$ has length $1$, either $\mathpzc{s}=\maltese$ or
$\mathpzc{s}=\kappa_{\bullet}$, thus the result is immediate. So suppose
$\mathpzc{s}=\mathpzc{s}^{\prime}\kappa^{-}\kappa^{+}$ and by induction
hypothesis $\mathpzc{s}^{\prime}\in V_{{\mathbf{M\otimes N}}}$. Hence, it
follows from Proposition 82 that there exist paths $\mathpzc{s}_{1}\in
V_{{\mathbf{M}}}^{x}$ and $\mathpzc{s}_{2}\in V_{{\mathbf{N}}}^{y}$ such that
$\mathpzc{s}^{\prime}\in\kappa_{\bullet}(\mathpzc{s}_{1}\shuffle\mathpzc{s}_{2})$.
Without loss of generality, we can suppose that $\kappa^{-}$ is an action of
$\mathpzc{t}_{1}$, hence of $\mathpzc{t}$. We study the different cases,
proving each time either that $\mathpzc{s}\in V_{{\mathbf{M\otimes N}}}$ or
that the case is impossible.
* •
Either $\kappa^{+}=\maltese$. In that case,
$\mathpzc{s}_{1}\kappa^{-}\maltese$ is a negative quasi-path. As $\mathpzc{s}$
is a path and
$\mathpzc{s}\in\kappa_{\bullet}(\mathpzc{s}_{1}\kappa^{-}\maltese\shuffle\mathpzc{s}_{2})$,
by Lemma 86, we have moreover that $\mathpzc{s}_{1}\kappa^{-}\maltese$ is a
path. Notice that
$\kappa_{\bullet}\langle\mathpzc{s}_{1}\kappa^{-}\rangle=\langle\kappa^{-}\rangle_{\mathpzc{s}}=\langle\kappa^{-}\rangle_{\mathpzc{t}}=\kappa_{\bullet}\langle\kappa^{-}\rangle_{\mathpzc{t}_{1}}$.
Hence
$\langle\mathpzc{s}_{1}\kappa^{-}\rangle=\langle\kappa^{-}\rangle_{\mathpzc{t}_{1}}$
is a trivial view of ${\mathbf{M}}^{x}$. Let
${\mathbb{t}}\kappa^{-}=\langle\mathpzc{s}_{1}\kappa^{-}\rangle$. By Lemma 74,
$\mathpzc{s}_{1}$ is a shuffle of anti-shuffles of trivial views of
${\mathbf{M}}^{x}$, one of which is the trivial view ${\mathbb{t}}$. Then
remark that $\mathpzc{s}_{1}\kappa^{-}\maltese$ is also a shuffle of anti-
shuffles of trivial views of ${\mathbf{M}}^{x}$, replacing ${\mathbb{t}}$ by
${\mathbb{t}}\kappa^{-}\maltese$ (note that ${\mathbb{t}}\kappa^{-}\maltese$
is indeed a trivial view of ${\mathbf{M}}^{x}$ since
${\mathbb{t}}\kappa^{-}\maltese=\langle\mathpzc{t}_{0}\kappa^{-}\maltese\rangle$
where $\mathpzc{t}_{0}\kappa^{-}$ is the prefix of $\mathpzc{t}_{1}$ ending
with $\kappa^{-}$, and $\mathpzc{t}_{0}\kappa^{-}\maltese\in
V_{{\mathbf{M}}}^{x}$ by Lemma 71). It follows from Proposition 75 that
$\mathpzc{s}_{1}\kappa^{-}\maltese\in V_{{\mathbf{M}}}^{x}$. Finally, as
$\mathpzc{s}\in\kappa_{\bullet}(\mathpzc{s}_{1}\kappa^{-}\maltese\shuffle\mathpzc{s}_{2})$
and by Proposition 82, we have $\mathpzc{s}\in V_{{\mathbf{M\otimes N}}}$.
* •
Or $\kappa^{+}$ is a proper action of $\mathpzc{t}_{1}$, hence of
$\mathpzc{t}$. Remark that
$\ulcorner{\mathpzc{s}^{\prime}\kappa^{-}}\urcorner=\ulcorner{\kappa_{\bullet}\mathpzc{s}_{1}\kappa^{-}}\urcorner=\kappa_{\bullet}\ulcorner{\mathpzc{s}_{1}\kappa^{-}}\urcorner$,
thus $\mathrm{just}(\kappa^{+})$ appears in
$\ulcorner{\mathpzc{s}_{1}\kappa^{-}}\urcorner$
hence $\mathpzc{s}_{1}\kappa^{-}\kappa^{+}$ is a (negative) quasi-path. As
$\mathpzc{s}$ is a path and as
$\mathpzc{s}\in\kappa_{\bullet}(\mathpzc{s}_{1}\kappa^{-}\kappa^{+}\shuffle\mathpzc{s}_{2})$,
by Lemma 86, $\mathpzc{s}_{1}\kappa^{-}\kappa^{+}$ is a path. We already know
from the previous item that $\mathpzc{s}_{1}\kappa^{-}\maltese\in
V_{{\mathbf{M}}}^{x}$. Notice that
$\kappa_{\bullet}\langle\mathpzc{s}_{1}\kappa^{-}\kappa^{+}\rangle=\langle\kappa^{+}\rangle_{\mathpzc{s}}=\langle\kappa^{+}\rangle_{\mathpzc{t}}=\kappa_{\bullet}\langle\kappa^{+}\rangle_{\mathpzc{t}_{1}}$.
Hence
$\langle\mathpzc{s}_{1}\kappa^{-}\kappa^{+}\rangle=\langle\kappa^{+}\rangle_{\mathpzc{t}_{1}}$
is a trivial view of ${\mathbf{M}}^{x}$. Let
${\mathbb{u}}\kappa^{+}=\langle\mathpzc{s}_{1}\kappa^{-}\kappa^{+}\rangle$. By
Lemma 74, $\mathpzc{s}_{1}\kappa^{-}\maltese$ is a shuffle of anti-shuffles of
trivial views of ${\mathbf{M}}^{x}$, one of which is the trivial view
${\mathbb{u}}\maltese$. Remark that $\mathpzc{s}_{1}\kappa^{-}\kappa^{+}$ is
also a shuffle of anti-shuffles of trivial views of ${\mathbf{M}}^{x}$,
replacing ${\mathbb{u}}\maltese$ by ${\mathbb{u}}\kappa^{+}$. By Proposition
75, $\mathpzc{s}_{1}\kappa^{-}\kappa^{+}\in V_{{\mathbf{M}}}^{x}$. Finally, as
$\mathpzc{s}\in\kappa_{\bullet}(\mathpzc{s}_{1}\kappa^{-}\kappa^{+}\shuffle\mathpzc{s}_{2})$
and by Proposition 82, we have $\mathpzc{s}\in V_{{\mathbf{M\otimes N}}}$.
* •
Or $\kappa^{+}$ is a proper action of $\mathpzc{u}_{1}$, hence of
$\mathpzc{u}$. The reasoning is similar to the previous item, using $\mathpzc{u}$
and $\mathpzc{u}_{1}$ instead of $\mathpzc{t}$ and $\mathpzc{t}_{1}$
respectively.
* •
Or $\kappa^{+}$ is a proper action of $\mathpzc{t}_{2}$, hence of
$\mathpzc{t}$. This is impossible, given the structure of $\mathpzc{s}$:
the action $\kappa_{0}^{+}$ following the negative action $\kappa^{-}$ in
$\mathpzc{t}$ is necessarily in $\mathpzc{t}_{1}$ (due to the structure of a
shuffle), hence the action following $\kappa^{-}$ in $\mathpzc{s}$ is
necessarily either $\kappa_{0}^{+}$ (hence in $\mathpzc{t}_{1}$) or in
$\mathpzc{u}$.
* •
Or $\kappa^{+}$ is a proper action of $\mathpzc{u}_{2}$, hence of
$\mathpzc{u}$: this case also leads to a contradiction. We know from the previous
item that a positive action of $\mathpzc{t}_{2}$ cannot immediately follow a
negative action of $\mathpzc{t}_{1}$ in $\mathpzc{s}$. Similarly, a positive
action of $\mathpzc{u}_{2}$ (resp. $\mathpzc{t}_{1}$, $\mathpzc{u}_{1}$)
cannot immediately follow a negative action of $\mathpzc{u}_{1}$ (resp.
$\mathpzc{t}_{2}$, $\mathpzc{u}_{2}$) in $\mathpzc{s}$. Suppose that there
exists a positive action $\kappa_{0}^{+}$ of $\mathpzc{u}_{2}$ (or resp.
$\mathpzc{t}_{2}$, $\mathpzc{u}_{1}$, $\mathpzc{t}_{1}$) which immediately
follows a negative action $\kappa_{0}^{-}$ of $\mathpzc{t}_{1}$ (or resp.
$\mathpzc{u}_{1}$, $\mathpzc{t}_{2}$, $\mathpzc{u}_{2}$). Let
$\mathpzc{s}_{0}\kappa_{0}^{-}\kappa_{0}^{+}$ be the shortest prefix of
$\mathpzc{s}$ satisfying such a property, say $\kappa_{0}^{+}$ is an action of
$\mathpzc{u}_{2}$ and $\kappa_{0}^{-}$ is an action of $\mathpzc{t}_{1}$. Then
the view
$\ulcorner{\mathpzc{s}_{0}\kappa_{0}^{-}}\urcorner$
is necessarily only made of $\kappa_{\bullet}$ and of actions from
$\mathpzc{t}_{1}$ or $\mathpzc{u}_{1}$, thus it does not contain
$\mathrm{just}(\kappa_{0}^{+})$ (where $\kappa_{0}^{+}$ cannot be initial
because ${\mathbf{N}}$ is negative), i.e., $\mathpzc{s}$ does not satisfy
P-visibility: contradiction.
∎
###### Corollary 88.
If ${\mathbf{N}}$ and ${\mathbf{P}}$ are regular, then
${\mathbf{N}}\multimap{\mathbf{P}}$ is regular.
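A sketch of the argument, under the assumption (standard in this setting, and consistent with the characterization of $V_{{\mathbf{(N\multimap P)^{\perp}}}}$ used in the proof of Proposition 21 below) that the arrow is defined by orthogonality from the tensor:
${\mathbf{N}}\multimap{\mathbf{P}}=({\mathbf{N}}\otimes{\mathbf{P}}^{\perp})^{\perp}.$
Since ${\mathbf{P}}$ is regular, so is ${\mathbf{P}}^{\perp}$ (regularity is stable under orthogonality, as recalled at the end of the proof of Lemma 92); Proposition 87 then gives regularity of ${\mathbf{N}}\otimes{\mathbf{P}}^{\perp}$, and one more orthogonal yields regularity of ${\mathbf{N}}\multimap{\mathbf{P}}$.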
### B.4 Proofs of Propositions 19 and 21: Purity and Connectives
###### Proof (Proposition 19).
We must prove:
* •
If ${\mathbf{N}}$ is pure then $\shpos{\mathbf{N}}$ is pure.
* •
If ${\mathbf{P}}$ is pure then $\shneg{\mathbf{P}}$ is pure.
* •
If ${\mathbf{M}}$ and ${\mathbf{N}}$ are pure then
${\mathbf{M}}\oplus{\mathbf{N}}$ is pure.
* •
If ${\mathbf{M}}$ and ${\mathbf{N}}$ are pure then
${\mathbf{M}}\otimes{\mathbf{N}}$ is pure.
For the shifts and plus, the result is immediate given the form of visitable
paths of $\shpos{\mathbf{N}}$, $\shneg{\mathbf{P}}$ and
${\mathbf{M}}\oplus{\mathbf{N}}$ (Propositions 78 and 79). Let us prove the
result for the tensor.
Let $\mathpzc{s}=\mathpzc{s}^{\prime}\maltese\in
V_{{\mathbf{M}}\otimes{\mathbf{N}}}$. According to Proposition 80, either
$\mathpzc{s}=\maltese$ or there exist $\mathpzc{s}_{1}\in
V_{{\mathbf{M}}}^{x}$ and $\mathpzc{s}_{2}\in V_{{\mathbf{N}}}^{y}$ such that
$\mathpzc{s}\in\kappa_{\bullet}(\mathpzc{s}_{1}\shuffle\mathpzc{s}_{2})$. If
$\mathpzc{s}=\maltese$ then it is extensible with $\kappa_{\bullet}$, so
suppose
$\mathpzc{s}\in\kappa_{\bullet}(\mathpzc{s}_{1}\shuffle\mathpzc{s}_{2})$.
Without loss of generality, suppose
$\mathpzc{s}_{1}=\mathpzc{s}_{1}^{\prime}\maltese$. Since ${\mathbf{M}}$ is
pure, $\mathpzc{s}_{1}$ is extensible: there exists a proper positive action
$\kappa^{+}$ such that $\mathpzc{s}_{1}^{\prime}\kappa^{+}\in
V_{{\mathbf{M}}}^{x}$. Then, note that $\mathpzc{s}^{\prime}\kappa^{+}$ is a
path: indeed, since $\mathpzc{s}_{1}^{\prime}\kappa^{+}$ is a path, the
justification of $\kappa^{+}$ appears in
$\ulcorner{\mathpzc{s}_{1}^{\prime}}\urcorner=\ulcorner{\mathpzc{s}^{\prime}}\urcorner$.
Moreover
$\mathpzc{s}^{\prime}\kappa^{+}\in\kappa_{\bullet}(V_{{\mathbf{M}}}^{x}\shuffle
V_{{\mathbf{N}}}^{y})$, let us show that $\mathpzc{s}^{\prime}\kappa^{+}\in
V_{{\mathbf{M}}\otimes{\mathbf{N}}}$. Let $\mathpzc{t}\in
V_{{\mathbf{M}}}^{x}\shuffle V_{{\mathbf{N}}}^{y}$ and $\kappa^{-}$ a negative
action such that $\overline{\kappa_{\bullet}\mathpzc{t}\kappa^{-}}$ is a path
of
$\ulcorner\!\ulcorner{\widetilde{\mathpzc{s}^{\prime}\kappa^{+}}}\urcorner\!\urcorner^{c}$,
and by Proposition 80 it suffices to show that
$\mathpzc{t}\kappa^{-}\maltese\in V_{{\mathbf{M}}}^{x}\shuffle
V_{{\mathbf{N}}}^{y}$. But
$\ulcorner\!\ulcorner{\widetilde{\mathpzc{s}^{\prime}\kappa^{+}}}\urcorner\!\urcorner^{c}=\ulcorner\!\ulcorner{\overline{\mathpzc{s}^{\prime}\kappa^{+}}\maltese}\urcorner\!\urcorner^{c}=\ulcorner\!\ulcorner{\overline{\mathpzc{s}^{\prime}}}\urcorner\!\urcorner^{c}=\ulcorner\!\ulcorner{\widetilde{\mathpzc{s}}}\urcorner\!\urcorner^{c}$,
therefore $\overline{\kappa_{\bullet}\mathpzc{t}\kappa^{-}}$ is a path of
$\ulcorner\!\ulcorner{\widetilde{\mathpzc{s}}}\urcorner\!\urcorner^{c}$.
Since $\mathpzc{s}\in V_{{\mathbf{M}}\otimes{\mathbf{N}}}$, by Proposition 80
we get $\mathpzc{t}\kappa^{-}\maltese\in V_{{\mathbf{M}}}^{x}\shuffle
V_{{\mathbf{N}}}^{y}$. Finally $\mathpzc{s}^{\prime}\kappa^{+}\in
V_{{\mathbf{M}}\otimes{\mathbf{N}}}$, hence $\mathpzc{s}$ is extensible. ∎
###### Proof (Proposition 21).
Since ${\mathbf{N}}$ and ${\mathbf{P}}$ are regular, $V_{{\mathbf{(N\multimap
P)^{\perp}}}}=\kappa_{\bullet}(V_{{\mathbf{N}}}^{x}\shuffle\widetilde{V_{{\mathbf{P}}}^{y}})\cup\{\maltese\}$
by Corollary 83. Let $\mathpzc{s}\in
V_{({{\mathbf{N}}\multimap{\mathbf{P}}})^{\perp}}$ and suppose
$\widetilde{\mathpzc{s}}$ is $\maltese$-ended, i.e., $\mathpzc{s}$ is $\maltese$-free. We must show that
either $\widetilde{\mathpzc{s}}$ is extensible or $\widetilde{\mathpzc{s}}$
is not well-bracketed. The path $\mathpzc{s}$ is of the form
$\mathpzc{s}=\kappa_{\bullet}\mathpzc{s}^{\prime}$ and there exist
$\maltese$-free paths $\mathpzc{t}\in V_{{\mathbf{N}}}^{x}$ and
$\mathpzc{u}\in\widetilde{V_{{\mathbf{P}}}^{y}}$
such that $\mathpzc{s}^{\prime}\in\mathpzc{t}\shuffle\mathpzc{u}$. We are in
one of the following situations:
* •
Either $\widetilde{\mathpzc{u}}\in V_{{\mathbf{P}}}^{y}$ is not
well-bracketed, hence neither is $\widetilde{\mathpzc{s}}$.
* •
Otherwise, since ${\mathbf{P}}$ is quasi-pure,
$\widetilde{\mathpzc{u}}=\overline{\mathpzc{u}}\maltese$
is extensible, i.e., there exists a proper positive action
$\kappa_{\mathpzc{u}}^{+}$ such that
$\overline{\mathpzc{u}}\kappa_{\mathpzc{u}}^{+}\in V_{{\mathbf{P}}}^{y}$. If
$\overline{\mathpzc{s}}\kappa_{\mathpzc{u}}^{+}$ is a path, then
$\overline{\mathpzc{s}}\kappa_{\mathpzc{u}}^{+}\in V_{{\mathbf{N\multimap
P}}}$, hence $\widetilde{\mathpzc{s}}$ is extensible: indeed,
$\widetilde{\overline{\mathpzc{s}}\kappa_{\mathpzc{u}}^{+}}=\mathpzc{s}\overline{\kappa_{\mathpzc{u}}^{+}}\maltese\in\kappa_{\bullet}(\mathpzc{t}\shuffle\mathpzc{u}\overline{\kappa_{\mathpzc{u}}^{+}}\maltese)$,
thus
$\mathpzc{s}\overline{\kappa_{\mathpzc{u}}^{+}}\maltese\in\kappa_{\bullet}(V_{{\mathbf{N}}}^{x}\shuffle\widetilde{V_{{\mathbf{P}}}^{y}})$.
In the case where $\overline{\mathpzc{s}}\kappa_{\mathpzc{u}}^{+}$ is not a path,
this means that $\kappa_{\mathpzc{u}}^{+}$ is justified by an action
$\kappa_{\mathpzc{u}}^{-}$ that does not appear in
$\ulcorner{\overline{\mathpzc{s}}}\urcorner$,
thus we have something of the form:
$\overline{\mathpzc{s}}\kappa_{\mathpzc{u}}^{+}=\ \dots\ \kappa^{+}\ \dots\ \kappa_{\mathpzc{u}}^{-}\ \dots\ \kappa^{-}\ \dots\ \kappa_{\mathpzc{u}}^{+}$
(in the original figure, justification arcs link $\kappa^{+}$ with $\kappa^{-}$ and $\kappa_{\mathpzc{u}}^{-}$ with $\kappa_{\mathpzc{u}}^{+}$, and a bracket marks the view
$\ulcorner{\overline{\mathpzc{s}}\kappa_{\mathpzc{u}}^{+}}\urcorner$).
If $\kappa^{-}$ comes from $\overline{\mathpzc{t}}$, and thus also
$\kappa^{+}$, then $\overline{\mathpzc{s}}$ is not well-bracketed, indeed: since
$\kappa_{\mathpzc{u}}^{-}$ is hereditarily justified by
$\overline{\kappa_{\bullet}}$ and by no action from $\overline{\mathpzc{t}}$,
we have:
$\overline{\mathpzc{s}}=\ \overline{\kappa_{\bullet}}\ \dots\ \kappa^{+}\ \dots\ \kappa_{\mathpzc{u}}^{-}\ \dots\ \kappa^{-}\ \dots$
(in the original figure, justification arcs link $\kappa^{+}$ with $\kappa^{-}$ and $\kappa_{\mathpzc{u}}^{-}$ with its justifier, hereditarily below $\overline{\kappa_{\bullet}}$: the two arcs cross).
So suppose now that $\kappa^{-}$ comes from $\overline{\mathpzc{u}}$, thus
also $\kappa^{+}$. We know that
$\ulcorner{\overline{\mathpzc{u}}}\urcorner$
contains $\kappa_{\mathpzc{u}}^{-}=\mathrm{just}(\kappa_{\mathpzc{u}}^{+})$,
thus in particular
$\ulcorner{\overline{\mathpzc{u}}}\urcorner$
does not contain $\kappa^{-}$; on the contrary, we have seen that
$\ulcorner{\overline{\mathpzc{s}}}\urcorner$
contains $\kappa^{-}$. By definition of the view of a sequence, this
necessarily means that, in $\overline{\mathpzc{s}}$, between the action
$\kappa^{-}$ and the end of the sequence, the following happens:
$\ulcorner{\overline{\mathpzc{s}}}\urcorner$
comes across an action $\alpha_{\mathpzc{t}}^{-}$ from
$\overline{\mathpzc{t}}$, justified by an action $\alpha_{\mathpzc{t}}^{+}$
also from $\overline{\mathpzc{t}}$, making the view miss at least one action
$\alpha_{\mathpzc{u}}$ from $\overline{\mathpzc{u}}$ appearing in
$\ulcorner{\overline{\mathpzc{u}}}\urcorner$,
as depicted below.
$\overline{\mathpzc{s}}=\ \overline{\kappa_{\bullet}}\ \dots\ \kappa^{-}\ \dots\ \alpha_{\mathpzc{t}}^{+}\ \dots\ \alpha_{\mathpzc{u}}\ \dots\ \alpha_{\mathpzc{t}}^{-}\ \dots$
(in the original figure, a justification arc links $\alpha_{\mathpzc{t}}^{+}$ with $\alpha_{\mathpzc{t}}^{-}$, and a bracket marks the portion of
$\ulcorner{\overline{\mathpzc{s}}}\urcorner$ that jumps over $\alpha_{\mathpzc{u}}$).
Since $\alpha_{\mathpzc{u}}$ is hereditarily justified by
$\overline{\kappa_{\bullet}}$ and by no action from $\overline{\mathpzc{t}}$,
the path $\overline{\mathpzc{s}}$ is not well-bracketed: the justifications of
$\alpha_{\mathpzc{u}}$ and of $\alpha_{\mathpzc{t}}^{-}$ intersect.
To sum up, we have proved that in the case when
$\widetilde{\mathpzc{u}}=\overline{\mathpzc{u}}\maltese$
is extensible, either $\widetilde{\mathpzc{s}}$
is extensible too or it is not well-bracketed.
Hence ${\mathbf{N}}\multimap{\mathbf{P}}$ is quasi-pure. ∎
## Appendix C Proofs of Section 4
In this section we prove:
* •
that the functions $\phi^{A}_{\sigma}$ are Scott-continuous (Proposition 25),
* •
internal completeness for particular infinite unions of behaviours (Theorem
30),
* •
two lemmas of Subsection 4.3 (Lemmas 32 and 37).
### C.1 Proof of Proposition 25
###### Lemma 89.
Let $E,F$ be sets of atomic negative designs and $G$ be a set of atomic
positive designs.
1. 1.
$\shpos(E^{\perp\perp})=\blacktriangledown\langle E\rangle^{\perp\perp}$
2. 2.
$\shneg(G^{\perp\perp})=\{{\mathfrak{n}}\ \boldsymbol{|}\ {\mathfrak{n}}\!\upharpoonright\!\blacktriangle\in\blacktriangle(x).G^{x}\}^{\perp\perp}$
3. 3.
$(E^{\perp\perp})\oplus(F^{\perp\perp})=(\iota_{1}\langle
E\rangle\cup\iota_{2}\langle F\rangle)^{\perp\perp}$
4. 4.
$(E^{\perp\perp})\otimes(F^{\perp\perp})=\bullet\langle
E,F\rangle^{\perp\perp}$
###### Proof.
We prove (1) and (2), the other cases are very similar to (1).
1. 1.
$\blacktriangledown\langle E\rangle^{\perp\perp}=\{{\mathfrak{n}}\ \boldsymbol{|}\ {\mathfrak{n}}\!\upharpoonright\!\blacktriangle\in\blacktriangle(x).(E^{\perp})^{x}\}^{\perp}=(\shneg(E^{\perp}))^{\perp}=(\shpos(E^{\perp\perp}))^{\perp\perp}=\shpos(E^{\perp\perp})$,
2. 2.
$\{{\mathfrak{n}}\ \boldsymbol{|}\ {\mathfrak{n}}\!\upharpoonright\!\blacktriangle\in\blacktriangle(x).G^{x}\}^{\perp\perp}=\{\blacktriangledown\langle{\mathfrak{m}}\rangle\ \boldsymbol{|}\ {\mathfrak{m}}\in G^{\perp}\}^{\perp}=(\shpos(G^{\perp}))^{\perp}=\shneg(G^{\perp\perp})$,
using the definition of the orthogonal, internal completeness, and Lemma 77. ∎
###### Proof (Proposition 25).
By induction on $A$, we prove that for every $X$ and every $\sigma$ the
function $\phi^{A}_{\sigma}$ is continuous. Note that $\phi^{A}_{\sigma}$ is
continuous if and only if for every directed subset
$\mathbb{P}\subseteq\mathcal{B}^{+}$ we have
$\bigvee_{{\mathbf{P}}\in\mathbb{P}}(\llbracket
A\rrbracket^{\sigma,X\mapsto{\mathbf{P}}})=\llbracket
A\rrbracket^{\sigma,X\mapsto\bigvee\mathbb{P}}$. The cases $A=Y\in\mathcal{V}$
and $A=a\in\mathcal{S}$ are trivial, and the case $A=A_{1}\oplus^{+}A_{2}$ is
very similar to the tensor, hence we only treat the two remaining cases. Let
$\mathbb{P}\subseteq\mathcal{B}^{+}$ be directed.
* •
Suppose $A=A_{1}\otimes^{+}A_{2}$, thus $\llbracket
A\rrbracket^{\sigma,X\mapsto{\mathbf{P}}}=\llbracket
A_{1}\rrbracket^{\sigma,X\mapsto{\mathbf{P}}}\otimes^{+}\llbracket
A_{2}\rrbracket^{\sigma,X\mapsto{\mathbf{P}}}$, with both functions
$\phi^{A_{i}}_{\sigma}:{\mathbf{P}}\mapsto\llbracket
A_{i}\rrbracket^{\sigma,X\mapsto{\mathbf{P}}}$ continuous by induction
hypothesis. For any positive behaviour ${\mathbf{P}}$, let us write
$\sigma_{{\mathbf{P}}}$ instead of $\sigma,X\mapsto{\mathbf{P}}$. We have
$\bigvee_{{\mathbf{P}}\in\mathbb{P}}\llbracket
A\rrbracket^{\sigma_{{\mathbf{P}}}}=(\bigcup_{{\mathbf{P}}\in\mathbb{P}}\llbracket
A\rrbracket^{\sigma_{{\mathbf{P}}}})^{\perp\perp}=(\bigcup_{{\mathbf{P}}\in\mathbb{P}}(\llbracket
A_{1}\rrbracket^{\sigma_{{\mathbf{P}}}}\otimes^{+}\llbracket
A_{2}\rrbracket^{\sigma_{{\mathbf{P}}}}))^{\perp\perp}$
Let us show that
$\bigcup_{{\mathbf{P}}\in\mathbb{P}}(\llbracket A_{1}\rrbracket^{\sigma_{{\mathbf{P}}}}\otimes^{+}\llbracket A_{2}\rrbracket^{\sigma_{{\mathbf{P}}}})=\bullet\langle\bigcup_{{\mathbf{P}}^{\prime}\in\mathbb{P}}\shneg\llbracket A_{1}\rrbracket^{\sigma_{{\mathbf{P}}^{\prime}}},\bigcup_{{\mathbf{P}}^{\prime\prime}\in\mathbb{P}}\shneg\llbracket A_{2}\rrbracket^{\sigma_{{\mathbf{P}}^{\prime\prime}}}\rangle\cup\{\maltese\}\qquad(\ast)$
By internal completeness we have $\llbracket
A_{1}\rrbracket^{\sigma_{{\mathbf{P}}}}\otimes^{+}\llbracket
A_{2}\rrbracket^{\sigma_{{\mathbf{P}}}}=\bullet\langle\shneg\llbracket
A_{1}\rrbracket^{\sigma_{{\mathbf{P}}}},\shneg\llbracket
A_{2}\rrbracket^{\sigma_{{\mathbf{P}}}}\rangle\cup\\{\maltese\\}$ for every
${\mathbf{P}}\in\mathbb{P}$. The inclusion $(\subseteq)$ of $(\ast)$ is then
immediate, so let us prove $(\supseteq)$. First, $\maltese$ indeed belongs to
the left-hand side. Let
${\mathbf{P}}^{\prime},{\mathbf{P}}^{\prime\prime}\in\mathbb{P}$, let
${\mathfrak{m}}\in\shneg\llbracket
A_{1}\rrbracket^{\sigma_{{\mathbf{P}}^{\prime}}}$,
${\mathfrak{n}}\in\shneg\llbracket
A_{2}\rrbracket^{\sigma_{{\mathbf{P}}^{\prime\prime}}}$, and let us show that
$\bullet\langle{\mathfrak{m}},{\mathfrak{n}}\rangle\in\llbracket
A_{1}\rrbracket^{\sigma_{{\mathbf{P}}}}\otimes^{+}\llbracket
A_{2}\rrbracket^{\sigma_{{\mathbf{P}}}}$ where
${\mathbf{P}}={\mathbf{P}}^{\prime}\vee{\mathbf{P}}^{\prime\prime}$ (note that
${\mathbf{P}}\in\mathbb{P}$ since $\mathbb{P}$ is directed). By induction
hypothesis, $\phi^{A_{1}}_{\sigma}$ is continuous, thus in particular
increasing; since ${\mathbf{P}}^{\prime}\subseteq{\mathbf{P}}$, it follows
that $\llbracket
A_{1}\rrbracket^{\sigma_{{\mathbf{P}}^{\prime}}}=\phi^{A_{1}}_{\sigma}({\mathbf{P}}^{\prime})\subseteq\phi^{A_{1}}_{\sigma}({\mathbf{P}})=\llbracket
A_{1}\rrbracket^{\sigma_{{\mathbf{P}}}}$. Similarly, $\llbracket
A_{2}\rrbracket^{\sigma_{{\mathbf{P}}^{\prime\prime}}}\subseteq\llbracket
A_{2}\rrbracket^{\sigma_{{\mathbf{P}}}}$. We get
$\bullet\langle{\mathfrak{m}},{\mathfrak{n}}\rangle\in\bullet\langle\shneg\llbracket
A_{1}\rrbracket^{\sigma_{{\mathbf{P}}}},\shneg\llbracket
A_{2}\rrbracket^{\sigma_{{\mathbf{P}}}}\rangle\subseteq\llbracket
A_{1}\rrbracket^{\sigma_{{\mathbf{P}}}}\otimes^{+}\llbracket
A_{2}\rrbracket^{\sigma_{{\mathbf{P}}}}$, using internal completeness for
$\shneg$, which proves $(\ast)$. Using internal completeness, Lemma 89 and
the induction hypothesis, we deduce
$\displaystyle(\bigcup_{{\mathbf{P}}\in\mathbb{P}}(\llbracket
A_{1}\rrbracket^{\sigma_{{\mathbf{P}}}}\otimes^{+}\llbracket
A_{2}\rrbracket^{\sigma_{{\mathbf{P}}}}))^{\perp\perp}$
$\displaystyle=\bullet\langle\bigcup_{{\mathbf{P}}^{\prime}\in\mathbb{P}}\shneg\llbracket
A_{1}\rrbracket^{\sigma_{{\mathbf{P}}^{\prime}}},\bigcup_{{\mathbf{P}}^{\prime\prime}\in\mathbb{P}}\shneg\llbracket
A_{2}\rrbracket^{\sigma_{{\mathbf{P}}^{\prime\prime}}}\rangle^{\perp\perp}$
$\displaystyle=(\bigcup_{{\mathbf{P}}^{\prime}\in\mathbb{P}}\shneg\llbracket
A_{1}\rrbracket^{\sigma_{{\mathbf{P}}^{\prime}}})^{\perp\perp}\otimes(\bigcup_{{\mathbf{P}}^{\prime\prime}\in\mathbb{P}}\shneg\llbracket
A_{2}\rrbracket^{\sigma_{{\mathbf{P}}^{\prime\prime}}})^{\perp\perp}$
$\displaystyle=(\bigcup_{{\mathbf{P}}^{\prime}\in\mathbb{P}}\llbracket
A_{1}\rrbracket^{\sigma_{{\mathbf{P}}^{\prime}}})^{\perp\perp}\otimes^{+}(\bigcup_{{\mathbf{P}}^{\prime\prime}\in\mathbb{P}}\llbracket
A_{2}\rrbracket^{\sigma_{{\mathbf{P}}^{\prime\prime}}})^{\perp\perp}$
$\displaystyle=\llbracket{A_{1}}\rrbracket^{\sigma,X\mapsto\bigvee\mathbb{P}}\otimes^{+}\llbracket{A_{2}}\rrbracket^{\sigma,X\mapsto\bigvee\mathbb{P}}$
$\displaystyle=\llbracket A\rrbracket^{\sigma,X\mapsto\bigvee\mathbb{P}}$
Consequently $\phi^{A}_{\sigma}$ is continuous.
* •
If $A=\mu Y.A_{0}$, define $f_{0}:{\mathbf{Q}}\mapsto\llbracket
A_{0}\rrbracket^{\sigma,X\mapsto\bigvee\mathbb{P},Y\mapsto{\mathbf{Q}}}$ and,
for every ${\mathbf{P}}\in\mathcal{B}^{+}$,
$f_{{\mathbf{P}}}:{\mathbf{Q}}\mapsto\llbracket
A_{0}\rrbracket^{\sigma,X\mapsto{\mathbf{P}},Y\mapsto{\mathbf{Q}}}$. Those
functions are continuous by induction hypothesis, thus using the Kleene
fixed point theorem we have
$\mathrm{lfp}(f_{0})=\bigvee_{n\in\mathbb{N}}{f_{0}}^{n}(\maltese)\mbox{ and
}\mathrm{lfp}(f_{{\mathbf{P}}})=\bigvee_{n\in\mathbb{N}}{f_{{\mathbf{P}}}}^{n}(\maltese)$.
Therefore $\bigvee_{{\mathbf{P}}\in\mathbb{P}}(\llbracket
A\rrbracket^{\sigma,X\mapsto{\mathbf{P}}})=\bigvee_{{\mathbf{P}}\in\mathbb{P}}(\mathrm{lfp}(f_{{\mathbf{P}}}))=\bigvee_{{\mathbf{P}}\in\mathbb{P}}(\bigvee_{n\in\mathbb{N}}{f_{{\mathbf{P}}}}^{n}(\maltese))=\bigvee_{n\in\mathbb{N}}(\bigvee_{{\mathbf{P}}\in\mathbb{P}}{f_{{\mathbf{P}}}}^{n}(\maltese))$.
For every ${\mathbf{Q}}\in\mathcal{B}^{+}$ the function
$g_{{\mathbf{Q}}}:{\mathbf{P}}\mapsto f_{{\mathbf{P}}}({\mathbf{Q}})$ is
continuous by induction hypothesis, hence
$f_{0}({\mathbf{Q}})=\bigvee_{{\mathbf{P}}\in\mathbb{P}}f_{{\mathbf{P}}}({\mathbf{Q}})$.
From this, we prove easily by induction on $m$ that for every
${\mathbf{Q}}\in\mathcal{B}^{+}$ we have
${f_{0}}^{m}({\mathbf{Q}})=\bigvee_{{\mathbf{P}}\in\mathbb{P}}{f_{{\mathbf{P}}}}^{m}({\mathbf{Q}})$
(the induction step is sketched just after this proof).
Thus $\bigvee_{{\mathbf{P}}\in\mathbb{P}}(\llbracket
A\rrbracket^{\sigma,X\mapsto{\mathbf{P}}})=\bigvee_{n\in\mathbb{N}}{f_{0}}^{n}(\maltese)=\mathrm{lfp}(f_{0})=\llbracket
A\rrbracket^{\sigma,X\mapsto\bigvee\mathbb{P}}$. We conclude that the function
$\phi^{A}_{\sigma}$ is continuous.
∎
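For completeness, here is a sketch (our elaboration; it uses only the continuity, hence monotonicity, of the functions involved and the directedness of $\mathbb{P}$) of the induction step left implicit in the last bullet above:
$f_{0}^{m+1}({\mathbf{Q}})=f_{0}\bigl(f_{0}^{m}({\mathbf{Q}})\bigr)=\bigvee_{{\mathbf{P}}^{\prime}\in\mathbb{P}}f_{{\mathbf{P}}^{\prime}}\Bigl(\bigvee_{{\mathbf{P}}\in\mathbb{P}}{f_{{\mathbf{P}}}}^{m}({\mathbf{Q}})\Bigr)=\bigvee_{{\mathbf{P}}^{\prime},{\mathbf{P}}\in\mathbb{P}}f_{{\mathbf{P}}^{\prime}}\bigl({f_{{\mathbf{P}}}}^{m}({\mathbf{Q}})\bigr)=\bigvee_{{\mathbf{P}}\in\mathbb{P}}{f_{{\mathbf{P}}}}^{m+1}({\mathbf{Q}}).$
The second equality applies the identity $f_{0}({\mathbf{Q}}^{\prime})=\bigvee_{{\mathbf{P}}^{\prime}\in\mathbb{P}}f_{{\mathbf{P}}^{\prime}}({\mathbf{Q}}^{\prime})$ at ${\mathbf{Q}}^{\prime}=f_{0}^{m}({\mathbf{Q}})$, together with the induction hypothesis; the third uses continuity of each $f_{{\mathbf{P}}^{\prime}}$ on the directed family $({f_{{\mathbf{P}}}}^{m}({\mathbf{Q}}))_{{\mathbf{P}}\in\mathbb{P}}$; the last uses directedness of $\mathbb{P}$: each $f_{{\mathbf{P}}^{\prime}}({f_{{\mathbf{P}}}}^{m}({\mathbf{Q}}))$ is below ${f_{{\mathbf{P}}^{\prime\prime}}}^{m+1}({\mathbf{Q}})$ for any ${\mathbf{P}}^{\prime\prime}\in\mathbb{P}$ above ${\mathbf{P}}$ and ${\mathbf{P}}^{\prime}$, and conversely each ${f_{{\mathbf{P}}}}^{m+1}({\mathbf{Q}})$ occurs as the term with ${\mathbf{P}}^{\prime}={\mathbf{P}}$.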
### C.2 Proof of Theorem 30
Before proving Theorem 30 we need some lemmas. Suppose
$({\mathbf{A}}_{n})_{n\in\mathbb{N}}$ is an infinite sequence of regular
behaviours such that for all $n\in\mathbb{N}$,
$|{\mathbf{A}}_{n}|\subseteq|{\mathbf{A}}_{n+1}|$; the simplicity hypothesis
is not needed for now. Let us write
${\mathbf{A}}=\bigcup_{n\in\mathbb{N}}{\mathbf{A}}_{n}$. Notice that the
definition of visitable paths can harmlessly be extended to any set $E$ of
designs of the same polarity, even if it is not a behaviour; the same applies to
the definition of incarnation, provided that $E$ satisfies the following: if
${\mathfrak{d}},{\mathfrak{e}}_{1},{\mathfrak{e}}_{2}\in E$ are cut-free
designs such that ${\mathfrak{e}}_{1}\sqsubseteq{\mathfrak{d}}$ and
${\mathfrak{e}}_{2}\sqsubseteq{\mathfrak{d}}$ then there exists
${\mathfrak{e}}\in E$ cut-free such that
${\mathfrak{e}}\sqsubseteq{\mathfrak{e}}_{1}$ and
${\mathfrak{e}}\sqsubseteq{\mathfrak{e}}_{2}$. In particular, as a union of
behaviours, ${\mathbf{A}}$ satisfies this condition.
###### Lemma 90.
1. 1.
$\forall n\in\mathbb{N}$, $V_{{\mathbf{A}}_{n}}\subseteq
V_{{\mathbf{A}}_{n+1}}$.
2. 2.
$V_{\bigcup_{n\in\mathbb{N}}{\mathbf{A}}_{n}}=\bigcup_{n\in\mathbb{N}}V_{{\mathbf{A}}_{n}}$.
3. 3.
$|\bigcup_{n\in\mathbb{N}}{\mathbf{A}}_{n}|=\bigcup_{n\in\mathbb{N}}|{\mathbf{A}}_{n}|$.
###### Proof.
1. 1.
Fix $n$ and let $\mathpzc{s}\in V_{{\mathbf{A}}_{n}}$. There exists
${\mathfrak{d}}\in|{\mathbf{A}}_{n}|$ such that $\mathpzc{s}$ is a path of
${\mathfrak{d}}$. Since $|{\mathbf{A}}_{n}|\subseteq|{\mathbf{A}}_{n+1}|$ we
have ${\mathfrak{d}}\in|{\mathbf{A}}_{n+1}|$, thus by regularity of
${\mathbf{A}}_{n+1}$, $\mathpzc{s}\in V_{{\mathbf{A}}_{n+1}}$.
2. 2.
$(\subseteq)$ Let $\mathpzc{s}\in V_{{\mathbf{A}}}$. There exist
$n\in\mathbb{N}$ and ${\mathfrak{d}}\in|{\mathbf{A}}_{n}|$ such that
$\mathpzc{s}$ is a path of ${\mathfrak{d}}$. By regularity of
${\mathbf{A}}_{n}$ we have $\mathpzc{s}\in V_{{\mathbf{A}}_{n}}$.
$(\supseteq)$ Let $m\in\mathbb{N}$ and $\mathpzc{s}\in V_{{\mathbf{A}}_{m}}$.
For all $n\geq m$, $V_{{\mathbf{A}}_{m}}\subseteq V_{{\mathbf{A}}_{n}}$ by
the previous item, thus $\mathpzc{s}\in V_{{\mathbf{A}}_{n}}$. Hence if we take
${\mathfrak{e}}=\ulcorner\!\ulcorner{\widetilde{\mathpzc{s}}}\urcorner\!\urcorner^{c}$,
we have ${\mathfrak{e}}\in{{\mathbf{A}}_{n}}^{\perp}$ for all $n\geq m$ by
monotonicity. We deduce ${\mathfrak{e}}\in\bigcap_{n\geq
m}{{\mathbf{A}}_{n}}^{\perp}=(\bigcup_{n\geq
m}{\mathbf{A}}_{n})^{\perp}=(\bigcup_{n\in\mathbb{N}}{\mathbf{A}}_{n})^{\perp}={\mathbf{A}}^{\perp}$.
Let ${\mathfrak{d}}\in{\mathbf{A}}_{m}$ be such that $\mathpzc{s}$ is a path of
${\mathfrak{d}}$; we have ${\mathfrak{d}}\in{\mathbf{A}}$ and
${\mathfrak{e}}\in{\mathbf{A}}^{\perp}$, thus
$\langle{\mathfrak{d}}\leftarrow{\mathfrak{e}}\rangle=\mathpzc{s}\in
V_{{\mathbf{A}}}$.
3. 3.
$(\subseteq)$ Let ${\mathfrak{d}}$ be cut-free and minimal for $\sqsubseteq$
in ${\mathbf{A}}$. There exists $m\in\mathbb{N}$ such that
${\mathfrak{d}}\in{\mathbf{A}}_{m}$. Thus ${\mathfrak{d}}$ is minimal for
$\sqsubseteq$ in ${\mathbf{A}}_{m}$, otherwise it would not be minimal in
${\mathbf{A}}$, hence the result.
$(\supseteq)$ Let $m\in\mathbb{N}$, and let
${\mathfrak{d}}\in|{\mathbf{A}}_{m}|$. By hypothesis,
${\mathfrak{d}}\in|{\mathbf{A}}_{n}|$ for all $n\geq m$. Suppose
${\mathfrak{d}}$ is not in $|{\mathbf{A}}|$, so there exists
${\mathfrak{d}}^{\prime}\in{\mathbf{A}}$ such that
${\mathfrak{d}}^{\prime}\sqsubseteq{\mathfrak{d}}$ and
${\mathfrak{d}}^{\prime}\neq{\mathfrak{d}}$. In this case, there exists $n\geq
m$ such that ${\mathfrak{d}}^{\prime}\in{\mathbf{A}}_{n}$, but this
contradicts the fact that ${\mathfrak{d}}\in|{\mathbf{A}}_{n}|$.
∎
###### Lemma 91.
$V_{\bigcup_{n\in\mathbb{N}}{\mathbf{A}}_{n}}=\widetilde{V_{(\bigcup_{n\in\mathbb{N}}{\mathbf{A}}_{n})^{\perp}}}=V_{(\bigcup_{n\in\mathbb{N}}{\mathbf{A}}_{n})^{\perp\perp}}$
###### Proof.
In this proof we use the alternative definition of regularity (Proposition
75). We prove
$V_{{\mathbf{A}}}=\widetilde{V_{{\mathbf{A^{\perp}}}}}$,
and the result will follow from the fact that for any behaviour ${\mathbf{B}}$
(in particular if ${\mathbf{B}}={\mathbf{A}}^{\perp\perp}$) we have
$\widetilde{V_{{\mathbf{B^{\perp}}}}}=V_{{\mathbf{B}}}$.
First note that the inclusion
$V_{{\mathbf{A}}}\subseteq\widetilde{V_{{\mathbf{A^{\perp}}}}}$
is immediate.
Let $\mathpzc{s}\in V_{{\mathbf{A^{\perp}}}}$ and let us show that
$\widetilde{\mathpzc{s}}\in V_{{\mathbf{A}}}$. Let
${\mathfrak{e}}\in|{\mathbf{A}}^{\perp}|$ be such that
$\mathpzc{s}$ is a path of ${\mathfrak{e}}$. By Lemma 74 and the remark
following it, $\mathpzc{s}$ is in the shuffle of anti-shuffles of trivial
views ${\mathbb{t}}_{1},\dots,{\mathbb{t}}_{k}$ of ${\mathbf{A}}^{\perp}$. For
every $i\leq k$, suppose ${\mathbb{t}}_{i}=\langle\kappa_{i}\rangle$;
necessarily, there exists a design ${\mathfrak{d}}_{i}\in{\mathbf{A}}$ such
that $\kappa_{i}$ occurs in
$\langle{\mathfrak{e}}\leftarrow{\mathfrak{d}}_{i}\rangle$, i.e., such that
${\mathbb{t}}_{i}$ is a subsequence of
$\langle{\mathfrak{e}}\leftarrow{\mathfrak{d}}_{i}\rangle$, otherwise
${\mathfrak{e}}$ would not be in the incarnation of ${\mathbf{A}}^{\perp}$ (it
would not be minimal). Let $n$ be large enough that
${\mathfrak{d}}_{1},\dots,{\mathfrak{d}}_{k}\in{\mathbf{A}}_{n}$, and note
that in particular ${\mathfrak{e}}\in{{\mathbf{A}}_{n}}^{\perp}$. For all $i$,
$\widetilde{{\mathbb{t}}_{i}}$
is a trivial view of $|{\mathfrak{d}}_{i}|_{{\mathbf{A}}_{n}}$, thus it is a
trivial view of ${\mathbf{A}}_{n}$. By regularity of ${\mathbf{A}}_{n}$ we
have $\widetilde{{\mathbb{t}}_{i}}\in V_{{\mathbf{A}}_{n}}$. Since
$\widetilde{\mathpzc{s}}$
is in the anti-shuffle of shuffles of
$\widetilde{{\mathbb{t}}_{1}},\dots,\widetilde{{\mathbb{t}}_{k}}$,
we have
$\widetilde{\mathpzc{s}}\in V_{{\mathbf{A}}_{n}}$ using regularity again. Therefore
$\widetilde{\mathpzc{s}}\in V_{{\mathbf{A}}}$ by Lemma 90. ∎
###### Lemma 92.
$(\bigcup_{n\in\mathbb{N}}{\mathbf{A}}_{n})^{\perp}$ and
$(\bigcup_{n\in\mathbb{N}}{\mathbf{A}}_{n})^{\perp\perp}$ are regular.
###### Proof.
Let us show ${\mathbf{A}}^{\perp}$ is regular using the equivalent definition
(Proposition 75).
* •
Let ${\mathbb{t}}$ be a trivial view of ${\mathbf{A}}^{\perp}$. By a similar
argument as in the proof above, there exists $n\in\mathbb{N}$ such that
$\widetilde{{\mathbb{t}}}$
is a trivial view of ${\mathbf{A}}_{n}$, thus
$\widetilde{{\mathbb{t}}}\in V_{{\mathbf{A}}_{n}}\subseteq V_{{\mathbf{A}}}$.
By Lemma 91, ${\mathbb{t}}\in V_{{\mathbf{A^{\perp}}}}$.
* •
Let $\mathpzc{s},\mathpzc{t}\in V_{{\mathbf{A^{\perp}}}}$. By Lemma 91,
$\widetilde{\mathpzc{s}},\widetilde{\mathpzc{t}}\in V_{{\mathbf{A}}}$.
By Lemma 90(2), there exists $n\in\mathbb{N}$ such that
$\widetilde{\mathpzc{s}},\widetilde{\mathpzc{t}}\in V_{{\mathbf{A}}_{n}}$,
thus by regularity of ${\mathbf{A}}_{n}$ we have
$\widetilde{\mathpzc{s}}\shuffle\widetilde{\mathpzc{t}},\ \widetilde{\mathpzc{s}}\text{\rotatebox[origin={c}]{180.0}{$\shuffle$}}\widetilde{\mathpzc{t}}\subseteq V_{{\mathbf{A}}_{n}}\subseteq V_{{\mathbf{A}}}$,
in other words
$\widetilde{\mathpzc{s}\shuffle\mathpzc{t}},\ \widetilde{\mathpzc{s}\text{\rotatebox[origin={c}]{180.0}{$\shuffle$}}\mathpzc{t}}\subseteq V_{{\mathbf{A}}}$.
By Lemma 91 we deduce $\mathpzc{s}\shuffle\mathpzc{t}$,
$\mathpzc{s}\text{\rotatebox[origin={c}]{180.0}{$\shuffle$}}\mathpzc{t}\subseteq V_{{\mathbf{A^{\perp}}}}$,
hence $V_{{\mathbf{A^{\perp}}}}$ is stable under shuffle and anti-shuffle.
Finally ${\mathbf{A}}^{\perp}$ is regular. We deduce that
${\mathbf{A}}^{\perp\perp}$ is regular since regularity is stable under
orthogonality. ∎
Let us introduce some more notions for the next proof. An $\infty$-path (resp.
$\infty$-view) is a finite or infinite sequence of actions satisfying all the
conditions of the definition of path (resp. view) but the requirement of
finiteness. In particular, a finite $\infty$-path (resp. $\infty$-view) is a
path (resp. a view). An $\infty$-path (resp. $\infty$-view) of a design
${\mathfrak{d}}$ is such that any of its positive-ended prefix is a path
(resp. a view) of ${\mathfrak{d}}$. We call infinite chattering a closed
interaction which diverges because the computation never ends; note that
infinite chattering occurs in the interaction between two atomic designs
${\mathfrak{p}}$ and ${\mathfrak{n}}$ if and only if there exists an infinite
$\infty$-path $\mathpzc{s}$ of ${\mathfrak{p}}$ such that $\widetilde{\mathpzc{s}}$ is an $\infty$-path of ${\mathfrak{n}}$ (where, when $\mathpzc{s}$ is infinite, $\widetilde{\mathpzc{s}}$ is obtained from $\mathpzc{s}$ by simply reversing the polarities of all the actions). Given an infinite $\infty$-path $\mathpzc{s}$, the design $\ulcorner\!\ulcorner{\mathpzc{s}}\urcorner\!\urcorner^{c}$ is constructed similarly to the case when $\mathpzc{s}$ is finite (see § B.1.1).
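As a concrete illustration (ours, not from the original text): if $\mathpzc{s}=\kappa_{0}^{+}\kappa_{1}^{-}\kappa_{2}^{+}\kappa_{3}^{-}\dots$ is an infinite $\infty$-path of ${\mathfrak{p}}$, then $\widetilde{\mathpzc{s}}=\kappa_{0}^{-}\kappa_{1}^{+}\kappa_{2}^{-}\kappa_{3}^{+}\dots$, and infinite chattering between ${\mathfrak{p}}$ and ${\mathfrak{n}}$ amounts to the closed interaction following $\mathpzc{s}$ in ${\mathfrak{p}}$ and $\widetilde{\mathpzc{s}}$ in ${\mathfrak{n}}$ forever, never reaching a daimon.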
For the proof of the theorem, suppose now that the behaviours
$({\mathbf{A}}_{n})_{n\in\mathbb{N}}$ are simple. Remark that the second
condition of simplicity implies in particular that the dual of a path in a
design of a simple behaviour is a view.
###### Proof (Theorem 30).
We must show that ${\mathbf{A}}^{\perp\perp}\subseteq{\mathbf{A}}$ since the
other inclusion is trivial. Remark the following: given designs
${\mathfrak{d}}$ and ${\mathfrak{d}}^{\prime}$, if
${\mathfrak{d}}\in{\mathbf{A}}$ and
${\mathfrak{d}}\sqsubseteq{\mathfrak{d}}^{\prime}$ then
${\mathfrak{d}}^{\prime}\in{\mathbf{A}}$. Indeed, if
${\mathfrak{d}}\in{\mathbf{A}}$ then there exists $n\in\mathbb{N}$ such that
${\mathfrak{d}}\in{\mathbf{A}}_{n}$; if moreover
${\mathfrak{d}}\sqsubseteq{\mathfrak{d}}^{\prime}$ then in particular
${\mathfrak{d}}\preceq{\mathfrak{d}}^{\prime}$, and by monotonicity
${\mathfrak{d}}^{\prime}\in{\mathbf{A}}_{n}$, hence
${\mathfrak{d}}^{\prime}\in{\mathbf{A}}$. Thus it is sufficient to show
$|{\mathbf{A}}^{\perp\perp}|\subseteq{\mathbf{A}}$ since for every
${\mathfrak{d}}^{\prime}\in{\mathbf{A}}^{\perp\perp}$ we have
$|{\mathfrak{d}}^{\prime}|\in|{\mathbf{A}}^{\perp\perp}|$ and
$|{\mathfrak{d}}^{\prime}|\sqsubseteq{\mathfrak{d}}^{\prime}$.
So let ${\mathfrak{d}}\in|{\mathbf{A}}^{\perp\perp}|$ and suppose
${\mathfrak{d}}\notin{\mathbf{A}}$. First note the following: by Lemmas 91 and
92, every path $\mathpzc{s}$ of ${\mathfrak{d}}$ is in
$V_{{\mathbf{A^{\perp\perp}}}}=V_{{\mathbf{A}}}$, thus there exists
${\mathfrak{d}}^{\prime}\in|{\mathbf{A}}|$ containing $\mathpzc{s}$. We
explore separately the possible cases, and show how they all lead to a
contradiction.
If ${\mathfrak{d}}$ has an infinite number of maximal slices then:
* •
Either there exists a negative subdesign
${\mathfrak{n}}=\sum_{a\in\mathcal{S}}a(\overrightarrow{x^{a}}).{\mathfrak{p}}_{a}$
of ${\mathfrak{d}}$ for which there are infinitely many names $a\in\mathcal{A}$
such that ${\mathfrak{p}}_{a}\neq\Omega$. In this case, let ${\mathbb{v}}$ be
the view of ${\mathfrak{d}}$ such that for every action $\kappa^{-}$ among the
first ones of ${\mathfrak{n}}$, ${\mathbb{v}}\kappa^{-}$ is the prefix of a
view of ${\mathfrak{d}}$. All such sequences ${\mathbb{v}}\kappa^{-}$ being
prefixes of paths of ${\mathfrak{d}}$, we deduce by regularity of
${\mathbf{A}}^{\perp\perp}$ and using Lemma 71 that
${\mathbb{v}}\kappa^{-}\maltese\in V_{{\mathbf{A^{\perp\perp}}}}$. Let
${\mathfrak{d}}^{\prime}\in|{\mathbf{A}}|$ be such that ${\mathbb{v}}$ is a
view of ${\mathfrak{d}}^{\prime}$. Since ${\mathfrak{d}}^{\prime}$ is also in
${\mathbf{A}}^{\perp\perp}$, we deduce by Lemma 72 that for every action
$\kappa^{-}$ among the first ones of ${\mathfrak{n}}$,
${\mathbb{v}}\kappa^{-}$ is the prefix of a view of ${\mathfrak{d}}^{\prime}$.
Thus ${\mathfrak{d}}^{\prime}$ has an infinite number of slices:
contradiction.
* •
Or we can find an infinite $\infty$-view
${\mathbb{v}}=(\kappa^{-}_{0})\kappa^{+}_{1}\kappa^{-}_{1}\kappa^{+}_{2}\kappa^{-}_{2}\kappa^{+}_{3}\kappa^{-}_{3}\dots$
of ${\mathfrak{d}}$ (the first action $\kappa^{-}_{0}$ being optional
depending on the polarity of ${\mathfrak{d}}$) satisfying the following: there
are infinitely many $i\in\mathbb{N}$ such that $\kappa^{-}_{i}$ is one of the
first actions of a negative subdesign
$\sum_{a\in\mathcal{S}}a(\overrightarrow{x^{a}}).{\mathfrak{p}}_{a}$ of
${\mathfrak{d}}$ with at least two names $a\in\mathcal{A}$ such that
${\mathfrak{p}}_{a}\neq\Omega$. Let ${\mathbb{v}}_{i}$ be the prefix of
${\mathbb{v}}$ ending on $\kappa^{+}_{i}$. There is no design
${\mathfrak{d}}^{\prime}\in|{\mathbf{A}}|$ containing ${\mathbb{v}}$, indeed:
in this case, for all $i$ and all negative action $\kappa^{-}$ such that
${\mathbb{v}}_{i}\kappa^{-}$ is a prefix of a view of ${\mathfrak{d}}$,
${\mathbb{v}}_{i}\kappa^{-}$ would be a prefix of a view of
${\mathfrak{d}}^{\prime}$ by Lemma 72, thus ${\mathfrak{d}}^{\prime}$ would
have an infinite number of slices, which is impossible since the
${\mathbf{A}}_{n}$ are simple. Thus consider
${\mathfrak{e}}=\ulcorner\!\ulcorner{\widetilde{{\mathbb{v}}}}\urcorner\!\urcorner^{c}$:
since all the ${\mathbb{v}}_{i}$ are views of designs in
$|{\mathbf{A}}|=\bigcup_{n\in\mathbb{N}}|{\mathbf{A}}_{n}|$ and since the
${\mathbf{A}}_{n}$ are simple, the sequences $\widetilde{{\mathbb{v}}_{i}}$ are views, thus $\widetilde{{\mathbb{v}}}$
is an $\infty$-view. Therefore an interaction between a design
${\mathfrak{d}}^{\prime}\in{\mathbf{A}}$ and ${\mathfrak{e}}$ necessarily
eventually converges by reaching a daimon of ${\mathfrak{e}}$, indeed:
infinite chattering is impossible since we cannot follow ${\mathbb{v}}$
forever, and interaction cannot fail after following a finite portion of
${\mathbb{v}}$ since those finite portions ${\mathbb{v}}_{i}$ are in
$V_{{\mathbf{A}}}$. Hence ${\mathfrak{e}}\in{\mathbf{A}}^{\perp}$. But
${\mathfrak{d}}\not\perp{\mathfrak{e}}$, because of infinite chattering
following ${\mathbb{v}}$. Contradiction.
If ${\mathfrak{d}}$ has a finite number of maximal slices
${\mathfrak{c}}_{1},\dots,{\mathfrak{c}}_{k}$ then for every $i\leq k$ there exists an $\infty$-path $\mathpzc{s}_{i}$ that visits all the positive proper
actions of ${\mathfrak{c}}_{i}$. Indeed, any (either infinite or positive-
ended) sequence $\mathpzc{s}$ of proper actions in a slice
${\mathfrak{c}}\sqsubseteq{\mathfrak{d}}$, without repetition, such that
polarities alternate and the views of prefixes of $\mathpzc{s}$ are views of
${\mathfrak{c}}$, is an $\infty$-path:
* •
(Linearity) is ensured by the fact that we are in only one slice,
* •
(O-visibility) is satisfied since positive actions of ${\mathfrak{d}}$, thus
also of ${\mathfrak{c}}$, are justified by the immediate previous negative
action (a condition true in $|{\mathbf{A}}|$, thus also satisfied in
${\mathfrak{d}}$ because all its views are views of designs in
$|{\mathbf{A}}|$)
* •
(P-visibility) is natively satisfied by the fact that $\mathpzc{s}$ is a
promenade in the tree representing a design.
For example, $\mathpzc{s}$ can travel in the slice ${\mathfrak{c}}$ as a breadth-first search on couples of nodes $(\kappa^{-},\kappa^{+})$ such that $\kappa^{+}$ is just above $\kappa^{-}$ in the tree, and $\kappa^{+}$ is proper; a sketch of this traversal is given after the proof. Then two cases arise:
* •
Either for all $i$, there exists $n_{i}\in\mathbb{N}$ and
${\mathfrak{d}}_{i}\in{\mathbf{A}}_{n_{i}}$ such that $\mathpzc{s}_{i}$ is an
$\infty$-path of ${\mathfrak{d}}_{i}$. Without loss of generality we can even
suppose that ${\mathfrak{c}}_{i}\sqsubseteq{\mathfrak{d}}_{i}$: if it is not
the case, replace some positive subdesigns (possibly $\Omega$) of
${\mathfrak{d}}_{i}$ by $\maltese$ until you obtain
${\mathfrak{d}}^{\prime}_{i}$ such that
${\mathfrak{c}}_{i}\sqsubseteq{\mathfrak{d}}^{\prime}_{i}$, and note that
indeed ${\mathfrak{d}}^{\prime}_{i}\in{\mathbf{A}}_{n_{i}}$ since
${\mathfrak{d}}_{i}\preceq{\mathfrak{d}}^{\prime}_{i}$. Let
$N=\mathrm{max}_{1\leq i\leq k}(n_{i})$. Since
${\mathfrak{d}}\not\in{\mathbf{A}}$, thus in particular
${\mathfrak{d}}\not\in{\mathbf{A}}_{N}$, there exists
${\mathfrak{e}}\in{\mathbf{A}}_{N}^{\perp}$ such that
${\mathfrak{d}}\not\perp{\mathfrak{e}}$. The reason for divergence cannot be
infinite chattering, otherwise there would exist an infinite $\infty$-path
$\mathpzc{t}$ in ${\mathfrak{d}}$ such that $\widetilde{\mathpzc{t}}$
is in ${\mathfrak{e}}$, and $\mathpzc{t}$ is necessarily in a single slice of
${\mathfrak{d}}$ (say ${\mathfrak{c}}_{i}$) to ensure its linearity; but in
this case we would also have ${\mathfrak{d}}_{i}\not\perp{\mathfrak{e}}$ where
${\mathfrak{d}}_{i}\in{\mathbf{A}}_{N}$, impossible. Similarly, for every
(finite) path $\mathpzc{s}$ of ${\mathfrak{d}}$, there exists $i$ such that
$\mathpzc{s}$ is a path of ${\mathfrak{c}}_{i}$ thus of
${\mathfrak{d}}_{i}\in{\mathbf{A}}_{N}$; this ensures that interaction between
${\mathfrak{d}}$ and ${\mathfrak{e}}$ cannot diverge after a finite number of
steps either, leading to a contradiction.
* •
Or there is an $i$ such that the (necessarily infinite) $\infty$-path
$\mathpzc{s}_{i}$ is in no design of ${\mathbf{A}}$. In this case, let
${\mathfrak{e}}=\ulcorner\!\ulcorner{\widetilde{\mathpzc{s}_{i}}}\urcorner\!\urcorner^{c}$ (where $\widetilde{\mathpzc{s}_{i}}$ is a view since the ${\mathbf{A}}_{n}$ are simple), and with a similar
argument as previously we have ${\mathfrak{e}}\in{\mathbf{A}}^{\perp}$ but
${\mathfrak{d}}\not\perp{\mathfrak{e}}$ by infinite chattering, contradiction.
∎
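The traversal used in the finitely-many-slices case above, a breadth-first search on couples $(\kappa^{-},\kappa^{+})$ with $\kappa^{+}$ just above $\kappa^{-}$, can be outlined as follows. This is an illustrative sketch of ours, assuming a slice is presented as a tree whose `children` map lists the actions directly above a node; `root_pos`, `children` and `is_proper` are hypothetical names:

```python
from collections import deque

def visit_slice(root_pos, children, is_proper):
    """Breadth-first enumeration of couples (negative, positive) of
    actions in a slice; in a slice, each negative action has at most
    one positive action directly above it."""
    couples = []
    queue = deque([root_pos])            # positive actions to expand
    while queue:
        pos = queue.popleft()
        for neg in children(pos):        # negative actions above pos
            for nxt in children(neg):    # the positive action above neg
                if is_proper(nxt):
                    couples.append((neg, nxt))
                    queue.append(nxt)
    return couples
```

Visiting couples in this order yields a repetition-free sequence whose polarities alternate and whose prefixes have their views inside the slice, matching the conditions checked in the proof.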
### C.3 Proofs of Subsection 4.3
###### Proof (Lemma 32).
By induction on $A$, we prove that for all $X\in\mathcal{V}$ and
$\sigma:\mathrm{FV}(A)\setminus\{X\}\to\mathcal{B}^{+}$ simple and regular,
the induction hypothesis consisting in all the following statements holds:
1. 1.
for all simple regular behaviours
${\mathbf{P}},{\mathbf{P}}^{\prime}\in\mathcal{B}^{+}$, if
$|{\mathbf{P}}|\subseteq|{\mathbf{P}}^{\prime}|$ then
$|{\phi^{A}_{\sigma}}({\mathbf{P}})|\subseteq|{\phi^{A}_{\sigma}}({\mathbf{P}}^{\prime})|$;
2. 2.
for all $n\in\mathbb{N}$,
$|(\phi^{A}_{\sigma})^{n}(\maltese)|\subseteq|(\phi^{A}_{\sigma})^{n+1}(\maltese)|$;
3. 3.
for all simple regular behaviour ${\mathbf{P}}\in\mathcal{B}^{+}$,
${\phi^{A}_{\sigma}}({\mathbf{P}})$ is simple and regular;
4. 4.
$\llbracket\mu
X.A\rrbracket^{\sigma}=\bigcup_{n\in\mathbb{N}}(\phi^{A}_{\sigma})^{n}(\maltese)$.
5. 5.
$|\llbracket\mu
X.A\rrbracket^{\sigma}|=\bigcup_{n\in\mathbb{N}}|(\phi^{A}_{\sigma})^{n}(\maltese)|$.
Let us write $\sigma_{{\mathbf{P}}}$ for $\sigma,X\mapsto{\mathbf{P}}$. Note
that the base cases are immediate. If $A=A_{1}\oplus^{+}A_{2}$ or
$A=A_{1}\otimes^{+}A_{2}$ then:
1. 1.
Follows from the incarnated form of internal completeness (in Theorem 8).
2. 2.
Easy by induction on $n$, using previous item.
3. 3.
Regularity of ${\phi^{A}_{\sigma}}({\mathbf{P}})$ comes from Proposition 18,
and simplicity is easy since the structure of the designs in $\llbracket
A\rrbracket^{\sigma_{{\mathbf{P}}}}$ is given by internal completeness.
4. 4.
By Corollary 26 we have $\llbracket\mu
X.A\rrbracket^{\sigma}=(\bigcup_{n\in\mathbb{N}}(\phi^{A}_{\sigma})^{n}(\maltese))^{\perp\perp}$,
and by Theorem 30 we have
$(\bigcup_{n\in\mathbb{N}}(\phi^{A}_{\sigma})^{n}(\maltese))^{\perp\perp}=\bigcup_{n\in\mathbb{N}}(\phi^{A}_{\sigma})^{n}(\maltese)$
since items (2) and (3) guarantee that the hypotheses of the theorem are
satisfied.
5. 5.
By previous item and Lemma 90(3).
If $A=\mu Y.A_{0}$ then:
1. 1.
Suppose $|{\mathbf{P}}|\subseteq|{\mathbf{P}}^{\prime}|$, where ${\mathbf{P}}$
and ${\mathbf{P}}^{\prime}$ are simple regular. We have
$|{\phi^{A}_{\sigma}}({\mathbf{P}})|=|\llbracket\mu
Y.A_{0}\rrbracket^{\sigma_{{\mathbf{P}}}}|=\bigcup_{n\in\mathbb{N}}|(\phi^{A_{0}}_{\sigma_{{\mathbf{P}}}})^{n}(\maltese)|$
by induction hypothesis (5), and similarly for ${\mathbf{P}}^{\prime}$. By
induction on $n$, we prove that
$|(\phi^{A_{0}}_{\sigma_{{\mathbf{P}}}})^{n}(\maltese)|\subseteq|(\phi^{A_{0}}_{\sigma_{{\mathbf{P}}^{\prime}}})^{n}(\maltese)|$
It is immediate for $n=0$, and the inductive case is:
$\displaystyle|(\phi^{A_{0}}_{\sigma_{{\mathbf{P}}}})^{n+1}(\maltese)|$
$\displaystyle=|\phi^{A_{0}}_{\sigma_{{\mathbf{P}}}}((\phi^{A_{0}}_{\sigma_{{\mathbf{P}}}})^{n}(\maltese))|$
$\displaystyle\subseteq|\phi^{A_{0}}_{\sigma_{{\mathbf{P}}}}((\phi^{A_{0}}_{\sigma_{{\mathbf{P}}^{\prime}}})^{n}(\maltese))|$
by induction hypotheses (1), (3) and ($\delta$)
$\displaystyle=|\phi^{A_{0}}_{\sigma,Y\mapsto(\phi^{A_{0}}_{\sigma_{{\mathbf{P}}^{\prime}}})^{n}(\maltese)}({\mathbf{P}})|$
$\displaystyle\subseteq|\phi^{A_{0}}_{\sigma,Y\mapsto(\phi^{A_{0}}_{\sigma_{{\mathbf{P}}^{\prime}}})^{n}(\maltese)}({\mathbf{P}}^{\prime})|$
by induction hypotheses (1) and (3)
$\displaystyle=|(\phi^{A_{0}}_{\sigma_{{\mathbf{P}}^{\prime}}})^{n+1}(\maltese)|$
2. 3.
By induction hypotheses (2), (3) and (4) respectively, we have
* •
for all $n\in\mathbb{N}$,
$|(\phi^{A_{0}}_{\sigma})^{n}(\maltese)|\subseteq|(\phi^{A_{0}}_{\sigma})^{n+1}(\maltese)|$,
* •
for all $n\in\mathbb{N}$, $(\phi^{A_{0}}_{\sigma})^{n}(\maltese)$ is simple
regular,
* •
$\llbracket\mu
Y.A_{0}\rrbracket^{\sigma}=\bigcup_{n\in\mathbb{N}}(\phi^{A_{0}}_{\sigma})^{n}(\maltese)$.
Consequently, by Corollary 31, $\llbracket\mu Y.A_{0}\rrbracket^{\sigma}$ is
simple regular.
3. 2. 4. 5.
Similar to the cases $A=A_{1}\oplus^{+}A_{2}$ and $A=A_{1}\otimes^{+}A_{2}$.
∎
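Item (4) of the induction hypothesis is a Kleene-style least-fixed-point construction: iterate the monotone map $\phi^{A}_{\sigma}$ from $\maltese$ and take the union of the iterates. As a loose finite analogy only (our sketch; the behaviours above live in an infinite space, where the union over all $n\in\mathbb{N}$ replaces the termination test):

```python
def kleene_lfp(f, bottom):
    """Union of the iterates f^n(bottom) of a monotone map f on
    finite sets, computed until a fixed point is reached."""
    current = frozenset(bottom)
    while True:
        nxt = frozenset(f(current)) | current
        if nxt == current:
            return current
        current = nxt

# Example: least fixed point of X -> {0} ∪ {x+1 : x in X, x < 3}.
print(sorted(kleene_lfp(lambda X: {0} | {x + 1 for x in X if x < 3}, set())))
# prints [0, 1, 2, 3]
```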
###### Proof (Lemma 37).
By induction on $A$:
* •
If $A=a$ then it has basis $\llbracket a\rrbracket={\mathbf{C}}_{a}$.
* •
If $A=A_{1}\oplus^{+}A_{2}$, without loss of generality suppose $A_{1}$ is
steady, with basis ${\mathbf{B}}_{1}$. Take
$\otimes_{1}\shneg{\mathbf{B}}_{1}$ as a basis for $A$, where the connective
$\otimes_{1}$ is defined like $\shpos$ with a different name of action:
$\otimes_{1}{\mathbf{N}}:=\iota_{1}\langle{\mathbf{N}}\rangle^{\perp\perp}$
and by internal completeness
$\otimes_{1}{\mathbf{N}}:=\iota_{1}\langle{\mathbf{N}}\rangle$.
* •
If $A=A_{1}\otimes^{+}A_{2}$ then both $A_{1}$ and $A_{2}$ are steady, of
respective bases ${\mathbf{B}}_{1}$ and ${\mathbf{B}}_{2}$. The behaviour
${\mathbf{B}}={\mathbf{B}}_{1}\otimes^{+}{\mathbf{B}}_{2}$ is a basis for $A$,
indeed: since ${\mathbf{B}}_{1}$ and ${\mathbf{B}}_{2}$ are regular,
Proposition 82 gives
$V_{{\mathbf{B}}_{1}\otimes^{+}{\mathbf{B}}_{2}}=\kappa_{\bullet}(V_{\shneg{\mathbf{B}}_{1}}^{x}\shuffle V_{\shneg{\mathbf{B}}_{2}}^{y})\cup\{\maltese\}$ where, by Proposition 78, $V_{\shneg{\mathbf{B}}_{i}}=\kappa_{\blacktriangle}V_{{\mathbf{B}}_{i}}^{x}\cup\{\epsilon\}$ for $i\in\{1,2\}$; from this, and using internal completeness, we deduce
that ${\mathbf{B}}$ satisfies all the conditions.
* •
Suppose $A=\mu X.A_{0}$, where $A_{0}$ is steady and has a basis
${\mathbf{B}}_{0}$, let us show that ${\mathbf{B}}_{0}$ is also a basis for
$A$.
* –
By Proposition 34, $\llbracket
A\rrbracket^{\sigma}=\bigcup_{n\in\mathbb{N}}(\phi^{A_{0}}_{\sigma})^{n}(\maltese)$,
and since ${\mathbf{B}}_{0}$ is a basis for $A_{0}$ we have
${\mathbf{B}}_{0}\subseteq\llbracket
A_{0}\rrbracket^{\sigma,X\to\maltese}=(\phi^{A_{0}}_{\sigma})(\maltese)$, so
indeed ${\mathbf{B}}_{0}\subseteq\llbracket A\rrbracket^{\sigma}$.
* –
By induction hypothesis, we immediately have that for every path
$\mathpzc{s}\in V_{{\mathbf{B}}_{0}}$, there exists $\mathpzc{t}\in
V_{{\mathbf{B}}_{0}}^{max}$ $\maltese$-free extending $\mathpzc{s}$.
* –
By Lemma 90(2), $V_{\llbracket A\rrbracket^{\sigma}}=\{\maltese\}\cup\bigcup_{n\in\mathbb{N}}V_{(\phi^{A_{0}}_{\sigma})^{n+1}(\maltese)}=\{\maltese\}\cup\bigcup_{n\in\mathbb{N}}V_{\llbracket A_{0}\rrbracket^{\sigma_{n}}}$ where
$\sigma_{n}=\sigma,X\mapsto(\phi^{A_{0}}_{\sigma})^{n}(\maltese)$ has a simple
regular image. By induction hypothesis, for all $n\in\mathbb{N}$, $V_{{\mathbf{B}}_{0}}^{max}\subseteq V_{\llbracket A_{0}\rrbracket^{\sigma_{n}}}^{max}$, therefore $V_{{\mathbf{B}}_{0}}^{max}\subseteq V_{\llbracket A\rrbracket^{\sigma}}^{max}$.
∎
## Appendix D Proof of Proposition 43
In this section, we prove Proposition 43, which first requires several lemmas.
Let us denote the set of functional behaviours by $\mathcal{F}$, and recall
that $\mathcal{D}$ stands for the set of data behaviours.
###### Lemma 93.
Let ${\mathbf{P}}\in\mathcal{D}$, and let ${\mathbf{Q}}$ be a pure regular
behaviour. The behaviour ${\mathbf{P}}\multimap^{+}{\mathbf{Q}}$ is pure.
###### Proof.
By Proposition 19 it suffices to show that
$(\shneg{\mathbf{P}})\multimap{\mathbf{Q}}$ is pure. Remark first that, by
construction of data behaviours, the following assertion is satisfied in views
(thus also in paths) of $\shneg{\mathbf{P}}$: every proper positive action is
justified by the negative action preceding it.
By regularity and Corollary 83, we have $V_{{\mathbf{(\shneg P)\multimap Q}}}=\widetilde{\kappa_{\bullet}(V_{{\mathbf{\shneg P}}}\shuffle\widetilde{V_{{\mathbf{Q}}}})}\cup\{\epsilon\}$.
Let $\mathpzc{s}\maltese\in V_{{\mathbf{(\shneg P)\multimap Q}}}$, and we will
prove that it is extensible. There exist $\mathpzc{t}_{1}\in
V_{{\mathbf{\shneg P}}}$ and $\mathpzc{t}_{2}\in V_{{\mathbf{Q}}}$ such that
$\widetilde{\mathpzc{s}\maltese}=\overline{\mathpzc{s}}\in\kappa_{\bullet}(\mathpzc{t}_{1}\shuffle\widetilde{\mathpzc{t}_{2}})$.
In particular $\mathpzc{t}_{1}$ is $\maltese$-free and $\mathpzc{t}_{2}$ is
$\maltese$-ended, say $\mathpzc{t}_{2}=\mathpzc{t}_{2}^{\prime}\maltese$.
Since ${\mathbf{Q}}$ is pure, there exists $\kappa^{+}$ such that
$\mathpzc{t}_{2}^{\prime}\kappa^{+}\in V_{{\mathbf{Q}}}$. Let us show that
$\mathpzc{s}\kappa^{+}$ is a path, i.e., that if $\kappa^{+}$ is justified
then $\mathrm{just}(\kappa^{+})$ appears in
$\ulcorner{\mathpzc{s}}\urcorner$,
by induction on the length of $\mathpzc{t}_{1}$:
* •
If $\mathpzc{t}_{1}=\epsilon$ then
$\mathpzc{s}\kappa^{+}=\mathpzc{t}_{2}^{\prime}\kappa^{+}$ hence it is a path.
* •
Suppose
$\mathpzc{t}_{1}=\mathpzc{t}_{1}^{\prime}\kappa_{p}^{-}\kappa_{p}^{+}$. Since
$\mathpzc{t}_{1}$ is $\maltese$-free, $\kappa_{p}^{+}$ is proper. Thus
$\mathpzc{s}$ is of the form
$\mathpzc{s}=\mathpzc{s}_{1}\overline{\kappa_{p}^{-}\kappa_{p}^{+}}\mathpzc{s}_{2}$,
and by induction hypothesis $\mathpzc{s}_{1}\mathpzc{s}_{2}\kappa^{+}$ is a
path, i.e., $\mathrm{just}(\kappa^{+})$ appears in
$\ulcorner{\mathpzc{s}_{1}\mathpzc{s}_{2}}\urcorner$.
* –
Either $\ulcorner{\mathpzc{s}}\urcorner=\ulcorner{\mathpzc{s}_{1}\mathpzc{s}_{2}}\urcorner$ and indeed $\mathrm{just}(\kappa^{+})$ also appears in $\ulcorner{\mathpzc{s}}\urcorner$.
* –
Or $\ulcorner{\mathpzc{s}}\urcorner$ is of the form $\ulcorner{\mathpzc{s}}\urcorner=\ulcorner{\mathpzc{s}_{1}}\urcorner\overline{\kappa_{p}^{-}}\overline{\kappa_{p}^{+}}\mathpzc{s}^{\prime}_{2}$ since, by the remark at the beginning of this proof, $\kappa_{p}^{+}$ is justified by $\kappa_{p}^{-}$. This means in particular that $\mathpzc{s}^{\prime}_{2}$ starts with the same positive action as $\mathpzc{s}_{2}$, thus we have $\ulcorner{\mathpzc{s}_{1}\mathpzc{s}_{2}}\urcorner=\ulcorner{\mathpzc{s}_{1}}\urcorner\mathpzc{s}^{\prime}_{2}$. Since $\mathrm{just}(\kappa^{+})$ appears in $\ulcorner{\mathpzc{s}_{1}\mathpzc{s}_{2}}\urcorner$ and it is an action of $\mathpzc{s}_{1}$, it appears in $\ulcorner{\mathpzc{s}_{1}}\urcorner$ thus also in $\ulcorner{\mathpzc{s}}\urcorner$.
Therefore $\mathpzc{s}\kappa^{+}$ is a path. Since
$\mathpzc{s}\kappa^{+}\in\widetilde{\kappa_{\bullet}(V_{{\mathbf{\shneg P}}}\shuffle\widetilde{V_{{\mathbf{Q}}}})}$
and the behaviours are regular, $\mathpzc{s}\kappa^{+}\in
V_{{\mathbf{P\multimap^{+}Q}}}$, thus $\mathpzc{s}\maltese$ is extensible. As
this is true for every $\maltese$-ended path in $V_{{\mathbf{(\shneg
P)\multimap Q}}}$, the behaviour $(\shneg{\mathbf{P}})\multimap{\mathbf{Q}}$
is pure, and so is ${\mathbf{P}}\multimap^{+}{\mathbf{Q}}$. ∎
###### Lemma 94.
If ${\mathbf{P}}\in\mathcal{F}$ and ${\mathbf{Q}}\in\mathrm{Const}$ then
${\mathbf{P}}\multimap^{+}{\mathbf{Q}}$ is pure.
###### Proof.
We prove that $(\shneg{\mathbf{P}})\multimap{\mathbf{Q}}$ is pure, and the
conclusion will follow from Proposition 19. Let
$\kappa^{+}=x_{0}|\overline{a}\langle\overrightarrow{y}\rangle$ where
${\mathbf{Q}}={\mathbf{C}}_{a}$, and let $\mathpzc{s}\maltese\in
V_{{\mathbf{(\shneg P)\multimap Q}}}$. As in the proof of Lemma 93, there
exist $\mathpzc{t}_{1}\in V_{{\mathbf{\shneg P}}}$ and $\mathpzc{t}_{2}\in
V_{{\mathbf{Q}}}$ such that
$\widetilde{\mathpzc{s}\maltese}=\overline{\mathpzc{s}}\in\kappa_{\bullet}(\mathpzc{t}_{1}\shuffle\widetilde{\mathpzc{t}_{2}})$
with $\mathpzc{t}_{2}$ $\maltese$-ended. But
$V_{{\mathbf{Q}}}=\{\maltese,\kappa^{+}\}$, thus $\mathpzc{t}_{2}=\maltese$ and $\widetilde{\mathpzc{t}_{2}}=\epsilon$.
Hence
$\mathpzc{s}\maltese=\widetilde{\kappa_{\bullet}\mathpzc{t}_{1}}$,
and this path is extensible with action $\kappa^{+}$, indeed:
$\mathpzc{s}\kappa^{+}$ is a path because $\kappa^{+}$ is justified by
$\kappa_{\bullet}$, which is the only initial action of $\mathpzc{s}\kappa^{+}$ and thus appears in $\ulcorner{\mathpzc{s}}\urcorner$;
moreover
$\widetilde{\mathpzc{s}\kappa^{+}}\in\kappa_{\bullet}(\mathpzc{t}_{1}\shuffle\widetilde{\kappa^{+}})$
where $\kappa^{+}\in V_{{\mathbf{Q}}}$, therefore $\mathpzc{s}\kappa^{+}\in
V_{{\mathbf{(\shneg P)\multimap Q}}}$. ∎
###### Lemma 95.
Let ${\mathbf{P}},{\mathbf{Q}}\in\mathcal{F}$. If there is $\mathpzc{s}\in
V_{{\mathbf{Q}}}$ $\maltese$-free (resp. $\maltese$-ended) and maximal, then
there is $\mathpzc{t}\in V_{{\mathbf{P\multimap^{+}Q}}}$ $\maltese$-free
(resp. $\maltese$-ended) and maximal.
###### Proof.
Suppose there exists $\mathpzc{s}\in V_{{\mathbf{Q}}}$ $\maltese$-free (resp.
$\maltese$-ended) and maximal. Since ${\mathbf{P}}$ is positive and different
from $\maltese$, there exists $\mathpzc{s}^{\prime}\in V_{{\mathbf{\shneg
P}}}$ $\maltese$-free and non-empty. Let
$\mathpzc{t}^{\prime}=\widetilde{\kappa_{\bullet}\mathpzc{s}^{\prime}\widetilde{\mathpzc{s}}}$,
and remark that
$\mathpzc{t}^{\prime}=\overline{\kappa_{\bullet}\mathpzc{s}^{\prime}}\mathpzc{s}$.
This is a path (O- and P-visibility are satisfied), it belongs to
$V_{{\mathbf{(\shneg P)\multimap Q}}}$, it is $\maltese$-free (resp.
$\maltese$-ended). Suppose it is extensible, and consider both the
“$\maltese$-free” and the “$\maltese$-ended” cases:
* •
if $\mathpzc{s}$ and $\mathpzc{t}^{\prime}$ are $\maltese$-free, then there
exists a negative action $\kappa^{-}$ such that
$\mathpzc{t}^{\prime}\kappa^{-}\maltese\in V_{{\mathbf{(\shneg P)\multimap
Q}}}$. But
$\mathpzc{t}^{\prime}\kappa^{-}\maltese=\overline{\kappa_{\bullet}\mathpzc{s}^{\prime}}\mathpzc{s}\kappa^{-}\maltese$,
and since it belongs to $V_{{\mathbf{(\shneg P)\multimap Q}}}=\widetilde{\kappa_{\bullet}(V_{{\mathbf{\shneg P}}}\shuffle V_{{\mathbf{Q^{\perp}}}})}\cup\{\epsilon\}$, we necessarily
have $\mathpzc{s}\kappa^{-}\maltese\in V_{{\mathbf{Q}}}$ – indeed, the
sequence $\overline{\mathpzc{s}^{\prime}}\kappa^{-}$ has two adjacent negative
actions. This contradicts the maximality of $\mathpzc{s}$ in
$V_{{\mathbf{Q}}}$.
* •
if $\mathpzc{s}$ and $\mathpzc{t}^{\prime}$ are $\maltese$-ended, there exists
a positive action $\kappa^{+}$ that extends $\mathpzc{t}^{\prime}$ and a
contradiction arises with a similar reasoning.
Hence $\mathpzc{t}^{\prime}$ is maximal in $V_{{\mathbf{(\shneg P)\multimap
Q}}}$. Finally, $\mathpzc{t}=\kappa_{\blacktriangledown}\mathpzc{t}^{\prime}$
fulfills the requirements.∎
###### Lemma 96.
For every behaviour ${\mathbf{P}}\in\mathcal{F}$, there exists $\mathpzc{s}\in
V_{{\mathbf{P}}}$ maximal and $\maltese$-free.
###### Proof.
By induction on ${\mathbf{P}}$. If ${\mathbf{P}}\in\mathcal{D}$ then take
$\mathpzc{s}\in V_{{\mathbf{B}}}$ maximal, where ${\mathbf{B}}$ is a base of
${\mathbf{P}}$. Use Lemma 95 in the case of $\multimap^{+}$, and the result is
easy for $\otimes^{+}$ and $\oplus^{+}$. ∎
###### Lemma 97.
Let ${\mathbf{P}}\in\mathcal{F}$ and let $\mathcal{C}$ be a context. If
$\mathcal{C}[{\mathbf{P}}]$ is pure then ${\mathbf{P}}$ is pure.
###### Proof.
We prove the contrapositive by induction on $\mathcal{C}$. Suppose
${\mathbf{P}}$ is impure.
* •
If $\mathcal{C}=[\leavevmode\nobreak\ ]$ then
$\mathcal{C}[{\mathbf{P}}]={\mathbf{P}}$, thus $\mathcal{C}[{\mathbf{P}}]$ is
impure.
* •
If $\mathcal{C}=\mathcal{C}^{\prime}\oplus^{+}{\mathbf{Q}}$ or
${\mathbf{Q}}\oplus^{+}\mathcal{C}^{\prime}$ and by induction hypothesis
$\mathcal{C}^{\prime}[{\mathbf{P}}]$ is impure, i.e., there exists a maximal
path $\mathpzc{s}\maltese\in V_{{\mathbf{\mathcal{C}^{\prime}[P]}}}$, then one
of $\kappa_{\iota_{1}}\kappa_{\blacktriangle}\mathpzc{s}\maltese$ or
$\kappa_{\iota_{2}}\kappa_{\blacktriangle}\mathpzc{s}\maltese$ is maximal in
$V_{{\mathbf{\mathcal{C}[P]}}}$, hence the result.
* •
If $\mathcal{C}=\mathcal{C}^{\prime}\otimes^{+}{\mathbf{Q}}$ or
${\mathbf{Q}}\otimes^{+}\mathcal{C}^{\prime}$ and by induction hypothesis
there exists a maximal path $\mathpzc{s}\maltese\in
V_{{\mathbf{\mathcal{C}^{\prime}[P]}}}$, then by Lemma 96, there exists a
$\maltese$-free maximal path $\mathpzc{t}\in V_{{\mathbf{Q}}}$. Consider the
path
$\mathpzc{u}=\kappa_{\bullet}\kappa_{\blacktriangle}^{\mathpzc{t}}\mathpzc{t}\kappa_{\blacktriangle}^{\mathpzc{s}}\mathpzc{s}\maltese$,
where:
* –
$\kappa_{\blacktriangle}^{\mathpzc{t}}$ justifies the first action of
$\mathpzc{t}$,
* –
$\kappa_{\blacktriangle}^{\mathpzc{s}}$ justifies the first one of
$\mathpzc{s}$, and
* –
$\kappa_{\bullet}$ justifies $\kappa_{\blacktriangle}^{\mathpzc{t}}$ and
$\kappa_{\blacktriangle}^{\mathpzc{s}}$, one on each (1st or 2nd) position,
depending on the form of $\mathcal{C}$.
We have $\mathpzc{u}\in V_{{\mathbf{\mathcal{C}[{\mathbf{P}}]}}}$, and
$\mathpzc{u}$ is $\maltese$-ended and maximal, hence the result.
* •
If $\mathcal{C}={\mathbf{Q}}\multimap^{+}\mathcal{C}^{\prime}$ and by
induction hypothesis $\mathcal{C}^{\prime}[{\mathbf{P}}]$ is impure, then
Lemma 95 (in its “$\maltese$-ended” version) concludes the proof.
∎
###### Proof (Proposition 43).
$(\Rightarrow)$ Suppose ${\mathbf{P}}$ impure. By induction on behaviour
${\mathbf{P}}$:
* •
${\mathbf{P}}\in\mathcal{D}$ is impossible by Corollary 40.
* •
If ${\mathbf{P}}={\mathbf{P}}_{1}\oplus^{+}{\mathbf{P}}_{2}$ (resp.
${\mathbf{P}}={\mathbf{P}}_{1}\otimes^{+}{\mathbf{P}}_{2}$) then one of
${\mathbf{P}}_{1}$ or ${\mathbf{P}}_{2}$ is impure by Proposition 19, say
${\mathbf{P}}_{1}$. By induction hypothesis, ${\mathbf{P}}_{1}$ is of the form
${\mathbf{P}}_{1}=\mathcal{C}^{\prime}_{1}[\leavevmode\nobreak\
\mathcal{C}^{\prime}_{2}[{\mathbf{Q}}_{1}\multimap^{+}{\mathbf{Q}}_{2}]\multimap^{+}{\mathbf{R}}\leavevmode\nobreak\
]$. Let $\mathcal{C}_{1}=\mathcal{C}^{\prime}_{1}\oplus^{+}{\mathbf{P}}_{2}$
(resp. $\mathcal{C}_{1}=\mathcal{C}^{\prime}_{1}\otimes^{+}{\mathbf{P}}_{2}$)
and $\mathcal{C}_{2}=\mathcal{C}^{\prime}_{2}$, in order to get the result for
${\mathbf{P}}$.
* •
If ${\mathbf{P}}={\mathbf{P}}_{1}\multimap^{+}{\mathbf{P}}_{2}$, then
${\mathbf{P}}_{2}\not\in\mathrm{Const}$ by Lemma 94, and:
* –
If ${\mathbf{P}}_{2}$ impure, then by induction hypothesis ${\mathbf{P}}_{2}$
is of the form ${\mathbf{P}}_{2}=\mathcal{C}^{\prime}_{1}[\leavevmode\nobreak\
\mathcal{C}^{\prime}_{2}[{\mathbf{Q}}_{1}\multimap^{+}{\mathbf{Q}}_{2}]\multimap^{+}{\mathbf{R}}\leavevmode\nobreak\
]$, and it suffices to take
$\mathcal{C}_{1}={\mathbf{P}}_{1}\multimap^{+}\mathcal{C}^{\prime}_{1}$ and
$\mathcal{C}_{2}=\mathcal{C}^{\prime}_{2}$ to get the result for
${\mathbf{P}}$.
* –
If ${\mathbf{P}}_{2}$ is pure, since it is also regular the conclusion follows
from Lemma 93.
$(\Leftarrow)$ Let $\mathcal{C}_{1},\mathcal{C}_{2}$ be contexts,
${\mathbf{Q}}_{1},{\mathbf{Q}}_{2},{\mathbf{R}}\in\mathcal{P}$ with
${\mathbf{R}}\not\in\mathrm{Const}$. Let
${\mathbf{P}}=\mathcal{C}_{1}[\leavevmode\nobreak\
\mathcal{C}_{2}[{\mathbf{Q}}_{1}\multimap^{+}{\mathbf{Q}}_{2}]\multimap^{+}{\mathbf{R}}\leavevmode\nobreak\
]$ and
${\mathbf{Q}}=\mathcal{C}_{2}[{\mathbf{Q}}_{1}\multimap^{+}{\mathbf{Q}}_{2}]$.
We prove that ${\mathbf{P}}$ is impure.
First suppose that
${\mathbf{P}}=\mathcal{C}_{2}[{\mathbf{Q}}_{1}\multimap^{+}{\mathbf{Q}}_{2}]\multimap^{+}{\mathbf{R}}$,
and in this case we show the result by induction on the depth of context
$\mathcal{C}_{2}$. The exact induction hypothesis will be: there exists a
maximal $\maltese$-ended path in $V_{{\mathbf{P}}}$ of the form
$\kappa_{\blacktriangledown}\mathpzc{s}\maltese$ where
$\overline{\mathpzc{s}}\in\kappa_{\bullet}((\kappa_{\blacktriangle}V_{{\mathbf{Q}}})\shuffle\widetilde{V_{{\mathbf{R}}}})$.
* •
If $\mathcal{C}_{2}=[\leavevmode\nobreak\ ]$, then
${\mathbf{Q}}={\mathbf{Q}}_{1}\multimap^{+}{\mathbf{Q}}_{2}=\shpos(\shneg{\mathbf{Q}}_{1}\multimap{\mathbf{Q}}_{2})$
and
${\mathbf{P}}={\mathbf{Q}}\multimap^{+}{\mathbf{R}}=\shpos(\shneg{\mathbf{Q}}\multimap{\mathbf{R}})$.
In order to differentiate actions
$\kappa_{\blacktriangledown},\kappa_{\blacktriangle},\kappa_{\bullet}$ used to
construct ${\mathbf{Q}}$ from those to construct ${\mathbf{P}}$, we will use
corresponding superscripts. Let $\kappa^{Q}_{\blacktriangle}\mathpzc{t}_{1}\in
V_{{\mathbf{\shneg Q_{1}}}}$ be $\maltese$-free (and non-empty). Let
$\mathpzc{t}_{2}\in V_{{\mathbf{Q}}_{2}}$ be a maximal $\maltese$-free path:
its existence is ensured by Lemma 96, and it has one proper positive initial
action $\kappa_{2}^{+}$. Now let
$\mathpzc{t}=\widetilde{\kappa^{Q}_{\bullet}\kappa^{Q}_{\blacktriangle}\mathpzc{t}_{1}\widetilde{\mathpzc{t}_{2}}}=\overline{\kappa^{Q}_{\bullet}\kappa^{Q}_{\blacktriangle}\mathpzc{t}_{1}}\mathpzc{t}_{2}$.
Similarly to the path constructed in the proof of Lemma 95, we have that
$\mathpzc{t}$ is $\maltese$-free, it is in $V_{{\mathbf{(\shneg
Q_{1})\multimap Q_{2}}}}$, and it is maximal. Thus
$\kappa^{Q}_{\blacktriangledown}\mathpzc{t}\in V_{{\mathbf{Q}}}$. Since
${\mathbf{R}}\notin\mathrm{Const}$, there exists a path of the form
$\kappa^{+}\kappa^{-}\maltese\in V_{{\mathbf{R}}}$, and thus necessarily
$\kappa^{+}$ justifies $\kappa^{-}$. Define the sequence:
$\mathpzc{s}\maltese=\overline{\kappa^{P}_{\bullet}\kappa^{P}_{\blacktriangle}\kappa^{Q}_{\blacktriangledown}}\kappa^{Q}_{\bullet}\kappa^{Q}_{\blacktriangle}\kappa^{+}\kappa^{-}\mathpzc{t}_{1}\overline{\mathpzc{t}_{2}}\maltese$
and notice the following facts:
1. 1.
$\mathpzc{s}\maltese$ is a path: it is a linear aj-sequence. Since
$\kappa^{-}$ is justified by $\kappa^{+}$, O- and P-visibility are easy to
check.
2. 2.
$\mathpzc{s}\maltese\in V_{\shneg{\mathbf{Q}}\multimap{\mathbf{R}}}$: indeed,
we have $\widetilde{\mathpzc{s}\maltese}\in\kappa^{P}_{\bullet}(\kappa^{P}_{\blacktriangle}\kappa^{Q}_{\blacktriangledown}\mathpzc{t}\shuffle\widetilde{\kappa^{+}\kappa^{-}\maltese})$
where
$\kappa^{P}_{\blacktriangle}\kappa^{Q}_{\blacktriangledown}\mathpzc{t}\in
V_{{\mathbf{\shneg Q}}}$ and $\kappa^{+}\kappa^{-}\maltese\in
V_{{\mathbf{R}}}$.
3. 3.
$\mathpzc{s}\maltese$ is maximal: Let us show that $\mathpzc{s}\maltese$ is
not extensible. First, it is not possible to extend it with an action from
${\mathbf{Q}}^{\perp}$, because this would contradict the maximality of
$\mathpzc{t}$ in $V_{{\mathbf{Q}}}$. Suppose it is extensible with an action
$\kappa^{+\prime}$ from ${\mathbf{R}}$, that is
$\mathpzc{s}\kappa^{+\prime}\in V_{{\mathbf{\shneg Q\multimap R}}}$ and
$\widetilde{\mathpzc{s}\kappa^{+\prime}}\in\kappa^{P}_{\bullet}(\kappa^{P}_{\blacktriangle}\kappa^{Q}_{\blacktriangledown}\mathpzc{t}\shuffle\widetilde{\kappa^{+}\kappa^{-}\kappa^{+\prime}})$
where $\kappa^{+}\kappa^{-}\kappa^{+\prime}\in V_{{\mathbf{R}}}$. The action
$\kappa^{+\prime}$ (that cannot be initial) is necessarily justified by
$\kappa^{-}$. But $\ulcorner{\mathpzc{s}}\urcorner$ necessarily contains the first negative action of $\overline{\mathpzc{t}_{2}}$, which is the only initial action in $\overline{\mathpzc{t}_{2}}$, and this action is justified by $\kappa^{Q}_{\bullet}$ in $\mathpzc{s}$. Therefore $\ulcorner{\mathpzc{s}}\urcorner$
does not contain any action from $\mathpzc{s}$ between $\kappa^{Q}_{\bullet}$
and $\overline{\mathpzc{t}_{2}}$, in particular it does not contain
$\kappa^{-}=\mathrm{just}(\kappa^{+\prime})$. Thus
$\mathpzc{s}\kappa^{+\prime}$ is not P-visible: contradiction. Hence
$\mathpzc{s}\maltese$ maximal.
Finally $\kappa^{P}_{\blacktriangledown}\mathpzc{s}\maltese\in
V_{{\mathbf{P}}}$ is not extensible, and of the required form.
* •
If $\mathcal{C}_{2}={\mathbf{Q}}_{0}\multimap^{+}\mathcal{C}$, then
${\mathbf{Q}}$ is of the form
${\mathbf{Q}}={\mathbf{Q}}_{0}\multimap^{+}{\mathbf{Q}}^{\prime}$, thus
the previous reasoning applies.
* •
If $\mathcal{C}_{2}=\mathcal{C}\otimes^{+}{\mathbf{Q}}_{0}$ or
${\mathbf{Q}}_{0}\otimes^{+}\mathcal{C}$, the induction hypothesis gives us
the existence of a maximal path in
$V_{{\mathbf{\mathcal{C}[Q_{1}\multimap^{+}Q_{2}]\multimap^{+}R}}}$ of the
form
$\kappa^{P}_{\blacktriangledown}\overline{\kappa^{P}_{\bullet}\kappa^{P}_{\blacktriangle}}\mathpzc{s}^{\prime}\maltese$
where
$\kappa^{P}_{\blacktriangle}\overline{\mathpzc{s}^{\prime}}\in(\kappa^{P}_{\blacktriangle}\mathpzc{t}^{\prime})\shuffle\widetilde{\mathpzc{u}}$
with $\mathpzc{t}^{\prime}\in
V_{{\mathbf{\mathcal{C}[Q_{1}\multimap^{+}Q_{2}]}}}$ and $\mathpzc{u}\in
V_{{\mathbf{R}}}$. Let $\mathpzc{t}_{0}\in V_{{\mathbf{Q_{0}}}}$ be
$\maltese$-free and maximal, using Lemma 96. Consider the following sequence:
$\mathpzc{s}\maltese=\overline{\kappa^{P}_{\bullet}\kappa^{P}_{\blacktriangle}\kappa^{Q}_{\bullet}\kappa^{0}_{\blacktriangle}\mathpzc{t}_{0}\kappa^{1}_{\blacktriangle}}\mathpzc{s}^{\prime}\maltese$
where:
* –
$\kappa_{\blacktriangle}^{0}$ justifies the first action of $\mathpzc{t}_{0}$,
* –
$\kappa_{\blacktriangle}^{1}$ justifies the first action of
$\overline{\mathpzc{s}^{\prime}}$ thus the first action of
$\mathpzc{t}^{\prime}$,
* –
$\kappa^{Q}_{\bullet}$ justifies $\kappa_{\blacktriangle}^{0}$ and
$\kappa_{\blacktriangle}^{1}$,
* –
$\kappa_{\blacktriangle}^{P}$ now justifies $\kappa^{Q}_{\bullet}$,
* –
$\kappa_{\bullet}^{P}$ justifies the same actions as before.
Notice that:
1. 1.
$\mathpzc{s}\maltese$ is a path: O- and P-visibility are satisfied.
2. 2.
$\mathpzc{s}\maltese\in V_{\shneg{\mathbf{Q}}\multimap{\mathbf{R}}}$: We have
$\kappa^{Q}_{\bullet}\kappa^{0}_{\blacktriangle}\mathpzc{t}_{0}\kappa^{1}_{\blacktriangle}\overline{\mathpzc{t}^{\prime}}\in\kappa^{Q}_{\bullet}(\kappa^{0}_{\blacktriangle}V_{{\mathbf{{\mathbf{Q}}_{0}}}}\shuffle\kappa^{1}_{\blacktriangle}V_{{\mathbf{\mathcal{C}[Q_{1}\multimap^{+}Q_{2}]}}})=V_{{\mathbf{Q}}}$,
hence $\widetilde{\mathpzc{s}\maltese}\in\kappa^{P}_{\bullet}(V_{{\mathbf{\shneg Q}}}\shuffle\widetilde{V_{{\mathbf{R}}}})$.
3. 3.
$\mathpzc{s}\maltese$ is maximal: Indeed, it can be extended neither by an
action of ${\mathbf{Q}}_{0}^{\perp}$ (which would contradict the maximality of
$\mathpzc{t}_{0}$) nor by an action of
$\mathcal{C}[{\mathbf{Q}}_{1}\multimap^{+}{\mathbf{Q}}_{2}]^{\perp}$ or
${\mathbf{R}}$ (which would contradict the maximality of $\mathpzc{s}^{\prime}$).
Finally $\kappa^{P}_{\blacktriangledown}\mathpzc{s}\maltese\in
V_{{\mathbf{P}}}$ is a path satisfying the constraints.
* •
If $\mathcal{C}_{2}=\mathcal{C}\oplus^{+}{\mathbf{Q}}_{0}$ or
${\mathbf{Q}}_{0}\oplus^{+}\mathcal{C}$, by induction hypothesis, there exists
a path of the form
$\kappa^{P}_{\blacktriangledown}\overline{\kappa^{P}_{\bullet}\kappa^{P}_{\blacktriangle}}\mathpzc{s}^{\prime}\maltese$
maximal in
$V_{{\mathbf{\mathcal{C}[Q_{1}\multimap^{+}Q_{2}]\multimap^{+}R}}}$, where
$\kappa^{P}_{\blacktriangle}\overline{\mathpzc{s}^{\prime}}\in(\kappa^{P}_{\blacktriangle}\mathpzc{t}^{\prime})\shuffle\widetilde{\mathpzc{u}}$
with $\mathpzc{t}^{\prime}\in
V_{{\mathbf{\mathcal{C}[Q_{1}\multimap^{+}Q_{2}]}}}$ and $\mathpzc{u}\in
V_{{\mathbf{R}}}$. Reasoning as in the previous item, we see that for some
$i\in\{1,2\}$ (depending on the form of the context $\mathcal{C}_{2}$) the path
$\kappa^{P}_{\blacktriangledown}\overline{\kappa^{P}_{\bullet}\kappa^{P}_{\blacktriangle}\kappa^{Q}_{\iota_{i}}\kappa_{\blacktriangle}}\mathpzc{s}^{\prime}\maltese$
(where $\kappa_{\blacktriangle}^{P}$ now justifies $\kappa^{Q}_{\iota_{i}}$)
is in $V_{{\mathbf{P}}}$, maximal, and of the required form.
The result for the general case, where
${\mathbf{P}}=\mathcal{C}_{1}[\leavevmode\nobreak\
\mathcal{C}_{2}[{\mathbf{Q}}_{1}\multimap^{+}{\mathbf{Q}}_{2}]\multimap^{+}{\mathbf{R}}\leavevmode\nobreak\
]$, finally comes from Lemma 97. ∎
# Demonstration of a plasmonic nonlinear pseudo-diode
Sergejs Boroviks, Andrei Kiselev, Karim Achouri, and Olivier J.F. Martin
Nanophotonics and Metrology Laboratory, Swiss Federal Institute of Technology
Lausanne (EPFL), Lausanne, Switzerland
(January 30, 2023)
###### Abstract
We demonstrate a nonlinear plasmonic metasurface that exhibits strongly
asymmetric second-harmonic generation: nonlinear scattering is efficient upon
excitation in one direction and it is substantially suppressed when the
excitation direction is reversed, thus enabling a diode-like functionality. A
significant (approximately $10\text{\,}\mathrm{d}\mathrm{B}$) extinction ratio
of SHG upon opposite excitations is measured experimentally, and these findings
are substantiated with full-wave simulations. The combination of two commonly
used metals – aluminium and silver – produces a material composition asymmetry
that results in a bianisotropic response of the system, as confirmed by
performing homogenization analysis and extracting an effective susceptibility
tensor. Finally, we discuss the implications of our results from the more
fundamental perspectives of reciprocity and time-reversal asymmetry.
Plasmonics, Metasurfaces, Bianisotropy, Time-reversal asymmetry, Nonlocality,
Second-Harmonic Generation
High-performance nanoscale devices that allow transmission of light only in
one direction – optical isolators – remain a long-coveted research objective
for optical engineers. This problem is nontrivial due to the fundamental
property of electromagnetic waves: in linear time-invariant (LTI) media and in
the absence of an external time-odd bias, such as a magnetic field, they
propagate reciprocally, i.e. the same way in the forward and backward
directions. This property is linked with the time-reversal symmetry of the
macroscopic Maxwell’s equations and can be shown via the Lorentz reciprocity
theorem, which specifically applies to LTI media Caloz _et al._ (2018);
Asadchy _et al._ (2020); Achouri and Martin (2021). However, despite recent
comprehensive publications on this topic Jalas _et al._ (2013); Sounas and
Alu (2017); Caloz _et al._ (2018); Asadchy _et al._ (2020); Sigwarth and
Miniatura (2022), considerable confusion remains in the community about
the difference between true nonreciprocity and deceptively similar time-
reversal asymmetric response. For example, time-invariant and bias-less lossy
systems may exhibit contrast upon excitation from opposite directions, but
they do not qualify as optical isolators since they possess a symmetric
scattering matrix and thus obey Lorentz reciprocity Fan _et al._ (2012).
Furthermore, in the case of devices based on nonlinear effects, the
distinction between true and pseudo-isolators is even more intricate. In
particular, devices based on Kerr-type nonlinearities Cotrufo _et al._ (2021)
are intrinsically limited by dynamic reciprocity: they can only perform as
pseudo-isolators, since they do not exhibit unidirectional transmission upon
simultaneous excitation from opposite directions Shi _et al._ (2015);
Fernandes and Silveirinha (2018). One aim of this work is to explore
possibilities to overcome this limitation and demonstrate how it can be turned
into an advantage with an appropriate application.
In that context, photonic metasurfaces – artificial planar materials
constituted of subwavelength elements – have been identified as a promising
platform for the realization of miniature optical isolators or asymmetric
devices Shaltout _et al._ (2019). To this end, let us highlight recent
progress in the development of two classes of metasurfaces – nonlinear and
bianisotropic metasurfaces. These two classes are particularly relevant to the
scope of our work, since combining their features enables realization of
unconventional functionalities, such as aforementioned nonlinearly induced
nonreciprocity Menzel _et al._ (2010); Mahmoud _et al._ (2015); Lawrence
_et al._ (2018); Chen _et al._ (2021); Cheng _et al._ (2021), directional
harmonic generation Yang _et al._ (2017a); Xu _et al._ (2020); Nauman _et
al._ (2021) and nonlinear beam shaping Tymchenko _et al._ (2016); Bar-David
and Levy (2019).
Nonlinear metasurfaces Minovich _et al._ (2015); Li _et al._ (2017); Krasnok
_et al._ (2018) have the potential to replace bulky optical crystals and thus
miniaturize nonlinear optical devices. Among other applications, plasmonic
metasurfaces have proven to be interesting for second-harmonic generation
(SHG) Lee _et al._ (2014); Kauranen and Zayats (2012); Butet _et al._
(2015), which is a second-order nonlinear optical process in which an
excitation wave with frequency $\omega$ is converted into a wave with double
frequency $2\omega$ Boyd (2020). However, the second-order nonlinear response
of plasmonic metals is weak due to their centrosymmetric crystal structure,
which is only broken at the surface, giving rise to a non-vanishing surface
normal component of the second-order susceptibility tensor
$\chi_{\perp\perp\perp}^{(2)}$. Yet, the overall SHG efficiency remains small
due to the reduced interaction volume: essentially, the nonlinear process
occurs within the few atomic layers at the metal surface, since the bulk metal
is opaque for visible and infrared light and its bulk second-order response is
vanishing. Nevertheless, this limitation can be partially overcome by virtue
of the field enhancement associated with surface plasmon
resonances at metal surfaces. Thus, various SHG enhancement schemes were
proposed for plasmonic metasurfaces, based on multipolar resonances
Chandrasekar _et al._ (2015); Kruk _et al._ (2015); Smirnova and Kivshar
(2016); Bernasconi _et al._ (2016); Butet _et al._ (2017); Yang _et al._
(2017b); Kiselev _et al._ (2019), plasmonic lattice resonances Gupta _et
al._ (2021); Abir _et al._ (2022) and even light-induced centrosymmetry
breaking Li _et al._ (2021).
On the other hand, bianisotropic metasurfaces allow engineering the
polarization response to realize highly efficient refraction devices through
the combination of electric and magnetic effects Asadchy _et al._ (2018);
Achouri and Caloz (2021). The bianisotropic response, which emerges in
structures with broken spatial symmetries Achouri _et al._ (2022a), implies
that the material acquires magnetic polarization upon excitation with an
electric field, and vice versa, electric polarization is produced by a
magnetic field. Such a magneto-electric coupling gives rise to the spatial
dispersion (i.e. wavevector-dependent response) that enables an excitation
angle-dependent operation Overvig and Alù (2022). For example, in lossy
systems, it may lead to asymmetric reflection and absorption, which will be
discussed further in relation to our work.
Figure 1: Comparison of conventional and asymmetric SHG: (a) symmetric SHG
from a nonlinear (NL) crystal; (b) asymmetric SHG from a nonlinear
bianisotropic metasurface.
In this work, we demonstrate theoretically and experimentally a plasmonic
metasurface that exhibits asymmetric SHG. The operation of the device is
conceptually depicted in Fig. 1: in contrast to a conventional nonlinear
crystal, the second harmonic (SH) is efficiently generated only for one
excitation direction, which essentially enables a nonlinear optical pseudo-
diode functionality (to be distinguished from optical isolators and pseudo-
isolators). Such an asymmetric response requires a structural asymmetry of the
system, and previously proposed theoretical designs with similar
functionalities have relied on a geometric asymmetry, which might be difficult
to realize experimentally Poutrina and Urbas (2016); Mobini _et al._ (2021);
Jin and Argyropoulos (2020); Kim (2021); Liu _et al._ (2021). Here, we take a
different route and implement a structural asymmetry through the utilization
of two common plasmonic materials – silver (Ag) and aluminium (Al) – in a
metasurface and demonstrate substantial direction-dependent SHG (up to approx.
$16.9\text{\,}\mathrm{d}\mathrm{B}$ in theory and approx.
$10\text{\,}\mathrm{d}\mathrm{B}$ in experiment). A major advantage of this
two-dimensional design is that such a material asymmetry is relatively easy to
implement using standard nanofabrication techniques, e.g. single-exposure
electron-beam lithography (EBL) Abasahl _et al._ (2021). Furthermore, the
combination of plasmonic metals is known to enhance nonlinear processes Wang
_et al._ (2019, 2021). To the best of our knowledge, this is the first
experimental demonstration of a plasmonic metasurface for asymmetric SHG,
although we note that in a recent experimental demonstration Kruk et al.
utilized a combination of dielectric nonlinear materials for third-harmonic
generation Kruk _et al._ (2022). Additionally, we perform homogenization
analysis of the metasurface to extract effective susceptibilities and reveal
the bianisotropic properties of our metasurface. Finally, we discuss the fundamental
implications of our results in the context of nonreciprocity.
The building block of the metasurface – the meta-atom – is schematically
depicted in Fig. 2a. It is composed of two T-shaped nanostructures made of Al
and Ag that are stacked one on top of the other and separated by a thin
silicon dioxide (SiO2) spacer. These nanostructures are embedded in SiO2 and
arranged in a square lattice with the period of
$\Lambda=$$250\text{\,}\mathrm{nm}$. Such a periodicity is sufficiently small
to avoid diffraction in both linear and nonlinear regimes, as the metasurface
is designed for the excitation with the vacuum wavelength of
$\lambda_{0}=$$800\text{\,}\mathrm{nm}$ (the effective wavelength in SiO2 is
$\sim$$537\text{\,}\mathrm{nm}$) and SHG at
$\lambda_{\textrm{SH}}=$$400\text{\,}\mathrm{nm}$
($\sim$$268\text{\,}\mathrm{nm}$ in SiO2).
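The quoted effective wavelengths follow directly from the background
permittivity $\varepsilon_{\ce{SiO2}}=2.22$ used in the simulations below, via
$\lambda_{\mathrm{eff}}=\lambda_{0}/\sqrt{\varepsilon_{\ce{SiO2}}}$:
$\frac{800\text{\,}\mathrm{nm}}{\sqrt{2.22}}\approx 537\text{\,}\mathrm{nm}\qquad\text{and}\qquad\frac{400\text{\,}\mathrm{nm}}{\sqrt{2.22}}\approx 268\text{\,}\mathrm{nm}.$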
As shown in Fig. 2b, we consider two different excitation conditions that are
indicated with red thick arrows: forward (in the direction along the
$+z$-axis) and backward (along the $-z$-axis) propagating plane waves that are
$x$-polarized. In the linear regime, each of the two waves gives rise to
transmitted (red solid arrows) and reflected (red dashed arrows) waves, which
are labeled as forward-excited reflection (FR) and transmission (FT), or
backward-excited reflection (BR) and transmission (BT). Additionally, both
excitations produce signals at the SH frequency (shown with blue arrows). For
the SH signals, we use the same naming convention as the waves produced by
linear scattering, Fig. 2b. For the reflected and transmitted waves at the
excitation frequency, we measure the co-polarized $x$-component of the
electric field, whereas for the SHG waves, the cross-polarized $y$-component
is measured, as it is found to be dominant (see Fig. S3 in the Supporting
Information).
T-shaped meta-atoms provide almost independent control of the spectral
positions for the resonances both at the excitation and SH frequencies by
varying the lateral dimensions $L_{x}$ and $L_{y}$ Czaplicki _et al._ (2015).
As can be seen from Fig. S1 in the Supporting Information, for a fixed
wavelength, the transmission in the linear regime is tuned by varying $L_{x}$.
In the nonlinear regime, the transmission and reflection are controlled by
both $L_{x}$ and $L_{y}$. Importantly, for forward excitation, the maximum in
SHG transmission coincides with the minimum in linear transmission (compare
panels a and b in Fig. S1 in the Supporting Information). The other geometric
parameters $L_{\textrm{s}}$, $D$, $t_{\ce{Ag}}$ and $t_{\ce{Al}}$ do not have
a strong influence on the resonance wavelength of the fundamental mode,
however they affect the scattering cross-section of the meta-atoms via the
retardation effects Kottmann and Martin (2001), which, in turn determines the
overall transmission and SHG intensity (see Fig. S2 in the Supporting
Information). The sidewalls of the meta-atom are tilted by
$10\text{\,}\mathrm{\SIUnitSymbolDegree}$ and the edges and corners are
rounded with a $5\text{\,}\mathrm{nm}$ radius to mimic the experimentally
fabricated structures, as discussed below.
We select $L_{x}=$$135\text{\,}\mathrm{nm}$,
$L_{y}=$$195\text{\,}\mathrm{nm}$, $L_{\textrm{s}}=$$25\text{\,}\mathrm{nm}$
and $D=t_{\ce{Ag}}=t_{\ce{Al}}=$$50\text{\,}\mathrm{nm}$, since these
parameters maximize SHG upon forward excitation at the design wavelength. Such
meta-atom dimensions result in minimal transmission in the linear regime and
sufficiently high extinction ratio of SHG upon forward and backward excitation
(see the parametric sweeps in Fig. S1 in the Supporting Information).
Furthermore, in the $L_{x}$ and $L_{y}$ parameter space, the forward-
excitation SHG peak is broad, which implies that the metasurface efficiency is
weakly sensitive to deviations from the nominal dimensions, thus easing
nanofabrication tolerances.
The simulations are performed in two steps using a custom-developed numerical
electromagnetic solver based on the surface integral equation Gallinet _et
al._ (2010); Butet _et al._ (2013). First, the linear fields are computed
with a plane-wave excitation and periodic boundary conditions. For the SHG
simulations, the nonlinear surface polarization
$P_{\perp}^{(2\omega)}=\chi^{(2)}_{\perp\perp\perp}E_{\perp}^{\omega}E_{\perp}^{\omega}$
is used as a source, where the normal components of the surface fields
$E_{\perp}^{\omega}$ are obtained from the linear simulations.
The simulated reflectance and transmittance in the linear and SHG regimes are
shown in Fig. 2c and d. In the simulations, we use interpolated values
$\varepsilon_{\ce{Al}}$ and $\varepsilon_{\ce{Ag}}$ of the experimental
permittivity data from McPeak et al. McPeak _et al._ (2015), and for the
permittivity of the background medium we use $\varepsilon_{\ce{SiO2}}=2.22$.
Among the noble metals, Ag is known to have the lowest losses at optical
frequencies, whereas Al has recently attracted attention as a low cost
alternative plasmonic material Castro-Lopez _et al._ (2011); Knight _et al._
(2014); Gérard and Gray (2014); Thyagarajan _et al._ (2016). Apart from its
low cost, Al is known to have the highest second-order nonlinear
susceptibility among the plasmonic materials, in particular its surface normal
component $\chi^{(2)}_{\perp\perp\perp}$ Krause _et al._ (2004); it also
exhibits an interband transition-related absorption peak at
$800\text{\,}\mathrm{nm}$ (see Fig. S4 in the Supporting Information).
Figure 2: Design and simulated performance of the nonlinear bianisotropic
metasurface. (a) Schematic drawing of the metasurface unit cell in isometric,
top and side views with the geometric and material parameters indicated. (b)
Schematics of the system and the considered forward-excitation transmission
(FT) and reflection (FR), as well as backward-excitation transmission (BT)
and reflection (BR); thick solid red arrows indicate the excitation waves;
thin solid (dashed) arrows indicate the transmitted (reflected) waves at the
excitation frequency in red and at the SH frequency in blue. Simulated
metasurface reflectance and transmittance (c) in the linear regime and (d) at
the SH frequency. Relevant components of the extracted (e) linear and (f)
nonlinear effective susceptibility tensors.
As shown in Fig. 2c, in the linear regime, the transmission $T$ for both
forward and backward excitations is exactly the same, as imposed by
reciprocity. However, the reflection $R$ and absorption $A$, which are related
to transmission as $A+R=1-T$, depend on the excitation direction, as they are
not restricted by reciprocity and depend on the spatial asymmetry of the
system. The asymmetric reflection and absorption of the system can be analyzed
by considering an isolated meta-atom. As can be seen in Fig. S5c and d in the
Supporting Information, forward and backward excitations give rise to two
distinct electric field distributions. In particular, the electric field
concentration in the Al part of the structure is strongly dependent on the
excitation direction. Although the response is primarily dipolar for both
excitations (see Fig. S6a and b in the Supporting Information), this results
in asymmetric linear scattering and absorption cross-sections, which is a
characteristic of bianisotropic systems Cheng _et al._ (2021). In fact, it is
the presence of losses that enables asymmetric scattering when the structure
is illuminated from opposite directions, whereas the extinction cross-section
remains exactly the same, as imposed by reciprocity Sounas and Alù (2014).
In turn, the SHG response, plotted in Fig. 2d, has an even stronger
dependence on the excitation direction: both the nonlinear FT and FR are more
than two orders of magnitude stronger than BT and BR at
$400\text{\,}\mathrm{nm}$. A multipolar analysis of an isolated meta-atom (see
Fig. S6c and d in the Supporting Information) shows that the electric dipolar
and quadrupolar modes are excited more efficiently at $400\text{\,}\mathrm{nm}$
upon forward excitation. This is due to the aforementioned different electric-
field distributions at the surface of the T-shaped particles, that become the
sources for the SHG.
To further elucidate the significance of bianisotropy in such an asymmetric
response, we extracted the effective susceptibilities from the simulated
electromagnetic fields following the previously documented procedure of
metasurface homogenization analysis Achouri _et al._ (2017, 2018, 2022b).
Briefly, the expressions for nonlinear susceptibilities are derived from the
generalized sheet transition conditions and are calculated using the simulated
reflected and transmitted fields upon different excitation conditions at
$\omega$ and $2\omega$ frequencies.
In Fig. 2e and f we plot the extracted effective susceptibility tensor
elements that are relevant to the considered excitation conditions. For both
linear and nonlinear susceptibilities, the magneto-electric coupling
(corresponding to the terms with mixed “e” and “m” subscripts in Fig. 2e and
f) is non-negligible. The asymmetric response becomes apparent by noting that
the induced linear and nonlinear polarizations are given by
$\displaystyle\mathbf{P}^{\omega}=\overline{\overline{\mathbf{\chi}}}_{\textrm{ee}}^{\>\omega}\cdot\mathbf{E}^{\omega}+\overline{\overline{\mathbf{\chi}}}_{\textrm{em}}^{\>\omega}\cdot\mathbf{H}^{\omega},$
(1a)
$\displaystyle\mathbf{P}^{2\omega}=\overline{\overline{\mathbf{\chi}}}_{\textrm{ee}}^{\>2\omega}\cdot\mathbf{E}^{2\omega}+\overline{\overline{\mathbf{\chi}}}_{\textrm{em}}^{\>2\omega}\cdot\mathbf{H}^{2\omega}+\overline{\overline{\mathbf{\chi}}}_{\textrm{eee}}^{\>\omega}:\mathbf{E}^{\omega}\mathbf{E}^{\omega}+\overline{\overline{\mathbf{\chi}}}_{\textrm{eem}}^{\>\omega}:\mathbf{E}^{\omega}\mathbf{H}^{\omega}+\overline{\overline{\mathbf{\chi}}}_{\textrm{emm}}^{\>\omega}:\mathbf{H}^{\omega}\mathbf{H}^{\omega}.$
(1b)
In the linear regime, the non-negligible magneto-electric coupling term
$\chi_{\textrm{me}}$ results in an asymmetric absorption and reflection. As
for the nonlinear effective susceptibility tensors, the dominant components
are $\chi_{\textrm{mem}}^{yyx}$ and $\chi_{\text{eem}}^{xyx}$, which relate
magnetic/electric excitations with electric/magnetic responses along
orthogonal directions and result in strongly asymmetric SHG.
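As a minimal numerical sketch of how the magneto-electric terms in Eq. (1a)
produce a direction-dependent response (all tensor values below are
placeholders, not the extracted ones):

```python
import numpy as np

# Sketch of the linear constitutive relation (1a): P = chi_ee . E + chi_em . H.
# Placeholder (hypothetical) tensor entries mimic an xx electric response plus
# a magneto-electric xy coupling of the kind extracted in Fig. 2e.
chi_ee = np.zeros((3, 3), complex); chi_ee[0, 0] = 2.0 + 0.3j
chi_em = np.zeros((3, 3), complex); chi_em[0, 1] = 0.4 - 0.1j

E = np.array([1.0, 0.0, 0.0], complex)   # x-polarized plane wave
H = np.array([0.0, 1.0, 0.0], complex)   # y-directed H field (arbitrary units)

for direction in (+1, -1):               # forward / backward excitation
    # reversing the propagation direction flips H relative to E,
    # so the chi_em term changes sign and P_x differs for the two cases
    P = chi_ee @ E + chi_em @ (direction * H)
    print(direction, P[0])
```

The sign flip of the magneto-electric contribution is what turns the material
asymmetry into direction-dependent absorption and reflection.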
To verify experimentally this asymmetric nonlinear response, we fabricated and
characterized a metasurface device. Instead of the widespread lift-off
process, we employ the ion beam etching (IBE) technique which enables the
fabrication of stratified nanostructures, in particular metal-dielectric
composites, with sharper features Ray _et al._ (2020); Abasahl _et al._
(2021). The schematic flowchart of the fabrication process is shown in Fig.
3a. We use a $150\text{\,}\mathrm{\SIUnitSymbolMicro m}$-thick D 263 glass
wafer (Schott) which is coated with $50\text{\,}\mathrm{nm}$-thick Al and
$25\text{\,}\mathrm{nm}$ SiO2 films using RF sputtering (Pfeiffer SPIDER 600).
Next, we deposit a $50\text{\,}\mathrm{nm}$ thick Ag layer using an e-beam
assisted evaporator (Alliance-Concept EVA 760). The T-shaped pattern arrays
are exposed in the hydrogen silsesquioxane (HSQ, XR-1541-006 from DuPont),
which is a negative tone e-beam resist, using electron beam lithography (Raith
EBPG5000+). The formation of the exposed patterns in the thin films is
performed using a low-power argon IBE (Veeco Nexus IBE350, operated at a
$300\text{\,}\mathrm{V}$ acceleration voltage). An important point for this
last step is the pulsed IBE operation: 10 s of etching followed by 30 s of
cooling to avoid damaging the sample by substrate overheating. The typical
overall IBE process time is $160\text{\,}\mathrm{s}$, and the etching depth is
controlled in-situ using a mass-spectrometer, which allows real-time
monitoring of the etched material composition: the etching process is stopped
as soon as the Al flux drops to a minimum. The fabrication results are shown
in the scanning electron microscope (SEM) images in Fig. 3b-d. The morphology
of the fabricated structure can be inspected in Fig. 3c: intrinsically, the
IBE process results in tilted sidewalls (approx.
$10\text{\,}\mathrm{\SIUnitSymbolDegree}$) and rounded corners and edges.
Although such features are typically undesired, they are not expected to
degrade the performance of the metasurface, as these were taken into account
in the simulations. In turn, the layered material composition can be well
identified in the image acquired with the back-scattered electron (BSE)
detector in Fig. 3d. In the last fabrication step, we cover the metallic
nanostructures with a thick SiO2 layer (approx. $300\text{\,}\mathrm{nm}$)
which serves two purposes: it acts as a protective layer preventing
degradation of the Al and Ag nanostructures, and simplifies the physical
conditions by having identical permittivities above and below the metasurface.
Figure 3: Fabrication of the bimetallic metasurface. (a) Flowchart of the
fabrication: 1. Initial substrate; 2. Al, SiO2, Ag and HSQ thin films
deposition; 3. E-beam exposure; 4. IBE; 5. Covering with a thick SiO2 film.
SEM images of the fabricated structure acquired using different detectors and
tilt angles: (b) top view, SE (scale bar:
$1\text{\,}\mathrm{\SIUnitSymbolMicro m}$); (c)
$45\text{\,}\mathrm{\SIUnitSymbolDegree}$-tilted view, SE (scale bar:
$200\text{\,}\mathrm{nm}$); (d)
$45\text{\,}\mathrm{\SIUnitSymbolDegree}$-tilted view, BSE (scale bar:
$200\text{\,}\mathrm{nm}$).
The experimental setup and the results for the optical characterization of the
fabricated sample are shown in Fig. 4. As an excitation light source, we use a
mode-locked Ti:Saph laser that outputs approx. $120\text{\,}\mathrm{fs}$
pulses with a central wavelength of $800\text{\,}\mathrm{nm}$. The excitation
light is weakly focused onto the metasurface with a low magnification
objective ($\text{NA}=0.1$), which results in a focal spot with a
$10\text{\,}\mathrm{\SIUnitSymbolMicro m}$ FWHM mimicking the plane wave
excitation used in the simulations. The spectrum of the nonlinearly generated
light is shown in Fig. 4c. Apart from the characteristic SHG peak at
$400\text{\,}\mathrm{n}\mathrm{m}$, it has a tail at longer wavelengths, which
is attributed to nonlinear photo-luminescence (NPL).
As an interesting side-effect, we note that the NPL signal is substantially
larger for BT than for FT. This fact can be explained by the peculiarity of
the two-photon absorption mechanism in metals that induces the NPL. As opposed
to the coherent nature of two-photon absorption in molecules or dielectrics,
in metals it can be regarded as a cascaded process. Specifically, two photons
are absorbed sequentially rather than simultaneously Beversluis _et al._
(2003); Mühlschlegel _et al._ (2005); Biagioni _et al._ (2009). Absorption
of the first photon gives rise to an intraband transition in the conduction
band and creates a vacancy below the Fermi level. Thus, the second photon
results in an interband transition that fills the vacancy in the conduction
band and creates one in the valence band. Both of these photon absorption
steps are linear, but result in an effective nonlinearity. Thus, higher linear
absorption upon backward excitation (see Fig. S5 and discussion above),
results in a higher probability of two-photon absorption and subsequent NPL,
which is consistent with our observations.
Figure 4: Optical characterization of the metasurface: (a) measurement setup.
(b) Excitation spectrum. (c) Nonlinear spectra.
Such asymmetric behaviour is sometimes referred to as “nonreciprocal SHG”,
both in the metasurfaces Poutrina and Urbas (2016) and solid state physics
Toyoda _et al._ (2021); Mund _et al._ (2021) communities. We share the view
that such a nomenclature is improper in the case of SHG, since the concept of
nonreciprocity is not well-defined for nonlinear optics Trzeciecki and Hübner
(2000); Sounas and Alù (2017); Achouri _et al._ (2018). For any $N$-port
system, the Lorentz reciprocity implies the symmetry of the scattering matrix
$\overline{\overline{\mathbf{S}}}^{T}=\overline{\overline{\mathbf{S}}}$, where
T denotes the transpose operator. In the case of a two-port system like that
considered in this work in the linear regime, the scattering matrix is given
by
$\overline{\overline{\mathbf{S}}}=\begin{bmatrix}S_{11}&S_{12}\\\
S_{21}&S_{22}\end{bmatrix},$ (2)
and reciprocity requires that the transmission coefficients $S_{12}$ and
$S_{21}$ are equal. However, it does not impose any limitations on the
reflection coefficients $S_{11}$ and $S_{22}$. This is true for our system in
the linear regime, since the transmissions for forward and backward
excitations are equal, while the reflections are asymmetric.
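A minimal numerical illustration of this constraint, with hypothetical
scattering coefficients:

```python
import numpy as np

# A lossy, reciprocal two-port in the sense of Eq. (2): S12 = S21 is enforced,
# while the two reflection coefficients are free to differ. Values are
# hypothetical, chosen only to keep the network passive.
S = np.array([[0.10 + 0.05j, 0.60 - 0.20j],
              [0.60 - 0.20j, 0.45 + 0.10j]])

print(np.allclose(S, S.T))                # True: Lorentz reciprocity holds
print(abs(S[0, 0])**2, abs(S[1, 1])**2)   # asymmetric reflectance (0.0125 vs 0.2125)
print(np.sum(abs(S)**2, axis=0))          # column power sums < 1: the network is lossy
```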
However, in the nonlinear regime, our metasurface cannot be regarded as a two-
port system anymore, since the SH emission represents a distinct
electromagnetic mode. Therefore, this system must be considered as at least a
four-port system (assuming that higher-order harmonic generation is negligible),
represented with the following scattering matrix:
$\overline{\overline{\mathbf{S}}}=\begin{bmatrix}\overline{\overline{\mathbf{S}}}^{\omega{\scriptscriptstyle\rightarrow}\omega}&\overline{\overline{\mathbf{S}}}^{2\omega{\scriptscriptstyle\rightarrow}\omega}\\ \overline{\overline{\mathbf{S}}}^{\omega{\scriptscriptstyle\rightarrow}2\omega}&\overline{\overline{\mathbf{S}}}^{2\omega{\scriptscriptstyle\rightarrow}2\omega}\end{bmatrix}=\begin{bmatrix}S_{11}^{\omega{\scriptscriptstyle\rightarrow}\omega}&S_{12}^{\omega{\scriptscriptstyle\rightarrow}\omega}&S_{11}^{2\omega{\scriptscriptstyle\rightarrow}\omega}&S_{12}^{2\omega{\scriptscriptstyle\rightarrow}\omega}\\ S_{21}^{\omega{\scriptscriptstyle\rightarrow}\omega}&S_{22}^{\omega{\scriptscriptstyle\rightarrow}\omega}&S_{21}^{2\omega{\scriptscriptstyle\rightarrow}\omega}&S_{22}^{2\omega{\scriptscriptstyle\rightarrow}\omega}\\ S_{11}^{\omega{\scriptscriptstyle\rightarrow}2\omega}&S_{12}^{\omega{\scriptscriptstyle\rightarrow}2\omega}&S_{11}^{2\omega{\scriptscriptstyle\rightarrow}2\omega}&S_{12}^{2\omega{\scriptscriptstyle\rightarrow}2\omega}\\ S_{21}^{\omega{\scriptscriptstyle\rightarrow}2\omega}&S_{22}^{\omega{\scriptscriptstyle\rightarrow}2\omega}&S_{21}^{2\omega{\scriptscriptstyle\rightarrow}2\omega}&S_{22}^{2\omega{\scriptscriptstyle\rightarrow}2\omega}\end{bmatrix},$
(3)
which describes both linear transmission/reflection at frequencies $\omega$
and $2\omega$, as well as nonlinear processes
$\omega{\scriptscriptstyle\rightarrow}2\omega$ and
$2\omega{\scriptscriptstyle\rightarrow}\omega$.
In our experiment, we do not directly probe
$S_{21}^{\omega{\scriptscriptstyle\rightarrow}2\omega}\stackrel{{\scriptstyle?}}{{=}}S_{12}^{2\omega{\scriptscriptstyle\rightarrow}\omega}$,
where $S_{12}^{2\omega{\scriptscriptstyle\rightarrow}\omega}$ corresponds to
excitation at the SH frequency and generation of a wave at frequency $\omega$.
This process is known as parametric down-conversion and it has an extremely
low efficiency in comparison with SHG Boyd (2020). Probing this equality, as
well as the equality of the other parameter pairs that are exchanged by the
$\overline{\overline{\mathbf{S}}}^{T}$ operation, namely
$S_{21}^{\omega{\scriptscriptstyle\rightarrow}\omega}\stackrel{{\scriptstyle?}}{{=}}S_{12}^{\omega{\scriptscriptstyle\rightarrow}\omega}$,
$S_{11}^{\omega{\scriptscriptstyle\rightarrow}2\omega}\stackrel{{\scriptstyle?}}{{=}}S_{11}^{2\omega{\scriptscriptstyle\rightarrow}\omega}$,
$S_{12}^{\omega{\scriptscriptstyle\rightarrow}2\omega}\stackrel{{\scriptstyle?}}{{=}}S_{21}^{2\omega{\scriptscriptstyle\rightarrow}\omega}$,
$S_{22}^{\omega{\scriptscriptstyle\rightarrow}2\omega}\stackrel{{\scriptstyle?}}{{=}}S_{22}^{2\omega{\scriptscriptstyle\rightarrow}\omega}$
and
$S_{21}^{2\omega{\scriptscriptstyle\rightarrow}2\omega}\stackrel{{\scriptstyle?}}{{=}}S_{12}^{2\omega{\scriptscriptstyle\rightarrow}2\omega}$,
would constitute a true reciprocity test in a four-port system. Instead, within our
experiment we show that
$S_{21}^{\omega{\scriptscriptstyle\rightarrow}2\omega}\neq
S_{12}^{\omega{\scriptscriptstyle\rightarrow}2\omega}$, which corresponds to
an asymmetric nonlinear scattering process that is reciprocal. Yet, a rigorous
probing of reciprocity in a nonlinear system would require sophisticated
experiments that involve simultaneous excitation with the two waves at
frequencies $\omega$ and $2\omega$ and precise control over their amplitude
and phase Trzeciecki and Hübner (2000). Nevertheless, we assert that our
device essentially functions as a nonlinear optical pseudo-diode, allowing the
transmission of SH signal only in one direction, which is a desired
functionality for various signal processing applications Willner _et al._
(2014).
In summary, we have demonstrated that strongly asymmetric SHG can be achieved
in a plasmonic metasurface that is comprised of two common plasmonic metals –
aluminium and silver. The structural asymmetry created by the material
contrast results in a strong dependence on the excitation direction, with an
extinction ratio of approx. $16.9\text{\,}\mathrm{d}\mathrm{B}$ in theory and
approx. $10\text{\,}\mathrm{d}\mathrm{B}$ in the experiment. We anticipate
that our findings can pave the way for further developments in the field of
nonlinear bianisotropic and nonreciprocal devices, as well as inspire novel
plasmonic devices with unrivaled functionalities.
## acknowledgement
The authors thank Christian Santschi and Zdenek Benes for their valuable
advice on nanofabrication.
Funding from the Swiss National Science Foundation (grant PZ00P2_193221) is
gratefully acknowledged.
## References
* Caloz _et al._ (2018) C. Caloz, A. Alù, S. Tretyakov, D. Sounas, K. Achouri, and Z.-L. Deck-Léger, Phys. Rev. Applied 10, 047001 (2018).
* Asadchy _et al._ (2020) V. S. Asadchy, M. S. Mirmoosa, A. Díaz-Rubio, S. Fan, and S. A. Tretyakov, Proceedings of the IEEE 108, 1684 (2020).
* Achouri and Martin (2021) K. Achouri and O. J. F. Martin, Phys. Rev. B 104, 165426 (2021).
* Jalas _et al._ (2013) D. Jalas, A. Petrov, M. Eich, W. Freude, S. Fan, Z. Yu, R. Baets, M. Popovic, A. Melloni, J. D. Joannopoulos, M. Vanwolleghem, C. R. Doerr, and H. Renner, Nature Photonics 7, 579 (2013).
* Sounas and Alu (2017) D. L. Sounas and A. Alu, Nature Photonics 11, 774 (2017).
* Sigwarth and Miniatura (2022) O. Sigwarth and C. Miniatura, AAPPS Bulletin 32 (2022).
* Fan _et al._ (2012) S. Fan, R. Baets, A. Petrov, Z. Yu, J. D. Joannopoulos, W. Freude, A. Melloni, M. Popović, M. Vanwolleghem, D. Jalas, M. Eich, M. Krause, H. Renner, E. Brinkmeyer, and C. R. Doerr, Science 335, 38 (2012).
* Cotrufo _et al._ (2021) M. Cotrufo, S. A. Mann, H. Moussa, and A. Alù, IEEE Transactions on Microwave Theory and Techniques 69, 3569 (2021).
* Shi _et al._ (2015) Y. Shi, Z. Yu, and S. Fan, Nature Photonics 9, 388 (2015).
* Fernandes and Silveirinha (2018) D. E. Fernandes and M. G. Silveirinha, IEEE Antennas and Wireless Propagation Letters 17, 1953 (2018).
* Shaltout _et al._ (2019) A. M. Shaltout, V. M. Shalaev, and M. L. Brongersma, Science 364, eaat3100 (2019).
* Menzel _et al._ (2010) C. Menzel, C. Helgert, C. Rockstuhl, E.-B. Kley, A. Tünnermann, T. Pertsch, and F. Lederer, Phys. Rev. Lett. 104, 253902 (2010).
* Mahmoud _et al._ (2015) A. M. Mahmoud, A. R. Davoyan, and N. Engheta, Nature Communications 6, 1 (2015).
* Lawrence _et al._ (2018) M. Lawrence, D. R. Barton, and J. A. Dionne, Nano Letters 18, 1104 (2018), pMID: 29369641.
* Chen _et al._ (2021) X. Chen, J. Zhang, C. Wen, K. Liu, Z. Zhu, S. Qin, and X. Yuan, Carbon 173, 126 (2021).
* Cheng _et al._ (2021) L. Cheng, R. Alaee, A. Safari, M. Karimi, L. Zhang, and R. W. Boyd, ACS Photonics 8, 585 (2021).
* Yang _et al._ (2017a) K.-Y. Yang, R. Verre, J. Butet, C. Yan, T. J. Antosiewicz, M. Käll, and O. J. F. Martin, Nano Letters 17, 5258 (2017a), pMID: 28829601\.
* Xu _et al._ (2020) L. Xu, G. Saerens, M. Timofeeva, D. A. Smirnova, I. Volkovskaya, M. Lysevych, R. Camacho-Morales, M. Cai, K. Zangeneh Kamali, L. Huang, F. Karouta, H. H. Tan, C. Jagadish, A. E. Miroshnichenko, R. Grange, D. N. Neshev, and M. Rahmani, ACS Nano 14, 1379 (2020), pMID: 31877017.
* Nauman _et al._ (2021) M. Nauman, J. Yan, D. de Ceglia, M. Rahmani, K. Zangeneh Kamali, C. De Angelis, A. E. Miroshnichenko, Y. Lu, and D. N. Neshev, Nature Communications 12, 1 (2021).
* Tymchenko _et al._ (2016) M. Tymchenko, J. S. Gomez-Diaz, J. Lee, N. Nookala, M. A. Belkin, and A. Alù, Phys. Rev. B 94, 214303 (2016).
* Bar-David and Levy (2019) J. Bar-David and U. Levy, Nano Letters 19, 1044 (2019).
* Minovich _et al._ (2015) A. E. Minovich, A. E. Miroshnichenko, A. Y. Bykov, T. V. Murzina, D. N. Neshev, and Y. S. Kivshar, Laser & Photonics Reviews 9, 195 (2015).
* Li _et al._ (2017) G. Li, S. Zhang, and T. Zentgraf, Nature Reviews Materials 2, 1 (2017).
* Krasnok _et al._ (2018) A. Krasnok, M. Tymchenko, and A. Alù, Materials Today 21, 8 (2018).
* Lee _et al._ (2014) J. Lee, M. Tymchenko, C. Argyropoulos, P.-Y. Chen, F. Lu, F. Demmerle, G. Boehm, M.-C. Amann, A. Alu, and M. A. Belkin, Nature 511, 65 (2014).
* Kauranen and Zayats (2012) M. Kauranen and A. V. Zayats, Nature Photonics 6, 737 (2012).
* Butet _et al._ (2015) J. Butet, P.-F. Brevet, and O. J. F. Martin, ACS Nano 9, 10545 (2015), pMID: 26474346\.
* Boyd (2020) R. W. Boyd, _Nonlinear Optics_, fourth edition ed. (Academic Press, 2020).
* Chandrasekar _et al._ (2015) R. Chandrasekar, N. K. Emani, A. Lagutchev, V. M. Shalaev, C. Ciracì, D. R. Smith, and A. V. Kildishev, Opt. Mater. Express 5, 2682 (2015).
* Kruk _et al._ (2015) S. Kruk, M. Weismann, A. Y. Bykov, E. A. Mamonov, I. A. Kolmychek, T. Murzina, N. C. Panoiu, D. N. Neshev, and Y. S. Kivshar, ACS Photonics 2, 1007 (2015).
* Smirnova and Kivshar (2016) D. Smirnova and Y. S. Kivshar, Optica 3, 1241 (2016).
* Bernasconi _et al._ (2016) G. D. Bernasconi, J. Butet, and O. J. F. Martin, J. Opt. Soc. Am. B 33, 768 (2016).
* Butet _et al._ (2017) J. Butet, G. D. Bernasconi, M. Petit, A. Bouhelier, C. Yan, O. J. F. Martin, B. Cluzel, and O. Demichel, ACS Photonics 4, 2923 (2017).
* Yang _et al._ (2017b) K.-Y. Yang, J. Butet, C. Yan, G. D. Bernasconi, and O. J. F. Martin, ACS Photonics 4, 1522 (2017b).
* Kiselev _et al._ (2019) A. Kiselev, G. D. Bernasconi, and O. J. F. Martin, Opt. Express 27, 38708 (2019).
* Gupta _et al._ (2021) T. D. Gupta, L. Martin-Monier, J. Butet, K.-Y. Yang, A. Leber, C. Dong, T. Nguyen-Dang, W. Yan, O. J. Martin, and F. Sorin, Nanophotonics 10, 3465 (2021).
* Abir _et al._ (2022) T. Abir, M. Tal, and T. Ellenbogen, Nano Letters 22, 2712 (2022), pMID: 35369689\.
* Li _et al._ (2021) G.-C. Li, D. Lei, M. Qiu, W. Jin, S. Lan, and A. V. Zayats, Nature Communications 12, 4326 (2021).
* Asadchy _et al._ (2018) V. S. Asadchy, A. Díaz-Rubio, and S. A. Tretyakov, Nanophotonics 7, 1069 (2018).
* Achouri and Caloz (2021) K. Achouri and C. Caloz, _Electromagnetic Metasurfaces: Theory and Applications_ (John Wiley & Sons, 2021).
* Achouri _et al._ (2022a) K. Achouri, V. Tiukuvaara, and O. J. Martin, arXiv preprint arXiv:2208.12504 (2022a).
* Overvig and Alù (2022) A. Overvig and A. Alù, Laser & Photonics Reviews 16, 2100633 (2022).
* Poutrina and Urbas (2016) E. Poutrina and A. Urbas, Scientific reports 6, 1 (2016).
* Mobini _et al._ (2021) E. Mobini, R. Alaee, R. W. Boyd, and K. Dolgaleva, ACS Photonics 8, 3234 (2021).
* Jin and Argyropoulos (2020) B. Jin and C. Argyropoulos, Phys. Rev. Applied 13, 054056 (2020).
* Kim (2021) K.-H. Kim, Plasmonics 16, 77 (2021).
* Liu _et al._ (2021) W. Liu, L. Huang, J. Ding, C. Xie, Y. Luo, and W. Hong, Nanomaterials 11 (2021), 10.3390/nano11092410.
* Abasahl _et al._ (2021) B. Abasahl, C. Santschi, T. V. Raziman, and O. J. F. Martin, Nanotechnology 32, 475202 (2021).
* Wang _et al._ (2019) J. Wang, J. Butet, G. D. Bernasconi, A.-L. Baudrion, G. Lévêque, A. Horrer, A. Horneber, O. J. F. Martin, A. J. Meixner, M. Fleischer, P.-M. Adam, and D. Zhang, Nanoscale 11, 23475 (2019).
* Wang _et al._ (2021) J. Wang, A.-L. Baudrion, J. Béal, A. Horneber, F. Tang, J. Butet, O. J. F. Martin, A. J. Meixner, P.-M. Adam, and D. Zhang, The Journal of Chemical Physics 154, 074701 (2021).
* Kruk _et al._ (2022) S. S. Kruk, L. Wang, B. Sain, Z. Dong, J. Yang, T. Zentgraf, and Y. Kivshar, Nature Photonics 16, 561 (2022).
* Czaplicki _et al._ (2015) R. Czaplicki, J. Mäkitalo, R. Siikanen, H. Husu, J. Lehtolahti, M. Kuittinen, and M. Kauranen, Nano Letters 15, 530 (2015), pMID: 25521745.
* Kottmann and Martin (2001) J. P. Kottmann and O. J. F. Martin, Opt. Lett. 26, 1096 (2001).
* Gallinet _et al._ (2010) B. Gallinet, A. M. Kern, and O. J. F. Martin, J. Opt. Soc. Am. A 27, 2261 (2010).
* Butet _et al._ (2013) J. Butet, B. Gallinet, K. Thyagarajan, and O. J. F. Martin, J. Opt. Soc. Am. B 30, 2970 (2013).
* McPeak _et al._ (2015) K. M. McPeak, S. V. Jayanti, S. J. P. Kress, S. Meyer, S. Iotti, A. Rossinelli, and D. J. Norris, ACS Photonics 2, 326 (2015).
* Castro-Lopez _et al._ (2011) M. Castro-Lopez, D. Brinks, R. Sapienza, and N. F. van Hulst, Nano Letters 11, 4674 (2011), pMID: 21970569\.
* Knight _et al._ (2014) M. W. Knight, N. S. King, L. Liu, H. O. Everitt, P. Nordlander, and N. J. Halas, ACS Nano 8, 834 (2014), pMID: 24274662.
* Gérard and Gray (2014) D. Gérard and S. K. Gray, Journal of Physics D: Applied Physics 48, 184001 (2014).
* Thyagarajan _et al._ (2016) K. Thyagarajan, C. Santschi, P. Langlet, and O. J. F. Martin, Advanced Optical Materials 4, 871 (2016).
* Krause _et al._ (2004) D. Krause, C. W. Teplin, and C. T. Rogers, Journal of Applied Physics 96, 3626 (2004).
* Sounas and Alù (2014) D. L. Sounas and A. Alù, Opt. Lett. 39, 4053 (2014).
* Achouri _et al._ (2017) K. Achouri, Y. Vahabzadeh, and C. Caloz, Opt. Express 25, 19013 (2017).
* Achouri _et al._ (2018) K. Achouri, G. D. Bernasconi, J. Butet, and O. J. F. Martin, IEEE Transactions on Antennas and Propagation 66, 6061 (2018).
* Achouri _et al._ (2022b) K. Achouri, A. Kiselev, and O. J. F. Martin, New Journal of Physics 24, 025006 (2022b).
* Ray _et al._ (2020) D. Ray, T. V. Raziman, C. Santschi, D. Etezadi, H. Altug, and O. J. F. Martin, Nano Letters 20, 8752 (2020), pMID: 33206533.
* Beversluis _et al._ (2003) M. R. Beversluis, A. Bouhelier, and L. Novotny, Phys. Rev. B 68, 115433 (2003).
* Mühlschlegel _et al._ (2005) P. Mühlschlegel, H.-J. Eisler, O. J. F. Martin, B. Hecht, and D. W. Pohl, Science 308, 1607 (2005).
* Biagioni _et al._ (2009) P. Biagioni, M. Celebrano, M. Savoini, G. Grancini, D. Brida, S. Mátéfi-Tempfli, M. Mátéfi-Tempfli, L. Duò, B. Hecht, G. Cerullo, and M. Finazzi, Phys. Rev. B 80, 045411 (2009).
* Toyoda _et al._ (2021) S. Toyoda, M. Fiebig, T. hisa Arima, Y. Tokura, and N. Ogawa, Science Advances 7, eabe2793 (2021).
* Mund _et al._ (2021) J. Mund, D. R. Yakovlev, A. N. Poddubny, R. M. Dubrovin, M. Bayer, and R. V. Pisarev, Phys. Rev. B 103, L180410 (2021).
* Trzeciecki and Hübner (2000) M. Trzeciecki and W. Hübner, Phys. Rev. B 62, 13888 (2000).
* Sounas and Alù (2017) D. L. Sounas and A. Alù, Phys. Rev. Lett. 118, 154302 (2017).
* Willner _et al._ (2014) A. E. Willner, S. Khaleghi, M. R. Chitgarha, and O. F. Yilmaz, J. Lightwave Technol. 32, 660 (2014).
## Supporting Information
Figure S1: Control over the linear and SH transmission via the $L_{x}$ and
$L_{y}$ geometrical parameters at the excitation wavelength
$\lambda_{0}=$$800\text{\,}\mathrm{nm}$. Other parameters are fixed:
$D=t_{\ce{Ag}}=t_{\ce{Al}}=$$50\text{\,}\mathrm{nm}$ and
$L_{\textrm{s}}=$$25\text{\,}\mathrm{nm}$. (a) Linear and (b) SH transmission
upon forward excitation; (c) linear and (d) SH transmission upon backward
excitation. (e) Forward/backward-excitation SH transmission extinction ratio.
Figure S2: Dependence of the linear and SH transmission and reflection upon
forward excitation (FT and FR) on the geometrical parameters: (a) linear and
(b) SH. Other geometrical parameters are fixed:
$L_{x}=$$135\text{\,}\mathrm{nm}$, $L_{y}=$$195\text{\,}\mathrm{nm}$ and
$D=t_{\ce{Ag}}=t_{\ce{Al}}=$$50\text{\,}\mathrm{nm}$.
Figure S3: Electric field components calculated for the reflected and
transmitted SH waves. In both cases, (a) forward excitation (FE) and (b)
backward excitation (BE), the $E_{y}^{2\omega}$ component (orthogonal to the
excitation field $E_{x}^{\omega}$) is dominant.
Figure S4: Dielectric permittivity of Al (blue lines) and Ag (gray lines) used
in the simulations: real (bottom panel) and imaginary (top panel) parts of the
interpolated experimental data from ref. McPeak _et al._ (2015).
Figure S5: Isolated meta-atom linear scattering and absorption analysis. (a)
Scattering (red solid lines), absorption (red dotted lines) and extinction
(black solid lines) cross-sections upon forward excitation (upward triangles)
and backward excitation (downward triangles); (b) absorption in the Ag (gray
solid lines) and Al (blue dotted lines) domains upon forward excitation
(upward triangles) and backward excitation (downward triangles). Pseudo-color
images of the electric field distribution upon forward (c) and backward (d)
excitations; the normalized magnitude of the electric field is plotted on a
logarithmic scale.
Figure S6: Isolated meta-atom multipole analysis. Vector spherical harmonic
decomposition of (a) linear scattering upon forward excitation; (b) linear
scattering upon backward excitation; (c) SHG scattering upon forward
excitation; (d) SHG scattering upon backward excitation.
# Betatron frequency and the Poincaré rotation number
Sergei Nagaitsev Fermilab, Batavia, IL 60510, USA The Enrico Fermi
Institute, The University of Chicago, Chicago, IL 60637, USA Timofey Zolkin
Fermilab, Batavia, IL 60510, USA
###### Abstract
Symplectic maps are routinely used to describe single-particle dynamics in
circular accelerators. In the case of a linear accelerator map, the rotation
number (the betatron frequency) can be easily calculated from the map itself.
In the case of a nonlinear map, the rotation number is normally obtained
numerically, by iterating the map for given initial conditions, or through a
normal form analysis, a type of a perturbation theory for maps. Integrable
maps, a subclass of symplectic maps, allow for an analytic evaluation of their
rotation numbers. In this paper we propose an analytic expression to determine
the rotation number for integrable symplectic maps of the plane and present
several examples, relevant to accelerators. These new results can be used to
analyze the topology of the accelerator Hamiltonians as well as to serve as
the starting point for a perturbation theory for maps.
## I Introduction
The first mention of the betatron frequency was in the 1941 pioneering work by
Kerst and Serber Kerst and Serber (1941), where they defined it as the
fractional number of particle oscillations around the orbit per one revolution
period in a betatron (a type of an induction accelerator). Later, the theory
of the alternating-gradient (AG) synchrotron Courant and Snyder (1958)
demonstrated the existence of an integral of motion (the so-called Courant-
Snyder invariant) for particles in an AG synchrotron and established a
powerful connection between the modern AG focusing systems and linear
symplectic maps, thus connecting the betatron frequency and the Poincaré
rotation number Poincaré (1885).
In modern accelerators (for example, in the LHC) particles are stored for up
to $10^{9}$ revolutions and understanding their dynamics is crucially
important for maintaining long-term particle stability Todesco (1998);
Papaphilippou (2014). One important parameter of particle dynamics in an
accelerator is the betatron frequency and its dependence on a particle’s
amplitude. It turns out that the accelerator focusing systems conserve the
Courant-Snyder invariant only approximately and there is a need to analyze the
conditions for stable particle dynamics. Over the recent years, several
methods were developed to analyze the particle motion in accelerator systems,
using either numeric tools, like the Frequency Map Analysis Dumas and Laskar
(1993), or the Normal Form Analysis Bazzani _et al._ (1994); Turchetti and
Panichi (2019), a type of a perturbation theory, which uses a linear map and a
Courant-Snyder invariant as a starting point.
At the same time, there has been continuous interest, starting with E.
McMillan McMillan (1971), in making the accelerator maps nonlinear, yet
integrable Chow and Cary (1994); Danilov (2008); Danilov and Nagaitsev (2010,
2014). However, there does not exist an analytic method to calculate the
betatron frequency (the Poincaré rotation number) for nonlinear symplectic
integrable maps. This present paper is set to remedy this deficiency.
## II Betatron frequency
For a one degree-of-freedom time-independent system, the Hamiltonian function,
$\mathrm{H}[p,q;t]=\mathrm{E}$, is the integral of the motion. If the motion
is bounded, it is also periodic, and the period of oscillations can be
determined by integrating
$T(\mathrm{E})=\oint\left(\frac{\partial\mathrm{H}}{\partial
p}\right)^{-1}\,\mathrm{d}q,$ (1)
where $p=p(\mathrm{E},q)$ Lichtenberg and Lieberman (1992). The oscillation
period and its dependence on the initial conditions are among the key
properties of the periodic motion.
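As a quick numerical illustration of Eq. (1), the following minimal sketch
uses a hypothetical test Hamiltonian (not taken from the text), the quartic
oscillator $\mathrm{H}=p^{2}/2+q^{4}/4$, for which the period depends on the
energy:

```python
import numpy as np
from scipy.integrate import quad

# Eq. (1) for the quartic oscillator H = p^2/2 + q^4/4 (hypothetical test
# case): T(E) = oint dq/(dH/dp) = 2 * int_{-qmax}^{qmax} dq/sqrt(2E - q^4/2),
# with turning points at qmax = (4E)^(1/4). quad handles the integrable
# inverse-square-root singularities at the endpoints.
def period(E):
    qmax = (4.0 * E) ** 0.25
    integrand = lambda q: 1.0 / np.sqrt(2.0 * E - q**4 / 2.0)
    half, _ = quad(integrand, -qmax, qmax, limit=200)  # upper-branch half of the loop
    return 2.0 * half

for E in (0.5, 1.0, 2.0):
    # unlike the harmonic oscillator, T depends on E; T * E^(1/4) is constant
    print(E, period(E), period(E) * E**0.25)
```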
Let us now consider a symplectic map of the plane (corresponding to a one-turn
map of an accelerator), $\mathrm{M}:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}$,
$(q^{\prime},p^{\prime})=\mathrm{M}\,(q,p),$
where the prime symbols (′) indicate the transformed phase space coordinates.
Suppose that the sequence, generated by a repeated application of the map,
$(q_{0},p_{0})\rightarrow(q_{1},p_{1})\rightarrow(q_{2},p_{2})\rightarrow(q_{3},p_{3})\rightarrow\ldots$
belongs to a closed invariant curve. We do not describe how this map is
obtained (see, for example, Dragt (2013)) but let us suppose that we know the
mapping equations. Let $R_{n}$ be the rotation angle in the phase space
$(q,p)$ around a stable fixed point between two consecutive iterations
$(q_{n},p_{n})$ and $(q_{n+1},p_{n+1})$. Then, the limit, when it exists,
$\nu=\lim_{N\to\infty}\frac{1}{2\,\pi\,N}\,\sum_{n=0}^{N}R_{n}$ (2)
is called the rotation number (the betatron frequency of the one-turn map) for
that particular orbit of the map $\mathrm{M}$ Dilão and Alves-Pires (1996).
Unlike Eq. (1), which allows one to express the oscillation period analytically,
Eq. (2) can only be evaluated numerically for each orbit. Let us now suppose
that there exists a non-constant real-valued continuous function
$\mathcal{K}(q,p)$, which is invariant under $\mathrm{M}$. The function
$\mathcal{K}(q,p)$ is called an integral and the map is called integrable. In
this paper, we are describing the case, for which the level sets
$\mathcal{K}=\mathrm{const}$ are compact closed curves (or sets of points) and
for which the identity
$\mathcal{K}(q^{\prime},p^{\prime})=\mathcal{K}(q,p)$ (3)
holds for all $(q,p)$. There are many examples of integrable maps, including
the famous McMillan map McMillan (1971), described below. The dynamics is in
many ways similar to that of a continuous system, however, Eq. (1) is not
directly applicable since the integral $\mathcal{K}(q,p)$ is not the
Hamiltonian function. Below, we will present an expression (the Danilov
theorem) to obtain the rotation number from $\mathcal{K}(q,p)$ for an
integrable map, $\mathrm{M}$.
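For concreteness, Eq. (2) can be estimated by direct iteration; below is a
minimal sketch for the integrable McMillan map discussed in Sec. IV
(Eq. (22)), with hypothetical parameters, assuming no single iterate advances
the phase angle by more than $\pi$:

```python
import numpy as np

# Numerical estimate of the rotation number, Eq. (2), by accumulating the
# signed turning angle between consecutive radius vectors around the origin.
a, b = 1.6, 1.0  # hypothetical parameters in the stable range |a| < 2, b > 0

def mcmillan(q, p):
    return p, -q + a * p / (b * p * p + 1.0)

def rotation_number(q, p, N=100_000):
    total = 0.0
    for _ in range(N):
        qn, pn = mcmillan(q, p)
        total += np.arctan2(q * pn - p * qn, q * qn + p * pn)
        q, p = qn, pn
    return abs(total) / (2.0 * np.pi * N)

print(rotation_number(1e-4, 0.0))           # small amplitude: ~0.1024
print(np.arccos(a / 2.0) / (2.0 * np.pi))   # linearized value, arccos(a/2)/(2*pi)
print(rotation_number(1.0, 0.0))            # amplitude-dependent detuning
```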
The Arnold-Liouville theorem for integrable maps Arnold and Avez (1968);
Veselov (1991); Meiss (1992) states that (1) the action-angle variables exist
and (2) in these variables, consecutive iterations of integrable map
$\mathrm{M}$ lie on nested circles of radius $J$ and that the map can be
written in the form of a twist map,
$\begin{bmatrix}J_{n+1}\\ \theta_{n+1}\end{bmatrix}=\begin{bmatrix}J_{n}\\ \theta_{n}+2\,\pi\,\nu(J)\mod 2\,\pi\end{bmatrix},$ (4)
where $|\nu(J)|\leq 0.5$ is the rotation number, $\theta$ is the angle
variable and $J$ is the action variable, defined by the map $\mathrm{M}$ as
$J=\frac{1}{2\,\pi}\,\oint p\,\mathrm{d}q.$ (5)
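In practice, the action of Eq. (5) can be estimated directly from the map
iterates themselves; a minimal sketch (again for the McMillan map of Sec. IV,
with hypothetical parameters, and assuming the invariant curve is star-shaped
about the origin):

```python
import numpy as np

# Action J of Eq. (5) for an invariant curve: for irrational nu the iterates
# densely fill the curve, so sorting them by polar angle and applying the
# shoelace formula recovers the enclosed area, and J = area / (2*pi).
a, b = 1.6, 1.0
q, p = 1.0, 0.0
pts = []
for _ in range(20_000):
    pts.append((q, p))
    q, p = p, -q + a * p / (b * p * p + 1.0)
pts = np.array(pts)
pts = pts[np.argsort(np.arctan2(pts[:, 1], pts[:, 0]))]
x, y = pts[:, 0], pts[:, 1]
area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
print("J =", area / (2.0 * np.pi))
```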
Thus, in this paper, we would like to consider the following question: how
does one determine the rotation number, $\nu(\mathcal{K})$, from the known
integral, $\mathcal{K}(q,p)$, and the known integrable map, $\mathrm{M}$? In
addition, in the “Examples” section we propose how to use this theorem when
only an approximate invariant is known.
## III Danilov theorem
###### Theorem 1 (Danilov theorem Danilov (deceased)).
Suppose a symplectic map of the plane,
$(q^{\prime},p^{\prime})=\mathrm{M}\,(q,p),$
is integrable with the invariant (integral) $\mathcal{K}(q,p)$, then its
Poincaré rotation number is
$\nu(\mathcal{K})=\int_{q}^{q^{\prime}}\left(\frac{\partial\mathcal{K}}{\partial
p}\right)^{-1}\,\mathrm{d}q\Bigg{/}\oint\left(\frac{\partial\mathcal{K}}{\partial
p}\right)^{-1}\,\mathrm{d}q,$ (6)
where the integrals are evaluated along the invariant curve,
$\mathcal{K}(q,p)$.
Figure 1: Constant level sets of the integral
$\mathcal{K}(q,p)=\mathrm{const}$ (left). A particular curve representing a
level set of $\mathcal{K}$ and several iterates of the map $\mathrm{M}$
(center). A three-dimensional phase space, $(q,p)$ \+ time, of the system (7)
(right). Dark gray planes $t=0,\tau,2\tau,\ldots$ represent stroboscopic
Poincaré section of the continuous flow of the system (red curve) which is
identical to map $\mathrm{M}$.
###### Proof.
Consider the following system of differential equations:
$\frac{\mathrm{d}Q}{\mathrm{d}t}=\frac{\partial\mathcal{K}(Q,P)}{\partial
P},\qquad\qquad\frac{\mathrm{d}P}{\mathrm{d}t}=-\frac{\partial\mathcal{K}(Q,P)}{\partial
Q}.$ (7)
We notice that $\mathcal{K}(Q,P)$ does not change along a solution of the
system, because it is an integral of the motion, meaning
$\frac{\mathrm{d}\mathcal{K}}{\mathrm{d}t}=\frac{\partial\mathcal{K}}{\partial
Q}\frac{\mathrm{d}Q}{\mathrm{d}t}+\frac{\partial\mathcal{K}}{\partial
P}\frac{\mathrm{d}P}{\mathrm{d}t}=0$ (8)
for any solution $Q(t)$ and $P(t)$. Let $q(t)$ and $p(t)$ be the solutions of
the system (7) with the following initial conditions $q(0)=q_{0}$ and
$p(0)=p_{0}$. Define a new map, $\widetilde{\mathrm{M}}(q,p)$ (see Fig. 1)
$(q^{\prime},p^{\prime})=\widetilde{\mathrm{M}}(q,p)=\left(q(\tau),p(\tau)\right)$
(9)
where $\tau$ is a discrete time step. For a given $\mathcal{K}$, which is an
integral of both $\mathrm{M}$ and $\widetilde{\mathrm{M}}$, one can always
select $\tau(\mathcal{K})$ such that the maps $\mathrm{M}(q,p)$ and
$\widetilde{\mathrm{M}}(q,p)$ are identical. This follows from the Arnold-
Liouville theorem. Since the level set of $\mathcal{K}(q,p)$ is a compact closed curve, the
functions $q(t)$ and $p(t)$ are periodic with a period $T(\mathcal{K})$. By
its definition,
$\tau=\nu(\mathcal{K})\,T(\mathcal{K}).$ (10)
Let us now calculate $\nu(\mathcal{K})$:
$\nu(\mathcal{K})\equiv\frac{\tau}{T}=\frac{\int_{q}^{q^{\prime}}\,\mathrm{d}t}{\oint\,\mathrm{d}t}=\frac{\int_{q}^{q^{\prime}}\left(\frac{\mathrm{d}q}{\mathrm{d}t}\right)^{-1}\,\mathrm{d}q}{\oint\left(\frac{\mathrm{d}q}{\mathrm{d}t}\right)^{-1}\,\mathrm{d}q}=\frac{\int_{q}^{q^{\prime}}\left(\frac{\partial\mathcal{K}}{\partial p}\right)^{-1}\,\mathrm{d}q}{\oint\left(\frac{\partial\mathcal{K}}{\partial p}\right)^{-1}\,\mathrm{d}q}.$ (11)
Q.E.D. ∎
Figure 2: The partial action is defined as a sector area (blue) for one map
iteration, divided by ${2\,\pi}$ (a). Convenient choices of the partial
action for mappings in McMillan form: an area under the curve in the II
(blue) or IV (green) quadrants (b), and areas for initial conditions of the
form $(q_{0},q_{0})$ (c).
###### Corollary 1.1.
$\nu(\mathcal{K})=\frac{\mathrm{d}J^{\prime}}{\mathrm{d}J},$ (12)
where
$J^{\prime}(\mathcal{K})=\frac{1}{2\,\pi}\,\int_{q}^{q^{\prime}}p(\mathcal{K},q)\,\mathrm{d}q$
(13)
is the partial action calculated as a sector integral (see Fig. 2) around the
stable fixed point.
###### Proof.
First, we will consider the denominator in Eq. (11):
$\frac{1}{2\,\pi}\,\oint\left(\frac{\partial\mathcal{K}}{\partial
p}\right)^{-1}\,\mathrm{d}q=\frac{1}{2\,\pi}\,\frac{\mathrm{d}}{\mathrm{d}\mathcal{K}}\,\oint
p\,\mathrm{d}q=\frac{\mathrm{d}J}{\mathrm{d}\mathcal{K}}.$ (14)
Second, we will evaluate the numerator. Using the equations of motion in Eq.
(7), we notice that
$\int_{q}^{q^{\prime}}\left(\frac{\partial\mathcal{K}}{\partial
p}\right)^{-1}\,\mathrm{d}q=-\int_{p}^{p^{\prime}}\left(\frac{\partial\mathcal{K}}{\partial
q}\right)^{-1}\,\mathrm{d}p.$ (15)
Now, we will utilize the Leibniz integral rule together with Eq. (15) to
obtain
$\frac{1}{2\,\pi}\,\int_{q}^{q^{\prime}}\left(\frac{\partial\mathcal{K}}{\partial p}\right)^{-1}\,\mathrm{d}q=\frac{1}{2\,\pi}\,\frac{\mathrm{d}}{\mathrm{d}\mathcal{K}}\left(\frac{q\,p-q^{\prime}\,p^{\prime}}{2}+\int_{q}^{q^{\prime}}p\,\mathrm{d}q\right)=\frac{\mathrm{d}J^{\prime}}{\mathrm{d}\mathcal{K}}.$
(16)
Finally, by combining Eqs. (14) and (16) we obtain Eq. (12). ∎
###### Corollary 1.2.
For a linear map ($\nu=\mathrm{const}$),
$\nu=J^{\prime}/J.$ (17)
###### Proof.
Since $\nu=\mathrm{const}$, the Hamiltonian function is
$\mathrm{H}(J)=\nu\,J$. Using Eq. (12), we obtain Eq. (17). ∎
###### Corollary 1.3.
The Hamiltonian function corresponding to the map $\mathrm{M}$ is
$\mathrm{H}(\mathcal{K})=J^{\prime}(\mathcal{K}).$ (18)
###### Proof.
Since $\nu=\mathrm{d}\mathrm{H}/\mathrm{d}J$, one can use Eq. (12) to obtain
$\mathrm{H}=J^{\prime}+\mathrm{const}$. ∎
###### Corollary 1.4.
$\nu(\mathcal{K})=\int_{p}^{p^{\prime}}\left(\frac{\partial\mathcal{K}}{\partial
q}\right)^{-1}\,\mathrm{d}p\Bigg{/}\oint\left(\frac{\partial\mathcal{K}}{\partial
q}\right)^{-1}\,\mathrm{d}p,$ (19)
where the integrals are evaluated along the invariant curve,
$\mathcal{K}(q,p)$.
###### Proof.
Because of the $p\leftrightarrow-q$ symmetry in Eqs. (7), the proof is
analogous to the derivation of Eq. (11). ∎
In order to generalize the Danilov theorem to higher-dimensional integrable
maps, one has to know the variables in which such a map separates into maps
for each degree of freedom. Below we will consider an example of a 4D map,
which is separable in polar coordinates and has two integrals of motion.
## IV Examples
In order to employ this theorem in practice, one would need to recall that
with $p=p(\mathcal{K},q)$, the integrand in Eq. (6) is
$\left(\frac{\partial\mathcal{K}}{\partial p}\right)^{-1}=\frac{\partial
p(\mathcal{K},q)}{\partial\mathcal{K}}.$ (20)
Also, the lower limit of the integral can be chosen to be any convenient value
of $q$, for example 0, as long as it belongs to a given level set,
$\mathcal{K}(q,p)$. Finally, the upper limit of the integral, $q^{\prime}$, is
obtained from the selected $q$ and $p=p(\mathcal{K},q)$ by iterating the map,
$\mathrm{M}(q,p)$. It is clear that not all functions $\mathcal{K}(q,p)$ can
be inverted analytically to obtain $p=p(\mathcal{K},q)$. This drawback can be
overcome by numerical evaluation (see Appendix B).
For maps in a special (McMillan) form McMillan (1971),
$\begin{bmatrix}q^{\prime}\\\ p^{\prime}\end{bmatrix}=\begin{bmatrix}p\\\
-q+f(p)\end{bmatrix},$ (21)
the convenient choices for integration limits in Eq. (6) are $(q,p)=(q_{0},0)$
and $(q^{\prime},p^{\prime})=(0,-q_{0}+f(0))$, Fig. 2.b, and $(q,p)=(a,a)$ and
$(q^{\prime},p^{\prime})=(a,-a+f(a))$, Fig. 2.c. Finally, for twist maps, Eq.
(4), the Danilov theorem Eq. (6) gives $\nu$, as expected.
Let us now consider several non-trivial examples. Linear maps are presented in
Appendix A.
### IV.1 McMillan map
As our first example, we will consider the so-called McMillan map McMillan
(1971),
$\begin{bmatrix}q^{\prime}\\\ p^{\prime}\end{bmatrix}=\begin{bmatrix}p\\\
-q+a\,p/(b\,p^{2}+1)\end{bmatrix}.$ (22)
This map has been considered in detail in Ref. Iatrou and Roberts (2002). To
illustrate the Danilov theorem, we will limit ourselves to a case with $b>0$
and $|a|<2$, which corresponds to stable motion at small amplitudes. Mapping
(22) has the following integral:
$\mathcal{K}(q,p)=b\,q^{2}p^{2}+q^{2}+p^{2}-a\,q\,p,$ (23)
which is non-negative for the chosen parameters.
We first notice that for small amplitudes, $b\,p^{2}\ll 1$, this map can be
approximated as
$\begin{bmatrix}q^{\prime}\\\
p^{\prime}\end{bmatrix}\approx\begin{bmatrix}p\\\
-q+a\,p-a\,b\,p^{3}+a\,b^{2}\,p^{5}-...\end{bmatrix},$ (24)
and its zero-amplitude rotation number is Courant and Snyder (1958)
$\nu(0)=\frac{1}{2\,\pi}\,\arccos\frac{a}{2}.$ (25)
At large amplitudes ($b\,p^{2}\gg 1$), the rotation number becomes $0.25$. We
will now evaluate the rotation number analytically, using Eq. (6). Let us
define a parameter,
$w(\mathcal{K})=\frac{1}{\sqrt{2}}\sqrt{1+\frac{d(\mathcal{K})}{\sqrt{d(\mathcal{K})^{2}+4\,\mathcal{K}\,b}}},$
(26)
which spans from 0 to 1 and where $d(\mathcal{K})=a^{2}/4+\mathcal{K}\,b-1$.
Then, the rotation number can be expressed through Jacobi elliptic functions
as follows:
$\nu(\mathcal{K})=\frac{1}{4\,\mathrm{K}(w)}\mathrm{arcds}\left({\left(d(\mathcal{K})^{2}+4\,\mathcal{K}\,b\right)^{-1/4}},w\right),$
(27)
where $\mathrm{K}(w)$ is the complete elliptic integral of the first kind and
the inverse Jacobi function, $\mathrm{arcds}(x,w)$, is defined as follows
$\mathrm{arcds}(x,w)=\int_{x}^{\infty}\frac{\mathrm{d}t}{\sqrt{(t^{2}+w^{2})(t^{2}+w^{2}-1)}}.$
(28)
The rotation number, Eq. (27), has the following series expansion:
$\nu(\mathcal{K})\approx\nu(0)+\frac{3}{2\,\pi}\,\frac{b\,a}{\sqrt{(4-a^{2})^{3}}}\mathcal{K}.$
(29)
Figure 3: The left plot contains iterations (green dots) of the McMillan map
($a=1.6$, $b=1$). Constant level sets of the invariant are shown with blue
lines. The right plot is the rotation number, Eq. (27), as a function of its
integral, $\mathcal{K}$. The inset shows the linear approximation, Eq. (29).
Figure 3 shows an example of the rotation number, for the case of $a=1.6$ and
$b=1$ ($\nu(0)\approx 0.102$), as a function of the integral, $\mathcal{K}$.
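As a numerical sanity check of Eq. (27), one can compare the analytic rotation
number against direct tracking of the map (22). The following Python sketch is
our illustration rather than part of the original derivation; it assumes the
modulus convention for $\mathrm{K}(w)$, so that scipy's `ellipk(m)` is called
with the parameter $m=w^{2}$, and it evaluates $\mathrm{arcds}$, Eq. (28), by
direct quadrature.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

a, b = 1.6, 1.0  # stable case: b > 0, |a| < 2

def mcmillan(q, p):
    # One iteration of the McMillan map, Eq. (22)
    return p, -q + a*p/(b*p**2 + 1)

def nu_tracking(q0, p0, n=20000):
    # Average clockwise phase advance per iteration around the origin
    q, p = q0, p0
    th_prev, total = np.arctan2(p, q), 0.0
    for _ in range(n):
        q, p = mcmillan(q, p)
        th = np.arctan2(p, q)
        total += (th_prev - th) % (2*np.pi)
        th_prev = th
    return total/(2*np.pi*n)

def nu_analytic(K):
    # Rotation number from Eq. (27), with d and w as in Eq. (26)
    d = a**2/4 + K*b - 1
    w = np.sqrt(0.5*(1 + d/np.sqrt(d**2 + 4*K*b)))
    x = (d**2 + 4*K*b)**(-0.25)
    # arcds(x, w), Eq. (28); the integrand is real for the stable
    # small-amplitude regime considered here
    arcds, _ = quad(lambda t: 1/np.sqrt((t**2 + w**2)*(t**2 + w**2 - 1)),
                    x, np.inf)
    return arcds/(4*ellipk(w**2))  # scipy's ellipk takes m = w**2

q0 = 0.5
K0 = b*q0**4 + 2*q0**2 - a*q0**2  # invariant, Eq. (23), at (q0, q0)
print(nu_tracking(q0, q0), nu_analytic(K0))  # the two should agree closely
```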
The McMillan invariant (23) also allows for an analytic evaluation of the
action integral (5). We will omit the lengthy expressions and only present a
small-amplitude series expansion:
$J(\mathcal{K})\approx\frac{\mathcal{K}}{\sqrt{4-a^{2}}}-\frac{b\,(2+a^{2})\,\mathcal{K}^{2}}{\sqrt{(4-a^{2})^{5}}}.$
(30)
Finally, we can also present a small-amplitude series expansion of the
rotation number (27):
$\nu(J)\approx\nu(0)+\frac{3}{2\,\pi}\,\frac{b\,a}{4-a^{2}}J.$ (31)
### IV.2 Cubic map
Figure 4: The top row: phase-space trajectories of a cubic map, obtained by
tracking with $a=-0.85$ (left plot) and level sets of the approximate
invariant (34) (right plot), on the same scale. The red and blue lines in the
top left plot correspond to the symmetry lines $p=q$ and
$p=(a\,q+\epsilon\,q^{3})/2$, respectively. The bottom row: the left plot
shows the rotation number as a function of initial conditions of the form
$q_{0}=p_{0}$, by using Eq. (2) (black solid line), and by using the
Danilov theorem, Eq. (12) numerically (orange dashed). The red solid line
corresponds to the rotation number obtained from the approximate invariant
(34) using the Danilov theorem as well. The right bottom plot shows the
dependence of $\nu$ as a function of action $J$, from tracking (orange dashed)
and from the approximate invariant (34) (red solid).
As our second example, we will consider a non-integrable Hénon cubic map Hénon
(1969); Dullin and Meiss (2000):
$\begin{bmatrix}q^{\prime}\\\ p^{\prime}\end{bmatrix}=\begin{bmatrix}p\\\
-q+a\,p+\epsilon\,p^{3}\end{bmatrix}.$ (32)
This map is well-known in accelerator physics as a symplectic octupole map. At
small amplitudes this map is linear and the rotation number is
$\nu\approx\frac{1}{2\,\pi}\,\arccos\left(\frac{a}{2}\right).$ (33)
At large amplitudes this map becomes chaotic and unstable. Let us propose an
approximate integral (an exact integral does not exist since the map is
non-integrable):
$\begin{array}[]{l}\displaystyle\mathcal{K}_{\text{c}}(q,p)=p^{2}+q^{2}-a\,p\,q-\frac{\epsilon}{a}\,p^{2}q^{2}\\\\[8.5359pt]
\displaystyle\qquad+\frac{7\,\epsilon}{5\,a\,(4-a^{2})}\,\left(p^{2}+q^{2}-a\,p\,q\right)^{2}+O\left(\epsilon^{2}\right).\end{array}$
(34)
The derivation of this approximate integral goes beyond the scope of this
article and will be described in subsequent publications. For this
illustration, the reader can verify by inspection that this integral is
approximately conserved near the origin. We will now use the Danilov theorem
to evaluate the rotation number of this map for various initial conditions
with $q_{0}=p_{0}$. Figure 4 shows the exact (numeric), Eq. (2), and the
approximate rotation number, calculated from (34) and (32) using the Danilov
theorem, Eq. (6).
A small-amplitude series expansion of the rotation number is:
$\nu(J)\approx\nu(0)-\frac{3}{2\pi}\frac{\epsilon}{4-a^{2}}J,$ (35)
which is the same as in Dullin and Meiss (2000) and similar to Eq. (31).
### IV.3 4-D integrable map
In this section we will sketch out an example of how to use the Danilov
theorem to analyze an integrable multi-dimensional map. Consider the following
map, which can be realized in accelerators by employing the so-called electron
lens Shiltsev _et al._ (1999, 2008); Lobach _et al._ (2018),
$\begin{bmatrix}x^{\prime}\\\\[1.42271pt] p_{x}^{\prime}\\\\[1.42271pt]
y^{\prime}\\\\[1.42271pt]
p_{y}^{\prime}\end{bmatrix}=\begin{bmatrix}\alpha_{x}x+\beta\,p_{x}\\\
-\gamma_{x}x-\alpha_{x}\,p_{x}+\frac{a\,x^{\prime}}{b\,r^{\prime 2}+1}\\\
\alpha_{y}y+\beta\,p_{y}\\\
-\gamma_{y}y-\alpha_{y}\,p_{y}+\frac{a\,y^{\prime}}{b\,r^{\prime
2}+1}\end{bmatrix},$ (36)
where $r^{2}=x^{2}+y^{2}$, $\beta\,\gamma_{x}=1+\alpha_{x}^{2}$,
$\beta\,\gamma_{y}=1+\alpha_{y}^{2}$, with $\alpha_{x}$, $\alpha_{y}$, $a$,
$b$ and $\beta$ being some arbitrary parameters. This map has two integrals of
motion in involution (having a vanishing Poisson bracket):
$L=(\alpha_{y}-\alpha_{x})\,x\,y+\beta(x\,p_{y}-y\,p_{x})$ (37)
and
$\mathcal{K}=\Big{(}b+\frac{1}{r^{2}}\Big{)}\,T^{2}+\beta\,a\,T+r^{2}+\frac{L^{2}}{r^{2}},$
(38)
where $T=\alpha_{x}x^{2}+\alpha_{y}y^{2}+\beta\,r\,p_{r}$ and
$p_{r}=(x\,p_{x}+y\,p_{y})/r$. In order to employ the Danilov theorem, we must
rewrite the map (36) in new variables in which it separates into two maps.
Such variables exist by virtue of this map being integrable. We first
notice that by introducing new variables,
$\begin{matrix}\tilde{x}=x/\sqrt{\beta}\\\
\tilde{p}_{x}=x\,\alpha_{x}/\sqrt{\beta}+p_{x}\sqrt{\beta}\\\
\tilde{y}=y/\sqrt{\beta}\\\
\tilde{p}_{y}=y\,\alpha_{y}/\sqrt{\beta}+p_{y}\sqrt{\beta},\end{matrix}$ (39)
the map (36) becomes symmetric in $\tilde{x}$ and $\tilde{y}$ with
$\tilde{a}=a\sqrt{\beta}$ and $\tilde{b}=b\beta$. The resulting map is
separable in polar coordinates, $r$ and $\theta$, such that
$x=r\,\cos(\theta)$ and $y=r\,\sin(\theta)$, where we omitted the tilde
($\,\tilde{}\,$) sign for clarity. The resulting map is
$\begin{bmatrix}r^{\prime}\\\\[8.5359pt] p_{r}^{\prime}\\\\[8.5359pt]
\theta^{\prime}\\\\[8.5359pt]
p_{\theta}^{\prime}\end{bmatrix}=\begin{bmatrix}\sqrt{p_{r}^{2}+\frac{p_{\theta}^{2}}{r^{2}}}\\\\[7.11317pt]
-p_{r}\frac{r}{r^{\prime}}+\frac{a\,r^{\prime}}{b\,r^{\prime
2}+1}\\\\[7.11317pt] \theta+\arctan\frac{p_{\theta}}{r\,p_{r}}\\\\[7.11317pt]
p_{\theta}\end{bmatrix},$ (40)
where the angular momentum $p_{\theta}=x\,p_{y}-y\,p_{x}=\mathrm{const}$ is
the integral of the motion. An additional integral is
$\mathcal{K}(r,p_{r},p_{\theta})=b\,r^{2}p_{r}^{2}+r^{2}+p_{r}^{2}-a\,r\,p_{r}+\frac{p_{\theta}^{2}}{r^{2}}.$
(41)
Now we will use the Danilov theorem to obtain two unknown rotation numbers,
$\nu_{\theta}$ and $\nu_{r}$. We first notice that $\mathcal{K}$ does not
depend on $\theta$ and thus can be used to evaluate $\nu_{r}$ in Eq. (11)
directly, by treating $p_{\theta}$ as a parameter.
$\begin{array}[]{l}\displaystyle\nu_{r}(\mathcal{K},p_{\theta})=\frac{\tau}{T_{r}}=\frac{\int_{r}^{r^{\prime}}\left(\frac{\partial\mathcal{K}}{\partial
p_{r}}\right)^{-1}\,\mathrm{d}r}{\oint\left(\frac{\partial\mathcal{K}}{\partial
p_{r}}\right)^{-1}\,\mathrm{d}r}\\\\[22.76228pt]
\displaystyle\qquad=\mathrm{F}\left[\arcsin\sqrt{\frac{\zeta_{3}-\zeta_{1}}{\zeta_{3}+1}},\kappa\right]\Big{/}\left(2\,\mathrm{K}\left(\kappa\right)\right),\end{array}$
(42)
where $\mathrm{K}(\kappa)$ is the complete elliptic integral of the first
kind, $\mathrm{F}(\phi,\kappa)$ is the incomplete elliptic integral of the
first kind, elliptic modulus $\kappa$ is given by
$\kappa=\sqrt{\frac{\zeta_{3}-\zeta_{2}}{\zeta_{3}-\zeta_{1}}},$
and $\zeta_{1}<0<\zeta_{2}<\zeta_{3}$ are the roots of the polynomial
$\mathcal{P}_{3}(\zeta)=-\zeta^{3}+\left[\mathcal{K}+\left(\frac{a}{2}\right)^{2}-1\right]\,\zeta^{2}+(\mathcal{K}-p_{\theta}^{2})\,\zeta-
p_{\theta}^{2}.$
In order to evaluate the angular rotation number, $\nu_{\theta}$, we first
notice that there is some uncertainty as to which integral of the motion to
employ: one can add an arbitrary function of $p_{\theta}$ to $\mathcal{K}$,
$\mathcal{K}^{\prime}=\mathcal{K}+f\left(p_{\theta}\right)$, to obtain another
integral. This new integral of motion, $\mathcal{K}^{\prime}$, gives the same
$\nu_{r}$, but modifies the angular motion by some unknown linear function of
time:
$\frac{d\theta}{dt}=\frac{d\mathcal{K}^{\prime}}{dp_{\theta}}=\frac{d\mathcal{K}}{dp_{\theta}}+f^{\prime}\left(p_{\theta}\right),$
(43)
$\theta(t)=\int\frac{d\mathcal{K}}{dp_{\theta}}dt+f^{\prime}\left(p_{\theta}\right)t.$
(44)
Fortunately, we can resolve this uncertainty by using the angular portion of
the map, Eq. (40). By its definition, the angular rotation number is
$\nu_{\theta}=\nu_{r}\frac{\Delta\theta\left(T_{r}\right)}{2\,\pi},$ (45)
where
$\Delta\theta\left(T_{r}\right)=\oint\frac{d\mathcal{K}}{dp_{\theta}}\left(\frac{\partial\mathcal{K}}{\partial{p_{r}}}\right)^{-1}dr+k\,T_{r},$
(46)
$k$ is an unknown coefficient and $T_{r}$ is the period of the radial motion,
$T_{r}=\oint\left(\frac{\partial\mathcal{K}}{\partial
p_{r}}\right)^{-1}\,\mathrm{d}r.$ (47)
To determine the coefficient $k$ we will notice from Eq. (40) that
$\Delta\theta(\tau)=\arctan\left(\frac{p_{\theta}}{r\,p_{r}}\right)$. Thus,
$k=\frac{1}{\tau}\left(\arctan\left(\frac{p_{\theta}}{r\,p_{r}}\right)-\int_{r}^{r^{\prime}}\frac{d\mathcal{K}}{dp_{\theta}}\left(\frac{\partial\mathcal{K}}{\partial{p_{r}}}\right)^{-1}dr\right)$
(48)
with
$\tau=\int_{r}^{r^{\prime}}\left(\frac{\partial\mathcal{K}}{\partial
p_{r}}\right)^{-1}\,\mathrm{d}r.$ (49)
Now, recalling that $\nu_{r}=\tau/{T_{r}}$, we finally obtain
$\begin{array}[]{l}\displaystyle\nu_{\theta}=\frac{\nu_{r}}{2\,\pi}\oint\frac{d\mathcal{K}}{dp_{\theta}}\left(\frac{\partial\mathcal{K}}{\partial{p_{r}}}\right)^{-1}dr\,+\\\\[9.95863pt]
\displaystyle\frac{1}{2\pi}\left(\arctan\left(\frac{p_{\theta}}{r\,p_{r}}\right)-\int_{r}^{r^{\prime}}\frac{d\mathcal{K}}{dp_{\theta}}\left(\frac{\partial\mathcal{K}}{\partial{p_{r}}}\right)^{-1}dr\right).\end{array}$
(50)
After some math, this expression can be rewritten as
$\begin{array}[]{l}\displaystyle\nu_{\theta}(\mathcal{K},p_{\theta})=\frac{\Delta}{2\,\pi}\,\left[\nu_{r}-\frac{\Delta^{\prime}}{\Delta}+\frac{\arctan\left(\frac{2\,p_{\theta}}{a}\,\frac{\zeta_{3}+1}{\zeta_{3}}\right)}{\Delta}\right]\end{array}$
(51)
where
$\begin{array}[]{l}\displaystyle\Delta=\frac{2\,p_{\theta}}{\zeta_{3}\,\sqrt{\zeta_{3}-\zeta_{1}}}\,\Pi\left[\kappa\,\bigg{|}\frac{\zeta_{3}-\zeta_{2}}{\zeta_{3}}\right],\\\\[9.95863pt]
\displaystyle\Delta^{\prime}=\frac{p_{\theta}}{\zeta_{3}\,\sqrt{\zeta_{3}-\zeta_{1}}}\,\Pi\left[\arcsin\sqrt{\frac{\zeta_{3}-\zeta_{1}}{\zeta_{3}+1}},\kappa\,\Bigg{|}\frac{\zeta_{3}-\zeta_{2}}{\zeta_{3}}\right],\end{array}$
and $\Pi(\kappa\,|\alpha)$ and $\Pi(\phi,\kappa\,|\alpha)$ are the complete
and the incomplete elliptic integrals of the third kind, respectively. One can
note that for a linear 4D map ($b=0$), we have $\nu_{r}=2\,\nu_{\theta}$ for
any value of $p_{\theta}$. Fig. 5 shows an example of the radial and the
angular rotation numbers as a function of $\mathcal{K}$ for various values of
$p_{\theta}$.
Figure 5: Radial (left) and angular (right) rotation numbers as a function of
the first integral of the map, $\mathcal{K}$, for different values of its
second integral, $p_{\theta}$ (shown with color labels). The map parameters
are $b=1$ and $a=1.6$. Note that for $p_{\theta}=0$,
$\nu_{r}=2\,\nu_{\theta}$, as expected, and equals the frequency $\nu$ from
the one-dimensional example of Fig. 3.
## V Summary
In this paper we demonstrated a general and exact method for finding the
Poincaré rotation number of integrable symplectic maps of the plane, together
with its connection to accelerator physics. It complements the discrete
Arnold-Liouville theorem for maps Veselov (1991); Arnold and Avez (1968) and
permits the analysis of the dynamics of integrable systems. Eq. (18) also
permits an explicit expression for the Hamiltonian function of a given
integrable map. The examples presented in this paper demonstrate that the
Danilov theorem is a powerful tool. The McMillan integrable map is a classic
example of a nonlinear integrable discrete-time system, which finds
applications in many areas of physics, including accelerators Antipov _et
al._ (2017); Lobach _et al._ (2018). It is a typical member of a wide class
of area-preserving transformations called twist maps Meiss (1992). For
non-integrable maps, which are also very common in accelerator science, this
new theorem could allow for an approximate evaluation of rotation numbers,
provided there exists an approximate integral of motion, like Eq. (34).
## VI Acknowledgments
The authors would like to thank Jeffrey Holmes and Stanislav Baturin for
carefully reading this manuscript and for their helpful comments. This
research is supported by Fermi Research Alliance, LLC under Contract No. DE-
AC02-07CH11359 with the U.S. Department of Energy and by the University of
Chicago.
## Appendix A Linear maps
In this appendix we will consider two examples of linear maps; we will use
Eq. (6) for the first and Eq. (17) for the second.
### A.1 Linear accelerator map
Consider a linear symplectic map,
$\begin{bmatrix}q^{\prime}\\\ p^{\prime}\end{bmatrix}=\begin{bmatrix}a&b\\\
c&d\end{bmatrix}\begin{bmatrix}q\\\ p\end{bmatrix},$ (52)
with $a\,d-b\,c=1$ and $|a+d|\leq 2$. This map is very common in accelerator
physics and has been described in Courant and Snyder (1958). The rotation
number (the betatron frequency) for this map is well known:
$\nu=\frac{1}{2\,\pi}\,\arccos\frac{a+d}{2}.$ (53)
To obtain this equation using the Danilov theorem, we will recall that this
map has the following Courant-Snyder integral (invariant):
$\mathcal{K}=c\,q^{2}+(d-a)\,q\,p-b\,p^{2}.$ (54)
Let us assume that $c>0$; then $b\leq 0$ and $\mathcal{K}(q,p)\geq 0$ for any
$q$ and $p$. From this, we obtain
$\left(\frac{\partial\mathcal{K}}{\partial p}\right)^{-1}=\frac{\partial
p}{\partial\mathcal{K}}=\frac{\pm
1}{\sqrt{\left[(a+d)^{2}-4\right]\,q^{2}-4\,b\,\mathcal{K}}}.$ (55)
We will use
$(q,p)=(\sqrt{\mathcal{K}/c},0)$ (56)
and
$(q^{\prime},p^{\prime})=(a\,\sqrt{\mathcal{K}/c},\sqrt{\mathcal{K}\,c}).$ (57)
After a straightforward evaluation of integrals in Eq. (6), we obtain:
$\nu=\frac{1}{2\,\pi}\,\arccos\frac{a+d}{2},$ (58)
the same as in Eq. (53).
### A.2 Brown map
As a second example we will consider the Brown map Brown (1983, 1993),
$\mathrm{M}_{\mathrm{B}}$,
$\begin{bmatrix}q^{\prime}\\\ p^{\prime}\end{bmatrix}=\begin{bmatrix}p\\\
-q+|p|\end{bmatrix},$ (59)
which has the following integral,
$\displaystyle\mathcal{K}(q,p)$ $\displaystyle=$
$\displaystyle\frac{1}{8}\Bigg{(}q+p+\big{|}q-|p|\big{|}+\big{|}p-|q|\big{|}+$
(60) $\displaystyle
2\,\Big{|}q-\big{|}p-|q|\big{|}\Big{|}+2\,\Big{|}p-\big{|}q-|p|\big{|}\Big{|}+$
$\displaystyle\bigg{|}q-|p|+\Big{|}p-\big{|}q-|p|\big{|}\Big{|}\bigg{|}+$
$\displaystyle\bigg{|}p-|q|+\Big{|}q-\big{|}p-|q|\big{|}\Big{|}\bigg{|}\Bigg{)}.$
The map has only one stable fixed point, located at the origin, with
$\mathcal{K}=0$. Constant level sets of $\mathcal{K}>0$ are polygons,
geometrically similar to each other, with 9 sides, labeled by Roman numerals,
see Fig. 6.a. All orbits belonging to these levels are periodic with
$\mathrm{M}_{\mathrm{B}}^{9}(q,p)=(q,p),$ (61)
and in fact, they are permutation 9-cycles such that
$\ldots\rightarrow\text{I}\rightarrow\text{III}\rightarrow\text{V}\rightarrow\text{VII}\rightarrow\text{IX}\rightarrow\text{II}\rightarrow\text{IV}\rightarrow\text{VI}\rightarrow\text{VIII}\rightarrow\text{I}\rightarrow\ldots$
Figure 6: Brown map. (a.) Constant level sets of the invariant,
$\mathcal{K}(q,p)=\mathrm{const}$ (black solid polygons). Dashed black line
$p=q$ and blue line $p=\frac{1}{2}\,|q|$ illustrate two reflection symmetries
of the invariant polygons. Line segments are labeled with Roman numerals.
Green points are an example of a 9-cycle orbit, where the Arabic numerals show
the iteration number. (b.) An example of a possible contour of integration for
the numerator and denominator in the Danilov theorem.
Since it is a linear map ($\nu=\mathrm{const}$ for all orbits), we will use
Eq. (17) to determine its rotation number. It is obvious from Fig. 6.b that
$J=4.5\,\alpha$, while $J^{\prime}=1\,\alpha$, where $\alpha$ is some
arbitrary scale parameter, resulting in $\nu=\frac{2}{9}$.
## Appendix B Numerical procedure for Danilov theorem
In this appendix we will consider two numerical procedures, which can be
employed in order to use Eq. (12) for mappings in McMillan form when only the
mapping equation is known or when we have an approximate (or an exact)
invariant of the motion but cannot compute the action integrals analytically.
We will start with the case when we have only the mapping equations. As a
first step we will rewrite the map in polar coordinates
$q=r\,\cos\phi,\qquad\qquad p=r\,\sin\phi.$
Then we will iterate the map for various initial conditions
$q_{\text{ini}}^{(k)}$, say of the form
$q_{\text{ini}}^{(k)}=q_{0}^{(k)}=p_{0}^{(k)}$, so that we have a collection
of points of the form
$(r_{0}^{(k)},\phi_{0}^{(k)}),(r_{1}^{(k)},\phi_{1}^{(k)}),(r_{2}^{(k)},\phi_{2}^{(k)}),\ldots,(r_{n}^{(k)},\phi_{n}^{(k)}).$
We can then sort each orbit such that
$\widetilde{\phi}_{0}^{(k)}<\widetilde{\phi}_{1}^{(k)}<\widetilde{\phi}_{2}^{(k)}<\ldots<\widetilde{\phi}_{n}^{(k)},$
where $(\widetilde{r}_{i}^{(k)},\widetilde{\phi}_{i}^{(k)})$ are the points of
a new sorted $k$-th orbit. Now, for each orbit we can compute the action and
the partial action numerically as
$J^{(k)}=\frac{1}{2\,\pi}\,\sum_{i=0}^{n}\frac{\left(\widetilde{r}_{i}^{(k)}\right)^{2}}{2}\,\left[\widetilde{\phi}_{i}^{(k)}-\widetilde{\phi}_{i-1}^{(k)}\right]$
(62)
and
$J^{\prime(k)}=\frac{1}{2\,\pi}\,\sum_{\pi/2<\widetilde{\phi}_{i}^{(k)}<\pi}\frac{\left(\widetilde{r}_{i}^{(k)}\right)^{2}}{2}\,\left[\widetilde{\phi}_{i}^{(k)}-\widetilde{\phi}_{i-1}^{(k)}\right]$
(63)
respectively. Finally, using the Danilov theorem, we can find the rotation
number as a numerical derivative
$\nu^{(k)}=\frac{J^{\prime(k+1)}-J^{\prime(k)}}{J^{(k+1)}-J^{(k)}}.$ (64)
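A minimal Python sketch of this tracking-based procedure (our illustration,
not part of the original text; `map_step` stands for one iteration of the map
under study) might read:

```python
import numpy as np

def actions(map_step, q0, n=4000):
    # Iterate from (q0, q0) and collect polar-coordinate samples
    q = p = q0
    pts = []
    for _ in range(n):
        q, p = map_step(q, p)
        pts.append((np.hypot(q, p), np.arctan2(p, q) % (2*np.pi)))
    pts.sort(key=lambda rp: rp[1])                  # sort the orbit by angle
    r = np.array([rp[0] for rp in pts])
    phi = np.array([rp[1] for rp in pts])
    dphi = np.diff(phi, prepend=phi[-1] - 2*np.pi)  # wrap-around first step
    J = np.sum(r**2/2*dphi)/(2*np.pi)               # Eq. (62)
    mask = (np.pi/2 < phi) & (phi < np.pi)          # II-quadrant sector
    Jp = np.sum(r[mask]**2/2*dphi[mask])/(2*np.pi)  # Eq. (63)
    return J, Jp

def rotation_numbers(map_step, q0s):
    # Finite-difference derivative dJ'/dJ across neighboring orbits, Eq. (64)
    J, Jp = np.transpose([actions(map_step, q0) for q0 in q0s])
    return np.diff(Jp)/np.diff(J)
```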
To apply the Danilov theorem directly to an approximate or exact invariant of
the motion, we can proceed in a similar manner. First, we
rewrite the invariant of motion in polar coordinates,
$\mathcal{K}_{\text{approx}}(r,\phi)$. Then, for different values
$\mathcal{K}_{\text{approx}}^{(k)}$ we will numerically solve $n$ equations
$\mathcal{K}_{\text{approx}}(r,\phi^{(k)}_{i})=\mathcal{K}_{\text{approx}}^{(k)}$
(65)
with $\phi^{(k)}_{i}=2\,\pi\,i/n$ and $i=0,1,\ldots,n-1$. Denoting the
smallest positive root of the equation above as $r^{(k)}_{i}$, we can find
the action and partial action as
$J^{(k)}=\frac{1}{2\,\pi}\,\sum_{i=0}^{n-1}\frac{\left(r_{i}^{(k)}\right)^{2}}{2}\,\left[\phi_{i}^{(k)}-\phi_{i-1}^{(k)}\right]$
(66)
and
$J^{\prime(k)}=\frac{1}{2\,\pi}\,\sum_{\pi/2<\phi_{i}^{(k)}<\pi}\frac{\left(r_{i}^{(k)}\right)^{2}}{2}\,\left[\phi_{i}^{(k)}-\phi_{i-1}^{(k)}\right],$
(67)
along with the rotation number
$\nu^{(k)}=\frac{J^{\prime(k+1)}-J^{\prime(k)}}{J^{(k+1)}-J^{(k)}}.$ (68)
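Continuing from the previous sketch (with numpy already imported), the
invariant-based procedure can be sketched as follows; this is again our
illustration, and it assumes $\mathcal{K}_{\text{approx}}(r,\phi)$ grows
monotonically in $r$ on the chosen bracket, so that the root found by brentq
is the smallest positive one.

```python
from scipy.optimize import brentq

def actions_from_invariant(K_polar, K_level, n=512, r_max=10.0):
    # Solve K(r, phi_i) = K_level for r_i on a uniform angular grid, Eq. (65)
    phi = 2*np.pi*np.arange(n)/n
    r = np.array([brentq(lambda rr: K_polar(rr, ph) - K_level, 1e-12, r_max)
                  for ph in phi])
    dphi = 2*np.pi/n                                # uniform grid spacing
    J = np.sum(r**2/2)*dphi/(2*np.pi)               # Eq. (66)
    mask = (np.pi/2 < phi) & (phi < np.pi)
    Jp = np.sum(r[mask]**2/2)*dphi/(2*np.pi)        # Eq. (67)
    return J, Jp
```

The rotation number then follows from the same finite difference as in
Eq. (68), applied to a ladder of values $\mathcal{K}_{\text{approx}}^{(k)}$.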
## References
* Kerst and Serber (1941) D. W. Kerst and R. Serber, Phys. Rev. 60, 53 (1941).
* Courant and Snyder (1958) E. D. Courant and H. S. Snyder, Annals of physics 3, 1 (1958).
* Poincaré (1885) H. Poincaré, Journal de mathématiques pures et appliquées 4e série 1, 167 (1885).
* Todesco (1998) E. Todesco, in _Analysis and Modelling of Discrete Dynamical Systems_ , edited by D. Benest and C. Froeschle (Taylor & Francis, 1998).
* Papaphilippou (2014) Y. Papaphilippou, Chaos: An Interdisciplinary Journal of Nonlinear Science 24, 024412 (2014).
* Dumas and Laskar (1993) H. S. Dumas and J. Laskar, Phys. Rev. Lett. 70, 2975 (1993).
* Bazzani _et al._ (1994) A. Bazzani, E. Todesco, G. Turchetti, and G. Servizi, Reports- CERN (1994).
* Turchetti and Panichi (2019) G. Turchetti and F. Panichi, “Birkhoff normal forms and stability indicators for betatronic motion,” in _Nonlinear Dynamics and Collective Effects in Particle Beam Physics_ (World Scientific, 2019) pp. 47–69.
* McMillan (1971) E. M. McMillan, in _Topics in modern physics, a tribute to E.V. Condon_ , edited by E. Brittin and H. Odabasi (Colorado Assoc. Univ. Press, Boulder, CO, 1971) pp. 219–244.
* Chow and Cary (1994) C. C. Chow and J. R. Cary, Phys. Rev. Lett. 72, 1196 (1994).
* Danilov (2008) V. Danilov, Phys. Rev. ST Accel. Beams 11, 114001 (2008).
* Danilov and Nagaitsev (2010) V. Danilov and S. Nagaitsev, Phys. Rev. ST Accel. Beams 13, 084002 (2010).
* Danilov and Nagaitsev (2014) V. Danilov and S. Nagaitsev, Phys. Rev. ST Accel. Beams 17, 124402 (2014).
* Lichtenberg and Lieberman (1992) A. Lichtenberg and M. Lieberman, _Regular and Chaotic Dynamics_ (Springer-Verlag New York, 1992).
* Dragt (2013) A. Dragt, in _Handbook of Accelerator Physics and Engineering_ , edited by A. Chao, K. Mess, M. Tigner, and F. Zimmermann (World Scientific, 2013) pp. 99–105.
* Dilão and Alves-Pires (1996) R. Dilão and R. Alves-Pires, _Nonlinear Dynamics in Particle Accelerators_ (World Scientific Publishing, 1996).
* Arnold and Avez (1968) V. Arnold and A. Avez, _Ergodic Problems of Classical Mechanics. (The Mathematical Physics Monograph Series)_ (W. A. Benjamin, Inc., New York/Amsterdam, 1968).
* Veselov (1991) A. P. Veselov, Russian Mathematical Surveys 46, 1 (1991).
* Meiss (1992) J. D. Meiss, Rev. Mod. Phys. 64, 795 (1992).
* Danilov (deceased) V. Danilov (deceased), proposed this theorem in private communications (2013-02-03).
* Iatrou and Roberts (2002) A. Iatrou and J. A. G. Roberts, Nonlinearity 15, 459 (2002).
* Hénon (1969) M. Hénon, Quart. Appl. Math. 27, 291 (1969).
* Dullin and Meiss (2000) H. Dullin and J. Meiss, Physica D Nonlinear Phenomena 143, 262 (2000).
* Shiltsev _et al._ (1999) V. Shiltsev, V. Danilov, D. Finley, and A. Sery, Phys. Rev. ST Accel. Beams 2, 071001 (1999).
* Shiltsev _et al._ (2008) V. Shiltsev, K. Bishofberger, V. Kamerdzhiev, S. Kozub, M. Kufer, G. Kuznetsov, A. Martinez, M. Olson, H. Pfeffer, G. Saewert, V. Scarpine, A. Seryi, N. Solyak, V. Sytnik, M. Tiunov, L. Tkachenko, D. Wildman, D. Wolff, and X.-L. Zhang, Phys. Rev. ST Accel. Beams 11, 103501 (2008).
* Lobach _et al._ (2018) I. Lobach, S. Nagaitsev, E. Stern, and T. Zolkin, in _Proc. 9th International Particle Accelerator Conference (IPAC’18), Vancouver, BC, Canada, April 29-May 4, 2018_ , International Particle Accelerator Conference No. 9 (JACoW Publishing, Geneva, Switzerland, 2018) pp. 3143–3145.
* Antipov _et al._ (2017) S. Antipov, D. Broemmelsiek, D. Bruhwiler, D. Edstrom, E. Harms, V. Lebedev, J. Leibfritz, S. Nagaitsev, C. Park, H. Piekarz, P. Piot, E. Prebys, A. Romanov, J. Ruan, T. Sen, G. Stancari, C. Thangaraj, R. Thurman-Keup, A. Valishev, and V. Shiltsev, Journal of Instrumentation 12, T03002 (2017).
* Brown (1983) M. Brown, The American Mathematical Monthly 90, 569 (1983).
* Brown (1993) M. Brown, in _Continuum Theory and Dynamical Systems (Lecture notes in pure and applied mathematics)_ , Vol. 149, edited by T. West (CRC Press, 1993) pp. 83–87.
# Towards Clinical Encounter Summarization:
Learning to Compose Discharge Summaries from Prior Notes
Han-Chin Shing$\dagger$, Chaitanya Shivade$\ddagger$, Nima
Pourdamghani$\ddagger$, Feng Nan$\ddagger$,
Philip Resnik$\dagger$, Douglas Oard$\dagger$, Parminder Bhatia$\ddagger$
$\dagger$University of Maryland, $\ddagger$Amazon Web Services AI
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
###### Abstract
The records of a clinical encounter can be extensive and complex, thus placing
a premium on tools that can extract and summarize relevant information. This
paper introduces the task of generating discharge summaries for a clinical
encounter. Summaries in this setting need to be faithful, traceable, and scale
to multiple long documents, motivating the use of extract-then-abstract
summarization cascades. We introduce two new measures, faithfulness and
hallucination rate, for evaluation in this task, which complement existing
measures for fluency and informativeness. Results across seven medical
sections and five models show that a summarization architecture that supports
traceability yields promising results, and that a sentence-rewriting approach
performs consistently on the measure used for faithfulness (faithfulness-
adjusted $F_{3}$) over a diverse range of generated sections.
## 1 Introduction
Figure 1: A medical encounter is an interaction between a patient and a
healthcare provider.
Clinical notes in the electronic health record (EHR) are used to document the
patient’s progress and interaction with clinical professionals. These notes
contain rich and diverse information, including but not limited to admission
notes, nursing notes, radiology notes, and physician notes. The information
the clinicians need, however, is often buried in the sheer amount of text, as
the number of clinical notes in an encounter can be in the hundreds. Finding
the information can be time-consuming, taking up time that is already in short
supply for clinicians attending to patients (Weiner and Biondich, 2006;
Sinsky et al., 2016), and can even contribute to the worsening physician
burnout crisis (Tawfik et al., 2018; West et al., 2018).
Summarization has the potential to help clinicians make sense of these
clinical notes. In this paper, we aim to make progress toward summarizing one
of the most common information sources clinicians interact with — the
patient’s clinical encounter. A clinical encounter (Figure 1) documents an
interaction between a patient and a healthcare provider (e.g., a visit to the
hospital), including structured and unstructured data. Our work focuses on the
unstructured clinical notes.
A natural target for summarization is the discharge summary: a specialized
clinical note meant to be a summary of the clinical encounter, typically
written at the time of patient discharge. Each section (e.g., past medical
history, brief hospital course, medications on admission) in the discharge
summary represents a different aspect of the encounter. By building a system
to extract and compose these medical sections from prior clinical notes in the
same encounter, we can summarize the information in a format clinicians are
already trained to read and understand.
There are significant challenges ahead, however. In this work, we identify
three main challenges of summarizing a clinical encounter: (1) an _evidence-
based fallback_ that allows traceable inspection, (2) the _faithfulness_ of
the summary, and (3) the _long text_ in a clinical encounter. We believe that
all three challenges need to be properly addressed before a discussion about
deployment can happen. Thus, this work focuses on measuring and understanding
how existing state-of-the-art summarization systems perform on these
challenges. Additionally, we propose an extractive-abstractive summarization
pipeline that directly addresses the _evidence-based fallback_ challenge and
the _long text_ challenge. For the third challenge, _faithfulness_ , we
introduce an evaluation measure that uses a medical NER system, inspired by
recent work on faithfulness in summarization (Maynez et al., 2020; Zhang et
al., 2020).
### Contributions
* •
We identify three challenges for summarizing clinical encounters: (1)
evidence, (2) faithfulness, and (3) long text.
* •
We introduce the task of discharge summary composition from prior clinical
notes.
* •
We evaluate our proposed extractive-abstractive pipeline for multi-document
summarization with medical NER-based scores and ROUGE across seven discharge
summary sections.
* •
We create a collection derived from a public database (MIMIC III), a potential
benchmark for clinical multi-document summarization.
## 2 Evidence, Faithfulness, and Long Text
In this section, we identify the three main challenges in discharge summary
composition.
### Evidence.
A summary should be displayed with means for the clinician to inspect and
understand where the information comes from. In this respect, extractive
summarization has a clear advantage over abstractive summarization, as the
source of extracted content can be easily traced and displayed in context.
However, abstractive summarization does benefit from a more fluent generation
and the potential to function as a writing aid to alleviate the clinicians’
documentation burden. The challenge lies in how to design the system such that
_evidence_ can be traced.
### Faithfulness.
As with any model supporting clinical decision making, measuring and
understanding the faithfulness of the model output is important. As
abstractive summarization systems are evaluated by their ability to generate
fluent output, faithfulness can be a challenge to these models. Addressing
this problem is an active area of research (Maynez et al., 2020; Zhang et al.,
2020).
### Long text.
When summarizing an encounter (a sequence of documents), the quantity of text
available can easily exceed the memory limit of the model.
limitation is especially challenging for modern transformer-based
architectures that typically require large GPU-memory. Tokens that do not fit
in memory can contain relevant clinical information for summarization.
Attempting to train an abstractive model to generate a summary without the
source information available can encourage the model to hallucinate at test
time; a dangerous outcome in the context of clinical summarization.
## 3 Extract and then Abstract
Figure 2: An extractive-abstractive summarization pipeline. The
recall-oriented extractor extracts relevant sentences from prior documents;
the abstractor smooths out irrelevant or duplicated information.
These challenges are common in summarization. In particular, one of the main
challenges in multi-document abstractive summarization is to summarize a large
number of documents. While significant progress has been made to scale the
abstractive models (Beltagy et al., 2020; Zaheer et al., 2020), recent work
still involves using an extractive model (e.g., tf-idf based cosine similarity
(Liu et al., 2018), logistic regression Liu and Lapata (2019a)) to limit the
number of paragraphs before abstraction.
Here we propose a similar extractive-abstractive pipeline. However, what is
different in a clinical context is that we wish to place more weight on the
extractor rather than rely on the abstractor to summarize a large amount of
text.
This decision is motivated by the fact that extractive models are inherently
better at being faithful to the source, as they do not introduce novel
information. This characteristic makes them ideal candidates for clinical
summarization.
Our proposed extractor-abstractor pipeline involves two stages (Figure 2). The
first stage functions as a recall-oriented extractive summarization system to
extract relevant sentences from prior documents. The extracted sentences are
then passed through post-processing steps that remove duplicated sentences and
arrange them to form an extractive summary. The second stage is an abstractive
summarization system that takes the extractive summary from the previous step
and smooths out irrelevant or duplicated information. We describe the
implementation details and how to scale this pipeline to very long text in
Section 7.
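As a schematic illustration of the two stages (ours, with `extractor` and
`abstractor` standing in for any of the concrete models of Section 7):

```python
def summarize_encounter(notes, extractor, abstractor):
    # Stage 1: recall-oriented extraction over every prior clinical note
    extracted = [sent for note in notes for sent in extractor(note)]
    # Post-processing: drop duplicated sentences, preserving order
    seen = set()
    extractive_summary = [s for s in extracted
                          if not (s in seen or seen.add(s))]
    # Stage 2: the abstractor smooths out irrelevant or leftover content;
    # the extractive summary is returned as the evidence-based fallback
    return abstractor(" ".join(extractive_summary)), extractive_summary
```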
Another advantage of this pipeline is that it provides a clear path of
_evidence-based fallbacks_ (Figure 2). Notably, both the extractor and the
abstractor in the extractor-abstractor pipeline are capable of producing full
summaries. Clinicians can reference the extractive summary if they find the
abstractive summary problematic or if the abstractor model has low confidence.
The extractive summary also has another level of fallback. The extracted
sentences came from the source documents, so we can also display the extracted
sentences in context or even use the extractor as a highlighter.
## 4 Measuring Faithfulness
Figure 3: Relationship between source documents, reference summary, and
system-generated summary.
Following prior work, we report ROUGE-n ($n=\{1,2\}$) to measure n-gram
overlap as a proxy for informativeness, and ROUGE-L (longest common
subsequence, with possible gaps) as a proxy for fluency (Lin and Hovy, 2003;
Maynez et al., 2020). However, as Schluter (2017) and Cohan and Goharian
(2016) have argued, ROUGE alone is insufficient and possibly misleading for
measuring informativeness, specifically when it comes to faithfulness and
factualness.
In a summarization setting, a faithful summary refers to a summary that does
not contain information from outside of the source. On the other hand, a
factual summary allows information not presented in the source, as long as the
information is factually correct. In the setting of clinical summarization, we
argue that faithfulness is far more important. Novel information appearing in
a summary that has no support from the source, whether factual or not, can
affect the transparency of the model.
A downside of this definition of faithfulness, however, is that it does not
take reference summaries into account. Any extracted sentences (e.g., the
first three sentences) from the source are always faithful by definition. Such
extraction, however, might not be a summary relevant to this task. Figure 3
helps us illustrate the relationship between source documents, reference
summary, and system-generated summary using a Venn diagram. (Here we are
showing a single reference summary, but in reality, the reference summary
available is just one possible manifestation of all possible, potentially
equally valid summaries (Nenkova and Passonneau, 2004). Our discussion can be
extended to multiple reference summaries by treating each one independently in
the calculations and averaging them to report the final scores.)
A desirable summary, especially in a clinical setting, is faithful to the
source and relevant as measured by the reference summary. In Figure 3, this
region corresponds to $B+C$, the ideal set of information a clinical
summarization system should target. Based on this observation, we define
_Faithfulness-adjusted Precision_ as $\frac{C}{System}$ and _Faithfulness-
adjusted Recall_ as $\frac{C}{B+C}$. Intuitively, faithfulness-adjusted
precision measures how much information in the system-generated summary is
both relevant and faithful. Similarly, faithfulness-adjusted recall measures
the amount of faithful and relevant information that has been included by the
system. In a clinical setting, recall is often more important than precision;
it is better to over-extract and have clinicians ignore or remove the
irrelevant content than have missing content. While our extractive-abstractive
pipeline provides a series of _fallbacks_ that allows clinicians to inspect
what could be missing by looking at the context of the extracted sentences,
under-extraction can still happen. We therefore report a recall-oriented
measure to combine the two measures above: _Faithfulness-adjusted $F_{\beta}$_,
where we set $\beta=3$. In this setting, faithfulness-adjusted recall is three
times more important than faithfulness-adjusted precision (Van Rijsbergen,
1979). (We plan to explore the values of $\beta$ in consultation with
clinicians in future work.)
Hallucination is perhaps the leading concern of applying abstractive
summarization system in a clinical setting. If one defines hallucination as a
system generating content that is not faithful to the source, we can identify
hallucination as the region $F+G$. $G$, the information that is present in
neither the source nor the reference, is particularly problematic. We
therefore measure _Incorrect Hallucination Rate_ as $\frac{G}{System}$.
However, an important underlying assumption of these measures is that the
regions in Figure 3 are quantifiable. While there are many ways to approximate
these regions, as a starting point, we use the medical named entity
recognition (NER) system in SciSpacy (Neumann et al., 2019). The SciSpacy NER
matches any spans in the text which might be an entity in UMLS, a large
biomedical database, and transforms the text into a set of medical entities.
The cardinalities of the sets and their overlaps can then be used to calculate
the above measures.
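A sketch of these computations (our illustration; it assumes scispacy and its
en_core_sci_sm model are installed, and it approximates each region in
Figure 3 by a set of lower-cased entity mentions):

```python
import spacy

nlp = spacy.load("en_core_sci_sm")  # scispacy medical NER model

def entities(text):
    # Approximate a region of Figure 3 by its set of entity mentions
    return {ent.text.lower() for ent in nlp(text).ents}

def faithfulness_measures(source, reference, system, beta=3.0):
    src, ref, out = entities(source), entities(reference), entities(system)
    bc = src & ref              # region B + C: faithful and relevant
    c = out & bc                # region C: captured by the system
    g = out - src - ref         # region G: incorrect hallucination
    p = len(c)/len(out) if out else 0.0
    r = len(c)/len(bc) if bc else 0.0
    b2 = beta**2
    f = (1 + b2)*p*r/(b2*p + r) if p + r else 0.0
    return {"FaP": p, "FaR": r, "FaF3": f,
            "IncorrectHallucination": len(g)/len(out) if out else 0.0}
```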
## 5 Related Work
### Clinical Summarization.
Most literature on clinical summarization focuses on extractive summarization,
due to the risk involved in a clinical application (Demner-Fushman and Lin,
2006; Feblowitz et al., 2011; Liang et al., 2019; Moen et al., 2016). For
abstractive summarization, summarization of radiology reports has been a topic
of interest in NLP research recently. Zhang et al. (2018) show promising
results generating assessment section of a chest x-ray radiology report from
the findings and background section. MacAvaney et al. (2019) improved this
model through the incorporation of domain-specific ontologies. However, such
generated reports may not be clinically sound, and the models generate
sentences inconsistent with the patient’s background. Therefore, in subsequent
work Zhang et al. (2020) added a reinforcement learning based fact-checking
mechanism to generate a clinically consistent assessment. Lee (2018) explores
the generation of the Chief Complaint of emergency department cases from age
group, gender, and discharge diagnosis code. Ive et al. (2020) follow a
closely related approach of extracting keyphrases from mental health records
to generate synthetic notes. They further evaluate the quality of generated
synthetic data for downstream tasks. Work from Lee (2018) generates clinical
notes by conditioning transformer-based models on a limited window of past
patient data.
In our work, instead of focusing on purely extractive or abstractive clinical
summarization, we propose an extractive-abstractive pipeline as a framework
for clinical multi-document summarization.
### Faithfulness in Summarization.
Recognizing the limitation of the existing measures and the danger of
hallucination in summarization systems, faithfulness in summarization has
gained attention recently (Kryscinski et al., 2020; Cao et al., 2017). Recent
work on faithfulness evaluation in summarization involves using textual
entailment (Maynez et al., 2020) or question answer generation (Arumae and
Liu, 2019; Wang et al., 2020). For radiology summarization, Zhang et al.
(2020) proposed using a radiology information extraction system to extract a
pre-defined set of 14 pieces of factual information tailored to radiology
reports.
In this paper, we approximate information overlap using the overlap of medical
named entities. We argue that the domain of clinical encounter summarization
is very different from the domains of most textual entailment tasks or
question answer generation tasks. It is often much more specific, allowing us
to apply the medical NER model. However, it is not as specific as the
radiology summarization task, where a set of pre-defined information can more
easily be identified.
## 6 Dataset
| Dataset | Input | Output | # Data |
|---|---|---|---|
| Gigaword | $10^{1}$ | $10^{1}$ | $10^{6}$ |
| CNN/DailyMail | $10^{2}$–$10^{3}$ | $10^{1}$ | $10^{5}$ |
| WikiSum | $10^{2}$–$10^{6}$ | $10^{1}$–$10^{3}$ | $10^{6}$ |
| Our Dataset | $10^{4}$–$10^{5}$ | *$10^{0}$–$10^{3}$ | $10^{3}$ |
Table 1: Size comparison of summarization datasets. *For stats of the output
sections, see Table 2.
We derive our dataset from the MIMIC III database v1.4 (Johnson et al., 2016):
a freely accessible, English-language, critical care database consisting of a
set of de-identified, comprehensive clinical data of patients admitted to the
Beth Israel Deaconess Medical Center’s Intensive Care Unit (ICU). The database
includes structured data such as medications and laboratory results and
unstructured data such as clinical notes written by medical professionals. For
this work, we will focus on the unstructured data.
The challenge for adapting the MIMIC III database for our purpose, however, is
that MIMIC III is incomplete. Due to the way that MIMIC III was collected, not
all clinical notes are available; only notes from ICU, radiology, echo, ECG,
and discharge summary (Johnson and Shivade, 2020) are guaranteed to be
available. It is important to note that the incompleteness is not a property
of the problem we are trying to address; it is a property of that database. We
limit the incompleteness issue by focusing on the subset of encounters that
contain at least one admission note (a clinical note written at the time of
admission) as a proxy for completeness. This leaves us with about 10% of the
total encounters, or around 6,000 encounters.
We identify seven medical sections in the discharge summary as our targets for
summarization: (1) chief complaint, (2) family history, (3) social history,
(4) medications on admission, (5) past medical history, (6) history of present
illness, and (7) brief hospital course. These medical sections were chosen
based on their high prevalence in discharge summaries and their length
diversity (see Table 2).
### Target Section Extraction.
To extract the target medical sections from the discharge summary, we use a
regular expression based approach to identify the medical section headers’
variants from the training set. We then collect the content from the target
medical section header and stop right before the next section header in the
discharge summary. Around one hundred randomly selected extracted medical
sections are manually examined to ensure no missing content or over-
extraction. For each of these target medical sections, we then collect all the
prior clinical notes (according to the chart date timestamp in MIMIC III) as
their source documents. On average, the source documents consist of 64
documents and 36,3567 words. Table 1 shows a comparison with other dataset.
After the rule-based target extraction, we split the 6,000 encounters based on
the _subject id_ to prevent data leakage. Each section is split into training,
validation, and test set (80/10/10) using the same set of subject ids. If the
rule-based target extraction returns nothing, the encounter is excluded. See
Table 2 for sample-size statistics.
## 7 Models and Experiments
As explained in Section 3, our proposed pipeline involves an extractive
summarization component and an abstractive summarization component. This
section identifies a set of existing extractors and abstractors across a
diverse range of approaches to understand which models are suitable for
encounter-level clinical summarization. To understand the robustness of these
approaches, we train and test these models across seven medical sections with
a diverse range of lengths.
### Extractors.
Since our goal is to summarize an encounter conditioned on a targeted medical
section, we focus our attention on supervised extractors. Supervised
extractive summarization is often framed as a sentence extraction problem.
Each sentence is encoded into a representation used to determine whether the
sentence should be included in the extracted summary. RNN or transformer-based
attention are often used to encode the surrounding sentences as context.
RNN+RLext: Chen and Bansal (2018) proposed using reinforcement learning to
fine-tune a pretrained RNN sentence extractor, cast as a pointer network
operating over sentences. By modeling the next sentence to extract (including
the extra “end-of-extraction” sentence) as the action space, the current
extracted sentences as the state space, and by using ROUGE between reference
summary sentence and rewritten extracted sentence (rewritten by a separate
pretrained abstractor) as the reward, the authors repurposed the sentence
extractor to extract sentences from the source documents and reorder them as
they might appear in the summary.
Presummext: Liu and Lapata (2019b) proposed Presumm, a family of summarization
models. Here we are especially interested in the extractive summarization
variant that uses a modified pretrained BERT model to encode sentences to
determine whether the sentence should be included in the extracted summary.
While the model has been shown to achieve competitive results, applying a BERT
encoder to very long text can be challenging in terms of memory limitations.
Thus, we apply a split-map-reduce framework, where the long text is split into
smaller units during training and inference. After inference, each smaller
unit’s extracted sentences are then concatenated back together in the same
order as they appeared in the original source. Since the model only assigns
scores to sentences, we sweep the score cutoff threshold on the validation
set using the ROUGE-L score and apply that cutoff on the test set.
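A sketch of the split-map-reduce step and the threshold sweep (ours;
`score_sentences` stands in for the Presumm sentence scorer and `rouge_l` for
any ROUGE-L implementation):

```python
import numpy as np

def split_map_reduce(score_sentences, sentences, max_sents=128):
    # Score the long input chunk by chunk so that the BERT encoder fits
    # in memory, then reassemble the scores in the original order.
    scores = []
    for i in range(0, len(sentences), max_sents):
        scores.extend(score_sentences(sentences[i:i + max_sents]))
    return np.asarray(scores)

def sweep_cutoff(val_examples, rouge_l, grid=np.linspace(0, 1, 101)):
    # val_examples: (sentences, scores, reference) triples; pick the
    # score cutoff that maximizes validation ROUGE-L.
    def mean_rouge(t):
        return np.mean([rouge_l(" ".join(s for s, sc in zip(sents, scores)
                                         if sc >= t), ref)
                        for sents, scores, ref in val_examples])
    return max(grid, key=mean_rouge)
```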
### Abstractors.
In our extractive-abstractive pipeline, abstractors play a role in mapping the
extracted sentences to the reference summary. Here we include two abstractor
variants. (We also experimented with a pointer-generator See et al. (2017),
but we found that BART consistently outperforms the pointer-generator, so we
leave those results to the appendix.)
RNN+RLabs: Similar to RNN+RLext, except that after each sentence is extracted,
it is immediately rewritten by a pretrained sentence-level abstractor. The
goal is to rewrite each extracted sentence into the format of what might
appear in the reference summary. This sentence-rewriting approach
has the disadvantage of only having a local view when rewriting (thus no
merging of information). However, the advantage is that the memory requirement
of sentence-level rewriting does not grow with the number of sentences, so it
can be applied to longer summaries.
BART: Lewis et al. (2019) propose BART as a transformer variant that uses a
bidirectional encoder similar to BERT and an autoregressive (left to right)
decoder similar to GPT. The model has competitive performance for
summarization, and thus is our choice for transformer-based abstractor. In
contrast to the sentence-rewriting approach of RNN+RLabs, we train BART to
rewrite all the extracted sentences directly to the summary.
### Baselines.
Since clinical encounter summarization is a new task, there are no baselines
from prior work. Following prior work on summarization, we include two special
baselines: (1) Oracleext: Extraction by using the reference summary; for each
sentence in the reference summary, greedily select the source sentence in the
source document that yields the maximum ROUGE-L score. (2) Rule-basedext:
apply the same rule-based target section extraction method in in Section 6
that was used to construct the dataset. Instead of applying to the discharge
summary, we apply the same extraction method to the prior clinical documents.
### Evaluating the extractor-abstractor pipeline.
For the two extractive models, RNN+RLext and Presummext, as well as the two
extraction baselines, we report ROUGE scores as well as our proposed
faithfulness-adjusted {precision/recall/F3} scores across the seven medical
sections.
For the abstractive models, we measure the combinations of abstractive models
with extractive models in our proposed pipeline. This implies measuring the
performance of three models (two pointer-generator models are shown in the
Appendix): RNN+RLabs (which uses RNN+RLext as the extractor), RNN+RLext +
BART, and Presummext + BART. For the abstractors, we additionally measure the
_incorrect hallucination rate_ defined in Section 4.
## 8 Results and Discussion
(a) ROUGE-L of extractors vs. word count.
(b) ROUGE-L of abstractors vs. word count.
Figure 4: ROUGE-L of summarization models vs. average word lengths of the
medical sections. Sections (dotted vertical lines) from short to long: (A)
Chief complaint, (B) Family history, (C) Social history, (D) Medications on
admission, (E) Past medical history, (F) History of present illness, and (G)
Brief hospital course.
|  | Chief Complaint | Family History | Social History | Medications on Admission | Past Medical History | History of Present Illness | Brief Hospital Course |
|---|---|---|---|---|---|---|---|
| train / val / test | 4,757/559/625 | 4,686/555/614 | 4,677/552/618 | 4,689/557/616 | 4,746/558/623 | 4,754/559/625 | 4,758/558/625 |
| Output # words | 7.25 | 17.03 | 44.90 | 69.58 | 75.36 | 274.88 | 491.97 |
| Output # sents | 2.04 | 2.63 | 4.93 | 4.67 | 5.99 | 16.62 | 35.39 |
| Oracleext | 71.1/85.2/83.6 | 52.8/75.4/72.3 | 63.4/73.3/72.2 | 69.7/66.5/66.8 | 74.2/80.8/80.1 | 76.6/83.9/83.1 | 44.7/51.5/50.7 |
| Rule-basedext | 97.4/49.7/52.2 | 87.6/47.3/49.6 | 94.7/23.1/25.0 | 97.2/32.8/35.2 | 94.9/16.9/18.4 | 70.8/08.6/09.5 | 00.3/00.9/00.7 |
| Presummext | 10.8/24.1/21.4 | 30.7/63.1/57.1 | 42.6/40.6/40.8 | 48.7/52.0/51.7 | 51.2/66.6/64.7 | 54.4/74.5/71.9 | 26.5/47.7/44.2 |
| RNN+RLext | 44.2/72.8/68.4 | 54.5/70.6/68.6 | 43.2/71.0/66.7 | 45.7/67.2/64.2 | 43.6/81.7/75.1 | 27.6/88.8/72.7 | 15.3/69.7/51.4 |
| Presummext + BART | 45.5/63.6/61.2 | 46.1/70.2/66.7 | 60.0/66.0/65.3 | 67.1/77.7/76.5 | 69.7/73.3/72.9 | 68.0/64.5/64.8 | 37.4/26.8/27.6 |
| RNN+RLext + BART | 48.6/70.4/67.4 | 44.7/74.2/69.6 | 61.2/66.7/66.1 | 67.0/80.2/78.7 | 70.0/74.6/74.2 | 67.4/64.7/64.9 | 34.1/23.6/24.4 |
| RNN+RLabs | 67.8/69.1/69.0 | 75.8/73.0/73.3 | 60.1/68.2/67.3 | 70.9/69.0/69.2 | 64.7/68.8/68.3 | 40.8/82.2/74.6 | 20.4/52.9/45.6 |

Table 2: Dataset statistics and faithfulness-adjusted
$\{Precision/Recall/F_{3}\}$ scores based on medical NER.

Figure 5: NER-based incorrect hallucination rate of abstractive models vs.
average word lengths. Extractors do not hallucinate. Section order is the
same as in Figure 4.
| System | Summary |
|---|---|
| ground_truth | past medical history : # hypertension # hyperlipidemia # gerd # ckd with baseline cr 1.3 # stable angina on long acting nitrate |
| Presummext | # hypertension # hyperlipidemia # gerd # ckd with baseline cr 1.3 nc occupation : changes to medical and family history : |
| RNN+RLext | # simvastatin 20 mg once a day # isosorbide mononitrate 40 mg once a day # furosemide 40 mg once a day # pantoprazole 40 mg once a day # diltiazem xr 180 mg once a day # tylenol for gum pain # proair hfa 90 mcg/actuation aerosol inhaler [ hospital1 ] prn # prednisone per pt ’s son 2 weeks ago # antibiotic for pneumonia per pt ’s son 2 weeks ago past medical history : # hypertension # hyperlipidemia # gerd # ckd with baseline cr 1.3 nc occupation : sinus rhythm . |
| Presummext + BART | past medical history : # hypertension # hyperlipidemia # gerd # ckd with baseline cr 1.3 |
| RNN+RLext + BART | past medical history : # hypertension # hyperlipidemia # gerd # ckd with baseline cr 1.3 |
| RNN+RLabs | past medical history : # hypertension # hyperlipidemia # gerd # ckd with baseline cr 1.5 . . |

Table 3: A random example showing summaries of the past medical history
section. Although RNN+RLext over-extracted in this example, BART was able to
smooth out the noise and generate the same output as Presummext + BART.
Extractive summarization and abstractive summarization are often applied in
different settings and should thus be compared separately. For the full results
table, see Appendix A. Here we highlight the main findings. In Figure 4(a), we
highlight the ROUGE-L scores (ROUGE-1 and ROUGE-2 have a similar pattern) of
the two extractive summarization systems compared to the oracle and rule-based
extractive summary. An interesting observation is the effect of length, i.e.,
the average word count of the reference medical section. RNN+RLext
outperforms Presummext on shorter sections, and vice-versa for the longer
sections. This difference can be partially attributed to how the _cutoff_ is
handled by each extractor. For RNN+RLext, an RL agent is trained to decide
when to stop extracting sentences. For the shorter sections, the RL agent
learns to stop after just a few sentences (e.g., a typical chief complaint
has two sentences, and the family history has on average 2.6 sentences). On
longer sections, however, we find that the RL agent has difficulty stopping,
causing over-extraction. In contrast, for Presummext, a score cutoff
threshold is tuned on the development set using the ROUGE-L score. This
approach has a more balanced performance but suffers on short sections.
Another factor contributing to the
lead of Presummext in the longer sections is our split-map-reduce framework,
which enables the extractive model to conduct inference over all the clinical
documents.
Interestingly, the baseline Rule-basedext performs surprisingly well on ROUGE
for the two shortest sections. Upon inspection, most of the extraction is just
the medical section’s title, without any content. This observation is backed
up by the lower faithfulness-adjusted recall of this baseline.
For abstractive summarization, we highlight the ROUGE-L of the three
abstractors in Figure 4(b). Interestingly, after being abstracted by BART,
both RNN+RLext and Presummext converged to roughly the same ROUGE-L scores.
This suggests that in our extractor-abstractor pipeline, BART is effective in
taking different extracted summaries and _smoothing_ them into the format and
content expected for the medical sections. On the other hand, RNN+RLabs
outperforms BART at the shorter sections, and even Oracleext at the family
history section. Note that Oracleext is not necessarily an upper-bound for the
abstractive summarization models; abstractors allow rewriting content in prior
notes into the format of discharge summary. Sentence segmentation (the basic
unit of extraction) can also be noisy in clinical notes. On the other hand,
the curve for RNN+RLabs is almost identical to RNN+RLext in Figure 4(a), with
a constant increase. This is largely attributed to the sentence-rewriting of
the sentence-level abstractor that allows RNN+RLabs to keep the benefit of its
extractor counterpart, while rewriting the content to reduce over-extracted
sentences.
Table 2 shows our faithfulness-adjusted measures. For the extractors,
RNN+RLext outperforms all other extractors on faithfulness-adjusted F3 and
even outperforms Oracleext in the brief hospital course section. This is
possible because Oracleext is selected using ROUGE-L, not faithfulness-
adjusted F3. For the abstractors, a similarly good performance is found for
RNN+RLabs, where its precision consistently increases compared to RNN+RLext.
The good performance of RNN+RLext and RNN+RLabs can largely be attributed to their high
recall, which hurt their ROUGE-L performance in Figure 4 but is rewarded by the recall-weighted F3. Interestingly, the
two BART models again perform roughly the same, with the recall of RNN+RLext +
BART higher than that of Presummext + BART. For the longest section, generation for
BART proves to be difficult, as indicated by the large drop of recall, whereas
the sentence-wise rewriting strategy of RNN+RLabs has scaled better to longer
sections.
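For reference, the faithfulness-adjusted $F_{3}$ follows the standard recall-weighted $F_{\beta}$ measure (Van Rijsbergen, 1979) with $\beta=3$:

$F_{3}=\frac{(1+3^{2})\,P\,R}{3^{2}\,P+R}=\frac{10\,P\,R}{9\,P+R}$

As a worked check against Table 5, the RNN+RLext entry for the history of present illness section ($P=27.6$, $R=88.8$) gives $F_{3}=(10\cdot 27.6\cdot 88.8)/(9\cdot 27.6+88.8)\approx 72.7$, which matches the reported value and makes explicit how strongly recall is weighted.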
The overall incorrect hallucination rate shown in Figure 5 is relatively low,
with the notable exception of the family history section. Inspection of the
generated summaries shows that the most common hallucination of both BART
systems is the phrase “no family history”. Interestingly, the ground truths
corresponding to these hallucinations are mostly variations of the term “non-
contributory”; inspection of the source also shows that the family history
section was often left blank. That being said, there are still cases of
hallucinations where “no family history” is followed by a condition (e.g.,
arrhythmia, cardiomyopathies) that is not mentioned in the source.
Table 3 shows a qualitative analysis of a randomly chosen summary of the past
medical section. In this case, RNN+RLext over-extracted content from the
previous sections. However, after passing through BART, BART successfully
smooths out the noise and generates the same output as Presummext \+ BART. In
this case, RNN+RLabs happens to be hallucinating (mapping cr 1.3 to cr 1.5).
All summarization systems missed “# stable angina on long acting nitrate”;
mention of “angina” is actually not present in the prior clinical notes.
## 9 Conclusion and Future Work
We present a novel clinical summarization task – discharge summary composition
by summarizing prior clinical documents, derived from a public database (MIMIC
III). By summarizing the vast number of clinical notes in a format clinicians
are already trained to read and understand, our work has the potential to
reduce the time clinicians spend on making sense of the data, allowing them to
allocate more time to the patients.
We view this work as a promising first step to measure how existing models
perform on the task and share the task with the community. One limitation of
this work is that if there is novel information available only when writing
the discharge summary, there will be no way of summarizing it. It is also
important to note that since we are using MIMIC III for training and
evaluation, the results shown are biased toward the dataset: MIMIC III is an
English-language collection from the ICU of a single hospital, and the findings may not
necessarily be applicable to other clinical settings.
We identify three main challenges: (1) faithfulness, (2) evidence, and (3)
long text. An extractor-abstractor pipeline is proposed to provide a natural
fallback mechanism, with an increasing amount of evidence at each fallback, and
also to enable scaling to very long documents. To investigate the risk of
hallucination and faithfulness in the summaries, we evaluate with a NER-based
measure on top of ROUGE. Adapting state-of-the-art summarization models, our
experiments over seven medical sections demonstrate the potential for the
extractor-abstractor pipeline and represent a framework towards a set of
enabling technologies that can assist clinicians to better make sense of the
vast amount of unstructured data in the EHR.
## 10 Ethical Considerations
### Deidentification.
Our dataset is derived from the public database MIMIC III v1.4 (Johnson et
al., 2016). Johnson et al. (2016) deidentified the database in accordance with
the Health Insurance Portability and Accountability Act (HIPAA) standards.
This standard requires removing all eighteen identifying data elements,
including patient name, telephone number, address, and dates. These fields are
replaced with placeholders. A constant (but different per patient) offset is
applied to shift the dates. The ages of patients over 89 years old were mapped to
values over 300, in compliance with HIPAA.
Although under U.S. federal guidelines, secondary use of fully deidentified,
publicly available data is exempt from institutional review board (IRB) review
(45 CFR § 46.104, “Exempt research”), we still consider the dataset sensitive.
We are careful to treat it as such. During training and error analysis, we of
course do not attempt to identify individuals, and when the qualitative
analysis is shown, we double-check to avoid showing potentially identifiable
information.
### Population.
In MIMIC III, out of the 38,161 patients, 71.34% are White, 7.69% Black, 2.38%
other, 2.37% Asian, and the rest unknown. Most of the patients in MIMIC III
were older adults, with the most common age group being 71–80, followed by the
61–70 age group (Dai et al., 2020).
### Broader Impact.
Clinical application has the genuine potential to affect people’s lives. As we
have emphasized in Section 1 and Section 9, this work is not about a
discussion for deployment, but rather a first step in understanding how the
current existing summarization models perform. Importantly, we need to
understand the failure modes of these systems and how to address these
failures. Our emphasis on faithfulness and traceability of summarization
reflects those beliefs. Hopefully, the three challenges we identify are the
first of many future steps that will make progress toward alleviating the
documentation burden of clinicians and ultimately result in a better quality
of care for patients.
## References
* Arumae and Liu (2019) Kristjan Arumae and Fei Liu. 2019. Guiding extractive summarization with question-answering rewards. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 2566–2577.
* Beltagy et al. (2020) Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. _arXiv:2004.05150_.
* Cao et al. (2017) Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2017. Faithful to the original: Fact aware neural abstractive summarization. _arXiv preprint arXiv:1711.04434_.
* Chen and Bansal (2018) Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In _Proceedings of ACL_ , pages 675–686.
* Cohan and Goharian (2016) Arman Cohan and Nazli Goharian. 2016. Revisiting summarization evaluation for scientific articles. In _Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16)_ , pages 806–813.
* Dai et al. (2020) Zheng Dai, Siru Liu, Jinfa Wu, Mengdie Li, Jialin Liu, and Ke Li. 2020. Analysis of adult disease characteristics and mortality on mimic-iii. _PLOS ONE_ , 15(4):1–12.
* Demner-Fushman and Lin (2006) Dina Demner-Fushman and Jimmy Lin. 2006. Answer extraction, semantic clustering, and extractive summarization for clinical question answering. In _Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics_ , pages 841–848.
* Feblowitz et al. (2011) Joshua C. Feblowitz, Adam Wright, Hardeep Singh, Lipika Samal, and Dean F. Sittig. 2011. Summarization of clinical information: A conceptual model. _Journal of Biomedical Informatics_ , 44(4):688 – 699.
* Ive et al. (2020) Julia Ive, Natalia Viani, Joyce Kam, Lucia Yin, Somain Verma, Stephen Puntis, Rudolf N. Cardinal, Angus Roberts, Robert Stewart, and Sumithra Velupillai. 2020\. Generation and evaluation of artificial mental health records for Natural Language Processing. _npj Digital Medicine_ , 3(1):1–9.
* Johnson and Shivade (2020) Alistair Johnson and Chaitanya Shivade. 2020. Notes and data not in mimic-iii · issue 771 · mit-lcp/mimic-code.
* Johnson et al. (2016) Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-Wei, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic-iii, a freely accessible critical care database. _Scientific data_ , 3(1):1–9.
* Kryscinski et al. (2020) Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 9332–9346.
* Lee (2018) Scott H. Lee. 2018. Natural language generation for electronic health records. _npj Digital Medicine_ , 1(1):1–7.
* Lewis et al. (2019) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. _arXiv preprint arXiv:1910.13461_.
* Liang et al. (2019) Jennifer Liang, Ching-Huei Tsou, and Ananya Poddar. 2019. A novel system for extractive clinical note summarization using ehr data. In _Proceedings of the 2nd Clinical Natural Language Processing Workshop_ , pages 46–54.
* Lin and Hovy (2003) Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In _Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics_ , pages 150–157.
* Liu et al. (2018) Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. _arXiv preprint arXiv:1801.10198_.
* Liu and Lapata (2019a) Yang Liu and Mirella Lapata. 2019a. Hierarchical transformers for multi-document summarization. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 5070–5081.
* Liu and Lapata (2019b) Yang Liu and Mirella Lapata. 2019b. Text summarization with pretrained encoders. In _Proceedings of EMNLP_ , pages 3721–3731.
* MacAvaney et al. (2019) Sean MacAvaney, Sajad Sotudeh, Arman Cohan, Nazli Goharian, Ish Talati, and Ross W. Filice. 2019. Ontology-Aware Clinical Abstractive Summarization. In _Proceedings of SIGIR_ , pages 1013–1016. Association for Computing Machinery, Inc.
* Maynez et al. (2020) Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. _arXiv preprint arXiv:2005.00661_.
* Moen et al. (2016) Hans Moen, Laura-Maria Peltonen, Juho Heimonen, Antti Airola, Tapio Pahikkala, Tapio Salakoski, and Sanna Salanterä. 2016. Comparison of automatic summarisation methods for clinical free text notes. _Artificial Intelligence in Medicine_ , 67:25 – 37.
* Nenkova and Passonneau (2004) Ani Nenkova and Rebecca J Passonneau. 2004. Evaluating content selection in summarization: The pyramid method. In _Proceedings of the human language technology conference of the north american chapter of the association for computational linguistics: Hlt-naacl 2004_ , pages 145–152.
* Neumann et al. (2019) Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. ScispaCy: Fast and Robust Models for Biomedical Natural Language Processing. In _Proceedings of the 18th BioNLP Workshop and Shared Task_ , pages 319–327, Florence, Italy. Association for Computational Linguistics.
* Ott et al. (2019) Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In _Proceedings of NAACL-HLT 2019: Demonstrations_.
* Schluter (2017) Natalie Schluter. 2017. The limits of automatic summarisation according to rouge. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers_ , pages 41–45.
* See et al. (2017) Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. In _Proceedings of ACL_ , pages 1073–1083.
* Sinsky et al. (2016) Christine Sinsky, Lacey Colligan, Ling Li, Mirela Prgomet, Sam Reynolds, Lindsey Goeders, Johanna Westbrook, Michael Tutty, and George Blike. 2016. Allocation of physician time in ambulatory practice: A time and motion study in 4 specialties. _Annals of Internal Medicine_ , 165(11):753–760. PMID: 27595430.
* Tawfik et al. (2018) Daniel S Tawfik, Jochen Profit, Timothy I Morgenthaler, Daniel V Satele, Christine A Sinsky, Liselotte N Dyrbye, Michael A Tutty, Colin P West, and Tait D Shanafelt. 2018. Physician burnout, well-being, and work unit safety grades in relationship to reported medical errors. In _Mayo Clinic Proceedings_ , volume 93, pages 1571–1580. Elsevier.
* Van Rijsbergen (1979) Cornelis J Van Rijsbergen. 1979. Information retrieval. (2nd ed.). _University of Glasgow_ , pages 133–134.
* Wang et al. (2020) Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the factual consistency of summaries. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 5008–5020, Online. Association for Computational Linguistics.
* Weiner and Biondich (2006) Michael Weiner and Paul Biondich. 2006. The influence of information technology on patient-physician relationships. _Journal of general internal medicine_ , 21(1):35–39.
* West et al. (2018) C. P. West, L. N. Dyrbye, and T. D. Shanafelt. 2018. Physician burnout: contributors, consequences and solutions. _Journal of Internal Medicine_ , 283(6):516–529.
* Zaheer et al. (2020) Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020. Big bird: Transformers for longer sequences.
* Zhang et al. (2018) Yuhao Zhang, Daisy Yi Ding, Tianpei Qian, Christopher D. Manning, and Curtis P. Langlotz. 2018. Learning to Summarize Radiology Findings. In _Proceedings of the Workshop on Health Text Mining and Information Analysis (EMNLP-LOUHI)_ , pages 204–213. Association for Computational Linguistics (ACL).
* Zhang et al. (2020) Yuhao Zhang, Derek Merck, Emily Tsai, Christopher D. Manning, and Curtis Langlotz. 2020. Optimizing the factual correctness of a summary: A study of summarizing radiology reports. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 5108–5120, Online. Association for Computational Linguistics.
## Appendix A Appendix: Full Results
| Model | Chief Complaint | Family History | Social History | Medications on Admission | Past Medical History | History of Present Illness | Brief Hospital Course |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Oracleext | 73.0/59.0/72.9 | 55.7/40.5/55.3 | 62.0/48.2/61.0 | 61.5/47.7/60.6 | 75.1/67.0/74.1 | 77.4/66.8/75.8 | 45.7/22.3/41.8 |
| Rule-basedext | 59.8/44.5/59.8 | 43.9/31.8/43.9 | 18.6/12.1/18.6 | 26.1/22.2/26.1 | 20.6/16.3/20.6 | 8.3/7.3/8.3 | 9.2/8.5/9.2 |
| RNN+RLext | 45.1/33.1/45.0 | 40.2/28.6/40.0 | 37.6/27.2/36.6 | 43.4/35.6/42.1 | 47.9/40.2/46.3 | 34.8/28.3/33.4 | 21.3/6.7/18.6 |
| Presummext | 12.3/6.9/11.9 | 33.2/24.0/32.9 | 36.3/27.5/35.4 | 47.2/40.7/46.2 | 50.8/41.9/49.7 | 53.2/45.4/51.8 | 29.6/10.6/26.1 |
| RNN+RLext + PointGen | 21.2/13.2/21.1 | 29.8/22.0/29.5 | 36.7/26.3/36.2 | 49.2/41.7/48.1 | 46.3/38.6/45.0 | 38.8/28.3/37.4 | 20.6/8.6/19.2 |
| Presummext + PointGen | 19.8/11.6/19.7 | 30.6/23.5/30.5 | 42.5/31.1/41.4 | 50.0/43.0/49.0 | 52.4/45.0/51.2 | 43.0/35.2/41.6 | 20.9/9.6/19.4 |
| RNN+RLext + BART | 53.5/37.5/53.1 | 48.9/38.6/48.6 | 50.3/38.0/49.4 | 58.2/51.9/57.0 | 66.9/58.5/65.2 | 61.1/51.3/59.1 | 28.2/10.6/25.7 |
| Presummext + BART | 49.9/33.0/49.6 | 47.4/37.5/47.2 | 49.6/38.3/48.8 | 57.8/50.9/56.7 | 66.0/58.3/64.7 | 61.0/52.4/59.2 | 28.0/12.4/25.5 |
| RNN+RLabs | 61.2/47.5/60.9 | 61.6/50.5/61.3 | 45.9/33.7/44.8 | 49.9/42.2/48.2 | 57.5/47.9/55.3 | 47.6/38.4/45.4 | 32.1/10.4/28.0 |
| # words | 7.25037 | 17.026 | 44.9034 | 69.5803 | 75.3616 | 274.881 | 491.971 |
| # sents | 2.04183 | 2.63082 | 4.92901 | 4.67285 | 5.99115 | 16.6193 | 35.389 |

Table 4: ROUGE-{1/2/L} scores across different models and sections.
See Table 4 and Table 5 for the full scores for all models on all seven
sections.
| Model | Chief Complaint | Family History | Social History | Medications on Admission | Past Medical History | History of Present Illness | Brief Hospital Course |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Oracleext | 71.1/85.2/83.6 | 52.8/75.4/72.3 | 63.4/73.3/72.2 | 69.7/66.5/66.8 | 74.2/80.8/80.1 | 76.6/83.9/83.1 | 44.7/51.5/50.7 |
| Rule-basedext | 97.4/49.7/52.2 | 87.6/47.3/49.6 | 94.7/23.1/25.0 | 97.2/32.8/35.2 | 94.9/16.9/18.4 | 70.8/08.6/09.5 | 00.3/00.9/00.7 |
| Presummext | 10.8/24.1/21.4 | 30.7/63.1/57.1 | 42.6/40.6/40.8 | 48.7/52.0/51.7 | 51.2/66.6/64.7 | 54.4/74.5/71.9 | 26.5/47.7/44.2 |
| RNN+RLext | 44.2/72.8/68.4 | 54.5/70.6/68.6 | 43.2/71.0/66.7 | 45.7/67.2/64.2 | 43.6/81.7/75.1 | 27.6/88.8/72.7 | 15.3/69.7/51.4 |
| RNN+RLext + PointGen | 40.6/70.2/65.4 | 38.2/73.9/67.6 | 59.9/58.7/58.8 | 66.4/72.7/72.0 | 65.6/59.0/59.6 | 69.1/37.1/38.9 | 39.8/15.2/16.2 |
| Presummext + PointGen | 31.3/62.6/56.9 | 37.0/72.3/66.0 | 54.7/61.9/61.1 | 65.1/73.7/72.8 | 64.0/62.6/62.7 | 69.8/42.4/44.1 | 42.2/17.9/19.0 |
| Presummext + BART | 45.5/63.6/61.2 | 46.1/70.2/66.7 | 60.0/66.0/65.3 | 67.1/77.7/76.5 | 69.7/73.3/72.9 | 68.0/64.5/64.8 | 37.4/26.8/27.6 |
| RNN+RLext + BART | 48.6/70.4/67.4 | 44.7/74.2/69.6 | 61.2/66.7/66.1 | 67.0/80.2/78.7 | 70.0/74.6/74.2 | 67.4/64.7/64.9 | 34.1/23.6/24.4 |
| RNN+RLabs | 67.8/69.1/69.0 | 75.8/73.0/73.3 | 60.1/68.2/67.3 | 70.9/69.0/69.2 | 64.7/68.8/68.3 | 40.8/82.2/74.6 | 20.4/52.9/45.6 |

Table 5: Faithfulness-adjusted Precision/Recall/F3 scores based on medical NER.
## Appendix B Appendix: Qualitative Analysis
Table 6 shows a random example from the social history section.
| System | Summary (social history) |
| --- | --- |
| Ground truth | social history : retired from [ country 11150 ] . brother and son are part of support network . |
| Presummext | [ last name ( un ) 574 ] : retired gentleman from [ country 4952 ] ; currently living with sons who are his main caretakers . . pt is hindi speaking only but able to communicate his needs and pleasant and cooperative . |
| RNN+RLext | family / social history : retired gentleman from [ country 4952 ] ; currently living with sons who are his main caretakers . . saw pt ; did carotid massage ; give lopressor 50 mg po bid starting tonight social history : |
| Presummext + BART | social history : [ last name ( un ) ] : retired gentleman from [ country ] ; currently living with sons who are his main caretakers . pt is hindi speaking only but able to communicate his needs and pleasant and cooperative . |
| RNN+RLext + BART | social history : retired gentleman from [ country 651 ] ; currently living with sons who are his main caretakers . |
| RNN+RLabs | social history : retired gentleman from [ country ] ] ; currently living with sons who are his main caretakers . |

Table 6: A random example showing summaries of the social history section.
## Appendix C Appendix: Reproducibility
Here we describe the training details of the models for reproducibility.
### RNN+RLext and RNN+RLabs.
Both models are trained following the original recipe of Chen and Bansal (2018).
The training setup involves the following steps: (1) use gensim to train a
word2vec embedding from scratch on the training set of the source documents,
(2) construct pseudo pairs of sentences (source sentence, summary sentence):
for each summary sentence, greedily find the one-best source sentence using
ROUGE-L recall (a sketch of this step is given below), (3) use the pseudo pairs
to train an RNN extractor, (4) use the pseudo pairs to train a pointer-generator
that rewrites the sentences, and (5) train an RL agent that fine-tunes the RNN
extractor with the sentence-rewriting pointer-generator. The model is trained on
one V100 GPU, with an Adam optimizer of learning rate 1e-3. Here we use the same
set of hyperparameters as Chen and Bansal (2018). For more details, please refer
to the original paper.
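Step (2), the construction of pseudo extraction labels, is central to the recipe. A minimal sketch is given below, assuming whitespace tokenization and plain Python; the actual implementation follows the released code of Chen and Bansal (2018).

```python
# Sketch of pseudo-pair construction: for each summary sentence, greedily
# pick the single source sentence with the highest ROUGE-L recall.

def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_recall(src_sent, ref_sent):
    """ROUGE-L recall: LCS length normalized by reference length."""
    src, ref = src_sent.split(), ref_sent.split()
    return lcs_len(src, ref) / max(len(ref), 1)

def make_pseudo_pairs(source_sents, summary_sents):
    """Greedy one-best matching used to build extraction labels."""
    pairs = []
    for ref in summary_sents:
        best = max(range(len(source_sents)),
                   key=lambda i: rouge_l_recall(source_sents[i], ref))
        pairs.append((best, ref))  # (index of pseudo-extraction label, target sentence)
    return pairs

source = ["pt with hypertension and gerd .", "ckd with baseline cr 1.3 .", "plan discharge home ."]
summary = ["past medical history : hypertension gerd", "ckd baseline cr 1.3"]
print(make_pseudo_pairs(source, summary))  # -> [(0, ...), (1, ...)]
```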
For each of the seven medical sections, we follow the training recipe, and
repeat it five times. The reported models are chosen based on the validation
set. We found that the RL fine-tuning step can potentially be very unstable.
For the longer sections (e.g., brief hospital course and history of present
illness), the RL fine-tuning can even fail to converge.
### Presummext.
We use the original implementation released with Presumm (Liu and Lapata,
2019b). Learning rate is set to 2e-3 and extractor dropout rate is set to 0.1,
following the original paper. bert-base-uncased is used as the pretrained BERT
model. We made three important changes: (1) increase the maximum number of
tokens the encoder can consume to 1024, (2) in the data preprocessing step, we
construct pseudo pairs of sentences that are later used to train the
extractor: for each summary sentence, greedily find the one-best source
sentence using ROUGE-L recall, and (3) before training begins, we split the
source documents and their labels into segments smaller than 1024 tokens.
After inference finishes, we concatenate the segments (together with an
extraction score for each sentence) back in the original order.
For each of the seven medical sections, we train the model on 4 V100 GPUs,
with 150,000 training steps and model checkpointing every 2,000 steps. We
report the model with the lowest model loss on the validation set. Since the
model only assigns scores to sentences, we sweep the score-cutoff threshold
on the validation set using the ROUGE-L score, and apply that cutoff on the test
set (sketched below).
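A sketch of the segmentation and cutoff sweep described above follows; the `metric` argument stands in for ROUGE-L against the reference, and token counting is whitespace-based rather than BERT wordpiece-based, so it is illustrative only.

```python
# Sketch: (a) split a long source document into <=1024-token segments for the
# encoder; (b) after scoring, sweep the extraction-score cutoff on validation
# data and keep the cutoff that maximizes the chosen metric.

def split_into_segments(sentences, max_tokens=1024):
    """Greedily pack sentences into segments under the encoder limit."""
    segments, current, count = [], [], 0
    for sent in sentences:
        n = len(sent.split())
        if current and count + n > max_tokens:
            segments.append(current)
            current, count = [], 0
        current.append(sent)
        count += n
    if current:
        segments.append(current)
    return segments

def sweep_cutoff(scored_val_docs, references, metric, grid):
    """scored_val_docs: one [(sentence, extractor_score), ...] list per document."""
    def extract(doc, cut):
        return " ".join(s for s, score in doc if score >= cut)
    return max(grid, key=lambda cut: sum(
        metric(extract(doc, cut), ref)
        for doc, ref in zip(scored_val_docs, references)))

docs = [[("hypertension noted .", 0.9), ("pt ambulating well .", 0.4)]]
refs = ["hypertension noted on admission"]
toy = lambda pred, ref: len(set(pred.split()) & set(ref.split())) / max(len(pred.split()), 1)
print(sweep_cutoff(docs, refs, toy, [0.0, 0.5]))  # -> 0.5 (drops the noisy sentence)
```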
### PointGen.
We use an open implementation of pointer-generator (See et al., 2017),
implemented with PyTorch and AllenNLP
(https://github.com/kukrishna/pointer-generator-pytorch-allennlp). Our model follows the original paper and has
256-dimensional hidden states and 128-dimensional word embeddings. The
vocabulary size is set to 50k words for both source and target. The model is
optimized using Adagrad with learning rate 0.15 and an initial accumulator
value of 0.1, and trained on one V100 GPU for 50 epochs with early stopping on
the validation set.
### BART.
We use the fairseq (Ott et al., 2019) implementation of BART-large (Lewis et al.,
2019), as it has been shown to achieve state-of-the-art ROUGE scores for
abstractive summarization. We fine-tune the BART-large model with the standard
learning rate of $3\times 10^{-5}$. We utilize a machine with 8 GPUs and batch
size of 2048 input tokens per GPU. We train for a maximum of 10 epochs with
early stopping to select the checkpoint with the smallest loss on the
validation set. During decoding, we use beam search with a beam size of 6. We
restrict the generation length to be between 10 and 300 tokens.
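For illustration, decoding with these settings through fairseq's BART hub interface looks roughly as follows; the checkpoint and data paths are placeholders, not the actual paths used in our experiments.

```python
# Sketch of decoding a fine-tuned BART-large model with fairseq; sample()
# forwards the generation arguments (beam size, length bounds) to the decoder.
import torch
from fairseq.models.bart import BARTModel

bart = BARTModel.from_pretrained(
    'checkpoints/',                       # placeholder: fine-tuned model directory
    checkpoint_file='checkpoint_best.pt', # checkpoint with smallest validation loss
    data_name_or_path='bin/'              # placeholder: binarized data directory
)
bart.eval()
if torch.cuda.is_available():
    bart.cuda()

with torch.no_grad():
    hypotheses = bart.sample(
        ["<extracted clinical notes for one medical section>"],
        beam=6,        # beam size used during decoding
        min_len=10,    # generation length restricted to 10-300 tokens
        max_len_b=300,
    )
print(hypotheses[0])
```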
# Random vibrations of stress-driven nonlocal beams with external damping

Francesco P. Pinnola, Marzia S. Vaccaro, Raffaele Barretta, Francesco Marotti de Sciarra

Department of Structures for Engineering and Architecture, University of Naples Federico II, via Claudio 21, Ed. 6, 80125 - Naples, Italy. Email:<EMAIL_ADDRESS>

Preprint of the article published in Meccanica, DOI: 10.1007/s11012-020-01181-7 (https://doi.org/10.1007/s11012-020-01181-7).

ⓒ 2020. This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/
###### Abstract
Stochastic flexural vibrations of small-scale Bernoulli-Euler beams with
external damping are investigated by stress-driven nonlocal mechanics. Damping
effects are simulated considering viscous interactions between beam and
surrounding environment. Loadings are modeled by accounting for their random
nature. Such a dynamic problem is characterized by a stochastic partial
differential equation in space and time governing time-evolution of the
relevant displacement field. Differential eigenanalyses are performed to
evaluate modal time coordinates and mode shapes, providing a complete
stochastic description of response solutions. Closed-form expressions of the power
spectral density, correlation function, and stationary and non-stationary
variances of displacement fields are analytically derived. Size-dependent
dynamic behaviour is assessed in terms of stiffness, variance and power
spectral density of displacements. The outcomes can be useful for design and
optimization of structural components of modern small-scale devices, such as
Micro- and Nano-Electro-Mechanical-Systems (MEMS and NEMS).
###### Keywords:
Stochastic dynamics · small-scale beams · size effects · viscous damping · stress-driven nonlocal integral elasticity · MEMS/NEMS
## 1 Literature survey, motivation and outline
Methodologies to predict random vibrations in structural systems have acquired
over the last century significant importance in the design and optimization of
new-generation composites Pourasghar2019 ; Xia2019 and technological devices,
such as: micro-bridges Mojahedi2017 , nano-switches Moradweysi2018 , nano-
generators Hosseini ; debellis , nano-sensors Mohammadian ; natsuki , energy
harvesters Tran2018 ; Basutkar2019 ; GhayeshFarokhi2020 . It is acknowledged
that continuum mechanics is able to model structural components of small-scale
systems FarajpourReview2018 , but some mechanical expedients are needed to
accurately predict nonconventional phenomena, such as size and damping
effects. It is well-established that inter-atomic forces and molecular
interactions cannot be overlooked in small-scale structures which exhibit
technically significant size effects Bauer ; Kiang ; Xiao ; Zienert ;
Chowdhury ; Tang . Such a phenomenon cannot be captured by the classical
theory of local continuum marB due to lack of internal characteristic scales.
Nonlocal continua, driven by suitably chosen scale characteristic parameters,
are instead appropriate to model micro- and nano-structures 3 ; 6 ; alo16 ;
marA , as confirmed by molecular dynamic simulations Ansari2010 ; Murmu ;
Ansari2012 . Nonlocal methodologies allow for modeling complex mechanical
behaviours avoiding computationally expensive procedures 1 ; 2 .
Nonlocal theory, in its earliest formulation Rogula1965 ; Rogula1982 , is based
on the idea that the stress at a point of a continuum depends not only on
elastic strain at that point but involves local responses of the whole
structure. Long-range interactions are thus described by a strain-driven
convolution integral supplemented with an averaging kernel characterized by a
nonlocal length-scale parameter. Such an approach was consistently applied by
Eringen Eringen ; Eringen1 to nonlocal problems involving screw dislocation
and wave propagation, that are formulated in unbounded domains. However,
mathematical difficulties are apparent when the strain-driven strategy is
applied to structural problems which are generally defined in bounded domains.
In fact, assuming that the averaging kernel is the Green function of a
differential operator, from the integral convolution it is possible to obtain
a consequent differential formulation Tricomi1985 ; Polyanin2008 . For bounded
structural domains, the differential formulation above must be supplemented with a
proper set of Constitutive Boundary Conditions (CBCs) BarMar . Paradoxical
results and unacceptable nonlocal responses are obtained Challamel2008 ;
ReddyParadoxSolved if CBCs are ignored. Several nonlocal theories have been
adopted to overcome the aforementioned difficulties, such as: two-phase models
7 ; KhodabakhshiReddy2015 , strain and stress gradient theories 4 ;
ChallamelReddy2016 ; CivalekIJES2018 ; CornacchiaJCOMB2019 ;
CornacchiaMAMS2019 , nonlocal gradient techniques LimJMPS2015 ;
BarrettaMar2018 ; ApuzzoBarretta2018 , strain-difference approaches 48 ;
FusPisPol ; marC , displacement-based nonlocal models DiPaola ; pir1 ; alo13 ;
alo15 , stress-driven formulation of nonlocality RomBar ; BarVacc .
Advantageously, the stress-driven approach has been shown to be able to
effectively model the nonlocal behaviour of small-scale structures and
provides exact solutions for problems of applicative interest in Nano-
Engineering 41 ; BarMar2 . The stress-driven nonlocal integral model is
therefore adopted in this paper to tackle size-dependent random
vibrations in inflected elastic small-scale beams. A first effort on the
matter, disregarding random phenomena, was performed in ApuBarMar .
Another important effect in the dynamics of micro- and nano-systems concerns damping
phenomena which should be properly modeled in applicative problems of nano-
engineering. Instances are listed as follows: external magnetic force
LeeLin2010 , humidity, thermal and paddling effects Chen2011 and internal
viscous force due to material rheological properties DiPPinVal . Modeling of
internal viscous forces is successfully performed by properly selecting
constitutive formulations DiMDiPPin . Other effects, mentioned above, are
related to surrounding environmental interactions Calleja2012 . In the present
paper, particular attention is paid to capture external damping effects,
useful to analyse modern small-scale structures, such as: sensors inside
viscous fluid, devices under magnetic field, nano-systems for biological
detection. Specifically, a bed of independent dashpots will be considered to
simulate external viscous interactions between nonlocal beam and surrounding
environment.
At micro- and nano-scales, structures can be excited by different kinds of
force systems. An example is the effect of environmental thermal and/or
mechanical noises in nano-sensors Mems . Stochastic approaches can be
conveniently exploited to model external loadings Verma ; Spanos ; Crandal ,
effectively representable by random time-processes Pirr1 ; Pirr2 ; AloDiPPin .
For the aforesaid reasons, the present research provides a novel strategy for
stress-driven nonlocal analysis of damped vibrations of elastic nano-beams due
to stochastic excitation. Steady-state solutions are established, detecting
thus analytical expressions of power spectral density and stationary variance
of displacements. Closed-form solutions are also evaluated for non-stationary
responses of nonlocal damped beams forced by Gaussian white noise.
The manuscript is organized as follows. Strain- and stress-driven models of
pure nonlocal integral elasticity are recalled and specialized to
Bernoulli–Euler beams in Section 2. Dynamic equilibrium equations of damped
beams are established in Section 3 by using the well-posed stress-driven
nonlocal strategy of elasticity. Mode shape functions and natural frequencies
are analytically detected and an effective methodology to perform dynamic
eigenanalysis is also elucidated. A stochastic analysis of nonlocal damped
beams forced by Gaussian white noise is developed in Section 4. Both
stationary and non-stationary examinations are performed, detecting thus
closed form solutions of displacement variance, power spectral density and
correlation function. Numerical simulations are implemented in Section 5 to
test accuracy of obtained solutions. Analytical stationary and non-stationary
variances are compared with numerical outcomes derived by Monte Carlo
simulations. A parametric study of stochastic responses, in terms of
displacement variances and natural frequencies, is given in Section 5 to study
nonlocal effects. Closing remarks are outlined in Section 6.
## 2 Purely nonlocal integral elasticity
Two purely nonlocal models of elasticity are available in the literature:
1. 1.
strain-driven integral law Rogula1965 , applied to problems of screw
dislocation and wave propagation Eringen ; Eringen1 ;
2. 2.
stress-driven integral law RomBar applied to nonlocal mechanics of structures.
These theories are preliminarily recalled below for $\,3$-D continua and
specialized to Bernoulli-Euler beams. Eringen’s strain-driven law is based on
the idea that the stress $\boldsymbol{\sigma}$ at a point $\boldsymbol{x}$ of
a nonlocal $\,3$-D continuous body $\mathcal{B}$ is the output of a
convolution between the local response to the elastic strain field
$\boldsymbol{\varepsilon}$ and a scalar kernel $\Phi_{\lambda}$ depending on a
non-dimensional positive nonlocal parameter $\lambda$. That is,
$\boldsymbol{\sigma}(\boldsymbol{x})=\int_{\mathcal{B}}\Phi_{\lambda}(\boldsymbol{x},\bar{\boldsymbol{x}})\boldsymbol{E}(\bar{\boldsymbol{x}})\,\boldsymbol{\varepsilon}(\bar{\boldsymbol{x}})d\bar{\boldsymbol{x}}$
(1)
with $\boldsymbol{E}$ fourth-order local elasticity stiffness tensor.
For a Bernoulli-Euler inflected beam of length $\,L\,$, the nonlocal strain-
driven relation Eq. (1) takes the form
$M(z)=EI\int_{0}^{L}\Phi_{\lambda}(z,\bar{z})\chi(\bar{z})d\bar{z}$ (2)
with $\,M\,$ bending moment, $\,E\,$ Euler-Young modulus, $\,I\,$ cross-
sectional moment of inertia along the bending axis $\,y\,$, $\,z\,$ beam axial
abscissa and $\,\chi\,$ elastic curvature. The integral kernel
$\Phi_{\lambda}$ can be selected among exponential, Gaussian or power-law type
functions and must satisfy properties of positivity, symmetry and limit
impulsivity Eringen1 . A convenient choice for the averaging kernel is the
special bi-exponential function
$\Phi_{\lambda}(z,\bar{z})=\frac{1}{2\lambda
L}\exp\left({-\frac{|z-\bar{z}|}{\lambda L}}\right)$ (3)
where $\lambda L$ is the characteristic length $L_{c}$. With the assumption
above, the integral law Eq. (2) is equivalent to the second-order differential
equation BarMar
$M^{(2)}(z)-\frac{1}{(\lambda L)^{2}}M(z)=-\frac{EI}{(\lambda L)^{2}}\chi(z)$
(4)
supplemented with the following constitutive boundary conditions (CBC)
$\left\\{\begin{split}&M^{(1)}(0)=\frac{1}{\lambda L}M(0)\\\
&M^{(1)}(L)=-\frac{1}{\lambda L}M(L)\end{split}\right.$ (5)
It is worth noting that, for structural problems of applicative interest, CBCs
Eq. (5) are in contrast with equilibrium requirements BarMar . Incompatibility
between equilibrium and constitutive conditions reveals that Eringen’s
nonlocal model leads to ill-posed structural problems, generally formulated in
bounded domains. Such an obstruction can be overcome by using the stress-
driven approach RomBar which is well-posed. Exact nonlocal structural
solutions can be found in BarMar2 .
In stress-driven mechanics, the elastic strain $\boldsymbol{\varepsilon}$ at a
point $\boldsymbol{x}$ of $\mathcal{B}$ is obtained by convoluting the local
stress $\boldsymbol{\sigma}$ with an averaging kernel $\,\Phi_{\lambda}\,$
$\boldsymbol{\varepsilon}(\boldsymbol{x})=\int_{\mathcal{B}}\Phi_{\lambda}(\boldsymbol{x},\bar{\boldsymbol{x}})\boldsymbol{C}(\bar{\boldsymbol{x}})\,\boldsymbol{\sigma}(\bar{\boldsymbol{x}})d\bar{\boldsymbol{x}}$
(6)
with $\boldsymbol{C}=\boldsymbol{E}^{-1}$ local elastic compliance. The
stress-driven model for Bernoulli-Euler beams is governed by the following
moment-curvature relation
$\chi(z)=\frac{1}{EI}\int_{0}^{L}\Phi_{\lambda}(z-\bar{z})M(\bar{z})d\bar{z}$
(7)
The integral formulation Eq. (7), with the special kernel Eq. (3), is
equivalent to the second order differential equation RomBar
$\chi^{(2)}(z)-\frac{1}{(\lambda L)^{2}}\chi(z)=-\frac{1}{EI(\lambda
L)^{2}}M({z})$ (8)
equipped with the following constitutive boundary conditions
$\left\\{\begin{split}&\chi^{(1)}(0)=\frac{1}{\lambda L}\chi(0)\\\
&\chi^{(1)}(L)=-\frac{1}{\lambda L}\chi(L)\end{split}\right.$ (9)
The stress-driven approach Eqs. (8), (9) provides exact solutions in both
static and dynamic structural problems and is exploited in the present study
to analytically tackle size-dependent random vibrations of slender elastic
beams.
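As a concrete illustration of the constitutive law Eqs. (8)-(9), the sketch below numerically recovers the nonlocal curvature from a prescribed bending moment; the parabolic moment of a simply supported beam under uniform load and all numerical values are illustrative assumptions, not data from the present study.

```python
# Sketch: solve chi''(z) - chi/(lambda L)^2 = -M(z)/(EI (lambda L)^2), Eq. (8),
# with the constitutive boundary conditions of Eq. (9).
import numpy as np
from scipy.integrate import solve_bvp

EI, L, lam = 1.0, 1.0, 0.1     # assumed flexural stiffness, length, nonlocal parameter
Lc = lam * L                   # characteristic length lambda * L

def M(z):
    # Assumed moment: simply supported beam under unit uniform load.
    return 0.5 * z * (L - z)

def ode(z, y):                 # y[0] = chi, y[1] = chi'
    return np.vstack([y[1], y[0] / Lc**2 - M(z) / (EI * Lc**2)])

def cbc(ya, yb):               # Eq. (9): chi'(0) = chi(0)/Lc, chi'(L) = -chi(L)/Lc
    return np.array([ya[1] - ya[0] / Lc, yb[1] + yb[0] / Lc])

z = np.linspace(0.0, L, 101)
sol = solve_bvp(ode, cbc, z, np.zeros((2, z.size)))
print("max nonlocal curvature:", sol.y[0].max())
```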
## 3 Dynamical analysis of nonlocal beams
Let us consider a nonlocal beam of length $L$ and cross-sectional area
$A$ subjected to a transverse loading per unit length $q(z,t)$, see
Figure 1. $(x,y,z)$ is the adopted system of orthogonal coordinates, $v(z,t)$
denotes the transverse displacement and $\rho(z)$ is the mass density.
Figure 1: Bernoulli-Euler beam with external damping.
Figure 2: Free-body diagram of a beam differential element.
The stress-driven formulation Eqs. (8), (9) is used as the nonlocal elasticity law,
while the effect of external damping is introduced and modeled as a bed of
dashpots with viscosity $\eta$. This kind of viscous interaction is able to
model a possible external interaction between the nonlocal beam and a surrounding viscous fluid.
Damping can also be simulated as a material effect adik ; adik2 ; Failla ;
alo18 ; PirDim ; AlottaA by a viscoelastic law.
In this paper, following the Newtonian approach, we consider a beam
differential element of length $dz$, whose free-body diagram is represented in
Figure 2. The equilibrium equation along the $y$-direction, involving external
loading, inertial and damping forces, and bending and shearing fields, reads
as
$\frac{\partial T(z,t)}{\partial z}dz+q(z,t)dz=\eta dz\frac{\partial
v(z,t)}{\partial t}+\rho Adz\frac{\partial^{2}v(z,t)}{\partial
t^{2}},\;\;\;0<z<L$ (10)
By ignoring second-order terms in $dz$ and neglecting mass moment of inertia
and angular acceleration, the rotational equilibrium along the $x$-axis gives
$\frac{\partial M(z,t)}{\partial z}-T(z,t)=0,\;\;\;\;\;0<z<L$ (11)
Combining the equilibrium equations along the $x$- and $y$-directions we get the
partial differential equation
$-\frac{\partial^{2}M(z,t)}{\partial z^{2}}+\eta\frac{\partial
v(z,t)}{\partial t}+\rho A\frac{\partial^{2}v(z,t)}{\partial
t^{2}}=q(z,t),\;\;\;0<z<L$ (12)
By introducing the nonlocal stress-driven relation in Eq. (8), from Eq. (12)
we get
$-EI\left[\frac{\partial^{2}\chi(z,t)}{\partial z^{2}}-(\lambda
L)^{2}\frac{\partial^{4}\chi(z,t)}{\partial z^{4}}\right]+\eta\frac{\partial
v(z,t)}{\partial t}+\rho A\frac{\partial^{2}v(z,t)}{\partial t^{2}}=q(z,t)$
(13)
According to Bernoulli-Euler kinematics, curvature $\chi(z,t)$ is related to
the transverse displacement $v(z,t)$ by
$\chi(z,t)=-\frac{\partial^{2}v(z,t)}{\partial z^{2}}$ (14)
and by placing Eq. (14) into Eq. (13) we have that
$\frac{\partial^{4}v(z,t)}{\partial z^{4}}-(\lambda
L)^{2}\frac{\partial^{6}v(z,t)}{\partial z^{6}}+\frac{\eta}{EI}\frac{\partial
v(z,t)}{\partial t}+\frac{\rho A}{EI}\frac{\partial^{2}v(z,t)}{\partial
t^{2}}=\frac{q(z,t)}{EI}$ (15)
which is the partial differential equilibrium equation ruling bending
vibrations of a nonlocal Bernoulli-Euler beam resting on a bed of dashpots.
For vanishing nonlocal parameter $\lambda\to 0^{+}$, Eq. (15) provides the
known differential equation governing forced vibrations of local Bernoulli-
Euler beams resting on a bed of independent dashpots. Moreover, setting
$\eta=0$, the classical formulation of undamped local beams is obtained
Meirovi ; Pirrotta . Solutions of the introduced partial differential equation
may be found by imposing two initial conditions, four standard BCs and two
constitutive BCs of the stress-driven model Eq. (9).
### 3.1 Free vibrations of undamped nonlocal beam
In order to solve Eq. (15), we first consider the undamped nonlocal beam in
free vibration. In other words, we set $q(z,t)=0$ and $\eta=0$. These
assumptions imply that Eq. (15) yields
$\frac{\partial^{4}v(z,t)}{\partial z^{4}}-(\lambda
L)^{2}\frac{\partial^{6}v(z,t)}{\partial z^{6}}+\frac{\rho
A}{EI}\frac{\partial^{2}v(z,t)}{\partial t^{2}}=0,\;\;\;0<z<L$ (16)
which rules free vibrations of the undamped nonlocal beam. We suppose that the
displacement function $v(z,t)$ is separable in space and time, so that it can be
expressed as the product of a space function $\phi(z)$ (mode shape) and a time-
dependent function $y(t)$ that modulates the amplitude of the mode shape in time.
That is,
$v(z,t)=\phi(z)y(t)$ (17)
By substituting Eq. (17) into Eq. (16) we get
$y(t)\left[\frac{d^{4}\phi(z)}{dz^{4}}-(\lambda
L)^{2}\frac{d^{6}\phi(z)}{dz^{6}}\right]+\frac{\rho
A}{EI}\phi(z)\frac{d^{2}y(t)}{dt^{2}}=0,\;\;\;0<z<L$ (18)
where partial derivatives have been replaced by total derivatives due to the
assumption in Eq. (17). Moreover, by using Lagrange’s differential notation
for space derivative and Newton’s notation for time derivative we can rewrite
Eq. (18) as
$\frac{EI}{\rho A}\frac{(\lambda
L)^{2}\phi^{(6)}(z)-\phi^{(4)}(z)}{\phi(z)}=\frac{\ddot{y}(t)}{y(t)},\;\;\;\;\;0<z<L$
(19)
where both sides of Eq. (19) must be equal to a constant $\alpha$ that can be
associated with the natural frequency of the oscillation $\omega_{0}$ as
$\alpha=-\omega_{0}^{2}$ (20)
Hence, Eq. (19) can be rewritten as
$\frac{EI}{\rho A}\frac{(\lambda
L)^{2}\phi^{(6)}(z)-\phi^{(4)}(z)}{\phi(z)}=\frac{\ddot{y}(t)}{y(t)}=-\omega_{0}^{2}$
(21)
By considering the left side of Eq. (21), we get the following sixth order
differential equation in the space variable $z$
${(\lambda L)^{2}\phi^{(6)}(z)-\phi^{(4)}(z)}+\omega_{0}^{2}\frac{\rho
A}{EI}{\phi(z)}=0,\;\;\;\;\;0<z<L$ (22)
whose solution must satisfy the four BCs depending on the type of loads and
constraints at the bounds and the two constitutive BCs in Eq. (9). That is,
$\left\\{\begin{split}&\phi^{(3)}(0)=\frac{1}{\lambda L}\phi^{(2)}(0)\\\
&\phi^{(3)}(L)=-\frac{1}{\lambda L}\phi^{(2)}(L)\end{split}\right.$ (23)
The problem of finding the constant $\omega_{0}$ and the function $\phi(z)$ such that Eq. (22)
admits a nontrivial solution is known as a differential eigenvalue-eigenfunction
problem, where $\omega_{0}$ is the eigenvalue and $\phi(z)$ represents the
corresponding eigenfunction. Eq. (22) admits infinitely many eigenvalues and hence
infinitely many eigenfunctions. Therefore, the solution in terms of displacement can
be expressed as a sum of infinite products between the modal time coordinates
$y_{j}(t)$ and the mode shapes $\phi_{j}(z)$
$v(z,t)=\sum_{j=1}^{\infty}\phi_{j}(z)y_{j}(t)$ (24)
where the $j$-th eigenfunction $\phi_{j}(z)$ is a solution of the $j$-th
homogeneous sixth-order differential equation in Eq. (22). Specifically,
$\phi_{j}(z)=\sum_{i=1}^{3}\left\\{C_{i}\exp\left[{\sqrt{\gamma_{i}(\omega_{0,j})}z}\right]+C_{i+3}\exp\left[-{\sqrt{\gamma_{i}(\omega_{0,j})}z}\right]\right\\}$
(25)
where $\gamma_{i}(\omega_{0,j})$ with $i=1,2,3$ are the roots of the following
characteristic third degree polynomial
$(\lambda L)^{2}\gamma^{3}-\gamma^{2}+\omega_{0,j}^{2}\frac{\rho A}{EI}=0$
(26)
and the coefficients $C_{i}$ are obtained by imposing that solution in Eq.
(25) satisfies the previous six BCs and the normality condition. In this
manner, each eigenfunction possesses the following orthonormality property
$\int_{0}^{L}\phi_{i}(z)\phi_{j}(z)dz=\delta_{ij}$ (27)
where $\delta_{ij}$ indicates the Kronecker delta defined as
$\delta_{ij}=\left\\{\begin{split}&1\;\;\textrm{if}\;i=j,\\\
&0\;\;\textrm{if}\;i\neq j\end{split}\right.$ (28)
Such a solution of the differential eigenvalue problem will be used in the next
section to solve the more general case of a forced nonlocal damped beam.
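A minimal numerical sketch of this building block follows: for a trial frequency $\omega_{0}$, the roots $\gamma_{i}$ of the cubic Eq. (26) are computed and assembled into the exponentials of Eq. (25). The material and geometric values are illustrative assumptions; the natural frequencies $\omega_{0,j}$ themselves follow from requiring a nontrivial set of coefficients $C_{i}$ compatible with the six BCs.

```python
# Sketch of the eigenanalysis building block, Eqs. (25)-(26).
import numpy as np

EI, rho_A, L, lam = 1.0, 1.0, 1.0, 0.1   # assumed stiffness, mass, length, lambda

def gamma_roots(omega0):
    """Roots of (lambda L)^2 g^3 - g^2 + omega0^2 rho A / EI = 0, Eq. (26)."""
    return np.roots([(lam * L)**2, -1.0, 0.0, omega0**2 * rho_A / EI])

def mode_shape(z, omega0, C):
    """Eq. (25): sum of C_i exp(+sqrt(g_i) z) + C_{i+3} exp(-sqrt(g_i) z)."""
    s = np.sqrt(gamma_roots(omega0).astype(complex))
    return sum(C[i] * np.exp(s[i] * z) + C[i + 3] * np.exp(-s[i] * z)
               for i in range(3))

print(gamma_roots(10.0))                  # the three (real or complex) roots
print(mode_shape(0.5, 10.0, np.ones(6)))  # shape evaluated for trial coefficients
```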
### 3.2 Forced vibrations of damped nonlocal beam
Now, we consider the dynamical problem of a forced beam with external damping.
The solution in terms of displacements expressed in Eq. (24) as a function of
the mode shapes $\phi_{j}(z)$ is placed into the equilibrium equation (15).
Specifically,
$\begin{split}&\frac{EI}{\rho
A}\left[\sum_{j=1}^{\infty}y_{j}(t)\phi^{(4)}_{j}(z)-\sum_{j=1}^{\infty}y_{j}(t)(\lambda
L)^{2}\phi^{(6)}_{j}(z)\right]+\\\ &+\frac{\eta}{\rho
A}\sum_{j=1}^{\infty}\dot{y}_{j}(t)\phi_{j}(z)+\sum_{j=1}^{\infty}\ddot{y}_{j}(t)\phi_{j}(z)=\frac{q(z,t)}{\rho
A}\end{split}$ (29)
By multiplying both sides of Eq. (29) by the $i$-th eigenfunction $\phi_{i}(z)$
and integrating over the domain $[0,L]$, we get
$\begin{split}&\frac{EI}{\rho
A}\left[\sum_{j=1}^{\infty}y_{j}(t)\int_{0}^{L}\phi_{i}(z)\phi^{(4)}_{j}(z)dz\right]+\\\
&-\frac{EI}{\rho A}\left[\sum_{j=1}^{\infty}y_{j}(t)(\lambda
L)^{2}\int_{0}^{L}\phi_{i}(z)\phi^{(6)}_{j}(z)dz\right]+\\\ &+\frac{\eta}{\rho
A}\sum_{j=1}^{\infty}\dot{y}_{j}(t)\int_{0}^{L}\phi_{i}(z)\phi_{j}(z)dz+\sum_{j=1}^{\infty}\ddot{y}_{j}(t)\int_{0}^{L}\phi_{i}(z)\phi_{j}(z)dz=\\\
&\frac{1}{\rho A}\int_{0}^{L}\phi_{i}(z)q(z,t)dz\end{split}$ (30)
Taking into account the eigenfunctions orthonormality property in Eq. (27),
from Eq. (30) we get the second order differential equation in terms of modal
coordinate $y_{i}(t)$ that rules the motion of a forced damped modal
oscillator
$\frac{EI}{\rho A}y_{i}(t)\left[a_{i}-(\lambda
L)^{2}b_{i}\right]+\frac{\eta}{\rho
A}\dot{y}_{i}(t)+\ddot{y}_{i}(t)=\frac{1}{\rho
A}\int_{0}^{L}\phi_{i}(z)q(z,t)dz$ (31)
where the coefficients $a_{i}$ and $b_{i}$ are
$a_{i}=\int_{0}^{L}\phi_{i}(z)\phi^{(4)}_{i}(z)dz,\;\;\;\;\;b_{i}=\int_{0}^{L}\phi_{i}(z)\phi^{(6)}_{i}(z)dz$
(32)
Moreover, we assume that the load is also separable. Therefore,
$q(z,t)=g(z)f(t)$ (33)
This implies that Eq. (31) can be rewritten as
$\frac{k_{\lambda,i}}{\rho A}y_{i}(t)+\frac{\eta}{\rho
A}\dot{y}_{i}(t)+\ddot{y}_{i}(t)=\frac{c_{i}}{\rho A}f(t)$ (34)
where the _nonlocal modal stiffness_ $k_{\lambda,i}$ is defined as
${k}_{\lambda,i}=EI\left[a_{i}-(\lambda L)^{2}b_{i}\right]$ (35)
and the coefficient $c_{i}$ is
$c_{i}=\int_{0}^{L}\phi_{i}(z)g(z)dz$ (36)
Notice that the following term
$\omega_{0,i}=\sqrt{\frac{k_{\lambda,i}}{\rho A}}$ (37)
represents the natural frequency associated with the $i$-th mode $\phi_{i}(z)$.
Solution of Eq. (34) provides the i-th modal time coordinate of the infinite
series in Eq. (24). Truncating the sum to an appropriate number $n$ of
eigenfunctions and modal coordinates leads to the following approximate
solution, which will be used in the numerical applications
$v(z,t)\approx\sum_{j=1}^{n}\phi_{j}(z)y_{j}(t)$ (38)
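To make Eqs. (35) and (37) concrete, the sketch below evaluates the nonlocal modal stiffness for assumed sinusoidal shapes $\phi_{j}(z)=\sqrt{2/L}\sin(j\pi z/L)$, for which $a_{j}=(j\pi/L)^{4}$ and $b_{j}=-(j\pi/L)^{6}$. Such shapes satisfy pinned-pinned kinematic conditions but not the constitutive BCs Eq. (23) exactly, so the example only illustrates the stiffening role of the nonlocal parameter.

```python
# Sketch of the nonlocal modal stiffness, Eq. (35), and natural frequency,
# Eq. (37), under the assumed sinusoidal mode shapes described above.
import numpy as np

EI, rho_A, L = 1.0, 1.0, 1.0   # assumed, nondimensional values

def modal_stiffness(j, lam):
    a_j = (j * np.pi / L)**4   # int phi phi'''' dz for orthonormal sine shapes
    b_j = -(j * np.pi / L)**6  # int phi phi^(6) dz
    return EI * (a_j - (lam * L)**2 * b_j)          # Eq. (35)

for lam in (0.0, 0.05, 0.1):
    w1 = np.sqrt(modal_stiffness(1, lam) / rho_A)   # Eq. (37)
    print(f"lambda = {lam:4.2f}  ->  omega_0,1 = {w1:6.3f}")
# The frequency grows with lambda: the stress-driven model stiffens the beam.
```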
## 4 Stochastic analysis of nonlocal beams
Now we turn the attention to random vibrations of nonlocal beams, supposing
that the time-dependent part of the transverse load $q(z,t)$ has a stochastic
nature. Specifically, from Eq. (33) we assume that
$q(z,t)=g(z)F(t)$ (39)
where $g(z)$ is a deterministic function, while $F(t)$ is a stochastic
process; the latter is denoted by a capital letter to distinguish it from the
deterministic function $f(t)$. Moreover, we assume that $F(t)$ is a stationary
Gaussian process with zero mean $\mu_{F}$ and with an assigned correlation
function (CF) denoted by $R_{F}(\tau)$. Thus, the input process is completely
described by the following time-independent parameters
$\mu_{F}=\mu_{F}(t):=\mathbb{E}\left[F(t)\right]=0$ (40a)
$\begin{split}R_{F}(\tau)=R_{F}(t,t+\tau):&=\mathbb{E}\left[F(t)F(t+\tau)\right]-\mu_{F}^{2}\\\
&=\mathbb{E}\left[F(t)F(t+\tau)\right]\end{split}$ (40b)
where $\mathbb{E}[\cdot]$ is the averaging operator. For $\tau=0$ the CF gives
the value of the variance $\sigma_{F}^{2}$. That is,
$\sigma_{F}^{2}=\mathbb{E}\left[F(t)F(t)\right]=R_{F}(0)$ (41)
Being $F(t)$ a stationary process, by virtue of Wiener-Khinchin theorem, the
power spectral density (PSD), denoted by $S_{F}(\omega)$, is the Fourier
transform of the correlation function. That is,
$\begin{split}S_{F}(\omega):&=\frac{1}{2\pi}\int_{-\infty}^{\infty}R_{F}(\tau)e^{-\mathrm{i}\omega\tau}d\tau\\\
&=\lim_{\mathrm{T}\to\infty}\frac{1}{2\pi\mathrm{T}}\mathbb{E}\left[\hat{F}^{*}(\omega,\mathrm{T})\hat{F}(\omega,\mathrm{T})\right]\end{split}$
(42)
where $\mathrm{i}=\sqrt{-1}$ is the imaginary unit,
$\hat{F}(\omega,\mathrm{T})$ indicates the truncated Fourier transform of the
process $F(t)$ in a finite time interval $[0,\mathrm{T}]$, and
$\hat{F}^{*}(\omega,\mathrm{T})$ denotes its complex conjugate.
By taking into account Eq. (39) and the definition in Eq. (40b), CF of the
loading $q(z,t)$ is given by
$\begin{split}R_{q}(z,\tau)&=\mathbb{E}\left[q(z,t)q(z,t+\tau)\right]\\\
&=g(z)\mathbb{E}\left[F(t)F(t+\tau)\right]g(z)\\\
&=g^{2}(z)R_{F}(\tau)\end{split}$ (43)
Similarly, the PSD of the loading is
$\begin{split}S_{q}(z,\omega)&=\lim_{\mathrm{T}\to\infty}\frac{1}{2\pi\mathrm{T}}\mathbb{E}\left[\hat{q}^{*}(z,\omega,\mathrm{T})\hat{q}(z,\omega,\mathrm{T})\right]\\\
&=g(z)\lim_{\mathrm{T}\to\infty}\frac{1}{2\pi\mathrm{T}}\mathbb{E}\left[\hat{F}^{*}(\omega,\mathrm{T})\hat{F}(\omega,\mathrm{T})\right]g(z)\\\
&=g^{2}(z)S_{F}(\omega)\end{split}$ (44)
and the variance $\sigma_{q}^{2}(z)$ is
$\sigma^{2}_{q}(z)=\mathbb{E}\left[q(z,t)q(z,t)\right]=g(z)\sigma^{2}_{F}g(z)$
(45)
Now we want to characterize the response process in terms of displacement
$v(z,t)$ when the input is a Gaussian stationary process. Since the stochastic
input is Gaussian, the stochastic output will also be Gaussian, but the response
process will have a stationary part for $t\gg 0$ and a transient non-
stationary one. Therefore, the time evolution of the statistics must be
evaluated.
### 4.1 Time-domain response
By observing Eq. (34) we deduce that if the forcing load $f(t)=F(t)$, then
also the time response of the beam in terms of modal coordinate will be a
stochastic process $Y(t)$. Hence, from Eq. (24) we get
$v(z,t)=\sum_{j=1}^{\infty}\phi_{j}(z)Y_{j}(t)$ (46)
where $\phi_{j}(z)$ is a deterministic function that can be analytically
evaluated. While, the stochastic response $Y_{j}(t)$ is solution of the
following stochastic differential equation
$\ddot{Y}_{j}(t)+\frac{\eta}{\rho A}\dot{Y}_{j}(t)+\frac{k_{\lambda,j}}{\rho
A}Y_{j}(t)=\frac{c_{j}}{\rho A}F(t)$ (47)
The forced input $F(t)$ is a Gaussian process and the differential equation in
Eq. (47) is linear. This implies that the output process $Y_{j}(t)$ will be
Gaussian too, and then can be completely characterized by the mean
$\mu_{Y_{j}}(t)$ and the correlation function $R_{Y_{j}}(t,t+\tau)$. Under the
assumption that the beam is quiescent at $t=0$, the two initial conditions
become
$\begin{cases}v(z,0)=0,\,\forall z\in[0,L]\Rightarrow Y_{j}(0)=0,\,\forall
j\in\mathbb{N}^{+}\\\ \dot{v}(z,0)=0,\,\forall
z\in[0,L]\Rightarrow\dot{Y}_{j}(0)=0,\,\forall j\in\mathbb{N}^{+}\\\
\end{cases}$ (48)
The output process is obtained by applying the Duhamel superposition integral
$Y_{j}(t)=\frac{c_{j}}{\rho A}\int_{0}^{t}h_{j}(t-\tau)F(\tau)d\tau$ (49)
being $h_{j}(t)$ a deterministic function which represents the impulse
response of the $j$-th modal oscillator. Such a function is defined as
$h_{j}(t)=\sqrt{\frac{4(\rho A)^{2}}{4k_{\lambda,j}\rho
A-\eta^{2}}}\exp\left(-\frac{\eta}{2\rho
A}t\right)\sin\left(\sqrt{\frac{4k_{\lambda,j}\rho A-\eta^{2}}{4(\rho
A)^{2}}}t\right)$ (50)
We can observe that the term
$\omega_{D,j}=\sqrt{\frac{4k_{\lambda,j}\rho A-\eta^{2}}{4(\rho A)^{2}}}$ (51)
is the damped frequency. Therefore, the impulse response can be rewritten as
$h_{j}(t)=\frac{1}{\omega_{D,j}}\exp\left(-\frac{\eta}{2\rho
A}t\right)\sin\left(\omega_{D,j}t\right)$ (52)
Taking Eq. (40a) into account and by applying the averaging operator to Eq.
(49) we can prove that the mean of the response process is zero. That is,
$\begin{split}\mu_{Y_{j}}(t)&=\mathbb{E}\left[\frac{c_{j}}{\rho
A}\int_{0}^{t}h_{j}(t-\tau)F(\tau)d\tau\right]\\\ &=\frac{c_{j}}{\rho
A}\int_{0}^{t}h_{j}(t-\tau)\mathbb{E}\left[F(\tau)\right]d\tau\\\
&=\frac{c_{j}}{\rho
A}\int_{0}^{t}h_{j}(t-\tau)\mu_{F}(\tau)d\tau=0\end{split}$ (53)
Eq. (53) implies that the mean of the displacements is also zero, $\mu_{v}(z,t)=0$
for all $t\geqslant 0$, and for $z\in[0,L]$.
CF of the response process $Y_{j}(t)$ can be evaluated taking into account the
assumption in Eq. (40b) and the Duhamel integral in Eq. (49). We apply the
averaging operator to the process $Y_{j}(t)$ considering two different time
instants $t=t_{1}$ and $t+\tau=t_{2}$. That is,
$\begin{split}R_{Y_{j}}(t_{1},t_{2}):=&\mathbb{E}\left[Y_{j}(t_{1})Y_{j}(t_{2})\right]\\\
=&\frac{c_{j}^{2}}{(\rho
A)^{2}}\int_{0}^{t_{1}}\int_{0}^{t_{2}}h_{j}(t_{1}-\tau_{1})h_{j}(t_{2}-\tau_{2})\mathbb{E}\left[F(\tau_{1})F(\tau_{2})\right]d\tau_{1}d\tau_{2}\\\
=&\frac{c_{j}^{2}}{(\rho
A)^{2}}\int_{0}^{t_{1}}\int_{0}^{t_{2}}h_{j}(t_{1}-\tau_{1})h_{j}(t_{2}-\tau_{2})R_{F}(\tau_{2}-\tau_{1})d\tau_{1}d\tau_{2}\end{split}$
(54)
From Eq. (54) the variance $\sigma^{2}_{Y_{j}}(t)$ can also be evaluated by
setting $t_{1}=t_{2}$.
By taking into account Eq. (46), the CF of the displacement $v(z,t)$ is
$\begin{split}R_{v}(z,t_{1},t_{2}):&=\mathbb{E}\left[v(z,t_{1})v(z,t_{2})\right]\\\
&=\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}\phi_{j}(z)R_{Y_{j}Y_{i}}(t_{1},t_{2})\phi_{i}(z)\end{split}$
(55)
where $R_{Y_{j}Y_{i}}(t_{1},t_{2})$ is the cross correlation of the modal
response processes $Y_{j}(t)$ and $Y_{i}(t)$, defined as
$\begin{split}R_{Y_{j}Y_{i}}(t_{1},t_{2}):=&\mathbb{E}\left[Y_{j}(t_{1})Y_{i}(t_{2})\right]\\\
=&\frac{c_{j}c_{i}}{(\rho
A)^{2}}\int_{0}^{t_{1}}\int_{0}^{t_{2}}h_{j}(t_{1}-\tau_{1})h_{i}(t_{2}-\tau_{2})R_{F}(\tau_{2}-\tau_{1})d\tau_{1}d\tau_{2}\end{split}$
(56)
If $t_{1}=t_{2}=t$ Eq. (55) provides the non-stationary variance of the
displacement $v(z,t)$. That is,
$\begin{split}\sigma^{2}_{v}(z,t):&=\mathbb{E}\left[v(z,t)v(z,t)\right]\\\
&=\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}\phi_{j}(z)\sigma^{2}_{Y_{j}Y_{i}}(t)\phi_{i}(z)\end{split}$
(57)
where $\sigma^{2}_{Y_{j}Y_{i}}(t)$ is
$\begin{split}\sigma^{2}_{Y_{j}Y_{i}}(t):=&\mathbb{E}\left[Y_{j}(t)Y_{i}(t)\right]\\\
=&\frac{c_{j}c_{i}}{(\rho
A)^{2}}\int_{0}^{t}\int_{0}^{t}h_{j}(t-\tau_{1})h_{i}(t-\tau_{2})R_{F}(\tau_{2}-\tau_{1})d\tau_{1}d\tau_{2}\end{split}$ (58)
and represents the cross variance of the modal response processes $Y_{j}(t)$
and $Y_{i}(t)$.
#### 4.1.1 Monte Carlo simulation
In some cases, Eq. (56) and Eq. (58) cannot be evaluated in closed form, and a
numerical approach is needed to characterize the response process from a
stochastic point of view. In this context, the Monte Carlo (MC) method is a
powerful tool that provides the time-domain response with the aid of digital
simulations. Specifically, as a first step, a proper number $N$ of samples (or
realizations) of the stochastic input $F(t)$ must be generated. The $i$-th
realization of the stochastic input process is denoted as $F^{i}(t)$ and can be
generated by the harmonic superposition method proposed by Shinozuka and
Deodatis Shino . According to this approach, the generic $i$-th sample of the
forcing process is given as
$F^{i}(t)=\sqrt{2}\sum_{j=1}^{m}\sqrt{2S_{F}(\omega_{j})\Delta\omega}\cos{\left(\omega_{j}t+\theta_{j}^{i}\right)}$
(59)
where $\omega_{j}=j\Delta\omega$, $\Delta\omega$ is the discretization step in
the frequency domain of the PSD function $S_{F}(\omega)$, $\theta_{j}^{i}$
represents the $i$-th realization of an independent random phase with uniformly
distributed probability density function between $0$ and $2\pi$.
As a second step, the response samples must be evaluated. In particular,
for each input sample $F^{i}(t)$ we need to evaluate the output process
$Y^{i}_{j}(t)$ in Eq. (47) with the aid of the Duhamel superposition integral
in Eq. (49). Thus,
$Y_{j}^{i}(t)=\frac{c_{j}}{\rho A}\int_{0}^{t}h_{j}(t-\tau)F^{i}(\tau)d\tau$
(60)
As a last step, after the evaluation of the $n\times N$ response processes
$Y_{j}^{i}(t)$ with $i=1,2\dots,N$, and $j=1,2,\dots,n$, the stochastic
displacement process samples $v^{i}(z,t)$ can be used to evaluate the
statistics numerically.
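A compact single-mode sketch of the procedure, combining the harmonic superposition of Eq. (59) with the discretized Duhamel integral of Eq. (60), is reported below; all numerical values are illustrative assumptions.

```python
# Monte Carlo sketch: generate N white-noise samples by harmonic superposition,
# pass each through the modal impulse response h_j of Eq. (52), and estimate
# the modal-coordinate variance across samples.
import numpy as np

rho_A, eta, k_lam, c_j, S0 = 1.0, 0.4, 100.0, 1.0, 1.0   # assumed modal data
w_D = np.sqrt((4 * k_lam * rho_A - eta**2) / (4 * rho_A**2))      # Eq. (51)

dt, T = 0.005, 20.0
t = np.arange(0.0, T, dt)
h = np.exp(-eta / (2 * rho_A) * t) * np.sin(w_D * t) / w_D        # Eq. (52)

m, dw = 2000, 0.05                       # PSD discretization for Eq. (59)
w = (np.arange(m) + 1) * dw
rng = np.random.default_rng(0)

N = 100                                  # number of realizations
Y = np.empty((N, t.size))
for i in range(N):
    theta = rng.uniform(0.0, 2 * np.pi, m)            # independent random phases
    F = np.sqrt(2.0) * (np.sqrt(2 * S0 * dw) *
                        np.cos(np.outer(t, w) + theta)).sum(axis=1)   # Eq. (59)
    Y[i] = c_j / rho_A * np.convolve(h, F)[:t.size] * dt  # Duhamel, Eq. (60)

print("stationary variance estimate:", Y[:, t.size // 2:].var())
```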
### 4.2 Frequency domain approach
Steady-state analysis and characterization of stationary responses can be
carried out analytically. Specifically, we take the truncated Fourier
transform of the stochastic differential equation (47). That is,
$\hat{Y}_{j}(\omega,\mathrm{T})\left[-\omega^{2}+\frac{\eta}{\rho
A}\mathrm{i}\omega+\frac{k_{\lambda,j}}{\rho A}\right]=\frac{c_{j}}{\rho
A}\hat{F}(\omega,\mathrm{T})$ (61)
which transforms the differential equation in time domain to an algebraic
equation in frequency domain. From Eq. (61) we get the solution
$\hat{Y}_{j}(\omega,\mathrm{T})$ as
$\begin{split}\hat{Y}_{j}(\omega,\mathrm{T})&=\frac{1}{-\omega^{2}+\frac{\eta}{\rho
A}\mathrm{i}\omega+\frac{k_{\lambda,j}}{\rho A}}\frac{c_{j}}{\rho
A}\hat{F}(\omega,\mathrm{T})\\\ &=H_{j}(\omega)\frac{c_{j}}{\rho
A}\hat{F}(\omega,\mathrm{T})\end{split}$ (62)
where $H_{j}(\omega)$ is the transfer function of the $j$-th modal oscillator.
Once the response $\hat{Y}_{j}(\omega,\mathrm{T})$ in the frequency domain is known,
we can insert it into the expression of the cross PSD of the response processes
$Y_{j}(t)$ and $Y_{i}(t)$. That is,
$\begin{split}S_{Y_{j}Y_{i}}(\omega)&=\lim_{\mathrm{T}\to\infty}\frac{1}{2\pi\mathrm{T}}\mathbb{E}\left[\hat{Y}_{j}^{*}(\omega,\mathrm{T})\hat{Y}_{i}(\omega,\mathrm{T})\right]\\\
&=\frac{c_{j}}{\rho
A}H_{j}^{*}(\omega)\lim_{\mathrm{T}\to\infty}\frac{1}{2\pi\mathrm{T}}\mathbb{E}\left[\hat{F}^{*}(\omega,\mathrm{T})\hat{F}(\omega,\mathrm{T})\right]\frac{c_{i}}{\rho
A}H_{i}(\omega)\\\ &=\frac{c_{j}c_{i}}{\left(\rho
A\right)^{2}}H_{j}^{*}(\omega)H_{i}(\omega)S_{F}(\omega)\end{split}$ (63)
Finally, by using Eq. (63), the analytical form of the PSD of the beam displacements can be evaluated as follows
$\begin{split}S_{v}(z,\omega)&=\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}\phi_{j}(z)\phi_{i}(z)\lim_{\mathrm{T}\to\infty}\frac{1}{2\pi\mathrm{T}}\mathbb{E}\left[\hat{Y}_{j}^{*}(\omega,\mathrm{T})\hat{Y}_{i}(\omega,\mathrm{T})\right]\\\
&=\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}\phi_{j}(z)\phi_{i}(z)S_{Y_{j}Y_{i}}(\omega)\\\
&=\frac{S_{F}(\omega)}{(\rho
A)^{2}}\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}\phi_{j}(z)\phi_{i}(z){c_{j}c_{i}}H_{j}^{*}(\omega)H_{i}(\omega)\end{split}$
(64)
The PSD in Eq. (64) allows for evaluating the stationary variance of
transverse displacements $\sigma^{2}_{v}(z)$. That is,
$\sigma^{2}_{v}(z)=\int_{-\infty}^{\infty}S_{v}(z,\omega)d\omega=\frac{1}{(\rho
A)^{2}}\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}\phi_{j}(z)\phi_{i}(z)c_{j}c_{i}\int_{-\infty}^{\infty}H_{j}^{*}(\omega)S_{F}(\omega)H_{i}(\omega)d\omega$
(65)
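The double sum in Eq. (64) and the integral in Eq. (65) can be evaluated numerically once the modal quantities are available. The following sketch, with purely illustrative modal data (not values from the paper), shows the computation for a truncated number of modes:

```python
# Minimal sketch of the frequency-domain route (Eqs. 62-65), truncated to a
# few modes. Modal data (phi_j(z0), c_j, omega0) are illustrative placeholders.
import numpy as np

rho_A, eta = 1.0, 0.05
omega0 = np.array([1.0, 6.0, 18.0])      # sqrt(k_{lambda,j}/(rho*A))
c_j    = np.array([1.0, 0.8, 0.5])
phi_z0 = np.array([1.0, -0.7, 0.4])

def H(j, w):
    """Transfer function of the j-th modal oscillator, Eq. (62)."""
    return 1.0 / (-w**2 + 1j * (eta / rho_A) * w + omega0[j]**2)

w = np.linspace(-50.0, 50.0, 200001)
S_F = np.ones_like(w)                    # flat (white-like) input PSD

# Cross-spectral double sum of Eq. (64) at a fixed location z0
S_v = np.zeros_like(w, dtype=complex)
for j in range(3):
    for i in range(3):
        S_v += (phi_z0[j] * phi_z0[i] * c_j[j] * c_j[i] / rho_A**2
                * np.conj(H(j, w)) * H(i, w) * S_F)

sigma2_v = np.trapz(S_v.real, w)         # stationary variance, Eq. (65)
print(sigma2_v)
```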
### 4.3 Nonlocal beam under Gaussian white noise
In this section we consider the case in which the input process is a Gaussian
white noise with zero mean denoted by $W(t)$ and characterized by a constant
PSD and a Dirac delta as CF. That is,
$S_{W}(\omega)=S_{0},\;\;\;R_{W}(\tau)=2\pi S_{0}\delta(\tau)$ (66)
This assumption does not limit the generality of the present study, inasmuch as several real excitation processes can be modeled as summations of modulated white noises.
Moreover, for this case analytical solutions in terms of the response statistics can be obtained, and some useful results about the time- and frequency-domain analyses can be drawn. Under the assumptions in Eq. (66), Eq. (64) and Eq. (65) provide the characterization of the stochastic output process in terms of PSD and stationary variance. Specifically,
$S_{v}(z,\omega)=\frac{S_{0}}{(\rho
A)^{2}}\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}\phi_{j}(z)\phi_{i}(z){c_{j}c_{i}}H_{j}^{*}(\omega)H_{i}(\omega)$
(67)
and
$\sigma^{2}_{v}(z)=\frac{S_{0}}{(\rho
A)^{2}}\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}\phi_{j}(z)\phi_{i}(z)c_{j}c_{i}\int_{-\infty}^{\infty}H_{j}^{*}(\omega)H_{i}(\omega)d\omega$
(68)
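As a check on Eq. (68), the diagonal terms $j=i$ admit a classical closed form. Assuming the underdamped modal transfer function of Eq. (62), i.e. $H_{j}(\omega)=\left[\omega_{0,j}^{2}-\omega^{2}+\mathrm{i}(\eta/\rho A)\,\omega\right]^{-1}$ with $\omega_{0,j}^{2}=k_{\lambda,j}/(\rho A)$, a standard result of random vibration theory (see, e.g., Roberts and Spanos) gives $\int_{-\infty}^{\infty}\left|H_{j}(\omega)\right|^{2}d\omega=\pi\rho A/(\eta\,\omega_{0,j}^{2})$, so that, retaining only the diagonal contributions, Eq. (68) reduces to the estimate $\sigma^{2}_{v}(z)\approx\frac{\pi S_{0}}{\rho A\,\eta}\sum_{j=1}^{\infty}\frac{c_{j}^{2}\phi_{j}^{2}(z)}{\omega_{0,j}^{2}}$, which already shows that stiffer modes contribute less to the stationary variance.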
However, the quantities in Eqs. (67) and (68) characterize the displacement only at steady state. A complete stochastic characterization of the response requires evaluating the CF in Eq. (46) and the time-dependent non-stationary variance in Eq. (57). In this regard, exploiting the properties of the stochastic input, the CF of the displacement is given by
$R_{v}(z,t_{1},t_{2})=\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}\phi_{j}(z)R_{Y_{j}Y_{i}}(t_{1},t_{2})\phi_{i}(z)$
(69)
where each term in the summations can be evaluated in closed form. That is,
$\begin{split}R_{Y_{j}Y_{i}}(t_{1},t_{2})&=\frac{c_{j}c_{i}2\pi S_{0}}{(\rho
A)^{2}}\int_{0}^{t_{1}}\int_{0}^{t_{2}}h_{j}(t_{1}-\tau_{1})h_{i}(t_{2}-\tau_{2})\delta(\tau_{2}-\tau_{1})d\tau_{1}d\tau_{2}\\\
&=\frac{c_{j}c_{i}2\pi S_{0}}{(\rho
A)^{2}}\int_{0}^{t_{1}}h_{j}(t_{1}-\tau_{1})h_{i}(t_{2}-\tau_{1})d\tau_{1}\end{split}$
(70)
From Eq. (69) the time-dependent variance of the displacement is
$\begin{split}\sigma_{v}^{2}(z,t)&=R_{v}(z,t,t)=\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}\phi_{j}(z)\phi_{i}(z)R_{Y_{j}Y_{i}}(t,t)\\\
&=\frac{2\pi S_{0}}{(\rho
A)^{2}}\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}c_{j}c_{i}\phi_{j}(z)\phi_{i}(z)\int_{0}^{t}h_{j}(t-\tau)h_{i}(t-\tau)d\tau\\\
\end{split}$ (71)
Note that both the expressions in Eq. (70) and in Eq. (71) can be evaluated in
analytical form.
## 5 Numerical simulation and parametric study
This section is devoted to the stationary and non-stationary analysis of the
stochastic response of a nonlocal damped Bernoulli-Euler beam forced by a
random load. Such numerical analysis aims to study the influence of the
nonlocal parameter $\lambda$ in the response in terms of CF, PSD and
stationary and non-stationary variance.
We consider a micro-beam of length $L=300$ $\mu$m, rectangular cross section with width $b=30$ $\mu$m and thickness $h=25$ $\mu$m, made of epoxy characterized by density $\rho=1.20$ g/cm${}^{3}$ and elastic modulus $E=2.20$ GPa Mems ; Ashby1 . Damping effects due to the surrounding environment are modeled by the viscosity value $\eta=2.00$ cP, representative of a wide variety of viscous fluids of technical interest Ashby1 ; Ashby2 . The considered micro-beam is constrained as a cantilever and is forced by a ground motion acceleration, as depicted in Figure 3(a). Without loss of generality, the acceleration imposed at the base $z=0$ is a Gaussian white noise
$\ddot{v}(0,t)=a_{g}(t)=W(t)$ (72)
where the white noise $W(t)$ has zero mean and constant PSD $S_{0}=10^{3}$ N${}^{2}$s, and $v(z,t)$ denotes the relative displacement with respect to the base.
In this case the partial differential equation (15) becomes
$\frac{\partial^{4}v(z,t)}{\partial z^{4}}-(\lambda
L)^{2}\frac{\partial^{6}v(z,t)}{\partial z^{6}}+\frac{\eta}{EI}\frac{\partial
v(z,t)}{\partial t}+\frac{\rho A}{EI}\left[\frac{\partial^{2}v(z,t)}{\partial
t^{2}}+a_{g}(t)\right]=0,\;\;\;\;\;0<z<L$ (73)
and the BCs $\forall t$ are
$\left\\{\begin{split}&v(0,t)=0,&\;\;\;\;&M(L,t)=0,\\\
&\varphi(0,t)=0,&\;\;\;&T(L,t)=0\end{split}\right.$ (74)
Let us recall that, by virtue of Eq. (8) and Eq. (11), the bending moment $M$ and shear force $T$ fields can be expressed in terms of the elastic curvature and its derivatives as
$M(z,t)=EI\chi(z,t)-EI(\lambda L)^{2}\frac{\partial^{2}\chi(z,t)}{\partial
z^{2}}$ (75a) $T(z,t)=EI\frac{\partial\chi(z,t)}{\partial z}-EI(\lambda
L)^{2}\frac{\partial^{3}\chi(z,t)}{\partial z^{3}}$ (75b)
To show the effects of the nonlocal parameter on the response, different cases of Eq. (73) are considered. Specifically, we select three values of the nonlocal parameter $\lambda$, i.e., $\lambda=\left\\{0.1,\,0.2,\,0.3\right\\}$.
(a) Layout of the cantilever micro-beam
(b) First five eigenfunctions for $\lambda=0.2$
Figure 3: Cantilever micro-beam forced by ground motion acceleration
The solution in terms of displacement function is obtained with the aid of Eq.
(38), where each eigenfunction is obtained by solving the differential problem
in Eq. (22) with the two constitutive BCs in Eq. (23) and the four BCs in Eq.
(74). The latter BCs in terms of eigenfunctions are
$\left\\{\begin{split}&\phi(0)=0,&\;&\phi^{(2)}(L)-(\lambda
L)^{2}\phi^{(4)}(L)=0,\\\ &\phi^{(1)}(0)=0,&\;\;\;&\phi^{(3)}(L)-(\lambda
L)^{2}\phi^{(5)}(L)=0\end{split}\right.$ (76)
The first five eigenfunctions for $\lambda=0.2$ are shown in Figure 3(b) and
the first five natural frequencies for different values of $\lambda$ are
reported in Table 1.
$\lambda$ | $\omega_{0,1}$ | $\omega_{0,2}$ | $\omega_{0,3}$ | $\omega_{0,4}$ | $\omega_{0,5}$
---|---|---|---|---|---
0.10 | $4.2323\times 10^{5}$ | $2.8373\times 10^{6}$ | $8.8780\times 10^{6}$ | $1.9924\times 10^{7}$ | $3.7820\times 10^{7}$
0.15 | $4.4551\times 10^{5}$ | $3.1643\times 10^{6}$ | $1.0609\times 10^{7}$ | $2.5278\times 10^{7}$ | $5.0090\times 10^{7}$
0.20 | $4.6795\times 10^{5}$ | $3.5244\times 10^{6}$ | $1.2495\times 10^{7}$ | $3.1001\times 10^{7}$ | $6.2992\times 10^{7}$
0.25 | $4.9002\times 10^{5}$ | $3.9038\times 10^{6}$ | $1.4464\times 10^{7}$ | $3.6905\times 10^{7}$ | $7.6186\times 10^{7}$
0.30 | $5.1192\times 10^{5}$ | $4.2951\times 10^{6}$ | $1.6481\times 10^{7}$ | $4.2909\times 10^{7}$ | $8.9539\times 10^{7}$
Table 1: Natural frequencies in $rad/s$ of cantilever micro-beam for different
values of $\lambda$.
Taking Eq. (73) into account and with the aid of the definition in Eq. (67)
the PSD of the displacement is
$S_{v}(z,\omega)\approx
S_{0}\sum_{j=1}^{n}\sum_{i=1}^{n}\phi_{j}(z)\phi_{i}(z)c_{j}c_{i}H_{j}^{*}(\omega)H_{i}(\omega)$
(77)
where for the present numerical simulations we assume $n=5$.
In Figure 4 the PSDs of the displacements are reported for different values of $\lambda$. Specifically, Figure 4(a) shows the PSDs of the mid-point displacements, while in Figure 4(b) the PSDs at $z=L$ are shown. We can observe that the nonlocal parameter influences both the natural frequencies and the peak amplitudes in the PSD. Specifically, as the nonlocal parameter increases, the amplitudes of the peaks decrease and their frequencies increase.
(a) PSD for $z=L/2$
(b) PSD for $z=L$
Figure 4: PSD of a cantilever micro-beam response for different values of
$\lambda$
From the PSD in Eq. (77) and with the aid of Eq. (65), we can also evaluate the stationary variance of the displacement by integrating the PSD as follows
$\sigma_{v}^{2}(z)=\int_{-\infty}^{\infty}S_{v}(z,\omega)d\omega\approx
S_{0}\sum_{j=1}^{n}\sum_{i=1}^{n}\phi_{j}(z)\phi_{i}(z)c_{j}c_{i}\int_{-\infty}^{\infty}H_{j}^{*}(\omega)H_{i}(\omega)d\omega$
(78)
In Table 2 stationary variances at $z=L/2$ and $z=L$ for different values of
$\lambda$ are reported.
$\lambda$ | $\sigma^{2}_{v}(L/2)$ | $\sigma^{2}_{v}(L)$
---|---|---
0.10 | $0.0200$ | $0.1975$
0.15 | $0.0165$ | $0.1791$
0.20 | $0.0144$ | $0.1645$
0.25 | $0.0138$ | $0.1548$
0.30 | $0.0115$ | $0.1416$
Table 2: Displacement variances in $\mu$m${}^{2}$ at $z=L/2$ and $z=L$, for different values of $\lambda$.
Taking into account the values in this table, we can state that the nonlocal parameter also influences the stationary variances. Specifically, as the nonlocal parameter $\lambda$ increases, the stationary variances of the displacement decrease.
The PSD in Eq. (77) and the stationary variance in Eq. (78) provide a steady-
state characterization of the displacement process due to a stochastic ground
motion acceleration. However, taking Eq. (73) into account, a characterization
of the non-stationary response can be pursued by using Eq. (69) to evaluate
the CF of the process $v(z,t)$. That is,
$R_{v}(z,t_{1},t_{2})=2\pi
S_{0}\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}\phi_{j}(z)\phi_{i}(z){c_{j}c_{i}}\int_{0}^{t_{1}}h_{j}(t_{1}-\tau_{1})h_{i}(t_{2}-\tau_{1})d\tau_{1}$
(79)
By virtue of the definition in Eq. (71) the time-dependent variance is
$\begin{split}\sigma_{v}^{2}(z,t)&=R_{v}(z,t,t)=\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}\phi_{j}(z)\phi_{i}(z)R_{Y_{j}Y_{i}}(t,t)\\\
&={2\pi
S_{0}}\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}c_{j}c_{i}\phi_{j}(z)\phi_{i}(z)\int_{0}^{t}h_{j}(t-\tau)h_{i}(t-\tau)d\tau\\\
\end{split}$ (80)
From Eq. (80) it is possible to obtain the stationary variance in Eq. (78) by
performing the following limit
$\sigma_{v}^{2}(z)=\lim_{t\to\infty}\sigma_{v}^{2}(z,t)$ (81)
while the stationary CF is given by
$R_{v}(z,\tau)=\lim_{t\to\infty}R_{v}(z,t,t+\tau)$ (82)
The latter is related to the PSD in Eq. (77) by the Fourier transform (Wiener-Khinchin theorem).
(a) Displacement variance at $z=L/2$
(b) Displacement variance at $z=L$
Figure 5: Exact stationary (dashed line) and non-stationary (continuous line)
displacement variance in contrast with those obtained by MC simulations
Figure 5 shows the stationary and non-stationary displacement variances for
different values of the nonlocal parameter $\lambda$. Dashed lines represent
the stationary variances, whereas continuous lines are the non-stationary ones
obtained with the aid of Eq. (80). Such exact variances are compared with the
numerical results obtained by means of the MC approach described in Section
4.1.1. Specifically, the numerical variances (dotted lines in Figure 5) are
obtained considering $N=4\times 10^{3}$ samples and by assuming $m=2\times
10^{3}$ and $\Delta\omega=\omega_{0,1}/50$ in Eq. (59). From Figure 5 we can observe that the steady state is reached, in all considered cases, in a few tens of microseconds. This is due to the fact that the stiffnesses involved in such micro-beams are high and the masses are small. Moreover, the nonlocal parameter influences the duration of the transient state: as $\lambda$ increases, the variance reaches its stationary value more quickly.
## 6 Concluding remarks
Random vibrations of damped nonlocal Bernoulli-Euler beams due to stochastic
excitation have been investigated in the present research. Two specific
effects have been accurately analyzed: size and damping phenomena,
respectively modeled by stress-driven nonlocal mechanics and external viscous
interactions. A stochastic input for the loading has been assumed to simulate
external actions randomness.
A stochastic differential problem in space and time, governing the motion of
nonlocal beams under stochastic loading, has been formulated.
Exact solutions of power spectral density, correlation function and
displacement variance have been evaluated by differential eigenanalysis.
From the analytical formulation of stationary and non-stationary responses and with the aid of numerical simulations, a significant reduction of the stationary variances and of the duration of the transient state, together with an increase of the natural frequencies, has been highlighted for increasing nonlocal scale parameter. The predicted smaller-is-stiffer phenomenon, confirmed recently in FusPisPol , agrees with most of the experimental outcomes associated with inflected small-scale beams AbazariSensors2015 .
In summary, the nonlocal approach developed to model damped small-scale beams is able to capture size effects, damping effects, and random excitations. The methodology provides analytical solutions in terms of statistics of the response and closed-form natural frequencies. The contributed results can be exploited for the structural design and optimization of smaller and smaller devices used in modern technological applications, such as sensors, actuators, MEMS/NEMS, and resonators.
## Acknowledgments
Financial supports from the MIUR in the framework of the Projects PRIN 2015
”COAN 5.50.16.01” (code 2015JW9NJT _Advanced mechanical modeling of new
materials and structures for the solution of 2020 Horizon challenges_) and
PRIN 2017 (code 2017J4EAYB _Multiscale Innovative Materials and Structures
(MIMS)_ ; University of Naples Federico II Research Unit) and from the
research program ReLUIS 2019 are gratefully acknowledged.
## Conflict of interest
The authors declare that they have no conflict of interest.
## References
* (1) Pourasghar A, Chen Z (2019) Effect of hyperbolic heat conduction on the linear and nonlinear vibration of CNT reinforced size-dependent functionally graded microbeams. International Journal of Engineering Science 137:57–72.
* (2) Xia X, Weng GJ, Hou D, Wen W (2019) Tailoring the frequency-dependent electrical conductivity and dielectric permittivity of CNT-polymer nanocomposites with nanosized particles. International Journal of Engineering Science 142:1–19.
* (3) Mojahedi M (2017) Size dependent dynamic behaviour of electrostatically actuated microbridges. International Journal of Engineering Science 111:74–85.
* (4) Moradweysi P, Ansari R, Hosseini K, Sadeghi F (2018) Application of modified Adomian decomposition method to pull-in instability of nano-switches using nonlocal Timoshenko beam theory. Applied Mathematical Modelling 54:594–604.
* (5) Hosseini SM (2018) Analytical solution for nonlocal coupled thermoelasticity analysis in a heat-affected MEMS/NEMS beam resonator based on Green-Naghdi theory. Applied Mathematical Modelling 57:21–36.
* (6) De Bellis ML, Bacigalupo A, Zavarise G (2019) Characterization of hybrid piezoelectric nanogenerators through asymptotic homogenization. Computer Methods in Applied Mechanics and Engineering 355:1148–1186.
* (7) Natsuki T, Urakami K (2019) Analysis of Vibration Frequency of Carbon Nanotubes used as Nano-Force Sensors Considering Clamped Boundary Condition. Electronics 8(10):1082.
* (8) Mohammadian M, Abolbashari MH, Hosseini S.M (2019) Application of hetero junction CNTs as mass nanosensor using nonlocal strain gradient theory: An analytical solution. Applied Mathematical Modelling 76:26–49.
* (9) Tran N, Ghayesh MH, Arjomandi M (2018) Ambient vibration energy harvesters: A review on nonlinear techniques for performance enhancement. International Journal of Engineering Science 127:162–185.
* (10) Basutkar R (2019) Analytical modelling of a nanoscale series-connected bimorph piezoelectric energy harvester incorporating the flexoelectric effect. International Journal of Engineering Science 139:42–61.
* (11) Ghayesh MH, Farokhi H (2020) Nonlinear broadband performance of energy harvesters. International Journal of Engineering Science 147:103202.
* (12) Farajpour A, Ghayesh MH, Farokhi H (2018) A review on the mechanics of nanostructures. International Journal of Engineering Science 133:231–263.
* (13) Bauer S, Pittrof A, Tsuchiya H, Schmuki P (2011) Size-effects in TiO(2) nanotubes: diameter dependent anatase/rutile stabilization. Electrochemistry Communications 13:538–541.
* (14) Kiang C, Endo M, Ajayan P, Dresselhaus G, Dresselhaus M (1998) Size effects in carbon nanotubes. Physical Review Letters 81:1869–1872.
* (15) Xiao S, Hou W (2006) Studies of size effects on carbon nanotubes’ mechanical properties by using different potential functions. Fullerenes Nanotubes and Carbon Nanostructures 14:9–16.
* (16) Zienert A, Schuster J, Streiter R, Gessner T (2010) Transport in carbon nanotubes: contact models and size effects. Physica Status Solidi B-basic Solid State Physics 247:3002–3005.
* (17) Chowdhury R, Adhikari S, Wang C, Scarpa F (2010) A molecular mechanics approach for the vibration of single-walled carbon nanotubes. Computational Materials Science 48:730–735.
* (18) Tang C, Meng L, Sun L, Zhang K, Zhong J (2008) Molecular dynamics study of ripples in graphene nanoribbons on 6H-SiC(0001): temperature and size effects. Journal of Applied Physics 104. Paper 113536.
* (19) Marotti de Sciarra F (2009) A nonlocal model with strain-based damage. International Journal of Solids and Structures 46(22-23):4107–4122.
* (20) Wang LF, Hu HY (2005) Flexural wave propagation in single-walled carbon nanotube. Physical Review B 71(19):195412–195418.
* (21) Lu P, Lee HP, Lu C, Zhang PQ (2007) Application of nonlocal beam models for carbon nanotubes. International Journal of Solids and Structures 44(16):5289–5300.
* (22) Alotta G, Failla G, Zingales M (2014) Finite element method for a nonlocal Timoshenko beam model. Finite Element in Analysis and Design 89:77–92.
* (23) Marotti de Sciarra F (2014) Finite element modelling of nonlocal beams. Physica E: Low-Dimensional Systems and Nanostructures 59:144–149.
* (24) Ansari R., Sahmani S, Arash B (2010) Nonlocal plate model for free vibrations of single-layered graphene sheets. Physics Letters A 375:53–62.
* (25) Murmu T, Adhikari S (2011) Nonlocal vibration of carbon nanotubes with attached buckyballs at tip. Mechanics Research Communications 38:62–67.
* (26) Ansari R, Sahmani S (2012) Small scale effect on vibrational response of single-walled carbon nanotubes with different boundary conditions based on nonlocal beam models. Communications in Nonlinear Science and Numerical Simulation 17:1965–1979.
* (27) Lakes RS (1991) Experimental micro mechanics methods for conventional and negative Poissons ratio cellular solids as Cosserat continua. Journal of Engineering Materials and Technology 113(1):148–155.
* (28) Arash B, Wang Q (2012) A review on the application of nonlocal elastic models in modeling of carbon nanotubes and graphenes. Computational Materials Science 51(1):303–313.
* (29) Rogula D (1965) Influence of spatial acoustic dispersion on dynamical properties of dislocations. Bull Acad Pol Sci Ser Sci Tech 13:337-343.
* (30) Rogula D (1982) Introduction to nonlocal theory of material media. In: Nonlocal theory of material media. CISM courses and lectures, Rogula D, ed., Springer, Wien, 268:125-222.
* (31) Eringen AC (1972) Linear theory of nonlocal elasticity and dispersion of plane waves. International Journal of Engineering Science 10:425–435.
* (32) Eringen AC (1983) On differential equations of nonlocal elasticity and solutions of screw dislocation and surface waves. Journal of Applied Physics 54:4703.
* (33) Tricomi FG (1957) Integral Equations. Interscience, New-York, USA. Reprinted by Dover Books on Mathematics, 1985.
* (34) Polyanin AD, Manzhirov AV (2008) Handbook of integral equations. 2nd ed. Boca Raton, FL: Chapman & Hall/CRC.
* (35) Romano G, Barretta R, Diaco M, Marotti de Sciarra F (2017) Constitutive boundary conditions and paradoxes in nonlocal elastic nano-beams. Int J Mech Sci 121:151–156.
* (36) Challamel N, Wang CM (2008) The small length scale effect for a non-local cantilever beam: a paradox solved. Nanotechnology 19:345703.
* (37) Fernández-Sáez J, Zaera R, Loya JA, Reddy JN (2016) Bending of Euler-Bernoulli beams using Eringen’s integral formulation: A paradox resolved. International Journal of Engineering Science 99:107-116.
* (38) Borino G, Failla B, Parrinello F (2003) A symmetric nonlocal damage theory. International Journal of Solids and Structures 40:3621–3645.
* (39) Khodabakhshi P, Reddy JN (2015) A unified integro-differential nonlocal model. International Journal of Engineering Science 95:60–75.
* (40) Lam DCC, Yang F, Chong ACM, Wang J, Tong P (2003) Experiments and theory in strain gradient elasticity. Journal of Mechanics and Physics of Solids 51(8):1477–1508.
* (41) Challamel N, Reddy JN, Wang CM (2016) Eringen’s stress gradient model for bending of nonlocal beams. Journal of Engineering Mechanics 142(12):04016095.
* (42) Numanoǧlu HM, Akgöz, B., Civalek Ö (2018) On dynamic analysis of nanorods. International Journal of Engineering Science 130:33-50.
* (43) Cornacchia F, Fantuzzi N, Luciano R, Penna R (2019) Solution for cross- and angle-ply laminated Kirchhoff nano plates in bending using strain gradient theory. Composites Part B: Engineering 173:107006.
* (44) Cornacchia F, Fabbrocino F, Fantuzzi N, Luciano R, Penna R (2019). Analytical solution of cross- and angle-ply nano plates with strain gradient theory for linear vibrations and buckling, Mechanics of Advanced Materials and Structures, DOI: 10.1080/15376494.2019.1655613
* (45) Lim CW, Zhang G, Reddy JN (2015) A higher-order nonlocal elasticity and strain gradient theory and its applications in wave propagation. Journal of the Mechanics and Physics of Solids 78:298-313.
* (46) Barretta R, Marotti de Sciarra F (2018) Constitutive boundary conditions for nonlocal strain gradient elastic nano-beams. International Journal of Engineering Science 130:187-198.
* (47) Apuzzo A, Barretta R, Faghidian SA, Luciano R, Marotti de Sciarra F (2018) Free vibrations of elastic beams by modified nonlocal strain gradient theory. International Journal of Engineering Science 133:99-108.
* (48) Polizzotto C, Fuschi P, Pisano AA (2004) A strain-difference-based nonlocal elasticity model. International Journal of Solids and Structures 41:2383–2401.
* (49) Fuschi P, Pisano AA, Polizzotto C (2019) Size effects of small-scale beams in bending addressed with a strain-difference based nonlocal elasticity theory. International Journal of Mechanical Sciences 151:661-671.
* (50) Marotti de Sciarra F (2009) On non-local and non-homogeneous elastic continua. International Journal of Solids and Structures 46(3-4):651–676.
* (51) Di Paola M, Zingales M (2008) Long-range cohesive interactions of non-local continuum faced by fractional calculus. International Journal of Solids and Structures 45:5642–5659.
* (52) Di Paola M, Pirrotta A, Zingales M (2010) Mechanically-based approach to non-local elasticity: Variational principles. International Journal of Solids and Structures 47(5):539–548.
* (53) Di Paola M, Failla G, Zingales M (2013) Non-local stiffness and damping models for shear-deformable beams. European Journal of Mechanics A/Solids 40:69–83.
* (54) Failla G, Sofi A, Zingales M (2015) A new displacement-based framework for non-local Timoshenko beams. Meccanica 50(8):2103–2122.
* (55) Romano G, Barretta R (2017) Nonlocal elasticity in nanobeams: the stress-driven integral model. International Journal of Engineering Science 115:14–27.
* (56) Barretta R, Marotti de Sciarra F, Vaccaro MS (2019) On nonlocal mechanics of curved elastic beams. International Journal of Engineering Science 144:103–140.
* (57) Romano G, Barretta R (2017) Stress-driven versus strain-driven nonlocal integral model for elastic nano-beams. Composites Part B: Engineering 114:184–188.
* (58) Barretta R, Čanadija M, Feo L, Luciano R, Marotti de Sciarra F, R. Penna (2018) Exact solutions of inflected functionally graded nano-beams in integral elasticity, Composites Part B 142:273–286.
* (59) Apuzzo A, Barretta R, Luciano R, Marotti de Sciarra F, Penna R (2017) Free vibrations of Bernoulli-Euler nano-beams by the stress-driven nonlocal integral model. Composites Part B: Engineering 123:105–111.
* (60) Lee J, Lin C (2010) The magnetic viscous damping effect on the natural frequency of a beam plate subject to an in-plane magnetic field. Journal of Applied Mechanics 77. Paper 011014.
* (61) Chen C, Ma M, Liu J, Zheng Q, Xu Z (2011) Viscous damping of nanobeam resonators: humidity, thermal noise, and a paddling effect. Journal of Applied Physics 110. Paper 034320.
* (62) Di Paola M, Fiore V, Pinnola FP, Valenza A (2014) On the influence of the initial ramp for a correct definition of the parameters of the fractional viscoelastic material. Mechanics of Materials 69:63–70.
* (63) Di Mino G, Airey G, Di Paola M, Pinnola FP, D’Angelo G, Lo Presti D (2016) Linear and nonlinear fractional hereditary constitutive laws of asphalt mixtures. Journal of Civil Engineering and Management 22(7):882–889.
* (64) Calleja M, Kosaka P, San Paulo A, Tamayo J (2012) Challenges for nanomechanical sensors in biological detection. Nanoscale 4:4925–4938.
* (65) T. Baidyk et al. (2005) MEMS/NEMS. Handbook techniques and applications. Edited by CT Leondes, University of California, Los Angeles, USA.
* (66) Verma VK, Yadava RDS (2016) Stochastic resonance in MEMS capacitive sensors. Sensors and Actuators B: Chemical 235:583–602.
* (67) Roberts JB, Spanos PD (1999) Random vibrations and statistical linearization. Dover Publication, Inc., New-York, USA.
* (68) Crandall SH,Mark WD (1963) Random Vibration in Mechanical Systems. Academic Press, Inc., New-York, USA.
* (69) Di Paola M, Pirrotta A (1999) Non-linear systems under impulsive parametric input. International Journal of Non-Linear Mechanics 34(5):843–851.
* (70) Pirrotta A (2005) Non-linear systems under parametric white noise input: Digital simulation and response. International Journal of Non-Linear Mechanics 40(8):1088–1101.
* (71) Alotta G, Di Paola M, Pinnola FP (2017) Cross-correlation and cross-power spectral density representation by complex spectral moments. International Journal of Non-Linear Mechanics 94:20–27.
* (72) Lei Y, Murmu T, Adhikari S, Friswell MI (2013) Dynamic characteristics of damped viscoelastic nonlocal Euler–Bernoulli beams. European Journal of Mechanics A/Solids 42:125–136.
* (73) Lei Y, Adhikari S, Friswell MI (2013) Vibration of nonlocal Kelvin–Voigt viscoelastic damped Timoshenko beams. International Journal of Engineering Science 66–67:1–13.
* (74) Alotta G, Failla G, Pinnola FP (2017) Stochastic Analysis of a Nonlocal Fractional Viscoelastic Bar Forced by Gaussian White Noise. ASCE-ASME J. of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering 3(3):030904-030904-7.
* (75) Alotta G, Di Paola M, Failla G, Pinnola FP (2018) On the dynamics of non-local fractional viscoelastic beams under stochastic agencies. Composites Part B: Engineering 137:102–110.
* (76) Pirrotta A, Cutrona S., Di Lorenzo S, Di Matteo A (2015) Fractional visco-elastic Timoshenko beam deflection via single equation. International Journal for Numerical Methods in Engineering 104:869–886.
* (77) Alotta G, Failla G, Zingales M (2017) Finite element formulation of a non-local hereditary fractional order Timoshenko beam. Journal of Engineering Mechanics - ASCE 143(5): 1943–7889.0001035.
* (78) L. Meirovitch (2001) Fundamentals of Vibrations. McGraw-Hill International Edition.
* (79) Di Lorenzo S, Di Paola M, Pinnola FP, Pirrotta A (2014) Stochastic response of fractionally damped beams, Probabilistic Engineering Mechanics 35:37–43.
* (80) Shinozuka M, Deodatis G (1988) Stochastic process models for earthquake ground motion. Probabilistic Engineering Mechanics 3(3):114–123.
* (81) Ashby M, Shercliff H, Cebon D (2007) Materials engineering, science, processing and design. Edited by Elsevier, Burlington, USA.
* (82) Ashby M (1999) Materials selection in mechanical design. Edited by Butterworth-Heinemann, Woburn, USA.
* (83) Abazari AM, Safavi SM, Rezazadeh G, Villanueva LG (2015) Modelling the Size Effects on the Mechanical Properties of Micro/Nano Structures. Sensors 15:28543–28562.
# Stressor Type Matters! — Exploring Factors Influencing Cross-Dataset
Generalizability of Physiological Stress Detection
Pooja Prajod, University of Augsburg, Augsburg, Germany,<EMAIL_ADDRESS>(ORCID 0000-0002-3168-3508), Bhargavi Mahesh, University of Augsburg, Augsburg, Germany,<EMAIL_ADDRESS>and Elisabeth André, University of Augsburg, Augsburg, Germany,<EMAIL_ADDRESS>
###### Abstract.
Automatic stress detection using heart rate variability (HRV) features has
gained significant traction as it utilizes unobtrusive wearable sensors
measuring signals like electrocardiogram (ECG) or blood volume pulse (BVP).
However, detecting stress through such physiological signals presents a
considerable challenge owing to the variations in recorded signals influenced
by factors, such as perceived stress intensity and measurement devices.
Consequently, stress detection models developed on one dataset may perform
poorly on unseen data collected under different conditions. To address this
challenge, this study explores the generalizability of machine learning models
trained on HRV features for binary stress detection. Our goal extends beyond
evaluating generalization performance; we aim to identify the characteristics
of datasets that have the most significant influence on generalizability. We
leverage four publicly available stress datasets (WESAD, SWELL-KW,
ForDigitStress, VerBIO) that vary in at least one of the characteristics such
as stress elicitation techniques, stress intensity, and sensor devices.
Employing a cross-dataset evaluation approach, we explore which of these
characteristics strongly influence model generalizability. Our findings reveal
a crucial factor affecting model generalizability: stressor type. Models achieved good performance across datasets when the type of stressor (e.g., social stress in our case) remained consistent. Factors like stress intensity or the brand of the measurement device had minimal impact on cross-dataset
performance. Based on our findings, we recommend matching the stressor type
when deploying HRV-based stress models in new environments. To the best of our
knowledge, this is the first study to systematically investigate factors
influencing the cross-dataset applicability of HRV-based stress models.
Stress, Generalizability, Cross-dataset, Machine learning, Heart rate
variability, Electrocardiography, Photoplethysmography
CCS Concepts: Computing methodologies → Cross-validation; Human-centered computing → Empirical studies in ubiquitous and mobile computing; Computing methodologies → Machine learning
## 1\. Introduction
The ability to detect stress in real-time has become increasingly important
within the fields of affective computing and human-machine interaction (HMI)
(Alberdi et al., 2016). Early stress detection offers a valuable tool for
promoting well-being and potentially preventing long-term health consequences
(Greene et al., 2016; Akmandor and Jha, 2017; Alberdi et al., 2016).
Consequently, a growing area of research focuses on developing interactive
systems that can not only detect stress but also provide personalized
interventions to manage it effectively (Yu et al., 2018; Balcombe and De Leo,
2022).
Researchers have explored various modalities for stress detection, including
psychological tools, behavioral patterns, and physiological signals
(Giannakakis et al., 2019; Alberdi et al., 2016). However, physiological
signals are considered more reliable than the other methods as they eliminate
certain measurement biases. Moreover, the growing popularity of unobtrusive
wearable sensors further facilitates continuous stress monitoring through
physiological signals.
Stress manifests differently depending on the context (Alberdi et al., 2016).
For instance, the stress experienced during an exam likely differs from that
of public speaking. While controlled lab settings are often used to develop
stress detection models, real-world HMI scenarios are far more diverse.
Therefore, assessing the generalizability of these models – their ability to
perform well in unseen contexts – is crucial for real-world applications.
Cross-dataset evaluation, where a model trained on one dataset is tested on
another, is a common approach to assess generalizability. Good cross-dataset
performance suggests broader applicability of the model.
The existing literature on stress detection models acknowledges the importance
of generalizability, with a few studies exploring cross-dataset evaluations
(Vos et al., 2023b). These studies typically investigate the applicability of
models across different datasets and sometimes explore combining datasets for
improved performance (refer to Section 2 for details). While prior works often
report limited cross-dataset performance, a key gap exists: a lack of research
systematically investigating the factors influencing this limited
generalizability.
We address this research gap by conducting extensive cross-dataset evaluations
using multiple stress datasets. We train three popular machine learning models
- random forest classifier (RFC), support vector machine (SVM), multi-layer
perceptron (MLP) - using heart rate variability (HRV) features. We aim to
identify the characteristics of these datasets that significantly impact model
generalizability. Our findings are crucial for developing stress detection
models with broader applicability in HMI systems.
## 2\. Related Work
Publicly available datasets such as WESAD (Schmidt et al., 2018) and SWELL-KW (Koldijk et al., 2014) have led to the development of numerous stress detection models (Can et al., 2019; Haque et al., 2024). Some of these studies
(e.g., (Bobade and Vani, 2020)) focused on comparing multiple models to
determine the best model for stress detection. However, many of these
comparison studies are trained and evaluated on the same datasets. In this
section, we discuss some of the existing works that performed cross-dataset
evaluations on stress detection models.
Mishra et al. (Mishra et al., 2020) conducted cross-dataset evaluations across
four cognitive stress datasets - two had electrocardiogram (ECG) signals, and
the other two had blood volume pulse (BVP) signals. In addition to arithmetic
tasks, the ECG datasets had a startle response test and cold pressor task as
stressors. The authors trained SVM models using HRV features extracted from
the mental stress segments of these datasets. While the ECG-based HRV models
performed well in detecting stress in each other’s arithmetic tasks, they had
a performance drop of 15 - 30% in predicting stress in the BVP datasets. The
authors attributed this drop in performance, despite having the same stressor,
to the difference in sensors. They also noticed an approximately 20 - 40% drop
in the performance when detecting overall stress (including startle and cold
pressor segments), even within the same datasets. Their findings suggest that
the models trained on one type of stressor may not be efficient in detecting
other stress responses.
Prajod and André (Prajod and André, 2022) trained ECG-based deep-learning
models and HRV-based shallow models (RFC, SVM, MLP) on WESAD and SWELL-KW
datasets. While deep learning models outperformed other methods in within-
dataset evaluations, they performed poorly in cross-dataset evaluations (WESAD
models tested on SWELL-KW and vice versa). Although the HRV-based models
performed better than deep learning models in cross-dataset evaluations, their
performances were still low. Moreover, combining the datasets did not improve the models' performance on the individual datasets. Similarly, Albaladejo-González et al. (Albaladejo-González et al., 2023) trained HRV-based models on the WESAD and SWELL-KW datasets. They also observed poor cross-dataset performances and no improvement in stress detection from combining datasets.
Benchekroun et al. (Benchekroun et al., 2023) trained two HRV-based models
(RFC, logistic regression) on the MMSD (Benchekroun et al., 2022) and UWS
(Velmovitsky et al., 2021) datasets. They tested the MMSD models using the UWS
data and found that the f1-scores were 12 – 14% lower than the UWS models
(from within-dataset evaluations). They further noted that the f1-score for
the stress class was very low (less than 50%), meaning the models were not very efficient in detecting stressful instances.
Vos et al. (Vos et al., 2023a) trained shallow models (RFC, SVM, XGBoost)
using heart rate and electro-dermal activity (EDA) features from the SWELL-KW
dataset and evaluated them on WESAD and NEURO (Birjandtalab et al., 2016)
datasets. All three models showed poor cross-dataset performances. They also
implemented an ensemble model and repeated the cross-dataset evaluation.
Although this ensemble model yielded a slight improvement, the performance was
still poor (f1-score $<$ 50%). Furthermore, they trained the ensemble model
using a combined dataset (SWELL-KW, NEURO, UBFC-Phys (Sabour et al., 2021))
and evaluated it on WESAD. While the accuracy increased slightly, the f1-score
dropped further. Their experiments highlight the challenges of developing a
generic stress model.
The above works primarily focused on assessing the generalizability of HRV-
based models. However, Liapis et al. (Liapis et al., 2021) demonstrated that
EDA-based shallow models also struggle with generalizability. They trained
their models on the WESAD dataset and then evaluated them on their own
dataset. Notably, their dataset contained subtle stress instances, unlike
WESAD, implying that the stress intensity might further impact
generalizability.
Baird et al. (Baird et al., 2021) compared models trained on speech features
from three social stress datasets (FAU-TSST, Ulm-TSST and Reg-TSST). These
datasets induced stress following the TSST technique. They predicted cortisol
levels as a proxy for stress levels. In cross-dataset evaluations, the trends
of predicted cortisol levels were aligned for the models, indicating
compatibility between datasets. Due to the dataset compatibility, they
suggested that training models using data from both these datasets could
result in better-performing models.
Table 1 presents an overview of the existing studies on cross-dataset
generalizability of stress detection models. Most studies assess the
generalizability of a model to determine if it is applicable in other stress
scenarios. Some studies take a step further to evaluate whether combining
datasets to train models improves stress detection performances. Although most
studies observe low generalizability of stress models, few studies provide
insights into plausible factors influencing the models’ performance. For
example, Mishra et al. highlighted the poor performance of mental stress
models in detecting physical stress. Similarly, Liapis et al. noted the
difference in stress intensities of the evaluated datasets. Prajod and André
hinted at multiple factors such as stress intensity and measurement devices
that could affect generalizability. However, these studies did not further
investigate these factors. Hence, a crucial question remains relatively unexplored: what factors or characteristics of the stress datasets need to match for cross-dataset applicability of models?
Table 1. An overview of the existing works that perform cross-dataset evaluations of their stress models Paper | Input | Datasets | Aim
---|---|---|---
Mishra et al. (Mishra et al., 2020) | HRV | Own datasets (mainly mental arithmetic tasks) | Assess generalizability
Liapis et al. (Liapis et al., 2021) | EDA | WESAD, Own UX stress dataset | Assess generalizability
Baird et al. (Baird et al., 2021) | Speech | Own TSST datasets | Assess compatibility for combining datasets
Prajod and André (Prajod and André, 2022) | Raw ECG, HRV | WESAD, SWELL-KW | Assess generalizability, Combine datasets
Albaladejo-González et al. (Albaladejo-González et al., 2023) | HRV | WESAD, SWELL-KW | Assess generalizability, Combine datasets
Benchekroun et al. (Benchekroun et al., 2023) | HRV | MMSD, UWS | Assess generalizability
Vos et al. (Vos et al., 2023a) | HR, EDA | WESAD, SWELL-KW, NEURO, UBFC-Phys | Assess generalizability, Combine datasets
This work | HRV | WESAD, SWELL-KW, ForDigitStress, VerBIO | Assess generalizability, Combine datasets, Identify factors influencing generalizability
## 3\. Materials and Methods
### 3.1. Datasets
In this study, we focus on binary stress detection (stress vs. no-stress)
using HRV features. We leverage four publicly available stress datasets: WESAD
(Schmidt et al., 2018), SWELL-KW (Koldijk et al., 2014), ForDigitStress
(Heimerl et al., 2023), and VerBIO (Yadav et al., 2020). While SWELL-KW
contains ECG signals that can be used for extracting HRV, the ForDigitStress
dataset has BVP signals for extracting HRV. The WESAD and VerBIO datasets
contain both ECG and BVP signals. However, we consider only the BVP signals
from the VerBIO dataset to remove some redundant comparisons in our analysis.
A brief overview of the four datasets is presented in Table 2.
#### 3.1.1. WESAD
The WESAD dataset is a multimodal stress and affect dataset containing various
physiological signals, including ECG, EDA, and BVP. The data from 15
participants were collected using a chest-worn RespiBan and a wrist-worn
Empatica E4 device. This investigation utilizes the ECG data recorded by the
chest-worn device at 700 Hz.
The participants were subjected to three conditions: neutral, amusement, and stress. In the stress condition, the participants experienced social stress
induced by the TSST technique. The participants engaged in public speaking and
mental arithmetic tasks while being evaluated by a three-member panel. To
induce amusement, the participants watched selected funny video clips. The
experimental sessions began with the neutral condition, followed by the stress
and amusement conditions in alternating order. For each participant, the
neutral condition lasted for approximately 20 minutes, the stress condition
for 10 minutes, and the amusement condition for around 6.5 minutes.
We focus on stress detection, i.e., distinguishing between stress and no-
stress samples. Following the labeling scheme proposed by the dataset
creators, data from both neutral and amusement conditions were considered as
no-stress samples.
#### 3.1.2. SWELL-KW
The SWELL-KW dataset is also a multimodal stress dataset that contains two
physiological signals, ECG and EDA. This dataset consists of data from 25
participants who engaged in typical knowledge tasks like writing reports and
presentations. The ECG data was collected using the TMSI Mobi device at a
sampling rate of 2048 Hz.
The participants underwent three experimental conditions: neutral, email
interruptions, and time pressure. During the email interruption session,
participants received eight emails, many irrelevant and some requiring
responses. In the time pressure condition, participants had to complete the
tasks within two-thirds of the allotted neutral session time. Like the WESAD
dataset, the first session was always neutral, followed by the other two
conditions in alternating order. The neutral and email interruption sessions
lasted approximately 45 minutes, while the time pressure session was around 30
minutes long.
Notably, the participants did not report experiencing high stress in any of
the three conditions. However, they indicated a higher temporal demand during
the time pressure session. While training stress detection models, the dataset
creators considered the data from email interruptions and time pressure
sessions as stress samples and the neutral session as no-stress samples
(Koldijk et al., 2016). Therefore, we follow the same labeling scheme for
consistency. However, three participants were excluded due to missing data.
#### 3.1.3. ForDigitStress
The ForDigitStress dataset represents another multimodal stress dataset with
various behavioral (facial expression, body pose, etc.) and physiological
(BVP, EDA) signals. The dataset was collected from 40 participants who
attended a mock job interview session. The BVP data was collected using the IOM biofeedback device, operating at 27 Hz.
The experimental session was divided into three phases: preparation,
interview, and post-interview. The participants were asked to submit their
resumes in advance so that the interviewer could customize the questions
depending on the participant. During the interview phase, the interviewer
questioned the participants on topics such as their strengths/weaknesses,
salary expectations, and hypothetical job-related scenarios. The interview
phase lasted for about 25 mins. In addition to self-reported stress levels,
the participants’ saliva samples were collected for assessing the cortisol
levels. The cortisol levels served as ground truths for the presence of
stress.
Although both preparation and post-interview phases can be considered as no-
stress conditions, the authors suggest using data from later parts of the
post-interview phase based on the cortisol levels. So, we utilize the interview phase as stress samples and the last segments of the post-interview phase (15 - 20 mins) as no-stress samples.
#### 3.1.4. VerBIO
The VerBIO dataset was collected to investigate whether exposing participants to public speaking through virtual reality (VR) training would reduce public speaking anxiety. The dataset contains data collected from 55 participants who were recruited for two real and eight virtual oral presentations over two days. The physiological data from two wearable sensors, Empatica E4 (BVP, EDA, skin temperature) and Actiwave Cardio Monitor (ECG), were acquired. Several self-assessment questionnaires were employed to capture demographics and state- and trait-based psychological measures.
The experiment was divided into three sessions: Pre, Test, and Post. The Pre and Post sessions involved presenting in front of a real audience, whereas the Test session involved a VR audience. Each session further comprised relaxation, preparation, and presentation phases. The presentation topics were of general interest, and the presentations were about four minutes long.
We note that participation dropped across the three sessions, with the maximum number of participants in the Pre session and the lowest in the Post session. To avoid drastic imbalances in data between participants, we utilized only the data from the Pre session. We labeled the data belonging to the relaxation phases as no-stress and those of the presentation phase as stressful. In our analysis, we normalize the data for each participant using a few minutes of no-stress data (see Section 3.3). Hence, 10 participants with a low amount of no-stress data were excluded.
Table 2. An overview of some key characteristics of the four stress datasets | WESAD | SWELL-KW | ForDigitStress | VerBIO
---|---|---|---|---
Stressor | TSST | Interruptions, Time pressure | Job interview | Public speaking
Stressor type | Social | Cognitive | Social | Social
Sensor | ECG: RespiBan, 700 Hz, BVP: Empatica E4, 64 Hz | ECG: TMSI Mobi, 2048 Hz | BVP: IOM, 27 Hz | BVP: Empatica E4, 64 Hz
Avg. stress level | 18.5/24 (STAI) | 3.5/10 (Likert scale) | 5.4/10 (Likert scale), 6.5/10 (Cortisol) |
Participants | 15 | 22 | 40 | 45
Data duration (per participant) | stress: 10 mins, no-stress: 26.5 mins | stress: 75 mins, no-stress: 45 mins | stress: 25 mins, no-stress: 15 - 20 mins | stress: 2 - 5 mins, no-stress: 4 - 6 mins
### 3.2. Approach
Our investigations involve the three assessments listed below. The idea is to
run a series of these assessments using the datasets described in Section 3.1
to investigate the dataset characteristics that considerably influence model
generalizability.
(1) Within-dataset assessment: involves training and evaluating models on the same dataset. We utilize the leave-one-subject-out (LOSO) technique to evaluate the models' performance on unseen participants of the same dataset.
(2) Cross-dataset assessment: involves evaluating the models trained on one dataset using another dataset. This evaluation assesses to what extent these models can detect stress in new participants in different settings.
(3) Combining datasets: involves training new models on a combined dataset consisting of data from two or more stress datasets, again using the LOSO technique. This step investigates potential improvements in the models' performances due to an increase in the number and variation of the training data. A code sketch of all three assessments is given after this list.
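The sketch below illustrates the three assessments with scikit-learn, using the RFC as an example; the variable names (feature matrices, labels, participant ids) are placeholders rather than artifacts of any particular dataset.

```python
# Minimal sketch of the three assessments. X_a/X_b, y_a/y_b, g_a/g_b stand in
# for any two of the HRV datasets; all names are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

def loso_scores(X, y, groups):
    """(1) Within-dataset assessment: leave-one-subject-out."""
    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
        clf = RandomForestClassifier(n_estimators=200, max_depth=5,
                                     class_weight="balanced", random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        scores.append(f1_score(y[test_idx], clf.predict(X[test_idx])))
    return np.mean(scores)

def cross_dataset_score(X_train, y_train, X_test, y_test):
    """(2) Cross-dataset assessment: train on one dataset, test on another."""
    clf = RandomForestClassifier(n_estimators=200, max_depth=5,
                                 class_weight="balanced", random_state=0)
    clf.fit(X_train, y_train)
    return f1_score(y_test, clf.predict(X_test))

# (3) Combining datasets: concatenate two datasets and rerun LOSO, keeping
# participant ids disjoint across datasets, e.g.:
# X_ab = np.vstack([X_a, X_b]); y_ab = np.concatenate([y_a, y_b])
# g_ab = np.concatenate([g_a, g_b + 1000])
# loso_scores(X_ab, y_ab, g_ab)
```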
While ECG-based and BVP-based HRVs reflect similar physiological information
related to heart activity, the models trained on these features might not
perform equally well (Gupta et al., 2023). This discrepancy can be attributed
to the BVP signals being more prone to noise from body movements than ECG
(Martinho et al., 2018). This noise can affect the signal quality, which can,
in turn, impact the stress detection performance. To avoid such influences in the cross-dataset assessment, the above assessments are carried out separately for ECG-based and BVP-based HRV models.
### 3.3. Data Processing
#### 3.3.1. ECG signals
Figure 1. An example plot of ECG signal, marked with various repeated
components of the signal.
The ECG signals are typically sampled at high frequencies and contain noises
such as baseline wander (low-frequency, 0.5 - 0.6 Hz) and powerline
interference (50 or 60 Hz). Figure 1 illustrates two beats from an ECG signal.
Computing HRV relies on detecting the R peaks in the QRS complex of the ECG
signal. We applied a second-order Butterworth band-pass filter with a
frequency band of 8 - 20 Hz for optimal QRS signal-to-noise ratio (Elgendi et
al., 2010).
For R-peak detection, we utilized the algorithm proposed by (Elgendi et al., 2010). This algorithm is based on two key assumptions for healthy adults:
(a) A QRS complex contains one and only one heartbeat
(b) The duration of a typical QRS complex is in the range of 80 - 120 milliseconds
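A minimal preprocessing sketch along these lines is shown below, using a generic scipy peak picker as a stand-in for the Elgendi et al. detector; the percentile-based height threshold and the 0.3 s refractory period are our own illustrative assumptions, not part of the original algorithm.

```python
# Minimal ECG preprocessing sketch: second-order Butterworth band-pass
# (8-20 Hz) for an optimal QRS signal-to-noise ratio, then peak picking.
# scipy.signal.find_peaks stands in for the Elgendi et al. (2010) detector.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs=700):
    # Band-pass filter as described in the text
    b, a = butter(N=2, Wn=[8, 20], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, ecg)
    # Assumed refractory period of ~0.3 s (max ~200 bpm) and a crude
    # 90th-percentile amplitude threshold (both illustrative choices)
    peaks, _ = find_peaks(filtered, distance=int(0.3 * fs),
                          height=np.percentile(filtered, 90))
    return peaks  # sample indices of R peaks

# Example: r_peaks = detect_r_peaks(ecg_signal, fs=700)  # WESAD chest ECG
```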
#### 3.3.2. BVP signals
Figure 2. An example plot of BVP signal, marked with various repeated
components of the signal.
Like ECG, the BVP signal is susceptible to baseline wander and high-frequency
noise. Hence, a band-pass filter (0.5 - 8 Hz) was applied to reduce these
noises (Elgendi et al., 2013). Figure 2 illustrates two beats from a BVP
signal. To derive the HRV signal from the BVP, the systolic peaks had to be
detected. For this purpose, we employed the peak-finding algorithm from
(Heimerl et al., 2023) to detect points that meet the following criteria:
(a) Amplitude threshold: The peaks had to be taller than a certain threshold. This threshold was set based on the distribution of peak heights in the entire signal.
(b) Distance between peaks: To avoid identifying every fluctuation as a peak, consecutive peaks had to be separated by a minimum interval of 0.333 seconds. This value corresponds to a maximum heart rate of 3 beats per second (180 beats per minute).
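The sketch below mirrors these two criteria; the 25th-percentile amplitude threshold is an illustrative assumption standing in for the exact rule of (Heimerl et al., 2023).

```python
# Minimal BVP systolic-peak sketch: 0.5-8 Hz band-pass, then the two criteria
# above (amplitude threshold from the peak-height distribution, 0.333 s
# minimum inter-peak distance, i.e. max 180 bpm).
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_systolic_peaks(bvp, fs=64):
    b, a = butter(N=2, Wn=[0.5, 8.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, bvp)
    # First pass: all local maxima separated by at least 0.333 s
    candidates, props = find_peaks(filtered, distance=int(0.333 * fs),
                                   height=filtered.min())
    # Amplitude criterion from the distribution of candidate peak heights
    # (25th percentile is an assumed, illustrative threshold)
    threshold = np.percentile(props["peak_heights"], 25)
    return candidates[props["peak_heights"] >= threshold]
```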
#### 3.3.3. HRV Features
Once the R-peaks (for ECG) or systolic peaks (for BVP) were identified, the
time intervals between successive peaks were calculated to form the HRV
signals. The features were calculated using 60-second segments with 59 seconds
of overlap between consecutive segments.
A total of 22 well-known features (Schmidt et al., 2018; Pham et al., 2021; Heimerl et al., 2023; Prajod et al., [n. d.]) were computed from the extracted HRV signals. These features belonged to the time domain (13 features), the frequency domain (5 features), and Poincaré plot characteristics (4 features). The features were calculated using the NeuroKit2 Python library (Makowski et al., 2021).
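A windowing sketch for this step is given below; nk.hrv() is assumed to accept raw peak indices (as documented for NeuroKit2), and the selection of the exact 22 columns used in the paper is omitted here.

```python
# Minimal sliding-window HRV extraction: 60 s windows, 59 s overlap (1 s hop),
# features computed with NeuroKit2 from detected peak indices.
import numpy as np
import pandas as pd
import neurokit2 as nk

def windowed_hrv(peak_indices, fs, win_s=60, hop_s=1):
    peaks = np.asarray(peak_indices)
    end = int(peaks.max())
    rows = []
    for start in range(0, end - int(win_s * fs), int(hop_s * fs)):
        stop = start + int(win_s * fs)
        in_win = peaks[(peaks >= start) & (peaks < stop)]
        if in_win.size < 4:
            continue  # too few beats; note that some frequency-domain
                      # features may still be NaN for short windows
        feats = nk.hrv(in_win - start, sampling_rate=fs, show=False)
        rows.append(feats)
    return pd.concat(rows, ignore_index=True)

# Example: hrv_df = windowed_hrv(r_peaks, fs=700)
```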
The sensors used in the different datasets differ, potentially resulting in
values recorded on different scales. In addition, the range of physiological
recordings may vary from participant to participant (Braithwaite et al., 2013;
Nkurikiyeyezu et al., 2019; Sarkar and Etemad, 2020). To mitigate the effect
of these differences, participant-specific Min-Max normalization was applied
to each HRV feature. For real-time stress detection, the entire dataset would
not be available for normalization. Similar to (Luong et al., 2020; Prajod and
André, 2022), we used 5 minutes of neutral data to compute normalization
parameters (minimum and maximum values) for each participant. For the VerBIO dataset, we used all the available neutral data for normalization because many of the participants had less than 5 minutes of neutral data.
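A minimal sketch of this baseline normalization follows; variable names are illustrative.

```python
# Participant-specific Min-Max normalization: min and max of each HRV feature
# are computed on the neutral (baseline) segment only, then applied to all of
# that participant's samples.
import numpy as np

def baseline_minmax(features, baseline, eps=1e-8):
    """features, baseline: 2D arrays of shape (samples, HRV features)."""
    lo = baseline.min(axis=0)
    hi = baseline.max(axis=0)
    return (features - lo) / (hi - lo + eps)

# Example per participant: ~5 min of neutral data as baseline
# X_norm = baseline_minmax(X_participant, X_participant_neutral)
```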
### 3.4. Machine Learning Models
We trained the following three machine learning models using the extracted HRV
features, following the LOSO procedure. To account for the imbalanced sample
distribution of the datasets, the “class_weight” hyperparameter for all models
was set inversely proportional to the sample frequencies.
#### 3.4.1. Random Forest Classifier (RFC)
This is an ensemble learning method that combines predictions from multiple
decision trees for improved performance and reduced overfitting. Each tree is
trained on a subset of the available training set. The final prediction is
determined by aggregating the predictions from all the trees (e.g., majority
vote). This strategy often results in a better performance, even if the
individual decision trees are weak predictors. A total of 200 decision trees
(also called estimators) were trained, with a maximum depth of 5 for each
tree.
#### 3.4.2. Support Vector Machine (SVM)
This is a commonly used supervised learning method for binary classification
tasks. During the training process of this model, the objective is to find a
hyperplane within the feature space that separates the data points belonging
to different classes. We utilized a linear SVM classifier.
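The two shallow models can be instantiated in scikit-learn as sketched below, with class weights set inversely proportional to the class frequencies as stated above; hyperparameters not mentioned in the paper are left at their library defaults.

```python
# Minimal sklearn sketch of the RFC (200 estimators, max depth 5) and the
# linear SVM, both with balanced class weights.
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rfc = RandomForestClassifier(n_estimators=200, max_depth=5,
                             class_weight="balanced", random_state=0)
svm = SVC(kernel="linear", class_weight="balanced")

# rfc.fit(X_train, y_train); svm.fit(X_train, y_train)
```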
#### 3.4.3. Multi-layer Perceptron (MLP)
This is a simple feed-forward neural network (also called simple artificial
neural network), which has been growing in popularity for stress detection
(Bobade and Vani, 2020; Zawad et al., 2023; Albaladejo-González et al., 2023).
Our implementation followed an architecture consisting of an input layer, two
hidden layers, and a prediction layer. The input layer received data
represented as the normalized HRV features. A dropout layer (rate $=$ 0.2) was
included after the input layer to mitigate overfitting of the model. The two
hidden layers (ReLU activation) followed the dropout layer, with 12 nodes for the first hidden layer and 6 nodes for the second. The final prediction layer outputs the classification result using a sigmoid activation function.
This model was trained using the SGD optimizer (learning rate $=$ 0.001) and
weighted loss. It was also trained in batches of 256 samples. We utilized the
early stopping technique, where the training stopped if the validation loss
did not decrease for 15 consecutive iterations.
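This architecture can be reproduced, for instance, in Keras as sketched below; the framework choice and the epoch budget are assumptions, while the stated hyperparameters follow the description above.

```python
# Minimal Keras sketch of the described MLP: dropout 0.2 after the input,
# two ReLU hidden layers (12 and 6 nodes), sigmoid output, SGD (lr 0.001),
# batches of 256, early stopping with patience 15 on validation loss.
import tensorflow as tf

n_features = 22  # number of HRV features

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features,)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(12, activation="relu"),
    tf.keras.layers.Dense(6, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])

early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss",
                                              patience=15,
                                              restore_best_weights=True)
# class_weight implements the weighted loss for imbalanced classes;
# the epoch budget (200) is an assumed upper bound cut off by early stopping
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           batch_size=256, epochs=200, callbacks=[early_stop],
#           class_weight={0: w0, 1: w1})
```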
## 4\. Results
We employed three evaluation strategies to assess model performances: within-
dataset, cross-dataset, and combined dataset evaluations. For all evaluations,
accuracy and f1-score metrics were used to quantify model performance. The
detailed results for each evaluation strategy are presented below.
### 4.1. Within-dataset Assessment
We employed LOSO technique to train and evaluate model performance within each
of the four stress datasets. The average f1-score and accuracy for each
dataset are presented in Table 3.
Table 3. Results of within-dataset LOSO evaluation of HRV models conducted on the four stress datasets Model | F1-score | Accuracy
---|---|---
Test on SWELL-KW, ECG
SWELL-KW baseline (Koldijk et al., 2016) | - | 0.589
SWELL-KW RFC | 0.634 | 0.664
SWELL-KW SVM | 0.597 | 0.626
SWELL-KW MLP | 0.667 | 0.692
Test on WESAD, ECG
WESAD (ECG-HRV) baseline (Schmidt et al., 2018) | 0.813 | 0.854
WESAD (ECG-HRV) RFC | 0.819 | 0.863
WESAD (ECG-HRV) SVM | 0.814 | 0.862
WESAD (ECG-HRV) MLP | 0.844 | 0.880
Test on WESAD, BVP
WESAD (BVP-HRV) baseline (Schmidt et al., 2018) | 0.830 | 0.858
WESAD (BVP-HRV) RFC | 0.768 | 0.826
WESAD (BVP-HRV) SVM | 0.763 | 0.792
WESAD (BVP-HRV) MLP | 0.780 | 0.829
Test on ForDigitStress, BVP
ForDigitStress baseline (Heimerl et al., 2023) | 0.784 | 0.797
ForDigitStress RFC | 0.810 | 0.829
ForDigitStress SVM | 0.787 | 0.811
ForDigitStress MLP | 0.814 | 0.831
Test on VerBIO, BVP
VerBIO baseline (Yadav et al., 2020) | - | -
VerBIO RFC | 0.949 | 0.962
VerBIO SVM | 0.879 | 0.904
VerBIO MLP | 0.924 | 0.939
The VerBIO dataset yielded the best overall performance, with all models
achieving average accuracies above 90% and f1-scores close to or above 90%. On the other hand, models
trained on the SWELL-KW dataset exhibited lower average performance, with
f1-scores and accuracies ranging from 60% to 70%. All other datasets achieved
average f1-scores above 75% and average accuracies of roughly 80% or greater.
Except for VerBIO, MLP consistently achieved the highest performance within
each dataset, followed by RFC and then SVM. For VerBIO, RFC outperformed all
other models. Notably, all our models surpassed the baselines established in
the respective dataset papers, except for the WESAD BVP-HRV models. The
performance differences between models were relatively small, typically within
a 5% margin.
Within the WESAD dataset, models trained on ECG-derived HRV features
outperformed those using BVP-HRV features. However, the WESAD dataset paper
(Schmidt et al., 2018) reported slightly higher performance for BVP-HRV
models.
### 4.2. Cross-dataset Assessment
To assess the generalizability of the models, we conducted cross-dataset
evaluations. Each LOSO model trained on one dataset was tested on unseen data
from another dataset. For ECG-derived HRV features, we tested the SWELL-KW
models on data from the WESAD dataset, and vice versa. The results for these
cross-dataset evaluations are presented in Table 4 (SWELL-KW) and Table 5
(WESAD, ECG).
Table 4. Cross-dataset evaluation of SWELL-KW models on WESAD (ECG-HRV) dataset

Model | F1-score | Accuracy
---|---|---
Test on WESAD (ECG-HRV)
SWELL-KW RFC | 0.557 | 0.676
SWELL-KW SVM | 0.357 | 0.482
SWELL-KW MLP | 0.427 | 0.574
The SWELL-KW models performed poorly in the cross-dataset evaluation using
WESAD ECG-HRV data. The RFC models achieved the best average performance with
an f1-score of 55.7% and an accuracy of 67.6%. These values are considerably
lower than the worst-performing within-dataset WESAD (ECG-HRV) model, which
achieved f1-score $=$ 81.4% and accuracy $=$ 86.2%.
Table 5. Cross-dataset evaluation of WESAD (ECG-HRV) models on SWELL-KW dataset

Model | F1-score | Accuracy
---|---|---
Test on SWELL-KW
WESAD (ECG-HRV) RFC | 0.432 | 0.450
WESAD (ECG-HRV) SVM | 0.421 | 0.439
WESAD (ECG-HRV) MLP | 0.468 | 0.479
Like the SWELL-KW models, the WESAD ECG-HRV models also performed poorly in cross-
dataset evaluation. While MLP achieved the highest average performance in this
cross-dataset evaluation, all models yielded f1-scores and accuracies below
50%.
We employed a similar cross-dataset evaluation approach for models trained on
BVP-derived HRV features. Each LOSO model was tested on unseen data from two
different datasets. For instance, the WESAD (BVP-HRV) models were evaluated
using data from the ForDigitStress and VerBIO datasets. The results of the
cross-dataset evaluations for the WESAD (BVP-HRV), ForDigitStress, and VerBIO models are
presented in Tables 6, 7, and 8, respectively.
Table 6. Cross-dataset evaluation of WESAD (BVP-HRV) models on ForDigitStress and VerBIO datasets

Model | F1-score | Accuracy
---|---|---
Test on ForDigitStress
WESAD (BVP-HRV) RFC | 0.731 | 0.740
WESAD (BVP-HRV) SVM | 0.774 | 0.775
WESAD (BVP-HRV) MLP | 0.740 | 0.744
Test on VerBIO
WESAD (BVP-HRV) RFC | 0.904 | 0.907
WESAD (BVP-HRV) SVM | 0.873 | 0.877
WESAD (BVP-HRV) MLP | 0.888 | 0.893
The models trained on WESAD BVP-HRV features showed good cross-dataset
performance on ForDigitStress and VerBIO datasets. SVM achieved the best
average f1-score and accuracy on the ForDigitStress dataset, whereas RFC
performed better on the VerBIO dataset. The performance drop compared to the best
within-dataset models on ForDigitStress and VerBIO was minimal, ranging from
4% to 6%.
Table 7. Cross-dataset evaluation of ForDigitStress models on WESAD (BVP-HRV) and VerBIO datasets

Model | F1-score | Accuracy
---|---|---
Test on WESAD (BVP-HRV)
ForDigitStress RFC | 0.789 | 0.820
ForDigitStress SVM | 0.779 | 0.812
ForDigitStress MLP | 0.763 | 0.810
Test on VerBIO
ForDigitStress RFC | 0.856 | 0.865
ForDigitStress SVM | 0.836 | 0.846
ForDigitStress MLP | 0.784 | 0.805
The ForDigitStress models performed well on data from WESAD (BVP-HRV) and
VerBIO datasets, with all models achieving f1-scores higher than 75% and more
than 80% accuracy. The RFC showed the best performance on both external
datasets.
Table 8. Cross-dataset evaluation of VerBIO models on WESAD (BVP-HRV) and ForDigitStress datasets

Model | F1-score | Accuracy
---|---|---
Test on WESAD (BVP-HRV)
VerBIO RFC | 0.744 | 0.759
VerBIO SVM | 0.769 | 0.789
VerBIO MLP | 0.780 | 0.803
Test on ForDigitStress
VerBIO RFC | 0.823 | 0.823
VerBIO SVM | 0.684 | 0.688
VerBIO MLP | 0.747 | 0.752
The VerBIO models exhibited good generalizability to unseen data from both
WESAD (BVP-HRV) and ForDigitStress datasets. A notable exception was the SVM
models tested on ForDigitStress, which achieved f1-scores slightly below 70%,
a drop of 10 - 12% compared to the lowest-performing within-dataset
ForDigitStress model. Interestingly, the VerBIO MLP, which performed the best
on ForDigitStress data, achieved slightly better performance than the
within-dataset ForDigitStress models.
### 4.3. Combining Datasets
To explore the potential benefits of combining datasets, we trained additional
models using the LOSO technique. The SWELL-KW and WESAD (ECG-HRV) data were
combined to train ECG-derived HRV models. Similarly, a combined dataset
consisting of WESAD (BVP-HRV), ForDigitStress, and VerBIO data was used to
train BVP-derived HRV models. The LOSO results for combined ECG-based and BVP-
based HRV models are presented in Tables 9 and 10, respectively.
Table 9. Results of LOSO evaluation of ECG-derived HRV models trained by combining data from SWELL-KW and WESAD (ECG-HRV) datasets

Model | F1-score | Accuracy
---|---|---
Test on SWELL-KW
Combined ECG-HRV RFC | 0.644 | 0.671
Combined ECG-HRV SVM | 0.587 | 0.615
Combined ECG-HRV MLP | 0.660 | 0.679
Test on WESAD (ECG-HRV)
Combined ECG-HRV RFC | 0.679 | 0.761
Combined ECG-HRV SVM | 0.526 | 0.652
Combined ECG-HRV MLP | 0.718 | 0.792
Combined ECG-HRV Results
Combined ECG-HRV RFC | 0.658 | 0.707
Combined ECG-HRV SVM | 0.587 | 0.615
Combined ECG-HRV MLP | 0.683 | 0.725
Among the models trained on the combined SWELL-KW and WESAD (ECG-HRV) data, MLP
outperformed the others on both datasets. However, the performance on each
dataset was lower than that of the within-dataset models. While the drop in
performance for the SWELL-KW dataset was relatively small, there was around a 12%
drop in f1-score and a 9% drop in accuracy for the WESAD dataset.
Table 10. Results of LOSO evaluation of BVP-derived HRV models trained by combining data from WESAD (BVP-HRV), ForDigitStress, and VerBIO datasets

Model | F1-score | Accuracy
---|---|---
Test on WESAD (BVP-HRV)
Combined BVP-HRV RFC | 0.785 | 0.816
Combined BVP-HRV SVM | 0.768 | 0.813
Combined BVP-HRV MLP | 0.823 | 0.863
Test on ForDigitStress
Combined BVP-HRV RFC | 0.809 | 0.828
Combined BVP-HRV SVM | 0.776 | 0.800
Combined BVP-HRV MLP | 0.811 | 0.831
Test on VerBIO
Combined BVP-HRV RFC | 0.909 | 0.934
Combined BVP-HRV SVM | 0.845 | 0.886
Combined BVP-HRV MLP | 0.880 | 0.913
Combined BVP-HRV Results
Combined BVP-HRV RFC | 0.850 | 0.874
Combined BVP-HRV SVM | 0.806 | 0.840
Combined BVP-HRV MLP | 0.844 | 0.873
Combining WESAD (BVP-HRV), ForDigitStress, and VerBIO datasets resulted in
models with good stress detection performance across all three datasets. The
best average f1-scores and accuracies for each dataset were greater than 80%.
MLP trained on the combined dataset outperformed the best within-dataset WESAD
(BVP-HRV) model by around 4%. The best performance of combined models on the
ForDigitStress dataset was similar to the within-dataset results. However, in
the VerBIO dataset, the highest average performance was 3 - 4% lower than the
best within-dataset performance.
## 5\. Discussion
The MLP models achieved the best results in most within-dataset evaluations.
Our observation aligns with the findings of (Bobade and Vani, 2020; Prajod and
André, 2022; Albaladejo-González et al., 2023), where a simple feed-forward
network achieved better performance than other machine learning methods such
as SVM and RFC. We also observed that RFC performed better in many cross-
dataset evaluations, although this trend was not consistent across datasets.
Cross-dataset evaluations revealed significant limitations in generalizability
for models trained on SWELL-KW and WESAD (ECG-HRV) datasets. Combining these
datasets did not improve performance and, in the case of WESAD models, even
led to a decline. This observation highlights the importance of data
compatibility when considering such strategies. Our findings regarding the
cross-dataset performance and combining datasets with respect to SWELL-KW and
WESAD datasets are in line with the observations of previous works (Prajod and
André, 2022; Albaladejo-González et al., 2023).
As highlighted in Table 2, the SWELL-KW and WESAD (ECG-HRV) datasets differ in
many factors including stressors, experienced stress intensity, and the
measurement devices. To pin-point the dataset characteristics which
considerably influences the cross-dataset performance, we considered two
additional datasets: ForDigitStress and VerBIO. Cross-dataset evaluations were
conducted using these additional datasets and WESAD. The WESAD and VerBIO
datasets share more similarities than the ForDigitStress dataset. Both
datasets utilized the Empatica E4 device to measure the BVP signals. While
WESAD employed the TSST to elicit stress, VerBIO relied on a public speaking task
that is a sub-task of the TSST protocol. On the other hand, the ForDigitStress
dataset utilized a different measurement device and employed a mock job
interview to induce stress. However, we note that the three datasets rely on
social evaluation as a primary source of stress.
The models trained on these three datasets (WESAD, VerBIO, ForDigitStress)
exhibited good cross-dataset performance. This suggests that factors like
brand of measurement device, stress-elicitation technique, and even stress
intensity (within a reasonable range) may not significantly impact
generalizability when the stressor type remains consistent (social stress in
this case). Together with the findings of (Mishra et al., 2020) and (Baird et
al., 2021), where good cross-dataset performance was observed on datasets
involving virtually the same tasks (mental arithmetic and TSST, respectively), we
infer that stressor type plays a crucial role in the cross-dataset
generalizability of stress models.
While our results suggest stress intensity may not be a critical factor within
a reasonable range, it warrants further exploration. The low stress intensity
in the SWELL-KW dataset might have contributed to the poor performance of
WESAD models trained on high-intensity stress data. Interestingly, SWELL-KW
models, designed for low-intensity stress detection, also struggled with high-
intensity stress from WESAD data.
Overall, this study highlights the importance of considering stressor type and
data compatibility when developing generalizable stress detection models.
## 6\. Conclusion
Stress detection models with broader applicability are crucial due to the
diverse nature of stress experiences across various scenarios. Identifying
factors that influence cross-dataset generalizability is essential for
achieving this goal. This study addressed this gap by conducting cross-dataset
evaluations on four datasets (SWELL-KW, WESAD, ForDigitStress, VerBIO), which
contain both shared and distinct characteristics. We trained HRV-based
machine learning models (RFC, SVM, MLP) using ECG or BVP signals from these
datasets. Our key finding is that stressor type is the most prominent factor
influencing cross-dataset applicability. Models trained on datasets with
similar stressor types exhibited good generalizability, while those with
different stressors showed lower performance. Conversely, factors like stress
elicitation method and stress intensity had a minimal impact within the
explored datasets. Furthermore, matching stressor type proved crucial for
enhancing stress detection performance when combining datasets.
This study focused on ECG- and BVP-derived HRV features. In future works, we
will explore whether these findings extend to other stress-related modalities
(e.g., EDA). Additionally, a more extensive evaluation involving a wider range
of datasets encompassing different stress types (e.g., physical stress) would
further validate these observations.
## References
* Akmandor and Jha (2017) Ayten Ozge Akmandor and Niraj K Jha. 2017. Keep the stress away with SoDA: Stress detection and alleviation system. _IEEE Transactions on Multi-Scale Computing Systems_ 3, 4 (2017), 269–282.
* Albaladejo-González et al. (2023) Mariano Albaladejo-González, José A Ruipérez-Valiente, and Félix Gómez Mármol. 2023. Evaluating different configurations of machine learning models and their transfer learning capabilities for stress detection using heart rate. _Journal of Ambient Intelligence and Humanized Computing_ 14, 8 (2023), 11011–11021.
* Alberdi et al. (2016) Ane Alberdi, Asier Aztiria, and Adrian Basarab. 2016. Towards an automatic early stress recognition system for office environments based on multimodal measurements: A review. _Journal of biomedical informatics_ 59 (2016), 49–75.
* Baird et al. (2021) Alice Baird, Andreas Triantafyllopoulos, Sandra Zänkert, Sandra Ottl, Lukas Christ, Lukas Stappen, Julian Konzok, Sarah Sturmbauer, Eva-Maria Meßner, Brigitte M Kudielka, et al. 2021\. An evaluation of speech-based recognition of emotional and physiological markers of stress. _Frontiers in Computer Science_ 3 (2021), 750284.
* Balcombe and De Leo (2022) Luke Balcombe and Diego De Leo. 2022. Human-computer interaction in digital mental health. In _Informatics_ , Vol. 9. MDPI, 14.
* Benchekroun et al. (2022) Mouna Benchekroun, Dan Istrate, Vincent Zalc, and Dominique Lenne. 2022. A Multi-Modal Dataset (MMSD) for Acute Stress Bio-Markers. In _International Joint Conference on Biomedical Engineering Systems and Technologies_. Springer, 377–392.
* Benchekroun et al. (2023) Mouna Benchekroun, Pedro Elkind Velmovitsky, Dan Istrate, Vincent Zalc, Plinio Pelegrini Morita, and Dominique Lenne. 2023. Cross dataset analysis for generalizability of HRV-based stress detection models. _Sensors_ 23, 4 (2023), 1807.
* Birjandtalab et al. (2016) Javad Birjandtalab, Diana Cogan, Maziyar Baran Pouyan, and Mehrdad Nourani. 2016. A non-EEG biosignals dataset for assessment and visualization of neurological status. In _2016 IEEE International Workshop on Signal Processing Systems (SiPS)_. IEEE, 110–114.
* Bobade and Vani (2020) Pramod Bobade and M Vani. 2020. Stress detection with machine learning and deep learning using multimodal physiological data. In _2020 Second International Conference on Inventive Research in Computing Applications (ICIRCA)_. IEEE, 51–57.
* Braithwaite et al. (2013) Jason J Braithwaite, Derrick G Watson, Robert Jones, and Mickey Rowe. 2013. A guide for analysing electrodermal activity (EDA) & skin conductance responses (SCRs) for psychological experiments. _Psychophysiology_ 49, 1 (2013), 1017–1034.
* Can et al. (2019) Yekta Said Can, Bert Arnrich, and Cem Ersoy. 2019. Stress detection in daily life scenarios using smart phones and wearable sensors: A survey. _Journal of biomedical informatics_ 92 (2019), 103139.
* Elgendi et al. (2010) Mohamed Elgendi, Mirjam Jonkman, and Friso De Boer. 2010. Frequency Bands Effects on QRS Detection. _Biosignals_ 2003 (2010), 2002.
* Elgendi et al. (2013) Mohamed Elgendi, Ian Norton, Matt Brearley, Derek Abbott, and Dale Schuurmans. 2013. Systolic peak detection in acceleration photoplethysmograms measured from emergency responders in tropical conditions. _PloS one_ 8, 10 (2013), e76585.
* Giannakakis et al. (2019) Giorgos Giannakakis, Dimitris Grigoriadis, Katerina Giannakaki, Olympia Simantiraki, Alexandros Roniotis, and Manolis Tsiknakis. 2019. Review on psychological stress detection using biosignals. _IEEE transactions on affective computing_ 13, 1 (2019), 440–460.
* Greene et al. (2016) Shalom Greene, Himanshu Thapliyal, and Allison Caban-Holt. 2016. A survey of affective computing for stress detection: Evaluating technologies in stress detection for better health. _IEEE Consumer Electronics Magazine_ 5, 4 (2016), 44–56.
* Gupta et al. (2023) Rohit Gupta, Amit Bhongade, and Tapan Kumar Gandhi. 2023. Multimodal Wearable Sensors-based Stress and Affective States Prediction Model. In _2023 9th International Conference on Advanced Computing and Communication Systems (ICACCS)_ , Vol. 1. IEEE, 30–35.
* Haque et al. (2024) Yeaminul Haque, Rahat Shahriar Zawad, Chowdhury Saleh Ahmed Rony, Hasan Al Banna, Tapotosh Ghosh, M Shamim Kaiser, and Mufti Mahmud. 2024. State-of-the-Art of Stress Prediction from Heart Rate Variability Using Artificial Intelligence. _Cognitive Computation_ 16, 2 (2024), 455–481.
* Heimerl et al. (2023) Alexander Heimerl, Pooja Prajod, Silvan Mertes, Tobias Baur, Matthias Kraus, Ailin Liu, Helen Risack, Nicolas Rohleder, Elisabeth André, and Linda Becker. 2023. ForDigitStress: A multi-modal stress dataset employing a digital job interview scenario. _arXiv preprint arXiv:2303.07742_ (2023).
* Koldijk et al. (2016) Saskia Koldijk, Mark A Neerincx, and Wessel Kraaij. 2016. Detecting work stress in offices by combining unobtrusive sensors. _IEEE Transactions on affective computing_ 9, 2 (2016), 227–239.
* Koldijk et al. (2014) Saskia Koldijk, Maya Sappelli, Suzan Verberne, Mark A Neerincx, and Wessel Kraaij. 2014. The SWELL knowledge work dataset for stress and user modeling research. In _Proceedings of the 16th international conference on multimodal interaction_. 291–298.
* Liapis et al. (2021) Alexandros Liapis, Evanthia Faliagka, Christos Katsanos, Christos Antonopoulos, and Nikolaos Voros. 2021. Detection of subtle stress episodes during UX evaluation: Assessing the performance of the WESAD bio-signals dataset. In _Human-Computer Interaction–INTERACT 2021: 18th IFIP TC 13 International Conference, Bari, Italy, August 30–September 3, 2021, Proceedings, Part III 18_. Springer, 238–247.
* Luong et al. (2020) Tiffany Luong, Nicolas Martin, Anais Raison, Ferran Argelaguet, Jean-Marc Diverrez, and Anatole Lécuyer. 2020. Towards real-time recognition of users mental workload using integrated physiological sensors into a VR HMD. In _2020 IEEE international symposium on mixed and augmented reality (ISMAR)_. IEEE, 425–437.
* Makowski et al. (2021) Dominique Makowski, Tam Pham, Zen J Lau, Jan C Brammer, François Lespinasse, Hung Pham, Christopher Schölzel, and SH Annabel Chen. 2021. NeuroKit2: A Python toolbox for neurophysiological signal processing. _Behavior research methods_ (2021), 1–8.
* Martinho et al. (2018) Miguel Martinho, Ana Fred, and Hugo Silva. 2018. Towards continuous user recognition by exploring physiological multimodality: An electrocardiogram (ECG) and blood volume pulse (BVP) approach. In _2018 International Symposium in Sensing and Instrumentation in IoT Era (ISSI)_. IEEE, 1–6.
* Mishra et al. (2020) Varun Mishra, Sougata Sen, Grace Chen, Tian Hao, Jeffrey Rogers, Ching-Hua Chen, and David Kotz. 2020. Evaluating the reproducibility of physiological stress detection models. _Proceedings of the ACM on interactive, mobile, wearable and ubiquitous technologies_ 4, 4 (2020), 1–29.
* Nkurikiyeyezu et al. (2019) Kizito Nkurikiyeyezu, Anna Yokokubo, and Guillaume Lopez. 2019. The effect of person-specific biometrics in improving generic stress predictive models. _arXiv preprint arXiv:1910.01770_ (2019).
* Pham et al. (2021) Tam Pham, Zen Juen Lau, SH Annabel Chen, and Dominique Makowski. 2021. Heart rate variability in psychology: A review of HRV indices and an analysis tutorial. _Sensors_ 21, 12 (2021), 3998.
* Prajod and André (2022) Pooja Prajod and Elisabeth André. 2022. On the Generalizability of ECG-based Stress Detection Models. In _2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA)_. IEEE, 549–554.
* Prajod et al. ([n. d.]) Pooja Prajod, Matteo Lavit Nicora, Marta Mondellini, Matteo Meregalli Falerni, Rocco Vertechy, Matteo Malosio, and Elisabeth André. [n. d.]. Flow in Human-Robot Collaboration-Multimodal Analysis and Perceived Challenge Detection in Industrial Scenarios. _Frontiers in Robotics and AI_ 11 ([n. d.]), 1393795.
* Sabour et al. (2021) Rita Meziati Sabour, Yannick Benezeth, Pierre De Oliveira, Julien Chappe, and Fan Yang. 2021. Ubfc-phys: A multimodal database for psychophysiological studies of social stress. _IEEE Transactions on Affective Computing_ 14, 1 (2021), 622–636.
* Sarkar and Etemad (2020) Pritam Sarkar and Ali Etemad. 2020. Self-supervised ECG representation learning for emotion recognition. _IEEE Transactions on Affective Computing_ 13, 3 (2020), 1541–1554.
* Schmidt et al. (2018) Philip Schmidt, Attila Reiss, Robert Duerichen, Claus Marberger, and Kristof Van Laerhoven. 2018. Introducing WESAD, a multimodal dataset for wearable stress and affect detection. In _Proceedings of the 20th ACM international conference on multimodal interaction_. 400–408.
* Velmovitsky et al. (2021) Pedro Elkind Velmovitsky, Paulo Alencar, Scott T Leatherdale, Donald Cowan, and Plinio Pelegrini Morita. 2021. Towards real-time public health: a novel mobile health monitoring system. In _2021 IEEE International Conference on Big Data (Big Data)_. IEEE, 6049–6051.
* Vos et al. (2023a) Gideon Vos, Kelly Trinh, Zoltan Sarnyai, and Mostafa Rahimi Azghadi. 2023a. Ensemble machine learning model trained on a new synthesized dataset generalizes well for stress prediction using wearable devices. _Journal of Biomedical Informatics_ 148 (2023), 104556.
* Vos et al. (2023b) Gideon Vos, Kelly Trinh, Zoltan Sarnyai, and Mostafa Rahimi Azghadi. 2023b. Generalizable machine learning for stress monitoring from wearable devices: a systematic literature review. _International Journal of Medical Informatics_ 173 (2023), 105026.
* Yadav et al. (2020) Megha Yadav, Md Nazmus Sakib, Ehsanul Haque Nirjhar, Kexin Feng, Amir H Behzadan, and Theodora Chaspari. 2020. Exploring individual differences of public speaking anxiety in real-life and virtual presentations. _IEEE Transactions on Affective Computing_ 13, 3 (2020), 1168–1182.
* Yu et al. (2018) Bin Yu, Mathias Funk, Jun Hu, Qi Wang, and Loe Feijs. 2018. Biofeedback for everyday stress management: A systematic review. _Frontiers in ICT_ 5 (2018), 23.
* Zawad et al. (2023) Md Rahat Shahriar Zawad, Chowdhury Saleh Ahmed Rony, Md Yeaminul Haque, Md Hasan Al Banna, Mufti Mahmud, and M Shamim Kaiser. 2023. A Hybrid Approach for Stress Prediction from Heart Rate Variability. In _Frontiers of ICT in Healthcare: Proceedings of EAIT 2022_. Springer, 111–121.
# A Theoretical Perspective on Hyperdimensional Computing
Anthony Thomas<EMAIL_ADDRESS>
Sanjoy Dasgupta<EMAIL_ADDRESS>
Tajana Rosing<EMAIL_ADDRESS>
Department of Computer Science
University of California, San Diego
San Diego, CA 92093, USA
###### Abstract
Hyperdimensional (HD) computing is a set of neurally inspired methods for
obtaining high-dimensional, low-precision, distributed representations of
data. These representations can be combined with simple, neurally plausible
algorithms to effect a variety of information processing tasks. HD computing
has recently garnered significant interest from the computer hardware
community as an energy-efficient, low-latency, and noise-robust tool for
solving learning problems. In this review, we present a unified treatment of
the theoretical foundations of HD computing with a focus on the suitability of
representations for learning.
## 1 Introduction
Hyperdimensional (HD) computing is an emerging area at the intersection of
computer architecture and theoretical neuroscience (?). It is based on the
observation that brains are able to perform complex tasks using circuitry
that: (1) uses low power, (2) requires low precision, and (3) is highly robust
to data corruption. HD computing aims to carry over similar design principles
to a new generation of digital devices that are highly energy-efficient, fault
tolerant, and well-suited to natural information processing (?).
The wealth of recent work on neural networks also draws its inspiration from
the brain, but modern instantiations of these methods have diverged from the
desiderata above. The success of these networks has rested upon choices that
are not neurally plausible, most notably significant depth and training via
backpropagation. Moreover, from a practical perspective, training these models
often requires high precision and substantial amounts of energy. While a large
body of literature has sought to ameliorate these issues with neural networks,
these efforts have largely been designed to address specific performance
limitations. By contrast, the properties above emerge naturally from the basic
architecture of HD computing.
Hyperdimensional computing focuses on the very simplest neural architectures.
Typically, there is a single, static, mapping from inputs $x$ to much higher-
dimensional “neural” representations $\phi(x)$ living in some space
$\mathcal{H}$. All computational tasks are performed in $\mathcal{H}$-space,
using simple operations like element-wise additions and dot products. The
mapping $\phi$ is often taken to be random, and the embeddings have
coordinates that have low precision; for instance, they might take values $-1$
and $+1$. The entire setup is elementary and lends itself to fast, low-power
hardware realizations.
Indeed, a cottage industry has emerged around developing optimized
implementations of HD computing based algorithms on hardware accelerators (?,
?, ?, ?, ?, ?). Broadly speaking, this line of work touts HD computing as an
energy efficient, low-latency, and noise-resilient alternative to conventional
realizations of general purpose ML algorithms like support vector machines,
multilayer perceptrons, and nearest-neighbor classifiers. While this work has
reported impressive performance benefits, there has been relatively little
formal treatment of HD computing as a tool for general purpose learning.
This review has two broad aims. The first, more modest, goal is to introduce
the area of hyperdimensional computing to a machine learning audience. The
second is to develop a particular mathematical framework for understanding and
analyzing these models. The recent literature has suggested a variety of
different HD architectures that conform to the overall blueprint given above,
but differ in many important details. We present a unified treatment of many
such architectures that enables their properties to be compared. The most
basic types of questions we wish to answer are:
1. 1.
How can individual items, sets of items, and sequences of items, be
represented and stored in $\mathcal{H}$-space, in a manner that permits
reliable decoding?
2. 2.
What kinds of noise can be tolerated in $\mathcal{H}$-space?
3. 3.
What kinds of structure in the input $x$-space are preserved by the mapping to
$\mathcal{H}$-space?
4. 4.
What is the power of linear separators on the $\phi$-representation?
Some of these questions have been introduced in the HD computing literature
and studied in isolation (?, ?, ?, ?). In this work we address these questions
formally and in greater generality.
## 2 Introduction to HD Computing
In the following section we provide an introduction to the fundamentals of HD
computing and provide some brief discussion of its antecedents in the
neuroscience literature.
### 2.1 High-Dimensional Representations in Neuroscience
Neuroscience has proven to be a rich source of inspiration for the machine
learning community: from the perceptron (?), which introduced a simple and
general-purpose learning algorithm for linear classifiers, to neural networks
(?), to convolutional architectures inspired by visual cortex (?), to sparse
coding (?) and independent component analysis (?). One of the most
consequential discoveries from the neuroscience community, underlying much
research at the intersection of neuroscience and machine learning, has been
the notion of high-dimensional distributed representations as the fundamental
data structure for diverse types of information. In the neuroscience context,
these representations are also typically sparse.
To give a concrete example, the sensory systems of many organisms have a
critical component consisting of a transformation from relatively low
dimensional sensory inputs to much higher-dimensional _sparse_
representations. These latter representations are then used for subsequent
tasks such as recall and learning. In the olfactory system of the fruit fly
(?, ?, ?, ?), the mapping consists of two steps that can be roughly captured
as follows:
1. 1.
An input $\mathbf{x}\in\mathbb{R}^{n}$ is collected via a sensory organ and
mapped under a _random linear transformation_ to a point
$\phi(\mathbf{x})\in\mathbb{R}^{d}$ ($d\gg n$) in a high-dimensional space.
2. 2.
The coordinates of $\phi(\mathbf{x})$ are “sparsified” by a thresholding
operation which just retains the locations of the largest $k$ coordinates.
In the fly, the olfactory input is a roughly 50-dimensional vector ($n=50$)
corresponding to different types of odor receptor neurons while the sparse
representation to which it is mapped is roughly 2,000-dimensional ($d=2000$).
A similar “expand-and-sparsify” template is also found in other species,
suggesting that this process somehow exposes the information present in the
input signal in a way that is amenable to learning by the brain (?, ?, ?). The
precise mechanisms by which this occurs are still not fully understood, but
may have close connections to some of the literature on the theory of neural
networks and kernel methods (?, ?, ?).
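As a rough numerical illustration of this two-step template (the sparsity level $k$ below is our own choice, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 2000, 100          # fly-like dimensions; k is assumed

W = rng.standard_normal((d, n))  # step 1: fixed random linear expansion

def expand_and_sparsify(x):
    h = W @ x
    out = np.zeros(d)
    out[np.argsort(h)[-k:]] = 1.0  # step 2: keep locations of the k largest coords
    return out

phi_x = expand_and_sparsify(rng.standard_normal(n))  # sparse, high-dimensional
```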
### 2.2 HD Computing
Figure 1: The flow of data in HD computing. Data is mapped from the input space to HD-space under
an encoding function $\phi:\mathcal{X}\rightarrow\mathcal{H}$. HD
representations of data are stored in data structures and may be corrupted by
noise or hardware failures. HD representations can be used as input for
learning algorithms or other information processing tasks and may be decoded
to recover the input data.
The notion of high-dimensional, distributed, data representations has
engendered a number of computational models that have collectively come to be
known as _vector symbolic architectures_ (VSA) (?). In general, VSAs provide a
systematic way to generate and manipulate high-dimensional representations of
symbols so as to implement cognitive operations like association between
related concepts. Notable examples of VSAs include “holographic reduced
representations” (?, ?), “binary spatter codes” (?, ?), and “matrix binding of
additive terms” (?). HD computing can be seen as a successor to these early
VSA models, with a strong additional slant towards hardware efficiency. While
our treatment focuses primarily on recent work on HD computing, many of our
results apply to these earlier VSA models as well.
An overview of data-flow in HD computing is given in Figure 1. The first step
in HD computing is encoding, which maps a piece of input data to its high-
dimensional representation under some function
$\phi:\mathcal{X}\rightarrow\mathcal{H}$. The nature of $\phi$ depends on the
type of input and the choice of $\mathcal{H}$. In this review, we consider
inputs consisting of sets, sequences, and structures composed from a finite
alphabet as well as vectors in a Euclidean space. The space $\mathcal{H}$ is
some $d$-dimensional inner-product space defined over the real numbers or a
subset thereof. Work in the literature on both HD computing and traditional
neural networks has also explored complex-valued embeddings (?, ?, ?).
However, we here focus on the more common case of real-valued embeddings. For
computational reasons, it is common to restrict $\mathcal{H}$ to be defined
over integers in a limited range $[-b,b]$. We emphasize that the dimension of
$\mathcal{H}$ need not, in general, be greater than that of $\mathcal{X}$.
Indeed, in several cases the encoding methods discussed can be used to reduce
the dimension of the data.
The HD representations of data can be manipulated using simple element-wise
operators. Two common and important such operations are “bundling” and
“binding.” The bundling operator is used to compile a set of elements in
$\mathcal{H}$ and takes the form of a function
$\oplus:\mathcal{H}\times\mathcal{H}\rightarrow\mathcal{H}$. The function
takes two points in $\mathcal{H}$ and returns a third point that is similar to
both operands. The binding operator is used to create ordered tuples of points
in $\mathcal{H}$ and is likewise a function
$\otimes:\mathcal{H}\times\mathcal{H}\rightarrow\mathcal{H}$. The function
takes a pair of points in $\mathcal{H}$ as input, and produces a third point
dissimilar to both operands. We make these notions more precise in our
subsequent discussion of encoding.
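For concreteness, the snippet below illustrates one common instantiation of these operators for dense bipolar ($\pm 1$) codes: element-wise addition for bundling and the element-wise (Hadamard) product for binding. This is only one of several conventions in the literature.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000
x, y = rng.choice([-1, 1], size=(2, d))

bundle = x + y   # similar to both operands: <x, bundle> is close to d
bind = x * y     # dissimilar to both operands: <x, bind> is close to 0

print(x @ bundle, x @ bind)
```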
Given the HD representation $\phi(\mathcal{S})$ of a set of items
$\mathcal{S}\subset\mathcal{X}$ (produced by bundling the items), we may be
interested to query the representation to determine if it contains the
encoding of some $x\in\mathcal{X}$. To do so, we compute a metric of
similarity $\rho(\phi(x),\phi(\mathcal{S}))$ and declare that the item is
present in $\mathcal{S}$ if the similarity is greater than some critical
value. This process can be used to decode the HD representation so as to
recover the original points in $\mathcal{X}$ (?, ?). We may additionally wish
to assert that we can decode reliably even if $\phi(\mathcal{S})$ has been
corrupted by some noise process. One of our chief aims in this paper is to
mathematically characterize sufficient conditions for robust decoding under
different noise models and input data types.
Beyond simply storing and recalling specific patterns, HD representations may
also be used for learning. HD computing is most naturally applicable to
classification problems. Suppose we are given some collection of labeled
examples $\mathcal{S}=\\{(x_{i},y_{i})\\}_{i=1}^{N}$, where
$x_{i}\in\mathcal{X}$ and $y_{i}\in\\{c_{i}\\}_{i=1}^{K}$ is a categorical
variable indicating the class label of a particular $x_{i}$. One simple form
of HD classification bundles together the data corresponding to a particular
class to generate a “prototypical” example for the class (?, ?, ?):
$\displaystyle\phi(c_{k})=\bigoplus_{i\,:\,y_{i}=c_{k}}\phi(x_{i})$ (1)
The resulting $\phi(c_{k})$ are sometimes quantized to lower precision or
sparsified via a thresholding operation. A nice feature of this scheme is that
it is extremely simple to implement in an on-line fashion: that is, on
streaming data arriving continuously over time (?). It is common to fine-tune
the class prototypes using a few rounds of perceptron training (?, ?). Given
some subsequent piece of query data $x_{q}\in\mathcal{X}$ for which we do not
know the correct label, we simply return the label of the most similar
prototype:
$k^{\star}=\underset{k\in 1,\ldots,K}{\text{argmax
}}\rho(\phi(x_{q}),\phi(c_{k})).$
The similarity metric $\rho$ is typically taken to be the dot-product, with
the operands normalized if necessary. Thus, on the whole, the scheme is quite
similar to classical statistical methods like naive Bayes and Fisher’s linear
discriminant. In Section 5.2.1, we consider properties of the HD encoding that
can make linear models more powerful in HD space than in the original space.
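A minimal sketch of this prototype scheme (omitting the optional quantization and perceptron fine-tuning) might look as follows; `encodings` is assumed to hold the vectors $\phi(x_i)$:

```python
import numpy as np

def train_prototypes(encodings, labels, n_classes):
    # encodings: (N, d) array of phi(x_i); labels: integer class indices
    d = encodings.shape[1]
    protos = np.zeros((n_classes, d))
    for z, y in zip(encodings, labels):
        protos[y] += z                  # Equation (1): bundle by class
    return protos

def classify(protos, z_query):
    return int(np.argmax(protos @ z_query))  # label of most similar prototype
```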
HD computing and closely related techniques have been applied to a wide
variety of practical problems in fields ranging from bio-signal processing (?,
?), to natural language processing (?), and robotics (?, ?). We are here
concerned with a more abstract treatment that focuses on the basic properties
of HD computing and will not attempt to survey this literature. The interested
reader is referred to (?, ?) for discussions related to practical aspects of
HD computing.
## 3 Encoding and Decoding Discrete Data
A central object in HD computing is the mapping from inputs to their high-
dimensional representations. The design of this mapping, typically referred to
as “encoding” in the literature on HD computing, has been the subject of
considerable research. There is a wide range of possible encoding methods.
Some of these have been introduced in the HD computing literature and studied
in isolation (?, ?, ?). In this review, we present a novel unifying framework
in which to study these mappings and to characterize their key properties in a
non-asymptotic setting. We first discuss the encoding and decoding of _sets_
in some detail. Many HD encoding procedures for more complex data types such
as sequences essentially amount to transforming the data into a set and then
applying the standard set-encoding method.
### 3.1 Finite Sets
Let $\mathcal{A}=\\{a_{i}\\}_{i=1}^{m}$ be some finite alphabet of $m$
symbols. Symbols $a\in\mathcal{A}$ are mapped to $\mathcal{H}$ under an
encoding function $\phi:\mathcal{A}\rightarrow\mathcal{H}$. Our goal in this
section is to consider the encoding of sets $\mathcal{S}$ whose elements are
drawn from $\mathcal{A}$. The HD representation of $\mathcal{S}$ is
constructed by superimposing the embeddings of the constituent elements using
the bundling operator
$\oplus:\mathcal{H}\times\mathcal{H}\rightarrow\mathcal{H}$. The encoding of
$\mathcal{S}$ is defined to be
$\phi(\mathcal{S})=\oplus_{a\in\mathcal{S}}\phi(a)$. We first focus on the
intuitive setting in which $\oplus$ is the element-wise sum and then address
other forms of bundling.
To determine if some $a\in\mathcal{A}$ is contained in $\mathcal{S}$, we check
if the dot product $\langle\phi(a),\phi(\mathcal{S})\rangle$ exceeds some
fixed threshold. If the codewords $\\{\phi(a):a\in\mathcal{A}\\}$ are
orthogonal and have a constant length $L$, then we have
$\langle\phi(a),\phi(\mathcal{S})\rangle=L^{2}\,\mathbbm{1}(a\in\mathcal{S})$,
where $\mathbbm{1}$ is the indicator function which evaluates to one if its
argument is true and zero otherwise. However, when the codewords are not
perfectly orthogonal, we have
$\langle\phi(a),\phi(\mathcal{S})\rangle=L^{2}\,\mathbbm{1}(a\in\mathcal{S})+\Delta$,
where $\Delta$ is the “cross-talk” caused by interference between the
codewords. In order to decode reliably, we must ensure the contribution of the
cross-talk is small and bounded. We formalize this using the notion of
incoherence popularized in the sparse coding literature. We define incoherence
formally as (?):
###### Definition 1
Incoherence. For $\mu\geq 0$, we say $\phi:\mathcal{A}\to\mathcal{H}$ is
$\mu$-incoherent if for all distinct $a,a^{\prime}\in\mathcal{A}$, we have
$|\langle\phi(a),\phi(a^{\prime})\rangle|\leq\mu L^{2}$
where $L=\min_{a\in\mathcal{A}}\|\phi(a)\|$.
When $d\geq m$, it is possible to have codewords that are mutually orthogonal,
whereupon $\mu=0$. In general, we will be interested in results that do not
require $d\geq m$.
#### 3.1.1 Exact Decoding of Sets
In the following section, we show how the cross-talk can be bounded in terms
of the incoherence of $\phi$, and use this to derive a simple threshold rule
for exact decoding.
###### Theorem 2
Let $L=\min_{a\in\mathcal{A}}\|\phi(a)\|$ and let the bundling operator be the
element wise sum. To decode whether an element $a$ lies in set $S$, we use the
rule
$\langle\phi(a),\phi(S)\rangle\geq\frac{1}{2}L^{2}.$
This gives perfect decoding for sets of size $\leq s$ if $\phi$ is
$1/(2s)$-incoherent.
Proof Consider some symbol $a$. Then:
$\langle\phi(a),\phi(\mathcal{S})\rangle=\mathbbm{1}(a\in\mathcal{S})\langle\phi(a),\phi(a)\rangle+\sum_{a^{\prime}\in\mathcal{S}\setminus\\{a\\}}\langle\phi(a),\phi(a^{\prime})\rangle$
If $a\in\mathcal{S}$, then the above is lower bounded by $L^{2}-sL^{2}\mu$,
where $\mu$ is the incoherence of $\phi$. Otherwise, it is upper bounded by
$sL^{2}\mu$. So we decode perfectly if $sL^{2}\mu<L^{2}/2$, or $\mu<1/(2s)$.
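To make the rule concrete, the following sketch encodes a set with random dense binary codewords (so $L^{2}=d$ exactly) and recovers it with the threshold $d/2$; the parameters are chosen so that, with high probability, the cross-talk stays well below the threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, s = 1000, 10_000, 20                     # alphabet size, dimension, set size
codebook = rng.choice([-1, 1], size=(m, d))    # phi(a) for each symbol a

S = rng.choice(m, size=s, replace=False)       # the set to encode
phi_S = codebook[S].sum(axis=0)                # bundling by element-wise sum

decoded = np.flatnonzero(codebook @ phi_S >= d / 2)   # Theorem 2 threshold rule
assert set(decoded) == set(S)                  # exact recovery (w.h.p.)
```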
#### 3.1.2 Random Codebooks
In practice, each $\phi(a)$ is usually generated by sampling from some
distribution over $\mathcal{H}$ or a subset of $\mathcal{H}$ (?, ?, ?). We
typically require that this distribution is factorized so that coordinates of
$\phi(a)$ are i.i.d. Intuitively, the incoherence condition stipulated in
Theorem 2 will hold if dot products between two different codewords are
concentrated around zero. Furthermore, we would like it to be the case that
this concentration occurs quickly as the encoding dimension is increased. It
turns out that a fairly broad family of simple distributions satisfies these
properties.
As an example, suppose $\phi(a)$ is sampled from the uniform distribution over
$\\{\pm 1\\}^{d}$, which we denote $\phi(a)\sim\\{\pm 1\\}^{d}$. In this case,
$L=\sqrt{d}$ exactly, and a direct application of Hoeffding’s inequality and
the union bound yields:
$\mathbb{P}(\text{$\exists$ distinct $a,a^{\prime}\in\mathcal{A}$ s.t.\
}|\langle\phi(a),\phi(a^{\prime})\rangle|\geq\mu d)\leq
m^{2}\exp\left(-\frac{\mu^{2}d}{2}\right).$
(Recall that $m=|\mathcal{A}|$.) Stated another way, with high probability
$\mu=O(\sqrt{(\ln m)/d})$, meaning that we can make $\mu$ as small as desired
by increasing $d$.
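This scaling is easy to check empirically; the sketch below estimates the incoherence of random dense binary codebooks at several dimensions and compares it against the $\sqrt{(\ln m)/d}$ rate:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 200
for d in (1_000, 10_000, 100_000):
    C = rng.choice([-1, 1], size=(m, d))
    G = (C @ C.T) / d                 # normalized pairwise dot products
    np.fill_diagonal(G, 0.0)
    print(d, np.abs(G).max(), np.sqrt(np.log(m) / d))  # observed mu vs. rate
```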
In fact, the same basic approach holds for the much broader class of _sub-
Gaussian_ distributions, which can be characterized as follows (?):
###### Definition 3
Sub-Gaussian Random Variable. A random variable $X\sim P_{X}$ is said to be
sub-Gaussian if there exists $\sigma\in\mathbb{R}^{+}$, referred to as the
sub-Gaussian parameter, such that:
$\mathbb{E}[\exp\left(\lambda(X-\mathbb{E}[X])\right)]\leq\exp\left(\frac{\sigma^{2}\lambda^{2}}{2}\right)\text{
for all }\lambda\in\mathbb{R}.$
Intuitively, the tails of a sub-Gaussian random variable decay at least as
fast as those of a Gaussian. We say the encoding $\phi$ is $\sigma$-sub-Gaussian
if $\phi(a)$ is generated by sampling its $d$ coordinates independently from
the same sub-Gaussian distribution with parameter $\sigma$. We say $\phi$ is
“centered” if the distribution from which it is sampled is of mean zero. In
general, we assume $\phi$ is centered unless stated otherwise.
Codewords drawn from a sub-Gaussian distribution have the useful property that
their lengths concentrate fairly rapidly around their expected value. This
concentration is, in general, worse than sub-Gaussian but well behaved
nonetheless. The following result is well known but we reiterate it here as it
is useful for our subsequent discussion. A proof is available in the appendix.
###### Theorem 4
Let $\phi$ be centered and $\sigma$-sub-Gaussian. Then:
$\mathbb{P}(\exists\,a\in\mathcal{A}\text{ s.t.
}|\|\phi(a)\|_{2}^{2}-\mathbb{E}[\|\phi(a)\|_{2}^{2}]|\geq t)\leq
2m\exp\left(-c\min\left\\{\frac{t^{2}}{d\sigma^{4}},\frac{t}{\sigma^{2}}\right\\}\right)$
for some positive absolute constant $c$.
Like the conventional Gaussian distribution, sub-Gaussianity is preserved
under linear transformations. That is, if $\mathbf{x}=\\{x_{i}\\}_{i=1}^{n}$
is a sequence of i.i.d. sub-Gaussian random variables and $\mathbf{a}$ is an
arbitrary vector in $\mathbb{R}^{n}$, then
$\langle\mathbf{a},\mathbf{x}\rangle$ is sub-Gaussian with parameter
$\sigma\|a\|_{2}$ (?). We can obtain a more general version of the previous
result about $\phi\sim\\{\pm 1\\}^{d}$ which applies to $\phi(a)$ sampled from
any sub-Gaussian distribution.
###### Theorem 5
Let $\phi$ be $\sigma$-sub-Gaussian. Then, for $\mu>0$,
$\mathbb{P}(\exists\,\text{{distinct }}a,a^{\prime}\in\mathcal{A}\text{ s.t.
}|\langle\phi(a),\phi(a^{\prime})\rangle|\geq\mu L^{2})\leq
m^{2}\exp\left(-\frac{\mu^{2}\kappa L^{2}}{2\sigma^{2}}\right)$
where $\kappa=(\min_{a}\|\phi(a)\|^{2})/(\max_{a}\|\phi(a)\|^{2})$.
Proof Fix some $a$ and $a^{\prime}$. Treating $\phi(a)$ as a fixed vector in
$\mathbb{R}^{d}$ and using the fact that sub-Gaussianity is preserved under
linear transformations, we may apply a Chernoff bound for sub-Gaussian random
variables (e.g. Prop 2.1 of (?)) to obtain:
$\mathbb{P}(|\langle\phi(a),\phi(a^{\prime})\rangle|\geq\mu L^{2})\leq
2\exp\left(-\frac{\mu^{2}L^{4}}{2\sigma^{2}\|\phi(a)\|_{2}^{2}}\right)\leq
2\exp\left(-\frac{\mu^{2}L^{4}}{2\sigma^{2}L_{\max}^{2}}\right)$
where $L_{\max}=\max_{a\in\mathcal{A}}\|\phi(a)\|_{2}$. Therefore, taking
$\kappa=L^{2}/L_{\max}^{2}$, we have:
$\mathbb{P}(|\langle\phi(a),\phi(a^{\prime})\rangle|\geq\mu L^{2})\leq
2\exp\left(-\frac{\mu^{2}\kappa L^{2}}{2\sigma^{2}}\right)$
and the claim follows by applying the union bound over all
$\binom{m}{2}<m^{2}/2$ pairs of codewords. We note that, per Theorem 4,
$\kappa\to 1$ as $d$ becomes large.
To be concrete and provide useful practical guidance, we here introduce three
running examples of codeword distributions.
Dense Binary Codewords. In our first example, which in our impression is the
most common in practice, $\phi(a)$ is sampled from the uniform distribution
over the vertices of the $d$-dimensional hypercube $\\{-1,+1\\}^{d}$. This approach is advantageous
because it leads to efficient hardware implementations (?, ?) and is simple to
analyze.
Gaussian Codewords. Our second example consists of codewords sampled from the
$d$-dimensional Gaussian distribution (?). That is,
$\phi(a)\sim\mathcal{N}(\mathbf{0}_{d},\sigma^{2}\mathbf{I}_{d})$, where
$\mathbf{0}_{d}$ is the $d$-dimensional zero vector. Here, the codewords will
not be of exactly the same length. However, Theorem 4 ensures that squared
codeword lengths are concentrated around their expected value of
$\sigma^{2}d$. More formally, for some $\tau>0$:
$\mathbb{P}(\exists\,a\in\mathcal{A}\text{ s.t.
}|\|\phi(a)\|_{2}^{2}-\sigma^{2}d|\geq\tau\sigma^{2}d)\leq
2m\exp\left(-c\min\left\\{\tau^{2}d,\tau d\right\\}\right).$
In both cases, we can see that to obtain a $\mu$-incoherent codebook with
probability $1-\delta$, it is sufficient to choose:
$d=O\left(\frac{2}{\mu^{2}}\ln\frac{m}{\delta}\right)$
Or, stated another way, we have $\mu=O(\sqrt{(\ln m)/d})$ with high
probability. The key point in the two examples above is that the encoding
dimension is inversely proportional to $\mu^{2}$. Per Theorem 2, to decode
correctly it is sufficient to have $\mu=1/(2s)$, meaning that the encoding
dimension scales quadratically with the number of elements in the set, but
only logarithmically in the alphabet size and probability of error.
We will also consider a third example in which the codewords are _sparse_ and
binary. However, we defer this for the time being as slightly different
encoding methods and analysis techniques are appropriate.
#### 3.1.3 Decoding with Small Probability of Error
The analysis above gives strong _uniform_ bounds showing that, with
probability at least $1-\delta$ over random choice of the codebook, _every_
subset of size at most $s$ will be correctly decoded. However, this guarantee
requires us to impose the unappealing restriction that $s\ll\sqrt{d}$ which is
a significant practical limitation. We here show that we can obtain $s=O(d)$
but with a weaker _pointwise_ guarantee: any arbitrarily chosen set of size at
most $s$ will be correctly decoded with probability $1-\delta$ over the random
choice of codewords. Rather than insist on a hard upper bound on the
incoherence of the codebook, we can instead require the milder condition that
random sums over dot-products between $\leq s$ codewords are small with high-
probability. We define this property more formally as follows:
###### Definition 6
Subset Incoherence. For $\tau>0$, we say a random mapping
$\phi:\mathcal{A}\rightarrow\mathcal{H}$ satisfies $(s,\tau,\delta)$-subset
incoherence if, for any $\mathcal{S}\subset\mathcal{A}$ of size at most $s$,
with probability at least $1-\delta$ over the choice of $\phi$:
$\underset{a\notin\mathcal{S}}{\max}\,\left|\sum_{a^{\prime}\in\mathcal{S}}\langle\phi(a),\phi(a^{\prime})\rangle\right|\leq\tau
L^{2}$
where $L=\min_{a\in\mathcal{A}}||\phi(a)||$.
Once again, it turns out that sampling the codewords from a sub-Gaussian
distribution can readily be seen to satisfy a subset-incoherence condition
with high-probability:
###### Theorem 7
Let $\phi$ be $\sigma$-sub-Gaussian and fix some
$\mathcal{S}\subset\mathcal{A}$ of size $s$. Then
$\mathbb{P}\left(\underset{a\notin\mathcal{S}}{\max}\,\left|\sum_{a^{\prime}\in\mathcal{S}}\langle\phi(a),\phi(a^{\prime})\rangle\right|\geq\tau
L^{2}\right)\leq 2m\exp\left(-\frac{\kappa\tau^{2}L^{2}}{2s\sigma^{2}}\right)$
where $\kappa$ and $L$ are as in Theorem 5.
The proof is similar to Theorem 5 and is available in the appendix. As a
concrete example, in the practically relevant case that $\phi\sim\\{\pm
1\\}^{d}$ the above boils down to:
$\mathbb{P}\left(\exists\,a\notin\mathcal{S}\text{ s.t.
}\left|\sum_{a^{\prime}\in\mathcal{S}}\langle\phi(a),\phi(a^{\prime})\rangle\right|\geq\tau
d\right)\leq 2m\exp\left(-\frac{\tau^{2}d}{2s}\right).$
Stated another way, we have: $\tau=O(\sqrt{(s\ln m)/d})$. Following Theorem 2,
in order to ensure correct decoding with high probability, we must simply
argue that the codebook satisfies the subset-incoherence property with
$\tau=1/2$, meaning we should choose the encoding dimension to be $d=O(s\ln
m)$.
This method of analysis is similar to that of (?, ?, ?), who reach the same
conclusion vis-à-vis linear scaling using the central limit theorem. However,
our formalism is more general and is non-asymptotic.
#### 3.1.4 Comparing Set Representations
We can estimate the size of a set by computing the norm of its encoding, where
the precision of the estimate can be bounded in terms of the incoherence of
$\phi$. In the following discussion, we make the simplifying assumption that
the codewords are all of a constant length $L$. Again appealing to Theorem 4,
we can see that this assumption is not onerous since the codeword lengths
concentrate around their expected value.
###### Theorem 8
Let $\mathcal{S}$ be a set of size $s$. Then:
$s(1-s\mu)\leq\frac{1}{L^{2}}\|\phi(\mathcal{S})\|_{2}^{2}\leq s(1+s\mu)$
Proof The proof is by direct manipulation:
$\displaystyle\frac{1}{L^{2}}\|\phi(\mathcal{S})\|_{2}^{2}$
$\displaystyle=\frac{1}{L^{2}}\langle\phi(\mathcal{S}),\phi(\mathcal{S})\rangle=\frac{1}{L^{2}}\sum_{a\in\mathcal{S}}\langle\phi(a),\phi(a)\rangle+\frac{1}{L^{2}}\sum_{\substack{a,a^{\prime}\in\mathcal{S}\\ a\neq a^{\prime}}}\langle\phi(a),\phi(a^{\prime})\rangle$
$\displaystyle\leq\frac{1}{L^{2}}(sL^{2}+s^{2}\mu L^{2}).$
The other direction is analogous.
Given a pair of sets $\mathcal{S},\mathcal{S}^{\prime}$ over the same
alphabet, we can estimate the size of their intersection and union directly
from their encoded representation.
###### Theorem 9
Let $\mathcal{S}$ and $\mathcal{S}^{\prime}$ be sets of size $s$ and
$s^{\prime}$ drawn from $\mathcal{A}$ and denote their encodings by
$\phi(\mathcal{S})$ and $\phi(\mathcal{S}^{\prime})$ respectively.
$|\mathcal{S}\cap\mathcal{S}^{\prime}|-ss^{\prime}\mu\leq\frac{1}{L^{2}}\langle\phi(\mathcal{S}),\phi(\mathcal{S}^{\prime})\rangle\leq|\mathcal{S}\cap\mathcal{S}^{\prime}|+ss^{\prime}\mu$
The proof is similar to Theorem 8 and is deferred to the appendix. Noting as
well that
$|\mathcal{S}\cup\mathcal{S}^{\prime}|=|\mathcal{S}|+|\mathcal{S}^{\prime}|-|\mathcal{S}\cap\mathcal{S}^{\prime}|$,
we see that we can estimate the size of the union using the previous theorem.
In practice, it may be unnecessary to compute these quantities with a high
degree of precision. For instance, it may only be necessary to identify sets
with a large intersection-over-union. Provided the definition of “large” is
somewhat loose, we can accept a higher incoherence among the codewords in
exchange for reducing the encoding dimension.
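The following sketch illustrates Theorems 8 and 9 with dense binary codewords (so $L^{2}=d$): set sizes and intersection sizes are read off from norms and dot products of the encodings alone.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 1000, 50_000
codebook = rng.choice([-1, 1], size=(m, d))

S, Sp = list(range(0, 30)), list(range(20, 60))  # |S|=30, |S'|=40, overlap 10
phi_S = codebook[S].sum(axis=0)
phi_Sp = codebook[Sp].sum(axis=0)

print((phi_S @ phi_S) / d)    # approx |S| = 30       (Theorem 8)
print((phi_S @ phi_Sp) / d)   # approx |S  intersect  S'| = 10  (Theorem 9)
```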
#### 3.1.5 Sparse and Low-Precision Encodings
In the previous discussion, we assumed the bundling operator was the element-
wise sum. This is a natural choice when the codewords are dense or non-binary.
However, the resulting encodings are of unconstrained precision which may be
undesirable from a computational perspective. For the purposes of representing
sets of size $\leq s$, we may truncate $\phi(\mathcal{S})$ to lie in the range
$[-c,c]$, with negligible loss in accuracy provided $c=O(\sqrt{s})$. In
practice, it is common to quantize the encodings more aggressively to binary
precision by thresholding (?, ?, ?, ?). In other words, we encode as
$\phi(\mathcal{S})=g_{t}\left(\sum_{a\in\mathcal{S}}\phi(a)\right)$, where $g_{t}$ is a thresholding
function that is applied coordinate-wise: $g_{t}(x)=1$ if $x\geq t$ and $0$
otherwise.
As a notable special case of the thresholding rule described above, we here
consider encoding with _sparse_ codewords. In this case, we assume that a
coordinate in a codeword is non-zero with some small probability. In other
words, $\phi(a)_{i}\sim\text{Bernoulli}(p)$, where $p\ll 1/2$. We may then
bundle items by taking an element-wise sum of their codewords with threshold
$t=1$, which is equivalent to taking the element-wise maximum over the
codewords. That is, $\phi(\mathcal{S})=\max_{a\in\mathcal{S}}\phi(a)$, where
the $\max$ operator is applied coordinate-wise. Noting that the max is upper
bounded by the sum in this setting, the notion of incoherence is a relevant
quantity and the analysis of Theorem 2 continues to apply.
This encoding procedure is essentially a standard implementation of the
popular “Bloom filter” data structure for representing sets (?). The
conventional Bloom filter differs slightly in that the typical decoding rule
is to threshold $\langle\phi(a),\phi(\mathcal{S})\rangle$ at
$\|\phi(a)\|_{1}$. There is a large literature on Bloom filters with
applications ranging from networking and database systems to neural coding,
and several schemes for generating good codewords have been proposed (?, ?,
?). Using the random coding scheme described here, the optimal value of $p$
can be seen to be $(\ln 2)/s$ and, to ensure the probability of a false
positive is at most $\delta$, the encoding dimension should be chosen on the
order of $s\ln(1/\delta)$ (?). A practical benefit of Bloom filters is that
they have an efficient implementation using hash functions which does not
require materializing a codebook as in methods based on random sampling. This
may be beneficial when the alphabet size is large enough that storing
codewords is not possible. The connections between HD computing and Bloom
filters are examined in greater detail in (?).
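A small sketch of this sparse scheme with the parameter choices mentioned above; the exact constant in the dimension formula is the classic Bloom filter sizing and is our assumption here:

```python
import numpy as np

rng = np.random.default_rng(0)
m, s, delta = 1000, 50, 1e-3
p = np.log(2) / s                                        # optimal density
d = int(np.ceil(s * np.log(1 / delta) / np.log(2)**2))   # d = O(s ln(1/delta))
codebook = (rng.random((m, d)) < p).astype(int)          # sparse binary codewords

S = rng.choice(m, size=s, replace=False)
phi_S = codebook[S].max(axis=0)          # bundling by element-wise max (OR)

def contains(a):
    # Bloom-style decoding: <phi(a), phi(S)> thresholded at ||phi(a)||_1
    return bool(codebook[a] @ phi_S >= codebook[a].sum())
```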
We remark that this method of encoding is related to an interesting procedure
known as “context dependent thinning” (CDT) which can be used to control the
density of binary representations (?, ?). CDT takes the logical “and” of
$\phi(\mathcal{S})$ and some permutation $\sigma(\phi(\mathcal{S}))$ to obtain
the thinned representation
$\phi(\mathcal{S})^{\prime}=\phi(\mathcal{S})\land\sigma(\phi(\mathcal{S}))$.
This process can be repeated until the desired density of $\phi(\mathcal{S})$
is achieved. A capacity analysis of CDT representations can be found in (?).
### 3.2 Robustness to Noise
In this section we explore the noise robustness properties of the encoding
methods discussed above using the formalism of incoherence. We consider some
unspecified noise process which corrupts the encoding of a set
$\mathcal{S}\subset\mathcal{A}$ of size at most $s$ according to
$\tilde{\phi}(\mathcal{S})=\phi(\mathcal{S})+\Delta_{\mathcal{S}}$. We say
$\Delta_{\mathcal{S}}$ is $\rho$-bounded if:
$\underset{a\in\mathcal{A}}{\max}\,|\langle\phi(a),\Delta_{\mathcal{S}}\rangle|\leq\rho.$
We are interested in understanding the conditions under which we can still
decode reliably.
###### Theorem 10
Suppose $\mathcal{S}$ has size $\leq s$ and $\Delta_{\mathcal{S}}$ is
$\rho$-bounded. We can correctly decode $\mathcal{S}$ using the thresholding
rule from Theorem 2 if:
$\frac{\rho}{L^{2}}+s\mu<\frac{1}{2}$
where $L=\min_{a\in\mathcal{A}}\|\phi(a)\|_{2}$.
The proof is a straightforward extension of Theorem 2 and is available in the
appendix. The practical implication is that there is a tradeoff between the
incoherence of the codebook and robustness to noise: a higher incoherence
allows for a smaller encoding dimension but at the cost of a tighter
constraint on $\rho$. We can analyze several practically relevant noise models
by placing additional restrictions on $\Delta_{\mathcal{S}}$ and by
considering worst or typical case bounds on $\rho$. We here consider different
forms of noise under constraints on $\mathcal{H}$. Our goal is to understand
how the magnitude of noise that can be tolerated scales with the encoding
dimension, size $s$ of the encoded set, and size $m$ of the alphabet. In each
setting we consider a “passive” model in which the noise is sampled randomly
from some distribution, and an “adversarial” model in which the noise is
arbitrary and may be designed to maliciously corrupt the encodings. We again
appeal to Theorem 4 to justify a simplifying assumption that the codewords are
of equal length.
###### Lemma 11
Sub-Gaussian Codewords. Fix a centered and $\sigma$-sub-Gaussian codebook
$\phi$ whose codewords are of length $L$. Consider the passive additive white
Gaussian noise model
$\Delta_{\mathcal{S}}\sim\mathcal{N}(0,\sigma_{\Delta}^{2}\mathbf{I}_{d})$;
that is, each coordinate is corrupted by Gaussian noise with mean zero and
variance $\sigma_{\Delta}^{2}$. Then, we can correctly decode with probability
$1-\delta$ over random draws of $\Delta_{\mathcal{S}}$ provided:
$\sigma_{\Delta}<\frac{L}{\sqrt{2\ln(2m/\delta)}}\left(\frac{1}{2}-s\mu\right)$
Now consider an adversarial model in which $\Delta_{\mathcal{S}}$ is arbitrary
save for a constraint on the norm: $\|\Delta_{\mathcal{S}}\|_{2}\leq\omega L$.
Then, we can correctly decode provided:
$\omega<\frac{1}{2}-s\mu$
Proof Let us first consider the passive case in which
$\Delta_{\mathcal{S}}\sim\mathcal{N}(0,\sigma_{\Delta}^{2}\mathbf{I}_{d})$.
Fix some $a\in\mathcal{A}$. Then
$\langle\phi(a),\Delta_{\mathcal{S}}\rangle\sim\mathcal{N}(0,\sigma_{\Delta}^{2}L^{2})$.
By a standard tail bound on the Gaussian distribution (?) and the union bound,
we have:
$\mathbb{P}(\exists\,a\text{ s.t.
}|\langle\phi(a),\Delta_{\mathcal{S}}\rangle|\geq\rho)\leq
2m\exp\left(-\frac{\rho^{2}}{2\sigma_{\Delta}^{2}L^{2}}\right).$
Therefore, with probability $1-\delta$, we have that $\Delta_{\mathcal{S}}$ is
$\rho$-bounded for
$\rho\leq\sigma_{\Delta}L\sqrt{2\ln(2m/\delta)}.$
By Theorem 10 we can decode correctly if:
$\displaystyle\frac{\sigma_{\Delta}L\sqrt{2\ln(2m/\delta)}}{L^{2}}+s\mu<\frac{1}{2}\Rightarrow\sigma_{\Delta}<\frac{L}{\sqrt{2\ln(2m/\delta)}}\left(\frac{1}{2}-s\mu\right).$
Now consider the adversarial case in which
$\|\Delta_{\mathcal{S}}\|_{2}\leq\omega L$. By the Cauchy-Schwarz inequality,
$|\langle\phi(a),\Delta_{\mathcal{S}}\rangle|\leq\omega L^{2}$. Therefore, by
Theorem 10, we can decode correctly if
$\displaystyle\frac{\omega
L^{2}}{L^{2}}+s\mu<\frac{1}{2}\Rightarrow\omega<\frac{1}{2}-s\mu.$
We again emphasize that, per Theorem 5, $\mu=O(\sqrt{(\ln m)/d})$. Since
$L=O(\sqrt{d})$, we can see that we can tolerate
$\sigma_{\Delta}\approx\sqrt{d/(\ln m)}-s$ in the passive case. We next turn
to a notable special case of the above in which the codewords are dense and
binary. In this case, we may assume that the coordinates of vectors in $\mathcal{H}$ are integers in the range $[-s,s]$.
###### Lemma 12
Dense Binary Codewords. Fix a codebook $\phi$ such that $\phi(a)\sim\\{\pm
1\\}^{d}$ for each $a\in\mathcal{A}$. Consider a passive noise model in which
$\Delta_{\mathcal{S}}\sim\text{\rm unif}(\\{-c,...,c\\}^{d})$; that is, each
coordinate is shifted by an integer amount chosen uniformly at random between
$-c$ and $c$. Then, we can correctly decode with probability $1-\delta$
provided:
$c<\sqrt{\frac{d}{2\ln(2m/\delta)}}\left(\frac{1}{2}-s\mu\right)$
Now consider an adversarial model in which we assume
$\|\Delta_{\mathcal{S}}\|_{1}\leq\omega sd$. Then we can decode correctly if:
$\omega<\frac{1}{2s}-\mu.$
A proof is available in the Appendix. We next consider the case of Section
3.1.5 in which the codewords are sparse and binary and the bundling operator
is the element-wise maximum. We here assume that
$\tilde{\phi}(\mathcal{S})=\phi(\mathcal{S})+\Delta_{\mathcal{S}}$ is
truncated so that each coordinate is either $0$ or $+1$.
###### Lemma 13
Sparse Binary Codewords. Fix a codebook $\phi$ such that
$\phi(a)\in\\{0,1\\}^{d}$, and assume some fraction $p\ll 1/2$ of coordinates
are non-zero for each $a\in\mathcal{A}$. Consider a passive noise model in which each coordinate of $\Delta_{\mathcal{S}}$ is drawn independently as:
$\Delta_{\mathcal{S}}\sim\begin{cases}-1&\text{ w.p. }\frac{\theta}{2}\\\
0&\text{ w.p. }1-\theta\\\ +1&\text{ w.p. }\frac{\theta}{2}.\end{cases}$
Then we can decode correctly with probability $1-\delta$ provided:
$\theta<\frac{1}{2}-2s\mu-\sqrt{\frac{1}{2dp}\ln\frac{2m}{\delta}}.$
Now consider an adversarial model in which
$\|\Delta_{\mathcal{S}}\|_{1}\leq\omega d$. Then we can decode correctly if
$\omega<p(\frac{1}{2}-s\mu)$.
Proof Consider first the passive noise model. Fix some $\phi(a)$. Then:
$|\langle\phi(a),\Delta_{\mathcal{S}}\rangle|\leq\sum_{i=1}^{d}|\phi(a)^{(i)}\Delta_{\mathcal{S}}^{(i)}|.$
Treating $\phi(a)$ as a fixed vector with $dp$ non-zero entries, the sum is
concentrated in the range $dp(\theta\pm\epsilon)$, and so $\rho\leq
dp(\theta+\epsilon)$ with high probability. By Chernoff/Hoeffding and the
union-bound, with probability $1-\delta$:
$\epsilon\leq\sqrt{\frac{1}{2dp}\ln\frac{2m}{\delta}}.$
The result is obtained by noting that $L=\sqrt{pd}$ and applying Theorem 10.
For the adversarial case, the result is obtained by again observing that
$|\langle\phi(a),\Delta_{\mathcal{S}}\rangle|\leq\|\phi(a)\|_{\infty}\|\Delta_{\mathcal{S}}\|_{1}\leq\omega
d$ for any $a\in\mathcal{A}$ and applying Theorem 10.
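To close this section, a small simulation (our own sketch; all parameter values are illustrative) of the passive model of Lemma 11: dense bipolar codewords, additive white Gaussian noise, and the thresholding decoder, which declares $a\in\mathcal{S}$ when $\langle\phi(a),\tilde{\phi}(\mathcal{S})\rangle\geq L^{2}/2=d/2$.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, s = 10_000, 500, 10
codebook = rng.choice([-1, 1], size=(m, d)).astype(np.float64)

S = rng.choice(m, size=s, replace=False)
phi_S = codebook[S].sum(axis=0)

# Passive additive white Gaussian noise on every coordinate (Lemma 11).
sigma_noise = 5.0
noisy = phi_S + rng.normal(0.0, sigma_noise, size=d)

# Thresholding rule: declare membership if <phi(a), noisy> >= d/2 (here L^2 = d).
scores = codebook @ noisy
decoded = scores >= d / 2

print("exact recovery:", set(np.flatnonzero(decoded)) == set(S))
```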
## 4 Encoding Structures
We are often interested in representing more complex data types, such as
objects with multiple attributes or “features.” In general, we suppose that we
observe a set of features $\mathcal{F}$ whose values are assumed to lie in
some set $\mathcal{A}$. Let $\psi:\mathcal{F}\rightarrow\mathcal{H}$ be an
embedding of features, and $\phi:\mathcal{A}\rightarrow\mathcal{H}$ be an
embedding of values. We associate a feature with its value through use of the
_binding_ operator
$\otimes:\mathcal{H}\times\mathcal{H}\rightarrow\mathcal{H}$ that creates an
embedding for a (feature,value) pair. For a feature $f\in\mathcal{F}$ taking
on a value $a\in\mathcal{A}$, its embedding is constructed as
$\psi(f)\otimes\phi(a)$. A data point
$\mathbf{x}=\\{(f_{i}\in\mathcal{F},x_{i}\in\mathcal{A})\\}_{i=1}^{n}$
consists of $n$ such pairs. For simplicity, we assume each $\mathbf{x}$ possesses all
attributes, although our analysis also applies to the case that $\mathbf{x}$ possesses
only some subset of them. The entire embedding for $\mathbf{x}$ is
constructed as (?):
$\displaystyle\phi(\mathbf{x})=\bigoplus_{i=1}^{n}\psi(f_{i})\otimes\phi(x_{i})$
(2)
As with sets we would typically like $\phi(\mathbf{x})$ to be _decodable_ in
the sense that we can recover the value associated with a particular feature,
and _comparable_ in the sense that
$\langle\phi(\mathbf{x}),\phi(\mathbf{x}^{\prime})\rangle$ is reflective of a
reasonable notion of similarity between $\mathbf{x}$ and
$\mathbf{x}^{\prime}$.
From a formal perspective, we require the binding operator to satisfy several
properties. First, binding should be associative and commutative. That is, for
all $\mathbf{a},\mathbf{b},\mathbf{c}\in\mathcal{H}$,
$(\mathbf{a}\otimes\mathbf{b})\otimes\mathbf{c}=\mathbf{a}\otimes(\mathbf{b}\otimes\mathbf{c})$,
and $\mathbf{a}\otimes\mathbf{b}=\mathbf{b}\otimes\mathbf{a}$. Second, there
should exist an identity element $\mathbf{I}\in\mathcal{H}$ such that
$\mathbf{I}\otimes\mathbf{a}=\mathbf{a}$ for all $\mathbf{a}\in\mathcal{H}$.
Third, for all $\mathbf{a}\in\mathcal{H}$, there should exist some
$\mathbf{a}^{-1}\in\mathcal{H}$ such that
$\mathbf{a}\otimes\mathbf{a}^{-1}=\mathbf{I}$. These properties are equivalent
to stipulating that $\mathcal{H}$ be an abelian group under $\otimes$.
Furthermore, binding should distribute over bundling. That is, for any
$\mathbf{a},\mathbf{b},\mathbf{c}\in\mathcal{H}$, it should be the case that
$\mathbf{a}\otimes(\mathbf{b}+\mathbf{c})=\mathbf{a}\otimes\mathbf{b}+\mathbf{a}\otimes\mathbf{c}$.
We here also require that the lengths of bound pairs are bounded, that is to
say: $\max_{f\in\mathcal{F},a\in\mathcal{A}}\|\psi(f)\otimes\phi(a)\|_{2}\leq
M$.
A natural choice of embedding satisfying these properties is to sample
$\psi(f)$ randomly from $\\{\pm 1\\}^{d}$ and choose $\otimes$ to be the
element-wise product. In this case $\psi(f)$ is its own inverse, that is
$\psi(f)\otimes\psi(f)=\mathbf{I}$, and binding preserves lengths of
codewords. We focus on this case here as it is intuitive, but our analysis
generalizes in a straightforward way to any particular implementation
satisfying the properties listed above. One can see the bound pairs satisfy
various incoherence properties with high probability. For instance, we may
declare the binding to be $\mu$-incoherent if:
$\underset{a\in\mathcal{A}}{\max}\,\underset{a^{\prime}\in\mathcal{A},f\in\mathcal{F}}{\max}\,\langle\phi(a),\psi(f)\otimes\phi(a^{\prime})\rangle\leq\mu
L^{2}$
where $L=\min_{a\in\mathcal{A}}\|\phi(a)\|_{2}$. We can extend Theorem 5 to
see this property is satisfied with high probability:
###### Theorem 14
Fix $d,n,m\in\mathbb{Z}^{+}$ and $\mu\in\mathbb{R}^{+}$. Let $\phi$ be
centered and $\sigma$-sub-Gaussian, $\otimes$ be the element-wise product, and
$\psi(f)\sim\\{\pm 1\\}^{d}$. Then:
$\mathbb{P}(\exists\,a,a^{\prime}\in\mathcal{A},f\in\mathcal{F}\text{ s.t.
}|\langle\phi(a),\phi(a^{\prime})\otimes\psi(f)\rangle|\geq\mu L^{2})\leq
nm^{2}\exp\left(-\frac{\kappa\mu^{2}L^{2}}{2\sigma^{2}}\right)$
where $L=\min_{a\in\mathcal{A}}\|\phi(a)\|_{2}$ and $\kappa$ is as defined in
Theorem 5.
The proof is similar to Theorem 5 and is available in the Appendix. This
result is appealing because it means that the incoherence scales only
logarithmically with $m\times n$ which may be large in practice. As a
corollary to the previous theorem, we also obtain the following useful
incoherence property:
$\displaystyle\mathbb{P}(\exists\,a,a^{\prime},f\neq f^{\prime}\text{ s.t.
}|\langle\phi(a),(\phi(a^{\prime})\otimes\psi(f))\otimes\psi^{-1}(f^{\prime})\rangle|\geq\mu
L^{2})\leq m^{2}n^{2}\exp\left(-\frac{\kappa\mu^{2}L^{2}}{2\sigma^{2}}\right)$
(3)
where $\psi^{-1}(f)$ is the inverse of $\psi(f)$ with respect to $\otimes$.
This notion of incoherence is useful for decoding representations. Along
similar lines:
$\displaystyle\mathbb{P}(\exists\,a,a^{\prime},f\neq f^{\prime}\text{ s.t.
}|\langle\phi(a)\otimes\psi(f),\phi(a^{\prime})\otimes\psi(f^{\prime})\rangle|\geq\mu
L^{2})\leq m^{2}n^{2}\exp\left(-\frac{\kappa\mu^{2}L^{2}}{2\sigma^{2}}\right)$
(4)
We note that the previous statement refers to symbols associated with
different attributes and thus does not require any particular incoherence
assumption on the $\phi(a)$.
### 4.1 Decoding Structures
This representation can be decoded to recover the value associated with a
particular feature. To recover the value of the $i$-th feature, we use the
following rule:
$\displaystyle\hat{x}_{i}=\underset{a\in\mathcal{A}}{\text{argmax
}}\langle\phi(a),\phi(\mathbf{x})\otimes\psi^{-1}(f_{i})\rangle$
where $\psi^{-1}(f)$ denotes the group inverse of $\psi(f)$. Since the binding
operator is assumed to distribute over bundling, the dot-product above expands
to:
$\displaystyle\langle\phi(a),\phi(x_{i})\rangle+\sum_{j\neq
i}\langle\phi(a),(\phi(x_{j})\otimes\psi(f_{j}))\otimes\psi^{-1}(f_{i})\rangle$
$\displaystyle\begin{cases}\geq L^{2}(1-n\mu)&\text{ if }x_{i}=a\\\ \leq
nL^{2}\mu&\text{ otherwise }\end{cases}$
where the incoherence can be bounded as in Equation 3. Thus $\mu<1/(2n)$ is
a sufficient condition for decodability.
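A sketch of this decoding rule with bipolar codewords and element-wise-product binding, in which each $\psi(f)$ is its own inverse (the names value_book and feat_book and all parameter values are our own, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
d, m, n = 10_000, 100, 8
value_book = rng.choice([-1, 1], size=(m, d))     # phi: value codewords
feat_book = rng.choice([-1, 1], size=(n, d))      # psi: feature codewords

x = rng.integers(0, m, size=n)                    # a record: one value index per feature

# Encode: bundle the bound (feature, value) pairs (Equation 2).
phi_x = (feat_book * value_book[x]).sum(axis=0)

# Decode feature i: unbind with psi(f_i) (its own inverse here), then argmax.
i = 3
unbound = phi_x * feat_book[i]
x_hat = np.argmax(value_book @ unbound)
print("decoded value matches:", x_hat == x[i])
```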
### 4.2 Comparing Structures
As with sets, we may wish to compare two structures without decoding them. As
one would expect given Theorem 9, this can be achieved by computing the
dot-product between their encodings:
###### Theorem 15
Let $\mathbf{x}$ and $\mathbf{x}^{\prime}$ be two structures drawn from a
common alphabet $\mathcal{F}\times\mathcal{A}$. Denote their encodings using
Equation 2 by $\phi(\mathbf{x})$ and $\phi(\mathbf{x}^{\prime})$. Then, if
binding is $\mu$-incoherent:
$|\mathbf{x}\cap\mathbf{x}^{\prime}|-n^{2}\mu\leq\frac{1}{L^{2}}\langle\phi(\mathbf{x}),\phi(\mathbf{x}^{\prime})\rangle\leq|\mathbf{x}\cap\mathbf{x}^{\prime}|+n^{2}\mu$
where $\mathbf{x}\cap\mathbf{x}^{\prime}$ is defined to be the set
$\\{i\,:\,x_{i}=x_{i}^{\prime}\\}_{i=1}^{n}$, that is, the features on which
$\mathbf{x}$ and $\mathbf{x}^{\prime}$ agree.
Proof Expanding:
$\displaystyle\langle\phi(\mathbf{x}),\phi(\mathbf{x}^{\prime})\rangle=\langle\sum_{i=1}^{n}\phi(x_{i})\otimes\psi(f_{i}),\sum_{j=1}^{n}\phi(x_{j}^{\prime})\otimes\psi(f_{j})\rangle$
$\displaystyle=\sum_{i=1}^{n}\langle\phi(x_{i})\otimes\psi(f_{i}),\phi(x_{i}^{\prime})\otimes\psi(f_{i})\rangle+\sum_{i\neq
j}\langle\phi(x_{i})\otimes\psi(f_{i}),\phi(x_{j}^{\prime})\otimes\psi(f_{j})\rangle$
A term in the first sum is $L^{2}$ if $x_{i}=x_{i}^{\prime}$ and bounded in
$\pm L^{2}\mu$ otherwise. So the expression above is bounded as:
$\leq L^{2}|\mathbf{x}\cap\mathbf{x}^{\prime}|+L^{2}n^{2}\mu$
and the other direction of the inequality is analogous.
As a practical example, in bioinformatics it is common to search for regions
of high similarity between a “reference” and “query” genome. Work in (?) and
(?) explored the use of HD computing to accelerate this process by encoding short
segments of DNA and estimating similarity on the HD representations.
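As a minimal illustration of Theorem 15 (our own sketch, not the encoding used in the works cited above), the overlap estimate amounts to a single dot product between encodings:

```python
import numpy as np

rng = np.random.default_rng(4)
d, m, n = 10_000, 100, 8
value_book = rng.choice([-1, 1], size=(m, d))
feat_book = rng.choice([-1, 1], size=(n, d))

def encode(x):
    # Bundle the bound (feature, value) pairs (Equation 2).
    return (feat_book * value_book[x]).sum(axis=0)

x = rng.integers(0, m, size=n)
x_prime = x.copy()
x_prime[:3] = (x[:3] + 1) % m                     # change three feature values

# Theorem 15: <phi(x), phi(x')> / L^2 estimates the number of agreeing features.
overlap_hat = encode(x) @ encode(x_prime) / d     # L^2 = d for bipolar codewords
print("true overlap:", int((x == x_prime).sum()), " estimate:", round(overlap_hat, 2))
```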
### 4.3 Encoding Sequences
Sequences are an important form of structured data. In this case, the feature
set is simply the list of positions $\\{1,2,3,...\\}$ in the sequence. In
practical applications, we are often interested in streams of data which
arrive continuously over time. Typically, real-world processes do not exhibit
infinite memory and we only need to store the $n\geq 1$ most recent
observations at any time. In the streaming setting, we would like to avoid
needing to fully re-encode all $n$ data points each time we receive a new
sample, as would be the case using the method described above. This motivates
the use of shift based encoding schemes (?, ?, ?). Let
$\rho^{(i)}(\mathbf{z})$ denote a cyclic left-shift of the elements of
$\mathbf{z}$ by $i$ coordinates, and $\rho^{(-i)}(\mathbf{z})$ denote a cyclic
right-shift by $i$ coordinates. In other words:
$\rho^{(1)}((z_{1},z_{2},\ldots,z_{d-1},z_{d}))=(z_{2},z_{3},\ldots,z_{d},z_{1})$.
In shift-based encoding a sequence $\mathbf{x}=(x_{1},...,x_{n})$ is
represented as:
$\phi(\mathbf{x})=\bigoplus_{i=1}^{n}\rho^{(n-i)}(\phi(x_{i})),$
where we take $\oplus$ to be the element-wise sum. Now suppose we receive
symbol $n+1$ and wish to append it to $\phi(\mathbf{x})$ while removing
$\phi(x_{1})$. Then we may apply the rule:
$\rho^{(1)}(\phi(\mathbf{x})-\rho^{(n-1)}(\phi(x_{1})))\oplus\phi(x_{n+1})=\bigoplus_{i=1}^{n}\rho^{(n-i)}(\phi(x_{i+1}))$
where we can additionally note that $\rho$ is a special type of permutation
and that permutations distribute over sums. However, in order to decode
correctly, each $\phi(a)$ must satisfy an incoherence condition with the
$\rho^{(j)}(\phi(a^{\prime}))$. We can again use the randomly generated nature
of the codewords to argue this is the case; however, we must here impose the
additional restriction that the $\phi(a)$ be bounded, and accordingly restrict
attention to the case $\phi(a)\sim\\{\pm 1\\}^{d}$.
###### Theorem 16
Fix $d,m,n\in\mathbb{Z}^{+}$ with $n<d$, and $\mu\in\mathbb{R}^{+}$, and let
$\phi(a)\sim\\{\pm 1\\}^{d}$. Then:
$\mathbb{P}(\exists\,a,a^{\prime}\in\mathcal{A},i\neq 0\text{ s.t.
}|\langle\phi(a),\rho^{(i)}(\phi(a^{\prime}))\rangle|\geq\mu d)\leq
nm^{2}\exp\left(-\frac{\mu^{2}d}{4}\right)$
Proof Fix some $a,a^{\prime}$ and $i$. In the case that $a\neq a^{\prime}$,
$\phi(a)$ and $\rho^{(i)}(\phi(a))$ are mutually independent. However, when
$a=a^{\prime}$, $\phi(a)$ and $\rho^{(i)}(\phi(a))$ only satisfy pairwise
independence and the techniques of Theorem 5 cannot be applied. To resolve
this difficulty, let $f(\phi(a))=\langle\phi(a),\rho^{(i)}(\phi(a))\rangle$,
and denote by $\phi(a)^{\setminus k}$ the vector formed by replacing the
$k$-th coordinate in $\phi(a)$ with an arbitrary value $\in\\{+1,-1\\}$. Then
$|f(\phi(a))-f(\phi(a)^{\setminus k})|\leq 4$ and so by the bounded-differences inequality (?):
$\mathbb{P}(|\langle\phi(a),\rho^{(i)}(\phi(a^{\prime}))\rangle|\geq\mu d)\leq
2\exp\left(-\frac{\mu^{2}d}{4}\right).$
The result follows by the union bound.
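The streaming update above is easy to verify numerically; in the following sketch (our own, with arbitrary parameters) np.roll plays the role of the cyclic shift $\rho$:

```python
import numpy as np

rng = np.random.default_rng(5)
d, m, n = 10_000, 50, 6
codebook = rng.choice([-1, 1], size=(m, d))

def encode_window(seq):
    # phi(x) = sum_i rho^(n-i)(phi(x_i)); np.roll(v, -k) is a cyclic left-shift by k.
    return sum(np.roll(codebook[a], -(len(seq) - i - 1)) for i, a in enumerate(seq))

stream = rng.integers(0, m, size=20).tolist()
window = stream[:n]
phi = encode_window(window)

# On arrival of a new symbol: remove the oldest codeword, shift, and add the newest.
new_sym = stream[n]
phi = np.roll(phi - np.roll(codebook[window[0]], -(n - 1)), -1) + codebook[new_sym]
window = window[1:] + [new_sym]

print("incremental == re-encoded:", np.array_equal(phi, encode_window(window)))
```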
Several other related methods for encoding sequential information have been
proposed in the literature (?, ?). For an extensive discussion of these
approaches as well as an interesting discussion involving sequences of
infinite length, the reader is referred to (?).
### 4.4 Discussion and Comparison with Prior Work
We conclude our treatment of encoding and decoding discrete data with some
brief discussion of our approach and its relation to antecedents in the
literature. A key question addressed here and by several pieces of prior work
is to bound the magnitude of crosstalk noise in terms of the encoding
dimension ($d$), the number of items to encode ($s$) and the alphabet size
($m$). Early analysis in (?, ?, ?) recovers the same asymptotic relationship
as we do, but only under specific assumptions about the method used to
generate the codewords and particular instantiations of the bundling and
binding operators.
Work in (?) provides a significantly more general treatment which, like ours,
aims to abstract away from the particular choice of distribution from which
codewords are sampled and from the particular implementation of bundling and
binding operator. Their approach assumes the codewords are generated by
sampling each component i.i.d. from some distribution and uses the central
limit theorem (CLT) to justify modeling the crosstalk noise by a Gaussian
distribution. Error bounds in the non-asymptotic setting are then obtained by
applying a Chernoff style bound to the resulting Gaussian distribution. This
approach again recovers the same asymptotic relationship between $d,s$ and $m$
as us, but does not generally yield formal bounds in the non-asymptotic
setting. Our approach based on sub-Gaussianity formalizes this analysis in the
non-asymptotic setting. Like us, (?) also considers the effect of noise on the
HD representations, but their treatment is limited to additive white noise,
whereas we address both arbitrary additive passive noise and adversarial
noise.
In summary, our formalism using the notion of incoherence allows us to
decouple the analysis of decoding and noise-robustness from any particular
method for generating codewords and readily yields rigorous bounds in the non-asymptotic setting. Our approach is applicable to a large swath of HD computing and enables us to offer more general conditions under which thresholding-based decoding schemes will succeed, and a more general account of the effect of noise, than is available in prior work.
## 5 Encoding Euclidean Data
One option for encoding Euclidean vectors is to treat them as a special case
of the “structured data” considered in the preceding section. As before, we
think of our data as a collection of (feature,value) pairs
$\mathbf{x}=\\{(f_{i},x_{i})\\}_{i=1}^{n}$ with the important caveat that
$x_{i}\in\mathbb{R}$. This case is more complex because the feature values
may now be continuous, and because the data possesses geometric structure
which is typically relevant for downstream tasks and must be preserved by
encoding. We here analyze two of the most widely used methods for encoding
Euclidean data and discuss general properties of structure preserving
embeddings in the context of HD computing.
### 5.1 Position-ID Encoding
A widely-used method in practice is to quantize the raw signal to a suitably
low precision and then apply the structure encoding method discussed in the
previous section (?, ?, ?, ?).
In this approach, we first quantize the support of each feature
$f\in\mathcal{F}$ into some set of $m$ bins with centroids
$a_{1}<\cdots<a_{m}$ and assign each bin a codeword $\phi(a)\in\mathcal{H}$.
However, instead of requiring the codewords to be incoherent, we now require
the correlation between codewords to reflect the distance between
corresponding quantizer bins. In other words
$\langle\phi(a),\phi(a^{\prime})\rangle$ should be monotonically decreasing in
$|a-a^{\prime}|$.
A simple method can be used to generate monotonic codebooks when the codewords
are randomly sampled from $\\{\pm 1\\}^{d}$ (?, ?). Fixing some feature $f$,
the codeword for the minimal quantizer bin, $\phi(a_{1})$, is generated by
sampling randomly from $\\{\pm 1\\}^{d}$. To generate the codeword for the
second bin, we simply flip some set of $\lceil b\rceil$ bits in $\phi(a_{1})$,
where:
$b=\frac{a_{2}-a_{1}}{a_{m}-a_{1}}\cdot\frac{d}{2}$
The codeword for the third bin is generated analogously from the second, where
we assume the bits to be flipped are sampled such that a bit is flipped at
most once. Thus the codewords for the minimal and maximal bins are orthogonal
and the correlation between codewords for intermediate bins is monotonically
decreasing in the distance between their corresponding bin centroids.
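A sketch of this bit-flip construction (our own implementation of the scheme just described), assuming evenly spaced bins so that each step flips roughly $d/(2(m-1))$ fresh coordinates:

```python
import numpy as np

rng = np.random.default_rng(6)
d, m = 10_000, 64                       # encoding dimension, number of quantizer bins

# phi(a_1) is random; each subsequent codeword flips ~d/(2(m-1)) fresh coordinates,
# so phi(a_1) and phi(a_m) differ on d/2 coordinates (i.e., are orthogonal).
order = rng.permutation(d)              # ensures each coordinate flips at most once
flips_per_step = d // (2 * (m - 1))
codebook = np.empty((m, d), dtype=np.int32)
codebook[0] = rng.choice([-1, 1], size=d)
for j in range(1, m):
    codebook[j] = codebook[j - 1]
    idx = order[(j - 1) * flips_per_step : j * flips_per_step]
    codebook[j, idx] *= -1

# Correlation with phi(a_1) decreases monotonically with bin distance:
print([int(codebook[0] @ codebook[j]) for j in (0, 16, 32, 48, 63)])
```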
In practice, it seems to be typical to use a single codebook for all features
and for the quantizer to be a set of evenly spaced bins over the support of
the data. While simple, this approach is likely to have sub-optimal rate when
the features are on different scales or are far from the uniform distribution.
Encoding then proceeds as follows:
$\phi(\mathbf{x})=\sum_{i=1}^{n}\phi(x_{i})\otimes\psi(f_{i})$
where, as before, $\psi(f_{i})\in\\{\pm 1\\}^{d}$ is a codeword which encodes the index
$i$ of the feature value $x_{i}$, as in the previous section on encoding
sequences; hence the name “position-ID” encoding. There are several variations
on this theme which are compared empirically in (?).
This general encoding method was analyzed by (?), in the specific case of
sparse and binary codewords, who show it preserves the L1 distance between
points in expectation but do not provide distortion bounds. We here provide
such bounds using our formalism of matrix incoherence. We assume that the
underlying quantization of the points is sufficiently fine that it is a low-order term that can be ignored.
###### Theorem 17
Let $\mathbf{x}$ and $\mathbf{x}^{\prime}$ be points in $[0,1]^{n}$ with
encodings $\phi(\mathbf{x})$ and $\phi(\mathbf{x}^{\prime})$ generated using
the rule described above. Assume that $\phi$ satisfies
$\langle\phi(a),\phi(a^{\prime})\rangle=d(1-|a-a^{\prime}|)$ for all
$a,a^{\prime}\in\mathcal{A}$, and let $\psi\sim\\{\pm 1\\}^{d}$. Then, for all
$\mathbf{x},\mathbf{x}^{\prime}$:
$2d(\|\mathbf{x}-\mathbf{x}^{\prime}\|_{1}-2n^{2}\mu)\leq||\phi(\mathbf{x})-\phi(\mathbf{x}^{\prime})||^{2}_{2}\leq
2d(\|\mathbf{x}-\mathbf{x}^{\prime}\|_{1}+2n^{2}\mu)$
The proof is similar to Theorem 15 and is available in the Appendix.
The practical implication of the previous theorem is that the position-ID
encoding method preserves the L1 distance between points up to an additive
distortion which can be bounded by the incoherence of the codebook. Per
Equation 4, $\mu=O(\sqrt{\ln(mn)/d})$. Therefore, to ensure that
$\frac{1}{d}\|\phi(\mathbf{x})-\phi(\mathbf{x}^{\prime})\|_{2}^{2}\approx\|\mathbf{x}-\mathbf{x}^{\prime}\|_{1}\pm\epsilon$,
the previous result implies we should choose
$d=O(\frac{n^{4}}{\epsilon^{2}}\ln(nm))$. This can be relaxed to a quadratic
dependence on $n$ in exchange for a weaker pointwise bound, but in either case
means the encoding method may be problematic when the dimension of the
underlying data is high.
Noting that $||\phi(\mathbf{x})||_{2}^{2}$ lies in the range $nd\pm n^{2}d\mu$, we can see that
the encodings of each point are roughly of equal norm and lie in a ball of
radius at most $n\sqrt{d\mu}$, where the exact position depends on the
instantiation of the codebook. Thus, we can loosely interpret the encoding
procedure as mapping the data into a thin shell around the surface of a high
dimensional sphere.
### 5.2 Random Projection Encoding
Another popular family of encoding methods embeds the data into $\mathcal{H}$
under some random linear map followed by a quantization (?, ?). More formally,
for some $\mathbf{x}\in\mathbb{R}^{n}$, these embeddings take the form:
$\phi(\mathbf{x})=g(\mathbf{\Phi}\mathbf{x})$
where $\mathbf{\Phi}\in\mathbb{R}^{d\times n}$ is a matrix whose rows are
sampled uniformly at random from the surface of the $n$-dimensional unit
sphere, and $g$ is a quantizer — typically the sign function — restricting the
embedding to $\mathcal{H}$. The embedding matrix $\mathbf{\Phi}$ may also be
quantized to lower precision. This encoding method has also been studied in
the context of kernel approximation where it is used to approximate the
angular kernel (?), and to construct low-distortion binary embeddings (?, ?).
While the following result is well known, we here show this encoding method
preserves angular distance up to an additive distortion as this fact is
important for subsequent analysis.
###### Theorem 18
Let $\mathcal{S}^{n-1}\subset\mathbb{R}^{n}$ denote the $n$-dimensional unit
sphere. Let $\mathbf{\Phi}\in\mathbb{R}^{d\times n}$ be a matrix whose rows
are sampled uniformly at random from $\mathcal{S}^{n-1}$. Let $\mathcal{X}$ be
a set of points supported on $\mathcal{S}^{n-1}$. Denote the embedding of a
point by $\phi(\mathbf{x})=\text{\rm sign}(\mathbf{\Phi}\mathbf{x})$. Then,
for any $\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{X}$, with high probability:
$d\theta-O(\sqrt{d})\leq
d_{ham}(\phi(\mathbf{x}),\phi(\mathbf{x}^{\prime}))\leq d\theta+O(\sqrt{d})$
where $d_{ham}(a,b)$ is the Hamming distance between $a$ and $b$, defined to
be the number of coordinates on which $a$ and $b$ differ, and
$\theta=\frac{1}{\pi}\cos^{-1}(\langle\mathbf{x},\mathbf{x^{\prime}}\rangle)\in[0,1]$
is proportional to the angle between $\mathbf{x}$ and $\mathbf{x^{\prime}}$.
Proof Let $\mathbf{\Phi}^{(i)}$ denote the $i$th row of the matrix
$\mathbf{\Phi}$. Then, the $i$th coordinate in the embedding of $\mathbf{x}$
can be written as $\text{sign}(\langle\mathbf{\Phi}^{(i)},\mathbf{x}\rangle)$.
The probability that the embeddings differ on their $i$th coordinate, that is
$(\langle\mathbf{\Phi}^{(i)},\mathbf{x}\rangle)(\langle\mathbf{\Phi}^{(i)},\mathbf{x}^{\prime}\rangle)<0$,
is exactly $\angle(\mathbf{x},\mathbf{x}^{\prime})/\pi$: the angle (in
radians) between $\mathbf{x}$ and $\mathbf{x}^{\prime}$ divided by $\pi$.
Therefore, the number of coordinates on which $\phi(\mathbf{x})$ and
$\phi(\mathbf{x}^{\prime})$ disagree is concentrated in the range
$d(\theta\pm\epsilon)$. By Chernoff/Hoeffding, we have that with probability
$1-\delta$:
$d\epsilon\leq\sqrt{2d\ln\frac{2}{\delta}}.$
Noting that
$\langle\phi(\mathbf{x}),\phi(\mathbf{x}^{\prime})\rangle=d-2d_{ham}(\phi(\mathbf{x}),\phi(\mathbf{x}^{\prime}))$,
we obtain the following simple corollary:
###### Corollary 19
Let $\phi$ and $\theta$ be as defined in Theorem 18. Then, with high
probability:
$d(1-2\theta)-O(\sqrt{d})\leq\langle\phi(\mathbf{x}),\phi(\mathbf{x}^{\prime})\rangle\leq
d(1-2\theta)+O(\sqrt{d})$
To obtain a more explicit relationship with the dot product, we can use the
first-order approximation $\cos^{-1}(x)\approx(\pi/2)-x$, to obtain
$\theta\approx\frac{1}{2}-\frac{1}{\pi}\langle\mathbf{x},\mathbf{x}^{\prime}\rangle$,
from which we obtain:
$d(1-2\theta)\approx\frac{2d}{\pi}\langle\mathbf{x},\mathbf{x}^{\prime}\rangle.$
We emphasize that, in comparison to the position-ID method, the distortion in
this case does not depend on the dimension of the underlying data which means
this method may be preferable when the data dimension is large.
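A sketch checking the relationship of Theorem 18 empirically (our own illustration; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 32, 10_000

# Rows of Phi uniform on the unit sphere: normalize Gaussian draws.
Phi = rng.normal(size=(d, n))
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)

def encode(x):
    return np.sign(Phi @ x)

x = rng.normal(size=n); x /= np.linalg.norm(x)
y = rng.normal(size=n); y /= np.linalg.norm(y)

theta = np.arccos(np.clip(x @ y, -1, 1)) / np.pi   # normalized angle in [0, 1]
d_ham = np.count_nonzero(encode(x) != encode(y))
print(f"d*theta = {d * theta:.0f}, Hamming distance = {d_ham}")  # agree up to O(sqrt(d))
```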
#### 5.2.1 Connection with Kernel Approximation
A natural question is whether the encoding procedure described above, which
preserves dot-products, can be generalized to capture more diverse notions of
similarity. We can answer in the affirmative by noting that the random
projection encoding method is closely related to the notion of random Fourier
features which have been widely used for kernel approximation (?). The basic
idea is to construct an embedding
$\phi:\mathbb{R}^{n}\rightarrow\mathbb{R}^{d}$, such that
$\langle\phi(\mathbf{x}),\phi(\mathbf{x}^{\prime})\rangle\approx
k(\mathbf{x},\mathbf{x}^{\prime})$, where $k$ is a shift-invariant kernel. The
construction exploits the fact that the Fourier transform of a shift-invariant
kernel $k$ is a probability measure: a well known result from harmonic
analysis known as Bochner’s Theorem (?). The embedding itself is given by
$\phi(\mathbf{x})=\frac{1}{\sqrt{d}}\cos(\mathbf{\Phi}\mathbf{x}+\mathbf{b})$,
where the rows of $\mathbf{\Phi}$ are sampled from the distribution induced by
$k$ and the coordinates of $\mathbf{b}$ are sampled uniformly at random from
$[0,2\pi]$.
Subsequent work in (?) gave a simple scheme for quantizing the embeddings
produced from random Fourier features to binary precision. Their construction
yields an embedding $\psi:\mathbb{R}^{n}\rightarrow\\{0,1\\}^{d}$ such that:
$f_{1}(k(\mathbf{x},\mathbf{x}^{\prime}))-\Delta\leq\frac{1}{d}d_{ham}(\psi(\mathbf{x}),\psi(\mathbf{x}^{\prime}))\leq
f_{2}(k(\mathbf{x},\mathbf{x}^{\prime}))+\Delta$
where $f_{1},f_{2}:\mathbb{R}\rightarrow\mathbb{R}$ are independent of the
choice of kernel, and $\Delta$ is a distortion term. The embedding itself is
constructed by applying a quantizer $Q_{t}(x)=\text{sign}(x+t)$ coordinate
wise over the embeddings constructed from random Fourier features. In other
words $\psi(\mathbf{x})_{i}=\frac{1}{2}(1+Q_{t_{i}}(\phi(\mathbf{x})_{i}))$,
where $t_{i}\sim\text{Unif}[-1,1]$, and $\phi(\mathbf{x})$ is a random Fourier
feature.
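A sketch of the two-stage construction for the Gaussian kernel, whose spectral distribution is again Gaussian; we use the standard $\sqrt{2/d}$ normalization for the real-valued features, and the quantization thresholds follow the scheme just described (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
n, d, gamma = 16, 10_000, 0.5          # data dim, encoding dim, kernel bandwidth

# Random Fourier features for k(x, y) = exp(-gamma * ||x - y||^2 / 2):
# the spectral distribution is Gaussian, so rows of Phi ~ N(0, gamma * I).
Phi = rng.normal(scale=np.sqrt(gamma), size=(d, n))
b = rng.uniform(0, 2 * np.pi, size=d)

def raw(x):
    return np.cos(Phi @ x + b)                   # coordinates in [-1, 1]

def rff(x):
    return np.sqrt(2.0 / d) * raw(x)             # <rff(x), rff(y)> ~ k(x, y)

# Binary quantization: psi(x)_i = (1 + sign(raw(x)_i + t_i)) / 2, t_i ~ Unif[-1, 1].
t = rng.uniform(-1, 1, size=d)

def rff_binary(x):
    return (np.sign(raw(x) + t) + 1) / 2

x = rng.normal(size=n)
y = x + 0.3 * rng.normal(size=n)                 # a nearby point
k_true = np.exp(-gamma * np.sum((x - y) ** 2) / 2)
print(f"k(x,y) = {k_true:.3f}, RFF estimate = {rff(x) @ rff(y):.3f}, "
      f"binary Hamming = {np.mean(rff_binary(x) != rff_binary(y)):.3f}")
```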
This connection is highly appealing for HD computing. The quantized random
Fourier feature scheme presents a simple recipe for constructing encoding
methods meeting the desiderata of HD computing while preserving a rich variety
of structure in data. For instance, shift-invariant kernels preserving the L1
and L2 distance—among many others—can be approximated using the method
discussed above. Furthermore, this observation provides a natural point of
contact between HD computing and the vast literature on kernel methods which
has produced a wealth of algorithmic and theoretical insights.
### 5.3 Consequences of Distance Preservation
The encoding methods discussed above are both appealing because they preserve
reasonable notions of distance between points in the original data. Distance
preservation is a sufficient condition to establish other desirable properties
of encodings, namely preservation of neighborhood/cluster structure,
robustness to various forms of noise, and in some cases, preservation of
linear separability. We address the first two items here and defer the latter
for our discussion of learning on HD representations. We formalize our notion
of distance preservation as follows:
###### Definition 20
Distance-Preserving Embedding: Let $\delta_{\mathcal{X}}$ be a distance
function on $\mathcal{X}\subset\mathbb{R}^{n}$ and $\delta_{H}$ be a distance
function on $\mathcal{H}$. We say $\phi$ preserves $\delta_{\mathcal{X}}$
under $\delta_{H}$ if, there exist functions
$\alpha,\beta:\mathbb{Z}^{+}\rightarrow\mathbb{R}$ such that
$\beta(d)/\alpha(d)\rightarrow 0$ as $d\to\infty$, and:
$\displaystyle\alpha(d)\delta_{\mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime})-\beta(d)\leq\delta_{\mathcal{H}}(\phi(\mathbf{x}),\phi(\mathbf{x}^{\prime}))\leq\alpha(d)\delta_{\mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime})+\beta(d)$
(5)
for all $\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{X}$.
We typically wish the distance function $\delta_{\mathcal{H}}$ on
$\mathcal{H}$ to be simple to compute. In practice, it is often taken to be
the Euclidean, Hamming, or angular distance. The position-ID method preserves
the L1 distance with $\delta_{H}$ the squared Euclidean distance,
$\alpha(d)=2d$, and $\beta(d)\leq n^{2}\mu d$; recall that in the
constructions above, $\mu$ scales as $1/\sqrt{d}$ and thus
$\beta(d)/\alpha(d)\to 0$. The signed random-projection method preserves the
angular distance with $\alpha(d)=O(d)$, $\beta(d)=O(\sqrt{d})$, and
$\delta_{H}$ the Hamming, angular, or Euclidean distance.
#### 5.3.1 Preservation of Cluster Structure
In general, there is no universally applicable definition of cluster
structure. Indeed, numerous algorithms have been proposed in the literature to
target various reasonable notions of what constitutes a “cluster” in the data.
Preservation of a distance function accords naturally with K-means like
algorithms which, given a set of data $\mathcal{X}\subset\mathbb{R}^{n}$
compute a set of centroids $\mathcal{C}=\\{\mathbf{c}_{i}\\}_{i=1}^{k}$, and
define associated clusters as the Voronoi cells associated with each centroid.
We here adopt this notion and state that cluster structure $\mathcal{C}$ is
preserved if, for any $\mathbf{x}\in\mathcal{X}$:
$\underset{\mathbf{c}\in\mathcal{C}}{\text{argmin
}}\delta_{\mathcal{X}}(\mathbf{x},\mathbf{c})=\underset{\mathbf{c}\in\mathcal{C}}{\text{argmin
}}\delta_{\mathcal{H}}(\phi(\mathbf{x}),\phi(\mathbf{c}))$
In other words, that the set of points bound to a particular cluster centroid
does not change under the encoding. We can restate the above as requiring
that, for some point $\mathbf{x}$ bound to a cluster centroid $\mathbf{c}$, it
is the case that:
$\delta_{\mathcal{H}}(\phi(\mathbf{x}),\phi(\mathbf{c}))<\delta_{\mathcal{H}}(\phi(\mathbf{x}),\phi(\mathbf{c^{\prime}}))$
for any $\mathbf{c}^{\prime}\in\mathcal{C}\setminus\\{\mathbf{c}\\}$. From
Definition 20 we have:
$\delta_{\mathcal{H}}(\phi(\mathbf{x}),\phi(\mathbf{c^{\prime}}))-\delta_{\mathcal{H}}(\phi(\mathbf{x}),\phi(\mathbf{c}))\geq\alpha(d)(\delta_{\mathcal{X}}(\mathbf{x},\mathbf{c^{\prime}})-\delta_{\mathcal{X}}(\mathbf{x},\mathbf{c}))-2\beta(d)$
for any $\mathbf{x}\in\mathcal{X}$ and
$\mathbf{c},\mathbf{c}^{\prime}\in\mathcal{C}$. Rearranging the expressions
above we can see the desired property will be satisfied if:
$\frac{\beta(d)}{\alpha(d)}<\underset{\mathbf{x}\in\mathcal{X}}{\min}\,\underset{\mathbf{c}^{\prime}\neq\mathbf{c}(\mathbf{x})}{\min}\,\frac{1}{2}(\delta_{\mathcal{X}}(\mathbf{x},\mathbf{c}^{\prime})-\delta_{\mathcal{X}}(\mathbf{x},\mathbf{c}(\mathbf{x}))),$
where
$\mathbf{c}(\mathbf{x})={\text{argmin}}_{\mathbf{c}\in\mathcal{C}}\delta_{\mathcal{X}}(\mathbf{x},\mathbf{c})$
denotes the center in $\mathcal{C}$ closest to $\mathbf{x}$. A sufficient
condition for the existence of some $d$ satisfying this property is that
$\alpha(d)$ is monotone increasing and grows faster than
$\beta(d)$. This condition is satisfied for both the random projection and
position-ID encoding methods.
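A sketch checking this condition empirically with the sign-of-random-projection encoding of Section 5.2 (our own illustration; any distance-preserving $\phi$ would serve): nearest-centroid assignments are computed before and after encoding and compared.

```python
import numpy as np

rng = np.random.default_rng(9)
n, d, k, N = 16, 10_000, 5, 200

# Well-separated clusters on the unit sphere.
centroids = rng.normal(size=(k, n))
centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)
labels = rng.integers(0, k, size=N)
X = centroids[labels] + 0.15 * rng.normal(size=(N, n))
X /= np.linalg.norm(X, axis=1, keepdims=True)

Phi = rng.normal(size=(d, n))
encode = lambda V: np.sign(V @ Phi.T)

# Nearest centroid in the input space (angular distance) ...
assign_in = np.argmax(X @ centroids.T, axis=1)
# ... and in the encoded space (min Hamming = max dot product for +/-1 vectors).
assign_hd = np.argmax(encode(X) @ encode(centroids).T, axis=1)
print("fraction of assignments preserved:", np.mean(assign_in == assign_hd))
```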
#### 5.3.2 Noise Robustness
It is also of interest to consider robustness to noise in the context of
encoding Euclidean data. Suppose we have a set of points, $\mathcal{X}$, in
$\mathbb{R}^{n}$, and a distance function of interest
$\delta_{\mathcal{X}}(\cdot,\cdot)$ which is preserved _à la_ Definition 20.
Given an arbitrary point $\mathbf{x}\in\mathcal{X}$ we consider a noise model
which corrupts $\phi(\mathbf{x})$ to $\phi(\mathbf{x})+\Delta$, where $\Delta$
is some unspecified noise process. Along the lines of Section 3.2, we say
$\Delta$ is $\rho$-bounded if:
$\underset{\mathbf{x}\in\mathcal{X}}{\max}\,|\langle\phi(\mathbf{x}),\Delta\rangle|\leq\rho$
Suppose we wish to ensure the encodings can distinguish between all points at
a distance $\leq\epsilon_{1}$ from $\mathbf{x}$ and all points at a distance
$\geq\epsilon_{2}$. That is:
$\|\phi(\mathbf{x})+\Delta-\phi(\mathbf{x}^{\prime})\|<\|\phi(\mathbf{x})+\Delta-\phi(\mathbf{x}^{\prime\prime})\|$
for all $\mathbf{x}^{\prime}\in\mathcal{X}$ such that
$\delta_{\mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime})\leq\epsilon_{1}$ and all
$\mathbf{x}^{\prime\prime}\in\mathcal{X}$ such that
$\delta_{\mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime\prime})\geq\epsilon_{2}$. We say
that such an encoding is $(\epsilon_{1},\epsilon_{2})$-robust.
###### Theorem 21
Let $\delta_{\mathcal{X}}$ be a distance function on
$\mathcal{X}\subset\mathbb{R}^{n}$ and suppose $\phi$ is an embedding
preserving $\delta_{\mathcal{X}}$ under the squared Euclidean distance on
$\mathcal{H}$ as described in Definition 20. Suppose $\Delta$ is
$\rho$-bounded noise. Then $\phi$ is $(\epsilon_{1},\epsilon_{2})$-robust if:
$\rho<\frac{\alpha(d)}{4}(\epsilon_{2}-\epsilon_{1})-\frac{\beta(d)}{2}.$
Proof Fix a point $\mathbf{x}$ whose encoding is corrupted as
$\phi(\mathbf{x})+\Delta$. Then for any
$\mathbf{x}^{\prime},\mathbf{x}^{\prime\prime}\in\mathcal{X}$ with
$\delta_{\mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime})\leq\epsilon_{1}$ and
$\delta_{\mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime\prime})\geq\epsilon_{2}$,
we have:
$\displaystyle\|\phi(\mathbf{x})+\Delta-\phi(\mathbf{x}^{\prime\prime})\|_{2}^{2}-\|\phi(\mathbf{x})+\Delta-\phi(\mathbf{x}^{\prime})\|_{2}^{2}$
$\displaystyle=\|\phi(\mathbf{x})-\phi(\mathbf{x}^{\prime\prime})\|_{2}^{2}-\|\phi(\mathbf{x})-\phi(\mathbf{x}^{\prime})\|_{2}^{2}-2\langle\phi(\mathbf{x}^{\prime\prime}),\Delta\rangle+2\langle\phi(\mathbf{x}^{\prime}),\Delta\rangle$
$\displaystyle\geq\alpha(d)\delta_{\mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime\prime})-\beta(d)-\alpha(d)\delta_{\mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime})-\beta(d)-4\rho$
$\displaystyle\geq\alpha(d)(\epsilon_{2}-\epsilon_{1})-2\beta(d)-4\rho\ >\ 0,$
as desired.
As before, we may consider passive and adversarial examples.
Additive White Gaussian Noise. First consider the case that
$\mathcal{H}=\mathbb{R}^{d}$ and
$\Delta\sim\mathcal{N}(0,\sigma_{\Delta}^{2}\mathbf{I}_{d})$; that is, each
coordinate of $\Delta$ has a Gaussian distribution with mean zero and variance
$\sigma_{\Delta}^{2}$. Then, as before, we can note that
$\langle\phi(\mathbf{x}),\Delta\rangle\sim\mathcal{N}(0,\sigma_{\Delta}^{2}\|\phi(\mathbf{x})\|_{2}^{2})$.
Then, it is very likely (four standard deviations in the tail of the normal
distribution) that $\rho<4L\sigma_{\Delta}$, where
$L=\max_{\mathbf{x}\in\mathcal{X}}\|\phi(\mathbf{x})\|_{2}$. So then, we have
the desired robustness property if:
$\sigma_{\Delta}<\frac{\alpha(d)}{16L}(\epsilon_{2}-\epsilon_{1})-\frac{\beta(d)}{8L}$
Assuming that $\alpha(d)$ is faster growing in $d$ than $L$ and $\beta(d)$,
there will exist some encoding dimension for which we can tolerate any given
level of noise. In the case of the random projection encoding scheme described
above, $\alpha(d)=O(d)$, $\beta(d)=O(\sqrt{d})$, and $L=\sqrt{d}$ exactly, and so we can tolerate noise on the order of:
$\sigma_{\Delta}\approx\sqrt{d}\,(\epsilon_{2}-\epsilon_{1})-O(1)$
For the position-ID encoding method, $\alpha(d)=O(d)$, $L=O(\sqrt{nd})$ and
$\beta(d)=O(n^{2}d\mu)$, and so we can tolerate noise:
$\sigma_{\Delta}\approx\sqrt{\frac{d}{n}}((\epsilon_{2}-\epsilon_{1})-O(n^{2}\mu))$
Adversarial Noise. We now consider the case that $\mathcal{H}=\\{\pm 1\\}^{d}$, as
in the random-projection encoding method, and $\Delta$ is noise in which some
fraction $\omega\cdot d$ of coordinates in $\phi(\mathbf{x})$ are maliciously
corrupted by an adversary. Since $\|\Delta\|_{1}\leq\omega d$, we have, for
any $\mathbf{x}\in\mathcal{X}$:
$|\langle\phi(\mathbf{x}),\Delta\rangle|\leq\|\phi(\mathbf{x})\|_{\infty}\|\Delta\|_{1}\leq\omega
d$
So then we can tolerate $\omega$ on the order of:
$\omega<\frac{\alpha(d)}{4d}(\epsilon_{2}-\epsilon_{1})-\frac{\beta(d)}{2d}$
In the case of the random-projection encoding method this boils down to:
$\omega\approx(\epsilon_{2}-\epsilon_{1})-\frac{1}{\sqrt{d}},$
meaning the total number of coordinates that can be corrupted is
$O(d(\epsilon_{2}-\epsilon_{1}))$.
Robustness to Input Noise. A natural question is whether the HD
representations also confer any robustness to noise in the input space
$\mathcal{X}$ rather than the HD space $\mathcal{H}$. In general, preservation
of distance does not imply any particular robustness to input noise and the
answer to this question depends on the particulars of the encoding method in
question. Since a general treatment is difficult to give, we will not pursue
this matter in depth at present.
## 6 Learning on HD Data Representations
We now turn to the question of using HD representations in learning
algorithms. Our goal is to clarify in what precise sense the HD encoding
process can make learning easier. We study two ways in which this can happen:
the encoding process can increase the separation between classes and/or can
induce sparsity. Both of these characteristics can be exploited by neurally
plausible algorithms to simplify learning. Throughout this discussion, we
assume access to a set of $N$ labelled examples
$\mathcal{S}=\\{(\mathbf{x}_{i},y_{i})\\}_{i=1}^{N}$, where $\mathbf{x}_{i}$
lies in $[0,1]^{n}$ and $y_{i}\in\mathcal{C}$ is a categorical variable
indicating the class label. In general, we are interested in the case that
training examples arrive in a streaming, or online, fashion, although our
conclusions apply to fixed and finite data as well.
### 6.1 Learning by Bundling
The simplest approach to learning with HD representations is to bundle
together the training examples corresponding to each class into a set of
exemplars—often referred to as “prototypes”—which are then used for
classification (?, ?, ?). More formally, as described in Section 2, we
construct the prototype $\mathbf{c}_{k}$ for the k-th class as:
$\mathbf{c}_{k}=\bigoplus_{i\text{ s.t. }y_{i}=k}\phi(\mathbf{x}_{i})$
and then assign a class label for some “query” point $\mathbf{x}_{q}$ as:
$\displaystyle\hat{y}=\underset{k\in\mathcal{C}}{\text{argmax}}\,\frac{\langle\mathbf{c}_{k},\phi(\mathbf{x}_{q})\rangle}{||\mathbf{c}_{k}||}$
(6)
This approach bears a strong resemblance to naive Bayes and Fisher’s linear
discriminant, which are both classic simple statistical procedures for
classification (?). Like these methods, the bundling approach is appealing due
to its simplicity. However, it also shares their weaknesses in that it may
fail to separate data that is in fact linearly separable.
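A sketch of the prototype method (our own illustration on synthetic Gaussian classes, with the random-projection encoder of Section 5.2 standing in for $\phi$; all parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(10)
n, d, k = 16, 10_000, 3
Phi = rng.normal(size=(d, n))
encode = lambda V: np.sign(V @ Phi.T)          # one choice of encoder (Section 5.2)

# Toy data: three well-separated Gaussian classes.
means = 3.0 * rng.normal(size=(k, n))
y_train = rng.integers(0, k, size=300)
X_train = means[y_train] + rng.normal(size=(300, n))

# Build one prototype per class by bundling (summing) the encoded examples.
H = encode(X_train)
prototypes = np.stack([H[y_train == c].sum(axis=0) for c in range(k)])

# Classify by normalized dot product with each prototype (Equation 6).
y_test = rng.integers(0, k, size=200)
X_test = means[y_test] + rng.normal(size=(200, n))
scores = encode(X_test) @ prototypes.T / np.linalg.norm(prototypes, axis=1)
print("accuracy:", np.mean(np.argmax(scores, axis=1) == y_test))
```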
### 6.2 Learning Arbitrary Linear Separators
Linear separability is one of the most basic types of structure that can aid
learning. The theory of linear models is well developed and several simple,
neurally plausible, algorithms for learning linear separators are known, for
instance, the Perceptron and Winnow (?, ?). Thus, if our data is linearly
separable in low-dimensional space we would like it to remain so after
encoding, so that these methods can be applied. We now show formally that
preservation of distance is sufficient, under some conditions, to preserve
linear separability.
###### Theorem 22
Let $\mathcal{X}$ and $\mathcal{X}^{\prime}$ be two disjoint, closed, and
convex sets of points in $\mathbb{R}^{n}$. Let $\mathbf{p}\in\mathcal{X}$ and
$\mathbf{q}\in\mathcal{X}^{\prime}$ be the closest pair of points between the
two sets. Suppose $\phi$ preserves L2 distance on $\mathcal{X}$ under the L2
distance on $\mathcal{H}$ in the sense of Definition 20. Then, the function
$f(\mathbf{x})=\langle\phi(\mathbf{x}),\phi(\mathbf{p})-\phi(\mathbf{q})\rangle-\frac{1}{2}(||\phi(\mathbf{p})||_{2}^{2}-||\phi(\mathbf{q})||_{2}^{2})$
is positive for all $\mathbf{x}\in\mathcal{X}$ and negative for all
$\mathbf{x}^{\prime}\in\mathcal{X}^{\prime}$ provided:
$\displaystyle\frac{\beta(d)}{\alpha(d)}<\frac{1}{2}\|\mathbf{p}-\mathbf{q}\|_{2}^{2}.$
Proof We first observe:
$\displaystyle\langle\phi(\mathbf{x}),\phi(\mathbf{p})-\phi(\mathbf{q})\rangle-\frac{1}{2}\left(\|\phi(\mathbf{p})\|_{2}^{2}-\|\phi(\mathbf{q})\|_{2}^{2}\right)=\frac{1}{2}\|\phi(\mathbf{x})-\phi(\mathbf{q})\|_{2}^{2}-\frac{1}{2}\|\phi(\mathbf{x})-\phi(\mathbf{p})\|_{2}^{2}.$
We may then use Definition 20 to obtain:
$\displaystyle f(\mathbf{x})$
$\displaystyle=\frac{1}{2}\|\phi(\mathbf{x})-\phi(\mathbf{q})\|_{2}^{2}-\frac{1}{2}\|\phi(\mathbf{x})-\phi(\mathbf{p})\|_{2}^{2}$
$\displaystyle\geq\frac{\alpha(d)}{2}\|\mathbf{x}-\mathbf{q}\|_{2}^{2}-\frac{\alpha(d)}{2}\|\mathbf{x}-\mathbf{p}\|_{2}^{2}-\beta(d)$
$\displaystyle=\alpha(d)\left(\langle\mathbf{x},\mathbf{p}-\mathbf{q}\rangle-\frac{1}{2}\left(\|\mathbf{p}\|_{2}^{2}-\|\mathbf{q}\|_{2}^{2}\right)\right)-\beta(d).$
By a standard proof of the hyperplane separation theorem (e.g., Section 2.5.1
of (?)),
$\langle\mathbf{x},\mathbf{p}-\mathbf{q}\rangle-\frac{1}{2}(\|\mathbf{p}\|_{2}^{2}-\|\mathbf{q}\|_{2}^{2})\geq\frac{1}{2}\|\mathbf{p}-\mathbf{q}\|_{2}^{2}$
for any $\mathbf{x}\in\mathcal{X}$, and thus $f(\mathbf{x})>0$ if
$\displaystyle\frac{\beta(d)}{\alpha(d)}<\frac{1}{2}\|\mathbf{p}-\mathbf{q}\|_{2}^{2}.$
The proof for $\mathbf{x}^{\prime}\in\mathcal{X}^{\prime}$ is analogous.
A natural question is whether a linear separator on the HD representation can
capture a _nonlinear_ decision boundary on the original data. The connection
with kernel methods discussed in Section 5.2.1 presents one avenue for
rigorously addressing this question. As noted there, the encoding function can
sometimes be interpreted as approximating the feature map of a kernel, which
in turn can be used to linearize learning problems in some settings (?).
However, a thorough examination of this question is beyond the scope of the
present work.
#### 6.2.1 Learning Sparse Classifiers on Random Projection Encodings
The random projection encoding method can be seen to lead to representations
that are _sparse_ in the sense that a subset of just $k\ll d$ coordinates
suffice for determining the class label. This setting accords naturally with
the Winnow algorithm (?) which is known to make on the order of $k\log d$
mistakes when the target is a linear function of $k\leq d$
variables. This can offer substantially faster convergence than the Perceptron
when the margin is small. Curiously, while the Perceptron algorithm is
commonly used in the HD community, we are unaware of any work using Winnow for
learning.
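For reference, a sketch of the balanced (two-sided) Winnow update run on linearly projected encodings $\phi(\mathbf{x})=\mathbf{\Phi}\mathbf{x}$ (our own illustration; the dimensions are far below the requirement of Theorem 23 below, so this run relies on ordinary linear separability of the encodings rather than $k$-sparse separability, and all parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(11)
n, d = 16, 4096
Phi = rng.normal(size=(d, n))
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)   # rows on the unit sphere

# A margin-separated stream of labeled points on the unit sphere.
w_true = rng.normal(size=n); w_true /= np.linalg.norm(w_true)
X = rng.normal(size=(20_000, n))
X /= np.linalg.norm(X, axis=1, keepdims=True)
X = X[np.abs(X @ w_true) >= 0.3]                    # enforce a margin gamma = 0.3
y = np.sign(X @ w_true)
H = X @ Phi.T                                        # linear encoding phi(x) = Phi x

# Balanced Winnow: positive/negative weights, multiplicative updates on mistakes.
alpha, wp, wn = 1.1, np.ones(d), np.ones(d)
mistakes = 0
for h, label in zip(H, y):
    if np.sign((wp - wn) @ h) != label:
        mistakes += 1
        wp *= alpha ** (label * h)    # promote coordinates correlated with the label
        wn *= alpha ** (-label * h)   # and demote the rest
print("mistakes:", mistakes, "of", len(y))
```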
###### Theorem 23
Let $\mathcal{X}$ and $\mathcal{X}^{\prime}$ be two sets of points supported
on the $n$-dimensional unit sphere and separated by a unit-norm hyperplane
$\mathbf{w}$ with margin
$\gamma=\min_{\mathbf{x}\in\mathcal{X}}|\langle\mathbf{x},\mathbf{w}\rangle|$.
Let $\mathbf{\Phi}\in\mathbb{R}^{d\times n}$ be a matrix whose rows are
sampled from the uniform distribution over the $n$-dimensional unit-sphere.
Define the encoding of a point $\mathbf{x}$ by
$\phi(\mathbf{x})=\mathbf{\Phi}\mathbf{x}$. With high probability,
$\mathcal{X}$ and $\mathcal{X}^{\prime}$ are linearly separable using just $k$
coordinates in the encoded space, provided:
$d=\Omega\left(k\exp\left(\frac{n}{2k\gamma^{2}}\right)\right).$
To prove the theorem we first use the following simple Lemma:
###### Lemma 24
Suppose there exists a row $\mathbf{\Phi}^{(i)}$ of the projection matrix such
that $\langle\mathbf{\Phi}^{(i)},\mathbf{w}\rangle>1-\gamma^{2}/2$. Then
$\langle\mathbf{\Phi}^{(i)},\mathbf{x}\rangle$ is positive for any
$\mathbf{x}\in\mathcal{X}$ and negative for any
$\mathbf{x}\in\mathcal{X}^{\prime}$.
Proof The constraint on the dot product of $\mathbf{\Phi}^{(i)}$ and
$\mathbf{w}$ implies
$\|\mathbf{\Phi}^{(i)}-\mathbf{w}\|^{2}=\|\mathbf{\Phi}^{(i)}\|^{2}+\|\mathbf{w}\|^{2}-2\langle\mathbf{\Phi}^{(i)},\mathbf{w}\rangle<\gamma^{2}$.
Thus for any $\mathbf{x}\in\mathcal{X}$,
$\langle\mathbf{\Phi}^{(i)},\mathbf{x}\rangle=\langle\mathbf{w},\mathbf{x}\rangle+\langle\mathbf{\Phi}^{(i)}-\mathbf{w},\mathbf{x}\rangle\geq\gamma+\langle\mathbf{\Phi}^{(i)}-\mathbf{w},\mathbf{x}\rangle\geq\gamma-\|\mathbf{\Phi}^{(i)}-\mathbf{w}\|>0.$
A similar argument shows that $\langle\mathbf{\Phi}^{(i)},\mathbf{x}\rangle$
is negative on $\mathcal{X}^{\prime}$.
Unfortunately, the probability of randomly sampling such a direction is tiny,
on the order of $\gamma^{n}$. However, we might instead hope to sample $k$
vectors that are weakly correlated with $\mathbf{w}$ and exploit their
cumulative effect on $\mathbf{x}$. We say a vector
$\mathbf{u}\in\mathbb{R}^{n}$ is $\rho$-correlated with $\mathbf{w}$ if
$\langle\mathbf{u},\mathbf{w}\rangle\geq\rho$. We are now in a position to
prove the theorem.
Proof For $\mathbf{w}\in\mathcal{S}^{n-1}$ and $\rho\in(0,1)$, let
$\mathcal{C}=\\{\mathbf{u}\in
S^{n-1}:\langle\mathbf{u},\mathbf{w}\rangle\geq\rho\\}$ denote the spherical
cap of vectors $\rho$-correlated with $\mathbf{w}$. Suppose we pick vectors
$\mathbf{u}^{(1)},\ldots,\mathbf{u}^{(k)}$ uniformly at random from
$\mathcal{C}$. Then, with probability at least $1/2$:
$\displaystyle\frac{\langle\sum_{j}\mathbf{u}^{(j)},\mathbf{w}\rangle}{\|\sum_{j}\mathbf{u}^{(j)}\|_{2}}\geq
1-\frac{1}{2k\rho^{2}}$ (7)
To see this, note that without loss of generality we may assume
$\mathbf{w}=\mathbf{e}_{1}$, the first standard basis vector of
$\mathbb{R}^{n}$, and write any $\mathbf{u}\in\mathbb{R}^{n}$ as
$\mathbf{u}=(u_{1},\mathbf{u}_{R})$: the first coordinate and the remaining
$n-1$ coordinates. Now, let
$N=\langle\sum_{j}\mathbf{u}^{(j)},\mathbf{w}\rangle=\sum_{j}\mathbf{u}^{(j)}_{1}\geq
k\rho$. Then:
$\displaystyle\bigg{\|}\sum_{j}\mathbf{u}^{(j)}\bigg{\|}_{2}^{2}$
$\displaystyle=\left(\sum_{j}\mathbf{u}_{1}^{(j)}\right)^{2}+\bigg{\|}\sum_{j}\mathbf{u}_{R}^{(j)}\bigg{\|}_{2}^{2}$
$\displaystyle=N^{2}+\sum_{j}\|\mathbf{u}_{R}^{(j)}\|_{2}^{2}+\sum_{i\neq
j}\langle\mathbf{u}_{R}^{(i)},\mathbf{u}_{R}^{(j)}\rangle$ $\displaystyle\leq
N^{2}+k+\sum_{i\neq
j}\langle\mathbf{u}_{R}^{(i)},\mathbf{u}_{R}^{(j)}\rangle.$
The last term has a symmetric distribution around zero over random samplings
of the $\mathbf{u}^{(j)}$. Thus, with probability $\geq 1/2$, it is $\leq 0$,
whereupon
$\frac{\langle\sum_{j}\mathbf{u}^{(j)},\mathbf{w}\rangle}{\|\sum_{j}\mathbf{u}^{(j)}\|_{2}}\geq\frac{N}{\sqrt{N^{2}+k}}\geq
1-\frac{k}{2N^{2}}\geq 1-\frac{1}{2k\rho^{2}}.$
To ensure the quantity above is at least $1-\gamma^{2}/2$, we must have:
$\rho^{2}\geq\frac{1}{k\gamma^{2}}.$
It now remains to compute the probability that a vector $\mathbf{\Phi}^{(i)}$
sampled uniformly from $\mathcal{S}^{n-1}$ lies in $\mathcal{C}$, or
equivalently, that $\mathbf{\Phi}_{1}^{(i)}\geq\rho$. Noting that we may
simulate a random direction on $\mathcal{S}^{n-1}$ by sampling
$\mathbf{z}\sim\mathcal{N}(0,\mathbf{I}_{n})$ and normalizing, we obtain the
reasonable approximation: $\mathbf{\Phi}^{(i)}_{1}\sim\mathcal{N}(0,1/n)$.
Therefore, the probability that $\mathbf{\Phi}_{1}^{(i)}\geq\rho$ is on the
order of $e^{-n\rho^{2}/2}$. So we need:
$d=\Omega\left(k\exp\left(\frac{n}{2k\gamma^{2}}\right)\right)$
In summary, the random projection method in tandem with the Winnow algorithm
seems to be well suited to the HD setting, where sparsity can be exploited to
simplify learning.
## 7 Conclusion
To conclude, we lay out several research directions related to HD computing we
believe it would be of particular interest to further explore. There are
several interesting open problems related to encoding. Our analysis
established preservation of only the most basic forms of structure in data.
Can encoding procedures satisfying the desiderata of HD computing be designed
that capture other forms of structure? The quantized random Fourier feature
construction discussed in Section 5 presents one such option, but is only
applicable to structure that can be captured using a shift-invariant kernel on
a Euclidean space. For instance, can we devise encoding methods that exploit
low-dimensional manifold structure in the data or which are adaptive and can
be learned from a particular data set?
Several recent works have claimed, based on empirical evidence, that HD
computing evinces one-shot learning (?, ?, ?) in which a single labeled
example is needed to learn a generalizable classifier (?, ?). However, this
work has focused on settings in which specialized hand-crafted features could
be extracted, and it is not clear to us that existing encoding procedures
would lead to one-shot classifiers absent such outside information. We would
be interested to explore whether the HD representation makes one-shot learning
easier in any broader sense. We expect this will necessitate the use of more
sophisticated encoding procedures that can learn salient properties of a given
domain. For this latter point we see dictionary learning (?) as a promising
avenue for developing adaptive encoding procedures. Dictionary learning is a
well studied problem and can be solved using online and neurally plausible
methods (?, ?) and would thus seem to be a promising avenue to address the
limitations of existing encoding procedures without sacrificing the simplicity
and neural plausibility of existing HD based methods.
## Acknowledgements
This work was supported in part by CRISP, one of six centers in JUMP, an SRC
program sponsored by DARPA, in part by an SRC-Global Research Collaboration
grant, GRC TASK 3021.001, GRC TASK 2942.001, DARPA-PA-19-03-03 Agreement
HR00112090036, and also NSF grants 1527034, 1730158, 1826967, 2100237,
2112167, 2052809, 2003279, 1830399, 1911095, and 2028040.
## Appendix A. Proofs of Selected Theorems
### A.1 Proof of Theorem 4
Proof The result is an immediate consequence of the Hanson-Wright inequality
(?, ?) which holds that, for $\mathbf{x}$ a centered, $d$-dimensional,
$\sigma$-sub-Gaussian random vector, and $\mathbf{A}\in\mathbb{R}^{d\times d}$
an arbitrary square matrix, the quadratic form
$\mathbf{x}^{T}\mathbf{A}\mathbf{x}$ obeys the following concentration bound:
$\mathbb{P}(|\mathbf{x}^{T}\mathbf{A}\mathbf{x}-\mathbb{E}[\mathbf{x}^{T}\mathbf{A}\mathbf{x}]|\geq
t)\leq
2\exp\left(-c\min\left(\frac{t^{2}}{\sigma^{4}\|\mathbf{A}\|_{F}^{2}},\frac{t}{\sigma^{2}\|\mathbf{A}\|}\right)\right)$
where $c$ is a positive absolute constant,
$\|\mathbf{A}\|_{F}^{2}=\sum_{i,j}|\mathbf{A}_{ij}|^{2}$ is the Frobenius norm
and $\|\mathbf{A}\|=\max_{\|\mathbf{x}\|\leq 1}\|\mathbf{A}\mathbf{x}\|$ is
the operator norm. The result follows by taking $\mathbf{A}$ to be the
$d\times d$ identity matrix, in which case
$\mathbf{x}^{T}\mathbf{I}_{d}\mathbf{x}=\|\mathbf{x}\|_{2}^{2}$, and union
bounding over all $m$ symbols in the alphabet.
### A.2 Proof of Theorem 7
Proof Fix some $a\notin\mathcal{S}$. As described in Theorem 5, the quantity
$\langle\phi(a),\phi(a^{\prime})\rangle$ is sub-Gaussian with parameter at
most $L_{\max}^{2}\sigma^{2}$, where $L_{\max}=\max_{a}\|\phi(a)\|$. Then,
again using the fact that sub-Gaussianity is preserved under sums, by
Hoeffding’s inequality we have:
$\mathbb{P}\left(\left|\sum_{a^{\prime}\in\mathcal{S}}\langle\phi(a),\phi(a^{\prime})\rangle\right|\geq\tau
L^{2}\right)\leq
2\exp\left(-\frac{\tau^{2}L^{4}}{2sL_{\max}^{2}\sigma^{2}}\right)\leq
2\exp\left(-\frac{\kappa\tau^{2}L^{2}}{2s\sigma^{2}}\right)$
where $\kappa=L^{2}/L_{\max}^{2}$. The result follows by union bounding over
all $m$ possible $a$.
### A.3 Proof of Theorem 9
Proof Expanding the dot product between the two representations:
$\displaystyle\frac{1}{L^{2}}\langle\phi(\mathcal{S}),\phi(\mathcal{S}^{\prime})\rangle$
$\displaystyle=\frac{1}{L^{2}}\sum_{a\in\mathcal{S}\cap\mathcal{S}^{\prime}}\langle\phi(a),\phi(a)\rangle+\frac{1}{L^{2}}\sum_{a\in\mathcal{S}}\sum_{a^{\prime}\in\mathcal{S}^{\prime}\setminus\\{a\\}}\langle\phi(a),\phi(a^{\prime})\rangle$
$\displaystyle\leq|\mathcal{S}\cap\mathcal{S}^{\prime}|+ss^{\prime}\mu.$
The other direction is analogous.
### A.4 Proof of Theorem 10
Proof Consider some symbol $a\in\mathcal{A}$. In the event $a\in\mathcal{S}$:
$\langle\phi(a),\phi(\mathcal{S})+\Delta_{\mathcal{S}}\rangle=\langle\phi(a),\phi(\mathcal{S})\rangle+\langle\phi(a),\Delta_{\mathcal{S}}\rangle\geq
L^{2}-sL^{2}\mu-\rho$
and when $a\notin\mathcal{S}$:
$\langle\phi(a),\phi(\mathcal{S})+\Delta_{\mathcal{S}}\rangle\leq
sL^{2}\mu+\rho$
Therefore we can decode correctly if:
$\frac{\rho}{L^{2}}+s\mu<\frac{1}{2}$
### A.5 Proof of Lemma 12
Proof Consider first the case of passive noise. Fix some $a\in\mathcal{A}$.
Noting that $\langle\phi(a),\Delta_{\mathcal{S}}\rangle$ is the sum of $d$
terms bounded in $[-c,c]$, another application of Hoeffding’s inequality and
the union bound will show:
$\mathbb{P}(\exists\,a\text{ s.t.
}|\langle\phi(a),\Delta_{\mathcal{S}}\rangle|\geq\rho)\leq
2m\exp\left(-\frac{\rho^{2}}{2c^{2}d}\right).$
Therefore, with probability $1-\delta$, we have that $\Delta_{\mathcal{S}}$ is
$\rho$-bounded for $\rho\leq c\sqrt{2d\ln(2m/\delta)}$. Noting that
$L=\sqrt{d}$ exactly, the result follows by applying Theorem 10.
Now let us consider the adversarial case in which
$\|\Delta_{\mathcal{S}}\|_{1}\leq\omega sd$. We first observe that
$|\langle\phi(a),\Delta_{\mathcal{S}}\rangle|\leq\|\phi(a)\|_{\infty}\|\Delta_{\mathcal{S}}\|_{1}\leq\omega
sd$. Then, applying Theorem 10 we obtain:
$\frac{\omega sd}{d}+s\mu<\frac{1}{2}\Rightarrow\omega<\frac{1}{2s}-\mu$
as claimed.
### A.6 Proof of Theorem 14
Proof Note first that $\|\phi(a)\otimes\psi(f)\|_{2}=\|\phi(a)\|_{2}$. Then,
fixing $a,a^{\prime}$ and $f$, by Hoeffding’s inequality:
$\mathbb{P}(|\langle\phi(a),\phi(a^{\prime})\otimes\psi(f)\rangle|\geq\mu
L^{2})\leq
2\exp\left(-\frac{L^{4}\mu^{2}}{2\sigma^{2}||\phi(a^{\prime})||_{2}^{2}}\right)\leq
2\exp\left(-\frac{\kappa\mu^{2}L^{2}}{2\sigma^{2}}\right)$
where we have again defined
$\kappa=(\min_{a}\|\phi(a)\|_{2}^{2})/(\max_{a^{\prime}}\|\phi(a^{\prime})\|_{2}^{2})$.
The result follows by union bounding over the fewer than $nm^{2}/2$ combinations of
$a,a^{\prime},f$.
### A.7 Proof of Theorem 17
Proof. Expanding:
$||\phi(\mathbf{x})-\phi(\mathbf{x}^{\prime})||^{2}_{2}=||\phi(\mathbf{x})||_{2}^{2}+||\phi(\mathbf{x}^{\prime})||_{2}^{2}-2\langle\phi(\mathbf{x}),\phi(\mathbf{x}^{\prime})\rangle$
Note first that $\|\phi(\mathbf{x})\|_{2}^{2}=nd+\Delta$, where $\Delta$ is a
mean-zero noise term due to cross-talk between the codewords. Neglecting minor
errors from the ceiling function, the dot-product expands to:
$\displaystyle\langle\phi(\mathbf{x}),\phi(\mathbf{x}^{\prime})\rangle$
$\displaystyle=\sum_{i=1}^{n}\langle\phi(x_{i})\otimes\psi(f_{i}),\phi(x_{i}^{\prime})\otimes\psi(f_{i})\rangle+\sum_{i\neq
j}\langle\phi(x_{i})\otimes\psi(f_{i}),\phi(x_{j}^{\prime})\otimes\psi(f_{j})\rangle$
$\displaystyle=\sum_{i=1}^{n}\langle\phi(x_{i}),\phi(x_{i}^{\prime})\rangle+\Delta^{\prime}=\sum_{i=1}^{n}d(1-|a(x_{i})-a(x_{i}^{\prime})|)+\Delta^{\prime}$
$\displaystyle=d(n-\|\mathbf{x}-\mathbf{x}^{\prime}\|_{1})+\Delta^{\prime}$
where $a(x_{i})$ is taken to be the centroid corresponding to $x_{i}$ and
$\Delta^{\prime}$ is another noise term due to crosstalk. Putting both
together and noting that $\Delta,\Delta^{\prime}\leq n^{2}d\mu$ we have:
$2d(\|\mathbf{x}-\mathbf{x}^{\prime}\|_{1}-2n^{2}\mu)\leq\|\phi(\mathbf{x})-\phi(\mathbf{x}^{\prime})\|^{2}_{2}\leq
2d(\|\mathbf{x}-\mathbf{x}^{\prime}\|_{1}+2n^{2}\mu)$
where the incoherence can be bounded as in Equation 4.
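The locality-preserving behavior established here can be checked numerically. The sketch below uses one possible instantiation of the scalar code, a prefix-flip (thermometer-style) scheme chosen so that $\langle\phi(x),\phi(x^{\prime})\rangle\approx d(1-|x-x^{\prime}|)$, together with Hadamard binding to random position codes; the paper's exact interpolation scheme may differ:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 10_000, 5  # codeword dimension, number of vector coordinates

base = rng.choice([-1.0, 1.0], size=d)
pos = rng.choice([-1.0, 1.0], size=(n, d))  # random position codes psi(f_i)

def level(x):
    """Scalar code with <level(x), level(x')> ~ d * (1 - |x - x'|), x in [0, 1]."""
    v = base.copy()
    v[: int(np.ceil(x * d / 2))] *= -1.0  # flip a prefix proportional to x
    return v

def encode(vec):
    # Bind each coordinate code to its position code and superpose.
    return sum(level(x) * pos[i] for i, x in enumerate(vec))

x  = rng.uniform(0, 1, size=n)
xp = rng.uniform(0, 1, size=n)
lhs = np.sum((encode(x) - encode(xp)) ** 2)
print(f"||phi(x)-phi(x')||^2 / (2d) = {lhs / (2 * d):.3f}  vs  "
      f"||x-x'||_1 = {np.sum(np.abs(x - xp)):.3f}")
```

The two printed quantities agree up to the $O(n^{2}\mu)$ cross-talk term, matching the sandwich bound above.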
# The dilemma of quantum neural networks
Yang Qian School of Computer Science, The University of SydneyJD Explore
Academy Xinbiao Wang 22footnotemark: 2 Institute of Artificial Intelligence,
School of Computer Science, Wuhan University Yuxuan Du 22footnotemark: 2
Corresponding author<EMAIL_ADDRESS>Xingyao Wu 22footnotemark: 2
Dacheng Tao 22footnotemark: 2
###### Abstract
The core of quantum machine learning is to devise quantum models with good
trainability and lower generalization error bounds than their classical
counterparts, so as to ensure better reliability and interpretability. Recent studies
confirmed that quantum neural networks (QNNs) have the ability to achieve this
goal on specific datasets. In this regard, it is of great importance to
understand whether these advantages are still preserved on real-world tasks.
Through systematic numerical experiments, we empirically observe that current
QNNs fail to provide any benefit over classical learning models. Concretely,
our results deliver two key messages. First, QNNs suffer from the severely
limited effective model capacity, which incurs poor generalization on real-
world datasets. Second, the trainability of QNNs is insensitive to
regularization techniques, which sharply contrasts with the classical
scenario. These empirical results force us to rethink the role of current QNNs
and to design novel protocols for solving real-world problems with quantum
advantages.
## 1 Introduction
The theme of deep learning is efficiently optimizing a good neural network
architecture with low generalization error such that it can well extrapolate
the underlying rule from the training data to new unseen data [1, 2, 3].
During the past decades, deep neural networks (DNNs) with diverse
architectures have been carefully designed to accomplish different tasks with
both low train and test error. Moreover, these DNNs have achieved state-of-
the-art performance compared with conventional machine learning models such as
support vector machines [2]. Concrete examples include the exploitation of
convolutional neural networks to tackle computer vision tasks [4, 5] and the
employment of recurrent neural networks to solve natural language processing
tasks [6, 7]. Alongside the huge empirical success of deep learning, numerous
studies have been dedicated to investigating the excellent trainability and
generalization ability of DNNs [8, 9, 10], since a good understanding of these
two properties does not only contribute to make DNNs more interpretable, but
it might also lead to more reliable model architecture design.
A milestone in the regime of quantum computing is Google’s experimental
demonstration that modern quantum machines can solve certain computational
tasks faster than classical computers [11, 12]. Such a superior power fuels a
growing interest of designing quantum machine learning (QML) models, which can
be effectively executed on both noisy intermediate-scale quantum (NISQ) and
fault-tolerant quantum machines with provable advantages [13, 44, 14, 15].
Following this routine, quantum neural networks (QNNs), as the quantum
extension of DNNs, have been extensively investigated [16, 17, 18, 19, 20, 21,
22]. Celebrated by their flexible structures, experimental studies have
implemented QNNs on different NISQ platforms to accomplish various learning
tasks such as data classification [20, 23], image generation [24, 25, 26], and
electronic-structure problems in material science and condensed matter physics
[27, 28, 29, 30].
Driven by the promising empirical achievements of QML and the significance of
understanding the power of QNNs, initial studies have been conducted to
explore the trainability and the generalization ability of QNNs [31, 32, 33,
34, 35, 36, 37] by leveraging varied model complexity measures developed in
statistical learning theory [38] and advanced tools in optimization theory
[39]. Notably, the obtained results transmitted both positive and negative
signals, as indicated in Figure 1. To be more concrete, theoretical evidence
validated that QNNs can outperform DNNs for specific learning tasks, i.e.,
quantum synthetic data classification [20] and discrete logarithm problem
[36]. However, Ref. [40] revealed the barren plateau issue of QNNs, which
challenges the applicability of QNNs on large-scale problems. Considering that
an ambitious aim of QNNs is providing computational advantages over DNNs on
real-world tasks, it is important to answer: ‘Are current QNNs sufficient to
solve certain real-world problems with potential advantages?’ If the response
is negative, it is necessary to figure out ‘how large is the gap between QNNs and
DNNs?’
Figure 1: An overview of the classical and quantum learning models.
Generalization ability: $\mathcal{H}$ is the whole hypothesis space.
$\mathcal{H}_{D}$ and $\mathcal{H}_{Q}$ refer to the hypothesis spaces
represented by DNN and QNN, respectively. When the target concept is covered by
$\mathcal{H}_{Q}\backslash\mathcal{H}_{D}$
($\mathcal{H}_{D}\backslash\mathcal{H}_{Q}$), as highlighted by the red
(green) hollow star, QNNs can definitely (fail to) guarantee computational
advantages over DNNs. When the target concept lies in
$\mathcal{H}_{D}\cap\mathcal{H}_{Q}$ (highlighted by the black color), it is
unknown whether QNNs may possess any advantage over DNNs. Trainability: The
function graph corresponding to the arrow of the optimization path of QNN
indicates the possible barren plateau, which is characterized by vanishing
gradients.
Problem setup. We inherit the tradition in DNNs to understand the trainability
and generalization of QNNs [41]. Particularly, the explicit form of the
measure of the generalization error bound is
$\displaystyle\hat{\mathcal{R}}_{S}(\hat{\bm{\theta}})-\mathcal{R}(\hat{\bm{\theta}}):=$
$\displaystyle\frac{1}{n}\sum_{i=1}^{n}C\left(h(\hat{\bm{\theta}},\bm{x}^{(i)}),\bm{y}^{(i)}\right)-\mathbb{E}_{\bm{x},\bm{y}}\left(C(h(\hat{\bm{\theta}},\bm{x}),\bm{y})\right),$
(1)
where $S=\\{(\bm{x}^{(i)},\bm{y}^{(i)})\\}_{i=1}^{n}$ denotes the given
training dataset sampled from the domain $\mathcal{X}\times\mathcal{Y}$,
$h(\hat{\bm{\theta}},\cdot)\in\mathcal{H}$ refers to the hypothesis inferred
by QNN with $\mathcal{H}$ being the hypothesis space and $\hat{\bm{\theta}}$
being the trained parameters,
$C:\mathcal{H}\times(\mathcal{X}\times\mathcal{Y})\to\mathbb{R}^{+}$
is the designated loss function, and
$\hat{\mathcal{R}}_{S}(\hat{\bm{\theta}})$ (or
$\mathcal{R}(\hat{\bm{\theta}})$) represents the empirical (or expected) risk
[42]. The generalization error bound in Eqn. (1) concerns when and how
minimizing $\hat{\mathcal{R}}_{S}(\hat{\bm{\theta}})$ is a sensible approach
to minimizing $\mathcal{R}(\hat{\bm{\theta}})$. A low error bound suggests
that the unearthed rule $h(\hat{\bm{\theta}})$ from the dataset $S$ can well
generalize to the unseen data sampled from the same domain. Note that since
the probability distribution behind the data domain is generally inaccessible, the
term $\mathcal{R}(\hat{\bm{\theta}})$ is intractable. A generic strategy is
employing the test dataset $\tilde{S}\sim\mathcal{X}\times\mathcal{Y}$ to
estimate this term, i.e.,
$\mathcal{R}(\hat{\bm{\theta}})\approx\frac{1}{\tilde{n}}\sum_{i=1}^{\tilde{n}}\ell(h(\hat{\bm{\theta}},\tilde{\bm{x}}^{(i)}),\tilde{\bm{y}}^{(i)})$
with $(\tilde{\bm{x}}^{(i)},\tilde{\bm{y}}^{(i)})\in\tilde{S}$.
The trainability concerns the convergence rate of the trained parameters of
QNN towards the optimal parameters. The mathematical form of the optimal
parameters $\bm{\theta}^{*}$ satisfies
$\bm{\theta}^{*}=\arg\min_{\bm{\theta}}\hat{\mathcal{R}}_{S}({\bm{\theta}})=\arg\min_{\bm{\theta}}\frac{1}{n}\sum_{i=1}^{n}C\left(h({\bm{\theta}},\bm{x}^{(i)}),\bm{y}^{(i)}\right).$
(2)
Intuitively, the inferred hypothesis (or equivalently, the trained parameters)
is expected to minimize the empirical risk
$\hat{\mathcal{R}}_{S}(\bm{\theta})$. Considering that the loss landscape of
QNNs is generally non-convex and non-concave, which implies the computational
hardness of seeking $\bm{\theta}^{*}$, an alternative way to examine the
trainability of QNN is analyzing its convergence rate, i.e.,
$\mathcal{J}(\bm{\theta})=\mathbb{E}[\|\nabla_{\bm{\theta}}\hat{\mathcal{R}}_{S}(\bm{\theta})\|],$
(3)
where the expectation is taken over the randomness from the sample error and
gate noise [34]. In other words, the metric $\mathcal{J}(\bm{\theta})$
evaluates how far the trainable parameters of QNN are away from the stationary
point $\|\nabla_{\bm{\theta}}\hat{\mathcal{R}}_{S}(\bm{\theta})\|=0$.
Following the above explanations, understanding the learnability of QNNs
amounts to exploring whether QNNs possess better generalization ability than
DNNs on certain real-world datasets in terms of Eqn. (1) under both the
noiseless and NISQ scenarios. Furthermore, it is crucial to understand whether
the trainability of QNNs can be enhanced by regularization techniques measured
by Eqn. (3).
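To make these two diagnostics concrete, the following minimal sketch estimates the generalization gap of Eqn. (1) from a held-out set and approximates $\mathcal{J}(\bm{\theta})$ of Eqn. (3) by averaging minibatch gradient norms; the toy logistic model is our own stand-in for the learning models studied below:

```python
import numpy as np

rng = np.random.default_rng(0)

def risk_and_grad(theta, X, y):
    """Empirical logistic risk and its gradient for a linear hypothesis."""
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))
    risk = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = X.T @ (p - y) / len(y)
    return risk, grad

# Toy data standing in for the training set S and the test set \tilde{S}.
X_tr, X_te = rng.normal(size=(200, 13)), rng.normal(size=(200, 13))
w_true = rng.normal(size=13)
y_tr, y_te = (X_tr @ w_true > 0).astype(float), (X_te @ w_true > 0).astype(float)

theta = rng.normal(size=13)
emp_risk, _ = risk_and_grad(theta, X_tr, y_tr)   # \hat{R}_S(theta)
exp_risk, _ = risk_and_grad(theta, X_te, y_te)   # estimate of R(theta)
print(f"generalization gap (Eqn. 1): {emp_risk - exp_risk:+.4f}")

# J(theta) of Eqn. (3): expected gradient norm over random minibatches.
norms = []
for _ in range(50):
    idx = rng.choice(len(X_tr), size=32, replace=False)
    _, g = risk_and_grad(theta, X_tr[idx], y_tr[idx])
    norms.append(np.linalg.norm(g))
print(f"J(theta) estimate (Eqn. 3): {np.mean(norms):.4f}")
```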
Contributions. Through systematic numerical simulations, we empirically
exhibit the dilemma of QNNs such that it is hard to directly use current QNNs
to gain quantum advantages on real-world datasets. Meanwhile, current QNNs
suffer from poor trainability. The main contributions of our study are as
follows.
1. 1.
We compare the performance of QNNs and DNNs on larger-scale datasets than
those used in previous literature to quantify the trainability and
generalization ability of QNNs under both the noiseless and NISQ scenarios. As
exhibited in Figure 1, we observe the poor model capacity of QNNs by
conducting randomization experiments proposed by [10]. Since the effective
model capacity determines the model’s generalization error bounds, our results
suggest that the generalization error bounds of QNNs achieved by statistical
learning theory are generally tight [31, 32, 33, 35, 36, 37]. In addition,
QNNs do not gain an obvious trainability enhancement from regularization
techniques, which sharply differs from DNNs. These observations partially
explain why current QNNs fail to surpass DNNs on real-world tasks.
2. 2.
We indicate the negative role of noise with respect to the generalization
ability and trainability of QNNs. Specifically, quantum noise degrades the
model capacity and exacerbates the difficulty of optimization. To this end, we
discuss possible solutions such as advanced error mitigation techniques to
enhance the capability of QNNs on real-world datasets.
3. 3.
We build a benchmark to evaluate the performance of QNNs and DNNs on both
quantum synthetic data and classical data, supporting a variety of predefined
models of QNNs and providing flexible interface for researchers to define
customizable architectures. The released benchmark will facilitate the
standardization of the assessment of various QNNs in the QML community and
provide a comparable reference for the design of QNNs. The related code will be
released in a GitHub repository.
## 2 Preliminary
Here we briefly recap QNNs that will be explored in this study. The foundation
of quantum computing is provided in Appendix A. Please refer to literature
[43, 44, 45, 46] for comprehensive explanations.
### 2.1 Quantum neural network
Quantum neural networks (QNNs) can be treated as the quantum generalization of
deep neural networks (DNNs), as illustrated in Figure 2. Both of them leverage
an optimizer to iteratively update parameters $\bm{\theta}$ of a trainable
model $h(\bm{\theta},\cdot)$ to minimize a predefined loss function
$C(\cdot,\cdot)$.
The key difference between QNNs and DNNs is the strategy to implement the
trainable model $h(\bm{\theta},\cdot)$, where the former employs parameterized
quantum circuits (PQCs), or equivalently ansätzes [43, 44, 47], while the
latter utilizes neural networks [1]. In particular, PQCs are constituted by
the encoding part $U_{E}(\cdot)$, the trainable part $U(\bm{\theta})$, and the
readout part. The purpose of $U_{E}(\cdot)$ is loading classical information
into the quantum form, as the precondition to proceed further quantum
operators. Although there are many data encoding methods [48], here we mainly
focus on the qubit-encoding method and its variants because of their
resource efficiency. Once the quantum example is prepared, the
trainable unitary $U(\bm{\theta})$ is applied to this state, followed by the
quantum measurement $\\{\Pi_{i}\\}$ to extract quantum information into the
classical form (see the following subsections for details). The collected classical
information can be used either as the predicted label or as a hidden feature,
depending on the detailed QNN-based protocols.
In the subsequent three subsections, we elaborate on the implementation of
three representative protocols, i.e., quantum naive neural network (QNNN)
[20], quantum embedding neural network (QENN) [49], and quantum convolutional
neural network (QCNN) [50], respectively.
Figure 2: The machinery of various QNNs. The schematic of QNN, depicted in the
upper left, consists of a hybrid quantum-classical loop, where the quantum
computer is employed to train the learnable parameters and the classical
processor is utilized to perform the optimization or post-processing to the
collected information from the quantum computer. The dashed arrows mean that
the loop is finite and terminated in the classical processor. The paradigms of
QNNN, QENN, and QCNN are shown in the lower left, lower right, and upper
right, respectively. The detailed realization of these QNNs is presented in
Sections 2.1.1, 2.1.2, and 2.1.3.
#### 2.1.1 Quantum naive neural network
We first follow Eqn. (1) to elaborate on the implementation of PQCs, or
equivalently, the hypothesis $h(\bm{\theta},\bm{x}^{(i)})$, in QNNN. As shown
in Figure 2, the encoding circuit $U_{E}(\cdot)$ loads the classical example
into the quantum state by specifying data features as rotational angles of
single-qubit gates. Note that the gate topology of $U_{E}(\cdot)$ can vary,
e.g., a possible implementation is
$U_{E}(\bm{x}^{(i)})=\bigotimes_{j=1}^{d}\mathop{\text{RY}}(\bm{x}^{(i)}_{j})$
[49]. The trainable part $U(\bm{\theta})$ consists of trainable single-qubit
quantum gates and fixed two-qubit quantum gates. Analogous to $U_{E}(\cdot)$, the
topology of $U(\bm{\theta})$ is versatile, where involving more gates
promises higher expressivity but more challenging trainability [35, 51].
Here we mainly focus on the hardware-efficient structure such that the
construction of $U(\bm{\theta})$ obeys a layer-wise structure and the gate
arrangement in each layer is identical. The explicit form satisfies
$U(\bm{\theta})=\prod_{l=1}^{L}U_{l}{(\bm{\theta}_{l})}$, where $L$ is the
layer number and $\bm{\theta}_{l}$ denotes the trainable parameters at the
$l$-th layer. To extract the quantum information into the classical form, QNNN
applies POVMs $\\{\Pi_{i}\\}_{i=1}^{d_{y}}$ to the state
$\ket{\psi(\bm{x}^{(i)},\bm{\theta})}=U(\bm{\theta})U_{E}(\bm{x}^{(i)})\ket{0}^{\otimes
d}$, i.e.,
$\displaystyle h(\bm{\theta},\bm{x}^{(i)})=\Big{[}$
$\displaystyle\operatorname{Tr}\left(\Pi_{1}\ket{\psi(\bm{x}^{(i)},\bm{\theta})}\bra{\psi(\bm{x}^{(i)},\bm{\theta})}\right),\cdots,\operatorname{Tr}\left(\Pi_{d_{y}}\ket{\psi(\bm{x}^{(i)},\bm{\theta})}\bra{\psi(\bm{x}^{(i)},\bm{\theta})}\right)\Big{]}^{\top},$
(4)
where $d_{y}$ is the dimension of the label space. In the training process, we
adopt the first-order optimizer to update the parameters $\bm{\theta}$ to
minimize the loss function in Eqn. (1), where the gradients $\partial
C\left(h(\bm{\theta},\bm{x}^{(i)}),y^{(i)}\right)/\partial\bm{\theta}$ can be
analytically evaluated by the parameter shift rule [21]. The specific
optimization algorithms used in this study are stochastic gradient descent
(SGD) [52] and stochastic quantum natural gradient descent (SQNGD) [53]. More
details can be seen in Appendix B.
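The QNNN pipeline described above can be sketched with the open-source PennyLane simulator (our choice of tooling; the paper does not tie itself to a specific software stack, and the gate layout below is one plausible hardware-efficient ansatz rather than the exact circuit used in the experiments):

```python
import pennylane as qml
from pennylane import numpy as pnp

n_qubits, n_layers = 4, 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnnn(theta, x):
    # Encoding part U_E(x): one RY rotation per feature.
    for j in range(n_qubits):
        qml.RY(x[j], wires=j)
    # Hardware-efficient trainable part U(theta): L identical layers of
    # single-qubit rotations followed by a ring of fixed CNOTs.
    for l in range(n_layers):
        for j in range(n_qubits):
            qml.RY(theta[l, j], wires=j)
        for j in range(n_qubits):
            qml.CNOT(wires=[j, (j + 1) % n_qubits])
    # Readout: measurement statistics of the first qubit as class scores.
    return qml.probs(wires=0)

def cost(theta, X, y):
    # Cross-entropy between measured probabilities and integer labels.
    loss = 0.0
    for xi, yi in zip(X, y):
        p = qnnn(theta, xi)
        loss = loss - pnp.log(p[yi] + 1e-12)
    return loss / len(X)

theta = pnp.random.uniform(0, 2 * pnp.pi, size=(n_layers, n_qubits),
                           requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.2)
# theta = opt.step(lambda t: cost(t, X_batch, y_batch), theta)  # one SGD step
#                                                               # (X_batch, y_batch
#                                                               # are hypothetical)
```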
#### 2.1.2 Quantum embedding neural network
Instead of separating the encoding part from the training part, QENN
integrates them together into the embedding circuit where the encoding circuit
and the training circuit are carried out alternately. Specifically, in QENN,
the employed PQCs are composed of multiple embedding layers equipped with
trainable parameters, i.e., in each layer an encoding circuit
$U_{E}^{(l)}(\bm{x}^{(i)})$ is followed by a trainable circuit
$U_{l}(\bm{\theta}_{l})$, as shown in Figure 2. Throughout the whole study, we
consider an identical topology of $U_{E}^{(l)}(\cdot)$ for different layers,
as the repetition of the embedding layer has been demonstrated to implement
classically intractable feature maps [54] and to increase the expressive power of
QNNs [55]. The explicit form of such PQCs can be written as
$U(\bm{x},\bm{\theta})=\prod_{l=1}^{L}U_{E}(\bm{x}^{(i)})U_{l}(\bm{\theta}_{l})$,
where the meanings of $L$ and $\bm{\theta}_{l}$ are the same as those in
QNNN. As for the training of QENN, the strategies of measurement and
optimization are identical to those of QNNN in Subsection 2.1.1.
#### 2.1.3 Quantum convolutional neural network
Convolutional neural network (CNN) has demonstrated superiority in image
processing tasks, featuring two special local operators, i.e., convolution and
pooling. The function of the convolutional and pooling operations is extracting
local features from the whole image and aggregating information from adjacent
patches, respectively. Unlike CNN, quantum CNN (QCNN) [50] performs the
convolutional operation with a quantum convolutional layer to pursue better
learning performance. Particularly, in the quantum convolutional layer, a
fraction of the input image is embedded into the quantum circuit
$U_{E}(\cdot)$, processed by the PQC $U(\bm{\theta})$, and followed by quantum
measurements to extract the corresponding semantic features. As shown in
Figure 2, unlike QNNN and QENN, where the collected classical information is
directly utilized as the predicted label, the output of the quantum
convolutional layer is treated as a hidden feature map which is taken as the
input for the next quantum convolutional layer. After several quantum
convolutional operations, a classical fully connected layer with activation
function [1] acts on the extracted features to make predictions.
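A quantum convolutional layer of this kind can be sketched as follows (again with PennyLane; the $2\times 2$ patch size, stride, and gate layout are illustrative assumptions of ours, not the paper's exact architecture):

```python
import pennylane as qml
from pennylane import numpy as pnp

patch_dev = qml.device("default.qubit", wires=4)

@qml.qnode(patch_dev)
def quantum_patch(theta, patch):
    # Encode a 2x2 image patch, entangle, and read out one feature per qubit.
    for j in range(4):
        qml.RY(pnp.pi * patch[j], wires=j)
    for j in range(4):
        qml.RY(theta[j], wires=j)
    for j in range(4):
        qml.CNOT(wires=[j, (j + 1) % 4])
    return [qml.expval(qml.PauliZ(j)) for j in range(4)]

def quantum_conv_layer(theta, image, stride=2):
    """Apply the quantum filter to every 2x2 patch (a quantum 'convolution')."""
    h, w = image.shape
    rows = []
    for r in range(0, h - 1, stride):
        cols = []
        for c in range(0, w - 1, stride):
            patch = image[r:r + 2, c:c + 2].flatten()
            cols.append(quantum_patch(theta, patch))
        rows.append(cols)
    return pnp.array(rows)  # hidden feature map fed to the next layer

# The resulting feature map is flattened and passed to a classical fully
# connected layer with a softmax activation to produce label predictions.
```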
## 3 The generalization of quantum neural networks
In this section, we explore the generalization ability of representative QNNs
introduced in Section 2. To be more concrete, we first apply these QNNs to
learn real-world datasets and compare their performance with classical DNNs
with varied parameter settings, i.e., the multi-layer perceptron (MLP) and
convolutional neural network (CNN) whose number of trainable parameters is
similar to QNNs, the over-parameterized multi-layer perceptron (MLP++) whose
number of trainable parameters can be extremely large [56, 5]. We further
conduct systematic simulations to benchmark the effective model capacity of
QNNs and DNNs, since this measure determines the generalization ability of
learning models [2, 10].
We experiment on two real-world datasets, i.e., the Wine dataset [57] and
MNIST handwritten digit dataset [58], to examine the generalization ability of
QNNs and DNNs. The Wine dataset, collected from UCI Machine Learning
Repository [57], consists of 130 examples, where each example is described by
a feature vector with $13$ attributes determining the origin of wines. MNIST
dataset includes ten thousands hand-written digit images, where each image has
$28\times 28$ pixels. With the aim of removing data-dependent bias as much as
possible, we also assess the generalization ability of the quantum synthetic
data proposed by [20]. Specifically, we train QNNN, QENN, and MLP on the Wine
dataset and quantum synthetic dataset which represent 1-dimensional features;
and apply QCNN, CNN and MLP on MNIST dataset for the case of $2$-dimensional
features. Note that QNNN and QENN are excluded when processing image data
because embedding a high-dimensional image into a quantum circuit requires an
unaffordable number of qubits. To suppress the effects of randomness, the
statistical results are collected by repeating each setting $10$ times.
Figure 3: Learning performance on quantum data and classical data with true
labels. G-Error represents the generalization error. (a), (b) and (c) show the
accuracy of various models changing with training epochs, when training on
quantum synthetic data, the Wine data and MNIST respectively. The bar chart
inserted into each figure represents the generalization error of each model.
Before moving on to present experiment results, let us address the
generalization error measure defined in Eqn. (1). Particularly, there are two
components that together completely characterize the generalization error,
i.e., the empirical risk $\hat{\mathcal{R}}_{S}(\hat{\bm{\theta}})$ and the
expected risk $\mathcal{R}(\hat{\bm{\theta}})$. In our experiments, we employ
the accuracy of the training set to quantify the empirical risk in which high
train accuracy reflects low $\hat{\mathcal{R}}_{S}(\hat{\bm{\theta}})$.
Meanwhile, following the explanation in Eqn. (1), the accuracy on the test set
is adopted to estimate the expected risk such that high test accuracy implies
low $\mathcal{R}(\hat{\bm{\theta}})$. Under the above insights, when a
learning model possesses a good generalization ability, it should achieve high
train accuracy and test accuracy, as well as a small gap between them.
The learning performance of QNNs and DNNs for the quantum synthetic dataset
and real-world datasets is exhibited in Figure 3. Towards the quantum
synthetic dataset, both the train and test accuracy of QNNN and QENN quickly
converge to $92\%$ after $20$ epochs. Conversely, although the train
accuracy of MLP reaches $90\%$ after $30$ epochs, its test accuracy is no
better than random guessing. Therefore, the generalization error of classical
DNNs, i.e., the discrepancy between train accuracy and test accuracy, is much
higher than that of quantum models ($0.4$ for MLP versus $0.01$ for QNNN and
QENN), as demonstrated in the bar chart inserted in Figure 3 (a). However, the
learning performance behaves quite differently when the above models are applied
to learn real-world datasets. As shown in Figure 3 (b), there exists an
evident step-by-step accuracy drop on the Wine dataset along the sequence of
MLP, QENN, and QNNN. In particular, QNNN and QENN fall behind MLP by $10\%$ to
$20\%$. Meanwhile, there is a more serious performance degradation for quantum
models evaluated by test accuracy, especially for QNNN, which exhibits almost
$15\%$ generalization error, three times higher than that of MLP. The
learning performance of QCNN and CNN on the MNIST dataset follows the same pattern.
As depicted in Figure 3 (c), QCNN achieves $93\%$ accuracy on both training
and test set, which is slightly worse than CNN by approximately $3\%$. It is
worth noting that the relatively small gap among the three models is mostly
attributable to the subtle differences in network structure, where QCNN, CNN,
and MLP differ only in the first layer (Appendix C).
Figure 4: Trainability of different models on quantum data and classical data
with random labels. (a) shows how various models fit quantum data with random
labels. MLP++, representing MLP with larger scale, achieves zero training
error. (b) shows the changes of accuracy when fitting classical data with
random labels. MLP can still completely fit the random labels. (c) shows the
ability of fitting MNIST with random labels. No model performs better than
random guess.
The generalization ability of a learning model is dominated by its effective
model capacity, which concerns the model’s ability to fit random labels [10].
Namely, a learning model possesses a high effective model capacity when it
reaches a high train accuracy on a dataset with random labels, as ensured by
the randomization test in non-parametric statistics. Empirical studies have
validated that DNNs possess sufficient effective model capacity and speculated
that this property contributes to the great success of DNNs. In this regard,
we conduct randomization experiments on both QNNs and DNNs to compare their
effective model capacity.
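The randomization test itself is model-agnostic and easy to state in code (a minimal sketch; `train_fn` and the `.predict()` interface are hypothetical placeholders for the QNN and DNN training routines used in the experiments):

```python
import numpy as np

def randomization_test(train_fn, X, y, n_classes, seed=0):
    """Randomization test in the spirit of [10]: train on shuffled labels
    and report train accuracy. Accuracy near 1/n_classes indicates low
    effective model capacity; accuracy near 1.0 indicates memorization."""
    rng = np.random.default_rng(seed)
    y_random = rng.integers(0, n_classes, size=len(y))  # destroy label signal
    model = train_fn(X, y_random)                       # fit the random labels
    train_acc = float(np.mean(model.predict(X) == y_random))
    return train_acc
```

Here `train_fn` is any trainer returning a fitted object that exposes `.predict()`, so the same harness evaluates a QNN wrapper and a classical MLP identically.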
The results related to the effective model capacity are shown in Figure 4,
which reveal a large gap between QNNs and DNNs, regardless of whether the
training data is quantum or classical. In particular, QNNs achieve relatively
low train accuracy ($0.562$ for QNNN and $0.630$ for QENN) which is only
slightly better than random guess. By contrast, for DNNs, MLP reaches $90\%$
and the perfect $100\%$ train accuracy on the quantum synthetic dataset and
Wine dataset respectively after $40$ epochs. If we further increase the number
of trainable parameters of MLP (MLP++), it will also completely fit the
quantum synthetic data with random labels, as shown by the purple line in
Figure 4 (a). Note that the same strategy is inappropriate for QNNs, because
increasing the number of trainable parameters will incur both the barren
plateau issue and accumulated quantum noise, which hinder
optimization [34]. As for MNIST, all these models fail to fit the random
labels, and their behavior imitates random guessing. Similarly, we can enlarge
the scale of MLP to obtain a leap of accuracy on the training set with random
labels at the expense of a small increase in training time, as indicated by the
purple line in Figure 4 (c).
Remark. We defer the study of the generalization ability of QNNs under the
NISQ case to Appendix D. In a nutshell, by running QNNs on NISQ chips to
complete the above experiments, we conclude that quantum noise largely weakens
the effective model capacity and generalization of QNNs.
### 3.1 Implications
The achieved results indicate the following three substantial implications
with respect to the dilemma of existing QNNs.
1. 1.
The learning performance of current QNNs is no better than that of DNNs on real-world
datasets. This observation questions the necessity of employing QNNs to tackle
real-world learning tasks, since it remains elusive how QNNs can benefit these
tasks.
2. 2.
The effective model complexity of current QNNs is poor, which is in stark
contrast with DNNs. The low effective model complexity enables us to leverage
statistical learning theory to analyze the generalization ability of QNNs with
a tight bound [36, 32]. Nevertheless, as shown in Figure 1, a severely
restricted model capacity fails to cover complicated target concepts in real-
world tasks, which prohibits the applicability of QNNs.
3. 3.
The limited model capacity is further reduced by the imperfection of NISQ
machines. The narrowed hypothesis space deteriorates the performance of QNNs.
There are two possible directions to seek potential advantages of QNNs over
DNNs. The first way is designing clever over-parameterized quantum learning
models, as with DNNs. Partial evidence supporting this solution is the improved
performance of QENN compared with QNNN. A critical issue in such a model
design is how to avoid the barren plateau phenomenon [40]. The second way is to
develop new paradigms of quantum models to further introduce nonlinearity into
quantum learning pipeline. For instance, theoretical results have proven
potential advantages of quantum kernels [20, 59, 60].
## 4 Trainability of quantum models
Here we investigate the trainability of QNNs, which serves as another dominant
factor governing the learnability of quantum models. In particular, we
first examine the performance of QNNN and QENN on the Wine dataset with
consideration of various implicit and explicit regularization techniques under
the noiseless scenario. Subsequently, for the purpose of understanding how
quantum noise affects the performance of QNNs, we conduct the same experiments
under the noisy scenario in which the noisy model is extracted from
ibmq_16_melbourne, which is one of the IBM Quantum Canary Processors [61].
Recall that a learning model, e.g., a QNN or DNN, is said to possess good
trainability if it requires a small number of epochs to surpass a threshold
accuracy and quickly converges to a stationary point, i.e., the term
$\mathcal{J}(\bm{\theta})\rightarrow 0$ in Eqn. (3). Many theoretical studies
in the regime of deep learning have proven that SGD can help learning models
escape saddle points efficiently [62, 63]. Moreover, a recent study demonstrated
that a quantum variant of SGD, i.e., stochastic quantum natural gradient
descent (SQNGD) [53], can well address the barren plateau issue. Driven by
the success of SGD in deep learning and the power of SQNGD, an immediate
interest is exploring whether these two methods can enhance the trainability
of QNNs, especially for alleviating the barren plateau issue.
Figure 5: Effects of regularizations on the performance of quantum model on
Wine dataset. The labels ‘GD’, ‘SGD’, ‘SQNGD’, ‘WD’, and ‘N’ refer to the
gradient descent optimizer, stochastic gradient descent optimizer, the
stochastic quantum natural gradient descent optimizer, the weight decay, and
execution of experiments on NISQ chips, respectively. (a) describes the
effects of regularizations on optimization. SGD plays a significant role in
accelerating convergence and achieving higher accuracy, while the other
techniques hardly improve the optimization process, let alone boost performance. (b) shows the learning
curve of GD with more training epochs.
The experiment results are summarized in Figure 5. With the aim of
investigating whether SGD facilitates the quantum model optimization, we also
apply gradient descent (GD) optimizer to learn the same tasks as a reference.
Specifically, Figure 5 (a) depicts that the train accuracy of QNNN and QENN
optimized by SGD rapidly rises to $70\%$ after $20$ epochs, while their train
accuracy remains at around $50\%$ with the GD optimizer. Meanwhile, QNNN with
SGD optimizer achieves higher test accuracy than that in the setting of GD (at
least $20\%$). Surprisingly, SQNGD, the quantum natural gradient version of
SGD, further expands the accuracy gap by $10\%$, reaching the highest
accuracy and fastest convergence for QNNN. Notably, the performance of QNNN
with the GD optimizer presents a monotonically increasing trend, which suggests that
the model may potentially reach higher performance with more training epochs.
Therefore, we extend the total training epochs from $100$ to $500$ and train
the QENN with GD. Figure 5 (b) shows that the accuracy of QENN trained by GD
has been increasing smoothly and reaches $80\%$ after $500$ epochs, narrowing
the gap between GD and SGD from $30\%$ to $10\%$.
Motivated by the large gain from SGD, we conduct additional experiments to
explore how the batch size affects the learning performance of QNNs.
Specifically, we train QNNs repeatedly on the Wine data by SGD with batch sizes
growing exponentially. As shown in Figure 6, for both QNNN and QENN,
increasing the batch size slows convergence. For example, QNNN
achieves $80\%$ accuracy on the Wine dataset after $40$ epochs when the batch
size equals $4$, which is $10\%$ higher than with a batch size of $8$.
Figure 6: The relation between the batch size of SGD and the trainability of QNNs on
the Wine dataset. (a) Training process: the convergence of quantum models under
different batch-size settings (‘bs’ abbreviates batch size). (b) Accuracy vs.
batch size: the fluctuation of training accuracy with respect to batch size.
We then study how the explicit regularization technique, i.e., weight
decay, affects the trainability of QNNs. Mathematically, the weight decay
strategy refers to adding a penalty term on the trainable parameters, i.e.,
$\arg\min_{\bm{\theta}}\mathcal{L}(\bm{\theta})=\frac{1}{n}\sum_{i=1}^{n}\ell(y^{(i)},\hat{y}^{(i)})+\lambda\|\bm{\theta}\|$.
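In code, this amounts to one extra term in the objective (a minimal sketch; `cost` refers to the hypothetical QNN loss from the QNNN sketch in Section 2.1.1):

```python
from pennylane import numpy as pnp

def cost_with_weight_decay(theta, X, y, lam=1e-3):
    # Empirical loss plus the penalty lambda * ||theta|| from the objective
    # displayed above; `cost` is the unregularized QNN loss (e.g., the
    # hypothetical cross-entropy from the QNNN sketch in Section 2.1.1).
    return cost(theta, X, y) + lam * pnp.linalg.norm(theta)
```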
The effect of weight decay on the trainability of QNNs is shown in Figure 5.
An immediate observation is that this strategy fails to enhance the
performance of QNNN with GD optimizer. With respect to SGD optimizer, the
weight decay method improves the test accuracy of QNNN by $2\%$ at the expense
of a slightly lower convergence rate. For QENN, weight decay together with SGD
helps the model attain the fastest convergence in the first $20$ epochs.
However, it fails to efficiently narrow the gap between train and test accuracy
after the $40$-th epoch, when overfitting begins to appear.
We last explore the performance of QNNs under the NISQ scenario. As depicted
by the brown line in Figure 5, quantum noise leads to degraded trainability of
QNNs, i.e., around a $10\%$ accuracy decline compared with the noiseless
setting. We defer the comparison of the runtime cost of simulating QNNs and
DNNs to Appendix D, which shows that QNNN under the NISQ setting spends up to
$126s$ on one iteration, while the noiseless setting and classical MLP only
need $4s$ and $0.02s$, respectively.
### 4.1 Implications
The achieved results indicate that the widely used regularization techniques
in classical deep learning play a different role in quantum machine learning.
Although SGD with an appropriate batch size slightly benefits the
optimization of QNNs, other regularization strategies such as weight decay
fail to enhance the trainability of QNNs. This distinguishes QNNs from DNNs.
Advanced techniques are highly desired to improve the trainability of QNNs,
especially for addressing the barren plateau phenomenon. In addition, empirical
results exhibit that quantum system noise exacerbates the training difficulty
of QNNs. A promising way to resolve this issue is introducing various error
mitigation techniques into QNNs [64, 65, 66, 67, 68, 69].
## 5 Discussion and conclusions
In this study, we conduct systematic numerical experiments to understand the
generalization ability and trainability of typical QNNs from the view of
statistical learning theory. The achieved results exhibit that current QNNs
struggle with poor effective model capacity. As depicted in Figure 1, this
observation well explains why current QNNs can attain computational advantages
on quantum synthetic data classification tasks and discrete logarithm
problems, while they fail to compete with DNNs in tackling real-world learning
tasks. Moreover, our study illustrates that the regularization techniques,
which greatly contribute to the success of DNNs, have limited effects towards
the trainability of QNNs. In addition, our study exhibits that quantum system
noise suppresses the learnability of QNNs, which echoes with the theoretical
study [35]. Last, to alleviate the dilemma of current QNNs, we discuss several
prospective directions such as designing over-parameterized QNNs without
barren plateaus and developing effective error mitigation techniques.
Besides the contributions towards understanding the power of QNNs, we
build an open-source benchmark to fairly and comprehensively assess the
learnability of various QNNs in a standard process, and consequently benefit
the design of new paradigms of QNNs. Specifically, this benchmark provides
several ready-to-use datasets, quantum and classical models as well as
evaluation scripts. Furthermore, we adopt the factory method in the software
design to help users easily register their self-defined models into the whole
framework. More models and tasks will be supported in the future. We believe
that this benchmark will facilitate the whole quantum machine learning
community.
Note added. During the preparation of the manuscript, we notice that a very
recent theoretical study [70] indicated that to deeply understand the power of
QNNs, it is necessary to demonstrate whether QNNs possess the ability to
achieve zero risk for a randomly-relabeled real-world classification task.
Their motivation highly echoes ours, in that statistical learning
theory can be harnessed as a powerful tool to study the capability and
limitations of QNNs. From this perspective, the achieved results in this study
provide a negative response to their question. Combining the analysis in [70]
and our results, a promising research direction is analyzing the non-uniform
generalization bounds of QNNs to understand their power.
## References
* [1] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
* [2] Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of machine learning, 2012.
* [3] Vladimir Vapnik. The nature of statistical learning theory. Springer science & business media, 2013.
* [4] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017.
* [5] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25:1097–1105, 2012.
* [6] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
* [7] Mike Schuster and Kuldip K Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681, 1997.
* [8] Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nathan Srebro. Exploring generalization in deep learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, page 5949–5958, Red Hook, NY, USA, 2017. Curran Associates Inc.
* [9] Ruoyu Sun. Optimization for deep learning: theory and algorithms. arXiv preprint arXiv:1912.08957, 2019.
* [10] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017.
* [11] Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C Bardin, Rami Barends, Rupak Biswas, Sergio Boixo, Fernando GSL Brandao, David A Buell, et al. Quantum supremacy using a programmable superconducting processor. Nature, 574(7779):505–510, 2019.
* [12] John Preskill. Quantum computing in the nisq era and beyond. Quantum, 2:79, 2018.
* [13] Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, and Seth Lloyd. Quantum machine learning. Nature, 549(7671):195, 2017.
* [14] Aram W Harrow and Ashley Montanaro. Quantum computational supremacy. Nature, 549(7671):203, 2017.
* [15] Maria Schuld, Ilya Sinayskiy, and Francesco Petruccione. The quest for a quantum neural network. Quantum Information Processing, 13(11):2567–2586, 2014.
* [16] Kerstin Beer, Dmytro Bondarenko, Terry Farrelly, Tobias J Osborne, Robert Salzmann, Daniel Scheiermann, and Ramona Wolf. Training deep quantum neural networks. Nature Communications, 11(1):1–6, 2020.
* [17] Iris Cong, Soonwon Choi, and Mikhail D Lukin. Quantum convolutional neural networks. Nature Physics, 15(12):1273–1278, 2019.
* [18] Yuxuan Du, Min-Hsiu Hsieh, Tongliang Liu, and Dacheng Tao. A grover-search based quantum learning scheme for classification. New Journal of Physics, 2021.
* [19] Edward Farhi and Hartmut Neven. Classification with quantum neural networks on near term processors. arXiv preprint arXiv:1802.06002, 2018.
* [20] Vojtěch Havlíček, Antonio D Córcoles, Kristan Temme, Aram W Harrow, Abhinav Kandala, Jerry M Chow, and Jay M Gambetta. Supervised learning with quantum-enhanced feature spaces. Nature, 567(7747):209, 2019.
* [21] Kosuke Mitarai, Makoto Negoro, Masahiro Kitagawa, and Keisuke Fujii. Quantum circuit learning. Physical Review A, 98(3):032309, 2018.
* [22] Maria Schuld and Nathan Killoran. Quantum machine learning in feature hilbert spaces. Physical review letters, 122(4):040504, 2019.
* [23] Takeru Kusumoto, Kosuke Mitarai, Keisuke Fujii, Masahiro Kitagawa, and Makoto Negoro. Experimental quantum kernel machine learning with nuclear spins in a solid. arXiv preprint arXiv:1911.12021, 2019.
* [24] He-Liang Huang, Yuxuan Du, Ming Gong, Youwei Zhao, Yulin Wu, Chaoyue Wang, Shaowei Li, Futian Liang, Jin Lin, Yu Xu, et al. Experimental quantum generative adversarial networks for image generation. arXiv preprint arXiv:2010.06201, 2020.
* [25] Manuel S Rudolph, Ntwali Toussaint Bashige, Amara Katabarwa, Sonika Johr, Borja Peropadre, and Alejandro Perdomo-Ortiz. Generation of high resolution handwritten digits with an ion-trap quantum computer. arXiv preprint arXiv:2012.03924, 2020.
* [26] Daiwei Zhu, Norbert M Linke, Marcello Benedetti, Kevin A Landsman, Nhung H Nguyen, C Huerta Alderete, Alejandro Perdomo-Ortiz, Nathan Korda, A Garfoot, Charles Brecque, et al. Training of quantum circuits on a hybrid quantum computer. Science advances, 5(10):eaaw9918, 2019.
* [27] Cornelius Hempel, Christine Maier, Jonathan Romero, Jarrod McClean, Thomas Monz, Heng Shen, Petar Jurcevic, Ben P Lanyon, Peter Love, Ryan Babbush, et al. Quantum chemistry calculations on a trapped-ion quantum simulator. Physical Review X, 8(3):031022, 2018.
* [28] Abhinav Kandala, Antonio Mezzacapo, Kristan Temme, Maika Takita, Markus Brink, Jerry M Chow, and Jay M Gambetta. Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets. Nature, 549(7671):242–246, 2017.
* [29] Google AI Quantum et al. Hartree-fock on a superconducting qubit quantum computer. Science, 369(6507):1084–1089, 2020.
* [30] Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J Love, Alán Aspuru-Guzik, and Jeremy L O’brien. A variational eigenvalue solver on a photonic quantum processor. Nature communications, 5:4213, 2014.
* [31] Amira Abbas, David Sutter, Christa Zoufal, Aurélien Lucchi, Alessio Figalli, and Stefan Woerner. The power of quantum neural networks. arXiv preprint arXiv:2011.00027, 2020.
* [32] Leonardo Banchi, Jason Pereira, and Stefano Pirandola. Generalization in quantum machine learning: a quantum information perspective. arXiv preprint arXiv:2102.08991, 2021.
* [33] Kaifeng Bu, Dax Enshan Koh, Lu Li, Qingxian Luo, and Yaobo Zhang. On the statistical complexity of quantum circuits. arXiv preprint arXiv:2101.06154, 2021.
* [34] Yuxuan Du, Min-Hsiu Hsieh, Tongliang Liu, Shan You, and Dacheng Tao. On the learnability of quantum neural networks. arXiv preprint arXiv:2007.12369, 2020.
* [35] Yuxuan Du, Zhuozhuo Tu, Xiao Yuan, and Dacheng Tao. An efficient measure for the expressivity of variational quantum algorithms. arXiv preprint arXiv:2104.09961, 2021.
* [36] Hsin-Yuan Huang, Michael Broughton, Masoud Mohseni, Ryan Babbush, Sergio Boixo, Hartmut Neven, and Jarrod R McClean. Power of data in quantum machine learning. arXiv preprint arXiv:2011.01938, 2020.
* [37] Hsin-Yuan Huang, Richard Kueng, and John Preskill. Information-theoretic bounds on quantum advantage in machine learning. arXiv preprint arXiv:2101.02464, 2021.
* [38] Vladimir Vapnik. Principles of risk minimization for learning theory. In Advances in neural information processing systems, pages 831–838, 1992.
* [39] Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
* [40] Jarrod R McClean, Sergio Boixo, Vadim N Smelyanskiy, Ryan Babbush, and Hartmut Neven. Barren plateaus in quantum neural network training landscapes. Nature communications, 9(1):1–6, 2018.
* [41] Zeyuan Allen-Zhu, Yuanzhi Li, and Yingyu Liang. Learning and generalization in overparameterized neural networks, going beyond two layers. In Advances in neural information processing systems, pages 6158–6169, 2019.
* [42] Kenji Kawaguchi, Leslie Pack Kaelbling, and Yoshua Bengio. Generalization in deep learning. arXiv preprint arXiv:1710.05468, 2017.
* [43] Marcello Benedetti, Erika Lloyd, Stefan Sack, and Mattia Fiorentini. Parameterized quantum circuits as machine learning models. Quantum Science and Technology, 4(4):043001, 2019.
* [44] M Cerezo, Andrew Arrasmith, Ryan Babbush, Simon C Benjamin, Suguru Endo, Keisuke Fujii, Jarrod R McClean, Kosuke Mitarai, Xiao Yuan, Lukasz Cincio, et al. Variational quantum algorithms. arXiv preprint arXiv:2012.09265, 2020.
* [45] Michael A Nielsen and Isaac L Chuang. Quantum computation and quantum information. Cambridge University Press, 2010.
* [46] Peter Wittek. Quantum machine learning: what quantum computing means to data mining. Academic Press, 2014.
* [47] Yuxuan Du, Min-Hsiu Hsieh, Tongliang Liu, and Dacheng Tao. Expressive power of parametrized quantum circuits. Phys. Rev. Research, 2:033125, Jul 2020.
* [48] Ryan LaRose and Brian Coyle. Robust data encodings for quantum classifiers. Physical Review A, 102(3):032420, 2020.
* [49] Seth Lloyd, Maria Schuld, Aroosa Ijaz, Josh Izaac, and Nathan Killoran. Quantum embeddings for machine learning. arXiv preprint arXiv:2001.03622, 2020.
* [50] Maxwell Henderson, Samriddhi Shakya, Shashindra Pradhan, and Tristan Cook. Quanvolutional neural networks: powering image recognition with quantum circuits. Quantum Machine Intelligence, 2(1):1–9, 2020.
* [51] Zoë Holmes, Kunal Sharma, M Cerezo, and Patrick J Coles. Connecting ansatz expressibility to gradient magnitudes and barren plateaus. arXiv preprint arXiv:2101.02138, 2021.
* [52] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
* [53] James Stokes, Josh Izaac, Nathan Killoran, and Giuseppe Carleo. Quantum natural gradient. Quantum, 4:269, 2020.
* [54] Seth Lloyd. Quantum approximate optimization is computationally universal. arXiv preprint arXiv:1812.11075, 2018.
* [55] Maria Schuld, Ryan Sweke, and Johannes Jakob Meyer. Effect of data encoding on the expressive power of variational quantum-machine-learning models. Physical Review A, 103(3):032430, 2021.
* [56] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [57] Dheeru Dua and Casey Graff. UCI machine learning repository, 2017.
* [58] Yann LeCun. The mnist database of handwritten digits. http://yann. lecun. com/exdb/mnist/, 1998.
* [59] Maria Schuld. Quantum machine learning models are kernel methods. arXiv preprint arXiv:2101.11020, 2021.
* [60] Xinbiao Wang, Yuxuan Du, Yong Luo, and Dacheng Tao. Towards understanding the power of quantum kernels in the nisq era. arXiv preprint arXiv:2103.16774, 2021.
* [61] IBM Quantum. https://quantum-computing.ibm.com/, 2021.
* [62] Léon Bottou, Frank E Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. Siam Review, 60(2):223–311, 2018.
* [63] Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points—online stochastic gradient for tensor decomposition. In Conference on learning theory, pages 797–842. PMLR, 2015.
* [64] Yuxuan Du, Tao Huang, Shan You, Min-Hsiu Hsieh, and Dacheng Tao. Quantum circuit architecture search: error mitigation and trainability enhancement for variational quantum solvers. arXiv preprint arXiv:2010.10217, 2020.
* [65] Suguru Endo, Simon C Benjamin, and Ying Li. Practical quantum error mitigation for near-future applications. Physical Review X, 8(3):031027, 2018.
* [66] Suguru Endo, Zhenyu Cai, Simon C Benjamin, and Xiao Yuan. Hybrid quantum-classical algorithms and quantum error mitigation. Journal of the Physical Society of Japan, 90(3):032001, 2021.
* [67] Abhinav Kandala, Kristan Temme, Antonio D Córcoles, Antonio Mezzacapo, Jerry M Chow, and Jay M Gambetta. Error mitigation extends the computational reach of a noisy quantum processor. Nature, 567(7749):491–495, 2019.
* [68] Armands Strikis, Dayue Qin, Yanzhu Chen, Simon C Benjamin, and Ying Li. Learning-based quantum error mitigation. arXiv preprint arXiv:2005.07601, 2020.
* [69] Kristan Temme, Sergey Bravyi, and Jay M Gambetta. Error mitigation for short-depth quantum circuits. Physical review letters, 119(18):180509, 2017.
* [70] Matthias C. Caro, Elies Gil-Fuster, Johannes Jakob Meyer, Jens Eisert, and Ryan Sweke. Encoding-dependent generalization bounds for parametrized quantum circuits, 2021.
* [71] Aram W Harrow and John C Napp. Low-depth gradient measurements can improve convergence in variational hybrid quantum-classical algorithms. Physical Review Letters, 126(14):140502, 2021.
* [72] Ville Bergholm, Josh Izaac, Maria Schuld, Christian Gogolin, M Sohaib Alam, Shahnawaz Ahmed, Juan Miguel Arrazola, Carsten Blank, Alain Delgado, Soran Jahangiri, et al. Pennylane: Automatic differentiation of hybrid quantum-classical computations. arXiv preprint arXiv:1811.04968, 2018.
* [73] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc., 2019.
## Appendix A Quantum computing
Analogous to the fundamental role of the bit in classical computing, the
fundamental unit in quantum computation is the quantum bit (qubit), which refers
to a two-dimensional unit vector. Under Dirac notation, a qubit state is defined as
$\ket{\bm{\alpha}}=a_{0}\ket{0}+a_{1}\ket{1}\in\mathbb{C}^{2}$, where
$\ket{0}=[1,0]^{\top}$ and $\ket{1}=[0,1]^{\top}$ specify two unit bases, and
the coefficients $a_{0},a_{1}\in\mathbb{C}$ satisfy
$|a_{0}|^{2}+|a_{1}|^{2}=1$. Similarly, an $N$-qubit state is denoted by
$\ket{\Psi}=\sum_{i=1}^{2^{N}}a_{i}\ket{\bm{e}_{i}}\in\mathbb{C}^{2^{N}}$,
where $\ket{\bm{e}_{i}}\in\mathbb{R}^{2^{N}}$ is the unit vector whose $i$-th
entry is 1 and whose other entries are 0, and $\sum_{i=1}^{2^{N}}|a_{i}|^{2}=1$
with $a_{i}\in\mathbb{C}$. Apart from Dirac notation, the density matrix can
be used to describe more general qubit states. For example, the density matrix
corresponding to the state $\ket{\Psi}$ is
$\rho=\ket{\Psi}\bra{\Psi}\in\mathbb{C}^{2^{N}\times 2^{N}}$. For a set of
qubit states $\{p_{i},\ket{\Psi_{i}}\}_{i=1}^{m}$ with $p_{i}>0$,
$\sum_{i=1}^{m}p_{i}=1$, and $\ket{\Psi_{i}}\in\mathbb{C}^{2^{N}}$ for
all $i\in[m]$, the density matrix is $\rho=\sum_{i=1}^{m}p_{i}\rho_{i}$
with $\rho_{i}=\ket{\Psi_{i}}\bra{\Psi_{i}}$, and it satisfies
$\operatorname{Tr}({\rho})=1$.
Figure 7: The quantum logic gates. The table contains the abbreviation, the
mathematical form, and the graph representation of a set of universal quantum
gates explored in this study.
There are three types of quantum operations used to manipulate qubit states:
quantum (logic) gates, quantum channels, and quantum measurements.
Specifically, quantum gates, as unitary transformations, can be treated as the
computational toolkit for quantum circuit models; i.e., an $N$-qubit gate
$U\in\mathcal{U}(2^{N})$ obeys $UU^{\dagger}=\mathbb{I}_{2^{N}}$, where
$\mathcal{U}(\cdot)$ stands for the unitary group. Throughout this study,
we focus on the single-qubit and two-qubit quantum gate set
$\{\mathop{\text{H}},\mathop{\text{X}},\mathop{\text{Y}},\mathop{\text{Z}},\mathop{\text{RX}},\mathop{\text{RY}},\mathop{\text{RZ}},\mathop{\text{CNOT}},\mathop{\text{CZ}}\}$,
as summarized in Figure 7. Note that the investigated set is universal, i.e.,
these quantum gates can be composed to reproduce the functions of
all other quantum gates [45]. Unlike quantum gates, which evolve
qubit states in a closed system, quantum channels formalize
the evolution of qubit states in an open system. Mathematically, every
quantum channel $\mathcal{E}(\cdot)$ is a linear, completely positive, and
trace-preserving map [45]. A special quantum channel is the depolarizing
channel, which is defined as
$\mathcal{E}_{p}(\rho)=(1-p)\rho+p\frac{\mathbb{I}_{2^{N}}}{2^{N}}.$ (4)
Intuitively, the depolarizing channel models the worst-case scenario, in which
the information of the input state is entirely lost with some
probability. The aim of quantum measurement is to extract the quantum information
of the evolved state, which contains the computation result, into
classical form. In this study, we concentrate on positive operator-valued
measures (POVMs), which are described by a collection of positive operators
$\Pi_{i}\succeq 0$ satisfying $\sum_{i}\Pi_{i}=\mathbb{I}$. Specifically,
when the measurement $\{\Pi_{i}\}$ is applied to the state $\rho$, the probability
of outcome $i$ is given by
$\Pr(i)=\operatorname{Tr}(\rho\Pi_{i}).$ (5)
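To make Eqns. (4) and (5) concrete, the following is a minimal NumPy sketch for a single qubit; the chosen state, the noise level $p=0.1$, and the computational-basis POVM are arbitrary choices for this illustration.

```python
import numpy as np

# Single-qubit state |psi> = (|0> + |1>)/sqrt(2) and its density matrix.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def depolarize(rho, p):
    # Depolarizing channel, Eqn. (4): with probability p the state is
    # replaced by the maximally mixed state I / 2^N.
    dim = rho.shape[0]
    return (1 - p) * rho + p * np.eye(dim) / dim

# Computational-basis projectors Pi_0 = |0><0|, Pi_1 = |1><1| form a POVM.
Pi = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
noisy = depolarize(rho, p=0.1)
probs = [np.trace(noisy @ P).real for P in Pi]  # Eqn. (5): Tr(rho Pi_i)
print(probs)  # [0.5, 0.5] for this particular state
```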
## Appendix B Optimization of QNNs
There are many methods to optimize Eqn. (1) when the hypothesis is represented
by Eqn. (3), including zero-order, first-order, and second-order
optimizers [43]. For clarity, here we concentrate on first-order
optimizers, where the gradients $\partial
C\left(h(\bm{\theta},\bm{x}^{(i)}),y^{(i)}\right)/\partial\bm{\theta}$ are
obtained by the parameter shift rule [21], which can be efficiently realized
on NISQ devices. Specifically, we write the parameters as
$\bm{\theta}=(\bm{\theta}_{1}^{T},\cdots,\bm{\theta}_{L}^{T})^{T}$ with
$\bm{\theta}_{l}=(\theta_{l1},\cdots,\theta_{l,k_{l}})^{T}$, where
$k_{l}$ denotes the number of parameters (equivalently, parameterized gates)
in the $l$-th learnable circuit layer. The derivative of the hypothesis
$h(\bm{\theta},\bm{x}^{(i)})$ with respect to $\theta_{l,k_{l}}$ can then be
evaluated by running the same quantum circuit twice, where the two runs differ
only in a shift of this parameter. Mathematically, since each parameterized gate
in the designed circuit is generated by a Pauli operator $P_{l,k_{l}}$, i.e.,
$U_{l,k_{l}}(\theta_{l,k_{l}})=\exp(-i\theta_{l,k_{l}}P_{l,k_{l}}/2)$, the
parameter shift rule yields the equality
$\frac{\partial
h(\bm{\theta},\bm{x}^{(i)})}{\partial{\theta}_{l,k_{l}}}=\frac{1}{2\sin\alpha}\left[h(\bm{\theta}+\alpha\bm{e}_{l,k_{l}},\bm{x}^{(i)})-h(\bm{\theta}-\alpha\bm{e}_{l,k_{l}},\bm{x}^{(i)})\right],$
(6)
where $\bm{e}_{l,k_{l}}$ is the unit vector along the $\theta_{l,k_{l}}$ axis
and $\alpha$ can be any real number except a multiple of $\pi$, at which the
denominator $2\sin\alpha$ vanishes and the expression diverges. The gradient of
$C\left(h(\bm{\theta},\bm{x}^{(i)}),y^{(i)}\right)$ then follows from
the chain rule.
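For illustration, the following PennyLane sketch implements Eqn. (6) on a toy two-qubit circuit whose parameterized gates are generated by Pauli operators; the circuit is a stand-in chosen for this example, not the exact architecture used in our experiments.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def h(theta, x):
    # Toy hypothesis circuit: a fixed feature embedding followed by
    # trainable Pauli-generated rotations, measured in the Pauli-Z basis.
    qml.RY(x[0], wires=0)
    qml.RY(x[1], wires=1)
    qml.RX(theta[0], wires=0)
    qml.CNOT(wires=[0, 1])
    qml.RX(theta[1], wires=1)
    return qml.expval(qml.PauliZ(0))

def parameter_shift_grad(theta, x, alpha=np.pi / 2):
    # Eqn. (6): two circuit evaluations per parameter, shifted by +/- alpha.
    grad = []
    for j in range(len(theta)):
        shift = np.zeros(len(theta))
        shift[j] = alpha
        grad.append((h(theta + shift, x) - h(theta - shift, x))
                    / (2 * np.sin(alpha)))
    return np.array(grad)

print(parameter_shift_grad(np.array([0.3, 0.8]), np.array([0.1, 0.2])))
```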
With the gradient $\partial
C\left(h(\bm{\theta},\bm{x}^{(i)}),y^{(i)}\right)/\partial\bm{\theta}$ at
hand, many gradient descent algorithms can be employed to find an optimal
parameter point. In this paper, we mainly consider the stochastic gradient descent
(SGD) algorithm and the stochastic quantum natural gradient descent (SQNGD)
algorithm, both of which deal effectively with the randomness of the gradient
induced by finite measurements and hardware noise. SGD is widely used in deep
learning; its strategy is to update the trainable parameters in the
steepest-descent direction indicated by the gradient. Formally, each optimization
step is given by
$\bm{\theta}_{t+1}=\bm{\theta}_{t}-\eta\nabla C\left(\bm{\theta}\right)$ (7)
where $\eta$ is the learning rate and $C(\bm{\theta})$ is shorthand for the loss
function with respect to the parameters $\bm{\theta}$.
Unlike SGD, which uses the vanilla gradient to guide optimization, SQNGD
employs the quantum natural gradient, a quantum analogue of the natural gradient,
to update the parameters. While vanilla gradient descent chooses the
steepest-descent direction in the $l_{2}$ geometry of the parameter space, which
has been shown to be sub-optimal for optimizing quantum variational
algorithms [71], quantum natural gradient descent works on the space of
quantum states equipped with a Riemannian metric tensor (the Fubini-Study
metric tensor) that measures the sensitivity of the quantum state to
variations in the parameters. This method updates each parameter with a
step size independent of the parameterization, achieving faster
convergence than SGD. Formally, we denote the quantum state produced by the
PQC as $\ket{\psi_{\bm{\theta}}}$. The optimization rule of SQNGD, involving
the pseudo-inverse $g^{+}(\bm{\theta}_{t})$ of the metric tensor, is
$\bm{\theta}_{t+1}=\bm{\theta}_{t}-\eta g^{+}(\bm{\theta}_{t})\nabla
C\left(\bm{\theta}\right),$ (8)
where $g_{ij}(\bm{\theta})=\operatorname{Re}[G_{ij}(\bm{\theta})]$ is the
Fubini-Study metric tensor, and $G_{ij}(\bm{\theta})$ is the Quantum Geometric
Tensor which can be written as
$G_{ij}(\bm{\theta})=\braket{\frac{\partial\psi_{\bm{\theta}}}{\partial{\theta}_{i}},\frac{\partial\psi_{\bm{\theta}}}{\partial{\theta}_{j}}}-\braket{\frac{\partial\psi_{\bm{\theta}}}{\partial{\theta}_{i}},\psi_{\bm{\theta}}}\braket{\psi_{\bm{\theta}},\frac{\partial\psi_{\bm{\theta}}}{\partial{\theta}_{j}}}$
(9)
with $\theta_{i}$ being the $i$-th entry of parameter vector $\bm{\theta}$.
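The update rule of Eqn. (8) can be sketched in a few lines of NumPy; here the metric tensor and the gradient are assumed to be supplied externally (PennyLane [72], for instance, provides a metric-tensor transform), and the pseudo-inverse guards against a singular metric.

```python
import numpy as np

def sqngd_step(theta, grad, metric, eta=0.01):
    """One SQNGD update, Eqn. (8): theta <- theta - eta * g^+(theta) grad.

    theta, grad: 1-D parameter and gradient vectors.
    metric: the real part of the quantum geometric tensor, Eqn. (9).
    """
    g_pinv = np.linalg.pinv(metric)  # pseudo-inverse handles singular g
    return theta - eta * g_pinv @ grad

# With the identity metric, SQNGD reduces to vanilla SGD, Eqn. (7).
theta, grad = np.array([0.3, 0.8]), np.array([0.1, -0.2])
assert np.allclose(sqngd_step(theta, grad, np.eye(2), eta=0.1),
                   theta - 0.1 * grad)
```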
## Appendix C Numerical simulation details
This section describes the configuration of the numerical simulations conducted
in Sections 3 and 4. First, we give a detailed description of the datasets used.
Next, we present the implementation details of each type of QNN. Last, we
describe the simulation hardware and the hyper-parameter settings.
### C.1 Dataset
Quantum synthetic dataset. We randomly sample classical data from a uniform
distribution and embed them into a quantum circuit by assigning each
classical feature to the rotation angle of a quantum rotation gate. After
converting the classical data $\bm{x}$ to the quantum state $\rho(\bm{x})$, we run the
quantum circuit and measure the expectation of the observable $O$.
The whole process is expressed as
$f_{\bm{\theta}}(\bm{x})=\bra{\bm{0}}U^{\dagger}(\bm{x},\bm{\theta})OU(\bm{x},\bm{\theta})\ket{\bm{0}},$
(10)
where $U$ denotes the quantum circuit, which depends on the classical sample
$\bm{x}$ and the trainable parameters $\bm{\theta}$. For the binary classification
task, the label $y$ of input $\bm{x}$ is given by
$\mathrm{sign}(f_{\bm{\theta}}(\bm{x}))$. In total, we collect 200 positive and
200 negative samples, and split them equally into training and test sets.
Wine dataset. For 1-D signals, we select the Wine Data Set from the UCI Machine
Learning Repository, whose feature dimension is close to that of the quantum
synthetic data, so the samples can be flexibly encoded into the quantum circuit. In
addition, the categories are truncated to two classes, consistent with
the quantum data.
MNIST dataset. For 2-D images, a subset of MNIST is extracted to evaluate the
performance of QCNN and CNN. The original digit images of
size $28\times 28$ are resized to $10\times 10$ to reduce resource consumption.
License: Yann LeCun and Corinna Cortes hold the copyright of the MNIST dataset,
which is a derivative work from the original NIST datasets. The MNIST dataset is
made available under the terms of the Creative Commons Attribution-Share Alike 3.0
license.
In summary, the statistical characteristics of the three datasets are listed in
Table 1.
Table 1: The three datasets used in the experiments.
Dataset | Dimension | Class number | Training samples | Test samples
---|---|---|---|---
Quantum synthetic data | 16 | 2 | 200 | 200
The Wine data | 13 | 2 | 65 | 65
MNIST | $10\times 10$ | 10 | 2000 | 2000
### C.2 Implementation details
QNNN. QNNN is divided into two blocks: a feature embedding
block and a trainable measurement block. In this paper, the feature embedding
block converts the classical data into a quantum state by setting the parameters
of quantum gates to the classical feature values. The measurement block consists of a
variational quantum circuit that linearly transforms the prepared quantum
state, followed by a measurement that computes the expectation value of the Pauli-Z
observable. The number of qubits in the experiments depends on the feature dimension
of the dataset: we employ quantum circuits with 16, 13, and 4 qubits for the
quantum synthetic data, the Wine Data Set, and MNIST, respectively. The detailed
circuit arrangement is described in Figure 8 (a).
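The following PennyLane sketch mirrors this two-block arrangement in the spirit of Figure 8 (a); the qubit count, the layer count, and the exact entangler placement here are illustrative assumptions rather than the precise experimental circuit.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 4, 2  # illustrative sizes, not the exact experimental setup
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnnn(theta, x):
    # Feature embedding block: each classical feature sets an RY angle.
    qml.AngleEmbedding(x, wires=range(n_qubits), rotation="Y")
    # Trainable measurement block: general rotations plus entanglers.
    for l in range(n_layers):
        for w in range(n_qubits):
            qml.Rot(*theta[l, w], wires=w)  # three angles per qubit
        for w in range(n_qubits - 1):
            qml.CNOT(wires=[w, w + 1])
    # Prediction: expectation value of Pauli-Z on the first qubit.
    return qml.expval(qml.PauliZ(0))

theta = np.random.uniform(0, 2 * np.pi, size=(n_layers, n_qubits, 3))
x = np.random.uniform(0, np.pi, size=n_qubits)
print(qnnn(theta, x))
```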
(a) QNNN (b) QENN
Figure 8: Layout of QNNN and QENN. (a) QNNN. Gate $R_{y}(x)$ denotes the
rotation gate generated by Pauli Y, whose angle is determined by the given
classical datum $x$. $Rot(\theta)$ represents the general rotation gate with
three rotation angles $\theta_{x},\theta_{y},\theta_{z}$. $[\cdot]_{\times l}$
denotes that the layer is repeated $l$ times. The initial state $\ket{0}^{\otimes
n}$ is processed by the whole circuit, and the prediction for input $x$ is
obtained by measuring the expectation value of the Pauli-Z observable on the
first qubit. (b) QENN. The only difference from QNNN is the embedding layer,
which contains additional trainable parameters to flexibly encode the
classical data according to the feedback of measurements. The $ZZ(\theta)$ gate is
given by $\exp(-i\frac{\theta}{2}Z^{\otimes 2})$, a two-qubit gate that
introduces entanglement.
QENN. The basic architecture of the QENN model is the same as that of QNNN,
except that the embedding layer also contains trainable parameters.
By making the feature embedding learnable, the quantum circuit forms a more
flexible and complicated transformation, which is more powerful than the
vanilla QNNN. The detailed layout of QENN is shown in Figure 8 (b).
QCNN. The QCNN employed in this paper can be regarded as a quantum version of
the classical CNN, in which the classical convolutional kernel is replaced with
a quantum circuit. In particular, we implement a $2\times
2$ kernel with a variational quantum circuit of 4 qubits and 6 trainable
parameters, as depicted in Figure 9. In addition, multiple duplicates of the
quantum circuit are introduced to simulate the multi-channel mechanism. The
feature maps returned by the quantum kernels are further processed by two fully-
connected layers of 32 and 10 nodes, respectively.
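As a hedged sketch, the quantum kernel can be written in PennyLane as follows; the encoding scale, the stride, and the exact CRZ/CRX wiring are illustrative guesses at the 6-parameter layout of Figure 9 (b), not the precise experimental circuit.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=4)

@qml.qnode(dev)
def quantum_kernel(phi, patch):
    # Encode the four pixels of a 2x2 window as RX rotation angles.
    for w in range(4):
        qml.RX(np.pi * patch[w], wires=w)
    # Trainable entangling layer with 6 parameters (cf. Fig. 9 (b));
    # the wiring below is an assumption for illustration.
    qml.CRZ(phi[0], wires=[0, 1])
    qml.CRZ(phi[1], wires=[1, 2])
    qml.CRZ(phi[2], wires=[2, 3])
    qml.CRX(phi[3], wires=[3, 0])
    qml.CRX(phi[4], wires=[0, 2])
    qml.CRX(phi[5], wires=[1, 3])
    # The quantum convolution result is <Z> on the first qubit.
    return qml.expval(qml.PauliZ(0))

def quanvolve(phi, image):
    # Slide the 2x2 kernel over the image (stride 2 assumed here).
    h, w = image.shape
    return np.array([[quantum_kernel(phi, image[i:i+2, j:j+2].flatten())
                      for j in range(0, w - 1, 2)]
                     for i in range(0, h - 1, 2)])
```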
Figure 9: Layout of CNN and QCNN. The top half shows the overall
structure shared by the two models; the difference between CNN and QCNN is
the convolution kernel. (a) CNN kernel. The classical convolution kernel is
implemented by a fully-connected layer, which takes the flattened pixels
inside a local patch as input and outputs the convolution result. (b) QCNN
kernel. First, the value of each pixel inside the kernel's window is
encoded as the angle of an $RX$ gate. Then several controlled-RZ (CRZ) and
controlled-RX (CRX) operators are employed to learn the transformation.
Finally, the expectation value of the Pauli-Z observable on the first qubit is
measured as the result of the quantum convolution.
CNN. As shown in Figure 9, the structure of CNN adopted in the experiments is
the same as that of QCNN, with the quantum convolutional kernels substituted
by classical convolutional kernels and other components unchanged.
MLP. The MLP is constructed by sequentially connecting multiple fully-connected
layers. When processing the quantum synthetic data and the Wine data,
we adopt a three-layer MLP, with the hidden-layer dimension chosen to respect
the limit on the total number of trainable parameters. For the MNIST dataset, the
original 2-D image is first flattened into a 1-D vector, which is then fed
into a three-layer MLP with 32 hidden nodes, as described in Figure 10.
The MLP designed for MNIST thus replaces the convolution layer with a
fully-connected layer, which lets us observe how quantum convolution
affects the training process.
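For reference, a minimal PyTorch sketch of this MLP is given below; the flattening, the 32 hidden nodes, and the 10 outputs follow the text and Figure 10, while the ReLU activation is an assumption.

```python
import torch.nn as nn

# A three-layer MLP for the resized 10x10 MNIST images (cf. Fig. 10).
mlp = nn.Sequential(
    nn.Flatten(),         # 10x10 image -> 100-dim vector
    nn.Linear(100, 32),   # first FC layer, 32 hidden nodes
    nn.ReLU(),            # activation (assumed; not specified in the text)
    nn.Linear(32, 10),    # second FC layer, one output per digit class
)
```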
Figure 10: Layout of the MLP for MNIST. We first convert the image to a 1-D
vector, followed by two FC (fully-connected) layers. By replacing only the
first convolutional layer of CNN and QCNN with an FC layer, we can isolate
the role of the quantum convolutional kernel in training.
### C.3 The deployed hardware and hyper-parameter settings
The QNNs referred to in this paper are implemented with Pennylane [72] and
PyTorch [73]. Due to the limited accessibility of physical quantum computers,
all experiments are simulated on classical computers with an Intel(R) Xeon(R)
Gold 6267C CPU @ 2.60GHz and 128 GB of memory. To simulate quantum noise, we
adopt the noise model of ibmq_16_melbourne, one of the IBM Quantum
Canary Processors [61].
For a fair comparison, the number of trainable parameters of the different models
for the same task is kept close, and the structure-independent hyper-parameters
are not fine-tuned for each model but kept identical within a task. Each
experiment with a specific configuration is repeated 10 times, and the average is
reported as the final result. The exact experimental setup is listed in Table 2.
Table 2: Hyper-parameter settings used in the experiments.
Model | Learning rate | Batch size | Optimizer
---|---|---|---
QNNN | 0.01 | 4 | SGD
QENN | 0.01 | 4 | SGD
QCNN | 0.001 | 5 | Adam
CNN | 0.001 | 5 | Adam
## Appendix D The performance of QNNs in the NISQ scenario
In this section, we explore the performance of QNNs in the NISQ scenario.
The results are depicted in Figure 11, where the solid and
dashed lines describe the training process on noiseless and NISQ devices,
respectively. An immediate observation is that quantum system noise largely
weakens the power of quantum models, leading to a severe accuracy drop (from
about $94\%$ to $80\%$) on the quantum synthetic data. Furthermore, the noisy
QNNs seem to completely lose the ability to fit the random data.
Figure 11: Quantum noise reduces quantum model capacity. FT denotes fault-
tolerant devices, TL is the abbreviation of "true label", and RL is the
abbreviation of "random label". When running the same quantum circuit on noisy
devices, there is a serious performance decline in both learning the original
data and fitting random labels.
## Appendix E Trainability of QNNs on other datasets
Here we conduct a series of experiments investigating the trainability of QNNs
with respect to two regularization techniques, i.e., stochastic gradient
descent (SGD) and weight decay, on the quantum synthetic dataset. The
collected results are shown in Figure 12.
Figure 12: Effects of regularization techniques on the training of QNNs on
quantum synthetic data. GD is gradient descent, SGD is stochastic gradient
descent, WD is weight decay, and N means running the experiments on NISQ chips.
SGD plays a significant role in accelerating convergence and achieving higher
accuracy, while early stopping hinders the optimization process instead of
boosting performance.
SGD. The batch SGD optimizer effectively boosts the convergence rate of QNNs
without obvious over-fitting. Specifically, in the first 20 epochs, the train
and test accuracy of QNNN (QENN) with SGD rapidly rise to nearly $95\%$
($80\%$), while the GD optimizer only yields a $3\%$ ($2\%$) accuracy gain
over the initial models. After the whole training process of 100 epochs,
there is still a $20\%$ ($30\%$) accuracy gap between QNNN (QENN) optimized by
the vanilla GD and SGD optimizers.
In view of the substantial positive effect of SGD on the training of QNNs, we
further explore how the batch size affects the trainability of QNNs.
As depicted in Figure 13, the convergence rate and accuracy vary
nonmonotonically with the batch size, and there exists an optimal batch size
which allows the best convergence rate and the highest accuracy (around 8 in
our setting).
NISQ. Although the SGD optimizer facilitates the trainability of QNNs, the system
noise in NISQ devices weakens the positive effect brought by SGD. As indicated
by the purple line in Figure 12, there is an almost $10\%$ ($5\%$) accuracy
decay for QNNN (QENN) in the first 20 training epochs in the NISQ scenario.
Meanwhile, the presence of noise dramatically amplifies the runtime of
simulating QNNs classically (please refer to Appendix F for more details).
(a) Training process (b) Accuracy vs batch size
Figure 13: The batch size of SGD significantly influences the training of quantum
models. (a) shows the convergence of quantum models under different batch-size
settings; bs abbreviates batch size. (b) shows the fluctuation of
test accuracy with respect to batch size.
Weight decay. Weight decay plays a slightly different role in the training of
QNNN and QENN. For QNNN, applying the weight decay regularizer to the SGD
optimizer has no significant effect. For QENN, which is more powerful
than QNNN, weight decay leads to faster convergence and ultimately
a higher accuracy. Note that neither QNNN nor QENN encounters an apparent
over-fitting issue on the quantum synthetic data, which leaves
little room for weight decay to show its potential in alleviating over-
fitting.
## Appendix F Runtime analysis
We compare the runtime of QNNs and DNNs when processing the Wine dataset with
a fixed number of qubits. As shown in Figure 14 (a), simulating QNNs in
the NISQ setting has the lowest efficiency, spending $126$s on average
to execute one iteration. In the fault-tolerant setting, the running
time of each optimization step drops steeply to $3$s. However, there still
exists a huge gap in time cost between simulating QNNs and DNNs on classical
computers. Another noticeable phenomenon is that noise causes a greater
negative impact on the running time of larger-scale QNNs: after applying noise
in training, QENN spends $114$s more than QNNN on every iteration, while
the corresponding increase in the fault-tolerant setting is only $1$s.
In light of the exponential growth of runtime with the scale of QNNs, we next
examine how the number of qubits affects their simulation time.
As shown in Figure 14 (b), the runtime of simulating QNNN is sensitive to the
qubit count, growing exponentially with the number of qubits,
especially in the NISQ environment. In contrast, MLP experiences
negligible growth in per-iteration time as the dimensions of
the input feature vector and hidden layer steadily increase.
Figure 14: Runtime of QNNs and DNNs simulated on classical computers. FT
abbreviates "fault-tolerant". (a) Comparison of per-iteration running time
for each setting. Currently, the time cost of simulating noisy
quantum circuits on classical computers is prohibitive, hundreds of
times higher than that of the classical counterpart. (b) As the
number of qubits required for feature embedding increases, the training time of
noisy QNNN grows exponentially, while MLP sees slow and negligible
growth in running time.
|
# Detection of large exact subgraph isomorphisms with a topology-only graphlet
index built using deterministic walks
Patrick Wang2, Department of Computer Science, University of California, Irvine, Irvine, United States, <EMAIL_ADDRESS>
Henry Ye2, Department of Computer Science, University of California, Irvine, Irvine, United States, <EMAIL_ADDRESS>
Wayne Hayes1, Department of Computer Science, University of California, Irvine, Irvine, United States, <EMAIL_ADDRESS>
1Corresponding author. 2These authors contributed equally.
###### Abstract
We introduce the first algorithm to perform topology-only local graph matching
(a.k.a. local network alignment or subgraph isomorphism): BLANT, for Basic
Local Alignment of Network Topology. BLANT first creates a limited, high-
specificity index of a single graph containing connected $k$-node induced
subgraphs called $k$-graphlets, for $k$=6–15. The index is constructed in a
deterministic way such that, if significant common network topology exists
between two networks, their indexes are likely to overlap. This is the key
insight which allows BLANT to discover alignments using only topological
information. To align two networks, BLANT queries their respective indexes to
form large, high quality local alignments. BLANT is able to discover highly
topologically similar alignments ($S^{3}\geq 0.95$) of up to 150 node-pairs
for which up to 50% of node pairs differ from their “assigned” global
counterpart. These results compare favorably against the baseline, a state-of-
the-art local alignment algorithm which was adapted to be topology-only. Such
alignments are 3x larger and differ 30% (additive) more from the global
alignment than alignments of similar topological similarity ($S^{3}\geq 0.95$)
discovered by the baseline. We hope that such regions of high local similarity
and low global similarity may provide complementary insights to global
alignment algorithms.
###### Index Terms:
network alignment, biological networks, social networks, graph indexing
## I Introduction
Network alignment, or graph matching, is a common graph-mining problem that
involves finding topologically similar regions between two or more networks.
This problem has applications in a wide variety of fields [1]. These include
bioinformatics [2], linguistics [3], neuroscience [4], social networks [5],
and image recognition [6]. Being a generalization of subgraph isomorphism, it
is NP-hard [7], so many heuristic algorithms exist to approximately solve it.
In this paper we consider only the case of aligning two networks, although our
algorithm can easily be extended to the multiple network case.
Network alignment can be either local or global [8]. Global network alignment
algorithms map all the nodes from the smallest network to the larger one(s),
which is useful for quantifying the overall similarity between the networks.
By contrast, local alignment algorithms find smaller conserved regions between
networks with high local similarity but not necessarily high global
similarity. The two approaches yield complementary insights [8, 9, 10],
motivating the need to research both types of algorithms.
In addition to the distinction between global and local, another key
characteristic of a network alignment algorithm is whether it uses information
beyond the graph’s topology. For example, an alignment algorithm may rely
heavily on domain-specific node attributes like usernames in a social network
[11], protein sequences in a biological network [12], or entity attributes in
a knowledge graph [13]. Some algorithms do not use node attributes but do rely
on pre-aligned seed nodes [14, 15]. However, these attributes or seeds are
difficult to obtain and reduce the algorithms’ generalizability (cf. §V-B).
On the other end of the spectrum, there are alignment algorithms which rely
solely on topological information [16, 17, 18]. These algorithms remove the
significant cost of gathering domain-specific attributes or seeds and can also
generalize to networks in any domain. Additionally, such algorithms may
discover unique insights hidden in the topology of the graph itself [19, 20].
However, the topology-only algorithms listed above are all global alignment
algorithms. To the best of our knowledge, there do not currently exist any
local alignment algorithms which use only topological information. BLANT seeks
to fill this gap in the state of the art.
Designing a topology-only local alignment algorithm is non-trivial, because it
is difficult to adapt existing techniques from either topology-only global
aligners or non-topology-only local aligners. Topology-only global alignment
algorithms operate on the entire graph as a whole, for example by randomly
permuting the alignment in an evolutionary manner or by embedding and aligning
both graphs (see §V-A for more discussion). These techniques optimize for
global similarity and thus miss small regions of high local similarity [8].
The technique of creating a global alignment first and then mining it for
local alignments, used by [21], does not solve this fundamental issue either,
which is why we consider such algorithms to be global alignment algorithms. In
summary, it is necessary to develop an approach which exclusively operates
locally, not globally, in order to discover locally conserved regions which
are not discovered by global alignment.
However, local approaches are intrinsically difficult to develop using only
topological information because of a fundamental tension between complexity
and information specificity. If only topology is used, individual nodes are
indistinguishable; only larger structures contain enough information
specificity in order to inform alignment. However, as the size of the desired
structures increases, the number of said structures in the graph grows
exponentially [22], resulting in unacceptable complexity. Existing non-
topology-only local aligners do not face this tension, making it difficult to
adapt such algorithms to be topology-only. With node attributes, two
individual nodes may already include enough information to be aligned. With
seeds, the algorithm may “collapse” a significant amount of complexity by
“percolating” the alignment to its neighbors.
In order to overcome this tension, BLANT uses the innovative approach of
deterministically mining graphlets—small induced subgraphs of a larger
network—from a graph to create a graphlet index. BLANT’s key insight is that,
if we assume the networks have similarity, a deterministic mining algorithm
will likely extract a similar subset of graphlets across them. This idea
resolves the tension between complexity and information, as we are able to
mine a tractable set of graphlets that are large enough that their topology
alone is usable as an identifier. Specifically, we mine graphlets of 6-15
nodes with BLANT, while the exhaustive enumeration algorithm ORCA [23] only
outputs graphlets of 5 nodes as anything more is intractable. Additionally,
the inclusion of a graphlet into the deterministically mined set is, in
itself, important information. Even if a certain graphlet shape appears
hundreds of times throughout two graphs, if that shape only appears once in
both deterministic indexes, the two graphlets are likely to be counterparts.
This deterministic approach is in contrast to existing graph indexing
techniques which either exhaustively enumerate structures (such as paths,
trees, graphlets, etc.) or stochastically sample a large subset of structures
in a graph [24]. The key difference between BLANT’s index and existing indexes
is that BLANT performs an existence check, while other indexes are used for
non-existence checks (see §V-C for additional discussion). Because the goal of
other indexes is to rule out the existence of a structure in a graph, they
need to be fairly exhaustive. As a result, they are time and memory intensive:
the fastest algorithm evaluated in [24] took 3 hours and 50GB to index a graph
of 2000 nodes. On the other hand, because BLANT’s index is decidedly non-
exhaustive, we only require 1 hour and 25MB to index a graph that is 10x
larger: 20000 nodes. BLANT uses non-exhaustion as a strength, because the fact
that a given graphlet is included in an extremely narrow index at all is
valuable information when performing alignment, as described in the previous
paragraph.
After aligning a set of deterministically mined graphlets, BLANT then merges
the aligned graphlets of 6-15 nodes together in order to create large
graphlets. BLANT is able to discover highly topologically similar alignments
($S^{3}\geq 0.95$) of up to 150 node-pairs for which up to 50% of node pairs
differ from their “assigned” global counterpart on a majority of the protein-
protein interaction (PPI) network pairs in the Integrated Interactions
Database (IID) [25]. Additionally, BLANT is able to output $S^{3}\geq 0.95$
alignments of 50 nodes for temporal network pairs from the Stanford Network
Analysis Project database with 1% - 5% noise level [26] with 50% global
dissimilarity. There are no direct competitors to BLANT as it is the first
topology-only local alignment algorithm, but these results outperform those
gathered by a version of a state-of-the-art local alignment algorithm
(AlignMCL) which was adapted to be topology-only by [8]. Compared to topology-
only AlignMCL, BLANT’s results are 3x larger and have 30% (additive) more
global dissimilarity on the IID network pairs. They are 1.5x larger and have
30% more global dissimilarity on the temporal network pairs. BLANT achieves
this with a lower time complexity: $O(n)$ on a database of networks versus
AlignMCL's $O(n^{2})$, with lower wall-clock time on our testbed (§IV-D).
## II Background
### II-A Graphlets, Orbits, Ambiguity
Graphlets are defined as “small” induced subgraphs of a network. They were
originally exhaustively enumerated up to $k=5$ nodes [27], then $k=6$ nodes
[28, 29], though BLANT is able to identify any graphlet up to $k=8$ nodes.
Fig. 1 shows the complete list of $k$-graphlets for $k=3,4$. BLANT automates
the process up to $k=8$, in which there are $n_{k}=12,346$ unique graphlets.
Each $k$-graphlet is assigned a unique graphlet ID from $0$ to $n_{k}-1$
inclusive.
Figure 1: All graphlets of sizes $k=3$ and $4$ nodes, and their automorphism orbits; within each graphlet, nodes of equal shading are in the same orbit. This figure is taken from [28]. (Note that BLANT uses different, automatically-generated IDs than [28].)
TABLE I: Number of Graphlets and Unambiguous Graphlets
k | # Graphlets | # Unambig.
---|---|---
2 | 1 | 0
3 | 2 | 0
4 | 6 | 0
5 | 21 | 0
6 | 112 | 8
7 | 853 | 144
8 | 11117 | 3552
An orbit is a set of nodes that are topologically identical within a graphlet;
i.e., they can be swapped without changing the graphlet. In Fig. 1, nodes are
shaded based on their orbit. For example, the first 4-node graphlet in Fig. 1
is the path of length 3, which has two orbits: the two middle nodes
participate in one orbit, while the two end-nodes form another.
A graphlet is defined as “ambiguous” if it contains at least two nodes of the
same orbit, because there then exists more than one way to create a
topologically perfect alignment between two instances of the graphlet: any nodes
of the same orbit may be swapped without affecting isomorphism. For example, when
aligning two length-2 paths A-B-C and D-E-F, A and C may each be aligned with
either D or F, as the two endpoints of the path are in the same orbit. An
“unambiguous” graphlet is simply a graphlet in which each node occupies a unique
orbit; there is only one way to create a topologically perfect alignment between
two instances of an unambiguous graphlet. The smallest unambiguous graphlet has
6 nodes, as shown in Table I. Some examples of unambiguous graphlets are shown in
Fig. 2. To avoid the combinatorial explosion resulting from aligning ambiguous
graphlets, from this point onwards we use only unambiguous ones; note that there
do not exist unambiguous graphlets with fewer than 6 nodes, meaning the
“traditional” set of graphlets with up to only 5 nodes is insufficient.
Figure 2: Examples of unambiguous graphlets on $k=6,7,8$ nodes. Each node in
each graphlet exists in a unique orbit. No unambiguous graphlets exist for
$k\leq 5$.
### II-B Alignments, and Types of Alignment Similarity
We define an alignment as a 1-to-1 mapping from a set of nodes in one network
to a set of nodes in another network. The number of nodes being mapped can
range anywhere from 1 to the number of nodes in the smaller network. Fig. 3
gives an example of an alignment of 3 node pairs between one network of 4
nodes and one network of 5 nodes.
The topological similarity of an alignment measures how close the aligned
(sub)graphs are to being isomorphic. We use the symmetric
substructure score ($S^{3}$, [30]), which gives the fraction of
conserved edges in an alignment over the total number of edges in the
alignment (cf. Fig. 3).
Figure 3: An illustration of the symmetric substructure score ($S^{3}$). In
the figure, an alignment of 3 node pairs is created between a graph of size 4
and a graph of size 5. The alignment contains 3 edges in total, of which 1 is
a “conserved edge”, i.e., an edge which appears in both networks. Thus, the
$S^{3}$ score is 0.33.
The functional similarity of an alignment relates to what the nodes and edges
are, or what they do, in the real world. We make the simplifying assumption that
each node has a single functional “counterpart” in the other network, which is
mostly true in real-world graphs. For example, in social networks, two nodes
in different networks are counterparts if they refer to the same person
(1-to-1 except for duplicate accounts); in protein-protein interaction (PPI)
networks, two proteins are counterparts if they are orthologs (descended
from common ancestors; mostly 1-to-1). When a node is aligned to its
counterpart, we say the node pair is “correctly aligned”. For functional
similarity, we use the metric node correctness (NC), which is the fraction of
node pairs in the alignment which are correctly aligned.
The reader should note the difference between “information” and “similarity”:
our algorithm uses only topological information but is evaluated based on
topological and functional similarity.
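To make the two measures concrete, the following Python sketch computes $S^{3}$ and node correctness for a 1-to-1 alignment; NetworkX is used only for convenience, and the toy graphs reproduce the one-conserved-edge-in-three situation of Fig. 3.

```python
import networkx as nx  # used for convenience only

def s3(G1, G2, align):
    """Symmetric substructure score; align maps nodes of G1 to nodes of G2."""
    E1 = list(G1.subgraph(align.keys()).edges())
    E2 = {frozenset(e) for e in G2.subgraph(align.values()).edges()}
    conserved = sum(frozenset((align[u], align[v])) in E2 for u, v in E1)
    total = len(E1) + len(E2) - conserved  # edges appearing on either side
    return conserved / total if total else 1.0

def node_correctness(align, counterpart):
    """NC: fraction of pairs aligned to their true counterpart."""
    return sum(counterpart[u] == v for u, v in align.items()) / len(align)

# Toy example with one conserved edge out of three total, as in Fig. 3.
G1 = nx.Graph([("a", "b"), ("b", "c")])
G2 = nx.Graph([("x", "y"), ("x", "z")])
align = {"a": "x", "b": "y", "c": "z"}
print(s3(G1, G2, align))  # 0.33
```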
## III Methods
### III-A Algorithm Overview
BLANT consists of three steps (cf. Fig. 4).
The first step is indexing, which takes in a single network as input and
deterministically extracts a set of graphlets from the network. In the output,
the graphlets are indexed by their graphlet ID. Each network only needs to be
indexed once, and that index may then be used to attempt aligning the network
with any other network which has been indexed.
The second step is alignment, which takes in two indexes as input and outputs
a set of aligned pairs of identical graphlets which contain between 6-15 node
pairs. The goal of our algorithm after this step is to have a set of graphlet
pairs which have both high topological and functional similarity. Perfect
topological similarity is enforced explicitly by aligning only identical
graphlets. High functional similarity is achieved through the enforcement of
perfect topological similarity, the use of determinism (cf.
§LABEL:sec:introduction), as well as the techniques described in §III-B2,
§III-D, and §III-E.
The third and final step is merging, which takes in a set of aligned graphlets
of 6-15 node pairs and outputs a single alignment of up to thousands of node
pairs. While the aligned graphlets in the previous step were required to have
identical topology, the large output alignment in this step does not need to
be topologically perfect (we only enforce an $S^{3}$ score of 0.95, cf.
§IV-C1). This step allows us to turn a large set of small, high quality
alignments into a single large, high quality alignment. In this paper, we only
aim to extract the largest and highest quality alignment we can from the set
of aligned graphlets to demonstrate the effectiveness of our approach. We
leave the problem of discovering multiple large, high quality alignments to
future work.
Figure 4: The three steps of BLANT. Note that our algorithm can be easily
extended to more than two networks, but we use two for simplicity in this
diagram. First, graphlets are deterministically extracted from each network
individually. Then, pairs of identical graphlets from different networks are
matched, creating a pool of small alignments. Finally, a subset of the pool is
merged together into one final large alignment.
### III-B Step 1: Index Creation
The basic idea of this algorithm is to recursively build graphlets by
deterministically expanding from some root node. To ensure that every node has
a chance to be in the index, this expansion is performed $n$ times, once with
each node in the network serving as the root. The algorithm builds graphlets
in a DFS-like fashion, creating a list of nodes it will search in each
recursive step. It iterates through the list of nodes, adding each one
individually to its working set of nodes and recursing from there before
backtracking and adding the next node to the working set. In order to
deterministically create this list, the algorithm selects neighboring nodes
with the highest values according to a deterministic heuristic function. We
control the exponential nature of recursive expansion by severely limiting the
size of the node list at each recursive step.
Algorithm 1 Index Creation Algorithm
function CreateIndex(Graph $G$, $k$, $D$, $f$)
sorted = $G$’s nodes sorted by $f$
index = empty str -> [graphlet] map
for $u$ in sorted do
GetNodeEntries($G$, $k$, $D$, $f$, [$u$], index)
end for
return index
end function
function GetNodeEntries($G$, $k$, $D$, $f$, $V$, index)
if $|V|=k$ then
if graphletID of $V$ is unambiguous then
// Note: graphlets are stored as lists of nodes
append $V$ to index[graphletID of $V$]
end if
else
$V_{exp}$ = GetExpandNeighbors($G$, $V$, $D$, $f$)
for $u$ in $V_{exp}$ do
$V=V\cup\{u\}$
GetNodeEntries($G$, $k$, $D$, $f$, $V$, index)
$V=V-\{u\}$
end for
end if
end function
function GetExpandNeighbors($G$, $V$, $D$, $f$)
neighs = {all neighbors of all nodes in $V$}
uniqueValues = {unique $f$ values in neighs}
expandValues = {the $D$ largest values in uniqueValues}
$V_{exp}$ = {subset of neighs with $f$ values in expandValues}
return $V_{exp}$
end function
#### III-B1 Selecting Neighbors to Expand to
The index creation algorithm relies on a heuristic function, $f(v)$, to select
which neighbors to expand to at each step. $f(v)$ assigns a topologically
meaningful value to each node $v$ in the network in a deterministic way. We
use $f(v)$=degree$(v)$ (with a slight modification), which we have found works
well enough. The modification addresses a practical problem with using
degree: the expansion from each node reaches the hubs of the network
relatively quickly, reducing the diversity of nodes in the output index. One
solution is to have $f(v)$ ignore (i.e., assign a value of 0 to) the $h$
highest-degree neighbors at each expansion step, but this yields low
performance because it is beneficial to eventually expand to the hubs. Thus,
we use a simple technique to resolve both issues: we ignore the top
$k-1-c$ highest-degree neighbors at each expansion step, where $c$ is the
number of nodes in our current set.
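A hedged Python sketch of this neighbor-selection heuristic follows; it uses NetworkX-style graph access for readability and should be read as an illustration of GetExpandNeighbors with the hub-ignoring modification, not as BLANT's exact implementation.

```python
def expand_neighbors(G, current, D, k):
    """Deterministically pick the neighbors to expand to (cf. Alg. 1).

    f(v) = degree(v), except that the top (k - 1 - c) highest-degree
    neighbors are ignored at each step, where c = len(current).
    """
    c = len(current)
    neighs = {u for v in current for u in G[v]} - set(current)
    # Sort by degree, with node ID breaking ties so the walk is deterministic.
    ranked = sorted(neighs, key=lambda u: (G.degree(u), u), reverse=True)
    ranked = ranked[max(k - 1 - c, 0):]  # skip the biggest hubs early on
    # Keep every neighbor whose degree is among the D largest unique values.
    top_values = sorted({G.degree(u) for u in ranked}, reverse=True)[:D]
    return [u for u in ranked if G.degree(u) in top_values]
```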
#### III-B2 Outputting Unambiguous Graphlets Only
The number of ways to align two graphlets of the same graphlet ID $G$ is:
$A(G)=\prod_{o=1}^{u(G)}{}_{f(G,o)}P_{f(G,o)}=\prod_{o=1}^{u(G)}f(G,o)!$ (1)
Here $u(G)$ is the number of unique orbits in $G$, and $f(G,o)$ is
the number of nodes of $G$ in orbit $o$. Nodes of the same orbit may be
permuted arbitrarily in the alignment while maintaining isomorphism, hence the
permutation operator ${}_{n}P_{n}=n!$; further, for every permutation of the
nodes in one orbit, the nodes of any other orbit may be permuted independently,
hence these counts are multiplied together.
If any orbit has $f(G,o)>1$, then there are multiple ways to “align” a pair of
graphlets. As a simple example, a triangle can be aligned with another triangle
in 6 possible ways: it can be rotated around a 3-cycle, then mirror-imaged
and rotated 3 more times. As defined in §II-A, an unambiguous graphlet is one
in which every node is in its own orbit, eliminating all ambiguity in how to
align it with another graphlet of the same ID.
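Since ${}_{n}P_{n}=n!$, Eqn. (1) reduces to a product of factorials; a tiny Python sketch:

```python
from math import factorial

def num_alignments(orbit_sizes):
    """A(G) of Eqn. (1), given the number of nodes in each orbit of G."""
    result = 1
    for f in orbit_sizes:
        result *= factorial(f)  # nPn = n!
    return result

# A triangle has a single orbit of 3 nodes: 3! = 6 ways to align it,
# matching the rotate-and-mirror count described in the text.
assert num_alignments([3]) == 6
# An unambiguous graphlet has every orbit of size 1, so A(G) = 1.
assert num_alignments([1] * 6) == 1
```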
Given our assumption about one-to-one node correctness (cf. §II-B), there is
only a single correct way to align two graphlets. With a topology-only
alignment algorithm, it is difficult to distinguish between different ways of
aligning two graphlets of the same graphlet ID, as all of these ways yield
topologically perfect alignments. Given that $A(G)$ grows combinatorially, the
expected functional accuracy of aligning two graphlets of ID $G$ decreases
rapidly as $f(G,o)$ increases. Fortunately, it is here that we are able to
leverage BLANT’s ability to perform ambiguity checking on graphlets of up to 8
nodes in $O(1)$ time [31] in order to output only unambiguous graphlets,
greatly improving functional accuracy.
#### III-B3 Time and Output Size Complexity
We utilize the standard output files from BLANT-sample [32, 33], based on
[31], which contain information about graphlet IDs and orbits. We use these
output files in order to perform the following operations in $O(1)$:
converting a set of nodes to a graphlet ID and checking the ambiguity of a
graphlet ID. Thus, we only perform $O(1)$ work in the base case.
In the recursive case, we visit all nodes with the top $D$ values of $f$,
visiting at most $M$ nodes where $M\geq D$ ($M$ accounts for ties). Since the
recursion depth is at most $k-1$, the time complexity for a search starting at
one node is $O(M^{k-1})$. Since we call the recursive algorithm once per node,
the overall time complexity of the algorithm is $O(nM^{k-1})$. This time
complexity depends on the number of ties in the heuristic function. In this
paper, we focus on networks which exhibit tail-heavy degree distributions
(see Fig. 6), meaning that nodes with large degree have very few ties;
thus, $M$ will be approximately equal to $D$. Additionally, since $D$ and $k$
are both small, fixed constants in practice (cf. §IV-B), the algorithm is
fixed-parameter linear.
By definition, it is impossible to generate a file at a higher size complexity
than the time complexity of the algorithm used to generate it, because the act
of writing to the file is included in the time-complexity analysis. Thus, the
output size complexity is also bounded by $O(nM^{k-1})$, though it may be
lower in practice due to duplicates in the output file.
### III-C Step 2: Alignment
The alignment algorithm takes in the indexes of any number of networks and
mines a limited list of topologically identical $k$-graphlet alignments
between them. We only output topologically identical alignments both because
of the intuitive benefits discussed in §III-A and because of the practical
benefits: it allows us to simply use key equality to determine whether two
graphlets can be aligned, and it greatly simplifies the alignment process
itself. Note that this step of the algorithm can be easily extended to take in
more than two indexes, but we will use two for simplicity.
We utilize the standard output files from BLANT-sample [32, 33], based on
[31], which contain information about the orbits of each graphlet ID. We use
this information to align the nodes of two graphlets such that all aligned
pairs are in the same orbit.
Algorithm 2 Alignment Algorithm
function FindAlignedPairs(File $F_{1}$, File $F_{2}$)
// $F_{1}$ and $F_{2}$ are the output files of Alg. 1
$I$ = PatchIndex($F_{1}$)
$J$ = PatchIndex($F_{2}$)
$S$ = empty set
for $k$ in $I$.keys $\cup$ $J$.keys do
if $|I_{k}|=1$ and $|J_{k}|=1$ then
$S=S\cup\{(I_{k_{0}},J_{k_{0}})\}$
end if
end for
return $S$
end function
function PatchIndex($F$)
$I$ = dictionary of empty sets
for each line $l$ in $F$, where each line is a graphlet do
$m$ = graphlet on line after $l$
if $m$ and $l$ have any nodes in common then
$p$ = graphlet from patching $m$ and $l$
$k$ = “patched graphlet” ID of $p$
$I_{k}=I_{k}\cup\\{p\\}$
end if
end for
return $I$
end function
### III-D Doubly-Unique Keys
In our index, the keys are graphlet IDs, which represent graphlet shapes. Say
the graphlet shape $G$ appears $n_{1}$ times in the first index and $n_{2}$
times in the second index. Given our assumption about one-to-one node
correctness (cf. §II-B), the maximum possible number of correct alignments is
$\min(n_{1},n_{2})$. However, because it is difficult to prune these pairs
when using only topology, we must output all $n_{1}n_{2}$ pairs. Assuming no
node overlap in the alignments, our node correctness on these pairs is upper-
bounded by:
$\min(n_{1},n_{2})/(n_{1}n_{2})$ (2)
Given this steep accuracy drop, we use the most stringent constraint of
$n_{1}=n_{2}=1$, which we call “doubly-unique keys”.
### III-E Patching Graphlets
A significant challenge of using doubly-unique keys when keys represent
graphlet shapes is that there are not many different graphlet shapes for
small values of $k$, especially when only considering unambiguous graphlets.
As seen in Table I, there are only 3552 different unambiguous graphlet shapes
for the largest $k$ value BLANT is capable of, $k=8$, which is not enough keys
in practice given the constraint of doubly-unique keys. In order to increase
the number of unambiguous graphlets beyond what BLANT is natively capable of, we
“patch” graphlets together after building the index. We observe that BLANT
outputs graphlets with a high degree of overlap on adjacent lines of the
output index file, so we simply check every line against the line directly
below it and patch the two graphlets together if they contain at least one node
in common.
For further clarity, Fig. 5 shows an example of two graphlets being patched
together. After patching, a new “patch ID” must be created that encodes the
shape of the larger graphlet. Although this ID is not completely unique (each
ID refers to a single shape, but multiple IDs may refer to the same shape),
we have found that canonizing the ID using a tool like NAUTY [34] decreases
accuracy and volume. We hypothesize that this is because information about the
two constituent graphlets positively informs the alignment, since it retains
how the deterministic algorithm expanded in that area of the graph.
Additionally, while the patched graphlets may not be completely unambiguous,
retaining the two constituent unambiguous graphlets always allows for an
unambiguous alignment.
Figure 5: Two 5-node graphlets with 2 nodes in common being patched together.
The overlapping nodes are not being aligned; they are the exact same node in
the exact same graph. The combined graphlet may also include additional edges
that cross the two original graphlets. The patch ID is a string containing the
graphlet ID of the first graphlet, the graphlet ID of the second graphlet, the
orbits in each graphlet which overlap, and the additional edges.
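To illustrate, the sketch below builds such a patch ID from two overlapping graphlet records and the graph's adjacency structure. The record layout, the helper name make_patch_id, and the string encoding are hypothetical, chosen only to mirror the components listed in the caption of Fig. 5.

```python
def make_patch_id(g1, g2, adj):
    """Build a patch ID for two overlapping graphlets, mirroring Fig. 5: the
    two constituent graphlet IDs, the overlapping positions (orbits), and the
    extra edges crossing the two graphlets. g1/g2 are (graphlet_id, node_tuple)
    records; adj maps each node to its neighbor set."""
    id1, nodes1 = g1
    id2, nodes2 = g2
    # positions in g2 whose node also occurs in g1 (the shared nodes)
    overlap = [i for i, n in enumerate(nodes2) if n in nodes1]
    # edges between a g1-only node and a g2-only node (the "additional edges")
    only1 = [n for n in nodes1 if n not in nodes2]
    only2 = [n for n in nodes2 if n not in nodes1]
    cross = sorted((u, v) for u in only1 for v in only2 if v in adj.get(u, ()))
    return f"{id1}|{id2}|{overlap}|{cross}"
```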
### III-F Time Complexity
As patching two graphlets takes $O(1)$ time, the step of creating the patched
index takes $O(l)$, where $l$ is the number of lines in the larger index file.
After creating patched indexes, the algorithm loops through the union of keys
in both indexes and performs $O(1)$ work on each key, as it skips keys which
are not doubly-unique. Since the total number of keys is bounded by $2l$, the
total time complexity is $O(l)$. Since the size of the index file is bounded
by $O(nM^{k-1})$ (cf. §III-B3), the time complexity of Algorithm 2 is also
$O(nM^{k-1})$.
### III-G Step 3: Merging
The merging algorithm combines the smaller alignments created by the previous
step into a larger local alignment. Before beginning the merging process, we
filter out smaller alignments which are too globally similar, as described in
§III-G1. Then, an iterative merging process begins. In each iteration, the
merging algorithm randomly adds or removes an alignment $A_{i}$, with two
constraints when adding. First, the overall topological similarity of the
merged alignment $M$ must be above some threshold after adding $A_{i}$.
Second, adding $A_{i}$ must retain a 1-to-1 matching between nodes in $M$.
After some number of iterations, we terminate and output the largest alignment
found thus far.
Algorithm 3 Merging Algorithm
function Merge([[(Node, Node)]] $A$, float $m$, int $s$, float $t$)
// $A$ is the list of small alignments from Alg. 2
// $M$ is the merged alignment, initially empty
filter out alignments in $A$ with mean ODV sim. $\geq m$
for $s$ times do
$i$ = random index of $A$
if $A_{i}$ is in $M$ then
remove $A_{i}$ from $M$
else
$o2o=IsOne2One(M,A_{i})$
$s3^{\prime}=IncS3(M,A_{i})$
if $o2o$ and $s3^{\prime}\geq t$ then
add $A_{i}$ to $M$
end if
end if
end for
return largest $M$ found
end function
#### III-G1 Filtering Out Globally Similar Alignments
As explained in §I, our goal is to discover non-trivially-sized regions within
two networks which differ from the global alignment, yet are still
topologically similar. A crucial technique which allows us to achieve this
goal is filtering out alignments which are too globally similar. In order to
predict global similarity using only topological information, we utilize the
metric of orbit degree vector (ODV) similarity [35]. ODV similarity is defined
between a pair of nodes, and is a value ranging from 0.0 to 1.0. As an
alignment is a set of node pairs, we define the ODV similarity of an alignment
as the mean ODV similarity of all of its node pairs. We filter out all input
alignments with a mean ODV similarity $\geq m$.
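A minimal sketch of this filter follows; odv_sim, a function returning the precomputed ODV similarity of two nodes, is an assumed helper, and $m=0.95$ anticipates the value selected in §IV-B.

```python
def mean_odv_similarity(alignment, odv_sim):
    """Mean ODV similarity over the node pairs of an alignment (cf. §III-G1)."""
    return sum(odv_sim(u, v) for u, v in alignment) / len(alignment)

def filter_globally_similar(alignments, odv_sim, m=0.95):
    """Keep only alignments whose mean ODV similarity is below the threshold m."""
    return [a for a in alignments if mean_odv_similarity(a, odv_sim) < m]
```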
There exists a fundamental tradeoff in the selection of $m$. The closer $m$ is
to 1.0, the more similar the final merged alignment is to being a subset of
the global alignment, which means it does not provide novel information.
However, the further away $m$ is from 1.0, the more difficult it is to create
large merged alignments with high topological similarity, because topological
similarity and global similarity are correlated: the global alignment of any
two networks is likely to contain far more topological similarity than a
random alignment. We evaluate the optimal value of $m$ in §IV-B.
#### III-G2 Time Complexity
The merging algorithm involves a number of auxiliary data structures which
reduce its time complexity. We will briefly describe them here. First, we keep
a two sided map updated to represent $M$, allowing us to perform $IsOne2One()$
in $O(1)$ time. We also store a membership array which allows us to check
“$A_{i}$ is in $M$” in $O(1)$ time. To calculate $IncS3()$, we store the
current $S^{3}$ score of $M$ with both its numerator and denominator. We
compare each pair in $A_{i}$ with each pair in $M$, incrementing the numerator
and denominator according to the definition of $S^{3}$ (cf. Fig. 3). As the
size of $A_{i}$ is constant, $IncS3()$ takes $O(p)$ time where $p$ is the
number of pairs in $M$. The loop is repeated $s$ times, regardless of how many
addition attempts failed. Thus, the total time complexity is $O(ps)$, where
$p$ is the number of pairs in the final output alignment.
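The sketch below illustrates this bookkeeping. It assumes alignments are lists of (node, node) pairs and that each network's adjacency is a node-to-neighbor-set dictionary, so edge lookups are $O(1)$; the increment rule paraphrases the $S^{3}$ definition (conserved edges over pair-pairs with an edge on at least one side) and may differ in detail from Fig. 3.

```python
from itertools import combinations

class MergedAlignment:
    """The merged alignment M with O(1) membership and one-to-one checks,
    plus incremental S^3 bookkeeping (cf. §III-G2)."""
    def __init__(self, adj1, adj2):
        self.adj1, self.adj2 = adj1, adj2  # node -> neighbor set, per network
        self.fwd, self.bwd = {}, {}        # two-sided map representing M
        self.num = self.den = 0            # running S^3 numerator/denominator

    def is_one_to_one(self, pairs):        # IsOne2One in Alg. 3
        return all(self.fwd.get(u, v) == v and self.bwd.get(v, u) == u
                   for u, v in pairs)

    def _delta(self, new_pairs):
        """Numerator/denominator contribution of adding new_pairs to M."""
        dn = dd = 0
        checks = list(combinations(new_pairs, 2)) + \
                 [(p, q) for p in new_pairs for q in self.fwd.items()]
        for (u, v), (x, y) in checks:
            e1 = x in self.adj1.get(u, ())  # edge (u, x) in network 1?
            e2 = y in self.adj2.get(v, ())  # edge (v, y) in network 2?
            dn += e1 and e2                 # conserved on both sides
            dd += e1 or e2                  # present on at least one side
        return dn, dd

    def inc_s3(self, new_pairs):            # IncS3 in Alg. 3: O(|M|) per call
        dn, dd = self._delta(new_pairs)
        return (self.num + dn) / (self.den + dd) if self.den + dd else 1.0

    def add(self, new_pairs):               # commit an alignment into M
        dn, dd = self._delta(new_pairs)     # (removal subtracts the same delta)
        self.num += dn; self.den += dd
        for u, v in new_pairs:
            self.fwd[u] = v; self.bwd[v] = u
```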
## IV Evaluation
In this section, we fix our algorithm’s parameters and investigate its
performance on 21 large, realistic networks in the biological and social
domains. We perform an in-depth comparison on our two testbeds against a
baseline adapted from the non-topology-only local alignment setting. Finally,
we analyze the time and space requirements of our index.
### IV-A Experimental Setup
#### IV-A1 Hardware and Environment
All experiments are performed on a cluster of 96 identical machines (the
“circinus” cluster) in the Department of Computer Science at U.C. Irvine. Each
host runs Linux CentOS, has 96GB of RAM and a 24-core Intel X5680 CPU running
at 3.33GHz. Despite the large number of cores, the speed of a single core is
comparable to that of a low-end laptop. The index creation algorithm
(Algorithm 1) is part of the larger BLANT package, available on GitHub, and is
written in C. The alignment algorithm (Algorithm 2) and the merging algorithm
(Algorithm 3) are written in Python. Our baseline, AlignMCL, is also written
in Python.
#### IV-A2 Datasets
We evaluate our algorithm on two different testbeds of networks, listed in
Table II. First, we use the mammal PPI networks from the Integrated
Interactions Database [36]. The IID networks are, by far, the largest PPI
networks available. Although they are partly synthetic, they are currently the
best available approximation to “real” PPI networks. They (a) are nontrivial
in size, and (b) share what is believed to be about the same amount of
topological similarity as we expect in the (currently unknown) real networks.
Second, we use all but two of the temporal networks (incidentally all social
networks) from the Stanford Network Analysis Project database, SNAP [37, 38,
39, 40, 41, 42]. We ignore CommResistance because it is too small (fewer than
10 nodes per network). We ignore ActMOOC because it is bipartite, something we
leave for future work. It is important to note that our work does not utilize
any special characteristics of temporal networks, a fact which we think
demonstrates the generality of our approach. We have chosen to use temporal
networks simply because they allow us to create perturbations of a network in
a manner more realistic than simply removing edges at random.
TABLE II: Network Statistics
Networks | # Nodes | # Edges
---|---|---
All IIDs | 13K-18K | 256K-335K
AskUbuntu 0%-5% | 20K | 49K
BitcoinAlpha 0%-5% | 3.5K | 13K
BitcoinOTC 0%-5% | 5.5K | 19K
CollegeMsg 0%-5% | 1.0K | 5.5K
EmailEUcore 0%-5% | 682 | 2.9K
MathOverflow 0%-5% | 20K | 82K
RedditHyperlinks 0%-5% | 20K | 60K
StackOverflow 0%-5% | 20K | 191K
SuperUser 0%-5% | 20K | 80K
WikiTalk 0%-5% | 20K | 82K
Each temporal network consists of a list of edges, each marked with a
timestamp. To generate a network, we collect all edges between a start time
and an end time, in what we call a “temporal window”. We create four windows
total, and each window contains the same number of edges. (We determine this
fixed number of edges by starting from the beginning of the temporal network
and adding edges one by one until we either hit 20,000 nodes, 400,000 edges,
an edge/node ratio of 20:1, or we run out of edges. We created windows based
on nodes/edges instead of times because the distribution of edges over time
was far from linear in many networks. This lets us create networks of sizes
and densities comparable to the IID networks.) The first window begins
with the earliest edge in the network. The second window begins by shifting
the start time until we have “lost” 1% of the edges, discounting duplicate
edges which are “regained”. The third and fourth windows use 3% and 5%
shifting. The end times of each window are determined by collecting edges
until we have hit the set amount.
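The following unoptimized sketch captures this window construction under the caps stated above; the edge list is assumed to be a timestamp-sorted list of (u, v, t) triples, and the shifting loop is a simplified rendering of the "lost minus regained" bookkeeping.

```python
def take_window(edges, start, max_nodes=20_000, max_edges=400_000, max_ratio=20.0):
    """Collect distinct edges from edges[start:] until one of the caps from
    §IV-A2 is hit: 20,000 nodes, 400,000 edges, or a 20:1 edge/node ratio."""
    nodes, window = set(), set()
    for u, v, _t in edges[start:]:
        nodes.update((u, v))
        window.add((u, v))
        if (len(nodes) >= max_nodes or len(window) >= max_edges
                or len(window) >= max_ratio * len(nodes)):
            break
    return window

def shifted_window(edges, base, frac):
    """Simplified sketch of p% shifting: advance the start index until the
    fraction of base's edges lost (and never regained later) reaches frac,
    then collect a new window from there. frac is 0.01, 0.03, or 0.05."""
    start = 0
    while start < len(edges):
        remaining = {(u, v) for u, v, _t in edges[start:]}
        if len(base - remaining) >= frac * len(base):
            break
        start += 1
    return take_window(edges, start)
```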
All networks follow a tail-heavy degree distribution, as shown in the log-log
graphs in Fig. 6. This ensures that there will not be too many ties when
choosing neighbors with the highest degree, keeping our algorithm’s time
complexity tractable (§III-B3).
In total, we use 11 IID networks and 10 temporal networks. We align every IID
network against every other one, giving us 55 total IID network pairs. For
each temporal network, we align the 0% shifted with the 1%, 3%, and 5%
respectively, giving us 30 total temporal network pairs. In total, we consider
21 networks and 85 network pairs.
Figure 6: Degree distributions of the IID networks (top) and temporal networks
with 0% shifting (middle). Only 0% shifting is included because the degree
distributions are essentially unchanged for 1-5% shifting. The bottom diagram
is a version of the middle diagram with count normalized. All three diagrams
use a log scale for both axes.
#### IV-A3 Baselines
To the best of our knowledge, BLANT is the first local aligner which was
designed to use only topology. There exist many algorithms which first
generate a global alignment using only topological information before mining
that global alignment for local alignments. We discuss why we believe such
algorithms are not “true” local alignment algorithms in §IV-C4 and §V-A. That
said, we use one algorithm from this class, AlignMCL, as a baseline.
AlignMCL was originally designed to use biological information, but it can be
easily adapted to use only topological information as done in [8]. The paper
also adapted AlignNemo [43] to be topology-only, but we only use AlignMCL
because the authors (the same for both algorithms) have stated that AlignMCL
is the successor to AlignNemo. AlignMCL is a local alignment algorithm which
uses the popular idea of combining two networks into a single alignment graph
and then mining this alignment graph for higher quality local alignments. In
order to create this initial alignment graph, AlignMCL utilizes the $n$ node
pairs with the highest protein sequence similarity. Then, AlignMCL uses a
Markov Clustering Algorithm to walk the alignment graph and discover clusters,
which it outputs as local alignments.
In order to adapt AlignMCL to a topology-only setting, we follow the approach
of [8]: use the $n$ node pairs with the highest Orbit Degree Vector (ODV)
similarity [35] (which we will call “ODV pairs”), instead of protein sequence
similarity. We initially got poor performance with this approach, which we
discovered was because many nodes of degree 1 had identical ODVs and crowded
out the other node pairs. We worked around this issue by simply ignoring
degree-1 nodes when generating the ODV pairs.
#### IV-A4 Metrics
We measure the alignments in terms of size (in nodes), functional accuracy,
and topological accuracy. For functional accuracy, we use node correctness
(NC, cf. §II-B), or the fraction of node pairs which are correctly aligned.
For topological accuracy, we use symmetric substructure score ($S^{3}$, cf.
§II-B), which measures the fraction of conserved edges in the alignment.
Finally, we define a metric called “alignment score” which captures the
overall “quality” of an alignment with the formula $\mathit{score}=n\cdot
NC^{2}\cdot(S^{3})^{2}$. We square the accuracy values so that an alignment of
100 nodes and 50% accuracy does not receive the same score as an alignment of
50 nodes and 100% accuracy (“accuracy” here conceptually includes both NC and
$S^{3}$).
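As a small sketch, the score can be computed as follows; true_map (the known correct partner of each node) and the precomputed $S^{3}$ numerator and denominator are assumed inputs, and the argument names are ours.

```python
def alignment_score(alignment, true_map, conserved, total):
    """Alignment score from §IV-A4: size * NC^2 * (S^3)^2."""
    n = len(alignment)
    nc = sum(true_map.get(u) == v for u, v in alignment) / n  # node correctness
    s3 = conserved / total                                    # S^3 score
    return n * nc**2 * s3**2
```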
### IV-B Parameter Selection
The parameters $k$ and $D$ in Algorithm 1 were straightforward to select. As
mentioned in §III-E, the indexes of $k=6$ and $k=7$ contain far too many
duplicate graphlets, as there are only 8 and 144 different unambiguous
graphlets of size $k=6$ and $k=7$, respectively (cf. Table I). Selecting a
value of $D$ was also easy. $D=1$ is ruled out as it only outputs a single
graphlet per node (assuming no ties in the heuristic function), since we only
add one neighbor at each recursive step. $D=2$ already results in runtimes of
30-60 minutes for $k=8$ (cf. §IV-D). As runtime increases exponentially with
$D$ (cf. §III-B3), this essentially rules out $D>2$. Thus, we use $k=8$ and
$D=2$.
The other parameters in our algorithm are $t$, $m$, and $s$ in Algorithm 3. As
mentioned in §IV-C1, we choose $t=0.95$ to find alignments of high, but not
perfect, topological similarity. To determine $m$, we selected 3 IID network
pairs and 3 temporal network pairs and looked at both the alignment size and
node correctness (NC) of the output alignment when running Algorithm 3 with
different values of $m$. As seen in Fig. 7, the NC increases (bad)
dramatically at around $m=0.96-0.98$ for all network pairs. Before this, the
NC increases fairly slowly while the size increases (good) at a solid rate.
Thus, we chose $m=0.95$ as a good tradeoff between size and NC. Finally, to
determine $s$, we ran the merging algorithm with $s=100,000$ and observed how
the largest alignment found so far increased across iterations, as shown in
Fig. 8. As most network pairs converge quickly, we choose $s=20,000$. With
this value of $s$, our Python implementation terminates in less than 1 minute
on all network pairs.
Figure 7: How alignment size and node correctness (NC) vary based on $m$. The
top half of the figure shows results from 3 IID network pairs, while the
bottom half shows 3 temporal network pairs. Each group of two columns
represents the results on a single network pair. We selected network pairs at
varying levels of similarity, with similarity determined by our algorithm’s
performance on them. $s=5,000$ is used for all runs. Higher sizes are good (so
higher size = more green) while higher NCs are bad (so higher NC = more red).
Figure 8: The progression of the merging algorithm over iterations for all 85
network pairs. The value on the y-axis represents the number of nodes found in
an intermediate point by the algorithm, as a percentage of the number of nodes
found after 100,000 iterations. The x-axis represents iterations.
### IV-C Aggregate Performance Comparison
#### IV-C1 Running and Postprocessing BLANT
We run BLANT with the $S^{3}$ threshold of $t=0.95$, as the goal of local
alignment is to discover a local region with significant, but not necessarily
perfect, topological similarity. All other parameter choices are discussed in
§IV-B. After running BLANT, we take the largest connected alignment of BLANT’s
output alignment, where “largest connected alignment” is the largest alignment
for which the induced subgraph of the nodes in one network is connected.
#### IV-C2 Running and Postprocessing AlignMCL
We run AlignMCL with the top $n$ ODV pairs in each network, ignoring nodes of
degree 1. AlignMCL has no other parameters. AlignMCL generates hundreds of
alignments per network pair, and we process each of them the following way.
First, as many alignments are not 1-to-1, we make them 1-to-1 by removing
non-1-to-1 node pairs at random. Then, we take the largest connected alignment for
each of their alignments.
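A sketch of this postprocessing is shown below, using networkx for the connectivity check; the function names are ours and the actual pipeline's data structures may differ.

```python
import random
import networkx as nx

def make_one_to_one(pairs, seed=0):
    """Make an alignment 1-to-1 by keeping, in random order, only pairs whose
    nodes are not yet used (i.e., removing non-1-to-1 pairs at random)."""
    rng = random.Random(seed)
    pairs = list(pairs)
    rng.shuffle(pairs)
    used1, used2, kept = set(), set(), []
    for u, v in pairs:
        if u not in used1 and v not in used2:
            used1.add(u); used2.add(v); kept.append((u, v))
    return kept

def largest_connected_alignment(pairs, g1):
    """Restrict an alignment to its largest part whose induced subgraph in one
    network (here g1, an undirected networkx graph) is connected (cf. §IV-C1)."""
    sub = g1.subgraph([u for u, _ in pairs])
    if sub.number_of_nodes() == 0:
        return []
    biggest = max(nx.connected_components(sub), key=len)
    return [(u, v) for u, v in pairs if u in biggest]
```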
#### IV-C3 Comparison
In Fig. 9, we compare the performance of BLANT and AlignMCL on the IID network
pairs and the temporal network pairs, respectively.
Figure 9: Two plots comparing BLANT with AlignMCL in terms of size and node
correctness on the IID network pairs and the temporal network pairs. All
points shown have an $S^{3}$ score of $\geq 0.95$. The y-axis (node
correctness) is reversed because our goal is to generate alignments of low
node correctness.
All points in the plot represent alignments with $S^{3}\geq 0.95$. BLANT
generates a single high quality local alignment which is guaranteed to have
$S^{3}\geq 0.95$ because $t$ is set to 0.95 (cf. §IV-B). On the other hand,
AlignMCL generates hundreds of local alignments of varying quality per network
pair. We filter out all AlignMCL alignments with $S^{3}<0.95$. Additionally,
we filter out the significant number of alignments that have $<10$ nodes,
because such alignments are trivial to find (any arbitrary aligned pair in
Step 2 of our algorithm has 8-15 nodes).
#### IV-C4 Discussion
On the IID networks, AlignMCL struggles to find large and topologically
similar alignments with a low $NC$ (lower on the y-axis means higher $NC$).
This makes sense, because their algorithm performs a step that resembles
global alignment (computing global ODV pairs) before mining the global
alignment for smaller local alignments. On the other hand, BLANT is able to
discover large ($>100$ node) alignments with high topological similarity
($S^{3}>0.95$) that differ significantly from the underlying global alignment
($NC\approx 0.5$). Such local alignments are valuable because they provide
different information than the global alignment.
On the temporal networks, BLANT does not perform as well as it does on the IID
networks. However, the relative performance of BLANT compared to IID is the
same; BLANT is generally able to find larger alignments that differ more from
the underlying global alignment. Additionally, it is notable that AlignMCL is
unable to find any alignments with $S^{3}>0.95$ that consist of at least 10
nodes on most of the temporal network pairs. Specifically, it finds 0
alignments that fit these criteria on 19/30 of the temporal network pairs
tested.
### IV-D Runtime and Storage
With runtime, our goal is to be able to run the algorithm in a reasonable
amount of time on a standard laptop. This measure is fuzzy as runtime is not a
primary concern of ours, but we speculate that a researcher (the likely user
of our algorithm) would feel that “a few hours” is a reasonable one-time cost
to index a network. As shown in Fig. 10, our algorithm’s runtime grows
linearly and takes less than an hour even for networks with 20000 nodes.
Additionally, our index is fairly small, never exceeding 25MB even for the
largest networks. We hypothesize that the significant variation in runtime is
due to the different degree distributions of different networks, which results
in different numbers of tied degrees at each expansion step (the effect of
ties on time complexity is analyzed in §III-B3). Runtime and index size are
fairly correlated, but differences between the two arise due to duplicate
graphlets.
We do not show runtime figures for the steps other than indexing. The
alignment and merging steps take less than 5 minutes each despite being
written in Python. Even though the time complexity of the alignment step is
the same as that of the indexing step, the output indexes are very small in
practice (cf. Fig. 10), so the alignment step runs very quickly.
Figure 10: Plots showing index creation time vs. network size (top) and index
size vs. network size (bottom). Index size is measured after removing
duplicate lines. The trendlines are plotted with outliers removed. Both
runtime and storage size grow approximately linearly.
We do not rigorously compare our runtime with that of AlignMCL, because the
majority of AlignMCL’s runtime is caused by a Python program we wrote which
generates the list of seed pairs based on ODV similarity, which is in a
different language than our indexing algorithm. Additionally, we do not
perform a comparison because runtime is not a major concern of ours. To give
some rough numbers, our Python program takes 5 hours to run for the largest
graphs, and AlignMCL’s own program takes 60 minutes for the largest graphs.
This is comparable to the runtime of our index creation step, except our index
creation step only needs to be run once per network while both parts of
AlignMCL need to be run once per network pair.
## V Related Work
### V-A Topology-only Global Network Alignment
Over the years, a number of global network alignment algorithms have been
developed which rely solely on topological information. One broad approach is
to start with some initial global alignment and randomly mutate it to improve
some topological measure of similarity. MAGNA++ [44] does this with a genetic
algorithm, while SANA [16] does this with simulated annealing. Other
algorithms, like NATALIE [45] and L-GRAAL [46], model the global alignment
problem as an integer linear program and use Lagrangian relaxation to produce
a solution. Many machine learning approaches exist as well, such as CONE-Align
[47] and DANA [18], which involve generating node embeddings and then aligning
two embedding distributions in a lower dimensional space. In general, these
global alignment algorithms process the entire graph as a whole while local
alignment algorithms mostly utilize local information, showing the
complementarity of these two approaches [8].
There exist algorithms, like GLAlign [21] and the adapted version of AlignMCL
[8], which combine these two approaches by first creating a global alignment
and then breaking it up into multiple local alignments. However, the local
alignments generated by these approaches can never differ significantly from
the initial global alignment used to generate them, meaning the additional
information they provide on top of the global alignment is limited (cf.
§IV-C4). Said another way, local alignment algorithms in this class do not
complement global alignment algorithms well.
### V-B Local Network Alignment with Side Information
“Side information” refers to information outside of topology, such as
node/edge attributes or pre-aligned seed nodes. Different domains tend to use
different types of side information, demonstrating the difficulty of
generalizing these algorithms. In aligning protein-protein interaction
networks, local alignment algorithms may use information such as genomic
sequence similarity [12, 43] or COG function [48]. In aligning knowledge
graphs, algorithms use information such as entity name [49] or entity
attributes (such as “age” or “population”) [13]. In aligning social networks,
algorithms most often apply percolation theory to grow alignments from a
preexisting set of aligned seed nodes [14, 15].
In some cases, local alignment algorithms which use side information can be
converted into topology-only cases, as was done for AlignMCL [12] and
AlignNemo [43] in the comparison article [8]. However, such conversions are
not possible in many cases, as mentioned in [8]. Additionally, these
conversions often weaken some of the assumptions an algorithm is built on. For
example, AlignMCL and AlignNemo rely on an auxiliary structure called an
alignment graph which represents the merging of two graphs. The accuracy of
this alignment graph degrades significantly when it is created with
topological information instead of genomic sequence information, as seen in
low performance of the topology-only version of these algorithms in [8] and in
§IV-C. This motivates the need for a local alignment algorithm which treats
topology as a first-class citizen. Additionally, the topology-only global
alignment algorithms have been demonstrated to produce complementary insights
compared with global alignment algorithms which use side information [9]. In
future work, we hope to investigate whether this result applies to local
alignment as well.
### V-C Subgraph Querying and Indexing
The graphlet index we generate has many similarities, and differences, to the
indexes produced by subgraph querying algorithms. Subgraph querying is the
problem of finding all graphs in a database D which contain a query graph q as
a subgraph [50]. As this task involves solving subgraph isomorphism, an NP-
hard problem, for each graph in D, a variety of heuristic algorithms exist.
One prominent heuristic approach is “filter-and-verification” [24], where a
graphlet index is first created for all graphs in D, and this index is used
quickly filter out graphs which definitely do not contain q. The remaining
graphs, called the candidate set, are then verified to see if they contain q.
BLANT’s graphlet index has many similarities with the indexes used for
subgraph querying. [24] delineates four characteristics in the design space of
indexes for subgraph querying that fully describe BLANT’s index. Within this
framework, BLANT’s index stores graphlets as its features (characteristic #1),
mines its features non-exhaustively (characteristic #2), uses the hash map
data structure (characteristic #3), and stores location information
(characteristic #4).
However, the main difference between the indexes of subgraph querying
algorithms and BLANT’s index is that the former is used for a non-existence
check, while the latter is used for an existence check. Thus, indexes of
subgraph querying algorithms need to be relatively exhaustive, while BLANT’s
index does not. Our core concept of determinism leverages this key insight and
allows BLANT to overcome a significant issue with subgraph querying
algorithms: the restrictively large index creation time and storage
requirements. All algorithms studied in [24] have index creation times that
grow exponentially or polynomially (with a polynomial of degree > 1), and even the
fastest evaluated algorithm, GRAPES [51], takes 3 hours just to index a graph
of 2000 nodes. The storage requirements mostly grow exponentially or
polynomially as well, and GRAPES comes with the tradeoff of requiring the most
storage: 50GB for a graph of 2000 nodes. By contrast, BLANT’s time and output
size complexity are both fixed-parameter linear and BLANT takes 1 hour and
uses 25MB to index graphs of 20000 nodes (cf. §IV-D). By deterministically
creating two indexes in the same way, we can obtain a similar slice of
graphlets in two networks that share similarity. Even though the number of
distinct graphlets grows exponentially, the number of graphlets in the
deterministic slice only needs to grow linearly.
## VI Conclusion
To the best of our knowledge, we are the first to develop a local network
alignment algorithm which relies solely on topological information. Our
algorithm takes an entirely different approach from existing topology-only
global alignment algorithms, as we only use local information instead of
processing the graph as a whole. Our algorithm also takes a different approach
from existing non-topology-only local alignment algorithms, as we use
graphlets as our basic unit of local similarity instead of individual nodes.
We utilize a key innovation, a deterministically generated graphlet index of
the network, in order to prune the exponential search space of graphlets. This
overcomes the restrictive runtimes of other graphlet indexes. Additionally,
the use of determinism is crucial as it exploits the actual similarity—if
present—among the set of networks in a way that exponentially reduces the
search space. Then, we use numerous other techniques in order to query this
index and expand the query results into large, high quality local alignments.
## Acknowledgment
We thank Brian Song and Ronit Barman for assisting with data collection and
graph generation, and Arthur Jiejie Lafrance for implementing the heuristic
function in the index creation algorithm.
## References
* [1] F. Emmert-Streib, M. Dehmer, and Y. Shi, “Fifty years of graph matching, network alignment and network comparison,” _Information Sciences_ , vol. 346, pp. 180–197, 2016.
* [2] C. Clark and J. Kalita, “A comparison of algorithms for the pairwise alignment of biological networks,” _Bioinformatics_ , vol. 30, no. 16, pp. 2351–2359, 2014.
* [3] M. Dehmer and F. Emmert-Streib, _Mining Graph Patterns in Web-based Systems: A Conceptual View_. Dordrecht: Springer Netherlands, 2011, pp. 237–253. [Online]. Available: https://doi.org/10.1007/978-90-481-9178-9_11
* [4] E. Sommerfeld and F. Sobik, _Operations on cognitive structures — their modeling on the basis of graph theory_. Berlin, Heidelberg: Springer Berlin Heidelberg, 1994, pp. 151–196. [Online]. Available: https://doi.org/10.1007/978-3-642-52064-8_5
* [5] Y. Ren, C. C. Aggarwal, and J. Zhang, “Meta diagram based active social networks alignment,” in _2019 IEEE 35th International Conference on Data Engineering (ICDE)_ , 2019, pp. 1690–1693.
* [6] S.-M. Hsieh and C.-C. Hsu, “Graph-based representation for similarity retrieval of symbolic images,” _Data & Knowledge Engineering_, vol. 65, no. 3, pp. 401–418, 2008. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0169023X07002212
* [7] M. Garey and D. Johnson, _Computers and Intractability: A Guide to the Theory of NP-Completeness_. New York: New York: W.H. Freeman, 1979.
* [8] L. Meng, A. Striegel, and T. Milenković, “Local versus global biological network alignment,” _Bioinformatics_ , vol. 32, no. 20, pp. 3155–3164, 2016.
* [9] O. Kuchaiev, T. Milenković, V. Memišević, W. Hayes, and N. Pržulj, “Topological network alignment uncovers biological function and phylogeny,” _Journal of The Royal Society Interface_ , vol. 7, no. 50, pp. 1341–1354, 2010.
* [10] R. Patro and C. Kingsford, “Global network alignment using multiscale spectral signatures,” _Bioinformatics_ , vol. 28, no. 23, pp. 3105–3114, 2012. [Online]. Available: http://bioinformatics.oxfordjournals.org/content/28/23/3105.abstract
* [11] L. Liu, B. Qu, B. Chen, A. Hanjalic, and H. Wang, “Modelling of information diffusion on social networks with applications to wechat,” _Physica A: Statistical Mechanics and its Applications_ , vol. 496, pp. 318–329, 2018. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0378437117312785
* [12] M. Mina and P. H. Guzzi, “AlignMCL: Comparative analysis of protein interaction networks through Markov clustering,” in _2012 IEEE International Conference on Bioinformatics and Biomedicine Workshops_. IEEE, 2012, pp. 174–181.
* [13] P. Buneman and S. Staworko, “Rdf graph alignment with bisimulation,” _Proceedings of the VLDB Endowment_ , vol. 9, 06 2016.
* [14] N. Korula and S. Lattanzi, “An efficient reconciliation algorithm for social networks,” _Proc. VLDB Endow._ , vol. 7, no. 5, pp. 377–388, Jan. 2014. [Online]. Available: https://doi.org/10.14778/2732269.2732274
* [15] E. Kazemi, S. Hassani, and M. Grossglauser, “Growing a graph matching from a handful of seeds,” _Proceedings of the VLDB Endowment_ , vol. 8, pp. 1010–1021, 06 2015.
* [16] N. Mamano and W. B. Hayes, “SANA: Simulated annealing far outperforms many other search algorithms for biological network alignment.” _Bioinformatics (Oxford, England)_ , vol. 33, pp. 2156–2164, 2017.
* [17] M. Heimann, H. Shen, T. Safavi, and D. Koutra, “Regal: Representation learning-based graph alignment,” _Proceedings of the 27th ACM International Conference on Information and Knowledge Management_ , Oct 2018. [Online]. Available: http://dx.doi.org/10.1145/3269206.3271788
* [18] T. Derr, H. Karimi, X. Liu, J. Xu, and J. Tang, “Deep adversarial network alignment,” _CoRR_ , vol. abs/1902.10307, 2019. [Online]. Available: http://arxiv.org/abs/1902.10307
* [19] S. Wang, G. R. S. Atkinson, and W. B. Hayes, “Sana: Cross-species prediction of gene ontology go annotations via topological network alignment,” 2022.
* [20] S. Wang, X. Chen, B. J. Frederisy, B. A. Mbakogu, A. D. Kanne, P. Khosravi, and W. B. Hayes, “On the current failure—but bright future—of topology-driven biological network alignment,” _Advances in Protein Chemistry and Structural Biology (accepted; preprint https://doi.org/10.48550/arXiv.2204.11999)_ , 2022.
* [21] M. Milano, P. H. Guzzi, and M. Cannataro, “Glalign: A novel algorithm for local network alignment,” _IEEE/ACM transactions on computational biology and bioinformatics_ , vol. 16, no. 6, pp. 1958–1969, 2018.
* [22] Y. Santoso, V. Srinivasan, and A. Thomo, “Efficient enumeration of four node graphlets at trillion-scale,” in _EDBT_ , 2020.
* [23] T. Hočevar and J. Demšar, “A combinatorial approach to graphlet counting,” _Bioinformatics_ , vol. 30, no. 4, pp. 559–565, Feb. 2014. [Online]. Available: http://dx.doi.org/10.1093/bioinformatics/btt717
* [24] F. Katsarou, N. Ntarmos, and P. Triantafillou, “Performance and scalability of indexed subgraph query processing methods,” _Proc. VLDB Endow._ , vol. 8, no. 12, pp. 1566–1577, Aug. 2015. [Online]. Available: https://doi.org/10.14778/2824032.2824054
* [25] M. Kotlyar, C. Pastrello, N. Sheahan, and I. Jurisica, “Integrated interactions database: tissue-specific view of the human and model organism interactomes,” _Nucleic acids research_ , vol. 44, no. D1, pp. D536–D541, 2015.
* [26] J. Leskovec and R. Sosič, “Snap: A general-purpose network analysis and graph-mining library,” _ACM Transactions on Intelligent Systems and Technology (TIST)_ , vol. 8, no. 1, p. 1, 2016.
* [27] N. Pržulj, D. G. Corneil, and I. Jurisica, “Modeling interactome: scale-free or geometric?” _Bioinformatics_ , vol. 20, no. 18, pp. 3508–3515, 2004. [Online]. Available: http://bioinformatics.oxfordjournals.org/content/20/18/3508.abstract
* [28] I. Melckenbeeck, P. Audenaert, T. Michoel, D. Colle, and M. Pickavet, “An algorithm to automatically generate the combinatorial orbit counting equations,” _PLoS ONE_ , vol. 11, no. 1, 2016.
* [29] I. Melckenbeeck, P. Audenaert, D. Colle, and M. Pickavet, “Efficiently counting all orbits of graphlets of any order in a graph using autogenerated equations,” _Bioinformatics_ , vol. 1, p. 9, 2017.
* [30] V. Saraph and T. Milenković, “MAGNA: maximizing accuracy in global network alignment,” _Bioinformatics_ , vol. 30, no. 20, pp. 2931–2940, 2014.
* [31] A. Hasan, P.-C. Chung, and W. Hayes, “Graphettes: Constant-time determination of graphlet and orbit identity including (possibly disconnected) graphlets up to size 8,” _PloS one_ , vol. 12, no. 8, p. e0181570, 2017.
* [32] S. Maharaj, B. Tracy, and W. B. Hayes, “BLANT - Fast Graphlet Sampling Tool,” _Bioinformatics_ , 08 2019. [Online]. Available: https://doi.org/10.1093/bioinformatics/btz603
* [33] W. Hayes and S. Maharaj, “BLANT: Sampling Graphlets in a Flash,” in _q-bio_ , 2018.
* [34] B. D. McKay, “Nauty,” 2010. [Online]. Available: http://users.cecs.anu.edu.au/~bdm/nauty
* [35] T. Milenković and N. Pržulj, “Uncovering biological network function via graphlet degree signatures,” _Cancer Informatics_ , vol. 6, pp. 257–273, 2008.
* [36] M. Kotlyar, C. Pastrello, Z. Malik, and I. Jurisica, “IID 2018 update: context-specific physical protein–protein interactions in human, model organisms and domesticated species,” _Nucleic acids research_ , vol. 47, no. D1, pp. D581–D589, 2018.
* [37] J. Leskovec and A. Krevl, “SNAP Datasets: Stanford large network dataset collection,” http://snap.stanford.edu/data, Jun. 2014.
* [38] S. Kumar, W. L. Hamilton, J. Leskovec, and D. Jurafsky, “Community interaction and conflict on the web,” in _Proceedings of the 2018 World Wide Web Conference on World Wide Web_. International World Wide Web Conferences Steering Committee, 2018, pp. 933–943.
* [39] A. Paranjape, A. R. Benson, and J. Leskovec, “Motifs in temporal networks,” in _Proceedings of the Tenth ACM International Conference on Web Search and Data Mining_ , ser. WSDM ’17. New York, NY, USA: Association for Computing Machinery, 2017, p. 601–610. [Online]. Available: https://doi.org/10.1145/3018661.3018731
* [40] P. Panzarasa, T. Opsahl, and K. Carley, “Patterns and dynamics of users’ behavior and interaction: Network analysis of an online community,” _JASIST_ , vol. 60, pp. 911–932, 05 2009.
* [41] S. Kumar, F. Spezzano, V. Subrahmanian, and C. Faloutsos, “Edge weight prediction in weighted signed networks,” in _Data Mining (ICDM), 2016 IEEE 16th International Conference on_. IEEE, 2016, pp. 221–230.
* [42] S. Kumar, B. Hooi, D. Makhija, M. Kumar, C. Faloutsos, and V. Subrahmanian, “Rev2: Fraudulent user prediction in rating platforms,” in _Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining_. ACM, 2018, pp. 333–341.
* [43] G. Ciriello, M. Mina, P. Guzzi, M. Cannataro, and C. Guerra, “Alignnemo: A local network alignment method to integrate homology and topology,” _PloS one_ , vol. 7, p. e38107, 06 2012.
* [44] V. Vijayan, V. Saraph, and T. Milenković, “Magna++: Maximizing accuracy in global network alignment via both node and edge conservation,” _Bioinformatics_ , 2015.
* [45] G. Klau, “A new graph-based method for pairwise global network alignment,” _BMC Bioinformatics_ , vol. 10, no. Suppl 1, p. S59, 2009.
* [46] N. Malod-Dognin and N. Pržulj, “L-graal: Lagrangian graphlet-based network aligner,” _Bioinformatics_ , 2015.
* [47] X. Chen, M. Heimann, F. Vahedian, and D. Koutra, “Cone-align: Consistent network alignment with proximity-preserving node embedding,” _Proceedings of the 29th ACM International Conference on Information & Knowledge Management_, 2020.
* [48] A. P. Cootes, S. H. Muggleton, and M. J. Sternberg, “The identification of similarities between biological networks: Application to the metabolome and interactome,” _Journal of Molecular Biology_ , vol. 369, no. 4, pp. 1126–1139, 2007. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0022283607003269
* [49] C. Ge, X. Liu, L. Chen, B. Zheng, and Y. Gao, “Largeea: Aligning entities for large-scale knowledge graphs,” _Proc. VLDB Endow._ , vol. 15, pp. 237–245, 2021.
* [50] D. Yuan and P. Mitra, “Lindex: A lattice-based index for graph databases,” _The VLDB Journal_ , vol. 22, no. 2, pp. 229–252, Apr. 2013. [Online]. Available: https://doi.org/10.1007/s00778-012-0284-8
* [51] R. Giugno, V. Bonnici, N. Bombieri, A. Pulvirenti, A. Ferro, and D. Shasha, “Grapes: A software for parallel searching on biological graphs targeting multi-core architectures,” _PLoS ONE_ , vol. 8, 2013.
# Full Iso-recursive Types
Litao Zhou, The University of Hong Kong, Pokfulam Road, Hong Kong, China,
<EMAIL_ADDRESS>; Qianyong Wan (0009-0005-9894-2462), The University of Hong
Kong, Pokfulam Road, Hong Kong, China,<EMAIL_ADDRESS>; and Bruno C. d. S.
Oliveira (0000-0002-1846-7210), The University of Hong Kong, Pokfulam Road,
Hong Kong, China, <EMAIL_ADDRESS>
###### Abstract.
There are two well-known formulations of recursive types: _iso-recursive_ and
_equi-recursive_ types. Abadi and Fiore (1996) have shown that iso- and equi-
recursive types have the same expressive power. However, their encoding of
equi-recursive types in terms of iso-recursive types requires explicit
coercions. These coercions come with significant additional _computational
overhead_ , and complicate reasoning about the equivalence of the two
formulations of recursive types.
This paper proposes a generalization of iso-recursive types called _full_ iso-
recursive types. Full iso-recursive types allow encoding all programs with
equi-recursive types without computational overhead. Instead of explicit term
coercions, all type transformations are captured by _computationally
irrelevant_ casts, which can be erased at runtime without affecting the
semantics of the program. Consequently, reasoning about the equivalence
between the two approaches can be greatly simplified. We present a calculus
called $\lambda^{\mu}_{Fi}$, which extends the simply typed lambda calculus
(STLC) with full iso-recursive types. The $\lambda^{\mu}_{Fi}$ calculus is
proved to be type sound, and shown to have the same expressive power as a
calculus with equi-recursive types. We also extend our results to subtyping,
and show that equi-recursive subtyping can be expressed in terms of iso-
recursive subtyping with cast operators.
Recursive types, Subtyping, Type system
## 1\. Introduction
Recursive types are used in many programming languages to express recursive
data structures, or recursive interfaces. There are two well-known
formulations of recursive types: _iso-recursive_ and _equi-recursive_ types.
With equi-recursive types (Morris, 1968), a recursive type $\mu\alpha.~{}A$
and its unfolding $A[\alpha\mapsto\mu\alpha.~{}A]$ are equal, since they
represent the same infinite tree (Amadio and Cardelli, 1993). With iso-
recursive types, a recursive type is only isomorphic to its unfolding (Crary
et al., 1999). To witness the isomorphism, explicit fold and unfold operators
are used.
Because both formulations provide alternative ways to model recursive types,
the relationship between iso- and equi-recursive types has been a topic of
study (Abadi and Fiore, 1996; Patrignani et al., 2021; Urzyczyn, 1995).
Understanding this relationship is important to answer questions such as
whether the expressive power of the two formulations is the same or not.
Urzyczyn proved that these two formulations have the same expressive power
when the types considered are restricted to be positive. Abadi and Fiore
extended Urzyczyn’s result and showed that unrestricted formulations of iso-
and equi-recursive types also have the same expressive power, leading to the
well-known statement that “iso-recursive types have the same expressive power
as equi-recursive types”. In addition, Patrignani et al. showed that the
translation from iso-recursive to equi-recursive types is fully abstract with
respect to contextual equivalence.
However, the encoding proposed by Abadi and Fiore requires explicit coercions,
which are interpreted as functions to be evaluated at runtime. Iso-recursive
types can only encode equi-recursive types with significant additional
_computational overhead_. Moreover, these explicit coercions cannot be easily
erased and therefore complicate the reasoning about _behavioral equivalence_.
To address the latter challenge, Abadi and Fiore defined an axiomatized
program logic and showed that the iso-recursive term obtained by their
encoding behaves in the same way as the original equi-recursive term in the
logic. However, the soundness of their program logic is left as a conjecture,
since they did not consider an operational semantics in their work. Thus,
behavioral equivalence between programs written with equi-recursive and iso-
recursive types lacks a complete proof in the literature. Without introducing
explicit coercions, iso-recursive types are strictly weaker than equi-
recursive types, since the infinite tree view of equi-recursive types equates
more types than isomorphic unfoldings of recursive types.
This paper proposes a _generalization_ of iso-recursive types called _full_
iso-recursive types. Full iso-recursive types overcome the challenges of
traditional iso-recursive types in achieving the typing expressiveness and
behavioral equivalence seen in equi-recursive types. Instead of fold and
unfold operators and explicit coercions, we use a more general notion of
_computationally irrelevant cast operators_ (Sulzmann et al., 2007; Cretin,
2014), which allow transformations on any types that are equivalent in an
equi-recursive setting. Full iso-recursive types can encode _all_ programs
with equi-recursive types _without_ computational overhead, since casts can be
erased at runtime without affecting the semantics of the program.
Consequently, the semantic equivalence between programs written with equi-
recursive and full iso-recursive types is also greatly simplified, and allows
for a complete proof, compared to Abadi and Fiore’s work.
We present a calculus called $\lambda^{\mu}_{Fi}$, which extends the simply
typed lambda calculus (STLC) with full iso-recursive types. The
$\lambda^{\mu}_{Fi}$ calculus is proved to be type sound, and shown to have
the same typing power as a calculus with equi-recursive types. To prove the
latter result, we define a type-directed elaboration from the calculus with
equi-recursive types to $\lambda^{\mu}_{Fi}$, and an erasure function that
removes all casts from full iso-recursive terms to obtain equi-recursive
terms. Moreover, the termination and divergence behavior of programs is
preserved under the elaboration and erasure operations. Therefore,
$\lambda^{\mu}_{Fi}$ is sound and complete w.r.t. the calculus with equi-
recursive types in terms of both typing and dynamic semantics. On the other
hand, traditional iso-recursive types can be seen as a special case of full
iso-recursive types. One can easily recover the traditional unfold and fold
operators by using the corresponding cast operators accordingly. So all the
results for iso-recursive types can be adapted to full iso-recursive types as
well.
We also extend our results to subtyping and show that equi-recursive subtyping
can be expressed in terms of iso-recursive subtyping with cast operators.
Although subtyping between equi-recursive types (Brandt and Henglein, 1998;
Amadio and Cardelli, 1993; Gapeyev et al., 2002) and subtyping between iso-
recursive types (Abadi and Cardelli, 1996; Zhou et al., 2022) has been studied
in depth respectively in the literature, the relationship between the two
approaches has been largely unexplored. We revisit Amadio and Cardelli
(1993)’s seminal work on equi-recursive subtyping and observe that an equi-
recursive subtyping relation can be decomposed into a combination of equi-
recursive equalities and an iso-recursive subtyping relation. Since our cast
operators can capture all the equi-recursive equalities, we can achieve a
simple encoding of equi-recursive subtyping in the setting of full iso-
recursive types with subtyping.
Full iso-recursive types open the path for new applications. For example, in
the design of realistic compilers, it is common to have source languages that
are lightweight in terms of type annotations; and target languages, which are
used internally, that are heavy on annotations, but are simple to type-check.
For instance, the GHC Haskell compiler works in this way: the source language
(Haskell) has a lot of convenience via type inference, and no explicit casts
are needed in source programs. A source program is then elaborated to a
variant of System Fc (Sulzmann et al., 2007), which is a System F like
language with explicit type annotations, type applications and also explicit
casts. Our work enables designing source languages with equi-recursive types,
which are elaborated to target languages with full iso-recursive types. Equi-
recursive types offer convenience because they can avoid explicit folds and
unfolds, but type-checking is complex. With full iso-recursive types we need
to write explicit casts, but type-checking is simple. Thus we can have an
architecture similar to that of GHC. In this scenario it is important that no
computational overhead is introduced during the elaboration, which is why
using standard iso-recursive types would not be practical. In addition, source
languages could also hide explicit casts into language constructs (such as
constructors, method calls and/or pattern matching). This would be another way
to use full iso-recursive types, which is similar to current applications of
iso-recursive types.
The main contributions of this paper are as follows:
* •
Full iso-recursive types. We propose a novel formulation of recursive types,
called full iso-recursive types, which generalizes the traditional iso-
recursive fold and unfold operators to cast operators.
* •
The $\lambda^{\mu}_{Fi}$ calculus. We introduce the $\lambda^{\mu}_{Fi}$
calculus, which extends the simply typed lambda calculus with full iso-
recursive types. The calculus is equipped with a type system, a call-by-value
operational semantics, and a type soundness proof.
* •
Equivalence to equi-recursive types. We show that $\lambda^{\mu}_{Fi}$ is
equivalent to STLC extended with equi-recursive types in terms of typing and
dynamic semantics.
* •
Extension to subtyping. We present $\lambda^{\mu<:}_{Fi}$, an extension of
$\lambda^{\mu}_{Fi}$ with iso-recursive subtyping, and show the same
metatheory results for $\lambda^{\mu<:}_{Fi}$, namely, type soundness, typing
equivalence and behavioral equivalence to equi-recursive types with subtyping.
* •
Coq formalization. We provide a mechanical formalization and proofs for all
the new metatheory results of full iso-recursive types in Coq, except for
Theorem 5.5, which is adapted from the literature (Amadio and Cardelli, 1993).
## 2\. Overview
This section provides an overview of our work. We first briefly review the two
main approaches to recursive types, namely iso-recursive types and equi-
recursive types, and the relationship between the two approaches. Then we
introduce our key ideas and results.
### 2.1. Equi-recursive Types
Equi-recursive types treat recursive types and their unfoldings as equal. The
advantage of equi-recursive types is that they are simple to use, since there
is no need to insert explicit annotations in the term language to transform
between equal types, as shown in rule Typ-eq.
Typ-eq $\dfrac{\Gamma\vdash e:A\qquad A\doteq B}{\Gamma\vdash e:B}$
The metatheory of equi-recursive types has been comprehensively studied by
Amadio and Cardelli (1993). They proposed a tree model for specifying equality
(or subtyping) between equi-recursive types. In essence, two recursive types
are equal (or subtypes) if their infinite unfoldings are equal (or in a
subtyping relation). The tree model provides a clear and solid foundation for
the interpretation of equi-recursive types.
* $A\doteq B$ _(Equi-recursive Equality)_
Tyeq-contract $\dfrac{A[\alpha\mapsto B_{1}]\doteq B_{1}\qquad A[\alpha\mapsto B_{2}]\doteq B_{2}\qquad A\text{ is contractive in }\alpha}{B_{1}\doteq B_{2}}$
Tyeq-unfold $\dfrac{}{\mu\alpha.~{}A\doteq A[\alpha\mapsto\mu\alpha.~{}A]}$
Tyeq-mu-cong $\dfrac{A\doteq B}{\mu\alpha.~{}A\doteq\mu\alpha.~{}B}$
Tyeq-trans $\dfrac{A\doteq B\qquad B\doteq C}{A\doteq C}$
Tyeq-refl $\dfrac{}{A\doteq A}$
Tyeq-symm $\dfrac{A\doteq B}{B\doteq A}$
Tyeq-arr $\dfrac{A_{1}\doteq A_{2}\qquad B_{1}\doteq B_{2}}{A_{1}\rightarrow B_{1}\doteq A_{2}\rightarrow B_{2}}$
Figure 1. Amadio and Cardelli’s equi-recursive type equality.
Amadio and Cardelli also provided a rule-based axiomatization to compare equi-
recursive types, as shown in Figure 1. They proved the soundness and
completeness of the rules to the tree-based interpretation. For example, rule
Tyeq-unfold states that a recursive type is equal to its unfolding, and rule
Tyeq-mu-cong states that the equality is congruent with respect to the
recursive type operator. Rule Tyeq-contract states that two types are equal if
they are the fixpoints of the same type function $A[\alpha]$. Note that $A$
needs to be contractive in $\alpha$, i.e. either $\alpha$ is not free in $A$
or $A$ can be unfolded to a type of the form $A_{1}\rightarrow A_{2}$. This is
to prevent equating arbitrary types using non-contractive type functions, such
as when $A$ is $\alpha$. Rule Tyeq-contract allows recursive types that have
equal infinite unfoldings, but are not directly related by finite unfoldings,
to be equal. For example, let
$A[\alpha]=\texttt{Int}\to\texttt{Int}\to\alpha$, then
$B_{1}=\mu\alpha.\texttt{Int}\to\alpha$ and
$B_{2}=\mu\alpha.\texttt{Int}\to\texttt{Int}\to\alpha$ are equal according to
rule Tyeq-contract:
By rule Tyeq-contract, with premises $B_{1}\doteq A[\alpha\mapsto B_{1}]$,
$B_{2}\doteq A[\alpha\mapsto B_{2}]$, and $A$ contractive in $\alpha$, we
conclude
$\mu\alpha.~{}\texttt{Int}\to\alpha\doteq\mu\alpha.~{}\texttt{Int}\to\texttt{Int}\to\alpha$.
The second premise is immediate from rule Tyeq-unfold, since $A[\alpha\mapsto
B_{2}]$ is exactly the one-step unfolding of $B_{2}$. The missing derivation
of the first premise is: by rule Tyeq-unfold, $B_{1}\doteq\texttt{Int}\to
B_{1}$; by rule Tyeq-arr (using rule Tyeq-refl on $\texttt{Int}$ and the same
unfolding of $B_{1}$), $\texttt{Int}\to
B_{1}\doteq\texttt{Int}\to\texttt{Int}\to B_{1}$; and by rule Tyeq-trans,
$B_{1}\doteq\texttt{Int}\to\texttt{Int}\to B_{1}=A[\alpha\mapsto B_{1}]$.
Despite its equivalence to the tree model, Amadio and Cardelli’s
axiomatization is not easy to use in practice. In particular one needs to find
a generating type function $A[\alpha]$ in rule Tyeq-contract. Later on, there
have been a few alternative axiomatizations of equi-recursive types (Brandt
and Henglein, 1998; Danielsson and Altenkirch, 2010; Gapeyev et al., 2002),
which are all proved to be equivalent to the tree model. Among them, Brandt
and Henglein proposed an inductively defined relation $H\vdash A\doteq B$ for
equi-recursive type equality, shown in Figure 2. $H$ is a list of type
equality assumptions that can be used to derive the equality $A\doteq B$. New
equalities are added to $H$ every time function types are compared, as shown
in rule Tye-arrfix. Compared to rule Tyeq-contract, rule Tye-arrfix encodes
the coinductive essence of equi-recursive types in a simpler way. Therefore,
we choose Brandt and Henglein’s axiomatization as the basis for our work.
* $H\vdash A\doteq B$ _(Inductive Equi-recursive Equality)_
Tye-assump $\dfrac{A=B\in H}{H\vdash A\doteq B}$
Tye-refl $\dfrac{}{H\vdash A\doteq A}$
Tye-trans $\dfrac{H\vdash A\doteq B\qquad H\vdash B\doteq C}{H\vdash A\doteq C}$
Tye-unfold $\dfrac{}{H\vdash\mu\alpha.~{}A\doteq A[\alpha\mapsto\mu\alpha.~{}A]}$
Tye-symm $\dfrac{H\vdash A\doteq B}{H\vdash B\doteq A}$
Tye-arrfix $\dfrac{H,A_{1}\rightarrow B_{1}=A_{2}\rightarrow B_{2}\vdash A_{1}\doteq A_{2}\qquad H,A_{1}\rightarrow B_{1}=A_{2}\rightarrow B_{2}\vdash B_{1}\doteq B_{2}}{H\vdash A_{1}\rightarrow B_{1}\doteq A_{2}\rightarrow B_{2}}$
Figure 2. Brandt and Henglein’s inductively defined equi-recursive type equality.
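To make the coinductive flavor of rule Tye-arrfix concrete, here is a small Python sketch (ours, not from the cited papers) of an equality checker in the spirit of Figure 2: the assumption set $H$ grows exactly when two arrow types are compared, and the recursion terminates on contractive, regular types because only finitely many pairs of (unfoldings of) subterms can arise.

```python
# Types are 'Int', ('arrow', A, B), ('mu', var, A), or a variable name;
# variables are assumed globally unique. May loop on non-contractive
# types such as mu a. a.

def subst(t, var, s):
    """Substitution t[var -> s] (no capture, since variables are unique)."""
    if isinstance(t, str):
        return s if t == var else t
    if t[0] == 'arrow':
        return ('arrow', subst(t[1], var, s), subst(t[2], var, s))
    if t[0] == 'mu':
        return t if t[1] == var else ('mu', t[1], subst(t[2], var, s))
    return t

def equal(a, b, H=frozenset()):
    if a == b or (a, b) in H:                  # rules Tye-refl and Tye-assump
        return True
    if isinstance(a, tuple) and a[0] == 'mu':  # rule Tye-unfold (via Tye-trans)
        return equal(subst(a[2], a[1], a), b, H)
    if isinstance(b, tuple) and b[0] == 'mu':  # rule Tye-symm plus Tye-unfold
        return equal(a, subst(b[2], b[1], b), H)
    if isinstance(a, tuple) and isinstance(b, tuple) and \
            a[0] == b[0] == 'arrow':           # rule Tye-arrfix: extend H
        H2 = H | {(a, b)}
        return equal(a[1], b[1], H2) and equal(a[2], b[2], H2)
    return False

# The example from above: mu a. Int -> a  =  mu a. Int -> Int -> a
B1 = ('mu', 'a', ('arrow', 'Int', 'a'))
B2 = ('mu', 'b', ('arrow', 'Int', ('arrow', 'Int', 'b')))
assert equal(B1, B2)
```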
### 2.2. Iso-recursive Types
Iso-recursive types (Crary et al., 1999) are a different approach that treats
recursive types and their unfoldings as different, but isomorphic up to an
unfold/fold operator. With iso-recursive types foldings and unfoldings of the
recursive types must be explicitly triggered, and there is no typing rule Typ-
eq to implicitly convert between equivalent types. Rule Typ-unfold and rule
Typ-fold show the typing rules for unfolding and folding a term of recursive
types. A fold expression constructs a recursive type, while an unfold
expression opens a recursive type to its unfolding.
Typ-unfold $\dfrac{\Gamma\vdash e:\mu\alpha.~{}A}{\Gamma\vdash\textsf{unfold}\,[\mu\alpha.~{}A]\,e:A[\alpha\mapsto\mu\alpha.~{}A]}$
Typ-fold $\dfrac{\Gamma\vdash e:A[\alpha\mapsto\mu\alpha.~{}A]}{\Gamma\vdash\textsf{fold}\,[\mu\alpha.~{}A]\,e:\mu\alpha.~{}A}$
One advantage of iso-recursive types is that they are easier to extend to more
complex type systems, which may easily make the type equality relation
undecidable. Instead, iso-recursive types provide explicit control over
folding and unfolding, avoiding issues with undecidability. One disadvantage
of iso-recursive types is their inconvenience in use due to the explicit fold
and unfold operators. However, this disadvantage can be mitigated by hiding
folding and unfolding under other language constructs, such as pattern
matching, constructors or method calls (Crary et al., 1999; Lee et al., 2015;
Zhou et al., 2022; Pierce, 2002; Harper and Stone, 2000; Vanderwaart et al.,
2003; Yang and Oliveira, 2019). As we shall see in Section 2.3, a further
disadvantage of iso-recursive types is that folding and unfolding alone is not
enough to provide all of the expressive power of the type equality rules. In
some cases, explicit, computationally relevant, term coercions are necessary.
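To make the mitigation concrete, the following Haskell sketch (our own illustration, not an artifact of the works cited above) encodes $\mu\alpha.~{}\texttt{Int}\to\alpha$ with a `newtype`: the constructor plays the role of fold and the field accessor plays the role of unfold, so the explicit operators are hidden behind ordinary language constructs.

```haskell
-- A minimal sketch (ours): mu a. Int -> a, iso-recursively in Haskell.
-- The newtype constructor Fold acts as fold; the accessor as unfold.
newtype Stream = Fold { unfoldS :: Int -> Stream }

-- Applying the "function" requires an explicit unfold first.
apply1 :: Stream -> Stream
apply1 e = unfoldS e 1

-- Building a value of the recursive type requires an explicit fold.
forever :: Stream
forever = Fold (\_ -> forever)
```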
### 2.3. Relating Iso-recursive and Equi-recursive Types
The relationship between iso-recursive types and equi-recursive types has long
been a subject of study in the literature on recursive types (Abadi
and Fiore, 1996; Patrignani et al., 2021; Urzyczyn, 1995). This subsection
reviews the existing approaches to relating the two and their issues.
#### Encoding iso-recursive types.
The encoding of iso-recursive types in equi-recursive types is
straightforward, simply by erasing the fold and unfold operators (Abadi and
Fiore, 1996). Since the rule Tyeq-unfold states that a recursive type is equal
to its unfolding, it is easy to see that the encoding is type preserving. The
encoding is also behavior preserving, since the reduction rules with fold and
unfold operators will become no-ops when erased, as shown below:
Red-fld $\dfrac{e\hookrightarrow e^{\prime}}{\textsf{fold}\,[A]\,e\hookrightarrow\textsf{fold}\,[A]\,e^{\prime}}$
Red-ufd $\dfrac{e\hookrightarrow e^{\prime}}{\textsf{unfold}\,[A]\,e\hookrightarrow\textsf{unfold}\,[A]\,e^{\prime}}$
Red-elim $\dfrac{}{\textsf{unfold}\,[A]\,(\textsf{fold}\,[B]\,v)\hookrightarrow v}$
Notice that when reducing a folded or unfolded expression $e$, we
merely reduce $e$. The type $A$ does not influence the reduction of $e$.
Eventually, when $e$ reaches a value $v$, an unfold cancels a fold and we
simply obtain $v$. In other words, folding and unfolding are _computationally
irrelevant_ : they do not influence the runtime result, and can be erased, to
avoid runtime costs. Moreover, Patrignani et al. (2021) proved that the
erasure operation is fully abstract, i.e. two terms that cannot be
distinguished by any program contexts in the iso-recursive setting are also
indistinguishable in the equi-recursive setting.
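To illustrate how direct the erasure is, the following Haskell sketch (ours, over a hypothetical term datatype with invented names) implements it on a toy AST: fold and unfold nodes are simply dropped, and everything else is traversed structurally.

```haskell
-- A sketch (ours) of the erasure on a toy term AST.
data Ty = TInt | TArr Ty Ty | TVar String | TMu String Ty

data Tm
  = Var String | Lit Int | App Tm Tm | Lam String Ty Tm
  | FoldE Ty Tm    -- fold [A] e
  | UnfoldE Ty Tm  -- unfold [A] e

erase :: Tm -> Tm
erase (FoldE _ e)   = erase e                  -- drop the annotation
erase (UnfoldE _ e) = erase e                  -- drop the annotation
erase (App e1 e2)   = App (erase e1) (erase e2)
erase (Lam x a e)   = Lam x a (erase e)
erase t             = t                        -- Var, Lit unchanged
```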
#### Encoding equi-recursive types via fold and unfold
It takes more effort to encode equi-recursive types in terms of iso-recursive
types. Since equi-recursive types treat recursive types and their unfoldings
as equal, we need to insert explicit fold and unfold operators in the iso-
recursive setting to transform between equal types. For example, let $e$ be a
function that keeps taking integer arguments and returning itself, which can
be typed as a recursive type $\mu\alpha.~{}\texttt{Int}\to\alpha$. In an equi-
recursive setting, $(e\,1)$ can be typed as
$\mu\alpha.~{}\texttt{Int}\to\alpha$, by using the rule Typ-eq and rule Tyeq-
unfold to unfold the recursive type to
$\texttt{Int}\to(\mu\alpha.~{}\texttt{Int}\to\alpha)$ so that it can be
applied to the argument $1$. However, in the iso-recursive setting, we need to
insert an unfold operator to make the transformation explicit, as shown in the
following derivation:
$\dfrac{\dfrac{\vdash e:\mu\alpha.~{}\texttt{Int}\to\alpha}{\vdash\textsf{unfold}\,[\mu\alpha.~{}\texttt{Int}\to\alpha]\,e:\texttt{Int}\to(\mu\alpha.~{}\texttt{Int}\to\alpha)}\;\text{\small Typ-unfold}\qquad\vdash 1:\texttt{Int}}{\vdash(\textsf{unfold}\,[\mu\alpha.~{}\texttt{Int}\to\alpha]\,e)\,1:\mu\alpha.~{}\texttt{Int}\to\alpha}\;\text{\small Typ-app}$
#### Fold/unfold is not enough: computationally relevant explicit coercions
The above example shows that, for some equi-recursive terms, inserting fold
and unfold operators within the term language can achieve an encoding in terms
of iso-recursive types. However, this is not always the case. Recall that
$\mu\alpha.~{}\texttt{Int}\to\alpha$ and
$\mu\alpha.~{}\texttt{Int}\to\texttt{Int}\to\alpha$ are also equal in the
equi-recursive setting, but they are not directly related by fold and unfold
operators. To address this issue, Abadi and Fiore (1996) proposed an approach
to insert _explicit coercion functions_. They showed that, for any two equi-
recursive types $A$ and $B$ considered to be equal following the derivation in
Figure 1, there exists a coercion function $f:A\to B$ that can be applied to
terms of type $A$ to obtain terms of type $B$. With the coercion function,
terms that are well typed by rule Typ-tyeq can now have an encoding in terms
of iso-recursive types, possibly with the help of explicit coercion functions.
One issue is that the insertion of coercion functions affects the
computational structure of the terms. For example, assume that $e$ has a
function type
$\texttt{Int}\to\texttt{Int}\to(\mu\alpha.~{}\texttt{Int}\to\alpha)$. This
type can be partially folded to
$\texttt{Int}\to(\mu\alpha.~{}\texttt{Int}\to\alpha)$. In an equi-recursive
setting, due to the rule Typ-eq, the term $e$ can also be assigned the type
$\texttt{Int}\to(\mu\alpha.~{}\texttt{Int}\to\alpha)$, without any changes. In
an iso-recursive setting, in addition to folding and unfolding, we need
explicit coercions. The coercion function for this transformation is
$\lambda(x:\texttt{Int}\to\texttt{Int}\to\mu\alpha.~{}\texttt{Int}\to\alpha).~{}\lambda(y:\texttt{Int}).~{}\textsf{fold}\,[\mu\alpha.~{}\texttt{Int}\to\alpha]\,(x\,y)$
Now, applying the coercion function to $e$ results in a term of type
$\texttt{Int}\to(\mu\alpha.~{}\texttt{Int}\to\alpha)$. Unfortunately, such
explicit coercion functions are computationally relevant. Thus, an encoding of
equi-recursive types in terms of iso-recursive types can introduce non-trivial
computational overhead. The issue is particularly problematic because some
coercions need to essentially be _recursive_ functions. Therefore, it is
impractical to use such an encoding in a language implementation.
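The computational relevance of such coercions is easy to see in a Haskell rendering (ours; `Stream` encodes $\mu\alpha.~{}\texttt{Int}\to\alpha$): the coercion is an ordinary eta-expanding function that must actually run.

```haskell
-- A sketch (ours) of the explicit coercion from
-- Int -> Int -> (mu a. Int -> a) to Int -> (mu a. Int -> a).
newtype Stream = Fold (Int -> Stream)

coerce :: (Int -> Int -> Stream) -> Int -> Stream
coerce x = \y -> Fold (x y)  -- an ordinary function: computationally relevant
```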
#### Issues with reasoning
Explicit coercions also bring new challenges in terms of reasoning, and in
particular in proving the behavioral preservation of the encoding. Continuing
with the previous example, if we transform this resulting term back to an
equi-recursive setting, by erasing the fold and unfold operators, we will get
a term:
(1)
$(\lambda(x:\texttt{Int}\to\texttt{Int}\to\mu\alpha.~{}\texttt{Int}\to\alpha).~{}\lambda(y:\texttt{Int}).~{}(x\,y))\,e$
This term is equivalent to $e$ under $\beta$- and $\eta$-reduction, but it is
not the same as $e$ anymore. In more complicated cases, especially for
derivations involving the use of rule Tyeq-contract, the insertion of coercion
functions can lead to a significant change in the syntactic structure of the
terms, which makes it difficult to reason about the behavior preservation of
the encoding. In essence, with the rule Tyeq-contract, one needs to encode
special iterating functions to model the fixpoint of a type function. Abadi
and Fiore proved that the encoding is equivalent to the original term in an
axiomatized program logic, but the program logic itself is only
conjectured to be sound, and the authors did not consider an operational
semantics. Thus, while it is expected that the behavioral equivalence result
holds (assuming the conjecture and a suitable operational semantics), there is
no complete proof in the literature for this result.
### 2.4. Subtyping
#### Equi-recursive subtyping
It is common to extend recursive types with subtyping. For equi-recursive
types, Amadio and Cardelli proposed a set of rules, which relies on the
equality relation in Figure 1. We show some selected rules below:
ACSub-eq $\dfrac{A\doteq B}{\Sigma\vdash A\leq B}$ ACSub-arrow $\dfrac{\Sigma\vdash B_{1}\leq A_{1}\qquad\Sigma\vdash A_{2}\leq B_{2}}{\Sigma\vdash A_{1}\rightarrow A_{2}\leq B_{1}\rightarrow B_{2}}$
ACSub-rec $\dfrac{\Sigma,\alpha\leq\beta\vdash A\leq B}{\Sigma\vdash\mu\alpha.~{}A\leq\mu\beta.~{}B}$ ACSub-var $\dfrac{\alpha\leq\beta\in\Sigma}{\Sigma\vdash\alpha\leq\beta}$
Two types are in a subtyping relation if their infinite unfoldings are equal,
as shown in rule ACSub-eq. The subtyping relation is structural, as can be
seen in rule ACSub-arrow. For dealing with recursive types, rule ACSub-rec
states that two recursive types are in a subtyping relation if their recursive
bodies are subtypes, under the assumption that the recursive variables of the two
types are in a subtyping relation. The subtyping rules are also referred to as
the Amber rules, since rule ACSub-rec is adopted by the implementation of the
Amber programming language (Cardelli, 1985). The Amber rules are proved to be
sound and complete to the tree model interpretation of equi-recursive
subtyping (Amadio and Cardelli, 1993).
#### Iso-recursive subtyping
For iso-recursive types, one can replace the equi-recursive equality relation
in rule ACSub-eq with the syntactic equality relation to obtain the iso-
recursive style Amber rules. The iso-recursive Amber rules are well-known and
widely used for subtyping iso-recursive types (Abadi and Cardelli, 1996;
Bengtson et al., 2011; Chugh, 2015; Lee et al., 2015; Swamy et al., 2011;
Duggan, 2002). However, the metatheory for the iso-recursive Amber rules has
not been well studied until recently (Zhou et al., 2022, 2020). Zhou et al.
provided a new specification for iso-recursive subtyping and proved a number
of metatheory results, including type soundness, transitivity of the subtyping
relation, and equivalence to the iso-recursive Amber rules.
However, unlike type equality, the relation between equi-recursive and iso-
recursive subtyping has been less studied. One attempt that we are aware of is
the work by Ligatti et al. (2017). They provided an extension of the iso-
recursive subtyping rules to allow for subtyping between recursive types and
their unfoldings, but their rules cannot account for the full expressiveness
of equi-recursive subtyping. For example,
$\mu\alpha.~{}\texttt{Int}\to\alpha\leq\mu\alpha.~{}\texttt{Int}\to\texttt{Int}\to\alpha$
is a valid subtyping relation in the equi-recursive Amber rules using rule
ACSub-eq, but it is not derivable in Ligatti et al.’s rules.
### 2.5. Key Ideas and Results
As we have shown, encoding iso-recursive types with equi-recursive types is
simple. As for the other direction, Abadi and Fiore showed that equi-recursive
types can be encoded with iso-recursive types, which leads to a well-known
statement that “iso-recursive types have the same expressive power as equi-
recursive types”. However, their encoding involves the insertion of explicit
coercion functions, and lacks a complete proof of correctness. In our work, we
present a novel approach to iso-recursive types, full iso-recursive types,
which extends the unfold and fold operators to a more general form. We show
that full iso-recursive types and equi-recursive types can be mutually encoded
and the encoding preserves the semantic behavior. Compared to the previous
work, the correctness proof of our encoding is straightforward and
foundational, without relying on any a priori assumptions.
#### Type Casting.
The key idea of our approach is the introduction of a type casting relation
that generalizes the unfold and fold operators. Instead of allowing only the
unfold and fold operators to transform between recursive types and their
unfoldings, we allow terms of any type to be transformed to their equivalent
type using the type casting relation. The rules Typ-unfold and Typ-fold are
now replaced by the following rule:
Typ-cast $\dfrac{\Gamma\vdash e:A\qquad\cdot;\cdot\vdash A\hookrightarrow B:c}{\Gamma\vdash\textsf{cast}\,[c]\,e:B}$
The type casting relation $A\hookrightarrow B:c$ states that type $A$ can be
cast to type $B$ using the casting operator $c$. Essentially, the type casting
relation is an equivalent form of Brandt and Henglein’s type equality
relation, augmented with a casting operator $c$, a new syntactic construct to
witness the proof of the type casting relation. As we will show in the
following sections, the type casting relation is not a simple extension of
Figure 2. For example, we remove the rule Tye-symm from the type casting
relation, since it is hard to interpret in the operational semantics, and prove
that it is admissible from the remaining rules. After resolving these issues,
it is easy to encode the equi-recursive rule Typ-eq using the type casting
relation in rule Typ-cast. For instance, the encoding in (1) can now be
replaced by the following term:
$\textsf{cast}\,[\textsf{id}\to\textsf{fold}_{(\mu\alpha.~{}\texttt{Int}\to\alpha)}]\,e$
Here id is the identity casting operator, and
$\textsf{fold}_{\mu\alpha.~{}\texttt{Int}\to\alpha}$ is the casting operator
that witnesses the proof of a folding from
$\texttt{Int}\to(\mu\alpha.~{}\texttt{Int}\to\alpha)$ to
$\mu\alpha.~{}\texttt{Int}\to\alpha$. Thus, the full iso-recursive typing
rules are equivalent to the equi-recursive typing rules.
On the other hand, the unfolding and folding operators in standard iso-
recursive types can be recovered from our type casting relation. For example,
the term $(\textsf{cast}\,[\textsf{fold}_{A}]\,e)$ is essentially equivalent
to $(\textsf{fold}\,[A]\,e)$, and the term
$(\textsf{cast}\,[\textsf{unfold}_{A}]\,e)$ is equivalent to
$(\textsf{unfold}\,[A]\,e)$ in terms of typing and dynamic semantics, as we
will show in the following sections. Therefore, full iso-recursive types are a
generalization of standard iso-recursive types.
#### Push Rules.
The extension of the typing rules brings new challenges to the design of
semantics and the proof of type soundness. With the casting operator, there
can be terms that are not simple unfoldings or foldings of recursive types,
and the operational semantics needs to be extended to handle these terms. For
example, terms such as
$(\textsf{cast}\,[\textsf{id}\to\textsf{fold}_{(\mu\alpha.~{}\texttt{Int}\to\alpha)}]\,e)$,
which have no analogous representation in calculi with standard iso-recursive
types, need to be considered during reduction. To address this issue, we
introduce a set of new reduction rules to handle casting operators:
Red-cast $\dfrac{e\hookrightarrow e^{\prime}}{\textsf{cast}\,[c]\,e\hookrightarrow\textsf{cast}\,[c]\,e^{\prime}}$ Red-cast-id $\dfrac{}{\textsf{cast}\,[\textsf{id}]\,v\hookrightarrow v}$
Red-cast-arr $\dfrac{}{(\textsf{cast}\,[c_{1}\rightarrow c_{2}]\,v_{1})\,v_{2}\hookrightarrow\textsf{cast}\,[c_{2}]\,(v_{1}\,(\textsf{cast}\,[\neg c_{1}]\,v_{2}))}$
The reduction rules are designed in a call-by-value fashion, and we also
define the cast of values of function types (i.e. $\textsf{cast}\,[c_{1}\to
c_{2}]\,v$) as values. The new reduction rules in our system are referred to
as _push_ rules, since they push the casting operators inside the terms to
make the terms more reducible, as shown in rule Red-Cast-Arr. Our design is
inspired by the homonymous push rules in the design of calculi with coercions
(Cretin, 2014; Sulzmann et al., 2007). Note that the casting operator $\neg
c_{1}$ computes the dual of the casting operator $c_{1}$, which is used to
indicate the reverse transformation that $c_{1}$ represents. This operation is
necessary to ensure that the reduction is type preserving, since applying a
casting operator to the expected input type of a function is essentially
equivalent to applying the reverse casting to the actual input argument. A
running example of a reduction using the push rules is shown as follows:
$\begin{array}{rll}&(\textsf{cast}\,[\textsf{id}\to\textsf{fold}_{(\mu\alpha.~{}\texttt{Int}\to\alpha)}]\,v)\,1\\ \hookrightarrow&\textsf{cast}\,[\textsf{fold}_{(\mu\alpha.~{}\texttt{Int}\to\alpha)}]\,(v\,(\textsf{cast}\,[\textsf{id}]\,1))&(\text{Red-cast-arr})\\ \hookrightarrow&\textsf{cast}\,[\textsf{fold}_{(\mu\alpha.~{}\texttt{Int}\to\alpha)}]\,(v\,1)&(\text{Red-cast-id and Red-cast})\end{array}$
One of the key results of our work is the type soundness of the full iso-
recursive calculus, which is proved by showing that the push rules preserve
the type of the terms and the type casting relation. This is one step beyond
Brandt and Henglein’s work, in which a coercion typing rule similar to our
casting rules was introduced, but no results about the dynamic semantics were
studied. Another contribution of our work is that with full iso-recursive
types, we retain the computational structure of the terms when encoding equi-
recursive types. In other words, erasing the casting operators from the terms
will result in the original terms, which is not the case for the previous work
(Abadi and Fiore, 1996). Our casts are _computationally irrelevant_ , and
unlike regular iso-recursive types, which require computationally relevant
term coercions for some type conversions, no such coercions are needed in our
approach. For example, all the reduction steps in the example above are erased
to the original term $(v\,1)$. This round-tripping property simplifies the
correctness reasoning of the encoding.
We show that all the terms that are well-typed in the equi-recursive setting
can be encoded in the full iso-recursive setting. Furthermore, the encoding is
behavior preserving, i.e. evaluating the encoded terms will result in a value
that is equal to the value of the original equi-recursive term up to erasure.
In this sense, we get back a fully verified statement that _“full iso-
recursive types have the same expressive power as equi-recursive types”_.
#### Subtyping
Our results extend to subtyping. Our main observation for subtyping is to show
that the equi-recursive subtyping relation can be defined by a combination of
equi-recursive equality and the iso-recursive subtyping relation (Cardelli,
1985; Abadi and Cardelli, 1996; Zhou et al., 2022), as shown below:
$A\leq_{e}B\triangleq\exists\,C_{1}\,C_{2}.~{}(A\doteq
C_{1})\land(C_{1}\leq_{i}C_{2})\land(C_{2}\doteq B)$
Here $\leq_{e}$ denotes an equi-recursive subtyping relation, and $\leq_{i}$
denotes an iso-recursive subtyping relation. This alternative definition of
equi-recursive subtyping is implicit in Amadio and Cardelli’s
work, but it is somewhat hidden behind their proofs and definitions. We
reinterpret their proofs and definitions to highlight that this alternative
definition is equivalent to existing equi-recursive subtyping definitions in
the literature.
This alternative definition of equi-recursive subtyping is important because
we can reuse the existing type casting relation in the full iso-recursive
setting with subtyping. For example, given an equi-recursive term $e$ that has
the type $A$ with $A\leq_{e}B$, we can encode $e:B$ in the full iso-recursive
setting as
$((\textsf{cast}\,[c_{2}]\,(\textsf{cast}\,[c_{1}]\,e^{\prime})):B)$, in which
$c_{1}$ and $c_{2}$ are casts encoding the equality relation $A\doteq C_{1}$
and $C_{2}\doteq B$, and $e^{\prime}$ is the encoding of $(e:A)$. This term
type checks with the iso-recursive subtyping relation.
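As a minimal worked instance (with witnesses of our own choosing), consider the subtyping $\mu\alpha.~{}\texttt{Int}\to\alpha\leq_{e}\mu\alpha.~{}\texttt{Int}\to\texttt{Int}\to\alpha$ from §2.4. It can be decomposed by picking $C_{1}=C_{2}=\mu\alpha.~{}\texttt{Int}\to\texttt{Int}\to\alpha$: the first conjunct $\mu\alpha.~{}\texttt{Int}\to\alpha\doteq C_{1}$ is exactly the equi-recursive equality recalled in §2.3, the middle conjunct $C_{1}\leq_{i}C_{2}$ holds by reflexivity of iso-recursive subtyping, and the last conjunct $C_{2}\doteq B$ is reflexive, so the elaborated term needs only the cast $c_{1}$ witnessing the equality together with $c_{2}=\textsf{id}$.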
Our encoding is still computationally irrelevant in the presence of subtyping.
Thus, all the results – including type soundness, well-typed encoding, and
behavior preservation – are also applicable to the system with subtyping. This
is a significant improvement over previous work (Abadi and Fiore, 1996), which
has not studied the relationship between equi-recursive and iso-recursive
subtyping.
## 3\. A calculus with full iso-recursive types
In this section we will introduce a calculus with full iso-recursive types,
called $\lambda^{\mu}_{Fi}$. Our calculus is based on the simply typed lambda
calculus extended with recursive types and type cast operators.
### 3.1. Syntax and Well-formedness
The syntax of $\lambda^{\mu}_{Fi}$ is shown at the top of Figure 3.
Types | $A,B$ | $\Coloneqq$ | $\texttt{Int}\mid A_{1}\to A_{2}\mid\alpha\mid\mu\alpha.~{}A$
---|---|---|---
Expressions | $e$ | $\Coloneqq$ | $x\mid\textsf{n}\mid e_{1}~{}e_{2}\mid\lambda x:A.~{}e\mid\textsf{cast}\,[c]\,e$
Values | $v$ | $\Coloneqq$ | $\textsf{n}\mid\lambda x:A.~{}e\mid\textsf{cast}\,[\textsf{fold}_{A}]\,v\mid\textsf{cast}\,[c_{1}\to c_{2}]\,v$
Cast Operators | $c$ | $\Coloneqq$ | $\iota\mid\textsf{id}\mid\textsf{fold}_{A}\mid\textsf{unfold}_{A}\mid c_{1}\to c_{2}\mid c_{1};c_{2}\mid\textsf{fix}\,{\iota}.~{}c$
Type Contexts | $\Delta$ | $\Coloneqq$ | $\cdot\mid\Delta,\alpha$
Typing Contexts | $\Gamma$ | $\Coloneqq$ | $\cdot\mid\Gamma,x:A$
Type Cast Contexts | $\mathbb{E}$ | $\Coloneqq$ | $\cdot\mid\mathbb{E},\iota:A\hookrightarrow B$
* $\Delta\vdash\textit{A}$ _(Well-formed Type)_
WFT-int $\dfrac{}{\Delta\vdash\texttt{Int}}$ WFT-var $\dfrac{\alpha\in\Delta}{\Delta\vdash\alpha}$ WFT-arrow $\dfrac{\Delta\vdash A\qquad\Delta\vdash B}{\Delta\vdash A\rightarrow B}$ WFT-rec $\dfrac{\Delta,\alpha\vdash A}{\Delta\vdash\mu\alpha.~{}A}$
Figure 3. Syntax and type well-formedness of $\lambda^{\mu}_{Fi}$.
#### Types.
Meta-variables $A,B$ range over types. Types include base types (Int),
function types ($A_{1}\to A_{2}$), type variables ($\alpha$), and recursive
types ($\mu\alpha.~{}A$).
#### Expressions.
Meta-variables $e$ range over expressions. Most of the expressions are
standard, including: variables ($x$), integers (n), applications
($e_{1}~{}e_{2}$) and lambda abstractions ($\lambda x:A.~{}e$). We also have a
type cast operator ($\textsf{cast}\,[c]\,e$) that transforms the type of the
expression $e$ to an equivalent type using the cast operator $c$. The cast
operators $c$ include cast variables ($\iota$), the identity cast (id), the
fold and unfold casts ($\textsf{fold}_{A}$ and $\textsf{unfold}_{A}$), the
arrow cast ($c_{1}\to c_{2}$), the sequential cast ($c_{1};c_{2}$), and the
fixpoint cast ($\textsf{fix}\,{\iota}.~{}c$). We will define the type cast
rules for these operators in §3.2.
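As a concrete reading of this grammar, here is a direct transcription into Haskell datatypes (a sketch of ours; constructor names are invented):

```haskell
-- Types:           A, B ::= Int | A1 -> A2 | alpha | mu alpha. A
data Ty = TInt | TArr Ty Ty | TVar String | TMu String Ty

-- Cast operators:  c ::= iota | id | fold_A | unfold_A
--                      | c1 -> c2 | c1 ; c2 | fix iota. c
data Cast
  = CVar String       -- iota
  | CId               -- id
  | CFold Ty          -- fold_A
  | CUnfold Ty        -- unfold_A
  | CArr Cast Cast    -- c1 -> c2
  | CSeq Cast Cast    -- c1 ; c2
  | CFix String Cast  -- fix iota. c

-- Expressions:     e ::= x | n | e1 e2 | \x:A. e | cast [c] e
data Tm
  = Var String | Lit Int | App Tm Tm | Lam String Ty Tm
  | CastE Cast Tm
```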
#### Values.
Meta-variables $v$ range over values. Integers (n) and lambda abstractions
($\lambda x:A.~{}e$) are considered values, as is standard for a
simply typed lambda calculus. In standard iso-recursive types, the folding of
a value ($\textsf{fold}\,[A]\,v$) is a value. Therefore in our calculus its
corresponding encoding ($\textsf{cast}\,[\textsf{fold}_{A}]\,v$) is also
considered a value. We also consider arrow casts of a value
($\textsf{cast}\,[c_{1}\to c_{2}]\,v$) to be values, since they cannot be
reduced further.
#### Contexts and Well-formedness
Type contexts $\Delta$ track the bound type variables $\alpha$. A type is
well-formed if all of its free variables are in the context. The well-
formedness rules for types are standard, and shown at the bottom of Figure 3.
Typing contexts $\Gamma$ track the bound variables $x$ with their types. A
typing context is well-formed ($\vdash\Gamma$) if there are no duplicate
variables and all the types are well-formed. We also define a type cast
context $\mathbb{E}$ to keep track of the cast variables $\iota$ and the cast
operators that they are associated with. This will be used in the type cast
rules, which we will define in §3.2.
For type variables and term variables, we assume the usual notions of free and
bound variables, and the usual capture-avoiding substitution function, denoted
by $A[\alpha\mapsto B]$, that replaces the free occurrences of variable
$\alpha$ in $A$ by $B$, while avoiding the capture of any bound variable in
$A$. When needed, we assume that $\alpha$-equivalence is applied at will to
avoid the clashing of free variables.
* $\Gamma\vdash e:\textit{A}$ _(Typing)_
Typ-int $\dfrac{\vdash\Gamma}{\Gamma\vdash n:\texttt{Int}}$ Typ-var $\dfrac{\vdash\Gamma\qquad x:A\in\Gamma}{\Gamma\vdash x:A}$
Typ-abs $\dfrac{\Gamma,x:A_{1}\vdash e:A_{2}}{\Gamma\vdash\lambda x:A_{1}.~{}e:A_{1}\rightarrow A_{2}}$ Typ-app $\dfrac{\Gamma\vdash e_{1}:A_{1}\rightarrow A_{2}\qquad\Gamma\vdash e_{2}:A_{1}}{\Gamma\vdash e_{1}\,e_{2}:A_{2}}$
Typ-cast $\dfrac{\Gamma\vdash e:A\qquad\cdot;\cdot\vdash A\hookrightarrow B:c}{\Gamma\vdash\textsf{cast}\,[c]\,e:B}$
* $\Delta;\mathbb{E}\vdash\textit{A}\hookrightarrow\textit{B}:c$ _(Type Casting)_
Cast-id $\dfrac{\Delta\vdash A\qquad\vdash\mathbb{E}}{\Delta;\mathbb{E}\vdash A\hookrightarrow A:\textsf{id}}$ Cast-arrow $\dfrac{\Delta;\mathbb{E}\vdash A_{1}\hookrightarrow A_{2}:c_{1}\qquad\Delta;\mathbb{E}\vdash B_{1}\hookrightarrow B_{2}:c_{2}}{\Delta;\mathbb{E}\vdash A_{1}\rightarrow B_{1}\hookrightarrow A_{2}\rightarrow B_{2}:c_{1}\rightarrow c_{2}}$
Cast-unfold $\dfrac{\Delta\vdash\mu\alpha.~{}A\qquad\vdash\mathbb{E}}{\Delta;\mathbb{E}\vdash\mu\alpha.~{}A\hookrightarrow A[\alpha\mapsto\mu\alpha.~{}A]:\textsf{unfold}_{\mu\alpha.~{}A}}$
Cast-fold $\dfrac{\Delta\vdash\mu\alpha.~{}A\qquad\vdash\mathbb{E}}{\Delta;\mathbb{E}\vdash A[\alpha\mapsto\mu\alpha.~{}A]\hookrightarrow\mu\alpha.~{}A:\textsf{fold}_{\mu\alpha.~{}A}}$
Cast-seq $\dfrac{\Delta;\mathbb{E}\vdash A\hookrightarrow B:c_{1}\qquad\Delta;\mathbb{E}\vdash B\hookrightarrow C:c_{2}}{\Delta;\mathbb{E}\vdash A\hookrightarrow C:c_{1};c_{2}}$
Cast-var $\dfrac{\Delta\vdash A\qquad\Delta\vdash B\qquad\vdash\mathbb{E}\qquad\iota:A\hookrightarrow B\in\mathbb{E}}{\Delta;\mathbb{E}\vdash A\hookrightarrow B:\iota}$
Cast-fix $\dfrac{\Delta;\mathbb{E},\iota:A_{1}\rightarrow B_{1}\hookrightarrow A_{2}\rightarrow B_{2}\vdash A_{1}\hookrightarrow A_{2}:c_{1}\qquad\Delta;\mathbb{E},\iota:A_{1}\rightarrow B_{1}\hookrightarrow A_{2}\rightarrow B_{2}\vdash B_{1}\hookrightarrow B_{2}:c_{2}}{\Delta;\mathbb{E}\vdash A_{1}\rightarrow B_{1}\hookrightarrow A_{2}\rightarrow B_{2}:\textsf{fix}\,{\iota}.~{}(c_{1}\rightarrow c_{2})}$
Figure 4. Typing and type cast rules for $\lambda^{\mu}_{Fi}$.
### 3.2. Typing
The top of Figure 4 shows the typing rules for $\lambda^{\mu}_{Fi}$. Most
rules are standard except for the typing rule for type casting (rule Typ-
cast). This rule replaces the standard folding and unfolding rules for iso-
recursive types, as we explained in §2.5. Rule Typ-cast relies on the type
casting rules shown at the bottom of Figure 4. In the type casting judgment
$\Delta;\mathbb{E}\vdash\textit{A}\hookrightarrow\textit{B}:c$, $\Delta$ is
the type context used to ensure that all the types in the cast derivation are
well-formed. $\mathbb{E}$ keeps track of the cast variables $\iota$ that appear in
$c$ and the cast operator that they are associated with. New cast variables
are introduced when a fixpoint cast is encountered, as shown in rule Cast-fix,
which gives us the ability to encode the coinductive reasoning in equi-
recursive equalities. The cast operator $c$ in the type casting relation
essentially describes the derivation of a judgment. Our type casting rules,
ignoring the cast variables and operators, are very similar to the type
equality rules in Brandt and Henglein’s axiomatization of type equality.
Despite some subtle differences, which we will discuss in §4.2, our type
casting rules are sound and complete with respect to their type equality
rules.
###### Theorem 3.1 (Soundness and completeness of type casting).
For any types $A$ and $B$, $\cdot\vdash A\doteq B$ if and only if there exists
a cast operator $c$ such that $\cdot;\cdot\vdash A\hookrightarrow B:c$.
#### Equivalence to a calculus with equi-recursive typing.
The only difference between the equi-recursive typing rules and
$\lambda^{\mu}_{Fi}$’s typing rules is replacing type casting in rule Typ-cast
with a type equality relation. Therefore, we can give an alternative
definition of the equi-recursive typing rules in Figure 5. The elaboration parts
(following $\rhd$) are used to generate a term in $\lambda^{\mu}_{Fi}$, and can be ignored for
understanding the equi-recursive typing rules. The standard equi-recursive
type equality relation in rule Typ-eq is replaced by the type casting relation
in rule ETyp-eq. Since the two relations are equivalent by Theorem 3.1, the
typing rules in Figure 5 are equivalent to the standard equi-recursive typing
rules.
* $\Gamma\vdash_{e}e:A\;\rhd\;e^{\prime}$ _(Equi-recursive typing and full iso-recursive elaboration)_
ETyp-int $\dfrac{\vdash\Gamma}{\Gamma\vdash_{e}n:\texttt{Int}\;\rhd\;n}$ ETyp-var $\dfrac{\vdash\Gamma\qquad x:A\in\Gamma}{\Gamma\vdash_{e}x:A\;\rhd\;x}$
ETyp-abs $\dfrac{\Gamma,x:A_{1}\vdash_{e}e:A_{2}\;\rhd\;e^{\prime}}{\Gamma\vdash_{e}\lambda x:A_{1}.~{}e:A_{1}\rightarrow A_{2}\;\rhd\;\lambda x:A_{1}.~{}e^{\prime}}$
ETyp-app $\dfrac{\Gamma\vdash_{e}e_{1}:A_{1}\rightarrow A_{2}\;\rhd\;e^{\prime}_{1}\qquad\Gamma\vdash_{e}e_{2}:A_{1}\;\rhd\;e^{\prime}_{2}}{\Gamma\vdash_{e}e_{1}\,e_{2}:A_{2}\;\rhd\;e^{\prime}_{1}\,e^{\prime}_{2}}$
ETyp-eq $\dfrac{\Gamma\vdash_{e}e:A\;\rhd\;e^{\prime}\qquad\cdot;\cdot\vdash A\hookrightarrow B:c}{\Gamma\vdash_{e}e:B\;\rhd\;\textsf{cast}\,[c]\,e^{\prime}}$
Figure 5. An equivalent equi-recursive typing system and elaboration rules to
$\lambda^{\mu}_{Fi}$.
###### Theorem 3.2 (Equivalence of alternative equi-recursive typing).
For any expression $e$ and type $A$, $\Gamma\vdash_{e}e:A$ in the standard
equi-recursive typing with rule Typ-eq if and only if there exists a full iso-
recursive term $e^{\prime}$ such that $\Gamma\vdash_{e}e:A\rhd e^{\prime}$
using the rules in Figure 5.
Our alternative formulation of equi-recursive typing also provides a way to
elaborate equi-recursive terms into full iso-recursive terms, as shown by the
parts following $\rhd$ in Figure 5. The elaboration is type-directed, by inserting
the appropriate casts where a type equality is needed following the typing
derivation of the equi-recursive terms. The interesting point here is that, by
replacing Brandt and Henglein’s type equality relation with our type casting
relation, we obtain a cast $c$, which can be viewed as evidence of the type
transformation from type $A$ into type $B$. Then, we use $c$ in an explicit
cast in the elaborated $\lambda^{\mu}_{Fi}$ term, which will trigger the type
transformation in $\lambda^{\mu}_{Fi}$. Every well-typed equi-recursive term
can be elaborated into a full iso-recursive term, and every full iso-recursive
term can be erased to an equi-recursive term that has the same type. It
follows that the full iso-recursive typing rules are sound and complete with
respect to equi-recursive types:
###### Theorem 3.3 (Equi-recursive to full iso-recursive typing).
For any expressions $e$, $e^{\prime}$ and type $A$, if
$\Gamma\vdash_{e}e:A\rhd e^{\prime}$ then $\Gamma\vdash e^{\prime}:A$.
###### Theorem 3.4 (Full iso-recursive to equi-recursive typing).
For any expressions $e$ and type $A$, if $\Gamma\vdash e:A$ then
$\Gamma\vdash_{e}|e|:A\rhd e$.
In the theorem above, the full iso-recursive expressions can be erased to the
equi-recursive expressions by removing the casts. The erasure operation $|e|$
is defined as follows:
$\begin{array}[]{llcll}|\textsf{n}|&=\textsf{n}&&|x|&=x\\\
|e_{1}~{}e_{2}|&=|e_{1}|~{}|e_{2}|&&|\lambda x:A.~{}e|&=\lambda x:A.~{}|e|\\\
|\textsf{cast}\,[c]\,e|&=|e|\end{array}$
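For concreteness, this erasure can be transcribed directly into Haskell (a sketch of ours), reusing the `Ty`/`Cast`/`Tm` datatypes from the sketch in §3.1:

```haskell
-- A sketch (ours) of the erasure |e|, reusing the toy Ty/Cast/Tm
-- datatypes from the sketch in Section 3.1.
erase :: Tm -> Tm
erase (CastE _ e) = erase e                    -- |cast [c] e| = |e|
erase (App e1 e2) = App (erase e1) (erase e2)  -- homomorphic elsewhere
erase (Lam x a e) = Lam x a (erase e)
erase t           = t                          -- |n| = n and |x| = x
```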
Moreover, our elaboration achieves the round-tripping property – elaborating
an equi-recursive term into a full iso-recursive term and then erasing the
casts will get back the original equi-recursive term. This is not the case for
previous work in relating recursive types (Abadi and Fiore, 1996), in which
computationally relevant coercions are inserted as term-level functions and
erasing the unfold/fold annotations does not recover the original term. The
round-tripping property is crucial for a simple proof of the behavioral
equivalence between the two systems, which we discuss next.
###### Theorem 3.5 (Round-tripping of the encoding).
For any expression $e$, $e^{\prime}$ and type $A$, if $\Gamma\vdash_{e}e:A\rhd
e^{\prime}$, then $|e^{\prime}|=e$.
### 3.3. Semantics
Figure 6 shows the reduction rules for $\lambda^{\mu}_{Fi}$. In addition to
the standard reduction rules for the simply typed lambda calculus, we add the
reduction rules for the cast operators. Our reduction rules are call-by-value.
The inner expressions of the cast operators are reduced first (rule Red-cast).
Then, based on different cast operators, the cast operator is pushed into the
expression in various ways. For identity casts, the cast operator is simply
erased (rule Red-cast-id). Arrow casts are values, but when they are applied
to an argument, the cast operator is pushed into the argument (rule Red-cast-
arr). Note that the cast operator needs to be reversed when pushed into the
function argument in order to ensure type preservation after the reduction.
The reverse operation is defined by analyzing the structure of $c$ as follows:
$\begin{array}[]{llll}\neg\,\iota&=\iota&\neg\,\textsf{id}&=\textsf{id}\\\
\neg\,\textsf{fold}_{A}&=\textsf{unfold}_{A}&\neg\,\textsf{unfold}_{A}&=\textsf{fold}_{A}\\\
\neg\,(c_{1}\to
c_{2})&=(\neg\,c_{1})\to(\neg\,c_{2})&\neg\,(c_{1};c_{2})&=(\neg\,c_{2});(\neg\,c_{1})\\\
\neg\,(\textsf{fix}\,{\iota}.~{}c)&=\textsf{fix}\,{\iota}.~{}\neg\,c\end{array}$
A single sequential cast is split into two separate casts (rule Red-cast-seq),
so that the sub-components can be reduced independently. Fold casts are
values, but can be eliminated by an outer unfold cast (rule Red-cast-elim).
Thus, rule Red-cast-elim corresponds to the traditional fold/unfold
cancellation rule used in calculi with conventional iso-recursive types.
Finally, fixpoint casts are reduced by unrolling the fixpoint (rule Red-cast-
fix).
* $e\hookrightarrow e^{\prime}$ _(Reduction)_
Red-beta $\dfrac{}{(\lambda x:A.~{}e)\,v^{\prime}\hookrightarrow e[x\mapsto v^{\prime}]}$ Red-appl $\dfrac{e_{1}\hookrightarrow e^{\prime}_{1}}{e_{1}\,e_{2}\hookrightarrow e^{\prime}_{1}\,e_{2}}$ Red-appr $\dfrac{e_{2}\hookrightarrow e^{\prime}_{2}}{v_{1}\,e_{2}\hookrightarrow v_{1}\,e^{\prime}_{2}}$
Red-cast $\dfrac{e\hookrightarrow e^{\prime}}{\textsf{cast}\,[c]\,e\hookrightarrow\textsf{cast}\,[c]\,e^{\prime}}$ Red-cast-id $\dfrac{}{\textsf{cast}\,[\textsf{id}]\,v\hookrightarrow v}$
Red-cast-arr $\dfrac{}{(\textsf{cast}\,[c_{1}\rightarrow c_{2}]\,v_{1})\,v_{2}\hookrightarrow\textsf{cast}\,[c_{2}]\,(v_{1}\,(\textsf{cast}\,[\neg c_{1}]\,v_{2}))}$
Red-cast-seq $\dfrac{}{\textsf{cast}\,[c_{1};c_{2}]\,v\hookrightarrow\textsf{cast}\,[c_{2}]\,(\textsf{cast}\,[c_{1}]\,v)}$
Red-cast-elim $\dfrac{}{\textsf{cast}\,[\textsf{unfold}_{A}]\,(\textsf{cast}\,[\textsf{fold}_{B}]\,v)\hookrightarrow v}$
Red-cast-fix $\dfrac{}{\textsf{cast}\,[\textsf{fix}\,{\iota}.~{}c]\,v\hookrightarrow\textsf{cast}\,[c[\iota\mapsto\textsf{fix}\,{\iota}.~{}c]]\,v}$
Figure 6. Reduction rules.
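To illustrate how these rules fit together, the following Haskell sketch (ours, not the paper's artifact; all names are invented) implements the reduction relation as a small-step interpreter. For brevity, casts carry no type annotations and lambdas use higher-order abstract syntax, so the datatypes differ from the sketch in §3.1.

```haskell
-- A sketch (ours) of the push rules as a small-step interpreter.
data Cast = CVar String | CId | CFold | CUnfold
          | CArr Cast Cast | CSeq Cast Cast | CFix String Cast

data Tm = Lit Int | Lam (Tm -> Tm) | App Tm Tm | CastE Cast Tm

-- The reverse operation (negation) on cast operators.
dual :: Cast -> Cast
dual (CVar i)     = CVar i
dual CId          = CId
dual CFold        = CUnfold
dual CUnfold      = CFold
dual (CArr c1 c2) = CArr (dual c1) (dual c2)
dual (CSeq c1 c2) = CSeq (dual c2) (dual c1)
dual (CFix i c)   = CFix i (dual c)

-- Substitute a cast for a cast variable (used by Red-cast-fix).
substC :: String -> Cast -> Cast -> Cast
substC i c (CVar j) | i == j   = c
substC i c (CArr a b)          = CArr (substC i c a) (substC i c b)
substC i c (CSeq a b)          = CSeq (substC i c a) (substC i c b)
substC i c (CFix j b) | i /= j = CFix j (substC i c b)
substC _ _ other               = other

isValue :: Tm -> Bool
isValue (Lit _)              = True
isValue (Lam _)              = True
isValue (CastE CFold v)      = isValue v
isValue (CastE (CArr _ _) v) = isValue v
isValue _                    = False

step :: Tm -> Maybe Tm
step (App (Lam f) v) | isValue v = Just (f v)                 -- Red-beta
step (App (CastE (CArr c1 c2) v1) v2)
  | isValue v1, isValue v2
  = Just (CastE c2 (App v1 (CastE (dual c1) v2)))             -- Red-cast-arr
step (App e1 e2)
  | not (isValue e1) = (`App` e2) <$> step e1                 -- Red-appl
  | otherwise        = App e1 <$> step e2                     -- Red-appr
step (CastE CId v) | isValue v = Just v                       -- Red-cast-id
step (CastE (CSeq c1 c2) v) | isValue v
  = Just (CastE c2 (CastE c1 v))                              -- Red-cast-seq
step (CastE CUnfold (CastE CFold v)) | isValue v = Just v     -- Red-cast-elim
step (CastE (CFix i c) v) | isValue v
  = Just (CastE (substC i (CFix i c) c) v)                    -- Red-cast-fix
step (CastE c e) = CastE c <$> step e                         -- Red-cast
step _ = Nothing
```

For instance, `step (App (CastE (CArr CId CFold) f) (Lit 1))`, for a lambda value `f`, performs exactly the Red-cast-arr push from the running example in §2.5.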
The addition of the push rules for the cast operators is necessary for the
type soundness of $\lambda^{\mu}_{Fi}$, since without them well-typed cast
expressions could get stuck or fail to preserve their types. $\lambda^{\mu}_{Fi}$ is type sound, proved with the usual
preservation and progress theorems:
###### Theorem 3.6 (Progress).
For any expression $e$ and type $A$, if $\cdot\vdash e:A$ then either $e$ is a
value or there exists an expression $e^{\prime}$ such that $e\hookrightarrow
e^{\prime}$.
###### Theorem 3.7 (Preservation).
For any expression $e$ and type $A$, if $\cdot\vdash e:A$ and
$e\hookrightarrow e^{\prime}$ then $\cdot\vdash e^{\prime}:A$.
#### Equivalence to the equi-recursive dynamic semantics
As explained in §2.5, the cast operators are
computationally irrelevant. Therefore, they can be erased from expressions
without affecting the behavior of those expressions. We can obtain the following
theorem easily:
###### Theorem 3.8 (Full iso-recursive to equi-recursive behavioral
preservation).
For any expression $e$, if $e\hookrightarrow^{*}v$ then
$|e|\hookrightarrow_{e}^{*}|v|$.
Here $\hookrightarrow_{e}$ is the reduction relation in the equi-recursive
setting, which is basically defined by a subset of the reduction rules in
Figure 6 (rules Red-beta, Red-appl, and Red-appr). We use
$\hookrightarrow^{*}$ to denote the reflexive, transitive closure of the
reduction relation. The other direction of the behavioral preservation also
holds, but only applies to well-typed expressions and relies on the
elaboration process defined in Figure 5. The proof of this direction is also
more involved, and we will detail it in §4.3. To summarize, the two systems
are behaviorally equivalent, in terms of both termination and divergence
behavior:
###### Theorem 3.9 (Behavioral equivalence).
For any expression $e$, $e^{\prime}$ and type $A$, if $\cdot\vdash_{e}e:A\rhd
e^{\prime}$, then
1. (1)
$e\hookrightarrow^{*}_{e}v$ if and only if there exists $v^{\prime}$ such that
$e^{\prime}\hookrightarrow^{*}v^{\prime}$ and $|v^{\prime}|=v$.
2. (2)
$e$ diverges if and only if $e^{\prime}$ diverges.
## 4\. Metatheory of full iso-recursive types
In this section we discuss the key proof techniques and results in the
metatheory of $\lambda^{\mu}_{Fi}$. The metatheory covers three components:
type soundness of $\lambda^{\mu}_{Fi}$ (Theorem 3.6 and 3.7), the typing
equivalence between $\lambda^{\mu}_{Fi}$ and equi-recursive types (Theorem
3.2, 3.3 and 3.4), and the behavioral equivalence between $\lambda^{\mu}_{Fi}$
and equi-recursive types (Theorem 3.9).
### 4.1. Type Soundness
#### Progress
For $\lambda^{\mu}_{Fi}$ we need to ensure that the definition of value and
the reduction rules, in particular the push rules for type casting, are
complementary to each other, i.e. a cast expression is either a value or can
be further reduced. The definition of value has been discussed in §3.1. In
$\lambda^{\mu}_{Fi}$ now there are two canonical forms for a value with
function types ($A_{1}\to A_{2}$): lambda abstractions $(\lambda x:A.~{}e)$
and arrow casts $(\textsf{cast}\,[c_{1}\to c_{2}]\,v)$. Therefore in the
progress proof for function applications ($e_{1}~{}e_{2}$), we need to
consider one extra case when $e_{1}$ is an arrow cast. We push the cast
operator further by rule Red-cast-arr as a reduction step in this case to
complete the proof.
#### Preservation
The preservation proof is standard by first doing induction on the typing
derivation $\cdot\vdash e:A$ and then induction on the reduction relation
$e\hookrightarrow e^{\prime}$. The interesting cases are when the reduction
rule is a push rule. Most cases of the push rules are straightforward, by
inversion on the type casting relation and then reconstructing the casting
derivation for the reduced expression. Two tricky cases require extra care:
the push rules for arrow cast (rule Red-cast-arr) and the fixpoint cast (rule
Red-cast-fix).
In the Red-cast-arr case, by inversion on the typing derivation of
$\cdot\vdash(\textsf{cast}\,[c_{1}\to c_{2}]\,v_{1})~{}v_{2}:A_{2}$ we know
that $v_{1}$ has a function type $B_{1}\to B_{2}$, $v_{2}$ has type $A_{1}$,
and $B_{1}\to B_{2}$ can be cast to $A_{1}\to A_{2}$ by $c_{1}\to c_{2}$,
which in turn implies that $B_{1}\hookrightarrow A_{1}:c_{1}$ and
$B_{2}\hookrightarrow A_{2}:c_{2}$. In order to show that the type of the
reduced expression $(\textsf{cast}\,[c_{2}]\,(v_{1}~{}(\textsf{cast}\,[\neg
c_{1}]\,v_{2})))$ is still $A_{2}$, we need to prove $A_{1}\hookrightarrow
B_{1}:\neg c_{1}$. This goal can be achieved by Lemma 4.1 below, which is
proved by induction on the type casting relation. The analysis of rule Red-
cast-arr also shows that it is necessary to insert the reverse operation on
the cast operator $c_{1}$ to ensure the preservation of $\lambda^{\mu}_{Fi}$.
###### Lemma 4.1 (Reverse of casting).
For any types $A$ and $B$, and casting operators $c$, if $\cdot;\cdot\vdash
A\hookrightarrow B:c$ then $\cdot;\cdot\vdash B\hookrightarrow A:\neg\,c$.
As for the Red-cast-fix case, the reduction rule unfolds the fixpoint cast
$(\textsf{fix}\,{\iota}.~{}c)$ to
$(c[\iota\mapsto(\textsf{fix}\,{\iota}.~{}c)])$. By inversion on the type
casting relation $\cdot;\cdot\vdash A\hookrightarrow
B:\textsf{fix}\,{\iota}.~{}c$, we know that
(2) $\cdot;\iota:A\hookrightarrow B\vdash A\hookrightarrow B:c$
Essentially the cast operator $(\textsf{fix}\,{\iota}.~{}c)$ and its unrolling
$(c[\iota\mapsto(\textsf{fix}\,{\iota}.~{}c)])$ should represent the same
proof. The type casting judgement (2) can be interpreted as: if we know that
there is a cast variable $\iota$ that can cast $A$ to $B$, then we can cast
$A$ to $B$ by $c$, using the cast variable $\iota$. Since we already know that
$\textsf{fix}\,{\iota}.~{}c$ can do the same job as $\iota$ in casting $A$ to
$B$, it should be safe to replace $\iota$ with $\textsf{fix}\,{\iota}.~{}c$ in
the cast operator $c$, and show that $\cdot;\cdot\vdash A\hookrightarrow
B:c[\iota\mapsto(\textsf{fix}\,{\iota}.~{}c)]$. This idea can be formalized by
the following cast substitution lemma, which is proved by induction on the
type casting relation of $c_{1}$.
###### Lemma 4.2 (Cast substitution).
For any contexts $\Gamma$, $\mathbb{E}$, types $A$, $B$, $C$, $D$, cast
operators $c_{1}$, $c_{2}$ and cast variable $\iota$, if
$\Gamma;\mathbb{E}\vdash A\hookrightarrow B:c_{1}$, and
$\Gamma;\mathbb{E},\iota:A\hookrightarrow B\vdash C\hookrightarrow D:c_{2}$
then $\Gamma;\mathbb{E}\vdash C\hookrightarrow D:c_{2}[\iota\mapsto c_{1}]$.
### 4.2. Typing Equivalence
As discussed in Section 3.2, the key to the typing equivalence between full
iso-recursive types and equi-recursive types is to show our type casting rules
are equivalent to Brandt and Henglein’s type equality rules (Theorem 3.1).
This section focuses on the proof of this theorem.
Most of our type casting rules, ignoring the cast variables and operators, are
very similar to their type equality rules, so the proof for these cases is
straightforward. For instance, the treatment of coinductive reasoning by
introducing new premises for function types in our rule Cast-fix is exactly
the same as in their rule Tye-arrfix. We discuss the only two
differences below.
#### Arrow cast for type soundness
In addition to transforming function types with a fixpoint cast using rule
Cast-fix as Brandt and Henglein did in rule Tye-arrfix, we also allow function
types to be cast without a fixpoint, as shown in rule Cast-arrow.
This is a harmless extension, since one can always wrap an arrow cast with a
dummy fixpoint, which does not use the fixpoint variable in the body. However,
having this rule is essential to the type soundness of $\lambda^{\mu}_{Fi}$.
By rule Cast-fix, all the fixpoint casts in well-typed expressions are in the
form of $\textsf{fix}\,{\iota}.~{}c_{1}\to c_{2}$. During the reduction, we
need to unroll those fixpoint casts using rule Red-cast-fix to a bare arrow
cast in the form of $c^{\prime}_{1}\to c^{\prime}_{2}$, which cannot be typed
without the rule Cast-arrow. In other words, while casts of arrows are values,
casts of fixpoints are not values. Due to this difference we separate the two
rules and prove that the extension does not affect the soundness and
completeness of our type casting rules.
#### Removing the symmetry rule from equality
The other difference is that our rules do not include a symmetry rule for type
casting, since it is hard to interpret symmetry in the operational semantics.
As a result, changing the type cast context from $\cdot$ to a universally
quantified $\mathbb{E}$ in Lemma 4.1 will not work, i.e. rule Tye-symm in its
general form, with the same list of assumptions $H$ in both the premise and
conclusion, is not admissible in our type casting relation. The reason is that
invalid assumptions may exist in the list $H$, which are not derivable by the
type casting rules. For example,
$\texttt{Int}\doteq\texttt{Int}\to\texttt{Int}\vdash\texttt{Int}\to\texttt{Int}\doteq\texttt{Int}$
is a valid judgement using rules Tye-symm and Tye-assump, but cannot be
derived from our type casting rules.
Nevertheless we can still prove that our system is complete to Brandt and
Henglein’s equality, when the initial environment is empty. The idea is that
starting from an empty assumption list, one can always replace the use of rule
Tye-symm in the derivation with a complete derivation that re-derives the proof
goal in the symmetric direction, yielding a derivation that never uses rule Tye-symm.
The replacement is feasible since when the initial environment is empty, all
the type equalities introduced to the environment are guaranteed to be
derivable from an empty assumption list by the type casting rules. Interested
readers can refer to our Coq formalization for the details of the proof.
### 4.3. Behavioral Equivalence
To prove Theorem 3.9 it suffices to show one of the two propositions in the
theorem. Since the type soundness of $\lambda^{\mu}_{Fi}$ ensures that a well-
typed term does not get stuck (it either diverges or reduces to a value), we
only need to show the preservation of termination behavior; the
preservation of divergence behavior can then be proved by contradiction.
“if” direction of the theorem (from full iso-recursive reduction to equi-
recursive reduction) is easy, directly from the behavioral preservation
property of the erasure function (Theorem 3.8). The “only if” direction (from
equi-recursive reduction to full iso-recursive reduction) is more involved,
and we prove it by induction on the length of reduction steps. It suffices to
show the following lemma, which states that one step of equi-recursive
reduction can be simulated by several steps of full iso-recursive reduction.
###### Lemma 4.3 (Simulation of equi-recursive reduction).
For any expressions $e_{1}$, $e^{\prime}_{1}$, $e_{2}$ and type $A$, if
$\cdot\vdash_{e}e_{1}:A\rhd e_{1}^{\prime}$ and
$e_{1}\hookrightarrow_{e}e_{2}$, then there exists $e_{2}^{\prime}$ such that
$e_{1}^{\prime}\hookrightarrow^{*}e_{2}^{\prime}$ and
$\cdot\vdash_{e}e_{2}:A\rhd e_{2}^{\prime}$.
The proof of Lemma 4.3 is done by first induction on the equi-recursive
reduction relation $e_{1}\hookrightarrow_{e}e_{2}$, and then induction on the
elaboration relation $\cdot\vdash_{e}e_{1}:A\rhd e_{1}^{\prime}$. Most of the
cases are straightforward, by applying the induction hypothesis and using the
congruence lemmas below to construct the reduction steps.
###### Lemma 4.4 (Congruence lemma of full iso-recursive reduction).
1. (1)
If $e_{1}\hookrightarrow^{*}e_{1}^{\prime}$, then
$e_{1}~{}e_{2}\hookrightarrow^{*}e_{1}^{\prime}~{}e_{2}$.
2. (2)
If $e_{2}\hookrightarrow^{*}e_{2}^{\prime}$, then
$v_{1}~{}e_{2}\hookrightarrow^{*}v_{1}~{}e_{2}^{\prime}$.
3. (3)
If $e\hookrightarrow^{*}e^{\prime}$, then
$\textsf{cast}\,[c]\,e\hookrightarrow^{*}\textsf{cast}\,[c]\,e^{\prime}$.
The tricky case is when $e_{1}\hookrightarrow_{e}e_{2}$ is a beta reduction
(case Red-beta). By inversion on the reduction relation, we know that $e_{1}$
is a function application ($e_{1}=(\lambda x:A_{1}.e_{0})~{}v_{1}$) and
$e_{2}$ is ($e_{0}[x\mapsto v_{1}]$). By induction on the elaboration relation
$\cdot\vdash_{e}e_{1}:A\rhd e_{1}^{\prime}$, case ETyp-eq can be proved using
the induction hypothesis. We consider case ETyp-app, where
(3) $\cdot\vdash_{e}\lambda x:A_{1}.e_{0}:A_{1}\to A\rhd e_{3}^{\prime}\quad\text{
and }\quad\cdot\vdash_{e}v_{1}:A_{1}\rhd e_{4}^{\prime}$
However, we do not know the exact form of $e_{3}^{\prime}$ and
$e_{4}^{\prime}$, since many different casts can be inserted in the
elaboration derivation using rule ETyp-eq, and the current induction
hypothesis cannot deal with this. To address this issue, we first prove a
lemma showing that any full iso-recursive terms elaborated from an equi-
recursive _value_ can always be further reduced to a value in the full iso-
recursive setting.
###### Lemma 4.5 (Reductions of full iso-recursive terms from equi-recursive
values).
For any value $v$, expression $e^{\prime}$ and type $A$, if $\cdot\vdash_{e}v:A\rhd e^{\prime}$,
then there exists $v^{\prime}$ such that
$e^{\prime}\hookrightarrow^{*}v^{\prime}$ and $\cdot\vdash_{e}v:A\rhd
v^{\prime}$.
Lemma 4.5 can be proved by induction on the typing derivation
$\cdot\vdash_{e}v:A\rhd e^{\prime}$ and using the congruence lemma (Lemma
4.4). By applying Lemma 4.5 to (3) in the tricky case of Lemma 4.3, we can
first show that both $e_{3}^{\prime}$ and $e_{4}^{\prime}$ can be further
reduced to a value. Moreover, the value of evaluating $e_{3}^{\prime}$
preserves the function type, so it must be one of the two canonical forms: a
lambda abstraction $(\lambda x:A_{1}.e^{\prime}_{0})$ or an arrow cast
$(\textsf{cast}\,[c_{1}\to c_{2}]\,v^{\prime}_{1})$, and then we can construct
the reduction steps for $(e^{\prime}_{3}~{}e^{\prime}_{4})$ as shown below:
$\begin{array}{lll}
&e^{\prime}_{3}~e^{\prime}_{4}&\\
\hookrightarrow^{*}&(\lambda x:A_{1}.e^{\prime}_{0})~e^{\prime}_{4}\ \text{ or }\ (\textsf{cast}\,[c_{1}\to c_{2}]\,v^{\prime}_{1})~e^{\prime}_{4}&\text{(Lemma 4.5 and Lemma 4.4(1))}\\
\hookrightarrow^{*}&(\lambda x:A_{1}.e^{\prime}_{0})~v^{\prime}_{2}\ \text{ or }\ (\textsf{cast}\,[c_{1}\to c_{2}]\,v^{\prime}_{1})~v^{\prime}_{2}&\text{(Lemma 4.5 and Lemma 4.4(2))}\\
\hookrightarrow&e^{\prime}_{0}[x\mapsto v^{\prime}_{2}]\ \text{ or }\ \textsf{cast}\,[c_{2}]\,(v^{\prime}_{1}~(\textsf{cast}\,[\neg c_{1}]\,v^{\prime}_{2}))&\text{(rule Red-beta or Red-cast-arr)}
\end{array}$
Now we are left to prove the second goal of Lemma 4.3, that is, the result of
the reduction constructed above can be derived from the elaboration relation,
i.e.
$\cdot\vdash_{e}e_{0}[x\mapsto v_{1}]:A\rhd e^{\prime}_{0}[x\mapsto
v^{\prime}_{2}]\quad\text{or}\quad\cdot\vdash_{e}e_{0}[x\mapsto
v_{1}]:A\rhd(\textsf{cast}\,[c_{2}]\,(v^{\prime}_{1}~{}(\textsf{cast}\,[\neg
c_{1}]\,v^{\prime}_{2})))$
The latter case for rule Red-cast-arr follows from the induction hypothesis.
The first case for rule Red-beta can be proved by the following substitution
lemma for the elaboration relation.
###### Lemma 4.6 (Substitution lemma for elaboration).
For any typing context $\Gamma$, expressions $e_{1}$, $e_{1}^{\prime}$,
$e_{2}$, $e_{2}^{\prime}$ and types $A$, $B$, if
$\Gamma,x:A\vdash_{e}e_{1}:B\rhd e_{1}^{\prime}$ and
$\Gamma\vdash_{e}e_{2}:A\rhd e_{2}^{\prime}$, then
$\Gamma\vdash_{e}e_{1}[x\mapsto e_{2}]:B\rhd e_{1}^{\prime}[x\mapsto
e_{2}^{\prime}]$.
With Lemmas 4.4, 4.5 and 4.6, we complete the proof of Lemma 4.3, and the
behavioral preservation theorem (Theorem 3.9) follows. Compared with the
behavioral equivalence argument in Abadi and Fiore’s work, our proof of
behavioral equivalence between full iso-recursive types and equi-recursive
types is much more straightforward, and it is completely mechanized in Coq for
$\lambda^{\mu}_{Fi}$, without relying on any conjectures or axioms.
## 5\. Recursive Subtyping
In this section we show that our results in the previous sections can be
extended to a calculus with subtyping called $\lambda^{\mu<:}_{Fi}$.
### 5.1. A Calculus with Subtyping
Adapting the results in Section 3 to a calculus with subtyping requires only a
few changes. In terms of types, we add a top type ($\top$). Expressions and
values remain the same.
* $\Sigma\vdash A\leq_{\oplus}B$ _(Equi-recursive/Iso-recursive Subtyping)_
$\textsc{Sub-top}\ \dfrac{}{\Sigma\vdash A\leq_{\oplus}\top}\qquad\textsc{Sub-int}\ \dfrac{}{\Sigma\vdash\mathsf{Int}\leq_{i}\mathsf{Int}}\qquad\textsc{Sub-eq}\ \dfrac{A\doteq B}{\Sigma\vdash A\leq_{e}B}\qquad\textsc{Sub-var}\ \dfrac{\alpha\leq\beta\in\Sigma}{\Sigma\vdash\alpha\leq_{\oplus}\beta}$
$\textsc{Sub-self}\ \dfrac{}{\Sigma\vdash\mu\alpha.~A\leq_{i}\mu\alpha.~A}\qquad\textsc{Sub-trans}\ \dfrac{\Sigma\vdash A\leq_{e}B\qquad\Sigma\vdash B\leq_{e}C}{\Sigma\vdash A\leq_{e}C}$
$\textsc{Sub-arrow}\ \dfrac{\Sigma\vdash B_{1}\leq_{\oplus}A_{1}\qquad\Sigma\vdash A_{2}\leq_{\oplus}B_{2}}{\Sigma\vdash A_{1}\to A_{2}\leq_{\oplus}B_{1}\to B_{2}}\qquad\textsc{Sub-rec}\ \dfrac{\Sigma,\alpha\leq\beta\vdash A\leq_{\oplus}B}{\Sigma\vdash\mu\alpha.~A\leq_{\oplus}\mu\beta.~B}$
Figure 7. Amadio and Cardelli’s equi-recursive and iso-recursive subtyping
rules.
#### Subtyping
The equi-recursive and iso-recursive subtyping rules that we use in this
section are based on the Amber rules (Amadio and Cardelli, 1993), as shown in
Figure 7. The subtyping rules use a special environment $\Sigma$, which tracks
a set of pairs of type variables that are assumed in the subtyping relation,
as explained in §2.4. We use $\leq_{\oplus}$ parameterized by a metavariable
$\oplus\in\\{i,e\\}$ to denote the subtyping rules for both relations: $i$
denotes iso-recursive subtyping, and $e$ denotes equi-recursive subtyping. We
use $\leq_{e}$ to denote the subtyping rules (rules Sub-eq and Sub-trans) that
only apply to equi-recursive types, and $\leq_{i}$ to denote the subtyping
rules (rules Sub-int and Sub-self) that only apply to iso-recursive types.
Rule Sub-eq embeds the equi-recursive equality relation in Figure 1 into the
subtyping relation, so it is only present in equi-recursive subtyping. For the
iso-recursive subtyping relation, we choose the variant of the Amber rules
presented by Zhou et al. (2022), which replaces the built-in reflexivity with
the more primitive rules Sub-int and Sub-self and removes the transitivity
rule Sub-trans from the original Amber rules. Zhou et al. discussed the
technical challenges of building reflexivity and transitivity into the iso-
recursive subtyping relation, and showed that both are admissible from the
other rules.
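As a small worked example (our own, not taken from the figure), the judgement
$\cdot\vdash\mu\alpha.~\texttt{Int}\to\alpha\leq_{i}\mu\beta.~\texttt{Int}\to\top$
is derivable: rule Sub-rec reduces it to
$\alpha\leq\beta\vdash\texttt{Int}\to\alpha\leq_{i}\texttt{Int}\to\top$, which
follows by rule Sub-arrow from
$\alpha\leq\beta\vdash\texttt{Int}\leq_{i}\texttt{Int}$ (rule Sub-int) and
$\alpha\leq\beta\vdash\alpha\leq_{i}\top$ (rule Sub-top).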
###### Lemma 5.1 (Reflexivity of iso-recursive subtyping).
If $A$ is a closed type, then $\Sigma\vdash A\leq_{i}A$.
###### Lemma 5.2 (Transitivity of iso-recursive subtyping).
If $\cdot\vdash A\leq_{i}B$ and $\cdot\vdash B\leq_{i}C$, then $\cdot\vdash
A\leq_{i}C$.
In Lemma 5.1, the assumption that $A$ is a closed type is implied by the
fact that the free variable sets of $A$ and $B$ in the subtyping relation
$\Sigma\vdash A\leq_{i}B$ are disjoint, which is also a side condition in
Amadio and Cardelli’s equi-recursive Amber rules. As for transitivity,
Lemma 5.2 only holds when the environment is empty; otherwise, one may derive
problematic subtyping relations, as discussed by Zhou et al. For these
reasons, we choose the variant of the iso-recursive Amber rules without
built-in reflexivity and transitivity in this section.
#### Typing and Reduction
As for the typing rules, we extend the full iso-recursive type system in
Figure 4 with rule Typ-sub, and the equi-recursive type system in Figure 5
with rules ETyp-isub and ETyp-bare-sub, respectively. Rule ETyp-isub allows
the elaboration rule to consider iso-recursive subtyping subsumptions, as rule
Typ-sub does in the typing rules for $\lambda^{\mu<:}_{Fi}$. Rule ETyp-bare-
sub is required for the calculus with equi-recursive subtyping. Note that rule
ETyp-bare-sub only exists in the equi-recursive type system. The elaboration
rule for this case will be discussed later in this section. There are no
changes to the reduction rules.
$\textsc{Typ-sub}\ \dfrac{\Gamma\vdash e:A\qquad\cdot\vdash A\leq_{i}B}{\Gamma\vdash e:B}\qquad\textsc{ETyp-isub}\ \dfrac{\Gamma\vdash_{e}e:A\rhd e^{\prime}\qquad\cdot\vdash A\leq_{i}B}{\Gamma\vdash_{e}e:B\rhd e^{\prime}}\qquad\textsc{ETyp-bare-sub}\ \dfrac{\Gamma\vdash_{e}e:A\qquad\cdot\vdash A\leq_{e}B}{\Gamma\vdash_{e}e:B}$
### 5.2. Type Soundness
There are no significant technical challenges in extending the type soundness
proof to $\lambda^{\mu<:}_{Fi}$. The only part that requires extra care is the
preservation lemma, in which we need to show that rule Red-castelim preserves
the typing in the presence of subtyping. Let us consider an expression
$\textsf{cast}\,[\textsf{unfold}_{\mu\alpha.~{}A}]\,(\textsf{cast}\,[\textsf{fold}_{\mu\alpha.~{}B}]\,v)$.
The derivation below shows the typing of this expression.
$\dfrac{\dfrac{\dfrac{\cdot\vdash v:B[\alpha\mapsto\mu\alpha.~B]\qquad\ldots}{\cdot\vdash\textsf{cast}\,[\textsf{fold}_{\mu\alpha.~B}]\,v:\mu\alpha.~B}\ \textsc{Typ-cast}\qquad\cdot\vdash\mu\alpha.~B\leq_{i}\mu\alpha.~A}{\cdot\vdash\textsf{cast}\,[\textsf{fold}_{\mu\alpha.~B}]\,v:\mu\alpha.~A}\ \textsc{Typ-sub}\qquad\ldots}{\cdot\vdash\textsf{cast}\,[\textsf{unfold}_{\mu\alpha.~A}]\,(\textsf{cast}\,[\textsf{fold}_{\mu\alpha.~B}]\,v):A[\alpha\mapsto\mu\alpha.~A]}\ \textsc{Typ-cast}$
By inversion we know that after reduction using rule Red-castelim, the result
$v$ has the type $B[\alpha\mapsto\mu\alpha.~{}B]$, and that
$\mu\alpha.~{}B\leq_{i}\mu\alpha.~{}A$. The preservation proof goal for this
case can be expressed as the following lemma:
###### Lemma 5.3 (Unfolding lemma).
If $\cdot\vdash\mu\alpha.~{}B\leq_{i}\mu\alpha.~{}A$, then $\cdot\vdash
B[\alpha\mapsto\mu\alpha.~{}B]\leq_{i}A[\alpha\mapsto\mu\alpha.~{}A]$.
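As a quick sanity check (our own instance), recall the derivable judgement
$\cdot\vdash\mu\alpha.~\texttt{Int}\to\alpha\leq_{i}\mu\beta.~\texttt{Int}\to\top$
from §5.1. Up to renaming the bound variable, Lemma 5.3 yields
$\cdot\vdash\texttt{Int}\to(\mu\alpha.~\texttt{Int}\to\alpha)\leq_{i}\texttt{Int}\to\top$,
which can indeed be derived directly by rule Sub-arrow from rules Sub-int and
Sub-top.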
The proof of this lemma was given by Zhou et al. (2022, Corollary 59).
They proposed an alternative formulation of the iso-recursive subtyping rules,
which is equivalent to the iso-recursive Amber rules. Therefore, by adopting
their results, we complete the type soundness proof for
$\lambda^{\mu<:}_{Fi}$.
###### Theorem 5.4 (Type soundness of $\lambda^{\mu<:}_{Fi}$).
For any term $e$ and type $A$ in $\lambda^{\mu<:}_{Fi}$,
1. (1)
(Progress) if $\cdot\vdash e:A$ then either $e$ is a value or there exists a
term $e^{\prime}$ such that $e\hookrightarrow e^{\prime}$.
2. (2)
(Preservation) if $\cdot\vdash e:A$ and $e\hookrightarrow e^{\prime}$ then
$\cdot\vdash e^{\prime}:A$.
### 5.3. Typing Equivalence
Similarly to $\lambda^{\mu}_{Fi}$ with equality, we can prove that
$\lambda^{\mu<:}_{Fi}$ with iso-recursive subtyping is sound and complete with
respect to a calculus with equi-recursive subtyping. The key idea is to encode
the equi-recursive subtyping relation using a combination of equi-recursive
equality and the iso-recursive subtyping relation, as explained in §2.5. The
encoding can be justified by the following theorem:
###### Theorem 5.5 (Equi-recursive subtyping decomposition).
$\cdot\vdash A\leq_{e}B$ if and only if there exist types $C_{1}$ and $C_{2}$
such that $A\doteq C_{1}$, $\cdot\vdash C_{1}\leq_{i}C_{2}$, and $C_{2}\doteq
B$.
The soundness direction (i.e. the “if” direction) of this theorem is
straightforward by rules Sub-eq and Sub-trans and the fact that $\leq_{i}$ is
a sub-relation of $\leq_{e}$. The completeness direction can be derived from
Amadio and Cardelli’s proof of completeness with respect to the tree model for
the equi-recursive Amber rules. They proved that for any types $A$, $B$ that
are in the tree model interpretation of the equi-recursive subtyping relation,
one can find types $C_{1}$ and $C_{2}$ such that $A\doteq C_{1}$,
$C_{1}\leq_{e}C_{2}$, and $C_{2}\doteq B$ hold (Amadio and Cardelli, 1993,
Lemma 5.4.1, Lemma 5.4.3). Moreover, the derivation of $C_{1}\leq_{e}C_{2}$
satisfies the _one-expansion property_, which means, informally speaking, that
each recursive type is unfolded at most once in the derivation. Although
this result is expressed as an equi-recursive subtyping relation in their
conclusion, we can rewrite all the occurrences of $C_{1}\leq_{e}C_{2}$ with
one-expansion property in their proof to an iso-recursive subtyping relation
$C_{1}\leq_{i}C_{2}$. Every application of rules Sub-eq and Sub-trans in their
proofs can either be replaced by rule Sub-rec, in which the recursive type
body does not involve the type variable and unfolds to itself, or by rule Sub-
self for two recursive types that are syntactically equal up to
$\alpha$-renaming. In other words, Amadio and Cardelli’s proof of their Lemma
5.4.3 can be seen as a proof that the iso-recursive subtyping relation is
complete with respect to the equi-recursive subtyping relation with the one-
expansion property, because they never use the power of the rules Sub-eq and
Sub-trans.
Figure 8 consists of two derivation trees, which we summarize here. The upper
tree derives $\vdash A\leq_{e}B$ following Amadio and Cardelli’s equi-recursive
subtyping algorithm: the subderivation $\mathcal{P}_{1}$ yields
$\vdash A_{2}\leq_{e}B$ by rule Sub-rec; together with $\vdash\top\leq_{e}\top$,
rule Sub-arrow gives $\vdash\top\to A_{2}\leq_{e}\top\to B$; the equality
$A_{1}\doteq\top\to A_{2}$ (rules Tyeq-unfold and Tyeq-contract) then gives
$\vdash A_{1}\leq_{e}\top\to B$ by rules Sub-eq and Sub-trans; together with
$\vdash\texttt{Int}\leq_{e}\texttt{Int}$, rule Sub-arrow gives
$\vdash A\leq_{e}\texttt{Int}\to\top\to B$; finally the equality
$\vdash\texttt{Int}\to\top\to B\doteq B$ (rule Tyeq-unfold) gives
$\vdash A\leq_{e}B$ by rules Sub-trans and Sub-eq. The lower tree mirrors this
structure to derive $\vdash C\leq_{i}D$: the subderivation $\mathcal{P}_{2}$
yields $\vdash A_{2}\leq_{i}B$ by rule Sub-rec, rule Sub-arrow is applied as
above, and each use of rules Sub-eq and Sub-trans is replaced by a dummy
application of reflexivity and transitivity, e.g. $\vdash\top\to
A_{2}\leq_{i}\top\to A_{2}$ and $\vdash D\leq_{i}D$ by Lemma 5.1, composed via
Lemma 5.2. Here
$\begin{array}{ll}A=\texttt{Int}\to(\mu\alpha.~\top\to\alpha)&B=\mu\alpha.~\texttt{Int}\to\top\to\alpha\\
C=\texttt{Int}\to\top\to(\mu\alpha.~\top\to\top\to\alpha)&D=\texttt{Int}\to\top\to(\mu\alpha.~\texttt{Int}\to\top\to\alpha)\\
A_{1}=\mu\alpha.~\top\to\alpha&A_{2}=\mu\alpha.~\top\to\top\to\alpha\end{array}$
and $\mathcal{P}_{1}$ is the judgement
$\alpha\leq\beta\vdash\top\to\top\to\alpha\leq_{e}\texttt{Int}\to\top\to\beta$,
while $\mathcal{P}_{2}$ is the judgement
$\alpha\leq\beta\vdash\top\to\top\to\alpha\leq_{i}\texttt{Int}\to\top\to\beta$.
Figure 8. Illustration of decomposing an equi-recursive subtyping derivation.
The idea of this decomposition can be illustrated by an example in Figure 8.
The upper part of the figure shows a derivation by following Amadio and
Cardelli’s subtyping algorithm for equi-recursive subtyping. Note that there
are two applications of rule Sub-eq in the derivation, one for expanding the
type $\mu\alpha.~{}\top\to\alpha$ to
$\top\to\mu\alpha.~{}\top\to\top\to\alpha$ and the other for expanding the
type $B$ to its unfolding $\texttt{Int}\to\top\to B$. Although rule Sub-eq is
applied in the middle of the derivation, we can always lift these uses of rule
Sub-eq to the top of the derivation, by replacing two types in the conclusion
with their more expanded forms. The lower part of the figure shows such a
derivation, in which we use $C$ and $D$ to denote (a simplified form of) the
expanded types obtained from Amadio and Cardelli’s proof. A key observation
here is that the original structure of the derivation is preserved in the new
derivation. To highlight this, we use dummy applications of the reflexivity
and transitivity lemmas (Lemmas 5.1 and 5.2) to show the correspondence
between the two derivations.
With the decomposition theorem, we can use the following rule to encode the
equi-recursive subtyping relation:
$\textsc{ETyp-sub}\ \dfrac{\Gamma\vdash_{e}e:A\rhd e^{\prime}\qquad\cdot;\cdot\vdash A\hookrightarrow C_{1}:c_{1}\qquad\cdot\vdash C_{1}\leq_{i}C_{2}\qquad\cdot;\cdot\vdash C_{2}\hookrightarrow B:c_{2}}{\Gamma\vdash_{e}e:B\rhd\textsf{cast}\,[c_{2}]\,(\textsf{cast}\,[c_{1}]\,e^{\prime})}$
If one ignores the elaboration components of the rule (the $\rhd$ parts and
the inserted casts), rule ETyp-sub is equivalent to rule ETyp-bare-sub. We
can first apply Theorem 3.1 to rewrite our type casting
rules to equi-recursive equalities and then use Theorem 5.5 to show the
equivalence to the equi-recursive subtyping relation. On the other hand, rule
ETyp-sub can be derived from the primitive rules ETyp-eq and ETyp-isub in
$\lambda^{\mu<:}_{Fi}$. Therefore, we can conclude that $\lambda^{\mu<:}_{Fi}$
with iso-recursive subtyping is sound and complete with respect to a calculus
with equi-recursive subtyping in terms of typing.
###### Theorem 5.6 (Typing equivalence for $\lambda^{\mu<:}_{Fi}$).
For any expressions $e$, $e^{\prime}$ and type $A$,
1. (1)
(Soundness) if $\Gamma\vdash e:A$ then $\Gamma\vdash_{e}|e|:A\rhd e$.
2. (2)
(Completeness) if $\Gamma\vdash_{e}e:A\rhd e^{\prime}$ then $\Gamma\vdash
e^{\prime}:A$.
3. (3)
(Round-tripping) if $\Gamma\vdash_{e}e:A\rhd e^{\prime}$, then
$|e^{\prime}|=e$.
### 5.4. Behavioral Equivalence
We also show that $\lambda^{\mu<:}_{Fi}$ with iso-recursive subtyping is sound
and complete with respect to a calculus with equi-recursive subtyping in terms
of dynamic semantics. Since there are no changes to the reduction rules, the
proof of $\lambda^{\mu<:}_{Fi}$ to equi-recursive behavioral equivalence by
erasure of cast operators remains the same as Theorem 3.8. The proof of the
other direction comes almost for free as well. We simply follow the same steps
as described in §4.3 and use the same lemmas and theorems to show the
completeness of $\lambda^{\mu<:}_{Fi}$ in preserving the equi-recursive
reductions, except that during the proof of Lemma 4.3, we may need to insert
an application of rule ETyp-isub at certain points to prove that the encoding
is well-typed. In terms of dynamic semantics, $\lambda^{\mu<:}_{Fi}$ is
equivalent to a calculus with equi-recursive subtyping.
###### Theorem 5.7 (Behavioral equivalence of $\lambda^{\mu<:}_{Fi}$).
For any expressions $e$, $e^{\prime}$ and type $A$ in $\lambda^{\mu<:}_{Fi}$,
if $\cdot\vdash_{e}e:A\rhd e^{\prime}$, then
1. (1)
$e\hookrightarrow^{*}_{e}v$ if and only if there exists $v^{\prime}$ such that
$e^{\prime}\hookrightarrow^{*}v^{\prime}$ and $|v^{\prime}|=v$.
2. (2)
$e$ diverges if and only if $e^{\prime}$ diverges.
## 6\. Related Work
Throughout the paper, we have discussed some of the closest related work in
detail. This section covers additional related work.
#### Relating iso-recursive and equi-recursive types
Recursive types were first introduced by Morris (1968), who presented equi-
recursive types to model recursive definitions. Later on, iso-recursive types
were introduced (Harper and Mitchell, 1993; Gunter, 1992; Crary et al., 1999).
The terminology for these two formulations of recursive types was coined by
Crary et al. (1999).
Both equi-recursive and iso-recursive types have been applied in various
programming language areas. Equi-recursive types are used in several contexts,
including: session types (Castagna et al., 2009; Chen et al., 2014; Gay and
Hole, 2005; Gay and Vasconcelos, 2010), gradual typing (Siek and Tobin-
Hochstadt, 2016), and the foundation of Scala through Dependent object types
(DOT) (Amin et al., 2016; Rompf and Amin, 2016), among others. Iso-recursive
types have also been utilized in different calculi and language designs due to
their ease of use in type checking (Abadi and Cardelli, 1996; Bengtson et al.,
2011; Chugh, 2015; Duggan, 2002; Lee et al., 2015; Swamy et al., 2011).
Urzyczyn (1995) studied the relationship between positive iso- and equi-
recursive types, showing their equivalence in typing power. Closer to our
work, Abadi and Fiore (1996) explored translating equi-recursive terms to iso-
recursive terms using explicit coercion functions but did not address the
operational semantics. Moreover, their behavioral equivalence argument relies
on a program logic which was conjectured to be sound. As we have argued in
Section 2.3, the use of explicit coercions has important drawbacks. Firstly,
it adds significant computational overhead to the encoding, making the
encoding impractical. Secondly, it introduces major complications to
reasoning, and also prevents a round-tripping property. By using casts, we
avoid both of these issues, leading to an easier, and fully formalized, way to
establish behavioral equivalence.
Recently, Patrignani et al. (2021) examined the contextual equivalence between
iso- and equi-recursive types, providing a mechanized proof in Coq for fully
abstract compilers. Their focus was on the compilation from iso-recursive to
equi-recursive types. They proved that the translation from iso-recursive to
equi-recursive types, by erasing unfold/fold operations, is fully abstract
with respect to contextual equivalence. The work also covered the compiler
from term-level fixpoints to equi-recursive types, but did not explore the
translation from equi-recursive to iso-recursive types. In our work, we
establish the bidirectional equivalence between full iso-recursive and equi-
recursive types, taking into account both typing and operational semantics.
Furthermore, in addition to type equality, we also study calculi with
subtyping, which have not been covered in previous work studying the
relationship between iso- and equi-recursive typing.
#### Subtyping recursive types
Amadio and Cardelli (1993) were the first to present a comprehensive formal
study of subtyping with equi-recursive types. This work inspired further
research that refined and simplified the original study (Brandt and Henglein,
1998; Gapeyev et al., 2002; Danielsson and Altenkirch, 2010; Komendantsky,
2011). In particular, Brandt and Henglein (1998) introduced a fixpoint rule
for a coinductive relation within an inductive framework. Their rules give
rise to a natural operational interpretation of proofs as coercions, a
direction they indicated as future work in their paper. Our work is inspired
by theirs,
and we formally present an operational interpretation of equi-recursive
equalities in our paper. However, instead of using coercions to model the
subtyping relation as they suggested, we use cast operators to model the
equalities between equi-recursive types. Furthermore, we show that our
computationally irrelevant cast operators simplify the metatheory and extend
to subtyping as well.
Iso-recursive subtyping, notably through the Amber rules introduced by
Cardelli (1985), has long been used in practice. The iso-recursive
Amber rules, while easy to implement, are difficult to reason with formally.
The only known direct proof for transitivity of subtyping for an algorithmic
version of the Amber rules was given by Bengtson et al. (2011). However this
proof relies on a complex inductive argument and was found difficult to
formalize in theorem provers (Backes et al., 2014; Zhou et al., 2022). Zhou et
al. proposed alternative formulations of iso-recursive subtyping, which are
equivalent to the Amber rules and are also easier to reason with. Their work
comes with a comprehensive formalization of the metatheory of iso-recursive
subtyping. Our work is based on some of their findings. In particular we reuse
their mechanized proof of the unfolding lemma to show the type soundness of
iso-recursive subtyping, but instead apply it in a setting with full iso-
recursive types. Thus, we extend their work to a more general setting, in
terms of typing and operational semantics.
To address the complexities of iso-recursive subtyping, several alternative
formulations of iso-recursive subtyping have been proposed. Hofmann and Pierce
(1995) introduced a subtyping relation that limits recursive subtyping to
covariant types only, making the rules more restrictive than the Amber rules.
Ligatti et al. (2017) offered a broader subtyping relation for iso-recursive
types, allowing a recursive type and its unfolded version to be considered
subtypes of each other. This approach extends the iso-recursive Amber rules
but is still not complete with respect to equi-recursive subtyping, since it
does not treat types that are not directly related by unfolding or folding as
subtypes. Additionally, Rossberg (2023) developed a calculus for higher-order
iso-recursive subtyping, to handle mutually recursive types more effectively.
#### Mechanizing recursive types
Danielsson and Altenkirch (2010) and Jones and Pearce (2016) formalized equi-
recursive subtyping relations in Agda using a mixed coinduction and induction
technique. Jones and Pearce presented a semantic interpretation of subtyping
and proved that their semantic interpretation is sound with respect to an
inductive interpretation of types, but they did not lift their results to
cover function types. Instead, they focused on other constructs like product
and sum types. Danielsson and Altenkirch are closer to our work since they
also did not consider semantic interpretations, but formalized in Agda an
alternative equi-recursive subtyping relation that allows an explicit
transitivity rule to be included. They formally proved that this relation is
equivalent to the tree model of subtyping as well as Brandt and Henglein’s
subtyping relation. In a similar vein, Komendantsky (2011) showed how to
implement mixed coinduction and induction within Coq, formalizing rules that
closely resemble those introduced by Danielsson and Altenkirch (2010). They
also validated their approach against Amadio and Cardelli’s tree model of
subtyping. Zhou et al. (2022) focused on formalizing Amber-style iso-recursive
subtyping in Coq, adding to the understanding of iso-recursive subtyping.
Patrignani et al. (2021), which we have discussed earlier, formalized three
calculi in Coq: a simply typed lambda calculus extended with iso-recursive
types, equi-recursive types, and term-level fixpoints. Their work is focused
on the translation from iso-recursive to equi-recursive types, by erasing
unfold/fold operations, and the translation from a calculus with term-level
fixpoints to a calculus with equi-recursive types. All of our results are
mechanized in Coq, with the exception of the decomposition lemma (Theorem
5.5). This result follows from Amadio and Cardelli’s work, but relies on a
significant amount of technical machinery which we have not formalized in
Coq. Thus we assume it as an axiom in our Coq formalization.
#### Casts for type-level computation
In this paper, we employ explicit cast operators to represent the
transformations between types related by equi-recursive equalities. Several
studies (Stump et al., 2009; Sjöberg et al., 2012; Kimmell et al., 2012;
Sjöberg and Weirich, 2015; Cretin, 2014; Sulzmann et al., 2007; Gundry, 2013;
Weirich et al., 2017; Yang and Oliveira, 2019) have also used explicit casts
for managed type-level computation. However, casts in those approaches
primarily address type-level computations within contexts such as dependent
types or type-level programming, rather than the operational interpretation of
recursive type equalities. When considering the dynamic semantics of cast-like
operations, there have been two major approaches. One approach is to use an
elaboration semantics, used in works like (Sjöberg et al., 2012; Sjöberg and
Weirich, 2015; Stump et al., 2009), where the semantics are only defined for a
cast-free language and the casts need to be erased before execution. Another
approach is to use push rules as seen in (Sulzmann et al., 2007; Yorgey et
al., 2012; Weirich et al., 2013, 2017), which is the approach that we adopt in
our work. Pure Iso-type Systems (PITS) (Yang and Oliveira, 2019) provides a
generalization of iso-recursive types with explicit casts, but their focus is
on unifying the syntax of terms and types while retaining decidable type
checking, instead of subsuming equi-recursive type casting as we do. Also, the
form of casts is different from ours; for example, they do not have a
fixpoint cast to enable coinductive reasoning.
## 7\. Conclusion
This paper proposes full iso-recursive types, a generalization of iso-
recursive types that can be used to encode the full power of equi-recursive
types. The key idea is to introduce a computationally irrelevant cast operator
in the term language that captures all the equi-recursive type equalities. We
present $\lambda^{\mu}_{Fi}$, a calculus that extends simply typed lambda
calculus with full iso-recursive types. $\lambda^{\mu}_{Fi}$ is proved to be
type sound and has the same expressive power as a calculus with equi-recursive
types, in terms of typing and dynamic semantics. Our results can also be
extended to subtyping, by encoding equi-recursive subtyping using iso-
recursive subtyping with cast operators.
As future work, we plan to extend $\lambda^{\mu}_{Fi}$ with other programming
language features, such as polymorphism and intersection and union types. It
is also interesting to see whether our results can scale to real world
languages (e.g. Haskell). In particular, it would be interesting to employ
full iso-recursive types in an internal target language with explicit cast
operators for a source language using equi-recursive types.
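As a small, informal illustration of this direction, the following Haskell
sketch (our own, and merely suggestive) shows how Haskell recursive types
already behave iso-recursively: the newtype constructor and its projection
play the role of the fold and unfold casts, and an equi-recursive language
would let the two sides of the isomorphism be used interchangeably without
term-level coercions.

-- A recursive type Mu f is isomorphic, not equal, to its one-step
-- unfolding f (Mu f); Fold and unfold witness the isomorphism.
newtype Mu f = Fold { unfold :: f (Mu f) }

-- Streams of Ints, i.e. the recursive type  mu a. (Int, a).
newtype StreamF a = StreamF (Int, a)
type Stream = Mu StreamF

headS :: Stream -> Int
headS s = case unfold s of StreamF (x, _) -> x

tailS :: Stream -> Stream
tailS s = case unfold s of StreamF (_, xs) -> xs

-- The explicit Fold below is the iso-recursive cast; an equi-recursive
-- system would accept the body at type Stream directly.
ones :: Stream
ones = Fold (StreamF (1, ones))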
## References
* (1)
* Abadi and Cardelli (1996) Martin Abadi and Luca Cardelli. 1996. _A theory of objects_. Springer Science & Business Media.
* Abadi and Fiore (1996) Martin Abadi and Marcelo P Fiore. 1996. Syntactic considerations on recursive types. In _Proceedings 11th Annual IEEE Symposium on Logic in Computer Science_. IEEE, 242–252.
* Amadio and Cardelli (1993) Roberto M Amadio and Luca Cardelli. 1993. Subtyping recursive types. _ACM Transactions on Programming Languages and Systems (TOPLAS)_ 15, 4 (1993), 575–631.
* Amin et al. (2016) Nada Amin, Samuel Grütter, Martin Odersky, Tiark Rompf, and Sandro Stucki. 2016. The essence of dependent object types. _A List of Successes That Can Change the World: Essays Dedicated to Philip Wadler on the Occasion of His 60th Birthday_ (2016), 249–272.
* Backes et al. (2014) Michael Backes, Cătălin Hriţcu, and Matteo Maffei. 2014. Union, intersection and refinement types and reasoning about type disjointness for secure protocol implementations. _J. Comput. Secur._ 22, 2 (mar 2014), 301–353.
* Bengtson et al. (2011) Jesper Bengtson, Karthikeyan Bhargavan, Cédric Fournet, Andrew D Gordon, and Sergio Maffeis. 2011. Refinement types for secure implementations. _ACM Transactions on Programming Languages and Systems (TOPLAS)_ 33, 2 (2011), 1–45.
* Brandt and Henglein (1998) Michael Brandt and Fritz Henglein. 1998. Coinductive axiomatization of recursive type equality and subtyping. _Fundamenta Informaticae_ 33, 4 (1998), 309–338.
* Cardelli (1985) Luca Cardelli. 1985. Amber, Combinators and Functional Programming Languages. _Proc. of the 13th Summer School of the LITP, Le Val D’Ajol, Vosges (France)_ (1985).
* Castagna et al. (2009) Giuseppe Castagna, Mariangiola Dezani-Ciancaglini, Elena Giachino, and Luca Padovani. 2009. Foundations of session types. In _Proceedings of the 11th ACM SIGPLAN conference on Principles and practice of declarative programming_. 219–230.
* Chen et al. (2014) Tzu-Chun Chen, Mariangiola Dezani-Ciancaglini, and Nobuko Yoshida. 2014. On the preciseness of subtyping in session types. In _Proceedings of the 16th International Symposium on Principles and Practice of Declarative Programming_. 135–146.
* Chugh (2015) Ravi Chugh. 2015. IsoLATE: A type system for self-recursion. In _Programming Languages and Systems: 24th European Symposium on Programming, ESOP 2015, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2015, London, UK, April 11-18, 2015, Proceedings 24_. Springer, 257–282.
* Crary et al. (1999) Karl Crary, Robert Harper, and Sidd Puri. 1999. What is a recursive module?. In _Proceedings of the ACM SIGPLAN 1999 conference on Programming language design and implementation_. 50–63.
* Cretin (2014) Julien Cretin. 2014. _Erasable coercions: a unified approach to type systems_. Ph. D. Dissertation. Université Paris-Diderot-Paris VII.
* Danielsson and Altenkirch (2010) Nils Anders Danielsson and Thorsten Altenkirch. 2010. Subtyping, declaratively: An exercise in mixed induction and coinduction. In _Mathematics of Program Construction: 10th International Conference, MPC 2010, Québec City, Canada, June 21-23, 2010. Proceedings 10_. Springer, 100–118.
* Duggan (2002) Dominic Duggan. 2002. Type-safe linking with recursive DLLs and shared libraries. _ACM Transactions on Programming Languages and Systems (TOPLAS)_ 24, 6 (2002), 711–804.
* Gapeyev et al. (2002) Vladimir Gapeyev, Michael Y Levin, and Benjamin C Pierce. 2002. Recursive subtyping revealed. _Journal of Functional Programming_ 12, 6 (2002), 511–548.
* Gay and Hole (2005) Simon Gay and Malcolm Hole. 2005. Subtyping for session types in the pi calculus. _Acta Informatica_ 42 (2005), 191–225.
* Gay and Vasconcelos (2010) Simon J Gay and Vasco T Vasconcelos. 2010. Linear type theory for asynchronous session types. _Journal of Functional Programming_ 20, 1 (2010), 19–50.
* Gundry (2013) Adam Michael Gundry. 2013. Type inference, Haskell and dependent types. (2013).
* Gunter (1992) Carl A Gunter. 1992. _Semantics of programming languages: structures and techniques_. MIT press.
* Harper and Mitchell (1993) Robert Harper and John C Mitchell. 1993. On the type structure of Standard ML. _Acm transactions on programming languages and systems (TOPLAS)_ 15, 2 (1993), 211–252.
* Harper and Stone (2000) Robert Harper and Christopher Stone. 2000. A type-theoretic interpretation of Standard ML. (2000).
* Hofmann and Pierce (1995) Martin Hofmann and Benjamin Pierce. 1995. Positive subtyping. In _Proceedings of the 22nd ACM SIGPLAN-SIGACT symposium on Principles of programming languages_. 186–197.
* Jones and Pearce (2016) Timothy Jones and David J Pearce. 2016. A mechanical soundness proof for subtyping over recursive types. In _Proceedings of the 18th Workshop on Formal Techniques for Java-like Programs_. 1–6.
* Kimmell et al. (2012) Garrin Kimmell, Aaron Stump, Harley D Eades III, Peng Fu, Tim Sheard, Stephanie Weirich, Chris Casinghino, Vilhelm Sjöberg, Nathan Collins, and Ki Yung Ahn. 2012. Equational reasoning about programs with general recursion and call-by-value semantics. In _Proceedings of the sixth workshop on Programming languages meets program verification_. 15–26.
* Komendantsky (2011) Vladimir Komendantsky. 2011. Subtyping by folding an inductive relation into a coinductive one. In _International Symposium on Trends in Functional Programming_. Springer, 17–32.
* Lee et al. (2015) Joseph Lee, Jonathan Aldrich, Troy Shaw, and Alex Potanin. 2015. A theory of tagged objects. In _29th European Conference on Object-Oriented Programming (ECOOP 2015)_. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik.
* Ligatti et al. (2017) Jay Ligatti, Jeremy Blackburn, and Michael Nachtigal. 2017. On subtyping-relation completeness, with an application to iso-recursive types. _ACM Transactions on Programming Languages and Systems (TOPLAS)_ 39, 1 (2017), 1–36.
* Morris (1968) James H Morris. 1968. Lambda calculus models of programming languages. (1968).
* Patrignani et al. (2021) Marco Patrignani, Eric Mark Martin, and Dominique Devriese. 2021. On the semantic expressiveness of recursive types. _Proceedings of the ACM on Programming Languages_ 5, POPL (2021), 1–29.
* Pierce (2002) Benjamin C Pierce. 2002. _Types and programming languages_. MIT press.
* Rompf and Amin (2016) Tiark Rompf and Nada Amin. 2016. Type soundness for dependent object types (DOT). In _Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications_. 624–641.
* Rossberg (2023) Andreas Rossberg. 2023. Mutually Iso-Recursive Subtyping. _Proceedings of the ACM on Programming Languages_ 7, OOPSLA2 (2023), 347–373.
* Siek and Tobin-Hochstadt (2016) Jeremy G Siek and Sam Tobin-Hochstadt. 2016. The recursive union of some gradual types. In _A List of Successes that can Change the World: Essays Dedicated to Philip Wadler on the Occasion of His 60th Birthday_. Springer, 388–410.
* Sjöberg et al. (2012) Vilhelm Sjöberg, Chris Casinghino, Ki Yung Ahn, Nathan Collins, Harley D Eades III, Peng Fu, Garrin Kimmell, Tim Sheard, Aaron Stump, and Stephanie Weirich. 2012. Irrelevance, heterogeneous equality, and call-by-value dependent type systems. _arXiv preprint arXiv:1202.2923_ (2012).
* Sjöberg and Weirich (2015) Vilhelm Sjöberg and Stephanie Weirich. 2015. Programming up to congruence. In _Proceedings of the 42nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages_. 369–382.
* Stump et al. (2009) Aaron Stump, Morgan Deters, Adam Petcher, Todd Schiller, and Timothy Simpson. 2009. Verified programming in Guru. In _Proceedings of the 3rd workshop on Programming languages meets program verification_. 49–58.
* Sulzmann et al. (2007) Martin Sulzmann, Manuel MT Chakravarty, Simon Peyton Jones, and Kevin Donnelly. 2007. System F with type equality coercions. In _Proceedings of the 2007 ACM SIGPLAN international workshop on Types in languages design and implementation_. 53–66.
* Swamy et al. (2011) Nikhil Swamy, Juan Chen, Cédric Fournet, Pierre-Yves Strub, Karthikeyan Bhargavan, and Jean Yang. 2011. Secure distributed programming with value-dependent types. _ACM SIGPLAN Notices_ 46, 9 (2011), 266–278.
* Urzyczyn (1995) Pawel Urzyczyn. 1995. Positive recursive type assignment. In _International Symposium on Mathematical Foundations of Computer Science_. Springer, 382–391.
* Vanderwaart et al. (2003) Joseph C Vanderwaart, Derek Dreyer, Leaf Petersen, Karl Crary, Robert Harper, and Perry Cheng. 2003. Typed compilation of recursive datatypes. _ACM SIGPLAN Notices_ 38, 3 (2003), 98–108.
* Weirich et al. (2013) Stephanie Weirich, Justin Hsu, and Richard A Eisenberg. 2013. System FC with explicit kind equality. _ACM SIGPLAN Notices_ 48, 9 (2013), 275–286.
* Weirich et al. (2017) Stephanie Weirich, Antoine Voizard, Pedro Henrique Azevedo de Amorim, and Richard A Eisenberg. 2017. A specification for dependent types in Haskell. _Proceedings of the ACM on Programming Languages_ 1, ICFP (2017), 1–29.
* Yang and Oliveira (2019) Yanpeng Yang and Bruno CDS Oliveira. 2019. Pure iso-type systems. _Journal of Functional Programming_ 29 (2019), e14.
* Yorgey et al. (2012) Brent A Yorgey, Stephanie Weirich, Julien Cretin, Simon Peyton Jones, Dimitrios Vytiniotis, and José Pedro Magalhães. 2012. Giving Haskell a promotion. In _Proceedings of the 8th ACM SIGPLAN Workshop on Types in Language Design and Implementation_. 53–66.
* Zhou et al. (2020) Yaoda Zhou, Bruno C d S Oliveira, and Jinxu Zhao. 2020. Revisiting iso-recursive subtyping. _Proceedings of the ACM on Programming Languages_ 4, OOPSLA (2020), 1–28.
* Zhou et al. (2022) Yaoda Zhou, Jinxu Zhao, and Bruno CDS Oliveira. 2022. Revisiting Iso-recursive subtyping. _ACM Transactions on Programming Languages and Systems (TOPLAS)_ 44, 4 (2022), 1–54.
|
# The time-like minimal surface equation in Minkowski space: low regularity
solutions
Albert Ai Department of Mathematics, University of Wisconsin, Madison
<EMAIL_ADDRESS>, Mihaela Ifrim Department of Mathematics, University of
Wisconsin, Madison<EMAIL_ADDRESS>and Daniel Tataru Department of
Mathematics, University of California at Berkeley<EMAIL_ADDRESS>
###### Abstract.
It has long been conjectured that for nonlinear wave equations which satisfy a
nonlinear form of the null condition, the low regularity well-posedness theory
can be significantly improved compared to the sharp results of Smith-Tataru
for the generic case. The aim of this article is to prove the first result in
this direction, namely for the time-like minimal surface equation in the
Minkowski space-time. Further, our improvement is substantial, namely by $3/8$
derivatives in two space dimensions and by $1/4$ derivatives in higher
dimensions.
###### Key words and phrases:
time-like minimal surface, low regularity, normal forms, nonlinear wave
equations, paracontrolled distributions
###### 2020 Mathematics Subject Classification:
35L72, 35B65
###### Contents
1. 1 Introduction
2. 2 Notations, paraproducts and some commutator type bounds
3. 3 A complete set of equations
4. 4 Energy and Strichartz estimates
5. 5 Control parameters and related bounds
6. 6 Paracontrolled distributions
7. 7 Energy estimates for the paradifferential equation
8. 8 Energy estimates for the full equation
9. 9 Energy and Strichartz estimates for the linearized equation
10. 10 Short time Strichartz estimates
11. 11 Conclusion: proof of the main result
## 1\. Introduction
The question of local well-posedness for nonlinear wave equations with rough
initial data is a fundamental question in the study of nonlinear waves, and
one which has received a lot of attention over the years. The result of Smith
and Tataru [36], proved almost 20 years ago, provides the sharp regularity
threshold for generic nonlinear wave equations in view of Lindblad’s
counterexample [28]. On the other hand, it has also been conjectured [43] that
for nonlinear wave equations which satisfy a suitable nonlinear null
condition, the result of [36] can be improved, and the well-posedness
threshold can be lowered. In this paper we provide the first result which
proves the validity of this conjecture, for a representative equation in this
class, namely the hyperbolic minimal surface equation. Further, our
improvement turns out to be substantial; precisely, we gain $3/8$ derivatives
in two space dimensions and $1/4$ derivatives in higher dimension. At this
regularity level, the Lorentzian metric $g$ in our problem is no better that
$C_{x,t}^{\frac{1}{4}+}\cap L^{2}_{t}C_{x}^{\frac{1}{2}+}$,
($C_{x,t}^{\frac{3}{8}+}\cap L^{4}_{t}C_{x}^{\frac{1}{2}+}$ in $2d$) far below
anything studied before.
Most of the ideas introduced in this paper will likely extend to other
nonlinear wave models, and open the way toward further progress in the study
of low regularity solutions.
### 1.1. The Minimal Surface Equation in Minkowski Space
Let $n\geq 2$, and ${\mathfrak{M}}^{n+2}$ be the $n+2$ dimensional Minkowski
space-time. A codimension one time-like submanifold
$\Sigma\subset{\mathfrak{M}}^{n+2}$ is called a minimal surface if it is
locally a critical point for the area functional
$\mathcal{L}=\int_{\Sigma}\,dA,$
where the area element is measured relative to the Minkowski metric. A
standard way to think of this equation is by representing $\Sigma$ as a graph
over ${\mathfrak{M}}^{n+1}$,
$\Sigma=\\{x_{n+1}=u(t,x_{1},\cdots,x_{n})\\},$
where $u$ is a real valued function
$u:D\subset{\mathfrak{M}}^{n+1}\rightarrow{\mathbb{R}},$
which satisfies the constraint
(1.1) $u_{t}^{2}<1+|\nabla_{x}u|^{2},$
expressing the condition that its graph is a time-like surface in
${\mathfrak{M}}^{n+2}$.
Then the surface area functional takes the form
(1.2) $\mathcal{L}(u)=\int\sqrt{1-u_{t}^{2}+|\nabla_{x}u|^{2}}\ dx.$
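The integrand here can be computed directly (a standard calculation, recorded
for the reader’s convenience): the induced area element on the graph is
$dA=\sqrt{|\det(m_{\alpha\beta}+\partial_{\alpha}u\,\partial_{\beta}u)|}\ dx\,dt=\sqrt{1+m^{\alpha\beta}\partial_{\alpha}u\,\partial_{\beta}u}\ dx\,dt=\sqrt{1-u_{t}^{2}+|\nabla_{x}u|^{2}}\ dx\,dt,$
using the rank-one determinant identity
$\det(M+vv^{T})=\det(M)(1+v^{T}M^{-1}v)$ together with $\det(m)=-1$.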
Interpreting this as a Lagrangian, the minimal surface equation can be thought
of as the associated Euler-Lagrange equation, which takes the form
(1.3) $-\frac{\partial}{\partial
t}\left(\frac{u_{t}}{\sqrt{1-u_{t}^{2}+|\nabla_{x}u|^{2}}}\right)+\sum_{i=1}^{n}\frac{\partial}{\partial
x_{i}}\left(\frac{u_{x_{i}}}{\sqrt{1-u_{t}^{2}+|\nabla_{x}u|^{2}}}\right)=0.$
Under the condition (1.1), the above equation is a quasilinear wave equation.
The left hand side of the last equation can be also interpreted as the mean
curvature of the hypersurface $\Sigma$, and as such the minimal surface
equation is alternatively described as the _zero mean curvature flow_.
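For the reader’s convenience, we record the short derivation of (1.3) from
(1.2): writing $L=\sqrt{1+m^{\mu\nu}\partial_{\mu}u\,\partial_{\nu}u}$, one has
$\partial L/\partial(\partial_{\alpha}u)=L^{-1}m^{\alpha\beta}\partial_{\beta}u$,
so the Euler-Lagrange equation
$\partial_{\alpha}\big(\partial L/\partial(\partial_{\alpha}u)\big)=0$ reads
$\partial_{\alpha}\left(\frac{m^{\alpha\beta}\partial_{\beta}u}{\sqrt{1+m^{\mu\nu}\partial_{\mu}u\,\partial_{\nu}u}}\right)=0,$
which is exactly (1.3) once the time component ($m^{00}=-1$) is separated out.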
In addition to the above geometric interpretation, the minimal surface
equation for time-like surfaces in the Minkowski space is also known as the
Born-Infeld model in nonlinear electromagnetism [48], as well as a model for
evolution of branes in string theory [14].
On the mathematical side, the question of global existence for small, smooth
and localized initial data was considered in work of Lindblad [29], Brendle
[7], Stefanov [38] and Wong [46]. The stability of a nonflat steady solution,
called the catenoid, was studied in [27, 9]. Some blow-up scenarios due to
failure of immersivity were investigated by Wong [47]. Minimal surfaces have
also been studied as singular limits of certain semilinear wave equations by
Jerrard [20]. The local well-posedness question fits into the similar theory
for the broader class of quasilinear wave equations, but there is also one
result which is specific to minimal surfaces, due to Ettinger [10]; this is
discussed later in the paper.
In our study of the minimal surface equation, the above way of representing it
is less useful, and instead it is better to think of it in geometric terms. In
particular the fact that the above Lagrangian (1.2) and the equation (1.3) are
formulated relative to a background Minkowski metric is absolutely non-
essential; one may instead use any flat Lorentzian metric. This is no surprise
since any two such metrics are equivalent via a linear transformation. Perhaps
less obvious is the fact that the equations may be actually written in an
identical fashion, independent of the background metric; see Remark 3.1 in
Section 3.
For full details on the structure of the equation we refer the reader to
Section 3 of the paper, but here we review the most important facts.
The main geometric object is the metric $g$ which is the trace of the
Minkowski metric in ${\mathfrak{M}}^{n+2}$ on $\Sigma$, and which, expressed
in the $(t=x_{0},x_{1},\cdots,x_{n})$ coordinates, has the form
(1.4) $g_{\alpha\beta}:=m_{\alpha\beta}+\partial_{\alpha}u\partial_{\beta}u,$
where $m_{\alpha\beta}$ denotes the Minkowski metric with signature
$(-1,1,...,1)$ in ${\mathfrak{M}}^{n+1}$. Since $\Sigma$ is time-like, this is
also a Lorentzian metric. This has determinant
(1.5)
$g:=|\det(g_{\alpha\beta})|=1+m^{\alpha\beta}\partial_{\alpha}u\,\partial_{\beta}u,$
and the dual metric is
(1.6)
$g^{\alpha\beta}:=m^{\alpha\beta}-\frac{m^{\alpha\gamma}m^{\beta\delta}\partial_{\gamma}u\,\partial_{\delta}u}{1+m^{\mu\nu}\partial_{\mu}u\,\partial_{\nu}u}.$
Here, and later in the paper, we carefully avoid raising indices with respect
to the Minkowski metric. Instead, all raised indices in this paper will be
with respect to the metric $g$.
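Both (1.5) and (1.6) follow from standard rank-one update identities; we
record the computation for completeness. Writing $v_{\alpha}=\partial_{\alpha}u$
and $q=1+m^{\mu\nu}v_{\mu}v_{\nu}$, the matrix determinant lemma and the
Sherman-Morrison formula give
$\det(g_{\alpha\beta})=\det(m)\,q=-q,\qquad g^{\alpha\beta}=m^{\alpha\beta}-\frac{(m^{\alpha\gamma}v_{\gamma})(m^{\beta\delta}v_{\delta})}{q},$
and the time-like constraint (1.1) is precisely the condition $q>0$.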
Relative to this metric, the equation (1.3) can be expressed in the form
(1.7) $\Box_{g}u=0,$
where $\Box_{g}$ is the covariant d’Alembertian, which in this problem
will be shown to have the simple expression
(1.8) $\Box_{g}=g^{\alpha\beta}\partial_{\alpha}\partial_{\beta}.$
An important role will also be played by the associated linearized equation,
which, as it turns out, may be easily expressed in divergence form as
(1.9)
$\partial_{\alpha}{\hat{g}}^{\alpha\beta}\partial_{\beta}v=0,\qquad{\hat{g}}^{\alpha\beta}:=g^{-\frac{1}{2}}g^{\alpha\beta}.$
Our objective in this paper will be to study the local well-posedness of the
associated Cauchy problem with initial data at $t=0$,
(1.10) $\left\\{\begin{aligned} &\Box_{g}u=0,\\\ &u(t=0)=u_{0},\\\
&u_{t}(t=0)=u_{1},\end{aligned}\right.$
where the initial data $(u_{0},u_{1})$ is taken in classical Sobolev spaces,
(1.11) $u[0]:=(u_{0},u_{1})\in\mathcal{H}^{s}:=H^{s}\times H^{s-1},$
and is subject to the constraint
(1.12) $u_{1}^{2}-|\partial_{x}u_{0}|^{2}<1.$
Here we use the following notation for the Cauchy data in (1.3) at time $t$,
$u[t]:=(u(t,\cdot),u_{t}(t,\cdot)).$
We aim to investigate the range of exponents $s$ for which local well-
posedness holds, and significantly improve the lower bound for this range.
### 1.2. Nonlinear wave equations
The hyperbolic minimal surface equation (1.3) can be seen as a special case of
more general scalar quasilinear wave equations, which have the form
(1.13) $g^{\alpha\beta}(\partial
u)\partial_{\alpha}\partial_{\beta}u=N(u,\partial u),$
where, again, $g^{\alpha\beta}$ is assumed to be Lorentzian, but without any
further structural properties, and where $u$ may be a vector valued function.
This generic equation will serve as a reference.
As a starting point, we note that the equation (1.3) (and also (1.13) if
$N=0$) admits the scaling law
$u(t,x)\to\lambda^{-1}u(\lambda t,\lambda x).$
This allows us to identify the critical Sobolev exponent as
$s_{c}=\frac{n+2}{2}.$
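Indeed, a routine computation (included for completeness) shows that for
$u_{\lambda}(t,x)=\lambda^{-1}u(\lambda t,\lambda x)$ one has
$\|u_{\lambda}(0)\|_{\dot{H}^{s}}=\lambda^{s-1-\frac{n}{2}}\|u(0)\|_{\dot{H}^{s}},$
so the (homogeneous) $\mathcal{H}^{s}$ norm of the data is scale invariant
exactly when $s=\frac{n+2}{2}$.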
Heuristically, $s_{c}$ serves as a universal threshold for local well-
posedness, i.e. we must have $s>s_{c}$. Taking a naive view, one might
think of trying to reach the scaling exponent $s_{c}$. However, this is a
quasilinear wave equation, and getting to $s_{c}$ has so far proved impossible
in any problem of this type.
As a good threshold from above, one might start with the classical well-
posedness result, due to Hughes, Kato, and Marsden [16], and which asserts
that local well-posedness holds for $s>s_{c}+1$. This applies to all equations
of the form (1.13), and can be proved solely by using energy estimates. These
have the form
(1.14) $\|u[t]\|_{\mathcal{H}^{s}}\lesssim
e^{\int_{0}^{t}\|\partial^{2}u(s)\|_{L^{\infty}}ds}\|u[0]\|_{\mathcal{H}^{s}}.$
They may also be restated in terms of quasilinear energy functionals $E^{s}$
which have the following two properties:
1. (a)
Coercivity,
$E^{s}(u[t])\approx\|u[t]\|_{\mathcal{H}^{s}}^{2}.$
2. (b)
Energy growth,
(1.15) $\frac{d}{dt}E^{s}(u)\lesssim\|\partial^{2}u\|_{L^{\infty}}\cdot
E^{s}(u).$
To close the energy estimates, it then suffices to use Sobolev embeddings,
which allow one to bound the above $L^{\infty}$ norm, which we will refer to
as a _control parameter_ , in terms of the $\mathcal{H}^{s}$ Sobolev norm
provided that $s>\frac{n}{2}+2$, which is one derivative above scaling.
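Concretely, this is a standard two-step argument: Grönwall’s inequality
applied to (1.15) yields (1.14), while the Sobolev embedding
$H^{s-2}\subset L^{\infty}$, valid for $s-2>\frac{n}{2}$, gives
$\|\partial^{2}u(t)\|_{L^{\infty}}\lesssim\|u[t]\|_{\mathcal{H}^{s}},\qquad s>\frac{n}{2}+2,$
which allows one to close the energy estimates on a time interval depending
only on the size of the initial data.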
The reason a derivative is lost in the above analysis is that one would only
need to bound $\|\partial^{2}u\|_{L^{1}L^{\infty}}$, whereas the norm that is
actually controlled is $\|\partial^{2}u\|_{L^{\infty}L^{\infty}}$; this
exactly accounts for the one derivative difference in scaling. It also
suggests that the natural way to improve the classical result is to control
the $L^{p}L^{\infty}$ norm directly. This is indeed possible in the context of
the Strichartz estimates, which in dimension three and higher give the bound
$\|\partial^{2}u\|_{L^{2}L^{\infty}}\lesssim\|u[0]\|_{\mathcal{H}^{\frac{n+3}{2}}},$
with another $\epsilon$ derivatives loss in three space dimensions. When true,
such a bound yields well-posedness for $s>\frac{n+3}{2}$, which is $1/2$
derivatives above scaling. The numerology changes slightly in two space
dimensions, where the best possible Strichartz estimate has the form
$\|\partial^{2}u\|_{L^{4}L^{\infty}}\lesssim\|u[0]\|_{\mathcal{H}^{\frac{n}{2}+\frac{7}{4}}},$
which is $3/4$ derivatives above scaling.
The difficulty in using Strichartz estimates is that, while these are well
known in the constant coefficient case [11, 22] and even for smooth variable
coefficients [21, 31], that is not as simple in the case of rough
coefficients. Indeed, as it turned out, the full Strichartz estimates are true
for $C^{2}$ metrics, see [33] ($n=2,3$), [42] (all $n$), but not, in general,
for $C^{\sigma}$ metrics when $\sigma<2$, see the counterexamples of [34, 35].
This difficulty was resolved in two stages:
1. (i)
_Semiclassical time scales and Strichartz estimates with loss of derivatives._
The idea here, which applies even for $C^{\sigma}$ metrics with $\sigma<2$, is
that, associated to each dyadic frequency scale $2^{k}$, there is a
corresponding “ semiclassical ” time scale $T_{k}=2^{-\alpha k}$, with
$\alpha$ dependent on $\sigma$, so that full Strichartz estimates hold at
frequency $2^{k}$ on the scale $T_{k}$. Strichartz estimates with loss of
derivatives are then obtained by summing up the short time estimates with
respect to the time intervals, separately at each frequency. This idea was
independently introduced in [5] and [41], and further refined in [4] and [44].
2. (ii)
_Wave packet coherence and parametrices._ The observation here is that in the
study of nonlinear wave equations such as (1.13), in addition to Sobolev-type
regularity for the metric, we have an additional piece of information, namely
that the metric itself can be seen as a solution to a nonlinear wave equation.
This idea was first introduced and partially exploited in [24], but was
brought to full fruition in [36], where it was shown that almost loss-less
Strichartz estimates hold for the solutions to (1.13) at exactly the correct
regularity level.
The result in [36] represents the starting point of the present work, and is
concisely stated as follows (the primary result in [36] is for the case when
$g=g(u)$, but it directly carries over to equations of the form (1.13); the
result as stated below applies equally to both cases, though if $g=g(u)$ then
$s_{c}$ is one unit lower):
###### Theorem 1.1 (Smith-Tataru [36]).
(1.13) is locally well-posed in $\mathcal{H}^{s}$ provided that
(1.16) $s>s_{c}+\frac{3}{4},\qquad n=2,$
respectively
(1.17) $s>s_{c}+\frac{1}{2},\qquad n\geq 3.$
As part of this result, almost loss-less Strichartz estimates were obtained
both directly for the solution $u$, and more generally for the associated
linearized evolution. We will return to these estimates in Section 10 for a
more detailed statement and an in-depth discussion.
The optimality of this result, at least in dimension three, follows from work
of Lindblad [28], see also the more recent two dimensional result in [32].
However, this counterexample should only apply to “generic” models, and the
local well-posedness threshold might possibly be improved in problems with
additional structure, i.e. some form of null condition.
Moving forward, we recall that in [43], a null condition was formulated for
quasilinear equations of the form (1.13).
###### Definition 1.2 ([43]).
The nonlinear wave equation (1.13) satisfies the _nonlinear null condition_ if
(1.18) $\frac{\partial g^{\alpha\beta}(u,p)}{\partial
p_{\gamma}}\xi_{\alpha}\xi_{\beta}\xi_{\gamma}=0\qquad\text{whenever}\qquad
g^{\alpha\beta}(u,p)\xi_{\alpha}\xi_{\beta}=0.$
Here we use the terminology “nonlinear null condition” in order to distinguish
it from the classical null condition, which is relative to the Minkowski
metric, and was heavily used in the study of global well-posedness for
problems with small localized data, see [23] as well as the books [37, 15]. In
geometric terms, this null condition may be seen as a cancellation condition
for the self interactions of wave packets traveling along null geodesics. In
Section 3 we verify that the minimal surface equation indeed satisfies the
nonlinear null condition.
Further, it was conjectured in [43] that, for problems satisfying (1.18), the
local well-posedness threshold can be lowered below the one in [36]. This
conjecture has remained fully open until now, though one should mention two
results in [25] and [10] for the Einstein equation, respectively the minimal
surface equation, where the endpoint in Theorem 1.1 is reached but not
crossed.
The present work provides the first positive result in this direction,
specifically for the minimal surface equation. Indeed, not only are we able to
lower the local well-posedness threshold in Theorem 1.1, but in effect we
obtain a substantial improvement, namely by $3/8$ derivatives in two space
dimensions and by $1/4$ derivatives in higher dimension.
### 1.3. The main result
Our main result, stated in a succinct form, is as follows:
###### Theorem 1.3.
The Cauchy problem for the minimal surface equation (1.10) is locally well-
posed for initial data $u[0]$ in $\mathcal{H}^{s}$ which satisfies the
constraint (1.12), where
(1.19) $s>s_{c}+\frac{3}{8},\qquad n=2,$
respectively
(1.20) $s>s_{c}+\frac{1}{4},\qquad n\geq 3.$
The result is valid regardless of the $\mathcal{H}^{s}$ size of the initial
data. Here we interpret local well-posedness in a strong Hadamard sense,
including:
* •
_existence of solutions_ in the class $u[\cdot]\in C([0,T];\mathcal{H}^{s})$,
with $T$ depending only on the $\mathcal{H}^{s}$ size of the initial data.
* •
_uniqueness of solutions_ , in the sense that they are the unique limits of
smooth solutions.
* •
_higher regularity_ , i.e. if in addition the initial data
$u[0]\in\mathcal{H}^{m}$ with $m>s$, then the solution satisfies
$u[\cdot]\in C([0,T];\mathcal{H}^{m})$, with a bound depending only on the
$\mathcal{H}^{m}$ size of the data,
$\|u[\cdot]\|_{C([0,T];\mathcal{H}^{m})}\lesssim\|u[0]\|_{\mathcal{H}^{m}}.$
* •
_continuous dependence_ in $\mathcal{H}^{s}$, i.e. continuity of the data-to-
solution map
$\mathcal{H}^{s}\ni u[0]\to u[\cdot]\in C([0,T];\mathcal{H}^{s}).$
* •
_weak Lipschitz dependence_ , i.e. for two $\mathcal{H}^{s}$ solutions $u$ and
$v$ we have the difference bound
$\|u[\cdot]-v[\cdot]\|_{C([0,T];\mathcal{H}^{\frac{1}{2}})}\lesssim\|u[0]-v[0]\|_{\mathcal{H}^{\frac{1}{2}}}.$
In addition to the above components of the local well-posedness result, a key
intermediate role in the proof of the above theorem is played by the
Strichartz estimates, not only for the solution $u$, but also, more
importantly, for the linearized problem
(1.21) $\left\\{\begin{aligned}
&\partial_{\alpha}{\hat{g}}^{\alpha\beta}\partial_{\beta}v=0,\\\
&v(t=0)=v_{0},\\\ &v_{t}(t=0)=v_{1},\end{aligned}\right.$
as well as its paradifferential counterpart
(1.22) $\left\\{\begin{aligned}
&\partial_{\alpha}T_{{\hat{g}}^{\alpha\beta}}\partial_{\beta}v=0,\\\
&v(t=0)=v_{0},\\\ &v_{t}(t=0)=v_{1},\end{aligned}\right.$
Here the paraproducts are defined using the Weyl quantization, see Section 2.2
for more details. For later reference, we state the Strichartz estimates in a
separate theorem:
###### Theorem 1.4.
There exists some $\delta_{0}>0$, depending on $s$ as in (1.19), respectively
(1.20), so that the following properties hold for every solution $u$ as in Theorem 1.3:
a) The solution $u$ in Theorem 1.3 satisfies the Strichartz estimates
(1.23) $\displaystyle\|\langle D_{x}\rangle^{\frac{1}{2}+\delta_{0}}\partial
u\|_{L^{4}L^{\infty}}\lesssim 1,\qquad n=2,$ $\displaystyle\|\langle
D_{x}\rangle^{\frac{1}{2}+\delta_{0}}\partial u\|_{L^{2}L^{\infty}}\lesssim
1,\qquad n\geq 3.$
b) Both the linearized equation (1.21) and its paradifferential version (1.22)
are well-posed in $\mathcal{H}^{\frac{5}{8}}$ for $n=2$, respectively
$\mathcal{H}^{\frac{1}{2}}$ for $n\geq 3$, and the following Strichartz
estimates hold for each $\delta>0$ (with an implicit constant which may depend
on $\delta$):
(1.24) $\displaystyle\|v\|_{L^{\infty}\mathcal{H}^{\frac{5}{8}}}+\|\langle
D_{x}\rangle^{-\frac{n}{2}-\frac{1}{4}-\delta}\partial
v\|_{L^{4}(0,1;L^{\infty})}\lesssim$ $\displaystyle\
\|v[0]\|_{\mathcal{H}^{\frac{5}{8}}}\qquad n=2,$
respectively
(1.25) $\displaystyle\|v\|_{L^{\infty}\mathcal{H}^{\frac{1}{2}}}+\|\langle
D_{x}\rangle^{-\frac{n}{2}-\frac{1}{4}-\delta}\partial
v\|_{L^{2}(0,1;L^{\infty})}\lesssim$ $\displaystyle\
\|v[0]\|_{\mathcal{H}^{\frac{1}{2}}}\qquad n\geq 3.$
We note that the Strichartz estimates in both parts (a) and (b) have
derivative losses, namely $1/8$ derivatives in the $L^{4}L^{\infty}$ bound in
two dimensions, respectively $1/4$ derivatives in higher dimensions. These
estimates only represent the tip of the iceberg. One may also consider the
inhomogeneous problem, allow source terms in dual Strichartz spaces, etc.
These and other variations which play a role in this paper are discussed in
Section 4.
To understand the new ideas in the proof of our main theorem, we recall the
two key elements of the proof of the result in [36], namely (i) the classical
energy estimates (1.15) and (ii) the nearly lossless Strichartz estimates; at
the time, the chief difficulty was to prove the Strichartz estimates.
In this paper we completely turn the tables, taking part (ii) above for
granted, and instead work to improve the energy estimates. Let us begin with a
simple observation, which is that the minimal surface equation (1.7) has a
cubic nonlinearity, which allows one to replace (1.15) with
(1.26) $\frac{d}{dt}E^{s}(u)\lesssim\|\partial
u\|_{L^{\infty}}\|\partial^{2}u\|_{L^{\infty}}\cdot E^{s}(u).$
This is what one calls a _cubic energy estimate_ , which is useful in the
study of long time solutions but does not yet help with the low regularity
well-posedness question. The key to progress lies in developing a much
stronger form of this bound, which roughly has the form (see Section 2 for
our Besov norm notations)
(1.27) $\frac{d}{dt}E^{s}(u)\lesssim\|\partial
u\|_{B^{\frac{1}{2}}_{\infty,2}}^{2}\cdot E^{s}(u),$
where the two control norms on the right are now balanced, and only require
$1/2$ derivative less than (1.26). This is what we call a _balanced energy
estimate_ , which may only hold for a very carefully chosen energy functional
$E^{s}$.
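To compare (1.27) with (1.26) at a heuristic level, note that for each dyadic
frequency one has $\|P_{k}\partial^{2}u\|_{L^{\infty}}\approx
2^{k}\|P_{k}\partial u\|_{L^{\infty}}$, hence
$2^{k}\|P_{k}\partial u\|_{L^{\infty}}^{2}\approx\|P_{k}\partial u\|_{L^{\infty}}\|P_{k}\partial^{2}u\|_{L^{\infty}};$
thus, frequency by frequency, the control norm in (1.27) has the same total
strength as the product in (1.26), but with the extra derivative split evenly
between the two factors, which is the sense in which (1.27) is balanced.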
This is an idea which originates in our recent work on 2D water waves (see
[1]), where balanced energy estimates are also used in order to substantially
lower the low regularity well-posedness threshold. Going back further, this
has its roots in earlier work of the last two authors [18], [17], in the
context of trying to apply normal form methods in order to obtain long time
well-posedness results in quasilinear problems. There we have introduced what
we called the _modified energy method_ , which in a nutshell asserts that in
quasilinear problems it is far better to modify the energies in a normal form
fashion, rather than to transform the equation. It was the cubic energy
estimates of [17] which were later refined in [1] to balanced energy
estimates. Along the way, we have also borrowed and adapted another idea from
Alazard and Delort [3, 2], which is to prepare the problem with a partial
normal form transformation, and is a part of their broader concept of
paradiagonalization; that same idea is also used here.
There are several major difficulties in the way of proving energy estimates
such as (1.27):
* •
The normal form structure is somewhat weaker in the case of the minimal
surface equation, compared to water waves. As a consequence, we have to
carefully understand which components of the equation can be improved with a
normal form analysis and which cannot, and thus have to be estimated directly.
* •
Not only are the energy functionals $E^{s}$ not explicit, they have to be
constructed in a very delicate way, following a procedure which is reminiscent
of Tao’s renormalization idea in the context of wave-maps [40], as well as the
subsequent work [45] of the third author on the same problem.
* •
Keeping track of symbol regularities in our energy functionals and in the
proof of the energy estimates is also a difficult task. To succeed, here we
adapt and refine a suitable notion of paracontrolled distributions, an idea
which has already been used successfully in the realm of stochastic pde’s [12,
13].
* •
The balanced energy estimates need to be proved not only for the full
equation, but also for the associated linear paradifferential equation, as a
key intermediate step, as well as for the full linearized flow. In particular,
when linearizing, some of the favourable normal form structure (or null
structure, to use the nonlinear wave equations language) is lost, and the
proofs become considerably more complex.
Finally, the Strichartz estimates of [36] cannot be used directly here.
Instead, we are able to reformulate them in a paradifferential fashion, and to
apply them on appropriate semiclassical time scales. After interval summation,
this leads to Strichartz estimates on the unit time scale but with derivative
losses. Precisely, in our main Strichartz estimates, whose aim is to bound the
control parameters in (1.27), we end up losing essentially $1/8$ derivatives
in two space dimensions, and $1/4$ derivatives in higher dimension. These
losses eventually determine the regularity thresholds in our main result in
Theorem 1.3.
One consequence of these energy estimates is the following continuation result
for the solutions:
###### Theorem 1.5.
The $\mathcal{H}^{s}$ solution $u$ given by Theorem 1.3 can be continued for
as long as the following integral remains finite:
(1.28) $\int_{0}^{T}\|\partial
u(t)\|_{B^{\frac{1}{2}}_{\infty,2}}^{2}dt<\infty.$
### 1.4. An outline of the paper
#### Paraproducts and paradifferential calculus
The bulk of the paper is written in the language of paradifferential calculus.
The notations and some of the basic product and paracommutator bounds are
introduced in Section 2. Importantly, we use the Weyl quantization throughout;
this plays a substantial role as differences between quantizations are not
always perturbative in our analysis. Also of note, we emphasize the difference
between balanced and unbalanced bounds, so some of our $\Psi$DO product or
commutator expansions have the form
$\text{commutator}=\text{principal part}+\text{unbalanced lower
order}+\text{balanced error}.$
#### The geometric form of the minimal surface equation
While the flat d’Alembertian may naively appear to play a role in the
expansion (1.3) of the minimal surface equation, this is not at all useful,
and instead we need to adopt a geometric viewpoint. As a starting point, in
Section 3 we consider several equivalent formulations of the minimal surface
equation, leading to its geometric form in (1.7). This is based on the metric
$g$ associated to the solution $u$ by (1.4), whose dual we also compute. Two
other conformally equivalent metrics will also play a role. In the same
section we derive the linearized equation, and also introduce the associated
linear paradifferential flow.
#### Strichartz estimates
As explained earlier, Strichartz estimates play a major role in our analysis.
These are applied to several equations, namely the full evolution, the linear
paradifferential evolution and finally the linearized equation; in the present
paper, we view the bounds for the paradifferential equation as the core ones,
and the other bounds as derived bounds, though not necessarily in a directly
perturbative fashion. The Strichartz estimates admit a number of formulations:
in direct form for the homogeneous flow, in dual form for the inhomogeneous
one, or in the full form. The aim of Section 4 is to introduce all these forms
of the Strichartz estimates, as well as to describe the relations between
them, in the context of this paper. A new idea here is to allow source terms
which are time derivatives of distributions in appropriate spaces; this is
achieved by reinterpreting the wave equation as a system.
#### Control parameters in energy estimates
We begin Section 5 by defining the control parameters ${\mathcal{A}}$ and
${\mathcal{B}}$, which will play a fundamental role in our energy estimates.
Here ${\mathcal{A}}$ is a scale invariant norm, at the level of $\|\partial
u\|_{L^{\infty}}$, which will remain small uniformly in time. ${\mathcal{B}}$,
on the other hand, is time dependent and at the level of
$\||D_{x}|^{\frac{1}{2}}\partial u\|_{L^{\infty}}$, and will control the
energy growth. Typically, our _balanced cubic energy estimates_ will have the
form
$\frac{\partial E}{\partial t}\lesssim_{{\mathcal{A}}}{\mathcal{B}}^{2}E.$
To propagate energy bounds we will need to know that ${\mathcal{B}}\in
L^{2}_{t}$. Also in the same section we prove a number of core bounds for our
solutions in terms of the control parameters.
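To illustrate why ${\mathcal{B}}\in L^{2}_{t}$ is the natural requirement:
granting an energy estimate of the above form, Gronwall's inequality yields
$E(t)\leq E(0)\,\exp\left(C({\mathcal{A}})\int_{0}^{t}{\mathcal{B}}^{2}(s)\,ds\right),$
which is finite precisely when ${\mathcal{B}}\in L^{2}_{t}$.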
#### The multiplier method and paracontrolled distributions
Both the construction of our energies and the proof of the energy estimates
are based on a paradifferential implementation of the multiplier method, which
leads to space-time identities of the form
$\int_{0}^{T}\\!\\!\\!\\!\int_{{\mathbb{R}}^{n}}\Box_{g}u\cdot
Xu\,dxdt=\left.E_{X}(u)\right|_{0}^{T}+\int_{0}^{T}\\!\\!\\!\\!\int_{{\mathbb{R}}^{n}}R(u)\,dxdt$
in a paradifferential format, where the vector field $X$ is our multiplier and
$E_{X}$ is its associated energy, while $R(u)$ is the energy flux term which
will have to be estimated perturbatively. A fundamental difficulty is that the
multiplier $X$, which should heuristically be at the regularity level of
$\partial u$ cannot be chosen algebraically, and instead has to be constructed
in an inductive manner relative to the dyadic frequency scales. In order to
accurately quantify the regularity of $X$, in Section 6 we use and refine the
notion of paracontrolled distributions; in a nutshell, while $X$ may not be
chosen to be a function of $\partial u$, it will still have to be
paracontrolled by $\partial u$, which we denote by $X\llcurly u$.
#### Energy estimates for the paradifferential equation
The construction of the energy functionals is carried out in Section 7,
primarily at the level of the linear paradifferential equation, first in
$\mathcal{H}^{1}$ and then in $\mathcal{H}^{s}$. In both cases there are two
steps: first the construction of the symbol of the multiplier $X$, as a
paracontrolled distribution, and then the proof of the energy estimates. The
difference between the two cases is that $X$ is a vector field in the first
case, but a full pseudodifferential operator in the second case; because of
this, we prefer to present the two arguments separately.
#### Energy estimates for the full equation
The aim of Section 8 is to prove that balanced cubic energy estimates hold for
the full equation in all $\mathcal{H}^{s}$ spaces with $s\geq 1$. We do this
by thinking about the full equation in a paradifferential form, i.e. as a
linear paradifferential equation with a nonlinear source term, and then by
applying a normal form transformation to the unbalanced part of the source
term.
#### Well-posedness for the linearized equation
The goal of Section 9 is to establish both energy and Strichartz estimates for
$\mathcal{H}^{\frac{1}{2}}$ solutions ($\mathcal{H}^{\frac{5}{8}}$ in
dimension two) to the linearized equation. This is achieved under the
assumption that both energy and Strichartz estimates for
$\mathcal{H}^{\frac{1}{2}}$ solutions ($\mathcal{H}^{\frac{5}{8}}$ in
dimension two) for the linear paradifferential equation hold. We remark that,
while the energy estimates for the linear paradifferential equation have
already been established by this point in the paper, the corresponding
Strichartz estimates have yet to be proved.
#### Short time Strichartz estimates for the full equation
The local well-posedness result of Smith and Tataru [36] yields well-posedness
and nearly sharp Strichartz estimates on the unit time scale for initial data
which is small in the appropriate Sobolev space. Our objective in Section 10
is to recast this result as a short time result for a corresponding large data
problem. This is a somewhat standard scaling/finite speed of propagation
argument, though with an interesting twist due to the need to use homogeneous
Sobolev norms.
#### Small vs. large $\mathcal{H}^{s}$ data
In our main well-posedness proof, in order to avoid more cumbersome notations
and estimates, it is convenient to work with initial data which is small in
$\mathcal{H}^{s}$. This is not a major problem, as this is a nonlinear wave
equation which exhibits finite speed of propagation. This allows us to reduce
the large data problem to the small data problem by appropriate localizations.
This argument is carried out at the beginning of Section 11.
#### Rough solutions as limits of smooth solutions
Our sequence of modules discussed so far comes together in Section 11, where
we finally obtain our rough solutions $u$ as a limit of smooth solutions
$u^{h}$ with initial data frequency localized below frequency $2^{h}$. The
bulk of the proof is organized as a bootstrap argument, where the bootstrap
quantities are uniform energy type bounds for both $u^{h}$ and for their
increments $v^{h}=\dfrac{d}{dh}u^{h}$, which solve the corresponding
linearized equation. The main steps are as follows:
* •
we use the short time Strichartz estimates derived from [36] for $u^{h}$ and
$v^{h}$ in order to obtain long time Strichartz estimates for $u^{h}$, which
in turn implies energy estimates for both the full equation and the
paradifferential equation, and closes one half of the bootstrap.
* •
we combine the short time Strichartz estimates and the long time energy
estimates for the paradifferential equation in $\mathcal{H}^{\frac{1}{2}}$
($\mathcal{H}^{\frac{5}{8}}$ if $n=2$) to obtain long time Strichartz
estimates for the same paradifferential equation.
* •
we use the energy and Strichartz estimates for the paradifferential equation
to obtain similar bounds for the linearized equation. This in turn implies
long time energy estimates for $v^{h}$, closing the second half of the
bootstrap loop.
#### The well-posedness argument
Once we have a complete collection of energy estimates and Strichartz
estimates for both the full equation and the linearized equation, we are able
to use frequency envelopes in order to prove the remaining part of the well-
posedness results, namely the strong convergence of the smooth solutions, the
continuous dependence, and the associated uniqueness property. In this we
follow the strategy outlined in the last two authors’ expository paper [19].
### 1.5. Acknowledgements
The first author was supported by the Henry Luce Foundation. The second author
was supported by a Luce Associate Professorship, by the Sloan Foundation, and
by an NSF CAREER grant DMS-1845037. The third author was supported by the NSF
grant DMS-2054975 as well as by a Simons Investigator grant from the Simons
Foundation.
This material is also based upon work supported by the National Science
Foundation under Grant No. DMS-1928930 while all three authors participated in
the program Mathematical problems in fluid dynamics hosted by the Mathematical
Sciences Research Institute in Berkeley, California, during the Spring 2021
semester.
## 2\. Notations, paraproducts and some commutator type bounds
We begin with some standard notations and conventions:
* •
The greek indices $\alpha,\beta,\gamma,\delta$ etc. in expressions range from
$0$ to $n$, where $0$ stands for time. Roman indices $i,j$ are limited to the
range from $1$ to $n$, and are associated only to spatial coordinates.
* •
The differentiation operators with respect to all coordinates are
$\partial_{\alpha}$, $\alpha=0,...,n$. By $\partial$ without any index we
denote the full space-time gradient. To separate only spatial derivatives we
use the notation $\partial_{x}$.
* •
We consistently use the Einstein summation convention, where repeated indices
are summed over, unless explicitly stated otherwise.
* •
The inequality sign $x\lesssim y$ means $x\leq Cy$ with a universal implicit
constant $C$. If the implicit constant $C$ depends instead on some parameter
$A$, then we write $x\lesssim_{A}y$.
### 2.1. Littlewood-Paley decompositions and Sobolev spaces
We denote the Fourier variables by $\xi_{\alpha}$ with $\alpha=0,...,n$. To
separate the spatial Fourier variables we use the notation $\xi^{\prime}$.
#### 2.1.1. Littlewood-Paley decompositions
For distributions in ${\mathbb{R}}^{n}$ we will use the standard inhomogeneous
Littlewood-Paley decomposition
$u=\sum_{k=0}^{\infty}P_{k}u,$
where $P_{k}=P_{k}(D_{x})$ are multipliers with smooth symbols
$p_{k}(\xi^{\prime})$, localized in the dyadic frequency region
$\\{|\xi^{\prime}|\approx 2^{k}\\}$ (unless $k=0$, where we capture the entire unit
ball). We emphasize that no such decompositions are used in the paper with
respect to the time variable. We will also use the notations $P_{<k}$,
$P_{>k}$ with the standard meaning. Often we will use shorthand for the
Littlewood-Paley pieces of $u$, such as $u_{k}:=P_{k}u$ or $u_{<k}:=P_{<k}u$.
#### 2.1.2. Function spaces
For our main evolution we will use the inhomogeneous Sobolev spaces $H^{s}$, often
combined as product spaces $\mathcal{H}^{s}=H^{s}\times H^{s-1}$ for the
position/velocity components of our evolution. Only in the next-to-last
section of the paper will we have an auxiliary use for the corresponding
homogeneous spaces $\dot{H}^{s}$, in connection with scaling analysis.
For our estimates we will use $L^{\infty}$ based control norms. In addition to
the standard $L^{\infty}$ norms, in many estimates we will use the standard
inhomogeneous $BMO$ norm, as well as its close relatives $BMO^{s}$, with norm
defined as
$\|f\|_{BMO^{s}}=\|\langle D_{x}\rangle^{s}f\|_{BMO}.$
We will also need several related $L^{\infty}$ based Besov norms
$B^{s}_{\infty,q}$, defined as
$\|u\|_{B^{s}_{\infty,q}}^{q}=\sum_{k}2^{qks}\|P_{k}u\|_{L^{\infty}}^{q}$
with the obvious changes if $q=\infty$. In particular the spaces
$B^{0}_{\infty,1}$ and $B^{\frac{1}{2}}_{\infty,2}$ will be used for our
control norms ${\mathcal{A}}$ and ${\mathcal{B}}$.
#### 2.1.3. Frequency envelopes
Throughout the paper we will use the notion of _frequency envelopes_ ,
introduced by Tao (see for example [40]), which is a very useful device that
tracks the evolution of the energy of solutions between dyadic energy shells.
###### Definition 2.1.
We say that $\\{c_{k}\\}_{k\geq 0}\in\ell^{2}$ is a frequency envelope for a
function $u$ in $H^{s}$ if we have the following two properties:
a) Energy bound:
(2.1) $\|P_{k}u\|_{H^{s}}\leq c_{k},$
b) Slowly varying:
(2.2) $\frac{c_{k}}{c_{j}}\lesssim 2^{\delta|j-k|},\quad j,k\in\mathbb{N}.$
Here $\delta$ is a positive constant, which is taken small enough in order to
account for energy leakage between nearby frequencies.
One can also limit from above the size of a frequency envelope, by requiring
that
$\|u\|_{H^{s}}^{2}\approx\sum c_{k}^{2}.$
Such frequency envelopes always exist, for instance one can define
$c_{k}=\sup_{j}2^{-\delta|j-k|}\|P_{j}u\|_{H^{s}}.$
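For the reader's convenience we verify that this choice works: taking $j=k$
in the supremum gives (2.1), the triangle inequality for $|j-k|$ gives the
slowly varying property (2.2), and
$\sum_{k}c_{k}^{2}\leq\sum_{k}\sum_{j}2^{-2\delta|j-k|}\|P_{j}u\|_{H^{s}}^{2}\lesssim_{\delta}\|u\|_{H^{s}}^{2},$
which together with $c_{k}\geq\|P_{k}u\|_{H^{s}}$ gives
$\|u\|_{H^{s}}^{2}\approx\sum c_{k}^{2}$.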
The same notion can be applied to any Besov norms. In particular we will use
it jointly for the Besov norms which define our control parameters
${\mathcal{A}}$ and ${\mathcal{B}}$.
### 2.2. Paraproducts and paradifferential operators
For multilinear analysis, we will consistently use paradifferential calculus,
for which we refer the reader to [6, 30].
We begin with the simplest bilinear expressions, namely products, for which we
will use the Littlewood-Paley trichotomy
$f\cdot g=T_{f}g+\Pi(f,g)+T_{g}f,$
where the three terms capture the _low $\times$high_ frequency interactions,
the _high $\times$high_ frequency interactions and the _high $\times$low_
frequency interactions. The paraproduct $T_{f}g$ might be heuristically
thought of as the dyadic sum
$T_{f}g=\sum_{k}f_{<k-\kappa}g_{k}$
where the frequency gap $\kappa$ can be simply chosen as a universal
parameter, say $\kappa=4$, or on occasion may be increased and used as a smallness
parameter in a large data context. However, in our context a definition such
as the above one is too imprecise, and the difference between usually
equivalent choices is nonperturbative. Also, the symmetry properties of
$T_{f}$ as an operator in $L^{2}$ are important in our energy estimates. For
this reason, we choose to work with the Weyl quantization, and we define
$\mathcal{F}(T_{f}g)(\zeta)=\int_{\xi+\eta=\zeta}\hat{f}(\eta)\chi\left(\frac{|\eta|}{\langle\xi+\frac{1}{2}\eta\rangle}\right)\hat{g}(\xi)d\xi.$
Here $\chi$ is a smooth function supported in a small ball and which equals
$1$ near the origin. With this convention, if $f$ is real then $T_{f}$ is an
$L^{2}$ self-adjoint operator.
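To see the role of the Weyl quantization here, substitute $\eta=\zeta-\xi$:
the cutoff then depends on $\xi+\frac{1}{2}\eta=\frac{\xi+\zeta}{2}$, which is
symmetric in the input frequency $\xi$ and the output frequency $\zeta$.
Together with $\overline{\hat{f}(\eta)}=\hat{f}(-\eta)$ for real $f$, this
yields
$\langle T_{f}g,h\rangle=\langle g,T_{f}h\rangle,$
a symmetry which is not exact for quantizations based on the input frequency
alone.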
For paraproducts we have a number of standard bounds which we list below, and
we will refer to as Coifman-Meyer estimates:
(2.3) $\|T_{f}g\|_{L^{p}}\lesssim\|f\|_{L^{\infty}}\|g\|_{L^{p}},$
(2.4) $\|T_{f}g\|_{L^{p}}\lesssim\|f\|_{L^{p}}\|g\|_{BMO},$
(2.5) $\|\Pi(f,g)\|_{L^{p}}\lesssim\|f\|_{L^{p}}\|g\|_{BMO}.$
These hold for $1<p<\infty$, but there are also endpoint results available
roughly corresponding to $p=1$ and $p=\infty$.
Paraproducts may also be thought of as belonging to the larger class of
translation invariant bilinear operators. Such operators
$f,g\to B(f,g)$
may be described by their symbols $b(\eta,\xi)$ in the Fourier space, by
$\mathcal{F}B(f,g)(\zeta)=\int_{\xi+\eta=\zeta}b(\eta,\xi)\hat{f}(\eta)\hat{g}(\xi)d\xi.$
A special class of such operators, which we denote by $L_{lh}$, will play an
important role later in the paper:
###### Definition 2.2.
a) By $L_{lh}$ we denote translation invariant bilinear forms whose symbol
$\ell_{lh}(\eta,\xi)$ is supported in $\\{|\eta|\ll|\xi|+1\\}$ and satisfies
bounds of the form
$|\partial^{i}_{\eta}\partial^{j}_{\xi}\ell_{lh}(\eta,\xi)|\lesssim\langle\xi\rangle^{-i-j}.$
We remark that in particular the bilinear form $B(f,g)=T_{f}g$ is an operator
of type $L_{lh}$, with symbol
$b(\eta,\xi)=\chi\left(\frac{|\eta|}{\langle\xi+\frac{1}{2}\eta\rangle}\right).$
Here the factor in the denominator $\xi+\eta/2$ is the average of the $g$
input frequency and the output frequency, and corresponds exactly to our use
of the Weyl calculus. The $L^{p}$ bounds and the commutator estimates for such
bilinear forms mirror exactly the similar bounds for paraproducts.
### 2.3. Commutator and other paraproduct bounds
Here we collect a number of general paraproduct estimates, which are
relatively standard. See for instance Appendix B of [17] and Section 2 of [1]
for proofs of the following estimates as well as further references.
We begin with the following standard commutator estimate:
###### Lemma 2.3 ($P_{k}$ commutators).
We have
(2.6) $\|[T_{f},P_{k}]\|_{\dot{H}^{s}\to\dot{H}^{s}}\lesssim 2^{-k}\|\partial
f\|_{L^{\infty}},$
where the same bound also holds in the $L^{\infty}\to L^{\infty}$ operator norm.
The following commutator-type estimates are either exact reproductions of, or
closely follow, statements from Section 2 of [1]:
###### Lemma 2.4 (Para-commutators).
Assume that $\gamma_{1},\gamma_{2}<1$. Then we have
(2.7)
$\|T_{f}T_{g}-T_{g}T_{f}\|_{\dot{H}^{s}\to\dot{H}^{s+\gamma_{1}+\gamma_{2}}}\lesssim\||D|^{\gamma_{1}}f\|_{BMO}\||D|^{\gamma_{2}}g\|_{BMO}.$
###### Lemma 2.5 (Para-associativity).
For $s+\gamma_{2}\geq 0,s+\gamma_{1}+\gamma_{2}\geq 0$, and $\gamma_{1}<1$ we
have
(2.8)
$\|T_{f}\Pi(v,u)-\Pi(v,T_{f}u)\|_{\dot{H}^{s+\gamma_{1}+\gamma_{2}}}\lesssim\||D|^{\gamma_{1}}f\|_{BMO}\||D|^{\gamma_{2}}v\|_{BMO}\|u\|_{\dot{H}^{s}}.$
###### Lemma 2.6 (Para-Leibniz rule).
For the balanced Leibniz rule error
$E^{\pi}_{L}(u,v)=T_{f}\partial_{\alpha}\Pi(u,v)-\Pi(T_{f}\partial_{\alpha}u,v)-\Pi(u,T_{f}\partial_{\alpha}v)$
we have the bound
(2.9)
$\|E^{\pi}_{L}(u,v)\|_{H^{s}}\lesssim\|f\|_{BMO^{\frac{1}{2}}}\|u\|_{BMO^{-\frac{1}{2}-\sigma}}\|v\|_{H^{s+\sigma}},\qquad\sigma\in{\mathbb{R}}.$
Next, we state paraproduct estimates which also may be found in [1]:
###### Lemma 2.7 (Para-products).
Assume that $\gamma_{1},\gamma_{2}<1$, $\gamma_{1}+\gamma_{2}\geq 0$. Then
(2.10)
$\|T_{f}T_{g}-T_{fg}\|_{\dot{H}^{s}\to\dot{H}^{s+\gamma_{1}+\gamma_{2}}}\lesssim\||D|^{\gamma_{1}}f\|_{BMO}\||D|^{\gamma_{2}}g\|_{BMO}.$
###### Lemma 2.8 (Low-high para-products).
Assume that $\gamma_{1},\gamma_{2}<1$, $\gamma_{1}+\gamma_{2}\geq 0$. Then
(2.11)
$\|T_{f}T_{g}-T_{T_{f}g}\|_{\dot{H}^{s}\to\dot{H}^{s+\gamma_{1}+\gamma_{2}}}\lesssim\||D|^{\gamma_{1}}f\|_{BMO}\||D|^{\gamma_{2}}g\|_{BMO}.$
These are stated here in the more elegant homogeneous setting, but there are
obvious modifications which also apply in the inhomogeneous case. We end with
the following Moser-type result:
###### Lemma 2.9.
Let $F$ be smooth with $F(0)=0$, and $w\in H^{s}$. Set
$R(w)=F(w)-T_{F^{\prime}(w)}w.$
Then we have the estimate
(2.12) $\|R(w)\|_{H^{s+\frac{1}{2}}}\lesssim
C(\|w\|_{L^{\infty}})\||D|^{\frac{1}{2}}w\|_{BMO}\|w\|_{H^{s}}.$
### 2.4. Paradifferential operators
As a generalization of paraproducts, we will also work with paradifferential
operators. Precisely, given a symbol $a(x,\xi)$ in ${\mathbb{R}}^{n}$, we
define its paradifferential Weyl quantization $T_{a}$ as the operator
$\mathcal{F}(T_{a}g)(\zeta)=\int_{\xi+\eta=\zeta}\chi\left(\frac{|\eta|}{\langle\xi+\frac{1}{2}\eta\rangle}\right)\hat{a}\left(\eta,\xi+\frac{1}{2}\eta\right)\hat{g}(\xi)d\xi,$
where
$\hat{a}(\eta,\xi)=\mathcal{F}_{x}a(x,\xi).$
The simplest class of symbols one can work with is $L^{\infty}S^{m}$, which
contains symbols $a$ for which
(2.13) $|\partial_{\xi}^{\alpha}a(x,\xi)|\leq
c_{\alpha}\langle\xi\rangle^{m-|\alpha|}$
for all multi-indices $\alpha$. For such symbols, the Calderon-Vaillancourt
theorem ensures appropriate boundedness in Sobolev spaces,
$T_{a}:H^{s}\to H^{s-m}.$
More generally, given a translation invariant space of distributions $X$, we
can define an associated symbol class $XS^{m}$ of symbols with the property
that
(2.14) $\|\partial_{\xi}^{\alpha}a(x,\xi)\|_{X}\leq
c_{\alpha}\langle\xi\rangle^{m-|\alpha|}$
for each $\xi\in{\mathbb{R}}^{n}$. Later in the paper, we will use several
choices of symbols of this type, using function spaces which we will associate
to our problem.
## 3\. A complete set of equations
Here we aim to further describe the minimal surface equation and the
underlying geometry, and, in particular, its null structure. We also derive
the linearized equation, and introduce the paralinearization of both the main
equation and its linearization.
### 3.1. The Lorentzian geometry of the minimal surface
Starting from the expression of the metric $g$ in (1.4), the dual metric is
easily computed to be
(3.1)
$g^{\alpha\beta}:=m^{\alpha\beta}-\frac{m^{\alpha\gamma}m^{\beta\delta}\partial_{\gamma}u\partial_{\delta}u}{1+m^{\mu\nu}\partial_{\mu}u\partial_{\nu}u}.$
Also associated to the metric $g$ is its determinant
$g=\det(g_{\alpha\beta})=\det(g^{\alpha\beta})^{-1},$
and the associated volume form
$dV=\sqrt{g}\,dx.$
This can be easily computed as
$g=1+m^{\mu\nu}\partial_{\mu}u\,\partial_{\nu}u.$
In the sequel, we will always raise indices with respect to the metric $g$,
never with respect to Minkowski. In particular we will use the standard
notation
(3.2) $\partial^{\alpha}=g^{\alpha\beta}\partial_{\beta}.$
We remark that, when applied to the function $u$, this operator has nearly the
same effect as the corresponding Minkowski operator,
(3.3) $\partial^{\alpha}u=\frac{1}{g}m^{\alpha\beta}\partial_{\beta}u.$
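Indeed, writing $q=m^{\mu\nu}\partial_{\mu}u\,\partial_{\nu}u$, so that
$g=1+q$, a direct computation from (3.1) gives
$g^{\alpha\beta}\partial_{\beta}u=m^{\alpha\beta}\partial_{\beta}u-\frac{q}{1+q}\,m^{\alpha\beta}\partial_{\beta}u=\frac{1}{g}\,m^{\alpha\beta}\partial_{\beta}u.$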
### 3.2. The minimal surface equation
Here we rewrite the minimal surface equation in covariant form. Using the $g$
notation above and the Minkowski metric, we rewrite (1.3) as
$m^{\alpha\beta}\partial_{\alpha}(g^{-\frac{1}{2}}\partial_{\beta}u)=0,$
or equivalently
$m^{\alpha\beta}(\partial_{\alpha}\partial_{\beta}u-\frac{1}{2g}\partial_{\alpha}g\partial_{\beta}u)=0.$
Expanding the $g$ derivative, we have
(3.4)
$\partial_{\alpha}g=2m^{\mu\nu}\partial_{\mu}u\,\partial_{\alpha}\partial_{\nu}u.$
Then in the previous equation we recognize the expression for the dual metric,
and the minimal surface equation becomes
(3.5) $g^{\alpha\beta}\partial_{\alpha}\partial_{\beta}u=0.$
Using the notation (3.2), this is written in an even shorter form,
(3.6) $\partial^{\alpha}\partial_{\alpha}u=0.$
Similarly, using also (3.3), the relation (3.4) becomes
(3.7)
$\frac{1}{2g}\partial_{\alpha}g=\partial^{\nu}u\,\partial_{\alpha}\partial_{\nu}u.$
### 3.3. The covariant d’Alembertian
The covariant d’Alembertian associated to the metric $g$ has the form
$\Box_{g}=\frac{1}{\sqrt{g}}\partial_{\alpha}\sqrt{g}g^{\alpha\beta}\partial_{\beta},$
which we can rewrite as
$\displaystyle\Box_{g}$
$\displaystyle=\partial_{\alpha}g^{\alpha\beta}\partial_{\beta}+\frac{1}{2g}(\partial_{\alpha}g)g^{\alpha\beta}\partial_{\beta}$
$\displaystyle=g^{\alpha\beta}\partial_{\alpha}\partial_{\beta}+\left(\partial_{\alpha}g^{\alpha\beta}\right)\partial_{\beta}+\frac{1}{2g}(\partial_{\alpha}g)g^{\alpha\beta}\partial_{\beta}.$
Next we need to compute the two coefficients in round brackets. The second one
is given by (3.7). For the first one, for later use, we perform a slightly
more general computation where we differentiate
$g^{\alpha\beta}(\partial_{\gamma}u)$ as a function of its arguments
$p_{\gamma}:=\partial_{\gamma}u$,
(3.8) $\frac{\partial g^{\alpha\beta}}{\partial
p_{\gamma}}=-\partial^{\alpha}u\,g^{\beta\gamma}-\partial^{\beta}u\,g^{\alpha\gamma}.$
This formula follows by directly differentiating (3.1) and from (3.3),
$\frac{\partial g^{\alpha\beta}}{\partial
p_{\gamma}}=-m^{\alpha\gamma}\partial^{\beta}u-m^{\beta\gamma}\partial^{\alpha}u+2g\,\partial^{\alpha}u\,\partial^{\beta}u\,\partial^{\gamma}u.$
We use (3.1) once again to get (3.8)
$\displaystyle\frac{\partial g^{\alpha\beta}}{\partial p_{\gamma}}$
$\displaystyle=-[g^{\alpha\gamma}+g\partial^{\alpha}u\partial^{\gamma}u]\partial^{\beta}u-[g^{\beta\gamma}+g\partial^{\gamma}u\partial^{\beta}u]\partial^{\alpha}u+2g\,\partial^{\alpha}u\partial^{\beta}u\partial^{\gamma}u,$
$\displaystyle=-g^{\alpha\gamma}\partial^{\beta}u-g^{\beta\gamma}\partial^{\alpha}u.$
From (3.8) and the chain rule, we arrive at
(3.9)
$\partial_{\gamma}g^{\alpha\beta}=-\partial^{\alpha}u\,g^{\beta\delta}\partial_{\gamma}\partial_{\delta}u-\partial^{\beta}u\,g^{\alpha\sigma}\partial_{\gamma}\partial_{\sigma}u.$
Setting $\gamma=\alpha$ and using the minimal surface equation in the (3.5)
formulation, we get
(3.10)
$\partial_{\alpha}g^{\alpha\beta}=-\partial^{\alpha}u\,g^{\beta\delta}\partial_{\alpha}\partial_{\delta}u.$
Comparing this with (3.7), we see that the last two terms in the $\Box_{g}$
expression above cancel, and we obtain the following simplified form for the
covariant d’Alembertian:
(3.11) $\Box_{g}=g^{\alpha\beta}\partial_{\alpha}\partial_{\beta}.$
In particular, we get the covariant form of the minimal surface equation for
$u$:
(3.12) $\Box_{g}u=0.$
For later use, we introduce the notation
(3.13)
$A^{\alpha}=-\partial_{\beta}g^{\alpha\beta}=\frac{1}{2g}\partial^{\alpha}g=\partial^{\beta}u\,g^{\alpha\delta}\partial_{\beta}\partial_{\delta}u.$
An interesting observation is that from here on, the Minkowski metric plays
absolutely no role:
###### Remark 3.1.
In order to introduce the minimal surface equations we have started from the
Minkowski metric $m^{\alpha\beta}$. However, the formulation (3.5) of the
equations together with the relations (3.8) provide a complete description of
the equations without any reference to the Minkowski metric, and which is in
effect valid for any other Lorentzian metric. Indeed, the equation (3.5)
together with the fact that the metric components $g^{\alpha\beta}$ are smooth
functions of $\partial u$ satisfying (3.8) are all that is used for the rest
of the paper. Thus, our results apply equally for any other Lorentzian metric
in ${\mathbb{R}}^{n+2}$.
### 3.4. The linearized equations
Our objective now is to derive the linearized minimal surface equations. We
will denote by $v$ the linearized variable. Then, by (3.8), the linearization
of the dual metric $g^{\alpha\beta}=g^{\alpha\beta}(u)$ takes the form
$\delta
g^{\alpha\beta}=-\partial^{\alpha}u\,g^{\beta\nu}\partial_{\nu}v-\partial^{\beta}u\,g^{\alpha\sigma}\partial_{\sigma}v.$
Then the linearized equation is directly computed, using the symmetry in
$\alpha$ and $\beta$, as
$g^{\alpha\beta}\partial_{\alpha}\partial_{\beta}v-2\partial^{\alpha}u\,g^{\beta\gamma}\partial_{\alpha}\partial_{\beta}u\,\partial_{\gamma}v=0.$
Using the expression of $A$ in (3.13), the linearized equations take the form
(3.14)
$(g^{\alpha\beta}\partial_{\alpha}\partial_{\beta}-2A^{\gamma}\partial_{\gamma})v=0.$
Alternatively this may also be written in a divergence form,
(3.15)
$(\partial_{\alpha}g^{\alpha\beta}\partial_{\beta}-A^{\gamma}\partial_{\gamma})v=0.$
or in covariant form,
(3.16) $\displaystyle\Box_{g}v$ $\displaystyle=2A^{\beta}\partial_{\beta}v.$
### 3.5. Null forms and the nonlinear null condition
The primary null form which plays a role in this article is $Q_{0}$, defined
by
(3.17)
$Q_{0}(v,w):=g^{\alpha\beta}\partial_{\alpha}v\partial_{\beta}w=\partial^{\alpha}v\partial_{\alpha}w.$
Now, we verify that the nonlinear null condition (1.18) holds; for this we use
(3.8) to compute
$\frac{\partial
g^{\alpha\beta}}{\partial p_{\gamma}}\xi_{\alpha}\xi_{\beta}\xi_{\gamma}=\left(-\partial^{\alpha}u\,g^{\beta\gamma}-\partial^{\beta}u\,g^{\alpha\gamma}\right)\xi_{\alpha}\xi_{\beta}\xi_{\gamma}=-2(\partial^{\alpha}u\,\xi_{\alpha})\,g^{\beta\gamma}\xi_{\beta}\xi_{\gamma},$
which vanishes on the null cone $g^{\alpha\beta}\xi_{\alpha}\xi_{\beta}=0$.
In addition we would like the contribution of $A$ to the linearized equation
to be a null form. We get
$A^{\beta}\partial_{\beta}v={\partial^{\alpha}u}\,Q_{0}(\partial_{\alpha}u,v).$
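Indeed, by (3.13) and the symmetry of $g^{\beta\delta}$, relabeling the
contracted indices,
$A^{\beta}\partial_{\beta}v=\partial^{\alpha}u\,g^{\beta\delta}\partial_{\alpha}\partial_{\delta}u\,\partial_{\beta}v=\partial^{\alpha}u\,g^{\beta\delta}\partial_{\beta}(\partial_{\alpha}u)\,\partial_{\delta}v=\partial^{\alpha}u\,Q_{0}(\partial_{\alpha}u,v).$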
### 3.6. Two conformally equivalent metrics
While the metric $g$ is the primary metric we use in this paper, for technical
reasons we will also introduce two additional, conformally equivalent metrics,
as follows:
(i) The metric ${\tilde{g}}$ is defined by
(3.18) ${\tilde{g}}^{\alpha\beta}:=(g^{00})^{-1}g^{\alpha\beta}.$
Then the minimal surface equation can be written as
(3.19) ${\tilde{g}}^{\alpha\beta}\partial_{\alpha}\partial_{\beta}u=0$
while the linearized equation, written in divergence form is
(3.20)
$(\partial_{\alpha}{\tilde{g}}^{\alpha\beta}\partial_{\beta}-{\tilde{A}}^{\alpha}\partial_{\alpha})v=0,$
where, still raising indices only with respect to $g$,
(3.21)
${\tilde{A}}^{\alpha}=(g^{00})^{-1}A^{\alpha}-{\tilde{g}}^{\alpha\beta}\partial_{\beta}(\ln
g^{00})=\partial^{\beta}u{\tilde{g}}^{\alpha\delta}\partial_{\beta}\partial_{\delta}u+2\partial^{0}u{\tilde{g}}^{0\delta}{\tilde{g}}^{\alpha\beta}\partial_{\beta}\partial_{\delta}u.$
The main feature of $\tilde{g}$ is that $\tilde{g}^{00}=1$. Because of this,
it will be useful in the study of the linear paradifferential flow, in order
to prevent a nontrivial paracoefficient in front of $\partial_{0}^{2}v$ in the
equations.
(ii) The metric ${\hat{g}}$ is defined by
(3.22) ${\hat{g}}^{\alpha\beta}=g^{-\frac{1}{2}}g^{\alpha\beta}.$
Then the minimal surface equation can be written as
(3.23) ${\hat{g}}^{\alpha\beta}\partial_{\alpha}\partial_{\beta}u=0,$
which is not so useful. Instead, the advantage of using this metric is that,
using (3.13), the linearized equation can now be written in divergence form,
(3.24) $\partial_{\alpha}{\hat{g}}^{\alpha\beta}\partial_{\beta}v=0.$
This will be very useful when we study the linearized equation in
$\mathcal{H}^{\frac{1}{2}}$ (respectively $\mathcal{H}^{\frac{5}{8}}$ in two
dimensions).
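Indeed, a short computation based on (3.13) gives
$\partial_{\alpha}{\hat{g}}^{\alpha\beta}=g^{-\frac{1}{2}}\partial_{\alpha}g^{\alpha\beta}-\frac{1}{2}g^{-\frac{3}{2}}(\partial_{\alpha}g)g^{\alpha\beta}=-2g^{-\frac{1}{2}}A^{\beta},$
so that
$\partial_{\alpha}{\hat{g}}^{\alpha\beta}\partial_{\beta}v=g^{-\frac{1}{2}}\left(g^{\alpha\beta}\partial_{\alpha}\partial_{\beta}-2A^{\beta}\partial_{\beta}\right)v,$
and (3.24) is simply the linearized equation (3.14) multiplied by
$g^{-\frac{1}{2}}$.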
### 3.7. Paralinearization and the linear paradifferential flow
A key element in our study of the minimal surface equation is the associated
linear paradifferential flow, which is derived from the linearized flow
(3.14). In inhomogeneous form, this is
(3.25)
$(\partial_{\alpha}T_{g^{\alpha\beta}}\partial_{\beta}-T_{A^{\gamma}}\partial_{\gamma})w=f.$
Similarly we can write the paradifferential equations associated to
${\tilde{g}}$, namely
(3.26)
$(\partial_{\alpha}T_{{\tilde{g}}^{\alpha\beta}}\partial_{\beta}-T_{{\tilde{A}}^{\gamma}}\partial_{\gamma})w=f.$
as well as ${\hat{g}}$, which can be written in divergence form:
(3.27) $\partial_{\alpha}T_{{\hat{g}}^{\alpha\beta}}\partial_{\beta}w=f.$
These are all equivalent up to perturbative errors. Accordingly, we introduce
the notation
(3.28) $T_{P}=\partial_{\alpha}T_{g^{\alpha\beta}}\partial_{\beta}$
for the paradifferential wave operator as well as its counterparts
$T_{\tilde{P}}$ and $T_{\hat{P}}$ with the metric $g$ replaced by
${\tilde{g}}$, respectively ${\hat{g}}$.
We will first use the paradifferential equation in the study of the minimal
surface problem (3.5), which we rewrite in the form
(3.29)
$(T_{g^{\alpha\beta}}\partial_{\alpha}\partial_{\beta}-2T_{A^{\gamma}}\partial_{\gamma})u=N(u).$
A key contention of our paper is that the nonlinearity $N$ plays a
perturbative role. However, this has to be interpreted in a more subtle way,
in the sense that $N$ becomes perturbative only after a well chosen partial,
variable coefficient normal form transformation.
Secondly, we will use it in the study of the linearized minimal surface
equation, which we can write in the form
(3.30)
$\partial_{\alpha}T_{{\hat{g}}^{\alpha\beta}}\partial_{\beta}v=N_{lin}(u)v.$
Here the nonlinearity $N_{lin}$ will also play a perturbative role, in the
same fashion as above. We caution the reader that this is _not_ the
linearization of $N$.
## 4\. Energy and Strichartz estimates
Both energy and Strichartz estimates play an essential role in this paper, in
various forms and combinations. These are primarily applied first to the
linear paradifferential flow, and then to the linearized flow associated to
solutions to our main equation (1.7). Our goal here is to provide a brief
overview of these estimates.
Importantly, in this section we do not prove any energy or Strichartz
estimates. Instead, we simply provide definitions and context for what will be
proved later in the paper, and prove a good number of equivalences between
various well-posedness statements and estimates. We do this under absolutely
minimal assumptions (e.g. boundedness) on the metric $g$, in order to be able
to apply these properties easily later on. In particular there are no
commutator bounds needed or used in this section. The structure of the minimal
surface equations also plays no role here.
### 4.1. The equations
For context, here we consider a pseudo-Riemannian metric $g$ in
$I\times{\mathbb{R}}^{n}$, where $I=[0,T]$ is a time interval of unspecified
length. We will make some minimal universal assumptions on the metric $g$:
* •
both $g$ and its inverse are uniformly bounded,
* •
the time slices are uniformly space-like.
Associated to this metric $g$, we will consider several equations:
The linear paradifferential flow in divergence form:
(4.1) $\partial_{\alpha}T_{g^{\alpha\beta}}\partial_{\beta}v=f,\qquad
v[0]=(v_{0},v_{1}).$
The linear paradifferential flow in non-divergence form:
(4.2) $T_{g^{\alpha\beta}}\partial_{\alpha}\partial_{\beta}v=f,\qquad
v[0]=(v_{0},v_{1}).$
The linear flow in divergence form:
(4.3) $\partial_{\alpha}g^{\alpha\beta}\partial_{\beta}v=f,\qquad
v[0]=(v_{0},v_{1}).$
The linear flow in non-divergence form:
(4.4) $g^{\alpha\beta}\partial_{\alpha}\partial_{\beta}v=f\qquad
v[0]=(v_{0},v_{1}).$
Several comments are in order:
* •
As written, the above evolutions are inhomogeneous. If $f=0$ then we will
refer to them as the _homogeneous_ flows.
* •
In the context of this paper, we are primarily interested in the metric
${\hat{g}}$, in which case the equation (4.3) represents our main linearized
flow, and (4.1) represents our main linear paradifferential flow. The metric
$g$ and the nondivergence form of the equations will be used in order to
connect our results with the result of Smith-Tataru, which will be used in our
proofs.
* •
One may also add a gradient potential in the equations above; with the
gradient potential added there is no difference between the divergence and the
non-divergence form of the equations. We omit it in this section, as it plays
no role.
We will consider these evolutions in the inhomogeneous Sobolev spaces
$\mathcal{H}^{s}$. In order to do this uniformly, we will assume that $|I|\leq
1$; else using homogeneous spaces would be more appropriate. The exponent $s$
will be an arbitrary real number in the case of the paradifferential flows,
but will have a restricted range otherwise.
### 4.2. Energy estimates and well-posedness for the homogeneous problem
Here we review some relatively standard definitions and facts about local
well-posedness.
###### Definition 4.1.
For any of the above flows in the homogeneous form, we say that they are
(forward) well-posed in $\mathcal{H}^{s}$ in the time interval $I=[0,T]$ if
for each initial data $u[0]\in\mathcal{H}^{s}$ there exists a unique solution
$u$ with the property that
$u[\cdot]\in C(I;\mathcal{H}^{s}).$
This corresponds to a linear estimate of the form
(4.5)
$\|v[\cdot]\|_{L^{\infty}(I;\mathcal{H}^{s})}\lesssim\|v[0]\|_{\mathcal{H}^{s}}.$
Sometimes one establishes additional bounds for the solution (e.g. Strichartz
estimates) and these are then added in to the class of solutions for which
uniqueness is established. We will comment on this where needed. If no such
assumption is used, we call this _unconditional uniqueness_.
For completeness and reference, we now state without proof a classical well-
posedness result:
###### Theorem 4.2.
Assume that $\partial g\in L^{1}(I;L^{\infty})$. Then
a) The paradifferential flows (4.1) and (4.2) are well-posed in
$\mathcal{H}^{s}$ for all real $s$.
b) The divergence form evolution (4.3) is well-posed in $\mathcal{H}^{s}$ for
$s\in[0,1]$, and the non-divergence form evolution (4.4) is well-posed in
$\mathcal{H}^{s}$ for $s\in[1,2]$.
We remark that the metrics $g$ associated with the solutions of Smith-Tataru
satisfy the above hypothesis, but the solutions in our paper do not.
A slightly stronger form of well-posedness is to assert the existence of a
suitable (time dependent) energy functional $E^{s}$ in $\mathcal{H}^{s}$:
###### Definition 4.3.
An energy functional for either of the above problems in $\mathcal{H}^{s}$ is
a bounded quadratic form in $\mathcal{H}^{s}$ which has the following two
properties:
1. a)
Coercivity,
(4.6) $E^{s}(v[t])\approx\|v[t]\|_{\mathcal{H}^{s}}^{2}.$
2. b)
Bounded growth for solutions $v$ to the homogeneous equation,
(4.7) $\frac{d}{dt}E^{s}(v[t])\lesssim B(t)\|v[t]\|_{\mathcal{H}^{s}}^{2},$
where $B\in L^{1}$ depends only on $g$.
Later we will also interpret $E^{s}$ as a symmetric bilinear form in
$\mathcal{H}^{s}$. Such an interpretation is unique.
We remark that, in the context of Theorem 4.2, where $\partial g\in
L^{1}L^{\infty}$, an energy functional $E^{1}$ corresponding to $s=1$ is
classically obtained by multiplying the equation by $Xv$, where $X$ is a
suitable smooth time-like vector field, and integrating by parts; we refer the reader to Section
7.2.1 where this procedure is described in greater detail. Then for $s\neq 1$
one simply defines
$E^{s}(v[0])=E^{1}(\langle D_{x}\rangle^{s-1}v[0]),$
and the corresponding control parameter $B$ may be taken as
$B(t)=\|\partial g(t)\|_{L^{\infty}}.$
### 4.3. The wave equation as a system and the inhomogeneous problem
Switching now to the associated inhomogeneous flows, the classical set-up is
to take a source term $f\in L^{1}H^{s-1}$, and then look for solutions $v$ in
$C(I;\mathcal{H}^{s})$ as above. This is commonly done using the _Duhamel
principle_ , which is most readily applied by rewriting the wave equation as a
system. We next describe this process.
A common choice is to write the system for the pair of variables
$(v,\partial_{t}v)$. However, for us it will be more convenient to make a
slightly different linear transformation, and use instead the pair
(4.8) $\mathbf{v}[t]:=\begin{pmatrix}v(t)\\\
g^{0\alpha}\partial_{\alpha}v(t)\end{pmatrix}:=Q\begin{pmatrix}v\\\
\partial_{t}v\end{pmatrix},\qquad Q=\begin{pmatrix}1&0\\\
g^{0j}\partial_{j}&g^{00}\end{pmatrix}$
for (4.3) and (4.4), with products replaced by paraproducts in the case of the
equation (4.1) or (4.2). For later use, we record the inverse of $Q$; this is
either
(4.9) $Q^{-1}=\begin{pmatrix}1&0\\\
-(g^{00})^{-1}g^{0j}\partial_{j}&(g^{00})^{-1}\end{pmatrix},$
or its version with products replaced by paraproducts, as needed.
The system for $\mathbf{v}$ will have the form
(4.10) $\frac{d}{dt}\mathbf{v}[t]={\mathcal{L}}\mathbf{v}[t],$
with the appropriate choice for the matrix operator ${\mathcal{L}}$. For
instance in the case of the homogeneous equation (4.3) we have
(4.11)
${\mathcal{L}}=\begin{pmatrix}-(g^{00})^{-1}g^{0j}\partial_{j}&(g^{00})^{-1}\\\
-\partial_{i}g^{ij}\partial_{j}+\partial_{i}g^{i0}(g^{00})^{-1}g^{0j}\partial_{j}&-\partial_{j}(g^{00})^{-1}g^{0j}\end{pmatrix},$
which has the antisymmetry property (only for the principal part, in the non-
divergence case)
(4.12) ${\mathcal{L}}^{*}=-J{\mathcal{L}}J^{-1},\qquad J=\begin{pmatrix}0&1\\\
-1&0\end{pmatrix}.$
We will always work in settings where $Q$ is bounded and invertible in
$\mathcal{H}^{s}$. This is nearly automatic in the paradifferential case;
there we only need to make sure that the operator $T_{g^{00}}$ is invertible.
In the differential case we will have to ask that multiplication by $g$ and by
$(g^{00})^{-1}$ are bounded in $H^{s-1}$. In such settings, $\mathcal{H}^{s}$
well-posedness for our original wave equation and for the associated system
are equivalent. If a good energy functional $E^{s}$ exists for the wave
equation, then we may define an associated energy functional for the system by
setting
(4.13) $\mathbf{E}^{s}(\mathbf{v}[t])=E^{s}(Q^{-1}\mathbf{v}[t]).$
Then the properties (4.6) and (4.7) directly transfer to the homogeneous
system (4.10).
If our system is (forward) well-posed in $\mathcal{H}^{s}$, then solving it
generates a (forward) evolution operator $S(t,s)$ which is bounded in
$\mathcal{H}^{s}$ and maps the data at time $s$ to the solution at time $t$,
$S(t,s)\mathbf{v}[s]=\mathbf{v}[t].$
For the system it is easy to consider the inhomogeneous version
(4.14) $\frac{d}{dt}\mathbf{v}[t]={\mathcal{L}}\mathbf{v}[t]+\mathbf{f}[t].$
If $f\in L^{1}\mathcal{H}^{s}$ then the solution to (4.14) is given by
Duhamel’s formula,
(4.15)
$\mathbf{v}[t]=S(t,0)\mathbf{v}[0]+\int_{0}^{t}S(t,s)\mathbf{f}[s]\,ds,$
and satisfies the bound
(4.16)
$\|\mathbf{v}\|_{L^{\infty}\mathcal{H}^{s}}\lesssim\|\mathbf{v}[0]\|_{\mathcal{H}^{s}}+\|\mathbf{f}\|_{L^{1}\mathcal{H}^{s}}.$
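As a quick check, differentiating (4.15) in time and using
$\frac{d}{dt}S(t,s)={\mathcal{L}}S(t,s)$ together with $S(t,t)=I$ gives
$\frac{d}{dt}\mathbf{v}[t]={\mathcal{L}}S(t,0)\mathbf{v}[0]+\mathbf{f}[t]+\int_{0}^{t}{\mathcal{L}}S(t,s)\mathbf{f}[s]\,ds={\mathcal{L}}\mathbf{v}[t]+\mathbf{f}[t],$
so (4.15) indeed solves (4.14), and the bound (4.16) follows from the uniform
$\mathcal{H}^{s}$ boundedness of $S(t,s)$.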
If we have a good energy $\mathbf{E}^{s}$ for the homogeneous system, then
Duhamel’s formula easily allows us to obtain the corresponding energy estimate
for the inhomogeneous one, namely
(4.17)
$\frac{d}{dt}\mathbf{E}^{s}(\mathbf{v}[t])\lesssim\mathbf{E}^{s}(\mathbf{v}[t],\mathbf{f}[t])+B(t)\|\mathbf{v}[t]\|_{\mathcal{H}^{s}}^{2}.$
Now we are ready to return to our original set of equations, add the source
term $f$ and reinterpret the above consequences of Duhamel’s formula there. As
in the homogeneous case, we define $\mathbf{v}[t]=Qv[t]$. Then adding the
source term $f$ in the original equation is equivalent to adding a source term
$\mathbf{f}$ in the above system. Indeed, it is readily seen that for all our
four equations, $\mathbf{f}$ is given by
(4.18) $\mathbf{f}[t]=\begin{pmatrix}0\\\ f(t)\end{pmatrix}.$
To complete the correspondence, we note that for such $\mathbf{f}$ we have
$Q^{-1}\mathbf{f}=({g^{00}})^{-1}\mathbf{f}[t].$
Then we immediately arrive at the following result:
###### Theorem 4.4.
a) Assume that either of the homogeneous paradifferential flows (4.1) or (4.2)
are well-posed in $\mathcal{H}^{s}$. Then the associated inhomogeneous flows
are well-posed in $\mathcal{H}^{s}$ for $f\in L^{1}H^{s-1}$, and the following
estimate holds
(4.19)
$\|v[\cdot]\|_{L^{\infty}(I;\mathcal{H}^{s})}\lesssim\|v[0]\|_{\mathcal{H}^{s}}+\|f\|_{L^{1}H^{s-1}}.$
In addition, if an energy functional $E^{s}$ in $\mathcal{H}^{s}$ exists, then
(4.20) $\frac{d}{dt}E^{s}(v[t])\lesssim
E^{s}(v[t],(T_{g^{00}})^{-1}\mathbf{f}[t])+B(t)\|v[t]\|_{\mathcal{H}^{s}}^{2}.$
b) The same holds for the flows (4.3) or (4.4) under the additional assumption
that multiplication by $g$ and $({g^{00}})^{-1}$ is bounded in $H^{s-1}$, with
the paraproduct replaced by the corresponding product.
For our purposes in this paper, we will also need to allow for a larger class
of source terms of the form
(4.21) $f=\partial_{t}f_{1}+f_{2}.$
To understand why this is natural, it is instructive to start from the
inhomogeneous system (4.14) and argue backward.
Above, we have used the inhomogeneous system in the case where the first
component of $\mathbf{f}$ was zero. Now we will allow for both terms to be
nonzero in $\mathbf{f}$, and derive the corresponding wave equation. For
clarity we do this in the context of the equation (4.3), for which we have
computed the corresponding operator ${\mathcal{L}}$ in (4.11); however, a
similar computation will apply in all four cases.
We begin by defining
(4.22) $v(t)=\mathbf{v}_{1}(t)$
as our candidate for the wave equation solution. Then the first equation of
the system reads
(4.23)
$\partial_{t}v=-(g^{00})^{-1}g^{0j}\partial_{j}v+(g^{00})^{-1}\mathbf{v}_{2}+\mathbf{f}_{1},$
or equivalently
$g^{0\alpha}\partial_{\alpha}v=\mathbf{v}_{2}+g^{00}\mathbf{f}_{1}.$
Differentiating this with respect to time we obtain
$\displaystyle\partial_{\beta}g^{\beta\alpha}\partial_{\alpha}v=$
$\displaystyle\
\partial_{j}g^{j\alpha}\partial_{\alpha}v+\partial_{t}\mathbf{v}_{2}+\partial_{t}g^{00}\mathbf{f}_{1}$
$\displaystyle=$ $\displaystyle\
\partial_{j}g^{jk}\partial_{k}v+\partial_{j}g^{j0}\partial_{t}v+\partial_{t}\mathbf{v}_{2}+\partial_{t}g^{00}\mathbf{f}_{1}.$
Finally we substitute $\partial_{t}v$ from (4.23) and
$\partial_{t}\mathbf{v}_{2}$ from the second equation of the system. We
already know the right hand side should vanish if $\mathbf{f}=0$, so it
suffices to track the $\mathbf{f}$ terms. Then we easily obtain the desired
equation for $v$:
(4.24)
$\partial_{\beta}g^{\beta\alpha}\partial_{\alpha}v=\partial_{\alpha}g^{\alpha
0}\mathbf{f}_{1}+\mathbf{f}_{2}.$
Comparing this with (4.21), we obtain the correspondence between the source
terms for the wave equation and the system:
(4.25) $f_{1}=g^{00}\mathbf{f}_{1},\qquad
f_{2}=\partial_{k}g^{k0}\mathbf{f}_{1}+\mathbf{f}_{2}.$
We also record here the correspondence between the solutions, in the form
(4.26)
$\mathbf{v}_{1}=v,\qquad\mathbf{v}_{2}=g^{0\alpha}\partial_{\alpha}v-g^{00}\mathbf{f}_{1},$
noting that, unlike (4.8), this correspondence is no longer homogeneous.
The last step in our analysis is to reinterpret the bounds (4.16) and (4.17)
in terms of $v$ and $f$. To do this we make the assumption that multiplication
by $g$ and $(g^{00})^{-1}$ is bounded in both $H^{s}$ and $H^{s-1}$. Then from
(4.16) we get
$\displaystyle\|v\|_{L^{\infty}\mathcal{H}^{s}}\lesssim$ $\displaystyle\
\|\mathbf{v}\|_{L^{\infty}\mathcal{H}^{s}}+\|\mathbf{f}_{1}\|_{L^{\infty}\mathcal{H}^{s}}$
$\displaystyle\lesssim$ $\displaystyle\
\|\mathbf{v}[0]\|_{\mathcal{H}^{s}}+\|\mathbf{f}\|_{L^{1}\mathcal{H}^{s}}+\|\mathbf{f}_{1}\|_{L^{\infty}\mathcal{H}^{s}}$
$\displaystyle\lesssim$ $\displaystyle\
\|v[0]\|_{\mathcal{H}^{s}}+\|f_{1}\|_{L^{1}H^{s}\cap
L^{\infty}H^{s-1}}+\|f_{2}\|_{L^{1}H^{s-1}}.$
Similarly, from (4.17) and (4.13) we obtain the energy bound
(4.27)
$\frac{d}{dt}E^{s}(Q^{-1}\mathbf{v}[t])\lesssim\mathbf{E}^{s}(Q^{-1}\mathbf{v}[t],Q^{-1}\mathbf{f}[t])+B(t)\|v[t]\|_{\mathcal{H}^{s}}^{2}.$
Here we use (4.9) and (4.26) to compute
$Q^{-1}\mathbf{v}[t]=v[t]-\begin{pmatrix}0\\\
\mathbf{f}_{1}\end{pmatrix}=v[t]-\begin{pmatrix}0\\\
(g^{00})^{-1}f_{1}\end{pmatrix}:=v[t]-\tilde{v}[t],$
respectively, using also (4.25),
(4.28) $\displaystyle\tilde{f}[t]:=Q^{-1}\mathbf{f}[t]=$ $\displaystyle\
Q^{-1}\begin{pmatrix}(g^{00})^{-1}f_{1}\\\
f_{2}-\partial_{k}g^{k0}(g^{00})^{-1}f_{1}\end{pmatrix}$ $\displaystyle=$
$\displaystyle\ \begin{pmatrix}(g^{00})^{-1}f_{1}\\\
(g^{00})^{-1}(f_{2}-\partial_{k}g^{k0}(g^{00})^{-1}f_{1}-g^{0k}\partial_{k}(g^{00})^{-1}f_{1})\end{pmatrix}.$
Thus we obtain the following natural extension of Theorem 4.4 above:
###### Theorem 4.5.
a) Assume that the homogeneous evolution (4.4) or (4.3) is well-posed in
$\mathcal{H}^{s}$, and that multiplication by $g$ and $(g^{00})^{-1}$ is
bounded in $H^{s}$ and in $H^{s-1}$. Consider the evolution (4.4) with a
source term $f$ of the form
$f=\partial_{t}f_{1}+f_{2},\qquad f_{1}\in L^{1}H^{s}\cap CH^{s-1},\qquad
f_{2}\in L^{1}H^{s-1}.$
Then a unique solution $v\in C(I,\mathcal{H}^{s})$ exists. If in addition the
homogeneous problem admits an energy functional $E^{s}$ as in Definition 4.3
then we have the energy estimate
(4.29) $\frac{d}{dt}E^{s}(v[t]-\tilde{v}[t])\lesssim
E^{s}(v[t]-\tilde{v}[t],\tilde{f}[t])+B(t)\|v[t]-\tilde{v}[t]\|_{\mathcal{H}^{s}}^{2}$
with $\tilde{v}$ and $\tilde{f}$ defined above and $B$ as in (4.7).
b) The same result applies for the paradifferential equations (4.1),
respectively (4.2), where all instances of $g$ above are replaced by the
corresponding paraproducts $T_{g}$.
We remark that in the situations where we apply this result, the mapping
properties for $g$ and $(g^{00})^{-1}$ will be fairly straightforward to
verify. In the paradifferential case, for instance, the continuity of $g$ will
suffice.
### 4.4. A duality argument
Duality plays an important role in many estimates for evolution equations. We
will also use duality considerations in this paper for several arguments. We
restrict the discussion below to the problems written in divergence form, as
this is what we will use later in the paper. However, similar versions may be
formulated in the nondivergence case.
At heart, this is based on the following identity, which in the context of the
operator $\partial_{\alpha}g^{\alpha\beta}\partial_{\beta}$ is written as
follows:
(4.30)
$\int_{0}^{T}\\!\\!\\!\\!\int_{{\mathbb{R}}^{n}}\partial_{\alpha}g^{\alpha\beta}\partial_{\beta}v\cdot
w-v\cdot\partial_{\alpha}g^{\alpha\beta}\partial_{\beta}w\,dxdt=\left.\int
g^{0j}\partial_{j}v\cdot w-v\cdot g^{0j}\partial_{j}w\,dx\right|_{0}^{T}.$
This holds for any test functions $v$ and $w$. The integral on the right can
be viewed as a duality pairing between the two states; relabeling the test
functions as $u$ and $v$, it takes the form
$B(u[t],v[t])=\int g^{0j}\partial_{j}u\cdot v-u\cdot g^{0j}\partial_{j}v\,dx.$
Precisely, assuming that $g:H^{s-1}\to H^{s-1}$ and that $g^{00}$ is
invertible, this expression has the following two properties
1. (1)
Boundedness,
$B:\mathcal{H}^{s}\times\mathcal{H}^{1-s}\to{\mathbb{R}}.$
2. (2)
Coercivity,
$\sup_{\|v\|_{\mathcal{H}^{1-s}}\leq 1}B(u,v)\approx\|u\|_{\mathcal{H}^{s}}.$
A standard consequence of this relation is the following property:
###### Proposition 4.6.
The evolutions (4.3), respectively (4.1) are forward well-posed in
$\mathcal{H}^{s}$ iff they are backward well-posed in $\mathcal{H}^{1-s}$.
We remark that in the context of this paper forward and backward well-
posedness are almost identical, so for us this property says that well-
posedness in $\mathcal{H}^{s}$ and $\mathcal{H}^{1-s}$ are equivalent.
The above proposition may be equivalently reformulated as the corresponding
result for the system (4.10). It will be more convenient to view it in this
context. To do this, we reinterpret the above duality, in terms of the
associated system (4.14). In view of the symmetry property (4.12), we have the
relation
(4.31)
$\int_{0}^{T}\\!\\!\\!\\!\int_{{\mathbb{R}}^{n}}(\partial_{t}-{\mathcal{L}})\mathbf{v}\cdot
J\mathbf{w}-J\mathbf{v}\cdot(\partial_{t}-{\mathcal{L}})\mathbf{w}\,dxdt=\left.\int\mathbf{v}\cdot
J\mathbf{w}\,dx\ \right|_{0}^{T},$
where the corresponding duality relation is
(4.32) ${\mathbf{B}}(\mathbf{v},\mathbf{w})=\int\mathbf{v}\cdot
J\mathbf{w}\,dx,$
which provides the duality between $\mathcal{H}^{s}$ and $\mathcal{H}^{1-s}$.
Incidentally, a consequence of (4.31) is the duality relation
$S(t,s)=S(s,t)^{\ast},$
where the duality between $\mathcal{H}^{s}$ and $\mathcal{H}^{1-s}$ is the one
given by the bilinear form ${\mathbf{B}}$ above. This can be used to construct
the backward evolution in $\mathcal{H}^{1-s}$ given the forward evolution in
$\mathcal{H}^{s}$, and vice-versa. The full equivalence argument is standard,
and is omitted.
### 4.5. Strichartz estimates
Here we discuss several versions of Strichartz estimates, as well as the
connection between them.
#### 4.5.1. Estimates for homogeneous equations
In the context of this paper, these have the form
(4.33) $\|v\|_{S^{r}}+\|\partial_{t}v\|_{S^{r-1}}\lesssim\|v[0]\|_{H^{r}},$
where for the Strichartz space $S$ we will consider two different choices:
1. i)
Almost lossless estimates, akin to those established in Smith-Tataru [36]. The
corresponding Strichartz norms, denoted by $S=S_{ST}$ are defined as
(4.34) $\displaystyle\|v\|_{S_{ST}^{r}}=\|v\|_{L^{\infty}H^{r}}+\|\langle
D_{x}\rangle^{r-\frac{3}{4}-\delta}v\|_{L^{4}L^{\infty}},\qquad n=2,$
$\displaystyle\|v\|_{S_{ST}^{r}}=\|v\|_{L^{\infty}H^{r}}+\|\langle
D_{x}\rangle^{r-\frac{n-1}{2}-\delta}v\|_{L^{2}L^{\infty}},\qquad n\geq 3.$
Here the loss of derivatives is measured by $\delta>0$, which is an
arbitrarily small parameter.
2. ii)
Estimates with derivative losses, precisely the type that will be established
in this paper. The corresponding Strichartz norms, denoted by $S=S_{AIT}$ are
defined as
(4.35) $\displaystyle\|v\|_{S_{AIT}^{r}}=\|v\|_{L^{\infty}H^{r}}+\|\langle
D_{x}\rangle^{r-\frac{3}{4}-\frac{1}{8}-\delta}v\|_{L^{4}L^{\infty}},\qquad
n=2,$ $\displaystyle\|v\|_{S_{AIT}^{r}}=\|v\|_{L^{\infty}H^{r}}+\|\langle
D_{x}\rangle^{r-\frac{n-1}{2}-\frac{1}{4}-\delta}v\|_{L^{2}L^{\infty}},\qquad
n\geq 3.$
Here $\delta>0$ is again an arbitrarily small parameter, but we allow for an
additional loss of derivatives in the endpoint (Pecher) estimate, namely $1/8$
derivatives in two space dimensions and $1/4$ in higher dimensions.
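To orient the reader, we note that these exponents are consistent with the
constant coefficient case: for the flat wave equation, data in $H^{r}$ yields,
up to arbitrarily small derivative losses,
$\|\langle D_{x}\rangle^{r-\frac{3}{4}}v\|_{L^{4}L^{\infty}}\lesssim\|v[0]\|_{\mathcal{H}^{r}},\qquad n=2,\qquad\|\langle D_{x}\rangle^{r-\frac{n-1}{2}}v\|_{L^{2}L^{\infty}}\lesssim\|v[0]\|_{\mathcal{H}^{r}},\qquad n\geq 3.$
Thus $S_{ST}$ concedes only the arbitrarily small parameter $\delta$ relative
to the flat case, while $S_{AIT}$ concedes in addition the $1/8$, respectively
$1/4$, derivatives noted above. This comparison is purely expository and is not
used in the arguments below.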
These estimates can be applied to any of the four equations discussed in this
section. There are also appropriate counterparts for the corresponding system
(4.10), which have the form
(4.36)
$\|\mathbf{v}_{1}\|_{S^{r}}+\|\mathbf{v}_{2}\|_{S^{r-1}}\lesssim\|\mathbf{v}[0]\|_{\mathcal{H}^{r}},\qquad
S\in\\{S_{ST},S_{AIT}\\}.$
Under very mild assumptions on $g$, these are equivalent to the ones for the
corresponding wave equation:
###### Proposition 4.7.
The Strichartz estimates (4.33) for the homogeneous wave equation are
equivalent to the Strichartz estimates (4.36) for the associated system.
We also remark on a very mild extension of the estimate (4.33) to the
inhomogeneous case. Precisely, if (4.33) holds then we also have the
inhomogeneous bound
(4.37)
$\|v\|_{S^{r}}+\|\partial_{t}v\|_{S^{r-1}}\lesssim\|v[0]\|_{H^{r}}+\|f\|_{L^{1}H^{r-1}}.$
This follows in a straightforward manner by the Duhamel formula, see the
discussion in Section 4.3.
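Schematically, with $S(t,s)$ denoting the evolution operator for the
homogeneous flow and $F(s)$ the state space embedding of the source $f(s)$
(whose exact form depends on which of the four equations is considered), the
Duhamel formula reads
$v[t]=S(t,0)v[0]+\int_{0}^{t}S(t,s)F(s)\,ds,$
and (4.37) follows by applying (4.33) to each term and integrating in $s$.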
We conclude the discussion of the Strichartz estimates for the homogeneous
equation with a simple but important case, which will be useful for us in the
sequel, and applies in particular to the solutions in [36].
###### Proposition 4.8.
Assume that $\partial g\in L^{1}L^{\infty}$ and that the Strichartz estimates
for the homogeneous equation (4.4) hold in $\mathcal{H}^{1}$. Then the
Strichartz estimates for the homogeneous equation hold in $\mathcal{H}^{r}$
for all $r\in{\mathbb{R}}$ for both paradifferential flows (4.1) and (4.2).
We remark that the implicit constant in these Strichartz estimates depends on
the implicit constant in the Strichartz estimate in the hypothesis and on the
bound for $\|\partial g\|_{L^{1}L^{\infty}}$. Later when we apply this result
we will have uniform control over both, so we obtain uniform control over the
$\mathcal{H}^{r}$ Strichartz norm.
###### Proof.
It will be easier to work with the inhomogeneous bound (4.37), as it is more
stable with respect to perturbations. We divide the proof into several steps,
all of which are relatively standard.
_Step 1:_ We start with the case $r=1$ with the additional assumption
$g^{00}=-1$. Then the second equation in (4.2) can be seen as a perturbation
of (4.4) with an $L^{1}L^{2}$ source term. Hence the bound (4.37) for (4.4)
implies the same bound for (4.2).
_Step 2:_ Next, assuming still that $g^{00}=-1$, we extend the bound (4.37)
for (4.2) to all integers $r$ by conjugating by $\langle
D_{x}\rangle^{\sigma}$ with $\sigma=r-1$, where we can estimate perturbatively
the commutator
(4.38) $\|[T_{g^{\alpha\beta}},\langle
D_{x}\rangle^{\sigma}]\partial_{\alpha}\partial_{\beta}\langle
D_{x}\rangle^{-\sigma}v\|_{L^{1}L^{2}}\lesssim\|\partial
g\|_{L^{1}L^{\infty}}\|\partial v\|_{L^{\infty}L^{2}}.$
_Step 3:_ We multiply by $(T_{g^{00}})^{-1}$ to reduce the problem with
nonconstant $g^{00}$ to the case when $g^{00}=-1$. At the conclusion of this
step, we have the bound (4.37) for (4.2) for all $r$.
_Step 4:_ We commute the paracoefficients $T_{g^{\alpha\beta}}$ inside
$\partial_{\alpha}$ perturbatively, in order to obtain the bound (4.37) for
(4.1) for all $r$. ∎
#### 4.5.2. Dual Strichartz estimates
Here one considers the corresponding inhomogeneous problems, with source terms
in dual Strichartz spaces. The estimates have the form
(4.39)
$\|v\|_{L^{\infty}\mathcal{H}^{r}}\lesssim\|v[0]\|_{\mathcal{H}^{r}}+\|f\|_{(S^{1-r})^{\prime}},\qquad
S\in\\{S_{ST},S_{AIT}\\}.$
Classically, these are obtained by duality from the homogeneous estimates, as
follows:
###### Proposition 4.9.
If the homogeneous estimates (4.33) hold in $\mathcal{H}^{r}$ for the forward
(backward) evolution then the dual estimates (4.39) hold in
$\mathcal{H}^{1-r}$ for the backward (forward) evolution.
However, one can do better than this by going instead through the system form
of the equations (4.14). The dual estimates for (4.14) have the form
(4.40)
$\|\mathbf{v}\|_{L^{\infty}\mathcal{H}^{r}}\lesssim\|\mathbf{v}[0]\|_{\mathcal{H}^{r}}+\|\mathbf{f}_{1}\|_{(S^{-r})^{\prime}}+\|\mathbf{f}_{2}\|_{(S^{-r+1})^{\prime}},\qquad
S\in\\{S_{ST},S_{AIT}\\}.$
These are directly obtained from the homogeneous estimates for the system
(4.10) via the duality (4.31):
###### Proposition 4.10.
If the homogeneous estimates hold in $\mathcal{H}^{r}$ for the forward
(backward) evolution (4.10) then the dual estimates hold in
$\mathcal{H}^{1-r}$ for the backward (forward) evolution (4.14).
One can now further return to the original inhomogeneous equation with a
source term as in (4.21), and use the correspondence (4.25) and (4.26), in
order to transfer the dual bounds back. These dual estimates, which represent
a generalization of (4.39), have the form
(4.41)
$\|v\|_{L^{\infty}\mathcal{H}^{r}}\lesssim\|v[0]\|_{\mathcal{H}^{r}}+\|f_{1}\|_{L^{\infty}H^{r-1}\cap(S^{-r})^{\prime}}+\|f_{2}\|_{(S^{1-r})^{\prime}},\qquad
S\in\\{S_{ST},S_{AIT}\\}.$
We obtain the following strengthening of Proposition 4.9:
###### Proposition 4.11.
If the homogeneous estimates (4.33) hold in $\mathcal{H}^{r}$ for the forward
(backward) evolution then the dual estimates (4.41) hold in
$\mathcal{H}^{1-r}$ for the backward (forward) evolution.
#### 4.5.3. Full (retarded) Strichartz estimates
Here we combine the homogeneous and dual Strichartz estimates in a single
bound for the inhomogeneous problem. The classical form is
(4.42)
$\|v\|_{S^{r}}+\|\partial_{t}v\|_{S^{r-1}}\lesssim\|v[0]\|_{\mathcal{H}^{r}}+\|f\|_{(S^{1-r})^{\prime}},\qquad
S\in\\{S_{ST},S_{AIT}\\}.$
However, here we need to take the extra step where we allow source terms of
the form $f=\partial_{t}f_{1}+f_{2}$, and then the estimates have the form
(4.43)
$\|v\|_{S^{r}}+\|\partial_{t}v\|_{S^{r-1}}\lesssim\|v[0]\|_{\mathcal{H}^{r}}+\|f_{1}\|_{S^{r-1}\cap(S^{-r})^{\prime}}+\|f_{2}\|_{(S^{1-r})^{\prime}},\qquad
S\in\\{S_{ST},S_{AIT}\\}.$
As we will see, this is closely related to the corresponding bound for the
associated inhomogeneous system (4.14):
(4.44)
$\|\mathbf{v}_{1}\|_{S^{r}}+\|\mathbf{v}_{2}\|_{S^{r-1}}\lesssim\|\mathbf{v}[0]\|_{\mathcal{H}^{r}}+\|\mathbf{f}_{1}\|_{(S^{-r})^{\prime}}+\|\mathbf{f}_{2}\|_{(S^{-r+1})^{\prime}},\qquad
S\in\\{S_{ST},S_{AIT}\\}.$
Our main result here is as follows:
###### Theorem 4.12.
Consider either the equation (4.3) or (4.1). If the homogeneous problem is
well-posed forward in $\mathcal{H}^{r}$ and backward in $\mathcal{H}^{1-r}$
and satisfies the homogeneous Strichartz estimates (4.33) in both cases, then
the solutions to the associated forward inhomogeneous problem with source term
$f=\partial_{t}f_{1}+f_{2}$ satisfy the bounds (4.43).
###### Proof.
The proof consists of a number of steps:
Step 1: If the homogeneous problem is well-posed forward in $\mathcal{H}^{r}$
and satisfies the homogeneous Strichartz estimates (4.36), then so does the
corresponding system, see Proposition 4.7.
Step 2: If the homogeneous problem is well-posed backward in
$\mathcal{H}^{1-r}$ and satisfies the homogeneous Strichartz estimates, then
so does the corresponding system. By duality, the inhomogeneous system is
well-posed forward in $\mathcal{H}^{r}$ and satisfies the dual Strichartz bounds (4.40).
Step 3: We represent the forward $\mathcal{H}^{r}$ solution by the Duhamel
formula
$\mathbf{v}[t]=S(t,0)\mathbf{v}[0]+\int_{0}^{t}S(t,s)\mathbf{f}(s)\,ds.$
The first term represents the solution to the homogeneous equation, and is
estimated by (4.36). For the second term we have two bounds at our disposal:
the dual bound where we fix $t$ and estimate the output in $\mathcal{H}^{r}$
in terms of the input in the dual Strichartz space, and the homogeneous bound
where we fix $s$, set $\mathbf{f}(s)\in\mathcal{H}^{r}$ and estimate the
output as a function of $t$ in the Strichartz space. Concatenating the two, we
get the restricted bound
(4.45)
$\|\mathbf{v}_{1}\|_{S^{r}(J)}+\|\mathbf{v}_{2}\|_{S^{r-1}(J)}\lesssim\|\mathbf{f}_{1}\|_{(S^{-r})^{\prime}(I)}+\|\mathbf{f}_{2}\|_{(S^{-r+1})^{\prime}(I)},\qquad
S\in\\{S_{ST},S_{AIT}\\},$
where the source $\mathbf{f}$ is supported in an interval $I$ and the output
$\mathbf{v}$ is measured in an interval $J$ so that $I$ precedes $J$. In two
dimensions we can now apply the Christ-Kiselev lemma [8] (or the
$U^{p}$-$V^{p}$ spaces, see [26]) to get the full estimate. In three and
higher dimensions we have a slight problem which is that neither method
applies for bounds from $L^{2}_{t}$ to $L^{2}_{t}$. However in our case this
is not an issue, because our estimates allow for at least a loss of $\delta$
derivatives. Then we can afford to interpolate the two endpoints and use the
Christ-Kiselev lemma for bounds from $L^{2-}_{t}$ to $L^{2+}_{t}$ and then
return to the endpoint setting by Bernstein’s inequality in space and Hölder’s
inequality in time, all at the expense of an arbitrarily small increase in the
size $\delta$ of the loss.
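For the reader's convenience we recall the statement of the Christ-Kiselev
lemma in the form used here: if an operator
$Tf(t)=\int_{I}K(t,s)f(s)\,ds$
is bounded from $L^{p}_{t}X$ to $L^{q}_{t}Y$ with $p<q$, then its retarded
version $\tilde{T}f(t)=\int_{s<t}K(t,s)f(s)\,ds$ satisfies the same bound,
with a constant depending only on $p$ and $q$. The restriction $p<q$ is
exactly what forces the interpolation detour above in the
$L^{2}_{t}\to L^{2}_{t}$ endpoint case.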
Step 4: We transfer the estimate (4.44) back to the original system via the
correspondence (4.25), (4.26), in order to obtain (4.43). ∎
We conclude with a corollary of Theorem 4.12, which will be used later in the
paper and follows by combining this result with Proposition 4.8:
###### Corollary 4.13.
Assume that $\partial g\in L^{1}L^{\infty}$ and that the Strichartz estimates
for the homogeneous equation (4.4) hold in $\mathcal{H}^{1}$. Then the full
Strichartz estimates (4.43) hold in $\mathcal{H}^{r}$ for all
$r\in{\mathbb{R}}$ for both paradifferential flows (4.1) and (4.2).
## 5\. Control parameters and related bounds
### 5.1. Control parameters
Here we introduce our main control parameters associated to a solution $u$ to
the minimal surface equation, which serve to bound the growth of energy for
both solutions to the minimal surface flow and for its linearization. We will
use two such primary quantities, ${\mathcal{A}}$ and ${\mathcal{B}}$, which
are defined as $L^{\infty}$ based Besov norms of the solution $u$, as follows:
(5.1) ${\mathcal{A}}=\sup_{t\in[0,T]}\sum_{k}\|P_{k}\partial
u\|_{L^{\infty}},$
respectively
(5.2) ${\mathcal{B}}(t)=\left(\sum_{k}2^{k}\|P_{k}\partial
u\|_{L^{\infty}}^{2}\right)^{\frac{1}{2}}.$
In connection with ${\mathcal{A}}$, we will also need the slightly stronger
variant ${\mathcal{A}^{\sharp}}\gtrsim{\mathcal{A}}$,
(5.3)
${\mathcal{A}^{\sharp}}=\sup_{t\in[0,T]}\sum_{k}2^{\frac{k}{2}}\|P_{k}\partial
u\|_{L^{2n}}.$
Here the choice of the exponent $2n$ is in no way essential, though it does
provide some minor simplifications in one or two places.
In a nutshell, the energy functionals we construct later in the paper will be
shown to satisfy _cubic balanced_ bounds of the form
(5.4) $\frac{dE}{dt}\lesssim_{{\mathcal{A}^{\sharp}}}{\mathcal{B}}^{2}E,$
which guarantee that energy bounds can be propagated for as long as
${\mathcal{A}^{\sharp}}$ remains finite and ${\mathcal{B}}$ remains in
$L^{2}_{t}$. One should compare these bounds with the classical energy
estimates, which have the form
(5.5) $\frac{dE}{dt}\lesssim_{{\mathcal{A}}}\|\partial^{2}u\|_{L^{\infty}}E,$
and which require an extra half derivative in the control parameter.
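Indeed, by Gronwall's inequality the bound (5.4) integrates to
$E(v[t])\lesssim_{{\mathcal{A}^{\sharp}}}E(v[0])\exp\Big(C\int_{0}^{t}{\mathcal{B}}^{2}(s)\,ds\Big),$
so the energy can be propagated precisely as long as ${\mathcal{B}}\in L^{2}_{t}$,
whereas (5.5) would instead require $\partial^{2}u\in L^{1}_{t}L^{\infty}_{x}$.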
We continue with a few comments concerning our choice of control parameters:
* •
Here ${\mathcal{A}}$ and ${\mathcal{A}^{\sharp}}$ are critical norms for $u$,
which may be described using the Besov notation as capturing the norm
${\mathcal{A}}=\|\partial
u\|_{L^{\infty}_{t}B^{0}_{\infty,1}},\qquad{\mathcal{A}^{\sharp}}=\|\partial
u\|_{L^{\infty}_{t}B^{\frac{1}{2}}_{2n,1}}.$
In a first approximation, the reader should think of ${\mathcal{A}}$ as simply
capturing the $L^{\infty}$ norm of $\partial u$; the slightly stronger Besov
norm above is needed for minor technical reasons, and allows us to work with
scale invariant bounds. Often we will simply rely on the simpler
$L^{\infty}$-bound, since
(5.6) $\|\partial u\|_{L^{\infty}}\lesssim{\mathcal{A}}.$
* •
The control norm ${\mathcal{B}}$, taken at fixed time, is $1/2$ derivative
above scaling, and may also be described using the Besov notation as
${\mathcal{B}}(t)=\|\partial u(t)\|_{B^{\frac{1}{2}}_{\infty,2}}.$
Again, in a first approximation one should simply think of it as $\|\partial
u\|_{BMO^{\frac{1}{2}}}$, which in effect suffices for most of the analysis.
Indeed, we have
(5.7) $\|\partial u\|_{BMO^{\frac{1}{2}}}\lesssim{\mathcal{B}}.$
* •
Given the choice of these control parameters, it is not difficult to see that
our energy estimates of the form (5.4) are invariant with respect to scaling.
This by itself does not mean much; even the classical energy estimates, of the
form (5.5), are scale invariant, but much less useful for low regularity well-
posedness. What is important here is that our energy estimates are _cubic_ and
_balanced_.
* •
The fact that our control norms are based on uniform, rather than
$L^{2}$-bounds, particularly at the level of ${\mathcal{B}}$, is also
critical. This is what allows us to use Strichartz estimates to further
improve the low regularity well-posedness threshold in our results.
For bookkeeping reasons we will use a joint frequency envelope
$\\{c_{k}\\}_{k}$ for the dyadic components of both ${\mathcal{A}^{\sharp}}$
and ${\mathcal{B}}$, so that
(i) $\\{c_{k}\\}_{k}$ is normalized in $\ell^{2}$ and slowly varying,
$\sum c_{k}^{2}=1;$
(ii) We have control of dyadic Littlewood-Paley pieces as follows for
$\partial u$:
(5.8) $2^{\frac{k}{2}}\|P_{k}\partial
u\|_{L^{2n}}\lesssim{\mathcal{A}^{\sharp}}c_{k}^{2},\qquad
2^{\frac{k}{2}}\|P_{k}\partial u\|_{L^{\infty}}\lesssim{\mathcal{B}}c_{k}.$
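Such envelopes always exist; we recall one standard construction. If
$\\{a_{k}\\}_{k}$ denotes the relevant normalized dyadic quantities, then
$c_{k}=\sup_{j}2^{-\delta|j-k|}a_{j},\qquad 0<\delta\ll 1,$
dominates $a_{k}$, is slowly varying in the sense that
$c_{k}\leq 2^{\delta|j-k|}c_{j}$, and satisfies
$\|c\|_{\ell^{2}}\lesssim_{\delta}\|a\|_{\ell^{2}}$; it then remains to
normalize in $\ell^{2}$.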
A-priori, these frequency envelopes depend on time. However, at the conclusion
of the paper, we will see that for our rough solutions they can be taken to be
independent of time, essentially equal to appropriate $L^{2}$-type frequency
envelopes for the initial data.
### 5.2. Related bounds
We will frequently need to use bounds which are similar to (5.8) in nonlinear
expressions, so it is convenient to have a notation for the corresponding
space:
###### Definition 5.1.
The space $\mathfrak{C}_{0}$ is the Banach space of all distributions $v$
which satisfy the bounds
(5.9) $\|v\|_{L^{\infty}}\leq C,\qquad 2^{\frac{k}{2}}\|P_{k}v\|_{L^{2n}}\leq
C{\mathcal{A}^{\sharp}}c_{k}^{2},\qquad
2^{\frac{k}{2}}\|P_{k}v\|_{L^{\infty}}\leq C{\mathcal{B}}c_{k}$
with the norm given by the best constant $C$ in the above inequalities.
For this space we have the following algebra and Moser-type result:
###### Lemma 5.2.
a) The space $\mathfrak{C}_{0}$ is closed with respect to multiplication and
para-multiplication. In particular $\mathfrak{C}_{0}$ is an algebra.
b) Let $F$ be a smooth function, and $v\in\mathfrak{C}_{0}$. Then
$F(v)\in\mathfrak{C}_{0}$. In particular if $\|v\|_{\mathfrak{C}_{0}}\lesssim
1$ then $F(v)$ satisfies
(5.10)
$\|F(v)\|_{\mathfrak{C}_{0}}\lesssim_{{\mathcal{A}}}\|v\|_{\mathfrak{C}_{0}}.$
In particular the above result applies to the metrics $g$, ${\tilde{g}}$ and
${\hat{g}}$, all of which are smooth functions of $\partial u$, and thus
belong to $\mathfrak{C}_{0}$.
###### Proof.
a) We first estimate the $\mathfrak{C}_{0}$ norm for the paraproduct $T_{f}g$
for $f,g\in\mathfrak{C}_{0}$. This is straightforward, using the $L^{\infty}$
bound for $f$, for both the second and the third norms in (5.9). It remains to
obtain a pointwise bound, for which we change the summation order in the
Littlewood-Paley expansion to obtain
$\|T_{f}g\|_{L^{\infty}}\lesssim\sum_{k}\|P_{k}f\|_{L^{\infty}}\|P_{>k}g\|_{L^{\infty}}\lesssim\|f\|_{\mathfrak{C}_{0}}\|g\|_{L^{\infty}}.$
It now remains to estimate $\Pi(f,g)$ in $\mathfrak{C}_{0}$. The uniform bound
is almost identical to the one above. For the ${\mathcal{A}^{\sharp}}$ norm we
use Bernstein’s inequality
$2^{\frac{k}{2}}\|P_{k}\Pi(f,g)\|_{L^{2n}}\lesssim\sum_{j\geq
k}2^{k}\|f_{j}g_{j}\|_{L^{n}}\lesssim\sum_{j\geq
k}2^{k}\|f_{j}\|_{L^{2n}}\|g_{j}\|_{L^{2n}}\lesssim\sum_{j\geq
k}2^{k-j}c_{j}^{2}{\mathcal{A}^{\sharp}}^{2}\|f\|_{\mathfrak{C}_{0}}\|g\|_{\mathfrak{C}_{0}},$
and now the $j$ summation is straightforward.
For the ${\mathcal{B}}$ norm, on the other hand, we estimate
$\|P_{k}\Pi(f,g)\|_{L^{\infty}}\lesssim\sum_{j\geq
k}\|f_{j}g_{j}\|_{L^{\infty}}\lesssim\sum_{j\geq
k}\|f_{j}\|_{L^{\infty}}\|g_{j}\|_{L^{\infty}}\lesssim\sum_{j\geq
k}2^{-\frac{j}{2}}c_{j}{\mathcal{A}^{\sharp}}{\mathcal{B}}\|f\|_{\mathfrak{C}_{0}}\|g\|_{\mathfrak{C}_{0}},$
and again the $j$ summation is straightforward.
b) To prove the Moser inequality we use a continuous Littlewood-Paley
decomposition, which leads to the expansion
$F(v)=F(v_{0})+\int_{0}^{\infty}F^{\prime}(v_{<j})v_{j}\,dj.$
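Here $v_{<j}$ denotes a continuous Littlewood-Paley truncation at frequency
$2^{j}$ and, with a slight abuse of notation, $v_{j}=\frac{d}{dj}v_{<j}$; the
expansion above is then simply the fundamental theorem of calculus applied to
$j\mapsto F(v_{<j})$, as $\frac{d}{dj}F(v_{<j})=F^{\prime}(v_{<j})v_{j}$.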
To estimate $P_{k}F(v)$ we consider several cases:
i) $j=k+O(1)$. Then $c_{j}\approx c_{k}$, $F^{\prime}(v_{<j})$ is directly
bounded in $L^{\infty}$ and our bounds are straightforward.
ii) $j<k-4$. Then we can insert an additional localization,
$P_{k}(F^{\prime}(v_{<j})v_{j})=P_{k}(\tilde{P}_{k}F^{\prime}(v_{<j})v_{j}),$
where we gain from the frequency difference
(5.11) $\|P_{k}F^{\prime}(v_{<j})\|_{L^{\infty}}\lesssim 2^{-N(k-j)},$
which more than compensates for the difference (ratio) between $c_{j}$ and
$c_{k}$.
iii) $j>k+4$. In this case we reexpand $F^{\prime}(v_{<j})$ and write
$F^{\prime}(v_{<j})v_{j}=F^{\prime}(v_{0})v_{j}+\int_{0}^{\infty}F^{\prime\prime}(v_{<l})v_{l}v_{j}\,dl.$
We further separate into two cases:
(iii.1) $l=j+O(1)$. Then we simply bound $F^{\prime\prime}(v_{<l})$ in
$L^{\infty}$, and estimate first for the ${\mathcal{A}^{\sharp}}$ bound using
Bernstein’s inequality
$2^{\frac{k}{2}}\|P_{k}(F^{\prime\prime}(v_{<l})v_{l}v_{j})\|_{L^{2n}}\lesssim
2^{k}\|v_{l}v_{j}\|_{L^{n}}\lesssim
2^{k}\|v_{l}\|_{L^{2n}}\|v_{j}\|_{L^{2n}}\lesssim
2^{k-j}c_{j}^{2}{\mathcal{A}^{\sharp}}^{2},$
where the $j$ and $l$ integrations are trivial. Next we estimate for the
${\mathcal{B}}$-bound
$\|P_{k}(F^{\prime\prime}(v_{<l})v_{l}v_{j})\|_{L^{\infty}}\lesssim\|v_{l}\|_{L^{\infty}}\|v_{j}\|_{L^{\infty}}\lesssim
2^{-\frac{j}{2}}{\mathcal{A}}{\mathcal{B}}c_{j},$
again with easy $j$ and $l$ integrations.
(iii.2) $l<j-4$. Then we can insert another frequency localization,
$P_{k}(F^{\prime\prime}(v_{<l})v_{l}v_{j})=P_{k}(\tilde{P}_{j}F^{\prime\prime}(v_{<l})v_{l}v_{j}),$
and repeat the computation in (b.ii) but using (5.11) to account for the
difference between $l$ and $j$.
∎
In order to avoid tampering with causality, the Littlewood-Paley projections
we use in this paper are purely spatial. This is more of a choice between
different evils than a necessity; see for instance the alternate choice made
in [36]. A substantial but worthwhile price to pay is that on occasion we will
need to separately estimate double time derivatives, in a somewhat imperfect
but sufficient fashion.
A good starting point in this direction is to think of bounds for second
derivatives of our solution $u$. If at least one of the derivatives is
spatial, then this is straightforward:
(5.12) $\|P_{<k}\partial_{x}\partial u\|_{L^{\infty}}\lesssim
2^{\frac{k}{2}}{\mathcal{B}}c_{k}.$
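Indeed, by Bernstein's inequality and the second bound in (5.8),
$\|P_{<k}\partial_{x}\partial u\|_{L^{\infty}}\lesssim\sum_{j<k}2^{j}\|P_{j}\partial u\|_{L^{\infty}}\lesssim\sum_{j<k}2^{\frac{j}{2}}{\mathcal{B}}c_{j}\lesssim 2^{\frac{k}{2}}{\mathcal{B}}c_{k},$
where the last step uses that the envelope $\\{c_{j}\\}_{j}$ is slowly varying.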
However, matters become more complex if instead we look at the second time
derivative of $u$. The natural idea is to use the main equation (3.5) to
estimate $\partial_{t}^{2}u$, by expressing it in terms of spatial derivatives,
$\partial_{t}^{2}u=-\sum_{(\alpha,\beta)\neq(0,0)}{\tilde{g}}^{\alpha\beta}\partial_{\alpha}\partial_{\beta}u.$
If one takes this view, the main difficulties we face are with the high-high
interactions in this expression. But these high-high interactions have the
redeeming feature that they are balanced, so they will often play a
perturbative role. This leads us to define a corrected expression as follows:
###### Definition 5.3.
We denote
(5.13)
$\hat{\partial}_{t}^{2}u=\partial_{t}^{2}u+\sum_{(\alpha,\beta)\neq(0,0)}\Pi({\tilde{g}}^{\alpha\beta},\partial_{\alpha}\partial_{\beta}u).$
More generally, by $\widehat{\partial_{\alpha}\partial_{\beta}}u$ we denote
${\partial_{\alpha}\partial_{\beta}}u$ if $(\alpha,\beta)\neq(0,0)$, and
$\hat{\partial}_{t}^{2}u$ if $(\alpha,\beta)=(0,0)$.
With this notation, we have
###### Lemma 5.4.
Assume that $u$ solves the equation (3.5). Then for its second time derivative
we have the decomposition
(5.14) $\partial_{t}^{2}u=\hat{\partial}_{t}^{2}u+\pi_{2}(u),$
where the two components satisfy the uniform bounds
(5.15) $\|P_{<k}\hat{\partial}_{t}^{2}u\|_{L^{\infty}}\lesssim
2^{\frac{k}{2}}{\mathcal{B}}c_{k},\qquad\|P_{<k}\hat{\partial}_{t}^{2}u\|_{L^{\infty}}\lesssim
2^{k}{\mathcal{A}}c_{k}^{2},$
respectively
(5.16)
$\|\pi_{2}(u)\|_{L^{\infty}}\lesssim_{{\mathcal{A}}}{\mathcal{B}}^{2},\qquad\|\pi_{2}(u)\|_{L^{n}}\lesssim{\mathcal{A}^{\sharp}}^{2}.$
One should compare this with the easier direct bound (5.12) for spatial
derivatives; the good part $\hat{\partial}_{t}^{2}u$ satisfies a similar
bound, but the error $\pi_{2}(u)$ does not. Later, when such expressions are
involved, we will systematically peel off the error perturbatively, and always
avoid differentiating it further.
###### Proof.
The main ingredient here is the Littlewood-Paley decomposition. For expository
simplicity we prove (5.15) at fixed frequency $k$. Using the notation in
(5.13) we can rewrite equation (3.19) as
(5.17)
$\hat{\partial}_{t}^{2}u+\sum_{(\alpha,\beta)\neq(0,0)}T_{{\tilde{g}}^{\alpha\beta}}\partial_{\alpha}\partial_{\beta}u+T_{\partial_{\alpha}\partial_{\beta}u}{\tilde{g}}^{\alpha\beta}=0.$
To finish the proof we consider the expression above localized at frequency
$k$, and evaluated in the $L^{\infty}$-norm
(5.18) $\displaystyle\|P_{k}\hat{\partial}_{t}^{2}u\|_{L^{\infty}}$
$\displaystyle\leq\|P_{<k}({\tilde{g}}^{\alpha\beta})P_{k}(\partial_{\alpha}\partial_{\beta}u)\|_{L^{\infty}}+\|P_{<k}(\partial_{\alpha}\partial_{\beta}u)P_{k}({\tilde{g}}^{\alpha\beta})\|_{L^{\infty}}.$
We bound each of the terms separately. For the second we use that
${\tilde{g}}$ is bounded in $L^{\infty}$, and get
$\|P_{<k}(\partial_{\alpha}\partial_{\beta}u)P_{k}({\tilde{g}}^{\alpha\beta})\|_{L^{\infty}}\leq
2^{\frac{k}{2}}{\mathcal{B}}c_{k}.$
For the first term we rely on the same procedure, and hence finish the proof
of the first bound in (5.15). The second bound in (5.15) has as a starting
point the decomposition in (5.18), only that this time we want to bound the
RHS terms using the control norm ${\mathcal{A}}$. Here we use Lemma 5.2, part
b), and the algebra property of $L^{\infty}$ so that
$\|P_{<k}(\partial_{\alpha}\partial_{\beta}u)P_{k}({\tilde{g}}^{\alpha\beta})\|_{L^{\infty}}\leq\|P_{<k}(\partial_{\alpha}\partial_{\beta}u)\|_{L^{\infty}}\|P_{k}({\tilde{g}}^{\alpha\beta})\|_{L^{\infty}}\leq
2^{k}{\mathcal{A}}c_{k}^{2},$
where for both factors we used (5.10) in order to arrive at the result.
The last bound to prove is (5.16), where, because the frequencies are
balanced, we can easily redistribute the derivatives and estimate each of the
factors using the ${\mathcal{B}}$ norm. Explicitly,
${\tilde{g}}^{\alpha\beta}$ is in $\mathfrak{C}_{0}$ by Lemma 5.2 and hence,
we get that for $(\alpha,\beta)\neq(0,0)$:
$\displaystyle\|\Pi({\tilde{g}}^{\alpha\beta},\partial_{\alpha}\partial_{\beta}u)\|_{L^{\infty}}$
$\displaystyle\leq\sum_{k}\|P_{k}({\tilde{g}}^{\alpha\beta})P_{k}(\partial_{\alpha}\partial_{\beta}u)\|_{L^{\infty}}$
$\displaystyle\leq\sum_{k}2^{-\frac{k}{2}}\|2^{\frac{k}{2}}P_{k}({\tilde{g}}^{\alpha\beta})\|_{L^{\infty}}2^{\frac{k}{2}}\|2^{-\frac{k}{2}}P_{k}(\partial_{\alpha}\partial_{\beta}u)\|_{L^{\infty}}$
$\displaystyle\lesssim_{{\mathcal{A}}}{\mathcal{B}}^{2},$
which is the first bound in (5.16). The second bound in (5.16) is similar, but
replacing the $L^{\infty}$ norms with $L^{2n}$ norms. ∎
The above lemma motivates narrowing the space $\mathfrak{C}_{0}$, in order to
also include information about $\partial_{t}v$. For later use, we also define
two additional closely related spaces.
###### Definition 5.5.
a) The space $\mathfrak{C}$ is the space of distributions $v$ which satisfy
(5.9) and, in addition, $\partial_{t}v$ admits a decomposition
$\partial_{t}v=w_{1}+w_{2}$ so that
(5.19) $\|P_{k}w_{1}\|_{L^{\infty}}\leq
C2^{\frac{k}{2}}{\mathcal{B}}c_{k},\qquad\|w_{2}\|_{L^{\infty}}\leq
C{\mathcal{B}}^{2},$
endowed with the norm defined as the best possible constant $C$ in (5.9) and
in the above inequality relative to all such possible decompositions.
b) The space $\mathfrak{DC}$ consists of all functions $f$ which admit a
decomposition $f=f_{1}+f_{2}$ so that
(5.20) $\|P_{k}f_{1}\|_{L^{\infty}}\leq
C2^{\frac{k}{2}}{\mathcal{B}}c_{k},\qquad\|f_{2}\|_{L^{\infty}}\leq
C{\mathcal{B}}^{2},$
endowed with the norm defined as the best possible constant $C$ in the above
inequality relative to all such possible decompositions.
c) The space $\partial_{x}\mathfrak{DC}$ consists of functions $f$ which admit
a decomposition $f=f_{1}+f_{2}$ so that
(5.21) $\|P_{k}f_{1}\|_{L^{\infty}}\leq
C2^{\frac{3k}{2}}{\mathcal{B}}c_{k},\qquad\|P_{k}f_{2}\|_{L^{\infty}}\leq
C2^{k}{\mathcal{B}}^{2},$
endowed also with the corresponding norm.
We remark that, by definition, we have the simple inclusions
(5.22)
$\mathfrak{C}\subset\mathfrak{C}_{0},\qquad\partial:\mathfrak{C}\to\mathfrak{DC},\qquad\partial_{x}:\mathfrak{DC}\to\partial_{x}\mathfrak{DC}.$
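These inclusions are elementary; for instance, for the last one, if
$f=f_{1}+f_{2}$ is a decomposition as in (5.20) then by Bernstein's inequality
$\|P_{k}\partial_{x}f_{1}\|_{L^{\infty}}\lesssim 2^{k}\|P_{k}f_{1}\|_{L^{\infty}}\leq C2^{\frac{3k}{2}}{\mathcal{B}}c_{k},\qquad\|P_{k}\partial_{x}f_{2}\|_{L^{\infty}}\lesssim 2^{k}\|f_{2}\|_{L^{\infty}}\leq C2^{k}{\mathcal{B}}^{2},$
which are exactly the bounds required in (5.21).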
Based on what we have so far, we begin by identifying some elements of these
spaces:
###### Lemma 5.6.
We have
(5.23) $\|\partial u\|_{\mathfrak{C}}\lesssim
1,\qquad\|\partial^{2}u\|_{\mathfrak{DC}}\lesssim 1.$
###### Proof.
The bounds in (5.23) are trivial unless both derivatives are time derivatives,
in which case it follows directly from the previous Lemma 5.4. ∎
The Moser estimates of Lemma 5.2 may be extended to this setting to include
all smooth functions of $\partial u$:
###### Lemma 5.7.
a) We have the bilinear multiplicative relations
(5.24) $\mathfrak{C}_{0}\cdot\mathfrak{DC}\to\mathfrak{DC},\qquad
T_{\mathfrak{C}_{0}}\cdot\mathfrak{DC}\to\mathfrak{DC},\qquad
T_{\mathfrak{DC}}\mathfrak{C}_{0}\to{\mathcal{A}}\mathfrak{DC},$
as well as
(5.25)
$T_{\mathfrak{DC}}\mathfrak{C}_{0}\to{\mathcal{B}}^{2}L^{\infty},\qquad\Pi(\mathfrak{DC},\mathfrak{C}_{0})\to{\mathcal{B}}^{2}L^{\infty}.$
b) The space $\mathfrak{C}$ is closed under multiplication and para-
multiplication; in particular it is an algebra.
c) Let $F$ be a smooth function, and $v\in\mathfrak{C}$. Then
$F(v)\in\mathfrak{C}$. In particular if $\|v\|_{\mathfrak{C}}\lesssim 1$ then
$F(v)$ satisfies
(5.26) $\|F(v)\|_{\mathfrak{C}}\lesssim_{{\mathcal{A}}}\|v\|_{\mathfrak{C}}.$
d) In addition we also have the paralinearization error bound
(5.27) $\|\partial R(v)\|_{L^{\infty}}\lesssim{\mathcal{B}}^{2},\qquad
R(v)=F(v)-T_{F^{\prime}(v)}v.$
Here part (a) is the main part, after which parts (b) and (c) become immediate
improvements of Lemma 5.2. But the new interesting bound is the one in part
(d), where, notably, we also bound the time derivative of $R(v)$.
###### Proof.
a) Let $z\in\mathfrak{C}_{0}$ and $w\in\mathfrak{DC}$ with the decomposition
$w=w_{1}+w_{2}$ as in (5.20). We skip the first bound in (5.24), as it is a
consequence of the rest of the estimates in (5.24) and (5.25), and first
consider the paraproduct $T_{z}w$. We will bound the contributions of $w_{1}$
and $w_{2}$ in the same norms as $w_{1}$, respectively $w_{2}$. Precisely, we
have
$\|P_{k}T_{z}w_{1}\|_{L^{\infty}}\lesssim\|z\|_{L^{\infty}}\|\tilde{P}_{k}w_{1}\|_{L^{\infty}}\lesssim\|z\|_{\mathfrak{C}_{0}}2^{\frac{k}{2}}{\mathcal{B}}c_{k}\|w\|_{\mathfrak{DC}},$
respectively
$\|T_{z}w_{2}\|_{L^{\infty}}\lesssim\sum_{k}\|z_{k}\|_{L^{\infty}}\|P_{>k}w_{2}\|_{L^{\infty}}\lesssim\|z\|_{\mathfrak{C}_{0}}\|w_{2}\|_{L^{\infty}}.$
Next we consider $T_{w}z$, where we have two choices. The first choice is to
use only the ${\mathcal{A}}$ component of the $\mathfrak{C}_{0}$ norm of $z$,
and prove the last bound in (5.24). Precisely, we have
$\|P_{k}T_{w_{1}}z\|_{L^{\infty}}\lesssim\|w_{1,<k}\|_{L^{\infty}}\|\tilde{P}_{k}z\|_{L^{\infty}}\lesssim
2^{\frac{k}{2}}{\mathcal{B}}c_{k}\|w\|_{\mathfrak{DC}}\cdot{\mathcal{A}}\|z\|_{\mathfrak{C}_{0}},$
respectively
$\|T_{w_{2}}z\|_{L^{\infty}}\lesssim\sum_{k}\|w_{2,<k}\|_{L^{\infty}}\|P_{k}z\|_{L^{\infty}}\lesssim\|w_{2}\|_{L^{\infty}}\cdot{\mathcal{A}}\|z\|_{\mathfrak{C}_{0}}.$
Alternatively, we can use the ${\mathcal{B}}$ component of the
$\mathfrak{C}_{0}$ norm of $z$ in the bound for the $w_{1}$ component,
$\|T_{w_{1}}z\|_{L^{\infty}}\lesssim\sum_{k}\|w_{1,<k}\|_{L^{\infty}}\|z_{k}\|_{L^{\infty}}\lesssim\sum_{k}2^{\frac{k}{2}}{\mathcal{B}}c_{k}\|w\|_{\mathfrak{DC}}\cdot
2^{-\frac{k}{2}}{\mathcal{B}}c_{k}\|z\|_{\mathfrak{C}_{0}}\lesssim{\mathcal{B}}^{2}\|w\|_{\mathfrak{DC}}\|z\|_{\mathfrak{C}_{0}},$
which leads to the first bound in (5.25).
It remains to consider the second bound in (5.25), where we have
$\|\Pi(w_{1},z)\|_{L^{\infty}}\lesssim\sum_{k}\|w_{1,k}\|_{L^{\infty}}\|z_{k}\|_{L^{\infty}}\lesssim\sum_{k}2^{\frac{k}{2}}{\mathcal{B}}c_{k}\|w\|_{\mathfrak{DC}}\cdot
2^{-\frac{k}{2}}{\mathcal{B}}c_{k}\|z\|_{\mathfrak{C}_{0}}\lesssim{\mathcal{B}}^{2}\|w\|_{\mathfrak{DC}}\|z\|_{\mathfrak{C}_{0}},$
respectively
$\|\Pi(w_{2},z)\|_{L^{\infty}}\lesssim\sum_{k}\|w_{2,k}\|_{L^{\infty}}\|z_{k}\|_{L^{\infty}}\lesssim\|w_{2}\|_{L^{\infty}}\cdot{\mathcal{A}}\|z\|_{\mathfrak{C}_{0}}\lesssim{\mathcal{A}}{\mathcal{B}}^{2}\|w\|_{\mathfrak{DC}}\|z\|_{\mathfrak{C}_{0}}.$
b) Compared with part (a) of Lemma 5.2, it remains to estimate the time
derivative of products and paraproducts. Using Leibniz’s rule, this reduces
directly to the multiplicative bounds in (a).
c) Compared with part (b) of Lemma 5.2, it remains to estimate
$\partial_{0}F(v)=F^{\prime}(v)\partial_{0}v$
in $\mathfrak{DC}$. By Lemma 5.2 we have $F^{\prime}(v)\in\mathfrak{C}_{0}$,
while $\partial_{0}v\in\mathfrak{DC}$. Then we can bound the product in
$\mathfrak{DC}$ by part (a) of this Lemma.
d) We have
$\partial R=(F^{\prime}(v)-T_{F^{\prime}(v)})\partial v-T_{\partial
F^{\prime}(v)}v=\Pi(F^{\prime}(v),\partial v)+T_{\partial
v}F^{\prime}(v)-T_{\partial F^{\prime}(v)}v.$
Now it suffices to use (5.26) for both $v$ and $F^{\prime}(v)$. ∎
Applying the above lemma shows that $F(\partial u)\in\mathfrak{C}$, and in
particular all components of the metrics $g$, ${\tilde{g}}$ and ${\hat{g}}$
are in $\mathfrak{C}$. We also have $F(\partial
u)\partial^{2}u\in\mathfrak{DC}$, which in particular shows that the gradient
potentials $A$ and ${\tilde{A}}$ belong to $\mathfrak{DC}$.
We will use part (d) when $v=\partial u$ and $F=g$, in which case (5.27) reads
(5.28) $\|\partial R(\partial
u)\|_{L^{\infty}}\lesssim_{\mathcal{A}}{\mathcal{B}}^{2},\qquad
R=g^{\alpha\beta}+T_{\partial^{\alpha}ug^{\beta\gamma}}\partial_{\gamma}u+T_{\partial^{\beta}ug^{\alpha\gamma}}\partial_{\gamma}u.$
We remark that a similar $H^{s}$ type bound for the same $R$ is provided by
(2.12), namely
(5.29) $\|R(\partial
u)\|_{H^{s-\frac{1}{2}}}\lesssim_{\mathcal{A}}{\mathcal{B}}\|\partial
u\|_{H^{s-1}}.$
The next lemma provides us with the primary example of elements of the space
$\partial_{x}\mathfrak{DC}$:
###### Lemma 5.8.
We have
(5.30)
$\|\partial_{\alpha}\widehat{\partial_{\beta}\partial_{\gamma}}u\|_{\partial_{x}\mathfrak{DC}}\lesssim
1.$
###### Proof.
The bound in (5.30) is trivial if at least two derivatives are spatial, and
follows from (5.23) unless all indices are zero. It remains to consider the
case $\alpha=\beta=\gamma=0$. Here we rely on the earlier decomposition (5.17)
to which we further apply a $\partial_{t}$:
(5.31)
$\partial_{t}\hat{\partial}_{t}^{2}u=-\sum_{(\alpha,\beta)\neq(0,0)}\left(T_{\partial_{t}\tilde{g}^{\alpha\beta}}\partial_{\alpha}\partial_{\beta}u+T_{\tilde{g}^{\alpha\beta}}\partial_{t}\partial_{\alpha}\partial_{\beta}u+T_{\partial_{t}\partial_{\alpha\beta}u}{\tilde{g}}^{\alpha\beta}+T_{\partial_{\alpha}\partial_{\beta}u}\partial_{t}{\tilde{g}}^{\alpha\beta}\right).$
We now investigate each term separately. We begin with the first term, which
needs to be bounded in the $\partial_{x}\mathfrak{DC}$ norm given in (5.21).
We have
$\|P_{k}T_{\partial_{t}\tilde{g}^{\alpha\beta}}\partial_{\alpha}\partial_{\beta}u\|_{L^{\infty}}\leq\|P_{<k}(\partial_{t}\tilde{g}^{\alpha\beta})\|_{L^{\infty}}\|P_{k}(\partial_{\alpha}\partial_{\beta}u)\|_{L^{\infty}}.$
The term that contains the time derivative falling onto the metric will be
bounded using the Moser estimate Lemma 5.7. Explicitly, we have
${\tilde{g}}^{\alpha\beta}\in\mathfrak{C}$, and due to Lemma 5.7, part $c)$,
we get $\partial_{t}{\tilde{g}}^{\alpha\beta}\in\mathfrak{DC}$ which allows us
to decompose it as in (5.20),
$\partial_{t}{\tilde{g}}^{\alpha\beta}={\tilde{g}}_{1}^{\alpha\beta}+{\tilde{g}}_{2}^{\alpha\beta}$
where
$\|P_{<k}\partial_{t}{\tilde{g}}^{\alpha\beta}\|_{L^{\infty}}\leq\|P_{<k}{\tilde{g}}_{1}^{\alpha\beta}\|_{L^{\infty}}+\|{\tilde{g}}_{2}^{\alpha\beta}\|_{L^{\infty}}\leq
C2^{\frac{k}{2}}{\mathcal{B}}c_{k}+C{\mathcal{B}}^{2}.$
We now turn to the second factor, which we can estimate in two ways
$\|P_{k}(\partial_{\alpha}\partial_{\beta}u)\|_{L^{\infty}}\leq
2^{\frac{k}{2}}\|2^{-\frac{k}{2}}P_{k}(\partial_{\alpha}\partial_{\beta}u)\|_{L^{\infty}}\leq
2^{\frac{k}{2}}{\mathcal{B}}c_{k},$
or
$\|P_{k}(\partial_{\alpha}\partial_{\beta}u)\|_{L^{\infty}}\leq
2^{k}\|2^{-k}P_{k}(\partial_{\alpha}\partial_{\beta}u)\|_{L^{\infty}}\leq
2^{k}{\mathcal{A}}c_{k}^{2}.$
Putting together the bounds we have derived leads to
$\|P_{k}T_{\partial_{t}{\tilde{g}}^{\alpha\beta}}\partial_{\alpha}\partial_{\beta}u\|_{L^{\infty}}\leq
C2^{k}{\mathcal{A}}{\mathcal{B}}^{2}c_{k}^{2}+C2^{k}{\mathcal{B}}^{2}c_{k}^{2}.$
We now bound the second term in (5.31)
$\|T_{{\tilde{g}}^{\alpha\beta}}\partial_{t}\partial_{\alpha}\partial_{\beta}u\|_{L^{\infty}}\leq\|{\tilde{g}}^{\alpha\beta}\|_{L^{\infty}}\|\partial_{t}\partial_{\alpha}\partial_{\beta}u\|_{L^{\infty}}.$
For the last term in the above estimate we know that $(\alpha,\beta)\neq(0,0)$
hence there are two cases to consider: (i) we have either $\alpha=0$ or
$\beta=0$, but not both zero, which overall means we need to bound
$\partial_{t}^{2}\partial_{x}u$, or (ii) we have both $\alpha,\beta\neq 0$, in
which case we need a pointwise bound for $\partial_{t}\partial^{2}_{x}u$.
However, both cases can be handled in the same way if we observe that
$\partial_{x}(\partial_{x}\partial_{t}u)$ and
$\partial_{x}(\partial^{2}_{t}u)$ are elements in $\partial_{x}\mathfrak{DC}$;
this is a direct consequence of $\partial^{2}u\in\mathfrak{DC}$ as shown in
(5.23), followed by the inclusion in (5.22).
Finally, the third and fourth terms in (5.31) can be treated in the same way
the first term in (5.31) was shown to be bounded. ∎
We continue with another, slightly more subtle balanced bound:
###### Lemma 5.9.
For $g,h\in\mathfrak{C}$ define
$r=(T_{g}T_{h}-T_{gh})\partial^{2}u.$
Then we have the balanced bound
(5.32) $\|r\|_{L^{\infty}}\lesssim{\mathcal{B}}^{2}.$
###### Proof.
For $\partial^{2}u$ we use the $\mathfrak{DC}$ decomposition as in (5.20),
$\partial^{2}u=f_{1}+f_{2}.$
We begin with the contribution $r_{1}$ of $f_{1}$, which we expand as
$r_{1}=\sum_{k}(T_{g}T_{h}-T_{gh})f_{1,k}.$
This vanishes unless the frequencies $k_{1}$, $k_{2}$ of $g$ and $h$ are
either
(i) $k_{1},k_{2}\leq k$ and $\max\\{k_{1},k_{2}\\}=k+O(1)$, or
(ii) $k_{1}=k_{2}>k+O(1)$.
Then we use the ${\mathcal{A}}$ component of the $\mathfrak{C}_{0}$ norm for
the lower frequency and the ${\mathcal{B}}$ component for the highest
frequency to estimate
$\displaystyle\|r_{1}\|_{L^{\infty}}\lesssim$ $\displaystyle\
\sum_{k}\|f_{1,k}\|_{L^{\infty}}\left(\sum_{k_{1}<k+O(1)}\|g_{k_{1}}\|_{L^{\infty}}\sum_{k_{2}=k+O(1)}\|h_{k_{2}}\|_{L^{\infty}}\right.$
$\displaystyle\,+\left.\sum_{k_{1}=k+O(1)}\|g_{k_{1}}\|_{L^{\infty}}\sum_{k_{2}<k+O(1)}\|h_{k_{2}}\|_{L^{\infty}}+\sum_{k_{1}=k_{2}\geq
k+O(1)}\|g_{k_{1}}\|_{L^{\infty}}\|h_{k_{2}}\|_{L^{\infty}}\right)$
$\displaystyle\lesssim$ $\displaystyle\
2^{\frac{k}{2}}{\mathcal{B}}c_{k}({\mathcal{A}}\cdot
2^{-\frac{k}{2}}{\mathcal{B}}c_{k}+2^{-\frac{k}{2}}{\mathcal{B}}c_{k}\cdot{\mathcal{A}}+{\mathcal{A}}\cdot
2^{-k}{\mathcal{B}}c_{k})$ $\displaystyle\lesssim$ $\displaystyle\
{\mathcal{A}}{\mathcal{B}}^{2},$
as needed. The contribution $r_{2}$ of $f_{2}$ is similar but simpler, using
for the last factor only the uniform bound
$\|f_{2}\|_{L^{\infty}}\lesssim{\mathcal{B}}^{2}$. ∎
As already discussed in the introduction, the paradifferential wave operator
(5.33) $T_{P}=\partial_{\alpha}T_{g^{\alpha\beta}}\partial_{\beta},$
as well as its counterparts $T_{\tilde{P}}$ and $T_{\hat{P}}$ with the metric
$g$ replaced by ${\tilde{g}}$, respectively ${\hat{g}}$, play an important
role in our context.
Throughout the paper, we will interpret various objects related to $u$ as
approximate solutions for the $T_{P}$ equation. We provide several results of
this type, where we use our control parameters ${\mathcal{A}},{\mathcal{B}}$
in order to estimate the source term in the paradifferential equation for both
$u$ and for its derivatives.
###### Lemma 5.10.
We have
(5.34) $\|T_{P}u\|_{L^{\infty}}\lesssim{\mathcal{B}}^{2},$
as well as the similar bounds for $T_{\tilde{P}}$ and $T_{\hat{P}}$.
###### Proof.
We first prove the bound (5.34), and for this we begin with the
paradifferential equation associated to the minimal surface equation (3.5)
$T_{g^{\alpha\beta}}\partial_{\alpha}\partial_{\beta}u+T_{\partial_{\alpha}\partial_{\beta}u}g^{\alpha\beta}+\Pi(g^{\alpha\beta},\partial_{\alpha}\partial_{\beta}u)=0,$
and further isolate the part we are interested in estimating
$\partial_{\beta}T_{g^{\alpha\beta}}\partial_{\alpha}u-T_{\partial_{\beta}g^{\alpha\beta}}\partial_{\alpha}u+T_{\partial_{\alpha}\partial_{\beta}u}g^{\alpha\beta}+\Pi(g^{\alpha\beta},\partial_{\alpha}\partial_{\beta}u)=0.$
The bound we want relies on getting bounds for the following terms
$\|T_{P}u\|_{L^{\infty}}=\|\partial_{\beta}T_{g^{\alpha\beta}}\partial_{\alpha}u\|_{L^{\infty}}\leq\|T_{\partial_{\beta}g^{\alpha\beta}}\partial_{\alpha}u\|_{L^{\infty}}+\|T_{\partial_{\alpha}\partial_{\beta}u}g^{\alpha\beta}\|_{L^{\infty}}+\|\Pi(g^{\alpha\beta},\partial_{\alpha}\partial_{\beta}u)\|_{L^{\infty}}.$
The bounds for all of these terms rely on the fact that
$g^{\alpha\beta}\in\mathfrak{C}$ and $\partial
g^{\alpha\beta},\partial_{\alpha}\partial_{\beta}u\in\mathfrak{DC}$
(a consequence of Lemma 5.7), as well as on the bound given by Lemma 5.4.
Precisely the estimate (5.25) implies that
$\|T_{\partial_{\beta}g^{\alpha\beta}}\partial_{\alpha}u\|_{L^{\infty}}+\|T_{\partial_{\alpha}\partial_{\beta}u}g^{\alpha\beta}\|_{L^{\infty}}+\|\Pi(g^{\alpha\beta},\partial_{\alpha}\partial_{\beta}u)\|_{L^{\infty}}\lesssim{\mathcal{B}}^{2}.$
Similar bounds for $T_{\hat{P}}$ and $T_{\tilde{P}}$ follow from the same
results used in the proof of (5.34).
∎
We next consider similar bounds for derivatives of $u$. Here we will
differentiate between space and time derivatives. We begin with spatial
derivatives:
###### Lemma 5.11.
We have
(5.35) $\|P_{<k}T_{P}\partial_{x}u\|_{L^{\infty}}\lesssim
2^{k}{\mathcal{B}}^{2},$
as well as the similar bounds for $T_{\tilde{P}}$ and $T_{\hat{P}}$.
###### Proof.
For this proof we rely on the previous Lemma 5.10 and on Lemma 5.4 . This
becomes obvious after we commute the $\partial_{x}$ across the $T_{P}$
operator
$P_{k}T_{P}\partial_{x}u=\partial_{x}P_{k}\partial_{\alpha}T_{g^{\alpha\beta}}\partial_{\beta}u-P_{k}\partial_{\alpha}T_{\partial_{x}g^{\alpha\beta}}\partial_{\beta}u$
The first term on the RHS of the identity above is bounded using (5.34) as
follows
$\|\partial_{x}P_{k}\partial_{\alpha}T_{g^{\alpha\beta}}\partial_{\beta}u\|_{L^{\infty}}\lesssim
2^{k}\|\partial_{\alpha}T_{g^{\alpha\beta}}\partial_{\beta}u\|_{L^{\infty}}\lesssim
2^{k}{\mathcal{B}}^{2}.$
Here we took advantage of the $\partial_{x}$ accompanied by the frequency
projector $P_{k}$. A similar advantage will not present itself for the last
term, where we need to distribute the $\alpha$ derivative
$P_{k}\partial_{\alpha}T_{\partial_{x}g^{\alpha\beta}}\partial_{\beta}u=P_{k}T_{\partial_{\alpha}\partial_{x}g^{\alpha\beta}}\partial_{\beta}u+P_{k}T_{\partial_{x}g^{\alpha\beta}}\partial_{\alpha}\partial_{\beta}u:=e_{1}+e_{2}.$
We bound $e_{1}$ using Lemma 5.7, by noting that $\partial_{x}\partial
g^{\alpha\beta}\in\partial_{x}\mathfrak{DC}$, which means it admits a
decomposition
$\partial_{x}\partial g^{\alpha\beta}=f_{1}+f_{2},$
where
$\|P_{k}f_{1}\|_{L^{\infty}}\lesssim
2^{\frac{3k}{2}}{\mathcal{B}}c_{k},\quad\|P_{k}f_{2}\|_{L^{\infty}}\lesssim
2^{k}{\mathcal{B}}^{2}.$
Thus, we get
$\|e_{1}\|_{L^{\infty}}\lesssim\left(\|P_{k}f_{1}\|_{L^{\infty}}+\|P_{k}f_{2}\|_{L^{\infty}}\right)\|\partial_{\beta}u\|_{L^{\infty}}\lesssim\left(2^{k}{\mathcal{B}}^{2}+2^{\frac{3k}{2}}{\mathcal{B}}c_{k}\right)\|\partial_{\beta}u\|_{L^{\infty}},$
which leads to the desired bound once we estimate the last factor accordingly.
We may use either of the following bounds:
$\|\partial_{\beta}u\|_{L^{\infty}}\lesssim
2^{-\frac{k}{2}}\|2^{\frac{k}{2}}\partial_{\beta}u\|_{L^{\infty}}\lesssim
2^{-\frac{k}{2}}{\mathcal{B}}\qquad\mbox{ or
}\qquad\|\partial_{\beta}u\|_{L^{\infty}}\lesssim{\mathcal{A}}.$
For the first term in the bracket we use the control norm ${\mathcal{A}}$, and
for the second term we use the ${\mathcal{B}}$ norm bound.
For $e_{2}$ we use the decomposition in Lemma 5.4 for
$\partial_{\alpha}\partial_{\beta}u$ and for $g$ we use the fact that
$g\in\mathfrak{C}_{0}$ where we can use either the ${\mathcal{A}}$ bound or
the ${\mathcal{B}}$ bound. The computations are similar to the case of
$e_{1}$.
The bounds for $T_{\tilde{P}}$ and $T_{\hat{P}}$ follow by exactly the same
argument as the one used above in the $T_{P}$ case.
∎
###### Lemma 5.12.
a) We have
(5.36) $T_{P}\partial u\in\partial({\mathcal{B}}^{2}L^{\infty}),$
i.e. there exists a representation
(5.37) $T_{P}\partial
u=\partial_{\alpha}f^{\alpha},\qquad|f|\lesssim{\mathcal{B}}^{2}.$
b) We also have
(5.38)
$\|P_{k}\partial_{\alpha}T_{g^{\alpha\beta}}\widehat{\partial_{\beta}\partial_{\gamma}}u\|_{L^{\infty}}+\|P_{k}\partial_{\gamma}T_{g^{\alpha\beta}}\widehat{\partial_{\beta}\partial_{\alpha}}u\|_{L^{\infty}}\lesssim
2^{k}{\mathcal{B}}^{2}.$
Similar results hold with $g$ replaced by ${\tilde{g}}$ or ${\hat{g}}$.
###### Proof.
a) We write
$\displaystyle\partial_{\alpha}T_{g^{\alpha\beta}}\partial_{\beta}\partial u=$
$\displaystyle\
\partial\partial_{\alpha}T_{g^{\alpha\beta}}\partial_{\beta}u-\partial_{\alpha}T_{\partial
g^{\alpha\beta}}\partial_{\beta}u.$
Here by (5.25) we have
$\|T_{\partial g}\partial u\|_{L^{\infty}}\lesssim{\mathcal{B}}^{2},$
so we get
$\displaystyle\partial_{\alpha}T_{g^{\alpha\beta}}\partial_{\beta}\partial u=$
$\displaystyle\ \partial
T_{g^{\alpha\beta}}\partial_{\alpha}\partial_{\beta}u+\partial({\mathcal{B}}^{2}L^{\infty})=\partial\Pi(g^{\alpha\beta},\partial_{\alpha}\partial_{\beta}u)+\partial
T_{\partial_{\alpha}\partial_{\beta}u}g^{\alpha\beta}+\partial({\mathcal{B}}^{2}L^{\infty}),$
where we can use again the previous lemma.
b) The first step here is to reduce to the case of the metric ${\tilde{g}}$.
Each of the other two metrics may be written in the form $h{\tilde{g}}$, with
$h=h(\partial u)$. Then we can write
(5.39)
$\partial_{\alpha}T_{h{\tilde{g}}^{\alpha\beta}}\widehat{\partial_{\beta}\partial_{\gamma}}u=T_{h}\partial_{\alpha}T_{{\tilde{g}}^{\alpha\beta}}\widehat{\partial_{\beta}\partial_{\gamma}}u-T_{\partial_{\alpha}h}T_{{\tilde{g}}^{\alpha\beta}}\widehat{\partial_{\beta}\partial_{\gamma}}u+\partial_{\alpha}(T_{h{\tilde{g}}^{\alpha\beta}}-T_{h}T_{{\tilde{g}}^{\alpha\beta}})\widehat{\partial_{\beta}\partial_{\gamma}}u.$
The first term corresponds to our reduction, and the remaining terms need to
be estimated perturbatively. This is straightforward unless $\alpha=0$, so we
focus now on this case.
For the middle term in (5.39) we can use the last part of (5.26) in Lemma 5.7
for $\partial_{0}h$. Using a $\mathfrak{DC}$ decomposition for it,
$\partial_{0}h=h_{1}+h_{2}$, we can match the two terms with the two pointwise
bounds for $\widehat{\partial_{\beta}\partial_{\gamma}}u$, namely
$\|P_{k}\widehat{\partial_{\beta}\partial_{\gamma}}u\|_{L^{\infty}}\lesssim
2^{k}{\mathcal{A}}c_{k}^{2},\qquad\|P_{k}\widehat{\partial_{\beta}\partial_{\gamma}}u\|_{L^{\infty}}\lesssim
2^{\frac{k}{2}}{\mathcal{B}}c_{k}.$
This yields
$\displaystyle\|P_{k}T_{\partial_{\alpha}h}T_{{\tilde{g}}^{\alpha\beta}}\widehat{\partial_{\beta}\partial_{\gamma}}u\|_{L^{\infty}}\lesssim$
$\displaystyle\
\|h_{1}\|_{L^{\infty}}\cdot\|\tilde{P}_{k}\widehat{\partial_{\beta}\partial_{\gamma}}u\|_{L^{\infty}}+\|h_{2}\|_{L^{\infty}}\cdot\|\tilde{P}_{k}\widehat{\partial_{\beta}\partial_{\gamma}}u\|_{L^{\infty}}$
$\displaystyle\lesssim$ $\displaystyle\ {\mathcal{B}}^{2}\cdot
2^{k}{\mathcal{A}}c_{k}^{2}+2^{\frac{k}{2}}{\mathcal{B}}c_{k}\cdot
2^{\frac{k}{2}}{\mathcal{B}}c_{k},$
where the frequency envelope provides the summation with respect to $k$.
For the last expression in (5.39) we distribute $\partial_{0}$. If it falls on
the main term, then we can combine the bound (5.30) with the para-composition
bound in Lemma 2.4 exactly as in the proof of Lemma 5.9. If it falls on $h$
(or similarly on ${\tilde{g}}$) then we use the same decomposition as above
for $\partial_{0}h$. For the $h_{1}$ contribution we have a direct bound
without using any cancellations, while for $h_{2}$ we use again Lemma 2.4.
We continue with the second step (note that these two steps are
interchangeable), which is to switch $\partial_{\alpha}$ and
$\partial_{\gamma}$. For fixed $\alpha$ and $\gamma$, we write
(5.40)
$\partial_{\alpha}T_{{\tilde{g}}^{\alpha\beta}}\widehat{\partial_{\beta}\partial_{\gamma}}u=\partial_{\gamma}T_{{\tilde{g}}^{\alpha\beta}}\widehat{\partial_{\beta}\partial_{\alpha}}u+f,$
where $f$ satisfies
(5.41) $\|P_{<k}f\|_{L^{\infty}}\lesssim 2^{k}{\mathcal{B}}^{2}.$
This is trivial if $\alpha=\gamma=0$. If both are nonzero, or if one of them
is zero but $\beta\neq 0$, then there is no hat correction and this is a
straightforward commutator bound. It remains to discuss the case when
$\beta=0$ and exactly one of $\alpha$ and $\gamma$ is zero, say $\gamma=0$.
Then we need to consider the difference
$\displaystyle f=$ $\displaystyle\ \partial_{\alpha}T_{{\tilde{g}}^{\alpha
0}}\widehat{\partial_{0}\partial_{0}}u-\partial_{0}T_{{\tilde{g}}^{\alpha
0}}\partial_{0}\partial_{\alpha}u$ $\displaystyle=$ $\displaystyle\
\partial_{\alpha}T_{{\tilde{g}}^{\alpha 0}}\Pi(u,\partial_{x}\partial
u)+T_{\partial_{\alpha}{\tilde{g}}^{\alpha
0}}\partial_{0}\partial_{0}u-T_{\partial_{0}{\tilde{g}}^{\alpha
0}}\partial_{0}\partial_{\alpha}u,$
which can be estimated as in (5.41) using the fact that $\alpha\neq 0$ and the
bound (5.15) for the second time derivative of $u$, respectively the similar
bound (5.26) (third estimate) for $\partial_{0}{\tilde{g}}$.
It remains to examine the expression
$g_{\gamma}=\partial_{\gamma}T_{{\tilde{g}}^{\alpha\beta}}\widehat{\partial_{\beta}\partial_{\alpha}}u,$
where, unlike above, we take advantage of the summation with respect to
$\alpha$ and $\beta$. Then, using the $u$ equation, we have
$g_{\gamma}=\partial_{\gamma}T_{\partial_{\alpha}\partial_{\beta}u}{\tilde{g}}^{\alpha\beta}.$
The term where both $\alpha$ and $\beta$ are zero vanishes, so this can be
estimated directly as in (5.41) if $\gamma\neq 0$, and using either (5.15) or
(5.26) (third estimate) for $\partial_{0}{\tilde{g}}$, otherwise.
∎
## 6\. Paracontrolled distributions
To motivate this section, we start from the classical energy estimates for the
wave equation, which are obtained using the multiplier method. Precisely, one
multiplies the equation $\Box_{g}u=f$ by $Xu$ and simply integrates by parts.
Here $X$ is any regular time-like vector field. In the next section, we prove
energy estimates for the paradifferential equation (3.25), by emulating this
strategy at the paradifferential level. The challenge is then to uncover a
suitable vector field $X$. Unlike the classical case, here not every time-like
vector field $X$ will suffice. Instead $X$ must be carefully chosen, and in
particular it will inherently have a limited regularity.
Since the metric $g$ is a function of $\nabla u$, scaling considerations
indicate that the vector field $X$ should be at the same regularity level.
Naively, one might hope to have an explicit expression $X=X(\nabla u)$ for our
vector field. Unfortunately, seeking such an $X$ eventually leads to an
overdetermined system. At the other extreme, one might enlarge the class of
$X$ to all distributions which satisfy the same $H^{s}$ and Besov norms as
$\partial u$, which is essentially the class of functions which satisfy
(5.26). While this class will turn out to contain the correct choice for $X$,
it is nevertheless too large to allow for a clean implementation of the
multiplier method.
Instead, there is a more subtle alternative, namely to have the vector $X$ to
be _paracontrolled_ by $\partial u$. This terminology was originally
introduced by Gubinelli, Imkeller and Perkowski [12] in connection to Bony’s
calculus, in order to study stochastic pde problems, see also [13]. However,
similar constructions have been carried out earlier in renormalization
arguments, e.g. for wave maps, in work of Tao [40], Tataru [45] and Sterbenz-
Tataru [39]; the last reference used the name _renormalizable_ for the
corresponding class of distributions.
In the standard usage, this is more of a principle than an exact notion, which
needs to be properly adapted to one’s purposes. For our own objective here, we
provide a very precise definition of this notion, which is exactly tailored to
the problem at hand.
### 6.1. Definitions and key properties
###### Definition 6.1.
We say that a function $z$ is paracontrolled by $\partial u$ in a time
interval $I$ if it admits a representation (such a representation might not be
unique in general, though later in the paper we often identify specific
choices for the paracoefficients) of the form
(6.1) $z=T_{a}\partial u+r,$
where the vector field $a$ and the error $r$ have the following properties:
(i) bounded para-coefficients $a$:
(6.2) $\|a\|_{\mathfrak{C}}\leq C.$
|
# Periodic DMP formulation for Quaternion Trajectories
Fares J. Abu-Dakka1, Matteo Saveriano2, and Luka Peternel3
1Intelligent Robotics Group, Dept of Electrical Engineering and Automation, Aalto University, Finland (e-mail: [email protected]).
2Dept of Computer Science and Digital Science Center, University of Innsbruck, Austria (e-mail: [email protected]).
3Delft Haptics Lab, Dept of Cognitive Robotics, Delft University of Technology, Delft, Netherlands (e-mail: [email protected]).
This work has been partially supported by the CHIST-ERA project IPALM (Academy of Finland decision 326304), and by the Austrian Research Foundation (Euregio IPN 86-N30, OLIVER).
###### Abstract
Imitation learning techniques have been used as a way to transfer skills to
robots. Among them, dynamic movement primitives (DMPs) have been widely
exploited as an effective and an efficient technique to learn and reproduce
complex discrete and periodic skills. While DMPs have been properly formulated
for learning point-to-point movements for both translation and orientation,
periodic ones are missing a formulation to learn the orientation. To address
this gap, we propose a novel DMP formulation that enables encoding of periodic
orientation trajectories. Within this formulation we develop two approaches:
Riemannian metric-based projection approach and unit quaternion based periodic
DMP. Both formulations exploit unit quaternions to represent the orientation.
However, the first exploits the properties of Riemannian manifolds to work in
the tangent space of the unit sphere. The second encodes directly the unit
quaternion trajectory while guaranteeing the unitary norm of the generated
quaternions. We validated the technical aspects of the proposed methods in
simulation. Then we performed experiments on a real robot to execute daily
tasks that involve periodic orientation changes (i.e., surface
polishing/wiping and liquid mixing by shaking).
## I Introduction
The ability to control movements and interaction behaviour is one of the
fundamental aspects of robots performing their intended tasks. The underlying
properties of the task can be encoded and represented by reference
trajectories of motion impedance and/or force. Therefore, it is imperative to
have a reliable and powerful trajectory encoding method.
One of the most widely used methods is Dynamic Movement Primitives (DMPs) [1,
2]. DMPs are capable of encoding discrete and periodic trajectories. Discrete
trajectories are point-to-point motions with distinct starting and ending
point (goal), which are suitable for many daily tasks (e.g., pick and place).
Nevertheless, many other daily tasks are of a periodic nature, and periodic
trajectories have their own specifics. For example, their starting and ending
points coincide, and all neighbouring points have to be smoothly connected to
each other to form a repetitive pattern. To consider these
specifics, Ijspeert et al. [1] introduced a different formulation for periodic
(also called rhythmic) DMPs. Since then, periodic DMPs and their upgrades have
been successfully applied to various realistic tasks of periodic nature, such
as surface wiping [3], sawing [4], bolt screwing [5], humanoid locomotion [6]
and exoskeleton assistance [7].
Figure 1: Franka Emika Panda performs polishing of a curved surface.
DMPs were originally designed to encode one Degree of Freedom (DoF)
trajectories and are well suited for representing independent signals like
joint or Cartesian positions. Synchronization between different DoF is usually
achieved by exploiting a common phase variable. However, in common orientation
representations like rotation matrix or unit quaternions the DoFs are
interrelated by geometric constraints (e.g., unitary norm). To properly
handle these constraints, both learning and integration need to respect the
structure of the orientation representation and integrate the rotation
elements while staying on the constraint manifold. To account for this
interdependence among DoFs, Pastor et al. [8] reformulated the original
discrete DMP to encode unit quaternion profiles. However, this formulation
does not take into account the geometry of the unit sphere, since it only used
the vector part of the quaternion product. To address this issue DMPs were
reformulated for direct unit quaternion encoding in [9, 10]. The quaternion-
based DMPs were also extended to include real-time goal switching mechanism
[11]. As an alternative to the quaternion representation, a rotation matrix
can be used as in [11]. The existing methods are suitable for discrete (point-
to-point) orientation trajectories. However, there is no DMP approach that can
effectively encode periodic orientation trajectories.
To address this gap, we propose a novel DMP formulation that enables encoding
of periodic orientation trajectories. Within this formulation we develop two
approaches: Riemannian Metric-based Periodic (RMP-DMP) and Unit Quaternion-
based Periodic (QP-DMP). The first exploits the fact that the space of unit
quaternions is a Riemannian manifold that locally behaves as an Euclidean
space (tangent space). Therefore, we can project unit quaternion trajectories
onto the tangent space, fit a periodic DMP in the tangent space, and project
back the output of this DMP onto the unit quaternion manifold. The second
encodes directly the unit quaternion trajectory and uses quaternion operations
to ensure the unitary norm of the integrated quaternions. The proposed
approaches are tested on synthetic data and then used to perform surface
polishing/wiping and mixing liquids by shaking with a robotic manipulator (see
Fig. 1).
## II Related Works
Learning from Demonstration (LfD) has been widely used as a convenient way to
transfer human skills to robots. It aims at extracting relevant motion
patterns from human demonstrations and subsequently applying them to different
situations. In the past decades, several LfD based approaches have been
developed such as: DMPs [12, 9], Probabilistic Movement Primitives (ProMP)
[13], Stable Dynamical Systems (SDS) [14, 15], GMM and Task-Parameterized
(TP-GMM) [16], and Kernelized Movement Primitive (KMP) [17]. In many previous
works, quaternion trajectories are learned and adapted without considering the
unit norm constraint (e.g., orientation DMP [18] and TP-GMM [19]), leading to
improper quaternions and hence requiring an additional re-normalization.
The problem of learning proper orientations has been investigated in several
works. The work in [8] was the first attempt to modify the DMP formulation to
deal with unit quaternions. Their DMP considers the vector part of the
quaternion product as a measure of the error between the current and the goal
orientation. However, this formulation converges slowly to an attractor
point, as it does not consider the full geometric constraints of the unit
quaternions. Later, the authors in [9, 11] considered a geometrically
consistent orientation error, i.e., the angular velocity needed to rotate the
current quaternion into the goal one in a unitary time. The stability of both
DMP formulations is shown in [20] using Lyapunov arguments.
In [21], GMM is used to model the distribution of the quaternion
displacements. This probabilistic encoding, together with the exploitation of
Riemannian metrics, has been employed in [22] to learn orientation
trajectories with TP-GMM.
All previously mentioned approaches exploited LfD techniques in order to learn
discrete (point-to-point) orientation trajectories and not periodic ones. To
the best of the authors' knowledge, this is the first work that focuses on
learning periodic DMPs for orientation trajectories.
The idea of learning a KMP in the tangent space of the orientation manifold is
presented in [17]. While the authors also successfully tested orientation-KMP
against periodic unit quaternion trajectories, the orientation-KMP is an
alternative approach and does not resolve the issue within the periodic-DMP
formulation itself. Given the popularity of DMPs, there is thus a significant
functionality gap for the many robotic systems that are already based on DMPs,
as well as for future applications that would prefer the DMP approach.
## III Background
In this section, we provide a brief introduction to the classical periodic DMP
formulation (sometimes called rhythmic DMP). Periodic DMPs are used when the
encoded motion follows a rhythmic pattern. Moreover, we provide basic
quaternion notations and operations used in the paper.
### III-A Classical Periodic DMP formulation
The basic idea of periodic DMPs is to model movements by a system of
differential equations that ensure some desired behavior, e.g. convergence to
a specified movement cycle in order to encode motion of rhythmic patterns [1].
A DMP for a single DoF periodic trajectory $y$ is defined by the following set
of nonlinear differential equations
$\dot{z}=\Omega\left(\alpha_{z}\left(\beta_{z}(g-y)-z\right)+f(\phi)\right),$ (1)
$\dot{y}=\Omega z,$ (2)
$\tau\dot{\phi}=1,$ (3)
where $g$ is the goal and its value can be set to zero or alternatively to the
average of the demonstrated trajectory cycle. $\Omega$ is the frequency and
$y$ is the desired periodic trajectory that we want to encode with a DMP.
The main difference between periodic DMPs and point-to-point DMPs is that the
time constant related to trajectory duration is replaced by the frequency of
trajectory execution (refer to [12, 1] for details). In addition, the periodic
DMPs must ensure that the values of the initial phase ($\phi=0$) and the final
phase ($\phi=2\pi$) coincide in order to achieve smooth transition during the
repetitions. $f(\phi)$ is defined with $N$ Gaussian kernels according to the
following equation
$f(\phi)=\frac{\sum_{i=1}^{N}\Psi_{i}(\phi)w_{i}}{\sum_{i=1}^{N}\Psi_{i}(\phi)}\,r,$ (4)
$\Psi_{i}(\phi)=\exp\left(h\left(\cos\left(\phi-c_{i}\right)-1\right)\right),$ (5)
where the kernel centers $c_{i}$ are uniformly distributed along the phase
space, $w_{i}$ are the learned weights, and $r$ is used to modulate the
amplitude of the periodic signal [1, 23] (if not used, it can be set to $r=1$
[7]).
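To make the formulation concrete, below is a minimal sketch (not the authors' implementation) of how (1)–(5) can be integrated numerically; the Euler scheme, the kernel width `h`, the damping choice $\beta_z=\alpha_z/4$, and all identifiers are our illustrative assumptions.

```python
# Minimal sketch of the classical periodic DMP (1)-(5); illustrative only.
import numpy as np

class PeriodicDMP1D:
    def __init__(self, n_kernels=25, alpha_z=48.0, omega=2*np.pi, r=1.0):
        self.alpha_z, self.beta_z = alpha_z, alpha_z / 4.0  # damping (assumed)
        self.Omega, self.r = omega, r
        self.c = np.linspace(0, 2*np.pi, n_kernels, endpoint=False)  # centers
        self.h = 2.5 * n_kernels                 # kernel width (assumed)
        self.w = np.zeros(n_kernels)             # learned weights

    def forcing(self, phi):                      # f(phi), (4)-(5)
        psi = np.exp(self.h * (np.cos(phi - self.c) - 1.0))
        return (psi @ self.w) / (psi.sum() + 1e-10) * self.r

    def step(self, y, z, phi, g=0.0, dt=0.002):  # Euler step of (1)-(3)
        z_dot = self.Omega * (self.alpha_z * (self.beta_z * (g - y) - z)
                              + self.forcing(phi))
        y_dot = self.Omega * z
        phi_dot = self.Omega                     # from tau*phi_dot = 1
        return y + dt*y_dot, z + dt*z_dot, (phi + dt*phi_dot) % (2*np.pi)
```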
The classical periodic DMP described by (1)–(3) does not encode the transit
motion needed to start the periodic one. Transients are important in several
applications, e.g., in humanoid robot walking, where the first step from a
resting position is usually a transient and is needed to start the periodic
motion. To overcome this limitation, [24] modified the classical formulation
of periodic DMPs to explicitly consider transients as motion trajectory that
converge towards the limit cycle (i.e., periodic) one.
### III-B Periodic state estimation and control
The phase and frequency of periodic DMPs can be controlled by an adaptive
oscillator, which estimates them as [23]
$\dot{\phi}=\Omega-K\cdot e\cdot\sin(\phi),$ (6)
$\dot{\Omega}=-K\cdot e\cdot\sin(\phi),$ (7)
where $K$ is a positive-value coupling constant and $e=U-\hat{U}$ is a
difference between some external signal $U$ (e.g., human muscle activity in
exoskeleton control [7]) and its internal estimation $\hat{U}$ constructed by
a Fourier series [25]
$\hat{U}=\sum_{c=0}^{M}(\alpha_{c}\cos(c\phi)+\beta_{c}\sin(c\phi)),$ (8)
where $M$ is the size of the Fourier series. Fourier series parameters are
learnt in the following manner
$\dot{\alpha}_{c}=\eta\cos(c\phi)\cdot e,$ (9)
$\dot{\beta}_{c}=\eta\sin(c\phi)\cdot e,$ (10)
where parameter $\eta$ is a learning rate. The open parameters can be set to
$K=10$, $M=10$ and $\eta=2$ [25]. Adaptive oscillators are most useful during
online learning to infer periodic state (phase and frequency) in real-time.
Nevertheless, they are also useful for offline learning when the recorded
signal has variable frequency.
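As a concrete illustration, the following is a minimal sketch of one update step of the adaptive oscillator (6)–(10); the Euler discretization and the function name are our assumptions, while the constants $K$, $\eta$, and $M$ follow the text.

```python
# Minimal sketch of one adaptive-oscillator update, (6)-(10); illustrative.
import numpy as np

def adaptive_oscillator_step(phi, Omega, alpha, beta, U, dt, K=10.0, eta=2.0):
    c = np.arange(len(alpha))                                  # c = 0..M
    U_hat = np.sum(alpha*np.cos(c*phi) + beta*np.sin(c*phi))   # (8)
    e = U - U_hat                                              # tracking error
    phi_new   = phi   + dt * (Omega - K*e*np.sin(phi))         # (6)
    Omega_new = Omega + dt * (-K*e*np.sin(phi))                # (7)
    alpha_new = alpha + dt * eta*np.cos(c*phi)*e               # (9)
    beta_new  = beta  + dt * eta*np.sin(c*phi)*e               # (10)
    return phi_new, Omega_new, alpha_new, beta_new
```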
### III-C Quaternion operations
A quaternion is a hyper-complex mathematical object. Let
us define a quaternion as
$\boldsymbol{\mathbf{q}}=\nu+\boldsymbol{\mathbf{u}}\,:\,\nu\in\mathbb{R},\,\boldsymbol{\mathbf{u}}=[u_{x},u_{y},u_{z}]^{\top}\in\mathbb{R}^{3},$
$\boldsymbol{\mathbf{q}}\in\mathbb{S}^{3}$ and $\mathbb{S}^{3}$ is a unit
sphere in $\mathbb{R}^{4}$. A quaternion with a unit norm is called a unit
quaternion and is used to represent an orientation in 3–D space. In this
representation $\boldsymbol{\mathbf{q}}$ and $-\boldsymbol{\mathbf{q}}$
represent the same orientation.
$||\boldsymbol{\mathbf{q}}||=\sqrt{\nu^{2}+u_{x}^{2}+u_{y}^{2}+u_{z}^{2}}$ is
the quaternion norm where $||\cdot||$ denotes $\ell_{2}$ norm. The quaternion
conjugation of $\boldsymbol{\mathbf{q}}$ is defined as
$\bar{\boldsymbol{\mathbf{q}}}=\nu+(-\boldsymbol{\mathbf{u}})$, while the
multiplication of
$\boldsymbol{\mathbf{q}}_{1},\boldsymbol{\mathbf{q}}_{2}\in\mathbb{S}^{3}$ is
defined as
$\boldsymbol{\mathbf{q}}_{1}*\boldsymbol{\mathbf{q}}_{2}=(\nu_{1}\nu_{2}-\boldsymbol{\mathbf{u}}_{1}^{\top}\boldsymbol{\mathbf{u}}_{2})+(\nu_{1}\boldsymbol{\mathbf{u}}_{2}+\nu_{2}\boldsymbol{\mathbf{u}}_{1}+\boldsymbol{\mathbf{u}}_{1}\times\boldsymbol{\mathbf{u}}_{2}).$ (11)
In order to project unit quaternions back and forth between the unit sphere
manifold $\mathbb{S}^{3}$ and the tangent space $\mathbb{R}^{3}$ we use
logarithmic and exponential mapping operators.
$\boldsymbol{\mathbf{\zeta}}={\text{Log}^{q}_{\boldsymbol{\mathbf{q}}_{2}}(\boldsymbol{\mathbf{q}}_{1}):\mathbb{S}^{3}\mapsto\mathbb{R}^{3}}$
maps $\boldsymbol{\mathbf{q}}_{1}\in\mathbb{S}^{3}$ to
$\boldsymbol{\mathbf{\zeta}}\in\mathbb{R}^{3}$ w.r.t. to
$\boldsymbol{\mathbf{q}}_{2}$. Consider
$\boldsymbol{\mathbf{q}}=\boldsymbol{\mathbf{q}}_{1}*\bar{\boldsymbol{\mathbf{q}}}_{2}$,
$\begin{split}\boldsymbol{\mathbf{\zeta}}=\text{Log}^{q}_{\boldsymbol{\mathbf{q}}_{2}}(\boldsymbol{\mathbf{q}}_{1})&=\text{Log}^{q}(\boldsymbol{\mathbf{q}}_{1}*\bar{\boldsymbol{\mathbf{q}}}_{2})=\text{Log}^{q}(\boldsymbol{\mathbf{q}})\\ &=\begin{cases}\arccos(\nu)\frac{\boldsymbol{\mathbf{u}}}{\|\boldsymbol{\mathbf{u}}\|},&\|\boldsymbol{\mathbf{u}}\|\neq 0\\ [0\,\,0\,\,0]^{\top},&\text{otherwise}\end{cases}\end{split}$ (12)
Inversely,
$\boldsymbol{\mathbf{q}}={\text{Exp}^{q}_{\boldsymbol{\mathbf{q}}_{2}}(\boldsymbol{\mathbf{\zeta}}):\mathbb{R}^{3}\mapsto\mathbb{S}^{3}}$
maps $\boldsymbol{\mathbf{\zeta}}\in\mathbb{R}^{3}$ to
$\boldsymbol{\mathbf{q}}\in\mathbb{S}^{3}$ so that it lies on the geodesic
starting point from $\boldsymbol{\mathbf{q}}_{2}$ in the direction of
$\boldsymbol{\mathbf{\zeta}}$.
$\text{Exp}^{q}(\boldsymbol{\mathbf{\zeta}})=\begin{cases}\cos(\|\boldsymbol{\mathbf{\zeta}}\|)+\sin(\|\boldsymbol{\mathbf{\zeta}}\|)\frac{\boldsymbol{\mathbf{\zeta}}}{\|\boldsymbol{\mathbf{\zeta}}\|},&\|\boldsymbol{\mathbf{\zeta}}\|\neq 0\\ 1+[0\,\,0\,\,0]^{\top},&\text{otherwise}\end{cases}$ (13)
$\boldsymbol{\mathbf{q}}_{1}=\text{Exp}^{q}(\boldsymbol{\mathbf{\zeta}})*\boldsymbol{\mathbf{q}}_{2}.$ (14)
In order to ensure discontinuity-free demonstration of an orientation profile
and avoid singularity problems, the following two assumptions should be
satisfied:
#### Assumption 1
The dot product between each pair of adjacent unit quaternions is greater than
zero, such that $\boldsymbol{\mathbf{q}}_{t}\cdot\boldsymbol{\mathbf{q}}_{t+1}>0$,
which guarantees that $\boldsymbol{\mathbf{q}}_{t}$ and
$\boldsymbol{\mathbf{q}}_{t+1}$ are in the same hemisphere. Otherwise, we flip
$\boldsymbol{\mathbf{q}}_{t+1}$ to its antipode, i.e.,
$\boldsymbol{\mathbf{q}}_{t+1}\leftarrow-\boldsymbol{\mathbf{q}}_{t+1}$, which
represents the same orientation.
#### Assumption 2
As discussed in [9, 11], the domain of $\text{Log}^{q}(\cdot)$ extends to all
$\mathbb{S}^{3}$ except $-1+[0\,\,0\,\,0]{{}^{\top}}$, while the domain of
$\text{Exp}^{q}(\cdot)$ is constrained by
$\|\boldsymbol{\mathbf{\zeta}}\|<\pi$. Restricting the domain to
$\|\boldsymbol{\mathbf{\zeta}}\|<\pi$ makes (12) and (13) bijective.
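The quaternion operations above can be sketched as follows; this is an illustrative implementation of (11)–(14) and of the hemisphere flip of Assumption 1, with quaternions stored as arrays $[\nu, u_x, u_y, u_z]$ (our convention).

```python
# Minimal sketch of the quaternion operations (11)-(14) and Assumption 1.
import numpy as np

def qmul(q1, q2):                                   # quaternion product (11)
    v1, u1, v2, u2 = q1[0], q1[1:], q2[0], q2[1:]
    return np.r_[v1*v2 - u1 @ u2, v1*u2 + v2*u1 + np.cross(u1, u2)]

def qconj(q):                                       # conjugate
    return np.r_[q[0], -q[1:]]

def qlog(q):                                        # Log^q, (12)
    u_norm = np.linalg.norm(q[1:])
    if u_norm < 1e-12:
        return np.zeros(3)
    return np.arccos(np.clip(q[0], -1.0, 1.0)) * q[1:] / u_norm

def qexp(zeta):                                     # Exp^q, (13)
    n = np.linalg.norm(zeta)
    if n < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.r_[np.cos(n), np.sin(n) * zeta / n]

def log_map(q1, q2):                                # Log^q_{q2}(q1)
    return qlog(qmul(q1, qconj(q2)))

def enforce_hemisphere(Q):                          # Assumption 1
    Q = np.array(Q, dtype=float)
    for t in range(1, len(Q)):
        if Q[t-1] @ Q[t] < 0:
            Q[t] = -Q[t]                            # flip to the antipode
    return Q
```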
## IV Proposed Approach
Multidimensional periodic variables are encoded using one DMP for each DoF and
synchronized by a common phase. This works for variables like joint or
Cartesian positions, forces, torques, etc., where every DoF of each variable
can be encoded and integrated independently from the rest and still reproduce
the desired combined behaviour of the robot. However, this is not enough to
successfully encode orientations, whose elements are interdependent variables
and subject to additional constraints (i.e., orthogonality, unit norm),
without pre- and/or post-processing the data.
In order to overcome this limitation, we propose two approaches to learn
periodic orientation movements, namely the Riemannian Metric-based Periodic
(RMP-DMP) and the Unit Quaternion-based Periodic (QP-DMP). In what follows,
we consider a periodic demonstration of length $T$ as the trajectory
$\boldsymbol{\mathbf{\mathcal{Q}}}_{q}=\\{\boldsymbol{\mathbf{q}}_{t}\\}_{t=1}^{T}$,
where $\boldsymbol{\mathbf{q}}_{t}$ are unit quaternions. We assume that the
unit quaternions are collected from a single demonstration of a rhythmic
pattern and then used to train a periodic DMP.
### IV-A Riemannian Metric-based Periodic (RMP-DMP)
Figure 2: Diagram for the proposed Riemannian metric-based projection for
learning and adapting unit quaternion trajectories. The blue data represent
unit quaternion demonstration trajectory
$\boldsymbol{\mathbf{\mathcal{Q}}}_{q}$. The red data represent the projection
of $\boldsymbol{\mathbf{\mathcal{Q}}}_{q}$ onto the tangent space
$\mathcal{T}_{\boldsymbol{\mathbf{\mu}}_{q}}\mathcal{M}$ w.r.t. the mean
$\boldsymbol{\mathbf{\mu}}_{q}$. The brown data represent the reproduced unit
quaternion trajectory after projecting the reproduced periodic DMP trajectory
from $\mathcal{T}_{\boldsymbol{\mathbf{\mu}}_{q}}\mathcal{M}$.
A manifold is a topological space that resembles the Euclidean space around
each point, i.e., it has locally the properties of the Euclidean space. A
Riemannian manifold $\mathcal{M}$ is a smooth manifold that is equipped with a
Riemannian metric. For every point $\boldsymbol{\mathbf{m}}$ of a manifold
$\mathcal{M}$, i.e., $\boldsymbol{\mathbf{m}}\in\mathcal{M}$, it is possible
to compute a tangent space $\mathcal{T}_{\boldsymbol{\mathbf{m}}}\mathcal{M}$
whose metric is flat, which allows the use of classical arithmetic tools. In
other words, one can perform typical Euclidean operations in the tangent space
and project back the result to the manifold. The overall idea is sketched in
Fig. 2.
Inspired by [17], we exploit Riemannian operators to project a unit quaternion
demonstration $\boldsymbol{\mathbf{\mathcal{Q}}}_{q}$ from
$\mathcal{M}=\mathbb{S}^{3}$ onto the tangent space
$\mathcal{T}_{\boldsymbol{\mathbf{\mu}}_{q}}\mathcal{M}$. The tangent space is
attached to an auxiliary unit quaternion $\boldsymbol{\mathbf{\mu}}_{q}$ and
each unit quaternion in $\boldsymbol{\mathbf{\mathcal{Q}}}_{q}$ is moved to
the tangent space by means of
$\text{Log}^{q}_{\boldsymbol{\mathbf{\mu}}_{q}}(\cdot)$ defined in (12). Thus,
(12) is used to project $\boldsymbol{\mathbf{\mathcal{Q}}}_{q}$ onto
$\mathcal{T}_{\boldsymbol{\mathbf{\mu}}_{q}}\mathcal{M}$ creating
$\boldsymbol{\mathbf{\mathcal{Q}}}_{\zeta}=\\{\boldsymbol{\mathbf{\zeta}}_{t}\\}_{t=1}^{T}$.
Each point in $\boldsymbol{\mathbf{\mathcal{Q}}}_{\zeta}$ belongs to the
tangent space and has the same properties as a $3$D vector, i.e.,
$\boldsymbol{\mathbf{\zeta}}_{t}\in\mathbb{R}^{3}$. The trajectory
$\boldsymbol{\mathbf{\mathcal{Q}}}_{\zeta}$ is (numerically) differentiated to
compute the 1st and 2nd time-derivatives
$\boldsymbol{\mathbf{\dot{\zeta}}}_{t},\boldsymbol{\mathbf{\ddot{\zeta}}}_{t}\in\mathbb{R}^{3}$.
The training data
$\boldsymbol{\mathbf{\zeta}}_{t},\boldsymbol{\mathbf{\dot{\zeta}}}_{t},\boldsymbol{\mathbf{\ddot{\zeta}}}_{t}\in\mathbb{R}^{3}$
are subsequently encoded using the periodic dynamic system in (1)–(2). During
the DMP execution, at each time-step we use (13) and (14) to project back each
of the generated $\boldsymbol{\mathbf{\zeta}}_{t}^{*}$ from
$\mathcal{T}_{\boldsymbol{\mathbf{\mu}}_{q}}\mathcal{M}$ onto
$\mathbb{S}^{3}$, creating a new unit quaternion
$\boldsymbol{\mathbf{q}}_{t}^{*}$.
The center of the tangent space $\boldsymbol{\mathbf{\mu}}_{q}$ is an open
parameter to determine. In [17], $\boldsymbol{\mathbf{\mu}}_{q}$ was selected
as the initial point of the demonstration. However, this choice causes
problems if the extreme orientations in the demonstrated trajectory are close
to $0$ or $\pi\,$rad: using such an extreme quaternion as
$\boldsymbol{\mathbf{\mu}}_{q}$ will produce inconsistent
$\boldsymbol{\mathbf{\zeta}}$ due to finite-precision arithmetic. This problem
may be avoided by selecting $\boldsymbol{\mathbf{\mu}}_{q}$ as the mean point
of the demonstration, since the mean lies farthest from the extremes and thus
avoids restricting the input domain, which may be necessary if the initial
point is used. In the case of unit quaternions, this mean can be estimated
using Maximum Likelihood Estimation (MLE) [22].
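The complete RMP-DMP pipeline can then be summarized by the following sketch, reusing the quaternion helpers above; `dmp_fit`, `dmp_rollout`, and `quaternion_mean` (the MLE mean of [22]) are assumed helpers, not part of the paper.

```python
# Minimal sketch of the RMP-DMP pipeline: project to the tangent space,
# learn/reproduce there, and project back. Helper callables are assumed.
import numpy as np

def rmp_dmp_reproduce(Q_demo, dmp_fit, dmp_rollout, quaternion_mean):
    mu_q = quaternion_mean(Q_demo)                  # tangent-space center
    zeta = np.array([log_map(q, mu_q) for q in Q_demo])   # S^3 -> R^3, (12)
    params = dmp_fit(zeta)                          # one periodic DMP per DoF
    zeta_new = dmp_rollout(params, len(Q_demo))     # reproduce in R^3
    return [qmul(qexp(z), mu_q) for z in zeta_new]  # back to S^3, (13)-(14)
```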
### IV-B Unit Quaternion-based Periodic (QP-DMP)
The original formulation of periodic DMPs was successfully applied to
multidimensional independent variables, with each individual DoF
$\in\mathbb{R}$. These variables can be joint or Cartesian positions, forces,
torques, etc., where every DoF of each variable can be encoded and integrated
independently from the rest and still reproduce the desired combined behaviour
of the robot. However, such a formulation is not enough to successfully encode
orientations, whose elements are interdependent variables and subject to
additional constraints (i.e., unitary norm), without pre- and/or post-
processing the data.
To properly encode unit quaternion trajectories, inspired by the work on
discrete quaternion DMPs [9, 11], we reformulate the dynamic system in (1) and
(2) as
$\boldsymbol{\mathbf{\dot{\eta}}}=\boldsymbol{\mathbf{\Omega}}\left(\alpha_{z}(\beta_{z}\,2\,\text{Log}^{q}(\boldsymbol{\mathbf{g}}*\overline{\boldsymbol{\mathbf{q}}})-\boldsymbol{\mathbf{\eta}})+\boldsymbol{\mathbf{f}}(\phi)\right),$ (15)
$\boldsymbol{\mathbf{\dot{q}}}=\frac{1}{2}\boldsymbol{\mathbf{\Omega}}\,\boldsymbol{\mathbf{\eta}}*\boldsymbol{\mathbf{q}}.$ (16)
The unit quaternion
$\boldsymbol{\mathbf{g}}\in\boldsymbol{\mathbf{\mathbb{S}}}^{3}$ in (15)
denotes the goal orientation and it can be the identity orientation
$1+[0\,\,0\,\,0]{{}^{\top}}$ or the average of the demonstration quaternion
profile, which can be estimated using MLE proposed in [22].
$2\,\text{Log}^{q}(\boldsymbol{\mathbf{g}}*\overline{\boldsymbol{\mathbf{q}}})$
defines the angular velocity $\boldsymbol{\mathbf{\omega}}$ that rotates a
unit quaternion $\boldsymbol{\mathbf{q}}$ into $\boldsymbol{\mathbf{g}}$
within a unit sampling time. $\boldsymbol{\mathbf{\Omega}}$ is the $3\times 3$
diagonal matrix of frequencies and the nonlinear forcing term is
$\boldsymbol{\mathbf{f}}(\phi)=\boldsymbol{\mathbf{A}}_{r}\frac{\sum_{i=1}^{N}\boldsymbol{\mathbf{w}}_{i}\Psi_{i}(\phi)}{\sum_{i=1}^{N}\Psi_{i}(\phi)},$ (17)
where $\boldsymbol{\mathbf{w}}_{i}$ are the weights needed to follow any given
rotation profile. We estimate the weights by
$\frac{\sum_{i=1}^{N}\boldsymbol{\mathbf{w}}_{i}\Psi_{i}(\phi)}{\sum_{i=1}^{N}\Psi_{i}(\phi)}=\boldsymbol{\mathbf{A}}_{r}^{-1}\left(\boldsymbol{\mathbf{\Omega}}^{-1}\boldsymbol{\mathbf{\dot{\omega}}}-\alpha_{z}\left(\beta_{z}\,2\,\text{Log}^{q}(\boldsymbol{\mathbf{g}}*\overline{\boldsymbol{\mathbf{q}}})-\boldsymbol{\mathbf{\omega}}\right)\right),$ (18)
where $\boldsymbol{\mathbf{A}}_{r}$ is $3\times 3$ diagonal matrix of
amplitude modulators. The integration of (16) is done similarly as in (14),
i.e.,
$\boldsymbol{\mathbf{q}}(t+\delta t)=\text{Exp}^{q}\left(\frac{\delta t}{2}\boldsymbol{\mathbf{\Omega}}\,\boldsymbol{\mathbf{\eta}}(t)\right)*\boldsymbol{\mathbf{q}}(t).$ (19)
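For illustration, one integration step of (15)–(19) can be sketched as follows, reusing the quaternion helpers above; the gain values and the explicit re-normalization are our assumptions, and `forcing` stands for the learned term $\boldsymbol{\mathbf{f}}(\phi)$ in (17).

```python
# Minimal sketch of one QP-DMP integration step, (15)-(16) with the
# geometric update (19); gains and normalization are assumptions.
import numpy as np

def qp_dmp_step(q, eta, phi, g, Omega, forcing, dt, alpha_z=48.0, beta_z=12.0):
    err = 2.0 * qlog(qmul(g, qconj(q)))             # 2 Log^q(g * conj(q))
    eta_dot = Omega @ (alpha_z*(beta_z*err - eta) + forcing(phi))     # (15)
    eta_new = eta + dt * eta_dot
    q_new = qmul(qexp(0.5 * dt * (Omega @ eta_new)), q)               # (19)
    return q_new / np.linalg.norm(q_new), eta_new   # keep the unit norm
```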
## V Validation
In order to illustrate the performance of both proposed approaches, we have
conducted several evaluations in simulation as well as in a real setup, as follows:
* •
In simulation: (_i_) comparison between both approaches while learning
periodic orientation trajectories, and (_ii_) coupling the adaptive oscillator
to the first approach (Section IV-A) to control periodic state variables.
* •
Real experiments: (_i_) wiping of uneven surface, and (_ii_) mixing of liquids
by shaking a bottle.
### V-A Simulations
Figure 3: Reproduction of a rhythmic motion using QP-DMP (left column graphs)
and RMP-DMP (right column graphs). Dashed black lines correspond to the
original motion.
Figure 3 shows the simulation results of using QP-DMP and RMP-DMP. We
generated a synthetic periodic orientation trajectory (dashed lines) and used
both DMP approaches to encode it. Both approaches were able to successfully
encode the periodic trajectory with unit quaternions (solid lines). The first
four graphs in Fig. 3 show the individual elements of quaternions, while the
last two show angular velocity and acceleration, respectively. There is a
negligible error between the demonstrated orientation motion and the encoded
motion. Also, we did not observe any significant difference between the two
DMP formulations. In both cases, we initialize the DMP state such that the
initial orientation differs from the demonstrated one (first four graphs in
Fig. 3). This is to show that the DMP effectively converges to a limit cycle
that reproduces the demonstration. In this test, we set $\alpha_{z}=48$,
$\lambda=0.994$, and
$\boldsymbol{\mathbf{A}}_{r}=\text{diag}\left([1\,1\,1]\right)$.
In order to correctly represent the orientation, it is important that the
norm of the quaternions remains unitary. To assess the ability of the two
proposed approaches to maintain this condition, we performed a simulation that
tracked the norm during the adaptation.
Figure 4: Results of coupling the proposed periodic orientation DMPs with an
adaptive oscillator. The first four graphs show the elements of quaternions as
demonstrated (dashed line) and learned (solid line). The frequency of the
system is shown in the bottom graph.
The phase and frequency of the demonstrated periodic motion have to be known
in order to learn and encode the DMP. However, in practical scenarios where
the learning is done online, it is difficult to maintain a constant pre-
planned frequency in real-time. Therefore, adaptive oscillators can be used to
dynamically estimate the phase and frequency of the demonstrated motion in
real-time. We performed a simulation to show that the periodic state variables
(phase and frequency), as estimated by the adaptive oscillator, can be
successfully coupled to the periodic quaternion DMP that operates in the
tangent space. The result of the coupling is shown in Fig. 4. We can see that
the system successfully converged to the given periodic frequency, as
estimated from the demonstrated input signal.
### V-B Experiments
We chose the surface polishing/wiping and liquid-mixing-by-shaking tasks
because they both involve periodic changes of orientation. Many objects are
not flat, and to polish or wipe them the robot should adapt the orientation
periodically in a way that keeps the polishing/wiping tool perpendicular to
the object surface. In the case of mixing liquids, we typically change the
orientation to shake the container. Therefore, the robot should periodically
change the orientation back and forth to stir the liquids inside the
container. Both
tasks are executed by a $7$ DoF Franka Emika Panda robot using Cartesian
impedance control. In the mixing task, the position is kept fixed at its
initial value.
The series of photos in top-row of Fig. 5 shows the robot performing an object
shaking task, which is important for mixing various liquids. To perform this
task the robot had to periodically change the orientation of the endpoint
around a certain point, while holding the container with liquids, in order to
produce rapid changes in acceleration (i.e., jerk). Like in the previous
tasks, we used the proposed DMP method to encode the appropriate orientation
modulating trajectories in order to successfully execute the object shaking
task. The demonstrated orientation trajectory and the encoded one are shown in
Fig. 6. We can see that the desired trajectory was reproduced well by the
robot during the experiment.
In a different scenario, bottom-row of Fig. 5 shows the robot performing
polishing/wiping of a highly curved surface. Such task requires the robot to
periodically change its end-effector orientation and position in order to keep
the contact with the surface of the object. We used the proposed DMP method to
encode the appropriate orientation modulating trajectories in order to
successfully execute the task. The demonstrated and encoded periodic
orientation trajectories are shown in Fig. 7.
Figure 5: Franka Emika Panda executes two tasks that require periodic
orientation motion with/without periodic position motion: shaking a bottle of
liquids (top) and polishing of curved objects (bottom).
Figure 6: The response of the proposed DMP for periodic orientation during the
execution of the shaking task. $\boldsymbol{\mathbf{q}}^{rob}$ is actual robot
orientation trajectory, $\boldsymbol{\mathbf{q}}^{dmp}$ is the reference
orientation trajectory reproduced by our approach, while
$\boldsymbol{\mathbf{q}}^{demo}$ is the orientation trajectory used to
demonstrate the task.
## VI Conclusions
In this paper, we proposed two new approaches to learn periodic orientation
motion, represented by unit quaternions, using DMP. In the first approach, we
exploited Riemannian metrics and operators to perform the learning online on
the tangent space of $\mathbb{S}^{3}$. In the second one, we reformulate the
periodic DMP equations to directly learn and integrate the unit quaternions
online. The performance of both approaches has been validated in simulation as
well as in real setups.
In our experiments, we did not observe that one approach significantly
outperformed the other, as they both behaved similarly. However, the RMP-DMP
has a simpler implementation, as it only requires mapping the training data
onto the tangent space and learning a classical DMP there. In other words, one
does not
have to change the underlying DMP formulation and can reuse existing
implementations. Moreover, working on the tangent space is a general approach
that can be potentially applied to other Riemannian manifolds. Extending the
periodic DMP formulation to different Riemannian manifolds is the focus of our
future research.
Figure 7: The response of the proposed DMP for periodic orientation while
executing the polishing/wiping task. $\boldsymbol{\mathbf{q}}^{rob}$,
$\boldsymbol{\mathbf{q}}^{dmp}$, and $\boldsymbol{\mathbf{q}}^{demo}$ are
defined as in Fig. 6.
$\boldsymbol{\mathbf{p}}=[p_{x}\,p_{y}\,p_{z}]{{}^{\top}}$ is the position
trajectory of the end-effector.
## References
* [1] A. J. Ijspeert, J. Nakanishi, and S. Schaal, “Learning rhythmic movements by demonstration using nonlinear oscillators,” in _IROS_ , vol. 1, Lausanne, Switzerland, 2002, pp. 958–963.
* [2] M. Saveriano, F. J. Abu-Dakka, A. Kramberger, and L. Peternel, “Dynamic movement primitives in robotics: A tutorial survey,” _arXiv preprint arXiv:2102.03861_ , 2021.
* [3] A. Gams, T. Petrič, M. Do, B. Nemec, J. Morimoto, T. Asfour, and A. Ude, “Adaptation and coaching of periodic motion primitives through physical and visual interaction,” _Rob. Auton. Syst._ , vol. 75, pp. 340–351, 2016.
* [4] L. Peternel, N. Tsagarakis, D. Caldwell, and A. Ajoudani, “Robot adaptation to human physical fatigue in human–robot co-manipulation,” _Auton. Robots_ , vol. 42, no. 5, pp. 1011–1021, 2018.
* [5] L. Peternel, T. Petrič, and J. Babič, “Robotic assembly solution by human-in-the-loop teaching method based on real-time stiffness modulation,” _Auton. Robots_ , vol. 42, no. 1, pp. 1–17, 2018.
* [6] E. Rückert and A. d’Avella, “Learned parametrized dynamic movement primitives with shared synergies for controlling robotic and musculoskeletal systems,” _Front. Comput. Neurosci._ , 2013.
* [7] L. Peternel, T. Noda, T. Petrič, A. Ude, J. Morimoto, and J. Babič, “Adaptive control of exoskeleton robots for periodic assistive behaviours based on emg feedback minimisation,” _PLOS ONE_ , 2016.
* [8] P. Pastor, L. Righetti, M. Kalakrishnan, and S. Schaal, “Online movement adaptation based on previous sensor experiences,” in _IROS_ , San Francisco, CA, USA, 2011, pp. 365–371.
* [9] F. J. Abu-Dakka, B. Nemec, J. A. Jørgensen, T. R. Savarimuthu, N. Krüger, and A. Ude, “Adaptation of manipulation skills in physical contact with the environment to reference force profiles,” _Auton. Robots_ , vol. 39, no. 2, pp. 199–217, 2015.
* [10] L. Koutras and Z. Doulgeri, “A correct formulation for the orientation dynamic movement primitives for robot control in the cartesian space,” in _Conference on robot learning_ , 2020, pp. 293–302.
* [11] A. Ude, B. Nemec, T. Petric, and J. Morimoto, “Orientation in cartesian space dynamic movement primitives,” in _ICRA_ , 2014, pp. 2997–3004.
* [12] A. J. Ijspeert, J. Nakanishi, H. Hoffmann, P. Pastor, and S. Schaal, “Dynamical Movement Primitives: Learning Attractor Models for Motor Behaviors,” _Neural Comput._ , vol. 25, no. 2, pp. 328–373, 2013.
* [13] A. Paraschos, C. Daniel, J. Peters, and G. Neumann, “Probabilistic movement primitives,” in _NeurIPS_ , 2013, pp. 2616–2624.
* [14] S. M. Khansari-Zadeh and A. Billard, “Learning stable non-linear dynamical systems with gaussian mixture models,” _IEEE Trans. Robot._ , vol. 27, no. 5, pp. 943–957, 2011.
* [15] M. Saveriano, “An energy-based approach to ensure the stability of learned dynamical systems,” in _ICRA_ , 2020, pp. 4407–4413.
* [16] S. Calinon, D. Bruno, and D. G. Caldwell, “A task-parameterized probabilistic model with minimal intervention control,” in _ICRA_ , 2014, pp. 3339–3344.
* [17] Y. Huang, F. J. Abu-Dakka, J. Silvério, and D. G. Caldwell, “Toward orientation learning and adaptation in cartesian space,” _IEEE Trans. Robot._ , vol. 37, no. 1, pp. 82–98, 2021.
* [18] P. Pastor, H. Hoffmann, T. Asfour, and S. Schaal, “Learning and generalization of motor skills by learning from demonstration,” in _ICRA_ , 2009, pp. 763–768.
* [19] J. Silvério, L. Rozo, S. Calinon, and D. G. Caldwell, “Learning bimanual end-effector poses from demonstrations using task-parameterized dynamical systems,” in _IROS_ , 2015, pp. 464–470.
* [20] M. Saveriano, F. Franzel, and D. Lee, “Merging position and orientation motion primitives,” in _ICRA_ , 2019, pp. 7041–7047.
* [21] S. Kim, R. Haschke, and H. Ritter, “Gaussian mixture model for 3-dof orientations,” _Rob. Auton. Syst._ , vol. 87, pp. 28–37, 2017.
* [22] M. J. Zeestraten, I. Havoutis, J. Silvério, S. Calinon, and D. G. Caldwell, “An approach for imitation learning on riemannian manifolds,” _IEEE Robot. Autom. Lett._ , vol. 2, no. 3, pp. 1240–1247, 2017.
* [23] A. Gams, A. Ijspeert, S. Schaal, and J. Lenarčič, “On-line learning and modulation of periodic movements with nonlinear dynamical systems,” _Auton. Robots_ , vol. 27, no. 1, pp. 3–23, 2009.
* [24] J. Ernesti, L. Righetti, M. Do, T. Asfour, and S. Schaal, “Encoding of periodic and their transient motions by a single dynamic movement primitive,” in _IEEE-RAS Humanoids_ , 2012, pp. 57–64.
* [25] T. Petrič, A. Gams, A. J. Ijspeert, and L. Žlajpah, “On-line frequency adaptation and movement imitation for rhythmic robotic tasks,” _Int. J. Robot. Res._ , vol. 30, no. 14, pp. 1775–1788, 2011.
# HDRfeat: A Feature-Rich Network
for High Dynamic Range Image Reconstruction
Lingkai Zhu, Fei Zhou, Bozhi Liu, Orcun Goksel L. Zhu and O. Goksel are with
the Department of Information Technology, Uppsala University, Sweden. F. Zhou
and B. Liu are with the College of Electronic and Information Engineering,
Shenzhen University, China. This work was performed during B. Liu’s visit at
Uppsala University, Sweden.
###### Abstract
A major challenge for high dynamic range (HDR) image reconstruction from
multi-exposed low dynamic range (LDR) images, especially with dynamic scenes,
is the extraction and merging of relevant contextual features in order to
suppress any ghosting and blurring artifacts from moving objects. To tackle
this, in this work we propose a novel network for HDR reconstruction with deep
and rich feature extraction layers, including residual attention blocks with
sequential channel and spatial attention. For the compression of the rich-
features to the HDR domain, a residual feature distillation block (RFDB) based
architecture is adopted. In contrast to earlier deep-learning methods for HDR,
the above contributions shift focus from merging/compression to feature
extraction, the added value of which we demonstrate with ablation experiments.
We present qualitative and quantitative comparisons on a public benchmark
dataset, showing that our proposed method outperforms the state-of-the-art.
###### Index Terms:
HDR, hierarchical features, attention
## I Introduction
Dynamic range is the luminance ratio between the brightest and darkest areas
in a scene. Natural scenes have a much higher luminance range than digital
cameras can represent, resulting in captured low dynamic range (LDR) images
often having over- or under-exposed regions [1]. High dynamic range (HDR) images
can represent a wide range of illuminations.
The typical way to generate an HDR image is to merge a series of LDR images
with different exposures of the same scene captured by a positionally-fixed
camera. Given the acquired LDR images, some methods focus on recovering the
camera response function [2, 3, 4, 1, 5, 6], which is the relation between the
irradiance map and the HDR-image pixel values. Others adopt a multiple
exposure fusion approach [7, 8, 9, 10], which combines the multi-exposed LDR
images to estimate the irradiance value for each pixel. However, if the input
LDR images contain large foreground motions, the merged HDR image will suffer
from ghosting and blurring artifacts due to the misalignment among the LDR
images.
Figure 1: Visual results on the test dataset [11]. The top row shows three
input LDR images, the tone-mapped HDR image, and sample LDR patches at the
same image location. The bottom row compares the same patch from HDR images
reconstructed by state-of-the-art methods and our proposed method HDRfeat. The
arrows highlight some artifacts, discussed later in our Results.
Several deep learning approaches have been proposed to address the
aforementioned problems [11, 12, 13, 14, 15, 16, 17]. For example, Kalantari
et al. [11] used optical flow to align input images and then merged the
aligned images with a four-layer fully convolutional neural network (CNN) to
generate a restored HDR image. However, such a simple neural network structure
cannot handle the artifacts and distortion caused by errors from an unreliable
optical flow. Wu et al. [12] proposed the first end-to-end deep neural network
without optical flow alignment for HDR reconstruction, which includes three
encoder networks, a merger network, and a decoder network. However, due to the
lack of an explicit alignment mechanism among LDR images, this approach
suffers from occlusion and ghosting. In their seminal work, Yan et al. [13]
were the first to use attention networks (AHDRNet) aiming to highlight or
align large motions in the features prior to the merger network, which can
then suppress any ghosting artifacts. Besides the attention network
dramatically increasing the computational cost, there still existed blurring
or artifacts in large motion scenarios as shown in Fig. 1. Nevertheless, this
work inspired several following works, which adapted their proposed
architecture; for example, see several solutions in the recent NTIRE 2021 and
2022 challenges on HDR [18, 19]. In [15], Yan et al. proposed DAHDRNet with
recursive channel and spatial attention for more effective de-ghosting. Most
deep-learning solutions to HDR (e.g. AHDRNet and its derivatives) involve two
consecutive stages overall: extraction of features from LDR images, and, based
on these, merging/compression into an HDR image. The latter stage is in
essence a function of the LDR images, e.g. how to scale and merge them
locally. It is the features from the former stage that parametrize such a
merging function, e.g. by highlighting/suppressing misaligned regions,
occlusions, etc. In other
words, one can expect the former task to be far more complex than the latter
task. Based on this motivation, we propose a network architecture (HDRfeat)
which is rich in feature extraction with attention and channel-wise
hierarchical layers, while lighter on the latter reconstruction side, compared
to most state-of-the-art solutions. Accordingly, our main contributions herein
include:
* •
a novel channel-wise hierarchical feature extraction network to efficiently
extract rich contextual information from multi-exposure images, with a channel
bottleneck structure to compress these LDR feature representations purposed
for subsequent HDR reconstruction;
* •
a residual attention block with sequential channel and spatial attention to
allow the features to focus where needed, e.g., large motions;
* •
the state-of-the-art on HDR reconstruction, demonstrated via extensive
evaluations on a public benchmark dataset.
## II Methods
Figure 2: Proposed network architecture, involving sub-networks for
hierarchical rich feature extraction with sequential attention blocks and HDR
reconstruction via residual feature distillation. The former sub-network
applies channel-wise hierarchical processing (in contrast to spatial
hierarchy/level-of-detail, e.g., as in [17]) together with two sequential
(channel-spatial) attention blocks with residual connections and shared
weights to learn merging information from non-reference exposures. The
reconstruction sub-network inspired by [13, 15] uses the merged features and
the first-level feature of the medium-exposure image to generate an HDR image
via three consecutive residual feature distillation blocks (RFDBs) inspired by
[20] in addition to a long skip-connection that helps preserve and incorporate
early features closer to the original input image.
Let $\mathcal{I}=\\{I_{1},I_{2},I_{3}\\}$ be a set of 3 LDR images denoting
short, medium, and long exposures, respectively. Following [11], we preprocess
$\mathcal{I}$ by gamma correction ($\gamma=2.2$ herein) to generate a set of
mapped images $\\{H_{1},H_{2},H_{3}\\}$, which we concatenate with the LDR
images, i.e. $X_{i}=\text{concat}(I_{i},H_{i})$, as input to our network.
Accordingly, an HDR image $H$ is restored by the proposed HDRfeat network
$h(\cdot)$ as
$H=h(X_{1},X_{2},X_{3};\theta)$ (1)
where $\theta$ are the network parameters. We train this network for visual
acuity using a loss on tone-mapped images [11] as
$\displaystyle\mathcal{L}=\|\mathcal{T}(H)-\mathcal{T}(\hat{H})\|_{1}$ (2)
where $H$ and $\hat{H}$ are the predicted and groundtruth HDR images, and
$\mathcal{T}(H)=\frac{\log(1+\mu H)}{\log(1+\mu)}$ is the $\mu$-law tone
mapping with $H$ scaled within $[0,1]$ and $\mu$ = 5000 in this work.
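For concreteness, a minimal PyTorch sketch of the tone-mapped loss (2) could look as follows; the clamping to $[0,1]$ reflects the stated scaling and is our assumption.

```python
# Minimal PyTorch sketch of the mu-law tone-mapped L1 loss (2); illustrative.
import torch

def mu_law(x, mu=5000.0):
    return torch.log1p(mu * x) / torch.log1p(torch.tensor(mu))

def hdr_loss(pred, target):
    # clamp to [0, 1] per the stated scaling of H (our assumption)
    return torch.mean(torch.abs(mu_law(pred.clamp(0, 1))
                                - mu_law(target.clamp(0, 1))))
```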
### II-A Channel-wise Hierarchical Feature Extraction Network
As the input LDR images contain rich contextual information, e.g. about
foreground motions and the scene background, simple layers such as in [13] may
not be sufficient to effectively extract such information. In the GAN-based
solution of [17], a spatially multi-scale deep encoder was proposed to extract
visual features from LDR images. Nevertheless, such spatial down- and
upsampling may lose the fine alignment information required in the subsequent
reconstruction stage. Furthermore, this operation is limited in enriching
features in the channel dimension, which is potentially more relevant.
Therefore, inspired by the spatially hierarchical feature extraction in the
generator of [17], we herein introduce a channel-wise hierarchical rich
feature extractor (Fig. 2) and incorporate this in a CNN solution, which is
more stable to train compared to GANs. For channel-wise hierarchy, we extract
features at the same scale with expanding channel sizes, which are then merged
at bottlenecks for channel dimensions to create summary features for the
reconstruction.
Architecturally, each depth of channel-wise feature extractor is composed of
three shared convolution operators $E$ (3$\times$3 Conv with zero-padding of 1
pixel), extracting different levels of information denoted here as
$F_{i,j}(i,j\in\\{1,2,3\\})$ with $i$ the hierarchical depth and $j$ the LDR
index, i.e.:
$F_{1,j}=E_{1,j}(X_{j})$ (3)
$F_{2,j}=E_{2,j}(F_{1,j})$ (4)
$F_{3,j}=E_{3,j}(F_{2,j})$ (5)
with respective dimensions of 64$\times$128$\times$128,
128$\times$128$\times$128, and 196$\times$128$\times$128.
In the spirit of several works that demonstrated the value of attention in
HDR, we also adopt this strategy. To that end, we introduce convolutional
block attention modules (CBAM) with residual attention for the first time in
HDR reconstruction, as described in detail in the following subsection.
Ultimately, our attention residual blocks $A_{1}$ and $A_{3}$ help introduce
attention from the medium-exposure features $F_{1,2}$ as reference, into the
short- and long-exposure features $F_{1,1}$ and $F_{1,3}$, generating the
feature maps
$a_{i}=A_{i}(F_{1,i},F_{1,2}),\qquad i\in\\{1,3\\}.$ (6)
The rich contextual information (denoted by the red lines in Fig. 2) are next
compressed via sequential concatenation and (3$\times$3) convolutions, to
obtain summary feature representations $F_{j}$ at each hierarchical depth $j$
as follows:
$F_{3}=\text{Conv}\big(\text{Concat}(F_{3,1},F_{3,2},F_{3,3})\big)$ (7)
$F_{2}=\text{Conv}\big(\text{Concat}(F_{2,1},F_{2,2},F_{2,3},F_{3})\big)$ (8)
$F_{1}=\text{Conv}\big(\text{Concat}(a_{1},F_{1,2},a_{3},F_{2})\big)$ (9)
all with dimensions 64$\times$128$\times$128.
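A minimal PyTorch sketch of the extractor defined by (3)–(9) is given below; `attention_block` stands for the residual attention module of Sec. II-B (shared between $A_1$ and $A_3$), and layer choices beyond the stated 3$\times$3 convolutions and channel sizes, as well as all identifiers, are our assumptions.

```python
# Minimal PyTorch sketch of the channel-wise hierarchical extractor (3)-(9).
import torch
import torch.nn as nn

class RichFeatureExtractor(nn.Module):
    def __init__(self, attention_block, in_ch=6):
        super().__init__()
        self.E1 = nn.Conv2d(in_ch, 64, 3, padding=1)        # (3)
        self.E2 = nn.Conv2d(64, 128, 3, padding=1)          # (4)
        self.E3 = nn.Conv2d(128, 196, 3, padding=1)         # (5)
        self.C3 = nn.Conv2d(3*196, 64, 3, padding=1)        # bottleneck (7)
        self.C2 = nn.Conv2d(3*128 + 64, 64, 3, padding=1)   # bottleneck (8)
        self.C1 = nn.Conv2d(4*64, 64, 3, padding=1)         # bottleneck (9)
        self.att = attention_block                          # shared A_1 = A_3

    def forward(self, X1, X2, X3):
        F1 = [self.E1(X) for X in (X1, X2, X3)]
        F2 = [self.E2(f) for f in F1]
        F3 = [self.E3(f) for f in F2]
        a1 = self.att(F1[0], F1[1])                         # (6), i = 1
        a3 = self.att(F1[2], F1[1])                         # (6), i = 3
        s3 = self.C3(torch.cat(F3, dim=1))                  # (7)
        s2 = self.C2(torch.cat(F2 + [s3], dim=1))           # (8)
        s1 = self.C1(torch.cat([a1, F1[1], a3, s2], dim=1)) # (9)
        return s1, F1[1]   # merged features + reference first-level feature
```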
### II-B Residual Attention Block
Attention mechanisms have been shown to be beneficial in several deep learning
tasks, such as object detection [21] and super-resolution [22], as well as HDR
reconstruction [13, 15]. Although spatial and channel attention can both be
instrumental and can help reduce ghosting artifacts [13, 15], their
introduction using traditional parallel layer branching, as in [13, 15],
requires high computational resources [23]. Furthermore, the multiplication of
such multiple attention branches makes the gradients generally unstable to
back-propagate effectively during training. To address these issues, we adopt a
sequential attention mechanism, inspired by Convolutional Block Attention
Module (CBAM) [21], where first the global information is put in focus via
channel attention and then the local information is explored via spatial
attention. Thus, such attention mechanism operates in a global-to-local
manner. In contrast to [21], which involves a single image input (for object
detection purposes), we wish to inject attention with the help of one image
(reference) into another (target). Accordingly, we first concatenate both
image features as our block input, and after the addition of attention, we
bring back the target image as a residual input, such that the attention will
only need to robustly learn the difference from the target (see Fig. 3).
Figure 3: Our attention module brings in attention from the medium-exposed
(reference) image features $F_{\mathrm{R}}$ into the short- or long-exposed
image (target) feature $F_{\mathrm{T}}$, by first fusing them (concatenation +
1$\times$1 convolution) into $F_{\mathrm{F}}$, then finding channel and
spatial attention sequentially, then applying this on the target features, and
finally adding target features as a residual input.
For a formal definition, let the medium-exposed image (reference) features
$F_{1,2}$ be denoted as $F_{\mathrm{R}}$, and the short- or long-exposed image
(target) features, $F_{1,1}$ or $F_{1,3}$, as $F_{\mathrm{T}}$. We then find
the final attention applied features $M\in[0,1]$, with the same dimension as
$F_{R}$ and $F_{T}$, as follows:
$M=a(F_{\mathrm{R}},F_{\mathrm{T}})$ (10)
$M=F_{\mathrm{T}}+F_{\mathrm{T}}\otimes F_{\mathrm{S}}$ (11)
$F_{\mathrm{S}}=F_{\mathrm{C}}\otimes m_{\mathrm{S}}(F_{\mathrm{C}})$ (12)
$F_{\mathrm{C}}=F_{\mathrm{F}}\otimes m_{\mathrm{C}}(F_{\mathrm{F}})$ (13)
$F_{\mathrm{F}}=\text{Conv}(\text{Concat}(F_{\mathrm{R}},F_{\mathrm{T}}))$ (14)
where the operator $\otimes$ denotes the element-wise multiplication;
$F_{\mathrm{F}}$, $F_{\mathrm{C}}$, and $F_{\mathrm{S}}$ denote features for
fused features, spatial attention, and channel attention; Conv is a 1$\times$1
convolution; and $m_{\mathrm{C}}$ and $m_{\mathrm{S}}$ are core attention
modules with small light-weight CNNs for low computation cost. Channel
attention $m_{\mathrm{C}}$ of dimension 64 first calculates the average-pooled
feature and max-pooled feature separately, then forwards them to a multilayer
perceptron (MLP) with one hidden layer. Next, the two output features of MLP
are merged via element-wise summation and gated by a sigmoid function. Spatial
attention $m_{\mathrm{S}}$ of dimension 128$\times$128 first calculates the
average-pooled feature and max-pooled feature separately, which then will be
concatenated, forwarded to a 7$\times$7 Conv layer, and gated by the sigmoid
function.
Intuitively, max-pooling extracts the most salient features, i.e., those with
large foreground motions, while average-pooling extracts the global features.
By combining both, our attention module is able to concentrate on features
with large object movement within the global scenery.
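A minimal PyTorch sketch of this residual sequential attention block, following (10)–(14) and CBAM [21], is given below; the MLP reduction ratio and all identifiers are our assumptions.

```python
# Minimal PyTorch sketch of the residual sequential attention block (10)-(14).
import torch
import torch.nn as nn

class ResidualSequentialAttention(nn.Module):
    def __init__(self, ch=64, reduction=16):
        super().__init__()
        self.fuse = nn.Conv2d(2*ch, ch, 1)                  # (14)
        self.mlp = nn.Sequential(                           # m_C, shared MLP
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(),
            nn.Conv2d(ch // reduction, ch, 1))
        self.spatial = nn.Conv2d(2, 1, 7, padding=3)        # m_S, 7x7 conv

    def forward(self, F_T, F_R):
        F_F = self.fuse(torch.cat([F_R, F_T], dim=1))       # (14)
        # channel attention: avg- and max-pooled features via the shared MLP
        c = torch.sigmoid(self.mlp(F_F.mean((2, 3), keepdim=True))
                          + self.mlp(F_F.amax((2, 3), keepdim=True)))
        F_C = F_F * c                                       # (13)
        # spatial attention on the channel-refined features
        s = torch.sigmoid(self.spatial(torch.cat(
            [F_C.mean(1, keepdim=True),
             F_C.amax(1, keepdim=True)], dim=1)))
        F_S = F_C * s                                       # (12)
        return F_T + F_T * F_S                              # residual, (11)
```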
### II-C HDR Reconstruction Network
We adapt the HDR reconstruction architecture from [13, 15]. Since, given
suitable features as input, this latter reconstruction processing stage should
be intuitively far less complex compared to the former feature extraction, we
propose the replacement of the original cumbersome dilated residual dense
blocks (DRDBs) with a more light-weight structure. To that end, we adopt
residual feature distillation blocks (RFDBs) proposed in [20] for image super-
resolution. We extend RFDB by incorporating dilated convolutions in order to
capture larger context for our purposes of HDR reconstruction. Accordingly,
our RFDB adaptation consists of 2-dilated convolution layers, shallow residual
blocks and enhanced spatial attention (ESA) blocks. In our preliminary
experiments, we observed this reconstruction architecture to have performance
comparable to DRDB, but with much smaller number of parameters, which we then
use to invest in our rich-feature extraction network. Since training such
networks with reasonable batch sizes takes a large part of typical GPU memory,
designing and investing parameters (degrees-of-freedom) within an overall
network architecture in line with the particular problem structure is
therefore fundamental in preventing overfitting and making best use of
available resources. Accordingly, our proposed shift of network complexity
from reconstruction to feature extraction is a major contribution of this
work, and it is a main component that makes HDRfeat competitive and superior
to state-of-the-art.
## III Experiments and Results
### III-A Experimental Setting and Details
We train our proposed model on Kalantari’s HDR dataset [11], which contains 74
images for training and 15 images for evaluation, all with corresponding LDR
and HDR images (of resolution 1500$\times$1000) as groundtruth. Our proposed
method was implemented in PyTorch and trained on an RTX 3090 GPU for 16 000
epochs. For training, instead of feeding entire images into the network, which
would require an unattainable amount of memory, we randomly crop images into 256
$\times$ 256 patches. We apply data augmentation (flips and rotations) during
training. We use the Adam optimizer [24], with a learning rate initialized to
$10^{-4}$ for all layers and decreased to $10^{-5}$ after 12 000 epochs. All
convolution layer weights are initialized with the Kaiming method [25]. For
evaluation, we compute the PSNR and SSIM scores between the restored HDR image
and the groundtruth HDR image in the linear domain (PSNR-L, SSIM-L) as well as
after tone mapping using $\mu$-law (PSNR-T, SSIM-T). We also compute the HDR-
VDP-2 score [26].
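For reference, PSNR-L and PSNR-T as described above can be computed with the following sketch (SSIM and HDR-VDP-2 require dedicated implementations); all names are illustrative.

```python
# Minimal sketch of linear-domain and tone-mapped PSNR; illustrative only.
import numpy as np

def psnr(x, y, peak=1.0):
    return 10.0 * np.log10(peak**2 / np.mean((x - y)**2))

def mu_law_np(x, mu=5000.0):
    return np.log1p(mu * x) / np.log1p(mu)

def psnr_l_t(pred, gt):                 # (PSNR-L, PSNR-T)
    return psnr(pred, gt), psnr(mu_law_np(pred), mu_law_np(gt))
```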
### III-B Comparison with state-of-the-art
We compare our proposed method with state-of-the-art approaches, including a
patch-based technique [8], an optical-flow based method with a CNN merger
[11], as well as four other well-known deep-learning based methods [12, 15,
13, 17]. All methods are reproduced using the codes provided by the authors,
except for the results of [15] reported directly from their paper. The
comparison is presented in Table I.
TABLE I: Quantitative comparison of our method on the test set [11].
Method | PSNR-T | PSNR-L | SSIM-T | SSIM-L | HDR-VDP-2
---|---|---|---|---|---
Sen et al. [8] | 40.95 | 38.27 | 0.9858 | 0.9762 | 61.72
Kalantari et al. [11] | 42.67 | 41.21 | 0.9889 | 0.9829 | 65.01
Wu et al. [12] | 42.70 | 41.13 | 0.9910 | 0.9889 | 66.20
Yan et al. [13] | 43.62 | 41.03 | 0.9919 | 0.9887 | 65.79
Yan et al. [15] | 43.84 | 41.31 | - | - | -
Niu et al. [17] | 43.92 | 41.57 | 0.9925 | 0.9898 | 65.81
HDRfeat (ours) | 44.11 | 41.79 | 0.9931 | 0.9912 | 66.74
The results show that our method outperforms the state-of-the-art techniques
on all metrics. In general, our method is seen to produce high-quality output
with fewer artifacts in saturated and motion-involved (occluded) image regions.
Fig. 1 shows a visual comparison of the tone mapped HDR images, where a
saturated image region is shown as zoomed-in: As shown with the arrows in the
images, [8] and [11] show superfluous contrast, and [12], [13], and [17] show
quantization artifacts, while our results are superior visually. Note that no
visual comparison could be included from [15] since the code was not available
at the time of this work. Fig. 4 shows an example, where the zoomed in
location shows the car door that is occluded by the moving arm in one
exposure: As seen, our proposed method has minimal to no artifacts, whereas
[8] and [11] show irrelevant content; [12] and [13] show artifactual
boundaries from occluding arm; and [17] shows some leakage.
Figure 4: Visual comparison on [11]. The arrows highlight some artifacts in the generated HDR images.
Figure 5: Architectures in our ablation study of the feature extraction network, with varying depths and with/without attention modules.
### III-C Ablation study
We investigate the components of our proposed hierarchical rich feature
extraction by ablating the attention architecture as well as the hierarchical
depth (richness) of the feature extraction. We accordingly compare our
proposed HDRfeat with ablated variants having different feature extraction
depths and enabled/disabled attention modules, as shown in Fig. 5. In
addition, we compare our sequential arrangement of spatial and channel
attention to its parallel counterpart [21], within the same HDRfeat structure
as in Fig. 5(f). Results are tabulated in Table II.
TABLE II: Ablation study, where Depth indicates the hierarchical feature extraction depth as an indicator for the richness of features, and Att indicates whether attention is applied (+) or not (–); +∗ indicates the use of parallel attention.
Depth | Att | PSNR-T | PSNR-L | SSIM-T | SSIM-L | HDR-VDP-2
---|---|---|---|---|---|---
1 | – | 43.58 | 41.26 | 0.9924 | 0.9903 | 65.54
1 | + | 43.44 | 40.94 | 0.9924 | 0.9902 | 65.08
2 | – | 43.38 | 41.24 | 0.9926 | 0.9905 | 65.54
2 | + | 43.78 | 41.61 | 0.9926 | 0.9902 | 66.38
3 | – | 43.52 | 41.62 | 0.9928 | 0.9906 | 66.79
3 | + | 44.11 | 41.79 | 0.9931 | 0.9912 | 66.74
3 | +∗ | 43.86 | 41.33 | 0.9928 | 0.9903 | 65.72
As seen, both the attention modules and the hierarchical rich-feature
extraction contribute to the results, with their combination as proposed
achieving the best results for most (4 out of 5) metrics. We also show that
our adapted sequential channel and spatial attention structure is superior to
its conventional parallel implementation.
## IV Conclusion
In this work, we propose a novel feature-rich network (HDRfeat) for HDR
reconstruction, comprising a channel-wise feature extraction network that
extracts and bottlenecks rich contextual information from multi-exposure
images into the HDR domain; a residual attention block with sequential channel
and spatial attention to concentrate on features with large motions; and an
HDR reconstruction network with dilated residual feature distillation blocks
(RFDB) as the backbone. We perform qualitative and quantitative comparisons on
the public benchmark dataset, showing that the proposed method outperforms the
state-of-the-art methods.
## References
* [1] E. Reinhard, W. Heidrich, P. Debevec, S. Pattanaik, G. Ward, and K. Myszkowski, _High dynamic range imaging: acquisition, display, and image-based lighting_. Morgan Kaufmann, 2010.
* [2] P. E. Debevec and J. Malik, “Recovering high dynamic range radiance maps from photographs,” in _ACM SIGGRAPH Classes_ , 2008, pp. 1–10.
* [3] S. Mann and R. Picard, “On being ‘undigital’ with digital cameras: Extending dynamic range by combining differently exposed pictures,” MIT Media Lab Perceptual Computing Section, Boston, MA, Tech. Rep. 323, 1994.
* [4] M. Granados, B. Ajdin, M. Wand, C. Theobalt, H.-P. Seidel, and H. P. Lensch, “Optimal hdr reconstruction with linear digital cameras,” in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2010, pp. 215–222.
* [5] Q. Yan, J. Sun, H. Li, Y. Zhu, and Y. Zhang, “High dynamic range imaging by sparse representation,” _Neurocomputing_ , vol. 269, pp. 160–169, 2017.
* [6] Y. Salih, A. S. Malik, N. Saad _et al._ , “Tone mapping of hdr images: A review,” in _IEEE International Conference on Intelligent and Advanced Systems (ICIAS)_ , vol. 1, 2012, pp. 368–373.
* [7] T. Jinno and M. Okuda, “Multiple exposure fusion for high dynamic range image acquisition,” _IEEE Trans on Image Processing_ , vol. 21, no. 1, pp. 358–365, 2012.
* [8] P. Sen, N. K. Kalantari, M. Yaesoubi, S. Darabi, D. B. Goldman, and E. Shechtman, “Robust Patch-Based HDR Reconstruction of Dynamic Scenes,” _ACM Trans on Graphics (TOG) (Procs of SIGGRAPH Asia)_ , vol. 31, no. 6, pp. 203:1–11, 2012.
* [9] F. Xu, J. Liu, Y. Song, H. Sun, and X. Wang, “Multi-exposure image fusion techniques: A comprehensive review,” _Remote Sensing_ , vol. 14, no. 3, p. 771, 2022.
* [10] X. Zhang, “Benchmarking and comparing multi-exposure image fusion algorithms,” _Information Fusion_ , vol. 74, pp. 111–131, 2021.
* [11] N. K. Kalantari, R. Ramamoorthi _et al._ , “Deep high dynamic range imaging of dynamic scenes.” _ACM Trans on Graphics (TOG)_ , vol. 36, no. 4, pp. 144–1, 2017.
* [12] S. Wu, J. Xu, Y.-W. Tai, and C.-K. Tang, “Deep high dynamic range imaging with large foreground motions,” in _European Conference on Computer Vision (ECCV)_ , 2018, pp. 117–132.
* [13] Q. Yan, D. Gong, Q. Shi, A. v. d. Hengel, C. Shen, I. Reid, and Y. Zhang, “Attention-guided network for ghost-free high dynamic range imaging,” in _IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2019, pp. 1751–1760.
* [14] Q. Yan, D. Gong, P. Zhang, Q. Shi, J. Sun, I. Reid, and Y. Zhang, “Multi-scale dense networks for deep high dynamic range imaging,” in _2019 IEEE Winter Conference on Applications of Computer Vision (WACV)_ , 2019, pp. 41–50.
* [15] Q. Yan, D. Gong, J. Q. Shi, A. van den Hengel, C. Shen, I. Reid, and Y. Zhang, “Dual-attention-guided network for ghost-free high dynamic range imaging,” _International Journal of Computer Vision_ , vol. 130, no. 1, pp. 76–94, 2022.
* [16] J. Wang, X. Li, and H. Liu, “Exposure fusion using a relative generative adversarial network,” _IEICE Trans on Information and Systems_ , vol. 104, no. 7, pp. 1017–1027, 2021.
* [17] Y. Niu, J. Wu, W. Liu, W. Guo, and R. W. H. Lau, “Hdr-gan: Hdr image reconstruction from multi-exposed ldr images with large motions,” _IEEE Trans on Image Processing_ , vol. 30, pp. 3885–3896, 2021.
* [18] E. Pérez-Pellitero, S. Catley-Chandar, A. Leonardis, and R. Timofte, “NTIRE 2021 challenge on high dynamic range imaging: Dataset, methods and results,” _CoRR arXiv_ , no. 2106.01439, 2021.
* [19] E. Pérez-Pellitero, S. Catley-Chandar, R. Shaw, A. Leonardis, R. Timofte, Z. Zhang, C. Liu, Y. Peng, Y. Lin, G. Yu _et al._ , “Ntire 2022 challenge on high dynamic range imaging: Methods and results,” in _IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 1009–1023.
* [20] J. Liu, J. Tang, and G. Wu, “Residual feature distillation network for lightweight image super-resolution,” in _ECCV Workshops_ , 2020, pp. 41–55.
* [21] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, “Cbam: Convolutional block attention module,” in _European Conference on Computer Vision (ECCV)_ , 2018\.
* [22] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, “Residual dense network for image super-resolution,” in _2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 2472–2481.
* [23] L. Wang and K.-J. Yoon, “Deep learning for hdr imaging: State-of-the-art and future trends,” _IEEE Trans on Pattern Analysis and Machine Intelligence_ , 2021.
* [24] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” _CoRR arXiv_ , no. 1412.6980, 2015.
* [25] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in _IEEE Int Conference on Computer Vision (ICCV)_ , 2015, pp. 1026–1034.
* [26] R. Mantiuk, K. J. Kim, A. G. Rempel, and W. Heidrich, “Hdr-vdp-2: A calibrated visual metric for visibility and quality predictions in all luminance conditions,” _ACM Trans on Graphics (TOG)_ , vol. 30, no. 4, pp. 1–14, 2011.
# Open-source Frame Semantic Parsing
David Chanin
Department of Computer Science
University College London
<EMAIL_ADDRESS>
###### Abstract
While the state-of-the-art for frame semantic parsing has progressed
dramatically in recent years, it is still difficult for end-users to apply
state-of-the-art models in practice. To address this, we present Frame
Semantic Transformer, an open-source Python library which achieves near state-
of-the-art performance on FrameNet 1.7, while focusing on ease-of-use. We use
a T5 model fine-tuned on PropBank and FrameNet exemplars as a base, and
improve performance by using FrameNet lexical units to provide hints to T5 at
inference time. We enhance robustness to real-world data by using textual data
augmentations during training.
## 1 Introduction
Frame semantic parsing (Gildea and Jurafsky, 2002) is a natural language
understanding (NLU) task involving finding structured semantic frames and
their arguments from natural language text as formalized by the FrameNet
project (Baker et al., 1998). Frame semantics has proved useful in
understanding user intent from text, finding use in modern voice assistants
(Chen et al., 2019), dialog systems (Chen et al., 2013), and even text
analysis (Zhao et al., 2023).
A semantic frame in FrameNet describes an event, relation, or situation and
its participants. When a frame occurs in a sentence, there is typically a
"trigger" word in the sentence which is said to evoke the frame. In addition,
a frame contains a list of arguments known as frame elements which describe
the semantic roles that pertain to the frame. A sample sentence parsed for
frame and frame elements is shown in Figure 1.
FrameNet provides a list of lexical units (LUs) for each frame, which are word
senses that may evoke the frame when they occur in a sentence. For instance,
the frame "Attack" has lexical units "ambush.n", "ambush.v", "assault.v",
"attack.v", "attack.n", "bomb.v", and many others. These lexical units are not
exhaustive, however: a frame trigger need not be one of the lexical units
listed for the frame, but the lexical units provide a strong hint that the
frame may be present.
Figure 1: The sentence "Jaclyn gave the box to Mark" annotated with frame
trigger and frame elements for the "Giving" frame.
In this paper we treat frame semantic parsing as a sequence-to-sequence text
generation task, and fine-tune a T5 transformer model (Raffel et al., 2020) as
the base model. We increase performance by pretraining on related datasets,
providing in-context prompt hints to T5 based on FrameNet data, and using
textual data augmentations (Ma, 2019) to increase training data. More details
on our implementation are given in Section 2.
We evaluate the performance of Frame Semantic Transformer using the same
dataset splits as Open Sesame (Swayamdipta et al., 2017), the previous state-
of-the-art open-source parser. Frame Semantic Transformer exceeds the
performance of Open Sesame, and achieves near state-of-the-art performance
compared with modern frame semantic parsers that do not publish models
(Kalyanpur et al., 2020, Zheng et al., 2022).
The performance of Frame Semantic Transformer does not, however, come at the
cost of usability. The library can be installed via PyPI (pyp, ) with the
following command:
pip install frame-semantic-transformer
Performing frame semantic parsing on a sentence can be achieved with a few
lines of code. We leverage the Huggingface (Wolf et al., 2020) model hub and
NLTK (Bird et al., 2009) corpora so that all required models and datasets are
automatically downloaded when frame semantic transformer is first run,
requiring no further action by the user aside from installing the library from
PyPI. Basic usage is shown in Figure 2. Pretrained models are provided based
on T5-base and T5-small, with T5-base being the default model used. The code
for Frame Semantic Transformer is available on Github
111https://github.com/chanind/frame-semantic-transformer.
from frame_semantic_transformer import FrameSemanticTransformer
frame_transformer = FrameSemanticTransformer()
sentence = "The hallway smelt of boiled cabbage and old rag mats."
results = frame_transformer.detect_frames(sentence)
Figure 2: Performing frame semantic parsing requires only a few lines of code
using Frame Semantic Transformer. All needed pretrained models and datasets
are downloaded automatically.
## 2 Method
Typically, frame semantic parsing approaches treat the task as a set of 3
subtasks which are performed in series (Kalyanpur et al., 2020). First, in the
trigger identification subtask, all trigger locations are identified in the
text where a frame occurs. Second, in the frame classification subtask, each
identified trigger location is classified with a FrameNet frame. Finally, in
the arguments extraction subtask, frame elements and their arguments are
identified in the text.
We treat each of the 3 subtasks as sequence-to-sequence tasks performed in
series by a fine-tuned T5 model. Each of these tasks follows the format "<task
name> <task-specific hints> : <text>".
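As an illustration of this format, the task inputs can be assembled as plain
strings; the helper below is a hypothetical sketch of the format, not the
library's internal API.
def build_task_input(task_name, hints, text):
    # Serialize a subtask as "<task name> <task-specific hints>: <text>"
    hint_str = (" " + " ".join(hints)) if hints else ""
    return f"{task_name}{hint_str}: {text}"

# e.g. the trigger identification and frame classification inputs:
trigger_input = build_task_input("TRIGGER", [], "It was no use trying the lift.")
frame_input = build_task_input("FRAME", ["Attempt", "Attempt_means"],
                               "It was no use *trying the lift.")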
### 2.1 Trigger identification
Given a sentence, the trigger identification task identifies locations in the
sentence text which could be frame triggers. This task is conceptually the
simplest of the three - it has no task-specific hints, and the goal of the
task is to insert markers in the text to indicate frame triggers. For this
task, we use the asterisk character * to indicate a frame trigger. This is
shown in Figure 3.
input: | "TRIGGER: It was no use trying the lift."
---|---
output: | "It was no use *trying the *lift."
Figure 3: Trigger identification input and expected output for the text "It
was no use trying the lift." Trigger locations are indicated by * in the
output.
### 2.2 Frame classification
For each trigger identified in the trigger identification step, a frame
classification task is created to classify the frame that the trigger refers
to. To make this task easier for the model, we use the LUs from each frame to
build a list of possible frames this trigger could refer to.
We normalize trigger words and frame LUs using a similar process. First, we
lowercase the word and stem and lemmatize it using multiple stemmers and
lemmatizers. Each stemmer and lemmatizer may treat different English words
slightly differently, so multiple are used to increase the chance that the
normalized trigger word will match a normalized LU from FrameNet.
Specifically, we use an English Snowball stemmer (Porter, 1980), a Lancaster
stemmer (Paice, 1990), a Porter stemmer (Porter, 1980), and a lemmatizer based
on WordNet (Miller, 1995), all from NLTK to generate a set of up to 4 possible
normalized versions of the trigger word.
For LUs, we also remove the part of speech (POS) tag. T5 is a powerful
transformer model and likely does not need to be provided with POS info,
although this is something that could be explored in future work. In addition,
for the trigger word, we also generate bigrams for the trigger and the words
on either side of the trigger, and normalize the bigrams in the same way. Some
LUs contain multiple words, so generating bigrams increases the chance that
after this normalization process the matching frame is found and can be added
as a hint.
For instance, for the trigger word "trying" from Figure 3, this word has the
bigrams "use_trying" and "trying_to", and the monogram "trying". After
normalization, these become the lookup set:
{ us_tri, us_try, us_trying, use_tri, use_try, use_trying, tri, try, trying,
tri_to, try_to, trying_to }
This lookup set overlaps with the normalized LUs for the following frames:
{ Attempt, Attempt_means, Operational_testing, Tasting, Trial, Try_defendant,
Trying_out, }
Finally, these overlapping frames are provided as part of the prompt for the
frame classification task as shown below:
input: | ‘‘FRAME Attempt Attempt_means Operational_testing Tasting Trial
---|---
| Try_defendant Trying_out: It was no use *trying the lift.’’
output: | ‘‘Attempt_means’’
Likewise, a frame classification task is generated for the trigger “lift" as
well, as shown below:
input: | ‘‘FRAME Body_movement Building_subparts Cause_motion Cause_to_end
---|---
| Connecting_architecture Theft: It was no use trying the *lift.’’
output: | ‘‘Connecting_architecture’’
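A minimal sketch of the normalization described above, using the NLTK stemmers
and lemmatizer named earlier, is shown below; the helper names are
hypothetical and the real implementation may differ in detail.
from itertools import product
from nltk.stem import (LancasterStemmer, PorterStemmer,
                       SnowballStemmer, WordNetLemmatizer)

stemmers = [SnowballStemmer("english"), LancasterStemmer(), PorterStemmer()]
lemmatizer = WordNetLemmatizer()  # requires the NLTK wordnet corpus

def normalize_word(word):
    # Up to 4 normalized forms: one per stemmer plus the lemmatizer
    word = word.lower()
    forms = {stemmer.stem(word) for stemmer in stemmers}
    forms.add(lemmatizer.lemmatize(word))
    return forms

def lookup_forms(ngram):
    # Normalize each token of a monogram/bigram such as "use_trying" and
    # rejoin with "_", producing entries like "us_tri" and "use_trying"
    per_token = [normalize_word(token) for token in ngram.split("_")]
    return {"_".join(combo) for combo in product(*per_token)}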
### 2.3 Argument extraction
After a frame is identified, the next task is to identify the frame elements and arguments for that frame in the text. An argument extraction task is generated for every frame classified. We include all available frame element names from FrameNet for the frame in question as part of the prompt input to make the argument extraction task easier for T5. The output is of the form ‘‘<element 1>="<arguments 1>" | <element 2>="<arguments 2>" | …’’. For instance, the arguments extraction task for the Attempt_means frame from above is shown below:
input: | ‘‘ARGS Attempt_means | Agent Means Goal Circumstances Degree Depictive Domain
---|---
| Duration Frequency Manner Outcome Particular_iteration Place Purpose
| Time: It was no use *trying the lift.’’
output: | ‘‘Means="the lift"’’
Likewise, the arguments extraction task for the Connecting_architecture frame
is shown below:
input: | ‘‘ARGS Connecting_architecture | Part Connected_locations Creator
---|---
| Descriptor Direction Goal Material Orientation Source Whole:
| It was no use trying the *lift.’’
output: | ‘‘Part="the lift"’’
### 2.4 Pretraining
The training data for FrameNet 1.7 is relatively small with under 6,000 fully
annotated sentences total, so it is common to leverage FrameNet exemplar data
as well to increase the amount of training data available. This exemplar data
includes around 100,000 sentences. Each exemplar sentence annotates only a
single frame, so the exemplar data is not suitable for generating trigger
identification tasks, but it is still a rich source of data to improve
performance on frame classification and argument extraction tasks. The
distribution of exemplar sentences is different from the distribution of
training data for FrameNet 1.7 (Kshirsagar et al., 2015), so rather than train
on exemplar data directly we instead use it for pretraining.
Another rich source of additional training data is PropBank (Kingsbury and
Palmer, 2002). PropBank is a similar frame parsing dataset to FrameNet,
although PropBank tends to focus more on verbs than FrameNet and has simpler
arguments. Still, the tasks are similar enough that pretraining on PropBank
can help the model score higher on FrameNet 1.7. Specifically, we use the
PropBank training data from OntoNotes (Weischedel et al., 2013) and the
English Web Treebank (Bies et al., 2012).
During training, we begin with a pretrained T5 model from Huggingface (Wolf et
al., 2020), and then go through two additional iterations of pretraining.
First we pretrain on PropBank data, and then on FrameNet 1.7 exemplars, before
finally training on the FrameNet 1.7 training set.
### 2.5 Data augmentation
The FrameNet 1.7 training data is well formatted and grammatically correct,
but in reality a lot of text that needs to be semantically parsed is not well
formatted and may have errors and typos. To help make our model more robust to
real world data, we also use data augmentation to expose the model to
misspellings, synonyms, and other differently formatted sentences.
The textual augmentations used are the following, leveraging the nlpaug Python
library (Ma, 2019):
* •
Synonyms: swaps out a word for a synonym from WordNet (Miller, 1995).
* •
Quotations: replaces LaTeX-style quotes with standard double quotes and vice
versa.
* •
Random misspelling: replaces characters in words with different characters at
random.
* •
Keyboard misspelling: replace characters with typos likely based on key
locations on keyboards.
* •
Uppercase and lowercase: fully uppercase or lowercase the sentence.
* •
Delete punctuation: randomly deletes punctuation characters in the sentence.
When augmenting text during training, we make sure to adjust the indices of
triggers and frame elements to match the new locations after the augmentation
is applied.
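A minimal sketch of such an augmentation pipeline using nlpaug is shown below;
the specific augmenter choices and the uniform random selection are
illustrative assumptions, not our exact configuration. The case changes and
quote swaps are plain string operations and are omitted here.
import random
import nlpaug.augmenter.char as nac
import nlpaug.augmenter.word as naw

augmenters = [
    naw.SynonymAug(aug_src="wordnet"),       # WordNet synonym swaps
    nac.RandomCharAug(action="substitute"),  # random misspellings
    nac.KeyboardAug(),                       # keyboard-distance typos
]

def augment(sentence):
    aug = random.choice(augmenters)
    out = aug.augment(sentence)
    # recent nlpaug versions return a list of augmented strings
    return out[0] if isinstance(out, list) else out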
### 2.6 Task balancing
There is a mismatch between the 3 subtasks in terms of how much training data
each task has. Frame classification and argument extraction have multiple
examples per training sentence, since a task is generated for every frame in a
sentence. Furthermore, these tasks also benefit from pretraining with FrameNet
exemplar data. Trigger identification, however, has only 1 training example
per sentence, and cannot learn from exemplar data, so there is a large
mismatch between the amount of trigger identification samples available and
the amount of frame classification and argument extraction samples.
To help address this, we sample trigger identification tasks at a 3x higher
rate than frame classification and argument extraction tasks during training
to help ensure that the trigger identification performance does not trail
behind that of the other tasks per training epoch. We also increase the data
augmentation rate for trigger identification tasks to help increase the number
of training samples available.
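One simple way to realize this balancing is to oversample trigger
identification examples when assembling each epoch; the sketch below is
illustrative rather than our exact sampler.
import random

def build_epoch(trigger_tasks, frame_tasks, args_tasks):
    # Repeat trigger identification tasks 3x to offset their smaller pool
    samples = list(frame_tasks) + list(args_tasks) + 3 * list(trigger_tasks)
    random.shuffle(samples)
    return samples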
## 3 Evaluation
FrameNet 1.7 does not include an official train / test / dev split, so we
follow the split and evaluation used by Open Sesame (Swayamdipta et al., 2017)
as this is the most popular open-source frame semantic parser on Github, and
was also the previous state-of-the-art. Other parsers also use the same split
for this reason (Kalyanpur et al., 2020, Zheng et al., 2022).
We calculate an F1 score for each of the subtasks against the dev and test sets
from Open Sesame. For trigger identification, each trigger location that is
identified correctly is considered a true positive, each location that is
missed is a false negative, and each location that is incorrectly marked is a
false positive. For frame classification, an incorrectly classified frame is
considered both a false positive and a false negative. For the argument
extraction task, each frame element that is correctly identified and labeled
is a true positive. If a frame element is missed it is a false negative. If a
frame element is marked incorrectly it is a false positive. If a frame element
is labeled, but the element is incorrectly classified or the arguments are not
labeled entirely correctly, this is considered both a false positive and a
false negative.
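From these counts, the per-subtask F1 score follows directly; a minimal sketch
(assuming nonzero counts) is:
def f1_score(tp, fp, fn):
    # Micro-averaged F1 from true/false positive and false negative counts
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)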
Model | FN 1.7 dev set | FN 1.7 test set
---|---|---
| Trigger ID | Frame ID | Args ID | Trigger ID | Frame ID | Args ID
(Peng et al., 2018) | - | - | - | - | 0.891 | -
(Zheng et al., 2022) | - | - | - | - | - | 0.756
(Kalyanpur et al., 2020) | - | - | 0.77 | - | - | 0.76
Open Sesame (Swayamdipta et al., 2017) | 0.80 | 0.90 | 0.61 | 0.73 | 0.87 | 0.61
Frame Semantic Transformer (T5-small) | 0.75 | 0.87 | 0.76 | 0.71 | 0.86 | 0.73
Frame Semantic Transformer (T5-base) | 0.78 | 0.91 | 0.78 | 0.74 | 0.89 | 0.75
Figure 4: Evaluation results comparing Frame Semantic Transformer with other
frame semantic parsers on FrameNet 1.7, where comparable data can be found.
The top section of the table contains frame semantic parsers without available
pretrained models, while the bottom section contains open-source parsers with
pretrained models. Bold indicates the best performance in each group.
We also include an ablation study showing the effects of pretraining and data
augmentation on model performance in Figure 5. Data augmentation slightly
hurts performance in argument extraction, but we consider this worthwhile for
the added robustness to messy examples that may appear in real-world data.
Lack of pretraining appears to slightly harm performance on all tasks. We did
not perform a statistical significance test on this data.
Model | Trigger ID | Frame ID | Args ID
---|---|---|---
Frame semantic transformer (T5-base) | 0.74 | 0.89 | 0.75
No data augmentation | 0.74 | 0.89 | 0.76
No pretrain | 0.72 | 0.88 | 0.74
Figure 5: Ablation study comparing performance without data augmentation and
without pretraining on the FrameNet 1.7 test set.
## 4 Related work
Recent work on frame semantic parsing has focused on incorporating more
information from FrameNet into the parsing process. (Zheng et al., 2022)
encode the frame relation graph into an embedding space during inference to
improve performance. (Su et al., 2021) encode the full text of frame and
element descriptions to aid in classification. Our work also follows in this
vein by using lexical unit data to provide hints to our model during frame
classification. However, neither (Zheng et al., 2022) nor (Su et al., 2021)
provides pretrained, open-source models.
Most similar to our work, and largely a point of inspiration, is (Kalyanpur et
al., 2020). In this work, a T5 model (Raffel et al., 2020) is fine-tuned on
the frame classification and argument extraction tasks. In a variant of their
work, the T5 decoder is replaced with a classification head for frame
classification. However, this work does not deal with trigger identification,
and does not use lexical unit hints during frame classification. Furthermore,
no code or models are open-sourced as part of this work, making it difficult
for end-users to easily make use of the model.
Previous open-source frame parsers include Open Sesame (Swayamdipta et al.,
2017) and SEMAFOR (Das et al., 2010). However, both of these projects predate
the rise of the transformer architecture, and their performance lags behind
transformer-based solutions, especially in argument extraction.
## 5 Conclusion
Frame Semantic Transformer approaches or matches state-of-the-art performance
on frame semantic parsing tasks while also being easy to use as an end-user.
We improve performance by pretraining both on FrameNet exemplars and PropBank
data. We also incorporate frame knowledge from FrameNet via lexical units and
available frame elements and pass that knowledge to T5 in-context as part of
task prompts. In addition, we add NLP data augmentations to help the model
generalize to real-world data which will likely be formatted differently than
the FrameNet 1.7 training set. At present, Frame Semantic Transformer only
provides pretrained models for English FrameNet, but we hope to support other
languages and PropBank in the future as well.
## References
* (1) Python package index - pypi. URL https://pypi.org/.
* Baker et al. (1998) C. F. Baker, C. J. Fillmore, and J. B. Lowe. The berkeley framenet project. In _COLING 1998 Volume 1: The 17th International Conference on Computational Linguistics_ , 1998.
* Bies et al. (2012) A. Bies, J. Mott, C. Warner, and S. Kulick. English web treebank dataset ldc2012t13. _Linguistic Data Consortium, Philadelphia, PA_ , 2012.
* Bird et al. (2009) S. Bird, E. Klein, and E. Loper. _Natural language processing with Python: analyzing text with the natural language toolkit_. "O’Reilly Media, Inc.", 2009.
* Chen et al. (2019) Q. Chen, Z. Zhuo, and W. Wang. Bert for joint intent classification and slot filling. _arXiv preprint arXiv:1902.10909_ , 2019.
* Chen et al. (2013) Y.-N. Chen, W. Y. Wang, and A. I. Rudnicky. Unsupervised induction and filling of semantic slots for spoken dialogue systems using frame-semantic parsing. In _2013 IEEE Workshop on Automatic Speech Recognition and Understanding_ , pages 120–125. IEEE, 2013.
* Das et al. (2010) D. Das, N. Schneider, D. Chen, and N. A. Smith. Probabilistic frame-semantic parsing. In _Human language technologies: The 2010 annual conference of the North American chapter of the association for computational linguistics_ , pages 948–956, 2010.
* Gildea and Jurafsky (2002) D. Gildea and D. Jurafsky. Automatic labeling of semantic roles. _Computational linguistics_ , 28(3):245–288, 2002\.
* Kalyanpur et al. (2020) A. Kalyanpur, O. Biran, T. Breloff, J. Chu-Carroll, A. Diertani, O. Rambow, and M. Sammons. Open-domain frame semantic parsing using transformers. _arXiv preprint arXiv:2010.10998_ , 2020.
* Kingsbury and Palmer (2002) P. R. Kingsbury and M. Palmer. From treebank to propbank. In _LREC_ , pages 1989–1993, 2002.
* Kshirsagar et al. (2015) M. Kshirsagar, S. Thomson, N. Schneider, J. G. Carbonell, N. A. Smith, and C. Dyer. Frame-semantic role labeling with heterogeneous annotations. In _Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)_ , pages 218–224, 2015.
* Ma (2019) E. Ma. Nlp augmentation. https://github.com/makcedward/nlpaug, 2019.
* Miller (1995) G. A. Miller. Wordnet: a lexical database for english. _Communications of the ACM_ , 38(11):39–41, 1995\.
* Paice (1990) C. D. Paice. Another stemmer. In _ACM Sigir Forum_ , volume 24, pages 56–61. ACM New York, NY, USA, 1990.
* Peng et al. (2018) H. Peng, S. Thomson, S. Swayamdipta, and N. A. Smith. Learning joint semantic parsers from disjoint data. _arXiv preprint arXiv:1804.05990_ , 2018.
* Porter (1980) M. F. Porter. An algorithm for suffix stripping. _Program_ , 14(3):130–137, 1980.
* Raffel et al. (2020) C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. _The Journal of Machine Learning Research_ , 21(1):5485–5551, 2020.
* Su et al. (2021) X. Su, R. Li, X. Li, J. Z. Pan, H. Zhang, Q. Chai, and X. Han. A knowledge-guided framework for frame identification. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 5230–5240, 2021.
* Swayamdipta et al. (2017) S. Swayamdipta, S. Thomson, C. Dyer, and N. A. Smith. Frame-semantic parsing with softmax-margin segmental rnns and a syntactic scaffold. _arXiv preprint arXiv:1706.09528_ , 2017.
* Weischedel et al. (2013) R. Weischedel, M. Palmer, M. Marcus, E. Hovy, S. Pradhan, L. Ramshaw, N. Xue, A. Taylor, J. Kaufman, M. Franchini, et al. Ontonotes release 5.0 ldc2013t19. _Linguistic Data Consortium, Philadelphia, PA_ , 23, 2013.
* Wolf et al. (2020) T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. L. Scao, S. Gugger, M. Drame, Q. Lhoest, and A. M. Rush. Transformers: State-of-the-art natural language processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 38–45, Online, Oct. 2020. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/2020.emnlp-demos.6.
* Zhao et al. (2023) X. Zhao, X. Walton, S. Shrestha, and A. Rios. Bike frames: Understanding the implicit portrayal of cyclists in the news. _arXiv preprint arXiv:2301.06178_ , 2023.
* Zheng et al. (2022) C. Zheng, X. Chen, R. Xu, and B. Chang. A double-graph based framework for frame semantic parsing. _arXiv preprint arXiv:2206.09158_ , 2022.
# How robust are gravitational wave predictions from cosmological phase
transitions?
Peter Athron<EMAIL_ADDRESS>Department of Physics and Institute of
Theoretical Physics, Nanjing Normal University, Nanjing, 210023, China
Lachlan Morris<EMAIL_ADDRESS>School of Physics and Astronomy,
Monash University, Melbourne, Victoria 3800, Australia Zhongxiu Xu
<EMAIL_ADDRESS>Department of Physics and Institute of Theoretical
Physics, Nanjing Normal University, Nanjing, 210023, China
###### Abstract
Gravitational wave (GW) predictions of cosmological phase transitions are
almost invariably evaluated at either the nucleation or percolation
temperature. We investigate the effect of the transition temperature choice on
GW predictions, for phase transitions with weak, intermediate and strong
supercooling. We find that the peak amplitude of the GW signal varies by a
factor of a few for weakly supercooled phase transitions, and by an order of
magnitude for strongly supercooled phase transitions. The variation in
amplitude for even weakly supercooled phase transitions can be several orders
of magnitude if one uses the mean bubble separation, while the variation is
milder if one uses the mean bubble radius instead. We also investigate the
impact of various approximations used in GW predictions. Many of these
approximations introduce at least a 10% error in the GW signal, with others
introducing an error of over an order of magnitude.
## I Introduction
We are now in an era where existing gravitational wave (GW) data can have an
impact on our understanding of physics beyond the Standard Model (BSM) of
particle physics. Very recently pulsar timing array experiments have detected
a stochastic GW background (SGWB) [1, 2, 3, 4] and find that new physics
explanations have a slight preference over less exotic sources [5]. Existing
data on GWs from the LIGO/VIRGO network [6] is also constraining well-
motivated Pati-Salam models that can lead to gauge coupling unification [7] as
well as models of the dark sector [8].
However, with this exciting progress also comes significant challenges. It is
now essential that we have reliable calculations of the GW spectra for BSM
models where we understand the uncertainties involved and the effects of
various approximations and assumptions that are commonly used. There are many
challenging calculations involved in going from a particular BSM scenario to a
predicted GW spectrum; see Ref. [9] for a review. Quantities derived from the
effective potential can strongly depend on the method used [10] and
uncertainties in the GW spectra from effective potential computations have
been investigated in Ref. [11]. Here we show that even if the effective
potential calculation was under full control, there are many other challenges
for reliable predictions of GW spectra.
Since the first direct detection of GWs [12] in 2015, there has been
substantial progress in understanding how to characterise phase transitions
and extract GW predictions. Here we mention a few important points. Sound
waves are expected to be the largest source of GWs following Ref. [13], which
showed that the sound wave source lasts long after the bubbles have merged.
However, more recently it has been shown that in many cases the lifetime is
nonetheless significantly shorter than the Hubble time [14, 15] and
suppression factors were introduced [16, 17] to account for the finite
lifetime of the source. These suppression factors were subsequently refined to
address issues stemming from the derivation of the Hubble time as the maximum
lifetime of the source [18]. Furthermore, the modelling of GWs from sound
waves has improved considerably from simulations [19, 20] and the construction
of the sound shell model [21] and its further development [22, 23, 24].
Significant improvements have also been made in determining the kinetic energy
fraction that is available to source GWs. New parameterisations have been
developed that go beyond simplified models such as the bag model, first for
the case where bubbles expand as supersonic detonations [25] and later
generalised to cover subsonic deflagrations and hybrids [26]. These advances
have both improved predictions and raised questions about our previous and
current understanding of how sensitive GW experiments can be to first-order
phase transitions.
In particular, strongly supercooled phase transitions present significant
challenges for calculations and may lead to erroneous explanations of GW
signals [27]. We therefore treat the extent of supercooling as an important
parameter when considering the uncertainties and compare scenarios with weak,
intermediate, and strong supercooling. Previously, we have shown that in the
presence of supercooling various possible choices of transition temperature
decouple [28] and it has been argued that the percolation temperature should
be used [17, 29, 30, 28]. Here we show explicitly that the peak amplitude and
frequency of the GW spectrum — and thus the resulting signal-to-noise ratio
(SNR) at a detector — are sensitive to the choice of transition temperature.
This is especially true for strongly supercooled phase transitions as one
might expect, but is also true for weakly supercooled phase transitions. We
show that if one chooses the nucleation temperature as the transition
temperature (as is very common practice), then the peak amplitude, peak
frequency, and SNR can change by orders of magnitude compared to when using
the percolation temperature. This has a huge impact on the prospects for
detection. However, such a drastic change only arises when using the mean
bubble separation as the characteristic length scale. If one is more careful
about the choice of length scale, the discrepancy can potentially be reduced
to a factor of a few.
Additionally, we investigate how the predictions can be affected by different
estimates of the thermal parameters which determine the GW spectrum. We
compare various parameterisations of the kinetic energy fraction, which
determines the energy available for sourcing GWs. Another important factor
that determines the peak GW amplitude and frequency is the timescale during
which the source is active, which is usually replaced by a characteristic
length scale. The mean bubble separation is used as this length scale in
lattice simulations. We compare the impact different estimates of this have on
GW signals, and we qualitatively explore the consequences of using the mean
bubble radius instead. Finally, because the turbulence contribution to the
overall GW signal is not well modelled, but could be significant, we also
compare several different choices for the energy available for sourcing GWs
from turbulence and show the impact that this can have on the SNR.
In section II we describe first-order phase transitions and supercooling in
more detail, and we define important milestone temperatures. In section III we
describe how properties of the phase transition and the thermal parameters are
computed in particle physics models. We also discuss various estimates for
these thermal parameters that are made in the literature. We briefly describe
how we use these thermal parameters to predict GW spectra in section IV. We
then introduce the model we use to obtain a first-order phase transition in
section V. Finally, we present our results in section VI and provide
concluding remarks in section VII.
## II First-order phase transitions and supercooling
As the Universe cools down the shape of the effective potential changes such
that minima (or phases) can appear and disappear and cosmological phase
transitions take place. These cosmological phase transitions play an important
role in particle physics, such as breaking the electroweak symmetry and
thereby generating masses for the fundamental particles via the Higgs
mechanism. Further, if a phase transition is of first order (i.e. there is a
potential barrier separating the phases), GWs are produced in the process.
A potential barrier between the phases prevents an instantaneous transition
from the local metastable minimum to the deeper minimum on the other side of
the barrier. Instead, the phase transition must proceed via either tunnelling
through the barrier or fluctuating over it. This first becomes possible when
the Universe cools below the critical temperature, $T_{c}$, where the free
energy densities of the two minima are degenerate. Below $T_{c}$ the
transition begins through a stochastic process where the tunnelling or
fluctuations occur at localised points in space-time, and when this happens
bubbles of the new phase can form and grow in a process known as bubble
nucleation. The phase transition completes if the bubbles of the new phase
fill the whole universe. More precisely, because it is a stochastic process we
define the completion temperature, $T_{f}$, to be the temperature when the
fraction of the universe left in the false vacuum (i.e. the old phase) is less
than $1\%$: $P_{f}(T_{f})<0.01$.
When this process takes a long time to complete $T_{f}$ may be much smaller
than the critical temperature $T_{c}$ at which the new minimum first becomes
energetically favoured. This is known as supercooling in analogy with the
phenomenon where liquids are supercooled well below their freezing point. All
first-order cosmological phase transitions exhibit some degree of supercooling
because they do not happen instantly. However, the temperature change can vary
from $T_{f}$ being within 1% of $T_{c}$ to being orders of magnitude smaller.
The degree of supercooling can have a significant impact on a phase transition
and is an important characteristic when comparing phase transitions.
Increasing supercooling may boost the energy released in the phase transition
and the amplitude of resultant GWs, but too much supercooling can lead to the
transition failing to complete.
Strongly supercooled phase transitions admit qualitatively different behaviour
compared to weakly supercooled phase transitions. Because the nucleation rate
is lower, the smaller number of bubbles that are nucleated grow to much larger
sizes. This means that the number of bubbles per Hubble volume, $N$, can be
less than one during the period where most of the bubbles are colliding or
even by the time the phase transition has completed [28]. This can be
expressed more precisely as follows. The nucleation temperature $T_{n}$ is
defined by the condition $N(T_{n})=1$. Usually $T_{n}$ is higher than the
percolation temperature $T_{p}$, defined by the moment when the false vacuum
fraction, $P_{f}$, is roughly 71%: $P_{f}(T_{p})=0.71$. Roughly speaking,
$T_{p}$ is where the bubbles should be in contact with each other (see section
4.7.2 of Ref. [9] for more details). In strongly supercooled scenarios the
nucleation temperature can be reached some time after most of the the bubble
collisions have taken place. In more extreme cases the phase transition may
complete, reaching $P_{f}(T_{f})<0.01$, before $N(T)=1$. In such cases there
is no nucleation temperature. However, strongly supercooled scenarios can also
have enough bubble nucleation such that $N(T)=1$ is reached relatively early
in the phase transition but the transition is still slow leading to a
substantial gap between $T_{n}$ and $T_{p}$ or $T_{f}$. Thus, the nucleation
temperature is not coupled with the actual progress of the phase transition
and the production of GWs.
## III Determining properties of the phase transition
### III.1 Temperatures and length scales
The rate of a phase transition depends strongly on the size and persistence of
the potential barrier. In fast transitions the barrier disappears fairly
quickly. The nucleation rate is initially zero at $T_{c}$ and then increases
rapidly as the barrier dissolves, giving an exponential nucleation rate of the
form
$\displaystyle\Gamma(t)=\Gamma(t_{*})\exp(\beta(t-t_{*})),$ (1)
where $t_{*}$ is some relevant time in the transition (often taken to
correspond to $T_{n}$). In contrast, if the barrier persists at low
temperatures or even at $T=0$, the nucleation rate can instead reach a maximum
at some temperature $T_{\Gamma}$ because lower temperature reduces the
likelihood of thermal fluctuations over the barrier.
The nucleation rate is given by [31]
$\displaystyle\Gamma(T)=T^{4}\left(\frac{S(T)}{2\pi}\right)\exp(-S(T)),$ (2)
where $S(T)$ is the bounce action, which we obtain from a modified version of
CosmoTransitions [32] (see appendix F of Ref. [28] for details of the
modifications). If one expresses $S$ as a function of time and Taylor expands
about $t_{*}$,
$\displaystyle S(t)\approx S(t_{*})+\left.\derivative{S}{t}\right|_{t=t_{*}}(t-t_{*})$ (3)
$\displaystyle\phantom{S(t)\approx S(t_{*})}+\frac{1}{2}\left.\derivative[2]{S}{t}\right|_{t=t_{*}}(t-t_{*})^{2}+\cdots,$ (4)
then truncating at first order gives the exponential nucleation rate given in
eq. 1, and we can identify
$\displaystyle\beta=-\\!\left.\derivative{S}{t}\right|_{t=t_{*}}.$ (5)
This can be useful because $\beta$ is related to the mean separation of
bubbles, $R_{\text{sep}}$, through [33]
$R_{\text{sep}}=(8\pi)^{\frac{1}{3}}\frac{v_{w}}{\beta}.$ (6)
The mean bubble separation is an important quantity for GW predictions.
Equation 6 should hold when evaluated at the temperature where $P_{f}$ has
decreased to $1/e$, denoted by $T_{e}$. Computing $\beta$ directly from the
bounce action and using eq. 6 to estimate $R_{\text{sep}}$ can simplify
calculations significantly.
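As a minimal numerical sketch, $\beta$ and the eq. 6 estimate of
$R_{\text{sep}}$ can be computed from a tabulated bounce action; the grid, the
interpolation scheme, and the hubble callable are illustrative assumptions.
import numpy as np

def beta_from_action(T_grid, S_grid, T_star, hubble):
    # beta = -dS/dt = T H(T) dS/dT at T_*, using dt/dT = -1/(T H(T)) (eq. 10);
    # T_grid must be in ascending order for np.interp
    dS_dT = np.gradient(S_grid, T_grid)
    return T_star * hubble(T_star) * np.interp(T_star, T_grid, dS_dT)

def R_sep_from_beta(beta, v_w):
    # eq. 6, expected to hold when evaluated at T_e
    return (8 * np.pi) ** (1 / 3) * v_w / beta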
However, while an exponential nucleation rate is a common assumption and eq. 6
is widely used, these approximations can be problematic in strongly
supercooled scenarios. We will demonstrate the potential consequences of this
in section VI. Note that if the transition temperature $T_{*}$ used to
evaluate $\beta$ is close to the temperature where the nucleation rate is
maximised, $T_{\Gamma}$, then $\beta\approx 0$. Further, $\beta$ is negative
when $T_{*}<T_{\Gamma}$. Therefore, the use of $\beta$ entirely breaks down in
these cases. However, because $\beta$ vanishes one can truncate eq. 4 at
second order and obtain a Gaussian nucleation rate,
$\displaystyle\Gamma(t)=\Gamma(t_{*})\exp(\frac{\beta_{V}^{2}}{2}(t-t_{*})^{2}),$
(7)
where
$\displaystyle\beta_{V}=\sqrt{\left.\derivative[2]{S}{t}\right|_{t=t_{\Gamma}}}.$
(8)
We can relate $\beta_{V}$ to $R_{\text{sep}}$ through [14]
$R_{\text{sep}}=\left(\sqrt{2\pi}\frac{\Gamma(T_{\Gamma})}{\beta_{V}}\right)^{\\!\\!-\frac{1}{3}}.$
(9)
It is unclear how well the approximations eq. 6 and eq. 9 perform, so we
include this investigation in our study. We note that we use temperature
rather than time in our analysis, so we employ the usual time-temperature
relation [28]
$\derivative{t}{T}=\frac{-1}{TH(T)}.$ (10)
Thus, $\beta$ and $\beta_{V}$ are in fact calculated from
$\text{d}S/\text{d}T$. The Hubble rate is given by
$\displaystyle H(T)=\sqrt{\frac{8\pi G}{3}\rho_{\text{tot}}(T)},$ (11)
where $\rho_{\text{tot}}$ is the total energy density. We use energy
conservation such that $\rho_{\text{tot}}=\rho_{f}-\rho_{\text{gs}}$, where
$\rho_{f}$ is the false vacuum energy density and $\rho_{\text{gs}}$ is the
ground state energy density. We renormalise the free energy density such that
$\rho_{\text{gs}}=0$, leaving $\rho_{\text{tot}}=\rho_{f}$.
Returning to the full treatment, the nucleation rate in eq. 2 can be used
directly to compute the false vacuum fraction $P_{f}$ as a function of
temperature, given by
$P_{f}(T)=\exp\left[-\frac{4\pi}{3}\int_{T}^{T_{c}}\frac{dT^{\prime}}{T^{\prime 4}}\frac{\Gamma(T^{\prime})}{H(T^{\prime})}\left(\int_{T}^{T^{\prime}}dT^{\prime\prime}\frac{v_{w}(T^{\prime\prime})}{H(T^{\prime\prime})}\right)^{3}\right].$ (12)
Here we have assumed that the Universe is expanding adiabatically and we
neglect the initial radius of the bubble at formation. See Ref. [9] for more
details on the derivations and assumptions. The last undetermined quantity in
eq. 12 is the bubble wall velocity, $v_{w}$. We discuss our treatment of
$v_{w}$ in section III.2.
The number of bubbles nucleated at any given temperature can also be computed
from eq. 2. In the literature it is standard to calculate the nucleation
temperature from an approximation for the number of bubbles per Hubble volume,
$\displaystyle N(T)$
$\displaystyle=\int_{T}^{T_{c}}\\!\\!dT^{\prime}\,\frac{\Gamma(T^{\prime})}{T^{\prime}H^{4}(T^{\prime})}.$
(13)
This implicitly assumes a fast transition so that one can assume $P_{f}=1$
before $T_{n}$, and thus omit $P_{f}$ from the integrand [28] (a factor of
$4\pi/3$ from the spherical Hubble volume is also neglected in this
treatment).
In this study we only use $T_{n}$ to show the impact of approximations made in
the literature, so we use the expression in eq. 13 to calculate $T_{n}$ for
consistency.
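The integrals in eqs. 12 and 13 can be evaluated by straightforward nested
quadrature; in the sketch below, Gamma, H and v_w are callables supplied by
the user, which is an illustrative simplification of the actual pipeline.
import numpy as np
from scipy.integrate import quad

def false_vacuum_fraction(T, Tc, Gamma, H, v_w):
    # eq. 12: nested quadrature over nucleation temperature T' and growth T''
    def radius(Tp):
        r, _ = quad(lambda Ts: v_w(Ts) / H(Ts), T, Tp)
        return r
    I, _ = quad(lambda Tp: Gamma(Tp) / (Tp**4 * H(Tp)) * radius(Tp)**3, T, Tc)
    return np.exp(-4 * np.pi / 3 * I)

def bubbles_per_hubble_volume(T, Tc, Gamma, H):
    # eq. 13: used only to define the nucleation temperature T_n
    N, _ = quad(lambda Tp: Gamma(Tp) / (Tp * H(Tp)**4), T, Tc)
    return N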
In contrast, to compute the mean bubble separation we determine the bubble
number density with $P_{f}(T)$ included to account for the fact that true
vacuum bubbles can only nucleate in regions that are still in the false
vacuum. The mean bubble separation is given by
$R_{\text{sep}}(T)=(n_{B}(T))^{-\frac{1}{3}},$ (14)
where
$n_{B}(T)=T^{3}\int_{T}^{T_{c}}dT^{\prime}\frac{\Gamma(T^{\prime})P_{f}(T^{\prime})}{T^{\prime 4}H(T^{\prime})}$ (15)
is the bubble number density. Finally, there are possibly other choices for
the characteristic length scale in GW predictions [34, 35, 14, 29, 9].
However, fits for GW predictions are determined in terms of $R_{\text{sep}}$,
and one cannot directly replace $R_{\text{sep}}$ with alternative length
scales in those fits. Still, we seek to investigate (among other things) the
impact of the choice of $T_{*}$ on the GW predictions (see section VI), so it
is important to understand the impact of $T_{*}$ on various length scales.
Thus, we also consider the mean bubble radius,
$\bar{R}(T)=\frac{T^{2}}{n_{B}(T)}\int_{T}^{T_{c}}dT^{\prime}\frac{\Gamma(T^{\prime})P_{f}(T^{\prime})}{T^{\prime 4}H(T^{\prime})}\int_{T}^{T^{\prime}}dT^{\prime\prime}\frac{v_{w}(T^{\prime\prime})}{H(T^{\prime\prime})}.$ (16)
For more details see section 5.5 of Ref. [9] and references therein.
We can compute the milestone temperatures $T_{n}$, $T_{p}$, $T_{e}$ and
$T_{f}$ using eqs. 13 and 12, and we can similarly use eqs. 14 and 16 to
compute $R_{\text{sep}}$ and $\bar{R}$ at these milestone temperatures or at
arbitrary temperatures. We use PhaseTracer [36] to map the phase structure of
the potential and TransitionSolver [37] to analyse the phase history,
including all relevant phase transitions, as well as to determine the
milestone temperatures and relevant parameters for GW predictions. (There is
another high-temperature phase transition with $T_{c}\sim 180$ GeV in the
intermediate and strong supercooling scenarios considered in section V; it is
very fast and not relevant to our analysis.) The GW fits are
parameterised in terms of thermal parameters, which — in addition to the
transition temperature and the characteristic length scale — also include
hydrodynamic parameters such as the kinetic energy fraction and the bubble
wall velocity.
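Since $T_{p}$, $T_{e}$ and $T_{f}$ are defined by conditions on $P_{f}$, they
can be found with a bracketing root finder once $P_{f}(T)$ is available; a
minimal sketch, assuming $P_{f}$ is monotonic on the bracket, is:
import math
from scipy.optimize import brentq

def milestone_temperature(target, T_low, Tc, Pf):
    # Solve P_f(T_*) = target for T_* in (T_low, Tc); P_f falls from 1 at Tc
    # towards 0 as the Universe cools, so the bracket contains a sign change
    # provided Pf(T_low) < target
    return brentq(lambda T: Pf(T) - target, T_low, Tc)

# T_p = milestone_temperature(0.71, T_low, Tc, Pf)
# T_e = milestone_temperature(math.exp(-1.0), T_low, Tc, Pf)
# T_f = milestone_temperature(0.01, T_low, Tc, Pf)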
### III.2 Hydrodynamic parameters
Here we discuss the hydrodynamic parameters used in GW fits. First we discuss
our best treatment for these parameters, then we introduce several common
variations to this treatment used in the literature. We will investigate the
impact of these variations on the GW signature in section VI.2. All of these
parameters — and all of the quantities that they depend on — should be
evaluated at the transition temperature, $T_{*}$.
In our best treatment, we take $T_{*}=T_{p}$, and we determine the kinetic
energy fraction using the pseudotrace difference between the phases,
corresponding to M2 in Ref. [25]:
$K=\frac{\bar{\theta}_{f}(T_{*})-\bar{\theta}_{t}(T_{*})}{\rho_{\text{tot}}(T_{*})}\kappa_{\bar{\theta}}(\alpha_{\bar{\theta}}(T_{*}),c_{s,f}(T_{*})).$
(17)
Here, $c_{s,f}$ is the speed of sound in the false vacuum and
$\alpha_{\bar{\theta}}$ is the transition strength parameter. The speed of
sound in phase $i$ is given by
$c_{s,i}^{2}(T)=\left.\frac{\partial_{T}V}{T\partial_{T}^{2}V}\right|_{\boldsymbol{\phi}_{i}(T)},$
(18)
where $V$ is the effective potential, or free energy density, and
$\boldsymbol{\phi}_{i}$ is the field configuration for phase $i$. The
transition strength parameter is defined as
$\alpha_{x}(T)=\frac{4\\!\left(x_{f}(T)-x_{t}(T)\right)}{3w_{f}(T)},$ (19)
where $x$ is a hydrodynamic quantity for which various choices exist in the
literature, and $w_{f}$ is the enthalpy density in the false vacuum. We use
the pseudotrace for $x$ in our best treatment, given by [25]
$\bar{\theta}_{i}(T)=\frac{1}{4}\\!\left(\rho_{i}(T)-\frac{p_{i}(T)}{c_{s,t}^{2}(T)}\right)$
(20)
in phase $i$, where $\rho$ and $p$ are the energy density and pressure,
respectively. The pseudotrace generalises the trace anomaly to models where
the speed of sound deviates from $1/\sqrt{3}$. We use the code snippet
provided in the appendix of Ref. [25] to determine the efficiency coefficient
$\kappa_{\bar{\theta}}$.
Turbulence from cosmological phase transitions is not well understood because
current hydrodynamic simulations cannot probe the turbulent regime. Hence, it
is difficult to estimate the efficiency coefficient for turbulence,
$\kappa_{\text{turb}}$, which is needed for turbulence contributions to the
production of GWs. However, it is expected that stronger phase transitions
(with larger $\alpha$) could result in more turbulence developing sooner and
could reduce the lifetime of the sound wave source. Lacking sufficient
modelling of the turbulence source, we consider the efficiency coefficient as
a fraction of $\kappa_{\bar{\theta}}$,
$\kappa_{\text{turb}}=\epsilon\kappa_{\bar{\theta}},$ (21)
and we take $\epsilon=0.05$ as our default treatment.
Finally, for our treatment of the bubble wall velocity, we assume bubbles grow
as supersonic detonations regardless of the extent of supercooling for
simplicity. General friction estimates are beyond the scope of this study, and
neither the ultra-relativistic nor the non-relativistic limits of friction are
applicable for all benchmark points in our study. We assume the bubbles expand
at the Chapman-Jouguet velocity,
$v_{w}=v_{\text{CJ}}=\frac{1+\sqrt{3\alpha_{\bar{\theta}}(1+c_{s,f}^{2}(3\alpha_{\bar{\theta}}-1))}}{c_{s,f}^{-1}+3\alpha_{\bar{\theta}}c_{s,f}},$
(22)
where temperature dependence has been suppressed. The Chapman-Jouguet velocity
is by no means the most likely supersonic detonation solution; however, it
does capture the dependence on the transition temperature and ensures a supersonic
detonation regardless of the extent of supercooling. The same cannot be said
for any fixed choice of $v_{w}$.
We now turn to the variations on our best treatment. First, we consider the
effect of setting $T_{*}$ to the other milestone temperatures: $T_{n}$,
$T_{e}$ and $T_{f}$. This involves using our best treatment (e.g. calculating
$K$ using eq. 17) but evaluating all quantities at, for example, $T_{n}$
instead of $T_{p}$. As a reminder, $T_{n}$ can be obtained by the condition
$N(T_{n})=1$ (see eq. 13), while $T_{p}$, $T_{e}$ and $T_{f}$ all come from
conditions on the false vacuum fraction (see eq. 12); specifically,
$P_{f}(T_{p})=0.71$, $P_{f}(T_{e})=1/e$ and $P_{f}(T_{f})=0.01$.
The approach we use for estimating $K$ was developed only recently in Refs.
[25, 26], so it is not yet widely adopted. More approximate treatments are
widespread, which we enumerate here. It is very common to determine $K$
through
$K_{x}=\frac{\kappa_{x}\alpha_{x}}{1+\alpha_{x}},$ (23)
with various choice of $x$ often being made. This parameterisation alone
introduces error in the determination of $K$, regardless of the choice of $x$
(see appendix A for details). The trace anomaly,
$\theta(T)=\frac{1}{4}(\rho(T)-3p(T)),$ (24)
is the closest alternative to $\bar{\theta}$, in fact exactly matching
$\bar{\theta}$ when $c_{s,t}=1/\sqrt{3}$ like in the bag model. The other
common choices for $x$ are the pressure $p$ and the energy density $\rho$. The
efficiency coefficient used for these choices of $x$ was derived in the bag
model, and is given by [38]
$\kappa=\frac{\sqrt{\alpha_{x}}}{0.135+\sqrt{0.98+\alpha_{x}}}$ (25)
for $v_{w}=v_{\text{CJ}}$, which is implicitly dependent on temperature.
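Equations 22, 23 and 25 translate directly into code; the sketch below is a
minimal implementation of these fits, not the M2 treatment of eq. 17.
import numpy as np

def kappa_bag(alpha):
    # eq. 25: bag-model efficiency coefficient for v_w = v_CJ [38]
    return np.sqrt(alpha) / (0.135 + np.sqrt(0.98 + alpha))

def kinetic_energy_fraction_approx(alpha):
    # eq. 23, with the choice of x left to the caller's alpha_x
    return kappa_bag(alpha) * alpha / (1 + alpha)

def chapman_jouguet_velocity(alpha, cs_f):
    # eq. 22, with temperature dependence suppressed
    num = 1 + np.sqrt(3 * alpha * (1 + cs_f**2 * (3 * alpha - 1)))
    return num / (1 / cs_f + 3 * alpha * cs_f)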
In these more approximate treatments of $K$, the enthalpy density in the
denominator of eq. 19 is usually replaced with $w_{f}=\frac{4}{3}\rho_{R}$,
where $\rho_{R}=\frac{\pi^{2}}{30}g_{\text{eff}}T^{4}$ is the radiation energy
density and $g_{\text{eff}}$ is the effective number of relativistic degrees
of freedom. We find the replacement of the enthalpy density in this way (which
comes from the bag model) to be a very good approximation. This replacement
leads to less than 1% error in the GW predictions. Therefore our
$\alpha_{\rho}$ effectively corresponds to the latent heat definition
frequently found in the literature, see eq. 5.35 of Ref. [9]. Similarly
$\alpha_{\theta}$ also effectively corresponds to eq. 5.36 of Ref. [9], which
also frequently appears in the literature, though here one also needs to
substitute $\theta=\frac{1}{4}(\rho-3p)$. One could also replace
$\bar{\theta}$ with $\theta$ in eq. 17 and use eq. 25 for $\kappa$,
corresponding to M3 in Ref. [25]. However, we find this introduces at most 1%
difference in the GW predictions compared to using eq. 23 with $x=\theta$, so
we do not consider this variation in our results.
As described in section III.1, one can approximate the mean bubble separation
$R_{\text{sep}}$ through the often-used thermal parameter $\beta$, or through
$\beta_{V}$. We investigate the error in these approximations for
$R_{\text{sep}}$ and the corresponding effect on GW predictions. We also
demonstrate the impact of using $\bar{R}$ instead of $R_{\text{sep}}$, but we
do not treat this as a variation of the treatment because mapping $\bar{R}$
onto existing GW fits is currently ambiguous.
We also consider alternative treatments of the turbulence efficiency
coefficient. The most obvious variation is to simply choose another arbitrary,
fixed value. We choose $\epsilon_{2}=0.1$, where the subscript ‘2’ denotes the
index of this variation for $\epsilon$. We also consider
$\epsilon_{3}=\left(1-\min(H(T_{*})\tau_{\text{sw}},1)\right)^{2/3}$, which
comes from assuming that a reduction in the lifetime of the sound wave source
$\tau_{\text{sw}}$ could boost the turbulence contribution to GW production
[39, 16]. However, the effective lifetime of the sound wave source is more
accurately suppressed by the factor
$\Upsilon=1-1/\sqrt{1+2H(T_{*})\tau_{\text{sw}}}$ derived in Ref. [18]. This
motivates a slightly modified choice: $\epsilon_{4}=(1-\Upsilon)^{2/3}$.
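A minimal sketch of these variations, with the Hubble-scaled sound wave lifetime $H(T_{*})\tau_{\text{sw}}$ supplied as a single assumed input:

```python
import numpy as np

def upsilon(H_tau):
    # Lifetime suppression factor of Ref. [18]: 1 - 1/sqrt(1 + 2 H tau_sw).
    return 1.0 - 1.0 / np.sqrt(1.0 + 2.0 * H_tau)

def eps_3(H_tau):
    # epsilon_3 = (1 - min(H tau_sw, 1))^(2/3); vanishes once H tau_sw >= 1.
    return (1.0 - min(H_tau, 1.0)) ** (2.0 / 3.0)

def eps_4(H_tau):
    # epsilon_4 = (1 - Upsilon)^(2/3), the Upsilon-based variant.
    return (1.0 - upsilon(H_tau)) ** (2.0 / 3.0)

for H_tau in (0.05, 0.5, 2.0):
    print(f"H tau_sw = {H_tau:4.2f}: "
          f"eps_3 = {eps_3(H_tau):.3f}, eps_4 = {eps_4(H_tau):.3f}")
```

Note that $\epsilon_{3}$ vanishes for $H(T_{*})\tau_{\text{sw}}\geq 1$, which is why it predicts zero turbulence in BP3 (see section VI.2).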
There are of course many other variations to the treatment that could be
considered, but we restrict our study to the variations mentioned thus far.
Changes to the bubble wall velocity could significantly impact the GW
predictions and even the phase transition properties, particularly if the
expansion mode of the bubbles changes from a supersonic detonation.
TransitionSolver currently does not use a full hydrodynamic treatment of
bubble profiles and therefore only provides accurate predictions for
supersonic detonations. (Reheating in the false vacuum for other bubble
expansion modes affects both bubble nucleation and growth [40, 41, 42].) Thus,
we currently cannot explore the effect of $v_{w}$ on GW predictions. We explored
the impact of approximations made for the reheating temperature and GW
redshifting factors in Ref. [27], and found that their effects were small. We
do not reconsider these approximations here due to their accuracy. Also, we
explored the accuracy of various approximations for $T_{n}$ as a function of
supercooling in Ref. [28]. Here we only calculate $T_{n}$ using eq. 13, but we
note that rougher approximations for $T_{n}$ are unreliable in strongly
supercooled scenarios, and would thus lead to significant errors in GW
predictions.
## IV Gravitational waves
We consider only the sound wave and turbulence sources of GWs in this study.
The collision source is expected to contribute negligibly due to friction with
the plasma. Even though some of the benchmark points listed in section V admit
strong supercooling, the bubbles nucleate at temperatures where the plasma
still imposes significant friction on the expanding bubble walls. Thus, we do
not expect runaway bubble walls and consequently neglect the collision source
altogether.
The general scaling of the GW equations is predominantly governed by two key
parameters: the kinetic energy fraction $K$ and the characteristic length
scale $L_{*}$. We set $L_{*}=R_{\text{sep}}(T_{p})$ in our best treatment. The
scaling of the peak amplitude $\Omega_{\text{peak}}$ and the peak frequency
$f_{\text{peak}}$ is roughly
$\Omega_{\text{peak}}\propto K^{n}L_{*},$ (26)
$f_{\text{peak}}\propto L_{*}^{-1},$ (27)
where $n=2$ for sound waves and $n=3/2$ for turbulence.
The details of the GW equations we use can be found in appendix A.5 of Ref.
[27]. In addition to the turbulence fit [43] and the sound shell model [21,
22] used for the sound wave source, we also consider another fit for the sound
wave source provided in Ref. [19]. We will refer to this fit as the ‘lattice
fit’ for the sound wave source, for lack of a better name. In this fit, the
redshifted peak amplitude is
$h^{2}\Omega_{\text{sw}}^{\text{lat}}(f)=5.3\times 10^{-2}\,\mathcal{R}_{\Omega}K^{2}\left(\frac{HL_{*}}{c_{s,f}}\right)\Upsilon(\tau_{\text{sw}})S_{\text{sw}}(f),$ (28)
the redshifted peak frequency is
$f_{\text{sw}}^{\text{lat}}=1.58\,\mathcal{R}_{f}\left(\frac{1}{L_{*}}\right)\left(\frac{z_{p}}{10}\right),$ (29)
matching one of the key frequencies in the sound shell model, and the spectral
shape is
$S_{\text{sw}}(f)=\left(\frac{f}{f_{\text{sw}}^{\text{lat}}}\right)^{3}\left(\frac{7}{4+3(f/f_{\text{sw}}^{\text{lat}})^{2}}\right)^{\frac{7}{2}}.$ (30)
See Ref. [9] and the appendices of Ref. [27] for details of the redshifting
factors $\mathcal{R}_{f}$ and $\mathcal{R}_{\Omega}$, the lifetime suppression
factor $\Upsilon$, and the simulation-derived factor $z_{p}$ (which is taken
to be $z_{p}=10$). All quantities in the fit are evaluated at $T_{*}$, except
for the redshifting factors. These are instead evaluated at the reheating
temperature, which itself depends on $T_{*}$. Just as in Ref. [27], we do not
include a suppression factor coming from bubbles not reaching their asymptotic
hydrodynamic profiles in the simulations from which the GW fits are obtained.
This suppression factor would likely depend on $T_{*}$ and the extent of
supercooling, but further modelling is required.
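For clarity, a minimal sketch of this lattice fit is given below; all thermal and redshift quantities ($K$, $H$, $L_{*}$, $c_{s,f}$, $\Upsilon$, $\mathcal{R}_{\Omega}$, $\mathcal{R}_{f}$) are treated as assumed inputs rather than computed.

```python
def f_sw_lat(R_f, L_star, z_p=10.0):
    # Redshifted peak frequency of the lattice fit, eq. 29.
    return 1.58 * R_f * (1.0 / L_star) * (z_p / 10.0)

def S_sw(f, f_peak):
    # Spectral shape, eq. 30 (normalised to 1 at f = f_peak).
    x = f / f_peak
    return x**3 * (7.0 / (4.0 + 3.0 * x**2)) ** 3.5

def h2_omega_sw_lat(f, K, H, L_star, c_sf, Upsilon, R_Omega, R_f):
    # Redshifted sound wave spectrum h^2 Omega_sw^lat(f), eq. 28.
    peak = 5.3e-2 * R_Omega * K**2 * (H * L_star / c_sf) * Upsilon
    return peak * S_sw(f, f_sw_lat(R_f, L_star))
```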
We also compute the SNR for the planned space-based GW detector LISA [44].
LISA has a peak sensitivity at the frequency scale $f_{\text{LISA}}\sim
10^{-3}$ Hz, which is the expected scale of GW signals from a first-order
electroweak phase transition [45]. We use the projected sensitivity curve
$\Omega_{\text{LISA}}$ from Refs. [46, 47], plotted in fig. 5. We calculate
the SNR as [47]
$\text{SNR}=\sqrt{\mathcal{T}\int_{0}^{\infty}df\left(\frac{\Omega_{\text{GW}}(f)}{\Omega_{\text{LISA}}(f)}\right)^{2}},$ (31)
where $\Omega_{\text{GW}}$ is the total GW signal from the sound wave and
turbulence sources, and assume an effective observation time $\mathcal{T}$ of
three years, coming from a mission duration of four years and 75% data taking
uptime.
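Numerically, eq. 31 amounts to a quadrature over the band where LISA has support; below is a minimal sketch with `Omega_GW` and `Omega_LISA` as assumed callables, and the formally infinite integration range truncated to $[10^{-5},1]$ Hz.

```python
import numpy as np
from scipy.integrate import simpson

def snr(Omega_GW, Omega_LISA, f_lo=1e-5, f_hi=1.0, n=2000, T_obs_yr=3.0):
    # SNR of eq. 31 with an effective observation time of T_obs_yr years.
    T = T_obs_yr * 365.25 * 24.0 * 3600.0                # seconds
    f = np.logspace(np.log10(f_lo), np.log10(f_hi), n)  # Hz
    integrand = (Omega_GW(f) / Omega_LISA(f)) ** 2
    return np.sqrt(T * simpson(integrand, x=f))
```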
## V Model
We use the real scalar singlet model — which is a simple yet realistic
extension to the Standard Model — to realise a first-order electroweak phase
transition. Details of this model and our treatment of one-loop corrections
are available in section 4.2 of Ref. [28]. We improve the treatment by adding
extra fermions (including all quarks and the muon and tau), and adding
Boltzmann suppression factors to the Debye corrections. We also appropriately
adjust the radiation degrees of freedom to $g_{*}^{\prime}=22.25$. A similar
treatment in a simpler single-field model was used in Ref. [27].
We consider four benchmark points (BPs) in this study, each with a different
extent of supercooling. All BPs come from a narrow slice of the total
parameter space of the model. We start with M2-BP2 of Ref. [28] and vary only
the mixing angle $\theta_{m}$ to vary the extent of supercooling. The other
input parameters are fixed as $\kappa_{hhs}=-1259.83$ GeV,
$\kappa_{sss}=-272.907$ GeV, $v_{s}=663.745$ GeV and $m_{s}=351.183$ GeV. The
mixing angles and the milestone temperatures for the BPs are listed in table
1. The supercooling increases with the BP index. BP1 represents a typical
weakly supercooled phase transition with only 1 GeV difference between the
onset of bubble nucleation and percolation, and $\alpha_{\bar{\theta}}\approx
0.01$. BP2 represents a moderately supercooled phase transition with
$\alpha_{\bar{\theta}}\approx 0.05$. Both of these BPs have an exponential
nucleation rate, so we do not calculate $T_{\Gamma}$ for them. BP3
represents a very strongly supercooled phase transition, where the physical
volume of the false vacuum only begins to decrease just below $T_{p}$. While
BP3 has a completion temperature, percolation is questionable [48, 14, 28].
The transition strength parameter is $\alpha_{\bar{\theta}}\approx 1.7$,
beyond the reach of current hydrodynamic simulations of GWs [20]. Thus, one
must be cautious when interpreting GW predictions from BP3, and indeed BP4.
BP4 has even stronger supercooling, so much so that the phase transition does
not complete. The transition strength parameter in BP4 is
$\alpha_{\bar{\theta}}\approx 177$.
BP | $\theta_{m}$ | $T_{c}$ | $T_{n}$ | $T_{p}$ | $T_{e}$ | $T_{f}$ | $T_{\Gamma}$ | $\log_{10}(\alpha_{\bar{\theta}})$
---|---|---|---|---|---|---|---|---
BP1 | 0.24 | 117.0 | 106.0 | 104.8 | 104.7 | 104.6 | N/A | $-1.938$
BP2 | 0.258 | 108.3 | 78.10 | 74.17 | 73.80 | 73.24 | N/A | $-1.264$
BP3 | 0.262 | 106.2 | N/A | 32.46 | 25.65 | 12.69 | 59.47 | $\;\;\,0.2178$
BP4 | 0.2623 | 106.1 | N/A | 10.09 | N/A | N/A | 59.57 | $\;\;\,2.248$
Table 1: Benchmark points and their corresponding milestone temperatures. The
mixing angle is expressed in radians, and the temperatures have units of GeV.
The transition strength parameter $\alpha_{\bar{\theta}}$ is evaluated at
$T_{p}$.
## VI Results
### VI.1 Dependence on the transition temperature
In this section we discuss the impact on GW predictions when varying the
transition temperature, $T_{*}$. The SNR as a function of $T_{*}$ is shown in
fig. 1 for each BP. The SNR varies by orders of magnitude over the duration of
the phase transition. However, GWs are not produced until the phase transition
is well underway, so we restrict our attention to the temperature range
$T\in[T_{f},\max(T_{n},T_{\Gamma})]$.
There are two sets of curves — solid and dashed — which have starkly different
forms in the temperature domain. The solid curves use $L_{*}=R_{\text{sep}}$
while the dashed curves use $L_{*}=\bar{R}$, with everything else in the
treatment being the same between the two sets of curves. The most immediate
difference between the two sets is that the SNR increases with $T_{*}$ when
using $R_{\text{sep}}$, and decreases with $T_{*}$ when using $\bar{R}$. In
fig. 2(a,b) we see that the peak amplitude of GWs follows a similar behaviour:
the amplitude increases (decreases) with $T_{*}$ when using $R_{\text{sep}}$
($\bar{R}$). Inversely, in fig. 3(a,b) we see that the peak frequency of GWs
decreases with $T_{*}$ when using $R_{\text{sep}}$, and increases considerably
slower with $T_{*}$ when using $\bar{R}$.
These observations can be easily explained by investigating the behaviour of
$R_{\text{sep}}$ and $\bar{R}$ as a function of $T_{*}$ (see fig. 4). In fact,
we find that the dominant thermal parameter when varying $T_{*}$ is $L_{*}$,
not $K$. In fig. 4(a) we plot choices of the length scale as a function of
$T_{*}$ for BP2 (intermediate supercooling). The mean bubble separation is
large near the start of the phase transition (at higher $T_{*}$) because there
are few bubbles so their separation is large. The separation decreases over
the course of the phase transition (with decreasing $T_{*}$) because new
bubbles are nucleated. The mean bubble radius, on the other hand, begins very
small because the first bubbles to nucleate have not had time to grow
significantly. As the phase transition continues, pre-existing bubbles grow,
but more small bubbles are nucleated, suppressing an increase in the mean
radius. Thus, the mean bubble radius increases over time (i.e. with decreasing
$T_{*}$) but varies less than the mean bubble separation. We also see that the
mean bubble separation estimated using $\beta$ actually emulates the mean
bubble radius. This is unsurprising, because $R_{\text{sep}}$ is supposedly
inversely proportional to $\beta$, and $\beta$ is much higher at the start of
a phase transition with the bounce action diverging at $T_{c}$. Thus,
$R_{\text{sep}}$ estimated using $\beta$ is small at high $T_{*}$, in line
with $\bar{R}$, whereas the true $R_{\text{sep}}$ is large at high $T_{*}$.
The behaviour of $R_{\text{sep}}$ in BP3 (see fig. 4(b)) is more complicated
due to strong supercooling. The expansion of space dilutes the bubble number
density and increases the separation between bubbles. Additionally, bubble
nucleation is negligible well below $T_{\Gamma}$ so new bubbles are not
nucleated to reduce the mean separation. With even stronger supercooling in
BP4 (not shown), $R_{\text{sep}}$ begins to increase rapidly as $T_{*}$ drops
below $T_{p}$. We also see that $\beta$ cannot be used to estimate
$R_{\text{sep}}$ in BP3 (at least below $T_{\Gamma}$). However, one can
instead use $\beta_{V}$ under the Gaussian nucleation rate approximation,
which is seen to reproduce both $R_{\text{sep}}$ and $\bar{R}$ quite well at
$T_{p}$ in this example.
Now that the temperature scaling of the length scales is clear, we can return
to effects on the GW signal. First, the peak frequency for all sources is
inversely proportional to $L_{*}$ and is largely unaffected by any other
thermal parameters. Only the frequency corresponding to the sound shell
thickness scale (in the sound shell model) is directly affected by the
hydrodynamic parameters $K$, $v_{w}$ and $c_{s}$. The two key frequencies in
the sound shell model are less separated with increased supercooling due to
thickening of the sound shell. Otherwise, the behaviour of the peak
frequencies in fig. 3 can be explained purely by the behaviour of the length
scales in fig. 4. If one uses $\bar{R}$, the change in frequency with $T_{*}$
is milder than when using $R_{\text{sep}}$. In general, stronger supercooling
lowers the peak frequency at $T_{p}$.
Next, the peak amplitude for all sources is proportional to $L_{*}$. However,
the amplitude also depends on $K$ and $c_{s}$, as well as $v_{w}$ indirectly
through $K$. Nevertheless, $L_{*}$ typically has a dominant effect on the
amplitude. In the absence of strong supercooling, $R_{\text{sep}}$ changes
considerably with $T_{*}$ while $\bar{R}$ does not. Yet, $K$ and the other
hydrodynamic parameters change very little, so $L_{*}$ still has a dominant
effect even when using $\bar{R}$. With strong supercooling, $K$ and the other
hydrodynamic parameters can vary considerably between $T_{\Gamma}$ and
$T_{p}$. So too can $\bar{R}$, while $R_{\text{sep}}$ remains approximately
constant, and is in fact minimised near $T_{\Gamma}$. The peak amplitude
increases rapidly below $T_{p}$ in BP4 not because of $K$ (which is roughly
unity even at $T_{p}$), but because of the expansion of space causing a rapid
increase in $L_{*}$. (This also causes a rapid decrease in the peak frequency
at low temperature, consistent with the findings in Ref. [27].) These are all
generic features of strongly supercooled phase transitions so the results and
analysis presented here should apply to other BPs and other models.
Combining the peak amplitudes and frequencies of the GW sources, one can then
compare the GW signal to the sensitivity of a GW detector to obtain the SNR.
We consider LISA in this study, but in principle the SNR at any detector could
be calculated. Although we now have a clear picture of the behaviour of the
peak amplitude and frequency, the behaviour of the SNR is complicated by the
sensitivity window of LISA. The SNR is enhanced when the peak frequency
matches the frequency range where LISA is most sensitive; that is, near
$f_{\text{LISA}}\sim 10^{-3}$ Hz. If by varying $T_{*}$ one would obtain a
higher peak amplitude but shift the peak frequency further from LISA’s optimal
frequency range, the SNR could decrease. Thus, investigating the peak
amplitude or peak frequency in isolation will not give a clear indication of
detectability.
In fig. 5 we plot the peak of the GW signal in the amplitude-frequency plane
as a function of $T_{*}$ for BP3 to provide further insight into these
competing effects. When using $R_{\text{sep}}$ for a strongly supercooled
phase transition, the peak frequency (amplitude) increases (decreases) with
decreasing $T_{*}$, until a reversal at $T_{\Gamma}$. However, between
$T_{\Gamma}$ and $T_{p}$ the amplitude increases faster than the frequency
decreases, increasing the SNR at LISA. Meanwhile, if one uses $\bar{R}$ for a
strongly supercooled phase transition, the peak frequency (amplitude)
decreases (increases) with decreasing $T_{*}$. In the example of BP3, the peak
of the GW signal slides across the boundary of LISA’s sensitivity curve,
leading to an almost constant SNR between $T_{\Gamma}$ and $T_{f}$. One can
imagine that a slightly different BP could alter the GW peak scaling slightly,
leading to a substantially different scaling of SNR with $T_{*}$. Naturally,
the curves for $R_{\text{sep}}$ and $\bar{R}$ meet near $T_{f}$ because the
two length scales are very similar near the end of the phase transition (as
was also demonstrated in Ref. [34]).
The GW signal is formed from the sound wave and turbulence contributions,
noting again that we have neglected the collision contribution. We consider
one GW fit for the turbulence throughout, but we present results for two GW
fits for sound waves: the sound shell model and lattice fits. First we compare
the two fits for the sound wave source. Based on the SNR alone (see fig. 1) we
find a significant discrepancy between the two fits at $T_{p}$ in BP1 and BP2.
The fits agree quite well for BP3 and BP4 when using $R_{\text{sep}}$ but this
is a coincidence due to LISA’s sensitivity window. Looking at the peak
amplitudes and frequencies separately for BP3 and BP4 (see fig. 2(c,d) and
fig. 3(c,d)), we see that the predicted GW signals are still different. When
using $\bar{R}$ instead, the SNR of the sound shell model is consistently
smaller in BP1 and BP2 for all $T_{*}$ because the peak frequency is always
above LISA’s optimal frequency, $f_{\text{LISA}}$. The situation is more
complicated in BP3 and BP4 because the peak frequency crosses
$f_{\text{LISA}}$ as $T_{*}$ is varied.
The ratio of peak amplitudes in the two sound wave fits is
$\Omega_{\text{sw}}^{\text{ss}}/\Omega_{\text{sw}}^{\text{lat}}\approx 0.20$
for $v_{w}\sim 1$ and $c_{s,f}\sim 1/\sqrt{3}$, where the superscripts ‘ss’
and ‘lat’ denote the sound shell and lattice fits, respectively. This ratio is
approximately independent of $T_{*}$ and is similar for all BPs. The ratio of
peak frequencies is
$f_{\text{sw}}^{\text{ss}}/f_{\text{sw}}^{\text{lat}}\approx 2.4$ for
$v_{w}\sim 1$ and $c_{s,f}\sim 1/\sqrt{3}$ as in BP3, but increases to roughly
$8.1$ in BP1 where $v_{\text{CJ}}\approx 0.65$. The ratio of peak frequencies
has a slight dependence on $T_{*}$ due to our choice $v_{w}=v_{\text{CJ}}$,
with $v_{\text{CJ}}$ implicitly depending on $T_{*}$ through $\alpha$. The
large frequency ratio in BP1 and BP2 leads to a large difference in the SNR at
LISA between the two sound wave fits. The choice $v_{w}=v_{\text{CJ}}$ results
in a large separation in length scales — $L_{*}$ and $L_{*}\Delta_{w}$ — when
$v_{\text{CJ}}\sim c_{s,f}$, which occurs when $\alpha\ll 1$. Here,
$\Delta_{w}=(v_{w}-c_{s,f})/v_{w}$ is a multiplier for the sound shell
thickness, and can be applied to either $R_{\text{sep}}$ or $\bar{R}$.
Next we compare the sound wave source to the turbulence source. In general,
$\Omega_{\text{turb}}$ decreases faster than $\Omega_{\text{sw}}$ with
decreasing $T_{*}$ when using $R_{\text{sep}}$. This is because both
amplitudes are proportional to the decreasing $L_{*}$, but
$\Omega_{\text{sw}}$ is proportional to the increasing $K^{2}$ while
$\Omega_{\text{turb}}$ is proportional to $K^{3/2}$. Thus, the fractional
contribution of turbulence to the total GW signal decreases with $T_{*}$.
However, when $K\sim 1$, as in BP4 below $T_{p}$, the scaling with $K$ is
equivalent between the two GW sources. The comparison of the two sources does
not change when instead using $\bar{R}$, although the amplitudes now
monotonically increase with decreasing $T_{*}$. There is little insight to
gain when comparing the peak frequencies of the GW sources because they
largely differ by a constant factor. The peak frequency for the turbulence
contribution is between the peak frequencies of the two sound wave fits; it is
larger than that of the lattice fit and smaller than that of the sound shell
model fit. However, because the sound shell thickens with supercooling (at
least when choosing $v_{w}=v_{\text{CJ}}$), we find that the peak frequency of
turbulence closely matches the peak frequency in the sound shell model in
strongly supercooled scenarios. Though, the GW fits were obtained in weak and
intermediate supercooling scenarios, so their use in scenarios with strong
supercooling requires extrapolation and should be interpreted with care.
Finally, one can compare the contribution to the SNR from the sound wave and
turbulence sources. This information cannot be inferred from the results shown
in fig. 1. Instead, we will discuss the turbulence contribution — and the
impact on the SNR when increasing it — in the next section, where we consider
variations of our best treatment.
Figure 1: SNR at LISA as a function of $T_{*}$. From top to bottom: (a) BP1,
(b) BP2, (c) BP3, (d) BP4. The vertical dashed lines correspond to key
temperatures: $T_{\Gamma}$ (magenta), $T_{n}$ (red), $T_{p}$ (green), $T_{e}$
(blue) and $T_{f}$ (black). Completion occurs at the left border of each plot,
except for BP4 where there is no completion. The solid curves correspond to
$L_{*}=R_{\text{sep}}$ and the dashed curves correspond to $L_{*}=\bar{R}$.
Figure 2: Peak GW amplitude as a function of transition temperature. See the
caption of fig. 1 for further details.
Figure 3: Peak GW frequency as a function of transition temperature. See the
caption of fig. 1 for further details.
Figure 4: Characteristic length scale as a function of transition temperature.
From top to bottom: (a) BP2, (b) BP3. The qualitative features of BP1 and BP4
are respectively very similar to those of BP2 and BP3, although
$R_{\text{sep}}$ and $\bar{R}$ increase rapidly near $T_{f}$ in BP4. The
vertical dashed lines correspond to key temperatures: $T_{\Gamma}$ (magenta),
$T_{n}$ (red), $T_{p}$ (green), $T_{e}$ (blue) and $T_{f}$ (black). Completion
occurs at the left border of each plot.
Figure 5: The peak amplitude and
frequency of the GW signal for BP3 as a function of temperature. Here we show
only the sound shell model for the sound wave source. The noise curve for LISA
is shown in blue.
### VI.2 Variations of the treatment
We now discuss the impact of individual variations to our best treatment for
GW prediction. These variations involve estimating $R_{\text{sep}}$ using
$\beta$ and $\beta_{V}$, estimating $K$ using other hydrodynamic quantities,
and changing the efficiency coefficient for turbulence, as discussed in
section III.2. The numerical results are stated in tables 2, 3 and 4 for
BP1-3. We do not consider BP4 here because the phase transition does not
complete; besides, the results should qualitatively match those of BP3. Note
that studies typically do not vary from our best treatment by one small
change. Usually many approximations are made for all thermal parameters used
in GW predictions. Our investigation here does not encompass such treatments;
instead we point the reader to Ref. [30] where they compare low and high
diligence treatments. However, one cannot easily determine from their results
the effects of individual variations to indicate whether an approximation is
appropriate.
First, we briefly discuss the impact of varying the transition
temperature, which is otherwise treated in more detail in section VI.1. The
two main parameters affecting the GW predictions are $K$ and $L_{*}$. We see
that $K$ changes by at most a factor of a few between $T_{n}$ and $T_{f}$ even
in the strongly supercooled scenario, BP3. (Evaluating the GW signal at
$T_{f}$, defined by $P_{f}(T_{f})=0.01$, is not a standard treatment; we show
this variation to demonstrate the limiting behaviour of quantities near the
end of the phase transition.) Yet the peak amplitudes and frequencies change by
several orders of magnitude. This is because $R_{\text{sep}}$ changes by
several orders of magnitude between $T_{n}$ and $T_{f}$. Whether the SNR is
higher or lower for some choice of $T_{*}$ depends on where the peak frequency
lies with respect to LISA’s peak sensitivity, $f_{\text{LISA}}$. Because of
this, there is no consistent trend in the effect of $T_{*}$ on the SNR across
the BPs, even though there is a consistent trend in the peak amplitudes and
frequencies.
Next, we find that using $\beta(T_{p})$ to estimate $R_{\text{sep}}(T_{p})$
results in roughly a factor of two error in peak amplitudes and frequencies in
BP1 and BP2. A similar error is present when using $\beta_{V}$ to estimate
$R_{\text{sep}}(T_{p})$ in BP3. However, it is common practice to evaluate
$\beta$ at $T_{n}$ rather than at $T_{p}$, which introduces a larger error as
seen in fig. 4(a). Yet using $\beta(T_{n})$ is more appropriate than using
$R_{\text{sep}}(T_{n})$ simply because the bubble number density changes
faster than $\beta$ between $T_{n}$ and $T_{p}$. We do not consider the
variation $L_{*}=\bar{R}$ here because GW fits are derived in terms of
$R_{\text{sep}}$ rather than $\bar{R}$. An appropriate mapping would need to
be applied to use $\bar{R}$ in the fits, such as multiplying $L_{*}$ by an
unknown constant factor in the fits.
Varying the hydrodynamic quantity $x$ in eq. 23 has a significant impact on
the prediction of $K$ in BP1 and BP2. The effect is considerably smaller in
BP3. This can be understood as follows. The pressure difference $\Delta p$ and
energy density difference $\Delta\rho$ are starkly different at high
temperature, with $\Delta p=0$ and $\Delta\rho\neq 0$ at $T_{c}$. We always
have $\alpha_{p}<\alpha_{\theta}<\alpha_{\rho}$ [25]. Using the pressure
difference underestimates $K$, while using the energy density difference
overestimates $K$. Our results match the findings of Refs. [25, 26]. With
increased supercooling (i.e. at lower temperature), the energy density
approaches the pressure such that $\alpha_{p}\approx\alpha_{\rho}$, and
$c_{s,t}^{2}\approx 1/3$ such that $\bar{\theta}\approx\theta$. Thus, for
strong supercooling we find that all methods to estimate $K$ lead to similar
results, while significant discrepancies arise for weak and intermediate
supercooling.
Lastly, we consider the impact of varying the turbulence efficiency
coefficient, $\kappa_{\text{turb}}$, through variation of $\epsilon$ (see eq.
21). Increasing $\kappa_{\text{turb}}$ can have a large impact on the SNR,
particularly if the peak frequency of turbulence better matches the detector’s
sensitivity window than the peak frequency of sound waves does. The variations
$\epsilon_{3}$ and $\epsilon_{4}$ increase the amplitude of the turbulence
source by two orders of magnitude because $\epsilon$ approaches unity, and
$(1/0.05)^{3/2}\approx 90$. However, $\epsilon_{3}$ predicts zero turbulence
in BP3 because $H(T_{*})\tau_{\text{sw}}>1$. Increasing the turbulence
contribution increases the SNR significantly in BP1 when using the sound shell
model but has little effect when using the lattice fit for sound waves. The
effect is small in BP2 with up to a 50% increase in SNR when using the sound
shell model. The effect is significant in BP3 when using either sound wave
fit.
Variation | $h^{2}\Omega_{\mathrm{sw}}^{\mathrm{lat}}$ $(\times 10^{-17})$ | $h^{2}\Omega_{\mathrm{sw}}^{\mathrm{ss}}$ $(\times 10^{-18})$ | $f_{\mathrm{sw}}^{\mathrm{lat}}$ $(\times 10^{-5})$ | $f_{\mathrm{sw}}^{\mathrm{ss}}$ $(\times 10^{-4})$ | $h^{2}\Omega_{\mathrm{turb}}$ $(\times 10^{-20})$ | $f_{\mathrm{turb}}$ $(\times 10^{-5})$ | $\mathrm{SNR_{lat}}$ $(\times 10^{-5})$ | $\mathrm{SNR_{ss}}$ $(\times 10^{-7})$ | $\alpha$ $(\times 10^{-3})$ | $\kappa$ $(\times 10^{-2})$ | $K$ $(\times 10^{-4})$
---|---|---|---|---|---|---|---|---|---|---|---
None | 22.57 | 31.49 | 1422 | 1157 | 21.28 | 3150 | 156.2 | 39.60 | 11.52 | 9.900 | 11.20
$T_{*}=T_{e}$ | 13.97 | 19.50 | 1833 | 1490 | 12.90 | 4061 | 56.44 | 11.24 | 11.57 | 9.921 | 11.27
$T_{*}=T_{f}$ | 11.10 | 15.50 | 2080 | 1685 | 10.16 | 4607 | 33.82 | 6.105 | 11.66 | 9.955 | 11.39
$T_{*}=T_{n}$ | 147000 | 204300 | 2.611 | 2.187 | 5448000 | 5.785 | 10230 | 5026000 | 10.74 | 9.565 | 10.09
$R_{\text{sep}}(\beta)$ | 11.04 | 15.41 | 2062 | 1678 | 10.12 | 4567 | 34.32 | 6.216 | | |
$K(\alpha(\theta))$ | 21.09 | 29.44 | | | 19.92 | | 146.0 | 37.03 | 11.46 | 9.466 | 10.72
$K(\alpha(p))$ | 1.403 | 1.957 | | | 1.489 | | 9.711 | 2.509 | 3.590 | 5.317 | 1.902
$K(\alpha(\rho))$ | 261.9 | 365.5 | | | 234.7 | | 1813 | 456.2 | 35.05 | 16.39 | 55.50
$\epsilon_{2}$ | | | | | 60.18 | | 156.4 | 54.06 | | |
$\epsilon_{3}$ | | | | | 1776 | | 166.0 | 1035 | | |
$\epsilon_{4}$ | | | | | 1787 | | 166.0 | 1041 | | |
Table 2: GW predictions and hydrodynamic parameters for BP1. Each row
corresponds to a different variation of our best treatment. Blank cells match
the result of our best treatment (i.e. the top row). Frequencies are stated in
units of GeV, with all other quantities being dimensionless. The superscripts ‘ss’
and ‘lat’ respectively denote the sound shell model fit and the lattice fit
for the sound wave source of GWs.
Variation | $h^{2}\Omega_{\mathrm{sw}}^{\mathrm{lat}}$ $(\times 10^{-13})$ | $h^{2}\Omega_{\mathrm{sw}}^{\mathrm{ss}}$ $(\times 10^{-14})$ | $f_{\mathrm{sw}}^{\mathrm{lat}}$ $(\times 10^{-5})$ | $f_{\mathrm{sw}}^{\mathrm{ss}}$ $(\times 10^{-4})$ | $h^{2}\Omega_{\mathrm{turb}}$ $(\times 10^{-16})$ | $f_{\mathrm{turb}}$ $(\times 10^{-5})$ | $\mathrm{SNR_{lat}}$ | $\mathrm{SNR_{ss}}$ | $\alpha$ $(\times 10^{-2})$ | $\kappa$ | $K$ $(\times 10^{-3})$
---|---|---|---|---|---|---|---|---|---|---|---
None | 3.590 | 5.673 | 129.6 | 60.20 | 3.898 | 287.0 | 10.08 | 2.031 | 5.450 | 0.2074 | 10.64
$T_{*}=T_{e}$ | 2.552 | 4.042 | 159.9 | 73.75 | 2.662 | 354.2 | 8.763 | 1.204 | 5.575 | 0.2096 | 10.99
$T_{*}=T_{f}$ | 2.146 | 3.410 | 181.7 | 82.91 | 2.187 | 402.5 | 8.110 | 0.8892 | 5.771 | 0.2129 | 11.54
$T_{*}=T_{n}$ | 676.5 | 1046 | 2.189 | 1.098 | 8968 | 4.849 | 1.310 | 5.142 | 4.297 | 0.1857 | 7.597
$R_{\text{sep}}(\beta)$ | 2.019 | 3.191 | 177.5 | 82.45 | 2.078 | 393.1 | 7.510 | 0.8449 | | |
$K(\alpha(\theta))$ | 3.372 | 5.329 | | | 3.676 | | 9.469 | 1.908 | 5.362 | 0.2011 | 10.23
$K(\alpha(p))$ | 1.428 | 2.256 | | | 1.649 | | 4.010 | 0.8081 | 3.698 | 0.1682 | 5.997
$K(\alpha(\rho))$ | 14.45 | 22.84 | | | 14.61 | | 40.59 | 8.172 | 10.35 | 0.2736 | 25.68
$\epsilon_{2}$ | | | | | 11.03 | | 10.11 | 2.064 | | |
$\epsilon_{3}$ | | | | | 290.2 | | 11.21 | 3.406 | | |
$\epsilon_{4}$ | | | | | 301.7 | | 11.26 | 3.462 | | |
Table 3: The same as table 2 but for BP2.
Variation | $h^{2}\Omega_{\mathrm{sw}}^{\mathrm{lat}}$ $(\times 10^{-7})$ | $h^{2}\Omega_{\mathrm{sw}}^{\mathrm{ss}}$ $(\times 10^{-8})$ | $f_{\mathrm{sw}}^{\mathrm{lat}}$ $(\times 10^{-6})$ | $f_{\mathrm{sw}}^{\mathrm{ss}}$ $(\times 10^{-6})$ | $h^{2}\Omega_{\mathrm{turb}}$ $(\times 10^{-10})$ | $f_{\mathrm{turb}}$ $(\times 10^{-6})$ | $\mathrm{SNR_{lat}}$ | $\mathrm{SNR_{ss}}$ | $\alpha$ | $\kappa$ | $K$
---|---|---|---|---|---|---|---|---|---|---|---
None | 1.861 | 3.748 | 9.345 | 23.48 | 6.348 | 20.70 | 249.6 | 307.7 | 1.651 | 0.7175 | 0.4536
$T_{*}=T_{e}$ | 4.318 | 8.872 | 7.908 | 19.12 | 14.74 | 17.52 | 443.7 | 498.2 | 4.257 | 0.8422 | 0.6950
$T_{*}=T_{f}$ | 17.04 | 35.42 | 4.111 | 9.722 | 81.84 | 9.106 | 864.5 | 876.4 | 71.06 | 0.9831 | 0.9803
$R_{\text{sep}}(\beta_{V})$ | 1.193 | 2.402 | 12.80 | 32.17 | 3.394 | 28.36 | 222.6 | 356.9 | | |
$K(\alpha(\theta))$ | 1.819 | 3.663 | | | 6.227 | | 244.9 | 301.5 | 1.605 | 0.7269 | 0.4478
$K(\alpha(p))$ | 1.768 | 3.560 | | | 6.083 | | 239.2 | 294.2 | 1.564 | 0.7269 | 0.4409
$K(\alpha(\rho))$ | 1.967 | 3.962 | | | 6.646 | | 261.4 | 323.0 | 1.728 | 0.7383 | 0.4677
$\epsilon_{2}$ | | | | | 17.95 | | 700.0 | 742.2 | | |
$\epsilon_{3}$ | | | | | 0 | | 18.36 | 130.9 | | |
$\epsilon_{4}$ | | | | | 288.4 | | 11210 | 11230 | | |
Table 4: The same as table 2 but for BP3. There is no row for $T_{*}=T_{n}$
because there is no nucleation temperature for BP3. This time there is a row
for $R_{\text{sep}}(\beta_{V})$ instead of $R_{\text{sep}}(\beta)$ because the
bubble nucleation rate is Gaussian rather than exponential. In fact, $\beta$
is negative and leads to invalid predictions.
## VII Discussion
In this study we have investigated several ambiguities and approximations made
in predictions of GWs from cosmological phase transitions. We considered each
approximation in isolation to provide a clear indication of their individual
effects on the GW signal. We recommend our results be used in conjunction with
the results of Ref. [30] to determine whether a particular set of
approximations can lead to reliable GW predictions. Alternatively, one could
use our best treatment described in section III.2 if feasible, and even
improve on it with a proper treatment of the hydrodynamic profile around bubble
walls and a method for estimating friction on the bubble wall.
To our knowledge, our investigation is the first to explicitly determine the
effect of varying the transition temperature, $T_{*}$. We note that our
investigation is fundamentally different from studies that vary thermal
parameters (including $T_{*}$) separately, treating them as independent
quantities. We account for the implicit interdependence of all thermal
parameters.
The correct choice of the transition temperature is still unknown because the
hydrodynamic simulations from which GW fits are obtained hold the temperature
fixed. In fact, evaluating GW predictions at a single temperature may fall out
of favour once modelling of GW production is improved further. We have
demonstrated that using the current set of thermal parameters (in particular
$R_{\text{sep}}$), the GW signal can change by several orders of magnitude
between commonly chosen transition temperatures: $T_{n}$ and $T_{p}$. If a
more appropriate choice of transition temperature lies somewhere between
$T_{n}$ and $T_{p}$, then new GW predictions would significantly differ from
those obtained using the current best treatments which use $T_{*}=T_{p}$.
We argued in section VI.1 that evaluating the GW signal at temperatures above
$T_{n}$ is not meaningful because bubble collisions would not have occurred to
source GWs at that stage in the phase transition. This same reasoning can also
be used to discard $T_{n}$ as a reasonable transition temperature. The only
case where the nucleation temperature reflects a time when collisions are
occurring is in some strongly supercooled phase transitions — where in extreme
cases $T_{n}\sim T_{p}$, counter-intuitively [28]. However, using $T_{n}$ in
strongly supercooled phase transitions is not recommended. For one, it
decouples from the progress of the phase transition, so it does not represent
a consistent stage in the phase transition. Further, the existence of a
nucleation temperature does not indicate whether a phase transition occurs or
completes, as discussed in Ref. [28]. Thus, one must be careful when using
$T_{n}$, and ensure that the phase transition is in fact weakly supercooled.
It is commonly assumed that the GW signal should be similar at $T_{n}$ and
$T_{p}$ for weakly supercooled phase transitions. This is not consistent with
our findings. Calculating the mean bubble separation properly (from the bubble
number density) would suggest orders of magnitude difference in the GW signal
between $T_{n}$ and $T_{p}$. Using the mean bubble radius or $\beta$ instead
still suggests a factor of a few difference in the GW signal between $T_{n}$
and $T_{p}$. The hydrodynamic parameters like the kinetic energy fraction,
however, are similar at the two temperatures.
The mean bubble radius varies much slower with temperature than the mean
bubble separation. Thus, studies evaluating GWs at $T_{n}$ should use the mean
bubble radius or $\beta$ instead of calculating the mean bubble separation
directly from the bubble number density. However, we note that if one could
calculate the bubble number density, then one could calculate $T_{p}$ and use
the recommended treatment outlined in section III.2.
In general, we find that variations of the treatment of GW predictions can
lead to sizeable deviations in the SNR and in the peak amplitudes and
frequencies, potentially by many orders of magnitude. In the context of GW
predictions from cosmological phase transitions, even a mild deviation is of
the order of 10%, suggesting that constraints on particle physics models from GW
observations will be hard to apply reliably at this stage. Nevertheless, the
recent emergence of successful GW astronomy offers hope for constraining
particle physics models at scales beyond the reach of particle physics
experiments.
## VIII Acknowledgements
LM thanks Thomas Konstandin for assistance with numerical accuracy in
calculating $\kappa_{\bar{\theta}}$. LM was supported by an Australian
Government Research Training Program (RTP) Scholarship and a Monash Graduate
Excellence Scholarship (MGES). The work of PA is supported by the National
Natural Science Foundation of China (NNSFC) under grant No. 12150610460 and by
the supporting fund for foreign experts grant wgxz2022021L. ZX is also
supported in part by NNSFC grant No. 12150610460.
## Appendix A Correction to the kinetic energy fraction parameterisation
The kinetic energy fraction is often parameterised as
$K=\frac{\kappa\alpha}{1+\alpha}.$ (32)
This parameterisation introduces an approximation to the fundamental definition
[22, 25, 9]
$K=\frac{\rho_{\text{kin}}(T_{*})}{\rho_{\text{tot}}(T_{*})},$ (33)
where $\rho_{\text{kin}}$ is the fluid kinetic energy. In the following we
assume $\rho$ and $p$ are renormalised such that the ground state energy
density vanishes. In this case, $\rho_{\text{tot}}=\rho_{f}$.
The inexact nature of eq. 32 was demonstrated in appendix B.2 of Ref. [22] and
implied in Ref. [25] (seen by comparing methods M2 and M3). A correction
$\delta$ can be applied such that [22]
$K=\frac{\kappa\alpha}{1+\alpha+\delta}.$ (34)
One can solve for $\delta$ by equating eq. 33 and eq. 34. If $\alpha$ is
calculated using the trace anomaly
$\theta=\frac{1}{4}\left(\rho-3p\right)$ (35)
as in Ref. [22], one finds
$\delta=\frac{\theta_{t}}{3w_{f}}.$ (36)
If $\alpha$ is calculated using the pseudotrace [25]
$\bar{\theta}=\frac{1}{4}\left(\rho-\frac{p}{c_{s,t}^{2}}\right),$ (37)
which reduces to the trace anomaly if $c_{s,t}^{2}=1/3$ (e.g. as in the bag
model), one instead finds
$\delta=\frac{4}{3w_{f}}\left(\rho_{\text{tot}}-\Delta\bar{\theta}\right)-1.$
(38)
In our benchmark points we find $\delta\ll 1+\alpha$ such that the difference
between eq. 32 and eq. 34 is at most 1%. Thus, we do not include such
variations on the treatment of $K$ in our results.
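For completeness, a minimal sketch comparing the two forms for the trace-anomaly case, where $\delta=\theta_{t}/(3w_{f})$ (eq. 36); the numerical inputs are illustrative only.

```python
def K_param(kappa, alpha):
    # Common parameterisation, eq. 32.
    return kappa * alpha / (1.0 + alpha)

def K_corrected(kappa, alpha, theta_t, w_f):
    # Delta-corrected form, eqs. 34 and 36 (trace-anomaly alpha).
    delta = theta_t / (3.0 * w_f)
    return kappa * alpha / (1.0 + alpha + delta)

# Illustrative inputs: with delta << 1 + alpha the two forms agree to < 1%
print(K_param(0.2, 0.05), K_corrected(0.2, 0.05, theta_t=1e-2, w_f=1.0))
```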
## References
* [1] NANOGrav collaboration, _The NANOGrav 15 yr Data Set: Evidence for a Gravitational-wave Background_ , _Astrophys. J. Lett._ 951 (2023) L8 [2306.16213].
* [2] H. Xu et al., _Searching for the Nano-Hertz Stochastic Gravitational Wave Background with the Chinese Pulsar Timing Array Data Release I_ , _Res. Astron. Astrophys._ 23 (2023) 075024 [2306.16216].
* [3] J. Antoniadis et al., _The second data release from the European Pulsar Timing Array III. Search for gravitational wave signals_ , 2306.16214.
* [4] D.J. Reardon et al., _Search for an Isotropic Gravitational-wave Background with the Parkes Pulsar Timing Array_ , _Astrophys. J. Lett._ 951 (2023) L6 [2306.16215].
* [5] NANOGrav collaboration, _The NANOGrav 15 yr Data Set: Search for Signals from New Physics_ , _Astrophys. J. Lett._ 951 (2023) L11 [2306.16219].
* [6] KAGRA, VIRGO, LIGO Scientific collaboration, _Open Data from the Third Observing Run of LIGO, Virgo, KAGRA, and GEO_ , _Astrophys. J. Suppl._ 267 (2023) 29 [2302.03676].
* [7] P. Athron, C. Balázs, T.E. Gonzalo and M. Pearce, _Falsifying Pati-Salam models with LIGO_ , 2307.02544.
* [8] F. Huang, V. Sanz, J. Shu and X. Xue, _LIGO as a probe of dark sectors_ , _Phys. Rev. D_ 104 (2021) 095001 [2102.03155].
* [9] P. Athron, C. Balázs, A. Fowlie, L. Morris and L. Wu, _Cosmological phase transitions: from perturbative particle physics to gravitational waves_ , 2305.02357.
* [10] P. Athron, C. Balazs, A. Fowlie, L. Morris, G. White and Y. Zhang, _How arbitrary are perturbative calculations of the electroweak phase transition?_ , _JHEP_ 01 (2023) 050 [2208.01319].
* [11] D. Croon, O. Gould, P. Schicho, T.V.I. Tenkanen and G. White, _Theoretical uncertainties for cosmological first-order phase transitions_ , _JHEP_ 04 (2021) 055 [2009.10080].
* [12] LIGO Scientific, Virgo collaboration, _Observation of Gravitational Waves from a Binary Black Hole Merger_ , _Phys. Rev. Lett._ 116 (2016) 061102 [1602.03837].
* [13] M. Hindmarsh, S.J. Huber, K. Rummukainen and D.J. Weir, _Gravitational waves from the sound of a first order phase transition_ , _Phys. Rev. Lett._ 112 (2014) 041301 [1304.2433].
* [14] J. Ellis, M. Lewicki and J.M. No, _On the Maximal Strength of a First-Order Electroweak Phase Transition and its Gravitational Wave Signal_ , _JCAP_ 04 (2019) 003 [1809.08242].
* [15] J. Ellis, M. Lewicki and J.M. No, _Gravitational waves from first-order cosmological phase transitions: lifetime of the sound wave source_ , _JCAP_ 07 (2020) 050 [2003.07360].
* [16] J. Ellis, M. Lewicki, J.M. No and V. Vaskonen, _Gravitational wave energy budget in strongly supercooled phase transitions_ , _JCAP_ 06 (2019) 024 [1903.09642].
* [17] C. Caprini et al., _Detecting gravitational waves from cosmological phase transitions with LISA: an update_ , _JCAP_ 03 (2020) 024 [1910.13125].
* [18] H.-K. Guo, K. Sinha, D. Vagie and G. White, _Phase Transitions in an Expanding Universe: Stochastic Gravitational Waves in Standard and Non-Standard Histories_ , _JCAP_ 01 (2021) 001 [2007.08537].
* [19] M. Hindmarsh, S.J. Huber, K. Rummukainen and D.J. Weir, _Shape of the acoustic gravitational wave power spectrum from a first order phase transition_ , _Phys. Rev. D_ 96 (2017) 103520 [1704.05871].
* [20] D. Cutting, M. Hindmarsh and D.J. Weir, _Vorticity, kinetic energy, and suppressed gravitational wave production in strong first order phase transitions_ , _Phys. Rev. Lett._ 125 (2020) 021302 [1906.00480].
* [21] M. Hindmarsh, _Sound shell model for acoustic gravitational wave production at a first-order phase transition in the early Universe_ , _Phys. Rev. Lett._ 120 (2018) 071301 [1608.04735].
* [22] M. Hindmarsh and M. Hijazi, _Gravitational waves from first order cosmological phase transitions in the Sound Shell Model_ , _JCAP_ 12 (2019) 062 [1909.10040].
* [23] X. Wang, F.P. Huang and Y. Li, _Sound velocity effects on the phase transition gravitational wave spectrum in the sound shell model_ , _Phys. Rev. D_ 105 (2022) 103513 [2112.14650].
* [24] R.-G. Cai, S.-J. Wang and Z.-Y. Yuwen, _Hydrodynamic sound shell model_ , 2305.00074.
* [25] F. Giese, T. Konstandin and J. van de Vis, _Model-independent energy budget of cosmological first-order phase transitions—A sound argument to go beyond the bag model_ , _JCAP_ 07 (2020) 057 [2004.06995].
* [26] F. Giese, T. Konstandin, K. Schmitz and J. Van De Vis, _Model-independent energy budget for LISA_ , _JCAP_ 01 (2021) 072 [2010.09744].
* [27] P. Athron, A. Fowlie, C.-T. Lu, L. Morris, L. Wu, Y. Wu et al., _Can supercooled phase transitions explain the gravitational wave background observed by pulsar timing arrays?_ , 2306.17239.
* [28] P. Athron, C. Balázs and L. Morris, _Supercool subtleties of cosmological phase transitions_ , _JCAP_ 03 (2023) 006 [2212.07559].
* [29] X. Wang, F.P. Huang and X. Zhang, _Phase transition dynamics and gravitational wave spectra of strong first-order phase transition in supercooled universe_ , _JCAP_ 05 (2020) 045 [2003.08892].
* [30] H.-K. Guo, K. Sinha, D. Vagie and G. White, _The benefits of diligence: how precise are predicted gravitational wave spectra in models with phase transitions?_ , _JHEP_ 06 (2021) 164 [2103.06933].
* [31] A.D. Linde, _Decay of the False Vacuum at Finite Temperature_ , _Nucl. Phys. B_ 216 (1983) 421.
* [32] C.L. Wainwright, _CosmoTransitions: Computing Cosmological Phase Transition Temperatures and Bubble Profiles with Multiple Fields_ , _Comput. Phys. Commun._ 183 (2012) 2006 [1109.4189].
* [33] K. Enqvist, J. Ignatius, K. Kajantie and K. Rummukainen, _Nucleation and bubble growth in a first order cosmological electroweak phase transition_ , _Phys. Rev. D_ 45 (1992) 3415.
* [34] A. Mégevand and S. Ramírez, _Bubble nucleation and growth in very strong cosmological phase transitions_ , _Nucl. Phys. B_ 919 (2017) 74 [1611.05853].
* [35] R.-G. Cai, M. Sasaki and S.-J. Wang, _The gravitational waves from the first-order phase transition with a dimension-six operator_ , _JCAP_ 08 (2017) 004 [1707.03001].
* [36] P. Athron, C. Balázs, A. Fowlie and Y. Zhang, _PhaseTracer: tracing cosmological phases and calculating transition properties_ , _Eur. Phys. J. C_ 80 (2020) 567 [2003.02859].
* [37] P. Athron, C. Balázs and L. Morris, _TransitionSolver: resolving cosmological phase histories_ , in preparation (2023).
* [38] J.R. Espinosa, T. Konstandin, J.M. No and G. Servant, _Energy Budget of Cosmological First-order Phase Transitions_ , _JCAP_ 06 (2010) 028 [1004.4187].
* [39] T. Alanne, T. Hugle, M. Platscher and K. Schmitz, _A fresh look at the gravitational-wave signal from cosmological phase transitions_ , _JHEP_ 03 (2020) 004 [1909.11356].
* [40] A.F. Heckler, _The Effects of electroweak phase transition dynamics on baryogenesis and primordial nucleosynthesis_ , _Phys. Rev. D_ 51 (1995) 405 [astro-ph/9407064].
* [41] A. Mégevand and S. Ramírez, _Bubble nucleation and growth in slow cosmological phase transitions_ , _Nucl. Phys. B_ 928 (2018) 38 [1710.06279].
* [42] M.A. Ajmi and M. Hindmarsh, _Thermal suppression of bubble nucleation at first-order phase transitions in the early Universe_ , _Phys. Rev. D_ 106 (2022) 023505 [2205.04097].
* [43] C. Caprini, R. Durrer and G. Servant, _The stochastic gravitational wave background from turbulence and magnetic fields generated by a first-order phase transition_ , _JCAP_ 12 (2009) 024 [0909.0622].
* [44] LISA collaboration, _Laser Interferometer Space Antenna_ , 1702.00786.
* [45] C. Caprini et al., _Science with the space-based interferometer eLISA. II: Gravitational waves from cosmological phase transitions_ , _JCAP_ 04 (2016) 001 [1512.06239].
* [46] T. Robson, N.J. Cornish and C. Liu, _The construction and use of LISA sensitivity curves_ , _Class. Quant. Grav._ 36 (2019) 105011 [1803.01944].
* [47] C. Caprini, D.G. Figueroa, R. Flauger, G. Nardini, M. Peloso, M. Pieroni et al., _Reconstructing the spectral shape of a stochastic gravitational wave background with LISA_ , _JCAP_ 11 (2019) 017 [1906.09244].
* [48] M.S. Turner, E.J. Weinberg and L.M. Widrow, _Bubble nucleation in first order inflation and other cosmological phase transitions_ , _Phys. Rev. D_ 46 (1992) 2384.
# Zero-Shot Uncertainty-Aware Deployment of Simulation Trained Policies on
Real-World Robots
Krishan Rana, QUT Centre for Robotics, Brisbane, <EMAIL_ADDRESS>
Vibhavari Dasagi, QUT Centre for Robotics, Brisbane, <EMAIL_ADDRESS>
Jesse Haviland, QUT Centre for Robotics, Brisbane, <EMAIL_ADDRESS>
Ben Talbot, QUT Centre for Robotics, Brisbane, <EMAIL_ADDRESS>
Michael Milford, QUT Centre for Robotics, Brisbane, <EMAIL_ADDRESS>
Niko Sünderhauf, QUT Centre for Robotics, Brisbane, <EMAIL_ADDRESS>
###### Abstract
While deep reinforcement learning (RL) agents have demonstrated incredible
potential in attaining dexterous behaviours for robotics, they tend to make
errors when deployed in the real world due to mismatches between the training
and execution environments. In contrast, the classical robotics community have
developed a range of controllers that can safely operate across most states in
the real world given their explicit derivation. These controllers however lack
the dexterity required for complex tasks given limitations in analytical
modelling and approximations. In this paper, we propose Bayesian Controller
Fusion (BCF), a novel uncertainty-aware deployment strategy that combines the
strengths of deep RL policies and traditional handcrafted controllers. In this
framework, we can perform zero-shot sim-to-real transfer, where our
uncertainty-based formulation allows the robot to reliably act within out-of-
distribution states by leveraging the handcrafted controller while gaining the
dexterity of the learned system otherwise. We show promising results on two
real-world continuous control tasks, where BCF outperforms both the standalone
policy and controller, surpassing what either can achieve independently. A
supplementary video demonstrating our system is provided at
https://bit.ly/bcf_deploy.
## 1 Introduction
As the adoption of autonomous robotic systems increases around us, there is a
need for the controllers driving them to exhibit the level of sophistication
required to operate in our everyday unstructured environments. Recent advances
in reinforcement learning (RL), coupled with deep neural networks as function
approximators, have shown impressive results across a range of complex robotic
control tasks including dexterous in-hand manipulation [1], quadrupedal
locomotion [10], and targeted throwing [9]. Nevertheless, the lack of safety
guarantees in deep RL-based controllers limits their usability for real-world
robotics [15]. This comes as a result of the black-box nature of neural
network policies and their inability to reliably deal with out-of-distribution
states, particularly seen when simulation-trained models are transferred to
the real world.
On the contrary, the classical robotics community have produced numerous
controllers and algorithms for a range of real-world physical systems (from
mobile robots to humanoids) that allow us to reliably and safely deploy
robotic agents in the real world. These include classical feedback controllers
[25], trajectory generators [16] and behaviour trees [5]. This is attributed
to their explicit analytic derivation and known models, allowing for control
theoretic guarantees which make them suitable for real-world deployment. They
however can be highly suboptimal when applied to complex tasks, due to
limitations in analytical modelling and approximations.
A promising direction for the future of robot control is in combining the
complementary strengths of these different control mechanisms in order to
address their respective limitations. Such approaches have been observed in
neuroscience, as the underlying control strategy used by biological systems.
The dual-process theory of decision making [6] proposes that multiple
different neural controllers are involved when controlling action selection in
biological systems. Lee et al. [23] provide evidence for this theory based on
the human brain and show the existence of an arbitration mechanism that
determines the extent to which the different neural controllers govern
behaviours. The arbitrator bases its selection on specific performance
measures exhibited by each controller, exploiting their respective strengths
in a given state.
We draw inspiration from this observation and present Bayesian Controller
Fusion (BCF), a hybrid control strategy that combines the respective strengths
of deep RL and traditional handcrafted controllers (control priors) for safe
real-world deployment. We formulate the final policy as a Bayesian composition
of these two controllers, where the controller output for each system captures
their respective epistemic uncertainty in a given state as shown in Figure 1.
This allows BCF to naturally arbitrate control between the two systems based
on their confidence to act. This has important implications during real-world
deployment, where we gain the dexterity of the learned system in states that
it has generalised to while relying on the risk-averse behaviours of the
handcrafted system in out-of-distribution states for safe operation.
Importantly, our method learns to control a real robot in joint space to
complete a given task with no on-robot time (zero-shot), even though the
learned policy may not be perfectly transferable from simulation to the real
world.
We demonstrate our approach on two continuous control, robotic tasks involving
reactive navigation on a mobile robot and a manipulability maximising reacher
on a robotic arm. We show how BCF allows us to reliably transfer a simulation
trained policy to the real world while gaining significant performance
improvements from the RL component when compared to the handcrafted system
alone. We see this as a promising and practical avenue to bringing simulation-
trained RL controllers to safely operate in the real world.
Figure 1: Bayesian Controller Fusion (BCF): a hybrid control strategy for safe
deployment on real robotic systems. We derive uncertainty-aware action outputs
for each controller and compose these outputs to better inform the action
selection process.
## 2 Related Work
Safe sim-to-real transfer has been an active research area, particularly in
robotics where the cost of training robots directly in the real world is high.
Many prior works have focused on developing realistic simulation environments
that represent the real world as close as possible [37, 32] or directly build
a training environment from real-world data [4]. While such approaches attempt
to provide realistic environments, there are still a wide range of states and
variations that are not captured by the training environment. Several works
have generated robust policies using domain randomisation, where the agent is
exposed to a wide range of environmental variations allowing it to generalise
to changing environments. In the case of physical interaction with the
environment, dynamics randomisation of simulated robots has also been used to
capture the intricacies of real-world robots and their environments [30].
Recent works have also utilised a meta-RL approach where the agent’s dynamics
are adapted online in the real world [2, 27]. While all these approaches do
produce increasingly robust policies for real-world operation, they still fail
to capture the vast range of potential states and physical intricacies that
the agent may encounter in the real world, limiting their safe operational
space.
As opposed to attempting to replicate the real world within the simulation
environment, recent works have explored the ability of the agent to reason
about the current state and utilise this as a proxy for decision making.
Osband et al. [28] and Gal and Ghahramani [7] first explored the idea of state
uncertainty estimation from neural networks to assist exploration during
training. Kahn et al. [20] extended these ideas to the robotics setting to
enable the agent to predict its state uncertainty as it acts within the real
world. This allowed it to move slowly to avoid high-speed collisions while
increasing its velocity within parts of the space where it had greater
confidence in its actions. In our work, we explore an approach to avoid
collisions altogether by switching to a safer controller. Other works learn a
predictive model for catastrophic states using supervised learning [24], as
well as a "backup policy" to return the agent from a critical state to a safe
regime [13]. The operation of the robot is however constrained to the states
that the predictor was trained on. As opposed to predictive models, we rely on
epistemic uncertainty estimates directly from an ensemble of trained policies
to identify unknown states. This removes the need to learn from labelled
states or the restriction of operation within specific domains. More closely
related to our work, Garcia and Fernández [8] utilise a distance-based risk
estimator to identify out-of-distribution states and utilise this to switch
between the learned policy and safe baseline controller. The switching however
is abrupt and can result in jittery behaviours unsuitable for controlling real
robots. In contrast, we formulate our controller as a composition of
stochastic controllers allowing the resulting hybrid controller to smoothly
interpolate between behaviours.
A growing area of interest is the combination of learning with traditional
control strategies given their inherent safety guarantees. Bansal et al. [3]
decouple the control from the perception module and learn an obstacle-free
way-point finder that can be used by a low-level optimal controller for
navigation. They show that this works well in the sim-to-real setting given
the decomposition, however, the system is heavily reliant on the accuracy of
the way-point prediction model. In our work we consider the uncertainty of our
trained model to better inform the decision process. Rana et al. [31] leverage
the Residual RL framework [18, 33] to learn a dexterous reactive navigation
controller and utilise uncertainty estimates of the residual policy to govern
whether its action output is used to augment the behaviours of the underlying
controller in the real world or not. The abrupt switching behaviour however
resulted in noisy control signals not ideal for continuous control tasks.
Julian et al. [19] learn a range of skill competencies for a task and use a
model predictive control (MPC) approach to forward simulate each of these
skills in the simulation environment before executing the best action in the
real world. While such hybrid controllers enable continuous and safe operation
of learned policies in the real world, they come with a considerable
computational overhead that can be detrimental to real-time operation. Our
Bayesian fusion formulation allows us to directly leverage uncertainty
estimates to govern the best action for execution between controllers at a
given state, allowing for reactive and real-time control.
## 3 Approach
We introduce Bayesian Controller Fusion (BCF), a hybrid control strategy that
composes stochastic action outputs from two separate control mechanisms: an RL
policy $\pi(a|s)$, and a control prior $\psi(a|s)$. These outputs are
formulated as distributions over actions, where each distribution captures the
relative state uncertainty for the system to act. The Bayesian composition of
these two outputs forms our hybrid policy $\phi(a|s)$.
Figure 1 illustrates our hybrid control strategy that composes the outputs
from the learned policy $\pi(a|s)$ and control prior $\psi(a|s)$. The
uncertainty-aware compositional policy $\phi(a|s)$ allows for the safe
deployment of learned controllers. In states of high uncertainty, the
compositional distribution naturally biases towards the reliable, risk-averse
and potentially suboptimal behaviours suggested by the control prior. In
states of lower uncertainty, it biases towards the policy, allowing the agent
to exploit the optimal behaviours discovered by it. This is reminiscent of the
arbitration mechanism suggested by [23] for behavioural control in the human
brain, where the most confident controller assumes control in a given
situation. This dual-control perspective provides a reliable strategy for
bringing RL to real-world robotics, where generalisation to all states is near
impossible and the presence of a risk-averse control prior serves as a
reliable fallback.
### 3.1 Method
Given a policy, $\pi$ and control prior, $\psi$, we can obtain two independent
estimates of an executable action, $a$, in a given state. In a Bayesian
context, we can utilise the normalised product to fuse these estimates under
the assumption of a uniform prior, $p(a)$:
$\displaystyle p\left(a\mid
s,\mathbf{\theta}_{\pi},\mathbf{\theta}_{\psi}\right)=\frac{p\left(s,\mathbf{\theta}_{\pi},\mathbf{\theta}_{\psi}\mid
a\right)p(a)}{p\left(s,\mathbf{\theta}_{\pi},\mathbf{\theta}_{\psi}\right)}.$
(1)
We assume Gaussian distributional outputs from each system and represent
$\pi(a|s)\approx p\left(a\mid s,\theta_{\pi}\right)$ and $\psi(a|s)\approx
p\left(a\mid s,\theta_{\psi}\right)$, where,
$\theta_{\pi}=\{[\mu_{\pi_{1}},...,\mu_{\pi_{n}}]^{\intercal},[\sigma_{\pi_{1}},...,\sigma_{\pi_{n}}]^{\intercal}\}$
and
$\theta_{\psi}=\{[\mu_{\psi_{1}},...,\mu_{\psi_{n}}]^{\intercal},[\sigma_{\psi_{1}},...,\sigma_{\psi_{n}}]^{\intercal}\}$
denote the distribution parameters for the policy and control prior outputs
respectively, and $n$ is the dimensionality of the action space. We drop the
state $s$ to simplify notation.
Assuming statistical independence of $p\left(\mathbf{\theta}_{\pi}\mid
a\right)$ and $p\left(\mathbf{\theta}_{\psi}\mid a\right)$, we can expand our
likelihood estimate, $p\left(\mathbf{\theta}_{\pi},\mathbf{\theta}_{\psi}\mid
a\right)$, as follows:
$\displaystyle p\left(\mathbf{\theta}_{\pi},\mathbf{\theta}_{\psi}\mid
a\right)$ $\displaystyle=p\left(\mathbf{\theta}_{\pi}\mid
a\right)p\left(\mathbf{\theta}_{\psi}\mid a\right)$
$\displaystyle=\frac{p\left(a\mid\mathbf{\theta}_{\pi}\right)p\left(\mathbf{\theta}_{\pi}\right)}{p(a)}\frac{p\left(a\mid\mathbf{\theta}_{\psi}\right)p\left(\mathbf{\theta}_{\psi}\right)}{p(a)}.$
(2)
Substituting this result back into (1), we can simplify the fusion as a
normalised product of the respective action distributions from each control
mechanism:
$\displaystyle
p\left(a\mid\mathbf{\theta}_{\pi},\mathbf{\theta}_{\psi}\right)=\eta\underbrace{p\left(a\mid\mathbf{\theta}_{\pi}\right)}_{\begin{subarray}{c}\text{Policy}\end{subarray}}\underbrace{p\left(a\mid\mathbf{\theta}_{\psi}\right)}_{\begin{subarray}{c}\text{Control}\\\
\text{Prior}\end{subarray}},$ (3)
where
$\displaystyle\eta=\frac{p\left(\mathbf{\theta}_{\pi}\right)p\left(\mathbf{\theta}_{\psi}\right)}{p\left(\mathbf{\theta}_{\pi},\mathbf{\theta}_{\psi}\right)p(a)}.$
(4)
The composite distribution
$p\left(a\mid\mathbf{\theta}_{\pi},\mathbf{\theta}_{\psi}\right)$ forms our
hybrid policy output $\phi(a|s)$. As we approximate the distributional output
from each system to be univariate Gaussian for each action, the composite
distribution $\phi(a|s)$ will also be univariate Gaussian
$\phi(a|s)\sim\mathcal{N}(\mu_{\phi},\sigma^{2}_{\phi})$. As a result, we can
compute the corresponding mean $\mu_{\phi}$ and variance $\sigma^{2}_{\phi}$
for each action as follows:
$\mu_{\phi}=\frac{\mu_{\pi}\sigma_{\psi}^{2}+\mu_{\psi}\sigma_{\pi}^{2}}{\sigma_{\psi}^{2}+\sigma_{\pi}^{2}},$
(5)
$\sigma_{\phi}^{2}=\frac{\sigma^{2}_{\pi}\sigma_{\psi}^{2}}{\sigma_{\psi}^{2}+\sigma_{\pi}^{2}},$
(6)
where this expansion implicitly handles the normalisation constant $\eta$.
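As an illustration of Equations (5) and (6), the following minimal Python sketch fuses the per-action Gaussians from the policy and the control prior; the function and variable names are illustrative, not taken from the paper's codebase:

```python
import numpy as np

def fuse_gaussians(mu_pi, var_pi, mu_psi, var_psi):
    """Normalised product of two univariate Gaussians per action dimension.

    Implements Equations (5) and (6): the composite mean is an
    inverse-variance weighted average, and the composite variance is
    always smaller than either input variance.
    """
    mu_phi = (mu_pi * var_psi + mu_psi * var_pi) / (var_psi + var_pi)
    var_phi = (var_pi * var_psi) / (var_psi + var_pi)
    return mu_phi, var_phi

# Example: a confident prior (small variance) dominates an uncertain policy,
# pulling the fused mean towards the prior's mean.
mu, var = fuse_gaussians(mu_pi=np.array([0.8]), var_pi=np.array([1.0]),
                         mu_psi=np.array([0.1]), var_psi=np.array([0.05]))
```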
### 3.2 Components
In order to leverage our proposed approach in practice, we describe the
derivation of the distributional action outputs for each system below and
provide the complete BCF algorithm for combining these systems in Algorithm 1.
#### 3.2.1 Uncertainty-Aware Policy
We leverage stochastic RL algorithms that output each action as an independent
Gaussian
$\pi^{\prime}(a|s)\sim\mathcal{N}(\mu_{\pi^{\prime}},\sigma_{\pi^{\prime}}^{2})$
where $\mu_{\pi^{\prime}}$ denotes the mean and $\sigma^{2}_{\pi^{\prime}}$
denotes the corresponding variance. This distribution is optimised to reflect
the action which would maximise both the returns from a given state and the
entropy [12]. Such exploration distributions tend to be risk-seeking
and do not capture the state uncertainty of the agent. The latter is a key
component required for our BCF formulation. To attain an uncertainty-aware
distribution, we leverage epistemic uncertainty estimation techniques
suggested in the computer vision literature based on ensemble learning [22].
We train an ensemble of $M$ agents to form a uniformly weighted Gaussian
mixture model, and combine these predictions into a single univariate Gaussian
whose mean and variance are respectively the mean, $\mu_{\pi}(s)$ and
variance, $\sigma^{2}_{\pi}(s)$ of the mixture,
$p(a|s,\theta_{\pi})=M^{-1}\sum_{m=1}^{M}p\left(a|s,\theta_{\pi^{\prime}_{m}}\right)$
as described in [22]. The mean and variance of the mixture
$M^{-1}\sum\mathcal{N}\left(\mu_{\pi^{\prime}_{m}}(s),\sigma_{\pi^{\prime}_{m}}^{2}(s)\right)$
are given by:
$\mu_{\pi}(s)=M^{-1}\sum_{m}\mu_{\pi^{\prime}_{m}}(s)$ (7)
$\sigma_{\pi}^{2}(s)=M^{-1}\sum_{m}\left(\sigma_{\pi^{\prime}_{m}}^{2}(s)+\mu_{\pi^{\prime}_{m}}^{2}(s)\right)-\mu_{\pi}^{2}(s)$
(8)
The empirical variance, $\sigma_{\pi}^{2}(s)$, of the resulting output
distribution, $p(a|s,\theta_{\pi})$ approximates a measure of the policy’s
epistemic uncertainty in a given state for a particular action. This allows
for a broader distribution when presented with unknown states and a tighter
distribution in familiar states. This plays an important role within our BCF
formulation as described previously. Note that we fuse the distributional
policies, as opposed to just their means, to prevent collapse to a deterministic
system once they converge. This allows the agent to continue to explore
alternate actions and identify better solutions.
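A minimal sketch of the moment-matching step in Equations (7) and (8), assuming the ensemble members' means and standard deviations have been stacked into arrays:

```python
import numpy as np

def ensemble_to_gaussian(mus, sigmas):
    """Collapse an ensemble of M Gaussian policy heads into one Gaussian.

    mus, sigmas: arrays of shape (M, n_actions) holding the per-member
    means and standard deviations. Returns the moments of the uniformly
    weighted mixture; the variance includes the spread of the member
    means, i.e. the epistemic term.
    """
    mu = mus.mean(axis=0)                                  # Eq. (7)
    var = (sigmas ** 2 + mus ** 2).mean(axis=0) - mu ** 2  # Eq. (8)
    return mu, var
```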
#### 3.2.2 Control Prior
In order to incorporate the inherently deterministic control priors developed
by the robotics community within our stochastic RL framework, we require a
distributional action output that captures its uncertainty to act in a given
state. As the uncertainty is state-centric, we empirically derive this action
distribution by propagating noise (provided by the known sensor model
variance, $\sigma^{2}_{\text{model}}$) from the sensor measurements through to
the action outputs using Monte Carlo (MC) sampling. By computing the mean,
$\mu_{\psi}$, and variance, $\sigma_{\psi}^{2}$, of the outputs, the distributional
action output, $\mathcal{N}(\mu_{\psi},\sigma_{\psi}^{2})$, for a given state,
$s$, is given by:
$\mu_{\psi}(s)=N^{-1}\sum_{n}a_{\psi}(s_{MC_{n}}),\qquad s_{MC}\sim\mathcal{N}(s,\sigma_{\text{model}}^{2}),$ (9)
$\sigma_{\psi}^{2}(s)=N^{-1}\sum_{n}(a_{\psi}(s_{MC_{n}})-\mu_{\psi}(s))^{2},$
(10)
where $a_{\psi}(\cdot)$ denotes a deterministic action output from the control
prior for a given MC sample and $N$ is the number of sampled states. Given the
inherent robustness to noise of most control priors, we additionally set a
minimum possible standard deviation for the distribution. This prevents the
control prior distribution from collapsing to a deterministic value, rendering
the policy useless within the BCF formulation. The resulting variance for the
control prior distribution is defined as:
$\displaystyle\sigma_{\psi}^{2}(s)=\max\left(N^{-1}\sum_{n}(a_{\psi}(s_{MC_{n}})-\mu_{\psi}(s))^{2},\sigma_{d}^{2}(s)\right).$
(11)
The choice of $\sigma^{2}_{d}$ is left as a hyper-parameter for the user to
set based on the specific controller used and its optimality towards solving
the task.
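A minimal sketch of this MC propagation, Equations (9)-(11); `controller`, `sigma_model`, and `sigma_d` are placeholder names for the deterministic prior, the known sensor noise, and the variance floor:

```python
import numpy as np

def control_prior_distribution(controller, s, sigma_model, sigma_d, n_samples=50):
    """Propagate sensor noise through a deterministic controller.

    controller: deterministic map state -> action (e.g. an APF controller).
    sigma_model: per-dimension sensor noise std from the known sensor model.
    sigma_d: minimum std that stops the distribution collapsing.
    """
    rng = np.random.default_rng()
    # Perturb the state with the sensor model noise and query the controller.
    samples = np.stack([controller(s + rng.normal(0.0, sigma_model, size=s.shape))
                        for _ in range(n_samples)])
    mu = samples.mean(axis=0)                            # Eq. (9)
    var = np.maximum(samples.var(axis=0), sigma_d ** 2)  # Eqs. (10)-(11)
    return mu, var
```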
Algorithm 1: Bayesian Controller Fusion

Given: ensemble of $M$ policies $[\pi^{\prime}_{1},\pi^{\prime}_{2},...,\pi^{\prime}_{M}]$, control prior $\psi$, and minimum variance $\sigma_{d}^{2}$
Input: state $s_{t}$
Output: action $a_{t}$

1. Approximate the policy ensemble predictions as a unimodal Gaussian $\pi(\cdot|s_{t})\sim\mathcal{N}(\mu_{\pi},\sigma_{\pi}^{2})$ as described in Equations (7) and (8).
2. Compute the control prior action distribution $\psi(\cdot|s_{t})\sim\mathcal{N}\left(\mu_{\psi},\sigma_{\psi}^{2}\right)$ as given in Equations (9) and (11).
3. Compute the composite distribution $\phi(\cdot|s_{t})\sim\mathcal{N}(\mu_{\phi},\sigma^{2}_{\phi})$, where $\phi(\cdot|s_{t})=\eta(\pi(\cdot|s_{t})\cdot\psi(\cdot|s_{t}))$, as given in Equations (5) and (6).
4. Select action $a_{t}$ by sampling from the distribution $\phi(\cdot|s_{t})$.
5. Return $a_{t}$.
Given the formulation for the distributional outputs from each system, we
present the complete BCF algorithm for governing action selection both during
training and deployment in Algorithm 1.
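Putting the pieces together, one BCF action-selection step might look as follows; this is a sketch reusing the helper functions from the earlier snippets, and `policy_heads` is an assumed interface in which each ensemble member returns a (mean, std) pair of arrays for a state:

```python
import numpy as np

def bcf_action(policy_heads, controller, s, sigma_model, sigma_d):
    """One BCF action-selection step, following Algorithm 1."""
    mus = np.stack([head(s)[0] for head in policy_heads])
    sigmas = np.stack([head(s)[1] for head in policy_heads])
    mu_pi, var_pi = ensemble_to_gaussian(mus, sigmas)            # step 1: Eqs. (7)-(8)
    mu_psi, var_psi = control_prior_distribution(                # step 2: Eqs. (9), (11)
        controller, s, sigma_model, sigma_d)
    mu_phi, var_phi = fuse_gaussians(mu_pi, var_pi,              # step 3: Eqs. (5)-(6)
                                     mu_psi, var_psi)
    # step 4: sample a_t from the composite Gaussian.
    return np.random.default_rng().normal(mu_phi, np.sqrt(var_phi))
```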
Figure 2: Simulation training environments and real world deployment
environments for (a) PointGoal Navigation and (b) Maximum Manipulability
Reacher tasks. Note the stark discrepancy in obstacle profiles for the
navigation task between the simulation environment and real world
environments.
## 4 Experiments
In this section, we assess the ability of BCF to reliably control real-world
robots with a simulation-trained policy and without any on-robot fine-tuning.
We evaluate our approach on two continuous control robotics tasks: target
driven navigation, and a manipulability maximising reacher task. We provide a
detailed description of these tasks in Appendix A.1. We additionally compare
BCF to its individual learned and handcrafted components in isolation in order
to understand its ability to exploit their respective strengths. We provide
details of the evaluated systems below.
1. Control Prior: The deterministic classical controller derived using analytic methods.
2. SAC: RL policy trained using vanilla Soft Actor-Critic (SAC) [11].
3. BCF: Our proposed hybrid control strategy that combines uncertainty-aware outputs from the control prior and the learned RL policy. The agent was trained using SAC, with Algorithm 1 used for action selection during exploration.
Note that all the policies used in this evaluation were trained to convergence
in the simulation environments. We provide a detailed account for each task in
the following sections.
### 4.1 PointGoal Navigation
In this experiment, we examine whether BCF could overcome the limitations of
an existing reactive navigation controller, in this case, an Artificial
Potential Fields (APF) controller [34, 21], while leveraging this control
prior to safely deal with out-of-distribution states that the policy could
fail in. The APF controller used exhibited suboptimal oscillatory behaviours,
particularly in between obstacles.
Table 1: Evaluation of PointGoal Navigation in the Real World

| Method | Distance Travelled (m), Traj. 1 | Actuation Time (s), Traj. 1 | Distance Travelled (m), Traj. 2 | Actuation Time (s), Traj. 2 |
|---|---|---|---|---|
| Control Prior | 42.3 | 274 | 35.3 | 277 |
| SAC | Fail | Fail | Fail | Fail |
| Move-Base | 62.6 | 263 | 35.8 | 258 |
| BCF | 41.2 | 135 | 30.4 | 117 |
Figure 3: Trajectories taken by the real robot for different start and goal
locations in a cluttered office environment with long narrow corridors. The
trajectory was considered unsuccessful if a collision occurred. The trajectory
taken by BCF is colour coded to represent the uncertainty in the linear
velocity of the trained policy. We illustrate the behaviour of the fused
distributions at key areas along the trajectory. The symbols 1 and 2 indicate
the start locations for each trajectory and G indicates the corresponding goal
locations.
#### 4.1.1 Evaluation
We utilise a GuiaBot mobile robot which is equipped with a 180° laser scanner,
matching that used in the simulation environment. The velocity outputs from
the policies are scaled to a maximum of $0.25\,\mathrm{m/s}$ before execution
on the robot at a rate of 100 Hz. The system was deployed in a cluttered indoor office space
that was previously mapped using the laser scanner. We utilise the ROS AMCL
package to localise the robot within this map and extract the necessary state
inputs for the policy network and control prior. Despite having a global map,
the agent is only provided with global pose information with no additional
information about its operational space. The environment also contained
clutter which was unaccounted for in the mapping process. To enable large
traversals through the office space, we utilise a global planner to generate
target sub-goals for our reactive agents to navigate towards. We report the
distance travelled by each controller and compare them to the distance
travelled by a fine-tuned ROS move-base controller. This controller is not
necessarily the optimal solution but serves as a practical example of a
commonly used controller on the GuiaBot.
The evaluation was conducted on two different trajectories indicated as
Trajectory 1 and 2 in Figure 3 and Table 1. Trajectory 1 consisted of a lab
space with multiple obstacles, tight turns, and dynamic human subjects along
the trajectory, while Trajectory 2 consisted of narrow corridors never seen by
the robot during training. We terminated a trajectory once a collision
occurred and marked the run as a failed attempt. We summarise the results in
Table 1.
Across both trajectories, the standalone SAC agent failed to complete a
trajectory without any collisions, exhibiting sporadic reversing behaviours in
out-of-distribution states. We can attribute these behaviours to its poor
generalisation in such states, given the discrepancies in obstacle profiles
seen during training in simulation and those encountered in the real world as
shown in Figure 2 (a). The control prior was capable of completing all
trajectories; however, it required excessive actuation times. We can attribute this
to its inefficient oscillatory motion when moving through passageways and in
between obstacles. BCF was successful across both trajectories, exhibiting the
lowest actuation times across all methods. This indicates its ability to
exploit the optimal behaviours learned by the agent while ensuring it did not
act sporadically when presented with out-of-distribution states. It also
demonstrates superior results when compared with the fine-tuned ROS move-base
controller.
To gain a better understanding of the reasons for BCF’s success when compared
to the control prior and SAC agent acting in isolation, we examine the
trajectories taken by these systems as shown in Figure 3. The trajectory
attained using BCF is colour-coded to illustrate the uncertainty of the
policy’s actions as given by the outputs of the ensemble. We draw the reader’s
attention to the region marked A, which exhibits higher values of policy
uncertainty. The composition of the respective distributions at this region is
shown within the orange ring. Given the higher policy uncertainty at this
point, the resulting composite distribution was biased more towards the
control prior which displayed greater certainty, allowing the robot to
progress beyond this point safely. We note here that this is the particular
region that the SAC agent failed as shown in Figure 3 (c). The purple ring at
region C illustrates a region of low policy uncertainty with the composite
distribution biased closer towards the policy. Comparing the performance
benefit over the control prior gained in such a case, we draw the readers
attention to regions B and D which show the path profile taken by the
respective agents. The dense darker path shown by the control prior indicates
regions of high oscillatory behaviour and significant time spent at a given
location. On the other hand, we see that BCF does not exhibit this and attains
a smoother trajectory which is attributed to the learned policy having higher
precedence in these regions, stabilising the oscillatory effects of the
control prior. This illustrates the ability of BCF to exploit the relative
strengths of each component throughout deployment.
### 4.2 Maximum Manipulability Reacher
We evaluate the ability of BCF to build upon the basic structure provided by a
Resolved Rate Motion Controller (RRMC) [35] for reaching on a 7-DoF arm
robot, in order to learn a more complex manipulability-maximising reaching
controller. While RRMC provides the policy with the basic knowledge to reach a
goal, the agent has to learn how to modify the individual velocities of each
joint in order to maximise the manipulability of the controller. More details
on the tasks are provided in Appendix A.1.2.
Table 2: Evaluation of Maximum Manipulability Reacher in the Real World

| Method | Average Manipulability | Average Final Manipulability | Success Rate |
|---|---|---|---|
| Control Prior | 0.0629 $\pm$ 0.00926 | 0.0658 $\pm$ 0.0165 | 98.2% |
| SAC | 0.0803 $\pm$ 0.00514 | 0.07812 $\pm$ 0.0150 | 78.6% |
| BCF | 0.0836 $\pm$ 0.0156 | 0.0889 $\pm$ 0.0177 | 98.2% |
Figure 4: Manipulability and uncertainty curves for known and out-of-
distribution goals for the reacher task, deployed on a real robot. The red
cross indicates a failed trajectory.
#### 4.2.1 Evaluation
To ensure that the simulation trained policies could be transferred directly
to a real robot, we matched the coordinate frames of the PyRep simulator [17]
used with the real Franka Emika Panda robot setup shown in Figure 2. The state
and action space were matched with that used in the training environment, with
the actions all scaled down to a maximum of $1.74\,\mathrm{rad\,s^{-1}}$ before publishing them
to the robot at a rate of 100 Hz. The robot was trained with a subset of goals
randomly sampled from the positive x-axis region of its workspace as shown in
Figure 2. We classify goal states sampled from outside this region as out-of-
distribution states.
Table 2 shows the results obtained when evaluating the agent on a random set
of ten different goals sampled from the robot’s entire workspace. We report
the average manipulability across the entire trajectory for all sampled goals
as well as the average final manipulability attained at the end of all
trajectories. We additionally indicate the success rate for each controller to
reach the given goals. In all cases, BCF attains the highest manipulability
and success rate, surpassing both the control prior and the SAC policy. This
additionally illustrates its ability to deal with higher-dimensional action
spaces.
To better understand how BCF attains successful trajectories when compared to
a standalone SAC policy, we take a closer look at the individual trajectories
taken by the robot for goals sampled from both the known and out-of-
distribution regions. From each region, we sampled 3 goals and show the
corresponding manipulability curves of the robot across the trajectory in
Figure 4. For each goal, we additionally plot the policy ensemble uncertainty
estimate used by BCF across the trajectory as indicated by the red curves. As
shown in Figure 4 (a), for the known goals, BCF and the SAC agent both attain
similar performances, maximising the manipulability of the agent across the
trajectory. This is in stark contrast to the control prior which exhibits poor
performance. Note here that while the control prior exhibits poor performance
with regard to manipulability, it is still successful in completing the
reaching task at hand without any failures. It is interesting to note the high
uncertainty of the ensemble at the start of a trajectory, which quickly drops
to a significantly lower value. The high uncertainty is a result of the
multiple possible trajectories that the robot could take at the start, which
quickly narrows down once the robot begins to move. Note that once the policy
ensemble exhibits a lower uncertainty, the corresponding performance of BCF
closely resembles that of the standalone SAC agent, indicating that BCF does
not cripple the optimality of the learned policy.
When evaluating the agents on out-of-distribution goals as shown in Figure 4
(b), BCF plays an important role in ensuring that the robot can successfully
and safely complete the task. Note the higher levels of uncertainty across
these trajectories when compared to the known goals case. In all these cases,
the standalone SAC agent fails to successfully complete a trajectory,
frequently self-colliding or exhibiting random sporadic behaviours. We
indicate these failed trajectories with a red cross in Figure 4 (b). BCF is
seen to closely follow the behaviours of the control prior in states of high
uncertainty, averting it from such catastrophic failures. While the composite
control strategy works well to ensure the safety of the robot, the higher
reliance of the system on the control prior results in suboptimal behaviour
with regard to manipulability. We provide a supplementary video to
demonstrate these behaviours (video demonstration:
https://bit.ly/bcf_deploy). The trade-off between task optimality and the
safety of the robot is an interesting dilemma that BCF attempts to balance
naturally. The fixed variance, $\sigma_{d}^{2}$, chosen for the prior
controller could serve as a tuning parameter to allow the user to control this
trade-off at deployment. A smaller variance would bias the resulting
controller more strongly towards the control prior, yielding more conservative
and suboptimal actions, whereas a larger variance would allow for close to
optimal behaviours at the expense of the robot’s safety. We leave the
exploration of this idea to future work.
## 5 Discussion and Future Work
Building on the large body of work already developed by the robotics community
can greatly help accelerate the use of RL-based systems, allowing us to
develop better controllers for robots as they move towards solving tasks in
the real world. The ideas presented in this paper demonstrate a strategy that
closely couples traditional controllers with learned systems, exploiting the
strengths of each approach in order to attain more reliable and robust
behaviours in the zero-shot sim-to-real setting. We see this as a promising
step towards safely bringing reinforcement learning to real-world robotics.
Across two robotics tasks for navigation and reaching, we show that BCF can
safely deal with out-of-distribution states in the sim-to-real setting without
any fine-tuning, succeeding where a typical standalone policy would fail,
while attaining the optimality of the learned behaviours in known states. In
the navigation domain, we overcome the inefficient oscillatory motion of an
existing reactive navigation controller, decreasing the overall actuation time
during real-world navigation runs by 50.7%. For the reaching task, we show
that our hybrid controller achieves the highest success rate, and improves the
manipulability of an existing reaching controller by 34.9%, a controller
typically difficult to attain using analytical approaches.
While the uncertainty-based compositional policy we derive using BCF does
train with the control prior in the loop, the policy is not directly aware of
the control prior’s presence. This could impact its overall ability to work in
synergy with the control prior at deployment. In future work, we propose to
incorporate the control prior in the Q-value update or alternatively learn a
gating parameter to better inform the fusion process. This should allow the
hybrid controller to operate on more complex tasks as well as interpolate
across multiple behaviours. We are also interested in exploring alternative
state uncertainty estimation techniques for both the control prior and RL that
are faster than the sampling based approaches used in this work. This includes
work from the supervised learning literature for out-of-distribution detection
and distance-based uncertainty estimation techniques [26].
## References
* Andrychowicz et al. [2020] OpenAI: Marcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafal Józefowicz, Bob McGrew, Jakub Pachocki, Arthur Petron, Matthias Plappert, Glenn Powell, Alex Ray, Jonas Schneider, Szymon Sidor, Josh Tobin, Peter Welinder, Lilian Weng, and Wojciech Zaremba. Learning dexterous in-hand manipulation. _The International Journal of Robotics Research_ , 39(1):3–20, 2020. doi: 10.1177/0278364919887447.
* Arndt et al. [2020] Karol Arndt, Murtaza Hazara, Ali Ghadirzadeh, and Ville Kyrki. Meta reinforcement learning for sim-to-real domain adaptation. In _2020 IEEE International Conference on Robotics and Automation (ICRA)_ , pages 2725–2731. IEEE, 2020.
* Bansal et al. [2020] Somil Bansal, Varun Tolani, Saurabh Gupta, Jitendra Malik, and Claire Tomlin. Combining optimal control and learning for visual navigation in novel environments. In Leslie Pack Kaelbling, Danica Kragic, and Komei Sugiura, editors, _Proceedings of the Conference on Robot Learning_ , volume 100 of _Proceedings of Machine Learning Research_ , pages 420–429. PMLR, 30 Oct–01 Nov 2020.
* Bruce et al. [2018] Jacob Bruce, Niko Suenderhauf, Piotr Mirowski, Raia Hadsell, and Michael Milford. Learning deployable navigation policies at kilometer scale from a single traversal. In A Dragan, J Peters, A Billard, and J Morimoto, editors, _Proceedings of Machine Learning Research (PMLR), Volume 87: Conference on Robot Learning 2018_ , pages 346–361. Proceedings of Machine Learning Research, 2018.
* Colledanchise and Ögren [2016] Michele Colledanchise and Petter Ögren. How behavior trees modularize hybrid control systems and generalize sequential behavior compositions, the subsumption architecture, and decision trees. _IEEE Transactions on robotics_ , 33(2):372–389, 2016.
* Dickinson and Balleine [2002] Anthony Dickinson and Bernard Balleine. The role of learning in the operation of motivational systems. _Stevens’ handbook of experimental psychology_ , 2002.
* Gal and Ghahramani [2016] Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In _international conference on machine learning_ , pages 1050–1059, 2016.
* Garcia and Fernández [2012] Javier Garcia and Fernando Fernández. Safe exploration of state and action spaces in reinforcement learning. _Journal of Artificial Intelligence Research_ , 45:515–564, 2012.
* Ghadirzadeh et al. [2017] Ali Ghadirzadeh, Atsuto Maki, Danica Kragic, and Mårten Björkman. Deep predictive policy training using reinforcement learning. In _2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , pages 2351–2358, 2017. doi: 10.1109/IROS.2017.8206046.
* Haarnoja et al. [2018a] Tuomas Haarnoja, Sehoon Ha, Aurick Zhou, Jie Tan, George Tucker, and Sergey Levine. Learning to walk via deep reinforcement learning. _arXiv preprint arXiv:1812.11103_ , 2018a.
* Haarnoja et al. [2018b] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. _arXiv preprint 1801.01290_ , 2018b.
* Haarnoja et al. [2019] Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Soft actor-critic algorithms and applications, 2019.
* Hans et al. [2008] Alexander Hans, Daniel Schneegaß, Anton Maximilian Schäfer, and Steffen Udluft. Safe exploration for reinforcement learning. In _ESANN_ , pages 143–148. Citeseer, 2008.
* Haviland and Corke [2021] Jesse Haviland and Peter Corke. A purely-reactive manipulability-maximising motion controller. _arXiv preprint arXiv:2002.11901_ , 2021.
* Ibarz et al. [2021] Julian Ibarz, Jie Tan, Chelsea Finn, Mrinal Kalakrishnan, Peter Pastor, and Sergey Levine. How to train your robot with deep reinforcement learning: lessons we have learned. _The International Journal of Robotics Research_ , 40(4-5):698–721, Jan 2021. ISSN 1741-3176. doi: 10.1177/0278364920987859.
* Ijspeert [2008] Auke Jan Ijspeert. Central pattern generators for locomotion control in animals and robots: a review. _Neural networks_ , 21(4):642–653, 2008.
* James et al. [2019] Stephen James, Marc Freese, and Andrew J. Davison. Pyrep: Bringing v-rep to deep robot learning. _arXiv preprint arXiv:1906.11176_ , 2019.
* Johannink et al. [2018] Tobias Johannink, Shikhar Bahl, Ashvin Nair, Jianlan Luo, Avinash Kumar, Matthias Loskyll, Juan Aparicio Ojea, Eugen Solowjow, and Sergey Levine. Residual reinforcement learning for robot control. _arXiv preprint arXiv:1812.03201_ , 2018.
* Julian et al. [2020] Ryan C Julian, Eric Heiden, Zhanpeng He, Hejia Zhang, Stefan Schaal, Joseph J Lim, Gaurav S Sukhatme, and Karol Hausman. Scaling simulation-to-real transfer by learning a latent space of robot skills. _The International Journal of Robotics Research_ , 39(10-11):1259–1278, 2020. doi: 10.1177/0278364920944474.
* Kahn et al. [2017] Gregory Kahn, Adam Villaflor, Vitchyr Pong, Pieter Abbeel, and Sergey Levine. Uncertainty-aware reinforcement learning for collision avoidance. _arXiv preprint arXiv:1702.01182_ , 2017.
* Koren and Borenstein [1991] Yoram Koren and Johann Borenstein. Potential field methods and their inherent limitations for mobile robot navigation. In _Proceedings. 1991 IEEE International Conference on Robotics and Automation_ , pages 1398–1404. IEEE, 1991.
* Lakshminarayanan et al. [2017] Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In _Advances in Neural Information Processing Systems_ , pages 6402–6413, 2017.
* Lee et al. [2014] Sang Wan Lee, Shinsuke Shimojo, and John P O’Doherty. Neural computations underlying arbitration between model-based and model-free learning. _Neuron_ , 81(3):687–699, 2014.
* Lipton et al. [2016] Zachary C Lipton, Jianfeng Gao, Lihong Li, Jianshu Chen, and Li Deng. Combating deep reinforcement learning’s sisyphean curse with intrinsic fear. 2016.
* Maxwell [1868] James Clerk Maxwell. I. on governors. _Proceedings of the Royal Society of London_ , (16):270–283, 1868.
* Miller et al. [2021] Dimity Miller, Niko Sunderhauf, Michael Milford, and Feras Dayoub. Class anchor clustering: A loss for distance-based open set recognition. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)_ , pages 3570–3578, January 2021.
* Murooka et al. [2020] Takayuki Murooka, Masashi Hamaya, Felix von Drigalski, Kazutoshi Tanaka, Yoshihisa Ijiri, and Yutaro Konta. Exi-net: Explicitly/implicitly conditioned network for multiple environment sim-to-real transfer. In _CoRL_ , 2020.
* Osband et al. [2018] Ian Osband, John Aslanides, and Albin Cassirer. Randomized prior functions for deep reinforcement learning. _arXiv preprint arXiv:1806.03335_ , 2018.
* Patel and Sobh [2015] Sarosh Patel and Tarek Sobh. Manipulator performance measures-a comprehensive literature survey. _Journal of Intelligent & Robotic Systems_, 77(3-4):547–570, 2015.
* Peng et al. [2018] Xue Bin Peng, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Sim-to-real transfer of robotic control with dynamics randomization. In _2018 IEEE international conference on robotics and automation (ICRA)_ , pages 3803–3810. IEEE, 2018.
* Rana et al. [2020] Krishan Rana, Ben Talbot, Vibhavari Dasagi, Michael Milford, and Niko Sünderhauf. Residual reactive navigation: Combining classical and learned navigation strategies for deployment in unknown environments. In _2020 IEEE International Conference on Robotics and Automation (ICRA)_ , pages 11493–11499. IEEE, 2020.
* Rusu et al. [2017] Andrei A Rusu, Matej Večerík, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, and Raia Hadsell. Sim-to-real robot learning from pixels with progressive nets. In _Conference on Robot Learning_ , pages 262–270. PMLR, 2017.
* Silver et al. [2018] Tom Silver, Kelsey Allen, Josh Tenenbaum, and Leslie Kaelbling. Residual policy learning. _arXiv preprint arXiv:1812.06298_ , 2018.
* Warren [1989] Charles W Warren. Global path planning using artificial potential fields. In _Proceedings, 1989 International Conference on Robotics and Automation_ , pages 316–321. IEEE, 1989.
* Whitney [1969] D. E. Whitney. Resolved motion rate control of manipulators and human prostheses. _IEEE Transactions on Man-Machine Systems_ , 10(2):47–53, June 1969. ISSN 2168-2860. doi: 10.1109/TMMS.1969.299896.
* Yoshikawa [1985] Tsuneo Yoshikawa. Manipulability of Robotic Mechanisms. _The International Journal of Robotics Research_ , 4(2):3–9, 1985. doi: 10.1177/027836498500400201.
* Zhang et al. [2015] Fangyi Zhang, Jürgen Leitner, Michael Milford, Ben Upcroft, and Peter Corke. Towards vision-based deep reinforcement learning for robotic motion control. _arXiv preprint arXiv:1511.03791_ , 2015.
## Appendix A Appendix
### A.1 Task Description
#### A.1.1 PointGoal Navigation
The objective of this task is to navigate a robot from a start location to a
goal location in the shortest time possible, while avoiding obstacles along
the way. We utilise the training environment provided by [31], which consists
of five arenas with different configurations of obstacles. The goal and start
location of the robot are randomised at the start of every episode, each
placed on the extreme opposite ends of the arena (see Figure 2 (a)). This sets
the long horizon nature of the task. As we focus on the sparse reward setting,
we define $r(s_{t},a_{t},s_{t+1})=1$ if
$d_{\text{target}}<d_{\text{threshold}}$ and $r(s_{t},a_{t},s_{t+1})=0$
otherwise, where $d_{\text{target}}$ is the distance between the agent and the
goal and $d_{\text{threshold}}$ is a set threshold. The action $a_{t}$
consists of two continuous values: linear velocity $\nu_{\text{nav}}\in[-1,1]$
and angular velocity $\omega_{\text{nav}}\in[-1,1]$. We assume that the robot
can localise itself within a global map in order to determine its relative
position to a goal location. The 180° laser scan range data is divided into 15
bins and concatenated to the robot’s angle. The overall state $s_{t}$ of the
environment is comprised of:
* The binned laser scan data $l_{\text{bin}}\in\mathbb{R}^{15}$,
* The pose error between the robot’s pose and the goal location $e_{t}\in\mathbb{R}^{2}$,
* The previously executed linear and angular velocity $a_{t-1}\in\mathbb{R}^{2}$,
for a total of 19 dimensions. The length of each episode is set to a maximum
of 500 steps and does not terminate once the goal is achieved.
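A minimal sketch of this sparse reward; the threshold value used here is illustrative, as the paper only specifies it as a set threshold:

```python
import numpy as np

def navigation_reward(robot_xy, goal_xy, d_threshold=0.3):
    """Sparse PointGoal reward: 1 within the goal threshold, 0 otherwise.

    d_threshold is a placeholder value, not taken from the paper.
    """
    d_target = np.linalg.norm(np.asarray(goal_xy) - np.asarray(robot_xy))
    return 1.0 if d_target < d_threshold else 0.0
```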
#### A.1.2 Manipulability Maximising Reacher
The objective of this task is to actuate each joint of a manipulator within a
closed-loop velocity controller such that the end-effector moves towards and
reaches the goal point, while the manipulability of the manipulator is
maximised. The manipulability index describes how easily the manipulator can
achieve any arbitrary velocity. The ability of the manipulator to achieve an
arbitrary end-effector velocity is a function of the manipulator Jacobian.
While there are many methods that seek to summarise this, the manipulability
index proposed by [36] is the most used and accepted within the robotics
community [29]. Utilising Jacobian-based indices in existing controllers has
several limitations: it requires greater engineering effort than simple
inverse-kinematics-based reaching systems, and precise tuning in order to ensure the
system is operational [14]. We explore the use of RL to learn such behaviours
by leveraging simple reaching controllers as priors. We utilise the PyRep
simulation environment [17], with the Franka Emika Panda as our manipulator as
shown in Figure 2 (b). For this task, we generate a random initial joint
configuration and random end-effector goal pose. We use a sparse goal reward,
$r(s_{t},a_{t},s_{t+1})=1$ if $e_{t}<e_{\text{threshold}}$ and
$r(s_{t},a_{t},s_{t+1})=m$ otherwise, where $e_{t}$ is the spatial
translational error between the end-effector and the goal,
$e_{\text{threshold}}$ is a set threshold and
$m=\sqrt{\mbox{det}\left(J(q)J(q)^{\top}\right)}\ \in[0,\infty)$ (12)
is the manipulability of the robot at the particular joint configuration $q$
where $J(q)$ is the manipulator Jacobian. The action space consists of the
manipulator joint velocities $\dot{\it{q}}\in[-1,1]^{n}$, where the values are
continuous, and $n$ is the number of joints within the manipulator. In this
work, the manipulator used consists of 7 joints. The state, $s_{t}$, of the
environment is comprised of:
* The joint coordinate vector $\it{q}\in\mathbb{R}^{7}$,
* The joint velocity vector $\dot{\it{q}}\in\mathbb{R}^{7}$,
* The translation error between the manipulator’s end-effector and the goal $e_{t}(\it{q})\in\mathbb{R}^{3}$,
* The end-effector translation vector $e_{p}(\it{q})\in\mathbb{R}^{3}$,
for a total of 20 dimensions. Similar to the navigation task, the episode
length for this task is fixed at 1000 steps and only terminates at the end of
the episode.
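A minimal sketch of the manipulability index in Equation (12); `J` stands for the manipulator Jacobian at the current joint configuration:

```python
import numpy as np

def manipulability(J):
    """Yoshikawa manipulability index m = sqrt(det(J J^T)), Equation (12).

    J: the 6 x n manipulator Jacobian (n = 7 for the Panda used here).
    """
    return np.sqrt(np.linalg.det(J @ J.T))
```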
# Dissecting kinetically coupled quintessence:
phenomenology and observational tests
Elsa M. Teixeira<EMAIL_ADDRESS>School of Mathematics and
Statistics, University of Sheffield, Hounsfield Road, Sheffield S3 7RH, United
Kingdom Bruno J. Barros<EMAIL_ADDRESS>Cosmology and Gravity Group,
Department of Mathematics and Applied Mathematics, University of Cape Town,
Rondebosch 7700, Cape Town, South Africa Vasco M. C. Ferreira
<EMAIL_ADDRESS>Instituto de Astrofísica e Ciências do Espaço,
Universidade do Porto, CAUP, Rua das Estrelas, PT4150-762 Porto, Portugal
Noemi Frusciante<EMAIL_ADDRESS>Dipartimento di Fisica “E.
Pancini”, Università degli Studi di Napoli “Federico II”, Compl. Univ. di
Monte S. Angelo, Edificio G, Via Cinthia, I-80126, Napoli, Italy
###### Abstract
We investigate an interacting dark energy model which allows for the kinetic
term of the scalar field to couple to dark matter via a power-law interaction.
The model is characterised by scaling solutions at early times, which are of
high interest to alleviate the coincidence problem, followed by a period of
accelerated expansion. We discuss the phenomenology of the background
evolution and of the linear scalar perturbations and we identify measurable
signatures of the coupling in the dark sector on the cosmic microwave
background, the lensing potential auto-correlation and the matter power
spectra. We also perform a parameter estimation analysis using data of cosmic
microwave background temperature, polarisation and lensing, baryonic acoustic
oscillations and supernovae. We find that the strength of the coupling between
the dark sectors, regulated by the parameter $\alpha$, is constrained to be of
order $10^{-4}$. A model selection analysis does not reveal a statistical
preference between $\Lambda$CDM and the Kinetic model.
###### Contents
1. I Introduction
2. II Theory
1. II.1 The Kinetic model
2. II.2 Background equations
3. II.3 Linear cosmological perturbations
4. II.4 The parameter space
3. III Phenomenology of the Kinetic model
1. III.1 Background evolution
2. III.2 Cosmological observables
4. IV Cosmological constraints and model selection analysis
1. IV.1 Data sets
2. IV.2 Cosmological bounds
5. V Conclusions
6. A Synchronous gauge
## I Introduction
The existence of dark energy (DE) and dark matter (DM) is supported by
multiple cosmological observations, though their nature still remains unknown.
The former is postulated as a repulsive force acting on the largest scales,
needed to explain the late time cosmic acceleration, whereas the latter is a
non-baryonic matter component, responsible for the formation and evolution of
large scale structures in the Universe. The Standard Model of Cosmology, known
as $\Lambda$-cold dark matter ($\Lambda$CDM), is based on General Relativity
(GR) and includes a cosmological constant, $\Lambda$, as the simplest model of
DE and a cold dark matter (CDM) component as weakly interacting non-
relativistic particles. In this base scenario, it is assumed that the two dark
components do not directly couple to each other. Although it provides a fairly
accurate description of the Universe, there are some unexplained theoretical
and observational conundrums that indirectly affect the $\Lambda$CDM model
Weinberg (1989, 2000); Martin (2012). Such is the case of the Cosmological
Constant problem or the need for a primordial inflationary period.
Observational tensions, if not stemming from systematics, pose an additional
challenge Abdalla _et al._ (2022), namely concerning the mismatch in the
estimation of the values of the Hubble constant, $H_{0}$ Aghanim _et al._
(2020a); Riess _et al._ (2019); Wong _et al._ (2020); Riess _et al._
(2021); Pesce _et al._ (2020), and the amplitude of the matter power spectrum
at present time, $\sigma_{8}$ Heymans _et al._ (2021); Di Valentino _et al._
(2021); Asgari _et al._ (2021), when using high- and low-redshift data from
different surveys. These shortcomings might signal the need to go beyond the
vanilla $\Lambda$CDM model Saridakis _et al._ (2021).
Promoting DE to a dynamical scalar field is an enticing approach to extend
$\Lambda$CDM and still achieve the late-time accelerated expansion. Recent
experimental advancements in particle physics have led to the detection of a
Higgs-like particle Aad _et al._ (2012); Chatrchyan _et al._ (2012) and
scalar fields also comprise the most promising proposal to solve the early
Universe trinity puzzle (i.e. the horizon, flatness, and magnetic-monopole
problems) Guth (1981); Linde (1982); Starobinsky (1982). The quintessence
model Wetterich (1995); Caldwell _et al._ (1998); Chiba (1999) was the first
attempt to include a scalar degree of freedom, $\phi$, portraying a time-
varying DE component with dynamics assigned by the form of the potential,
$V(\phi)$, and its kinetic term,
${X=-\partial_{\mu}\phi\partial^{\mu}\phi}/2$.
In particular, it should resemble the cosmological constant at late times,
that is, its negative pressure must have a magnitude close to its energy
density, $p_{\phi}\approx-\rho_{\phi}$, while not revealing effective
clustering properties at small scales. One appealing feature of this theory
(or, more generally, of scalar-tensor theories) lies in obtaining, under
particular conditions, scaling solutions Wetterich (1995); Copeland _et al._
(1998); Ferreira and Joyce (1998); Liddle and Scherrer (1999); Barreiro _et
al._ (2000); Amendola (2000); Guo _et al._ (2003a, b); Chimento _et al._
(2003); Tsujikawa and Sami (2004); Piazza and Tsujikawa (2004); Pettorino _et
al._ (2005); Amendola _et al._ (2006); Ohashi and Tsujikawa (2009); Gomes and
Amendola (2014); Chiba _et al._ (2014); Amendola _et al._ (2014);
Albuquerque _et al._ (2018); Frusciante _et al._ (2018); Amendola _et al._
(2018); Barros (2019); Albuquerque _et al._ (2022); Abdalla _et al._ (2022);
Pace and Frusciante (2022). These are characterised by a constant ratio
between the energy density of the matter components and that of the scalar
field. In this case the DE contribution remains hidden throughout the
radiation and matter domination eras, despite allowing the energy density of
the scalar field to be of the same order of magnitude as these components.
This mechanism is relevant in addressing the cosmic coincidence problem Zlatev
_et al._ (1999); Velten _et al._ (2014), namely why the magnitude of the
energy densities for matter and DE are comparable at present, while still
preserving compatibility with the energy scale associated with particle
physics. Accordingly, here we will focus on a specific model that already
revealed to feature scaling solutions Barros (2019).
In this work we are interested in exploring a setting in which the scaling
regime is achieved through a “fifth-force” acting on DM particles, induced by
a quintessence field. An effective field theory formulation of such
phenomenological interaction can be set at the level of the action and
provides a fully covariant way to construct theoretically viable models
Tamanini (2015), thus avoiding the propagation of unphysical modes on large
scales Valiviita _et al._ (2008). One such approach consists of considering
the presence of a field dependent function $f\left(\phi\right)$ multiplying
the CDM Lagrangian, $\mathcal{L}_{c}$, in the total action, that is, a
coupling of the form $f(\phi)\mathcal{L}_{c}$ Koivisto (2005). Recently, this
formulation was generalised to accommodate interactions of the matter sector
with the kinetic term of the scalar as well, through a functional form
$f\left(\phi\,,X\right)\mathcal{L}_{c}$ Barros (2019). Lagrangian-based models
have been further explored in the context of the Schutz-Sorkin action Schutz
(1970); Schutz and Sorkin (1977); Brown (1993), allowing for the inclusion of
interaction terms depending on single derivatives of the scalar field in the
action for CDM Pourtsidou _et al._ (2013); Boehmer _et al._ (2015a, b).
Along similar lines, in Ref. Kase and Tsujikawa (2020) the energy exchange is
achieved via two terms of the form $f_{1}(\phi,X)\rho_{c}(n_{c})$ and
$f_{2}(n_{c},\phi,X)J_{c}^{\mu}\partial_{\mu}\phi$, where $\rho_{c}$ and
$n_{c}$ are the energy density and number density of CDM, respectively, and
$J_{c}^{\mu}$ is a vector field related to the CDM four-velocity Kase and
Tsujikawa (2020). In the presence of a $f(\phi)$-coupling, scaling solutions
have been shown to exist for quintessence with an exponential potential
Amendola (1999, 2000). Likewise, general forms of the Lagrangian allowing for
scaling behaviour given either a constant or field-dependent interaction, have
been derived for k-essence Piazza and Tsujikawa (2004); Tsujikawa and Sami
(2004); Tsujikawa (2006); Amendola _et al._ (2006) and scalar-tensor theories
such as Horndeski Gomes and Amendola (2014, 2016); Amendola _et al._ (2018);
Frusciante _et al._ (2018) and quadratic-order degenerate higher-order
scalar-tensor theory Frusciante _et al._ (2019a).
Setting cosmological constraints on the interaction between a scalar field and
DM has been the subject of many investigations Amendola and Quercellini
(2003); Pettorino and Baccigalupi (2008); Bean _et al._ (2008); Pettorino
_et al._ (2012); Pettorino (2013); Xia (2013); Ade _et al._ (2016); van de
Bruck _et al._ (2017); Pourtsidou and Tram (2016); Van De Bruck and Mifsud
(2018); Barros _et al._ (2019); Agrawal _et al._ (2021); Gómez-Valent _et
al._ (2020); Pan _et al._ (2020); da Fonseca _et al._ (2022); Archidiacono
_et al._ (2022). A well tested class of proposals is the coupled DE model in
which DM particles interact with the scalar field due to a $\phi$-dependent
mass, characterised by a constant coupling strength $\beta$. This parameter
has been constrained to be $\beta=0.036\pm 0.016$ (Planck13 + WMAP + baryon
acoustic oscillations (BAO)), deviating from the vanishing interaction case at
$2.2\sigma$, and $\beta=0.066\pm 0.018$ when including polarisation, with
increasing significance at 3.6$\sigma$ Pettorino (2013). Similar results were
reported more recently by the Planck collaboration, also showing a tension at $\sim
2.5\sigma$ with $\Lambda$CDM when Planck15 + BAO + Supernovae Ia + $H_{0}$
data are considered Ade _et al._ (2016), and in Refs. van de Bruck _et al._
(2017); Barros _et al._ (2019); Gómez-Valent _et al._ (2020) resorting to
more recent data sets. Additionally, it has been realised that such a constant
coupling can remove the $\sigma_{8}$ tension if the background is assumed to
be identical to the $\Lambda$CDM one Barros _et al._ (2019). Moreover in Ref.
Bean _et al._ (2008) the authors provided cosmological bounds for a variety
of models, which differ from each other through the form of the nontrivial
coupling between the DM and the quintessence field. The strength of the
coupling was constrained to be less than 7% of the coupling to gravity.
Let us remark that non-minimal couplings of the DE field to other matter
components have also been explored, e.g. to massive neutrinos Afshordi _et
al._ (2005); Brookfield _et al._ (2006), to baryons Aviles and Cervantes-Cota
(2011) or to the electromagnetic field Carroll (1998); Chiba and Kohri (2002).
A universal coupling has also been investigated and its magnitude is tightly
constrained through Solar System experiments Hui _et al._ (2009); Creminelli
_et al._ (2014). Therefore such couplings are often chosen to be minimal, i.e.
there is no additional coupling of the matter fields to the scalar curvature,
thus motivating the choice of a direct coupling between the dark species only.
In this work we explore the model presented in Ref. Barros (2019), in which a
purely kinetic coupling between the quintessence field and DM is considered.
This coupling is expressed in terms of a power law interaction function,
$f\propto X^{\alpha}$, with $\alpha$ being a constant parameter quantifying
the strength of the interaction. Hereafter this will be referred to as the
Kinetic model. At a more fundamental level, the low-energy limit of a scalar
field theory with a shift symmetry only allows for kinetic couplings to matter
Brax and Valageas (2017), where the scalar field is identified as the
Goldstone mode of the broken symmetry. Although in the literature it is much
more natural to consider a universal coupling, such as in dilaton gravity
Damour and Polyakov (1994), it is possible to construct a specific (non-
universal) interaction with an individual matter source Damour _et al._
(1990) or it can even naturally emerge in an effective description of a
fundamental theory, such as Type II string theory Koivisto _et al._ (2014).
The toy model considered in the present work also allows for scaling
solutions at early times Barros (2019), already found to be fruitful to tackle
the cosmic coincidence problem. The specific kinetic power law coupling here
assumed was employed in the literature to couple quintessence to
electromagnetism Barros and da Fonseca (2022) inducing a time variation on the
fine-structure constant. The authors showed that the theory encapsulates a
plethora of new analytical coupled solutions motivated by the dark energy
kinematics. We remark that kinetically coupled models have never been fully
explored in terms of theoretical predictions at linear order in perturbations
and, as such, cosmological bounds on the parameters are not present in the
literature. In this work we present this kind of analysis for the first time,
by comparing the theoretical predictions to the $\Lambda$CDM model for the
temperature-temperature (TT) power spectrum, lensing potential auto-
correlation power spectrum and matter power spectrum. These are then used to
provide cosmological constraints by means of Markov Chain Monte Carlo (MCMC)
methods. For this purpose we resort to large sets of data including
measurements of the background expansion of the Universe, temperature
fluctuations power spectra and those of gravitational potentials.
The manuscript is organised as follows. We lay down the theoretical framework
in Sec. II: the Kinetic model is introduced in Sec. II.1 and the explicit
equations of motion for the background dynamics and linear scalar
perturbations in the Newtonian gauge are presented in Sec. II.2 and Sec. II.3,
respectively; in Sec. II.4 we discuss the parameter space of the model in
order to guarantee its theoretical viability. In Sec. III we focus on the
cosmological properties of the Kinetic model, exploring the signatures left by
the dark coupling on the background expansion in Sec. III.1, and on the
relevant cosmological observables in Sec. III.2. In Sec. IV we
present the observational constraints on the free cosmological and model
parameters along with a model selection analysis. Finally, we summarise our
findings in Sec. V. Appendix A provides the linear perturbation equations for
the Kinetic model in the Synchronous gauge as well.
## II Theory
In this Section we will present the theoretical formulation of the kinetically
coupled dark energy model in consideration. We present the covariant
formulation and the corresponding equations in Section II.1, followed by the
background evolution and the framework for linear scalar perturbations in
Sections II.2 and II.3, respectively. We then discuss the parameter space in
Section II.4.
### II.1 The Kinetic model
Let us start by considering a phenomenological theory minimally coupled to
gravity in the Einstein frame, where the dark energy source is portrayed by a
dynamical quintessence field, $\phi$, interacting with a dark matter component
via the action Barros (2019),
$\mathcal{S}=\int{\rm
d}^{4}x\sqrt{-g}\left[\frac{\text{M}_{\text{Pl}}^{2}}{2}R+X-V(\phi)+{f}(X)\tilde{\mathcal{L}}_{c}(\zeta,g_{\mu\nu})+\mathcal{L}_{\text{SM}}(\psi_{i},g_{\mu\nu})\right]\,,$
(1)
where $g$ denotes the determinant of the metric tensor, $g_{\mu\nu}$, $R$ is
the curvature scalar and ${\text{M}_{\text{Pl}}^{2}=(8\pi G)^{-1}}$ is the
Planck mass in units of $c=1$, with $G$ being the Newtonian constant. The
second and third terms in the action denote the scalar field Lagrangian, in
which ${X=-g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi}/2$ stands for the
kinetic term of $\phi$ and $V(\phi)$ is the scalar self-interacting potential.
In this work we extend the conventional quintessence formulation by taking a
purely kinetic function, $f(X)$ multiplying the Lagrangian of cold dark
matter, $\tilde{\mathcal{L}}_{c}$, which mediates a coupling of $\phi$ to the
dark matter field $\zeta$. Finally,
$\mathcal{L}_{\text{SM}}(\psi_{i},g_{\mu\nu})$ denotes a collective
representation of Lagrangians of the uncoupled standard model fields,
$\psi_{i}$.
Variation of the action in Eq. (1) with respect to the metric $g^{\mu\nu}$
yields the following field equations
$\text{M}_{\text{Pl}}^{2}G_{\mu\nu}=T^{(\phi)}_{\mu\nu}+T^{(c)}_{\mu\nu}+T^{(b)}_{\mu\nu}+T^{(r)}_{\mu\nu}\,,$
(2)
with $G_{\mu\nu}$ being the Einstein tensor and $T^{(i)}_{\mu\nu}$ the energy
momentum tensor for the $i$th species, defined as:
$T^{(i)}_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}\mathcal{L}_{i}\right)}{\delta
g^{\mu\nu}}\,,$ (3)
where $i=\phi,c,b,r$ and $c$ denotes the cold dark matter, $b$ the baryons and
$r$ the radiation. Let us note that, for the previous definition to be valid
for all the fluids present in theory, we define an effective dark matter
Lagrangian as follows Barros (2019); Koivisto (2005); Kase and Tsujikawa
(2020)
${\mathcal{L}_{c}\equiv f(X)\tilde{\mathcal{L}}_{c}},$ (4)
incorporating the effect of the coupling. We proceed by considering that all the
matter components in the theory can be modelled as perfect fluids, with energy
density $\rho_{i}$, pressure $p_{i}$, and equation of state (EoS) parameter
$w_{i}=p_{i}/\rho_{i}$. Therefore, the energy momentum tensor of each $i$th
species becomes fully defined in terms of the fluid variables:
$T^{(i)}_{\mu\nu}=\rho_{i}\left[\left(1+w_{i}\right)u^{(i)}_{\mu}u^{(i)}_{\nu}+w_{i}g_{\mu\nu}\right]\,,$
(5)
with $u^{(i)}_{\mu}$ being the 4-velocity vector associated with the $i$th
species, under the individual constraint
$g^{\mu\nu}{u^{(i)}_{\mu}u^{(i)}_{\nu}=-1}$. Regarding the EoS parameter, we
have, $w_{r}=1/3$ for radiation, and $w_{b}=w_{c}=0$ for baryons and cold dark
matter, respectively. In view of these considerations, the dark matter
Lagrangian takes the particular form Koivisto (2005); Avelino and Sousa
(2018),
$\mathcal{L}_{c}=-\rho_{c}\,.$ (6)
The scalar field admits a perfect fluid description as well Faraoni (2012),
provided that
$u^{(\phi)}_{\mu}=-\frac{\partial_{\mu}\phi}{\sqrt{2X}}\,,$ (7)
and $X>0$, where the energy density and pressure associated to the
quintessence field are given by:
$\rho_{\phi}=X+V\,,$ (8)
$p_{\phi}=X-V\,.$ (9)
The scalar field EoS parameter is
$w_{\phi}=p_{\phi}/\rho_{\phi}\,.$ (10)
The equation of motion for the quintessence field, or simply the Klein-Gordon
equation, is obtained through variation of the action in Eq. (1) with respect
to $\phi$ and reads:
$\square\phi-V_{,\phi}=-Q\,,$ (11)
with ${V_{,\phi}=\mathrm{d}V/\mathrm{d}\phi}$. The term on the right-hand side
of Eq. (11) includes the interaction in the dark sector in terms of $f(X)$
Barros (2019), and may be expressed as
$\displaystyle Q$ $\displaystyle=$
$\displaystyle-\mathcal{L}_{c}\left\\{\frac{{f}_{,X}}{{f}}\left[\square\phi+\partial^{\mu}\phi\left(\frac{\nabla_{\mu}\mathcal{L}_{c}}{\mathcal{L}_{c}}+\frac{{f}_{,X}}{{f}}\,\partial_{\alpha}\phi\nabla_{\mu}\partial^{\alpha}\phi\right)\right]-\frac{{f}_{,XX}}{{f}}\,\partial^{\mu}\phi\partial_{\alpha}\phi\left(\nabla_{\mu}\partial^{\alpha}\phi\right)\right\\}\,,$
(12)
where ${f_{,X}\equiv\mathrm{d}f/\mathrm{d}X}$ and
${f_{,XX}\equiv\mathrm{d}^{2}f/\mathrm{d}X^{2}}$. The uncoupled case ($Q=0$)
is naturally recovered when $f$ is a constant function. Let us note that Eq.
(11) could likewise be found through the contracted Bianchi identities,
yielding the following conservation relations,
$\nabla_{\mu}T^{(c)}{}^{\mu}{}_{\nu}=-\nabla_{\mu}T^{(\phi)}{}^{\mu}{}_{\nu}=Q\nabla_{\nu}\phi\,.$
(13)
These equations illustrate clearly the energy transfer between the scalar
field and DM when $f$ is not a constant, meaning that the dark components are
not individually conserved. However, since radiation and baryons remain non-
interacting, i.e.,
$\nabla_{\mu}T^{(r)}{}^{\mu}{}_{\nu}=\nabla_{\mu}T^{(b)}{}^{\mu}{}_{\nu}=0\,,$
(14)
then, consistently, the overall energy momentum tensor of the theory is
conserved, rendering the total action covariant.
In this work, we will focus on the case of a power-law interaction, motivated
in Barros (2019), and parameterised by the function
$f(X)=\left(\text{M}_{\text{Pl}}^{-4}\,X\right)^{\alpha}\,,$ (15)
where $\alpha$ is a dimensionless constant. Therefore Eq. (12) becomes
$Q=-\rho_{c}\frac{\alpha}{X}\left(\square\phi+\frac{\partial^{\mu}\phi\partial_{\nu}\phi\nabla_{\mu}\partial^{\nu}\phi}{X}+\partial^{\mu}\phi\frac{\partial_{\mu}\rho_{c}}{\rho_{c}}\right)\,.$
(16)
From Eq. (16) it is straightforward to conclude that the parameter $\alpha$
governs the strength of the coupling within the dark sector. Additionally, we
fully specify the model by considering the case of an exponential potential,
that is,
$V(\phi)=V_{0}\mathup{e}^{-\lambda\phi/\text{M}_{\text{Pl}}}\,,$ (17)
where $V_{0}$ is the energy scale of the potential (a constant with dimensions of mass$^{4}$), and $\lambda$ is a dimensionless parameter setting the steepness of the potential. The particular choices in Eqs. (15) and (17) are motivated
by the possibility of having a scaling regime at early times, which is then
followed by a period of accelerated expansion driven by $\phi$ Barros (2019).
In terms of a dynamical systems analysis, the kinetic coupling is indeed
responsible for the emergence of two novel critical points corresponding to
scaling solutions. Finally, the role of the exponential potential is to drive
the evolution of the system out of this scaling regime and towards the late
time attractor.
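For later numerical reference, the two model functions in Eqs. (15) and (17) can be written down directly; a minimal Python sketch, in units where $\text{M}_{\text{Pl}}=1$ and with purely illustrative (not fiducial) parameter values, which the background sketches in the next subsection build upon:

```python
import numpy as np

M_PL = 1.0                           # reduced Planck mass (units M_Pl = 1)
alpha, lam, V0 = 0.03, 0.2, 1e-7     # illustrative values, not the fiducial ones

def f(X):
    """Kinetic coupling function, Eq. (15)."""
    return (X / M_PL**4) ** alpha

def V(phi):
    """Exponential quintessence potential, Eq. (17)."""
    return V0 * np.exp(-lam * phi / M_PL)

def V_phi(phi):
    """dV/dphi, as it enters the Klein-Gordon equation (11)."""
    return -(lam / M_PL) * V(phi)
```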
We conclude this section by remarking that the theory described by the action
Eq. (1) is mathematically equivalent (namely it reproduces the same field
equations and thus leads to equivalent cosmological dynamics) to that of the
following scalar-tensor theory in the Einstein frame Brax and Valageas (2017)
$\mathcal{S}=\int{\rm
d}^{4}x\sqrt{-g}\left[\frac{\text{M}_{\text{Pl}}^{2}}{2}R+X-V(\phi)+\mathcal{L}_{i}(\psi_{i},g_{\mu\nu})\right]+\mathcal{S}_{c}\left[\tilde{g}_{\mu\nu}(X),\zeta\right]\,,$
(18)
where $\tilde{g}_{\mu\nu}$ is the Jordan frame metric. However, it is worth
noting that their physical interpretation differs: while in the action (1) the
coupling is imposed directly through $f$, in the action (18) the metric $\tilde{g}_{\mu\nu}(X)$ defines the coupling a posteriori. In order for both theories
to give rise to the same cosmological physics, the two metrics must be
conformally related by the following Weyl scaling
$\tilde{g}_{\mu\nu}=f^{2}(X)g_{\mu\nu}\,,$ (19)
with conformal factor given by the square of $f$. Note that the square term
automatically guarantees the signature of the metric to be preserved in both
the Jordan and Einstein frames. These conformally coupled theories can also be
written in terms of a non-minimal coupling to matter in the Einstein frame
Pettorino and Baccigalupi (2008); Amendola (1999); Damour _et al._ (1990);
nevertheless it is most common to assume a sole field dependence, i.e.,
${\tilde{g}_{\mu\nu}=\Omega(\phi)g_{\mu\nu}}$ Teixeira _et al._ (2019);
Pettorino and Baccigalupi (2008); Barros _et al._ (2019). The mapping between
the different formulations still applies as we have assumed in Eq. (6) that
the cold dark matter on-shell Lagrangian can be described by its trace Avelino
and Azevedo (2022); Ferreira _et al._ (2020); Avelino and Azevedo (2018),
$T^{c}$, more generally,
$\mathcal{L}_{c}=T^{c}\equiv g^{\mu\nu}T^{c}_{\mu\nu}\,.$ (20)
If a different form for the nature of the cold dark matter Lagrangian had been
adopted, departing from the perfect fluid description, then the relation in
Eq. (20) might not hold, in which case the mapping between the theories would
break down. Notice that the same power law coupling, ${f\propto X^{\alpha}}$,
was considered in Ref. Barros and da Fonseca (2022) to couple a quintessence
field to Maxwell’s electromagnetism. It was found that the model dynamics were
mathematically equivalent to a disformally coupled theory. The reason for this
stems from the fact that radiation is conformally invariant (since it has a
vanishing energy-momentum trace ${T^{r}=0}$) thus one needs to consider a more
general Weyl scaling such as to induce an interaction at the level of the
field equations. This clearly shows that the correspondence between the
theories strongly depends on the nature of the matter fields one wishes to
couple the scalar source to.
### II.2 Background equations
For what concerns the cosmological background dynamics, let us assume a flat
Friedmann–Lemaître–Robertson–Walker (FLRW) metric, expressed in terms of the
conformal time $\tau$, as
$\mathrm{d}s^{2}=a(\tau)^{2}\left(-\mathrm{d}\tau^{2}+\delta_{ij}\mathrm{d}x^{i}\mathrm{d}x^{j}\right)\,,$
(21)
where $a\equiv a(\tau)$ is the scale factor of the Universe.
The equations governing the background evolution can be derived from Eq. (2),
more precisely the modified Friedmann equation and the conservation relations,
Eqs. (11), (13) and (14), which become,
$\displaystyle 3\text{M}_{\text{Pl}}^{2}\mathcal{H}^{2}$ $\displaystyle=$
$\displaystyle a^{2}(\rho_{c}+\rho_{b}+\rho_{r}+\rho_{\phi})\,,$ (22)
$\displaystyle\phi^{\prime\prime}+2\mathcal{H}\phi^{\prime}+a^{2}V_{,\phi}$
$\displaystyle=$ $\displaystyle a^{2}Q\,,$ (23)
$\displaystyle\rho_{c}^{\prime}+3\mathcal{H}\rho_{c}$ $\displaystyle=$
$\displaystyle-Q\phi^{\prime}\,,$ (24)
$\displaystyle\rho^{\prime}_{b}+3\mathcal{H}\rho_{b}$ $\displaystyle=$
$\displaystyle 0\,,$ (25)
$\displaystyle\rho^{\prime}_{r}+4\mathcal{H}\rho_{r}$ $\displaystyle=$
$\displaystyle 0\,,$ (26)
where a prime is used to refer to derivatives with respect to conformal time,
$\mathcal{H}=a^{\prime}/a$ is the Hubble rate in conformal time, and the
coupling term in Eq. (16) may now be written as:
$Q=2\alpha\rho_{c}\frac{3\mathcal{H}\phi^{\prime}+a^{2}V_{,\phi}}{2\alpha
a^{2}\rho_{c}+\left(1+2\alpha\right)\phi^{\prime 2}}\,.$ (27)
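Eqs. (22)-(27) form a closed system that can be integrated numerically; a minimal, self-contained SciPy sketch is given below, with purely illustrative parameter values and a toy initial state (the fiducial initial conditions are discussed in Section II.4):

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, lam, V0 = 0.03, 0.2, 1e-7                  # illustrative values, M_Pl = 1
V     = lambda phi: V0 * np.exp(-lam * phi)       # Eq. (17)
V_phi = lambda phi: -lam * V(phi)

def background_rhs(tau, y):
    """State y = (a, phi, phi', rho_c, rho_b, rho_r), primes = d/dtau."""
    a, phi, dphi, rho_c, rho_b, rho_r = y
    rho_phi = 0.5 * dphi**2 / a**2 + V(phi)                     # Eq. (28)
    Hc = a * np.sqrt((rho_c + rho_b + rho_r + rho_phi) / 3.0)   # Eq. (22)
    Q = (2.0 * alpha * rho_c * (3.0 * Hc * dphi + a**2 * V_phi(phi))
         / (2.0 * alpha * a**2 * rho_c + (1.0 + 2.0 * alpha) * dphi**2))  # Eq. (27)
    return [a * Hc,                                             # a' = a*H
            dphi,
            -2.0 * Hc * dphi - a**2 * V_phi(phi) + a**2 * Q,    # Eq. (23)
            -3.0 * Hc * rho_c - Q * dphi,                       # Eq. (24)
            -3.0 * Hc * rho_b,                                  # Eq. (25)
            -4.0 * Hc * rho_r]                                  # Eq. (26)

# toy initial state in the radiation era; realistic runs rescale units and
# start much deeper in radiation domination
y0 = [1e-3, 1e-2, 1e-4, 1e6, 1e5, 1e8]
sol = solve_ivp(background_rhs, (0.0, 1.0), y0, method="LSODA", rtol=1e-8)
```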
We can further define the energy density and pressure of the $\phi$ field at
the background level, through Eqs. (8) and (9), as
$\displaystyle\rho_{\phi}$ $\displaystyle=$ $\displaystyle\frac{\phi^{\prime
2}}{2a^{2}}+V\,,$ (28) $\displaystyle p_{\phi}$ $\displaystyle=$
$\displaystyle\frac{\phi^{\prime 2}}{2a^{2}}-V\,,$ (29)
respectively. Therefore Eq. (23) can be written as:
$\rho_{\phi}^{\prime}+3\mathcal{H}(1+w_{\phi})\rho_{\phi}=Q\phi^{\prime}.$
(30)
Equations (24) and (30) imply that, when $Q\phi^{\prime}>0$, energy is being
transferred from the cold dark matter source to the scalar field, and,
accordingly, the opposite holds when $Q\phi^{\prime}<0$, and it is the
$\phi$-field granting energy to cold dark matter. At the classical level the
energy exchange in the dark sector may be interpreted as a mass variation for
dark matter particles, since $m_{c}=a^{3}\rho_{c}$, assuming conservation of
the number of particles, i.e. $N_{c}=N_{c}(\tau_{0})$, with $\tau_{0}$ being
the present conformal time. Integration of Eq. (24) yields an expression for
the total energy density of coupled dark matter,
$\rho_{c}=\rho_{c}(\tau_{0})a^{-3}\exp\left(2\alpha\int_{\tau_{0}}^{\tau}Q\frac{\phi^{\prime}}{\rho_{c}}{\rm
d}\tau\right)\,,$ (31)
that can be expressed equivalently in terms of the mass of the dark matter
particles:
$m_{c}(\tau)=m_{c}(\tau_{0})\exp\left(2\alpha\int_{\tau_{0}}^{\tau}Q\frac{\phi^{\prime}}{\rho_{c}}{\rm
d}\tau\right)\,.$ (32)
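Given a numerical background solution, the integral in Eq. (32) is straightforward to evaluate; a sketch, assuming 1D arrays sampled along the solution with $\tau_{0}$ as the first entry:

```python
import numpy as np

def mass_ratio(tau, Q, dphi, rho_c, alpha):
    """m_c(tau)/m_c(tau_0), Eq. (32), via a cumulative trapezoidal rule."""
    integrand = Q * dphi / rho_c
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(tau)
    return np.exp(2.0 * alpha * np.concatenate(([0.0], np.cumsum(steps))))
```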
Finally let us note that the modified Friedmann equation, Eq. (22), can be
cast to the form of the well-known Friedmann constraint:
$1=\Omega_{\phi}+\Omega_{m}+\Omega_{r}\,,$ (33)
where we have defined a collective matter density
$\rho_{m}=\rho_{c}+\rho_{b}$, and the fractional density parameter of the $i$th species, $\Omega_{i}=\rho_{i}a^{2}/(3\text{M}_{\text{Pl}}^{2}\mathcal{H}^{2})$. Eq. (33) can be rewritten in the form of a constraint on the present scalar field fractional density, $\Omega_{\phi}^{0}=1-\Omega_{m}^{0}-\Omega_{r}^{0}$, where “0” stands for quantities evaluated at the present time, $\Omega_{i}^{0}=\rho_{i}^{0}/(3\text{M}_{\text{Pl}}^{2}H_{0}^{2})$, with $H_{0}$ the Hubble parameter. For numerical purposes, $V_{0}$, implicitly
entering the definition of $\Omega_{\phi}^{0}$, is used to perform a shooting
method that yields the fiducial value of $\Omega_{\phi}^{0}$ fulfilling the
constraint relation in Eq. (33), while simultaneously avoiding degeneracies.
As such, $V_{0}$ will no longer be considered a free parameter of the model,
leaving $\\{\lambda,\alpha\\}$ as the model free parameters.
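Numerically, the shooting described above amounts to a one-dimensional root find on $V_{0}$; a sketch, in which the helper `omega_phi_today` (integrating the background for a given $V_{0}$ up to $a=1$, e.g. as in the sketch above) and the bracketing interval are assumptions:

```python
from scipy.optimize import brentq

Omega_m0, Omega_r0 = 0.31, 9e-5                 # illustrative present-day densities
Omega_phi_target = 1.0 - Omega_m0 - Omega_r0    # Friedmann constraint, Eq. (33)

def shoot_V0(omega_phi_today, lo=1e-9, hi=1e-5):
    """Adjust V0 until the integrated Omega_phi^0 meets the constraint."""
    return brentq(lambda V0: omega_phi_today(V0) - Omega_phi_target, lo, hi)
```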
### II.3 Linear cosmological perturbations
For the purpose of studying the background dynamics, we have assumed that the
Universe is homogeneous and isotropic on large scales. However, we know that
the global picture is far more complex and that, in particular, deviations to
the homogeneous model are needed in order to explain phenomena such as the
formation of structures in the Universe. For the purpose of this study, we consider small inhomogeneities of the geometry (encoded in the metric) and of the matter fields, and investigate their coupled evolution through the Einstein equations at linear order.
Let us consider the perturbed FLRW metric in the so called Newtonian gauge,
corresponding to a line element written as follows Ma and Bertschinger (1995):
$\mathrm{d}s^{2}=a^{2}(\tau)\left[-\left(1+2\Psi\right)\mathrm{d}\tau^{2}+\left(1-2\Phi\right)\delta_{ij}\mathrm{d}x^{i}\mathrm{d}x^{j}\right]\,,$
(34)
where $\Psi(\vec{x},\tau)$ and $\Phi(\vec{x},\tau)$ are the Newtonian
potentials. We also consider linear perturbations around the relevant
background fluid variables:
$\displaystyle\phi(\vec{x},\tau)=\phi_{i}(\tau)+\delta\phi(\vec{x},\tau)\,,\quad\rho_{i}(\vec{x},\tau)=\rho_{i}(\tau)+\delta\rho_{i}(\vec{x},\tau)\,,\quad
p_{i}(\vec{x},\tau)=p_{i}(\tau)+\delta p_{i}(\vec{x},\tau)\,.$ (35)
In particular from Eqs. (8) and (9) we derive the perturbations for the energy
density and pressure of the scalar field:
$\displaystyle\delta\rho_{\phi}$ $\displaystyle=$
$\displaystyle\frac{\phi^{\prime}}{a^{2}}\delta\phi^{\prime}-\frac{\phi^{\prime
2}}{a^{2}}\Psi+V_{,\phi}\delta\phi\,,$ (36) $\displaystyle\delta p_{\phi}$
$\displaystyle=$
$\displaystyle\frac{\phi^{\prime}}{a^{2}}\delta\phi^{\prime}-\frac{\phi^{\prime
2}}{a^{2}}\Psi-V_{,\phi}\delta\phi\,.$ (37)
The perturbations of the energy-momentum tensor, Eq. (5), for each species and
at first order, read
$\delta T^{(i)}{}^{\mu}_{\nu}=(\delta\rho_{i}+\delta
p_{i})u^{(i)}{}^{\mu}u^{(i)}{}_{\nu}+\delta
p_{i}\delta^{\mu}_{\nu}+(\rho_{i}+p_{i})\left(\delta
u^{(i)}{}^{\mu}u^{(i)}{}_{\nu}+u^{(i)}{}^{\mu}\delta
u^{(i)}{}_{\nu}\right)\,,$ (38)
where $\delta u^{(i)}{}_{\mu}$ is the perturbation on the four velocity vector
of the $i$th-species, i.e. ${u^{(i)}{}_{\mu}=a(-1,v^{(i)}{}_{j})}$, with
$v_{j}$ being the peculiar velocity. In this study we will assume that there
is no anisotropic stress associated with the fluids under consideration.
Then we compute the linearised Einstein equations where we include the
modifications introduced by the coupling function. These are expressed in
terms of independent Fourier modes that characterise the evolution of the
perturbations for different scales:
$\displaystyle
k^{2}\Phi+3\mathcal{H}\left(\Phi^{\prime}+\mathcal{H}\Psi\right)$
$\displaystyle=$ $\displaystyle-4\pi Ga^{2}\sum_{i}\delta\rho_{i}\,,$ (39)
$\displaystyle k^{2}\left(\Phi^{\prime}+\mathcal{H}\Psi\right)$
$\displaystyle=$ $\displaystyle 4\pi
Ga^{2}\sum_{i}\rho_{i}(1+w_{i})\theta_{i}\,,$ (40)
$\displaystyle\Phi^{\prime\prime}+\mathcal{H}\left(\Psi^{\prime}+2\Phi^{\prime}\right)+\Psi\left(\mathcal{H}^{2}+2\mathcal{H}^{\prime}\right)+\frac{k^{2}}{3}\left(\Phi-\Psi\right)$
$\displaystyle=$ $\displaystyle 4\pi Ga^{2}\sum_{i}\delta p_{i}\,,$ (41)
$\displaystyle\Phi$ $\displaystyle=$ $\displaystyle\Psi\,.$ (42)
The first equation, corresponding to the time-time component, provides the
energy density constraint. Equation (40), computed from the time-space
components of the perturbed Einstein equations, gives the momentum constraint,
where we have adopted the definition of the velocity divergence
${\theta_{i}=\nabla\cdot v^{(i)}}$. The trace of the spatial components yields
Eq. (41) and, finally, Eq. (42) corresponds to the shear propagation for
vanishing anisotropic stress. This relation is expected due to the lack of a
non-minimal coupling in action (1).
The equations governing the evolution of each fluid’s perturbations can be
found through the conservation relations, Eqs. (13) and (14), perturbed at
first order. For the non-interacting species, i.e. baryons and radiation,
these are respectively
$\displaystyle\delta^{\prime}_{i}+3\mathcal{H}\left(\frac{\delta
p_{i}}{\delta\rho_{i}}-w_{i}\right)\delta_{i}+(1+w_{i})\left(\theta_{i}-3\Phi^{\prime}\right)$
$\displaystyle=$ $\displaystyle 0\,,$ (43)
$\displaystyle\theta_{i}^{\prime}+\left[\mathcal{H}(1-3w_{i})+\frac{w_{i}^{\prime}}{1+w_{i}}\right]\theta_{i}-k^{2}\left(\Psi+\frac{\delta
p_{i}}{\delta\rho_{i}}\frac{\delta_{i}}{1+w_{i}}\right)$ $\displaystyle=$
$\displaystyle 0\,,$ (44)
where we have defined the dimensionless density contrast as
${\delta_{i}=\delta\rho_{i}/\rho_{i}}$. The dynamics for the coupled cold dark
matter is given by
$\delta_{c}^{\prime}+\theta_{c}-3\Phi^{\prime}=\frac{Q}{\rho_{c}}\left(\phi^{\prime}\delta_{c}-\delta\phi^{\prime}\right)-\frac{\phi^{\prime}}{\rho_{c}}\delta
Q\,,$ (45)
and the corresponding velocity divergence evolves according to
$\theta^{\prime}_{c}+\mathcal{H}\theta_{c}-k^{2}\Psi=\frac{Q}{\rho_{c}}\left(\phi^{\prime}\theta_{c}-k^{2}\delta\phi\right)\,,$
(46)
with the perturbed coupling term, obtained from Eq. (16), being defined as
$\displaystyle\delta Q=\frac{2\alpha\rho_{c}}{2\alpha a^{2}\rho_{c}+(1+2\alpha)\phi^{\prime 2}}\left\\{-3\Phi^{\prime}\phi^{\prime}-\phi^{\prime}\theta_{c}+\left[3\mathcal{H}\phi^{\prime}+a^{2}(V_{,\phi}-Q)\right]\delta_{c}+\left(2k^{2}+a^{2}V_{,\phi\phi}\right)\delta\phi-\left[3\mathcal{H}\phi^{\prime}+2a^{2}(V_{,\phi}-Q)\right]\frac{\delta\phi^{\prime}}{\phi^{\prime}}+2a^{2}\Psi\left(Q-V_{,\phi}\right)\right\\}\,,$ (47)
with $V_{,\phi\phi}=\mathrm{d}^{2}V/\mathrm{d}\phi^{2}$. One distinctive feature of the Kinetic model can be readily identified at the level of the perturbed coupling term, Eq. (47): it includes an explicit dependence on $\theta_{c}$. This dependence is absent in other coupled dark energy models explored
so far, such as in Refs. van de Bruck _et al._ (2017); van de Bruck and
Teixeira (2020), and it arises due to the $X$-dependence of the coupling, in
particular in relation to the term containing ${\nabla_{\mu}\mathcal{L}_{c}}$
in Eq. (16).
The evolution of the $\phi$-field perturbation is given by the linearisation
of Eq. (11):
$\delta\phi^{\prime\prime}+2\mathcal{H}\delta\phi^{\prime}+\left(a^{2}V_{,\phi\phi}+k^{2}\right)\delta\phi-\left(\Psi^{\prime}+3\Phi^{\prime}\right)\phi^{\prime}+2a^{2}\Psi
V_{,\phi}=a^{2}\delta Q+2a^{2}Q\Psi\,.$ (48)
For completeness we also provide the corresponding set of equations in the
synchronous gauge, see Appendix A.
In Sections III and IV we will evolve the full dynamics of the gravitational potentials and of the scalar and matter fields. However, in order to get a glimpse of the phenomenology of the Kinetic model, we resort to the so-called quasi-static approximation (QSA) on sub-horizon scales Boisseau _et al._ (2000); Tsujikawa (2007); De Felice _et al._ (2011). Under this approximation we find:
$\displaystyle
k^{2}\Psi\approx-\frac{3\mathcal{H}^{2}}{2}\big{(}\Omega_{c}\delta_{c}+\Omega_{b}\delta_{b}\big{)}\,,$
(49) $\displaystyle k^{2}\delta\phi\approx
a^{2}\Delta\bigg{\\{}\delta_{c}\Big{[}Q\phi^{\prime
2}+\rho_{c}\big{(}\phi^{\prime\prime}-\mathcal{H}\phi^{\prime}\big{)}\Big{]}-\rho_{c}\phi^{\prime}\delta_{c}^{\prime}\bigg{\\}}\,,$
(50)
where we have defined the following time and scale dependent functions:
$\displaystyle\Delta$ $\displaystyle=$ $\displaystyle-\frac{2\alpha\mathcal{M}^{2}}{\phi^{\prime 2}V_{,\phi\phi}\left(1+\frac{a^{2}}{k^{2}}\mathcal{M}^{2}\right)}\,,$ (51)
$\displaystyle\mathcal{M}^{2}$ $\displaystyle=$ $\displaystyle\frac{\phi^{\prime 2}V_{,\phi\phi}}{\phi^{\prime 2}-2\alpha a^{2}\rho_{c}}\,.$ (52)
It follows that the equations for cold dark matter and baryonic fluids read
$\displaystyle\delta_{c}^{\prime\prime}+\mathcal{H}\big{(}1+\beta\big{)}\delta_{c}^{\prime}-\frac{3\mathcal{H}^{2}}{2G}\big{(}G_{cc}\Omega_{c}\delta_{c}+G_{cb}\Omega_{b}\delta_{b}\big{)}\approx
0\,,$ (53)
$\displaystyle\delta_{b}^{\prime\prime}+\mathcal{H}\delta^{\prime}_{b}-\frac{3\mathcal{H}^{2}}{2}\big{(}\Omega_{c}\delta_{c}+\Omega_{b}\delta_{b}\big{)}\approx
0\,,$ (54)
with
$\displaystyle\beta$ $\displaystyle=$
$\displaystyle\frac{G_{cb}}{G\mathcal{H}\rho_{c}}\bigg{\\{}\big{(}1-a^{2}\rho_{c}\Delta\big{)}\Big{[}8\alpha\mathcal{H}\rho_{c}-Q\phi^{\prime}\big{(}1+2\alpha\big{)}\Big{]}-Q\phi^{\prime}\bigg{\\}}\,,$
(55) $\displaystyle G_{cc}$ $\displaystyle=$ $\displaystyle
G+\frac{2G}{3\mathcal{H}\rho_{c}\Omega_{c}\phi^{\prime
2}}\bigg{\\{}\phi^{\prime
2}\Big{[}\phi^{\prime}Q^{\prime}+Q\big{(}\phi^{\prime\prime}+4\mathcal{H}\phi^{\prime}\big{)}\Big{]}+\Big{[}Q\phi^{\prime
2}+\rho_{c}\big{(}\phi^{\prime\prime}-\mathcal{H}\phi^{\prime}\big{)}\Big{]}$
(57) $\displaystyle\times\Big{[}\phi^{\prime
2}Qa^{2}\Delta+2\alpha\big{(}\phi^{\prime\prime}+4\mathcal{H}\phi^{\prime}\big{)}\big{(}1-a^{2}\rho_{c}\Delta\big{)}\Big{]}\Bigg{\\}}\,,$
$\displaystyle G_{cb}$ $\displaystyle=$
$\displaystyle\frac{G}{1+2\alpha\big{(}1-a^{2}\rho_{c}\Delta\big{)}}\,.$ (58)
The cold dark matter perturbations are then modified by two effects emerging
from the coupling: a modified friction term, quantified by $\beta$, which
inevitably influences the growth rate of $\delta_{c}$; and a modified
effective gravitational potential encoded in
$\nabla^{2}\Psi^{\rm
eff}=4\pi\big{(}G_{cc}\rho_{c}\delta_{c}+G_{cb}\rho_{b}\delta_{b}\big{)}\,.$
(59)
The latter includes two effective gravitational couplings, $G_{cc}$ and
$G_{cb}$, defined in analogy to Kase and Tsujikawa (2020). We also find that
$G_{cb}$ is always an attractive contribution. These modifications clearly
show the emergence of a fifth force which is a standard signature of coupled
scalar field models. We expect that even a relatively small value of the coupling parameter $\alpha$ can lead to a significant effect on the cosmological observables through the evolution equation for the cold dark matter perturbations, which in turn impacts the baryon dynamics and the gravitational potentials. Among others, we foresee a modification in the lensing angular
power spectrum due to a modified lensing potential
($\phi_{lens}=(\Phi+\Psi)/2=\Psi$). We will explore these signatures in more
detail in Section III. In the absence of the coupling, i.e. $\alpha=0$, we
recover $\beta=0$ and $G_{cc}=G_{cb}=G$, corresponding to the standard case of
quintessence.
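Under the QSA, Eqs. (53) and (54) close on the two density contrasts once the background is known. A minimal sketch is given below, with the Hubble rate and the coefficient functions of Eqs. (55)-(58) supplied as callables built from a background solution (their construction is omitted here):

```python
from scipy.integrate import solve_ivp

def growth_rhs(tau, y, Hc, beta, Gcc, Gcb, Om_c, Om_b, G=1.0):
    """y = (delta_c, delta_c', delta_b, delta_b'), primes = d/dtau."""
    dc, dcp, db, dbp = y
    H = Hc(tau)
    ddc = (-H * (1.0 + beta(tau)) * dcp
           + 1.5 * H**2 / G * (Gcc(tau) * Om_c(tau) * dc
                               + Gcb(tau) * Om_b(tau) * db))          # Eq. (53)
    ddb = -H * dbp + 1.5 * H**2 * (Om_c(tau) * dc + Om_b(tau) * db)   # Eq. (54)
    return [dcp, ddc, dbp, ddb]

# usage sketch:
# solve_ivp(growth_rhs, (tau_i, tau_0), y0,
#           args=(Hc, beta, Gcc, Gcb, Om_c, Om_b))
```

Setting `beta` to zero and `Gcc = Gcb = G` recovers the standard uncoupled system, which provides a convenient numerical cross-check of the $\alpha=0$ limit quoted above.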
### II.4 The parameter space
In order for a model to be theoretically viable, there are specific stability
requirements that need to be satisfied. We will present and examine them below
for the Kinetic model. Let us stress that the identification of a physically
motivated parameter space plays an important role when testing particular
gravity models with cosmological data Raveri _et al._ (2014); Frusciante _et
al._ (2016); Salvatelli _et al._ (2016); Frusciante _et al._ (2019b, 2020);
Frusciante and Perenon (2020); Albuquerque _et al._ (2022).
According to the results in Ref. Barros (2019) the solutions with
$\lambda^{2}<2$ guarantee that the future dark energy attractor is a stable
fixed point of the system and describes an accelerated expanding Universe.
Albeit necessary at the attractor, this condition can be somewhat relaxed,
while still generating an accelerating behaviour at transient times. By
allowing the attractor to lie outside, but close to the accelerated region,
with say ${\lambda^{2}=2+\epsilon}$, the solution may still feature an
accelerated expanding scenario at present time, that is, with
${w_{\phi}(a_{0})<-1/3}$. We further discuss this point in Section III.1 and
we show some examples in the right panel of Fig. 3. Let us note that under
such a condition, instead of accelerating forever, there should be a turning point in the future when the expansion changes from accelerated to decelerated, i.e. ${w_{\phi}(a_{x})=-1/3}$ at the crossover $a_{x}$, and ${w_{\phi}(a)>-1/3}$ thereafter, for $a>a_{x}$, as the attractor is approached. From the critical point analysis conducted in Ref. Barros (2019)
we know that at the attractor ${w^{\star}_{\rm eff}=\lambda^{2}/3-1}$. Using the general identity
$\frac{1}{H}\frac{\mathrm{d}H}{\mathrm{d}\ln a}=-\frac{3}{2}\left(1+w_{\rm eff}\right)\,,$ (60)
we find for the Hubble rate at the attractor:
$\frac{\mathrm{d}H^{\star}}{\mathrm{d}\ln
a}=-\frac{1}{2}H^{\star}\lambda^{2}\,,$ (61)
from which the acceleration condition is derived Barros (2019). Here a star superscript denotes quantities evaluated at the attractor, and $H=\mathcal{H}/a$ is the Hubble function in cosmic time, $t$. The relation
above corresponds to a cosmological expanding behaviour described as,
$H^{\star}=H_{0}a^{-\lambda^{2}/2}\quad\text{and therefore}\quad
a^{\star}=\left(1+H_{0}\frac{\lambda^{2}}{2}t\right)^{2/\lambda^{2}}\,.$ (62)
Indeed, the explicit time dependence of the scale factor in Eq. (62), $a\propto t^{2/\lambda^{2}}$, reveals that $\lambda^{2}=2$ marks an inflection point of $a(t)$, i.e. $\ddot{a}=0$ (with dots referring to derivatives with respect to cosmic time), laying out the precise boundary between an accelerated and a decelerated setting. Following the above discussion we will then consider $\lambda>0$.
Additionally, according to the power-law role of $\alpha$ in Eq. (15), we
choose to consider cases with $\alpha\geqslant 0$ only Barros (2019).
Furthermore, we take into account theoretical stability conditions to guarantee the absence of ghost and gradient instabilities in the scalar sector Sbisà (2015); Kase and Tsujikawa (2020). The first requires positive kinetic terms for the scalar field and cold dark matter perturbations ($q_{s}>0$ and $q_{c}>0$, respectively), and the second non-negative squared speeds of propagation ($c_{s}^{2}\geqslant 0$ and $c_{c}^{2}\geqslant 0$). It
is possible to show that a very general way to write an action with an extra
scalar field and one matter component up to second order in perturbations, is
the following Kase and Tsujikawa (2020):
$\mathcal{S}^{(2)}=\int{\rm d}t\,{\rm d}^{3}k\,a^{3}\left[\dot{\vec{\chi}}^{t}{\bf K}\dot{\vec{\chi}}-\frac{k^{2}}{a^{2}}\vec{\chi}^{t}{\bf G}\vec{\chi}-\vec{\chi}^{t}{\bf M}\vec{\chi}-\frac{k}{a}\vec{\chi}^{t}{\bf B}\dot{\vec{\chi}}\right]$ (63)
with $\vec{\chi}^{t}=(\delta\phi,\delta\rho_{c}/k)$, where $\delta\phi$ and $\delta\rho_{c}$ are the perturbations of the scalar field and of the cold dark matter component, respectively. The $2\times 2$ matrices are defined in terms of background quantities and their general forms can be found in Ref. Kase and Tsujikawa (2020). For the action (1), with $f(X)$ and $V(\phi)$ defined in Eqs. (15) and (17) respectively, we have Kase and Tsujikawa (2020):
$\displaystyle q_{s}$ $\displaystyle=$ $\displaystyle
K_{11}=2\text{M}_{\text{Pl}}^{2}\left[1-\frac{\alpha(2\alpha-1)}{X}\rho_{c}\right]\,,$
(64) $\displaystyle q_{c}$ $\displaystyle=$ $\displaystyle
K_{22}=\left(\text{M}_{\text{Pl}}^{-4}X\right)^{\alpha}\,,$ (65)
$\displaystyle c_{s}^{2}$ $\displaystyle=$
$\displaystyle\frac{G_{11}}{K_{11}}+\frac{B_{12}^{2}}{K_{11}K_{22}}=\frac{4\text{M}_{\text{Pl}}^{2}}{q_{s}}-1\,,$
(66) $\displaystyle c_{c}^{2}$ $\displaystyle=$
$\displaystyle\frac{G_{22}}{K_{22}}=0\,.$ (67)
For dark matter the conditions are trivially satisfied. The stability conditions for the scalar field are more involved and need to be verified throughout the entire expansion history. We find that both conditions hold as long as
$-1<\alpha(2\alpha-1)\frac{\rho_{c}}{X}<1\,,$ (68)
where the first inequality accounts for the no-ghost condition and the second one for a positive squared speed of propagation. This constraint then selects the viable range for the parameter $\alpha$. The initial condition $\phi^{\prime}_{i}$ also plays a role in securing stability throughout the cosmological evolution.
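In practice, Eqs. (64), (66) and (68) are checked pointwise along the numerical background solution; a minimal sketch:

```python
import numpy as np

def scalar_sector_stable(alpha, rho_c, X, M_pl=1.0):
    """No-ghost and gradient stability of the scalar sector."""
    r = alpha * (2.0 * alpha - 1.0) * rho_c / X
    q_s = 2.0 * M_pl**2 * (1.0 - r)        # Eq. (64): q_s > 0 demands r < 1
    c_s2 = 4.0 * M_pl**2 / q_s - 1.0       # Eq. (66): c_s^2 >= 0 demands r >= -1
    return (q_s > 0.0) & (c_s2 >= 0.0)     # together this recovers Eq. (68)
```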
Let us now discuss the initial conditions (ICs) for the scalar field, $\phi_{i}$, and for its first derivative, $\phi^{\prime}_{i}$, which must be specified deep in the radiation dominated epoch, namely around redshift $z_{i}\approx 10^{14}$, in order to solve the system of equations (23)-(26). From the numerical study we concluded that, for non-trivial ICs, the system rapidly enters the scaling regime. Owing to this feature, we found that the choice of values for $\phi_{i}$ and $\phi^{\prime}_{i}$ has a negligible impact on the cosmological evolution (we have numerically verified that the phenomenology of the cosmological observables, as discussed in Section III, is not affected by the choice of ICs, and neither are the cosmological constraints). Hence, without loss of generality, we set $\phi(z_{i})=10^{-2}\,\text{M}_{\rm Pl}$.
The IC for $\phi^{\prime}$ is chosen so as to avoid instabilities according to Eq. (68). Moreover, it should be noted that, when $\phi^{\prime}_{i}$ is chosen to be positive, the condition $\lambda>0$ must hold for the accelerating attractor solution to exist Barros (2019).
Finally, we recall that $V_{0}$ is not considered an extra parameter of the
model as discussed in Section II.2.
## III Phenomenology of the Kinetic model
In this Section we shall explore the signatures left by the Kinetic model on the background expansion and on some cosmological observables, such as the cosmic microwave background (CMB), the lensing potential auto-correlation and the matter power spectra, in Sections III.1 and III.2, respectively. We use our own modification of the public version of the Einstein-Boltzmann solver CLASS Lesgourgues (2011a); Blas _et al._ (2011); Lesgourgues (2011b).
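As an illustration of the pipeline, once the modified solver is compiled, spectra like those shown below can be obtained through the standard classy Python wrapper; the input names for the two model parameters (commented out) are placeholders for whatever names the modified CLASS actually exposes:

```python
from classy import Class

cosmo = Class()
cosmo.set({
    'output': 'tCl pCl lCl mPk', 'lensing': 'yes',
    'h': 0.6756, 'omega_b': 0.022, 'omega_cdm': 0.12,
    'A_s': 2.215e-9, 'n_s': 0.962,
    'P_k_max_h/Mpc': 0.1,
    # hypothetical inputs of our modified CLASS:
    # 'alpha_kin': 0.001, 'lambda_exp': 0.2,
})
cosmo.compute()

cls = cosmo.lensed_cl(2500)     # dict with 'ell', 'tt', ..., and 'pp' spectra
pk  = cosmo.pk(0.05, 0.0)       # P(k = 0.05/Mpc, z = 0)
s8  = cosmo.sigma8()            # sigma_8, entering S_8^0 in Section IV
cosmo.struct_cleanup()
```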
Figure 1: Left upper panel: Evolution of the energy densities $\rho_{i}$ with
redshift, $1+z$, of the scalar field (pink), matter (black) and radiation
(blue) for the uncoupled case (solid line), ${\alpha=0.01}$ (dotted line) and
${\alpha=0.03}$ (dashed line). Left lower panel: Ratio of the energy densities
of cold dark matter and dark energy, for ${\alpha=0.01}$ (solid line) and
${\alpha=0.03}$ (dashed line). Right panel: Differences relative to the
uncoupled case, ${\alpha=0}$, for ${\alpha=0.01}$ (solid line) and
${\alpha=0.03}$ (dashed line), on the quantities (from top to bottom): the
coupling strength parameter; the fractional deviation of the energy density of
matter, i.e. ${\Delta\rho_{m}/\tilde{\rho}_{m}=\rho_{m}/\tilde{\rho}_{m}-1}$,
where a tilde denotes variables in the uncoupled scenario, such that
${\tilde{\rho}_{m}=\rho^{0}_{m}a^{-3}}$; and the fractional deviation of the
Hubble rate of expansion.
### III.1 Background evolution
We start by reviewing the background evolution in the kinetic coupled dark
sector scenario. A similar study has been previously presented in Ref. Barros
(2019) by means of a dynamical systems analysis, with a particular focus on
the late time dynamics (i.e. cosmological redshift $z\lesssim 40$), neglecting
the radiation and baryonic contribution. In this work we shall examine the
cosmological evolution starting from the early stages, deep into the radiation
dominated epoch ($z_{i}\approx 10^{14}$) up to present time ($z=0$). For the
numerical investigation in this work we fix the following cosmological
parameters to be Aghanim _et al._ (2020a): ${H_{0}=67.56}$ km/s/Mpc,
${\Omega_{b}h^{2}=0.022}$ and ${\Omega_{c}h^{2}=0.12}$, with ${h\equiv
H_{0}/100}$. We also select some exemplifying values for the parameter $\alpha$ controlling the coupling, namely ${\alpha=0.01}$ and ${\alpha=0.03}$, and we fix the slope of the potential to ${\lambda=0.2}$, with the aim of singling out the main phenomenology associated with the coupling function.
Moreover, for comparison purposes, we also include the case with $\alpha=0$,
which corresponds to an uncoupled scenario. It should be noted that the choice
for the values of the parameters associated with the scalar field are purely
illustrative, but nevertheless still satisfy the requirements discussed in
Section II.4. They are chosen in such a way that the overall effect of the
coupling can be grasped, and therefore are not necessarily realistic. This
will be assessed in Section IV, in which case these parameters are left to
vary when performing a parameter estimation according to cosmological data.
In the left panel of Fig. 1 we show the evolution with redshift, $1+z$, of the
energy densities for each species, $\rho_{i}$. We notice that the introduction
of the coupling results in the emergence of an early scaling regime, in direct
contrast with the uncoupled case, for which this behaviour can never be
achieved. The onset of this scaling behaviour takes place during the radiation
dominated epoch, with energy density of the scalar field proportional to the
dark matter one, approximately according to the relation
${\rho_{c}/\rho_{\phi}=1/\alpha}$, as shown in the left lower panel of Fig. 1.
Eventually the field will exit this scaling regime and head towards the future attractor solution, after which its energy density keeps diluting as ${\rho_{\phi}\propto a^{-\lambda^{2}}}$.
In the upper right panel of Fig. 1 we show the evolution of the coupling
strength, expressed as $Q\phi^{\prime}/\rho_{c}$, as a function of the
redshift. The sign of this quantity is relevant to assess the direction of the
energy flow between cold dark matter and the scalar field.
We can notice that the interaction term is positive at all redshifts,
establishing the direction of the energy transfer from the dark matter fluid
to the scalar field. This is consistent with the fact that the dynamics of the
scalar field follows the relation ${\phi^{\prime}>\lambda
V/\left(3\text{M}_{\text{Pl}}^{2}\mathcal{H}\right)\Leftrightarrow Q>0}$ (see
Eq. (27)). Let us note that, because we fixed the present day values of the fluid densities, the cold dark matter energy density is larger at early times: it is the CDM component that grants energy to the scalar field at later times, with this feature being more prominent for higher values of $\alpha$. This excess is progressively compensated as the matter energy density decreases over time, while additional energy is transferred to the scalar field, when compared with the uncoupled case. We illustrate this behaviour in the middle right panel of Fig. 1, where we report on the deviations from the uncoupled case, denoted by a tilde. As a consequence, there is a shift of the matter-radiation equality towards earlier times for increasing values of $\alpha$, as shown in the left panel of Fig. 2. From the
same Figure, we can notice that, because the $\phi$ field acquires energy at a rate proportional to its energy density (see Eq. (27)), the matter-dark energy equality is achieved earlier.
Additionally, in the lower right panel of Fig. 1, we show the deviations in
the Hubble rate for the Kinetic model when compared with the uncoupled case,
i.e.
${\Delta\mathcal{H}/\tilde{\mathcal{H}}=\mathcal{H}/\tilde{\mathcal{H}}-1}$.
No significant deviations on $\mathcal{H}$ are observed during the radiation
dominated epoch, since any interactions between the dark and radiation sectors
have been excluded. However, when the matter contribution becomes non-
negligible, around ${z\approx 10^{6}}$, the Kinetic models show an enhanced
value of $\mathcal{H}$ with respect to the uncoupled case, with this effect
being larger for the higher values of $\alpha$.
Figure 2: Left panel: Evolution of the relative energy densities $\Omega_{i}$
with redshift, $1+z$, of the scalar field (pink), matter (black) and radiation
(blue). Right panel: Equation of state parameters, $w_{\rm eff}$ (pink) and
${w_{\phi}}$ (black), along redshift. In accordance with Fig. 1, we present
the uncoupled case (solid line), ${\alpha=0.01}$ (dotted line) and
${\alpha=0.03}$ (dot-dashed line).
Finally, it is also worth analysing the evolution of two fluid-related
quantities: the equation of state parameters for the scalar field, $w_{\phi}$,
and for the total effective budget, $w_{\rm eff}$. These characterise the
nature of the dark energy fluid description and the overall effective
dominating fluid contribution in the Universe, and are defined according to
Eq. (10) and
$w_{\rm eff}=\frac{\sum_{i}p_{i}}{\sum_{i}\rho_{i}}\,,$ (69)
respectively. Their evolution with redshift is depicted in the right panel of
Fig. 2. We observe that during the scaling regime the field behaves as a stiff
fluid, with ${w_{\phi}=1}$, since ${V\ll\phi^{\prime 2}}$, and in agreement
with the findings of Ref. Barros (2019). As the field exits the scaling
regime, the Universe approaches the attractor scenario, for which
${w_{\phi}=-1+\lambda^{2}/3}$. During radiation domination, the effective
equation of state remains at a plateau with ${w_{\rm eff}\approx w_{r}=1/3}$.
At matter domination, and during the scaling regime, when radiation may be
neglected and under the limit ${V\ll\phi^{\prime 2}}$, the equation of state
follows
$w_{\rm
eff}\approx\frac{\alpha}{1+\alpha\left(1+\frac{\rho_{b}}{\rho_{\phi}}\right)}\,.$
(70)
Note that, in Ref. Barros (2019), a similar approximation was presented,
though stated as ${w_{\rm eff}\approx\alpha/(1+\alpha)}$. That is because the
contribution of radiation and baryons was not taken into account in that
study, which focused mainly on the late time dynamics, for which it still
stands as a good approximation. By neglecting the baryonic contribution we may
resort to the dynamical system analysis employed in Ref. Barros (2019) to find
the behaviour of the Hubble rate and coupled DM at matter domination during
the scaling:
$\rho_{c}\propto H^{2}\propto a^{-3\frac{1+2\alpha}{1+\alpha}}\,,$ (71)
which we have numerically verified to be a good approximation. We remark that the transition towards an accelerating state occurs later for increasingly large values of $\alpha$, owing to the fact that, for a stronger interaction, the field remains frozen in the scaling regime for longer, with ${w_{\phi}=1}$. As
a direct outcome, when the accelerating stage finally starts (that is, when
$w_{\rm eff}<-1/3$), it will take place at a slower rate. This behaviour is
illustrated in the right panel of Fig. 2. Alternatively, this trend can be intuitively understood by inspection of the deceleration parameter ${q=(1+3w_{\rm eff})/2}$, which scales linearly with the total equation of state parameter of the Universe.
At this point, there is a subtlety that should be noted. Although the ICs for the scalar field do not have any influence on the parameter constraints, there is a link between the initial velocity of the field and the dark energy density, as expressed in Eq. (8), which has a subtle impact on the early behaviour of the quintessence. Taking higher values of $\phi^{\prime}_{i}$ increases the initial density of the field, which inevitably leads to an earlier onset of the scaling regime. On the other hand, the value for the
initial velocity is completely negligible when it comes to setting the time at which the field exits the scaling regime and starts evolving towards the accelerating attractor. This implies that the duration of the period in which
the energy density of dark energy scales with matter is extended for
increasing values of $\phi^{\prime}_{i}$. This trend is illustrated in the
left panel of Fig. 3. Nonetheless this does not mean that $\phi^{\prime}_{i}$
can take any arbitrary value, as the conditions in Eq. (68) still have to be
verified, in order to avoid instabilities in the theory. On the other hand,
the initial value for the field per se has no influence over the dynamics.
Indeed $\phi_{i}$ only appears in the exponential term of the potential, Eq.
(17), which can be equivalently absorbed by the shooting parameter $V_{0}$.
Finally, we conclude by providing some concrete examples to support the argument in Section II.4, namely that values of $\lambda^{2}>2$ can still give rise to present-time accelerated expansion under certain conditions. In
the right panel of Fig. 3 we illustrate the behaviour of the effective
equation of state parameter close to the present epoch and up to some time in
the future for different values of $\lambda$, and for a fixed coupling
parameter, ${\alpha=0.03}$. Indeed we notice that, for ${\lambda^{2}>2}$, transient acceleration phases are achieved around the present time, with $w_{\rm eff}$ crossing into the accelerated region $w_{\rm eff}<-1/3$ and exiting it again at some point in the future. Accordingly, these solutions may still stand as cosmologically valid, and such values of $\lambda$ need to be taken into account in the statistical analysis of Section IV.
Figure 3: Left panel: Evolution of the energy densities of matter (black) and scalar field (pink) for different ICs for the field's velocity, $\phi^{\prime}_{i}$, with fixed $\alpha=0.03$ and $\lambda=0.2$. Right panel: Effective equation of state, $w_{\rm eff}$, for different values of $\lambda$, namely $\lambda=1.4$ (solid line), $\lambda=1.6$ (dashed line) and $\lambda=1.8$ (dotted line), with fixed $\alpha=0.03$. The shaded green area corresponds to the region where the Universe features accelerated expansion, i.e. ${w_{\rm eff}<-1/3}$.
Figure 4: Upper panel: The matter power spectrum as a function of $k$, for the uncoupled case (dashed line), ${\alpha=0.001}$ (dot-dashed line), ${\alpha=0.002}$ (dotted line) and $\Lambda$CDM (pink solid line). Lower panel: Percentage deviations of the matter power spectrum of the Kinetic model and the uncoupled case from the $\Lambda$CDM model.
Figure 5: Evolution of the density contrast of cold dark matter for the Kinetic model relative to the $\Lambda$CDM case, that is $\delta_{c}/\delta^{\Lambda CDM}_{c}$, as a function of the Fourier scale $k$ and the redshift $z$, for $\alpha=0.001$ and $\alpha=0.002$.
### III.2 Cosmological observables
In this section we discuss the effect of the coupling on some relevant
cosmological observables such as the matter power spectrum and the CMB
temperature-temperature (TT) and lensing angular power spectra. We assume
adiabatic perturbative initial conditions with an amplitude of curvature fluctuations of $A_{s}=2.215\times 10^{-9}$ at the pivot scale $k_{\rm piv}=0.05$ Mpc${}^{-1}$, and with the spectral index set to $n_{s}=0.962$ Aghanim
_et al._ (2020a). The remaining cosmological parameters and $\lambda$ are the
same as used in the previous Section. We adopt a different set of values for $\alpha$, one order of magnitude smaller than the ones used in the numerical analysis of the background quantities, since the latter would lead to drastic effects on the cosmological observables. On the contrary, the values we use to highlight the features on the cosmological observables do not produce any significant effects on the background quantities. As a consequence, the features shown in this Section can be attributed solely to the modifications to the linear perturbation equations presented in Section II.3. Then, for illustrative purposes, we set $\alpha$ to
be $1\times 10^{-3}$ and $2\times 10^{-3}$. Moreover, and without loss of
generality, we assume vanishing ICs for the scalar field perturbation and its
velocity, that is, $\delta\phi(z_{i})=\delta\phi^{\prime}(z_{i})=0$,
respectively.
In the upper panel of Fig. 4 we present the linear matter power spectrum at present time up to the scale $k_{\rm max}=0.1h$ Mpc${}^{-1}$, above which the linear perturbative approximation is expected to break down due to non-linear effects, dominant at smaller scales. In the lower panel we also plot the fractional differences between the coupled scenarios and the $\Lambda$CDM one. We note that the matter power spectrum of the Kinetic model is significantly suppressed at intermediate scales, $10^{-3}h$ Mpc${}^{-1}\lesssim k\lesssim 3\times 10^{-2}h$ Mpc${}^{-1}$, with respect to $\Lambda$CDM, and enhanced at the
smaller scales. These signatures emerge as a combination of the effects
produced by the changes in the evolution of the background and the cold dark
matter perturbations due to the positive exchange of energy that flows from
cold dark matter to dark energy. Because the radiation-matter equality era is
shifted towards earlier times, when compared with the uncoupled case (see left
panel of Fig. 2), the turnover in the matter power spectrum is shifted to
higher $k$. The growth of the matter perturbations is suppressed at
intermediate scales, with deviations from $\Lambda$CDM of $\sim 7\%$ and $\sim
14\%$ for $\alpha=0.001$ and $\alpha=0.002$, respectively, and enhanced at the
smaller scales, with deviations that can reach $\sim 45\%$ for $\alpha=0.002$.
This is illustrated in Fig. 5, where we can clearly see that the largest deviations occur for scales $0.01h<k<0.1h$ Mpc${}^{-1}$ and at large redshift, with some milder modifications close to present time as well for $k\sim 0.1h$ Mpc${}^{-1}$. At larger scales, $k\sim 0.01h$ Mpc${}^{-1}$, the deviations are more accentuated at intermediate redshifts ($z\sim 10$). The plots also show that, as expected, the largest deviations are present for the higher values of
$\alpha$. As a consequence, the value of the amplitude of the matter power
spectrum at present time and scale of 8 $h^{-1}$Mpc, denoted by $\sigma_{8}$,
is expected to be larger for the Kinetic model.
Figure 6: Left panel: (Top) Evolution of the sum of the gravitational potentials as a function of the redshift at $k=0.01$ Mpc${}^{-1}$ for the cases: uncoupled model (dashed line), $\alpha=0.001$ (dot-dashed line), $\alpha=0.002$ (dotted line), and $\Lambda$CDM (pink solid line). (Bottom) Relative percentage difference of $\Psi+\Phi$ computed with respect to $\Lambda$CDM. Right panel: (Top) Evolution of the time derivative of the sum of the gravitational potentials as a function of the redshift. (Bottom) Relative percentage difference of $\Psi^{\prime}+\Phi^{\prime}$ computed with respect to $\Lambda$CDM.
Figure 7: Upper panel: Lensing angular power spectra for $\Lambda$CDM (solid pink line), $\alpha=0.001$ (dot-dashed line), $\alpha=0.002$ (dotted line) and the uncoupled case (dashed line). Lower panel: Relative difference between the lensing power spectra of each model and that of $\Lambda$CDM.
Figure 8: Upper panel: TT power spectrum as a function of the angular scale $\ell$, for the uncoupled case (dashed line), ${\alpha=0.001}$ (dot-dashed line), ${\alpha=0.002}$ (dotted line) and $\Lambda$CDM (pink solid line) for reference. Lower panel: Percentage deviations of the TT power spectra for the coupled and uncoupled cases with respect to $\Lambda$CDM.
In Fig. 6 we show the sum of the gravitational potentials $\Phi+\Psi$ (left panel) and their time derivative (right panel), as a function of the redshift, for a fixed scale, $k=0.01$ Mpc${}^{-1}$. The evolution of the potentials is regulated by the Poisson equation. We can infer that the value of the lensing potential, given by $\phi_{\text{lens}}=(\Psi+\Phi)/2$, is lower in the Kinetic model when compared to the standard cosmological scenario, resulting in a suppression of the lensing power spectrum, as shown in Fig. 7. This effect becomes increasingly evident for larger values of $\alpha$. The
quantity $\Psi^{\prime}+\Phi^{\prime}$ instead is directly connected with the
integrated Sachs-Wolfe effect (ISW). The latter affects the shape of the TT
power spectrum as it enters in the radiation transfer function. The total ISW
effect is divided into: an early time contribution, produced during the
transition from radiation to matter dominated epochs, which in the Kinetic
model is shifted towards earlier times when compared to the standard scenario;
and a late time contribution, related to the presence of the dark energy component. The impact of the ISW effect on the TT power spectrum is
illustrated in Fig. 8, as a function of the angular multipole $\ell$,
exhibiting an overall enhancement with respect to the reference case for
$\ell\lesssim 300$. While milder differences are identified around the plateau
at $\ell<10$, significant deviations can be appreciated around $10<\ell<200$,
in particular for $\ell\sim 50$, being as large as $\sim 40\%$ for
$\alpha=0.002$. Moreover, there is a clear increase in the amplitude of the
first peak, accompanied by a broadening of its shape. Likewise, the presence
of the coupling and the modifications to the background expansion also induce
small differences between the peaks and troughs at the higher multipoles.
These effects can be measured using cosmological data from background and
large-scale structure.
## IV Cosmological constraints and model selection analysis
In this Section we present the constraints on the cosmological and model parameters of the Kinetic model for different combinations of data sets. We perform a Bayesian Markov Chain Monte Carlo (MCMC) analysis using the Metropolis-Hastings algorithm implemented in the Monte Python sampler (https://github.com/brinckmann/montepython_public) Audren _et al._ (2013); Brinckmann and Lesgourgues (2019), interfaced with our own modified version of CLASS (https://github.com/lesgourg/class_public) Lesgourgues (2011a); Blas _et al._ (2011); Lesgourgues (2011b). The general aim is to sample the posterior distributions of the parameters under the likelihood associated with each data set. Subsequently, we analyse the MCMC chains and produce the results reported in Tables 2 and 3, and in Figures 9, 10, 11, and 12, resorting to the GetDist Python package (https://github.com/cmbant/getdist) Lewis (2019). For comparison purposes we also report on the constraints derived for the standard cosmological scenario. Finally, we examine whether the Kinetic model is supported by the data over $\Lambda$CDM.
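Schematically, a single run and its post-processing look as follows; the chain root and run length are placeholders, and the .param file is sketched at the end of Section IV.1:

```python
# Sampling (shell):
#   python montepython/MontePython.py run -p input/kinetic.param \
#          -o chains/kinetic -N 200000

# Post-processing of the chains with GetDist:
from getdist import loadMCSamples

samples = loadMCSamples('chains/kinetic/kinetic',        # hypothetical chain root
                        settings={'ignore_rows': 0.3})   # discard burn-in
print(samples.getMargeStats())   # marginalised 68% bounds, as in Tables 2 and 3
```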
Parameter | Prior
---|---
$\Omega_{b}h^{2}$ | $[0.005,0.1]$
$\Omega_{c}h^{2}$ | $[0.001,0.99]$
$100\theta_{s}$ | $[0.5,10]$
$z_{reio}$ | $[0,20]$
$n_{s}$ | $[0.7,1.3]$
$\log\left(10^{10}A_{s}\right)$ | $[1.7,5.0]$
$\lambda$ | $[0,2]$
$\alpha$ | $[0,1]$
Table 1: Flat priors on the cosmological and model parameters sampled in this work.
Kinetic Model
---
Parameter | Plk18 | Plk18+BAO+SN | Plk18+BAO+SN+len
$S^{0}_{8}$ | $0.793^{+0.110}_{-0.064}$ | $0.875^{+0.037}_{-0.043}$ | $0.863^{+0.030}_{-0.039}$
$\Omega^{0}_{m}$ | $0.257^{+0.045}_{-0.025}$ | $0.2988^{+0.0072}_{-0.0036}$ | $0.2982^{+0.0070}_{-0.0035}$
$H_{0}$ | $64.0^{+3.3}_{-1.8}$ | $67.14\pm 0.62$ | $66.94^{+0.60}_{-0.54}$
$10^{-9}A_{s}$ | $2.088\pm 0.035$ | $2.096\pm 0.035$ | $2.111\pm 0.031$
$n_{s}$ | $0.9667\pm 0.0047$ | $0.9669\pm 0.0044$ | $0.9655\pm 0.0041$
$\lambda$ | $1.11\pm 0.48$ | $0.42^{+0.18}_{-0.21}$ | $0.41^{+0.17}_{-0.22}$
$10^{4}\alpha$ | $1.88\pm 0.95$ | $1.37^{+0.67}_{-1.00}$ | $1.05^{+0.51}_{-0.87}$
Table 2: $68\%$ C.L. bounds on the cosmological and model parameters for the
Kinetic model for the three different combinations of data sets: Planck,
Planck combined with BAO and SN, and their full combination with CMB lensing.
$\Lambda$CDM Model
---
Parameter | Plk18 | Plk18+BAO+SN | Plk18+BAO+SN+len
$S^{0}_{8}$ | $0.833\pm 0.016$ | $0.831^{+0.013}_{-0.015}$ | $0.834\pm 0.013$
$\Omega^{0}_{m}$ | $0.3163\pm 0.0085$ | $0.3151^{+0.0060}_{-0.0075}$ | $0.3162\pm 0.0073$
$H_{0}$ | $67.31\pm 0.61$ | $67.39^{+0.53}_{-0.45}$ | $67.32\pm 0.53$
$10^{-9}A_{s}$ | $2.102\pm 0.034$ | $2.102\pm 0.034$ | $2.105^{+0.028}_{-0.032}$
$n_{s}$ | $0.9652\pm 0.0044$ | $0.9656\pm 0.0039$ | $0.9651\pm 0.0041$
Table 3: $68\%$ C.L. bounds on the cosmological parameters for the $\Lambda$CDM model for the three different combinations of data sets: Planck 2018, Planck 2018 combined with BAO and SN, and their full combination with CMB lensing.
### IV.1 Data sets
For the present analysis we resort to the CMB Planck 2018 Aghanim _et al._ (2020b) data for large angular scales, $\ell=[2,29]$, and to the joint TT, TE and EE likelihoods for the small angular scales. In detail, for the latter case, $\ell=[30,2508]$ for the TT power spectrum and $\ell=[30,1996]$ for the TE cross-correlation and EE power spectra. This will be our baseline data set and we will refer to it as “Plk18” in what follows. Subsequently, we examine the
changes when adding to the Plk18 data set a compilation of BAO distance and
expansion rate measurements from the Sloan Digital Sky Survey (SDSS) DR7 Main
Galaxy Sample Ross _et al._ (2015), SDSS DR12 consensus release Beutler _et
al._ (2017) and the 6dF Galaxy Survey Beutler _et al._ (2011) (see text and
Figure 11 in Ref. Aghanim _et al._ (2020a) for more details), and distance
moduli measurements of type Ia Supernova (SN) data from Pantheon Scolnic _et
al._ (2018), hereafter simply “Plk18+BAO+SN”. Finally we consider the
combination of “Plk18+BAO+SN” with the addition of the CMB lensing potential
data from Planck 2018 Aghanim _et al._ (2020b, c), referenced as
“Plk18+BAO+SN+len” from now on. We note that both the CMB Planck 2018 temperature and polarisation angular power spectra data used correspond to the standard reference likelihood of the 2018 release (http://pla.esac.esa.int/pla) used in the Planck analysis. In particular, this is given by the product of the Commander, SimAll, and PlikTT,TE,EE likelihoods Aghanim _et al._ (2020b).
Our set of free parameters consists of the baseline $\Lambda$CDM cosmological parameters, namely ${\Omega_{b}h^{2}}$, ${\Omega_{c}h^{2}}$, $A_{\rm s}$ and $n_{s}$, the angular size of the sound horizon at recombination, $\theta_{s}$, and the reionisation redshift, $z_{reio}$; moreover, we add the two free parameters associated with the Kinetic model, $\alpha$ and $\lambda$. We impose flat priors on all the sampled parameters, as specified in Table 1 (we use a linear sampling for $\alpha$, but we have also run chains using a logarithmic sampling; comparing the results, we concluded that the C.L. bounds and the marginalised posterior distributions are in agreement at the $1\sigma$ level, which supports the robustness of the results reported with a flat prior). We also provide derived constraints on $H_{0}$ and $S^{0}_{8}=\sigma_{8}^{0}\sqrt{\Omega_{m}^{0}/0.3}$.
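A Monte Python .param file implementing the priors of Table 1 would look schematically as follows. Likelihood names follow the public Monte Python conventions, the input names for the two model parameters are placeholders, and each entry reads [mean, min, max, proposal sigma, scale, role]:

```python
data.experiments = ['Planck_highl_TTTEEE', 'Planck_lowl_TT', 'Planck_lowl_EE',
                    'bao_boss_dr12', 'Pantheon']   # add 'Planck_lensing' for "+len"

data.parameters['omega_b']      = [2.22,  0.5,   10.0, 0.03,  0.01, 'cosmo']
data.parameters['omega_cdm']    = [0.12,  0.001,  0.99, 0.003, 1,   'cosmo']
data.parameters['100*theta_s']  = [1.042, 0.5,   10.0, 3e-4,  1,   'cosmo']
data.parameters['ln10^{10}A_s'] = [3.05,  1.7,    5.0, 0.02,  1,   'cosmo']
data.parameters['n_s']          = [0.965, 0.7,    1.3, 0.004, 1,   'cosmo']
data.parameters['z_reio']       = [7.5,   0.0,   20.0, 0.5,   1,   'cosmo']
# Kinetic model parameters (hypothetical input names of the modified CLASS):
data.parameters['lambda_exp']   = [0.5,   0.0,    2.0, 0.1,   1,   'cosmo']
data.parameters['alpha_kin']    = [1e-4,  0.0,    1.0, 5e-5,  1,   'cosmo']
```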
### IV.2 Cosmological bounds
Figure 9: $68\%$ and $95\%$ C.L. contours obtained in the Kinetic model under consideration for the Planck 2018 data (grey), the Planck 2018, BAO and SN combination (yellow), and their combination with CMB lensing (red).
Figure 10: Comparison between the $\Lambda$CDM (dashed lines) and Kinetic model (solid lines) marginalised likelihoods of the cosmological parameters for the Planck 2018 data (grey), the Planck 2018, BAO and SN combination (yellow), and their combination with CMB lensing (red).
We show the constraints on the model and cosmological parameters for the
Kinetic model in Table 2 and the corresponding contour plots in Figure 9, for
all the data set combinations considered. For comparison purposes, we include
the results for the $\Lambda$CDM model in Table 3 and in Figure 10.
We find that the parameter $\alpha$ is constrained to be of the order of $10^{-4}$, regardless of the combination of data sets considered. The Planck data alone prefer the highest mean value of $\alpha$, mainly because this better accommodates the TT likelihood; the inclusion of the BAO and SN data results in a slight decrease of the mean value of $\alpha$; finally, adding the CMB lensing data shifts the peak of the posterior distribution of the $\alpha$ parameter to an even lower central value. This feature is connected to the lensing excess reported by the Planck
collaboration Ade _et al._ (2014); Adam _et al._ (2016); Aghanim _et al._
(2020a). As discussed in the previous section, the lensing power spectrum is
always suppressed in the Kinetic model, when compared to the $\Lambda$CDM one,
with higher values of $\alpha$ corresponding to lower amplitudes of the
lensing power spectrum (see Figure 7). Therefore, in order to better
accommodate the CMB lensing data, a lower mean value for $\alpha$ is
preferred.
Although the constraints on the cosmological parameters of the Kinetic model
are compatible with the $\Lambda$CDM ones within the errors, the cosmological
standard model yields higher mean values for $H_{0}$ and $\Omega_{m}^{0}$,
when compared to the Kinetic model. The latter is characterised by a positive correlation between $\Omega_{m}^{0}$ and $H_{0}$, contrary to the anti-correlation that characterises the $\Lambda$CDM model, as shown in Figure 11.
In other words, a preference for lower values of $\Omega_{m}^{0}$ results in
lower values for $H_{0}$ alike. This characteristic correlation is persistent
throughout all three data combinations considered. This trait can be ascribed to the presence of a non-vanishing value of the $\alpha$ parameter, associated with an enhancement of the TT power spectrum (see Fig. 8).
Furthermore, in Figure 11 we depict the contour plots for the constraints in
the $S_{8}^{0}-\Omega_{m}^{0}$ plane. The parameters are positively correlated
for both the $\Lambda$CDM model and the Kinetic model. For the latter we find
$S_{8}^{0}=0.793^{+0.110}_{-0.064}$ at 68% C.L. with Plk18 data only, thus
alleviating the discordance with cosmic shear measurements Heymans _et al._
(2021); Di Valentino _et al._ (2021); Abdalla _et al._ (2022) present in the
standard model, for which we report ${S_{8}^{0}=0.833\pm 0.016}$. However, as
seen in Table 2 when the other data sets are also taken into account the
discrepancy arises again, reflecting a tension between BAO and/or SN data
under this framework. A similar situation has also been reported in a Galileon
model Frusciante _et al._ (2020). This contingency requires further investigation, since it has been suggested that there might be a bias towards $\Lambda$CDM-like models embedded in the BAO data Carter _et al._ (2020).
The inclusion of BAO and SN data leads to narrower constraints on
$\Omega_{m}^{0}$, which in turn results in tighter constraints on other
parameters, such as $H_{0}$, $S_{8}^{0}$, and $\lambda$. The latter is
directly connected to the anti-correlation shown in Fig. 12 in the
$\Omega_{m}^{0}$-$\lambda$ plane, i.e. higher values of $\Omega_{m}^{0}$
select lower values for $\lambda$. This negative correlation is justified by
considering that the late time accelerated expansion, expressed in terms of
$w_{\phi}\approx 1-2V/3H^{2}$, is mainly regulated by two parameters, namely
$\Omega_{\phi}^{0}$ and $\lambda$. The former is given by the Friedmann
constraint $\Omega_{\phi}^{0}\approx 1-\Omega_{m}^{0}$, meaning that, in turn,
higher values of $\Omega_{m}^{0}$ are associated with lower values of
$\Omega_{\phi}^{0}$. Therefore, and in order to have a cosmological constant-
like scenario for the scalar field at present times, $w_{\phi}^{0}\approx-1$,
the mean value of $\lambda$ is pushed towards smaller values, explaining the
identified anti-correlation between $\Omega_{m}^{0}$ and $\lambda$.
Figure 11: 68% and 95% C.L. 2D contours obtained for the parameters $H_{0}$ and $\Omega_{m}^{0}$ (left panels) and $S_{8}^{0}$ and $\Omega_{m}^{0}$ (right panels) in the Kinetic model (upper panels) and $\Lambda$CDM model (lower panels) for the Planck 2018 data (grey), the Planck 2018, BAO and SN combination (yellow), and their combination with CMB lensing (red).
Figure 12: 68% and 95% C.L. 2D contours obtained for the parameters $\lambda$ and $\Omega_{m}^{0}$ for the Kinetic model considering the Planck 2018 data (grey), the Planck 2018, BAO and SN combination (yellow), and their combination with CMB lensing (red).
 | Plk18 | Plk18+BAO+SN | Plk18+BAO+SN+len
---|---|---|---
$\Delta\chi^{2}_{\rm eff}$ | $-0.9$ | $0.7$ | $1.0$
$\Delta$DIC | $-0.3$ | $0.8$ | $1.6$
Table 4: Results for the $\Delta\chi^{2}_{\rm eff}$ and $\Delta\text{DIC}$
obtained as the difference between the Kinetic and $\Lambda$CDM scenarios.
Finally, we wish to examine whether the Kinetic model is supported over the
$\Lambda$CDM case resorting to statistical indicators: the effective
$\chi^{2}$ corresponding to the maximum likelihood, namely
$\chi_{\text{eff}}^{2}$, and the Deviance Information Criterion (DIC)
Spiegelhalter _et al._ (2014). The former will enable us to assess whether
the Kinetic model is preferred by the data against $\Lambda$CDM, by computing
$\Delta\chi_{\rm eff}^{2}=\chi^{2}_{\rm eff,Kinetic}-\chi^{2}_{\rm eff,\Lambda CDM}$,
with a negative value indicating support for the Kinetic model, while a positive
result indicates no preference. The DIC will complement this analysis as a
tool for quantifying this preference, and it is defined as
$\text{DIC}:=\chi_{\text{eff}}^{2}+2p_{\text{D}},$ (72)
where ${p_{\text{D}}=\overline{\chi}_{\text{eff}}^{2}-\chi_{\text{eff}}^{2}}$,
with the upper bar denoting the average of the posterior distribution.
According to this definition, the DIC accounts for both the reliability of the
fit, through the $\chi_{\text{eff}}^{2}$ term, and for the Bayesian complexity
of the model, encoded in $p_{\text{D}}$. Hence, more complex models are
disfavoured, in line with a quantitative Occam’s razor criterion. Thus,
cosmological models with smaller DIC should be preferred over models with
larger DIC Liddle (2009); Peirone _et al._ (2019a, b); Frusciante _et al._
(2020); Frusciante and Benetti (2021); Anagnostopoulos _et al._ (2021);
Rezaei and Malekjani (2021); Albuquerque _et al._ (2022); Atayde and
Frusciante (2021). Finally, the quantity
$\Delta\text{DIC}=\text{DIC}_{\text{Kinetic}}-\text{DIC}_{\text{$\Lambda$CDM}}\,,$
(73)
will indicate support for the Kinetic model over the $\Lambda$CDM scenario
provided that $\Delta\text{DIC}<0$. In Tab. 4 we present the values for both
the $\Delta\chi_{\rm eff}^{2}$ and the $\Delta\text{DIC}$. We gather that, by
taking the Plk18 data alone, a better fit to the data for the Kinetic model is
suggested, compared to the $\Lambda$CDM case, since $\Delta\chi^{2}=-0.9$.
However, when the other data sets are included, this preference is no longer
present. This is linked to the fact that the BAO and SN data spoil the fit to
the TT likelihood which, after the inclusion of the CMB lensing data, worsens
further, since the Kinetic model predicts a suppressed lensing amplitude while
the CMB lensing data actually show an excess of power.
However, it should be noted that the support of the Kinetic model by the
Planck data over the standard cosmological scenario is not overly significant
($\Delta{\rm DIC}=-0.3$) and the remaining data combinations indicate a slight
preference for the $\Lambda$CDM model. Therefore, we conclude that there is no
statistical evidence in support for either of the two models in this analysis.
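For concreteness, the DIC comparison of Eqs. (72)-(73) can be computed directly from the $\chi^{2}_{\rm eff}=-2\ln\mathcal{L}$ samples stored in the MCMC chains. The following is a minimal sketch, not the pipeline used in this work; the synthetic chains are purely hypothetical placeholders.

```python
import numpy as np

def dic(chi2_samples: np.ndarray) -> float:
    """DIC = chi2_eff + 2 p_D (Eq. 72), with p_D = mean(chi2) - chi2_eff,
    where chi2_eff is approximated by the minimum chi^2 found in the chain
    (the best fit reached by the sampler)."""
    chi2_eff = chi2_samples.min()
    p_d = chi2_samples.mean() - chi2_eff   # Bayesian complexity
    return chi2_eff + 2.0 * p_d

# Hypothetical -2 ln L columns of the chains for the two models;
# a negative Delta DIC (Eq. 73) would favour the Kinetic model.
rng = np.random.default_rng(0)
chi2_kinetic = 2800.0 + rng.chisquare(5, size=10_000)
chi2_lcdm = 2801.0 + rng.chisquare(4, size=10_000)
print(f"Delta DIC = {dic(chi2_kinetic) - dic(chi2_lcdm):.2f}")
```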
## V Conclusions
In this work we have thoroughly explored the evolution of the background and
linear perturbations of the Kinetic model, a coupled quintessence theory
characterised by a power-law kinetic interaction, with strength characterised
by the parameter $\alpha$. We studied the impact of the coupling between the
scalar field and the dark matter fluid on the cosmological observables and we
have provided cosmological constraints on the parameters of the theory using
CMB, CMB lensing, BAO and SN data.
We have derived the background and linear scalar perturbation equations and we
have modified the public Einstein Boltzmann code CLASS. For our study we have
identified the theoretically viable parameter space by enforcing stability
requirements such as the absence of ghosts and gradient instabilities. These
mostly define the range of viability of the parameter $\alpha$. The other
additional free parameter of this model is the steepness of the potential
function, $\lambda$, which has a crucial role in regulating the late time
accelerated expansion. We employed an extended viable range for $\lambda$
compared to what had previously been presented Barros (2019), as we allowed
for transient accelerated regimes at the present time and not at the future
attractor only.
In Section III we studied in detail the phenomenology of the Kinetic model. At
the background level we found that a non-vanishing value of $\alpha$ allows
for the presence of a scaling regime at early times, during the radiation
dominated epoch, according to which the ratio of the densities of the cold
dark matter and the scalar field approximately scales with $\alpha$. The
initial condition for the velocity of the scalar field sets how long the
quintessence field stays in the scaling regime, hence quantifying the
deviations from a cosmological constant behaviour. Furthermore we found that,
due to the coupling in the dark sector, energy is being transferred from the
dark matter field to the scalar field. We also highlighted the presence of a
shift of the radiation matter equality towards earlier times. These two
features have a direct impact on the matter power spectrum: the latter leads
to a shift in the position of its peak towards higher $k$ modes, generating in
turn a suppression for scales $k\lesssim 3\times 10^{-2}h$ Mpc-1, when
compared to the $\Lambda$CDM case; the former affects the growth of the matter
perturbations on larger $k$, resulting in an enhancement with respect to the
standard scenario. Likewise, the differences in the growth of the matter
perturbations influence the evolution of the gravitational potentials through
the Poisson equation. Consequently we found an overall suppression of the
lensing potential (and lensing power spectrum), with respect to $\Lambda$CDM,
along with a modified ISW effect which alters the shape of the TT power
spectrum for large angular scales.
These theoretical predictions are then used to provide constraints on the
model through a Monte Carlo code for cosmological parameter extraction. We
found that the $S_{8}$ tension is alleviated since
$S_{8}^{0}=0.793^{+0.110}_{-0.064}$ at 68% C.L. with Planck data, while the $H_{0}$
tension is still present. Regardless of the combination of data considered,
the parameter $\alpha$ is consistently constrained to be of the order
$10^{-4}$. We also reported on the bounds for the other parameter of the
model, $\lambda$, for which the strongest constraints are for the two
combinations including BAO and SN data. This is attributed to the strong
constraining power of BAO data on $\Omega_{m}^{0}$, which indirectly impacts
the bounds on $\lambda$. Finally we performed a model selection analysis based
on the effective $\chi_{\rm eff}^{2}$ and Deviance Information Criterion, but
we were not able to clearly identify the statistically favoured model between
$\Lambda$CDM and the Kinetic model. We want to stress that the purpose of our
work is not to make any claim on the class of models characterised by a
kinetic coupling with cold dark matter but to provide constraints on the
parameters of the specific model analysed, the latter being the first model
tested in this class of theories. The present analysis can thus be
considered a starting point to construct and test new kinetic coupling models
with interesting cosmological signatures.
In conclusion, we remark that it would be of interest to consider the Kinetic
model for future investigations when new probes from upcoming surveys will be
available. This progress will help shed light on the tensions, and the
high-accuracy data we expect to collect will allow us to establish a definite
preference for one model over the other.
###### Acknowledgements.
We thank Eleonora di Valentino for useful comments on the results. E.M.T. is
supported by the grant SFRH/BD/143231/2019 from Fundação para a Ciência e a
Tecnologia (FCT). B.J.B. is supported by the South African NRF Grants No.
120390, reference: BSFP190416431035; No. 120396, reference: CSRP190405427545.
N.F. is supported by the Italian Ministry of University and Research (MUR)
through the Rita Levi Montalcini project “Tests of gravity on cosmic scales”
with reference PGR19ILFGP. B.J.B., E.M.T. and N.F. also acknowledge the FCT
project with ref. number PTDC/FIS-AST/0054/2021. The results of this work were
possible thanks to The University of Sheffield’s High Performance Computing
(HPC) clusters Bessemer and ShARC.
## Appendix A Synchronous gauge
In this Appendix we write the linear perturbations equations of the Kinetic
model in synchronous gauge.
We use the following metric to describe perturbations in synchronous gauge
$\mathrm{d}s^{2}=a^{2}(\tau)\left[-\mathrm{d}\tau^{2}+\left(\delta_{ij}+h_{ij}\right)\mathrm{d}x^{i}\mathrm{d}x^{j}\right]\,,$
(74)
where the scalar modes of the perturbation components $h_{ij}$ are
parameterised in Fourier space,
$h_{ij}(\vec{x},\tau)=\int
d^{3}k\,\mathup{e}^{\mathup{i}\vec{k}\cdot\vec{x}}\left[\hat{\vec{k}}_{i}\cdot\hat{\vec{k}}_{j}\,h(\vec{k},\tau)+\left(\hat{\vec{k}}_{i}\cdot\hat{\vec{k}}_{j}-\frac{1}{3}\delta_{ij}\right)6\eta(\vec{k},\tau)\right]\,,$
(75)
with $\vec{k}=k\hat{\vec{k}}$. The perturbations in Newtonian gauge are
related with the scalar quantities $\eta$ and $h$ as follows Ma and
Bertschinger (1995):
$\displaystyle\Psi$ $\displaystyle=$
$\displaystyle\frac{1}{2k^{2}}\left[h^{\prime\prime}+6\eta^{\prime\prime}+\mathcal{H}\left(h^{\prime}+6\eta^{\prime}\right)\right]\,,$
(76) $\displaystyle\Phi$ $\displaystyle=$
$\displaystyle\eta-\frac{\mathcal{H}}{2k^{2}}\left(h^{\prime}+6\eta^{\prime}\right)\,,$
(77)
where a prime denotes derivatives with respect to the conformal time $\tau$.
We can then write the system of equations (39)-(42) in synchronous gauge as
follows:
$\displaystyle k^{2}\eta-\frac{1}{2}\mathcal{H}h^{\prime}$ $\displaystyle=$
$\displaystyle-4\pi Ga^{2}\sum_{i}\delta\rho_{i}\,,$ (78) $\displaystyle
k^{2}\eta^{\prime}$ $\displaystyle=$ $\displaystyle 4\pi
Ga^{2}\sum_{i}\rho_{i}(1+w_{i})\theta_{i}\,,$ (79) $\displaystyle
h^{\prime\prime}+2\mathcal{H}h^{\prime}-2k^{2}\eta$ $\displaystyle=$
$\displaystyle-24\pi Ga^{2}\sum_{i}\delta p_{i}\,,$ (80) $\displaystyle
h^{\prime\prime}+6\eta^{\prime\prime}+2\mathcal{H}\left(h^{\prime}+6\eta^{\prime}\right)-2k^{2}\eta$
$\displaystyle=$ $\displaystyle 0\,.$ (81)
Similarly one can find the equivalent of the linear perturbation equation for
the matter density perturbations and velocity:
$\displaystyle\delta^{\prime}_{i}+3\mathcal{H}\left(\frac{\delta
p_{i}}{\delta\rho_{i}}-w_{i}\right)\delta_{i}+(1+w_{i})\left(\theta_{i}+\frac{h^{\prime}}{2}\right)$
$\displaystyle=$ $\displaystyle 0\,,$ (82)
$\displaystyle\theta_{i}^{\prime}+\left[\mathcal{H}(1-3w_{i})+\frac{w_{i}^{\prime}}{1+w_{i}}\right]\theta_{i}-\frac{\delta
p_{i}}{\delta\rho_{i}}\frac{k^{2}}{1+w_{i}}\delta_{i}$ $\displaystyle=$
$\displaystyle 0\,,$ (83)
and for the cold dark matter density and velocity perturbations:
$\delta_{c}^{\prime}+\theta_{c}+\frac{h^{\prime}}{2}=\frac{Q}{\rho_{c}}\left(\phi^{\prime}\delta_{c}-\delta\phi^{\prime}\right)-\frac{\phi^{\prime}}{\rho_{c}}\delta
Q\,,$ (84)
$\theta^{\prime}_{c}+\mathcal{H}\theta_{c}=\frac{Q}{\rho_{c}}\left(\phi^{\prime}\theta_{c}-k^{2}\delta\phi\right)\,,$
(85)
where
$\delta Q=\frac{2\alpha\rho_{c}}{2\alpha a^{2}\rho_{c}+(1+2\alpha)\phi^{\prime 2}}\left\{\frac{h^{\prime}}{2}\phi^{\prime}-\phi^{\prime}\theta_{c}+\left[3\mathcal{H}\phi^{\prime}+a^{2}(V_{,\phi}-Q)\right]\delta_{c}+\left(2k^{2}+a^{2}V_{,\phi\phi}\right)\delta\phi-\left[3\mathcal{H}\phi^{\prime}+2a^{2}(V_{,\phi}-Q)\right]\frac{\delta\phi^{\prime}}{\phi^{\prime}}\right\}\,.$ (86)
It is worth noting that the synchronous gauge defines a frame which is always
comoving with cold dark matter. That is, in the absence of a coupling, $Q=0$,
and for an initial condition $\theta_{c}(z_{i})=0$, the velocity
divergence of CDM remains zero throughout time, as dictated by Eq. (85).
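This can also be checked numerically: with $Q=0$, Eq. (85) reduces to $\theta_{c}^{\prime}=-\mathcal{H}\theta_{c}$, so a vanishing initial condition stays zero. A toy integration follows; the matter-era conformal Hubble rate $\mathcal{H}=2/\tau$ is a placeholder, not the full background solution of this model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy check of Eq. (85) with Q = 0: theta_c' = -H(tau) * theta_c.
def rhs(tau, theta_c):
    H = 2.0 / tau          # placeholder matter-era conformal Hubble rate
    return -H * theta_c

sol = solve_ivp(rhs, t_span=(1.0, 100.0), y0=[0.0], rtol=1e-10, atol=1e-12)
print(np.abs(sol.y[0]).max())  # ~0: the CDM velocity divergence never grows
```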
Finally we write the equation for the scalar field perturbation:
$\delta\phi^{\prime\prime}+2\mathcal{H}\delta\phi^{\prime}+\left(a^{2}V_{,\phi\phi}+k^{2}\right)\delta\phi+\frac{h^{\prime}}{2}\phi^{\prime}=a^{2}\delta
Q\,.$ (87)
## References
* Weinberg (1989) S. Weinberg, Rev. Mod. Phys. 61, 1 (1989).
* Weinberg (2000) S. Weinberg, _4th International Symposium on Sources and Detection of Dark Matter in the Universe (DM 2000)_ , 18 (2000), arXiv:astro-ph/0005265 .
* Martin (2012) J. Martin, Comptes Rendus Physique 13, 566 (2012), arXiv:1205.3365 [astro-ph.CO] .
* Abdalla _et al._ (2022) E. Abdalla _et al._ , JHEAp 34, 49 (2022), arXiv:2203.06142 [astro-ph.CO] .
* Aghanim _et al._ (2020a) N. Aghanim _et al._ (Planck), Astron. Astrophys. 641, A6 (2020a), [Erratum: Astron.Astrophys. 652, C4 (2021)], arXiv:1807.06209 [astro-ph.CO] .
* Riess _et al._ (2019) A. G. Riess, S. Casertano, W. Yuan, L. M. Macri, and D. Scolnic, Astrophys. J. 876, 85 (2019), arXiv:1903.07603 [astro-ph.CO] .
* Wong _et al._ (2020) K. C. Wong _et al._ , Mon. Not. Roy. Astron. Soc. 498, 1420 (2020), arXiv:1907.04869 [astro-ph.CO] .
* Riess _et al._ (2021) A. G. Riess, S. Casertano, W. Yuan, J. B. Bowers, L. Macri, J. C. Zinn, and D. Scolnic, Astrophys. J. Lett. 908, L6 (2021), arXiv:2012.08534 [astro-ph.CO] .
* Pesce _et al._ (2020) D. W. Pesce _et al._ , Astrophys. J. Lett. 891, L1 (2020), arXiv:2001.09213 [astro-ph.CO] .
* Heymans _et al._ (2021) C. Heymans _et al._ , Astron. Astrophys. 646, A140 (2021), arXiv:2007.15632 [astro-ph.CO] .
* Di Valentino _et al._ (2021) E. Di Valentino _et al._ , Astropart. Phys. 131, 102604 (2021), arXiv:2008.11285 [astro-ph.CO] .
* Asgari _et al._ (2021) M. Asgari _et al._ (KiDS), Astron. Astrophys. 645, A104 (2021), arXiv:2007.15633 [astro-ph.CO] .
* Saridakis _et al._ (2021) E. N. Saridakis _et al._ (CANTATA), (2021), arXiv:2105.12582 [gr-qc] .
* Aad _et al._ (2012) G. Aad _et al._ (ATLAS), Phys. Lett. B 716, 1 (2012), arXiv:1207.7214 [hep-ex] .
* Chatrchyan _et al._ (2012) S. Chatrchyan _et al._ (CMS), Phys. Lett. B 716, 30 (2012), arXiv:1207.7235 [hep-ex] .
* Guth (1981) A. H. Guth, Phys. Rev. D 23, 347 (1981).
* Linde (1982) A. D. Linde, Phys. Lett. B 108, 389 (1982).
* Starobinsky (1982) A. A. Starobinsky, Phys. Lett. B 117, 175 (1982).
* Wetterich (1995) C. Wetterich, Astron. Astrophys. 301, 321 (1995), arXiv:hep-th/9408025 .
* Caldwell _et al._ (1998) R. R. Caldwell, R. Dave, and P. J. Steinhardt, Phys. Rev. Lett. 80, 1582 (1998), arXiv:astro-ph/9708069 .
* Chiba (1999) T. Chiba, Phys. Rev. D 60, 083508 (1999), arXiv:gr-qc/9903094 .
* Copeland _et al._ (1998) E. J. Copeland, A. R. Liddle, and D. Wands, Phys. Rev. D 57, 4686 (1998), arXiv:gr-qc/9711068 .
* Ferreira and Joyce (1998) P. G. Ferreira and M. Joyce, Phys. Rev. D 58, 023503 (1998), arXiv:astro-ph/9711102 .
* Liddle and Scherrer (1999) A. R. Liddle and R. J. Scherrer, Phys. Rev. D 59, 023509 (1999), arXiv:astro-ph/9809272 .
* Barreiro _et al._ (2000) T. Barreiro, E. J. Copeland, and N. J. Nunes, Phys. Rev. D 61, 127301 (2000), arXiv:astro-ph/9910214 .
* Amendola (2000) L. Amendola, Phys. Rev. D 62, 043511 (2000), arXiv:astro-ph/9908023 .
* Guo _et al._ (2003a) Z.-K. Guo, Y.-S. Piao, R.-G. Cai, and Y.-Z. Zhang, Phys. Lett. B 576, 12 (2003a), arXiv:hep-th/0306245 .
* Guo _et al._ (2003b) Z. K. Guo, Y.-S. Piao, and Y.-Z. Zhang, Phys. Lett. B 568, 1 (2003b), arXiv:hep-th/0304048 .
* Chimento _et al._ (2003) L. P. Chimento, A. S. Jakubi, D. Pavon, and W. Zimdahl, Phys. Rev. D 67, 083513 (2003), arXiv:astro-ph/0303145 .
* Tsujikawa and Sami (2004) S. Tsujikawa and M. Sami, Phys. Lett. B 603, 113 (2004), arXiv:hep-th/0409212 .
* Piazza and Tsujikawa (2004) F. Piazza and S. Tsujikawa, JCAP 07, 004 (2004), arXiv:hep-th/0405054 .
* Pettorino _et al._ (2005) V. Pettorino, C. Baccigalupi, and F. Perrotta, JCAP 12, 003 (2005), arXiv:astro-ph/0508586 .
* Amendola _et al._ (2006) L. Amendola, M. Quartin, S. Tsujikawa, and I. Waga, Phys. Rev. D 74, 023525 (2006), arXiv:astro-ph/0605488 .
* Ohashi and Tsujikawa (2009) J. Ohashi and S. Tsujikawa, Phys. Rev. D 80, 103513 (2009), arXiv:0909.3924 [gr-qc] .
* Gomes and Amendola (2014) A. R. Gomes and L. Amendola, JCAP 03, 041 (2014), arXiv:1306.3593 [astro-ph.CO] .
* Chiba _et al._ (2014) T. Chiba, A. De Felice, and S. Tsujikawa, Phys. Rev. D 90, 023516 (2014), arXiv:1403.7604 [gr-qc] .
* Amendola _et al._ (2014) L. Amendola, T. Barreiro, and N. J. Nunes, Phys. Rev. D 90, 083508 (2014), arXiv:1407.2156 [astro-ph.CO] .
* Albuquerque _et al._ (2018) I. S. Albuquerque, N. Frusciante, N. J. Nunes, and S. Tsujikawa, Phys. Rev. D 98, 064038 (2018), arXiv:1807.09800 [gr-qc] .
* Frusciante _et al._ (2018) N. Frusciante, R. Kase, N. J. Nunes, and S. Tsujikawa, Phys. Rev. D 98, 123517 (2018), arXiv:1810.07957 [gr-qc] .
* Amendola _et al._ (2018) L. Amendola, D. Bettoni, G. Domènech, and A. R. Gomes, JCAP 06, 029 (2018), arXiv:1803.06368 [gr-qc] .
* Barros (2019) B. J. Barros, Phys. Rev. D 99, 064051 (2019), arXiv:1901.03972 [gr-qc] .
* Albuquerque _et al._ (2022) I. S. Albuquerque, N. Frusciante, and M. Martinelli, Phys. Rev. D 105, 044056 (2022), arXiv:2112.06892 [astro-ph.CO] .
* Pace and Frusciante (2022) F. Pace and N. Frusciante, Universe 8, 145 (2022), arXiv:2204.06420 [gr-qc] .
* Zlatev _et al._ (1999) I. Zlatev, L.-M. Wang, and P. J. Steinhardt, Phys. Rev. Lett. 82, 896 (1999), arXiv:astro-ph/9807002 .
* Velten _et al._ (2014) H. E. S. Velten, R. F. vom Marttens, and W. Zimdahl, Eur. Phys. J. C 74, 3160 (2014), arXiv:1410.2509 [astro-ph.CO] .
* Tamanini (2015) N. Tamanini, Phys. Rev. D 92, 043524 (2015), arXiv:1504.07397 [gr-qc] .
* Valiviita _et al._ (2008) J. Valiviita, E. Majerotto, and R. Maartens, JCAP 07, 020 (2008), arXiv:0804.0232 [astro-ph] .
* Koivisto (2005) T. Koivisto, Phys. Rev. D 72, 043516 (2005), arXiv:astro-ph/0504571 .
* Schutz (1970) B. F. Schutz, Phys. Rev. D 2, 2762 (1970).
* Schutz and Sorkin (1977) B. F. Schutz and R. Sorkin, Annals Phys. 107, 1 (1977).
* Brown (1993) J. D. Brown, Class. Quant. Grav. 10, 1579 (1993), arXiv:gr-qc/9304026 .
* Pourtsidou _et al._ (2013) A. Pourtsidou, C. Skordis, and E. J. Copeland, Phys. Rev. D 88, 083505 (2013), arXiv:1307.0458 [astro-ph.CO] .
* Boehmer _et al._ (2015a) C. G. Boehmer, N. Tamanini, and M. Wright, Phys. Rev. D 91, 123002 (2015a), arXiv:1501.06540 [gr-qc] .
* Boehmer _et al._ (2015b) C. G. Boehmer, N. Tamanini, and M. Wright, Phys. Rev. D 91, 123003 (2015b), arXiv:1502.04030 [gr-qc] .
* Kase and Tsujikawa (2020) R. Kase and S. Tsujikawa, Phys. Rev. D 101, 063511 (2020), arXiv:1910.02699 [gr-qc] .
* Amendola (1999) L. Amendola, Phys. Rev. D 60, 043501 (1999), arXiv:astro-ph/9904120 .
* Tsujikawa (2006) S. Tsujikawa, Phys. Rev. D 73, 103504 (2006), arXiv:hep-th/0601178 .
* Gomes and Amendola (2016) A. R. Gomes and L. Amendola, JCAP 02, 035 (2016), arXiv:1511.01004 [gr-qc] .
* Frusciante _et al._ (2019a) N. Frusciante, R. Kase, K. Koyama, S. Tsujikawa, and D. Vernieri, Phys. Lett. B 790, 167 (2019a), arXiv:1812.05204 [gr-qc] .
* Amendola and Quercellini (2003) L. Amendola and C. Quercellini, Phys. Rev. D 68, 023514 (2003), arXiv:astro-ph/0303228 .
* Pettorino and Baccigalupi (2008) V. Pettorino and C. Baccigalupi, Phys. Rev. D 77, 103003 (2008), arXiv:0802.1086 [astro-ph] .
* Bean _et al._ (2008) R. Bean, E. E. Flanagan, I. Laszlo, and M. Trodden, Phys. Rev. D 78, 123514 (2008), arXiv:0808.1105 [astro-ph] .
* Pettorino _et al._ (2012) V. Pettorino, L. Amendola, C. Baccigalupi, and C. Quercellini, Phys. Rev. D 86, 103507 (2012), arXiv:1207.3293 [astro-ph.CO] .
* Pettorino (2013) V. Pettorino, Phys. Rev. D 88, 063519 (2013), arXiv:1305.7457 [astro-ph.CO] .
* Xia (2013) J.-Q. Xia, JCAP 11, 022 (2013), arXiv:1311.2131 [astro-ph.CO] .
* Ade _et al._ (2016) P. A. R. Ade _et al._ (Planck), Astron. Astrophys. 594, A14 (2016), arXiv:1502.01590 [astro-ph.CO] .
* van de Bruck _et al._ (2017) C. van de Bruck, J. Mifsud, and J. Morrice, Phys. Rev. D 95, 043513 (2017), arXiv:1609.09855 [astro-ph.CO] .
* Pourtsidou and Tram (2016) A. Pourtsidou and T. Tram, Phys. Rev. D 94, 043518 (2016), arXiv:1604.04222 [astro-ph.CO] .
* Van De Bruck and Mifsud (2018) C. Van De Bruck and J. Mifsud, Phys. Rev. D 97, 023506 (2018), arXiv:1709.04882 [astro-ph.CO] .
* Barros _et al._ (2019) B. J. Barros, L. Amendola, T. Barreiro, and N. J. Nunes, JCAP 01, 007 (2019), arXiv:1802.09216 [astro-ph.CO] .
* Agrawal _et al._ (2021) P. Agrawal, G. Obied, and C. Vafa, Phys. Rev. D 103, 043523 (2021), arXiv:1906.08261 [astro-ph.CO] .
* Gómez-Valent _et al._ (2020) A. Gómez-Valent, V. Pettorino, and L. Amendola, Phys. Rev. D 101, 123513 (2020), arXiv:2004.00610 [astro-ph.CO] .
* Pan _et al._ (2020) S. Pan, G. S. Sharov, and W. Yang, Phys. Rev. D 101, 103533 (2020), arXiv:2001.03120 [astro-ph.CO] .
* da Fonseca _et al._ (2022) V. da Fonseca, T. Barreiro, and N. J. Nunes, Phys. Dark Univ. 35, 100940 (2022), arXiv:2104.14889 [astro-ph.CO] .
* Archidiacono _et al._ (2022) M. Archidiacono, E. Castorina, D. Redigolo, and E. Salvioni, (2022), arXiv:2204.08484 [astro-ph.CO] .
* Afshordi _et al._ (2005) N. Afshordi, M. Zaldarriaga, and K. Kohri, Phys. Rev. D 72, 065024 (2005), arXiv:astro-ph/0506663 .
* Brookfield _et al._ (2006) A. W. Brookfield, C. van de Bruck, D. F. Mota, and D. Tocchini-Valentini, Phys. Rev. Lett. 96, 061301 (2006), arXiv:astro-ph/0503349 .
* Aviles and Cervantes-Cota (2011) A. Aviles and J. L. Cervantes-Cota, Phys. Rev. D 83, 023510 (2011), arXiv:1012.3203 [astro-ph.CO] .
* Carroll (1998) S. M. Carroll, Phys. Rev. Lett. 81, 3067 (1998), arXiv:astro-ph/9806099 .
* Chiba and Kohri (2002) T. Chiba and K. Kohri, Prog. Theor. Phys. 107, 631 (2002), arXiv:hep-ph/0111086 .
* Hui _et al._ (2009) L. Hui, A. Nicolis, and C. Stubbs, Phys. Rev. D 80, 104002 (2009), arXiv:0905.2966 [astro-ph.CO] .
* Creminelli _et al._ (2014) P. Creminelli, J. Gleyzes, L. Hui, M. Simonović, and F. Vernizzi, JCAP 06, 009 (2014), arXiv:1312.6074 [astro-ph.CO] .
* Brax and Valageas (2017) P. Brax and P. Valageas, Phys. Rev. D 95, 043515 (2017), arXiv:1611.08279 [astro-ph.CO] .
* Damour and Polyakov (1994) T. Damour and A. M. Polyakov, Nucl. Phys. B 423, 532 (1994), arXiv:hep-th/9401069 .
* Damour _et al._ (1990) T. Damour, G. W. Gibbons, and C. Gundlach, Phys. Rev. Lett. 64, 123 (1990).
* Koivisto _et al._ (2014) T. Koivisto, D. Wills, and I. Zavala, JCAP 06, 036 (2014), arXiv:1312.2597 [hep-th] .
* Barros and da Fonseca (2022) B. J. Barros and V. da Fonseca, (2022), arXiv:2209.12189 [astro-ph.CO] .
* Avelino and Sousa (2018) P. P. Avelino and L. Sousa, Phys. Rev. D 97, 064019 (2018), arXiv:1802.03961 [gr-qc] .
* Faraoni (2012) V. Faraoni, Phys. Rev. D 85, 024040 (2012), arXiv:1201.1448 [gr-qc] .
* Teixeira _et al._ (2019) E. M. Teixeira, A. Nunes, and N. J. Nunes, Phys. Rev. D 100, 043539 (2019), arXiv:1903.06028 [gr-qc] .
* Avelino and Azevedo (2022) P. P. Avelino and R. P. L. Azevedo, Phys. Rev. D 105, 104005 (2022), arXiv:2203.04022 [gr-qc] .
* Ferreira _et al._ (2020) V. M. C. Ferreira, P. P. Avelino, and R. P. L. Azevedo, Phys. Rev. D 102, 063525 (2020), arXiv:2005.07739 [astro-ph.CO] .
* Avelino and Azevedo (2018) P. P. Avelino and R. P. L. Azevedo, Phys. Rev. D 97, 064018 (2018), arXiv:1802.04760 [gr-qc] .
* Ma and Bertschinger (1995) C.-P. Ma and E. Bertschinger, Astrophys. J. 455, 7 (1995), arXiv:astro-ph/9506072 .
* van de Bruck and Teixeira (2020) C. van de Bruck and E. M. Teixeira, Phys. Rev. D 102, 103503 (2020), arXiv:2007.15414 [gr-qc] .
* Boisseau _et al._ (2000) B. Boisseau, G. Esposito-Farese, D. Polarski, and A. A. Starobinsky, Phys. Rev. Lett. 85, 2236 (2000), arXiv:gr-qc/0001066 .
* Tsujikawa (2007) S. Tsujikawa, Phys. Rev. D 76, 023514 (2007), arXiv:0705.1032 [astro-ph] .
* De Felice _et al._ (2011) A. De Felice, T. Kobayashi, and S. Tsujikawa, Phys. Lett. B 706, 123 (2011), arXiv:1108.4242 [gr-qc] .
* Raveri _et al._ (2014) M. Raveri, B. Hu, N. Frusciante, and A. Silvestri, Phys. Rev. D 90, 043513 (2014), arXiv:1405.1022 [astro-ph.CO] .
* Frusciante _et al._ (2016) N. Frusciante, M. Raveri, D. Vernieri, B. Hu, and A. Silvestri, Phys. Dark Univ. 13, 7 (2016), arXiv:1508.01787 [astro-ph.CO] .
* Salvatelli _et al._ (2016) V. Salvatelli, F. Piazza, and C. Marinoni, JCAP 09, 027 (2016), arXiv:1602.08283 [astro-ph.CO] .
* Frusciante _et al._ (2019b) N. Frusciante, G. Papadomanolakis, S. Peirone, and A. Silvestri, JCAP 02, 029 (2019b), arXiv:1810.03461 [gr-qc] .
* Frusciante _et al._ (2020) N. Frusciante, S. Peirone, L. Atayde, and A. De Felice, Phys. Rev. D 101, 064001 (2020), arXiv:1912.07586 [astro-ph.CO] .
* Frusciante and Perenon (2020) N. Frusciante and L. Perenon, Phys. Rept. 857, 1 (2020), arXiv:1907.03150 [astro-ph.CO] .
* Sbisà (2015) F. Sbisà, Eur. J. Phys. 36, 015009 (2015), arXiv:1406.4550 [hep-th] .
* Lesgourgues (2011a) J. Lesgourgues, (2011a), arXiv:1104.2932 [astro-ph.IM] .
* Blas _et al._ (2011) D. Blas, J. Lesgourgues, and T. Tram, Journal of Cosmology and Astroparticle Physics 2011, 034–034 (2011).
* Lesgourgues (2011b) J. Lesgourgues, (2011b), arXiv:1104.2934 [astro-ph.CO] .
* Audren _et al._ (2013) B. Audren, J. Lesgourgues, K. Benabed, and S. Prunet, Journal of Cosmology and Astroparticle Physics 2013, 001 (2013).
* Brinckmann and Lesgourgues (2019) T. Brinckmann and J. Lesgourgues, Phys. Dark Univ. 24, 100260 (2019), arXiv:1804.07261 [astro-ph.CO] .
* Lewis (2019) A. Lewis, (2019), arXiv:1910.13970 [astro-ph.IM] .
* Aghanim _et al._ (2020b) N. Aghanim _et al._ (Planck), Astron. Astrophys. 641, A5 (2020b), arXiv:1907.12875 [astro-ph.CO] .
* Ross _et al._ (2015) A. J. Ross, L. Samushia, C. Howlett, W. J. Percival, A. Burden, and M. Manera, Mon. Not. Roy. Astron. Soc. 449, 835 (2015), arXiv:1409.3242 [astro-ph.CO] .
* Beutler _et al._ (2017) F. Beutler _et al._ (BOSS), Mon. Not. Roy. Astron. Soc. 464, 3409 (2017), arXiv:1607.03149 [astro-ph.CO] .
* Beutler _et al._ (2011) F. Beutler, C. Blake, M. Colless, D. H. Jones, L. Staveley-Smith, L. Campbell, Q. Parker, W. Saunders, and F. Watson, Mon. Not. Roy. Astron. Soc. 416, 3017 (2011), arXiv:1106.3366 [astro-ph.CO] .
* Scolnic _et al._ (2018) D. M. Scolnic _et al._ (Pan-STARRS1), Astrophys. J. 859, 101 (2018), arXiv:1710.00845 [astro-ph.CO] .
* Aghanim _et al._ (2020c) N. Aghanim _et al._ (Planck), Astron. Astrophys. 641, A8 (2020c), arXiv:1807.06210 [astro-ph.CO] .
* Ade _et al._ (2014) P. A. R. Ade _et al._ (Planck), Astron. Astrophys. 571, A16 (2014), arXiv:1303.5076 [astro-ph.CO] .
* Adam _et al._ (2016) R. Adam _et al._ (Planck), Astron. Astrophys. 594, A1 (2016), arXiv:1502.01582 [astro-ph.CO] .
* Carter _et al._ (2020) P. Carter, F. Beutler, W. J. Percival, J. DeRose, R. H. Wechsler, and C. Zhao, Mon. Not. Roy. Astron. Soc. 494, 2076 (2020), arXiv:1906.03035 [astro-ph.CO] .
* Spiegelhalter _et al._ (2014) D. J. Spiegelhalter, N. G. Best, B. P. Carlin, and A. van der Linde, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76, 485 (2014).
* Liddle (2009) A. R. Liddle, Ann. Rev. Nucl. Part. Sci. 59, 95 (2009), arXiv:0903.4210 [hep-th] .
* Peirone _et al._ (2019a) S. Peirone, G. Benevento, N. Frusciante, and S. Tsujikawa, Phys. Rev. D 100, 063540 (2019a), arXiv:1905.05166 [astro-ph.CO] .
* Peirone _et al._ (2019b) S. Peirone, G. Benevento, N. Frusciante, and S. Tsujikawa, Phys. Rev. D 100, 063509 (2019b), arXiv:1905.11364 [astro-ph.CO] .
* Frusciante and Benetti (2021) N. Frusciante and M. Benetti, Phys. Rev. D 103, 104060 (2021), arXiv:2005.14705 [astro-ph.CO] .
* Anagnostopoulos _et al._ (2021) F. K. Anagnostopoulos, S. Basilakos, and E. N. Saridakis, Phys. Lett. B 822, 136634 (2021), arXiv:2104.15123 [gr-qc] .
* Rezaei and Malekjani (2021) M. Rezaei and M. Malekjani, Eur. Phys. J. Plus 136, 219 (2021), arXiv:2102.10671 [astro-ph.CO] .
* Atayde and Frusciante (2021) L. Atayde and N. Frusciante, Phys. Rev. D 104, 064052 (2021), arXiv:2108.10832 [astro-ph.CO] .
# QuAVF: Quality-aware Audio-Visual Fusion for Ego4D Talking to Me Challenge
Hsi-Che Lin1 Chien-Yi Wang2 Min-Hung Chen2 Szu-Wei Fu2 Yu-Chiang Frank Wang1,2
1 National Taiwan University 2 NVIDIA
<EMAIL_ADDRESS>{chienyiw, minhungc, szuweif<EMAIL_ADDRESS>
###### Abstract
This technical report describes our QuAVF@NTU-NVIDIA submission to the Ego4D
Talking to Me (TTM) Challenge 2023. Based on the observation from the TTM task
and the provided dataset, we propose to use two separate models to process the
input videos and audio. By doing so, we can utilize all the labeled training
data, including those without bounding box labels. Furthermore, we leverage
the face quality score from a facial landmark prediction model for filtering
noisy face input data. The face quality score is also employed in our proposed
quality-aware fusion for integrating the results from two branches. With the
simple architecture design, our model achieves $67.4\%$ mean average precision
(mAP) on the test set, which ranks first on the leaderboard and outperforms
the baseline method by a large margin. Code is available at:
https://github.com/hsi-che-lin/Ego4D-QuAVF-TTM-CVPR23
Figure 1: An illustration of our proposed Quality-aware Audio-Visual Fusion
(QuAVF) framework.
## 1 Introduction
Ego4D [2] is a large-scale dataset introduced by Meta AI, specifically
designed for the purpose of egocentric video understanding. Within the
dataset, the Talking to Me (TTM) challenge focuses on the identification of
social interactions in egocentric videos. Specifically, given a video and
audio segment containing tracked faces of interest, the objective is to
determine whether the person in each frame is talking to the camera wearer.
This task holds significant importance for studying social interactions, and
the understanding of egocentric social dynamics serves as a crucial element in
various applications, including virtual assistants and social robots.
Drawing inspiration from the winning approach by Xue et al. [6], we attempt to
fuse features from both modalities at an earlier stage. Therefore, in our
initial approach, referred to as the Audio-Vision joint model (AV-joint), as
shown in Figure 2, we incorporate a fusion of vision and audio features
immediately after the backbone network, prior to aggregating temporal
information. The AV-joint model is trained by jointly optimizing the vision
and audio branches. However, despite employing a significantly larger backbone
architecture (ResNet-50 [3] and Whisper [4]), the AV-joint model does not
yield substantial performance improvements over the baseline model. Although
our initial trial did not yield satisfactory results, a thorough analysis of
the limited improvement led to several key observations that motivated our
final approach. Firstly, as described in the original Ego4D paper [2], the
determination of the TTM label is based on vocal activity, irrespective of
whether the person is visible in the scene. Consequently, a significant
portion of the training data lacks the corresponding bounding box label
(about 0.7M out of 1.7M frames with a TTM label do not have a bounding box
label).
In our initial approach, we addressed the absence of bounding box labels by
using zero padding. However, this approach can have adverse effects on the
optimization process of the vision branch, as it may be trained on a large
amount of non-realistic images. Additionally, since the visual and audio
branches are trained jointly, the quality of the visual inputs can potentially
impact the audio branch, particularly when fused at an early stage. The
quality of the data is influenced by various factors, such as the methods
employed to handle data without bounding box labels (e.g., zero padding),
limitations of the hardware used to record egocentric videos, and potential
inaccuracies in bounding box annotations; hence, improving data quality is not
a straightforward task. One simple approach would be to discard data without
bounding box labels, but this would significantly reduce the available data
and waste audio activity annotations. To address these challenges, we explore
disentangling the two modalities.
In our subsequent experiments, we discovered that using only the audio input
resulted in superior performance compared to our initial AV-joint model (as
shown in Table 1). This finding further reinforces our assumption that the
quality of the visual data can impede the optimization process of the audio
branch. As a result, in our final approaches, we employ separate models to
process the audio and image modalities. For the audio branch, we leverage the
powerful encoder from Whisper [4], a robust speech recognition model, as we
observed that the semantic information conveyed through conversations provides
vital cues for this task. By disentangling the two modalities, the audio
branch can fully utilize all the labels in the dataset, unaffected by
variations in image quality. In the vision branch, we take steps to ensure
data quality by incorporating an additional model that provides a quality
score indicating the likelihood of a face appearing in images. This quality
score is utilized to filter out inappropriate training data for the vision
branch. Moreover, we discovered that employing a quantized quality score as
supplementary information for the vision branch yields significant
improvements to the model. Leveraging the same quality score, we introduce a
quality-aware audio-visual fusion (QuAVF) approach. Our top-performing models
achieved $71.2\%$ mAP on the validation set and $67.4\%$ mAP on the test set.
## 2 Approach
In this section, we first describe the model architecture of our initial
approach (AV-joint) in detail. Then, we describe each component we designed in
QuAVF. The final results and the ablation study will be shown in the next
section.
### 2.1 Baseline Audio-Vision (AV) Joint Model
##### Model Architecture.
In the AV-joint baseline model approach, the inputs are a sequence of 15
images sampled from the video at a frame rate of 2 fps, and the corresponding
audio clip. We view the center image frame as our target, and in each forward
pass our model will predict the score of the target frame. We first extract
features from the audio and image sequence by the encoder of Whisper [4] and
the ResNet-50 [3], respectively. Since the length of the audio feature in the
temporal dimension will be longer than the vision feature (Whisper produces 50
features for 1-second audio, while the video frame rate is 2) we apply average
pooling to the audio features so that it has the same length as the vision
feature. We concatenate audio and vision features along the temporal dimension
followed by an MLP to fuse two modalities. Then, we append a [CLS] token and
aggregate the temporal information by self-attention layers. Finally, the
prediction head is applied to the corresponding output of the [CLS] token to
produce the final prediction. The overall architecture of AV-joint is shown in
Figure 2.
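A minimal PyTorch sketch of this fusion head is given below. The backbone encoders are omitted, and the feature dimension `d`, the number of attention heads, and the layer count are our assumptions, since the report does not state them.

```python
import torch
import torch.nn as nn

class AVJointHead(nn.Module):
    """Sketch of the AV-joint fusion head (backbone encoders omitted)."""

    def __init__(self, d=512, n_frames=15):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(n_frames)      # align audio length to 15 frames
        self.fuse = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.cls = nn.Parameter(torch.zeros(1, 1, d))   # learnable [CLS] token
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True)
        self.attn = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d, 1)

    def forward(self, vis_feat, aud_feat):
        # vis_feat: (B, 15, d) from ResNet-50; aud_feat: (B, T, d) from Whisper
        aud = self.pool(aud_feat.transpose(1, 2)).transpose(1, 2)  # (B, 15, d)
        x = self.fuse(torch.cat([vis_feat, aud], dim=1))  # concat on temporal dim
        x = torch.cat([self.cls.expand(x.size(0), -1, -1), x], dim=1)
        x = self.attn(x)
        return self.head(x[:, 0])   # score of the target frame from the [CLS] token
```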
### 2.2 QuAVF
#### 2.2.1 Audio Branch
##### Model Architecture.
For the audio branch, we use the encoder of Whisper-small [4] as the backbone.
We freeze the backbone and append a randomly initialized self-attention layer
to refine the temporal information. Given an audio clip, we follow the
approach described in [4], pad the clip to 30 seconds, and transform it into
an 80-channel log-magnitude Mel spectrogram. Since the length of audio input
may not match the number of image frames, we apply an adaptive average pooling
and a prediction head on the output of the self-attention layer to obtain the
final logits for each image frame. The model architecture of the audio branch
is shown in Figure 1.
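A hedged sketch of the audio branch using the openai-whisper package follows; the Whisper-small hidden width (768), the head count of the single appended attention layer, and the 15-frame target length are our assumptions beyond what the report states.

```python
import torch
import torch.nn as nn
import whisper  # pip install openai-whisper

class AudioBranch(nn.Module):
    """Sketch: frozen Whisper-small encoder + one trainable attention layer."""

    def __init__(self, n_frames=15, d=768):  # 768 = Whisper-small width
        super().__init__()
        self.encoder = whisper.load_model("small").encoder
        for p in self.encoder.parameters():
            p.requires_grad = False          # backbone is frozen
        self.attn = nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True)
        self.pool = nn.AdaptiveAvgPool1d(n_frames)  # match the image frame count
        self.head = nn.Linear(d, 1)

    def forward(self, mel):
        # mel: (B, 80, 3000) log-Mel spectrogram of audio padded to 30 s
        x = self.attn(self.encoder(mel))                   # (B, 1500, d)
        x = self.pool(x.transpose(1, 2)).transpose(1, 2)   # (B, n_frames, d)
        return self.head(x).squeeze(-1)                    # per-frame logits
```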
##### Augmentation.
For data augmentation, we have tried adding Gaussian noise and cropping the
input audio. With some probability, we add Gaussian noise with an SNR
uniformly sampled from a predetermined range. Also with some probability, we
randomly crop the input audio into a shorter clip whose length is uniformly
sampled from a predetermined range.
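A sketch of these two augmentations follows; the helper names are ours, and the SNR range and crop parameters are taken from the settings explored in Table 3.

```python
import torch

def add_noise(audio: torch.Tensor, snr_db: float) -> torch.Tensor:
    """Additive Gaussian noise at a given signal-to-noise ratio (in dB)."""
    signal_power = audio.pow(2).mean()
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return audio + torch.randn_like(audio) * noise_power.sqrt()

def random_crop(audio: torch.Tensor, min_len: int) -> torch.Tensor:
    """Crop to a length sampled uniformly from [min_len, full length]."""
    length = torch.randint(min_len, audio.numel() + 1, (1,)).item()
    start = torch.randint(0, audio.numel() - length + 1, (1,)).item()
    return audio[start:start + length]

# e.g. the Table 3 settings: crop with p = 0.9 and a minimum of 3 seconds
if torch.rand(1).item() < 0.9:
    clip = random_crop(torch.randn(16_000 * 10), min_len=16_000 * 3)
```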
#### 2.2.2 Vision Branch
##### Model Architecture.
As for the vision branch, we choose a ResNet50 [3] pre-trained on ImageNet as
the backbone. To predict the label of a specific frame, we choose 7 frames
with a frame rate of $2$ before and after it ($15$ frames or $7.5$ seconds in
total), and feed them to our backbone independently. We then apply two
randomly initialized self-attention layers to the extracted features (together
with an additional learnable [CLS] token) to aggregate the information on the
temporal dimension. Finally, a prediction head is applied to the [CLS] token
to obtain the result logits. The model architecture of the vision branch is
shown in Figure 1.
##### Data Filtering.
In addition to disentangling the vision and audio modalities, we’ve also tried
to improve the quality of training data for the vision branch. To that end, we
apply the facial landmarks prediction model [1] on the bounding box region of
training data and average the confidence scores of all the landmark points
(Figure 3). We treat the resulting score as the face quality score for that
image, which represents how likely there is a face appearing in that region.
The face quality score of a training sample is defined as the average of face
quality score over all the included frames. The data with a score lower than a
threshold will be discarded to increase the data quality.
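A sketch of this filtering step, assuming the landmark model [1] returns one confidence value per predicted landmark (the threshold values are those explored in Table 4):

```python
import numpy as np

def face_quality_score(landmark_confidences: np.ndarray) -> float:
    """Average confidence over all predicted landmarks of one frame."""
    return float(landmark_confidences.mean())

def sample_quality(frame_scores: list[float]) -> float:
    """A training sample's score: the average over all its frames."""
    return float(np.mean(frame_scores))

def keep_sample(frame_scores: list[float], threshold: float = 0.3) -> bool:
    # Samples below the threshold are discarded (Table 4 explores 0.3 and 0.5).
    return sample_quality(frame_scores) > threshold
```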
Figure 2: The baseline AV-joint model approach.
##### Auxiliary Face Quality Score Feature.
The face quality scores obtained from the landmarks estimation model are not
only used to filter the data but also used as an input feature. We experiment
with two different settings. The first one is to apply a linear transformation
on the scalar and concatenate the output feature with the final [CLS] token
before the prediction head. The second one is to quantize the score first. The
result of quantization will be a one-hot vector showing which level of
magnitude the score falls into. We then apply the transformation,
concatenation, and prediction head on this vector.
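A sketch of the second (quantized) setting; the number of bins and the projection width are assumptions, since the report does not specify them.

```python
import torch
import torch.nn.functional as F

def quantize_quality(score: torch.Tensor, num_bins: int = 10) -> torch.Tensor:
    """One-hot vector marking which bin a score in [0, 1] falls into."""
    idx = torch.clamp((score * num_bins).long(), min=0, max=num_bins - 1)
    return F.one_hot(idx, num_classes=num_bins).float()

# The one-hot vector is linearly projected and concatenated with the
# final [CLS] token before the prediction head.
proj = torch.nn.Linear(10, 32)
feat = proj(quantize_quality(torch.tensor(0.73)))  # bin index 7 -> 32-d feature
```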
Figure 3: An illustration of face quality score computation with the facial
landmark model [1].
#### 2.2.3 Quality-Aware Fusion Module
Since we use two independent models to process the audio and images separately,
we need an additional fusion module so that the final decision considers
information from both modalities. To that end, we introduce a quality-aware
fusion module, which uses the face quality score to fuse the prediction scores
from two branches. The design is very simple; that is, we simply compute the
weighted sum of score from each branch with the weight of the vision branch
set as the face quality score (the weight of the audio branch is then
$(1-$face quality score$)$). The overall pipeline of our QuAVF is shown in
Figure 1.
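Since the fusion is a simple convex combination, it reduces to a one-liner; the per-frame tensors below are hypothetical.

```python
import torch

def quality_aware_fusion(score_vision, score_audio, face_quality):
    """Per-frame weighted sum: the face quality score weights the vision
    branch, and (1 - face quality score) weights the audio branch."""
    return face_quality * score_vision + (1.0 - face_quality) * score_audio

# Hypothetical per-frame scores for a 3-frame clip:
v = torch.tensor([0.9, 0.2, 0.6])     # vision branch
a = torch.tensor([0.4, 0.8, 0.5])     # audio branch
q = torch.tensor([0.95, 0.10, 0.50])  # face quality per frame
print(quality_aware_fusion(v, a, q))  # tensor([0.8750, 0.7400, 0.5500])
```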
## 3 Experiment
### 3.1 Experimental Setup
We evaluate the proposed AV-joint, QuAVF, and the two branches in QuAVF on the
validation and test data. For each model, we choose the setting that has the
best performance on the validation data, and we apply the same moving-average
post-processing with window size 25 to the raw predictions. We report the
results together with some previous approaches in Table 1.
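This post-processing is a plain centered moving average over the per-frame prediction scores; a minimal sketch (window size 25 as stated; the function name is ours):

```python
import numpy as np

def moving_average(scores: np.ndarray, window: int = 25) -> np.ndarray:
    """Smooth the per-frame raw predictions with a centered moving average."""
    kernel = np.ones(window) / window
    return np.convolve(scores, kernel, mode="same")
```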
Table 1: Results of Talking to Me (TTM) challenge.
method | Val. Accuracy | Val. mAP | Test Accuracy | Test mAP
---|---|---|---|---
Random Guess [2] | | | 47.41 | 50.16
ResNet-18 Bi-LSTM [2] | | | 49.75 | 55.06
EgoTask Translation [6] | | | 55.93 | 57.51
Baseline AV joint | 58.1 | 59.5 | 55.66 | 57.05
Audio-only | 71.7 | 70.1 | 57.77 | 67.39
Vision-only | 64.0 | 67.2 | 54.80 | 56.17
QuAVF | 71.8 | 71.2 | 57.05 | 65.83
For the ablation study, we evaluate our model with different settings on the
validation data. If not mentioned otherwise, we freeze the backbone model
owing to hardware constraints. We report the resulting accuracy and mAP on
the validation data in Tables 2, 3, 4 and 5.
### 3.2 Results
From Table 1, we can see that our QuAVF outperforms the previous winning
approach in terms of both accuracy and mAP. Also, we can see the effect of
disentangling the vision and audio branches, as well as of the quality-aware
fusion. We have tried different backbone models in our initial AV-joint
approach, and the results are shown in Table 2. We find that the choice of the
backbone is not very decisive, which suggests that there should be other reasons behind
the limited improvement of AV-joint over the baseline model. As a result, we
started to explore the characteristics of input data, which motivates our
quality filtering method.
Table 2: Results of the baseline AV-joint model on validation data
method | backbone | Accuracy | mAP
---|---|---|---
AV joint | ResNet-50 + Whisper | 53.6 | 59.5
AV joint | AV-HuBERT [5] | 53.4 | 58.2
For the audio branch, we experiment with the augmentation methods. The result
is presented in Table 3. We have tried different ranges of SNR for the input.
However, we find that noise augmentation does not have much effect. In
contrast, randomly cropping the input audio consistently improves the
performance. We experiment with different probabilities of applying cropping,
as well as different minimum lengths after cropping.
Table 3: Results of the audio branch model on validation data
method | noise aug. (SNR) | random crop | Accuracy | mAP
---|---|---|---|---
Audio only | | | 72.8 | 68.5
Audio only | min=3, max=20 | | 68 | 64.8
Audio only | min=10, max=20 | | 70.9 | 68.1
Audio only | | p=0.5, min=3 | 72 | 69.5
Audio only | | p=0.9, min=3 | 71.7 | 70.1
We experiment with different filter thresholds when training the vision branch.
The result can be seen in Table 4. Also, from the table we can see that the
face quality scores indeed provide useful information, especially when the
scores are quantized. By training the whole model, including the backbone,
we obtain our best model for the vision branch. The moving-average
post-processing consistently improves the results, as shown in Table 5.
Table 4: Results of the vision branch model on validation data
method | filter | face quality score | Accuracy | mAP
---|---|---|---|---
Vision branch | | | 53 | 54
Vision branch | $>0.5$ | | 51.3 | 55.9
Vision branch | $>0.3$ | | 57.2 | 62
Vision branch | $>0.3$ | scalar | 57 | 62
Vision branch | $>0.3$ | quantized | 63 | 65
Vision branch (fine-tune) | $>0.3$ | quantized | 64 | 67.2
Table 5: Effect of moving average post-process on validation data
method / branch | mAP (w/out) | mAP (with)
---|---|---
av-joint | 59.5 | 59.9
audio branch | 70.1 | 70.4
vision branch | 67.2 | 67.4
## 4 Discussion
### 4.1 Performance Gap between Validation and Test
Although our QuAVF method obtains the best $71.2\%$ mAP on the validation
data, it does not perform equally well on the test data. From the results, we
can see that making the audio branch independent of vision input improves the
model on both validation and test data. Hence, we suspect that this
performance gap comes from the vision branch, especially from the distribution
difference between the validation and test quality scores. However, future
work is needed to find out the real reason.
Figure 4: Illustrating positive and negative examples
### 4.2 Positive and Negative Examples
In Figure 4, we give positive and negative examples on the validation set. The
white number on the bottom left corner is the ground truth TTM label. On the
bottom right corner are the prediction score of QuAVF, vision branch, and
audio branch, respectively (top to bottom).
## References
* [1] Adrian Bulat and Georgios Tzimiropoulos. How far are we from solving the 2d & 3d face alignment problem?(and a dataset of 230,000 3d facial landmarks). In Proceedings of the IEEE international conference on computer vision, pages 1021–1030, 2017.
* [2] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18995–19012, 2022.
* [3] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [4] Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision. arXiv preprint arXiv:2212.04356, 2022.
* [5] Bowen Shi, Wei-Ning Hsu, Kushal Lakhotia, and Abdelrahman Mohamed. Learning audio-visual speech representation by masked multimodal cluster prediction. arXiv preprint arXiv:2201.02184, 2022.
* [6] Zihui Xue, Yale Song, Kristen Grauman, and Lorenzo Torresani. Egocentric video task translation@ ego4d challenge 2022. arXiv preprint arXiv:2302.01891, 2023.
# Co-occurrences using Fasttext embeddings for word similarity tasks in Urdu
Usama Khalid Department of Computer Science
AIM Lab, NUCES (FAST)
Islamabad, Pakistan
<EMAIL_ADDRESS>Aizaz Hussain Department of Computer Science
AIM Lab, NUCES (FAST)
Islamabad, Pakistan
<EMAIL_ADDRESS>Muhammad Umair Arshad Waseem Shahzad Department of
Computer Science
AIM Lab, NUCES (FAST)
Islamabad, Pakistan
<EMAIL_ADDRESS>Department of Computer Science
AIM Lab, NUCES (FAST)
Islamabad, Pakistan
<EMAIL_ADDRESS>Mirza Omer Beg Department of Computer Science
AIM Lab, NUCES (FAST)
Islamabad, Pakistan
<EMAIL_ADDRESS>
###### Abstract
Urdu is a widely spoken language in South Asia. Though a considerable body of
literature exists in the Urdu language, the available data is still not enough
to process the language with NLP techniques. Very efficient language models exist for the
English language, a high resource language, but Urdu and other under-resourced
languages have been neglected for a long time. To create efficient language
models for these languages we must have good word embedding models. For Urdu,
we can only find word embeddings trained and developed using the skip-gram
model. In this paper, we have built a corpus for Urdu by scraping and
integrating data from various sources and compiled a vocabulary for Urdu
language. We also modify fasttext embeddings and N-Grams models to enable
training them on our built corpus. We have used these trained embeddings for a
word similarity task and compared the results with existing techniques. The
datasets and code are made freely available on
GitHub.111https://github.com/usamakh20/wordEmbeddingsUrdu.
###### Index Terms:
Word Embeddings, Ngrams, Fasttext, Urdu, Word2Vec, Skip-Gram, Low Resource
Figure 1: N dimensional visualization of word vectors in 3D space.
## I Introduction
The Urdu language originated back in the 12th century with an Indo-Aryan
vocabulary base [1] and is a mixture of Arabic and Persian. Urdu is widely
spoken and written in the South Asian region, with more than 170 million
speakers, specifically in Pakistan and India. Despite this, Urdu [2] is
considered a low-resourced language because of insufficient data [3] compared
to English and other widely spoken languages. In recent times, the paradigm
has shifted towards the development of efficient models for low-resource languages [4].
Many deep learning and machine learning techniques are used to train language
models for the derivation of semantics from given textual data [5]. To derive
meaningful information from the text it is useful to find out the relation
between words. For example, as shown in Fig. 2, words are clustered together
based on their similarity. Language models store this information which can
then be used for many downstream tasks.
Figure 2: An overview of the Fasttext architecture.
Machines cannot understand language [6] the way we do, so text cannot be
passed into a network as it is; instead, each word is converted into an
$N$-dimensional vector. These representations are known as word embeddings. An
example representation of these embeddings is shown in Fig. 1. The words are
projected from $n$ dimensions to 3D [7, 8]. The words with related meanings
tend to appear close together. The word embeddings are the baseline for any
natural language processing task e.g., transliteration, natural language
generation, understanding user inputs, etc. [9]. All these vectors combined
show how similar a word is to others in a given vector space [10]. For Urdu,
a lot of work has been done on semantic analysis and sentence classification;
however, there are no studies that analyze the performance of word embedding
models on the Urdu language [11]. Unlike the studies conducted for widely
spoken languages, in this paper, we use different word
embedding models to compute similarity scores for words in the Urdu language.
In this paper, we used a freely available Urdu news text corpus COUNTER [12,
13], which contains data from 1200 documents collected from different news
agencies of Pakistan. We have then modified existing Fasttext and n-grams
approaches to be applied to Urdu data, and we train and provide embeddings.
Additionally, we compare our trained models and embeddings to the previously
available skip-gram [14, 15] technique on a word similarity task [16, 15].
This paper is organized in multiple sections which are as follows: In section
2, we will look into the related research. In section 3, we discuss
methodologies and the experimental hypothesis. In section 4, we will look at
the experimentation results. Finally, in section 5 we will summarize our work
and discuss the possible contributions and future directions.
## II Literature Review
A lot of work has been done on the Urdu language in terms of POS tagging,
sentiment analysis, NER, stemming, MT, and topic modeling [17, 18, 19, 20, 21,
22, 23, 24, 6, 11, 25, 26, 1], but not much work has been done on word
embeddings for Urdu. These word embeddings play a major role in natural
language understanding. Multiple embedding training architectures have been
introduced, e.g., BERT [27], Word2Vec, etc.
There are many ways in which word vectors can be represented; one of them is
the one-hot encoding representation [28], in which words are represented as
long binary vectors [24]. One-hot vectors for a corpus can be aggregated to
form the BoW (Bag of Words) representation [29, 30]. The bag of words
maintains a dictionary of all possible words in the language and keeps track
of the frequency of each word encountered in the particular corpus.
The problem with BoW is that it does not keep track of word similarity and
contextual meaning. To address the word similarity problem, TF-IDF (Term
Frequency - Inverse Document Frequency) [31, 32] was introduced. It
associates each word in a document with a number that measures how relevant
that word is. Based on these word weights, one can compare the similarity of
multiple documents.
Word2Vec is a fusion of two architectures, i.e., CBOW and Skip-Gram [33]. These
architectures are designed to be mirror images of one another [34]. The CBOW
model tries to predict a word from its surrounding context, while the
Skip-Gram model tries to predict the surrounding context words from the input word.
Figure 3: An overview of the N-Gram model used in this research.
Word embeddings help us considerably improve NLP techniques for low-resourced
languages. In the context of Urdu, the only word embeddings present in the
literature are those of Skip-Gram [14]. To create large-sample word embeddings
for Urdu, 140 million Urdu sentences were used. To check the accuracy of the
learned embeddings, the nearest-neighbour words of different words in the
vector space [35] were analyzed across context window sizes, together with
their performance on Urdu transliteration.
The basic idea behind the N-Gram language model is to assign a probability to
each word in a given sequence of words [36, 37]. For word embeddings, words
are dissected into chunks of N tokens, and these chunks are assigned
probabilities. Using these probabilities, the closest context of a word can be
calculated in the vector space. Through the calculation of probabilities, this
model is very helpful for natural language generation, sentence completion
[38], sentence correction, etc. The main issue with the N-Gram model is that
it is sensitive to the training corpus. Many models have been introduced that
combine N-Grams and neural networks to overcome these problems and generate
more accurate results.
Fasttext is primarily an architecture developed by Facebook for text
classification [39]. Fasttext builds on the principles of Word2Vec and the
n-grams technique. In Word2Vec, words are fed into the neural network
individually. In Fasttext, however, each word is divided into several
sub-words which are then fed into the neural network. Consider the word apple:
if we dissect this word into tri-grams, the resulting output would be app,
ppl, and ple [40]. The word vector for apple will be the sum of the vectors of
all these tri-grams. After training the neural network on the training data,
we obtain a word vector for each n-gram, and these n-grams can later be used
to relate other words. Even rare words can be mapped, as there will be many
overlapping n-grams that also appear in other words.
## III Methodology
We used two methods to train our word embedding models, Fasttext and N-Grams
[41]. Fig. 3 shows the working of the modified N-Gram model used in this
research. The N-Gram model converts the document into tokens and stores these
tokens in a dictionary based on the co-occurrences of words. That is, the
number of times a token $t_{i}$ appears next to a token $t_{j}$ is stored in a
co-occurrence dictionary. Against each key there are multiple co-occurring
tokens, each with the probability score of its occurrence. In Fasttext, a
document is tokenized and passed through a network. The network learns
weights, which can be extracted as word embeddings. Fig. 2 shows how words are
propagated through the network to extract embeddings for Urdu. In the next
sections we discuss the dataset, experimentation, and results in detail.
### III-A Corpus
We have used the Urdu Monolingual corpus [42] containing 54 million sentences,
90 million tokens and 129K unique tokens. In the preprocessing step we removed
all special characters such as brackets, single/double quotes and spaces [43].
All these special characters are replaced by spaces. As a second step
consecutive occurring spaces of two or more are matched and replaced by a
single space character.
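A sketch of this preprocessing, assuming a representative set of special characters (the paper does not list the exact set):

```python
import re

def preprocess(text: str) -> str:
    """Replace special characters with spaces, then collapse space runs."""
    text = re.sub(r'[()\[\]{}\'"]', " ", text)  # brackets and quotes -> space
    return re.sub(r" {2,}", " ", text).strip()  # two or more spaces -> one
```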
### III-B Techniques
We have used two techniques for training [44], namely n-grams and Fasttext
[45]. The n-gram technique requires data to be separated sentence by sentence,
where each sentence is broken down into a list of words [46]. After separating
into lists of words, we remove common stop words. Similar preprocessing is
applied for Fasttext; however, the fasttext Python package has the tokenizer
and stop-word removal built in. The complete architecture is given in
Figures 2 and 3.
### III-C Hyper Parameters
The Fasttext technique has four main hyper-parameters [47] that we can tune.
Vector dimension is the length of the vector used to represent a single word;
larger vectors capture more information [48] but are harder to train and
require more data [49]. Epochs is the number of passes the model makes over
the training data; the larger the corpus, the fewer times it may have to be
iterated. Learning rate is a measure of how quickly the model converges to a
solution [50]. Sub-word length specifies the lengths of the substrings [51]
considered for tasks like resolving out-of-vocabulary words.
For the current study we have used the default Fasttext parameters, which are
listed below; a training call using these values is sketched after the list.
* •
Vector dimension : 100
* •
Epochs : 5
* •
Learning rate : 0.05
* •
Sub words length : min=3 & max=6
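As mentioned above, the following is a hedged sketch of the corresponding training call with the fasttext Python package; the corpus file name and the choice of skip-gram mode are our assumptions.

```python
import fasttext

# Unsupervised training on the preprocessed corpus with the defaults above;
# the corpus file name is hypothetical and skip-gram mode is an assumption.
model = fasttext.train_unsupervised(
    "urdu_corpus.txt",
    model="skipgram",
    dim=100,    # vector dimension
    epoch=5,
    lr=0.05,
    minn=3,     # sub-word (character n-gram) lengths
    maxn=6,
)
model.save_model("urdu_fasttext.bin")
```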
The n-grams technique has only a single hyper-parameter, namely the number of
consecutive words (grams) used in training.
## IV Results and Discussion
For evaluating the similarity of the learned word representations, we use
Urdu-translated versions of the SimLex-999 [52] and WordSim-353 [53] corpora.
SimLex-999 is a gold-standard dataset for evaluating word embeddings. It
contains 999 noun, adjective, and verb pairs in concrete [54] and abstract
forms. The dataset is designed to evaluate the similarity of words rather than
relatedness, and contains a similarity score for each pair. The WordSim-353
dataset [55] contains relatedness scores for 353 word pairs.
These datasets have been translated to Urdu using ijunoon’s translation
service222https://translate.ijunoon.com/ and made available. For calculating
the similarity and relatedness of words, we use the Spearman correlation
coefficient, Eq. (1) [56], where $d$ is the difference between the predicted
and actual ranks and $n$ is the number of examples.
Figure 4: In this figure, some word embeddings produced by fasttext are mapped
to a 2D space which shows how words are related and how they appear in close
proximity to each other.
$r_{s}=1-\frac{6\sum_{i=1}^{n}d_{i}^{2}}{n(n^{2}-1)}$
(1)
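In practice the coefficient can be computed directly from the two aligned score lists, e.g. with scipy; the scores below are toy values, not results from the paper:

```python
from scipy.stats import spearmanr

# Gold similarity scores for some word pairs and the model's predicted
# similarities for the same pairs (toy values).
gold = [9.2, 7.5, 3.1, 1.0]
predicted = [0.81, 0.66, 0.35, 0.12]

rho, _ = spearmanr(gold, predicted)
print(round(rho, 3))  # 1.0 for this perfectly rank-aligned toy example
```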
Technique | WordSim-353 | SimLex-999
---|---|---
Fasttext | 0.462 | 0.743
bigrams | 0.188 | 0.156
skip-gram [14] | 0.492 | 0.293
Fasttext English [57] | 0.84 | 0.76
Figure 5: We compare our results with the skip-gram [14] embeddings that were
evaluated on the same datasets. The Fasttext embeddings are trained as
100-dimensional vectors, so the skip-gram results are also for 100-dimensional
vectors for a fair comparison.
As expected, the bigrams similarity measure produces a low correlation score;
this is partly because correlation is only computed for exact word matches
from the corpus, which are far fewer than for Fasttext on WordSim-353 and
SimLex-999. The Fasttext technique outperforms the skip-gram based technique
[14] on the SimLex-999 task but slightly under-performs on WordSim-353.
## V Conclusion
The advent of word embedding techniques [58] was no less than a revolution in
the field of NLP. It enabled the representation of words in a digital form
(vectors) that computers can understand and perform mathematical calculations
on, like the famous example King - Man + Woman = Queen. It also established
the groundwork for modern deep attention-based models and Transformers in the
field of NLP.
Urdu has long remained an under-resourced language, which has caused many
proposed state-of-the-art techniques to under-perform when applied to Urdu
corpora. It can also be seen in Fig. 5 that the performance of Fasttext on
Urdu corpora is nowhere near that of English. In this research we have
proposed word co-occurrences using bigrams and Fasttext word embeddings
trained on the COUNTER corpus, evaluated our approach on the WordSim-353 and
SimLex-999 similarity tasks, and compared it to the previously proposed
skip-gram technique.
In the future, work can be done on training these techniques on larger Urdu
corpora and evaluating them on tasks like POS tagging, NER, machine
translation, sentiment analysis and dependency parsing. In addition, larger
corpora have to be proposed for Urdu if we want to at least match the
performance of techniques that have been proposed for high-resource languages
such as English. We hope that this work will help researchers to produce
better techniques in the area of Urdu NLP.
## References
* [1] Mirza Beg and Mike Dahlin. A memory accounting interface for the java programming language.
* [2] Bilal Naeem, Aymen Khan, Mirza Omer Beg, and Hasan Mujtaba. A deep learning framework for clickbait detection on social area network using natural language cues. Journal of Computational Social Science, pages 1–13, 2020.
* [3] Abdul Rehman Javed, Muhammad Usman Sarwar, Mirza Omer Beg, Muhammad Asim, Thar Baker, and Hissam Tawfik. A collaborative healthcare framework for shared healthcare plan with ambient intelligence. Human-centric Computing and Information Sciences, 10(1):1–21, 2020.
* [4] Mirza Beg and Peter Van Beek. A graph theoretic approach to cache-conscious placement of data for direct mapped caches. In Proceedings of the 2010 international symposium on Memory management, pages 113–120, 2010.
* [5] Hafiz Tayyeb Javed, Mirza Omer Beg, Hasan Mujtaba, Hammad Majeed, and Muhammad Asim. Fairness in real-time energy pricing for smart grid using unsupervised learning. The Computer Journal, 62(3):414–429, 2019.
* [6] Rabail Zahid, Muhammad Owais Idrees, Hasan Mujtaba, and Mirza Omer Beg. Roman urdu reviews dataset for aspect based opinion mining. In 2020 35th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW), pages 138–143. IEEE, 2020.
* [7] Mirza Beg, Laurent Charlin, and Joel So. Maxsm: A multi-heuristic approach to xml schema matching. 2006.
* [8] Aaditeshwar Seth and Mirza Beg. Achieving privacy and security in radio frequency identification. In Proceedings of the 2006 International Conference on Privacy, Security and Trust: Bridge the Gap Between PST Technologies and Business Services, pages 1–1, 2006.
* [9] Abdul Ali Bangash, Hareem Sahar, and Mirza Omer Beg. A methodology for relating software structure with energy consumption. In 2017 IEEE 17th International Working Conference on Source Code Analysis and Manipulation (SCAM), pages 111–120. IEEE, 2017.
* [10] Mirza O Beg, Mubashar Nazar Awan, and Syed Shahzaib Ali. Algorithmic machine learning for prediction of stock prices. In FinTech as a Disruptive Technology for Financial Institutions, pages 142–169. IGI Global, 2019.
* [11] Hussain S Khawaja, Mirza O Beg, and Saira Qamar. Domain specific emotion lexicon expansion. In 2018 14th International Conference on Emerging Technologies (ICET), pages 1–5. IEEE, 2018.
* [12] Muhammad Sharjeel, Rao Muhammad Adeel Nawab, and Paul Rayson. Counter: corpus of urdu news text reuse. Language resources and evaluation, 51(3):777–803, 2017.
* [13] Adeel Zafar, Hasan Mujtaba, Sohrab Ashiq, and Mirza Omer Beg. A constructive approach for general video game level generation. In 2019 11th Computer Science and Electronic Engineering (CEEC), pages 102–107. IEEE, 2019.
* [14] Samar Haider. Urdu word embeddings. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), 2018.
* [15] Saira Qamar, Hasan Mujtaba, Hammad Majeed, and Mirza Omer Beg. Relationship identification between conversational agents using emotion analysis. Cognitive Computation, pages 1–15.
* [16] Talha Imtiaz Baig, Nazish Banaras, Ebad Banissi, Rafia Bashir, Mirza Omer Beg, Junaid Bilal, Ahmad Hassan Butt, Waseem Chishti, Christos Chrysoulas, Anum Dastgir, et al. Awan, shahid mahmood 245 ayubi, salah-u-din 192.
* [17] Wahab Khan, Ali Daud, Khairullah Khan, Jamal Abdul Nasir, Mohammed Basheri, Naif Aljohani, and Fahd Saleh Alotaibi. Part of speech tagging in urdu: Comparison of machine and deep learning approaches. IEEE Access, 7:38918–38936, 2019.
* [18] Neelam Mukhtar and Mohammad Abid Khan. Urdu sentiment analysis using supervised machine learning approach. International Journal of Pattern Recognition and Artificial Intelligence, 32(02):1851001, 2018.
* [19] Muhammad Kamran Malik. Urdu named entity recognition and classification system using artificial neural network. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 17(1):1–13, 2017.
* [20] Vaishali Gupta, Nisheeth Joshi, and Iti Mathur. Design & development of rule based inflectional and derivational urdu stemmer ‘usal’. In 2015 International conference on futuristic trends on computational analysis and knowledge management (ABLAZE), pages 7–12. IEEE, 2015.
* [21] Khadija Shakeel, Ghulam Rasool Tahir, Irsha Tehseen, and Mubashir Ali. A framework of urdu topic modeling using latent dirichlet allocation (lda). In 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC), pages 117–123. IEEE, 2018.
* [22] Muhammad Umair Arshad, Muhammad Farrukh Bashir, Adil Majeed, Waseem Shahzad, and Mirza Omer Beg. Corpus for emotion detection on roman urdu. In 2019 22nd International Multitopic Conference (INMIC), pages 1–6. IEEE, 2019.
* [23] Saad Nacem, Majid Iqbal, Muhammad Saqib, Muhammad Saad, Muhammad Soban Raza, Zaid Ali, Naveed Akhtar, Mirza Omer Beg, Waseem Shahzad, and Muhhamad Umair Arshad. Subspace gaussian mixture model for continuous urdu speech recognition using kaldi. In 2020 14th International Conference on Open Source Systems and Technologies (ICOSST), pages 1–7. IEEE, 2020.
* [24] Adil Majeed, Hasan Mujtaba, and Mirza Omer Beg. Emotion detection in roman urdu text using machine learning. In Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering Workshops, pages 125–130, 2020.
* [25] Uzma Rani, Aamer Imdad, and Mirza Beg. Case 2: Recurrent anemia in a 10-year-old girl. Pediatrics in review, 36(12):548–550, 2015.
* [26] Zubair Baig, Mirza Omer Beg, Baber Majid Bhatti, Farzana Ahamed Bhuiyan, Tegawendé F Bissyandé, Shizhan Chen, Mohan Baruwal Chhetri, Marco Couto, João de Macedo, Randy de Vries, et al. Ahmed, sanam 124 aleti, aldeida 105 aloísio, joão 151 arachchilage, nalin asanka gamagedara 7.
* [27] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
* [28] John T Hancock and Taghi M Khoshgoftaar. Survey on categorical data for neural networks. Journal of Big Data, 7:1–41, 2020.
* [29] Yin Zhang, Rong Jin, and Zhi-Hua Zhou. Understanding bag-of-words model: a statistical framework. International Journal of Machine Learning and Cybernetics, 1(1-4):43–52, 2010.
* [30] Mirza Omer Beg. Performance analysis of packet forwarding on ixp2400 network processor. 2006.
* [31] Bijoyan Das and Sarit Chakraborty. An improved text sentiment classification model using tf-idf and next word negation. arXiv preprint arXiv:1806.06407, 2018.
* [32] Muhammad Umer Farooq, Mirza Omer Beg, et al. Bigdata analysis of stack overflow for energy consumption of android framework. In 2019 International Conference on Innovative Computing (ICIC), pages 1–9. IEEE, 2019.
* [33] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119, 2013.
* [34] Adeel Zafar, Hasan Mujtaba, and Mirza Omer Beg. Search-based procedural content generation for gvg-lg. Applied Soft Computing, 86:105909, 2020.
* [35] M Beg. Critical path heuristic for automatic parallelization. 2008.
* [36] Adam Pauls and Dan Klein. Faster and smaller n-gram language models. In Proceedings of the 49th annual meeting of the Association for Computational Linguistics: Human Language Technologies, pages 258–267, 2011.
* [37] Mirza Omer Beg. Flecs: A data-driven framework for rapid protocol prototyping. Master’s thesis, University of Waterloo, 2007.
* [38] Adeel Zafar, Hasan Mujtaba, Mirza Tauseef Baig, and Mirza Omer Beg. Using patterns as objectives for general video game level generation. ICGA Journal, 41(2):66–77, 2019.
* [39] Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, and Tomas Mikolov. Fasttext. zip: Compressing text classification models. arXiv preprint arXiv:1612.03651, 2016.
* [40] Muhammad Umer Farooq, Saif Ur Rehman Khan, and Mirza Omer Beg. Melta: A method level energy estimation technique for android development. In 2019 International Conference on Innovative Computing (ICIC), pages 1–10. IEEE, 2019.
* [41] Hamza M Alvi, Hareem Sahar, Abdul A Bangash, and Mirza O Beg. Ensights: A tool for energy aware software development. In 2017 13th International Conference on Emerging Technologies (ICET), pages 1–6. IEEE, 2017.
* [42] Bushra Jawaid, Amir Kamran, and Ondrej Bojar. A tagged corpus and a tagger for urdu. In LREC, pages 2938–2943, 2014.
* [43] Danyal Thaver and Mirza Beg. Pulmonary crohn’s disease in down syndrome: A link or linkage problem. Case reports in gastroenterology, 10(2):206–211, 2016.
* [44] Ahmed Uzair, Mirza O Beg, Hasan Mujtaba, and Hammad Majeed. Weec: Web energy efficient computing: A machine learning approach. Sustainable Computing: Informatics and Systems, 22:230–243, 2019.
* [45] Mirza Beg and Peter van Beek. A constraint programming approach for integrated spatial and temporal scheduling for clustered architectures. ACM Transactions on Embedded Computing Systems (TECS), 13(1):1–23, 2013.
* [46] Muhammad Tariq, Hammad Majeed, Mirza Omer Beg, Farrukh Aslam Khan, and Abdelouahid Derhab. Accurate detection of sitting posture activities in a secure iot based assisted living environment. Future Generation Computer Systems, 92:745–757, 2019.
* [47] Adeel Zafar, Hasan Mujtaba, Mirza Omer Beg, and Sajid Ali. Deceptive level generator. 2018.
* [48] Hareem Sahar, Abdul A Bangash, and Mirza O Beg. Towards energy aware object-oriented development of android applications. Sustainable Computing: Informatics and Systems, 21:28–46, 2019.
* [49] Walid Koleilat, Joel So, and Mirza Beg. Watagent: A fresh look at tac-scm agent design. 2006.
* [50] Mirza Beg. Flecs: A framework for rapidly implementing forwarding protocols. In International Conference on Complex Sciences, pages 1761–1773. Springer, 2009.
* [51] Muhammad Asad, Muhammad Asim, Talha Javed, Mirza O Beg, Hasan Mujtaba, and Sohail Abbas. Deepdetect: detection of distributed denial of service attacks using deep learning. The Computer Journal, 63(7):983–994, 2020.
* [52] Felix Hill, Roi Reichart, and Anna Korhonen. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665–695, 2015.
* [53] Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pasca, and Aitor Soroa. A study on similarity and relatedness using distributional and wordnet-based approaches. 2009.
* [54] Noman Dilawar, Hammad Majeed, Mirza Omer Beg, Naveed Ejaz, Khan Muhammad, Irfan Mehmood, and Yunyoung Nam. Understanding citizen issues through reviews: A step towards data informed planning in smart cities. Applied Sciences, 8(9):1589, 2018.
* [55] Abdul Rehman Javed, Mirza Omer Beg, Muhammad Asim, Thar Baker, and Ali Hilal Al-Bayatti. Alphalogger: Detecting motion-based side-channel attack using smartphone keystrokes. Journal of Ambient Intelligence and Humanized Computing, pages 1–14, 2020.
* [56] Ch Spearman. The proof and measurement of association between two things. International journal of epidemiology, 39(5):1137–1150, 2010.
* [57] Vitalii Zhelezniak, Aleksandar Savkov, April Shen, and Nils Y Hammerla. Correlation coefficients and semantic textual similarity. arXiv preprint arXiv:1905.07790, 2019.
* [58] Martin Karsten, Srinivasan Keshav, Sanjiva Prasad, and Mirza Beg. An axiomatic basis for communication. ACM SIGCOMM Computer Communication Review, 37(4):217–228, 2007.
# Targeted Adversarial Attacks on Deep Reinforcement Learning Policies via
Model Checking
Dennis Gross1, Thiago D. Simão1, Nils Jansen1, and Guillermo A. Pérez2
1Institute for Computing and Information Sciences, Radboud University,
Toernooiveld 212, 6525 EC Nijmegen,
The Netherlands
2Department of Computing, University of Antwerp, Middelheimlaan 1, 2020
Antwerpen, Belgium
<EMAIL_ADDRESS>
###### Abstract
Deep Reinforcement Learning (RL) agents are susceptible to adversarial noise
in their observations that can mislead their policies and decrease their
performance. However, an adversary may be interested not only in decreasing
the reward, but also in modifying specific temporal logic properties of the
policy. This paper presents a metric that measures the exact impact of
adversarial attacks against such properties. We use this metric to craft
optimal adversarial attacks. Furthermore, we introduce a model checking method
that allows us to verify the robustness of RL policies against adversarial
attacks. Our empirical analysis confirms (1) the quality of our metric to
craft adversarial attacks against temporal logic properties, and (2) that we
are able to concisely assess a system’s robustness against attacks.
## 1 INTRODUCTION
_Deep reinforcement learning (RL)_ has changed how we build agents for
sequential decision-making problems (Mnih et al.,, 2015; Levine et al.,,
2016). It has triggered applications in critical domains like energy,
transportation, and defense (Farazi et al.,, 2021; Nakabi and Toivanen,, 2021;
Boron and Darken,, 2020). An RL agent learns a near-optimal policy (based on a
given objective) by making observations and gaining rewards through
interacting with the environment (Sutton and Barto,, 2018). Despite the
success of RL, potential security risks limit its usage in real-life
applications. The so-called adversarial attacks introduce noise into the
observations and mislead the RL decision-making to drop the cumulative reward,
which may lead to unsafe behaviour (Huang et al., 2017a, ; Chen et al.,, 2019;
Ilahi et al.,, 2022; Moos et al.,, 2022; Amodei et al.,, 2016).
Generally, rewards lack the expressiveness to encode complex safety
requirements (Vamplew et al.,, 2022; Hasanbeig et al.,, 2020). Therefore, for
an adversary, capturing how much the cumulative reward is reduced may be too
generic for attacks targeting specific safety requirements. For instance, an
RL taxi agent may be optimized to transport passengers to their destinations.
With the already existing adversarial attacks, the attacker can prevent the
agent from transporting the passenger. However, the attacker cannot create
controlled adversarial attacks that may increase the probability that the
passenger never gets picked up or that the passenger gets picked up but never
arrives at its destination. More generally, current adversarial attacks are
not able to control temporal logic properties.
This paper aims to combine adversarial RL with rigorous model checking (Baier
and Katoen,, 2008), which allows the adversary to create so-called _property
impact attacks (PIAs)_ that can influence specific RL policy properties. These
PIAs are not limited by properties that can be expressed by rewards (Littman
et al.,, 2017; Hahn et al.,, 2019; Hasanbeig et al.,, 2020; Vamplew et al.,,
2022), but support a broader range of properties that can be expressed by
_probabilistic computation tree logic_ (PCTL; Hansson and Jonsson,, 1994). Our
experiments show that for PCTL properties, it is possible to create targeted
adversarial attacks that influence them specifically. Furthermore, the
combination of model checking and adversarial RL allows us to verify via
_permissive policies_ (Dräger et al.,, 2015) how vulnerable trained policies
are against PIAs. Our _main contributions are:_
* •
a metric to measure the impact of adversarial attacks on a broad range of RL
policy properties,
* •
a property impact attack (PIA) to target specific properties of a trained RL
policy, and
* •
a method that checks the robustness of RL policies against adversarial
attacks.
The empirical analysis shows that the method to attack RL policies can
effectively modify PCTL properties. Furthermore, the results support the
theoretical claim that it is possible to model check the robustness of RL
policies against property impact attacks.
The paper is structured in the following way. First, we summarize the related
work and position our paper in it. Second, we explain the fundamentals of our
technique. Then, we present the adversarial attack setting, define our
property impact attack, and show a way to model check policy robustness
against such adversarial attacks. After that, we evaluate our methods in
multiple environments.
## 2 RELATED WORK
We now summarize the related work and position our paper in between
adversarial RL and model checking.
There exist a variety of adversarial attack methods to attack RL policies with
the goal of dropping their total expected reward (Chan et al.,, 2020; Lin et
al., 2017b, ; Ilahi et al.,, 2022; Lin et al., 2017a, ; Clark et al.,, 2018;
Yu and Sun,, 2022). The first proposed adversarial attack on deep RL policies
(Huang et al., 2017a, ) uses a modified version of the _fast gradient sign
method (FGSM)_ , developed by Goodfellow et al., (2015), to force the RL
policy to make malicious decisions (for more details, see Section 3.2).
However, none of the previous work lets the attacker target temporal logic
properties of RL policies. Chan et al., (2020) create more effective attacks
that modify only one feature of the agent's observation (if the smallest
sliding window is used): their approach empirically measures the impact of
each feature on the reward and then modifies the feature with the highest
impact. We build upon this idea and measure the impact of changing each
feature on a given temporal logic property instead of the reward.
There exists a large body of work that combines RL with model checking (Wang et
al.,, 2020; Hasanbeig et al.,, 2020; Hahn et al.,, 2019; Hasanbeig et al.,,
2019; Fulton and Platzer,, 2019; Sadigh et al.,, 2014; Bouton et al.,, 2019;
Chatterjee et al.,, 2017) but no work that uses model checking to create
adversarial attacks for RL policies (Chen et al.,, 2019; Ilahi et al.,, 2022;
Moos et al.,, 2022). Most work about the formal robustness checking of deep
learning models focuses on supervised learning (Katz et al.,, 2017, 2019; Gehr
et al.,, 2018; Huang et al., 2017b, ; Ruan et al.,, 2018). In the RL setting,
Zhang et al., (2020) introduce a formal approach to check the robustness
against adversarial attacks with respect to the reward. They formulate the
perturbation on state observations as a modified _Markov decision process
(MDP)_. Furthermore, they can obtain certain robustness certificates under
attack. For environments like Pong, they can guarantee actions do not change
for all frames during policy execution, thus guaranteeing the cumulative
rewards under attack. We, on the other hand, focus on the robustness of
temporal PCTL properties against attacks.
## 3 BACKGROUND
In this section, we introduce the necessary foundations. First, we summarize
the modeling and analysis of probabilistic systems. Second, we introduce a
method to attack deep RL policies and a method that increases the robustness
of trained RL policies.
### 3.1 Probabilistic Systems
A probability distribution over a set $X$ is a function
$\mu:X\rightarrow[0,1]$ with $\sum_{x\in X}\mu(x)=1$. The set of all
distributions over $X$ is denoted by $Distr(X)$.
###### Definition 3.1 (Markov Decision Process).
A Markov decision process (MDP) is a tuple $M=(S,s_{0},Act,T,rew)$ where $S$
is a finite, nonempty set of states, $s_{0}\in S$ is an initial state, $Act$
is a finite set of actions, $T\colon S\times Act\rightarrow Distr(S)$ is a
probability transition function. We employ a factored state representation
$S\subseteq\mathbb{Z}^{n}$, where each state $s\in\mathbb{Z}^{n}$ is an
$n$-dimensional vector of features $(f_{1},f_{2},...,f_{n})$ such that
$f_{i}\in\mathbb{Z}$ for $1\leq i\leq n$. We define $rew\colon S\times
Act\rightarrow\mathbb{R}$ as a reward function.
The available actions in $s\in S$ are $Act(s)=\\{a\in Act\mid
T(s,a)\neq\bot\\}$. An MDP with only one action per state ($\forall s\in
S\colon|Act(s)|=1$) is a discrete-time Markov chain (DTMC). Note that features
do not necessarily have to have the same domain size. We define $\mathcal{F}$
as the set of all features $f_{i}$ in state $s\in S$.
A path of an MDP $M$ is an (in)finite sequence
$\tau=s_{0}\xrightarrow[\text{}]{\text{$a_{0},r_{0}$}}s_{1}\xrightarrow[\text{}]{\text{$a_{1},r_{1}$}}...$,
where $s_{i}\in S$, $a_{i}\in Act(s_{i})$,
$r_{i}\vcentcolon=rew(s_{i},a_{i})$, and $T(s_{i},a_{i})(s_{i+1})\neq 0$. A
state $s^{\prime}$ is reachable from state $s$ if there exists a path $\tau$
from state $s$ to state $s^{\prime}$. We say a state $s$ is reachable if $s$
is reachable from $s_{0}$.
###### Definition 3.2 (Policy).
A memoryless deterministic policy for an MDP $M{=}(S,s_{0},Act,T,rew)$ is a
function $\pi\colon S\rightarrow Act$ that maps a state $s\in S$ to an action
$a\in Act(s)$.
Applying a policy $\pi$ to an MDP $M$ yields an _induced DTMC_ , denoted as
$D$, where all non-determinism is resolved. This way, we say a state $s$ is
reachable by a policy $\pi$ if $s$ is reachable in the DTMC induced by $\pi$.
$\Lambda$ is the set of all possible memoryless policies.
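A minimal sketch of this induction step, with the MDP's transition function encoded as nested dictionaries (an illustrative encoding, not the paper's tooling): applying a memoryless deterministic policy keeps exactly one distribution per state, yielding the induced DTMC.

```python
# T[state][action] is a distribution over successor states.
T = {
    "s0": {"a": {"s1": 0.7, "s2": 0.3}, "b": {"s2": 1.0}},
    "s1": {"a": {"s1": 1.0}},
    "s2": {"a": {"s2": 1.0}},
}

def induce_dtmc(T, policy):
    """Resolve all nondeterminism: keep only the chosen action's distribution."""
    return {s: T[s][policy(s)] for s in T}

pi = lambda s: "a"               # a memoryless deterministic policy
print(induce_dtmc(T, pi)["s0"])  # {'s1': 0.7, 's2': 0.3}
```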
To analyze the properties of an induced DTMC, it is necessary to specify the
properties via a specification language like probabilistic computation tree
logic PCTL (Hansson and Jonsson,, 1994).
###### Definition 3.3 (PCTL Syntax).
Let $AP$ be a set of atomic propositions. The following grammar defines a
state formula: $\Phi\coloneqq\text{ true }\mid\text{ a }\mid\Phi_{1}\land\Phi_{2}\mid\lnot\Phi\mid P_{\bowtie p}(\phi)\mid P^{max}_{\bowtie p}(\phi)\mid P^{min}_{\bowtie p}(\phi)$ where
$a\in AP,\bowtie\in\\{<,>,\leq,\geq\\}$, $p\in[0,1]$ is a threshold, and
$\phi$ is a path formula which is formed according to the following grammar
$\phi\coloneqq X\Phi\text{ }|\text{ }\phi_{1}\text{ }U\text{ }\phi_{2}\text{
}|\text{ }\phi_{1}\text{ }F_{\theta t}\text{ }\phi_{2}\text{ }|G\text{ }\Phi$
with $\theta=\\{<,\leq\\}$.
PCTL formulae are interpreted over the states of an induced DTMC. In a slight
abuse of notation, we use PCTL state formulas to denote probability values.
That is, we sometimes write $P(\phi)$, omitting the threshold
$p$. For instance, $P(F_{\leq 100}\text{ }collision)$ denotes the reachability
probability of eventually running into a collision within the first $100$ time
steps.
There is a variety of model checking algorithms for verifying PCTL properties
(Courcoubetis and Yannakakis,, 1988, 1995), and PRISM and Storm offer
efficient and mature tool support (Kwiatkowska et al.,, 2011; Hensel et al.,,
2022). COOL-MC (Gross et al.,, 2022) allows model checking of a trained RL
policy against a PCTL property and MDP. The tool builds the induced DTMC on
the fly via an _incremental building process_ (Cassez et al.,, 2005; David et
al.,, 2015).
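Storm is scriptable from Python via its stormpy bindings; the following minimal sketch (standard stormpy usage, not the COOL-MC incremental pipeline) checks a step-bounded reachability query against a PRISM model. The file name and the label are placeholders.

```python
import stormpy

program = stormpy.parse_prism_program("taxi.prism")  # placeholder model file
properties = stormpy.parse_properties('P=? [ F<=100 "collision" ]', program)
model = stormpy.build_model(program, properties)
result = stormpy.model_checking(model, properties[0])
print(result.at(model.initial_states[0]))  # probability in the initial state
```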
### 3.2 Adversarial Attacks on Deep RL Policies
The standard learning goal for RL is to find a policy $\pi$ in an MDP such that
$\pi$ maximizes the expected accumulated discounted rewards, that is,
$\mathbb{E}[\sum^{L}_{t=0}\gamma^{t}R_{t}]$, where $\gamma$ with
$0\leq\gamma\leq 1$ is the discount factor, $R_{t}$ is the reward at time $t$,
and $L$ is the total number of steps. Deep RL uses neural networks to train
policies. A neural network is a function parameterized by weights $\theta$. In
deep RL, the policy $\pi$ is encoded using a neural network which can be
trained by minimizing a sequence of loss functions $J(\theta,s,a)$ (Mnih et
al.,, 2013).
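The objective above is straightforward to compute for a single episode; a small sketch of the discounted return $\sum^{L}_{t=0}\gamma^{t}R_{t}$:

```python
def discounted_return(rewards, gamma=0.99):
    """Sum of gamma^t * R_t over one episode of L steps."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

print(discounted_return([1.0, 0.0, 1.0], gamma=0.9))  # 1.0 + 0.81 = 1.81
```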
An _adversary_ is a malicious actor that seeks to harm or undermine the
performance of an RL system. For instance, an adversary may try to decrease
the expected discounted reward by attacking the RL policy via adversarial
attacks.
###### Definition 3.4 (Adversarial Attack).
An _adversarial attack_ $\delta\colon S\rightarrow S$ maps a state $s$ to an
_adversarial state_ $s_{adv}$ (see Figure 1). A successful adversarial attack
at a given state $s$ leads to a _misjudgment_ of the RL policy
($\pi(s)~{}\neq~{}\pi(\delta(s))$) and an attack is $\epsilon$-bounded if
$\lVert\delta(s)-s\rVert_{\infty}\leq\epsilon$ with $l_{\infty}$-norm defined
as
$\lVert\delta(s)-s\rVert_{\infty}=max_{\delta_{i}\in\delta}|\delta_{i}-s_{i}|$.
Recall that states are $n$-dimensional vectors of features from
$\mathbb{Z}^{n}$. Executing a policy $\pi$ on an MDP $M$ and attacking the
policy $\pi$ at each reachable state $s$ by $\delta$ yields an adversarial-
induced DTMC $D_{adv}$. There exist a variety of adversarial attack methods to
create adversarial attacks $\delta$ (Ilahi et al.,, 2022; Gleave et al.,,
2020; Lee et al.,, 2020, 2021; Rakhsha et al.,, 2020; Carlini and Wagner,,
2017).
Figure 1: RL (a) vs. adversarial RL (b). (a) RL policy interaction with the
environment: the policy observes state $s$, selects action $\pi(s)$, and
receives reward $rew$. (b) An adversary manipulates with $\delta$ the
observations of the RL policy $\pi$ and its interaction with the environment:
the policy acts on $s_{adv}=\delta(s)$ instead of $s$.
Our work builds upon the FGSM attack and the work of Chan et al., (2020).
Given the weights $\theta$ of the neural network policy $\pi$ and a loss
$J(\theta,s,a)$ with state $s$ and $a\coloneqq\pi(s)$, the FGSM, denoted as
$\delta_{\text{FGSM}}\colon S\rightarrow S$, adds noise whose direction is the
same as the gradient of the loss $J(\theta,s,a)$ w.r.t the state $s$ to the
state $s$ (Huang et al., 2017a, ) and the noise is scaled by
$\epsilon\in\mathbb{Z}$ (see Equation 1). Note that we are dealing with
integer $\epsilon$-values because our states are comprised of integer
features. We specify the $\bigtriangledown$-operator as a vector differential
operator. Depending on the gradient, we either add or subtract $\epsilon$.
$\delta_{\text{FGSM}}(s)=s+\epsilon\cdot
sign(\bigtriangledown_{s}J(\theta,s,a))$ (1)
A FGSM for feature $f_{i}$, denoted as $\delta_{\text{FGSM}}^{(f_{i})}(s)$,
modifies only the feature $f_{i}$ in state $s$.
$\delta_{\text{FGSM}}^{(f_{i})}(s)=s+\epsilon\cdot
sign(\bigtriangledown_{s_{f_{i}}}J(\theta,s,a))$ (2)
We denote the set of all possible $\epsilon$-bounded attacks at state $s$ via
feature $f_{i}$, including $\delta^{(f_{i})}(s)=s$ for no attack, as
$\Delta^{(f_{i})}_{\epsilon}(s)$.
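A minimal PyTorch sketch of the per-feature attack in Equation 2, assuming the policy is a network mapping a state vector to action logits and using cross-entropy against the policy's own decision as the loss $J(\theta,s,a)$; the toy network and the rounding to integers are illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_feature(policy_net, s, feature_idx, epsilon=1):
    """Perturb only feature f_i of state s along the loss gradient's sign
    (Equation 2); the result is epsilon-bounded in the l-infinity norm."""
    s = s.clone().detach().float().requires_grad_(True)
    logits = policy_net(s)
    a = logits.argmax()                                  # a := pi(s)
    loss = F.cross_entropy(logits.unsqueeze(0), a.unsqueeze(0))
    loss.backward()
    s_adv = s.detach().clone()
    s_adv[feature_idx] += epsilon * s.grad[feature_idx].sign()
    return s_adv.round()                                 # integer feature domain

net = torch.nn.Linear(4, 3)                              # toy policy network
s = torch.tensor([1, 0, 2, 5])
s_adv = fgsm_feature(net, s, feature_idx=2)
assert (s_adv - s.float()).abs().max() <= 1              # epsilon-bounded
```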
Chan et al., (2020) first generate for all features a static reward impact
(SRI) map by attacking each feature (in the case of the smallest sliding
window) with the FGSM attack to measure its impact (the drop of the expected
reward) offline. A feature $f_{i}$ with a more significant impact indicates
that changing this feature $f_{i}$ via $\delta_{\text{FGSM}}^{(f_{i})}$ will
influence the expected discounted reward more than via another feature $f_{k}$
with a less significant impact. For each feature $f_{i}$, this is done $N$
times, where each iteration executes the RL policy on the environment and
attacks the feature $f_{i}$ at every state via the FGSM attack
$\delta_{\text{FGSM}}^{(f_{i})}$. After calculating the SRI, they use the SRI
values of all features $f_{i}$ to select the most vulnerable feature to attack
the deployed RL policy.
_Adversarial training_ retrains the already trained RL policy by using
adversarial attacks during training (see Figure 1(b)) to increase the RL
policy robustness (Pinto et al.,, 2017; Liu et al.,, 2022; Korkmaz, 2021b, ).
## 4 METHODOLOGY
We introduce the general adversarial setting, the property impact (PI), the
property impact attack (PIA), and bounded robustness.
### 4.1 Attack Setting
We first describe our method’s adversarial attack setting (adversary’s goals,
knowledge, and capabilities).
Goal. The adversary aims to modify the property value of the target RL policy
$\pi$ in its environment (modeled as an MDP). For instance, the adversary may
try to increase the probability that the agent collides with another object
(i.e. $\max_{\delta}P(F\text{ }\mathit{collision})$ in the adversarial-induced
DTMC).
Knowledge. The adversary knows the weights $\theta$ of the trained policy
(for the FGSM attack) and the MDP of the environment. Note that the FGSM
attack can be replaced with any other attack; therefore, knowing the weights
of the trained policy is not a strict constraint.
Capabilities. The adversary can attack the trained policy $\pi$ at every
visited state $s$, both during the incremental building process for the model
checking of the adversarial-induced DTMC and after the RL policy is deployed.
### 4.2 Property Impact Attack (PIA)
Combining adversarial RL with model checking allows us to craft adversarial
property impact attacks (PIAs) that target temporal logic properties. Our work
builds upon the research of Chan et al., (2020). Instead of calculating SRIs
(see Section 3.2), we calculate property impacts (PIs). The PI values are used
to select the feature $f_{i}$ with the most significant $PI$-value to attack
the deployed RL policy in its environment
($f_{i}=\operatorname*{argmax}_{f_{i}\in\mathcal{F}}PI(\pi,P(\phi),f_{i},\epsilon)$).
###### Definition 4.1 (Property Impact).
The _property impact_
$PI\colon\Lambda\times\Theta\times\mathcal{F}\times\mathbb{Q}\rightarrow\mathbb{Q}$
quantifies the impact of an adversarial attack
$\delta_{\text{FGSM}}^{(f_{i})}\in\Delta^{(f_{i})}_{\epsilon}(s)$ via a
feature $f_{i}\in\mathcal{F}$ on a given RL policy property $P(\phi)\in\Theta$
with $\Theta$ as the set of all possible PCTL properties for the MDP $M$.
A feature $f_{i}$ with a more significant PI-value indicates that changing
this feature $f_{i}$ via $\delta_{\text{FGSM}}^{(f_{i})}$ will influence the
property (expressed by the property query $P(\phi)$) more than via another
feature $f_{k}$ with a less significant PI-value.
Algorithm 1 Calculate the property impact (PI) for a given MDP $M$, policy
$\pi$, property query $P(\phi)$, feature $f_{i}$, and FGSM attack strength
$\epsilon$.
1:procedure PI($\pi,P(\phi),f_{i},\epsilon$)
2: $r\leftarrow property\\_result(M,\pi,P(\phi))$
3: $r_{adv}\leftarrow adv\\_property\\_result(M,\pi,P(\phi),f_{i},\epsilon)$
4: return $|r-r_{adv}|$
5:end procedure
In Algorithm 1, we explain how to calculate the PI-value for a given MDP $M$,
policy $\pi$, PCTL property query $P(\phi)$, feature $f_{i}$, and FGSM attack
$\delta_{\text{FGSM}}^{(f_{i})}$. First, we incrementally build the induced
DTMC of the policy $\pi$ and the MDP $M$ to check the property value $r$ of
the policy $\pi$ via the function _property_result_. The function
_property_result_ uses COOL-MC and inputs the MDP $M$, policy $\pi$, and PCTL
property query $P(\phi)$ into it to calculate the probability $r$. Second, we
incrementally build the adversarial-induced DTMC $D_{adv}$ of the policy $\pi$
and the MDP $M$ with the $\epsilon$-bounded FGSM attack
$\delta_{\text{FGSM}}^{(f_{i})}$ to check its probability $r_{adv}$ via the
function $adv\\_property\\_result$. To support the building and model checking
of adversarial-induced DTMCs via $adv\\_property\\_result$, we extend the
incremental building process of COOL-MC in the following way. For every
reachable state $s$ by the policy $\pi$, the policy $\pi$ is queried for an
action $a=\pi(s)$. In the underlying MDP, only states $s$ that may be reached
via that action $a$ are expanded. The resulting model is fully probabilistic,
as no action choices are left open. It is, in fact, the Markov chain induced
by the original MDP $M$ and the policy $\pi$. An adversary can now inject
adversarial attacks $\delta(s)$ at every state $s$ that gets passed to the
policy $\pi$ during the incremental building process (Zhang et al.,, 2020).
This may lead to the effect that the policy $\pi$ makes a misjudgment
($\pi(s)\neq\pi(\delta(s))$) and results in an adversarial-induced DTMC
$D_{adv}$. This allows us to model check the adversarial-induced DTMCs
$D_{adv}$ to gain the adversarial probability $r_{adv}$. Finally, we measure
the property impact value by measuring the absolute difference between $r$ and
$r_{adv}$.
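Algorithm 1 translates almost line-for-line into code. A sketch where the two model-checking calls are passed in as callables (stubbed below with toy values; in the paper they wrap the COOL-MC runs described above):

```python
def property_impact(M, pi, phi, feature, epsilon,
                    property_result, adv_property_result):
    """PI (Algorithm 1): absolute difference between the property value of
    the induced DTMC and that of the adversarial-induced DTMC."""
    r = property_result(M, pi, phi)
    r_adv = adv_property_result(M, pi, phi, feature, epsilon)
    return abs(r - r_adv)

# Toy usage with stubbed model checkers (values are illustrative):
pi_value = property_impact(
    None, None, "P(F deadlock1)", "done", 1,
    property_result=lambda M, p, phi: 0.0,
    adv_property_result=lambda M, p, phi, f, e: 0.44,
)
print(pi_value)  # 0.44
```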
(Figure 2 diagram: states $x=1,2,3$, each annotated with the actions
$\pi(x-1)$, $\pi(x)$, $\pi(x+1)$ returned for the perturbed inputs.)
Figure 2: ($\epsilon,\alpha$)-robustness checking for an MDP with the state
set $S=\\{x\in[0,4]\\}$, action set $Act=\\{a,b,c\\}$, trained policy $\pi$,
adversarial attack strength $\epsilon=1$
($\Delta_{\epsilon=1}(s)=\\{s+1,s-1,s\\}$), and property $P(F\text{ }x=2)=0$
(for the original blue policy $\pi$). The blue induced DTMC is the original
one, and the MDP (blue and red) is the adversarial MDP representing the
permissive policy $\Omega$. For the permissive policy $\Omega$, we get
$P^{max}(F\text{ }x=2)=1$, which results in $P^{max}(F\text{
}x=2)-P(F\text{ }x=2)=1$ and indicates that $\pi$ is not robust for, e.g.,
$\alpha=0$ and $\epsilon=1$. We can extract the optimal attack set
$\\{\delta(s)=s+1\text{ at state }x=1\\}$.
### 4.3 RL Policy Robustness
A trained RL policy $\pi$ can be robust against an $\epsilon$-bounded PIA that
attacks a temporal logic property $P(\phi)$ via feature $f_{i}$
($PI(\pi,P(\phi),f_{i},\epsilon)=0$). However, this is a weak statement about
robustness since there still exist multiple adversarial attacks
$\delta^{(f_{i})}(s)$ with
$\lVert\delta^{(f_{i})}(s)-s\rVert_{\infty}\leq\epsilon$ generated by other
attacks, such as the method from Carlini and Wagner, (2017).
Given a fixed policy $\pi$ and a set of attacks
$\Delta^{(f_{i})}_{\epsilon}(s)$, we generate a _permissive policy_ $\Omega$.
Applying this permissive policy in the original MDP $M$ generates a new MDP
$M^{\prime}$ that describes all potential behavior of the agent under the
attack.
###### Definition 4.2 (Behavior under attack).
A permissive policy $\Omega\colon S\rightarrow 2^{\textsf{Act}}$ selects, at
every state $s$, all actions that can be queried via
$\Delta^{(f_{i})}_{\epsilon}(s)$. We consider
$\Omega(s)=\bigcup_{\delta^{(f_{i})}\in\Delta^{(f_{i})}_{\epsilon}(s)}\pi(\delta^{(f_{i})}(s))$
with $\pi(\delta^{(f_{i})}(s))\in Act(s)$.
Applying a permissive policy to an MDP does not necessarily resolve all
nondeterminism, since more than one action may be selected in some state(s).
The induced model is then (again) an MDP. We are able to apply model checking,
which typically results in best- and worst-case probability bounds
$P^{max}(\phi)$ and $P^{min}(\phi)$ for a given property query $P(\phi)$.
We use the induced MDP to model check the _robustness_ (see Definition 4.3)
against every possible $\epsilon$-bounded attack $\delta^{(f_{i})}(s)$ for a
trained RL policy $\pi$ in its environment and bound the robustness to an
$\alpha$-threshold (property impacts below a given threshold $\alpha$ may be
acceptable).
###### Definition 4.3 (Bounded robustness).
A policy $\pi$ is called robustly bounded by $\epsilon$ and $\alpha$
($\epsilon,\alpha$-robust) for property query $\phi$ if it holds that
$|P^{*}(\phi)-P(\phi)|\leq\alpha$ (3)
for all possible $\epsilon$-bounded adversarial attacks
$\Delta^{(f_{i})}_{\epsilon}(s)$ at every reachable state $s$ by the
permissive policy $\Omega$. We define $\alpha\in\mathbb{Q}$ as a threshold (in
this paper, we focus on probabilities and therefore $\alpha\in[0,1]$).
$|P^{*}(\phi)-P(\phi)|$ stands for the largest impact of a possible attack. We
denote $P^{*}$ as $P^{max}$ or $P^{min}$ depending if the attack should
increase ($P^{max}$) or decrease ($P^{min}$) the probability.
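A minimal sketch of Definitions 4.2 and 4.3, assuming integer feature vectors and a single attacked feature: $\Omega(s)$ collects every action the policy can be steered to by an $\epsilon$-bounded perturbation, and bounded robustness compares the worst-case probability with the unattacked one.

```python
def permissive_actions(pi, s, feature_idx, epsilon):
    """Omega(s): all actions pi can be tricked into by perturbing feature
    f_i of s by at most epsilon (including delta(s) = s, i.e. no attack)."""
    actions = set()
    for delta in range(-epsilon, epsilon + 1):
        s_adv = list(s)
        s_adv[feature_idx] += delta
        actions.add(pi(tuple(s_adv)))
    return actions

def is_robust(p_star, p, alpha):
    """Bounded robustness (Definition 4.3): |P*(phi) - P(phi)| <= alpha."""
    return abs(p_star - p) <= alpha

pi = lambda s: "b" if s[0] == 2 else "a"        # toy policy over feature x
print(permissive_actions(pi, (1,), 0, 1))       # {'a', 'b'}, cf. Figure 2
print(is_robust(p_star=1.0, p=0.0, alpha=0.0))  # False: not robust for eps=1, alpha=0
```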
By model checking the robustness of the trained RL policies (see Figure 2), it
is possible to extract for each state $s$ the adversarial attack
$\delta^{(f_{i})}$ that is part of the most impactful attack and use the
corresponding attack as soon as the state gets observed by the adversary. This
is possible because the underlying model of the induced MDP allows the
extraction of the state and action pairs ($s,a_{adv}$) that lead to the wanted
property value modification ($a_{adv}\coloneqq\pi(\delta^{(f_{i})}(s))$).
## 5 EXPERIMENTS
We now evaluate our PI method, property impact attack (PIA), and robustness
checker method in multiple environments. The experiments are performed by
initially training the RL policies using the deep Q-learning algorithm (Mnih
et al.,, 2013), then using the trained policies to answer our research
questions. First, we compare our PI-method with the SRI-method (Chan et al.,,
2020). Second, we use PIAs to attack policy properties. Third, we discuss the
limitations of PIAs. Last but not least, we show that it is possible to make
trained policies more robust against PIAs by using adversarial training.
### 5.1 Setup
We now explain the setup of our experiments.
Environments. We used our proposed methods in a variety of environments (see
Figure 3, Figure 5, and Table 2). We use the _Freeway_ environment (for a fair
comparison between the SRI and PI method) and the _Taxi_ environment. Additionally, we use the
environments _Collision Avoidance_ , _Stock Market_ , and _Smart Grid_ (see
Appendix A for more details).
_Freeway_ is an action video game for the Atari 2600. A player controls a
chicken (up, down, no operation) who must run across a highway filled with
traffic to get to the other side. Every time the chicken gets across the
highway, it earns a reward of one. An episode ends if the chicken gets hit by
a car or reaches the other side. Each state is an image of the game’s state.
Note that we use an abstraction of the original game (see Figure 3), which
sets the chicken into the middle column of the screen and contains fewer
pixels than the original game, but uses the same reward function and actions.
The _taxi_ agent has to pick up passengers and transport them to their
destination without running out of fuel. The environment terminates as soon as
the taxi agent does the predefined number of jobs or runs out of fuel. After
the job is done, a new guest spawns randomly at one of the predefined
locations. We define the maximal taxi fuel level as ten and the maximal number
of jobs as two. To refuel the taxi, the agent needs to drive to the gas
station cell ($x=1,y=2$). The problem is formalized as follows:
$S=\\{(x,y,Xloc,Yloc,Xdest,Ydest,fuel,done,pass,jobs,done),\ldots\\}$
$Act=\\{north,east,south,west,pick\\_up,drop\\}$
$rew=\begin{cases}0\text{, if passenger successfully dropped.}\\\ 21\text{, if passenger got picked up.}\\\ 21+|x-Xdest|+|y-Ydest|\text{, if passenger on board.}\\\ 21+|x-Xloc|+|y-Yloc|\text{, otherwise.}\end{cases}$
Figure 3: A comparison between the Atari 2600 Freeway game (top) and our
abstracted version (bottom).
Properties. Table 1 presents the property queries together with the values the
trained RL policy achieves for these properties without an attack (column
$=$). For example, $pass\\_empty$ describes the probability of the taxi agent
running out of fuel while having a passenger on board, which is $0$ for the
policy without an attack.
Trained RL policies. We trained deep Q-learning policies for the environments.
See Appendix B for a more detailed description of the RL training.
Env. | Label | PCTL Property Query ($P(\phi)$) | $=$
---|---|---|---
Fr | crossed | $P(F\text{ }crossed)$ | $1.0$
Taxi | deadlock1 | $P(fuel\geq 4\text{ }U\text{ }(G(jobs=1\land\lnot empty\text{ }\land pass)))$ | $0.0$
| deadlock2 | $P(fuel\geq 4\text{ }U\text{ }(G(jobs=1\land\lnot empty\land\lnot pass)))$ | $0.0$
| station_empty | $P((((jobs{=}0$ $U$ $x{=}1\land y{=}2)$ $U$ $(jobs{=}0\land\lnot(x{=}1\land y{=}2)))$ $U$ $empty\land jobs{=}0))$ | $0.0$
| $\overline{\textrm{station}}$_empty | $P(F\text{ }(empty\land jobs=0)\land G\lnot(x\neq 1\land y\neq 2))$ | $0.0$
| pass_empty | $P(F\text{ }(empty\land pass))$ | $0.0$
| $\overline{\textrm{pass}}$_empty | $P(F\text{ }(empty\land\lnot pass))$ | $0.0$
Coll. | collision | $P(F_{\leq 100}\text{ }collision)$ | $0.1$
SG | blackout | $P(F_{\leq 100}\text{ }blackout)$ | $0.2$
SM | bankruptcy | $P(F\text{ }bankruptcy)$ | $0.0$
Table 1: PCTL property queries, with their labels and the original result of
the property query without an attack ($=$). $Fr$ stands for _Freeway_ ,
$Coll.$ stands for _Collision Avoidance_ , $SG$ for _Smart Grid_ , and $SM$
for _Stock Market_.
Technical setup. All experiments were executed on an NVIDIA GeForce GTX 1060
Mobile GPU, 16 GB RAM, and an Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz x 12.
For model checking, we use Storm 1.7.1 (dev).
### 5.2 Analysis
We now answer our research questions.
_Does the PI method have the same behavior as the related SRI method?_ We
showcase that our PI approach yields similar results to the empirical SRI
approach (Chan et al.,, 2020) in the Freeway environment. We chose the Freeway
environment since this environment was also used by Chan et al., (2020).
To compare both approaches, we can use the reward function (a reward of one if
the player crosses the street) to express the expected reachability
probability of crossing the street (see label _crossed_ in Table 1). We then
sampled for each feature the SRI value multiple times ($N=300$) to generate
the SRI map (see Figure 4). The PI-values are calculated by Algorithm 1 with
the property query _crossed_ and are used to generate the PI map (see Figure
4). In both cases, we used an $\epsilon=1$.
Figure 4 shows both approaches’ feature impact maps (for each game pixel, a
value). In both maps, the most impactful features (pixels) concerning the
total expected reward lie on the chicken’s path, which is also the result of
Chan et al., (2020).
Figure 4: Feature impacts (normalized between $0$ and $1$) of the PI and SRI
(sample size $N=300$) approaches with respect to the total expected reward,
with $\epsilon=1$, for the policy $\pi_{A}$ in the Freeway environment (street
pixels without the sidewalks).
Figure 5: Taxi environment. This diagram plots different advanced property
impacts of different PIAs. The original property values (without an attack)
are all zero.
_Can the PI method generate different property impacts for different advanced
property queries?_ We now show that PI is suited to measure the property
impact for properties that can not be expressed by rewards which we call here
_advanced property queries_ (see Figure 5).
To make the interpretation of advanced properties more straightforward, we
focus on the Taxi environment instead of the Freeway environment and use the
advanced property queries _deadlock1_ and _station_empty_. Advanced property
queries contain, for example, the U-operator (Definition 3.3), which allows
the adversary to make sure that certain events happen before other events.
Figure 5 shows the property impact of each attack on the policy for different
$\epsilon$-bounded attacks. By attacking the _done_ feature via a PIA (with
$\epsilon=1$), it is possible to drive the taxi around without running out of
fuel and not finishing jobs while having a passenger on board (deadlock1).
Figure 5 also shows that it is possible to let the taxi drive first to the gas
station and let it run out of fuel afterwards (station_empty). We observe that
for different $\epsilon$-bounds, PIAs have different impacts via features on
the temporal logic properties (see station_empty in Figure 5).
Setup | | Robustness Checker | | PIA | | Baseline (FGSM)
---|---|---|---|---|---|---
Env. | Features | $\epsilon$ | Property Query | | $P^{max}$ | $P$ | Impact* | Time | | $Impact$ | Time | | $Impact$ | Time
Taxi | done | 1 | deadlock1 | | 0.44 | 0.0 | 0.44 | 9 | | 0.19 | 20 | | 0.00 | 6
| done | 1 | deadlock2 | | 0.00 | 0.0 | 0.00 | 9 | | 0.00 | 20 | | 0.00 | 6
| fuel | 2 | pass_empty | | 1.00 | 0.0 | 1.00 | 25 | | 0.25 | 20 | | 0.00 | 6
| y | 2 | $\overline{\textrm{pass}}$_empty | | 1.00 | 0.0 | 1.00 | 27 | | 1.00 | 20 | | 1.00 | 6
| x | 1 | station_empty | | 1.00 | 0.0 | 1.00 | 24 | | 1.00 | 6 | | 1.00 | 6
| x | 1 | $\overline{\textrm{station}}$_empty | | 1.00 | 0.0 | 1.00 | 30 | | 1.00 | 6 | | 1.00 | 6
C | obs1_x | 1 | collision | | 0.87 | 0.1 | 0.86 | 65 | | 0.46 | 213 | | 0.87 | 211
SG | non_renewable | 1 | blackout | | 0.97 | 0.2 | 0.95 | 2 | | 0.39 | 2 | | 0.98 | 2
SM | sell_price | 1 | bankruptcy | | 0.81 | 0.0 | 0.81 | 15 | | 0.08 | 20 | | 0.00 | 4
Table 2: Impact* stands for the optimal adversarial attack impact
($|P^{max}-P|$) via the feature specified in _Features_ , $P^{max}$ for the
maximal probability $P^{max}(\phi)$ with an attack, $P$ for the original
probability $P(\phi)$ (without an attack), Time in seconds, C for _Collision
Avoidance_ , SG for _Smart Grid_ , SM for _Stock Market_ , Baseline is a
standard FGSM attack on the whole observation. We observe that PIAs perform
similarly to FGSM attacks for advanced properties. Our robustness checker
shows that our generated PIAs are not necessarily optimal but that they can
still modify the targeted property. All our attacks are $\epsilon$-bounded for
a fair comparison.
_What are the limitations of PIAs?_ We now analyze the limitations of PIAs and
compare them with the FGSM attack (baseline) and the robustness checker. For
each experiment, we $\epsilon$-bounded all the generated attacks for a fair
comparison. We mainly focus on selected properties from the taxi environment
but also include other environments (see Table 2).
Table 2 shows that PIAs, in comparison to FGSM attacks, have similar impacts
on temporal logic properties (compare _impact_ columns of PIA and FGSM). For
temporal logic properties where some correct decision-making is still needed,
PIAs perform better than the FGSM attack (for instance, _pass_empty_).
However, PIAs do not necessarily create a maximal impact on the property
values like the robustness checker method (compare _PIA impact_ with
_Impact*_).
After observing the results of the three methods (PIA, FGSM, robustness
checker), we can summarize as follows. By verifying the robustness of the
trained RL policies, the adversary can already extract for each state the
optimal adversarial attack that is part of the most impactful attack. Since
PIAs build induced DTMCs while the robustness checker builds induced MDPs,
PIAs are suited for MDPs with more states and transitions before running out
of memory (see Gross et al.,, 2022, for more details about the limitations of
model checking RL policies).
_Does adversarial training make trained RL policies more robust against PIAs?_
Figure 5 shows that an adversarial attack (bounded by $\epsilon=1$) on feature
$done$ can bring the taxi agent into a deadlock and lets it drive around after
the first job is done ($deadlock1=0.19$). To protect the RL agent from this
attack, we trained the RL taxi policy over 5000 additional episodes via
adversarial training by using our method PIA on the done feature to make the
policy more robust against this deadlock attack. The adversarial training
improves the feature robustness for the done feature ($0$) but deteriorates
the robustness for the other features (all other feature PI-values: $1$). That
agrees with the observation that adversarially trained RL policies may be less
robust to other types of adversarial attacks (Zhang et al.,, 2020; Korkmaz,
2021a, ; Korkmaz,, 2022). To summarize, adversarial training can improve the
RL policy robustness against specific PIAs but may also deteriorate the
robustness against other PIAs.
## 6 CONCLUSION
We presented an analytical method to measure the adversarial attack impact on
RL policy properties. This knowledge can be used to craft fine-grained
property impact attacks (PIAs) to modify specific values of temporal RL policy
properties. Our model checking method allows us to verify if a trained policy
is robust against $\epsilon$-bounded PIAs. A learner can use adversarial
training in combination with the PIA to obtain more robust policies against
specific PIAs.
For future work, it would make sense to combine the current research with
countermeasures (Lin et al., 2017b, ; Xiang et al.,, 2018; Havens et al.,,
2018). Furthermore, it would be interesting to analyze adversarial multi-agent
reinforcement learning (Figura et al.,, 2021; Zeng et al.,, 2022) in
combination with model checking. Interpretable Reinforcement Learning (Davoodi
and Komeili,, 2021) can further use the impact results to interpret trained RL
policies.
## REFERENCES
* Amodei et al., (2016) Amodei, D., Olah, C., Steinhardt, J., Christiano, P. F., Schulman, J., and Mané, D. (2016). Concrete problems in AI safety. CoRR, abs/1606.06565.
* Baier and Katoen, (2008) Baier, C. and Katoen, J. (2008). Principles of model checking. MIT Press.
* Boron and Darken, (2020) Boron, J. and Darken, C. (2020). Developing combat behavior through reinforcement learning in wargames and simulations. In 2020 IEEE Conference on Games (CoG), pages 728–731. IEEE.
* Bouton et al., (2019) Bouton, M., Karlsson, J., Nakhaei, A., Fujimura, K., Kochenderfer, M. J., and Tumova, J. (2019). Reinforcement learning with probabilistic guarantees for autonomous driving. CoRR, abs/1904.07189.
* Carlini and Wagner, (2017) Carlini, N. and Wagner, D. A. (2017). Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy, pages 39–57. IEEE Computer Society.
* Cassez et al., (2005) Cassez, F., David, A., Fleury, E., Larsen, K. G., and Lime, D. (2005). Efficient on-the-fly algorithms for the analysis of timed games. In CONCUR, volume 3653 of Lecture Notes in Computer Science, pages 66–80. Springer.
* Chan et al., (2020) Chan, P. P. K., Wang, Y., and Yeung, D. S. (2020). Adversarial attack against deep reinforcement learning with static reward impact map. In AsiaCCS, pages 334–343. ACM.
* Chatterjee et al., (2017) Chatterjee, K., Novotný, P., Pérez, G. A., Raskin, J., and Zikelic, D. (2017). Optimizing expectation with guarantees in pomdps. In AAAI, pages 3725–3732. AAAI Press.
* Chen et al., (2019) Chen, T., Liu, J., Xiang, Y., Niu, W., Tong, E., and Han, Z. (2019). Adversarial attack and defense in reinforcement learning-from AI security view. Cybersecur., 2(1):11.
* Clark et al., (2018) Clark, G. W., Doran, M. V., and Glisson, W. (2018). A malicious attack on the machine learning policy of a robotic system. In TrustCom/BigDataSE, pages 516–521. IEEE.
* Courcoubetis and Yannakakis, (1988) Courcoubetis, C. and Yannakakis, M. (1988). Verifying temporal properties of finite-state probabilistic programs. In FOCS, pages 338–345. IEEE Computer Society.
* Courcoubetis and Yannakakis, (1995) Courcoubetis, C. and Yannakakis, M. (1995). The complexity of probabilistic verification. J. ACM, 42(4):857–907.
* David et al., (2015) David, A., Jensen, P. G., Larsen, K. G., Mikucionis, M., and Taankvist, J. H. (2015). Uppaal stratego. In TACAS, volume 9035 of Lecture Notes in Computer Science, pages 206–211. Springer.
* Davoodi and Komeili, (2021) Davoodi, O. and Komeili, M. (2021). Feature-based interpretable reinforcement learning based on state-transition models. In SMC, pages 301–308. IEEE.
* Dräger et al., (2015) Dräger, K., Forejt, V., Kwiatkowska, M. Z., Parker, D., and Ujma, M. (2015). Permissive controller synthesis for probabilistic systems. Log. Methods Comput. Sci., 11(2).
* Farazi et al., (2021) Farazi, N. P., Zou, B., Ahamed, T., and Barua, L. (2021). Deep reinforcement learning in transportation research: A review. Transportation Research Interdisciplinary Perspectives, 11:100425.
* Figura et al., (2021) Figura, M., Kosaraju, K. C., and Gupta, V. (2021). Adversarial attacks in consensus-based multi-agent reinforcement learning. In ACC, pages 3050–3055. IEEE.
* Fulton and Platzer, (2019) Fulton, N. and Platzer, A. (2019). Verifiably safe off-model reinforcement learning. In TACAS (1), volume 11427 of Lecture Notes in Computer Science, pages 413–430. Springer.
* Gehr et al., (2018) Gehr, T., Mirman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S., and Vechev, M. T. (2018). AI2: safety and robustness certification of neural networks with abstract interpretation. In IEEE Symposium on Security and Privacy, pages 3–18. IEEE Computer Society.
* Gleave et al., (2020) Gleave, A., Dennis, M., Wild, C., Kant, N., Levine, S., and Russell, S. (2020). Adversarial policies: Attacking deep reinforcement learning. In ICLR. OpenReview.net.
* Goodfellow et al., (2015) Goodfellow, I. J., Shlens, J., and Szegedy, C. (2015). Explaining and harnessing adversarial examples. In ICLR.
* Gross et al., (2022) Gross, D., Jansen, N., Junges, S., and Pérez, G. A. (2022). Cool-mc: A comprehensive tool for reinforcement learning and model checking. In SETTA. Springer.
* Hahn et al., (2019) Hahn, E. M., Perez, M., Schewe, S., Somenzi, F., Trivedi, A., and Wojtczak, D. (2019). Omega-regular objectives in model-free reinforcement learning. In TACAS (1), volume 11427 of LNCS, pages 395–412. Springer.
* Hansson and Jonsson, (1994) Hansson, H. and Jonsson, B. (1994). A logic for reasoning about time and reliability. Formal Aspects Comput., 6(5):512–535.
* Hasanbeig et al., (2019) Hasanbeig, M., Kroening, D., and Abate, A. (2019). Towards verifiable and safe model-free reinforcement learning. In OVERLAY@AI*IA, volume 2509 of CEUR Workshop Proceedings, page 1. CEUR-WS.org.
* Hasanbeig et al., (2020) Hasanbeig, M., Kroening, D., and Abate, A. (2020). Deep reinforcement learning with temporal logics. In FORMATS, volume 12288 of LNCS, pages 1–22. Springer.
* Havens et al., (2018) Havens, A. J., Jiang, Z., and Sarkar, S. (2018). Online robust policy learning in the presence of unknown adversaries. In NeurIPS, pages 9938–9948.
* Hensel et al., (2022) Hensel, C., Junges, S., Katoen, J., Quatmann, T., and Volk, M. (2022). The probabilistic model checker storm. Int. J. Softw. Tools Technol. Transf., 24(4):589–610.
* (29) Huang, S. H., Papernot, N., Goodfellow, I. J., Duan, Y., and Abbeel, P. (2017a). Adversarial attacks on neural network policies. In ICLR. OpenReview.net.
* (30) Huang, X., Kwiatkowska, M., Wang, S., and Wu, M. (2017b). Safety verification of deep neural networks. In CAV (1), volume 10426 of Lecture Notes in Computer Science, pages 3–29. Springer.
* Ilahi et al., (2022) Ilahi, I., Usama, M., Qadir, J., Janjua, M. U., Al-Fuqaha, A. I., Hoang, D. T., and Niyato, D. (2022). Challenges and countermeasures for adversarial attacks on deep reinforcement learning. IEEE Trans. Artif. Intell., 3(2):90–109.
* Katz et al., (2017) Katz, G., Barrett, C. W., Dill, D. L., Julian, K., and Kochenderfer, M. J. (2017). Reluplex: An efficient SMT solver for verifying deep neural networks. In CAV (1), volume 10426 of Lecture Notes in Computer Science, pages 97–117. Springer.
* Katz et al., (2019) Katz, G., Huang, D. A., Ibeling, D., Julian, K., Lazarus, C., Lim, R., Shah, P., Thakoor, S., Wu, H., Zeljic, A., Dill, D. L., Kochenderfer, M. J., and Barrett, C. W. (2019). The marabou framework for verification and analysis of deep neural networks. In CAV (1), volume 11561 of Lecture Notes in Computer Science, pages 443–452. Springer.
* (34) Korkmaz, E. (2021a). Adversarial training blocks generalization in neural policies. In NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and Applications.
* (35) Korkmaz, E. (2021b). Investigating vulnerabilities of deep neural policies. In UAI, volume 161 of Proceedings of Machine Learning Research, pages 1661–1670. AUAI Press.
* Korkmaz, (2022) Korkmaz, E. (2022). Deep reinforcement learning policies learn shared adversarial features across mdps. In AAAI, pages 7229–7238. AAAI Press.
* Kwiatkowska et al., (2011) Kwiatkowska, M. Z., Norman, G., and Parker, D. (2011). PRISM 4.0: Verification of probabilistic real-time systems. In CAV, volume 6806 of Lecture Notes in Computer Science, pages 585–591. Springer.
* Lee et al., (2021) Lee, X. Y., Esfandiari, Y., Tan, K. L., and Sarkar, S. (2021). Query-based targeted action-space adversarial policies on deep reinforcement learning agents. In ICCPS, pages 87–97. ACM.
* Lee et al., (2020) Lee, X. Y., Ghadai, S., Tan, K. L., Hegde, C., and Sarkar, S. (2020). Spatiotemporally constrained action space attacks on deep reinforcement learning agents. In AAAI, pages 4577–4584. AAAI Press.
* Levine et al., (2016) Levine, S., Finn, C., Darrell, T., and Abbeel, P. (2016). End-to-end training of deep visuomotor policies. J. Mach. Learn. Res., 17:39:1–39:40.
* (41) Lin, Y., Hong, Z., Liao, Y., Shih, M., Liu, M., and Sun, M. (2017a). Tactics of adversarial attack on deep reinforcement learning agents. In ICLR (Workshop). OpenReview.net.
* (42) Lin, Y., Liu, M., Sun, M., and Huang, J. (2017b). Detecting adversarial attacks on neural network policies with visual foresight. CoRR, abs/1710.00814.
* Littman et al., (2017) Littman, M. L., Topcu, U., Fu, J., Isbell, C., Wen, M., and MacGlashan, J. (2017). Environment-independent task specifications via GLTL. CoRR, abs/1704.04341.
* Liu et al., (2022) Liu, Z., Guo, Z., Cen, Z., Zhang, H., Tan, J., Li, B., and Zhao, D. (2022). On the robustness of safe reinforcement learning under observational perturbations. CoRR, abs/2205.14691.
* Mnih et al., (2013) Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., and Riedmiller, M. A. (2013). Playing atari with deep reinforcement learning. CoRR, abs/1312.5602.
* Mnih et al., (2015) Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M. A., Fidjeland, A., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nat., 518(7540):529–533.
* Moos et al., (2022) Moos, J., Hansel, K., Abdulsamad, H., Stark, S., Clever, D., and Peters, J. (2022). Robust reinforcement learning: A review of foundations and recent advances. Machine Learning and Knowledge Extraction, 4(1):276–315.
* Nakabi and Toivanen, (2021) Nakabi, T. A. and Toivanen, P. (2021). Deep reinforcement learning for energy management in a microgrid with flexible demand. Sustainable Energy, Grids and Networks, 25:100413.
* Pinto et al., (2017) Pinto, L., Davidson, J., Sukthankar, R., and Gupta, A. (2017). Robust adversarial reinforcement learning. In ICML, volume 70 of Proceedings of Machine Learning Research, pages 2817–2826. PMLR.
* Rakhsha et al., (2020) Rakhsha, A., Radanovic, G., Devidze, R., Zhu, X., and Singla, A. (2020). Policy teaching via environment poisoning: Training-time adversarial attacks against reinforcement learning. In ICML, volume 119 of Proceedings of Machine Learning Research, pages 7974–7984. PMLR.
* Ruan et al., (2018) Ruan, W., Huang, X., and Kwiatkowska, M. (2018). Reachability analysis of deep neural networks with provable guarantees. In IJCAI, pages 2651–2659. ijcai.org.
* Sadigh et al., (2014) Sadigh, D., Kim, E. S., Coogan, S., Sastry, S. S., and Seshia, S. A. (2014). A learning based approach to control synthesis of markov decision processes for linear temporal logic specifications. In CDC, pages 1091–1096. IEEE.
* Sutton and Barto, (2018) Sutton, R. S. and Barto, A. G. (2018). Reinforcement learning: An introduction. MIT press.
* Vamplew et al., (2022) Vamplew, P., Smith, B. J., Källström, J., de Oliveira Ramos, G., Radulescu, R., Roijers, D. M., Hayes, C. F., Heintz, F., Mannion, P., Libin, P. J. K., Dazeley, R., and Foale, C. (2022). Scalar reward is not enough: a response to silver, singh, precup and sutton (2021). Auton. Agents Multi Agent Syst., 36(2):41.
* Wang et al., (2020) Wang, Y., Roohi, N., West, M., Viswanathan, M., and Dullerud, G. E. (2020). Statistically model checking PCTL specifications on markov decision processes via reinforcement learning. In CDC, pages 1392–1397. IEEE.
* Xiang et al., (2018) Xiang, Y., Niu, W., Liu, J., Chen, T., and Han, Z. (2018). A pca-based model to predict adversarial examples on q-learning of path finding. In DSC, pages 773–780. IEEE.
* Yu and Sun, (2022) Yu, M. and Sun, S. (2022). Natural black-box adversarial examples against deep reinforcement learning. In AAAI, pages 8936–8944. AAAI Press.
* Zeng et al., (2022) Zeng, L., Qiu, D., and Sun, M. (2022). Resilience enhancement of multi-agent reinforcement learning-based demand response against adversarial attacks. Applied Energy, 324:119688.
* Zhang et al., (2020) Zhang, H., Chen, H., Xiao, C., Li, B., Liu, M., Boning, D. S., and Hsieh, C. (2020). Robust deep reinforcement learning against adversarial perturbations on state observations. In NeurIPS.
Policy | Env. | Layers | Neurons | LR | Batch | Episodes | Reward
---|---|---|---|---|---|---|---
$\pi_{A}$ | Freeway | 4 | 512 | 0.0001 | 100 | 10000 | 0.96
$\pi_{B}$ | Taxi | 4 | 512 | 0.0001 | 100 | 60000 | -1220.49
$\pi_{C}$ | Collision Avoidance | 4 | 512 | 0.0001 | 100 | 5000 | 9817.00
$\pi_{D}$ | Smart Grid | 4 | 512 | 0.0001 | 100 | 5000 | -1000.10
$\pi_{E}$ | Stock Market | 4 | 512 | 0.0001 | 100 | 10000 | 326.00
Table 3: Training parameters of our policies. $Reward$ is the best sliding-window
average reward over $100$ episodes during training, $LR$ the learning rate, and $Batch$ the batch size.
## Appendix A Additional Environments
Collision avoidance. This is an environment that contains one agent and two
moving obstacles in a two-dimensional grid world. The environment terminates
as soon as a collision between the agent and obstacle 1 happens. The
environment contains a slickness parameter, which defines the probability that
the agent stays in the same cell.
$\displaystyle S=\\{(x,y,obs1\\_x,obs1\\_y,obs2\\_x,obs2\\_y,done),...\\}$
$\displaystyle Act=\\{north,east,south,west\\}$
$\displaystyle rew=\begin{cases}0,&\text{if collision with obstacle 1}\\\ 100,&\text{otherwise}\end{cases}$
Smart grid. In this environment, a controller manages the distribution of
renewable and non-renewable energy production. The objective is to minimize non-
renewable energy production by using renewable technologies. If the energy
consumption exceeds production, it leads to a blackout. Furthermore, if there
is too much energy in the electricity network, the energy production shuts
down.
$\displaystyle S=\\{(energy,blackout,renewable,non\\_renewable,consumption),...\\}$
$\displaystyle Act=\\{increase\\_renewable,increase\\_non\\_renewable,decrease\\_renewable,decrease\\_both\\}$
$\displaystyle rew=\begin{cases}-\max(non\\_renewable-renewable,0),&\text{if no blackout}\\\ -1000,&\text{otherwise}\end{cases}$
Stock market. This environment is a simplified version of a stock market
simulation. The agent starts with an initial capital and has to increase it
through buying and selling stocks without running into bankruptcy.
$\displaystyle S=\\{(buy\\_price,sell\\_price,capital,stocks,last\\_action\\_price),...\\}$
$\displaystyle Act=\\{buy,hold,sell\\}$
$\displaystyle rew=\begin{cases}\max(capital-initial\\_capital,0),&\text{if hold}\\\ \max(\lfloor capital/buy\\_price\rfloor,0),&\text{if buy}\\\ \max(capital+stocks\cdot sell\\_price-initial\\_capital,0),&\text{if sell}\end{cases}$
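For clarity, the reward above can be restated as plain code. The following Python sketch is our own illustration; the function and variable names are ours and not taken from the released environment implementation:

```python
import math

def stock_market_reward(action, capital, initial_capital, stocks,
                        buy_price, sell_price):
    # Illustrative restatement of the reward cases above; names are ours,
    # not from the released environment code.
    if action == "hold":
        return max(capital - initial_capital, 0)
    if action == "buy":
        return max(math.floor(capital / buy_price), 0)
    # action == "sell"
    return max(capital + stocks * sell_price - initial_capital, 0)
```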
## Appendix B Deep RL Training
The training parameters of our trained policies can be found in Table 3. For
all training runs, we set the seed of numpy, PyTorch, and Storm to 128. All RL
policies were trained with the standard deep Q-learning algorithm (Mnih et
al., 2013), using $\epsilon=1$, $\epsilon_{decay}=0.99999$
($\epsilon_{decay}=0.9999$ for Freeway and Collision Avoidance),
$\epsilon_{min}=0.1$, $\gamma=0.99$, and a target network replacement interval of 100.
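For concreteness, this exploration schedule corresponds to a multiplicative epsilon decay. The helper below is a minimal sketch under the assumption that epsilon decays once per training step; it is not part of the COOL-MC code base:

```python
def epsilon_schedule(step, eps_start=1.0, eps_decay=0.99999, eps_min=0.1):
    # Multiplicative epsilon decay clipped at eps_min, using the
    # hyperparameters reported above (sketch, not COOL-MC code).
    return max(eps_min, eps_start * eps_decay ** step)

# After 100k steps: ~0.37 with eps_decay=0.99999; already clipped to 0.1
# with the faster eps_decay=0.9999 used for Freeway and Collision Avoidance.
print(epsilon_schedule(100_000))
print(epsilon_schedule(100_000, eps_decay=0.9999))
```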
## Appendix C New COOL-MC Extensions
COOL-MC (Gross et al.,, 2022) provides a framework for connecting state-of-
the-art (deep) reinforcement learning (RL) with modern model checking. In
particular, COOL-MC extends the OpenAI Gym to support RL training on PRISM
environments and allows verification of the trained RL policies via the Storm
model checker (Hensel et al.,, 2022). COOL-MC is available on GitHub
(https://github.com/LAVA-LAB/COOL-MC).
We extended COOL-MC with an adversarial RL component that allows measuring the
impact of adversarial attacks on RL policy properties. Furthermore, we extended
COOL-MC to support our proposed robustness checker. Both extensions can be found
in the branch _mc_pia_ of the GitHub repository (https://github.com/LAVA-LAB/MC_PIA).
|
# Event-aware Video Corpus Moment Retrieval
Danyang Hou CAS Key Laboratory of AI Security, Institute of Computing
Technology, Chinese Academy of SciencesUniversity of Chinese Academy of
SciencesBeijingChina<EMAIL_ADDRESS>, Liang Pang CAS Key Laboratory
of AI Security, Institute of Computing Technology, Chinese Academy of
SciencesUniversity of Chinese Academy of SciencesBeijingChina
<EMAIL_ADDRESS>, Huawei Shen CAS Key Laboratory of AI Security,
Institute of Computing Technology, Chinese Academy of SciencesUniversity of
Chinese Academy of SciencesBeijingChina<EMAIL_ADDRESS>and Xueqi Cheng
CAS Key Lab of Network Data Science and Technology, Institute of Computing
Technology, Chinese Academy of SciencesUniversity of Chinese Academy of
SciencesBeijingChina<EMAIL_ADDRESS>
###### Abstract.
Video Corpus Moment Retrieval (VCMR) is a practical video retrieval task
focused on identifying a specific moment within a vast corpus of untrimmed
videos using the natural language query. Existing methods for VCMR typically
rely on frame-aware video retrieval, calculating similarities between the
query and video frames to rank videos based on maximum frame similarity.
However, this approach overlooks the semantic structure embedded within the
information between frames, namely, the event, a crucial element for human
comprehension of videos. Motivated by this, we propose EventFormer, a model
that explicitly utilizes events within videos as fundamental units for video
retrieval. The model extracts event representations through event reasoning
and hierarchical event encoding. The event reasoning module groups consecutive
and visually similar frame representations into events, while the hierarchical
event encoding encodes information at both the frame and event levels. We also
introduce anchor multi-head self-attention to encourage the Transformer to capture
the relevance of adjacent content in the video. The training of EventFormer is
conducted by two-branch contrastive learning and dual optimization for two
sub-tasks of VCMR. Extensive experiments on TVR, ANetCaps, and DiDeMo
benchmarks show the effectiveness and efficiency of EventFormer in VCMR,
achieving new state-of-the-art results. Additionally, the effectiveness of
EventFormer is also validated on partially relevant video retrieval task.
Video Corpus Moment Retrieval, Video Retrieval, Event Retrieval
Figure 1. In VCMR, the relevant part corresponding to the query is the moment. While the frame-aware method utilizes frames for retrieval, our event-aware approach adopts events as the retrieval unit, ensuring a more comprehensive capture of moment information.
Table 1. The overlap between the moments predicted by models or the extracted events with ground truth moments. The metric is the ratio of predicted moments to ground truth moments with an IoU greater than 0.5 or 0.7.
Model | IoU=0.5 | IoU=0.7
---|---|---
XML (Lei et al., 2020) (ECCV’20) | 29.56 | 13.05
ReLoCLNet (Zhang et al., 2021a) (SIGIR’21) | 31.65 | 14.80
HERO (Li et al., 2020) (EMNLP’20) | 32.2 | 15.30
Event | 35.04 | 14.21
## 1\. Introduction
With the widespread use of mobile devices and video-sharing applications,
online video content has surged to unprecedented levels, encompassing
extensive, untrimmed content, including TV series and instructional videos. An
advanced video retrieval system should efficiently pinpoint specific moments
within a vast corpus for users, be it a classic shot from a movie or a crucial
step in an instructional video, thereby minimizing the user’s browsing time.
Addressing this need, the recently proposed Video Corpus Moment Retrieval task
(VCMR) (Escorcia et al., 2019; Lei et al., 2020) requires retrieving
semantically relevant video moments from a corpus of untrimmed videos by a
natural language query, where the moment is a continuous temporal segment.
A distinctive feature of VCMR, setting it apart from typical text-to-video
retrieval, lies in the nature of video relevance to the query. Unlike trimmed
videos in text-to-video retrieval (Chen and Dolan, 2011), where the entire
video aligns with the text query, the untrimmed video involves only a small
part (relevant moment) of the content being related to the query, shown in
Figure 1. The existing works (Lei et al., 2020; Li et al., 2020; Zhang et al.,
2021a) employ frame-aware video retrieval to capture the partial relevance
between the query and video. This entails calculating the similarity between
the query and all video frames in the corpus, ranking the videos based on the
maximum similarity of frames within each video. However, these works overlook
the semantic structure embedded within the information between video frames,
i.e., the event. Cognitive science research (Tversky and Zacks, 2013) suggests that
human perception of visual information primarily revolves around the concept
of events, with event information being the most fundamental unit of visual
perception for humans. In the realm of videos, a sequence depicting a
consistent action, object, or environment is termed an event (Shou et al.,
2021), comprised of frames that are both similar and consecutive. Employing
frames as a unit for video retrieval contains less information compared to
human cognitive habits. While the event may not overlap with relevant moment
exactly, it covers more complete information than a frame, shown in Figure 1.
To further evaluate the helpfulness of the event for the VCMR task, we measure
the overlap between the events extracted using the unsupervised method in
(Kang et al., 2022) from the video and the ground truth moment. The results on
the TVR validation set are shown in Table 1. The model results are based on
the predicted moments, and the event extraction results are derived from video
events with the highest overlap with the correct moment (the ideal case).
Notably, with the threshold set at 0.5, the optimal extracted events
outperform the predicted moments of all models. Given that the events are
extracted without any training, these results highlight the utility of event
information in video for VCMR. If the event can be utilized effectively, it
will enhance the accuracy of retrieval. Retrieval efficiency will also
increase because fewer units reduce the amount of computation in retrieval.
However, the frame-aware method, which simply encodes the frame
representations with a Transformer, struggles to utilize events for retrieval.
There are three main reasons. (1) The event information is not explicitly
extracted from the frames. Although contextual relevance is captured using
Transformer, each frame expresses more information from itself, posing
challenges in capturing the overall information of an event. Hence, directly
using frame as event is insufficient. (2) It lacks event-level information
interaction. Events encapsulate more comprehensive semantic information, and
strong semantic associations typically exist between events, as seen in
examples such as two correlative steps in an instructional video. (3) The
attention of the model is not adequately concentrated. The range of attention in
vanilla Transformer (Vaswani et al., 2017) is the entire video. But not all
content in the untrimmed informative video is relevant. The most relevant
content tends to be intra-event, i.e., adjacent.
To this end, we propose EventFormer to explicitly leverage event information
to help VCMR. The model contains two main components for event learning: event
reasoning and hierarchical event encoding. The event reasoning module plays a
pivotal role in extracting event information from the video based on the frame
representation. In reference to the works on generic event boundary detection
(Shou et al., 2021), we introduce three event extraction strategies,
contrastive convolution, Kmeans, and window, aimed at aggregating visually
similar and consecutive frames as event. The hierarchical event encoding
module captures interactions not only at the frame level but also at the event
level, obtaining a more semantically relevant representation of events. To
encourage the model to focus attention on adjacent content, anchor multi-head
self-attention is introduced to augment Transformer. VCMR task includes two
sub-tasks: video retrieval (VR) and single video moment retrieval (SVMR). In
VR, the objective is to retrieve the most pertinent untrimmed video from a
large corpus using a natural language description as query, while SVMR focuses
on pinpointing the start and end times of the relevant moment within the
retrieved video. For the two subtasks, EventFormer adopts distinct training
strategies. Specifically, two-branch contrastive learning and dual
optimization. Both strategies essentially integrate frame and event into the
training process.
We evaluate our proposed EventFormer on three benchmarks, TVR (Lei et al.,
2020), ANetCaps (Caba Heilbron et al., 2015), and DiDeMo (Anne Hendricks et
al., 2017). The results show the effectiveness and efficiency of EventFormer,
achieving new state-of-the-art results. Additionally, we validate the
effectiveness of our model in the partially relevant video retrieval (PRVR)
task.
Our main contributions are as follows:
* •
We propose an event-aware model EventFormer for VCMR, motivated by human
perception for visual information.
* •
We adopt event reasoning and hierarchical event encoding for event learning,
and anchor multi-head self-attention to enhance close-range dependencies.
* •
Experiments on three benchmarks show the effectiveness and efficiency,
achieving new state-of-the-art results on VCMR. We also validate the
effectiveness of the model in PRVR task.
## 2\. Related work
In this section, we first introduce works on two related tasks, text-to-video
retrieval and natural language video localization. Then we review works on
VCMR. Finally, we present works on generic event boundary detection.
Text-to-video retrieval Similar to VR, text-to-video retrieval aims to find
relevant videos from a corpus based on a natural language query. However, the
distinction lies in the nature of query-video relevance. In text-to-video
retrieval, the video is trimmed to precisely match the entire content of the
video with the text query. Text-to-video retrieval methods are broadly
categorized into two types: two-tower models (Bain et al., 2021; Gabeur et
al., 2020; Ge et al., 2022; Ging et al., 2020; Liu et al., 2019a; Miech et
al., 2020; Rouditchenko et al., 2020; Xu et al., 2021b) and one-tower models
(Fu et al., 2021; Lei et al., 2021b; Sun et al., 2019; Xu et al., 2021a; Chen
et al., 2020c; Han et al., 2021; Wang et al., 2021). Two-tower models utilize
separate encoders for obtaining video and query representations, employing a
simple similarity function like cosine to measure relevance. These methods are
efficient due to decomposable computations of query and video representations.
On the other hand, one-tower models leverage cross-modal attention (Bahdanau
et al., 2015; Vaswani et al., 2017) for deep interactions between query and
video, enhancing retrieval accuracy. Some works (Miech et al., 2021; Liu et
al., 2021b; Yu et al., 2022; Lei et al., 2022) combine the strengths of both
methods by employing a two-tower model for fast retrieval of potentially
relevant videos in the initial stage, followed by a one-tower model to
accurately rank the retrieved videos in the subsequent stage.
Natural language video localization The objective of the natural language
video localization task is to pinpoint a moment semantically linked to the
query. This task bears similarities to SVMR and can be viewed as a specialized
case of VCMR, wherein the corpus comprises only one video, and the video must
contain the target moment. Early works can be broadly classified into two
categories: proposal-based (Liu et al., 2018; Xu et al., 2019; Chen and Jiang,
2019; Xiao et al., 2021; Chen et al., 2018; Zhang et al., 2019, 2021b; Liu et
al., 2021a) models and proposal-free (Yuan et al., 2019; Chen et al., 2020b;
Zeng et al., 2020; Li et al., 2021; Ghosh et al., 2019; Chen et al., 2019;
Zhang et al., 2020b) models. In proposal-based methods, initial steps involve
generating moment proposals as candidates, followed by ranking these proposals
based on the similarity between the query and the proposals. On the other
hand, proposal-free methods take a direct approach by predicting the start and
end positions of the target moment in the video based on the query. Drawing
inspiration from the success of Transformer, particularly in object detection
tasks such as DETR (Carion et al., 2020) (DEtection TransfomeR), recent works
propose DETR-based methods (Lei et al., 2021a; Moon et al., 2023; Liu et al.,
2022; Cao et al., 2021) for moment localization. These approaches simplify the
post-processing of previous predictions into an end-to-end process.
Video corpus moment retrieval VCMR is first proposed by Escorcia et al.
(Escorcia et al., 2019), introducing VR task on top of natural language video
localization, with benchmarks derived from localization datasets such as
ANetCaps. Zhang et al. (Lei et al., 2020) propose a dataset for VCMR, where
the videos provide subtitles. Similar to the taxonomy applied to text-to-video
retrieval, existing VCMR works fall into one-tower, two-tower, and two-stage
methods. One-tower (Zhang et al., 2020a; Yoon et al., 2022) and two-tower
methods (Lei et al., 2020; Zhang et al., 2021a; Li et al., 2020), essentially
treated as one-stage approaches, address VCMR as a multi-task problem,
utilizing a shared backbone model with distinct heads for VR and SVMR. HAMMER
(Zhang et al., 2020a) is the first one-tower model with hierarchical fine-
grained cross-modal interactions. SQuiDNet (Yoon et al., 2022) utilizes causal
inference to avoid the model learning bad retrieval biases. The two-tower
method demonstrates superior retrieval efficiency, especially when dealing
with numerous videos in the corpus. To capture partial relevance in VR, frame-
aware retrieval methods are commonly employed. XML (Lei et al., 2020) is a
pioneering work in VCMR using frame-aware retrieval, followed by enhancements
in ReLoCLNet (Zhang et al., 2021a), leveraging contrastive learning. Li et al.
(Li et al., 2020) introduces HERO, a video-language pre-trained model,
significantly improving overall performance. The two-stage method combines
one-tower and two-tower approaches, utilizing the two-tower model for VR to
quickly retrieve video and the one-tower model for SVMR to precisely localize
moment. CONQUER (Hou et al., 2021), DMFAT (Zhang et al., 2023) and CKCN (Chen
et al., 2023) are two-stage models that employ HERO for video retrieval and
propose one-tower models as moment localizer. CONQUER introduce a moment
localizer based on context-query attention (CQA)(Yu et al., 2018). DMFAT
innovates with multi-scale deformable attention for multi-granularity feature
fusion. And CKCN introduces a calibration network to refine important modality
features. Our model also adopts a two-stage approach, differing by integrating
an event-aware retrieval strategy. Recently, Dong et al. (Dong et al., 2022)
introduces a new task partially relevant video retrieval (PRVR) which is a
weakly supervised version of VR, where the relevant moment is not provided.
Generic event boundary detection Generic event boundary detection (GEBD) (Shou
et al., 2021) is a video understanding task designed to identify boundaries,
dividing the video into several meaningful units that humans perceive as
events. Typically, the frames within an event exhibit visual similarity and
continuity, with event boundaries aligning with changes in action, subject,
and environment. The task provides supervised and unsupervised settings, where
the unsupervised setting is suitable to be generalized across various video
understanding scenarios. UBoCo (Kang et al., 2022) is a representative work
for unsupervised GEBD that leverages contrastive convolution to identify
frames with drastic visual variations as event boundaries from the temporal
self-similarity matrix (TSM) of video frames. We integrate the method into the
event reasoning of the proposed EventFormer and implement two other
strategies.
Figure 2. Video retriever: the hierarchical encoding of events involves
interactions at both the frame and event levels, where the events are
extracted by event reasoning module and Transformer for frames or events is
augmented with anchor attention.
## 3\. Method
In this section, we detail the proposed event-aware retrieval model
EventFormer for VCMR task. We first formulate VCMR task and the sub-tasks in
Section 3.1. Then, we describe the feature extraction of video and query in
Section 3.2. Next, we introduce two main modules of EventFormer video
retriever and moment localizer in Section 3.3 and Section 3.4 respectively.
Finally, we present training and inference of model on VCMR task in Section
3.5.
### 3.1. Task Formulation
Given a video corpus $\mathcal{V}=\\{v_{1},v_{2},...,v_{M}\\}$, the goal of
VCMR is to retrieve the most relevant moment $m_{*}$ using a natural language
query $q=\\{w^{1},w^{2},...,w^{L}\\}$ which consists of a sequence of words.
The retrieval can be formulated as :
(1) $m_{*}=\mathop{\rm argmax}\limits_{m}P(m|q,\mathcal{V}).$
VCMR can be decomposed into two sub-tasks, VR and SVMR. The goal of VR is to
find the video $v_{*}$ that potentially contains the target moment from the
corpus:
(2) $v_{*}=\mathop{\rm argmax}\limits_{v}P(v|q).$
And SVMR aims to localize moment from the retrieved video:
(3) $m_{*}=\mathop{\rm argmax}\limits_{m}P(m|v_{*},q),$
where the predicted moment is decided by the start and end times:
(4) $P(m|v_{*},q)=P(\tau_{st}|v_{*},q)\cdot P(\tau_{ed}|v_{*},q).$
In video $v_{*}$, only the segment of the target moment $m_{*}$ holds
relevance to the query. As a result, many prior methods typically adopt frame-
aware retrieval for VR. We introduce a simple yet effective event-aware
retrieval model for VCMR.
### 3.2. Feature Extractor
The initial features of model are extracted by pre-trained networks. The
visual features (frame features) of video are encoded by 2D and 3D CNNs, i.e.,
ResNet (He et al., 2016) and Slowfast (Feichtenhofer et al., 2019) to extract
semantic and action features respectively. The textual features of subtitles
in video and text query are encoded by RoBERTa (Liu et al., 2019b). In
particular, the feature of a frame in the video is obtained by max-pooling the
visual features over a short duration (1.5 seconds), and if subtitles are
available at the corresponding time, it is featured as max-pooling word
features in the subtitle at the corresponding duration. The visual features of
frames in the $i$-th video $v_{i}$ is formulated as
$\bm{F}_{i}=\\{\bm{f}_{i}^{1},\bm{f}_{i}^{2},...,\bm{f}_{i}^{T}\\}$, and the
subtitle features are
$\bm{S}_{i}=\\{\bm{s}_{i}^{1},\bm{s}_{i}^{2},...,\bm{s}_{i}^{T}\\}$. If the
subtitle is not available at a time in the video, the corresponding text
feature is a vector of zeros. The query feature is
$\bm{Q}=\\{\bm{w}^{1},\bm{w}^{2},...,\bm{w}^{L}\\}$. In this paper, we use
bold symbols for vectors, distinguishing normal symbols such as $v_{i}$ that
indicate a video. Before being fed into the model, all features are mapped by
the fully connected layers to a space of the dimension $D$.
### 3.3. Event-aware Video Retriever
We propose a two-tower event-aware retriever that utilizes the event
representations of the videos as the retrieval units. The extraction of event
representations involves event reasoning and hierarchical event encoding shown
in Figure 2.
#### 3.3.1. Event Reasoning
We segment the video into units perceived by humans as events, emphasizing the
gathering of consecutive and visually similar frames to form events. A
representative work for event extraction is UBoCo (Kang et al., 2022) which
leverages contrastive convolution to identify event boundaries. We draw on
this approach but simplify the process to make it more adaptable to VCMR. In
addition, we also adopt two extra event extraction strategies, K-means and
window.
Contrastive convolution Utilizing frame representations
$\bar{\bm{F}}=\\{\bar{\bm{f}}^{1},\bar{\bm{f}}^{2},...,\bar{\bm{f}}^{T}\\}$,
we compute self-similarities among frames, thereby constructing a Temporal
Self-Similarity Matrix (TSM) shown in Figure 2. A contrastive kernel is
employed to perform convolution along the diagonal of TSM, for computing event
boundary scores. The results of diagonal elements serve as boundary scores,
where a higher score indicates a greater likelihood that the frame is a
boundary used to split video into events. We use a threshold $\delta$ to
decide whether the i-$th$ frame is a boundary if the difference between the
score and the mean of all scores is greater than $\delta$.
Kmeans We employ TSM column vectors as features for K-means clustering,
partitioning the video into $k$ segments to represent distinct events. To
ensure consecutiveness within each segment, we include the frame index as an
additional feature.
Window A fixed-size window divides the video evenly into pieces as events. The
window size $w$ is a hyper-parameter.
This paper focuses on extracting visual events and still employing a frame-
aware approach for subtitles. Extracting textual events poses more challenges
as subtitle information is non-continuous, and subtitles with high similarity
may not belong to the same event, as observed in the topic model (Larochelle
and Lauly, 2012). The extraction of textual events can be left for future
works.
#### 3.3.2. Hierarchical Event Encoding
We employ a hierarchical structure to encode event representations, initially
focusing on frame representation and subsequently on the event, ensuring the
interactions of contextual information at both levels. Transformers for frame
and event are augmented with anchor attention, encouraging them to focus on
the correlations of neighboring content. And the video retriever is trained by
two-branch contrastive learning.
Anchor Attention Untrimmed video contains abundant information, where not all
frames or events exhibit strong correlations, and there is a tendency for
higher correlations within close ranges. To this end, we introduce anchor
multi-head self-attention (AMHSA) to enhance the relevance between neighboring
content.
We review vanilla multi-head self-attention (MHSA) (Vaswani et al., 2017):
(5) ${\rm MultiHead}(Q,K,V)={\rm Concat}({\rm head}_{1},...,{\rm head}_{h}),$
(6) ${\rm head}_{i}={\rm Attention}(Q,K,V),$
(7) $\alpha=\frac{QK^{T}}{\sqrt{D}},\ \ {\rm Attention}={\rm softmax}(\alpha)V,$
where $\alpha$ is the attention score before softmax normalized. In each
attention head, an element in the input computes attention scores with all
elements. Instead, we introduce a constraint, allowing an element to calculate
attention scores only with a finite number of its neighboring elements. For
instance, for the $i$-th frame, attention score computation is limited to the
2 frames before and after, forming a range of [i-2, i+2]. Different attention
heads can utilize various ranges, such as 2, 3, 4, or all frames (ensuring
globality) shown in Figure 2, capturing multi-scale neighborhood correlations.
These ranges for attention computation serve as anchors, leading us to term it
”anchor attention.” We use AnchorFormer to mark the Transformer enhanced with
anchor attention.
(a) Two-branch sampling for video retriever.
(b) Dual optimization for moment localizer.
Figure 3. Two-branch sampling and dual optimization.
Hierarchical Video Encoder The hierarchical encoding of the event involves
frame and event encodings. We first encode frame representations. For the
$i$-th video, we input visual features of the video and textual features of
subtitles, along with positional embeddings and modality embeddings, into a
multi-modal AnchorFormer. This allows for the simultaneous capture of both
intra-modal and inter-modal contextual dependencies. The output contextual
representation of visual and textual modalities are
$\bar{\bm{F}}_{i}=\\{\bar{\bm{f}}_{i}^{1},\bar{\bm{f}}_{i}^{2},...,\bar{\bm{f}}_{i}^{T}\\}$
and
$\bar{\bm{S}}_{i}=\\{\bar{\bm{s}}_{i}^{1},\bar{\bm{s}}_{i}^{2},...,\bar{\bm{s}}_{i}^{T}\\}$
respectively.
After event reasoning, we partition the video into $N$ events, the initial
event representation
$\bar{\bm{E}}_{i}=\\{\bar{\bm{e}}_{i}^{1},\bar{\bm{e}}_{i}^{2},...,\bar{\bm{e}}_{i}^{N}\\}$
is obtained from max pooling of the frame representations contained in event.
Considering that events carry richer semantic information compared to frames,
and the frequent presence of tight semantic associations between events, we
employ an additional AnchorFormer to capture contextual dependencies at event
level. The input of AnchorFormer is event representations $\bar{\bm{E}}_{i}$
and subtitle representations $\bar{\bm{S}}_{i}$, and the output is contextual
representations
$\hat{\bm{E}}_{i}=\\{\hat{\bm{e}}_{i}^{1},\hat{\bm{e}}_{i}^{2},...,\hat{\bm{e}}_{i}^{N}\\}$
and
$\hat{\bm{S}}_{i}=\\{\hat{\bm{s}}_{i}^{1},\hat{\bm{s}}_{i}^{2},...,\hat{\bm{s}}_{i}^{T}\\}$.
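The event pooling step above can be sketched as follows, assuming boundary indices from the event reasoning module (names are ours):

```python
import torch

def pool_events(frame_reprs, boundary_idx):
    # frame_reprs: (T, D); boundary_idx: frame indices that start a new
    # event. Each initial event representation is the max-pool of the
    # frames it spans (sketch of Section 3.3.2).
    starts = [0] + list(boundary_idx)
    ends = list(boundary_idx) + [frame_reprs.size(0)]
    events = [frame_reprs[s:e].max(dim=0).values
              for s, e in zip(starts, ends) if e > s]
    return torch.stack(events)                  # (N, D)
```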
Query Encoder Token features of a query are processed through vanilla
Transformer to yield token representations $\bar{\bm{w}}^{j}$. Given the
inconsistent matching of words across modalities in a query, as the query in
Figure 2, where ”…turns and leaves the room” emphasizes the visual modality
with its action description, while ”Foreman gives Chase a negative answer to
his question…” leans towards the textual modality, we adopt modality-specific
pooling to create two query representations for two modalities, denoted as
$\bm{Q}_{F}$ (frame) and $\bm{Q}_{S}$ (subtitle). Specifically, we calculate
the weight of each word for a modality, followed by a weighted sum of word
representations:
(8) $o^{j}=\bm{W}_{d}\bar{\bm{w}}^{j},\ \ \ \alpha^{j}=\frac{{\rm exp}(o^{j})}{\sum\limits_{i=1}^{L}{\rm exp}(o^{i})},\ \ \ \bm{Q}_{d}=\sum\limits_{j=1}^{L}\alpha^{j}\bar{\bm{w}}^{j},$
where $\bm{W}_{d}\in\mathbb{R}^{D\times 1}$ is a fully-connected layer that
outputs a scalar $o^{j}$, $d\in\\{F,S\\}$ denotes frame or subtitle,
$\alpha^{j}$ is the softmax-normalized weight of the $j$-th word, and
$\bm{Q}_{d}$ is the modality-specific query representation.
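A small PyTorch sketch of Eq. (8); the class and variable names are our own illustration:

```python
import torch
import torch.nn as nn

class ModalityPooling(nn.Module):
    # Sketch of Eq. (8): a learned per-token scalar score for each
    # modality, softmax-normalized into weights, then a weighted sum of
    # token representations.
    def __init__(self, dim, num_modalities=2):
        super().__init__()
        self.scorers = nn.ModuleList([nn.Linear(dim, 1)
                                      for _ in range(num_modalities)])

    def forward(self, tokens):                  # tokens: (L, D)
        queries = []
        for scorer in self.scorers:
            alpha = torch.softmax(scorer(tokens).squeeze(-1), dim=0)
            queries.append(alpha @ tokens)      # (D,) weighted sum
        return queries                          # [Q_F, Q_S]
```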
Figure 4. Moment Localizer: both frame outputs and event outputs undergo optimization during training, while only frame outputs are utilized for prediction.
Table 2. VR results on TVR validation set, ANetCaps validation set, and DiDeMo test set. ${\dagger}$: the fine-tuned model before pre-training. The hyper-parameters ($\delta$, $k$, $w$) of the three event extraction strategies are set to (0.3, 10, 5), (0.1, 7, 8), and (0.1, 5, 4) for the three datasets respectively. The results of XML and ReLoCLNet are reproduced by us using the same features.
Model | TVR | ANetCaps | DiDeMo
---|---|---|---
R@1 | R@5 | R@10 | R@100 | R@1 | R@5 | R@10 | R@100 | R@1 | R@5 | R@10 | R@100
XML (Lei et al., 2020) (ECCV’20) | 18.52 | 41.36 | 53.15 | 89.59 | 6.14 | 20.69 | 32.45 | 75.92 | 6.23 | 19.35 | 29.95 | 74.16
ReLoCLNet (Zhang et al., 2021a) (SIGIR’21) | 22.63 | 46.54 | 57.91 | 90.65 | 6.66 | 22.18 | 34.07 | 75.59 | 5.53 | 18.25 | 27.96 | 71.42
HERO (Li et al., 2020) (EMNLP’20) | 19.44 | 42.08 | 52.34 | 84.94 | 4.70 | 16.77 | 27.01 | 67.42 | 5.11 | 16.35 | 33.11 | 68.38
${\rm HERO}^{{\dagger}}$ (Li et al., 2020) (EMNLP’20) | 29.01 | 52.82 | 63.07 | 89.91 | 6.46 | 21.45 | 32.61 | 73.00 | 8.46 | 23.43 | 34.86 | 75.36
SQuiDNet (Yoon et al., 2022) (ECCV’22) | 31.61 | - | 65.32 | - | - | - | - | - | - | - | - | -
EventFormer (Frame) | 25.56 | 50.14 | 61.33 | 91.79 | 7.50 | 24.20 | 37.10 | 77.97 | 7.63 | 24.06 | 35.06 | 77.63
EventFormer (Convolution) | 28.44 | 52.92 | 64.11 | 92.92 | 7.97 | 25.51 | 37.97 | 77.62 | 8.19 | 23.77 | 35.33 | 77.67
EventFormer (Kmeans) | 27.51 | 52.80 | 64.01 | 92.54 | 8.36 | 26.03 | 38.42 | 78.00 | 7.99 | 24.08 | 35.61 | 77.57
EventFormer (Window) | 27.59 | 52.50 | 64.07 | 92.38 | 8.16 | 25.76 | 37.93 | 77.96 | 8.39 | 25.39 | 35.76 | 77.72
Two-branch Contrastive Learning We introduce a two-branch contrastive learning
method focusing on both frame and event representations for event
representation learning. The additional frame representation learning aims to
acquire more fitting representations for the query, considering that events
are composed of frames. A key aspect in representation learning involves the
selection of positive and negative samples (Chen et al., 2020a). Shown in
Figure 3, we take the positive sample from the range of the correct moment, as
this part is explicitly relevant to the query. Given the contextual coherence
of the video, content beyond the range of the moment might possess implicit
relevance to the query, such as content preceding and following the moment. We
therefore also take a weak positive sample from content outside the target moment in the
video. The negative samples are drawn from videos irrelevant to the query.
Specifically, in the frame branch, the positive sample and weak positive sample are
frames exhibiting the highest query similarity within and outside the target
moment, respectively. For negative frames, we employ the hardest sample mining
technique (Faghri et al., 2017), wherein the frame within each negative video
exhibiting the highest similarity to the query is chosen as the negative
sample. Subtitles are sampled in the same way as frames. We apply InfoNCE
(Oord et al., 2018) loss, and take the positive sample as an example:
(9) $\mathcal{L}^{f}=-log\frac{{\rm exp}(rf^{+}/t)}{{\rm exp}(rf^{+}/t)+\sum\limits_{z=1}^{n}{\rm exp}(rf_{z}^{-}/t)},$
where $t$ is the temperature set to 0.01, $n$ is the number of negative
videos, and $rf^{+}$ is the average of cosine similarities of query and
positive frame/subtitle :
(10) $rf^{+}=\frac{1}{2}({\rm cos}(\bm{Q}_{F},\bar{\bm{f}}^{+})+{\rm
cos}(\bm{Q}_{S},\bar{\bm{s}}^{+})),$
where subtitle similarity ${\rm cos}(\bm{Q}_{S},\bar{\bm{s}}^{+})$ is
optional. The computation of weak positive frame loss $\mathcal{L}^{f}_{w}$ is
identical to that of the positive frame. In addition to query-to-frame loss,
following most works on cross-modal retrieval that employ bidirectional loss,
we incorporate frame-to-query loss $\mathcal{L}^{q}$. The bidirectional loss
for frame branch is:
(11)
$\mathcal{L}_{F}=\mathcal{L}^{f}+\omega*\mathcal{L}^{f}_{w}+\mathcal{L}^{q},$
where $\omega$ is a hyper-parameter set to 0.5.
For the event branch, the positive event is the one containing the positive frame,
preserving the consistency of contrastive learning. The negative events are sampled
similarly to those in the frame branch. The overall loss for two-branch
contrastive learning is:
(12) $\mathcal{L}=\lambda*\mathcal{L}_{F}+\mathcal{L}_{E},$
where $\mathcal{L}_{E}$ is InfoNCE loss of event representation learning
between query representations $\bm{Q}_{F}$/$\bm{Q}_{S}$ and event and subtitle
representations $\hat{\bm{e}}$/$\hat{\bm{s}}$, and $\lambda$ is a hyper-
parameter set to 0.8.
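The InfoNCE term of Eq. (9) can be sketched as follows; the helper name is ours and the temperature follows the text:

```python
import torch
import torch.nn.functional as F

def info_nce(sim_pos, sim_negs, t=0.01):
    # Sketch of Eq. (9): similarity of the positive sample against the
    # hardest-negative similarities, with temperature t = 0.01.
    logits = torch.cat([sim_pos.view(1), sim_negs]).unsqueeze(0) / t
    target = torch.zeros(1, dtype=torch.long)   # positive at index 0
    return F.cross_entropy(logits, target)

# Two-branch objective, Eqs. (11)-(12), with omega = 0.5 and lambda = 0.8:
# L_F = L_f + 0.5 * L_f_weak + L_q,  L = 0.8 * L_F + L_E.
```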
Table 3. SVMR and VCMR results on TVR validation and test set. The results of the test set are obtained by submitting predictions to the evaluation system. $*$: reproduced results. ${\dagger}$: the two-stage models that use HERO as the video retriever.
| | SVMR (val) | | | VCMR (val) | | | VCMR (test) |
---|---|---|---|---|---|---|---|---|---
Model | | IoU=0.7 | | | IoU=0.7 | | | IoU=0.7 |
| R@1 | R@10 | R@100 | R@1 | R@10 | R@100 | R@1 | R@10 | R@100
HAMMER (Zhang et al., 2020a) (Arxiv’20) | - | - | - | 5.13 | 11.38 | 16.71 | - | - | -
SQuiDNet (Yoon et al., 2022) (ECCV’22) | 24.74 | - | - | 8.52 | - | - | 10.09 | 31.22 | 46.05
XML (Lei et al., 2020) (ECCV’20) | $13.05^{*}$ | $38.80^{*}$ | $63.13^{*}$ | $2.91^{*}$ | $10.12^{*}$ | $25.10^{*}$ | 3.32 | 13.41 | 30.52
ReLoCLNet (Zhang et al., 2021a) (SIGIR’21) | $14.80^{*}$ | $45.85^{*}$ | $72.39^{*}$ | $4.11^{*}$ | $14.41^{*}$ | $32.94^{*}$ | - | - | -
HERO (Li et al., 2020) (EMNLP’20) | 15.30 | 40.84 | 63.45 | 5.13 | 16.26 | 24.55 | 6.21 | 19.34 | 36.66
${\rm CONQUER}^{{\dagger}}$ (Hou et al., 2021) (MM’21) | 22.84 | $53.98^{*}$ | $79.24^{*}$ | 7.76 | 22.49 | 35.17 | 9.24 | 28.67 | 41.98
${\rm DMFAT}^{{\dagger}}$ (Zhang et al., 2023) (TCSVT’23) | 23.26 | - | - | 7.99 | 23.81 | 36.89 | - | - | -
${\rm CKCN}^{{\dagger}}$ (Chen et al., 2023) (TMM’23) | 23.18 | - | - | 7.92 | 22.00 | 39.87 | - | - | -
EventFormer | 25.45 | 62.87 | 80.41 | 10.12 | 27.54 | 42.88 | 11.11 | 32.78 | 46.18
### 3.4. Event-aware Moment Localizer
The moment localizer shown in Figure 4 is focused on accurately pinpointing
the location of the target moment. We follow the works (Lei et al., 2020;
Zhang et al., 2021a; Hou et al., 2021; Li et al., 2020) on VCMR of using
proposal-free method, i.e., directly learning to predict the start and end
positions of the moment. We incorporate event information into the proposal-free method,
enhancing the model’s learning to discriminate the start and end positions of
moments through dual optimization of frame and event.
Architecture We introduce a one-tower event-aware moment localizer that has a
similar structure to the retriever and also leverages AMHSA and event
reasoning, but the video encoding requires cross-attention with the query. The
query encoder is the same as in the video retriever, employing the vanilla
Transformer to encode query words. However, the distinction is that the
overall query representation $\bm{Q}_{F}$ or $\bm{Q}_{S}$ is unnecessary. We
emphasize that the architecture of the localizer is not novel; however, our
innovation lies in the utilization of events and dual optimization.
Dual Optimization As shown in Figure 3, for frame optimization, the objective
is to maximize the confidence scores of frames which is the start or end
boundary of ground truth moment. The confidence scores are derived from the
output of AnchorFormer. Concretely, we begin by summing the visual and textual
outputs at the same index, creating a sequence of multi-modal features with a
length of $T$. Subsequently, the features are fed to two different 1D
convolution networks to generate confidence scores for start
$lf^{st}\in\mathbb{R}^{1}$ and end $lf^{ed}$ boundaries respectively. The 1D
convolutions are used to capture dependencies among neighboring frames. The
optimization is based on cross-entropy loss:
(13) $\mathcal{L}_{F}^{st}=-log\frac{{\rm exp}(lf^{st})}{\sum\limits_{i=1}^{T}{\rm exp}(lf_{1}^{i})},\ \ \ \mathcal{L}_{F}^{ed}=-log\frac{{\rm exp}(lf^{ed})}{\sum\limits_{i=1}^{T}{\rm exp}(lf_{2}^{i})},$
(14) $\mathcal{L}_{F}=\mathcal{L}_{F}^{st}+\mathcal{L}_{F}^{ed},$
where $lf_{1}^{i}$ and $lf_{2}^{i}$ are the $i$-th outputs from two
convolution networks. For event optimization, we expect high confidence scores
for events that contain correct moment boundaries. To obtain the confidence of
the event, we use the output event representations and subtitle
representations as features. Similar to frame optimization, we also need text
features for event prediction. We perform max-pooling on the subtitle
representations within the scope of an event as the textual event features for
the event. The sum of visual and textual features of an event are fed to two
distinct fully connected networks to predict confidence scores that the moment
boundaries are in the event. The optimization is the same as that in Eq. 13
and Eq. 14. The overall loss is:
(15) $\mathcal{L}=\mathcal{L}_{F}+\gamma*\mathcal{L}_{E},$
where $\mathcal{L}_{E}$ is event loss, $\gamma$ is a hyper-parameter set to
0.8.
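The frame branch of dual optimization can be sketched as below; the kernel size of the 1D convolutions is our assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BoundaryHeads(nn.Module):
    # Sketch of the frame branch: two 1D convolutions over the fused
    # (visual + textual) frame sequence produce start/end logits.
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.start = nn.Conv1d(dim, 1, kernel_size, padding=pad)
        self.end = nn.Conv1d(dim, 1, kernel_size, padding=pad)

    def forward(self, feats):                   # feats: (T, D)
        x = feats.t().unsqueeze(0)              # (1, D, T)
        return self.start(x).view(-1), self.end(x).view(-1)

def frame_loss(lf1, lf2, st_idx, ed_idx):
    # Cross-entropy against ground-truth boundary indices, Eqs. (13)-(14).
    return (F.cross_entropy(lf1.unsqueeze(0), torch.tensor([st_idx])) +
            F.cross_entropy(lf2.unsqueeze(0), torch.tensor([ed_idx])))
```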
### 3.5. Training and Inference
We employ a stage-wise training strategy for the two modules. Firstly, the
video retriever is trained using an in-batch negative sampling method
(Karpukhin et al., 2020), where all other videos in a batch serve as negative
videos. Subsequently, the moment localizer is trained using sharing
normalization techniques (Shared-Norm)(Clark and Gardner, 2018), widely
applied in open-domain QA (Chen et al., 2017) tasks. This technique enhances
the confidence that the moment appears in the correct video while reducing its
confidence in the wrong video. Specifically, the softmax normalizations in the
loss functions Eq. 13 cover confidence scores not only for frames or events in
the correct video but also in incorrect videos, serving as negative samples.
The negative videos are sampled from the training set based on high similarity
to the query, with the similarity computed by the trained video retriever.
In inference, we first use the video retriever to retrieve the top-10 videos
from the corpus based on the average of the highest query-event and query-
subtitle similarities $re_{i}$ in the video $v_{i}$. The moment localizer is
used to predict the position of the moment in the 10 videos, relying on the
confidence scores ($lf^{st}_{i}$ and $lf^{ed}_{i}$) indicating whether a frame
serves as a start or end boundary. The event aspect of moment localizer is
excluded from the prediction, as moment localization necessitates fine-grained
frame-level localization. The confidence score $cm$ for moment prediction
consists of video retrieval score and moment localization score:
(16) $cm=\frac{re_{i}}{t}+lf^{st}_{i}+lf^{ed}_{i},$
where $t$ is temperature in contrastive learning, consistent with the training
objective, and $cm$ is used to rank the candidate moments.
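Eq. (16) amounts to the following scoring rule (a sketch; names are ours):

```python
def moment_score(re_i, lf_st, lf_ed, t=0.01):
    # Sketch of Eq. (16): the retrieval similarity, rescaled by the
    # contrastive temperature, plus the start/end boundary confidences.
    return re_i / t + lf_st + lf_ed

# Candidate moments across the top-10 retrieved videos are ranked by this
# score; non-maximum suppression then removes overlapping predictions.
```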
Table 4. SVMR and VCMR results on ANetCaps validation set and DiDeMo test set. The metric is $R@1,IoU=0.5,0.7$.
Dataset | Model | SVMR | VCMR
---|---|---|---
0.5 | 0.7 | 0.5 | 0.7
ANetCaps | HAMMER (Zhang et al., 2020a) | 41.45 | 24.27 | 2.94 | 1.74
ReLoCLNet (Zhang et al., 2021a) | - | - | 3.09 | 1.82
CONQUER (Hou et al., 2021) | $35.63^{*}$ | $20.08^{*}$ | $2.14^{*}$ | $1.33^{*}$
EventFormer | 45.21 | 27.98 | 4.32 | 2.75
DiDeMo | XML (Lei et al., 2020) | - | - | 2.36 | 1.59
ReLoCLNet (Zhang et al., 2021a) | $34.81^{*}$ | $26.71^{*}$ | $2.28^{*}$ | $1.71^{*}$
HERO (Li et al., 2020) | $\textbf{39.20}^{*}$ | $30.19^{*}$ | $3.42^{*}$ | $2.79^{*}$
CONQUER (Hou et al., 2021) | 38.17 | 29.9 | 3.31 | 2.79
DMFAT (Zhang et al., 2023) | - | - | 3.44 | 2.89
CKCN (Chen et al., 2023) | 36.54 | 28.89 | 3.22 | 2.69
| EventFormer | 39.02 | 30.91 | 3.53 | 3.12
## 4\. Experiments
### 4.1. Experimental Details
Datasets We evaluate EventFormer on three benchmarks. TV shows retrieval (TVR)
(Lei et al., 2020) is constructed on TV shows with videos providing subtitles.
The training, validation, and testing sets of TVR consist of 17,435, 2,179,
and 1,089 videos, respectively. Each video contains 5 moments for retrieval.
The average duration of the videos and moments are 76.2 seconds and 9.1
seconds respectively. ActivityNet Captions (ANetCaps) (Caba Heilbron et al.,
2015) comprises approximately 20K videos. The videos exclusively contain
visual information without subtitles. We follow the setup in (Zhang et al.,
2020a; Yoon et al., 2022) with 10,009 videos for training and 4,917 videos for
testing, resulting in 37,421 and 17,505 moments respectively. The average
duration of videos and moments are 120 seconds and 36.18 seconds respectively.
The videos of Distinct Describable Moments (DiDeMo) (Anne Hendricks et al.,
2017) are from YFCC100M (Thomee et al., 2016), exclusively feature visual
information. The dataset is divided into 8,395, 1,065, and 1,004 videos for
training, validation, and testing, respectively. Most videos have a duration
of approximately 30 seconds, uniformly segmented into 5-second intervals,
so that moment boundaries consistently align with multiples of 5.
Implementation For TVR and DiDeMo, we utilize the 768D RoBERTa feature
provided by (Lei et al., 2020) for query and subtitle, and the 4352D
SlowFast+ResNet feature provided by (Li et al., 2020) as the frame feature.
The duration for the sampling frame feature is 1.5 seconds with an FPS of 3.
We follow the feature extractions in (Lei et al., 2020) and (Li et al., 2020)
to extract features for ANetCaps. In inference, we first retrieve the top-10
videos, then localize the moment within the retrieved videos. Non-maximum
suppression (Girshick et al., 2014) is applied in moment localization to
remove overlapped predictions. For Shared-Norm at the moment localizer, the
number of negative videos is set to 5, sampled from the top-100 videos in the
training set, ranked by the video retriever. Anchor sizes for AnchorFormer in
frame and event are configured as 3, 6, 9, all and 1, 2, 3, all, respectively.
Evaluation Metrics Following (Lei et al., 2020), the metrics for VR are the
same as those for text-to-video retrieval, i.e., $R@K$ ($k=1,5,10,100$) the
fraction of queries that correctly retrieve correct videos in the top K of the
ranking list. And for SVMR and VCMR, the metrics are $R@K,IoU=\mu$
($\mu=0.5,0.7$) which require the intersection over union (IoU) of predicted
moments to ground truth exceeds $\mu$. The evaluation of SVMR is only in the
correct video for a query, while the evaluation of VCMR ranges over videos in
the corpus.
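For reference, the temporal IoU underlying these metrics can be computed as follows (a sketch; names are ours):

```python
def temporal_iou(pred, gt):
    # Temporal IoU between two (start, end) segments in seconds, as used
    # by the R@K,IoU=mu metrics.
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0
```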
Baselines We compare our model to the models for VCMR task as a baseline,
containing one-tower methods, two-tower methods, and two-stage methods. One-
tower: HAMMER (Zhang et al., 2020a), SQuiDNet (Yoon et al., 2022). Two-tower:
XML (Lei et al., 2020), ReLoCLNet (Zhang et al., 2021a), HERO (Li et al.,
2020). Two-stage: CONQUER (Hou et al., 2021), DMFAT (Zhang et al., 2023), CKCN
(Chen et al., 2023). Additionally, a frame-aware baseline, denoted as
EventFormer (Frame), is introduced for VR. This baseline shares the same model
architecture and the number of parameters as the proposed EventFormer but
lacks anchor attention, event reasoning and event encoding modules.
### 4.2. Main Results
VR The results of VR task on three datasets are reported in Table 2. Except
for the one-tower model SQuiDNet and the pre-trained large video-language
model HERO, our event-aware retrieval model surpasses other frame-aware
retrieval models such as XML, ReLoCLNet, and the frame-aware version of
EventFormer. SQuiDNet leverages fine-grained cross-modal interaction between
video and query for better matching. Nevertheless, the one-tower method
encounters retrieval efficiency challenges as it involves fine-grained
interactive matching between the query and each video in the corpus. HERO is
pre-trained on HowTo100M (Miech et al., 2019) and TVR, providing external
knowledge for retrieval. However, HERO’s performance on ANetCaps and DiDeMo is
sub-optimal, likely due to a domain gap between the videos in the two datasets
and HERO’s pre-training data. For three event strategies, convolution
surpasses the other two strategies in TVR, demonstrating superior adaptability
to varying numbers of events in videos and dynamic event spans. Kmeans
performs better in ANetCaps, attributed to the fact that the majority of
videos in ANetCaps come from YouTube, which are user-shot, one-shot,
continuous sequences, distinct from TV shows with explicit scene transitions.
In DiDeMo, the window strategy excels because of the consistently fixed size
of the query-related portion in videos, as detailed in (Anne Hendricks et al.,
2017). Future works can be the exploration of more robust extraction methods
for diverse datasets.
Table 5. Ablation of video retriever on TVR validation set. ’S’: subtitle.
’ER’: event reasoning. ’EI’: event interaction. ’AMHSA’: anchor multi-head
self-attention. ’FCL’: frame contrastive learning. ’ECL’:event contrastive
learning. ’WP’: weak positive sample.
S | ER | EI | AMHSA | FCL | ECL | WP | R@1 | R@5 | R@10 | R@100
---|---|---|---|---|---|---|---|---|---|---
$\surd$ | $\surd$ | $\surd$ | $\surd$ | $\surd$ | $\surd$ | $\surd$ | 28.44 | 52.92 | 64.11 | 92.92
| $\surd$ | $\surd$ | $\surd$ | $\surd$ | $\surd$ | $\surd$ | 17.15 | 38.84 | 50.30 | 86.97
$\surd$ | | | $\surd$ | $\surd$ | | $\surd$ | 25.56 | 50.14 | 61.33 | 91.79
$\surd$ | $\surd$ | | $\surd$ | $\surd$ | $\surd$ | $\surd$ | 26.54 | 51.43 | 62.01 | 91.91
$\surd$ | $\surd$ | $\surd$ | | $\surd$ | $\surd$ | $\surd$ | 26.29 | 51.55 | 62.18 | 92.13
$\surd$ | $\surd$ | $\surd$ | $\surd$ | $\surd$ | | $\surd$ | 26.84 | 51.48 | 62.55 | 91.90
$\surd$ | $\surd$ | $\surd$ | $\surd$ | $\surd$ | $\surd$ | | 27.49 | 52.53 | 64.01 | 92.69
Table 6. Ablation of moment localizer on TVR validation set. ’EO’: event
optimization.’SN’: Shared-Norm.
S | AMHSA | EO | SN | SVMR (R@1) | VCMR(R@1)
---|---|---|---|---|---
0.5 | 0.7 | 0.5 | 0.7
$\surd$ | $\surd$ | $\surd$ | $\surd$ | 47.14 | 25.45 | 17.79 | 10.12
| $\surd$ | $\surd$ | $\surd$ | 41.75 | 22.07 | 14.80 | 8.00
$\surd$ | | $\surd$ | $\surd$ | 44.64 | 23.89 | 16.49 | 9.23
$\surd$ | $\surd$ | | $\surd$ | 46.25 | 25.12 | 17.12 | 9.80
$\surd$ | $\surd$ | $\surd$ | | 43.52 | 23.07 | 14.14 | 8.14
SVMR and VCMR The results of SVMR and VCMR on three datasets are reported in
Table 3 and Table 4. In both tasks, our proposed EventFormer outperforms other
baselines, no matter which architectures (one-tower, two-tower, and two-stage)
these models belong to. The one-tower (HAMMER, SQuiDNet) and two-stage
(CONQUER, DMFAT, and CKCN) models exhibit superior performance compared to the
two-tower (XML, ReLoCLNet, HERO) models. This is attributed to fine-grained
interactions, making deep matching between the query and video. In contrast,
the two-tower model relies solely on the similarity between frames and queries
to determine the boundaries of the target segments. EventFormer has a similar
structure to the other two-stage models, leveraging Transformer for multi-
modal fusion. However, our model surpasses these models, attributed to AMHSA
and dual optimization.
### 4.3. Ablation Study
The results on TVR validation set for video retriever and moment localizer are
reported in Table 5 and Table 6, respectively.
Video retriever Subtitle plays a crucial role, as many queries in TVR include
character names like "Sheldon", which align more effectively with the textual
information than with visual content. The retrieval accuracy significantly
benefits from event reasoning, event interaction, and anchor attention, thus
validating the three reasons highlighted in the Introduction for the
ineffectiveness of frame-aware methods in leveraging event information for
video retrieval. The frame branch of two-branch contrastive learning works,
demonstrating that query-related frame representations contribute to the
learning of event representations. Moreover, weak positive samples enhance
learning by exploiting implicitly query-related frames or events.
Moment localizer Subtitle information also helps, but the improvement is not
as pronounced as in video retriever. This is because moment localization
involves precisely identifying the action described by the query, placing more
emphasis on visual information, with text typically playing a supporting role.
AMHSA is also effective for moment localization. Event optimization enhances
retrieval accuracy, even without direct involvement in the prediction,
indicating the beneficial impact of additional optimization. Notably, Shared-
Norm exerts a substantial influence on moment localization, particularly in
VCMR, as this technique empowers the model with the capability to distinguish
moments in different videos.
Table 7. The results of moment localization directly using events extracted by
the three extraction strategies.
Strategy | SVMR | VCMR
---|---|---
0.5 | 0.7 | 0.5 | 0.7
Window ($w$ = 5) | 20.59 | 7.86 | 6.63 | 2.74
Kmeans ($k$ = 10) | 21.15 | 9.03 | 7.11 | 3.23
Convolution ($\delta$ = 0.3) | 21.31 | 9.88 | 6.91 | 3.51
### 4.4. Event Reasoning
We evaluate the three event extraction strategies on the TVR validation set. Specifically, we predict the moment by directly using the event in the video with the highest similarity to the query, as computed by the video retriever. The results are presented in Table 7. While the accuracy falls short of the optimal events in the ideal case shown in Table 1, it still demonstrates effectiveness, given that the events are extracted solely by aggregating consecutive, similar frames, without any training for the localization task. The events extracted through contrastive convolution are closer to the ground-truth moments than those from Kmeans and the window strategy, owing to its superior adaptation to the number and length of events.
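For concreteness, here is a minimal sketch of two of these extraction strategies over precomputed frame features (the window size w matches Table 7; the contrastive convolution is a learned module, so the delta-thresholded grouping below is only a crude illustrative stand-in for similarity-based aggregation of consecutive frames):

```python
import numpy as np

def window_events(frames, w=5):
    # Window strategy: mean-pool every w consecutive frames into one event.
    return np.stack([frames[i:i + w].mean(axis=0)
                     for i in range(0, len(frames), w)])

def threshold_events(frames, delta=0.3):
    # Illustrative grouping: extend the current event while a new frame's
    # cosine similarity to the running event mean stays above delta;
    # a drop below delta closes the event and starts a new one.
    events, current = [], [frames[0]]
    for f in frames[1:]:
        m = np.mean(current, axis=0)
        cos = f @ m / (np.linalg.norm(f) * np.linalg.norm(m) + 1e-8)
        if cos >= delta:
            current.append(f)
        else:
            events.append(m)
            current = [f]
    events.append(np.mean(current, axis=0))
    return np.stack(events)

frames = np.random.rand(100, 512).astype(np.float32)  # 100 frames, 512-d features
print(window_events(frames).shape)     # (20, 512): fixed number of events
print(threshold_events(frames).shape)  # event count adapts to the content
```

The adaptive variant illustrates why a content-aware boundary criterion can better match the number and length of true events than a fixed window.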
Table 8. Efficiency and memory on the TVR validation set. Efficiency is measured as the average latency (ms) to retrieve the top-10 videos; memory (MB) is the storage for the frame or event vectors saved in advance by the two-tower models. 'Number' is the total number of frames or events in the corpus.
Model | Params | VR Latency | VR Memory | VR Number | SVMR Latency
---|---|---|---|---|---
CONQUER (Hou et al., 2021) | 47M | 9932 | - | - | 156
ReLoCLNet (Zhang et al., 2021a) | 8M | 88 | 161 | 109924 | 29
HERO (Li et al., 2020) | 121M | 212 | 322 | 109924 | 97
EventFormer ($\delta=0.3$) | 9M+18M | 51 | 47 | 31975 | 103
EventFormer ($k=10$) | 9M+18M | 43 | 31 | 21786 | 103
EventFormer ($w=5$) | 9M+18M | 43 | 33 | 22755 | 103
### 4.5. Retrieval Efficiency and Memory Usage
We further analyze the retrieval efficiency and memory usage of EventFormer and other models. We select the two-tower model ReLoCLNet, the two-tower pre-trained model HERO, and a one-tower model; given that the one-tower models HAMMER and SQuiDNet lack published code, we choose CONQUER due to its attempt at the VR task introduced in (Hou et al., 2021). The results are reported in Table 8. In VR, CONQUER is the slowest because it cannot decompose the similarity computation and must calculate the relevance between the query and every video online. HERO is less efficient than XML and our model, owing to its excessive number of parameters and to representations with twice the dimensionality of XML and our model. Our model is the most efficient and consumes the least memory because the number of saved events is much smaller than the number of frames. In SVMR, although our model is not as efficient as the two-tower models, its latency remains acceptable since only the top-10 videos interact with the query at a fine-grained level.
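To make the latency argument concrete, a two-tower corpus scan over precomputed event vectors reduces to one matrix product plus a top-k, whereas a one-tower model such as CONQUER must run its cross-modal encoder online for every (query, video) pair. A minimal sketch (the dimensions, IDs and function names are our illustrative assumptions):

```python
import torch

def build_index(event_vecs):
    # Offline, once per corpus: L2-normalize and store the event vectors.
    return torch.nn.functional.normalize(event_vecs, dim=1)

def retrieve(index, video_ids, query_vec, topk=10):
    # Online: one matrix-vector product over all events, then reduce each
    # video to its best-scoring event and take the top-k videos.
    q = torch.nn.functional.normalize(query_vec, dim=0)
    event_scores = index @ q                         # [num_events]
    num_videos = int(video_ids.max()) + 1
    video_scores = torch.full((num_videos,), -1e9)
    video_scores.scatter_reduce_(0, video_ids, event_scores, reduce="amax")
    return video_scores.topk(topk)

# ~32K events, as for EventFormer (delta = 0.3) in Table 8
index = build_index(torch.randn(31975, 384))
video_ids = torch.randint(0, 5000, (31975,))
print(retrieve(index, video_ids, torch.randn(384)).indices)
```

The memory column is consistent with this picture: with float32 vectors of an (inferred) dimension of 384, storage is roughly N×384×4 bytes, i.e. 109924 frames ≈ 161 MB for ReLoCLNet versus 31975 events ≈ 47 MB for EventFormer (δ=0.3), while HERO's double-width 768-dimensional representations give ≈ 322 MB.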
Table 9. PRVR results on the TVR (without subtitle) validation set. All models use the same features: ResNet+I3D (Carreira and Zisserman, 2017) for video and RoBERTa for the query.
Model | R@1 | R@5 | R@10 | R@100 | SumR
---|---|---|---|---|---
VCMR models w/o moment localization:
XML (Lei et al., 2020) (ECCV’20) | 10.0 | 26.5 | 37.3 | 81.3 | 155.1
ReLoCLNet (Zhang et al., 2021a) (SIGIR’21) | 10.7 | 28.1 | 38.1 | 80.3 | 157.1
CONQUER (Hou et al., 2021) (MM’21) | 11.0 | 28.9 | 39.6 | 81.3 | 160.8
PRVR models:
MS-SL (Dong et al., 2022) (MM’22) | 13.5 | 32.1 | 43.4 | 83.4 | 172.4
PEAN (Jiang et al., 2023) (ICME’23) | 13.5 | 32.8 | 44.1 | 83.9 | 174.2
GMMFormer (Wang et al., 2023) (AAAI’24) | 13.9 | 33.3 | 44.5 | 84.9 | 176.6
DL-DKD (Dong et al., 2023) (ICCV’23) | 14.4 | 34.9 | 45.8 | 84.9 | 179.9
EventFormer ($\delta=0.3$) | 14.2 | 34.6 | 46.0 | 84.8 | 179.6
Figure 5. Cases of events extracted by the retriever and moments predicted by the localizer.
### 4.6. Partially Relevant Video Retrieval
Beyond VCMR, we evaluate the proposed EventFormer on the PRVR task, which can be viewed as a weakly supervised version of VR, as it does not provide the ground-truth moment for the query. This poses a challenge for our model because the positive frames and events for two-branch contrastive learning are sampled based on the moment relevant to the query. We therefore modify the sampling strategy to adapt to PRVR, selecting the two frames or events in the correct video with the highest similarity to the query as the positive and weak positive samples; the negative sampling from other videos is the same as in the supervised EventFormer. The results are presented in Table 9. Our model demonstrates superior retrieval accuracy compared to all other models except DL-DKD, which distills knowledge from a large-scale vision-language model; notably, our model achieves its performance while being trained solely on the dataset's training set. Our model is designed for the supervised VCMR task, leaving room for improvement in weakly supervised VR in future work.
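A minimal sketch of this modified sampling (function and variable names are ours): lacking ground-truth moments, the positive and weak positive become the two most query-similar items inside the correct video, while negatives still come from other videos.

```python
import torch

def sample_for_prvr(query_vec, pos_video_items, other_video_items):
    # Rank the frames (or events) of the ground-truth video by query similarity.
    sims = pos_video_items @ query_vec
    top2 = sims.topk(2).indices
    positive = pos_video_items[top2[0]]       # most query-similar item
    weak_positive = pos_video_items[top2[1]]  # second most similar item
    negatives = other_video_items             # items sampled from other videos
    return positive, weak_positive, negatives

q = torch.randn(256)
pos, weak, neg = sample_for_prvr(q, torch.randn(40, 256), torch.randn(200, 256))
print(pos.shape, weak.shape, neg.shape)
```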
### 4.7. Case Study
We present three cases in Figure 5. In the first case, the extracted event overlaps perfectly with the ground truth, because the boundaries of the moment fall exactly where the visual content changes abruptly. In the second case, the change occurs in the middle of the moment, so the event captures only the front half of the content; this part nevertheless remains pertinent to the query. The last case is a failure example: multiple changes occur within the moment, making it difficult for event reasoning to capture consecutive, similar frames.
## 5\. Conclusion
This paper proposes EventFormer, an event-aware retrieval model for the VCMR task, motivated by human perception of visual information. To extract event representations of videos for retrieval, EventFormer leverages event reasoning and two-level hierarchical event encoding. Anchor multi-head self-attention is introduced into the Transformer to enhance close-range dependencies in untrimmed videos. We adopt two-branch contrastive learning and dual optimization for training the two sub-tasks of VCMR. Extensive experiments show the effectiveness and efficiency of EventFormer on VCMR, and the ablation and case studies further verify the efficacy and rationale of each module in our model. The effectiveness of the model is also validated on the PRVR task. Our approach has limitations, particularly in robustness across videos from different datasets. Additionally, our event reasoning relies mainly on visual frame similarity, making it sensitive to changes in visual appearance. Future work can address these problems by introducing richer semantic associations.
## References
* Anne Hendricks et al. (2017) Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. 2017. Localizing moments in video with natural language. In _Proceedings of the IEEE international conference on computer vision_. 5803–5812.
* Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In _3rd International Conference on Learning Representations, ICLR 2015_.
* Bain et al. (2021) Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. 2021. Frozen in time: A joint video and image encoder for end-to-end retrieval. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 1728–1738.
* Caba Heilbron et al. (2015) Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. 2015. Activitynet: A large-scale video benchmark for human activity understanding. In _Proceedings of the ieee conference on computer vision and pattern recognition_. 961–970.
* Cao et al. (2021) Meng Cao, Long Chen, Mike Zheng Shou, Can Zhang, and Yuexian Zou. 2021. On Pursuit of Designing Multi-modal Transformer for Video Grounding. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_. 9810–9823.
* Carion et al. (2020) Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. 2020. End-to-end object detection with transformers. In _European conference on computer vision_. Springer, 213–229.
* Carreira and Zisserman (2017) Joao Carreira and Andrew Zisserman. 2017. Quo vadis, action recognition? a new model and the kinetics dataset. In _proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 6299–6308.
* Chen and Dolan (2011) David Chen and William B Dolan. 2011. Collecting highly parallel data for paraphrase evaluation. In _Proceedings of the 49th annual meeting of the association for computational linguistics: human language technologies_. 190–200.
* Chen et al. (2017) Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to Answer Open-Domain Questions. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics_. Association for Computational Linguistics, 1870–1879.
* Chen et al. (2018) Jingyuan Chen, Xinpeng Chen, Lin Ma, Zequn Jie, and Tat-Seng Chua. 2018. Temporally grounding natural sentence in video. In _Proceedings of the 2018 conference on empirical methods in natural language processing_. 162–171.
* Chen et al. (2019) Jingyuan Chen, Lin Ma, Xinpeng Chen, Zequn Jie, and Jiebo Luo. 2019. Localizing natural language in videos. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , Vol. 33. 8175–8182.
* Chen et al. (2020b) Long Chen, Chujie Lu, Siliang Tang, Jun Xiao, Dong Zhang, Chilie Tan, and Xiaolin Li. 2020b. Rethinking the bottom-up framework for query-based video localization. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , Vol. 34. 10551–10558.
* Chen and Jiang (2019) Shaoxiang Chen and Yu-Gang Jiang. 2019. Semantic proposal for activity localization in videos via sentence query. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , Vol. 33. 8199–8206.
* Chen et al. (2020c) Shizhe Chen, Yida Zhao, Qin Jin, and Qi Wu. 2020c. Fine-grained video-text retrieval with hierarchical graph reasoning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 10638–10647.
* Chen et al. (2020a) Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020a. A simple framework for contrastive learning of visual representations. In _International conference on machine learning_. PMLR, 1597–1607.
* Chen et al. (2023) Tongbao Chen, Wenmin Wang, Zhe Jiang, Ruochen Li, and Bingshu Wang. 2023. Cross-Modality Knowledge Calibration Network for Video Corpus Moment Retrieval. _IEEE Transactions on Multimedia_ (2023).
* Clark and Gardner (2018) Christopher Clark and Matt Gardner. 2018. Simple and Effective Multi-Paragraph Reading Comprehension. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics_. 845–855.
* Dong et al. (2022) Jianfeng Dong, Xianke Chen, Minsong Zhang, Xun Yang, Shujie Chen, Xirong Li, and Xun Wang. 2022. Partially Relevant Video Retrieval. In _Proceedings of the 30th ACM International Conference on Multimedia_. 246–257.
* Dong et al. (2023) Jianfeng Dong, Minsong Zhang, Zheng Zhang, Xianke Chen, Daizong Liu, Xiaoye Qu, Xun Wang, and Baolong Liu. 2023. Dual Learning with Dynamic Knowledge Distillation for Partially Relevant Video Retrieval. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 11302–11312.
* Escorcia et al. (2019) Victor Escorcia, Mattia Soldan, Josef Sivic, Bernard Ghanem, and Bryan C. Russell. 2019. Temporal Localization of Moments in Video Collections with Natural Language. (2019). arXiv:1907.12763
* Faghri et al. (2017) Fartash Faghri, David J Fleet, Jamie Ryan Kiros, and Sanja Fidler. 2017. Vse++: Improving visual-semantic embeddings with hard negatives. _arXiv preprint arXiv:1707.05612_ (2017).
* Feichtenhofer et al. (2019) Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. 2019. Slowfast networks for video recognition. In _Proceedings of the IEEE/CVF international conference on computer vision_. 6202–6211.
* Fu et al. (2021) Tsu-Jui Fu, Linjie Li, Zhe Gan, Kevin Lin, William Yang Wang, Lijuan Wang, and Zicheng Liu. 2021. Violet: End-to-end video-language transformers with masked visual-token modeling. _arXiv preprint arXiv:2111.12681_ (2021).
* Gabeur et al. (2020) Valentin Gabeur, Chen Sun, Karteek Alahari, and Cordelia Schmid. 2020. Multi-modal transformer for video retrieval. In _Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IV 16_. Springer, 214–229.
* Ge et al. (2022) Yuying Ge, Yixiao Ge, Xihui Liu, Dian Li, Ying Shan, Xiaohu Qie, and Ping Luo. 2022. Bridging video-text retrieval with multiple choice questions. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 16167–16176.
* Ghosh et al. (2019) Soham Ghosh, Anuva Agarwal, Zarana Parekh, and Alexander G Hauptmann. 2019. ExCL: Extractive Clip Localization Using Natural Language Descriptions. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_. 1984–1990.
* Ging et al. (2020) Simon Ging, Mohammadreza Zolfaghari, Hamed Pirsiavash, and Thomas Brox. 2020. Coot: Cooperative hierarchical transformer for video-text representation learning. _Advances in neural information processing systems_ 33 (2020), 22605–22618.
* Girshick et al. (2014) Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_. 580–587.
* Han et al. (2021) Ning Han, Jingjing Chen, Guangyi Xiao, Hao Zhang, Yawen Zeng, and Hao Chen. 2021. Fine-grained cross-modal alignment network for text-video retrieval. In _Proceedings of the 29th ACM International Conference on Multimedia_. 3826–3834.
* He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_. 770–778.
* Hou et al. (2021) Zhijian Hou, Chong-Wah Ngo, and Wing Kwong Chan. 2021. CONQUER: Contextual query-aware ranking for video corpus moment retrieval. In _Proceedings of the 29th ACM International Conference on Multimedia_. 3900–3908.
* Jiang et al. (2023) Xun Jiang, Zhiguo Chen, Xing Xu, Fumin Shen, Zuo Cao, and Xunliang Cai. 2023. Progressive Event Alignment Network for Partial Relevant Video Retrieval. In _2023 IEEE International Conference on Multimedia and Expo (ICME)_. IEEE, 1973–1978.
* Kang et al. (2022) Hyolim Kang, Jinwoo Kim, Taehyun Kim, and Seon Joo Kim. 2022. Uboco: Unsupervised boundary contrastive learning for generic event boundary detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 20073–20082.
* Karpukhin et al. (2020) Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense Passage Retrieval for Open-Domain Question Answering. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_. 6769–6781.
* Larochelle and Lauly (2012) Hugo Larochelle and Stanislas Lauly. 2012. A neural autoregressive topic model. _Advances in Neural Information Processing Systems_ 25 (2012).
* Lei et al. (2021a) Jie Lei, Tamara L Berg, and Mohit Bansal. 2021a. Detecting moments and highlights in videos via natural language queries. _Advances in Neural Information Processing Systems_ 34 (2021), 11846–11858.
* Lei et al. (2022) Jie Lei, Xinlei Chen, Ning Zhang, Mengjiao Wang, Mohit Bansal, Tamara L Berg, and Licheng Yu. 2022. Loopitr: Combining dual and cross encoder architectures for image-text retrieval. _arXiv preprint arXiv:2203.05465_ (2022).
* Lei et al. (2021b) Jie Lei, Linjie Li, Luowei Zhou, Zhe Gan, Tamara L Berg, Mohit Bansal, and Jingjing Liu. 2021b. Less is more: Clipbert for video-and-language learning via sparse sampling. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_. 7331–7341.
* Lei et al. (2020) Jie Lei, Licheng Yu, Tamara L Berg, and Mohit Bansal. 2020. Tvr: A large-scale dataset for video-subtitle moment retrieval. In _European Conference on Computer Vision_. 447–463.
* Li et al. (2021) Kun Li, Dan Guo, and Meng Wang. 2021. Proposal-free video grounding with contextual pyramid network. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , Vol. 35. 1902–1910.
* Li et al. (2020) Linjie Li, Yen-Chun Chen, Yu Cheng, Zhe Gan, Licheng Yu, and Jingjing Liu. 2020. HERO: Hierarchical Encoder for Video+ Language Omni-representation Pre-training. In _EMNLP_.
* Liu et al. (2021a) Daizong Liu, Xiaoye Qu, Jianfeng Dong, Pan Zhou, Yu Cheng, Wei Wei, Zichuan Xu, and Yulai Xie. 2021a. Context-aware biaffine localizing network for temporal sentence grounding. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 11235–11244.
* Liu et al. (2021b) Haoliang Liu, Tan Yu, and Ping Li. 2021b. Inflate and shrink: Enriching and reducing interactions for fast text-image retrieval. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_. 9796–9809.
* Liu et al. (2018) Meng Liu, Xiang Wang, Liqiang Nie, Qi Tian, Baoquan Chen, and Tat-Seng Chua. 2018. Cross-modal moment localization in videos. In _Proceedings of the 26th ACM international conference on Multimedia_. 843–851.
* Liu et al. (2019a) Yang Liu, Samuel Albanie, Arsha Nagrani, and Andrew Zisserman. 2019a. Use what you have: Video retrieval using representations from collaborative experts. _arXiv preprint arXiv:1907.13487_ (2019).
* Liu et al. (2022) Ye Liu, Siyuan Li, Yang Wu, Chang-Wen Chen, Ying Shan, and Xiaohu Qie. 2022. Umt: Unified multi-modal transformers for joint video moment retrieval and highlight detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 3042–3051.
* Liu et al. (2019b) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_ (2019).
* Miech et al. (2021) Antoine Miech, Jean-Baptiste Alayrac, Ivan Laptev, Josef Sivic, and Andrew Zisserman. 2021. Thinking fast and slow: Efficient text-to-visual retrieval with transformers. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 9826–9836.
* Miech et al. (2020) Antoine Miech, Jean-Baptiste Alayrac, Lucas Smaira, Ivan Laptev, Josef Sivic, and Andrew Zisserman. 2020. End-to-end learning of visual representations from uncurated instructional videos. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 9879–9889.
* Miech et al. (2019) Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. 2019. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In _Proceedings of the IEEE/CVF international conference on computer vision_. 2630–2640.
* Moon et al. (2023) WonJun Moon, Sangeek Hyun, SangUk Park, Dongchan Park, and Jae-Pil Heo. 2023. Query-dependent video representation for moment retrieval and highlight detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 23023–23033.
* Oord et al. (2018) Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. _arXiv preprint arXiv:1807.03748_ (2018).
* Rouditchenko et al. (2020) Andrew Rouditchenko, Angie Boggust, David Harwath, Brian Chen, Dhiraj Joshi, Samuel Thomas, Kartik Audhkhasi, Hilde Kuehne, Rameswar Panda, Rogerio Feris, et al. 2020\. Avlnet: Learning audio-visual language representations from instructional videos. _arXiv preprint arXiv:2006.09199_ (2020).
* Shou et al. (2021) Mike Zheng Shou, Stan Weixian Lei, Weiyao Wang, Deepti Ghadiyaram, and Matt Feiszli. 2021. Generic event boundary detection: A benchmark for event segmentation. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 8075–8084.
* Sun et al. (2019) Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. 2019. Videobert: A joint model for video and language representation learning. In _Proceedings of the IEEE/CVF international conference on computer vision_. 7464–7473.
* Thomee et al. (2016) Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. 2016. YFCC100M: The new data in multimedia research. _Commun. ACM_ 59, 2 (2016), 64–73.
* Tversky and Zacks (2013) Barbara Tversky and Jeffrey M Zacks. 2013. Event perception. _Oxford handbook of cognitive psychology_ 1, 2 (2013), 3.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. _Advances in neural information processing systems_ 30 (2017).
* Wang et al. (2021) Xiaohan Wang, Linchao Zhu, and Yi Yang. 2021. T2vlad: global-local sequence alignment for text-video retrieval. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 5079–5088.
* Wang et al. (2023) Yuting Wang, Jinpeng Wang, Bin Chen, Ziyun Zeng, and Shu-Tao Xia. 2023. GMMFormer: Gaussian-Mixture-Model based Transformer for Efficient Partially Relevant Video Retrieval. _arXiv preprint arXiv:2310.05195_ (2023).
* Xiao et al. (2021) Shaoning Xiao, Long Chen, Jian Shao, Yueting Zhuang, and Jun Xiao. 2021. Natural Language Video Localization with Learnable Moment Proposals. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_. 4008–4017.
* Xu et al. (2021a) Hu Xu, Gargi Ghosh, Po-Yao Huang, Prahal Arora, Masoumeh Aminzadeh, Christoph Feichtenhofer, Florian Metze, and Luke Zettlemoyer. 2021a. Vlm: Task-agnostic video-language model pre-training for video understanding. _arXiv preprint arXiv:2105.09996_ (2021).
* Xu et al. (2021b) Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, and Christoph Feichtenhofer. 2021b. Videoclip: Contrastive pre-training for zero-shot video-text understanding. _arXiv preprint arXiv:2109.14084_ (2021).
* Xu et al. (2019) Huijuan Xu, Kun He, Bryan A Plummer, Leonid Sigal, Stan Sclaroff, and Kate Saenko. 2019. Multilevel language and vision integration for text-to-clip retrieval. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , Vol. 33. 9062–9069.
* Yoon et al. (2022) Sunjae Yoon, Ji Woo Hong, Eunseop Yoon, Dahyun Kim, Junyeong Kim, Hee Suk Yoon, and Chang D Yoo. 2022. Selective Query-Guided Debiasing for Video Corpus Moment Retrieval. In _European Conference on Computer Vision_. Springer, 185–200.
* Yu et al. (2018) Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension. In _International Conference on Learning Representations_.
* Yu et al. (2022) Tan Yu, Hongliang Fei, and Ping Li. 2022. Cross-probe bert for fast cross-modal search. In _Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval_. 2178–2183.
* Yuan et al. (2019) Yitian Yuan, Tao Mei, and Wenwu Zhu. 2019. To find where you talk: Temporal sentence localization in video with attention based location regression. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , Vol. 33. 9159–9166.
* Zeng et al. (2020) Runhao Zeng, Haoming Xu, Wenbing Huang, Peihao Chen, Mingkui Tan, and Chuang Gan. 2020. Dense regression network for video grounding. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 10287–10296.
* Zhang et al. (2020a) Bowen Zhang, Hexiang Hu, Joonseok Lee, Ming Zhao, Sheide Chammas, Vihan Jain, Eugene Ie, and Fei Sha. 2020a. A hierarchical multi-modal encoder for moment localization in video corpus. _arXiv preprint arXiv:2011.09046_ (2020).
* Zhang et al. (2019) Da Zhang, Xiyang Dai, Xin Wang, Yuan-Fang Wang, and Larry S Davis. 2019. Man: Moment alignment network for natural language moment retrieval via iterative graph adjustment. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 1247–1257.
* Zhang et al. (2021a) Hao Zhang, Aixin Sun, Wei Jing, Guoshun Nan, Liangli Zhen, Joey Tianyi Zhou, and Rick Siow Mong Goh. 2021a. Video corpus moment retrieval with contrastive learning. In _Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval_. 685–695.
* Zhang et al. (2020b) Hao Zhang, Aixin Sun, Wei Jing, and Joey Tianyi Zhou. 2020b. Span-based Localizing Network for Natural Language Video Localization. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_. 6543–6554.
* Zhang et al. (2021b) Mingxing Zhang, Yang Yang, Xinghan Chen, Yanli Ji, Xing Xu, Jingjing Li, and Heng Tao Shen. 2021b. Multi-stage aggregated transformer network for temporal language localization in videos. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 12669–12678.
* Zhang et al. (2023) Xuemei Zhang, Peng Zhao, Jinsheng Ji, Xiankai Lu, and Yilong Yin. 2023. Video Corpus Moment Retrieval via Deformable Multigranularity Feature Fusion and Adversarial Training. _IEEE Transactions on Circuits and Systems for Video Technology_ (2023).
# Smooth solutions to the Gauss image problem
Li Chen Faculty of Mathematics and Statistics, Hubei Key Laboratory of
Applied Mathematics, Hubei University, Wuhan 430062, P.R. China
<EMAIL_ADDRESS>, Di Wu Faculty of Mathematics and Statistics, Hubei Key
Laboratory of Applied Mathematics, Hubei University, Wuhan 430062, P.R. China
<EMAIL_ADDRESS>and Ni Xiang Faculty of Mathematics and Statistics,
Hubei Key Laboratory of Applied Mathematics, Hubei University, Wuhan 430062,
P.R. China<EMAIL_ADDRESS>
###### Abstract.
In this paper we study the Gauss image problem, which is a generalization of the Aleksandrov problem in convex geometry. By considering a geometric flow involving the Gauss curvature and functions of the normal and radial vectors, we obtain the existence of smooth solutions to this problem.
###### Key words and phrases:
Monge-Ampère equation, dual Orlicz-Minkowski problem, Gauss curvature flow,
Existence of solutions.
###### 2010 Mathematics Subject Classification:
35J96, 52A20, 53C44.
This research was supported by the Natural Science Foundation of China, Grant No. 11971157.
## 1\. Introduction
Let $K\subset\mathbb{R}^{n}$ be a convex body containing the origin in its interior, and let $x\in\partial K$ be a boundary point. The normal cone of $K$ at $x$ is defined by
$\mathcal{N}(K,x)=\\{v\in\mathbb{S}^{n-1}:\langle y-x,v\rangle\leq
0\quad\text{for all}\quad y\in K\\},$
where $\langle y-x,v\rangle$ denotes the standard inner product of $y-x$ and
$v$ in $\mathbb{R}^{n}$. For $\omega\subset\mathbb{S}^{n-1}$, the radial Gauss
image of $\omega$ is defined by
$\alpha_{K}(\omega)=\bigcup_{x\in\rho_{K}(\omega)}\mathcal{N}(K,x)\subset\mathbb{S}^{n-1},$
where $\rho_{K}:\mathbb{S}^{n-1}\rightarrow\partial K$ is the radial map of $\partial K$ (see Section 2 for the definition). Recently, Boroczky, Lutwak, Yang, Zhang and Zhao [4] proposed the Gauss image problem, which links two given submeasures via the radial Gauss image of a convex body:
The Gauss image problem. _Suppose $\lambda$ is a submeasure defined on the
Lebesgue measurable subsets of $\mathbb{S}^{n-1}$, and $\mu$ is a Borel
submeasure on $\mathbb{S}^{n-1}$. What are the necessary and sufficient
conditions, on $\lambda$ and $\mu$, so that there exists a convex body $K$
such that_
(1) $\lambda(\alpha_{K}(\cdot))=\mu$
_on the Borel subsets of $\mathbb{S}^{n-1}$? And if such a body exists, to
what extent is it unique?_
When $\lambda$ is spherical Lebesgue measure, the Gauss image problem is just
the classical Aleksandrov problem. It is instructive to contrast the Gauss image
problem with the various Minkowski problems and dual Minkowski problems that
have been extensively studied, see [7, 12, 15, 29, 32, 33, 38, 39, 40, 41, 42,
44, 48, 49] for the $L_{p}$-Minkowski problem, [6, 23, 25, 26, 36, 46, 47] for
the dual Minkowski problem, [5, 10, 11, 27, 28, 35, 43] for the $L_{p}$ dual
Minkowski problem, [3, 21, 24, 31] for the Orlicz Minkowski problem, [17, 19,
37] for the dual Orlicz Minkowski problem. In the Gauss image problem, a pair
of submeasures is given and it is asked if there exists a convex body
“linking” them via its radial Gauss image. However, in a Minkowski problem,
only one measure is given, and the question asks if this measure is a specific
geometric measure of a convex body.
To state the existence result for the Gauss image problem from [4], we introduce some concepts (see [4] for details). If $\omega\subset\mathbb{S}^{n-1}$ is
contained in a closed hemisphere, then the polar set $\omega^{*}$ is defined
by
$\omega^{*}=\\{v\in\mathbb{S}^{n-1}:\langle u,v\rangle\leq 0\quad\text{for
all}\quad u\in\omega\\}.$
###### Definition 1.
Two Borel measures $\mu$ and $\lambda$ on $\mathbb{S}^{n-1}$ are called
Aleksandrov related if
(2)
$\lambda(\mathbb{S}^{n-1})=\mu(\mathbb{S}^{n-1})>\lambda(\omega^{*})+\mu(\omega)$
for any compact, spherically convex set $\omega\subset\mathbb{S}^{n-1}$.
Note that $\lambda(\mathbb{S}^{n-1})=\mu(\mathbb{S}^{n-1})$ is clearly necessary for (1) to admit a solution. The following existence result for solutions to the Gauss
image problem was proved in [4].
###### Theorem 1.
Suppose $\lambda,\mu$ are Borel measures on $\mathbb{S}^{n-1}$ and $\lambda$
is absolutely continuous. If $\mu$ and $\lambda$ are Aleksandrov related, then
there exists a body $K$ containing the origin in its interior such that
$\lambda(\alpha_{K}(\cdot))=\mu$.
Consider the special case in which $\mu$ has a density $f$ with respect to the spherical Lebesgue measure and $\lambda$ has a density $g$ with respect to the spherical Lebesgue measure. In this case, $\mu$ and $\lambda$ are Aleksandrov related if
(3)
$\int_{\mathbb{S}^{n-1}}f=\int_{\mathbb{S}^{n-1}}g>\int_{\omega}f+\int_{\omega^{*}}g$
for any compact, spherically convex set $\omega\subset\mathbb{S}^{n-1}$. Moreover, the geometric problem (1) then reduces to the Monge-Ampère type equation
(4) $g\bigg{(}\frac{\nabla h+hx}{|\nabla h+hx|}\bigg{)}|\nabla
h+hx|^{-n}h\det(\nabla^{2}h+hI)=f\quad\text{ on }\quad\mathbb{S}^{n-1},$
where $h$ is the support function of the polar body $K^{*}$ of $K$ which is
defined as
$K^{*}=\\{x\in\mathbb{R}^{n}:\langle x,y\rangle\leq 1\quad\text{for all}\quad
y\in K\\}.$
Here $\nabla$ is the covariant derivative with respect to an orthonormal frame
on $\mathbb{S}^{n-1}$, $I$ is the unit matrix of order $n-1$, and $\nabla
h(x)+h(x)x$ is just the point on $\partial K^{*}$ whose outer unit normal
vector is $x\in\mathbb{S}^{n-1}$.
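As a concrete instance of this duality (standard facts, recorded here for the reader's convenience), the support and radial functions of $K$ and $K^{*}$ are mutually reciprocal, and one can check (4) directly on round spheres:
$h_{K^{*}}(x)=\frac{1}{\rho_{K}(x)},\qquad\rho_{K^{*}}(u)=\frac{1}{h_{K}(u)},\qquad(rB^{n})^{*}=\frac{1}{r}B^{n},$
where $B^{n}$ is the closed unit ball. In particular, for a constant function $h\equiv c$ one has $|\nabla h+hx|=c$ and $\det(\nabla^{2}h+hI)=c^{n-1}$, so the left-hand side of (4) equals $g(x)\,c^{-n}\cdot c\cdot c^{n-1}=g(x)$; hence constant $h$ solves (4) precisely when $f\equiv g$.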
In this paper we study the existence of smooth solutions to the equation (4).
We obtain the following existence result.
###### Theorem 2.
Suppose that $f$ and $g$ are two positive smooth functions on $\mathbb{S}^{n-1}$. If $f$ and $g$ satisfy the condition (3), then there exists a smooth solution to the equation (4).
The proof of Theorem 2 is inspired by [36], where the existence of smooth solutions to the Aleksandrov problem and the dual Minkowski problem was obtained by studying a generalized Gauss curvature flow. In fact, the Gauss curvature flow
and its various generalizations have been extensively studied by many
scholars; see for example [1, 2, 8, 9, 10, 14, 16, 20, 22, 30, 37, 45] and the
references therein.
Let $M_{0}$ be a smooth, closed, uniformly convex hypersurface in $\mathbb{R}^{n}$ which contains the origin in its interior and is given by a smooth embedding $X_{0}:\mathbb{S}^{n-1}\rightarrow\mathbb{R}^{n}$. We
consider a family of closed hypersurfaces $\left\\{M_{t}\right\\}$ given by
$M_{t}=X(\mathbb{S}^{n-1},t)$, where
$X:\mathbb{S}^{n-1}\times[0,T)\rightarrow\mathbb{R}^{n}$ is a smooth map
satisfying the following initial value problem:
(5) $\left\\{\begin{aligned} \frac{\partial X}{\partial
t}(x,t)&=-\frac{f(\nu)}{g\Big{(}\frac{X}{|X|}\Big{)}}|X|^{n}\mathcal{K}\nu+X,\\\
X(x,0)&=X_{0}(x).\end{aligned}\right.$
Here $\nu$ is the unit outer normal vector of the hypersurface $M_{t}$ at the
point $X(x,t)$, $\mathcal{K}$ is the Gauss curvature of $M_{t}$ at $X(x,t)$,
and $T$ is the maximal time for which the solution exists. We obtain the long-
time existence and convergence of the flow (5).
###### Theorem 3.
Suppose $f$ and $g$ satisfy the assumptions of Theorem 2. Let $M_{0}$ be a
smooth, closed, uniformly convex hypersurface in $\mathbb{R}^{n}$, which
contains the origin in its interior. Then, the flow (5) has a unique smooth
solution, which exists for all time $t>0$. Moreover, when $t\to\infty$, a
subsequence of $M_{t}=X(\mathbb{S}^{n-1},t)$ converges in $C^{\infty}$ to a
smooth, closed, uniformly convex hypersurface, whose support function is a
smooth solution to the equation (4).
This paper is organized as follows. In section 2, we give some basic knowledge
about convex hypersurfaces and the flow (5). In section 3, more properties of
the flow (5) will be proved, based on which we can obtain the uniform lower
and upper bounds of support functions of $\left\\{M_{t}\right\\}$ via delicate
analyses. In the last section, the long-time existence and convergence of the
flow (5) will be proved, which completes the proofs of our theorems.
## 2\. Preliminaries
### 2.1. Basic properties of convex hypersurfaces
We first recall some basic properties of convex hypersurfaces in
$\mathbb{R}^{n}$; see [45] for details. Let $M$ be a smooth, closed, uniformly
convex hypersurface in $\mathbb{R}^{n}$ enclosing the origin. The support
function $h$ of $M$ is defined as
(6) $h(x):=\max_{y\in M}\langle
y,x\rangle,\quad\forall\,x\in\mathbb{S}^{n-1},$
where $\langle\cdot,\cdot\rangle$ is the standard inner product in
$\mathbb{R}^{n}$.
The convex hypersurface $M$ can be recovered by its support function $h$. In
fact, writing the Gauss map of $M$ as $\nu_{M}$, we parametrize $M$ by
$X:\mathbb{S}^{n-1}\to M$ which is given as
$X(x)=\nu_{M}^{-1}(x),\quad\forall\,x\in\mathbb{S}^{n-1}.$
Note that $x$ is the unit outer normal vector of $M$ at $X(x)$. On the other
hand, one can easily check that the maximum in the definition (6) is attained
at $y=\nu_{M}^{-1}(x)$, namely
(7) $h(x)=\langle x,X(x)\rangle,\quad\forall\,x\in\mathbb{S}^{n-1}.$
Let $e_{ij}$ be the standard metric of the unit sphere $\mathbb{S}^{n-1}$, and
$\nabla$ be the corresponding connection on $\mathbb{S}^{n-1}$. Then, it is
easy to check that
(8) $X(x)=\nabla h(x)+h(x)x,\quad\forall\,x\in\mathbb{S}^{n-1}.$
By differentiating (7) twice, the second fundamental form $A_{ij}$ of $M$ can also be computed in terms of the support function:
(9) $A_{ij}=\nabla_{i}\nabla_{j}h+he_{ij},$
where $\nabla_{i}\nabla_{j}$ denotes the second order covariant derivative
with respect to $e_{ij}$. The induced metric matrix $g_{ij}$ of $M$ can be
derived by Weingarten’s formula:
(10) $e_{ij}=\langle\nabla_{i}x,\nabla_{j}x\rangle=A_{ik}A_{lj}g^{kl}.$
The principal radii of curvature are eigenvalues of the matrix
$b_{ij}=A^{ik}g_{jk}$. When considering a smooth local orthonormal frame on
$\mathbb{S}^{n-1}$, by virtue of (9) and (10), we have
(11) $b_{ij}=A_{ij}=\nabla_{i}\nabla_{j}h+h\delta_{ij}.$
In particular, the Gauss curvature of $M$ at $X(x)$ is given by
$\mathcal{K}(x)=[\det(\nabla_{i}\nabla_{j}h+h\delta_{ij})]^{-1}.$
We shall use $b^{ij}$ to denote the inverse matrix of $b_{ij}$.
The radial function $\rho$ of the convex hypersurface $M$ is defined as
$\rho(u):=\max\left\\{\lambda>0:\lambda u\in
M\right\\},\quad\forall\,u\in\mathbb{S}^{n-1}.$
Note that $\rho(u)u\in M$. The Gauss map $\nu_{M}$ can be computed as
$\nu_{M}(\rho(u)u)=\frac{\rho(u)u-\nabla\rho}{\sqrt{\rho^{2}+|\nabla\rho|^{2}}}.$
If we connect $u$ and $x$ through the following equality:
(12) $\rho(u)u=X(x)=\nabla h(x)+h(x)x=\overline{\nabla}h(x),$
where $\overline{\nabla}$ is the standard connection of $\mathbb{R}^{n}$, then
we have the following relations
(13) $x=\frac{\rho(u)u-\nabla\rho}{\sqrt{\rho^{2}+|\nabla\rho|^{2}}},\quad
u=\frac{\nabla h+h(x)x}{\sqrt{|\nabla h|^{2}+h^{2}}},$
and
(14) $\frac{h(x)}{\mathcal{K}(x)}dx=\rho^{n}(u)du.$
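As a quick consistency check of these formulas, consider the round sphere $M=r\,\mathbb{S}^{n-1}$ (a simple worked example): then $h\equiv r$, $b_{ij}=r\delta_{ij}$, $\mathcal{K}\equiv r^{-(n-1)}$, $\rho\equiv r$ and $x=u$, so (14) reads
$\frac{h}{\mathcal{K}}\,dx=r\cdot r^{\,n-1}\,dx=r^{n}\,du,$
as claimed.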
### 2.2. Geometric flow and its associated functional
Recalling the evolution equation of $X(x,t)$ in the geometric flow (5), and
using computations similar to those in [45], we obtain the evolution equation of the
corresponding support function $h(x,t)$:
(15) $\frac{\partial h}{\partial
t}(x,t)=-\frac{f(x)\rho^{n}(u)}{g(u)}\mathcal{K}(x,t)+h(x,t)\ \text{ in }\
\mathbb{S}^{n-1}\times(0,T).$
Since $M_{t}$ can be recovered by $h(\cdot,t)$, the flow (15) is equivalent to
the original flow (5).
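For the reader's convenience, (15) follows from (5) and (7) in one line: at the point of $M_{t}$ with unit outer normal $x$ we have $\nu=x$, $|X|=\rho(u)$ and $X/|X|=u$, so
$\partial_{t}h(x,t)=\langle x,\partial_{t}X\rangle=\Big\langle x,-\frac{f(\nu)}{g(X/|X|)}|X|^{n}\mathcal{K}\nu+X\Big\rangle=-\frac{f(x)\rho^{n}(u)}{g(u)}\mathcal{K}(x,t)+h(x,t),$
after the standard reparametrization of $M_{t}$ by the inverse Gauss map, which changes the flow only by tangential terms.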
Denote the radial function of $M_{t}$ by $\rho(u,t)$. For any $t$, let $u$ and
$x$ be related through the following equality:
$\rho(u,t)u=\overline{\nabla}h(x,t)=\nabla h(x,t)+h(x,t)x.$
Therefore, $x$ can be expressed as $x=x(u,t)$. By a direct computation (see [10]), we have
(16)
$\frac{1}{\rho(u,t)}\partial_{t}\rho(u,t)=\frac{1}{h(x,t)}\partial_{t}h(x,t).$
Now by virtue of (15) and (16), we obtain the evolution equation of
$\rho(u,t)$:
(17) $\frac{\partial\rho}{\partial
t}(u,t)=-\frac{f(x)\rho^{n+1}(u,t)}{g(u)h(x,t)}\mathcal{K}(x,t)+\rho(u,t)\
\text{ in }\ \mathbb{S}^{n-1}\times(0,T),$
where $x=x(u,t)$ is the unit outer normal vector of $M_{t}$ at the point
$\rho(u,t)u$.
Consider the following functional:
(18) $J(t)=\int_{\mathbb{S}^{n-1}}f(x)\log
h(x,t)\mathop{}\\!\mathrm{d}x-\int_{\mathbb{S}^{n-1}}g(u)\log\rho(u,t)\mathop{}\\!\mathrm{d}u,\quad
t\geq 0,$
which will turn out to be monotonic along the flow (15).
###### Lemma 1.
$J(t)$ is non-increasing along the flow (15). Namely $\frac{d}{dt}J(t)\leq 0$,
and the equality holds if and only if $M_{t}$ satisfies the elliptic equation
(4).
###### Proof.
Using (14) and (16), we have
(19)
$\begin{aligned}\frac{d}{dt}J(t)&=\int_{\mathbb{S}^{n-1}}\frac{f\,\partial_{t}h}{h}\,dx-\int_{\mathbb{S}^{n-1}}\frac{g\,\partial_{t}\rho}{\rho}\,du\\ &=\int_{\mathbb{S}^{n-1}}\frac{\partial_{t}h}{h}\Big(f-\frac{gh}{\mathcal{K}\rho^{n}}\Big)\,dx\\ &=\int_{\mathbb{S}^{n-1}}\frac{1}{h}\Big(f-\frac{gh}{\mathcal{K}\rho^{n}}\Big)\Big(h-\frac{f\mathcal{K}\rho^{n}}{g}\Big)\,dx\\ &=-\int_{\mathbb{S}^{n-1}}\frac{1}{gh\mathcal{K}\rho^{n}}\big(gh-f\mathcal{K}\rho^{n}\big)^{2}\,dx\leq 0,\end{aligned}$
where the second equality uses (16) to replace $\frac{\partial_{t}\rho}{\rho}$ by $\frac{\partial_{t}h}{h}$ and (14) to change the variable of integration from $u$ to $x$, and the third uses $\partial_{t}h=h-\frac{f\mathcal{K}\rho^{n}}{g}$ from (15). Clearly $\frac{d}{dt}J(t)=0$ if and only if
$gh=f\mathcal{K}\rho^{n},$
namely $M_{t}$ satisfies (4). This completes the proof. ∎
###### Lemma 2.
Assume that
$\int_{\mathbb{S}^{n-1}}f(x)dx=\int_{\mathbb{S}^{n-1}}g(u)du,$
then the log-volume of $M_{t}$
(20)
$V_{g}(M_{t})=\int_{\mathbb{S}^{n-1}}g(u)\log\rho(u,t)\mathop{}\\!\mathrm{d}u,$
remains unchanged under the flow (15).
###### Proof.
Using (14) and (17), we have
$\begin{aligned}\frac{d}{dt}V_{g}(M_{t})&=\int_{\mathbb{S}^{n-1}}\frac{g\,\partial_{t}\rho}{\rho}\,du\\ &=\int_{\mathbb{S}^{n-1}}\frac{g}{\rho}\Big(\rho-\frac{f\rho^{n+1}}{gh}\mathcal{K}\Big)\,du\\ &=\int_{\mathbb{S}^{n-1}}g(u)\,du-\int_{\mathbb{S}^{n-1}}f(x)\frac{\rho^{n}\mathcal{K}}{h}\,du\\ &=\int_{\mathbb{S}^{n-1}}g(u)\,du-\int_{\mathbb{S}^{n-1}}f(x)\,dx\\ &=0,\end{aligned}$
where the fourth equality follows from (14). This completes the proof. ∎
## 3\. Uniform bounds of support functions
In this section, we will derive uniformly positive lower and upper bounds of
support functions along the flow (5). Our idea comes from the proof of Lemma
3.2 in [36].
###### Lemma 3.
Suppose $f$ and $g$ satisfy the assumptions of Theorem 2, and let $X(\cdot,t)$ be a strictly convex solution to the flow (5) which encloses the origin for $t\in[0,T)$. Then there exists a positive constant $C$, depending only on $M_{0}$, $f$ and $g$, such that for every $t\in[0,T)$,
(21) $1/C\leq h(\cdot,t)\leq C\quad\text{ on }\ \mathbb{S}^{n-1},$
and
(22) $1/C\leq\rho(\cdot,t)\leq C\quad\text{ on }\ \mathbb{S}^{n-1}.$
###### Proof.
Since $f$ and $g$ satisfy the condition (3), the measure $\mu$ with density $f$ and the measure $\lambda$ with density $g$ are Aleksandrov related. Then, by Theorem 1, there exists a body $N^{*}$ containing the origin in its interior which satisfies the equation (1). Let $N$ be the polar dual of $N^{*}$. Choose constants $s_{1}>s_{0}>0$ such that
$N_{0}=s_{0}N\subset K_{0}\subset s_{1}N=N_{1}.$
Let $r_{0}$ and $r_{1}$ be the radial functions of $N_{0}$ and $N_{1}$, respectively. Clearly, $sN$ is a stationary solution to (5) in the generalised sense for any $s>0$. We first prove that
$K_{t}\subset N_{1}$
for all $t>0$ by contradiction. Suppose otherwise; then there exists a first time $t_{0}>0$ such that
$\sup_{u\in\mathbb{S}^{n-1}}\frac{\rho(u,t_{0})}{r_{1}(u)}=1.$
Set
$P=M_{t_{0}}\cap N_{1},$
which can be a point or a closed set. Clearly, the unit normal vector of $M_{t_{0}}$ coincides with that of $N_{1}$ at every $p\in P$; namely, $\nu_{M_{t_{0}}}(p)=\nu_{N_{1}}(p)$ for any $p\in P$. Moreover, replacing $r_{1}$ by $(1+a)r_{1}$ for a small constant $a>0$, we may assume that
$\frac{\partial}{\partial t}\rho(u,t)>0\quad\text{on}\quad P\times\\{t_{0}\\}$
and also in a neighbourhood of $P\times\{t_{0}\}$. Hence there exist sufficiently small constants $\epsilon,\delta>0$ such that
$\frac{\partial}{\partial t}\rho(u,t)>\delta$
for all $u\in E=\{\xi\in\mathbb{S}^{n-1}:\rho(\xi,t)>(1-\epsilon)r_{1}(\xi)\}$. Since $\nu_{M_{t_{0}}}(p)=\nu_{N_{1}}(p)$ for any $p\in P$, shrinking $\epsilon$ if necessary, we have $\nu_{N_{1}}(u)\approx\nu_{M_{t_{0}}}(u)$ for $u\in E$. Thus,
using the equation (17), the Gauss curvature of $M_{t_{0}}$ satisfies
$\begin{aligned}\mathcal{K}(M_{t_{0}})&<\frac{(\rho(u,t_{0})-\delta)g(u)}{f(\nu_{M_{t_{0}}})\rho^{n+1}(u,t_{0})}\\ &<\frac{(r_{1}(u)-\delta)g(u)}{f(\nu_{N_{1}})(r_{1}(u)-\epsilon)^{n+1}}\\ &<\frac{1}{(1-\epsilon)^{n}}\,\frac{g(u)}{f(\nu_{N_{1}})\,r_{1}^{n}(u)}\\ &<\mathcal{K}\big((1-\epsilon)N_{1}\big).\end{aligned}$
That is, the Gauss curvature of $M_{t_{0}}$ is strictly smaller than that of $(1-\epsilon)N_{1}$ on $E$. Applying the comparison principle for generalised solutions of the elliptic Monge-Ampère equation (see Theorem 1.4.6 in [18]) to the functions $\rho(u,t_{0})$ and $(1-\epsilon)r_{1}(u)$, we reach a contradiction. Similarly, one can prove that $N_{0}\subset M_{t}$ for all $t>0$. ∎
From Theorems 2 and 3 in [4], we know that if $f$ and $g$ are even functions
satisfying
(23) $\int_{\mathbb{S}^{n-1}}f(x)dx=\int_{\mathbb{S}^{n-1}}g(u)du,$
then $f$ and $g$ satisfy the condition (3). In this case, if $M_{0}$ is
origin-symmetric, we can give a proof of Lemma 3 without using Theorem 1.
###### Lemma 4.
Suppose that $M_{0}$ is origin-symmetric and that $f$ and $g$ are two smooth, positive even functions satisfying the condition (23). Then the conclusions of Lemma 3 hold true.
###### Proof.
Since $M_{0}$ is origin-symmetric and $f$ and $g$ are even functions, $M_{t}$ is origin-symmetric and $h(x,t)$ is an even function for every $t$. For fixed $t\in[0,T)$, assume $h(\cdot,t)$ attains its maximum at $x_{t}$. Since $h(\cdot,t)$ is even, the definition (6) of the support function gives
(24) $h(x,t)\geq|\langle x,x_{t}\rangle|h(x_{t},t).$
By Lemmas 1 and 2,
$\begin{aligned}\int_{\mathbb{S}^{n-1}}f(x)\log h(x,0)\,dx&\geq\int_{\mathbb{S}^{n-1}}f(x)\log h(x,t)\,dx\\ &\geq\int_{\mathbb{S}^{n-1}}f(x)\log\big(|\langle x,x_{t}\rangle|h(x_{t},t)\big)\,dx\\ &\geq C\log h(x_{t},t)-C,\end{aligned}$
which implies that
$\max_{\mathbb{S}^{n-1}\times[0,T)}h(x,t)\leq C$
for some positive constant $C$.
The positive lower bound of $h$ will be proved by contradiction. Let
$\\{t_{k}\\}\subset[0,T)$ be a sequence such that
$\min_{\mathbb{S}^{n-1}}h(\cdot,t_{k})\rightarrow 0\quad\text{as}\quad
k\rightarrow\infty.$
Let $K_{t}$ be the convex body enclosed by $M_{t}$. By the Blaschke selection theorem, there is a subsequence of $\{K_{t_{k}}\}$, still denoted by $\{K_{t_{k}}\}$, such that
$K_{t_{k}}\rightarrow\widetilde{K}\quad\text{as}\quad k\rightarrow+\infty.$
Since $K_{t_{k}}$ is an origin-symmetric convex body, $\widetilde{K}$ is also
origin-symmetric. Then
$\displaystyle\min_{\mathbb{S}^{n-1}}h_{\widetilde{K}}=\lim_{k\rightarrow+\infty}\min_{\mathbb{S}^{n-1}}h_{K_{t_{k}}}=0.$
It follows that $\widetilde{K}$ is contained in a hyperplane in
$\mathbb{R}^{n}$. Then
$\rho_{\widetilde{K}}=0,\quad\text{a.e. in $\mathbb{S}^{n-1}$}.$
Using Lemma 2 and the fact that $\rho_{\widetilde{K}}=0$ a.e., we have for any $\epsilon>0$
$\begin{aligned}V_{g}(M_{0})&=V_{g}(M_{t_{k}})\leq\lim_{k\rightarrow+\infty}\int_{\mathbb{S}^{n-1}}g(u)\log[\rho(u,t_{k})+\epsilon]\,du\\ &=\int_{\mathbb{S}^{n-1}}g(u)\log[\rho_{\widetilde{K}}(u)+\epsilon]\,du=\int_{\mathbb{S}^{n-1}}g(u)\log\epsilon\,du\\ &=C\log\epsilon\rightarrow-\infty\quad\text{as}\quad\epsilon\rightarrow 0,\end{aligned}$
which is a contradiction. Hence
$\min_{\mathbb{S}^{n-1}\times[0,T)}h(x,t)\geq C$
for some positive constant $C$. This completes the proof. ∎
Due to the convexity of $M_{t}$, Lemma 3 also implies the gradient estimates
of $h(\cdot,t)$ and $\rho(\cdot,t)$.
###### Lemma 5.
Let $X(\cdot,t)$ be a strictly convex solution to the flow (5) which encloses
the origin for $t\in[0,T)$, then we have
$\displaystyle|\nabla h(x,t)|\leq
C,\quad\forall(x,t)\in\mathbb{S}^{n-1}\times[0,T),$
$\displaystyle|\nabla\rho(u,t)|\leq
C,\quad\forall(u,t)\in\mathbb{S}^{n-1}\times[0,T),$
where $C$ is a positive constant depending only on the constant in Lemma 3.
###### Proof.
By virtue of (12), we have
(25) $\rho^{2}=|\nabla h|^{2}+h^{2}\geq|\nabla h|^{2}.$
By (7), (12) and (13), we have
(26)
$h=\frac{\rho^{2}}{\sqrt{\rho^{2}+|\nabla\rho|^{2}}}\leq\frac{\rho^{2}}{|\nabla\rho|}.$
Using the two inequalities (25) and (26), the estimates of this lemma now follow directly from Lemma 3. ∎
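For completeness, (26) can be verified directly from (7) and (13): since the spherical gradient $\nabla\rho(u)$ is tangent to $\mathbb{S}^{n-1}$ at $u$, we have $\langle u,\nabla\rho\rangle=0$ and hence
$h(x)=\langle x,\rho(u)u\rangle=\Big\langle\frac{\rho(u)u-\nabla\rho}{\sqrt{\rho^{2}+|\nabla\rho|^{2}}},\,\rho(u)u\Big\rangle=\frac{\rho^{2}(u)}{\sqrt{\rho^{2}+|\nabla\rho|^{2}}}.$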
## 4\. Uniform bounds for principal curvatures
In this section, we continue to establish uniform upper and lower bounds for
principal curvatures. These estimates can be obtained by considering proper
auxiliary functions; see [10, 13, 36, 37] for similar techniques. First, we
need the following lemma.
###### Lemma 6.
Let $r<R$ be two positive constants. For the function
$G(y)=\frac{|y|^{n}}{g(\frac{y}{|y|})},\quad y=(y^{1},...,y^{n})\in
A(r,R)=\\{y\in\mathbb{R}^{n}:r<|y|<R\\},$
we have
$\displaystyle\left\|G\right\|_{C^{k}(A(r,R))}\leq
C_{k}\left\|g\right\|_{C^{k}(\mathbb{S}^{n-1})},$
where $k=0,1,2$, and $C_{k}$ is a positive constant depending only on $n,r,R$,
$\left\|g\right\|_{C^{k}(\mathbb{S}^{n-1})}$, and
$\min\limits_{\mathbb{S}^{n-1}}g$.
###### Proof.
Denote $\partial_{i}=\frac{\partial}{\partial y^{i}}$ for $1\leq i\leq n$. One easily checks that
$\overline{\nabla}_{i}|y|^{n}=n|y|^{n-2}y^{i}\quad\text{and}\quad\overline{\nabla}_{i}g=\frac{1}{|y|^{3}}\langle\overline{\nabla}g,|y|^{2}\partial_{i}-yy^{i}\rangle.$
Thus,
$\overline{\nabla}_{i}G(y)=\frac{n|y|^{n-2}y^{i}}{g}-\frac{|y|^{n-3}\langle\overline{\nabla}g,|y|^{2}\partial_{i}-yy^{i}\rangle}{g^{2}}.$
It follows that
$\left\|G\right\|_{C^{1}(A(r,R))}\leq C_{1}\left\|g\right\|_{C^{1}(\mathbb{S}^{n-1})}.$
Moreover, we have
$\overline{\nabla}_{j}\overline{\nabla}_{i}|y|^{n}=n(n-2)|y|^{n-4}y^{i}y^{j}+n|y|^{n-2}\delta_{ij}$
and
$\begin{aligned}\overline{\nabla}_{j}\overline{\nabla}_{i}g&=\frac{1}{|y|^{3}}\overline{\nabla}^{2}g\Big(|y|^{2}\partial_{i}-yy^{i},|y|^{2}\partial_{j}-yy^{j}\Big)+\frac{1}{|y|^{3}}\Big\langle\overline{\nabla}g,2y^{j}\partial_{i}-y^{i}\partial_{j}-y\delta_{ij}\Big\rangle\\ &\quad-\frac{3y^{j}}{|y|^{5}}\langle\overline{\nabla}g,|y|^{2}\partial_{i}-yy^{i}\rangle.\end{aligned}$
Note that
$\overline{\nabla}_{j}\overline{\nabla}_{i}G=\frac{\overline{\nabla}_{j}\overline{\nabla}_{i}|y|^{n}}{g}-|y|^{n}\frac{\overline{\nabla}_{j}\overline{\nabla}_{i}g}{g^{2}}+2|y|^{n}\frac{\overline{\nabla}_{i}g\,\overline{\nabla}_{j}g}{g^{3}}-\frac{\overline{\nabla}_{i}(|y|^{n})\,\overline{\nabla}_{j}g+\overline{\nabla}_{j}(|y|^{n})\,\overline{\nabla}_{i}g}{g^{2}}.$
Thus, we get
$\left\|G\right\|_{C^{2}(A(r,R))}\leq C_{2}\left\|g\right\|_{C^{2}(\mathbb{S}^{n-1})}$
in view of
$\displaystyle\overline{\nabla}^{2}g(e_{i},e_{j})=\nabla^{2}g(e_{i},e_{j})-\langle\overline{\nabla}g,\frac{y}{|y|}\rangle\delta_{ij}$
for a local orthonormal frame $\\{e_{1},\cdots,e_{n-1}\\}$ on
$\mathbb{S}^{n-1}$. This completes the proof. ∎
With Lemma 6 in hand, we can derive the uniform upper bound for the Gauss curvature of $M_{t}$ and the uniform lower bound for the principal curvatures by calculations similar to those in [13]. In the rest of this section, we take a local orthonormal frame $\{e_{1},\cdots,e_{n-1}\}$ on $\mathbb{S}^{n-1}$ such that the standard metric on $\mathbb{S}^{n-1}$ is $\{\delta_{ij}\}$, and repeated indices are always summed from $1$ to $n-1$.
###### Lemma 7.
Let $X(\cdot,t)$ be a strictly convex solution to the flow (5) which encloses
the origin for $t\in[0,T)$, then we have
$\mathcal{K}(x,t)\leq C,\quad\forall(x,t)\in\mathbb{S}^{n-1}\times[0,T),$
where $C$ is a positive constant depending only on the constants in Lemmas 3
and 5.
###### Proof.
Set
$Q(x,t)=\frac{-\partial_{t}h(x,t)+h(x,t)}{h(x,t)-\varepsilon_{0}}=\frac{f(x)\rho^{n}(u)}{(h-\varepsilon_{0})g(u)}\mathcal{K}(x,t),$
where
$\varepsilon_{0}=\frac{1}{2}\,\inf_{\mathbb{S}^{n-1}\times[0,T)}h(x,t)$
and the second equality follows from (15). For each $t\in[0,T)$, assume
$Q(\cdot,t)$ attains its maximum at some point $x_{t}\in\mathbb{S}^{n-1}$. At
$(x_{t},t)$, we can obtain
(27)
$0=Q_{i}=\frac{-\partial_{t}h_{i}+h_{i}}{h-\varepsilon_{0}}+\frac{\partial_{t}h-h}{(h-\varepsilon_{0})^{2}}h_{i},$
and
(28) $0\geq
Q_{ij}=\frac{-\partial_{t}h_{ij}+h_{ij}}{h-\varepsilon_{0}}+\frac{(\partial_{t}h-h)h_{ij}}{(h-\varepsilon_{0})^{2}},$
where (27) is used in (28). Recall that $b_{ij}=h_{ij}+h\delta_{ij}$, and
$b^{ij}$ is its inverse matrix. Using the inequality (28), it yields
$\begin{split}\partial_{t}b_{ij}&=\partial_{t}h_{ij}+\partial_{t}h\delta_{ij}\\\
&\geq
h_{ij}+\frac{\partial_{t}h-h}{h-\varepsilon_{0}}h_{ij}+\partial_{t}h\delta_{ij}\\\
&=b_{ij}-Q(b_{ij}-\varepsilon_{0}\delta_{ij}).\end{split}$
Noticing that $\mathcal{K}=1/\det(b_{ij})$, we have
(29)
$\begin{split}\partial_{t}\mathcal{K}&=-\mathcal{K}b^{ji}\partial_{t}b_{ij}\\\
&\leq-\mathcal{K}b^{ji}[b_{ij}-Q(b_{ij}-\varepsilon_{0}\delta_{ij})]\\\
&=-\mathcal{K}\bigl{[}(n-1)(1-Q)+Q\varepsilon_{0}\operatorname{tr}(b^{ij})\bigr{]}.\end{split}$
Note that
(30) $Q(x,t)=\frac{f(x)\rho^{n}(u)}{(h-\varepsilon_{0})g(u)}\mathcal{K}(x,t),$
we derive by Lemma 3
(31) $\frac{1}{C_{1}}Q(x,t)\leq\mathcal{K}(x,t)\leq C_{1}Q(x,t),$
where $C_{1}$ is a positive constant depending only on the constant $C$ in
Lemma 3, and the upper and lower bounds of $f,g$ on $\mathbb{S}^{n-1}$.
Combining Lemma 3 and the inequalities (29) and (31), we have
(32)
$\begin{split}\partial_{t}\mathcal{K}&\leq(n-1)\mathcal{K}Q-(n-1)\varepsilon_{0}Q\mathcal{K}^{\frac{n}{n-1}}\\\
&\leq C_{2}Q^{2}-C_{3}Q^{\frac{2n-1}{n-1}},\end{split}$
where the inequality
$\frac{1}{n-1}\operatorname{tr}(b^{ij})\geq\det(b^{ij})^{\frac{1}{n-1}}=\mathcal{K}^{\frac{1}{n-1}}$
is used. Here $C_{2}$, $C_{3}$ are positive constants depending only on $n$,
$\varepsilon_{0}$ and $C_{1}$. By the definition of $Q$ and (27),
(33) $\partial_{t}h_{i}=(1-Q)h_{i}.$
Thus, we have
$\begin{aligned}\partial_{t}(\nabla h+hx)&=\partial_{t}(h_{i}e_{i}+hx)=\partial_{t}h_{i}\,e_{i}+(\partial_{t}h)x\\ &=(1-Q)h_{i}e_{i}+\big(h-(h-\varepsilon_{0})Q\big)x\\ &=(1-Q)(\nabla h+hx)+\varepsilon_{0}Qx.\end{aligned}$
It follows that
$\begin{aligned}\partial_{t}G(\nabla h+hx)&=\langle\overline{\nabla}G,\partial_{t}(\nabla h+hx)\rangle\\ &=(1-Q)\langle\overline{\nabla}G,\nabla h+hx\rangle+\varepsilon_{0}Q\langle\overline{\nabla}G,x\rangle\\ &\leq(1-Q)|\overline{\nabla}G|\,|\nabla h+hx|+\varepsilon_{0}Q|\overline{\nabla}G|\\ &\leq C_{4}(1-Q)|\nabla h+hx|+C_{4}\varepsilon_{0}Q,\end{aligned}$
where, by Lemma 6, $C_{4}$ is a positive constant depending only on $n$, the constants in Lemmas 3 and 5, $\left\|g\right\|_{C^{1}(\mathbb{S}^{n-1})}$ and $\min\limits_{\mathbb{S}^{n-1}}g$.
Thus,
(36) $\begin{aligned}\frac{\partial}{\partial t}\Big[\frac{G(\nabla h+hx)}{h(x,t)-\varepsilon_{0}}\Big]&=\frac{\partial_{t}G}{h-\varepsilon_{0}}-\frac{G\,\partial_{t}h}{(h-\varepsilon_{0})^{2}}\\ &\leq\frac{(1-Q)|\overline{\nabla}G|\,|\overline{\nabla}h|+\varepsilon_{0}Q|\overline{\nabla}G|}{h-\varepsilon_{0}}+\frac{[(1-Q)h+\varepsilon_{0}Q]G}{(h-\varepsilon_{0})^{2}}\\ &\leq C_{5}+C_{5}Q,\end{aligned}$
where $C_{5}$ is a positive constant depending only on $\varepsilon_{0}$,
$C_{4}$ and the constant $C$ in Lemmas 3 and 5.
By virtue of (30), (32) and (36), we have at $(x_{t},t)$
$\begin{aligned}\partial_{t}Q&=f\,\partial_{t}\Big[\frac{G}{h-\varepsilon_{0}}\Big]\mathcal{K}+f\,\frac{G}{h-\varepsilon_{0}}\,\partial_{t}\mathcal{K}\\ &\leq\max_{\mathbb{S}^{n-1}}f\cdot C_{5}(1+Q)\,C_{1}Q+\frac{Q}{\mathcal{K}}\Big(C_{2}Q^{2}-C_{3}Q^{\frac{2n-1}{n-1}}\Big)\\ &\leq\max_{\mathbb{S}^{n-1}}f\cdot C_{1}C_{5}Q(1+Q)+C_{1}C_{2}Q^{2}-C_{1}^{-1}C_{3}Q^{\frac{2n-1}{n-1}}\\ &\leq CQ+CQ^{2}-CQ^{\frac{2n-1}{n-1}},\end{aligned}$
where we have used (31), and $C$ is a positive constant depending only on $\max\limits_{\mathbb{S}^{n-1}}f$, $C_{1}$, $C_{2}$, $C_{3}$ and $C_{5}$. Thus, whenever $Q(x_{t},t)$ exceeds some constant independent of $t$, we have
$\partial_{t}Q<0,$
which implies that $Q$ has a uniform upper bound. By (31), $\mathcal{K}$ then has a uniform upper bound. ∎
###### Lemma 8.
Let $X(\cdot,t)$ be a strictly convex solution to the flow (5) which encloses
the origin for $t\in[0,T)$, then for the principal curvatures
$\kappa_{i}(x,t)$ of $M_{t}$, we have
$\kappa_{i}(x,t)\geq
C,\quad\forall(x,t)\in\mathbb{S}^{n-1}\times[0,T),\quad\forall
i=1,\cdots,n-1,$
where $C$ is a positive constant depending only on the constants in Lemmas 3
and 5.
###### Proof.
Set
$\widetilde{\Lambda}(x,t)=\log\lambda_{\max}(b_{ij})-A\log h+B|\nabla
h|^{2},\quad\forall(x,t)\in\mathbb{S}^{n-1}\times[0,T),$
where $b_{ij}=h_{ij}+h\delta_{ij}$ as before, $\lambda_{\max}(b_{ij})$ denotes
the maximal eigenvalue of the matrix $(b_{ij})$, and $A$ and $B$ are positive
constants to be chosen later.
For any fixed $T^{\prime}\in(0,T)$, assume
$\max_{\mathbb{S}^{n-1}\times[0,T^{\prime}]}\widetilde{\Lambda}(x,t)$ is
attained at some point $(x_{0},t_{0})\in\mathbb{S}^{n-1}\times[0,T^{\prime}]$.
By choosing a suitable orthonormal frame, we may assume
$\\{b_{ij}(x_{0},t_{0})\\}\quad\text{is diagonal
and}\quad\lambda_{\max}(b_{ij})(x_{0},t_{0})=b_{11}(x_{0},t_{0}).$
Thus, the new function defined on $\mathbb{S}^{n-1}\times[0,T^{\prime}]$
$\Lambda(x,t)=\log b_{11}-A\log h+B|\nabla h|^{2}$
also attains its maximum at $(x_{0},t_{0})$. Thus, we have at $(x_{0},t_{0})$
(38) $0=\Lambda_{i}=b^{11}b_{11;i}-A\frac{h_{i}}{h}+2B\sum_{k}h_{k}h_{ki},$
and
(39)
$\begin{split}0\geq\Lambda_{ij}&=b^{11}b_{11;ij}-(b^{11})^{2}b_{11;i}b_{11;j}\\\
&\hskip
11.00008pt-A\Bigl{(}\frac{h_{ij}}{h}-\frac{h_{i}h_{j}}{h^{2}}\Bigr{)}+2B\sum_{k}(h_{kj}h_{ki}+h_{k}h_{kij}),\end{split}$
where $(b^{ij})$ is the inverse of the matrix $(b_{ij})$. Without loss of
generality, we can assume $t_{0}>0$. Then, we get at $(x_{0},t_{0})$
(40)
$\begin{split}0\leq\partial_{t}\Lambda&=b^{11}(\partial_{t}h_{11}+\partial_{t}h)-A\frac{\partial_{t}h}{h}+2B\sum_{k}h_{k}\partial_{t}h_{k}.\end{split}$
From the equation (15), we know
(41) $\log(h-\partial_{t}h)=\log\mathcal{K}(x,t)+\log f(x)G(\nabla h+hx).$
Set
$w(x,t)=\log\Big{[}f(x)G(\nabla h+hx)\Big{]},$
where
$G(\nabla h+hx)=\frac{|\nabla h+hx|^{n}}{g\bigg{(}\frac{\nabla h+hx}{|\nabla
h+hx|}\bigg{)}}.$
Differentiating (41) gives
(42) $\frac{h_{k}-\partial_{t}h_{k}}{h-\partial_{t}h}=-b^{ji}b_{ij;k}+w_{k},$
and
(43)
$\frac{h_{11}-\partial_{t}h_{11}}{h-\partial_{t}h}=\frac{(h_{1}-\partial_{t}h_{1})^{2}}{(h-\partial_{t}h)^{2}}-b^{ii}b_{ii;11}+b^{ii}b^{jj}(b_{ij;1})^{2}+w_{11}.$
Multiplying both sides of (43) by $-b^{11}$, it yields
(44)
$\begin{split}\frac{b^{11}\partial_{t}h_{11}-b^{11}h_{11}}{h-\partial_{t}h}&\leq
b^{11}b^{ii}b_{ii;11}-b^{11}b^{ii}b^{jj}(b_{ij;1})^{2}-b^{11}w_{11}\\\ &\leq
b^{11}b^{ii}b_{11;ii}-b^{11}b^{ii}b^{11}(b_{i1;1})^{2}-\sum_{i}b^{ii}\\\
&\hskip 11.00008pt+b^{11}(n-1-w_{11}),\end{split}$
where we use the Ricci identity $b_{ii;11}=b_{11;ii}-b_{11}+b_{ii}$. We know
that $b^{ij}\Lambda_{ij}\leq 0$ from (39) which implies at $(x_{0},t_{0})$
$\begin{split}b^{11}b^{ii}b_{11;ii}-(b^{11})^{2}b^{ii}(b_{11;i})^{2}\leq
Ab^{ii}\Bigl{(}\frac{h_{ii}}{h}-\frac{h_{i}^{2}}{h^{2}}\Bigr{)}-2B\sum_{k}b^{ii}(h_{ki}^{2}+h_{k}h_{kii}).\end{split}$
Thus,
(45) $\begin{split}&b^{11}b^{ii}b_{11;ii}-(b^{11})^{2}b^{ii}(b_{11;i})^{2}\\\
\leq&\frac{(n-1)A}{h}-A\sum_{i}b^{ii}-\frac{Ab^{ii}h_{i}^{2}}{h^{2}}+4(n-1)Bh-2B\sum_{i}b_{ii}-2Bh^{2}\sum_{i}b^{ii}\\\
&-2Bb^{ii}h_{k}b_{ii;k}+2Bb^{ii}h_{i}^{2},\end{split}$
where we use the following equalities
$\displaystyle b^{ii}h_{ii}=b^{ii}(b_{ii}-h)=n-1-h\sum_{i}b^{ii},$
$\displaystyle\sum_{k}b^{ii}h_{ki}^{2}=b^{ii}h_{ii}^{2}=b^{ii}(b_{ii}^{2}-2hb_{ii}+h^{2})=-2(n-1)h+\sum_{i}b_{ii}+h^{2}\sum_{i}b^{ii},$
$\displaystyle\sum_{k}b^{ii}h_{k}h_{kii}=\sum_{k}b^{ii}h_{k}(b_{ki;i}-h_{i}\delta_{ki})=\sum_{k}b^{ii}h_{k}b_{ii;k}-b^{ii}h_{i}^{2}.$
Here the fact that $b_{ij;k}$ is symmetric in all indices is used to get the
third equality above. Inserting the inequality (45) into (44), we obtain that
(46)
$\begin{split}\frac{b^{11}\partial_{t}h_{11}-b^{11}h_{11}}{h-\partial_{t}h}&\leq\frac{(n-1)A}{h}-(A+2Bh^{2}+1)\sum_{i}b^{ii}-\frac{A-2Bh^{2}}{h^{2}}b^{ii}h_{i}^{2}\\ &\quad+4(n-1)Bh-2B\sum_{i}b_{ii}\\ &\quad-2B\sum_{k}b^{ii}h_{k}b_{ii;k}+b^{11}(n-1-w_{11}).\end{split}$
Using (42), we get
(47) $\frac{2B\sum_{k}h_{k}\partial_{t}h_{k}}{h-\partial_{t}h}=\frac{2B|\nabla
h|^{2}}{h-\partial_{t}h}+2B\sum_{k}b^{ii}h_{k}b_{ii;k}-2B\langle\nabla
h,\nabla w\rangle.$
Now dividing (40) by $h-\partial_{t}h$ gives
$\begin{split}0&\leq\frac{b^{11}(\partial_{t}h_{11}-h_{11}+b_{11}-h+\partial_{t}h)}{h-\partial_{t}h}-\frac{A\partial_{t}h}{h(h-\partial_{t}h)}+\frac{2B\sum_{k}h_{k}\partial_{t}h_{k}}{h-\partial_{t}h}\\\
&=\frac{b^{11}(\partial_{t}h_{11}-h_{11})}{h-\partial_{t}h}+\frac{2B\sum_{k}h_{k}\partial_{t}h_{k}}{h-\partial_{t}h}-b^{11}+\frac{A}{h}-\frac{A-1}{h-\partial_{t}h},\end{split}$
which together with (46) and (47) implies that
$\begin{split}0&\leq\frac{nA}{h}-(A+2Bh^{2}+1)\sum_{i}b^{ii}-\frac{A-2Bh^{2}}{h^{2}}\sum_{i}b^{ii}h_{i}^{2}\\ &\quad+4(n-1)Bh-2B\sum_{i}b_{ii}+b^{11}(n-2-w_{11})\\ &\quad-2B\langle\nabla h,\nabla w\rangle-\frac{A-1-2B|\nabla h|^{2}}{h-\partial_{t}h}.\end{split}$
Now choosing $A=n+2BC^{2}$, where $C$ is the constant in Lemma 3 (so that $A-2Bh^{2}\geq n$ whenever $h\leq C$), we obtain
(48) $(A-n+3)\sum_{i}b^{ii}+2B\sum_{i}b_{ii}\leq
C_{1}(A+B)-b^{11}w_{11}-2B\langle\nabla h,\nabla w\rangle,$
where $C_{1}$ is a positive constant depending only on $n$ and the constant
$C$ in Lemma 3.
A direct calculation gives
$\displaystyle\nabla_{i}G(\nabla
h+hx)=\langle\overline{\nabla}G,\overline{\nabla}_{i}\overline{\nabla}h\rangle=\langle\overline{\nabla}G,e_{k}\rangle
b_{ik},$
and
$\displaystyle\nabla_{i}\nabla_{j}G(\nabla h+hx)$ $\displaystyle=$
$\displaystyle\overline{\nabla}^{2}G\Big{(}\overline{\nabla}_{j}(\overline{\nabla}h),\overline{\nabla}_{i}(\overline{\nabla}h)\Big{)}+\Big{\langle}\overline{\nabla}G,\overline{\nabla}_{i}\overline{\nabla}_{j}(\overline{\nabla}h)\Big{\rangle}$
$\displaystyle=$
$\displaystyle\overline{\nabla}^{2}G\Big{(}\sum_{k}b_{ik}e_{k},\sum_{l}b_{jl}e_{l}\Big{)}-\langle\overline{\nabla}G,x\rangle b_{ij}+\sum_{k}\langle\overline{\nabla}G,e_{k}\rangle b_{ij;k}.$
Thus, we have
$\begin{split}-2B\langle\nabla h,\nabla
w\rangle&=-2B\sum_{k}h_{k}\left(\frac{f_{k}}{f}+\frac{\sum_{l}\langle\overline{\nabla}G,e_{l}\rangle
b_{kl}}{G}\right)\\\ &\leq
C_{2}B+\sum_{k}\frac{2Bh_{k}\langle\overline{\nabla}G,e_{k}\rangle
b_{kk}}{G},\end{split}$
where $C_{2}$ is a positive constant depending only on the constants $C$ in
Lemma 3, $\left\|f\right\|_{C^{1}(\mathbb{S}^{n-1})}$ and
$\min\limits_{\mathbb{S}^{n-1}}f$. Moreover, we have by Lemma 6
$\displaystyle-w_{11}$ $\displaystyle=$
$\displaystyle\frac{f_{1}^{2}}{f^{2}}-\frac{f_{11}}{f}+\frac{[\langle\overline{\nabla}G,e_{1}\rangle
b_{11}]^{2}}{G^{2}}$
$\displaystyle-\frac{1}{G}\Big{[}\overline{\nabla}^{2}G(e_{1},e_{1})b^{2}_{11}-\langle\overline{\nabla}G,x\rangle b_{11}+\langle\overline{\nabla}G,e_{k}\rangle b_{11;k}\Big{]}$
$\displaystyle\leq$ $\displaystyle C_{3}(1+b_{11}+b_{11}^{2})+\frac{\langle\overline{\nabla}G,e_{k}\rangle b_{11;k}}{G},$
where $C_{3}$ is a positive constant depending only on the constants $C$ in
Lemmas 3 and 5, $\left\|f\right\|_{C^{2}(\mathbb{S}^{n-1})}$,
$\left\|g\right\|_{C^{2}(\mathbb{S}^{n-1})}$,
$\min\limits_{\mathbb{S}^{n-1}}f$ and $\min\limits_{\mathbb{S}^{n-1}}g$.
Therefore, combining the two inequalities above, we have
(49) $\begin{split}&-b^{11}w_{11}-2B\langle\nabla h,\nabla w\rangle\\\
\leq&C_{3}(b^{11}+1+b_{11})+C_{2}B+\sum_{k}\frac{\langle\overline{\nabla}G,e_{k}\rangle}{G}(b^{11}b_{11;k}+2Bh_{k}h_{kk})\\\
=&C_{3}(b^{11}+1+b_{11})+C_{2}B+\sum_{k}\frac{\langle\overline{\nabla}G,e_{k}\rangle}{G}\cdot\frac{Ah_{k}}{h}\\\
\leq&C_{3}(b^{11}+1+b_{11})+C_{2}B+C_{4}A,\end{split}$
where we have used the equality (38), and $C_{4}$ is a positive constant
depending only on the constants $C$ in Lemma 3 and $n$. Inserting (49) into
(48), we have
$(A-n)\sum_{i}b^{ii}+2B\sum_{i}b_{ii}\leq(C_{1}+C_{4})A+(C_{1}+C_{2})B+C_{3}(b^{11}+1+b_{11}),$
which together with $A=n+2BC^{2}$ implies that
$\begin{split}&(2BC^{2}-C_{3})\sum_{i}b^{ii}+(2B-C_{3})\sum_{i}b_{ii}\\\
\leq&(C_{1}+C_{4})(n+2BC^{2})+(C_{1}+C_{2})B+C_{3}.\end{split}$
If we choose $B=\max\left\{\frac{C_{3}+1}{2},\frac{C_{3}+1}{2C^{2}}\right\}$, we see that $b_{11}(x_{0},t_{0})$ is bounded from above by a positive constant depending only on $n$, $C_{1}$, $C_{2}$, $C_{3}$ and $C_{4}$. Using Lemma 3, $\widetilde{\Lambda}(x_{0},t_{0})$ is bounded from above by a positive constant depending only on $n$, $C$, $C_{1}$, $C_{2}$, $C_{3}$ and $C_{4}$. Since $T^{\prime}$ can be any number in $(0,T)$, this proves the lemma. ∎
## 5\. Existence of solutions
In this section, we complete the proof of Theorem 3. Combining Lemmas 7 and 8, we see that the principal curvatures of $M_{t}$ have uniform positive upper and lower bounds. Together with Lemmas 3 and 5, this implies that the evolution equation (15) is uniformly parabolic on any finite time interval. Thus, by the Krylov-Safonov estimates [34] and the Schauder estimates for parabolic equations, the smooth solution of (15) exists for all time. By these estimates again, a subsequence of $M_{t}$ converges in $C^{\infty}$ to a positive, smooth, uniformly convex hypersurface $M_{\infty}$ in $\mathbb{R}^{n}$. To complete the proof of Theorem 3, it remains to check that the support function of $M_{\infty}$ satisfies equation (4).
Let $\tilde{h}$ be the support function of $M_{\infty}$. We need to prove that
$\tilde{h}$ is a solution to the following equation
(50) $g\bigg{(}\frac{\nabla h+hx}{|\nabla h+hx|}\bigg{)}|\nabla
h+hx|^{-n}h\det(\nabla^{2}h+hI)=f\text{ on }\mathbb{S}^{n-1}.$
By Lemma 1, $J^{\prime}(t)\leq 0$ for any $t>0$. Since
$\int_{0}^{t}[-J^{\prime}(s)]\mathop{}\!\mathrm{d}s=J(0)-J(t)\leq C\int_{\mathbb{S}^{n-1}}f\,dx,$
we get
$\int_{0}^{\infty}[-J^{\prime}(s)]\mathop{}\!\mathrm{d}s\leq C\int_{\mathbb{S}^{n-1}}f\,dx.$
Thus, there exists a subsequence of times $t_{j}\to\infty$ such that
$-J^{\prime}(t_{j})\to 0\text{ as }t_{j}\to\infty.$
Thus, using (2.2) we obtain
$\int_{\mathbb{S}^{n-1}}\frac{1}{g\tilde{h}\widetilde{\mathcal{K}}\tilde{\rho}^{n}}(g\tilde{h}-f\widetilde{\mathcal{K}}\tilde{\rho}^{n})^{2}dx=0,$
where $\widetilde{\mathcal{K}}$ is the Gauss curvature of $M_{\infty}$. It
implies that
$g(u)\frac{\tilde{h}(x)}{\widetilde{\mathcal{K}}\tilde{\rho}^{n}(u)}=f(x)\quad\mbox{on}\quad\mathbb{S}^{n-1},$
which means that $\tilde{h}$ is a solution to equation (50). This completes the proof of Theorem 3. ∎
## References
* [1] B. Andrews, Monotone quantities and unique limits for evolving convex hypersurfaces, Internat. Math. Res. Notices, (1997), pp. 1001–1031.
* [2] B. Andrews, P. Guan and L. Ni, Flow by powers of the Gauss curvature, Adv. Math., 299 (2016), pp. 174–201.
* [3] G. Bianchi, K. J. Böröczky and A. Colesanti, The Orlicz version of the $L_{p}$ Minkowski problem for $-n<p<0$, Adv. in Appl. Math., 111 (2019), p. 101937.
* [4] K. J. Böröczky, E. Lutwak, D. Yang, G. Zhang and Y. Zhao, The Gauss image problem, Comm. Pure Appl. Math., 73 (2020), pp. 1406–1452.
* [5] K. J. Böröczky and F. Fodor, The $L_{p}$ dual Minkowski problem for $p>1$ and $q>0$, J. Differential Equations, 266 (2019), pp. 7980–8033.
* [6] K. J. Böröczky, M. Henk and H. Pollehn, Subspace concentration of dual curvature measures of symmetric convex bodies, J. Differential Geom., 109 (2018), pp. 411–429.
* [7] K. J. Böröczky, E. Lutwak, D. Yang, and G. Zhang, The logarithmic Minkowski problem, J. Amer. Math. Soc., 26 (2013), pp. 831–852.
* [8] S. Brendle, K. Choi and P. Daskalopoulos, Asymptotic behavior of flows by powers of the Gaussian curvature, Acta Math., 219 (2017), pp. 1–16.
* [9] P. Bryan, M. N. Ivaki and J. Scheuer, A unified flow approach to smooth, even $L_{p}$-Minkowski problems, Anal. PDE, 12 (2019), pp. 259–280.
* [10] C. Chen, Y. Huang and Y. Zhao, Smooth solutions to the $L_{p}$ dual Minkowski problem, Math. Ann., 373 (2019), pp. 953–976.
* [11] H. Chen, S. Chen and Q.-R. Li, Variations of a class of Monge-Ampère type functionals and their applications, accepted by Anal. PDE.
* [12] S. Chen, Q.-R. Li and G. Zhu, The logarithmic Minkowski problem for non-symmetric measures, Trans. Amer. Math. Soc., 371 (2019), pp. 2623–2641.
* [13] L. Chen, Y. Liu, J. Lu and N. Xiang, Existence of smooth even solutions to the dual Orlicz-Minkowski problem, arXiv:2005.02639.
* [14] K.-S. Chou and X.-J. Wang, A logarithmic Gauss curvature flow and the Minkowski problem, Ann. Inst. H. Poincaré Anal. Non Linéaire, 17 (2000), pp. 733–751.
* [15] K.-S. Chou and X.-J. Wang, The $L_{p}$-Minkowski problem and the Minkowski problem in centroaffine geometry, Adv. Math., 205 (2006), pp. 33–83.
* [16] M. E. Gage and Y. Li, Evolving plane curves by curvature in relative geometries. II, Duke Math. J., 75 (1994), pp. 79–98.
* [17] R. J. Gardner, D. Hug, W. Weil, S. Xing and D. Ye, General volumes in the Orlicz-Brunn-Minkowski theory and a related Minkowski problem I, Calc. Var. Partial Differential Equations, 58 (2019), Paper No. 12, 35 pp.
* [18] C. Gutiérrez, The Monge-Ampère Equation, second edition, Progress in Nonlinear Differential Equations and Their Applications, vol. 44, Birkhäuser.
* [19] R. J. Gardner, D. Hug, S. Xing and D. Ye, General volumes in the Orlicz-Brunn-Minkowski theory and a related Minkowski problem II, Calc. Var. Partial Differential Equations, 59 (2020), Paper No. 15, 33 pp.
* [20] C. Gerhardt, Non-scale-invariant inverse curvature flows in Euclidean space, Calc. Var. Partial Differential Equations, 49 (2014), pp. 471–489.
* [21] C. Haberl, E. Lutwak, D. Yang and G. Zhang, The even Orlicz Minkowski problem, Adv. Math., 224 (2010), pp. 2485–2510.
* [22] R. S. Hamilton, Remarks on the entropy and Harnack estimates for the Gauss curvature flow, Comm. Anal. Geom., 2 (1994), pp. 155–165.
* [23] M. Henk and H. Pollehn, Necessary subspace concentration conditions for the even dual Minkowski problem, Adv. Math., 323 (2018), pp. 114–141.
* [24] Q. Huang and B. He, On the Orlicz Minkowski problem for polytopes, Discrete Comput. Geom., 48 (2012), pp. 281–297.
* [25] Y. Huang and Y. Jiang, Variational characterization for the planar dual Minkowski problem, J. Funct. Anal., 277 (2019), pp. 2209–2236.
* [26] Y. Huang, E. Lutwak, D. Yang and G. Zhang, Geometric measures in the dual Brunn-Minkowski theory and their associated Minkowski problems, Acta Math., 216 (2016), pp. 325–388.
* [27] Y. Huang, E. Lutwak, D. Yang and G. Zhang, The $L_{p}$-Aleksandrov problem for $L_{p}$-integral curvature, J. Differential Geom., 110 (2018), pp. 1–29.
* [28] Y. Huang and Y. Zhao, On the $L_{p}$ dual Minkowski problem, Adv. Math., 332 (2018), pp. 57–84.
* [29] D. Hug, E. Lutwak, D. Yang and G. Zhang, On the $L_{p}$ Minkowski problem for polytopes, Discrete Comput. Geom., 33 (2005), pp. 699–715.
* [30] M. N. Ivaki, Deforming a hypersurface by Gauss curvature and support function, J. Funct. Anal., 271 (2016), pp. 2133–2165.
* [31] H. Jian and J. Lu, Existence of solutions to the Orlicz-Minkowski problem, Adv. Math., 344 (2019), pp. 262–288.
* [32] H. Jian, J. Lu and X.-J. Wang, A priori estimates and existence of solutions to the prescribed centroaffine curvature problem, J. Funct. Anal., 274 (2018), pp. 826–862.
* [33] H. Jian, J. Lu and G. Zhu, Mirror symmetric solutions to the centro-affine Minkowski problem, Calc. Var. Partial Differential Equations, 55 (2016), pp. Art. 41, 22 pp.
* [34] N. V. Krylov and M. V. Safonov, A property of the solutions of parabolic equations with measurable coefficients, Izv. Akad. Nauk SSSR Ser. Mat., 44 (1980), pp. 161–175, 239.
* [35] Q.-R. Li, J. Liu and J. Lu, Non-uniqueness of solutions to the $L_{p}$ dual Minkowski problem. Preprint.
* [36] Q.-R. Li, W. Sheng and X.-J. Wang, Flow by Gauss curvature to the Aleksandrov and dual Minkowski problems, J. Eur. Math. Soc. (JEMS), 22 (2020), pp. 893–923.
* [37] Y. Liu and J. Lu, A flow method for the dual Orlicz-Minkowski problem, Trans. Amer. Math. Soc., 373 (2020), pp. 5833–5853.
* [38] J. Lu, Nonexistence of maximizers for the functional of the centroaffine Minkowski problem, Sci. China Math., 61 (2018), pp. 511–516.
* [39] J. Lu, A remark on rotationally symmetric solutions to the centroaffine Minkowski problem, J. Differential Equations, 266 (2019), pp. 4394–4431.
* [40] J. Lu and X.-J. Wang, Rotationally symmetric solutions to the $L_{p}$-Minkowski problem, J. Differential Equations, 254 (2013), pp. 983–1005.
* [41] E. Lutwak, The Brunn-Minkowski-Firey theory. I. Mixed volumes and the Minkowski problem, J. Differential Geom., 38 (1993), pp. 131–150.
* [42] E. Lutwak, D. Yang and G. Zhang, On the $L_{p}$-Minkowski problem, Trans. Amer. Math. Soc., 356 (2004), pp. 4359–4370.
* [43] E. Lutwak, D. Yang and G. Zhang, $L_{p}$ dual curvature measures, Adv. Math., 329 (2018), pp. 85–132.
* [44] A. Stancu, The discrete planar $L_{0}$-Minkowski problem, Adv. Math., 167 (2002), pp. 160–174.
* [45] J. Urbas, An expansion of convex hypersurfaces, J. Differential Geom., 33 (1991), pp. 91–125.
* [46] Y. Zhao, The dual Minkowski problem for negative indices, Calc. Var. Partial Differential Equations, 56 (2017), Paper No. 18.
* [47] Y. Zhao, Existence of solutions to the even dual Minkowski problem, J. Differential Geom., 110 (2018), pp. 543–572.
* [48] G. Zhu, The logarithmic Minkowski problem for polytopes, Adv. Math., 262 (2014), pp. 909–931.
* [49] G. Zhu, The centro-affine Minkowski problem for polytopes, J. Differential Geom., 101 (2015), pp. 159–174.
|
Figure 1: With conventional training methods, the network's training process is a black box and cannot be controlled. Our proposed method projects the high-dimensional features of the data into a 2D workspace, where the user can manually edit the high-dimensional features. Training the network with this method provides a degree of control over the training process, which not only lets people better understand how the network trains but also integrates human knowledge into the training process, thereby improving the network's performance.
# SpaceEditing: Integrating Human Knowledge into Deep Neural Networks via
Interactive Latent Space Editing
Jiafu Wei, Ding Xia, Haoran Xie, Chia-Ming Chang, Chuntao Li, and Xi Yang
Jiafu Wei, Chuntao Li, and Xi Yang (corresponding author) are with Jilin University. Ding Xia and Chia-Ming Chang are with the University of Tokyo. Haoran Xie is with Japan Advanced Institute of Science and Technology.
###### Abstract
We propose an interactive editing method that allows humans to help deep neural networks (DNNs) learn a latent space more consistent with human knowledge, thereby improving classification accuracy on indistinguishable ambiguous data. First, we visualize high-dimensional data features through dimensionality reduction and design an interactive system, SpaceEditing, to display the visualized data. SpaceEditing provides a 2D workspace based on the idea of spatial layout; in this workspace, the user can move the projected data following the system's guidance. SpaceEditing then finds the high-dimensional features corresponding to the projected data the user moved and feeds them back to the network for retraining, thereby letting the user interactively modify the high-dimensional latent space. Second, to incorporate human knowledge into the training process of neural networks more rationally, we design a new loss function that enables the network to learn the user's modifications. Finally, we demonstrate how SpaceEditing meets user needs through three case studies while evaluating our proposed method, and the results confirm its effectiveness.
###### Index Terms:
Interaction, deep learning, latent space, spatial editing.
## 1 Introduction
Although deep neural networks (DNNs) achieve excellent classification results, they still struggle to distinguish similar, ambiguous data. The machine learning community has recognized the disadvantage of networks in dealing with abstract things such as shapes and concepts [51]. In addition, network performance is often unsatisfactory on datasets from specialized fields, such as archaeological datasets, because such domain datasets require corresponding domain knowledge. Human recognition of features should therefore help deep learning networks learn.
However, the learning process of current deep learning networks is still uncontrollable (Fig. 1). In general, whether during training or fine-tuning, people can only judge the effect of training through losses or metrics, and cannot work on the data directly. To achieve better results, they can only resort to traditional fine-tuning methods such as adjusting hyperparameters, which often require many rounds of debugging. To let people participate in network training more intuitively, the role of high-dimensional features in network training has received increasing attention [59]. High-dimensional features can be observed through projection visualization [52, 31, 23], but existing methods cannot directly affect the high-dimensional features, and the latent space learned by existing deep learning networks remains uncontrollable.
Interactive machine learning is a promising direction for addressing the above problem: it tries to let human knowledge help the network learn. For example, Sakata et al. [44] introduce a network called CROWNN that lets people participate in the network's classification process and then leverages the learned human strengths to perform classification tasks better.
The user interface is an important means of supporting interactive machine learning. On the one hand, a user interface helps people observe the relationships between data more intuitively; on the other hand, users can participate in network training through its interactive functions. It is therefore necessary to design a suitable spatial layout for the user interface. For example, Chang et al. [12] used the idea of spatial layout to design a system for improving the annotation quality of non-professional annotators.
Although in past work we proposed a method for fine-tuning the network using 2D projections of the data [58], that method does not operate directly on the high-dimensional features, and its effect is limited. In this paper, we therefore propose a novel interactive machine learning method to address these issues. Through our system, users can intuitively observe the distribution of the data in the latent space. Users can also modify the high-dimensional features in the latent space, and the modified information is used to retrain the network and improve its performance. Our method visualizes high-dimensional features by projecting the features in the network onto a 2D workspace, where the location of each projected point reflects, to a certain extent, that point's classification result. Using the system's interactive functions, users can reclassify the projection results based on their knowledge and move points they consider misclassified to positions they consider correct. The system then automatically feeds the results of the user's movements back to the network for retraining (Fig. 2).
To summarize, our contributions include:
1. 1.
We propose a novel and effective method that enables users to interactively edit the latent space based on their knowledge, thereby guiding the network's learning process. The method incorporates human knowledge into the training process, which not only helps the network escape local minima and improves its performance but also gives users a more understandable latent space.
2. 2.
We design a new interactive system, SpaceEditing, which applies our proposed method and allows users to propagate their operations from the two-dimensional space to the high-dimensional space. It also offers interactive functions such as an enlarge function, visual volume adjustment, interactive movement, movement guidance, and a history record, which make manual editing of the latent space feasible.
3. 3.
We conduct three case studies to evaluate the effect of SpaceEditing on different types of machine learning tasks and users with different backgrounds. The case studies indicate the usefulness of the system, the effectiveness of the proposed method, its flexibility across different scenarios, and the users' experience and evaluation.
Figure 2: The workflow of our proposed method. (1) In the preprocessing
stage, the raw data is trained to obtain a network for visualizing the images.
(2) The data features output by the network are projected into a 2D workspace
for the user to observe. (3) In the 2D workspace, users can easily observe the
visualized data, and at the same time, they can use the functions provided by
the system to easily interact with the projection points. (4) When the user
edits the projection point in the 2D workspace, (5) the system automatically
performs the corresponding calculations in the high-dimensional latent space. (6) The network then learns from the changes in the projection points before and after the move, (7) thereby letting the user help retrain the network.
## 2 Related work
### 2.1 Interactive Machine Learning
With the rapid development of graphical interfaces, the importance of humans
in the working and learning process of machines has become increasingly
apparent [36, 9, 32]. The advantage of IML (Interactive Machine Learning) is
that the addition of humans can help machines complete abstract tasks that are
difficult for machines [44, 16, 37, 2, 17]. Machine learning can be used as a
creative tool, and human participation can help guide machine learning in a
creative and artistic direction [19, 24]. Through an interactive method, it is
easier for the machine to generate the data and networks the user wants [14,
53, 64, 37]. In addition, some studies add brain-computer interfaces to the machine learning process [34], which not only gives the system stronger learning ability [16] but also shortens the training time of the network [33].
Furthermore, IML systems pay special attention to user experience and user
understanding of the system due to human involvement, so it is necessary to
design robust and user-friendly systems [4, 49, 47, 65]. A human-oriented perspective is the premise for achieving this goal. Driven by human-oriented design concepts [21, 40, 62, 20], many meaningful new interactive machine learning methods have emerged [22, 48].
Many works build on the idea of IML, but none of them trains the network through human interaction with the latent space, and the potential of the latent space remains untapped. To this end, we propose a new deep learning method for interacting with the latent space.
### 2.2 Conventional Fine-tuning Methods
Fine-tuning is an important method in the field of machine learning. The time required to train a new network can be greatly reduced by adapting part of an existing network to the user's needs [63]. Based on traditional
fine-tuning methods, many novel fine-tuning methods have been generated. For
example, by changing part of the network structure to retrieve and classify
data such as images, Radenović et al. [41] finally realized a fine-tuning
method without manual annotation. Rosa et al. [43] introduce harmony search
and some of its variants to fine-tune image classification, filling a gap in
CNN parameter optimization research. Observing that not all parameters need to
be updated during fine-tuning, Xu et al. [61] proposed an efficient fine-
tuning technique CHILD-TUNING, which masks the gradients of non-sub-networks
in the reverse process. In addition to the above methods, visualization
techniques are also applied to fine-tune the network. For example, Amershi et
al. [3] designed a visualization tool ModelTracker, which can analyze the
performance of the network and help users fine-tune the network.
The latent space plays a very important role in network training [59]. Although the above works achieve very good results in network fine-tuning, none of them proposes a fine-tuning method from the perspective of the latent space. In addition, datasets in some specialized fields usually require a large amount of domain knowledge to achieve better results [8], so a human-in-the-loop method is well suited. For example, to address the low accuracy of medical image segmentation, Wang et al. [56] proposed a novel interactive segmentation framework. Based on the above ideas, we propose a novel retraining method from the latent space and human-in-the-loop perspectives.
### 2.3 Spatial Layout
The concept of spatial layout is widely used in management because spatial
layout not only stores information but also reflects the relationship between
information to a certain extent [28, 30]. Much meaningful research builds on spatial layouts, such as dimensionality reduction
visualization [54, 7]. By mapping high-dimensional data to a 2D or 3D
interface, people can more easily observe the layout relationship between the
data and then analyze the data [5, 52, 31, 15]. The spatial layout can also
reflect the relationship between icons, objects, and other information [46,
60]. Reasonable use of spatial layout can achieve the purpose of assisting
users. For example, to address the lack of professional annotators, Chang et
al. [12] utilize spatial layout to design a novel annotation interface to
improve the annotation quality of non-expert image annotations. Wang et al.
[55] used a hierarchical spatial structure to optimize how maps are displayed, which eliminated the tedious process of users zooming in to see map details and zooming out to see an overview. Mai et al. [38] studied the factors that affect users' spatial layouts. By combining the advantages of small
multiples and visual aggregation with interactive browsing, Lekschas et al.
[35] propose a structured design space to guide the design of visual-spatial
layouts.
Numerous studies have proven that information can be effectively managed
through spatial arrangement [27, 57, 25]. One example is a spatial search
system that makes it easy for users to search for desired information in 2D
space by interacting with the visualized data [10]. Chen et al. [13] designed
a system to facilitate bug discovery through semantic data search, in which
users interactively create a topology to convey information in a spatial
layout. Human-centric spatial layout techniques have also emerged, which
visualize time-series data through user interfaces to enhance human-to-human
collaboration [39]. Asai et al. [6] combine a code editor with an interactive
scatterplot editor enabling users to effectively understand the behavior of
statistical modeling algorithms. With the continuous development of
technology, spatial layout and machine learning have been increasingly integrated, resulting in a large number of novel and excellent research
results [11, 42]. For example, Eisenstadt et al. [18] leveraged machine
learning techniques to process information representations about building room
types and the spatial layout of individual rooms.
## 3 System
The latent space is the representation of the encoded data, but in most cases it is not directly interpretable. To further identify and understand the latent space, and to integrate human capabilities into the training process, we design a novel interactive system, SpaceEditing, that allows users to adjust the positions of vectors in the latent space. The system then retrains the network's parameters with the moved vectors, improving performance step by step.
### 3.1 Design Goals
The primary design goal of SpaceEditing is to give users a novel way to interact with the latent space. The system should highlight the relationships and connections between different types of data. At the same time, the basic requirement is that users can visually observe the distribution of the data in the latent space and easily interact with the vectors in it.
We propose three primary design goals for SpaceEditing:
1. 1.
Visualize the data representation intuitively. A fundamental function is the visualization of the latent space: our system should clearly and correctly convey the similarities and differences among the data representations, so that users can locate target data and make the right modifications without effort.
2. 2.
Design an effective interactive system. A batch-selection mechanism is critical for datasets comprising millions of images; it spares users from processing large amounts of data one by one. To support the batch-selection mechanism, we also develop a corresponding set of interaction mechanisms.
3. 3.
Facilitate users by highlighting ambiguous data. A suggestion mechanism for
ambiguous data (data with wrong predicted values and data in unreasonable
locations) can substantially improve user experience. Therefore, we provide
corresponding movement guidance functions for users.
### 3.2 System Description
Figure 3: System. The system can be divided into three parts: a) function
bar, b) workspace and c) history record module. In the workspace, projected
coordinates are combined with the data image itself. When the user mouses over
a certain point, an enlarged image of the point will be displayed, and at the
same time, a light purple guide line and guide circle will also be displayed
to provide a general direction for the user to move. The workspace shows a
total of four classes, corresponding to four colors. The history record module
is used to store the points operated by the user.
Our system consists of three parts: function bar, workspace, and history
record (Fig. 3).
#### 3.2.1 Function Bar
The function bar displays the data classes and their representative images. When the user clicks a class's representative image, the data of that class in the workspace is shown or hidden, so the user can hide irrelevant data (Fig. 4).
Below are two buttons: the reset button restores user operations, and the update button triggers retraining of the network. The display boxes at the bottom show the accuracy before and after the update.
Figure 4: Display and hide. Click the image representing the class on the
left to hide or display the corresponding class in the workspace.
#### 3.2.2 Interactive User Workspace
The system uses a novel spatial layout that helps users obtain a more comprehensible representation of the latent space (Fig. 3(b)). We apply Isomap [50] to the latent representations of the data to produce a 2D visual layout; the reason for choosing Isomap as the dimensionality reduction method is explained in Section 3.4. To display the data more intuitively in the workspace, each data item is shown as a thumbnail, and its predicted class is shown as a colored coordinate point in the lower right corner of the thumbnail, where different colors represent different predicted values. With this spatial layout design, the prediction for each data item is clear at a glance, and the data itself is presented well, making it convenient for the user to compare and interact with the data.
The distribution of the data reflects the classification results of the network. We use heatmaps of different colors as backgrounds for the different classes, which guide user movement. In the workspace, users can explore and interactively modify the data. The system also has basic redo and undo functions for convenient operation.
We now describe the main functions of the system.
Enlarge function. Facilitating data comparison was an important consideration when we designed the system. When the mouse hovers over a point, the image that the point represents is enlarged (Fig. 3(b)). When the user needs to distinguish different data, this function helps separate mixed data, making it easy to compare adjacent data and make better movement judgments. The system also has basic interactive zoom and wheel-pan functions.
Visual volume adjustment. Too much data on screen inevitably hinders the user's observation, so the system provides a visual volume adjustment function. The user can control the range of data displayed on the current screen by dragging the slider bar below the workspace (Fig. 5). The slider bar orders the data by importance from left to right. We define importance as the network's confidence in its output: we take the maximum softmax probability of the network output for each point and sort the points by this value in descending order. The display range is then tied to this importance ordering. In addition, the user can control the number of data items displayed on the current screen through the input window on the right side of the slider bar: after a number is entered, the workspace displays only that many projected points. Users can thus hide over-displayed data.
Figure 5: Visual volume adjustment. An example of using the importance slider
bar to adjust the amount of displayed data.
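To make this concrete, here is a minimal sketch of how the importance ranking described above could be computed; the function and array names are illustrative, not from the paper.
```python
import numpy as np

def importance_order(logits: np.ndarray) -> np.ndarray:
    """Rank points by network confidence, most confident first.

    `logits` has shape (n_points, n_classes); confidence is the
    maximum softmax probability, as described above.
    """
    shifted = logits - logits.max(axis=1, keepdims=True)  # stable softmax
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    return np.argsort(-probs.max(axis=1))

# The slider then maps to a prefix of this ordering, e.g.:
# visible_idx = importance_order(logits)[:k]
```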
Interactive movement. The most basic interactive function in the system is to
move the projected coordinates in the user’s workspace. The user can move a
single point, or use the Lasso tool to move multiple points at the same time
(Fig. 6).
Figure 6: Interactive movement. An example of multi-point movement using the
Lasso tool.
Movement guidance. To facilitate the user's operation, the system provides several guidance functions. Our system uses a pink box to highlight data with incorrect predictions, making it easy for the user to select the data to be moved (Fig. 3(b)). In addition, the system adds a guide line and a guide circle that remind the user of the approximate location of the cluster formed by the ground truth of the point currently being moved. We also use heatmaps to show the distribution of each class, which serves as an additional reference for user movement.
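The paper does not specify how these per-class background heatmaps are computed; one plausible implementation, assumed here, is a kernel density estimate over each class's projected points.
```python
import numpy as np
from scipy.stats import gaussian_kde

def class_heatmap(points_2d: np.ndarray, grid_size: int = 200):
    """Density heatmap for one class's projected points, shape (n, 2)."""
    kde = gaussian_kde(points_2d.T)  # gaussian_kde expects shape (d, n)
    xs = np.linspace(points_2d[:, 0].min(), points_2d[:, 0].max(), grid_size)
    ys = np.linspace(points_2d[:, 1].min(), points_2d[:, 1].max(), grid_size)
    xx, yy = np.meshgrid(xs, ys)
    zz = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
    return xx, yy, zz

# Rendered per class as a translucent background, e.g. with matplotlib:
# xx, yy, zz = class_heatmap(projections[labels == c])
# plt.contourf(xx, yy, zz, cmap=class_cmap, alpha=0.3)
```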
#### 3.2.3 History Record
During interaction, users inevitably make movement errors and want to revise the movement process. They may also find interesting patterns and want to record their movements. Therefore, we provide the history record module (Fig. 3(c)). The history of user movements is stored in this module; by clicking a history record, the workspace is traced back to the corresponding state. Users can also use the module's display mechanism to compare the data further. In addition, this module provides redo and undo functions.
### 3.3 Feedback Calculation
Figure 7: An example of simulated user movement. The user first observes the data using the system functions and finds the points $m$ to move. Then the user moves the points $m$ to the places he thinks appropriate according to his knowledge. The system selects reference positive points $p$ and reference negative points $n$ according to the positions before and after the move.
In order to incorporate human knowledge into the training process of neural networks, we design a new loss function so that the user's operations in the 2D workspace are learned by the high-dimensional hidden vectors, and the network can learn a latent space more consistent with human knowledge.
We want the features of the images the user moved to end up, in the latent space, closer to the positions they were moved to; for this we draw on the idea of triplet loss [45]. The purpose of triplet loss is to bring features with the same label as close as possible in spatial position while keeping features with different labels as far apart as possible. Our purpose is to bring the moved features closer to the positions they were moved to, so we need a high-dimensional feature $P$ in the high-dimensional space as a reference. In the 2D workspace, let $m$ be a point the user moved, and let $p_{i}$, $1\leq i\leq k$, be the $k$ points with the same label that are closest to $m$ after the move (Fig. 7). To compute the high-dimensional feature $P$, we take a weighted sum over the points $p_{i}$, where each weight is the inverse of the squared distance between $m$ and $p_{i}$ in the high-dimensional space (see Formula (1)).
$P=\sum\nolimits_{i=1}^{k}\frac{1}{||m-p_{i}||_{2}^{2}}\times p_{i}$ (1)
where $m$ and $p_{i}$ denote the features in the high-dimensional space corresponding to the points found in the two-dimensional space.
At the same time, in order to suppress, to a certain extent, the aggregation of features from different classes, we define a high-dimensional feature $N$. Let $n_{i}$, $1\leq i\leq k$, be the $k$ points with different labels that are closest to point $m$ before the move. In the same way, the corresponding feature $N$ is obtained from the points $n_{i}$ (see Formula (2)). The loss function should pull point $m$ in the high-dimensional space closer to the high-dimensional feature $P$ while pushing it further away from the high-dimensional feature $N$, so that the network learns a latent space closer to the user's edits.
$N=\sum\nolimits_{i=1}^{k}\frac{1}{||m-n_{i}||_{2}^{2}}\times n_{i}$ (2)
where $n_{i}$ denotes the features in the high-dimensional space corresponding to the points found in the 2D space.
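A minimal sketch of Formulas (1) and (2), assuming the 512-dimensional latent features described in Section 3.4; the function and variable names are ours.
```python
import torch

def weighted_reference(m: torch.Tensor, neighbors: torch.Tensor) -> torch.Tensor:
    """Inverse-squared-distance weighted sum (Formulas (1) and (2)).

    `m` is the moved point's feature, shape (d,); `neighbors` holds the
    k reference features, shape (k, d).
    """
    w = 1.0 / (neighbors - m).pow(2).sum(dim=1)      # 1 / ||m - p_i||_2^2
    return (w.unsqueeze(1) * neighbors).sum(dim=0)   # sum_i w_i * p_i

# P uses the k nearest same-label features around m's position after the
# move; N uses the k nearest different-label features before the move:
# P = weighted_reference(m_feat, p_feats)
# N = weighted_reference(m_feat, n_feats)
```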
The classification loss $loss_{cls}$ is obtained by computing the cross-entropy (CE) between the predicted labels and the ground-truth labels. The distance difference loss $loss_{dis}$ is computed from the points the user moved (see Formula (3)).
$loss_{dis}=\sum\nolimits_{i=1}^{D}\max(||m_{i}-P_{i}||_{2}^{2}-||m_{i}-N_{i}||_{2}^{2}+\delta,0)$
(3)
where $D$ is the number of points the user moved, $m_{i}$, $P_{i}$ and $N_{i}$ are the corresponding features in the high-dimensional space, and $\delta$ is the margin controlling the distance between $P_{i}$ and $N_{i}$.
The total loss is a weighted sum of the classification loss and the distance difference loss (see Formula (4)). With this total loss, the network can be retrained based on where the user moved the points, and its output becomes closer to the user's changes, so the user can assist the network in better classification.
$Loss=w_{cls}\times loss_{cls}+w_{dis}\times loss_{dis}$ (4)
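Formulas (3) and (4) translate directly into the following PyTorch sketch; the weights $w_{cls}$, $w_{dis}$ and the margin $\delta$ are placeholders, since the paper does not report their values.
```python
import torch
import torch.nn.functional as F

def distance_loss(m, P, N, delta: float) -> torch.Tensor:
    """Formula (3): hinge on squared distances, summed over the D moved
    points. `m`, `P`, `N` all have shape (D, feature_dim)."""
    d_pos = (m - P).pow(2).sum(dim=1)   # ||m_i - P_i||_2^2
    d_neg = (m - N).pow(2).sum(dim=1)   # ||m_i - N_i||_2^2
    return torch.clamp(d_pos - d_neg + delta, min=0.0).sum()

def total_loss(logits, targets, m, P, N,
               w_cls: float = 1.0, w_dis: float = 1.0,
               delta: float = 1.0) -> torch.Tensor:
    """Formula (4): weighted sum of cross-entropy and the distance loss."""
    return w_cls * F.cross_entropy(logits, targets) + \
           w_dis * distance_loss(m, P, N, delta)
```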
### 3.4 Network and Dimensionality Reduction Method Selection
After comparisons, we choose the classic ResNet18 [29] as the base classification network. We use a pretrained ResNet18 and, to avoid overfitting, freeze its first few layers. The fully connected part of the network consists of two layers with 2048 and 512 nodes, respectively. We apply a dimensionality reduction method to the 512-dimensional layer to visualize the data in the user workspace. The network is trained with a learning rate between 0.001 and 0.0005, using the Adam optimizer with a fixed batch size of 128.
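A plausible reconstruction of this backbone in PyTorch is sketched below; exactly which stages are frozen, and the torchvision weights API (torchvision >= 0.13), are our assumptions.
```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

class SpaceEditingNet(nn.Module):
    """Pretrained ResNet18 trunk plus 2048- and 512-unit FC layers;
    the 512-d activation is the latent feature projected into the
    2D workspace."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
        self.trunk = nn.Sequential(*list(backbone.children())[:-1])
        # Freeze the first few stages (the exact depth is an assumption).
        for stage in list(self.trunk.children())[:6]:
            for p in stage.parameters():
                p.requires_grad = False
        self.fc1 = nn.Linear(512, 2048)
        self.fc2 = nn.Linear(2048, 512)
        self.head = nn.Linear(512, num_classes)

    def forward(self, x):
        z = self.trunk(x).flatten(1)
        latent = torch.relu(self.fc2(torch.relu(self.fc1(z))))
        return self.head(latent), latent  # logits and 512-d latent

model = SpaceEditingNet()
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
```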
We tested four commonly used dimensionality reduction methods on the garbage classification dataset, namely PCA, MDS, t-SNE, and Isomap (Fig. 8). Isomap preserves the distance relationships between samples after dimensionality reduction and finds the real manifold of the high-dimensional data, so the reduced results correspond more accurately to the high-dimensional feature vectors. We therefore choose Isomap as the system's dimensionality reduction method.
Figure 8: Example of dimensionality reduction results for PCA, MDS, t-SNE,
and Isomap.
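With scikit-learn, the comparison above can be reproduced roughly as follows; the neighborhood size for Isomap is an assumption, as the paper does not state it.
```python
from sklearn.decomposition import PCA
from sklearn.manifold import MDS, TSNE, Isomap

# `latent` is the (n_samples, 512) array of features from the 512-unit layer.
reducers = {
    "PCA": PCA(n_components=2),
    "MDS": MDS(n_components=2),
    "t-SNE": TSNE(n_components=2),
    "Isomap": Isomap(n_neighbors=10, n_components=2),
}
embeddings = {name: r.fit_transform(latent) for name, r in reducers.items()}
coords_2d = embeddings["Isomap"]  # workspace coordinates
```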
## 4 EVALUATION
### 4.1 Performance Metrics
In order to accurately evaluate the effectiveness of our method, we use the visualization results as subjective evaluation metrics and the network's performance metrics as objective evaluation metrics.
Subjective evaluation metrics. The visualization results after retraining are an important consideration in evaluating our method. They reflect not only the quality of the network's classification results but also whether the user's editing of the latent space was effective. If the projection results after user-guided retraining are close to the user's edits, we can conclude that the network has learned the edited latent space.
Objective evaluation metrics. We use micro-F1 and ROC, which are commonly used in classification, as performance metrics. For the final retraining results we report ROC, and we also plot the micro-F1 of each epoch as a micro-F1 curve.
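Both metrics are standard; a minimal sketch with scikit-learn follows, where micro-averaging the multi-class ROC is our assumption, since the paper only says "ROC".
```python
from sklearn.metrics import f1_score, roc_curve, auc
from sklearn.preprocessing import label_binarize

# y_true: (n,) integer labels; y_pred: (n,) predicted labels;
# y_score: (n, 4) softmax probabilities for the four classes.
micro_f1 = f1_score(y_true, y_pred, average="micro")

y_bin = label_binarize(y_true, classes=[0, 1, 2, 3])
fpr, tpr, _ = roc_curve(y_bin.ravel(), y_score.ravel())  # micro-averaged
roc_auc = auc(fpr, tpr)
```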
### 4.2 Datasets Selection
We evaluate the proposed system using three datasets of varying difficulty: a bronze dataset (self-collected), a garbage classification dataset [1], and a head pose dataset [26] (Fig. 9). We invited users with different backgrounds to conduct case studies on these datasets. Each of the three datasets has four classes. The bronze dataset consists of 800 images, of which about 400 form the training set, about 150 the validation set, and about 250 the test set. The garbage classification and head pose datasets each contain 700 images, of which about 400 form the training set, about 100 the validation set, and the remaining roughly 200 the test set. The experimental results demonstrate whether our system is useful in different scenarios.
Figure 9: Example images for the three selected datasets.
The bronze dataset represents datasets from a specialized field: the similarities between bronzes from adjacent ages are so great that only experienced archaeologists can classify them accurately. The garbage classification dataset has multiple sub-classes under each class, which poses a challenge to the network's classification. The head pose dataset has orientation properties that are difficult for networks to discern; networks are particularly poor at distinguishing facing left from facing right. In discriminating the images in these three datasets, humans have an advantage that machines lack.
## 5 CASE STUDY
### 5.1 Approach
We first train on raw data to obtain a network for visualizing images. Then,
the high-dimensional features are projected into a 2D workspace through the
network, which facilitates user interaction with high-dimensional hidden
vectors. The user can help network learning by adjusting the position of
projected points in the 2D workspace. When the user operates the projection
points in the 2D workspace, the network learns from the changes in the projection points before and after the move, thereby achieving human-assisted network retraining.
### 5.2 Task
To verify the system's functions and the effectiveness of the method, we invited users with different backgrounds to evaluate them. Their task was to move the data in the 2D workspace based on their knowledge, then retrain the network and evaluate the updated results. When to stop updating was at the user's discretion. The case study proceeds as follows:
1. 1.
Preparation stage. Users watch a video explaining the principles and functions
of our system, after which we further explain to users how the system works
and their tasks, and answer their questions;
2. 2.
Practice stage. Users use the system for 10 minutes to familiarize themselves with its functions;
3. 3.
Test stage. The user formally starts the test. For training sets of different difficulty, we stipulated different operation times: 45
minutes for the bronze dataset, 25 minutes for the garbage classification
dataset, and 20 minutes for the head pose dataset;
4. 4.
Interview stage. We interview the user about the functionality of the system and the results of the case study.
### 5.3 Bronze Dataset
The difficulty in dating bronzes lies in whether the judge can grasp the characteristics of each age while also understanding the styles, crafts, and casting methods that formed in different regions. Often only well-trained experts can make accurate judgments, and even experts need to compare features between images many times when judging. For this case study, we invited an archaeology researcher, $E_{A}$, to use our system; he has seven years of experience studying archaeology and has participated in the collation and classification of large bronze datasets.
#### 5.3.1 User Operation
We prepared the dataset and pretrained network in advance for $E_{A}$. $E_{A}$ first looked at the classes of the dataset and found that the data contained bronzes from the early Western Zhou and late Shang dynasties. Based on his expertise, he speculated that the classification accuracy for these two dynasties might be lower. He then carefully observed the distribution of the projected data and found no clear boundary between the projections of these two dynasties, which confirmed his conjecture. At the same time, he found that data far from the center of a class cluster often come from a transition period between ages; such data mix the characteristics of two ages, and the network classifies them poorly, so he focused on this part of the data. He then took the center of each class cluster as a reference frame and the data far from the cluster centers as the main data to move. His method was to classify the data at a cluster center according to a particular characteristic of the bronzes, such as shape, dividing the data at the center of each class cluster into several sub-classes. After that, he moved the data far from a cluster center to the places he saw fit near that center. He said in a later interview that he wanted to subdivide each class based on important features such as the shape and decoration of the bronzes. Through the system's interactive features, the researcher could easily move the data to where it seemed reasonable.
Figure 10: Study results on the bronze dataset. (a) Comparison of retrained
projection distributions without and with user editing. (b) Comparison of
micro-F1 curves obtained by retraining without and with user editing. (c)
Comparison of ROC curves obtained from the final results of retraining without
and with user editing.
#### 5.3.2 Results
There is no clear boundary in the retrained projection distribution without $E_{A}$'s editing, but sharp boundaries emerge in the retrained projection distribution with his editing (Fig. 10(a)). Interestingly, we also find that in the retrained projection distribution with $E_{A}$'s editing, the clusters from left to right correspond to the order of the bronze ages. The micro-F1 curves (Fig. 10(b)) and ROC curves (Fig. 10(c)) show that the performance of the network improved significantly after $E_{A}$'s editing.
#### 5.3.3 User Feedback
"The system can prompt me with similarities and differences between data that I would otherwise ignore." $E_{A}$ believes the system can help him discover relationships between data, which fits his use case of partitioning data by constantly comparing the differences between them. He could easily observe the data using the system's capabilities, and he could further compare similarities and differences between the data using the history record module. He even came up with new usages, such as using our spatial layout structure to divide up the task of processing archaeological data, which could greatly reduce his workload. In addition, he commented, "Both shape and decoration are very important to the classification of bronzes, and it is difficult for us to take both into account when we classify these data manually, but the 2D layout structure solves this problem to a certain extent." Placing data with multiple characteristics between clusters, rather than in just one of them, is an advantage of a 2D spatial layout.
However, $E_{A}$ also said that people in the field of archaeology often use various software tools to process data. From his point of view, as someone outside the field of deep learning, our system is still difficult to get started with and requires a certain adaptation stage. At the same time, archaeological datasets are inherently difficult, so more straightforward systems tend to be more applicable.
### 5.4 Garbage Classification Dataset
The difficulty of the garbage classification dataset is that each class contains many subclasses. We invited deep learning researcher $E_{B}$ to conduct this case study; he has 7 years of computer experience and 4 years of deep learning experience, and his main research direction is fine-grained classification. He is also proficient in basic garbage sorting rules.
Figure 11: Study results on the garbage classification dataset. (a)
Comparison of retrained projection distributions without and with three edits
by the user. (b) Comparison of micro-F1 curves obtained by retraining without
and with three edits by the user. (c) Comparison of ROC curves obtained from
the final results of retraining without and with three edits by the user.
#### 5.4.1 Process
$E_{B}$ initially used the system's display adjustment and zoom-in comparison functions. In the post-use interview, $E_{B}$ said he had experience designing user interfaces, so he paid particular attention to whether the system could display data clearly and support easy comparison. $E_{B}$ found the system's functions very comprehensive and the logic between components smooth, meeting his needs.
$E_{B}$ examined the relationships between the data using our system together with the current classification accuracy. He found that classes with low classification accuracy were projected less densely, while classes with high classification accuracy were projected more tightly. He speculated that the network's feature learning for the sparse data was not thorough enough, so he focused on the data whose predictions were not dense enough. He first moved a small amount of deviated data near the corresponding clusters and set the training period to 8 epochs, but the result after retraining did not change much. He guessed this might be because he had moved too little data. Reasoning from the relationship between the fully connected layer and the latent space, he decided to move data with wrong predicted values near data with correct predicted values, while also performing intra-class aggregation for each class. In a subsequent interview, he explained that the fully connected layer of the network can be seen as a function, so whether the predicted value is correct depends on the input features: if wrongly predicted features are moved closer to correctly predicted ones, the network output is more likely to change from wrong to correct. $E_{B}$ then retrained a second time following this idea, with a training period of 12 epochs, and the accuracy improved significantly. He also judged from the clear gaps between classes in the retrained visualization that his idea was feasible. He continued with the same idea, focusing the third round on the purple class, which had no clear boundaries; he mainly separated the purple class from the others and retrained a third time. After the third retraining, the accuracy improved significantly, and there were obvious gaps between the classes in the visualization results. $E_{B}$ was very satisfied with the result of this movement and ended the operation.
#### 5.4.2 Results
The retrained visualization results without editing and with $E_{B}$'s three rounds of editing (Fig. 11(a)) clearly show the process of separating the originally mixed data. The changes that user interaction brings to the network are also visible in the micro-F1 curves (Fig. 11(b)) and ROC curves (Fig. 11(c)). Together, the metric curves and visualization results demonstrate the effectiveness of our method.
#### 5.4.3 User Feedback
$E_{B}$ praised the utility of the system and commented, "The system is so novel that it is intelligible to be able to connect the latent space with data so directly." $E_{B}$ also said he cares most about how the system presents data to the user. He found our system fully functional: it not only has basic zooming, viewing, and guidance functions but can also adjust the display density, presenting the data well, and it is very user-friendly. In addition, $E_{B}$ repeatedly examined the retraining results; he believes the network gradually learned the latent space after his edits and that the retrained network shows better performance.
$E_{B}$ also suggested that a 2D spatial layout might not be sufficient for complex distributions of data points. Although we use Isomap, a dimensionality reduction method that maintains a correspondence between the low-dimensional and high-dimensional spaces, it cannot fully reflect the relationships between features. In future work, we will try to extend the spatial layout to three-dimensional space.
Figure 12: Study results on the head pose dataset. (a) Comparison of
retrained projection distributions without and with editing by six users. (b)
Comparison of mean micro-F1 curves obtained by retraining without and with
editing by six users. (c) Comparison of ROC curves obtained from the final
results of retraining without and with editing by six users.
### 5.5 Head Pose Dataset
The head pose dataset includes four classes: facing up, facing down, facing left, and facing right. We invited six graduate students (three male, three female) to evaluate our system; their majors include bioinformatics, computer vision, and recommender systems, and all of them are deep learning beginners.
#### 5.5.1 Process
After brief practice, the users quickly mastered the system and understood the meaning of its spatial layout.
They generally began by checking the classification of the projected data. Four users noticed that the facing-up class has the most tightly aggregated projections and the highest classification accuracy; they concluded that the density of the projected data affects accuracy and that aggregating scattered data is an efficient way to move. Another user found that the projected data of the facing-left and facing-right classes were almost completely mixed and that both classes had the lowest classification accuracy. Drawing on his own knowledge, he assumed that the network distinguishes poorly between the concepts of facing left and facing right, and therefore focused on these two classes. The last user knows some visualization techniques, so she paid particular attention to the distribution of the projections before and after moving. She observed that the data of the four classes were more or less mixed together. “If I separate the data of the four mixed classes, will the network learn such a distribution?” With this idea in mind, she used the functions of the system to process the projection data.
#### 5.5.2 Results
Compared to the visualization result without editing, the visualization results of all six users show clear demarcations between the four classes (Fig. 12(a)). In particular, the purple, red, and orange classes, which were originally mixed, form clear cluster structures after the users' editing and retraining. The micro-F1 curves (Fig. 12(b)) and ROC curves (Fig. 12(c)) show that the users' editing significantly improved the performance of the network.
#### 5.5.3 User Feedback
All six users expressed a positive view of the retrained results. They all felt that the updated network had learned their “knowledge” and that, by observing the process and results of moving the data, they gained a deeper understanding of the concept of latent space. One user commented, “The method used by this system meets my needs in an application, and it solves many invisible problems encountered in network training, making the whole process more intuitive and clear.” Making the originally uncontrollable training process controllable has always been the goal of our efforts. Regarding the design of the system interface, the users generally felt that the spatial layout lets them grasp the spatial location of the samples, the thumbnail images, and the overall distribution of the data, and is concise and intuitive.
## 6 Discussion
The system facilitates the observation of data and helps to discover relationships between data. Users can intuitively observe the similarities and differences between data items, use the spatial layout to uncover similarities between adjacent data that might otherwise be overlooked, and likewise uncover differences between data that lie far apart. In addition, the system helps users organize data. For example, archaeologists must inspect every image when classifying archaeological data, a tedious and repetitive process; by adopting our spatial layout, they can partition the data according to the projection results, which greatly reduces the workload.
The system facilitates the partitioning of data. The combination of spatial layout and interactive functions helps to separate data not only between classes but also within a class, according to attributes such as shape or pattern. Such a spatial layout can also provide a buffer for the user. For example, because eras blend continuously into one another, bronze wares often show the characteristics of two eras, and simply assigning such items to a single era would be biased. Using the two-dimensional layout of our system, the user can resolve this by placing these items between the two era clusters.
The system gives users a more understandable latent space. It connects the high-dimensional latent space with a 2D workspace, allowing users to apply their knowledge to the training process of the network; this can help the network escape local minima and improve its performance. Moreover, while the concept of latent space is rather abstract, users can intuitively follow how the latent space changes while using the system.
The system lets users accelerate the learning process of the network. By visualizing the changes before and after retraining, the user can see that the updated results move closer to the edits they made. The training process of the original network is otherwise opaque, but our system exposes this process to some extent.
The system functions are well designed, easy to use, and display the data clearly. In the latter two case studies, the participants rated the system as easy to use, an assessment that differed from that of researcher $\mathit{E}$A; we therefore conclude that, for users with some deep learning background, the system is relatively easy to use. In addition, all users felt that the interactive functions of the system are very reasonable, meet their needs in use, and, combined with the system layout, display the data clearly.
## 7 CONCLUSION
In this paper, we introduce an interactive system that not only connects a high-dimensional latent space with a 2D workspace but also allows the user to interact with the visualized data and retrain the network, enabling humans to help the network learn. The results of three case studies show that latent vectors can be edited by the user and that, through the system we designed, human knowledge about classification can be incorporated into the training process of the network, thereby improving its performance.
## Acknowledgments
This work is supported in part by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 62206106) and Jilin University (Grant No. 419021422B08).
# Numerical fluid dynamics for FRG flow equations:
Zero-dimensional QFTs as numerical test cases.
I. The $O(N)$ model
Adrian Koenigstein (Institut für Theoretische Physik, Goethe University, Max-von-Laue-Straße 1, D-60438 Frankfurt am Main, Germany)
Martin J. Steil (Technische Universität Darmstadt, Department of Physics, Institut für Kernphysik, Theoriezentrum, Schlossgartenstraße 2, D-64289 Darmstadt, Germany)
Nicolas Wink (Institut für Theoretische Physik, University Heidelberg, Philosophenweg 16, D-69120 Heidelberg, Germany)
Eduardo Grossi (Center for Nuclear Theory, Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, U.S.A.)
Jens Braun (Technische Universität Darmstadt, Department of Physics, Institut für Kernphysik, Theoriezentrum, Schlossgartenstraße 2, D-64289 Darmstadt, Germany; Helmholtz Research Academy Hesse for FAIR, Campus Darmstadt, D-64289 Darmstadt, Germany; ExtreMe Matter Institute EMMI, GSI, Planckstraße 1, D-64291 Darmstadt, Germany)
Michael Buballa (Technische Universität Darmstadt, Department of Physics, Institut für Kernphysik, Theoriezentrum, Schlossgartenstraße 2, D-64289 Darmstadt, Germany; Helmholtz Research Academy Hesse for FAIR, Campus Darmstadt, D-64289 Darmstadt, Germany)
Dirk H. Rischke (Institut für Theoretische Physik, Goethe University, Max-von-Laue-Straße 1, D-60438 Frankfurt am Main, Germany; Helmholtz Research Academy Hesse for FAIR, Campus Riedberg, Max-von-Laue-Straße 12, D-60438 Frankfurt am Main, Germany)
###### Abstract
The functional renormalization group (FRG) approach is a powerful tool for
studies of a large variety of systems, ranging from statistical physics over
the theory of the strong interaction to gravity. The practical application of
this approach relies on the derivation of so-called flow equations, which
describe the change of the quantum effective action under the variation of a
coarse-graining parameter. In the present work, we discuss in detail a novel
approach to solve such flow equations. This approach relies on the fact that
RG equations can be rewritten such that they exhibit similarities with the
conservation laws of fluid dynamics. This observation can be exploited in
different ways. First of all, we show that this allows us to employ powerful numerical techniques developed in the context of fluid dynamics to solve RG equations. In particular, it allows us to reliably treat the emergence of non-
analytic behavior in the RG flow of the effective action as it is expected to
occur in studies of, e.g., spontaneous symmetry breaking. Second, the analogy
between RG equations and fluid dynamics offers the opportunity to gain novel
insights into RG flows and their interpretation in general, including the
irreversibility of RG flows. We work out this connection in practice by
applying it to zero-dimensional quantum-field theoretical models. The
generalization to higher-dimensional models is also discussed. Our findings
are expected to help improve future FRG studies of quantum field theories in
higher dimensions both on a qualitative and quantitative level.
Functional Renormalization Group, conservation laws, numerical fluid dynamics,
$O(N)$ model, zero-dimensional QFT
###### Contents
1. I Introduction
2. II The Functional Renormalization Group – an introduction in zero dimensions
1. II.1 The partition function in zero dimensions
2. II.2 Solving integrals with flow equations
1. II.2.1 The scale-dependent partition function
2. II.2.2 A flow equation for the scale-dependent partition function
3. II.3 The Functional Renormalization Group equation
1. II.3.1 The scale-dependent Schwinger functional
2. II.3.2 The scale-dependent effective action
3. II.3.3 The Exact Renormalization Group equation
4. II.4 Contextualization with FRG in higher-dimensional space-time
5. II.5 $n$-point correlation functions
3. III The $O(N)$ model in zero dimensions and its treatment within the FRG
1. III.1 The zero-dimensional $O(N)$ model
2. III.2 Symmetry restoration during the RG flow
3. III.3 FRG formulation and flow equations
1. III.3.1 The exact RG flow equation of the zero-dimensional $O(N)$ model
2. III.3.2 FRG Taylor (vertex) expansion of the $O(N)$ model
4. IV FRG flow equations and (numerical) fluid dynamics
1. IV.1 Conservative form of FRG flow equations – advection-diffusion equations
1. IV.1.1 The conservative form
2. IV.1.2 Advection-diffusion equation, irreversibility of RG flows, and entropy production
2. IV.2 Finite-volume method
3. IV.3 Kurganov-Tadmor (KT) central scheme
4. IV.4 Boundary conditions and computational domain in FRG flow equations
1. IV.4.1 Boundary condition at $\sigma=0$
2. IV.4.2 Boundary condition at $\sigma\rightarrow\infty$
5. V Zero-dimensional field theory as testing ground for FRG
1. V.1 Test case I: Non-analytic initial condition
1. V.1.1 General discussion of the FRG flow – advection and diffusion
2. V.1.2 Tests of the spatial resolution $\Delta x$
3. V.1.3 Tests of the size of the computational domain
4. V.1.4 Tests of the UV and IR scales
2. V.2 Test case II: $\phi^{4}$ theory
1. V.2.1 Results obtained using the KT scheme
2. V.2.2 Results obtained using the FRG Taylor (vertex) expansion
3. V.3 Test case III: $\phi^{6}$ potential
4. V.4 Test case IV: the $\sigma=0$ boundary
6. VI Conclusions and outlook
7. A Numerical derivatives
8. B Coleman-Mermin-Wagner-Hohenberg theorem in zero dimensions: Absence of spontaneous symmetry breaking and of phase transitions
1. B.0.1 Ehrenfest classification of phase transitions
2. B.0.2 Landau’s theory of phase transitions
3. B.0.3 The Coleman-Mermin-Wagner-Hohenberg theorem
4. B.0.4 Phase transitions during the RG flow
## I Introduction
In statistical mechanics and quantum field theory (QFT) the central objective
is to compute the expectation values of physical observables from a partition
of probabilities among the various microscopic states of a given model or
theory. On a technical level, the calculation of expectation values oftentimes
corresponds to the evaluation of nested sums (for discrete systems) or
complicated high-dimensional integrals (for continuous systems) in the
framework of partition functions or functional integrals. In most cases such
computations cannot be done analytically. Various methods were developed to
overcome this difficulty. Focusing on high-energy physics, stochastic methods
have been developed to study Quantum Chromodynamics from first principles (see
Refs. [1, 2, 3, 4] for reviews), but also systematic approximation schemes
such as (chiral) perturbation theory (see Refs. [5, 6] for reviews) or the
large-$N$ expansion [7, 8, 9] have been employed, where (at least) parts of
the calculations can still be performed analytically. Within the last decades
non-perturbative holographic and functional methods, such as the AdS/CFT
correspondence [10, 11], Dyson-Schwinger equations (see Ref. [12] for a
review), and the (Functional) Renormalization Group ((F)RG) (see Ref. [13] for
a recent review) have significantly gained importance and nowadays provide a
viable complement to Monte-Carlo simulations and semi-analytic methods.
However, despite great success within various areas of physics, holographic
and functional methods are sometimes still criticized for not providing reliable systematic and numerical error estimates. In this work, we take important steps toward amending this shortcoming for the FRG approach.
Although the mathematical formulation of the FRG approach is in principle
exact, a first source of systematic errors is introduced by the fact that one
has to make certain approximations (truncations) in order to actually perform
calculations. However, since the method is non-perturbative, the
identification of, e.g., a small expansion parameter is challenging, if at all
possible. A lot of work has already been invested into this question, e.g.,
approximation errors can be evaluated by comparing different truncation
schemes and truncation orders against each other [14, 15, 16]. Furthermore,
the comparison with other non-perturbative methods [17], effective field
theories [18, 19, 20, 14, 21], or with Monte-Carlo studies [22, 23, 24, 25]
can provide estimates on the reliability of the results.
A second source of systematic errors arises from the way the RG flow equations
are solved in practice. In recent work by two of us and collaborators [26,
27], it was pointed out that the possible appearance of non-analytic behavior
in field space as well as the influence of the boundary conditions require
great care in the numerical solution of RG flow equations. In particular, it
was shown that these equations can be cast into a conservative form, such that
analogies to fluid-dynamical flow equations become manifest and give access to the highly developed toolbox of numerical fluid dynamics, which in the case of Refs. [26, 27] included the discontinuous Galerkin method. In
consequence, this suggests that a systematic analysis of the quality of the
different numerical methods to solve RG flow equations as well as an analysis
of the structure of the RG flow equations themselves is in order. The question
of numerical errors in FRG calculations was systematically addressed in Ref.
[26] by a comparison of numerical results with analytically known solutions
for the $O(N)$ model in the large-$N$ limit [28, 29, 30]. Furthermore,
phenomena like shock waves in the derivative of the effective potential along
the field space direction during the RG flow, which are directly related to
phase transitions [26, 27, 31, 32], were resolved and interpreted in a fluid-
dynamical framework.
The goal of the present work is threefold. On the one hand, we will continue
to elaborate on the analogies between RG flow equations and (numerical) fluid
dynamics, including precision and stability tests for numerical schemes. On
the other hand, we will contribute to the ongoing discussion on truncation
schemes of the FRG framework. In addition, this article is supposed to provide
a low-level introduction to the FRG method within the fluid-dynamic mindset
also for non-experts and (under)graduate students.
In order to provide reliable estimates of the precision of numerical methods
and the quality of truncation schemes, the standard approach is to compare
numerical results and/or results from truncations against analytically known
results. However, analytically known results for non-trivial QFTs or
statistical mechanics are scarce. Fortunately, there is a class of non-trivial
QFTs, where either analytic results are known or numerical results can be
easily obtained with arbitrary precision: zero-dimensional QFTs. In this work,
we choose the zero-dimensional $O(N)$ model as a testing ground to
systematically analyze the precision of the numerical methods which are used
to solve the RG flow equations. Furthermore, we will use zero-dimensional QFT
to demonstrate the similarities between RG flow equations and conservation
laws from fluid dynamics (which also generalize to an arbitrary number of
space-time dimensions and different field content). We will elucidate the
different roles played by advective and diffusive contributions in the RG flow
equations as partial differential equations (PDEs). Furthermore, we start a
discussion of the relation between the RG time, entropy production in the RG
flow, the dissipative character of the FRG equation, and the irreversibility
of RG transformations during the RG flow. This discussion is deepened in parts II and III of this series of publications [33, 31].
In order to numerically solve the RG flow equations, in this work we apply the
Kurganov-Tadmor scheme, a finite-volume method which is well-established in
numerical fluid dynamics. We test the accuracy of the FRG results against
direct evaluations of expectation values from the partition function, which can in principle be calculated to arbitrary precision in zero space-time dimensions. We note that the RG flow equations arising in the FRG framework
for certain zero-dimensional models, and in particular the $O(N)$ model, are
exact PDEs. Therefore, they do not involve any systematic error of the first
kind mentioned above, namely truncation errors. Possible errors are therefore
solely of the second kind, introduced by the numerical scheme used to solve
the flow equations.
As a next step, we will analyze the FRG Taylor expansion as a truncation to
the FRG approach and contrast our findings with the general properties of the
FRG equation as a non-linear PDE in zero space-time dimensions. In a follow-up
publication, we will also introduce more elaborate zero-dimensional models
including Grassmann numbers (mimicking fermionic degrees of freedom in $d=0$)
[34]. In this context, we will apply the methods developed in the present work
to investigate several truncation schemes by comparing against exact results
for a constructed fermion-boson model. Generalizing our findings from zero
dimensions to higher-dimensional QFTs is not necessarily trivial.
Nevertheless, we will comment on this issue at various places throughout this
work. We thus hope that this paper will contribute to ongoing debates on
subtleties of the RG flow equations. Furthermore, we hope to establish
reliable minimal requirements for numerical methods to solve RG flow
equations, which can be used as benchmark tests for future numerical
toolboxes.
The length of this paper is explained by the fact that we have tried to make the presentation as self-contained as possible. This should enable the
reader not familiar with the FRG approach to understand all arguments and
intermediate steps without resorting to the literature. The more experienced
reader can certainly skip or skim over some parts, as indicated below.
The remainder of this paper is organized as follows. In Sec. II we give an
introduction to the FRG approach for zero-dimensional QFTs. In Sec. III we
focus on the zero-dimensional $O(N)$ model and its respective RG flow
equation. Readers familiar with the FRG approach and the $O(N)$ model can omit
these two sections. The relationship between RG flow equations and fluid
dynamics is discussed in Sec. IV. Readers familiar with fluid dynamics may be
interested in the analogy between the FRG and fluid dynamics discussed in
Sub.Sec. IV.1, but can skip over the remainder of this section that focuses on
details of the numeric implementation. Section V presents our numerical
results. Readers familiar with both the FRG approach and fluid dynamics should
focus on this section and the Sub.Sec. IV.1. We conclude this work with a
discussion and an outlook for future studies in Sec. VI. In the Appendices, we
list useful formulas for the calculation of numerical derivatives and present
a discussion of the absence of spontaneous symmetry breaking in zero space-
time dimensions.
## II The Functional Renormalization Group – an introduction in zero
dimensions
This section provides an introduction to the Functional Renormalization Group
and a detailed derivation of the FRG equation [35, 36, 37] for a zero-
dimensional QFT. Our discussion is geared towards non-experts. Readers who are
familiar with the FRG method might still find this discussion instructive,
because we will introduce the FRG without any direct reference to
regularization and renormalization, only based on properties of (functional)
integrals. This sheds light on the details and structure of the flow equations
and the technical subtleties in their solution. In addition, we use this introduction to establish some notation and to point out special features of zero-
dimensional field theory.
As already mentioned in the introduction, the efficient and sufficiently
precise calculation of correlation functions is key to understanding the
properties of a particular model or theory. Usually this is done by
introducing a partition function or functional integral that provides a
probability distribution for the microstates of the model and serves as a
generating functional for the $n$-point-correlation functions [38, 39, 40,
41]. The partition function is based on an energy function that can be a
discrete or continuous Hamilton function or an action, which determines the
microscopic properties of the model. Another way of calculating the $n$-point
correlation functions is to calculate the effective infrared action of the
model, for example via the FRG equation. Both methods are discussed in this
section.
### II.1 The partition function in zero dimensions
Consider a zero-dimensional QFT with a single real bosonic scalar field or
degree of freedom $\phi$. While all definitions generalize to arbitrary QFTs
in zero or higher dimensions and arbitrary space-time backgrounds, in zero
dimensions the field $\phi$ does not depend on the space-time position. The
same applies to derivatives of the field or space-time integrals, which simply
do not exist. This implies that the action $\mathcal{S}[\phi]$ of the model is
identical to the Lagrangian $\mathcal{L}[\phi]$. The action, the Lagrangian,
and also the Hamiltonian $\mathcal{H}[\phi]$ are simply functions of $\phi$
instead of functionals (footnote: nevertheless, we will stick to the notation of functionals using square brackets, in order to facilitate the generalization to a nonzero number of space-time dimensions, as long as we do not focus on particular zero-dimensional examples). Furthermore, because of the absence of
a space-time derivative and thus of kinetic terms,
$\mathcal{S}[\phi]=\mathcal{L}[\phi]=\mathcal{H}[\phi]=U(\phi)$, where
$U(\phi)$ is the potential. Therefore, the only requirement for these
functions is that they must be bounded from below, in order to exclude
“negative-energy states” (footnote: we put “negative-energy states” in quotation marks, because all quantities in zero-dimensional field theory are dimensionless, hence bare numbers without physical dimensions; for convenience, we will still use the well-established notions from higher-dimensional QFT in our discussion) and to obtain positive normalizable
probability distributions. Apart from this requirement, for the moment we do
not demand any additional properties, like symmetries (e.g., $\mathbb{Z}_{2}$,
$\phi\rightarrow-\phi$) or analyticity.
If we choose a specific model with action $\mathcal{S}[\phi]$ all expectation
values of arbitrary functions $f(\phi)$ that do not grow exponentially in
$\phi$ are defined and can be calculated via the following expression
$\displaystyle\langle
f(\phi)\rangle\equiv\frac{\int_{-\infty}^{\infty}\mathrm{d}\phi\,f(\phi)\,\mathrm{e}^{-\mathcal{S}[\phi]}}{\int_{-\infty}^{\infty}\mathrm{d}\phi\,\mathrm{e}^{-\mathcal{S}[\phi]}}\,,$
(1)
where $\mathrm{e}^{-\mathcal{S}[\phi]}$ provides the partition of
probabilities among the microstates. Note that due to the zero-dimensional
nature all expectation values for such a model reduce to proper one-
dimensional integrals over $\phi$. Such integrals can be computed to extremely
high precision using standard techniques of numerical integration [42, 43]. It
is worth emphasizing that the current discussion holds also for non-analytic
$\mathcal{S}[\phi]$ and/or $f(\phi)$. Some specific choices of
$\mathcal{S}[\phi]$ and $f(\phi)$ even allow for an analytic evaluation of Eq.
(1), see, e.g., Ref. [44]. The possibility to compute expectation values to
high precision makes zero-dimensional field theory of great interest as a
testing ground for approximations and/or numerical methods.
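To illustrate this point, the following minimal sketch (our addition; the quartic action is merely an example) evaluates Eq. (1) with standard numerical quadrature:

```python
import numpy as np
from scipy.integrate import quad

# Example action, bounded from below; any such choice can be used in Eq. (1).
def S(phi):
    return -0.5 * phi**2 + phi**4 / 24.0

# Expectation value <f(phi)> as a ratio of two ordinary 1D integrals, Eq. (1).
def expectation(f):
    num, _ = quad(lambda p: f(p) * np.exp(-S(p)), -np.inf, np.inf)
    den, _ = quad(lambda p: np.exp(-S(p)), -np.inf, np.inf)
    return num / den

print(expectation(lambda p: p**2))  # e.g. the moment <phi^2>
```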
Some explicit examples of zero-dimensional field theories used as a testing
ground for methods in statistical mechanics and QFT can be found in Refs. [45,
46, 47, 48, 49, 50, 44, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
64, 65, 66]. In Ref. [55], for example, the asymptotic convergence and the
vanishing convergence radius of perturbation theory of $\phi^{4}$-theory is
discussed. Approximation schemes such as the large-$N$, the FRG vertex
expansion, or the FRG Taylor expansion were analyzed in Ref. [44]. Zero-
dimensional field theory was also used to study density-functional theory [56,
57, 59] and applied to fermionic fields [54]. Recently, it was used to study
and visualize 2PI effective actions [60] – also in the FRG framework [61, 63,
64].
Usually the calculation of expectation values is facilitated by a suitably
defined generating functional
$\displaystyle\mathcal{Z}[J]\equiv\mathcal{N}\int_{-\infty}^{\infty}\mathrm{d}\phi\,\mathrm{e}^{-\mathcal{S}[\phi]+J\,\phi}\,,$
(2)
from which one can derive all correlation functions by taking the
corresponding number of derivatives with respect to the external source $J$,
$\displaystyle\langle f(\phi)\rangle=\frac{f(\tfrac{\delta}{\delta
J})\,\mathcal{Z}[J]}{\mathcal{Z}[J]}\bigg{|}_{J=0}\,.$ (3)
One should note that if $f(\phi)$ is non-analytic, then Eq. (3) is to be
understood symbolically. Otherwise, it is defined through a Taylor series in
$\frac{\delta}{\delta J}$. Irrespective of that, Eqs. (1) and (2) are always
well defined and Eq. (2) can be always calculated for arbitrary $J$. One can
even show in zero dimensions that $\mathcal{Z}[J]\in C^{\infty}$, hence,
$\mathcal{Z}[J]$ is a smooth function, see Ref. [52] and App. B. We shall come
back to this crucial point later on in our discussion of the Coleman-Mermin-
Wagner-Hohenberg theorem [67, 68, 69].
The normalization $\mathcal{N}$ is not an observable quantity. For our
purposes, it is convenient to choose
$\displaystyle\mathcal{Z}[0]\overset{!}{=}1\,,$
$\displaystyle\longleftrightarrow$
$\displaystyle\mathcal{N}^{-1}=\int_{-\infty}^{\infty}\mathrm{d}\phi\,\mathrm{e}^{-\mathcal{S}[\phi]}\,.$
(4)
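As a sketch of how Eqs. (2)–(4) are used in practice (our illustration, with the same example action as above), one can compute the normalized generating functional by quadrature and realize the source derivatives of Eq. (3) by finite differences:

```python
import numpy as np
from scipy.integrate import quad

def S(phi):
    return -0.5 * phi**2 + phi**4 / 24.0

# Normalization chosen such that Z[0] = 1, cf. Eq. (4).
Ninv, _ = quad(lambda p: np.exp(-S(p)), -np.inf, np.inf)

def Z(J):
    val, _ = quad(lambda p: np.exp(-S(p) + J * p), -np.inf, np.inf)
    return val / Ninv

# Central finite differences realize the source derivatives of Eq. (3).
h = 1.0e-3
mean_phi = (Z(h) - Z(-h)) / (2.0 * h)             # <phi>   = Z'(0) / Z(0)
mean_phi2 = (Z(h) - 2.0 * Z(0.0) + Z(-h)) / h**2  # <phi^2> = Z''(0) / Z(0)
print(mean_phi, mean_phi2)
```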
As already mentioned above, calculating expectation values in a zero-
dimensional QFT via Eq. (1) is (numerically) rather straightforward. In
contrast, for higher-dimensional models or theories with non-trivial field-
content etc. calculating functional integrals similar to Eq. (1) with
sufficient precision is usually extremely challenging or might even be
impossible with limited computational resources. Therefore, alternative
methods or approximation schemes apart from “direct numerical integration”,
like in lattice simulations, are of great interest. One of these alternatives,
which is at the heart of this work, is the FRG.
In the following, we will therefore focus on the FRG as a specific method for
calculating $n$-point correlation functions in QFT and statistical mechanics.
In contrast to the usual motivation of the FRG, arising in the discussion of
renormalization and the integration of momentum shells from ultraviolet to
infrared energy scales, we will take a different path to arrive at the FRG
equation, which does not require any knowledge of renormalization. To this
end, we will follow and extend the discussion in Refs. [44, 51, 52, 53, 54]
and discuss its technical properties as an alternative way of solving the
integrals in Eqs. (1) and (2).
### II.2 Solving integrals with flow equations
The starting point is the observation that there is one well-known non-trivial
class of actions $\mathcal{S}[\phi]$ for which the calculation of integrals
like Eq. (1) is straightforward, even in higher dimensions and even for more
complicated field content. These actions are QFTs for “(massive) free
particles” and correspond to Gaussian-type integrals. In the present case the
Gaussian-type action takes the following simple form,
$\displaystyle\mathcal{S}[\phi]=\tfrac{m^{2}}{2}\,\phi^{2}\,,$ (5)
where $m$ is called a “mass” for convenience, although it is actually a
dimensionless quantity in zero space-time dimensions.
For non-trivial actions $\mathcal{S}[\phi]$, Eq. (1) can still be approximated
by a Gaussian integral, as long as $\mathcal{S}[\phi]$ contains a mass term
(5) with a coefficient $m^{2}$ that is much larger than all other scales
contained in $\mathcal{S}[\phi]$. If this is the case, the Gaussian part of
the integrand $\mathrm{e}^{-\mathcal{S}[\phi]}$ completely dominates the
integrals in Eqs. (1) and (2). The reason is that the mass term $\sim\phi^{2}$
is dominant for small and moderate $\phi$, and most of the area under the
curve $\mathrm{e}^{-\mathcal{S}[\phi]}$ lies in the region of small $\phi$,
similar to a pure Gaussian integral. For very large values of $\phi$ other
terms in the action $\mathcal{S}[\phi]$ may become more important.
Nevertheless, if $m^{2}$ is large enough, the corresponding area under the
curve $\mathrm{e}^{-\mathcal{S}[\phi]}$ is completely negligible in regions
where $\phi$ is large, because $\mathcal{S}[\phi]$ is bounded from below such
that $\mathrm{e}^{-\mathcal{S}[\phi]}$ tends to zero exponentially fast for
$\phi\to\infty$. In summary, the Gaussian part with the huge mass term
dominates the integral and even non-trivial $\mathcal{S}[\phi]$ can be
approximated by Gaussian integrals.
This observation generalizes to higher dimensions and arbitrary field content,
but is more apparent in a zero-dimensional field theory with one degree of
freedom. This is illustrated in Figs. 1 and 2, which are discussed in the
following subsubsection.
#### II.2.1 The scale-dependent partition function
Based on the above observation, let us now introduce the following quantity:
$\displaystyle\mathcal{Z}_{t}[J]\equiv\mathcal{N}\,\int_{-\infty}^{\infty}\mathrm{d}\phi\,\mathrm{e}^{-\mathcal{S}[\phi]-\Delta\mathcal{S}_{t}[\phi]+J\,\phi}\,,$
(6)
which is called the scale-dependent generating functional or scale-dependent
partition function. It differs from the usual partition function (2) only by a
scale-dependent mass term
$\displaystyle\Delta\mathcal{S}_{t}[\phi]\equiv\tfrac{1}{2}\,r(t)\,\phi^{2}\,.$
(7)
We directly adopt the common notation from the FRG community and call $r(t)$
the regulator (shape) function, which depends on the RG scale (“time”)
$t\in[\,0,\infty)$, see, e.g., Refs. [70, 71]. We will discuss this
interpretation of $r(t)$ and $t$ in Sub.Sec. II.4. For now, we only demand
that the function $r(t)$ has such properties that $\mathcal{Z}_{t}[J]$
interpolates between an almost Gaussian-type partition function (footnote: this is also why the UV fixed point of RG flows is denoted as the trivial or Gaussian fixed point) with extremely massive free fields at $t=0$ and the actual partition
function $\mathcal{Z}[J]$ that we are interested in at $t\rightarrow\infty$.
In order to achieve this behavior, $r(t)$ has to have the following
properties:
1. In the limit of $t\rightarrow 0$, $r(t)$ ($\mathcal{S}[\phi]$) should behave
like a mass (term), similar to what we discussed at the beginning of this
section, and be much larger than all other scales in $\mathcal{S}[\phi]$.
Oftentimes in the literature $r(t)$ is set to infinity at $t=0$. We will see,
cf. Sub.Sec. II.3, that this is not suitable.
2. For $t\rightarrow\infty$, $r(t)$ is supposed to vanish, such that
$\lim_{t\rightarrow\infty}\mathcal{Z}_{t}[J]=\mathcal{Z}[J]$. The same applies
to expectation values calculated from $\mathcal{Z}_{t}[J]$, which become
expectation values of $\mathcal{Z}[J]$. For practical calculations it is
sufficient to assume that, for $t\rightarrow\infty$, $r(t)$ becomes much
smaller than all scales in $\mathcal{S}[\phi]$, because then the contribution
$\Delta\mathcal{S}_{t}[\phi]$ to the whole integrand
$\mathrm{e}^{-\mathcal{S}[\phi]-\Delta\mathcal{S}_{t}[\phi]}$ is negligible
and the integrand is almost identical to $\mathrm{e}^{-\mathcal{S}[\phi]}$.
The value $\lim_{t\rightarrow\infty}r(t)=r_{\mathrm{IR}}\gtrsim 0$ is usually
referred to as (numerical) infrared (IR) cutoff.
3. The interpretation of $r(t)$ ($\mathcal{S}[\phi]$) as a mass (term) is
guaranteed by further demanding monotonicity, ${\partial_{t}r(t)\leq
0\quad\forall t}$. We will provide additional arguments for monotonicity in
Sub.Sub.Sec. II.2.2.
4. In order to be able to smoothly deform the integral in Eq. (2) and for the
following derivation of evolution equations, we further require $r(t)\in
C^{1}$.
Apart from these four properties there are no further requirements on $r(t)$ in zero dimensions (footnote: for the subtleties associated with the choice of regulators in higher-dimensional theories, we refer the interested reader to Refs. [70, 71, 72, 73, 74, 75]; note that for higher-dimensional field theories the fourth requirement turns into $\Delta\mathcal{S}_{t}[\phi]\in C^{1}$). A specific choice which is used in large parts of our work is the so-
called exponential regulator (shape) function
$\displaystyle r(t)=\Lambda\,\mathrm{e}^{-t}\,,$ (8)
with an ultraviolet (UV) cutoff $\Lambda$, which must be chosen much larger
than all scales in $\mathcal{S}[\phi]$.
Figure 1: The integrand (upper panel) and exponent (lower panel) from Eq. (6)
(at $J=0$) as a function of the field variable $\phi$ for various RG times
$t=0,1,2,\ldots,15$ and for the action (9). We choose the exponential
regulator (8) with UV cutoff $\Lambda=10^{3}$, which is notably larger than
the absolute value of the mass term and the quartic coupling. The IR cutoff
scale $r_{\mathrm{IR}}$ was chosen at $t=15$, which corresponds to
$r_{\mathrm{IR}}\simeq 3.06\cdot 10^{-4}$. This value is significantly smaller
than all scales in $\mathcal{S}[\phi]$.
In order to get a better intuition of the effect of $r(t)$ on the integral
(6), in Fig. 1 we show the integrand at $J=0$,
$\mathrm{e}^{-\mathcal{S}[\phi]-\Delta\mathcal{S}_{t}[\phi]}$, and the
respective exponent for different values of $t$ for the analytic action
$\displaystyle\mathcal{S}(\phi)=-\tfrac{1}{2}\,\phi^{2}+\tfrac{1}{4!}\,\phi^{4}$
(9)
and in Fig. 2 the same quantities for the non-analytic action
$\displaystyle\mathcal{S}(\phi)=\begin{cases}-\phi^{2}\,,&\text{if}\quad|\phi|\leq\tfrac{5}{4}\,,\\ -\big(\tfrac{5}{4}\big)^{2}\,,&\text{if}\quad\tfrac{5}{4}<|\phi|\leq 2\,,\\ \tfrac{1}{48}\,\big(\phi^{4}-91\big)\,,&\text{if}\quad 2<|\phi|\,.\end{cases}$ (10)
The figures show how the integrands are deformed from Gaussian-shaped
integrands to the integrands $\mathrm{e}^{-\mathcal{S}[\phi]}$. One observes
that, as long as $r(t)$ is much larger than all other parameters in
$\mathcal{S}[\phi]$, the Gaussian-like mass term dominates, while for
increasing $t$ the regulator $r(t)$ becomes negligible. The most interesting
part, where the integrands change their shapes significantly, is where $r(t)$
is of the same order as the scales in $\mathcal{S}(\phi)$.
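The deformation shown in the figures can be reproduced with a few lines of code (our sketch, using the analytic action (9) and the exponential regulator (8)):

```python
import numpy as np

Lambda = 1.0e3  # UV cutoff of the exponential regulator (8)

def S(phi):
    # Analytic example action, Eq. (9)
    return -0.5 * phi**2 + phi**4 / 24.0

def weight(phi, t):
    # Regulated Boltzmann weight of Eq. (6) at J = 0
    return np.exp(-S(phi) - 0.5 * Lambda * np.exp(-t) * phi**2)

phi = np.linspace(-6.0, 6.0, 12001)
dphi = phi[1] - phi[0]
for t in (0.0, 5.0, 10.0, 15.0):
    # At small t the weight is a narrow Gaussian of width ~ r(t)^(-1/2);
    # with increasing t it relaxes to the unregulated weight exp(-S).
    print(t, weight(phi, t).sum() * dphi)
```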
Figure 2: The same as in Fig. 1, but for the action (10).
#### II.2.2 A flow equation for the scale-dependent partition function
The change of the integrals with $t$ between the two limiting cases at $t=0$
and $t\rightarrow\infty$ is called RG flow. If this RG flow is known, we can
obtain the function
${\mathcal{Z}(J)\equiv\lim_{t\rightarrow\infty}\mathcal{Z}_{t}[J]=\mathcal{Z}[J]}$
right from the Gaussian-like partition function $\mathcal{Z}_{t=0}[J]$ without
the need to calculate the $\phi$-integral in the partition function (2)
directly. For zero dimensions this does not seem to be an advantage, because
the integrals in field space are (at least numerically) simple to compute. For
higher dimensions, however, circumventing the challenging functional
integration is a tremendous benefit.
The RG flow of $\mathcal{Z}_{t}[J]$ is characterized by taking the derivative
with respect to the RG time $t$,
$\displaystyle\partial_{t}\mathcal{Z}_{t}[J]=-\big[\tfrac{1}{2}\,\partial_{t}r(t)\big]\,\mathcal{N}\int_{-\infty}^{\infty}\mathrm{d}\phi\,\phi^{2}\,\mathrm{e}^{-\mathcal{S}[\phi]-\Delta\mathcal{S}_{t}[\phi]+J\,\phi}=-\big[\tfrac{1}{2}\,\partial_{t}r(t)\big]\,\frac{\delta^{2}\mathcal{Z}_{t}[J]}{\delta J\,\delta J}\equiv-\big[\tfrac{1}{2}\,\partial_{t}r(t)\big]\,\mathcal{Z}^{(2)}_{t,JJ}[J]\,,$ (11)
which is a PDE for a function $\mathcal{Z}(t,J)$ in the $(t,J)$-plane,
$\displaystyle\partial_{t}\mathcal{Z}(t,J)=-\big{[}\tfrac{1}{2}\,\partial_{t}r(t)\big{]}\,\partial_{J}^{2}\mathcal{Z}(t,J)\,.$
(12)
With slight modifications, this also applies to higher-dimensional QFTs.
Solving this equation with appropriate initial and boundary conditions results
in a function $\mathcal{Z}(J)$ from which one can calculate expectation values
by taking ordinary (numerical) derivatives with respect to $J$ at $J=0$, cf.
Eq. (3).
The structure of this equation is that of a linear one-dimensional diffusion
equation (heat equation) [74, 54, 76, 77], where $t$ corresponds to the
temporal direction, while $J$ corresponds to the spatial direction. The term
$-\tfrac{1}{2}\,\partial_{t}r(t)$ corresponds to a time-dependent (positive definite) diffusion coefficient (footnote: note that in zero dimensions one can get rid of $\partial_{t}r(t)$ by an appropriate reparametrization of the time coordinate $t$, which nevertheless keeps the structure of the equation unchanged; in higher dimensions this elimination of $r(t)$ is in general not possible; the positivity of the diffusion coefficient is directly related to the stability of solutions of the heat equation [78, 79], and positivity – here guaranteed by the regulator properties – is necessary for a stable solution [74, 75]). This also motivates the name RG “time” for the parameter $t$. We
will come back to the concept of RG “time” in the true sense of the word and
the diffusive, irreversible character of RG flows in Sub.Sec. IV.1.
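To make the diffusive structure of Eq. (12) concrete, the following sketch (our illustration, not a production solver) integrates it with a simple explicit finite-difference (FTCS) scheme for the exponential regulator (8); as discussed below, the initial condition and the spatial boundary values are supplied here by the integral representation (6):

```python
import numpy as np

Lambda, T = 100.0, 15.0   # UV cutoff (kept moderate for cheapness) and final RG time
J = np.linspace(-5.0, 5.0, 101)
dJ = J[1] - J[0]

def S(phi):
    return -0.5 * phi**2 + phi**4 / 24.0

def Z_int(t, j):
    # Integral representation (6) with the normalization dropped; it supplies
    # the initial condition and the spatial boundary values (see below).
    phi = np.linspace(-10.0, 10.0, 2001)
    w = np.exp(-S(phi) - 0.5 * Lambda * np.exp(-t) * phi**2 + j * phi)
    return w.sum() * (phi[1] - phi[0])

Z = np.array([Z_int(0.0, j) for j in J])   # Gaussian-like initial condition
t = 0.0
while t < T:
    D = 0.5 * Lambda * np.exp(-t)          # diffusion coefficient -r'(t)/2 > 0
    dt = min(0.4 * dJ**2 / D, T - t)       # explicit (FTCS) stability bound
    Z[1:-1] += dt * D * (Z[2:] - 2.0 * Z[1:-1] + Z[:-2]) / dJ**2
    t += dt
    Z[0], Z[-1] = Z_int(t, J[0]), Z_int(t, J[-1])

print(Z[50], Z_int(T, 0.0))  # flowed vs. directly integrated value at J = 0
```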
In zero dimensions, Eq. (12) is a PDE in two variables. For the remainder of this subsection we will discuss properties and practical issues concerning this exact PDE. We will neither discuss any kind of expansions in $J$ nor its
application in higher dimensions. However, some of the issues and questions
raised in the following are also relevant for higher-dimensional theories.
Finding the correct initial and boundary conditions for numerical solutions of
Eq. (12) as an exact PDE is challenging. By construction
$\mathcal{Z}_{t=0}[J]$ approaches a Gaussian integral,
$\displaystyle\mathcal{Z}_{t=0}[J]=\mathcal{N}\int_{-\infty}^{\infty}\mathrm{d}\phi\,\mathrm{e}^{-\frac{1}{2}r(0)\,\phi^{2}+J\,\phi}\,\mathrm{e}^{-\mathcal{S}(\phi)}=\mathcal{N}\int_{-\infty}^{\infty}\tfrac{\mathrm{d}\tilde{\phi}}{\sqrt{r(0)}}\,\mathrm{e}^{-\frac{1}{2}\,\tilde{\phi}^{2}+J\,\frac{\tilde{\phi}}{\sqrt{r(0)}}}\,\Big[1-\mathcal{O}\big(\mathcal{S}\big(r(0)^{-\frac{1}{2}}\big)\big)\Big]=\mathcal{N}\,\sqrt{\tfrac{2\pi}{r(0)}}\,\mathrm{e}^{\frac{J^{2}}{2r(0)}}\,\Big[1-\mathcal{O}\big(\mathcal{S}\big(r(0)^{-\frac{1}{2}}\big)\big)\Big]\,,$ (13)
with $\tilde{\phi}\equiv\sqrt{r(0)}\,\phi$ and independent of the explicit
shape of $\mathcal{S}[\phi]$. Considering different actions
$\mathcal{S}[\phi]$ with couplings of the same order of magnitude we can
choose the same regulator with an $r(0)$ larger than all internal scales
involved in the different actions. The initial condition $\mathcal{Z}(0,J)$ is
then independent of the explicit action under consideration.
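A quick numerical check of Eq. (13) (our sketch; the quartic action and the value of $\Lambda$ are merely examples) confirms that the initial condition is Gaussian to very good accuracy:

```python
import numpy as np
from scipy.integrate import quad

Lambda = 1.0e3  # r(0) = Lambda for the exponential regulator (8)

def S(phi):
    return -0.5 * phi**2 + phi**4 / 24.0

def Z0(J):
    # Integral representation (6) at t = 0 (normalization N dropped);
    # the regulated weight is sharply peaked, so finite limits suffice.
    val, _ = quad(lambda p: np.exp(-S(p) - 0.5 * Lambda * p**2 + J * p),
                  -1.0, 1.0)
    return val

for J in (0.0, 1.0, 2.0):
    gauss = np.sqrt(2.0 * np.pi / Lambda) * np.exp(J**2 / (2.0 * Lambda))
    print(J, Z0(J) / gauss)  # ratios close to 1, cf. Eq. (13)
```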
According to the integral formulation (6), $\mathcal{Z}(t,J)$ changes for
different actions when $t>0$. In the differential formulation, Eq. (12), those changes are generated by the diffusion term on the right-hand side.
However, we argued that it is permissible to use identical initial conditions
$\mathcal{Z}(0,J)$ for different actions involving similar scales (as long as
these are much smaller than $r(0)$). This then results in an identical
diffusion on the right-hand side of Eq. (12) when the latter is computed by
means of a second derivative of $\mathcal{Z}(0,J)$. If one uses identical
large-$J$ boundary conditions for the solution of the PDE (12) for different
actions, this would imply that, despite different $\mathcal{S}[\phi]$, the RG
time evolution leads to identical $\mathcal{Z}(J)$ for $t\rightarrow\infty$,
which in general cannot be correct.
In order to resolve this problem, particular action-dependent spatial boundary
conditions seem to be necessary for a direct numerical solution starting at
$t=0$ with a Gaussian for $\mathcal{Z}(0,J)$. It is not obvious how to derive
or formulate such boundary conditions from the asymptotics of Eq. (12) alone.
In light of this, a numerical solution of Eq. (12) in the $(t,J)$ plane by
means of a spatial discretization in $J$ direction and an integration in $t$
direction appears to be conceptually questionable.
However, this by no means invalidates the flow equation for $\mathcal{Z}(t,J)$
in general. Augmenting it (at $t=0$) with information from the integral
formulation (6) or, equivalently, other additional information, could enable
practical computations using the PDE (12). But it is at this point (at least
to us) not obvious how one would implement a numerical solution strategy for
the PDE (12) avoiding integrals of the action.
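At the very least, the integral formulation (6) provides reference data against which any candidate solution strategy for the PDE (12) can be benchmarked. A minimal sketch of such a tabulation (the regulator choice $r(t)=\Lambda^{2}\,\mathrm{e}^{-2t}$, cf. Eq. (41), and all parameters are illustrative assumptions on our part):

```python
# Sketch: tabulating Z(t, J) directly from the integral formulation (6) on a
# (t, J) grid, e.g., as reference data for testing PDE-based strategies.
# Regulator r(t) = Lambda^2 exp(-2 t) and all parameters are illustrative.
import numpy as np
from scipy.integrate import quad

Lambda = 1.0e3
S = lambda x: 0.5 * x**2 + 0.1 * x**4    # example action

def Z(t, J):
    r = Lambda**2 * np.exp(-2.0 * t)     # mass-like regulator insertion
    L = min(6.0, 50.0 / np.sqrt(r))      # window covering the bulk of the integrand
    return quad(lambda x: np.exp(-S(x) - 0.5 * r * x**2 + J * x), -L, L)[0]

ts = np.linspace(0.0, 10.0, 21)
Js = np.linspace(-3.0, 3.0, 31)
Z_grid = np.array([[Z(t, J) for J in Js] for t in ts])
```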
There is another well-known drawback in using the partition function
$\mathcal{Z}[J]$ for calculating $n$-point correlation functions (or
expectation values) $\langle\phi^{n}\rangle$. The latter are rather
inefficient in storing information, because they contain redundant information
in the form of disconnected and reducible terms, see Refs. [80, 40, 38, 39] or
the mathematical theory of moment- and cumulant-generating functionals in
statistics for details [81]. This is further discussed in Sub.Sec. II.5.
However, the redundant information in $\langle\phi^{n}\rangle$ is not
necessarily a strong argument against the use of the flow equation (12) in
practical computations, since the irreducible information can be extracted
from the correlation functions $\langle\phi^{n}\rangle$.
In order to resolve both the problem of initial and boundary conditions for
$\mathcal{Z}(t,J)$ as well as the issue of redundant information in
$\langle\phi^{n}\rangle$, we now consider two different generating
functionals, which are better suited for practical calculations of $n$-point
correlation functions or expectation values, respectively. To this end, we
employ the Schwinger functional,
$\displaystyle\mathcal{W}[J]\equiv\ln\mathcal{Z}[J]\,,\qquad\mathcal{W}[0]=0\,,$ (14)
and its Legendre transform, the effective action,
$\displaystyle\Gamma[\varphi]\equiv\mathop{\mathrm{sup}}_{J}\big\{J\,\varphi-\mathcal{W}[J]\big\}\,.$ (15)
Here, “$\mathrm{sup}$” denotes the supremum with respect to the source $J$.
The Schwinger functional generates all connected $n$-point correlation
functions while the effective action generates all one-particle irreducible
(1PI) $n$-point vertex (correlation) functions, see Sub.Sec. II.5 or Refs.
[80, 40, 38, 39] for details.
In general $\mathcal{W}[J]$ is convex with a positive definite Hessian
$\mathcal{W}^{(2)}_{JJ}[J]$, which implies convexity for $\Gamma[\varphi]$,
since the Legendre transform of a convex function is convex by definition, see
e.g., Refs. [82, 83] for details. In the present case the convexity of
$\mathcal{W}[J]=\mathcal{W}(J)$ becomes apparent when considering its second
derivative,
$\displaystyle\partial_{J}^{2}\mathcal{W}(J)=\langle\phi^{2}\rangle_{J}-\langle\phi\rangle_{J}\langle\phi\rangle_{J}=\langle(\phi-\langle\phi\rangle_{J})^{2}\rangle_{J}\,,$
(16)
which, as the expectation value of a positive quantity, is always
positive. (Note that $\mathcal{Z}[J]$ is also convex, which can be seen by investigating its second derivative.) In zero dimensions, also smoothness,
$\mathcal{Z}[J]\in C^{\infty}$, directly translates to $\mathcal{W}[J]\in
C^{\infty}$ and $\Gamma[\varphi]\in C^{\infty}$, because all derivatives
$\mathcal{W}[J]$ and $\Gamma[\varphi]$ can be entirely expressed in terms of
derivatives of $\mathcal{Z}[J]$, see Sub.Sec. II.5. We will need both
properties several times during our discussion, see also the discussion in
App. B.
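The positivity (16), and hence the convexity of $\mathcal{W}(J)$, is also easily confirmed numerically. A small sketch (the double-well action is an illustrative choice):

```python
# Sketch: numerical check of Eq. (16): the second J-derivative of W = ln Z,
# i.e., the variance of phi at source J, is strictly positive for all J.
# The double-well action is an illustrative choice.
import numpy as np
from scipy.integrate import quad

S = lambda x: -0.5 * x**2 + 0.25 * x**4   # non-convex example action

def expval(n, J):
    num = quad(lambda x: x**n * np.exp(-S(x) + J * x), -np.inf, np.inf)[0]
    den = quad(lambda x: np.exp(-S(x) + J * x), -np.inf, np.inf)[0]
    return num / den

for J in np.linspace(-2.0, 2.0, 9):
    variance = expval(2, J) - expval(1, J) ** 2   # = d^2 W / dJ^2, Eq. (16)
    assert variance > 0.0
```

Note that the action itself is non-convex here, while $\mathcal{W}(J)$ is nevertheless convex, in line with the discussion in App. B.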
Having these definitions at hand, we shall start the next subsection by defining
scale-dependent generating functionals $\mathcal{W}_{t}[J]$ and
$\Gamma_{t}[\varphi]$. From these, we will also derive and discuss two flow
equations which are similar to Eq. (11). The final result of the next
subsection is the FRG equation (known as Wetterich equation), which is the
exact analogue to Eq. (11) on the level of $\Gamma_{t}[\varphi]$. It provides
the opportunity to circumvent the direct calculation of integrals of type (1).
### II.3 The Functional Renormalization Group equation
In this subsection we derive and discuss the Functional Renormalization Group
(FRG) equation [35, 36, 37] (also known as Exact Renormalization Group
equation) for our zero-dimensional toy-model QFT. All formulas presented in
this section can be generalized to higher dimensions and arbitrary field
content, see e.g., Refs. [84, 85, 86, 87, 73, 88].
#### II.3.1 The scale-dependent Schwinger functional
We begin the derivation by introducing the scale-dependent Schwinger
functional starting from definition (6),
$\displaystyle\mathcal{W}_{t}[J]\equiv\ln\mathcal{Z}_{t}[J]\,.$ (17)
It follows from our previous discussion that for $t\rightarrow\infty$ the
Schwinger functional (14) is recovered,
$\displaystyle\lim\limits_{t\rightarrow\infty}\mathcal{W}_{t}[J]=\mathcal{W}[J]\,,$
(18)
while $\mathcal{W}_{t=0}[J]$ is given by the logarithm of Eq. (13).
The insertion of the regulator (7) into $\mathcal{Z}_{t}[J]$ does not spoil
the convexity and smoothness (in zero dimensions) of the Schwinger functional:
$\mathcal{W}_{t}[J]$ and $\mathcal{Z}_{t}[J]$ are convex and smooth for all
$t$.
Completely analogous to Eq. (11) one can derive a PDE for
$\mathcal{W}_{t}[J]=\mathcal{W}(t,J)$ in the $(t,J)$ plane,
$\displaystyle\partial_{t}\mathcal{W}(t,J)=-\big[\tfrac{1}{2}\,\partial_{t}r(t)\big]\,\Big(\partial_{J}^{2}\mathcal{W}(t,J)+\big[\partial_{J}\mathcal{W}(t,J)\big]^{2}\Big)\,,$ (19)
which describes the flow of $\mathcal{W}(t,J)$ from $t=0$ to $t\rightarrow\infty$. (In terms of its structure, Eq. (19) is also known as the Polchinski equation in the context of the RG for higher-dimensional QFTs. However, in the original work [89] an effective action $L(\Lambda,\phi)$ takes the role of $\mathcal{W}$, and it is formulated in terms of the fields $\phi$ instead of the sources $J$. For relations between the original Polchinski equation and the flow equations studied in this work and selected applications of the Polchinski equation, see, e.g., Refs. [90, 91, 92, 93].)
We could now repeat the discussion about the issues of initial and boundary
conditions for the solution of this PDE. However, the problems are almost
identical to those of Eq. (12), because on the level of the PDE, we only
substituted the function $\mathcal{Z}(t,J)$ by $\mathcal{W}(t,J)$ via the
logarithm, which does not change the structure of the problem fundamentally.
Formulating appropriate initial and boundary conditions in the spatial $J$
direction therefore remains as complicated as before. Note that the PDE (19)
became more complicated when compared to Eq. (12) due to the non-linear term
on the right-hand side. In summary, the scale-dependent Schwinger functional
is, from a practical point of view, as badly suited as $\mathcal{Z}(t,J)$ to
perform the (numerical) calculation of the functional integral via a flow
equation starting from a Gaussian-type integral.
In the following we will focus on the scale-dependent effective (average)
action and its respective flow equation, which does not suffer from the issues
of particular initial and boundary conditions. As an added benefit, the
effective action is also the most efficient functional in terms of storing
information of a theory at hand. Formulating proper initial and boundary
conditions for the flow equations for $\mathcal{Z}(t,J)$ and
$\mathcal{W}(t,J)$ and if possible implementing adequate numerical schemes in
the context of zero-dimensional field theories would certainly be interesting
from an academic point of view. Translating the initial and boundary
conditions for the scale-dependent effective (average) action to
$\mathcal{Z}(t,J)$ and $\mathcal{W}(t,J)$ could be a possible and potentially
feasible strategy. A comparison of the flows of $\mathcal{Z}(t,J)$,
$\mathcal{W}(t,J)$, and $\Gamma(t,\varphi)$, both conceptually and for
explicitly specified actions, is a worthwhile subject of future work.
#### II.3.2 The scale-dependent effective action
We now define the scale-dependent effective action $\Gamma_{t}[\varphi]$ via
the Legendre transform of Eq. (17) with respect to the sources $J$ at a RG
timescale $t$,
$\displaystyle\Gamma_{t}[\varphi]\equiv\mathop{\mathrm{sup}}_{J}\big\{J\,\varphi-\mathcal{W}_{t}[J]\big\}$ (20)
$\displaystyle\hphantom{\Gamma_{t}[\varphi]}\equiv J_{t}(\varphi)\,\varphi-\mathcal{W}_{t}[J_{t}(\varphi)]\,,$ (21)
where we introduced the source $J_{t}(\varphi)$ which realizes the supremum.
Note that, analogous to $\mathcal{Z}_{t}[J]$ and $\mathcal{W}_{t}[J]$, the
convexity and smoothness (in zero dimensions) of $\Gamma_{t}[\varphi]$ are not
spoiled by the $t$ dependence, because the properties of the Legendre
transformation still ensure both.
To obtain an explicit relation for the scale-dependent source
$J_{t}(\varphi)$, which realizes the supremum in Eq. (20), we consider the
functional derivative of Eq. (20) at the supremum to find the important
relation
$\displaystyle\mathcal{W}_{t,J}^{(1)}[J_{t}(\varphi)]\equiv\frac{\delta\mathcal{W}_{t}[J]}{\delta
J}\bigg{|}_{J=J_{t}(\varphi)}=\varphi\,,$ (22)
which will be used frequently in the following. Taking the functional
derivative of Eq. (21) with respect to $\varphi$ and using Eq. (22) we
ultimately find
$\displaystyle\Gamma^{(1)}_{t,\varphi}[\varphi]\equiv\frac{\delta\Gamma_{t}[\varphi]}{\delta\varphi}=J_{t}(\varphi)\,,$
(23)
which is referred to as the quantum equation of motion. Due to the strict convexity of $\Gamma_{t}[\varphi]$, the function $J_{t}(\varphi)$ is bijective and can thus be inverted; the inverse is obtained by considering Eq. (22) at a fixed value $J$ of the source:
$\displaystyle\varphi_{t}(J)\equiv\frac{\delta\mathcal{W}_{t}[J]}{\delta
J}\,,$ (24)
where $\varphi_{t}(J)$ is the so-called scale-dependent classical field
(sometimes also referred to as scale-dependent mean field).
The subtle relations between, and scale dependences of, $\varphi_{t}(J)$ and
$J_{t}(\varphi)$ are rarely discussed in the literature and are usually suppressed in
the notation. The relation between $\varphi_{t}$ and $J_{t}$ will be of
particular importance in the discussion of $n$-point correlation functions in
Sub.Sec. II.5. The scale dependence of $\varphi_{t}(J)$ from Eq. (24) is not
related to a rescaling (RG transformation) using, e.g., a wave-function
renormalization for $\varphi$.
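In zero dimensions all of these maps can be made completely explicit with elementary numerics. The following sketch (with an illustrative quartic action and, for simplicity, without regulator insertion, i.e., at $t\rightarrow\infty$) evaluates the Legendre transform and checks the quantum equation of motion (23):

```python
# Sketch: numerical Legendre transform Gamma(phi) = sup_J { J phi - W(J) },
# cf. Eq. (20), and check of the quantum equation of motion (23),
# Gamma'(phi) = J(phi). Regulator dropped for simplicity (t -> infinity);
# the quartic action is an illustrative choice.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

S = lambda x: 0.5 * x**2 + 0.1 * x**4

def W(J):
    return np.log(quad(lambda x: np.exp(-S(x) + J * x), -np.inf, np.inf)[0])

def J_of_phi(phi):
    # the supremum is attained where W'(J) = phi, cf. Eq. (22)
    res = minimize_scalar(lambda J: W(J) - J * phi, bounds=(-20.0, 20.0),
                          method="bounded")
    return res.x

def Gamma(phi):
    J = J_of_phi(phi)
    return J * phi - W(J)

phi, h = 0.5, 1e-4
dGamma = (Gamma(phi + h) - Gamma(phi - h)) / (2.0 * h)
print(dGamma, J_of_phi(phi))   # agree up to numerical accuracy, Eq. (23)
```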
Before we derive the FRG equation, which is the flow equation for
$\Gamma_{t}[\varphi]$ and a PDE for the function $\Gamma(t,\varphi)$ in the
$(t,\varphi)$ plane, we check whether we will run into the same issues
(related to initial and boundary conditions) as before. Hence, first of all,
we must derive the initial condition for the PDE for $\Gamma(t,\varphi)$. To
this end, we study the limit $t\rightarrow 0$ of $\Gamma_{t}[\varphi]$. We use
the definitions (6), (17), (20), and (21) to obtain
$\displaystyle\mathrm{e}^{-\Gamma_{t}[\varphi]}=\mathrm{e}^{-\mathop{\mathrm{sup}}_{J}\{J\,\varphi-\mathcal{W}_{t}[J]\}}=\mathrm{e}^{\ln\mathcal{Z}_{t}[J_{t}(\varphi)]-J_{t}(\varphi)\,\varphi}=\mathcal{N}\int_{-\infty}^{\infty}\mathrm{d}\phi\,\mathrm{e}^{-\mathcal{S}[\phi]-\Delta\mathcal{S}_{t}[\phi]+J_{t}(\varphi)\,(\phi-\varphi)}\,.$ (25)
We now shift the integration variable $\phi\mapsto\phi^{\prime}=\phi-\varphi$. (It is the same shift that is used in the background field formalism [94, 95], where the full fluctuating quantum field $\phi$ is split into a background field configuration $\varphi$ and additional fluctuations $\phi^{\prime}$ about the background field; this is why $\varphi$ is called the classical or mean field.) Using Eq. (7), we find
$\displaystyle\mathrm{e}^{-\Gamma_{t}[\varphi]+\Delta\mathcal{S}_{t}[\varphi]}=\mathcal{N}\int_{-\infty}^{\infty}\mathrm{d}\phi^{\prime}\,\mathrm{e}^{-\mathcal{S}[\phi^{\prime}+\varphi]-\Delta\mathcal{S}_{t}[\phi^{\prime}]-r(t)\,\phi^{\prime}\,\varphi+\Gamma^{(1)}_{t,\varphi}[\varphi]\,\phi^{\prime}}\,.$ (26)
In the next step, we introduce the scale-dependent effective average action,
$\displaystyle\bar{\Gamma}_{t}[\varphi]\equiv\Gamma_{t}[\varphi]-\Delta\mathcal{S}_{t}[\varphi]\,,$
(27)
which also tends to the effective action $\Gamma[\varphi]$ for
$t\rightarrow\infty$, because the second term vanishes in this limit, cf. Eq.
(8).
At any finite value of $t$ (including $t=0$), $\bar{\Gamma}_{t}[\varphi]$
differs from $\Gamma_{t}[\varphi]$ and is no longer guaranteed to be convex,
which can be seen directly from the second term in Eq. (27). Convexity is only
recovered for $t\rightarrow\infty$. However, the second term in Eq. (27) does
not violate the smoothness of $\bar{\Gamma}_{t}[\varphi]$ in zero dimensions
for all $t$, because
$\Delta\mathcal{S}_{t}[\varphi]\equiv\Delta\mathcal{S}_{t}(\varphi)\in C^{\infty}$
in $\varphi$.
We express Eq. (26) in terms of the scale-dependent effective average action
(27) and, for the sake of convenience, revert the notation
$\phi^{\prime}\rightarrow\phi$,
$\displaystyle\mathrm{e}^{-\bar{\Gamma}_{t}[\varphi]}=\mathcal{N}\,\int_{-\infty}^{\infty}\mathrm{d}\phi\,\mathrm{e}^{-\mathcal{S}[\phi+\varphi]-\Delta\mathcal{S}_{t}[\phi]+\bar{\Gamma}^{(1)}_{t,\varphi}[\varphi]\,\phi}\,.$
(28)
In the next step one formally introduces the normalization of a Gaussian
integral with mass $r(t)$ and takes the logarithm, which results in
$\displaystyle\bar{\Gamma}_{t}[\varphi]=-\ln\int_{-\infty}^{\infty}\mathrm{d}\phi\,\sqrt{\tfrac{r(t)}{2\pi}}\,\mathrm{e}^{-\mathcal{S}[\phi+\varphi]-\frac{1}{2}r(t)\,\phi^{2}+\bar{\Gamma}^{(1)}_{t,\varphi}[\varphi]\,\phi}-\ln\Big[\mathcal{N}\sqrt{\tfrac{2\pi}{r(t)}}\Big]\,.$ (29)
We are now ready to study the limit $t\rightarrow 0$, which corresponds to the
initial condition for a possible flow equation for $\Gamma_{t}[\varphi]$ or
$\bar{\Gamma}_{t}[\varphi]$, respectively. Focusing on the $\phi$ integral in
the first term on the right-hand side of Eq. (29), we employ the fact that the
regulator terms act like a Gaussian representation of the Dirac delta
distribution,
$\displaystyle\lim\limits_{t\rightarrow
0}\sqrt{\tfrac{r(t)}{2\pi}}\,\mathrm{e}^{-\frac{1}{2}r(t)\,\phi^{2}}\approx\delta(\phi)\,,$
(30)
as long as $r(t)$ is much larger than all scales in $\mathcal{S}[\phi]$. Thus,
denoting
$\displaystyle
c(t)\equiv-\ln\Big{[}\mathcal{N}\,\sqrt{\tfrac{2\pi}{r(t)}}\,\Big{]}\,,$ (31)
we find as $t\rightarrow 0$
$\displaystyle\bar{\Gamma}_{t}[\varphi]\rightarrow-\ln\int_{-\infty}^{\infty}\mathrm{d}\phi\,\delta(\phi)\,\mathrm{e}^{-\mathcal{S}[\phi+\varphi]+\bar{\Gamma}^{(1)}_{t,\varphi}[\varphi]\,\phi}+c(t)=\mathcal{S}[\varphi]+c(t)\,,\qquad(t\rightarrow 0)\,.$ (32)
This means that the initial condition for a flow of
$\bar{\Gamma}_{t}[\varphi]$ is given by the classical action $\mathcal{S}$
evaluated for the classical field $\varphi$ and some additional $t$ dependent,
but $\varphi$ independent term $c(t)$. This choice for an initial condition of
a PDE for $\bar{\Gamma}_{t}[\varphi]$ has subtle consequences:
Although $c(t)$ does not depend on $\varphi$, it is large,
$c(t)\sim\frac{1}{2}\ln r(t)$. Consequently, as far as the initial condition
for the PDE for $\Gamma_{t}[\varphi]$ or $\bar{\Gamma}_{t}[\varphi]$ is
concerned, it seems as if we run into the same problem as before: The initial
condition is dominated by the artificial mass of the regulator $r(t)$,
independent of the specific action $\mathcal{S}[\phi]$, and differences in the
specific choice for $\mathcal{S}[\phi]$ enter the initial condition only as
small deviations from the large term $c(t)$. Furthermore, $c(t)$ contains the
normalization constant $\mathcal{N}$, which was fixed according to Eq. (4).
However, precisely because $c(t)$ appears like the normalization
$\mathcal{N}$, it should be irrelevant for all physical observables. Indeed
this is the case, because all $\varphi$ independent terms in
$\Gamma_{t}[\varphi]$ do not enter the $n$-point correlation functions, since
the latter are calculated as derivatives of $\Gamma[\varphi]$ with respect to
$\varphi$ at $t\rightarrow\infty$, see Sub.Sec. II.5. This implies that an
additive, $\varphi$ independent term in the three effective actions
$\Gamma[\varphi]$, $\Gamma_{t}[\varphi]$, and $\bar{\Gamma}_{t}[\varphi]$ is
irrelevant and only relative differences in the effective actions are
observable. Therefore, we can simply omit $c(t)$ and take as initial condition
for the PDE for $\bar{\Gamma}_{t}[\varphi]$ the value $\mathcal{S}[\varphi]$,
which perfectly incorporates the difference between different models with
distinct actions $\mathcal{S}[\phi]$.
One problem in disregarding $c(t)$ remains: one has to ensure that the PDE for $\bar{\Gamma}_{t}[\varphi]$ does not contain any terms without field derivatives of $\bar{\Gamma}_{t}[\varphi]$. Otherwise $c(t)$ would influence
the flow in a time-dependent manner. Fortunately, this does not happen, as we
will see later, and the FRG equation (37) does not contain terms without field
derivatives of $\bar{\Gamma}_{t}[\varphi]$ on the right-hand side.
This, however, brings up another question: After Eq. (27) we argued that
$\bar{\Gamma}_{t}[\varphi]$ does not need to be convex, but must still be
smooth for all $t$. Let us for example consider the non-analytic action (10)
as an initial condition, $\bar{\Gamma}_{t=0}[\varphi]=\mathcal{S}[\varphi]$.
This action does not cause any problems for the convexity and the smoothness
of $\mathcal{Z}_{t}[J]$ and $\mathcal{W}_{t}[J]$ at arbitrary $t$, see for
example App. B and Fig. 36. The non-convexity of $\mathcal{S}[\varphi]$ is
also not a problem for $\bar{\Gamma}_{t}[\varphi]$, which does not necessarily
need to be convex at finite $t$. Nevertheless, the smoothness of
$\bar{\Gamma}_{t}[\varphi]$ is violated by this choice of
$\mathcal{S}[\varphi]$ at $t=0$. This issue originates from relation (30),
which is exactly fulfilled only in the limit $\Lambda\rightarrow\infty$ for
the UV cutoff. This, however, leads to a trivial theory of infinitely massive
particles at $t=0$, cf. Eq. (6). If one chooses a reasonably large but finite
$\Lambda$ and does not use Eq. (30), one would ensure that
$\bar{\Gamma}_{t}[\varphi]$ is also smooth at $t=0$. However, then the initial
condition is not exactly $\mathcal{S}[\varphi]$, but rather an extremely
complicated expression. In consequence, if one uses the approximation (30) even for finite $\Lambda$, one has to pay the price of introducing errors into the initial condition as well as violating the smoothness of $\bar{\Gamma}_{t}[\varphi]$ at $t=0$. In return one has a well-defined initial
condition $\mathcal{S}[\varphi]$ for the PDE for $\bar{\Gamma}_{t}[\varphi]$.
However, if $\Lambda$ is chosen to be much larger than all scales in
$\mathcal{S}[\phi]$, the errors from the initial condition are minor and
expected to be of magnitude
$\displaystyle\mathrm{error}\approx\frac{\text{largest scale in\,\,}\mathcal{S}}{\Lambda}\,.$ (33)
We will come back to this issue in Sec. V in the context of RG consistency
[96, 97, 98, 99, 100].
Additionally, we will find that also the smoothness of
$\bar{\Gamma}_{t}[\varphi]$ is recovered automatically for all $t>0$ by the
structure of the PDE for $\bar{\Gamma}_{t}[\varphi]$, because it always
contains diffusive contributions which immediately smear out kinks in the
initial condition right in the first time step. We will also come back to this
issue later on, after we have derived the FRG equation (37) and discussed its
diffusive, irreversible character.
#### II.3.3 The Exact Renormalization Group equation
In analogy to the previous flow equations, the FRG equation, which is the flow
equation for $\bar{\Gamma}_{t}[\varphi]$, is obtained by taking the derivative
of $\bar{\Gamma}_{t}[\varphi]$ with respect to $t$ and using the definitions
(20) and (27) to express the derivative of $\bar{\Gamma}_{t}[\varphi]$ by the
scale-dependent Schwinger functional,
$\displaystyle\partial_{t}\bar{\Gamma}_{t}[\varphi]=\partial_{t}\big(\Gamma_{t}[\varphi]-\Delta\mathcal{S}_{t}[\varphi]\big)=\partial_{t}\big(J_{t}(\varphi)\,\varphi-\mathcal{W}_{t}[J_{t}(\varphi)]-\Delta\mathcal{S}_{t}[\varphi]\big)=[\partial_{t}J_{t}(\varphi)]\,\varphi-\partial_{t}\mathcal{W}_{t}[J_{t}(\varphi)]-[\partial_{t}J_{t}(\varphi)]\,\mathcal{W}^{(1)}_{t,J_{t}}[J_{t}]-\big[\tfrac{1}{2}\,\partial_{t}r(t)\big]\,\varphi^{2}=-\partial_{t}\mathcal{W}_{t}[J_{t}(\varphi)]-\big[\tfrac{1}{2}\,\partial_{t}r(t)\big]\,\varphi^{2}\,,$ (34)
where we used the chain rule and Eq. (22).
We now use the flow equation for the Schwinger functional (19) to substitute
the first term on the right-hand side. Again employing the identity (22), the
last term in the last line of Eq. (34) cancels with the non-linear term in Eq.
(19), such that
$\displaystyle\partial_{t}\bar{\Gamma}_{t}[\varphi]=\big[\tfrac{1}{2}\,\partial_{t}r(t)\big]\,\mathcal{W}_{t,JJ}^{(2)}[J_{t}(\varphi)]\,.$ (35)
It remains to replace the second derivative of the scale-dependent Schwinger
functional by a corresponding derivative of $\bar{\Gamma}_{t}[\varphi]$. This
is done via the identity
$\displaystyle 1=\frac{\delta
J_{t}(\varphi)}{\delta\varphi}\,\frac{\delta\varphi}{\delta
J_{t}(\varphi)}=\Gamma^{(2)}_{t,\varphi\varphi}[\varphi]\,\mathcal{W}^{(2)}_{t,JJ}[J_{t}(\varphi)]\,,$
(36)
which follows from Eqs. (22) and (23). Plugging this into Eq. (35) and using
Eq. (27) with Eq. (7) we obtain the FRG equation, Exact Renormalization Group
equation or Wetterich equation [35, 36, 37]
$\displaystyle\partial_{t}\bar{\Gamma}_{t}[\varphi]=\big{[}\tfrac{1}{2}\,\partial_{t}r(t)\big{]}\,\big{[}\bar{\Gamma}^{(2)}_{t,\varphi\varphi}[\varphi]+r(t)\big{]}^{-1}\,,$
(37)
which is a flow equation – a PDE – for the scale-dependent effective average
action $\bar{\Gamma}(t,\varphi)$ in the $(t,\varphi)$ plane,
$\displaystyle\partial_{t}\bar{\Gamma}(t,\varphi)=\frac{\frac{1}{2}\,\partial_{t}r(t)}{\partial_{\varphi}^{2}\bar{\Gamma}(t,\varphi)+r(t)}\,,$
(38)
with the initial condition $\bar{\Gamma}(t=0,\varphi)=\mathcal{S}[\varphi]$.
Some remarks are in order:
1.
In contrast to the PDEs for $\mathcal{Z}(t,J)$ and $\mathcal{W}(t,J)$ the FRG
equation can be initialized with a suitable initial condition at $t=0$ that
produces distinct flows for different actions $\mathcal{S}[\phi]$, as was
discussed in the previous subsubsection.
2.
The spatial boundary conditions, i.e., for $\varphi\rightarrow\pm\infty$, are
provided by the asymptotics of the FRG equation (38) itself and by the
requirement that $\mathcal{S}[\varphi]$ must be bounded from below: The action
$\mathcal{S}[\varphi]$ of an (interacting) field theory must at least grow
like $\varphi^{2}$ for large $|\varphi|$ and the dominant contribution for
large $|\varphi|$ must be even in $\varphi$. For actions
$\mathcal{S}[\varphi]$ that grow asymptotically faster than $\varphi^{2}$ the
denominator on the right-hand side of the PDE (38) already diverges at
$t\approx 0$, such that
$\displaystyle\lim\limits_{|\varphi|\rightarrow\infty}\partial_{t}\bar{\Gamma}(t,\varphi)\approx
0\,.$ (39)
It follows that for $|\varphi|\rightarrow\infty$ the function
$\bar{\Gamma}(t,\varphi)$ does not change at all, but keeps its initial value
$\mathcal{S}[\varphi]$. These are perfectly valid boundary conditions for a
PDE. The scenario for initial conditions with
$\lim\limits_{|\varphi|\rightarrow\infty}\mathcal{S}[\varphi]\sim\varphi^{2}$
is more delicate. We will return to this issue and a detailed discussion of
boundary conditions, when we discuss the numerical implementation and solution
of Eq. (38) in Sub.Sec. IV.4 in the context of numerical fluid dynamics.
3.
The structure of the PDE (38) is again a diffusion equation. In contrast to
the PDEs (12) and (19) it is non-linear in the second-order spatial
derivatives of $\bar{\Gamma}(t,\varphi)$ that appear in the denominator. By applying
the same formalism to models with different field content, the FRG equation
can also acquire convective/advective terms and source terms. We will thus
find that the FRG equation shares many properties with other notable
advection-diffusion equations, e.g., the Navier-Stokes equation [101]. This is
discussed in Sec. IV, where our numerical approach to the FRG equation is
presented in more detail. However, it should be already mentioned at this
point that analyzing and solving non-linear advection-diffusion-source/sink
equations like Eq. (38) is a state-of-the-art problem in numerical
mathematics. Thus, some care is required in the search for well-established
numerical solution schemes for PDEs of this type.
4.
In zero dimensions, similar to the flow equations for $\mathcal{Z}(t,J)$ and
$\mathcal{W}(t,J)$, one can reparameterize the flow time $t$ in terms of $r$
in Eq. (38) and get rid of the prefactor $\partial_{t}r(t)$. Additionally, one
could eliminate $r(t)$ in the denominator in Eq. (38) by shifting
$\bar{\Gamma}(t,\varphi)\rightarrow\bar{\Gamma}(r,\varphi)-\tfrac{1}{2}\,r\,\varphi^{2}$
and switching from $t$ to $r$ as flow parameter, which corresponds to the
zero-dimensional analogue of the rescaled “dimensionless” flow equation in
fixed-point form, but is not suited for the practical calculations in this
work.
This reparameterization effectively corresponds to different choices of
regulator (shape) functions in zero dimensions. However, for higher-
dimensional problems, different choices of regulators do not need to be
related to each other via simple reparametrization of the RG time. In any
case, the effective dynamics in the PDE during the RG flow strongly depends on
the parametrization of the RG scale as well as the explicit choice of
regulator, which has two direct consequences: First, although the dynamics and
$t$ evolution of observables (the $n$-point correlation functions) during the
RG flow might be highly interesting and must also be studied to ensure that
the UV and IR cutoff scales are chosen appropriately, one must clearly state
that only the IR value of $\Gamma[\varphi]$ is mathematically and physically
meaningful and suitable for extracting information on the $n$-point
correlation functions. This is demonstrated and discussed again in the context
of numerical precision tests of the $O(N)$ model in Sec. V. Second, from a
numerical point of view, some parametrizations or choices of regulators might
be more challenging for the numerical integrators than others and must be
adapted to the specific problems at hand. On the level of the PDE this
corresponds to the time-dependent strength of the diffusion, see below.
5.
Unrelated to the present discussion, a formulation of the FRG equation using
mean fields carrying an explicit scale dependence (in higher dimensions often
related to a running wave-function renormalization) is also possible with a
careful consideration and distinction between total and partial derivatives
with respect to $t$. Generalizations including composite mean fields are also
possible, see, e.g., Ref. [73].
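To make the preceding remarks concrete, the following minimal method-of-lines sketch integrates Eq. (38) numerically. The exponential regulator $r(t)=\Lambda^{2}\,\mathrm{e}^{-2t}$ (motivated by the identification (41), with $r$ acting as a mass-like insertion $k^{2}$), the couplings, the grid, and the crude boundary treatment are illustrative assumptions on our part; the dedicated numerical schemes discussed in Sec. IV supersede this toy implementation:

```python
# Minimal method-of-lines sketch for the zero-dimensional FRG equation (38).
# Regulator r(t) = Lambda^2 exp(-2 t) is an assumption (cf. Eq. (41));
# couplings, grid, and boundary handling are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

Lambda = 1.0e3                     # UV scale, much larger than all couplings
m2, lam = -1.0, 1.0                # S(phi) = m2/2 phi^2 + lam/4! phi^4
phi = np.linspace(-6.0, 6.0, 401)  # field-space grid
dphi = phi[1] - phi[0]

def rhs(t, gamma):
    r = Lambda**2 * np.exp(-2.0 * t)
    dr = -2.0 * r                  # d r(t) / d t
    d2 = (gamma[2:] - 2.0 * gamma[1:-1] + gamma[:-2]) / dphi**2
    flow = np.zeros_like(gamma)    # boundary values stay frozen, cf. remark 2
    flow[1:-1] = 0.5 * dr / (d2 + r)
    return flow

gamma0 = 0.5 * m2 * phi**2 + lam / 24.0 * phi**4  # initial condition S(phi)
sol = solve_ivp(rhs, (0.0, 15.0), gamma0, method="LSODA",
                rtol=1e-8, atol=1e-10)
gamma_ir = sol.y[:, -1]            # approximation to Gamma(phi) in the IR
```

Even in this crude form, the flow should flatten out the non-convex region of the symmetry-broken initial condition as $t$ grows, in line with the convexity restoration discussed above.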
Using a zero-dimensional field theory with one degree of freedom, we have
therefore demonstrated that it is possible to transform the problem of solving
functional integrals like Eqs. (1) and (2) for a model with action
$\mathcal{S}[\phi]$ into solving the PDE (38) in $t$ and $\varphi$ with
initial condition $\mathcal{S}[\varphi]$. The FRG equation thus directly
implements the idea of transforming Gaussian-type functional integrals into
arbitrary functional integrals, but on the level of the effective action
$\Gamma[\varphi]$ rather than the partition function $\mathcal{Z}[J]$. Both
formulations of the problem of calculating $n$-point correlation functions –
the functional-integral formulation and the FRG formulation – are
mathematically equivalent. This, however, is, as we have seen, a highly non-
trivial statement and demands numerical precision tests, which are part of
this work.
In Ref. [73] it is shown that the FRG framework can be generalized to models
or theories with arbitrary field content in arbitrary dimensions and space-
time background (even a formulation for space-time itself, i.e., quantum
gravity is possible [85, 86], see Ref. [13] for a recent review).
Before we introduce the zero-dimensional $O(N)$ model and explain the relation
of the FRG to fluid dynamics, followed by our main discussion of zero-
dimensional QFTs as a testing ground for numerical methods and truncation
schemes, we discuss two further issues. The first contextualizes our previous
discussion with an interpretation of the FRG from the RG perspective (also for
higher-dimensional field theories). Furthermore, it briefly discusses the
generalization of the FRG equation to different field content. This can also
be found in Refs. [73, 74, 13, 87, 88, 102, 103, 104]. The second issue
discusses the relation between the $n$-point correlation functions of the
different generating functionals $\mathcal{Z}_{(t)}[J]$,
$\mathcal{W}_{(t)}[J]$, and $\Gamma_{(t)}[\varphi]$. This is needed for a
comparison of the exact results from the partition function $\mathcal{Z}[J]$
with our results from the FRG based on $\Gamma[\varphi]$. Readers familiar
with these issues may skip the following two sections.
### II.4 Contextualization with FRG in higher-dimensional space-time
The structure of the FRG equation (38) is already very general and extends
with only minor modifications to arbitrary fields and dimensions. Derivations
can be found in, e.g., Refs. [73, 102, 88]. The FRG equation reads
$\displaystyle\partial_{t}\bar{\Gamma}_{t}[\Phi]=\mathrm{STr}\Big{[}\big{(}\tfrac{1}{2}\,\partial_{t}R_{t}\big{)}\,\big{(}\bar{\Gamma}^{(2)}_{t}[\Phi]+R_{t}\big{)}^{-1}\Big{]}\,.$
(40)
The supertrace in Eq. (40) entails sums over internal indices and different
fields and integrals over momenta, taking minus signs for fermionic fields
properly into account. The fundamental difference between the ERG Eq. (40) and its zero-dimensional counterpart (37) is that the ERG equation in $d>0$ is a functional differential equation for the classical fields $\Phi$. It does not naturally take the form of a PDE, which necessitates truncations in practical computations in order to project the ERG equation onto a finite set of coupled ODEs and/or PDEs. The regulator $R_{t}$ for computations in $d>0$ is no longer a simple scalar function but an operator with a particular, non-trivial structure in position/momentum space. While different regulator choices are still possible in higher dimensions, the corresponding RG flows are no longer related by simple rescalings, and a suitable regulator choice for the problem at hand becomes particularly important when considering explicit truncated FRG flow equations, see, e.g., Refs. [70, 71]. More details can be found in, e.g.,
Refs. [73, 74, 87, 102, 103, 104, 88, 13, 105]. The equation is based upon
momentum locality, i.e., the integrand of the momentum integral on the right-
hand side is peaked around the RG scale $k\approx q$ (for conventional
regulators), see, e.g., Fig. 1 in Ref. [102] or Fig. 3.1 of Ref. [106], where
$q$ is the loop momentum and
$\displaystyle t=-\ln\big{(}\tfrac{k}{\Lambda}\big{)}\,.$ (41)
The FRG equation can be interpreted as a direct implementation of Wilson’s
approach to the RG [107, 108, 109].
In general, the space-time dimensionality has to be taken into account when
considering the convergence properties of different expansion schemes. For
example, the vertex expansion is believed to work very well for QCD in $d=4$
dimensions (see, e.g., Ref. [110] for a recent overview), however, as we will
discuss below, the convergence of the expansion is in general not guaranteed.
The vertex expansion is an expansion in terms of moments of the quantum
effective action, explained in detail in Sub.Sub.Sec. III.3.2. Here, the
moments are the irreducible parts of scattering kernels.
The convergence of this expansion is given by two main ingredients:
1. phase-space suppression,
2. finite couplings.
The first point means that higher-order vertices, which originate from quantum
effects and are typically not present in the classical action, come with
increasing suppression factors, e.g., due to the angular integrations. The
second point simply relates to the fact that all couplings have to stay
finite. Otherwise the argument related to phase-space suppression simply does
not work. There are several scenarios where this can be the case. The main one is the presence of resonances, where couplings can be divergent. Also large densities might circumvent the effect of phase-space suppression, but these are not our main concern in this work. The last, and for this work most important, effect is that of the dimension.
In particular, for zero-dimensional space-time the angular integrations are
not present, and hence the entire argument of phase-space suppression does not
work. Zero-dimensional QFT is ultra-local – defined only at a single point –
and thus extremely coupled in field space. This, of course, has to be kept in
mind when considering convergence properties of vertex expansions.
Still, a parallel work in $1+1$ space-time dimensions by some of us and collaborators [32] generically supports these statements and the increasing importance of local interactions in low space-time dimensions.
### II.5 $n$-point correlation functions
In this section we discuss the scale-dependent correlation functions, which
can be extracted from the (scale-)dependent generating functionals
$\mathcal{Z}_{(t)}$, $\mathcal{W}_{(t)}$, and $\Gamma_{(t)}$. We restrict the
discussion to a zero-dimensional quantum theory with a single real scalar. The
concepts and expressions can be generalized to theories including arbitrary field content and to higher dimensions. For a broader discussion in the context of QFTs we refer the interested reader to the textbooks [39, 38, 94,
111, 40]. For a comprehensive discussion of correlation functions and their
relations in the FRG see, e.g., Refs. [83, 103].
Correlation functions can be extracted by taking successive functional
derivatives of the generating functional, cf. Eq. (1):
$\displaystyle\langle\phi^{n}\rangle_{t,J}\equiv\frac{\mathcal{Z}^{(n)}_{t,J\cdots
J}[J]}{\mathcal{Z}_{t}[J]}\,.$ (42)
(The non-observable normalization, which we fixed by means of Eq. (4), cancels.)
According to Eq. (24), the one-point correlation function
$\displaystyle\langle\phi\rangle_{t,J}=\frac{\mathcal{Z}^{(1)}_{t,J}[J]}{\mathcal{Z}_{t}[J]}=\mathcal{W}^{(1)}_{t,J}[J]$
(43)
equals the scale-dependent classical field $\varphi_{t}(J)$.
The two-point correlation function
$\displaystyle\langle\phi^{2}\rangle_{t,J}=\frac{\mathcal{Z}^{(2)}_{t,JJ}[J]}{\mathcal{Z}_{t}[J]}=\mathcal{W}^{(2)}_{t,JJ}[J]+\langle\phi\rangle_{t,J}^{2}$
(44)
is of particular interest in QFT since it is related to the transition
amplitude between two states. In $d>0$ such an amplitude between $\phi(x_{1})$
and $\phi(x_{2})$ encodes the particle motion between the space-time points
$x_{1}$ and $x_{2}$. $\langle\phi^{2}\rangle_{t,J}$ includes the disconnected contribution $\langle\phi\rangle_{t,J}^{2}$. (“Connected” and “disconnected” in this context refer to the connectivity of the Feynman-diagram representation of the correlation functions: in a connected Feynman diagram all external lines are connected through at least one internal line.) This information is already stored in the
$1$-point correlation function. Higher-order $n$-point correlation functions
include disconnected parts consisting of products of lower $m$-point functions
with $m<n$ [81]. The disconnected contributions correspond to scattering
processes where only a subset of the fields interact with each other and are
as such irrelevant for observables. Loosely speaking, $\mathcal{Z}_{t}[J]$
contains redundant information in the form of these disconnected diagrams.
The Schwinger functional $\mathcal{W}_{t}[J]$ does not contain this redundant
information. Functional derivatives of $\mathcal{W}_{t}[J]$ generate connected
$n$-point functions:
$\displaystyle\langle\phi^{n}\rangle^{c}_{t,J}\equiv\mathcal{W}^{(n)}_{t,J\ldots
J}[J]\,.$ (45)
The first two connected $n$-point functions are
$\displaystyle\langle\phi\rangle^{c}_{t,J}=\langle\phi\rangle_{t,J}=\mathcal{W}^{(1)}_{t,J}[J]\,,$ (46)
$\displaystyle\langle\phi^{2}\rangle^{c}_{t,J}=\langle\phi^{2}\rangle_{t,J}-\langle\phi\rangle_{t,J}^{2}=\mathcal{W}^{(2)}_{t,JJ}[J]\,.$ (47)
Higher-order $n$-point functions are interpreted as interaction vertices. For
example, the connected three-point correlation function is given by
$\displaystyle\langle\phi^{3}\rangle^{c}_{t,J}=\langle\phi^{3}\rangle_{t,J}-3\,\langle\phi^{2}\rangle_{t,J}\,\langle\phi\rangle_{t,J}+2\,\langle\phi\rangle_{t,J}^{3}\,.$ (48)
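These combinatorial relations can be checked directly by numerical quadrature. A short sketch (the action, the source value $J$, and the step size $h$ are illustrative choices):

```python
# Sketch: numerical check of the cumulant relation (48) at finite source J.
# The third J-derivative of W = ln Z is compared to the moment combination
# on the right-hand side; all parameters are illustrative.
import numpy as np
from scipy.integrate import quad

S = lambda x: 0.5 * x**2 + 0.1 * x**4

def Z(J):
    return quad(lambda x: np.exp(-S(x) + J * x), -np.inf, np.inf)[0]

def moment(n, J):
    return quad(lambda x: x**n * np.exp(-S(x) + J * x),
                -np.inf, np.inf)[0] / Z(J)

W = lambda J: np.log(Z(J))
J, h = 0.5, 0.02
w3 = (W(J + 2*h) - 2.0 * W(J + h) + 2.0 * W(J - h) - W(J - 2*h)) / (2.0 * h**3)
rhs = moment(3, J) - 3.0 * moment(2, J) * moment(1, J) + 2.0 * moment(1, J)**3
print(w3, rhs)   # agree up to the O(h^2) finite-difference error
```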
The Schwinger functional, as the generating functional of connected correlation functions, still contains redundant information, since connected correlation functions can be decomposed into 1PI vertex functions. (One-particle irreducible (1PI) in this context refers to Feynman diagrams which cannot be split into two disconnected diagrams by cutting a single internal line.) 1PI vertex functions encode all information about a QFT.
The effective action $\Gamma_{t}[\varphi]$ is the generating functional of 1PI
vertex functions [112, 94, 80, 111, 38, 39, 40, 113]. We now introduce a
central object in functional approaches to QFT: the full scale-dependent
propagator
$\displaystyle
G_{t}^{\varphi\varphi}[\varphi_{t}]\equiv\mathcal{W}^{(2)}_{t,J_{t}J_{t}}[J_{t}]=\big{(}\Gamma^{(2)}_{t,\varphi_{t}\varphi_{t}}[\varphi_{t}]\big{)}^{-1}\,,$
(49)
where the last equality follows from Eq. (36). Recalling Eqs. (23) and (24) we
then obtain
$\displaystyle\frac{\delta}{\delta J_{t}}=\frac{\delta\varphi_{t}}{\delta J_{t}}\,\frac{\delta}{\delta\varphi_{t}}=\mathcal{W}^{(2)}_{t,J_{t}J_{t}}[J_{t}]\,\frac{\delta}{\delta\varphi_{t}}\equiv G_{t}^{\varphi\varphi}[\varphi_{t}]\,\frac{\delta}{\delta\varphi_{t}}\,.$ (50)
Here we dropped the explicit $\varphi$ ($J$) dependence of the source
realizing the supremum $J_{t}$ (the scale-dependent mean field $\varphi_{t}$)
for readability only and will do so for the remainder of this section.
Equation (50) is basically a chain rule, which allows one to convert functional $J_{t}$ derivatives into $\varphi_{t}$ derivatives. The correlation function
$\langle\phi^{n}\rangle_{t,J_{t}}$ for $n\geq 1$ can be rewritten by
successively pulling out functional $J_{t}$ derivatives,
$\displaystyle\langle\phi^{n}\rangle_{t,J_{t}}=\frac{\mathcal{Z}^{(n)}_{t,J_{t}\cdots J_{t}}[J_{t}]}{\mathcal{Z}_{t}[J_{t}]}=\bigg(\frac{\delta}{\delta J_{t}}+\varphi_{t}\bigg)\,\frac{\mathcal{Z}^{(n-1)}_{t,J_{t}\ldots J_{t}}[J_{t}]}{\mathcal{Z}_{t}[J_{t}]}=\Bigg(\prod_{i=1}^{n-1}\bigg(\frac{\delta}{\delta J_{t}}+\varphi_{t}\bigg)\Bigg)\,\varphi_{t}\,,$ (51)
where the $\varphi_{t}$ terms account for the derivatives of the normalization
$1/\mathcal{Z}_{t}[J_{t}]$. Using the chain rule (50) in Eq. (51) we arrive at
$\displaystyle\langle\phi^{n}\rangle_{t,J_{t}}=\Bigg{(}\prod_{i=1}^{n-1}\bigg{(}G_{t}^{\varphi\varphi}[\varphi_{t}]\,\frac{\delta}{\delta\varphi_{t}}+\varphi_{t}\bigg{)}\Bigg{)}\,\varphi_{t}\,,$
(52)
which expresses the correlation function $\langle\phi^{n}\rangle_{t,J_{t}}$
completely in terms of $\varphi_{t}$, $G_{t}^{\varphi\varphi}[\varphi_{t}]$,
and 1PI vertices for $n\geq 3$. The higher ($n\geq 3$) 1PI vertices
$\Gamma^{(n)}_{t,\varphi_{t}\cdots\varphi_{t}}[\varphi_{t}]$ emerge in Eq.
(52) from the functional derivatives of the propagator. Taking the
$\varphi_{t}$ derivative of Eq. (36) (for $\varphi\equiv\varphi_{t}$, $J\equiv
J_{t}$), we derive
$\displaystyle\frac{\delta}{\delta\varphi_{t}}\,G_{t}^{\varphi\varphi}[\varphi_{t}]=\frac{\delta}{\delta\varphi_{t}}\,\big(\Gamma^{(2)}_{t,\varphi_{t}\varphi_{t}}[\varphi_{t}]\big)^{-1}=-G_{t}^{\varphi\varphi}[\varphi_{t}]\,\Gamma^{(3)}_{t,\varphi_{t}\varphi_{t}\varphi_{t}}[\varphi_{t}]\,G_{t}^{\varphi\varphi}[\varphi_{t}]\,,$ (53)
where we have used Eq. (49) and where
$\displaystyle\frac{\delta}{\delta\varphi_{t}}\,\Gamma^{(n)}_{t,\varphi_{t}\cdots\varphi_{t}}[\varphi_{t}]=\Gamma^{(n+1)}_{t,\varphi_{t}\cdots\varphi_{t}\varphi_{t}}[\varphi_{t}]\,.$
(54)
From the definition (45) and Eq. (50) it is even simpler to derive
$\displaystyle\langle\phi^{n}\rangle_{t,J_{t}}^{c}=\Bigg(\prod_{i=1}^{n-1}\bigg(G_{t}^{\varphi\varphi}[\varphi_{t}]\,\frac{\delta}{\delta\varphi_{t}}\bigg)\Bigg)\,\varphi_{t}\,,$ (55)
which establishes a decomposition of connected correlation functions in terms
of $\varphi_{t}$, $G_{t}^{\varphi\varphi}[\varphi_{t}]$, and 1PI vertices for
$n\geq 3$. Equation (55) is simpler than Eq. (52) because disconnected
contributions arising from the term $\sim\varphi_{t}$ in the parenthesis in
Eq. (52) are absent.
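As a short illustration of this machinery (an intermediate computation we spell out for convenience), applying Eq. (55) for $n=3$ and using Eq. (53) in the last step immediately yields
$\displaystyle\langle\phi^{3}\rangle^{c}_{t,J_{t}}=\Big(G_{t}^{\varphi\varphi}\,\tfrac{\delta}{\delta\varphi_{t}}\Big)\Big(G_{t}^{\varphi\varphi}\,\tfrac{\delta}{\delta\varphi_{t}}\Big)\,\varphi_{t}=G_{t}^{\varphi\varphi}\,\frac{\delta G_{t}^{\varphi\varphi}}{\delta\varphi_{t}}=-\big(G_{t}^{\varphi\varphi}\big)^{3}\,\Gamma^{(3)}_{t,\varphi_{t}\varphi_{t}\varphi_{t}}\,,$
where the arguments $[\varphi_{t}]$ are suppressed; this is Eq. (59) below.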
In terms of $\Gamma_{t}[\varphi_{t}]$ the first three (connected) correlation
functions are given by
$\displaystyle\langle\phi^{1}\rangle_{t,J_{t}}=\langle\phi^{1}\rangle_{t,J_{t}}^{c}=\varphi_{t}\,,$ (56)
$\displaystyle\langle\phi^{2}\rangle_{t,J_{t}}^{c}=G_{t}^{\varphi\varphi}[\varphi_{t}]\,,$ (57)
$\displaystyle\langle\phi^{2}\rangle_{t,J_{t}}=G_{t}^{\varphi\varphi}[\varphi_{t}]+\varphi_{t}^{2}\,,$ (58)
$\displaystyle\langle\phi^{3}\rangle_{t,J_{t}}^{c}=-\big(G_{t}^{\varphi\varphi}[\varphi_{t}]\big)^{3}\,\Gamma^{(3)}_{t,\varphi_{t}\varphi_{t}\varphi_{t}}[\varphi_{t}]\,,$ (59)
$\displaystyle\langle\phi^{3}\rangle_{t,J_{t}}=-\big(G_{t}^{\varphi\varphi}[\varphi_{t}]\big)^{3}\,\Gamma^{(3)}_{t,\varphi_{t}\varphi_{t}\varphi_{t}}[\varphi_{t}]+3\,G_{t}^{\varphi\varphi}[\varphi_{t}]\,\varphi_{t}+\varphi_{t}^{3}\,.$ (60)
We will need these relations among the different $n$-point correlation
functions to compare our numerical results from solving the RG flow equation
with fluid-dynamical methods to the direct computation of the correlation
functions from the partition function $\mathcal{Z}[J]$.
## III The $O(N)$ model in zero dimensions and its treatment within the FRG
Zero-dimensional $O(N)$ models are predominantly studied for pedagogical and
conceptual purposes [45, 46, 47, 48, 49, 50, 62, 53, 44, 54, 52, 51, 55, 56,
58, 60, 63, 64]. In Ref. [44] the model was used to compare the quality of
perturbation theory, the large-$N$ expansion, and the FRG vertex/Taylor
expansion with the exact result. The primary focus of the present work is to
push this analysis even further and to study the limits of untruncated RG flow
equations as well as the FRG Taylor expansion.
$O(N)$ models in higher dimensions play an important role in understanding
spin systems, like the Ising model [114, 115, 104], and magnetization
phenomena. Furthermore, they are often used as toy models and are of utmost
importance for understanding the Anderson-Brout-Englert-Guralnik-Hagen-Higgs-
Kibble mechanism and the formation of a chiral condensate in strong-
interaction matter. In the context of numerical methods for the FRG, two of us
used the $O(N)$ model in $d=3$ to study numerical solutions of RG flow
equations in the large-$N$ limit [26].
This section is structured as follows: In Sub.Sec. III.1 we introduce the
$O(N)$ model on the level of the classical action and the functional integral.
We further comment on the calculation of expectation values and 1PI vertex
functions from the functional integral, which are our observables of interest.
Thereafter, in Sub.Sec. III.2, we comment on symmetry restoration during the
RG flow, for scenarios in which the classical action
$\mathcal{S}[\vec{\varphi}\,]=U(t_{\mathrm{UV}},\vec{\varphi}\,)$ possesses a
non-trivial minimum. In Sub.Sec. III.3, we introduce the exact FRG formulation
of the model, which includes the derivation of the RG flow equation as an
exact PDE and generalization of Eq. (38). We close this section by deriving
the FRG Taylor expansion for the $O(N)$ model, which is a commonly used
expansion scheme in FRG studies.
### III.1 The zero-dimensional $O(N)$ model
Consider a zero-dimensional theory of $N$ bosonic scalars $\phi_{a}$, which
transform according to
$\displaystyle\phi_{a}\mapsto\phi^{\prime}_{a}=O_{ab}\,\phi_{b}\,,$ (61)
where $O\in O(N)$ and $a,b\in\{1,\,\ldots,\,N\}$. In vector notation, this
reads
$\displaystyle\vec{\phi}\mapsto\vec{\phi}^{\,\prime}=O\,\vec{\phi}\,,$ (62)
where $\vec{\phi}=(\phi_{1},\,\phi_{2},\,\ldots,\,\phi_{N})$. If the action
$\mathcal{S}[\vec{\phi}\,]$ of the model possesses an $O(N)$ symmetry, it can
contain all possible terms that are functions of the $O(N)$ invariant
$\displaystyle\rho\equiv\tfrac{1}{2}\,\phi_{a}\,\phi_{a}\equiv\tfrac{1}{2}\,\vec{\phi}^{\,2}\,.$ (63)
This implies that the most general action obeying this symmetry is given by
$\displaystyle\mathcal{S}[\vec{\phi}\,]=U(\vec{\phi}\,)=U(\rho)\,,$ (64)
where $U(\rho)$ is the effective potential, in analogy to models from higher-
dimensional space-times. This effective potential might for example include a
bosonic “mass term” $m^{2}\rho$ as well as other interaction terms containing
arbitrary powers of $\rho$. Although one may now be tempted to assume that the
effective potential $U(\rho)$ must be a power series or an analytic function
of $\rho$, as long as it fulfills all symmetries it can be any continuous
function of $\rho$ which is bounded from below, cf. the discussion in Sec. II
for the special case of the $O(1)$ model.
In the remainder of this section we will summarize relevant relations for the
$O(N)$ model. For a more detailed discussion, we refer the interested reader
to Ref. [44] and references therein.
All generating functionals of the theory retain the $O(N)$ symmetry of the
action, which makes them functionals of the invariants
$\tfrac{1}{2}\,\vec{J}^{\,2}$ for $\mathcal{Z}$ and $\mathcal{W}$ and
$\varrho\equiv\frac{1}{2}\,\vec{\varphi}^{\,2}$ for $\Gamma$. This entails
that all $n$-point correlation functions for odd $n$ vanish by symmetry and
all $n$-point correlation functions of a given even order $n$ are
proportional to each other, e.g., for the four-point function we find
$\displaystyle\langle\phi_{i}\,\phi_{i}\,\phi_{j}\,\phi_{j}\rangle=\tfrac{1}{3}\,\langle\phi_{i}\,\phi_{i}\,\phi_{i}\,\phi_{i}\rangle\,,$
(65)
for $i\neq j$ and $i,j\in\{1,\ldots,N\}$ (no summation over repeated indices
implied here). For the proof, use that $\tfrac{\delta}{\delta
J_{i}}\mathcal{Z}\big{(}\tfrac{1}{2}\vec{J}^{\,2}\big{)}=J_{i}\,\mathcal{Z}^{\prime}\big{(}\tfrac{1}{2}\,\vec{J}^{\,2}\big{)}$
and set the source $\vec{J}=0$ at the end of the calculation. Using the $O(N)$
symmetry on the right-hand side of
$\displaystyle\langle\phi_{i_{1}}\,\cdots\,\phi_{i_{n}}\rangle=\frac{1}{\mathcal{Z}[0]}\,\int_{-\infty}^{\infty}\mathrm{d}^{N}\phi\,\phi_{i_{1}}\,\cdots\,\phi_{i_{n}}\,\mathrm{e}^{-U(\vec{\phi}^{\,2}/2)}\,,$ (66)
one can relate correlation functions of even order $2n$ to the expectation
value $\langle(\vec{\phi}^{\,2})^{n}\rangle$. For the two-, four-, and six-
point functions, which are studied in this work, we find
$\displaystyle\langle\phi_{i}\,\phi_{j}\rangle=\frac{1}{N}\,\delta_{ij}\,\langle\vec{\phi}^{\,2}\rangle\,,$ (67)
$\displaystyle\langle\phi_{i}\,\phi_{j}\,\phi_{k}\,\phi_{l}\rangle=\frac{1}{N(N+2)}\,(\delta_{ij}\,\delta_{kl}+\delta_{ik}\,\delta_{jl}+\delta_{il}\,\delta_{jk})\,\langle(\vec{\phi}^{\,2})^{2}\rangle\,,$ (68)
$\displaystyle\langle\phi_{i}\,\phi_{j}\,\phi_{k}\,\phi_{l}\,\phi_{m}\,\phi_{n}\rangle=\frac{1}{N(N+2)(N+4)}\,(\delta_{ij}\,\delta_{kl}\,\delta_{mn}+\mathrm{all\ permutations})\,\langle(\vec{\phi}^{\,2})^{3}\rangle\,.$ (69)
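For illustration, the derivative chains behind these relations can be spelled out explicitly (an intermediate computation we add for convenience). For the four-point function,
$\displaystyle\tfrac{\delta}{\delta J_{i}}\,\mathcal{Z}\big(\tfrac{1}{2}\vec{J}^{\,2}\big)=J_{i}\,\mathcal{Z}^{\prime}\,,\qquad\tfrac{\delta^{2}\mathcal{Z}}{\delta J_{j}\,\delta J_{i}}=\delta_{ij}\,\mathcal{Z}^{\prime}+J_{i}\,J_{j}\,\mathcal{Z}^{\prime\prime}\,,$
and after two more derivatives only the fully contracted terms survive at $\vec{J}=0$,
$\displaystyle\tfrac{\delta^{4}\mathcal{Z}}{\delta J_{l}\,\delta J_{k}\,\delta J_{j}\,\delta J_{i}}\Big|_{\vec{J}=0}=\big(\delta_{ij}\,\delta_{kl}+\delta_{ik}\,\delta_{jl}+\delta_{il}\,\delta_{jk}\big)\,\mathcal{Z}^{\prime\prime}(0)\,,$
such that $\langle\phi_{i}\,\phi_{i}\,\phi_{j}\,\phi_{j}\rangle=\mathcal{Z}^{\prime\prime}(0)/\mathcal{Z}(0)$ for $i\neq j$, while $\langle\phi_{i}^{4}\rangle=3\,\mathcal{Z}^{\prime\prime}(0)/\mathcal{Z}(0)$, reproducing Eq. (65) and the tensor structure of Eq. (68).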
Connected correlation functions and 1PI vertex functions are related to
correlation functions as outlined in Sub.Sec. II.5. Using the fact that, for
odd $n$, all $n$-point correlation functions and all $n$-point 1PI vertex
functions vanish by symmetry, the following relations hold for the two-,
four-, and six-point functions (no summation over repeated indices):
$\displaystyle\langle\phi_{i}\,\phi_{i}\rangle^{c}=\langle\phi_{i}\,\phi_{i}\rangle=\big(\Gamma^{(2)}_{\varphi_{i}\varphi_{i}}\big)^{-1}\,,$ (70)
$\displaystyle\langle\phi_{i}\,\phi_{i}\,\phi_{i}\,\phi_{i}\rangle^{c}=\langle\phi_{i}\,\phi_{i}\,\phi_{i}\,\phi_{i}\rangle-3\,\langle\phi_{i}\,\phi_{i}\rangle^{2}=-\langle\phi_{i}\,\phi_{i}\rangle^{4}\,\Gamma^{(4)}_{\varphi_{i}\varphi_{i}\varphi_{i}\varphi_{i}}\,,$ (71)
$\displaystyle\langle\phi_{i}\,\phi_{i}\,\phi_{i}\,\phi_{i}\,\phi_{i}\,\phi_{i}\rangle^{c}=\langle\phi_{i}\,\phi_{i}\,\phi_{i}\,\phi_{i}\,\phi_{i}\,\phi_{i}\rangle-15\,\langle\phi_{i}\,\phi_{i}\,\phi_{i}\,\phi_{i}\rangle\,\langle\phi_{i}\,\phi_{i}\rangle+30\,\langle\phi_{i}\,\phi_{i}\rangle^{3}=-\langle\phi_{i}\,\phi_{i}\rangle^{6}\,\Gamma^{(6)}_{\varphi_{i}\varphi_{i}\varphi_{i}\varphi_{i}\varphi_{i}\varphi_{i}}+10\,\langle\phi_{i}\,\phi_{i}\rangle^{-1}\,\big(\langle\phi_{i}\,\phi_{i}\,\phi_{i}\,\phi_{i}\rangle^{c}\big)^{2}\,.$ (72)
Inserting Eqs. (67) – (69) into Eqs. (70) – (72) and solving for the 1PI
vertex functions yields
$\displaystyle\Gamma^{(2)}\equiv\Gamma^{(2)}_{\varphi_{i}\varphi_{i}}=N\,\frac{1}{\langle\vec{\phi}^{\,2}\rangle}\,,$ (73)
$\displaystyle\Gamma^{(4)}\equiv\Gamma^{(4)}_{\varphi_{i}\varphi_{i}\varphi_{i}\varphi_{i}}=3\,N^{2}\,\frac{1}{\langle\vec{\phi}^{\,2}\rangle^{2}}\,\bigg[1-\frac{N}{N+2}\,\frac{\langle(\vec{\phi}^{\,2})^{2}\rangle}{\langle\vec{\phi}^{\,2}\rangle^{2}}\bigg]\,,$ (74)
$\displaystyle\Gamma^{(6)}\equiv\Gamma^{(6)}_{\varphi_{i}\ldots\varphi_{i}}=60\,N^{3}\,\frac{1}{\langle\vec{\phi}^{\,2}\rangle^{3}}\bigg[1-\frac{9\,N}{4\,(N+2)}\,\frac{\langle(\vec{\phi}^{\,2})^{2}\rangle}{\langle\vec{\phi}^{\,2}\rangle^{2}}+\frac{3\,N^{2}}{2\,(N+2)^{2}}\,\frac{\langle(\vec{\phi}^{\,2})^{2}\rangle^{2}}{\langle\vec{\phi}^{\,2}\rangle^{4}}-\frac{N^{2}}{4\,(N+2)\,(N+4)}\,\frac{\langle(\vec{\phi}^{\,2})^{3}\rangle}{\langle\vec{\phi}^{\,2}\rangle^{3}}\bigg]\,.$ (75)
In summary, computing arbitrary correlation functions (or 1PI vertex
functions) of the zero-dimensional $O(N)$ model boils down to computing
expectation values $\langle(\vec{\phi}^{\,2})^{n}\rangle$. The latter can be
computed using Eq. (66). Because of the $O(N)$ symmetry of the integrand, this
is most easily done in spherical coordinates. Performing the integration over
spherical coordinates, we have
$\displaystyle\int_{-\infty}^{\infty}\mathrm{d}\phi_{1}\cdots\int_{-\infty}^{\infty}\mathrm{d}\phi_{N}=\frac{2\,\pi^{\frac{N}{2}}}{\Gamma\big(\frac{N}{2}\big)}\int_{0}^{\infty}\mathrm{d}\rho\,(2\rho)^{\frac{N}{2}-1}\,.$ (76)
Then the expectation value is a simple one-dimensional integral,
$\displaystyle\langle(\vec{\phi}^{\,2})^{n}\rangle=\frac{2^{n}\int_{0}^{\infty}\mathrm{d}\rho\,\rho^{\frac{N}{2}-1}\,\rho^{n}\,\mathrm{e}^{-U(\rho)}}{\int_{0}^{\infty}\mathrm{d}\rho\,\rho^{\frac{N}{2}-1}\,\mathrm{e}^{-U(\rho)}}\,.$
(77)
For certain potentials $U(\rho)$, the integral (77) can even be computed
symbolically in terms of known functions [44, 56, 31], whereas for general
$U(\rho)$ a numerical evaluation to high precision is straightforward using
standard methods [42, 43]. Thus, the zero-dimensional $O(N)$ model is an ideal
testing ground for alternative methods to calculate correlation functions,
such as, e.g., the FRG.
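Such a direct evaluation indeed takes only a few lines of code. A sketch (with illustrative $N$ and potential) computing the expectation values (77) and the vertices (73) and (74):

```python
# Sketch: expectation values <(phi^2)^n> of the zero-dimensional O(N) model
# from the one-dimensional integral (77), and the 1PI vertices Gamma^(2),
# Gamma^(4) from Eqs. (73) and (74). N and the potential are illustrative.
import numpy as np
from scipy.integrate import quad

N = 4
U = lambda rho: -1.0 * rho + 0.5 * rho**2    # example potential, non-trivial minimum

def phi2_moment(n):
    w = lambda rho: rho**(N / 2 - 1) * np.exp(-U(rho))
    num = quad(lambda rho: rho**n * w(rho), 0.0, np.inf)[0]
    den = quad(w, 0.0, np.inf)[0]
    return 2.0**n * num / den                # Eq. (77)

p2, p4 = phi2_moment(1), phi2_moment(2)
Gamma2 = N / p2                                              # Eq. (73)
Gamma4 = 3 * N**2 / p2**2 * (1 - N / (N + 2) * p4 / p2**2)   # Eq. (74)
print(Gamma2, Gamma4)
```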
### III.2 Symmetry restoration during the RG flow
Besides being invariant under $O(N)$ transformations the classical action
(potential) $\mathcal{S}[\vec{\phi}\,]=U(\vec{\phi}\,)$ is also invariant
under the discrete $\mathbb{Z}_{2}$ transformation
$\displaystyle\phi_{a}\rightarrow-\phi_{a}\,,$ (78)
which, as already mentioned above, implies that all $n$-point functions with
odd $n$ vanish, e.g., the one-point function
$\varphi_{a}=\langle\phi_{a}\rangle=0$.
However, it is possible to consider actions (potentials)
$\mathcal{S}[\rho]=U(\rho)$ which possess non-trivial minima $\rho_{0}\neq 0$.
This means that the RG flow of $\bar{\Gamma}_{t}[\vec{\varphi}\,]$ of such
models is initialized in a symmetry-broken regime in the UV, where the $O(N)$
symmetry is broken to its $O(N-1)$ subgroup. (For the $O(1)$ model, this
reduces to a breaking of the $\mathbb{Z}_{2}$ symmetry.) Following the
discussion in App. B, this property of the classical action neither translates
to the full quantum effective action $\Gamma[\vec{\varphi}\,]$ in the IR nor
to the $n$-point functions, due to a limiting case of the Coleman-Mermin-
Wagner-Hohenberg theorem [67, 68, 69]. The theorem states that there is no
long-range order in $d\leq 2$ dimensions if the interactions between the
constituents are sufficiently short-ranged. Therefore, there is no breaking
of a (continuous) symmetry in such systems in the IR, i.e., after integrating
out all quantum fluctuations, even when starting with a classical action in
the UV that has non-trivial minima. This is the equivalent of the statement
that $\varphi_{a}=\langle\phi_{a}\rangle=0$. The “Nambu-Goldstone modes” [116, 117, 112], which we will also call pions $\vec{\pi}$ in the zero-dimensional $O(N)$ model, and the radial $\sigma$ mode “vaporize” any condensate and smear out all cusps in $\bar{\Gamma}_{t}[\vec{\varphi}\,]$ during the RG flow. (We put the term “Nambu-Goldstone modes” in quotation marks because in zero dimensions the concept of “massless modes” can only refer to the curvature masses in the corresponding bosonic field directions, which are obtained from the effective potential $U(\rho)$; the actual particle masses of a higher-dimensional QFT are derived from the poles of the real-time propagators, which simply do not exist in zero dimensions. In calling these modes pions we adopt the high-energy terminology; condensed-matter physicists associate the pions with quasiparticles, the Anderson-Bogoliubov modes.) In the IR all modes are then “massive” again.
There are two reasons why this feature of symmetry restoration on the level
of $\bar{\Gamma}_{t}[\vec{\varphi}\,]$ is desirable for our numerical tests:
1.
Symmetry breaking/restoration associated with condensation/“vaporization” is
an essential property of all kinds of QFTs [38, 39, 40] and we have to show
that it is correctly captured by our numerical tools. This is especially
important, because it was shown by two of us and collaborators [26, 27] that
non-analytic behavior in the effective potential $U(t,\vec{\varphi}\,)$, cf.
Refs. [22, 118], which is directly associated with dynamical symmetry
breaking/restoration, is realized as shock and rarefaction waves in field
space during the RG flow.
2.
The possibility of dynamical symmetry restoration on the level of
$\bar{\Gamma}_{t}[\vec{\varphi}\,]$ is also a desired feature in order to
demonstrate that it is of utmost importance to choose the UV cutoff $\Lambda$
and the IR cutoff $r_{\mathrm{IR}}$ as well as initial and boundary conditions
in numerical FRG-flow calculations carefully. For our example it is expected
that if the IR cutoff time $t_{\mathrm{IR}}$ is chosen too small, such that
the regulator $r(t)$ is still too large, the system might still be in the
symmetry-broken phase (indicated by a non-trivial minimum). This means that
the scale-dependent effective average action
$\bar{\Gamma}_{t_{\mathrm{IR}}}[\vec{\varphi}\,]$ at this RG scale cannot be
interpreted as the full quantum effective action $\Gamma[\vec{\varphi}\,]$,
because the Coleman-Mermin-Wagner-Hohenberg theorem is still violated. The
same applies to a problematic implementation of boundary conditions, especially
at $\varrho=0$, which can lead to a violation of the Coleman-Mermin-Wagner-
Hohenberg theorem, such that the system is not in the restored phase in the
IR.
For the direct physical consequences of these subtleties, we refer to the
parallel works [32, 31] by two of us and collaborators.
In a follow-up publication [34], we will generalize the zero-dimensional
$O(N)$ model to a model involving fermions (Grassmann numbers) and bosons. The
more complicated interactions may also allow for dynamical symmetry breaking
via attractive fermion interactions during the RG flow. Of course, the system
must return to the restored phase in the limit $t\to\infty$.
### III.3 FRG formulation and flow equations
This subsection is dedicated to the FRG formulation of the $O(N)$ model of the
previous Sub.Sec. III.1. To this end, we demonstrate how to arrive at the
exact untruncated RG flow equation of the $O(N)$ model. Furthermore, we
introduce a commonly used truncation scheme for RG flow equations – the FRG
Taylor expansion, see, e.g., Refs. [87, 102, 104, 88, 44, 74, 13]. We start
our discussion with general remarks on the derivation of RG flow equations and
truncation schemes.
From Sec. II and especially Sub.Secs. II.3 and II.4 we have learned that the
FRG equation (40) constitutes an exact PDE for the RG time evolution of the
full field-dependent effective average action $\bar{\Gamma}_{t}[\Phi]$ with
initial condition $\bar{\Gamma}_{t=0}[\Phi]=\mathcal{S}[\Phi]$. Here, $\Phi$
stands for the field space vector of all fields of the specific model under
consideration. However, if there is more than one field space degree of
freedom, the direct (numerical) solution of the FRG equation (40) as a PDE is
exceedingly difficult, because of the high dimensionality of the field space.
In higher space-time dimensions, space-time or momentum dependences of the
fields complicate this issue and promote Eq. (40) to a functional integro-
partial-differential equation with a functional $\mathcal{S}[\Phi(x)]$ or
$\mathcal{S}[\tilde{\Phi}(p)]$ as initial condition.
Instead of solving Eq. (40) directly (independent of the dimensionality and
the field content), one usually specifies some ansatz function for the
effective average action $\bar{\Gamma}_{t}[\Phi]$, which involves only a
finite number of $t$ dependent couplings (vertices). The ansatz function for
$\bar{\Gamma}_{t}[\Phi]$ must respect all symmetries of the model and the
functional integral. Afterwards, one works out a projection prescription,
which extracts these couplings from $\bar{\Gamma}_{t}[\Phi]$. Usually this is
done by
1.
Taking a suitable number of (functional) derivatives in field (and/or
momentum) space,
2.
Evaluating the resulting expression on a specific (usually constant) field
configuration (and/or at specific external momenta, energies etc.),
3.
Applying contractions of open field space and space-time indices with suitable
tensors.
Thus, inserting the ansatz for $\bar{\Gamma}_{t}[\Phi]$ into the FRG equation
(40) and applying these projection rules to both sides of the equation yields
a coupled set of PDEs and/or ODEs for the couplings. This system of
differential equations must be initialized at $t=0$ with the values of the
couplings taken from the specific choice of the classical action
$\mathcal{S}[\Phi]$. The system for the $t$ dependent couplings is then
evolved to $t\rightarrow\infty$. If needed, the values of the couplings at
$t\rightarrow\infty$ can afterwards be reinserted in $\bar{\Gamma}_{t}[\Phi]$
to obtain the effective action $\Gamma[\Phi]$ in the IR. We will present this
procedure explicitly for the zero-dimensional $O(N)$ model in the next
paragraphs.
However, by considering an ansatz function for $\bar{\Gamma}_{t}[\Phi]$, which
consists of a finite subset (of the usually infinite set) of all possible
interaction terms that respect the symmetries of the system, one effectively
introduces an approximation. In the context of the FRG this is called a
truncation. The concept of a truncation of the system can directly be seen
from Eq. (40): Taking an appropriate number of field space derivatives of this
equation to project on a specific coupling, the right-hand side of this
equation depends on higher-order interaction vertices. These are up to two
orders higher than the ones on the left-hand side, because $\bar{\Gamma}_{t}^{(2)}[\Phi]$ already involves two field space derivatives.
The highest-order couplings in the system of PDEs for the couplings are,
however, set to zero by definition via the ansatz for
$\bar{\Gamma}_{t}[\Phi]$, because only a finite number of couplings is evolved
with $t$. As a result, Eq. (40), which originally corresponds to a coupled
system of infinitely many ODEs and PDEs for couplings of all orders in field
and momentum or position space, is reduced to a finite set of PDEs for the
couplings involved in the ansatz for $\bar{\Gamma}_{t}[\Phi]$, see Refs. [87,
102, 104, 88, 74, 13] for general discussions or, e.g., Refs. [119, 120, 121,
18, 14] for specific applications. Ultimately, the quality of the ansatz
completely determines the quality of the approximation to the actual IR
effective action $\Gamma[\Phi]$ after the RG flow of the truncated system is
solved.
In general, finding reliable truncations for a given problem is challenging. In particular, the identification of a small parameter to justify the truncation is difficult. In fact, such a parameter may not even exist.
It may also turn out that a given truncation yields reliable results for one
observable but not for another. The latter observation may even be considered
a feature, as it allows one to identify mechanisms underlying specific phenomena.
In any case, there are construction schemes for systematic ansätze for the
effective action. Commonly used truncation schemes are for example the
derivative expansion [87, 122, 115, 123], which relies on the expansion of
$\bar{\Gamma}_{t}[\Phi]$ in powers of derivatives (momenta) but includes all
orders of field-dependent vertices at the same momentum order. Another
expansion scheme is the vertex expansion, which expands
$\bar{\Gamma}_{t}[\Phi]$ in terms of (momentum-dependent) $n$-point functions.
Oftentimes different expansion schemes are combined, in order to keep the
system of PDEs tractable [18, 19, 20, 14]. Moreover, truncations can always be
benchmarked against perturbative studies, see, e.g., Refs. [124, 102] for
instructive examples.
One measure of the quality of these expansion schemes is the comparison of terms of different order. It is expected, and can also be observed for certain systems and situations, see e.g., Refs. [125, 15, 126, 127, 128, 16, 14], that the expansions seem to converge and deviations in the observables decrease with increasing expansion order. In the FRG community, this is often
referred to as apparent convergence. Another indication for the quality of the
truncation is the comparison of FRG results with results from other methods
[23, 129, 22, 130, 131, 17], e.g., Monte-Carlo simulations, or the comparison
of critical exponents derived from the FRG and other methods.
In this context, zero-dimensional QFTs play a very special role: Due to the
absence of space-time and momentum dependences of the fields, the effective
average action $\bar{\Gamma}_{t}[\Phi]=\bar{\Gamma}_{t}(\Phi)$ is merely a
function (not a functional) of the fields $\Phi$ and of the $t$ dependent
couplings accompanying all possible terms which respect the symmetry of the
model. This structure can, however, be summarized in terms of effective $\Phi$
and $t$ dependent terms. It is therefore possible to express the effective
average action in terms of a finite amount of terms, which nevertheless
incorporate all possible interactions to all orders in the fields and do not
even need to be analytic functions of the fields. In consequence, truncating
the system is superfluous and the PDEs, which are derived via projections from
the FRG equation, constitute an exact and complete system. Solving this system
must therefore lead to the exact effective action $\Gamma[\Phi]$ in the IR and
is thus completely equivalent to solving the functional integral. In
other words, calculating $n$-point correlation functions via the (functional)
integral or via the FRG equation (if done properly) must yield identical
results without truncation errors.
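To make this statement concrete, the following minimal sketch (our addition, with placeholder couplings) computes exact moments of a zero-dimensional $O(1)$ theory by ordinary quadrature; such numbers serve as exact benchmarks for the FRG flows discussed below.

```python
# Minimal sketch (our addition): in d = 0 the "functional" integral is an
# ordinary integral, so exact n-point functions are available by quadrature.
# The quartic classical action below is a placeholder choice for illustration.
import numpy as np
from scipy.integrate import quad

m2, lam = 1.0, 2.0                                        # placeholder couplings
S = lambda phi: 0.5 * m2 * phi**2 + lam / 24.0 * phi**4   # classical action S(phi)
w = lambda phi: np.exp(-S(phi))                           # Boltzmann weight e^{-S}

Z, _ = quad(w, -np.inf, np.inf)                           # partition function
phi2, _ = quad(lambda p: p**2 * w(p), -np.inf, np.inf)
phi4, _ = quad(lambda p: p**4 * w(p), -np.inf, np.inf)

# exact benchmarks; odd moments vanish by the phi -> -phi symmetry (78)
print("<phi^2> =", phi2 / Z, "  <phi^4> =", phi4 / Z)
```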
This feature makes zero-dimensional QFT particularly interesting for several
reasons:
1.
It can be used to test the quality of numerical schemes which are used to
solve the flow equations.
2.
It can be used to estimate the errors resulting from the choices of various parameters entering the RG flow equations like UV and IR cutoff scales, etc.
3.
It can be used to test commonly used truncation schemes by artificially
truncating the system to a non-complete set of ordinary first-order
differential equations.
All these tests can be performed on a quantitative level by studying the
relative errors of the FRG results for $n$-point correlation functions
compared to the exact results from the functional integral. We provide results
for various precision tests in Sec. V.
For the remainder of this section, we will proceed as follows: First, we will
derive the untruncated exact RG flow equation for the zero-dimensional $O(N)$
model. Afterwards, we introduce a commonly used truncation scheme – the FRG
Taylor (vertex) expansion.
#### III.3.1 The exact RG flow equation of the zero-dimensional $O(N)$ model
For the special case of the zero-dimensional $O(N)$ model, the most general
ansatz for the effective average action is given by a scale-dependent
effective potential
$\displaystyle\bar{\Gamma}_{t}[\vec{\varphi}\,]=U(t,\vec{\varphi}\,)=U(t,\varrho)\,.$
(79)
This ansatz can describe arbitrary $O(N)$ invariant effective actions and can
include terms at all orders of $\varrho=\tfrac{1}{2}\,\vec{\varphi}^{\,2}$.
However, it is in principle not restricted to analytic (Taylor-expandable)
functions. Truncations of $\bar{\Gamma}_{t}[\vec{\varphi}\,]$ are not
required.
In order to arrive at the exact flow equation for $U(t,\vec{\varphi}\,)$ one
has to perform the following steps:
1.
Insert the function (79) into the FRG equation (40).
2.
Invert the full field-dependent two-point function
$\displaystyle\big{(}\bar{\Gamma}^{(2)}_{t,\varphi\varphi}[\vec{\varphi}\,]+R_{t}\big{)}_{ij}\,.$
(80)
3.
Take the trace in field space.
4.
Remove the redundant $N-1$ field space directions in $\vec{\varphi}$.
For the last step, the RG flow equation can be evaluated on a constant background field configuration (here we adopt terminology from higher-dimensional FRG; the word “constant” is somewhat misleading in a QFT which cannot vary in space-time, but it is used anyhow), $\varphi_{1}=\ldots=\varphi_{N-1}=0$ and $\varphi_{N}=\sigma$. Without loss of generality, the $\varphi_{N}$ direction was singled out as the direction of the radial $\sigma$ mode and the constant background field.
The inversion of the full field-dependent two-point function (80) can be
performed analytically [28, 132, 103, 104] by introducing the complete,
orthogonal, and idempotent field space projection operators
$\displaystyle\mathcal{P}^{\perp}_{ij}(\vec{\varphi}\,)\equiv\delta_{ij}-\frac{\varphi_{i}\,\varphi_{j}}{\vec{\varphi}^{\,2}}\,,$
$\displaystyle\mathcal{P}^{\parallel}_{ij}(\vec{\varphi}\,)\equiv\frac{\varphi_{i}\,\varphi_{j}}{\vec{\varphi}^{\,2}}\,.$
(81)
The projection operators are used to decompose the full field-dependent two-
point function (80) into components perpendicular ($\perp$) and parallel
($\parallel$) to $\vec{\varphi}$, which can be inverted separately. The
regulator $R_{t}$ is matrix-valued and diagonal in field space,
$\displaystyle(R_{t})_{ij}=\delta_{ij}\,r(t)\,,$ (82)
where $r(t)$ again is denoted as regulator shape function, cf. Eqs. (7) and
(8). One finds that
$\displaystyle\big{(}\bar{\Gamma}^{(2)}_{t,\varphi\varphi}[\vec{\varphi}\,]+R_{t}\big{)}^{-1}_{ij}=\mathcal{P}^{\parallel}_{ij}(\vec{\varphi}\,)\,\frac{1}{r(t)+\partial_{\varrho}U(t,\varrho)+2\varrho\,\partial_{\varrho}^{2}U(t,\varrho)}+\mathcal{P}^{\perp}_{ij}(\vec{\varphi}\,)\,\frac{1}{r(t)+\partial_{\varrho}U(t,\varrho)}\,,$ (83)
which can be inserted directly into the FRG equation (40).
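As a brief aside (our addition), the reason the projectors allow a blockwise inversion can be stated in one line:

```latex
% Our addition: with the projectors (81) being complete, idempotent, and
% mutually orthogonal, any matrix A = a P^perp + b P^par inverts blockwise:
\begin{align}
  A_{ij} = a\,\mathcal{P}^{\perp}_{ij}(\vec{\varphi}\,) + b\,\mathcal{P}^{\parallel}_{ij}(\vec{\varphi}\,)
  \quad\Longrightarrow\quad
  (A^{-1})_{ij} = \tfrac{1}{a}\,\mathcal{P}^{\perp}_{ij}(\vec{\varphi}\,) + \tfrac{1}{b}\,\mathcal{P}^{\parallel}_{ij}(\vec{\varphi}\,)\,,
\end{align}
% since A A^{-1} = P^perp + P^par = 1 by completeness.
```

With $a=r(t)+\partial_{\varrho}U(t,\varrho)$ and $b=r(t)+\partial_{\varrho}U(t,\varrho)+2\varrho\,\partial_{\varrho}^{2}U(t,\varrho)$ this reproduces Eq. (83).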
After taking the field space trace and evaluating the resulting equation on
the constant background field configuration, we arrive at the RG flow equation
for the effective potential
$\displaystyle\partial_{t}U(t,\sigma)=\big{[}\tfrac{1}{2}\,\partial_{t}r(t)\big{]}\,\frac{N-1}{r(t)+\frac{1}{\sigma}\,\partial_{\sigma}U(t,\sigma)}+\big{[}\tfrac{1}{2}\,\partial_{t}r(t)\big{]}\,\frac{1}{r(t)+\partial_{\sigma}^{2}U(t,\sigma)}$ (84)
$\displaystyle=\,$ [pion-loop diagram] $+$ [$\sigma$-loop diagram]$\,.$ (87)
This RG flow equation is an exact non-linear PDE for the effective potential
$U(t,\sigma)$, which is of first order in RG time $t$ and of first and second
order in the field space direction $\sigma$. It also includes an explicit
$\sigma$ dependence. A detailed analysis of the structure of this PDE,
including its relation to conservation equations and fluid dynamics is
provided in Sub.Sec. IV.1.
For now, we conclude this section with a few comments on the widely used
diagrammatic notation of the PDE and its relation to the RG flow equation (38)
from Sec. II: Similar to Feynman diagrams which are commonly used in
perturbation theory, the propagators (the term “propagator” is of course misleading for a QFT in a single point, where “propagation” in the true sense of the word is not possible; nevertheless, we again adopt the notation from higher-dimensional QFT and statistical mechanics) are depicted as lines: blue-jagged lines for the $\sigma$ propagator,
$\displaystyle\frac{1}{r(t)+\partial_{\sigma}^{2}U(t,\sigma)}\,,$ (88)
and red-dashed lines for the $\pi$ propagators
$\displaystyle\frac{1}{r(t)+\frac{1}{\sigma}\,\partial_{\sigma}U(t,\sigma)}\,.$
(89)
The crossed circle ($\otimes$) stands for the regulator insertion
$\frac{1}{2}\,\partial_{t}r(t)$. (The factor $\frac{1}{2}$ is often not
included in the regulator insertion, but written in front of the diagrams.
See, e.g., Refs. [24, 16, 133, 134, 102, 104, 44] for different notations.)
The factor $N-1$ is the multiplicity of the pion-loop contribution (indicated
by the vector over the pion field in the diagram, cf. Eq. (84)) and
corresponds to the number of pions in the system.
For the special case $N=1$, the $O(N)$ model reduces to the $O(1)$ model. Such a theory of a single scalar field in zero dimensions was used in the introductory Sec. II on FRG. In this limit, the pion contributions to the
flow equation vanish. As already stated in Sec. II, we find that for non-zero
pion contributions ($N>1$) the flow equation for $U(t,\sigma)$ acquires a term
that is of first order in the spatial derivative,
$\partial_{\sigma}U(t,\sigma)$, which no longer has diffusive character, but
corresponds to advection in field space. This is further discussed in Sec. IV.
#### III.3.2 FRG Taylor (vertex) expansion of the $O(N)$ model
The FRG Taylor (vertex) expansion is based on the assumption that the
effective (average) action $\bar{\Gamma}_{t}[\vec{\varphi}\,]$ can be expanded
in a series in field space with RG-time dependent expansion coefficients [87].
In zero dimensions, this effectively reduces to an expansion of the effective
potential $U(t,\varrho)$, cf. Eq. (79). Consequently, it is also equivalent to
a Taylor expansion of the effective potential, which is well-known from
higher-dimensional truncation schemes [135, 87, 133, 16, 15, 136, 14, 18, 20,
19, 122, 104]. Throughout this work, we will therefore use the term “FRG
Taylor expansion” to refer to this approach. The RG-scale dependent expansion
coefficients $\bar{\Gamma}^{(2n)}(t)$ correspond directly to the scale-
dependent vertex functions
$\bar{\Gamma}^{(2n)}_{t,\varphi_{i}\ldots\varphi_{j}}$ of the QFT. For $d>0$,
these expansion coefficients are usually position or momentum dependent
whereas in $d=0$ the coefficients depend only on the RG time $t$.
The assumption of expandability and thus differentiability significantly
restricts the form of the effective action
$\bar{\Gamma}_{t}[\vec{\varphi}\,]=U(t,\vec{\varphi}\,)$, cf. Refs. [130,
131]. In fact, it neither allows for the formation of any non-analytic
behavior throughout the RG flow nor for any non-analytic initial conditions.
However, non-analytic initial conditions are in general not forbidden, as we will see in
Sec. V. Furthermore, it is well known that non-analyticities can (and in some
models have to) form in the effective potential during the RG flow [118, 26,
27, 137]. Given these caveats, an expansion in vertices of a given theory always has to be applied with care. Still, this expansion scheme is
used in certain applications.
In our work, we restrict our analysis of the precision of this truncation
scheme to RG flows with rather specific properties: We study initial
conditions that are analytic. Furthermore, we know, cf. App. B, that the IR
effective action is smooth for the special case of zero dimensions, which is a
necessary condition for the convergence of a (Taylor) series. It should,
however, be noted that smoothness is only a necessary but not a sufficient
condition for the convergence of a Taylor series (a textbook example of a smooth function whose Taylor series around $x=0$ does not converge to it is $f(x)=\mathrm{e}^{-1/x}$ for $x>0$ and $f(x)=0$ otherwise). Only analyticity would formally imply the convergence of a Taylor series at all $\vec{\varphi}$. Additionally, we argue
that for sufficiently small $N$, the diffusive contributions to the RG flow
are important, which smear out any possible cusps. In summary, we expect that
for these extremely special scenarios it is unlikely that non-analyticities
will form and disappear again during the RG flow. Nevertheless, we do not know
if a small finite number of expansion coefficients is always enough to reach a
reliable approximation of $\bar{\Gamma}_{t}[\vec{\varphi}\,]$ during the RG
flow or if it is always necessary to flow the effective potential as a PDE
without additional assumptions. This (rather limited) applicability of the FRG
Taylor expansion to analytic initial conditions will be tested by calculating
the relative errors of 1PI $n$-point vertex functions in the FRG Taylor
expansion in comparison with the exact results and the results from the flows
of a full field-dependent $U(t,\sigma)$ in Sec. V.
The FRG Taylor expansion of the zero-dimensional $O(N)$ model is given by the
following ansatz [44, 52, 51, 56],
$\displaystyle\bar{\Gamma}_{t}[\vec{\varphi}\,]=\sum_{n=0}^{m}\frac{\bar{\Gamma}^{(2n)}(t)}{(2n-1)!!}\,\frac{1}{n!}\,\bigg{(}\frac{\vec{\varphi}^{\,2}}{2}\bigg{)}^{n}=\bar{\Gamma}^{(0)}(t)+\bar{\Gamma}^{(2)}(t)\,\frac{\vec{\varphi}^{\,2}}{2}+\frac{\bar{\Gamma}^{(4)}(t)}{3}\,\frac{1}{2}\,\bigg{(}\frac{\vec{\varphi}^{\,2}}{2}\bigg{)}^{2}+\ldots\,,$ (90)
where $\bar{\Gamma}^{(2n)}(t)$ are $t$ dependent expansion coefficients and
$m$ is the truncation order. The factors of $(2n-1)!!$ and $n!$ were
introduced in order to have
$\bar{\Gamma}^{(2n)}(t_{\mathrm{IR}})=\Gamma^{(2n)}_{\varphi_{i}\ldots\varphi_{i}}$
in the IR, where $\Gamma^{(2n)}_{\varphi_{i}\ldots\varphi_{i}}$ are the 1PI
$2n$-point vertex functions in the IR, with all indices being identical (no
summation over $i$ here), see also Eqs. (73) – (75). In order to arrive at the
corresponding flow equations, we proceed in a similar manner as before in
Sub.Sub.Sec. III.3.1: We insert our ansatz (90) into the full field-dependent
two-point function (80) and use the field space projection operators (81) to
invert the latter. We obtain
$\displaystyle\big{(}\bar{\Gamma}^{(2)}_{t,\varphi\varphi}[\vec{\varphi}\,]+R_{t}\big{)}^{-1}_{ij}=\mathcal{P}^{\perp}_{ij}(\vec{\varphi}\,)\,G^{\pi\pi}_{t}(\vec{\varphi}\,)+\mathcal{P}^{\parallel}_{ij}(\vec{\varphi}\,)\,G^{\sigma\sigma}_{t}(\vec{\varphi}\,)\,,$ (91)
where
$\displaystyle G^{\pi\pi}_{t}(\vec{\varphi}\,)\equiv\,$
$\displaystyle\Bigg{[}r(t)+\sum_{n=1}^{m+1}\frac{\bar{\Gamma}^{(2n)}(t)}{(2n-1)!!}\,\frac{1}{(n-1)!}\,\bigg{(}\frac{\vec{\varphi}^{\,2}}{2}\bigg{)}^{n-1}\Bigg{]}^{-1}\,,$
$\displaystyle G^{\sigma\sigma}_{t}(\vec{\varphi}\,)\equiv\,$
$\displaystyle\Bigg{[}r(t)+\sum_{n=1}^{m+1}\frac{\bar{\Gamma}^{(2n)}(t)}{(2n-3)!!}\,\frac{1}{(n-1)!}\,\bigg{(}\frac{\vec{\varphi}^{\,2}}{2}\bigg{)}^{n-1}\Bigg{]}^{-1}\,,$
are the field-dependent propagators of the pion and sigma field in the Taylor
expansion.
This result can be inserted into the FRG equation (40), where the trace in
field space is evaluated to
$\displaystyle\partial_{t}\,\bar{\Gamma}_{t}[\vec{\varphi}\,]=\big{[}\tfrac{1}{2}\,\partial_{t}r(t)\big{]}\,\big{[}(N-1)\,G^{\pi\pi}_{t}(\vec{\varphi}\,)+G^{\sigma\sigma}_{t}(\vec{\varphi}\,)\big{]}\,.$ (92)
Finally, we insert the ansatz (90) for the effective average action into the
left-hand side of this equation and expand the propagators
$G^{\circ\circ}_{t}(\vec{\varphi}\,)$ up to order $n=m$ in the expansion
coefficients $\bar{\Gamma}^{(2n)}(t)$. This can also be achieved by
successively taking derivatives with respect to the fields and setting
$\vec{\varphi}=0$ afterwards. By comparing the expansion coefficients on the
left- and right-hand sides of the equation, one arrives at a coupled set of
ordinary differential equations for the $\bar{\Gamma}^{(2n)}(t)$ with $0\leq
n\leq m$. The flow equation for $\bar{\Gamma}^{(2m)}(t)$ contains
$\bar{\Gamma}^{(2m+2)}(t)$ on the right-hand side. We truncate the system by
neglecting the flow of $\bar{\Gamma}^{(2m+2)}(t)$, taking
$\partial_{t}\bar{\Gamma}^{(2m+2)}(t)=0$.
For an automatization of the derivation of the flow equations (the system of
ODEs) via computer algebra routines such as Mathematica [138], it is advisable
to formulate the FRG Taylor expansion in the invariant
${\varrho=\tfrac{1}{2}\,\vec{\varphi}^{\,2}}$,
$\displaystyle\bar{\Gamma}_{t}[\varrho]=\,$
$\displaystyle\sum_{n=0}^{m}\frac{\bar{\Gamma}^{(2n)}(t)}{(2n-1)!!}\,\frac{\varrho^{n}}{n!}\,.\vphantom{\bigg{(}\bigg{)}}$
(93)
Equation (92) becomes
$\displaystyle\partial_{t}\,\bar{\Gamma}_{t}[\varrho]=\,$
$\displaystyle\big{[}\tfrac{1}{2}\,\partial_{t}r(t)\big{]}\,\big{[}(N-1)\,G^{\pi\pi}_{t}(\varrho)+G^{\sigma\sigma}_{t}(\varrho)\big{]}\,,$
(94)
while
$\displaystyle G^{\pi\pi}_{t}(\varrho)\equiv\,$
$\displaystyle\bigg{[}r(t)+\sum_{n=1}^{m+1}\frac{\bar{\Gamma}^{(2n)}(t)}{(2n-1)!!}\,\frac{\varrho^{n-1}}{(n-1)!}\bigg{]}^{-1}\,,$
(95) $\displaystyle G^{\sigma\sigma}_{t}(\varrho)\equiv\,$
$\displaystyle\bigg{[}r(t)+\sum_{n=1}^{m+1}\frac{\bar{\Gamma}^{(2n)}(t)}{(2n-3)!!}\,\frac{\varrho^{n-1}}{(n-1)!}\bigg{]}^{-1}\,.$
(96)
The coupled set of ODEs for the expansion coefficients
$\bar{\Gamma}^{(2n)}(t)$ is given by [44, 52] (we do not indicate the $t$ dependences of the $\bar{\Gamma}^{(2n)}$ in the following for reasons of readability),
$\displaystyle\partial_{t}\bar{\Gamma}^{(0)}=\,$
$\displaystyle\frac{N}{2}\,\,\frac{\partial_{t}r(t)}{r(t)+\bar{\Gamma}^{(2)}}\,,\vphantom{\Bigg{(}\Bigg{)}}$
(97) $\displaystyle\partial_{t}\bar{\Gamma}^{(2)}=\,$
$\displaystyle-\frac{N+2}{6}\,\frac{\partial_{t}r(t)}{\big{[}r(t)+\bar{\Gamma}^{(2)}\big{]}^{2}}\,\bar{\Gamma}^{(4)}\,,\vphantom{\Bigg{(}\Bigg{)}}$
$\displaystyle\partial_{t}\bar{\Gamma}^{(4)}=\,$
$\displaystyle\frac{N+8}{3}\,\frac{\partial_{t}r(t)}{\big{[}r(t)+\bar{\Gamma}^{(2)}\big{]}^{3}}\,\big{[}\bar{\Gamma}^{(4)}\big{]}^{2}-\vphantom{\Bigg{(}\Bigg{)}}$
$\displaystyle-\frac{N+4}{10}\,\frac{\partial_{t}r(t)}{\big{[}r(t)+\bar{\Gamma}^{(2)}\big{]}^{2}}\,\bar{\Gamma}^{(6)}\,,\vphantom{\Bigg{(}\Bigg{)}}$
$\displaystyle\vdots\,\,$
Recall that
$\displaystyle\partial_{t}\bar{\Gamma}^{(n)}=0\,$ (98)
for $n\geq 2m+2$ in this approximation.
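As a cross-check of such computer-algebra derivations (our addition; the text itself refers to Mathematica [138], while this sketch uses SymPy, and the truncation order and symbol names are our own choices), the following reproduces the structure of the system (97) by inserting the ansatz (93) into Eq. (94) and matching powers of $\varrho$:

```python
# Our sketch (SymPy instead of Mathematica): derive the ODE system (97) by
# inserting the ansatz (93) into the flow equation (94) with the propagators
# (95), (96) and matching Taylor coefficients in rho.
import sympy as sp

t, rho = sp.symbols("t rho")
N = sp.Symbol("N")
r = sp.Function("r")(t)                  # regulator shape function r(t)
m = 2                                    # truncation order (keeps Gamma^(0..4))

# scale-dependent coefficients Gamma^(2n)(t); Gamma^(2m+2) enters the
# propagators, but its own flow is set to zero, cf. Eq. (98)
G = [sp.Function(f"Gamma{2 * n}")(t) for n in range(m + 2)]

def prop(shift):
    # propagators (95) [shift = -1, pion] and (96) [shift = -3, sigma]
    return 1 / (r + sum(G[n] / (sp.factorial2(2 * n + shift)
                                * sp.factorial(n - 1)) * rho ** (n - 1)
                        for n in range(1, m + 2)))

# right-hand side of the flow equation (94)
rhs = sp.Rational(1, 2) * sp.diff(r, t) * ((N - 1) * prop(-1) + prop(-3))

# left-hand side: t-derivative of the ansatz (93)
lhs = sum(sp.diff(G[n], t) / (sp.factorial2(2 * n - 1) * sp.factorial(n))
          * rho ** n for n in range(m + 1))

# match Taylor coefficients in rho -> flow equations for Gamma^(0), ..., Gamma^(2m)
for n in range(m + 1):
    eq = sp.Eq(lhs.diff(rho, n).subs(rho, 0),
               sp.simplify(rhs.diff(rho, n).subs(rho, 0)))
    sp.pprint(eq)
```

The highest coefficient $\bar{\Gamma}^{(2m+2)}$ appears only on the right-hand sides; closing the system with Eq. (98) renders it finite.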
## IV FRG flow equations and (numerical) fluid dynamics
In this section, we discuss the formulation of the RG flow equation as an
advection-diffusion equation, as well as its interpretation in the context of
fluid dynamics, including its numerical implementation.
The fluid-dynamical formulation of the exact RG flow equation for the
effective potential $U(t,\varrho)$ of models of $O(N)$ type (in the large-$N$
limit [28]) is also presented in a recent and a parallel publication by some
of us and collaborators [26, 27]. It was shown that the RG flow equation can
be recast in the form of a pure advection equation (a hyperbolic conservation
law) for the derivative of the effective potential
$u(t,\varrho)=\partial_{\varrho}U(t,\varrho)$, where $u(t,\varrho)$ serves as
the conserved quantity (the fluid), the RG time $t$ as a temporal coordinate
and $\varrho$ as a spatial coordinate. In this section, we generalize this
result and discuss various consequences for the numerical implementation and
interpretation of FRG flow equations. (Generalizations of the fluid-dynamical picture of FRG flow equations from the large-$N$ results of Ref. [26] to systems with finite $N$, as well as the inclusion of fermions, were already presented by us in various talks, see, e.g., Refs. [139, 140], and discussed in a master thesis [141] co-supervised by some of us, as well as in a PhD thesis by one of us [142]; see also Ref. [27]. Furthermore, Ref. [118] also presents a formulation of the flow equation as a conservation law and a discussion of shock waves based on the characteristics, however without really elaborating on a fluid-dynamical interpretation and its consequences.)
### IV.1 Conservative form of FRG flow equations – advection-diffusion
equations
The formulation of FRG flow equations in terms of a fluid-dynamical language
has two major advantages:
1.
It provides an intuitive explanation for all kinds of phenomena observed in
FRG flow equations, e.g., the flattening of the effective potential for small
$\sigma$ in the IR, which occurs in conjunction with a non-differentiable
point of the effective potential at the ground state. Such non-analytic
behavior cannot be handled and systematically analyzed by commonly used
numerical schemes such as the Taylor expansion or related discretization
schemes for the effective potential, since the latter strongly rely on
differentiability. However, these phenomena have a direct impact on the
physics, for instance on the occurrence of phase transitions [118, 22, 130,
131, 26, 83, 143, 27], and therefore must be resolved and analyzed accurately
also on a numerical level.
2.
The formulation of the FRG flow equations in terms of fluid-dynamical concepts
provides access to the highly developed and extremely powerful toolbox of numerical fluid dynamics, which finds applications in fields ranging from the natural sciences and engineering all the way to economics.
How to adopt these methods to flow equations arising in the FRG framework is
discussed in detail in Sub.Secs. IV.2 and IV.3.
Interestingly, the idea of interpreting RG flow equations as “flow” equations
in the true sense of the word is not new and explains the term “RG flow
equations”: A discussion of analogies between “RG flow” and hydro-dynamical
flow can be found in widely used textbooks [39, 144] and is discussed via the
example of field-independent coupling constants in the context of perturbative
renormalization. Furthermore, the RG flow was already associated with gradient
flow and dissipative processes in Refs. [145, 146, 147, 148, 149, 148, 74],
even though a stringent fluid-dynamical interpretation and formulation was not
presented.
It is therefore also not accidental that the (F)RG community has chosen the
term “RG time” for the logarithm of the RG scale $k$ over the UV cutoff
$\Lambda$, $\tilde{t}=\ln\big{(}\tfrac{k}{\Lambda}\big{)}$. In contrast, we
find that $t=-\tilde{t}\in[0,\,\infty)$ can be naturally identified as a
temporal coordinate in the fluid-dynamical picture of (F)RG flow equations,
see below.
It was also discussed, see, e.g., Refs. [74, 76, 54], that – on the level of
the scale-dependent generating functionals $\mathcal{Z}_{t}[J]$ or $\mathcal{W}_{t}[J]$ – the corresponding PDEs can be considered as (non-linear) functional diffusion equations for the source fields $J$ (cf. Eqs.
(11) and (19) for the respective zero-dimensional versions). Sometimes Eq.
(11) is even explicitly denoted as a (non-linear) heat equation, which is also
a specific fluid-dynamical problem [77, 78, 79, 150].
Considering the obvious analogies between RG flow equations arising in the FRG
framework and fluid-dynamical equations, it is remarkable that the FRG
equation (40) was so far not more systematically investigated and compared to
equations well-known from fluid dynamics. For the related RG flow equations
the situation is slightly different and the mathematical analysis on the level
of PDEs was more systematic, see, e.g., Refs. [151, 152, 149, 148, 74].
Furthermore, certain phenomena well-known in fluid dynamics, such as
discontinuities (shock waves), rarefaction waves, or cusps, occur in the
solution of such PDEs. These require a careful numerical treatment to resolve
them, but their occurrence was very often ignored by numerical approaches to
solve the FRG equations by erroneously assuming that the solution
$U(t,\sigma)$ is continuous and differentiable. Still, there are some
publications which use numerical schemes to systematically capture non-
analytic behavior or discuss the limitations of numerical methods in the
presence of these effects, see, e.g., Refs. [137, 118].
In order to make the fluid-dynamical analogy more apparent, we present a
formulation of the RG flow equation (84) for the effective potential
$U(t,\sigma)$ in terms of a conservation law. Furthermore, we discuss its
fluid-dynamical interpretation on a qualitative level and classify the various
contributions to the PDE (the RG flow) in the fluid-dynamical picture. This
sets the stage for an adequate qualitative interpretation of the RG flow
equation and possible numerical approaches, which are presented in the next
two Sub.Secs. IV.2 and IV.3.
#### IV.1.1 The conservative form
Starting from the RG flow equation (84) of the effective potential
$U(t,\sigma)$, we have several options to recast the flow equation in a
conservative form, two of which are:
1.
Following Refs. [118, 26, 27, 139, 142, 141], we can take an overall
derivative of Eq. (84) with respect to the $O(N)$ invariant
$\varrho=\tfrac{1}{2}\,\sigma^{2}$ and express the resulting equation in terms
of $\varrho$ and $u(t,\varrho)\equiv\partial_{\varrho}U(t,\varrho)$,
$\displaystyle\partial_{t}u(t,\varrho)=\frac{\mathrm{d}}{\mathrm{d}\varrho}\,\bigg{(}\big{[}\tfrac{1}{2}\,\partial_{t}r(t)\big{]}\,\frac{N-1}{r(t)+u(t,\varrho)}+\big{[}\tfrac{1}{2}\,\partial_{t}r(t)\big{]}\,\frac{1}{r(t)+u(t,\varrho)+2\varrho\,\partial_{\varrho}u(t,\varrho)}\bigg{)}\,.$
(99)
2.
Another option is to formulate the problem on the level of the background
field $\sigma$ itself [140] by alternatively defining
$u(t,\sigma)\equiv\partial_{\sigma}U(t,\sigma)$. Taking an overall derivative
of Eq. (84) with respect to $\sigma$ yields,
$\displaystyle\partial_{t}u(t,\sigma)=\frac{\mathrm{d}}{\mathrm{d}\sigma}\,\bigg{(}\big{[}\tfrac{1}{2}\,\partial_{t}r(t)\big{]}\,\frac{N-1}{r(t)+\frac{1}{\sigma}\,u(t,\sigma)}+\big{[}\tfrac{1}{2}\,\partial_{t}r(t)\big{]}\,\frac{1}{r(t)+\partial_{\sigma}u(t,\sigma)}\bigg{)}\,.$
(100)
In both cases one ends up with a one-dimensional conservation law, where $u$
plays the role of the conserved quantity (the fluid), $t$ can be identified
with the time variable, and $\varrho$ or $\sigma$ is identified as the spatial variable.
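To illustrate what such a conservation law looks like on the computer (our addition; this deliberately naive scheme is not the one advocated in Sub.Secs. IV.2 and IV.3, and all parameter values are assumptions made for the demo), the following sketch integrates only the pion (advective) flux of Eq. (99) with a first-order Lax-Friedrichs method:

```python
# Our illustration (assumptions throughout): Lax-Friedrichs update for the
# advective pion part of Eq. (99), du/dt + dF/drho = 0 with
# F(t, u) = -[dr/dt / 2] * (N - 1) / (r + u).
import numpy as np

Nf = 8                               # number of field components N (assumption)
Lam = 1e3                            # UV cutoff Lambda (assumption)
t_IR = 5.0                           # IR cutoff time (assumption)
n_cells, rho_max = 500, 10.0
drho = rho_max / n_cells
rho = (np.arange(n_cells) + 0.5) * drho

r = lambda t: Lam * np.exp(-t)       # exponential regulator shape (assumption)
dr = lambda t: -Lam * np.exp(-t)     # dr/dt
F = lambda t, u: -0.5 * dr(t) * (Nf - 1) / (r(t) + u)

u = -0.2 + rho                       # u(0, rho) = dU/drho of a weakly
                                     # symmetry-broken initial potential (assumption)
t = 0.0
while t < t_IR:
    speed = 0.5 * abs(dr(t)) * (Nf - 1) / (r(t) + u) ** 2   # |dF/du|
    dt = min(0.4 * drho / max(speed.max(), 1e-12), 0.05)    # CFL + cap for r(t)
    ue = np.concatenate(([u[0]], u, [u[-1]]))               # ghost cells
    Fe = F(t, ue)
    u = 0.5 * (ue[2:] + ue[:-2]) - dt / (2.0 * drho) * (Fe[2:] - Fe[:-2])
    t += dt
# u now approximates dU(t_IR, rho)/drho on the grid "rho"
```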
The conservative form of the RG flow equation (84) for the effective potential
$U$ on the level of its derivative $u$ is not restricted to zero space-time
dimensions or models with purely bosonic field content, see also Refs. [118,
26, 27, 139, 140, 142, 141]. As a matter of fact, this formulation generalizes
to arbitrary dimensions and also to models which include fermionic degrees of
freedom on the level of the local potential approximation (LPA). In
particular, the flow equation for the effective potential for models of
strong-interaction matter, such as the quark-meson, the Nambu-Jona-Lasinio,
and the Gross-Neveu(-Yukawa) model can be formulated in this
fashion. (Meanwhile, we and our collaborators [141, 27, 153, 32] have also been working on the conservative formulation of (F)RG flow equations in higher dimensions in more advanced truncations, as well as on conservative formulations of (F)RG flow equations for zero-dimensional systems involving fermions (Grassmann numbers) [34].)
In this context, it is also worthwhile to note that Eq. (100) can be derived
not only by taking a derivative of the FRG flow equation for the effective
# SN 2023zaw: the low-energy explosion of an ultra-stripped star, with non-
radioactive heating
T. Moore Astrophysics Research Centre, School of Mathematics and Physics,
Queen’s University Belfast, BT7 1NN, UK European Southern Observatory, Alonso
de Córdova 3107, Casilla 19, Santiago, Chile J. H. Gillanders Astrophysics
sub-Department, Department of Physics, University of Oxford, Keble Road,
Oxford, OX1 3RH, UK M. Nicholl Astrophysics Research Centre, School of
Mathematics and Physics, Queen’s University Belfast, BT7 1NN, UK M. E. Huber
Institute for Astronomy, University of Hawai’i, 2680 Woodlawn Drive, Honolulu,
HI 96822, USA S. J. Smartt Astrophysics sub-Department, Department of
Physics, University of Oxford, Keble Road, Oxford, OX1 3RH, UK Astrophysics
Research Centre, School of Mathematics and Physics, Queen’s University
Belfast, BT7 1NN, UK S. Srivastav Astrophysics sub-Department, Department of
Physics, University of Oxford, Keble Road, Oxford, OX1 3RH, UK H. F. Stevance
Astrophysics sub-Department, Department of Physics, University of Oxford,
Keble Road, Oxford, OX1 3RH, UK Astrophysics Research Centre, School of
Mathematics and Physics, Queen’s University Belfast, BT7 1NN, UK Department
of Physics, The University of Auckland, Private Bag 92019, Auckland, New
Zealand T.-W. Chen Graduate Institute of Astronomy, National Central
University, 300 Jhongda Road, 32001 Jhongli, Taiwan K. C. Chambers Institute
for Astronomy, University of Hawai’i, 2680 Woodlawn Drive, Honolulu, HI 96822,
USA J. P. Anderson European Southern Observatory, Alonso de Córdova 3107,
Casilla 19, Santiago, Chile Millennium Institute of Astrophysics (MAS),
Nuncio Monseñor Sótero Sanz 100, Providencia, Santiago, Chile M. D. Fulton
Astrophysics Research Centre, School of Mathematics and Physics, Queen’s
University Belfast, BT7 1NN, UK S. R. Oates Department of Physics, Lancaster
University, Lancaster, LA1 4YB, UK C. Angus Astrophysics Research Centre,
School of Mathematics and Physics, Queen’s University Belfast, BT7 1NN, UK G.
Pignata Instituto de Alta Investigación, Universidad de Tarapacá, Arica,
Casilla 7D, Chile N. Erasmus South African Astronomical Observatory, PO Box
9, Observatory 7935, Cape Town, South Africa H. Gao Institute for Astronomy,
University of Hawai’i, 2680 Woodlawn Drive, Honolulu, HI 96822, USA J. Herman
Institute for Astronomy, University of Hawai’i, 2680 Woodlawn Drive, Honolulu,
HI 96822, USA C.-C. Lin Institute for Astronomy, University of Hawai’i, 2680
Woodlawn Drive, Honolulu, HI 96822, USA T. Lowe Institute for Astronomy,
University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 E. A. Magnier
Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu,
HI 96822 P. Minguez Institute for Astronomy, University of Hawaii, 2680
Woodlawn Drive, Honolulu, HI 96822 C.-C. Ngeow Graduate Institute of
Astronomy, National Central University, 300 Jhongda Road, 32001 Jhongli,
Taiwan X. Sheng Astrophysics Research Centre, School of Mathematics and
Physics, Queen’s University Belfast, BT7 1NN, UK S. A. Sim Astrophysics
Research Centre, School of Mathematics and Physics, Queen’s University
Belfast, BT7 1NN, UK K. W. Smith Astrophysics Research Centre, School of
Mathematics and Physics, Queen’s University Belfast, BT7 1NN, UK R. Wainscoat
Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu,
HI 96822 S. Yang Henan Academy of Sciences, Zhengzhou 450046, Henan, China
D. R. Young Astrophysics Research Centre, School of Mathematics and Physics,
Queen’s University Belfast, BT7 1NN, UK K.-J. Zeng Graduate Institute of
Astronomy, National Central University, 300 Jhongda Road, 32001 Jhongli,
Taiwan
###### Abstract
Most stripped envelope supernova progenitors are formed through binary
interaction, losing hydrogen and/or helium from their outer layers. An
emerging class of supernovae with the highest degree of envelope-stripping are
thought to be the product of stripping by a NS companion. However, relatively
few examples are known and the outcomes of such systems can be diverse and are
poorly understood at present. Here, we present spectroscopic observations and
high cadence multi-band photometry of SN 2023zaw, a low ejecta mass and
rapidly evolving supernova. SN 2023zaw was discovered in a nearby spiral
galaxy at D = 39.7 Mpc, with significant Milky Way extinction, $E(B-V)=0.21$,
and significant (but uncertain) host extinction. Bayesian evidence comparison
reveals that nickel is not the only power source and an additional energy
source is required to explain our observations. Our models suggest an ejecta
mass of $M_{\rm ej}\sim 0.07$ M⊙ and a synthesised nickel mass of $M_{\rm Ni}\sim 0.007$ M⊙ are required to explain the explosion. However, additional heating from a magnetar or interaction with circumstellar material is required to power the early light curve.
Transient sources (1851) – Supernovae (1668) – Core-collapse supernovae (304) – Type Ib supernovae (1729) – Circumstellar matter (241)
Facilities: Gemini:Gillett, Swift, PS1, PO:1.2m, Liverpool:2m. Software: Astropy (Astropy Collaboration et al., 2013, 2018, 2022), Numpy (Harris et al., 2020), Matplotlib (Hunter, 2007), MOSFiT (Guillochon et al., 2018), Hoki (Stevance et al., 2020), PSF (Nicholl et al., 2023), DRAGONS (Labrie et al., 2023).
## 1 Introduction
Modern sky surveys such as the Asteroid Terrestrial-impact Last Alert System
(ATLAS; Tonry et al. 2018), Zwicky Transient Facility (ZTF; Bellm et al.
2019), and the Panoramic Survey Telescope and Rapid Response System (Pan-
STARRS; Chambers et al. 2016) are revealing the extremes of core-collapse
supernovae (SNe) and optical transients (Inserra, 2019; Modjaz et al., 2019).
A small number of SNe, often belonging to the hydrogen poor Types Ib and Ic,
show rapid evolution and brighten and fade on timescales much faster than
typical classes of SNe (Poznanski et al., 2010; Drout et al., 2013; De et al.,
2018; Prentice et al., 2018, 2020; Chen et al., 2020; Yan et al., 2023; Ho et
al., 2023). Generally, a small ejecta mass is invoked to explain the rapid
evolution of fast transients (Moriya et al., 2017). A small ejecta mass
reduces the photon diffusion timescale allowing the light curve to peak and
begin to decline rapidly. Low ejecta mass interpretations require a physically
compatible powering source. Invoking radioactive 56Ni in fast evolving SNe
frequently produces unphysical ejecta mass to nickel mass ratios (Prentice et
al., 2018; Gillanders et al., 2020; Prentice et al., 2020; Chen et al., 2020).
Additional mechanisms have been suggested to boost the luminosity of these
supernovae, e.g., circumstellar material interaction, energy injection from a magnetar, and shock cooling emission (Yao et al., 2020; Sawada et al., 2022).
In this paper, we present spectrophotometric follow-up observations of the
rapidly evolving SN 2023zaw (while preparing this manuscript, another pre-print on the same source appeared on the arXiv; Das et al., 2024). We
determine a time of maximum light of MJD $60287.1\pm 0.2$ from fitting a
polynomial to the ZTF $g$-band. Classified as a Type Ib, SN 2023zaw rises
rapidly to maximum light ($<$ 4 days) followed by a similarly fast decline,
comparable to the fastest fading Type I SN 2019bkc (Chen et al., 2020;
Prentice et al., 2020).
## 2 Discovery and Follow-up
SN 2023zaw was discovered on 2023 December 7 05:34 UT (MJD 60285.23) by ZTF
(Bellm et al., 2019) and registered on the Transient Name Server at 11:50 UT
(Sollerman, 2023) with the discovery mag $g=19.34$. We independently detected
SN 2023zaw in ATLAS data (Smith et al., 2020) a few hours later at 08:00 UT as
the field visibility moved from California to Hawaii, at mag $o=18.74$. The
transient is offset 8.97” N, 19.15” W from UGC 03048, a spiral galaxy with a
redshift from the NASA Extragalactic Database (NED) of 0.010150 $\pm$ 0.000026
(Springob et al., 2005). From NED the median redshift-independent distance to
UGC 03048 is 39.7 Mpc, based on the Tully-Fisher method (Tully et al., 2013).
The SN is offset 8.97” North and 19.15” West, or 4.1 kpc, from the galaxy
center. SN 2023zaw is located on the edge of one of the two prominent arms of
UGC 03048. The Milky Way extinction along this line of sight is AV = 0.6555
mag (Schlafly & Finkbeiner, 2011). Na I D lines in the classification spectrum
suggest additional host extinction is significant (Poznanski et al., 2012).
Four AstroNotes regarding SN 2023zaw were released on the Transient Name
Server (https://www.wis-tns.org/object/2023zaw) at the time of discovery,
commenting on its early evolution. Karambelkar et al. (2023a) highlighted the
discovery and fast fading nature of SN 2023zaw, along with an observation of
the transient with NOT/ALFOSC. The Kinder project (Lee et al., 2023) reported
a color-dependent fade using observations performed on the 40-cm SLT at Lulin
Observatory, Taiwan. In Fulton et al. (2023), we reported the combined ATLAS
and ZTF data and highlighted that this source was flagged by our ‘Fastfinder’
filter and annotator on the Lasair broker (https://lasair-ztf.lsst.ac.uk)
(Smith et al., 2019) to find fast evolving objects in the ZTF public alert
stream. Both Karambelkar et al. (2023a) and Fulton et al. (2023) identified SN
2023zaw as a fast-fading, sub-luminous and red transient. Spectroscopic
observations with Keck (Karambelkar et al., 2023b) reported an apparent
similarity with the candidate ‘.Ia’ SN 2010X (Kasliwal et al., 2010). Finally,
Gillanders et al. (2023) classified the object as a Type Ib SN based on
observations performed with Gemini-N/GMOS, and this spectrum was immediately
made public on the TNS.
### 2.1 Photometry
Photometry for SN 2023zaw (internal name ATLAS23wuw) was obtained from the ATLAS forced photometry server (Shingles et al., 2021) and binned by day. The ATLAS (Tonry et al., 2018) system is an all-sky survey for potentially dangerous near-Earth objects. ATLAS data are processed using the ATLAS Science
Server (Smith et al., 2020) to search for stationary transients. We obtained
measurements in the $g$ and $r$-bands using the Lasair broker (Smith et al.,
2019) and public ZTF stream data (https://lasair-ztf.lsst.ac.uk/objects/ZTF23absbqun).
We triggered follow-up observations with the 1.8m Pan-STARRS1 on the Haleakala
mountain, Hawaii (Chambers et al., 2016). The Pan-STARRS1 telescope has a 7
deg$^{2}$ field of view and features a 1.4 gigapixel camera. Observations in the $grizy_{\rm P1}$ bands were taken with a daily cadence between MJD 60291 and 60295.
An additional $riz_{\rm P1}$ epoch was taken on MJD 60302, all further Pan-
STARRS observations were performed using the $iz_{\rm P1}$ bands. Optical
imaging was triggered with the 2.0-m Liverpool Telescope (LT; Steele et al.,
2004) using IO:O in $riz$ bands under program PL23B26 (PI: M. Fulton).
Measurements were made by PSF fitting using Source-Extractor (Bertin &
Arnouts, 1996) without host-galaxy subtraction. We observed SN 2023zaw with
the 0.4m SLT telescope as a part of the Kinder project (Chen et al., 2021) and
measured PSF $griz$-band photometry. Three epochs of photometric observations
were performed with the GMOS-N instrument at the Gemini-North 8.1-m telescope,
under program ID GN-2023B-Q-125 (PI: M. Huber). These were obtained at MJDs
60329.2 ($riz$-band), 60339.3 ($riz$-band) and 60341.2 ($ri$-band). These
observations were bias-subtracted and flat-field corrected using standard
recipes in DRAGONS (Labrie et al., 2023, 2023). We also present three epochs
of $r$-band photometry derived from the acquisition images obtained prior to
our spectroscopic observations with GMOS-N (see Section 2.2 for details).
Aperture photometry was performed using PSF (Nicholl et al., 2023) with a
small optimised aperture, an encircled energy correction, and local background
subtraction.
The Ultra-Violet and Optical Telescope (UVOT; Roming et al., 2005) onboard the
Neil Gehrels Swift Observatory (Swift; Gehrels et al., 2004) satellite
observed SN 2023zaw on MJD 60293 and MJD 60296. A single $uvm2$ exposure was performed on MJD 60291, while observations in the $u$, $b$, $v$, $uvw1$, $uvm2$, and $uvw2$ bands were taken on MJD 60293 and MJD 60296. The images at
each epoch were co-added, and the count rates obtained from the stacked images
using the Swift tool uvotsource. To extract the source counts, we used a
source aperture of $5\arcsec$ radius and an aperture of $20\arcsec$ radius for
the background. The source count rates were converted to magnitudes using the
UVOT photometric zero points (Poole et al., 2008; Breeveld et al., 2011). All
Swift observations are non-detections of the transient.
The Milky Way extinction-corrected light curve of SN 2023zaw is presented in
Figure 1. All photometry will be provided in the online version of this paper
and as a machine readable table.
Figure 1: Multicolor light curves with corrections for Milky Way foreground extinction and time dilation ($z=0.010150$) applied. Each telescope is shown with a different marker, and unfilled markers indicate upper limits. We exclude Swift/UVOT non-detections for visual clarity. Additionally, we show the expected decline rate of a 56Co tail (0.98 mag / 100 days) (Woosley et al., 1989).
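The decline rate quoted in the caption can be verified directly (our addition): for fully trapped 56Co decay the luminosity falls as $\mathrm{e}^{-t/\tau_{\rm Co}}$ with $\tau_{\rm Co}\simeq 111.3$ days, which translates into the quoted slope.

```python
# Our check of the quoted 56Co tail slope (fully trapped decay assumed).
import math

tau_Co = 111.3                                 # 56Co e-folding lifetime [days]
slope = 2.5 * math.log10(math.e) / tau_Co      # mag per day
print(f"{100 * slope:.2f} mag / 100 days")     # -> 0.98
```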
### 2.2 Spectroscopy
We observed SN 2023zaw at three different phases with the Gemini-North/GMOS-N
instrument under program ID GN-2023B-Q-125 (PI: M. Huber). Our three
observations commenced on MJDs 60291.3, 60295.3 and 60313.4 (corresponding to
phases from maximum light of $\approx+4.2$, +8.2 and +26.0 days,
respectively). All observations were performed using the R400 grating,
sampling the $\approx 4200-9100$ Å wavelength range at a spectral resolution
of R $\sim 2000$.
We reduced all three epochs of Gemini observations using the DRAGONS pipeline
(Labrie et al., 2023, 2023) following standard recipes, and the spectra were
all flux-calibrated against the same standard star. The contribution of the
host galaxy was estimated and subtracted, and each reduced, co-added spectrum
agrees well with the background-subtracted Pan-STARRS photometry obtained at
the same epoch. All spectra in this work will be made publicly available on
the WISeREP repository (Yaron & Gal-Yam, 2012).
## 3 Analysis
### 3.1 Host Galaxy and Milky Way Foreground Extinction
There is a strong and narrow absorption line in the +4.2 and +8.2 day GMOS-N
spectra, consistent with Na i D absorption at the redshift of UGC 03048. The
GMOS-N spectral resolution does not allow the D1 and D2 components to be
separately measured. After normalising the spectrum, we fit a single Gaussian
to the blended absorption line, with a centre $\lambda_{c}=5953.94$ Å
($z=0.0104$), a FWHM width $=11.4$ Å, and an equivalent width, ${\rm
EW}=2.10\pm 0.22$ Å.
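A generic version of this fit can be sketched as follows (our addition; we fit synthetic data generated from the quoted best-fit values rather than the real GMOS-N spectrum, and use the identity ${\rm EW}={\rm depth}\times\sigma\sqrt{2\pi}$ for a Gaussian dip):

```python
# Our generic sketch of the single-Gaussian fit; synthetic data are built from
# the quoted values (centre 5953.94 A, FWHM 11.4 A, EW 2.10 A).
import numpy as np
from scipy.optimize import curve_fit

def gauss_abs(lam, depth, lam_c, sigma):
    """Continuum-normalised spectrum: 1.0 minus a Gaussian absorption dip."""
    return 1.0 - depth * np.exp(-0.5 * ((lam - lam_c) / sigma) ** 2)

sigma_true = 11.4 / 2.3548                                 # FWHM -> sigma
depth_true = 2.10 / (sigma_true * np.sqrt(2.0 * np.pi))    # EW = depth*sigma*sqrt(2*pi)
wave = np.linspace(5900.0, 6010.0, 400)
rng = np.random.default_rng(0)
flux = gauss_abs(wave, depth_true, 5953.94, sigma_true) + rng.normal(0.0, 0.01, wave.size)

popt, _ = curve_fit(gauss_abs, wave, flux, p0=[0.3, 5950.0, 5.0])
depth, lam_c, sigma = popt
EW = depth * sigma * np.sqrt(2.0 * np.pi)                  # equivalent width [A]
print(f"lam_c = {lam_c:.2f} A, FWHM = {2.3548 * sigma:.1f} A, EW = {EW:.2f} A")
```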
Measurements of the equivalent width of the Na i doublet have been shown to be
correlated with the line-of-sight extinction (Poznanski et al., 2012), and
this method has often been applied to extragalactic transients. While there is
a reasonably linear relation between line strength and $E(B-V)$, up to a total
${\rm EW}_{\rm(D_{1}+D_{2})}\simeq 0.7$ Å, the relationship then saturates. No
quantitative and unique measurement of $E(B-V)$ appears possible beyond this,
but we can say that an ${\rm EW}=2.10\pm 0.22$ Å requires at least $E(B-V)_{\rm host}\gtrsim 0.5$. The Milky Way foreground extinction is also
significant along this line of sight, with $E(B-V)_{\rm MW}=0.2$ (Schlafly &
Finkbeiner, 2011). Throughout the rest of this manuscript, we apply a total
extinction of $E(B-V)=0.7$, noting that a somewhat higher value cannot be
discounted.
### 3.2 Light Curves
The light curves of SN 2023zaw are shown in Figure 1. Shortly after discovery,
SN 2023zaw reaches a maximum brightness of r = 17.6 mag and g = 18 mag. The
rapid rising phase of the light curve is not well observed, but the time from
explosion to $g$-band peak is constrained to be less than four days by the
ATLAS $o$-band non-detections (at depths corresponding to $M_{o}$ of $-14.5$ and $-14.8$
mag) at 2.7 and 1.8 days pre-discovery, respectively. The photometric
evolution after maximum light is similarly rapid. Due to the short rise time
we only observe the rising portion of the light curve in the $go$-bands. SN
2023zaw evolves extremely fast when compared to the representative Type Ib SN
2007Y (Stritzinger et al., 2009) in Figure 2. Initially SN 2023zaw fades $\sim
3$ magnitudes in $\sim 10\,$days in the $r$-band, with SN 2007Y fading by less
than a magnitude during the same interval. After this early rapid fade, SN
2023zaw is comparable to SN 2019bkc which is the fastest known Type I SN. SN
2023zaw settles to an apparent radioactive tail which we observe in the
$riz$-bands.
Figure 2: Light curves of SN 2023zaw in the $griz$-bands compared to the fast supernovae SN 2019bkc (Prentice et al., 2020; Chen et al., 2020), SN 2019dge (Yao et al., 2020), and the representative Type Ib SNe 2007Y (Stritzinger et al., 2009) and 2007C (Drout et al., 2011; Stritzinger et al., 2018). Each light curve is in the rest frame and has been corrected for Milky Way and host galaxy extinction (Drout et al., 2011). These supernovae were chosen to represent the population of ultra-stripped SNe (USSNe) and typical Type Ib SNe.
Figure 3: Fit results to the complete light curves using MOSFiT (Guillochon et al., 2018) for our physical models compared to photometry, where downward arrows are upper limits, model realisations are represented as solid lines, and band colors match those in Figure 1. Panel (a) shows nickel-only model realisations. Panels (c), (d), and (e) show the nickel + exppow model, the magnetar + nickel model, and the circumstellar material interaction + nickel model, respectively. Panel (b) shows a fit to only the late-time light curve to estimate a synthesised nickel mass, and panel (f) shows our shock-cooling emission + nickel model; the shock-cooling component follows Piro et al. (2021). The Bayesian evidence for each model is calculated for model comparison. When quoting the evidence for each model, the name of each model in the MOSFiT code is used to ensure the reproducibility of these results. Model evidence is not included in panel (b) as the model is only evaluated against the light curve tail. The MOSFiT input file will be made available as data behind the figure with the online article, and all models are or will be made publicly available.
### 3.3 Light Curve Modeling
We use the Modular Open Source Fitter for Transients, MOSFiT (https://github.com/guillochon/MOSFiT), a Python-based modular code that evaluates a user-defined physical model against the observed light curves of transients. The code is described in detail by Guillochon et al.
(2018). We use the dynesty (Speagle, 2020) nested sampling package option in
MOSFiT to evaluate a series of different models. For all models we assume an optical opacity ($\kappa$) of $0.1\,\rm cm^{2}\,g^{-1}$ and a gamma-ray opacity ($\kappa_{\gamma}$) of $0.027\,\rm cm^{2}\,g^{-1}$. We apply a constraint on the host galaxy hydrogen column density of $n_{\rm H,host}>3.43\times 10^{21}\,\rm cm^{-2}$ for consistency with our adopted minimum host extinction (Section 3.1), assuming $R_{V}=3.1$ and $n_{\rm H}/A_{V}=2.21\times 10^{21}\,\rm cm^{-2}\,mag^{-1}$ (Güver & Özel, 2009). All models considered in this work are shown in Figure 3 together with the observed light curves of SN 2023zaw.
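For transparency, the column-density prior follows from the adopted minimum host extinction by simple arithmetic (our addition):

```python
# Our consistency check: E(B-V)_host >= 0.5 with R_V = 3.1 gives
# A_V >= 1.55 mag; with n_H / A_V = 2.21e21 cm^-2 mag^-1 (Guever & Oezel 2009)
# this reproduces the adopted n_H,host > 3.43e21 cm^-2.
R_V, EBV_host_min = 3.1, 0.5
A_V_min = R_V * EBV_host_min            # 1.55 mag
n_H_min = 2.21e21 * A_V_min             # ~3.43e21 cm^-2
print(f"n_H,host > {n_H_min:.3g} cm^-2")
```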
The rising phase of SN 2023zaw is poorly sampled compared to the peak and tail
of the light curve, which makes achieving a good fit at early time difficult.
Following Wheeler et al. (2015), we use the observed rise time $t_{\rm r}$ and
photospheric velocity $v_{\rm ph}$ to estimate an ejecta mass $M_{\rm ej}$
using the equation $M_{\rm ej}\ \approx 1/2\cdot\beta\cdot c/\kappa\cdot
v_{\rm ph}t_{\rm r}^{2}$ (Arnett, 1982), where $\beta=13.8$ is a constant, $c$
is the speed of light, $t_{\rm r}$ is the rise time, and $\kappa$ is the
opacity. To approximate the rise time we adopt the midpoint between the last
non-detection and the first detection as the explosion epoch (MJD 60284.3, 0.9
days before detection) and the $g$-band peak for the time of maximum; this gives a rise-time estimate of $t_{\rm r}\sim 2.75$ days. For the
photospheric velocity $v_{\rm ph}$ we use the velocity derived from spectral
modelling in Section 4. Following this analysis we estimate an ejected mass
$M_{\rm ej}\sim 0.07$ M⊙. Considering a $\pm 1$ day uncertainty on the rise time, our mass estimate could vary by $+0.06$ or $-0.04$ M⊙. To capture this constraint in our priors, we adopt a Gaussian prior with $\mu=0.06$ and $\sigma=0.07$ on the ejecta mass $M_{\rm ej}$ parameter.
#### 3.3.1 Nickel Powered Explosion
We attempt to fit the multiband light curves with 56Ni decay, using the built-
in default MOSFiT model (Nadyozhin, 1994; Guillochon et al., 2018). Despite a
reasonable agreement with $i$-band observations, overall this model does not
provide a satisfactory fit to the light curves of SN 2023zaw. During the SN
rise the model conflicts with the deep ATLAS $o$-band limits, and fails to
reproduce the maximum luminosity in the $gcr$-bands. The model clearly
diverges from the late-time $rz$-band tail and only adequately matches the
decline between MJD 60300 and MJD 60320. We find the most probable model
ejects M${}_{\rm ej}=0.06^{+0.01}_{-0.01}$ M⊙ of material with a large 56Ni
fraction $f_{\rm Ni}=0.90^{+0.06}_{-0.11}$ and a reasonable SN-like ejecta
velocity $v_{\rm ej}=6400^{+300}_{-300}$ $\rm{km}\,s^{-1}$. This implies M${}_{\rm Ni}\sim 0.05$ M⊙ was synthesised in the explosion, which is significantly less than in a typical Type Ib SN (Anderson, 2019; Rodríguez et al., 2023), yet comparable to the total ejected mass. This presents a major
problem for this model, contradicting our observed spectra. An ejecta of
mostly 56Ni and its decay products should be dominated by iron-group
absorption in the blue; this is not observed in the spectra of SN 2023zaw.
Additionally, a nickel yield of $M_{\rm Ni}=0.05$ M⊙ would be difficult to rationalise in the context of the ultra-stripped SN iPTF 14gqr (Sawada et al., 2022).
Clearly, a 56Ni-only model does not reproduce our observations, requires an
unrealistic nickel-fraction, and can only adequately explain the light curve
tail. Therefore, we exclude the scenario where nickel-decay is the only
mechanism powering SN 2023zaw and seek an explanation with another mechanism
in addition to nickel decay.
The light curve tail begins approximately ten days after maximum. The decline rate slows to a power-law decline, an evolution similar to the $i$-band tail of the fast-fading SN 2019bkc (Chen et al., 2020; Prentice et al., 2020), where this was attributed to radioactivity.
Assuming the heating at late times is powered by decay of 56Co to 56Fe, we can
estimate a synthesised nickel mass for SN 2023zaw. We apply the same model as
before but with a restriction to fit only the late time $riz$-band data (MJD
$>$ 60300). The sampler only evaluates against the late-time photometry, but
we provide a prior constraining the explosion epoch between the last ATLAS
non-detection (MJD 60282.51) and the first detection (MJD 60285.23). The
tail requires that $M_{\rm Ni}\ \sim 0.006$ M⊙ was synthesised and ejected to
power this phase of the light curve. Seeking to verify our method, we
reanalyse the tail of SN 2019bkc and find a nickel mass $M_{\rm Ni}\sim 0.005$
M⊙, which is compatible with the $M_{\rm Ni}=0.001$–$0.01$ M⊙ estimated
by Chen et al. (2020).
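For context, these tail fits rest on the standard 56Ni → 56Co → 56Fe heating rate; a minimal sketch (assuming complete trapping of the decay products, following the Nadyozhin 1994 coefficients) is:

```python
import numpy as np

def ni_co_luminosity(t_days, m_ni):
    """56Ni -> 56Co -> 56Fe heating rate in erg/s (Nadyozhin 1994),
    assuming complete trapping of gamma-rays and positrons.
    t_days: time since explosion (days); m_ni: nickel mass in Msun."""
    t = np.asarray(t_days, dtype=float)
    return m_ni * (6.45e43 * np.exp(-t / 8.8) + 1.45e43 * np.exp(-t / 111.3))

# Tail luminosity scale for the ~0.006 Msun of 56Ni inferred from the tail fit
print(ni_co_luminosity([15.0, 25.0, 35.0], m_ni=0.006))
```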
Motivated by the poor agreement of the nickel decay model with the initial peak
but its apparent agreement with the light curve tail, we consider a generalised
early-time heating source plus a nickel decay tail. Using MOSFiT we adopt a
physics-agnostic analytical prescription, named exppow, for an additional
energy source that rises exponentially and declines from its maximum (reached
at $t_{\rm peak}$) following a power law. It is described by $L=L_{\rm
scale}\cdot(1-e^{-t/t_{\rm peak}})^{\alpha}\cdot(t/t_{\rm peak})^{-\beta}$, where
$L_{\rm scale}$, $\alpha$, $\beta$, and $t_{\rm peak}$ are free parameters.
Combining the default nickel and exppow models we created a new MOSFiT model
called exppowni.
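For reference, a minimal sketch of the exppow luminosity term follows; the parameter values shown are illustrative placeholders, not our fitted posteriors:

```python
import numpy as np

def exppow_luminosity(t, l_scale, t_peak, alpha, beta):
    """Generalised heating term of the exppowni model:
    an exponential rise times a power-law decline (erg/s)."""
    t = np.asarray(t, dtype=float)
    return l_scale * (1.0 - np.exp(-t / t_peak)) ** alpha * (t / t_peak) ** (-beta)

# Illustrative parameter values only; fitted values are summarised below.
t_days = np.linspace(0.1, 20.0, 200)
lum = exppow_luminosity(t_days, l_scale=1e42, t_peak=2.0, alpha=2.0, beta=2.0)
```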
We fit the exppowni model as before using the same constraints on opacity and
host galaxy extinction. The model realisations are shown in Figure 3. We see
good agreement with observations and consistency with the ATLAS non-
detections, the observed color, and late-time luminosities. We estimate a
nickel mass of $M_{\rm Ni}\sim 0.006$ M⊙, which is in agreement with the fit
for only the light curve tail. It is clear that an additional luminosity
source is required to simultaneously match the fast rise, the peak luminosity
and the observed 56Co tail. The parameterised nature of the exppowni model
provides insight into the timescale of the additional luminosity source, which
reaches its maximum ($t_{\rm peak}$) between 1.6 and 2.6 days after explosion
and dominates the luminosity during this phase with a scale luminosity
$L_{\rm scale}\sim 10^{42}$ erg s$^{-1}$.
#### 3.3.2 Circumstellar Material Interaction + Nickel
We evaluate interaction with nearby circumstellar material (CSM) as a possible
extra energy source for SN 2023zaw. CSM interaction can produce unusual and
rapidly evolving transients (e.g. Moore et al., 2023; Kuncarayakti et al.,
2023; Nagao et al., 2023; Perley et al., 2022). We use the CSMNI model in
MOSFiT (Villar et al., 2017; Jiang et al., 2020; Moore et al., 2023) which
combines the luminosity of 56Ni decay and heating from shock propagation
following an ejecta-CSM collision. The CSM interaction physics is implemented
following the treatment of Chatzopoulos et al. (2013). We use the adapted
model setup used by Moore et al. (2023) and Srivastav et al. (2023) where the
onset of interaction is delayed until the ejected material reaches an inner
CSM radius.
We evaluate the csmni model against our observations and show the CSM model
realisations in Figure 3. This model fits the light curve peak very well and
matches the overall light curve evolution in all bands, including close
agreement to the late-time tail. The derived model parameters are: $M_{\rm
ej}=0.069^{+0.005}_{-0.004}$ M⊙, $f_{\rm Ni}=0.12^{+0.02}_{-0.02}$, $r_{\rm
csm}=63^{+13}_{-12}$ AU, $M_{\rm csm}=0.23^{+0.06}_{-0.05}$ M⊙, where, $M_{\rm
csm}$ is the mass of the CSM material and $r_{\rm csm}$ is the CSM radius.
This model implies $M_{\rm Ni}\sim 0.008$ M⊙, which is compatible with our
estimate from just the light curve tail. The posterior of the CSM density
profile reveals that the model is insensitive to the structure parameter $s$
(i.e., a wind-like versus disc-like CSM), and the CSM mass and radius show a
strong degeneracy. The derived kinetic energy for this model is low, at
$E_{k}\sim 10^{49}$ erg. We note the +61 day spectrum of SN 2023zaw (Das et
al., 2024) shows narrow helium lines, likely from interaction. These results
suggest late-stage mass loss before explosion, consistent with Wu & Fuller (2022).
#### 3.3.3 Magnetar + Nickel
We investigate the feasibility of magnetar heating as the additional energy
source using the Magnetar + Nickel (‘magni’) model described by Nicholl et al.
(2017); Gomez et al. (2022), which combines the luminosity of magnetar spin-
down (Kasen & Bildsten, 2010; Woosley, 2010) and radioactive heating. This
model is significantly better than radioactive heating alone, and matches the
peak but diverges from the $riz$-band tail. Our derived Magnetar + Ni model
has the following parameters, ejecta mass $M_{\rm ej}=0.055^{+0.006}_{-0.005}$
M⊙, nickel fraction $f_{\rm Ni}=0.08^{+0.06}_{-0.09}$, and $M_{\rm Ni}\sim
0.004$ M⊙. We find magnetar parameters of $P_{\rm spin}=6.3^{+1.6}_{-1.1}$ ms
and $B_{\perp}=0.19^{+0.17}_{-0.20}\times 10^{14}$ G. The B-field required
from our models is consistent with the population of superluminous supernovae.
However, the spin period $P_{\rm spin}$ is towards the longer end of the
1–6 ms range (e.g. Kashiyama et al., 2016; Nicholl et al., 2017). Given the low
ejecta mass, the region of the ejected material being heated by the central
engine should be visible. Therefore, we would expect the W-shaped O II
absorption lines, the characteristic signature of central engines in Type I
superluminous supernovae (Mazzali et al., 2016; Quimby et al., 2018), to be
present in the spectrum; unfortunately, this region of the SN 2023zaw spectrum
was not observed.
#### 3.3.4 Shock Cooling + Nickel
Finally, we consider the contribution of 56Ni and shock cooling emission of a
shocked stellar envelope following the analytical Piro et al. (2021) model. In
this scenario the progenitor star possesses an extended envelope which is
shocked by the expanding SN and produces observable shock cooling emission. We
have added this model to MOSFiT as the pironi model. We show our model
realisations in Figure 3. Our models require $f_{\rm
Ni}=0.01^{+0.03}_{-0.01}$, $M_{\rm ej}=0.04^{+0.03}_{-0.02}$ M⊙, $M_{\rm
Ni}\sim 0.0004$ M⊙, $M_{\rm env}\sim 0.3$ M⊙, and $R_{\rm
star}=10^{10.5^{+0.5}_{-0.4}}\,\rm cm$. The shock cooling model shows
relatively poor agreement with the overall light curve of SN 2023zaw. The
model is too luminous and contradicts the ATLAS non-detections and does not
well reproduce the observed tail.
We note that this model has excellent agreement with observations when we adopt
the host extinction estimate ($\rm A_{v,host}=1.12$ mag) made by Das et al.
(2024) but not for our adopted value (see Section 3.1). Evaluating the pironi
model with a lower value of host extinction, $\rm A_{v,host}=1.12$ mag, we
find $f_{\rm Ni}\sim 0.009$, an envelope mass $M_{\rm env}\sim 0.31$ M⊙, an
ejected mass $M_{\rm ej}\sim 0.03$ M⊙ and a stellar radius $R_{\rm star}\sim
10^{10.3}\,\rm cm$.
For both treatments of host extinction we find envelope masses close to or
smaller than 0.2 M⊙ which is compatible with the envelope masses of an ultra-
stripped progenitor (Tauris et al., 2015).
#### 3.3.5 Model Comparison
To compare the models we use the Bayesian model evidence (marginal likelihood)
scores for each model, which are returned by MOSFiT when using the nested
sampler Dynesty (Speagle, 2020). From the Bayesian evidence scores a Bayes
factor (BF) can be computed to compare two models. The BF comparing model $x$
and model $y$ is given by $B\equiv Z_{x}/Z_{y}$, where $Z_{i}$ is the Bayesian
evidence for model $i$. A Bayes factor $B>10$ indicates a strong preference,
and $B>100$ is considered definitive.
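For concreteness, a BF can be computed directly from the log-evidences that Dynesty reports; the sketch below uses placeholder numbers, not our fitted evidences:

```python
import numpy as np

# Dynesty reports the log-evidence ln(Z) for each fit; a Bayes factor is then
# B = Z_x / Z_y = exp(ln Z_x - ln Z_y). The values below are hypothetical.
ln_z_exppowni = -100.0   # hypothetical ln-evidence, exppowni model
ln_z_nickel = -148.4     # hypothetical ln-evidence, nickel-only model

log10_bf = (ln_z_exppowni - ln_z_nickel) / np.log(10.0)
print(f"log10(B) = {log10_bf:.1f}")  # log10(B) > 2, i.e. B > 100: definitive
```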
We find a BF of $Z_{\rm exppowni}/Z_{\rm nickel}\sim 10^{21}$, where $Z_{\rm
exppowni}$ and $Z_{\rm nickel}$ are the evidences for their respective models;
this strongly favours an additional powering source. This statistical test and
the poor agreement of 56Ni-only modelling with the light curves provide
evidence that SN 2023zaw requires an additional power source. Comparing our Ni
model and shock cooling model we find $Z_{\rm pironi}/Z_{\rm nickel}\sim
10^{1}$, so the shock cooling + Ni model is preferred over Ni alone. Comparing
between models in this way, our analysis favours the CSM + Ni model above all
others. When comparing the alternative models we calculate $Z_{\rm
csmni}/Z_{\rm magni}\sim 10^{10}$, favouring CSM interaction over the
spin-down of a newly born neutron star. However, we note that the MOSFiT CSMNI
model is the most complex and most flexible evaluated in this work, while
likely oversimplifying the physics involved. While the CSM + Ni model
reproduces the light curve tail luminosity better than any other model, the
blackbody spectrum assumed in MOSFiT may be unreliable at this phase as the
ejected material becomes transparent.
Through MOSFiT modelling we have shown that the total emission of SN 2023zaw
cannot be explained with radioactivity alone. We have found evidence that
suggests another energy source is required which peaks early in the evolution
of SN 2023zaw and dominates the emission at this phase. We disfavour shock
cooling emission as this additional source due to poor agreement with the
multi-colour photometry, in particular our ATLAS non-detections. A BF analysis
favours CSM + nickel with a CSM mass which is extreme but not beyond
theoretical predictions (Wu & Fuller, 2022). However, a magnetar + nickel
explanation for SN 2023zaw should not be ruled out. We consider both models as
viable explanations for SN 2023zaw.
### 3.4 Spectral Modelling and Analysis
We present post-peak spectra taken +4, +8 and +26 days after maximum light.
All spectra used in this work are shown in Figure 4. Our first spectrum taken
+4.2 days after maximum light shows well-developed absorption features and a
prominent Ca II NIR triplet. Weak Fe II features may exist around 4500–5000 Å,
and we find that a 5700 K blackbody produces a continuum in agreement with the
observed spectrum. As noted in Section 3.1, the spectra show a prominent Na I D
line blend.
The spectra show little evolution between the +4 day and +8 day observations
considering the rapidly evolving light curve. At +8 days the spectroscopic
features have more developed line profiles and a broad emission feature at
7100 Å is apparent. Our final observation 26 days after maximum light contains
little information about SN 2023zaw and we exclude it from further
quantitative analysis.
SN 2023zaw was compared to the '.Ia' SN candidate SN 2010X (Karambelkar et al.,
2023b), suggesting a .Ia origin for this SN. .Ia SNe are the theorised
explosion of a helium shell on the surface of a white dwarf (Shen & Bildsten,
2009; Shen et al., 2010). In Figure 5 we present a spectroscopic comparison to
He shell detonation models (Sim et al., 2012). Although the model provides
some agreement with the SED of the observed spectrum at +4.2 days, it predicts
emission where the observed spectrum shows a deep He I absorption feature at
$\sim$5700 Å. At +8.2 days the differences between the model and our
observations become clear: the He detonation models show strong emission peaks
for lines not present in the data. With such clear divergence from the He
detonation model we rule out a .Ia origin for SN 2023zaw, in agreement with
Das et al. (2024).
Figure 4: Optical spectroscopy of SN 2023zaw. The phases in days relative to
maximum light are indicated. These spectra have been corrected for telluric
absorption and Galactic line-of-sight extinction, and for the host galaxy
recessional velocity. The inset plot shows the blended Na I D lines in the
+4.2 and +8.2 day spectra. We also include spectra from SN 2019wxt (Agudo et
al., 2023a), SN 2007C (Modjaz et al., 2014), and SN 2019bkc (Chen et al., 2020;
Prentice et al., 2020).
Given the striking similarity to both the SEDs and spectral features of the SN
2019wxt spectra, here we undertake a similar process to that presented by
Agudo et al. (2023b) of using tardis to model the photospheric-phase spectra.
tardis (Kerzendorf & Sim, 2014) is a one-dimensional, time-independent, Monte
Carlo radiative transfer spectral synthesis code capable of simulating the
spectra of an array of different explosive transients. The code assumes a
completely opaque inner region of ejecta, beneath which all energy injection
into the system is assumed to be thermalised. This inner boundary is
surrounded by a number of optically thin shells that represent the line-
forming region of the ejecta (these shells encompass the computational domain
of the simulation). A (user-defined) number of photon quanta (dubbed
$r$-packets) are created at the inner boundary, and are assigned properties
randomly sampled from a single-temperature blackbody. The trajectories of these
$r$-packets are simulated as they traverse the computational domain, with any
interaction (either free $e^{-}$ scattering or bound–bound transitions)
simulated. The $r$-packets that escape the outer boundary of the simulation
are used to generate a synthetic spectrum, which can be compared (usually via
$\chi$-by-eye; see e.g., Stehle et al., 2005) to the observed spectrum, and
iterated upon to improve agreement.
While tardis is a time-independent code, it is possible to evolve the input
parameters to obtain a sequence of self-consistent models, as we do here for
the +4.2 and +8.2 day spectra of SN 2023zaw. These user-defined input
parameters include specifying the time since explosion, $t_{\rm exp}$ (which
we set to be 2 days pre-maximum), the inner and outer boundary of the
computational domain (defined in velocity-space, where $v_{\rm
inner}^{+4.2\,{\rm d}}=12500$ km s$^{-1}$, $v_{\rm inner}^{+8.2\,{\rm d}}=5000$ km
s$^{-1}$, and $v_{\rm outer}=20000$ km s$^{-1}$), the abundance and density of the
ejecta material – here we utilise a uniform abundance across the entire ejecta
and across both epochs (see Table 1), and we invoke an exponential profile,
where:
$\rho\left(v,t_{\rm exp}\right)=2\times 10^{-12}\times\exp\left[\frac{-v}{6000\ {\rm km\,s^{-1}}}\right]\times\left(\frac{2\ {\rm day}}{t_{\rm exp}}\right)^{3}\ {\rm g\,cm}^{-3}.$ (1)
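For reference, the profile of Eq. (1) can be written as a short helper; the example below evaluates it across the +8.2 d line-forming region, where $t_{\rm exp}=10.2$ days given the explosion epoch set 2 days pre-maximum:

```python
import numpy as np

def ejecta_density(v_kms, t_exp_days):
    """Exponential ejecta density profile of Eq. (1), in g/cm^3.
    v_kms: velocity in km/s; t_exp_days: time since explosion in days."""
    v = np.asarray(v_kms, dtype=float)
    return 2e-12 * np.exp(-v / 6000.0) * (2.0 / t_exp_days) ** 3

# Density across the +8.2 d line-forming region (5,000 to 20,000 km/s)
rho = ejecta_density(np.linspace(5000.0, 20000.0, 4), t_exp_days=10.2)
```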
We utilise the dilute-lte, nebular and scatter approximations for excitation,
ionisation and line treatment, respectively, as well as including the
recomb-nlte He treatment (as presented by Boyle et al., 2017), to better
capture NLTE excitation effects for He I.
Table 1: tardis model compositions.
Element | Mass fraction
---|---
He | 0.50
O | 0.30
Si | 0.20
Ca | $5\times 10^{-6}$
Figure 5: Best-fitting tardis models (red), compared to the +4.2 and +8.2 d
observed spectra of SN 2023zaw (black). The +4.2 d observed spectrum (and
associated model spectra) have been vertically offset for clarity (by $6\times
10^{-17}{\rm erg}\,{\rm s}^{-1}\,{\rm cm}^{-2}\,{\rm\AA}^{-1}$). Also plotted
are two model spectra from a helium shell detonation simulation (blue)
presented by Sim et al. (2012). These models have phases comparable to those
of the observed spectra ($\Delta t<0.13$ d), and have been re-scaled and
arbitrarily offset to roughly match the continua of the observed spectra.
We present our model fits in Figure 5. The observations possess a number of
prominent absorption features, located at $\sim 5700$, 6100, 6400, 6800, 7400
and 8200 Å. We find that we can reproduce the +4.2 d spectrum with a
relatively simple composition, made up of He, O, Si and Ca, where the 6100 Å
feature is produced by Si II, the 7400 Å feature by O I, the 8200 Å feature by
Ca II, and all others (i.e., 5700, 6400 and 6800 Å) by He I. We over-produce
the 5700 Å He absorption feature, and do not perfectly match the continuum
blueward of $\lesssim 5400$ Å, but overall the fit to the data is good.
We note here that the NLTE He treatment within tardis is a simple,
empirically-derived approximation designed to account for the effects of
recombination from He II $\rightarrow$ He I. As such, it is possible that
the estimated level populations within our tardis simulation have deviated
from the true level populations. This possible issue has been noted before,
and was proposed as the reason behind the disagreement between the relative
strengths of He features in the case of SN 2019wxt (Agudo et al., 2023b).
There, the authors manually altered the relative level populations of He I to
better match the observed relative strengths of the He I features. Here we opt
not to
explore such variations, as our focus is on constraining the elements that
dominate the composition in the line-forming region of the ejecta material of
SN 2023zaw.
Attempts to fit the observations in a more detailed manner than presented here
should be approached with extreme caution, given we have no reliable
constraint on the true level of extinction. As a result, the true continuum of
these observed spectra could be much bluer than what we present here (see
Section 3.1 for details on our extinction estimates), which would
significantly alter the agreement of our models with the data.
Although our model composition is quite rudimentary, it aligns with our SN Ib
classification, and not .Ia. Going beyond classification arguments, we are
able to constrain the composition to be $\sim 80$% He and O, and $\sim 20$% Si
(in the line-forming region). Our inner velocity estimates derived from this
modelling ($v_{\rm inner}^{+4.2\,{\rm d}}=12500$ km s-1 and $v_{\rm
inner}^{+8.2\,{\rm d}}=5000$ km s-1) indicates the photosphere is receding
quickly into the ejecta material, consistent with a small mass of ejected
material.
### 3.5 Volumetric Rates
Here we present a preliminary volumetric rate estimate for SN 2023zaw-like
rapidly evolving stripped-envelope SNe (SESNe). We use the methodology
described by Srivastav et al. (2022) for estimating rates of SNe Iax. We
consider rapidly evolving SESNe detected by the ATLAS survey that occurred
within a distance of 100 Mpc during a 5 year window spanning 2017 September 21
and 2022 September 20 (Srivastav et al., 2022). To estimate the recovery
efficiency of SN 2023zaw-like transients within 100 Mpc, we use the ATLAS
survey simulator (McBrien, 2021). We use Gaussian Processes interpolated ATLAS
$c$ and $o$-band light curves of SN 2023zaw, produced by interpolating the
light curves using the public extrabol (Thornton et al., 2023) code. These
were then injected 10,000 times at a range of times, sky locations and
redshift bins spanning up to D = 100 Mpc in the simulation. A simulated
transient was considered to be recovered as a detection if it produced a
minimum of 6 to 8 detections of $5\sigma$ (or greater) significance. Although
difference detections in the ATLAS data stream are flagged as candidate
transients if they produce 3 individual $5\sigma$ detections on any given
night, this criterion is more realistic since human scanners will be confident
about promoting candidates to the TNS if they have detections over at least
two distinct nights.
The volumetric rate is thus estimated using
$R=\frac{N}{\eta VT},$
where $T$ is the time duration of the mock survey, $N$ is the number of ultra-
stripped SNe detected within the considered time duration, $\eta$ represents
the recovery efficiency from the ATLAS survey simulator and $V$ is the volume
probed within 100 Mpc. We consider $N=3$, representing SN 2019bkc (Prentice et
al., 2020; Chen et al., 2020), SN 2019dge (Yao et al., 2020) and SN 2021agco
(Yan et al., 2023). The recovery efficiency obtained from the survey simulator
is $\eta\approx 0.06\pm 0.02$.
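A minimal sketch of this rate calculation is given below; the single-sided $1\sigma$ Poisson limits on $N=3$ are approximate values taken from the Gehrels (1986) tables:

```python
import numpy as np

N = 3                       # SN 2019bkc, SN 2019dge, SN 2021agco
ETA, ETA_ERR = 0.06, 0.02   # recovery efficiency from the ATLAS survey simulator
T = 5.0                     # mock survey duration in yr
V = (4.0 / 3.0) * np.pi * 100.0**3  # volume within 100 Mpc, in Mpc^3

rate = N / (ETA * V * T)
# Approximate single-sided 1-sigma Poisson limits on N = 3 (Gehrels 1986)
n_lo, n_hi = 1.37, 5.92
stat_lo = (N - n_lo) / (ETA * V * T)
stat_hi = (n_hi - N) / (ETA * V * T)
sys_err = rate * ETA_ERR / ETA
print(f"R = {rate:.1e} (+{stat_hi:.1e}/-{stat_lo:.1e} stat, +/-{sys_err:.1e} sys)")
```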
From the above, we estimate a rate of $R\approx(2.5^{+2.5}_{-1.4}\pm 0.9)\times
10^{-6}$ Mpc$^{-3}$ yr$^{-1}$ $h^{3}_{70}$ for rapidly evolving SN 2023zaw-like
SESNe, where $h_{70}=H_{0}/70$. The statistical uncertainty derives from $1\sigma$
Gaussian errors from single-sided upper and lower limits for Poisson
statistics (Gehrels, 1986) and the systematic uncertainty is based on the
error on the recovery efficiency $\eta$. The above rate estimate for SN
2023zaw-like SESNe accounts for $\sim 1-6\%$ of the CCSN rate and $\sim
5-20\%$ of the SESN rate computed by Frohmaier et al. (2021). We note here
that $\sim 15\%$ of transients in the 100 Mpc ATLAS sample do not have a
spectroscopic classification (Srivastav et al., 2022), and it is possible that
the representation of SN 2023zaw-like rapidly evolving SESNe is
disproportionately higher within the unclassified sample. Nonetheless, at a
few per cent of the CCSN rate, these transients clearly constitute a rare
class of stellar explosions.
### 3.6 Potential Evolutionary Route
Using the physical characteristics inferred from our analysis we can look for
potential progenitor systems in the Binary Population and Spectral Synthesis
data release (BPASSv2.2.2 Eldridge et al. 2017; Stanway & Eldridge 2018;
Stevance et al. 2020). Assuming a solar metallicity (Z=0.02) we look for
models with M${}_{\rm H}<0.01$ M⊙ and hydrogen mass fraction $X<0.001$
(Dessart et al., 2012). An infered low ejecta mass in Section 3.3 further
constrains our search to models with ejecta mass $<0.1$ M⊙. Given the low
kinetic energy and ejecta mass of SN 2023zaw we report below the results
obtained by assuming the lower supernova explosion energy ($10^{50}$ ergs). In
addition we apply a condition for explodability of the progenitor star,
commonly a threshold on the mass of the Oxygen Neon core ($>$1.38 M⊙) is used
to determine if a model is a candidate for core collapse. When this condition
is applied no models in BPASSv2.2.2 are found to match all of our conditions;
but relaxing the condition to an ONe core mass greater than 1.30 M⊙ (removing
one significant figure from the threshold), we find 10 models that fit our
criteria. When we include weights dependent on the Initial Mass Function as
well as the binary fractions and period distributions (our fiducial IMF –
Kroupa 2001 – and our binary fractions and period distributions are based on
Moe & Di Stefano 2017), this corresponds to 28 such systems per million solar
masses at solar metallicity.
The BPASS data are a grid of stellar evolution models and do not contain all
possible observable outcomes. However, naturally finding candidate progenitor
stars just at the threshold of explodability is interesting when considering
the observed rarity of SN 2023zaw-like explosions. The progenitor of SN
2023zaw is very likely a lower mass star: the initial masses of our 6 systems
on the cusp of explodability range from 7.5 to 9 M⊙, which due to the initial
mass function are some of the most common massive stars in the Universe. The
low rate of SN 2023zaw-like SNe and BPASS results are naturally reconciled if
potential progenitors of these SNe fail to explode most of the time, as their
cores do not reach the necessary physical conditions. The 10 models we find
with ONe core mass $>1.3$ M⊙ represent only 2.6 percent of the massive stars
(M${}_{\rm ZAMS}>7.5$ M⊙) that fit our stripping and ejecta mass criteria, and
it is unlikely all of these would explode. This is in agreement with the
calculated rates in Section 3.5 of a few percent of the CCSN rate.
## 4 Summary and Conclusions
In this section we summarize the properties of SN 2023zaw.
1.
SN 2023zaw shows a rapid rise ($<4$ rest-frame days) and an initial decline
from maximum light which settles onto a radioactive tail 10 days after peak.
Comparisons to USSNe and rapidly evolving supernovae show that SN 2023zaw is
among the fastest fading SNe yet discovered, comparable to SN 2019bkc (Chen et
al., 2020; Prentice et al., 2020). We consider radioactive nickel as the power
source for SN 2023zaw, finding that nickel alone cannot power both the peak
and the tail of the light curve, unlike SN 2019dge and SN 2019wxt (Yao et al.,
2020; Agudo et al., 2023a). An additional power source is required.
2.
We consider several additional powering mechanisms and use a Bayes factor
analysis to select a preferred model. This analysis favours interaction with a
CSM ($M_{\rm csm}=0.23^{+0.06}_{-0.05}$ M⊙, $r_{\rm csm}=63^{+13}_{-12}$ AU) to
boost the initial luminosity of SN 2023zaw. We note that spectroscopic
signatures of interaction were not observed in the spectra and suggest that
the CSM envelope was swept up by the photosphere before our first
spectroscopic observation.
3.
Through spectroscopic comparison we show SN 2023zaw is similar to type Ib SNe
and shows lines and line strengths similar to SN 2007C (Modjaz et al., 2014)
and SN 2019wxt (Agudo et al., 2023a). Monte Carlo radiative transfer modelling
with tardis shows SN 2023zaw has a composition dominated by He, O and Si. The
spectroscopic evolution is not compatible with He shell detonation models.
4.
A simulated ATLAS survey and an estimate of the spectroscopic completeness of
the ATLAS Volume Limited Survey ($D<100$ Mpc) yield a rate estimate of
$R\approx(2.5^{+2.5}_{-1.4}\pm 0.9)\times 10^{-6}$ Mpc$^{-3}$ yr$^{-1}$
$h^{3}_{70}$. SN 2023zaw-like transients could be as common as $\sim 1-6\%$ of
the CCSN rate. Searching for potential progenitor stars in BPASS models, we
propose that SN 2023zaw-like events are the result of lower mass progenitors
($M_{\rm ZAMS}=7.5$–$9$ M⊙) whose cores are at the threshold of explodability.
The low observed rate of these SNe is then a result of the fact that only a
few percent of these stars end with a core mass sufficient to result in core
collapse.
We have shown SN 2023zaw to be part of a small group of rapidly evolving SNe
with a low ejecta mass ($M\simeq 0.07$ M⊙) and estimated a total nickel mass
synthesised in the explosion of $M_{\rm Ni}\simeq 0.006$ M⊙. Furthermore,
through a BF analysis we find evidence in favour of an extra luminosity source
in addition to the radioactive decay of 56Ni. With our estimate of host galaxy
extinction and significant Milky Way extinction in the line of sight, we
cannot reproduce the observed light curve with shock cooling emission, as
favoured by Das et al. (2024), and instead favour interaction with a detached
CSM, or magnetar energy injection, to boost the luminosity of SN 2023zaw.
## Acknowledgments
SJS, SS, KWS and DRY acknowledge funding from STFC Grants ST/Y001605/1,
ST/X001253/1, ST/X006506/1 and ST/T000198/1. SJS acknowledges a Royal Society
Research Professorship. MN is supported by the European Research Council (ERC)
under the European Union’s Horizon 2020 research and innovation programme
(grant agreement No. 948381) and by UK Space Agency Grant No. ST/Y000692/1.
TWC acknowledges the Yushan Young Fellow Program by the Ministry of Education,
Taiwan for the financial support. SY acknowledges the funding from the
National Natural Science Foundation of China under Grant No. 12303046. HFS is
supported by the Eric and Wendy Schmidt A.I. in Science Fellowship. Pan-STARRS
is primarily funded to search for near-earth asteroids through NASA grants
NNX08AR22G and NNX14AM74G. The Pan-STARRS science products for transient
follow-up are made possible through the contributions of the University of
Hawaii Institute for Astronomy and Queen’s University Belfast. ATLAS is
primarily funded through NASA grants NN12AR55G, 80NSSC18K0284, and
80NSSC18K1575. The ATLAS science products are provided by the University of
Hawaii, Queen’s University Belfast, STScI, SAAO and Millennium Institute of
Astrophysics in Chile. We thank Lulin staff H.-Y. Hsiao, W.-J. Hou, C.-S. Lin,
H.-C. Lin, and J.-K. Guo for observations and data management. Based on
observations obtained at the international Gemini Observatory (under program
ID GN-2023B-Q-125), a program of NSF NOIRLab, which is managed by the
Association of Universities for Research in Astronomy (AURA) under a
cooperative agreement with the U.S. National Science Foundation on behalf of
the Gemini Observatory partnership: the U.S. National Science Foundation
(United States), National Research Council (Canada), Agencia Nacional de
Investigación y Desarrollo (Chile), Ministerio de Ciencia, Tecnología e
Innovación (Argentina), Ministério da Ciência, Tecnologia, Inovações e
Comunicações (Brazil), and Korea Astronomy and Space Science Institute
(Republic of Korea). This work was enabled by observations made from the
Gemini North telescope, located within the Maunakea Science Reserve and
adjacent to the summit of Maunakea. We are grateful for the privilege of
observing the Universe from a place that is unique in both its astronomical
quality and its cultural significance. Lasair is supported by the UKRI Science
and Technology Facilities Council and is a collaboration between the
University of Edinburgh (grant ST/N002512/1) and QUB (grant ST/N002520/1)
within the LSST:UK Science Consortium. ZTF is supported by National Science
Foundation grant AST-1440341 and a collaboration including Caltech, IPAC, the
Weizmann Institute for Science, the Oskar Klein Center at Stockholm
University, the University of Maryland, the University of Washington,
Deutsches Elektronen-Synchrotron and Humboldt University, Los Alamos National
Laboratories, the TANGO Consortium of Taiwan, the University of Wisconsin at
Milwaukee, and Lawrence Berkeley National Laboratories.
## Appendix A MOSFiT Model Posterior
Figure 6: Physical parameter posterior distributions of the circumstellar
material + nickel model. The important physical parameters are the nickel
fraction ($f_{\rm Ni}$), kinetic energy ($E_{\rm k}$), CSM mass ($M_{\rm
csm}$), ejecta mass ($M_{\rm ej}$), CSM radius ($R_{\rm 0}$) in units of AU,
and CSM density ($\rho$); $t_{\rm exp}$ is relative to the first photometric
detection (MJD 60285.23) of SN 2023zaw.
## References
* Agudo et al. (2023a) Agudo, I., Amati, L., An, T., et al. 2023a, A&A, 675, A201, doi: 10.1051/0004-6361/202244751
* Agudo et al. (2023b) —. 2023b, A&A, 675, A201, doi: 10.1051/0004-6361/202244751
* Anderson (2019) Anderson, J. P. 2019, A&A, 628, A7, doi: 10.1051/0004-6361/201935027
* Arnett (1982) Arnett, W. D. 1982, ApJ, 253, 785, doi: 10.1086/159681
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
* Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
* Astropy Collaboration et al. (2022) Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022, ApJ, 935, 167, doi: 10.3847/1538-4357/ac7c74
* Bellm et al. (2019) Bellm, E. C., Kulkarni, S. R., Graham, M. J., et al. 2019, PASP, 131, 018002, doi: 10.1088/1538-3873/aaecbe
* Bertin & Arnouts (1996) Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393, doi: 10.1051/aas:1996164
* Boyle et al. (2017) Boyle, A., Sim, S. A., Hachinger, S., & Kerzendorf, W. 2017, A&A, 599, A46, doi: 10.1051/0004-6361/201629712
* Breeveld et al. (2011) Breeveld, A. A., Landsman, W., Holland, S. T., et al. 2011, in American Institute of Physics Conference Series, Vol. 1358, American Institute of Physics Conference Series, ed. J. E. McEnery, J. L. Racusin, & N. Gehrels, 373–376, doi: 10.1063/1.3621807
* Chambers et al. (2016) Chambers, K. C., Magnier, E. A., Metcalfe, N., et al. 2016, ArXiv e-prints. https://arxiv.org/abs/1612.05560
* Chatzopoulos et al. (2013) Chatzopoulos, E., Wheeler, J. C., Vinko, J., Horvath, Z. L., & Nagy, A. 2013, ApJ, 773, 76, doi: 10.1088/0004-637X/773/1/76
* Chen et al. (2020) Chen, P., Dong, S., Stritzinger, M. D., et al. 2020, ApJ, 889, L6, doi: 10.3847/2041-8213/ab62a4
* Chen et al. (2021) Chen, T. W., Yang, S., Pan, Y. C., et al. 2021, Transient Name Server AstroNote, 92, 1
* Das et al. (2024) Das, K. K., Fremling, C., Kasliwal, M. M., et al. 2024, arXiv e-prints, arXiv:2403.08165, doi: 10.48550/arXiv.2403.08165
* De et al. (2018) De, K., Kasliwal, M. M., Ofek, E. O., et al. 2018, Science, 362, 201, doi: 10.1126/science.aas8693
* Dessart et al. (2012) Dessart, L., Hillier, D. J., Li, C., & Woosley, S. 2012, MNRAS, 424, 2139, doi: 10.1111/j.1365-2966.2012.21374.x
* Drout et al. (2011) Drout, M. R., Soderberg, A. M., Gal-Yam, A., et al. 2011, ApJ, 741, 97, doi: 10.1088/0004-637X/741/2/97
* Drout et al. (2013) Drout, M. R., Soderberg, A. M., Mazzali, P. A., et al. 2013, ApJ, 774, 58, doi: 10.1088/0004-637X/774/1/58
* Eldridge et al. (2017) Eldridge, J. J., Stanway, E. R., Xiao, L., et al. 2017, PASA, 34, e058, doi: 10.1017/pasa.2017.51
* Frohmaier et al. (2021) Frohmaier, C., Angus, C. R., Vincenzi, M., et al. 2021, MNRAS, 500, 5142, doi: 10.1093/mnras/staa3607
* Fulton et al. (2023) Fulton, M., Moore, T., Srivastav, S., et al. 2023, Transient Name Server AstroNote, 339, 1
* Gehrels (1986) Gehrels, N. 1986, ApJ, 303, 336, doi: 10.1086/164079
* Gehrels et al. (2004) Gehrels, N., Chincarini, G., Giommi, P., et al. 2004, ApJ, 611, 1005, doi: 10.1086/422091
* Gillanders et al. (2020) Gillanders, J. H., Sim, S. A., & Smartt, S. J. 2020, MNRAS, 497, 246, doi: 10.1093/mnras/staa1822
* Gillanders et al. (2023) Gillanders, J. H., Huber, M., Chambers, K., et al. 2023, Transient Name Server AstroNote, 341, 1
* Gomez et al. (2022) Gomez, S., Berger, E., Nicholl, M., Blanchard, P. K., & Hosseinzadeh, G. 2022, ApJ, 941, 107, doi: 10.3847/1538-4357/ac9842
* Guillochon et al. (2018) Guillochon, J., Nicholl, M., Villar, V. A., et al. 2018, ApJS, 236, 6, doi: 10.3847/1538-4365/aab761
* Güver & Özel (2009) Güver, T., & Özel, F. 2009, MNRAS, 400, 2050, doi: 10.1111/j.1365-2966.2009.15598.x
* Harris et al. (2020) Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi: 10.1038/s41586-020-2649-2
* Ho et al. (2023) Ho, A. Y. Q., Perley, D. A., Gal-Yam, A., et al. 2023, ApJ, 949, 120, doi: 10.3847/1538-4357/acc533
* Hunter (2007) Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
* Inserra (2019) Inserra, C. 2019, Nature Astronomy, 3, 697, doi: 10.1038/s41550-019-0854-4
* Jiang et al. (2020) Jiang, B., Jiang, S., & Ashley Villar, V. 2020, Research Notes of the American Astronomical Society, 4, 16, doi: 10.3847/2515-5172/ab7128
* Karambelkar et al. (2023a) Karambelkar, V., Andreoni, I., Sollerman, J., et al. 2023a, Transient Name Server AstroNote, 335, 1
* Karambelkar et al. (2023b) Karambelkar, V., Das, K., Lin, Z., et al. 2023b, Transient Name Server AstroNote, 340, 1
* Kasen & Bildsten (2010) Kasen, D., & Bildsten, L. 2010, ApJ, 717, 245, doi: 10.1088/0004-637X/717/1/245
* Kashiyama et al. (2016) Kashiyama, K., Murase, K., Bartos, I., Kiuchi, K., & Margutti, R. 2016, ApJ, 818, 94, doi: 10.3847/0004-637X/818/1/94
* Kasliwal et al. (2010) Kasliwal, M. M., Kulkarni, S. R., Gal-Yam, A., et al. 2010, ApJ, 723, L98, doi: 10.1088/2041-8205/723/1/L98
* Kerzendorf & Sim (2014) Kerzendorf, W. E., & Sim, S. A. 2014, MNRAS, 440, 387, doi: 10.1093/mnras/stu055
* Kroupa (2001) Kroupa, P. 2001, MNRAS, 322, 231, doi: 10.1046/j.1365-8711.2001.04022.x
* Kuncarayakti et al. (2023) Kuncarayakti, H., Sollerman, J., Izzo, L., et al. 2023, arXiv e-prints, arXiv:2303.16925, doi: 10.48550/arXiv.2303.16925
* Labrie et al. (2023) Labrie, K., Simpson, C., Cardenes, R., et al. 2023, Research Notes of the American Astronomical Society, 7, 214, doi: 10.3847/2515-5172/ad0044
* Labrie et al. (2023) Labrie, K., Simpson, C., Turner, J., et al. 2023, DRAGONS, 3.1.0, Zenodo, doi: 10.5281/zenodo.7776065
* Lee et al. (2023) Lee, M. H., Zeng, K. J., Chen, T. W., et al. 2023, Transient Name Server AstroNote, 338, 1
* Mazzali et al. (2016) Mazzali, P. A., Sullivan, M., Pian, E., Greiner, J., & Kann, D. A. 2016, MNRAS, 458, 3455, doi: 10.1093/mnras/stw512
* McBrien (2021) McBrien, O. 2021, PhD thesis, Queen’s University Belfast
* Modjaz et al. (2019) Modjaz, M., Gutiérrez, C. P., & Arcavi, I. 2019, Nature Astronomy, 3, 717, doi: 10.1038/s41550-019-0856-2
* Modjaz et al. (2014) Modjaz, M., Blondin, S., Kirshner, R. P., et al. 2014, AJ, 147, 99, doi: 10.1088/0004-6256/147/5/99
* Moe & Di Stefano (2017) Moe, M., & Di Stefano, R. 2017, ApJS, 230, 15, doi: 10.3847/1538-4365/aa6fb6
* Moore et al. (2023) Moore, T., Smartt, S. J., Nicholl, M., et al. 2023, ApJ, 956, L31, doi: 10.3847/2041-8213/acfc25
* Moriya et al. (2017) Moriya, T. J., Mazzali, P. A., Tominaga, N., et al. 2017, MNRAS, 466, 2085, doi: 10.1093/mnras/stw3225
* Nadyozhin (1994) Nadyozhin, D. K. 1994, ApJS, 92, 527, doi: 10.1086/192008
* Nagao et al. (2023) Nagao, T., Kuncarayakti, H., Maeda, K., et al. 2023, A&A, 673, A27, doi: 10.1051/0004-6361/202346084
* Nicholl et al. (2017) Nicholl, M., Guillochon, J., & Berger, E. 2017, ApJ, 850, 55, doi: 10.3847/1538-4357/aa9334
* Nicholl et al. (2023) Nicholl, M., Srivastav, S., Fulton, M. D., et al. 2023, ApJ, 954, L28, doi: 10.3847/2041-8213/acf0ba
* Perley et al. (2022) Perley, D., Sollerman, J., Schulze, S., et al. 2022, in American Astronomical Society Meeting Abstracts, Vol. 54, American Astronomical Society Meeting #240, 232.08
* Piro et al. (2021) Piro, A. L., Haynie, A., & Yao, Y. 2021, ApJ, 909, 209, doi: 10.3847/1538-4357/abe2b1
* Poole et al. (2008) Poole, T. S., Breeveld, A. A., Page, M. J., et al. 2008, 383, 627, doi: 10.1111/j.1365-2966.2007.12563.x
* Poznanski et al. (2012) Poznanski, D., Prochaska, J. X., & Bloom, J. S. 2012, MNRAS, 426, 1465, doi: 10.1111/j.1365-2966.2012.21796.x
* Poznanski et al. (2010) Poznanski, D., Chornock, R., Nugent, P. E., et al. 2010, Science, 327, 58, doi: 10.1126/science.1181709
* Prentice et al. (2018) Prentice, S. J., Maguire, K., Smartt, S. J., et al. 2018, ApJ, 865, L3, doi: 10.3847/2041-8213/aadd90
* Prentice et al. (2020) Prentice, S. J., Maguire, K., Flörs, A., et al. 2020, A&A, 635, A186, doi: 10.1051/0004-6361/201936515
* Quimby et al. (2018) Quimby, R. M., De Cia, A., Gal-Yam, A., et al. 2018, ApJ, 855, 2, doi: 10.3847/1538-4357/aaac2f
* Rodríguez et al. (2023) Rodríguez, Ó., Maoz, D., & Nakar, E. 2023, ApJ, 955, 71, doi: 10.3847/1538-4357/ace2bd
* Roming et al. (2005) Roming, P. W. A., Kennedy, T. E., Mason, K. O., et al. 2005, Space Sci. Rev., 120, 95, doi: 10.1007/s11214-005-5095-4
* Sawada et al. (2022) Sawada, R., Kashiyama, K., & Suwa, Y. 2022, ApJ, 927, 223, doi: 10.3847/1538-4357/ac53ae
* Schlafly & Finkbeiner (2011) Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103, doi: 10.1088/0004-637X/737/2/103
* Shen & Bildsten (2009) Shen, K. J., & Bildsten, L. 2009, ApJ, 699, 1365, doi: 10.1088/0004-637X/699/2/1365
* Shen et al. (2010) Shen, K. J., Kasen, D., Weinberg, N. N., Bildsten, L., & Scannapieco, E. 2010, ApJ, 715, 767, doi: 10.1088/0004-637X/715/2/767
* Shingles et al. (2021) Shingles, L., Smith, K. W., Young, D. R., et al. 2021, Transient Name Server AstroNote, 7, 1
* Sim et al. (2012) Sim, S. A., Fink, M., Kromer, M., et al. 2012, MNRAS, 420, 3003, doi: 10.1111/j.1365-2966.2011.20162.x
* Smith et al. (2019) Smith, K. W., Williams, R. D., Young, D. R., et al. 2019, Research Notes of the American Astronomical Society, 3, 26, doi: 10.3847/2515-5172/ab020f
* Smith et al. (2020) Smith, K. W., Smartt, S. J., Young, D. R., et al. 2020, PASP, 132, 085002, doi: 10.1088/1538-3873/ab936e
* Sollerman (2023) Sollerman, J. 2023, Transient Name Server Discovery Report, 2023-3158, 1
* Speagle (2020) Speagle, J. S. 2020, MNRAS, 493, 3132, doi: 10.1093/mnras/staa278
* Springob et al. (2005) Springob, C. M., Haynes, M. P., Giovanelli, R., & Kent, B. R. 2005, ApJS, 160, 149, doi: 10.1086/431550
* Srivastav et al. (2022) Srivastav, S., Smartt, S. J., Huber, M. E., et al. 2022, MNRAS, 511, 2708, doi: 10.1093/mnras/stac177
* Srivastav et al. (2023) Srivastav, S., Moore, T., Nicholl, M., et al. 2023, ApJ, 956, L34, doi: 10.3847/2041-8213/acffaf
* Stanway & Eldridge (2018) Stanway, E. R., & Eldridge, J. J. 2018, MNRAS, 479, 75, doi: 10.1093/mnras/sty1353
* Steele et al. (2004) Steele, I. A., Smith, R. J., Rees, P. C., et al. 2004, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 5489, Ground-based Telescopes, ed. J. Oschmann, Jacobus M., 679–692, doi: 10.1117/12.551456
* Stehle et al. (2005) Stehle, M., Mazzali, P. A., Benetti, S., & Hillebrandt, W. 2005, MNRAS, 360, 1231, doi: 10.1111/j.1365-2966.2005.09116.x
* Stevance et al. (2020) Stevance, H., Eldridge, J., & Stanway, E. 2020, The Journal of Open Source Software, 5, 1987, doi: 10.21105/joss.01987
* Stritzinger et al. (2009) Stritzinger, M., Mazzali, P., Phillips, M. M., et al. 2009, ApJ, 696, 713, doi: 10.1088/0004-637X/696/1/713
* Stritzinger et al. (2018) Stritzinger, M. D., Anderson, J. P., Contreras, C., et al. 2018, A&A, 609, A134, doi: 10.1051/0004-6361/201730842
* Tauris et al. (2015) Tauris, T. M., Langer, N., & Podsiadlowski, P. 2015, MNRAS, 451, 2123, doi: 10.1093/mnras/stv990
* Thornton et al. (2023) Thornton, I., Villar, V. A., Gomez, S., & Hosseinzadeh, G. 2023, in American Astronomical Society Meeting Abstracts, Vol. 55, American Astronomical Society Meeting Abstracts, 107.24
* Tonry et al. (2018) Tonry, J. L., Denneau, L., Heinze, A. N., et al. 2018, PASP, 130, 064505, doi: 10.1088/1538-3873/aabadf
* Tully et al. (2013) Tully, R. B., Courtois, H. M., Dolphin, A. E., et al. 2013, AJ, 146, 86, doi: 10.1088/0004-6256/146/4/86
* Villar et al. (2017) Villar, V. A., Berger, E., Metzger, B. D., & Guillochon, J. 2017, ApJ, 849, 70, doi: 10.3847/1538-4357/aa8fcb
* Wheeler et al. (2015) Wheeler, J. C., Johnson, V., & Clocchiatti, A. 2015, MNRAS, 450, 1295, doi: 10.1093/mnras/stv650
* Woosley (2010) Woosley, S. E. 2010, ApJ, 719, L204, doi: 10.1088/2041-8205/719/2/L204
* Woosley et al. (1989) Woosley, S. E., Pinto, P. A., & Hartmann, D. 1989, ApJ, 346, 395, doi: 10.1086/168019
* Wu & Fuller (2022) Wu, S. C., & Fuller, J. 2022, ApJ, 940, L27, doi: 10.3847/2041-8213/ac9b3d
* Yan et al. (2023) Yan, S., Wang, X., Gao, X., et al. 2023, ApJ, 959, L32, doi: 10.3847/2041-8213/ad0cc3
* Yao et al. (2020) Yao, Y., De, K., Kasliwal, M. M., et al. 2020, ApJ, 900, 46, doi: 10.3847/1538-4357/abaa3d
* Yaron & Gal-Yam (2012) Yaron, O., & Gal-Yam, A. 2012, PASP, 124, 668, doi: 10.1086/666656
# Parameter Estimation in Ill-conditioned Low-inertia Power Systems
Rajasekhar Anguluri, Lalitha Sankar, and Oliver Kosut This work is funded in
part by the NSF under grant OAC-1934766. All the authors are with the School
of Electrical, Computer, and Energy Engineering, Arizona State University,
Tempe, AZ, 85281 USA (e-mail: {rangulur,lalithasankar,okosut}@asu.edu).
###### Abstract
This paper examines model parameter estimation in dynamic power systems whose
governing electro-mechanical equations are ill-conditioned or singular. This
ill-conditioning arises because converter-interfaced generators contribute
zero or small inertia to the system. Consequently, the overall system inertia
decreases, resulting in low-inertia power systems. We show that standard
state-space-model-based least squares or subspace estimators fail to exist for
these models. We overcome this challenge by considering a least-squares
estimator directly on the coupled swing-equation model rather than on its
transformed first-order state-space form. We specifically focus on estimating
inertia (mechanical and virtual) and damping constants, although our method is
general enough for estimating other parameters. Our theoretical analysis
highlights the role of network topology on the parameter estimates of an
individual generator. For generators with greater connectivity, estimation of
the associated parameters is more susceptible to variations in other generator
states. Furthermore, we numerically show that estimating the parameters by
ignoring their ill-conditioning aspects yields highly unreliable results.
## I INTRODUCTION
Accurate knowledge of power system model parameters, including inertia and
damping, is essential to assess operating states, perform dynamic simulations,
and study stability margins. Recently, with the increasing penetration of
inverter-based (IB) distributed energy resources (DERs) in the bulk power
system, the effective system inertia has decreased, making it challenging to
stabilize demand-supply mismatches. Further, this increase in IB-DERs
significantly increases the number of unknown system parameters to estimate.
Estimating dynamic parameters of synchronous machines and other network
devices and loads, is a classical problem [1, 2]. Numerous algorithms have
been proposed for parameter estimation, both in the presence and absence of
closed-loop controllers using local or wide-area ambient measurements,
including [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. Within this body
of work, some approaches use _white box models_ , wherein the model structure
is completely known and deterministic (e.g., model structures given by
Newton’s laws, mass and energy conservation principles). In power systems,
Heffron-Phillips models (fourth order and beyond) have been a mainstay of
estimation algorithms. The opposite extreme are _black box models_ , or purely
data-driven stochastic models in which no prior knowledge is assumed, and an
input/output relation is derived from measurements. Examples include modal
analysis, dynamic equivalents, and Koopman methods. Although black box methods
are extremely useful for wide-area monitoring, they have limited utility for
planning, contingency, and stability analysis.
Another line of research focuses on _grey box models_ [17, 18]. These models
combine the advantages of the white and black box approaches to exploit prior
knowledge of physical relationships or model structure, where possible, and
learning unknown parameters from data. This methodology is particularly
relevant in the context of IB-DERs, which inevitably introduce many unknown
parameters [19, 20, 21, 22]. However, there are many shortcomings in the
existing literature; most papers: (i) focus on net inertia (estimated as a
weighted average of single-area inertia estimates) rather than the inertia of
each device or each area as they connect to each other; (ii) focus on
estimation in the presence of transient disturbances, with little work on
ambient disturbances; and (iii) do not consider the effect of frequency- and
voltage-dependent loads, leading to large (inertia) estimation errors (up to
40% [23]); see [24, 25] for a recent account of parameter estimation in
low-inertia systems.
We put forth a simple strategy for overcoming the above limitations, using a
constrained least squares estimator to estimate parameters from ambient
measurements. Least-squares-type estimators are already used for estimating
inertia in power systems; however, these estimators assume that the inertia is
strictly greater than zero. This assumption implies that the
electro-mechanical dynamics are well defined. However, it does not hold for
converter-based generators; for example, droop-control based generators
provide zero inertia [22]. Consequently, the electro-mechanical dynamics are
ill-conditioned or not well defined, giving rise to a descriptor system (see
Section III for details).
parameter estimation for these systems, with special attention to inertia and
damping. Beyond the motivating example of parameter estimation in power
systems, our results apply more broadly to other engineering systems modeled
using second-order differential equations, such as structural mechanical and
acoustic systems and fluid mechanics. We summarize our contributions below:
1. (i)
For low-inertia power systems consisting of synchronous and converter-
interfaced generators, we study a constrained least-squares estimation problem
that allow us to tackle systems with exactly zero-inertia.
2. (ii)
We highlight the role of network connectivity on the estimation performance.
Specifically, using the closed-form formulas of the estimators, we show that
for generators with greater connectivity, estimation of the associated
parameters is more susceptible to variations in other generator states.
3. (iii)
Our simulation results on the IEEE 39 bus system show that estimating the
parameters by ignoring their ill-conditioning aspects yields highly unreliable
results
## II Dynamics of Low-inertia Power Systems
We introduce the frequency dynamics for a low-inertia power system, comprised
of multiple synchronous generators (SGs) and converter-interfaced distributed
energy resources (DERs). We later use these models for parameter estimation
subject to suitable physical constraints.
We model a power network of $N$ buses with an undirected graph
$\mathcal{G}:=(\mathcal{V},\mathcal{E})$, where nodes
$\mathcal{V}=\\{1,\ldots,N\\}$ and edges
$\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$ denote buses and
transmission lines, respectively. With the $i$-th node in $\mathcal{V}$ we
associate a generator (synchronous or converter-interfaced) whose frequency
response around a steady state is governed by the swing equation [22, 26, 27]:
$\frac{2H_{i}}{\omega_{0}}\Delta\dot{\omega}_{i}(t)=\left[\Delta P_{m,i}(t)-\Delta P_{e,i}(t)\right]-FC_{i}(t)+\tilde{\epsilon}_{i}(t),$ (1)
where $\omega_{0}=120\pi$ is the rated angular frequency,
$\Delta\omega_{i}(t)=\omega_{i}(t)-\omega_{0}$, and $2H_{i}/\omega_{0}$ is the
inertia constant. $\Delta P_{m,i}(t)$ is the deviation from the steady
mechanical power injection. $\Delta P_{e,i}(t)$ is the deviation from the
electrical power output, and $\tilde{\epsilon}_{i}(t)$, a zero-mean Gaussian
process with a known variance, models the ambient fluctuation in loads as well
as process noise. Further, $\Delta P_{e,i}(t)$ equals the sum of deviations of
the power flows on the lines connected to node $i$ [26, 27]:
$\Delta P_{e,i}(t)=\sum_{(i,j)\in\mathcal{E}}\Delta P_{ij}(t),\qquad \Delta P_{ij}(t)=\beta_{ij}(\Delta\delta_{i}(t)-\Delta\delta_{j}(t)),$ (2)
where $\beta_{ij}\\!=\\!|V_{i}||V_{j}|b_{ij}$ with $b_{ij}\\!>\\!0$ denoting
the susceptance and $|V_{i}|$, $|V_{j}|$ are the rated voltage magnitudes. The
angular deviation $\Delta\delta_{i}(t)$ is obtained by integrating
$\Delta\omega_{i}(t)$.
The frequency controller output $FC_{i}(t)$ enforces the system frequency
stability due to a large imbalance between the mechanical and electrical
power. In SGs, primary frequency controllers (PFCs) provide the frequency
support. On the other hand, in grid-forming converters, the behavior of PFC is
emulated by fast frequency regulators. We assume that this goal is achieved by
a proportional feedback control that adjusts the power generation set-point
based on the frequency deviation: $FC_{i}(t)=K_{i}\Delta\omega_{i}(t)/\omega_{0}$
[22] (see Remark 1).
We assume that some of the loads depend on the system frequency. Similar to
frequency controllers, these loads provide a stabilizing damping effect on the
frequency. We model these loads as $\Delta
P_{i,\text{Load}}(t)=D_{i,\text{load}}\Delta\omega_{i}(t)/\omega_{0}$, where
$D_{i,\text{load}}$ is the damping coefficient. By slight abuse of notation,
we denote the total frequency support by
$FC_{i}(t)=(K_{i}+D_{i,\text{load}})\Delta\omega_{i}(t)/\omega_{0}$ and let
$D_{i}=K_{i}+D_{i,\text{load}}$.
We drop $\Delta$ notation in the state variables. From (1) and (2), and our
discussions on the frequency controller, we can express the dynamics for all
generators compactly as
$\underbrace{\begin{bmatrix}I&0\\ 0&M\end{bmatrix}}_{\triangleq E}\begin{bmatrix}\dot{\boldsymbol{\delta}}(t)\\ \dot{\boldsymbol{\omega}}(t)\end{bmatrix}=\underbrace{\begin{bmatrix}0&I\\ -H_{\beta}&-D\end{bmatrix}}_{\triangleq A}\begin{bmatrix}{\boldsymbol{\delta}}(t)\\ {\boldsymbol{\omega}}(t)\end{bmatrix}+\begin{bmatrix}\mathbf{0}\\ \boldsymbol{\epsilon}(t)\end{bmatrix},$ (3)
where
${\boldsymbol{\delta}}=[\delta_{1},\ldots,\delta_{N}]^{\mathsf{T}}\in\mathbb{R}^{N}$
and
${\boldsymbol{\omega}},\dot{\boldsymbol{\delta}},\dot{\boldsymbol{\omega}},\boldsymbol{\epsilon}$
and $\mathbf{0}$ are defined similarly. The $i$-th component of the process
noise $\boldsymbol{\epsilon}(t)$ is given by
$\epsilon_{i}(t)=\tilde{\epsilon}_{i}(t)+P_{m,i}(t)$. The matrices $I$ and $0$
are the $N\times N$ identity and all-zeros matrices. The Laplacian matrix
$H_{\beta}$ is defined as $[H_{\beta}]_{ij}=-\beta_{ij}$ for
$(i,j)\in\mathcal{E}$ and $[H_{\beta}]_{ij}=0$ otherwise; and
$[H_{\beta}]_{ii}=\sum_{(i,j)\in\mathcal{E}}\beta_{ij}$. Finally,
$M=\text{diag}(M_{11},\ldots,M_{NN})$ and
$D=\text{diag}(D_{11},\ldots,D_{NN})$ are the diagonal inertia and damping
matrices, where $D_{ii}=D_{i}/\omega_{0}$ and $M_{ii}=2H_{i}/\omega_{0}$.
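To make the structure of (3) concrete, the following minimal sketch (with assumed, illustrative susceptances and machine constants) assembles the descriptor pair $(E,A)$ for a toy three-generator network in which one unit is droop-controlled:

```python
import numpy as np

# Toy 3-generator example of the descriptor pair (E, A) in Eq. (3).
# Line susceptance weights beta_ij are assumed values for illustration.
N = 3
edges = {(0, 1): 1.5, (1, 2): 2.0}

H_beta = np.zeros((N, N))
for (i, j), b in edges.items():
    H_beta[i, j] = H_beta[j, i] = -b
    H_beta[i, i] += b
    H_beta[j, j] += b

w0 = 120.0 * np.pi
H = [4.0, 3.0, 0.0]                      # generator 3 is droop-controlled: H = 0
M = np.diag([2.0 * h / w0 for h in H])
D = np.diag([1.0, 1.2, 0.8]) / w0

I, Z = np.eye(N), np.zeros((N, N))
E = np.block([[I, Z], [Z, M]])           # singular whenever any H_i = 0
A = np.block([[Z, I], [-H_beta, -D]])
print(np.linalg.matrix_rank(E))          # 5 < 6: a descriptor system
```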
From (3), note that the Laplacian $H_{\beta}$ is determined by the line
susceptances, and hence, it is independent of the type of generator
(synchronous or converter-based). Thus, each generator is characterized by
$M_{i}$ and $D_{i}$. However, we show that the estimates $\hat{M}_{i}$ and
$\hat{D}_{i}$ are influenced by $H_{\beta}$. The effect of $H_{\beta}$ is
ignored in prior works, which focus on estimating either each machine's
inertia or the aggregated inertia.
The (classical) model in (3) is a starting point for many downstream tasks,
including control design, storage placement, oscillation localization, and
stability analysis. In these applications, the model in (3) is simplified by
left multiplying both sides of (3) by $E^{-1}$. Unfortunately, in low-inertia
power systems, this kind of simplification is not possible because the inertia
constant $M_{i}$ could be small for VSMs and exactly zero for droop-control
based generators [?]. Consequently, $E$ in (3) is not invertible. In this
case, we refer to (3) as a _linear descriptor_ or _differential-algebraic
system_. The latter term derives from the fact that some of the equations
represented by (3) are purely algebraic (and not differential) in that the
left-hand side is zero. These systems appear in the fields of robotics,
economics, and circuits; in power systems, they also arise when generator
dynamics and algebraic power-flow equations are explicitly considered together.
descriptor system arises due to the ill-conditioning of parameters caused by
the low inertia of IB-DERs. In the following section, we discuss why parameter
estimation is difficult in these systems and then describe a new strategy to
obtain reliable parameter estimates.
###### Remark 1
In general, the frequency controller might not be a simple proportional
control and can be of higher order; for example, in SG, turbine dynamics
contribute to $FC_{i}(t)$. However, for ease of analysis, we neglect these
dynamics. This approximation is valid for converter-interfaced generators
because the controller time constants are small; however, this approximation
might not be accurate for SGs. $\square$
## III Structure Preserving Estimation Problem
For the continuous-time model in (3), we first obtain a discrete-time model
using Euler’s method. We then formulate a constrained least squares
optimization problem for estimating the parameters using this discrete-time
model.
We assume that we can estimate the generator states ${\boldsymbol{\delta}}$
and ${\boldsymbol{\omega}}$ using PMU measurements [?]. Let $k=0,1,\ldots$ and
define $\mathbf{z}[k]\triangleq\mathbf{z}(kT_{s})$, where $T_{s}$ is the
discretization step (hereafter, the sampling period) and
$\mathbf{z}[k]=[\boldsymbol{\delta}[k]^{\mathsf{T}}\,\boldsymbol{\omega}[k]^{\mathsf{T}}]^{\mathsf{T}}$.
The relationship among $T_{s}$, resolution of the PMU measurements, and the
time-scale of the estimation horizon is explained in great detail in [28].
Using the Euler–Maruyama discretization method, we get the discrete-time
dynamics [29, 26]:
$E(\mathbf{z}[k+1]-\mathbf{z}[k])=T_{s}A\mathbf{z}[k]+\begin{bmatrix}\mathbf{0}\\ \mathbf{r}[k]\end{bmatrix},$ (4)
where $\mathbf{r}[k]$ is the discretized process noise (cf.
$\boldsymbol{\epsilon}(t)$ in (3)), and
$\mathbf{r}[k]\sim\mathcal{N}(0,\Sigma_{\epsilon})$, where
$\Sigma_{\epsilon}=T_{s}\text{diag}(\sigma^{2}_{1},\ldots,\sigma^{2}_{N})$.
The diagonal structure of $\Sigma_{\epsilon}$ is because the ambient
fluctuations are spatially uncorrelated across different buses.
The standard practice in the literature [30, 26, 27, 22, 19] is to re-write
(4) as
$\mathbf{z}[k+1]=(I+T_{s}E^{-1}A)\mathbf{z}[k]+E^{-1}\begin{bmatrix}\mathbf{0}\\ \mathbf{r}[k]\end{bmatrix},$ (5)
and estimate $A_{d}\triangleq(I+T_{s}E^{-1}A)$ using
$\mathbf{z}[0],\ldots,\mathbf{z}[\mathcal{T}-1]$. This naïve estimate has many
drawbacks: (i) $A_{d}$ might not be well-defined if $E$ is not invertible,
which is the case for droop-control based generators, as discussed earlier;
(ii) $E^{-1}$ adversely affects the noise vector by distorting its spatially
uncorrelated property; and (iii) decomposing the estimate of $A_{d}$ to
uniquely estimate $M$ and $D$ is impossible in general.
We overcome the limitations of the naïve estimator by considering the
following constrained least-squares optimization that does not require $E$ to
be invertible:
$\{\hat{M},\hat{D}\}=\operatorname*{arg\,min}_{M,D\,\in\,\mathcal{D}}\sum_{k=0}^{\mathcal{T}-1}\left\lVert E(\mathbf{z}[k+1]-\mathbf{z}[k])-T_{s}A\,\mathbf{z}[k]\right\rVert_{2}^{2}\quad\textrm{s.t. }0\leq D_{ii}\leq D_{\text{max}}\text{ for all }i,\;\;M_{i}=0\text{ for }i\in\mathcal{V}_{DC},$ (6)
where $\mathcal{D}$ is the set of non-negative diagonal matrices;
$D_{\text{max}}$ is a known term that imposes practical limits on $D$; and
$\mathcal{V}_{DC}$ is the set of nodes corresponding to the droop-control
generators. The equality constraint in (6) ensures that the estimates
$\hat{M}_{i}$, for $i\in\mathcal{V}_{DC}$, are zero. From (4), we note
that the expression inside the norm in (6) is the process noise
$[\mathbf{0}^{\mathsf{T}}\,\mathbf{r}[k]^{\mathsf{T}}]^{\mathsf{T}}$. Thus,
the proposed estimator attempts to find parameters that best explain the
variations of the ambient fluctuations over the time horizon
$k=0,\ldots,\mathcal{T}-1$.
We rewrite (6) in the standard least squares form. Define the vectors
$\mathbf{m}=[M_{11},\ldots,M_{NN}]^{\mathsf{T}}$,
$\mathbf{d}=[D_{11},\ldots,D_{NN}]^{\mathsf{T}}$. Let
$\tilde{\boldsymbol{\omega}}[k]=\boldsymbol{\omega}[k+1]-\boldsymbol{\omega}[k]$
and
$\boldsymbol{\delta}_{0:\mathcal{T}-1}=[\boldsymbol{\delta}[0],\ldots,\boldsymbol{\delta}[\mathcal{T}-1]]^{\mathsf{T}}$.
Let $\text{Diag}(\tilde{\boldsymbol{\omega}}[k])$ be the diagonal matrix with
the entries of $\tilde{\boldsymbol{\omega}}[k]$ on the main diagonal, and
define the data matrix:
$\displaystyle W_{0:\mathcal{T}-1}=\begin{bmatrix}\text{Diag}(\tilde{\boldsymbol{\omega}}[0])&T_{s}\text{Diag}(\boldsymbol{\omega}[0])\\ \text{Diag}(\tilde{\boldsymbol{\omega}}[1])&T_{s}\text{Diag}(\boldsymbol{\omega}[1])\\ \vdots&\vdots\\ \text{Diag}(\tilde{\boldsymbol{\omega}}[\mathcal{T}-1])&T_{s}\text{Diag}(\boldsymbol{\omega}[\mathcal{T}-1])\end{bmatrix}.$ (7)
Then the optimization in (6) can be compactly expressed as
$\displaystyle\{\hat{\mathbf{m}},\hat{\mathbf{d}}\}=\operatorname*{arg\,min}_{\mathbf{m},\mathbf{d}\,\in\,\mathbb{R}^{N}}\left\lVert W_{0:\mathcal{T}-1}\begin{bmatrix}\mathbf{m}\\ \mathbf{d}\end{bmatrix}+T_{s}(I\otimes H_{\beta})\boldsymbol{\delta}_{0:\mathcal{T}-1}\right\rVert_{2}^{2}$ (8)
$\displaystyle\textrm{s.t. }\mathbf{0}\leq\mathbf{d}\leq D_{\text{max}}\mathbf{1},\text{ and }\Gamma\begin{bmatrix}\mathbf{m}\\ \mathbf{d}\end{bmatrix}=\mathbf{0},$
where $\mathbf{1}$ is the all-ones vector, $I$ is the $\mathcal{T}\times\mathcal{T}$ identity matrix, and $\otimes$ is the matrix
Kronecker product. The $n\times 2N$ selection matrix $\Gamma$ (with $n$
denoting the size of $\mathcal{V}_{DC}$) selects the entries of $\mathbf{m}$
associated with the droop-control generators.
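As an illustration, the optimization in (8) can be solved with an off-the-shelf bound-constrained least-squares routine; a minimal sketch (ours) using SciPy follows. Rather than forming $\Gamma$, the equality constraint is enforced by dropping the droop columns of $W_{0:\mathcal{T}-1}$ and setting the corresponding entries of $\hat{\mathbf{m}}$ to zero.

```python
import numpy as np
from scipy.optimize import lsq_linear

def estimate(delta, omega, H_beta, Ts, droop_nodes, D_max):
    """Sketch of the constrained least squares in (8).
    delta, omega: (T, N) arrays of samples delta[k], omega[k]."""
    T, N = omega.shape
    dw = np.diff(omega, axis=0)                    # omega[k+1] - omega[k]
    W = np.vstack([np.hstack([np.diag(dw[k]), Ts * np.diag(omega[k])])
                   for k in range(T - 1)])         # data matrix (7)
    b = -Ts * (delta[:T - 1] @ H_beta.T).ravel()   # -Ts (I kron H_beta) delta
    keep = [i for i in range(N) if i not in set(droop_nodes)]
    cols = keep + list(range(N, 2 * N))            # drop droop m-columns
    lb = np.zeros(len(cols))                       # m >= 0 and d >= 0
    ub = np.r_[np.full(len(keep), np.inf), np.full(N, D_max)]
    sol = lsq_linear(W[:, cols], b, bounds=(lb, ub))
    m_hat = np.zeros(N)
    m_hat[keep] = sol.x[:len(keep)]                # droop nodes stay at zero
    return m_hat, sol.x[len(keep):]
```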
Optimization problems similar to (8) have recently been studied in the inertia and damping estimation literature [?]. These studies, however, ignore the zero-inertia constraints and the $H_{\beta}$ term; hence, they require damping
constraints to make the estimation problem mathematically well-posed. In
contrast, the problem in (8) is well-posed even when we ignore the damping
constraints, thereby making it useful for the cases where $D_{\max}$ is
unknown. Using the special case below, we study the role of the topology, encoded in the susceptance matrix $H_{\beta}$, on the parametric estimates of the $i$-th generator.
_Special case (unconstrained optimization)_ : Suppose that $W_{0:\mathcal{T}-1}$ has full column rank. (For an appropriate choice of $N$, the full column rank assumption generally holds because of the presence of additive noise in the measurements.) Let us ignore the constraints in (8).
Then, the problem in (8) reduces to the unconstrained least squares problem,
which admits the following solution:
$\displaystyle\begin{bmatrix}\hat{\mathbf{m}}\\ \hat{\mathbf{d}}\end{bmatrix}=-T_{s}W_{0:\mathcal{T}-1}^{+}(I\otimes H_{\beta})\boldsymbol{\delta}_{0:\mathcal{T}-1},$ (9)
where $W_{0:\mathcal{T}-1}^{+}$ is the pseudo-inverse of
$W_{0:\mathcal{T}-1}$, and is given by
$W_{0:\mathcal{T}-1}^{+}=(W_{0:\mathcal{T}-1}^{\mathsf{T}}W_{0:\mathcal{T}-1})^{-1}W_{0:\mathcal{T}-1}^{\mathsf{T}}$.
By exploiting the diagonal structure of the blocks in $W_{0:\mathcal{T}-1}$ in (7), we can express the estimates at the $i$-th generator node as
$\displaystyle\hat{m}_{i}=-\sum_{j=1}^{N}[H_{\beta}]_{i,j}\left(\sum_{k=0}^{\mathcal{T}-1}\left[\frac{c_{i,2}}{c_{i,3}}\tilde{\omega}_{i}[k]-\frac{c_{i,1}}{c_{i,3}}{\omega}_{i}[k]\right]\delta_{j}[k]\right)$
$\displaystyle\hat{d}_{i}=-\sum_{j=1}^{N}[H_{\beta}]_{i,j}\left(\sum_{k=0}^{\mathcal{T}-1}\left[\frac{c_{i,0}}{c_{i,3}}{\omega}_{i}[k]-\frac{c_{i,1}}{c_{i,3}}\tilde{\omega}_{i}[k]\right]\delta_{j}[k]\right),$ (10)
where the constants are $c_{i,3}=c_{i,0}c_{i,2}-c^{2}_{i,1}$; $c_{i,0}=\sum_{k=0}^{\mathcal{T}-1}\tilde{\omega}^{2}_{i}[k]$; $c_{i,1}=\sum_{k=0}^{\mathcal{T}-1}\tilde{\omega}_{i}[k]{\omega}_{i}[k]$; and $c_{i,2}=\sum_{k=0}^{\mathcal{T}-1}{\omega}^{2}_{i}[k]$. In the above expressions, we set $T_{s}=1$ for simplicity. The constants $c_{i,0}$, $c_{i,1}$, and $c_{i,2}$ depend on the $i$-th generator’s frequencies. They determine the contribution of the frequency and its rate of change $\tilde{\omega}_{i}[k]=\omega_{i}[k+1]-\omega_{i}[k]$ to the $i$-th estimate.
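A small sketch (ours) that evaluates the closed-form estimates (10) for one generator, with $T_{s}=1$ as in the text:

```python
import numpy as np

def node_estimate(i, omega, delta, H_beta):
    """Evaluate (10) for generator i with T_s = 1.
    omega, delta: (T, N) arrays; sums run over k = 0..T-2."""
    w = omega[:-1, i]                  # omega_i[k]
    wt = np.diff(omega[:, i])          # omega_i[k+1] - omega_i[k]
    c0, c1, c2 = wt @ wt, wt @ w, w @ w
    c3 = c0 * c2 - c1 ** 2
    # Inner sums over k, one entry per neighbor j.
    s_m = (c2 / c3) * (delta[:-1].T @ wt) - (c1 / c3) * (delta[:-1].T @ w)
    s_d = (c0 / c3) * (delta[:-1].T @ w) - (c1 / c3) * (delta[:-1].T @ wt)
    return -H_beta[i] @ s_m, -H_beta[i] @ s_d   # (m_hat_i, d_hat_i)
```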
The $i$-th inertia (or damping) estimate in (10) is a weighted average of the susceptance values of the lines connected to the $i$-th node. These weights depend both on the $i$-th node’s frequencies and on the angles of all generators. Thus, for generators with greater connectivity, estimation of the associated parameters is more susceptible to variations in other generator states. Consequently, these parameters cannot be estimated using local measurements alone. By contrast, it is reasonable to estimate the inertia of a largely isolated microgrid, since it has few or no connections to the rest of the network. Finally, we
can only estimate the parameters of a generator in a large system when both
the local frequency and the power measurements are available. Recall from (2)
that the power deviations encode the topological information.
We now comment on the statistical properties of the estimate in (9). Because $\boldsymbol{\omega}(t)$ and $\boldsymbol{\delta}(t)$ in (3) are correlated random processes, $W_{0:\mathcal{T}-1}$ and $\boldsymbol{\delta}_{0:\mathcal{T}-1}$ are random and correlated. Thus, characterizing the distribution of the estimate in (9) is hard. A workaround is to interpret the optimization in (8) as a means for obtaining the parameters of the linear model:
$\displaystyle-T_{s}(I\otimes H_{\beta})\boldsymbol{\delta}_{0:\mathcal{T}-1}=W_{0:\mathcal{T}-1}\begin{bmatrix}\mathbf{m}\\ \mathbf{d}\end{bmatrix}+\boldsymbol{\zeta},$ (11)
where $\boldsymbol{\zeta}$ has the same distribution as the filtered process noise accumulated over time, while $W_{0:\mathcal{T}-1}$ and $\boldsymbol{\delta}_{0:\mathcal{T}-1}$ are treated as responses due to the initial state and hence as deterministic terms. With these assumptions, it follows that
$\displaystyle\begin{bmatrix}\hat{\mathbf{m}}\\ \hat{\mathbf{d}}\end{bmatrix}\sim\mathcal{N}\left(\begin{bmatrix}\mathbf{m}^{*}\\ \mathbf{d}^{*}\end{bmatrix},\,T^{2}_{s}\,W_{0:\mathcal{T}-1}^{+}\Sigma_{\boldsymbol{\zeta}}(W_{0:\mathcal{T}-1}^{+})^{\mathsf{T}}\right),$ (12)
where $(\mathbf{m}^{*},\mathbf{d}^{*})$ is the unknown truth, and
$\Sigma_{\boldsymbol{\zeta}}$ is the covariance matrix of
$\boldsymbol{\zeta}$. The characterization in (12) holds even for non-Gaussian process noise, thanks to the asymptotic (in $\mathcal{T}$) normality of the least squares estimator (see [17]). If $\Sigma_{\boldsymbol{\zeta}}$ were diagonal, $W_{0:\mathcal{T}-1}^{+}\Sigma_{\boldsymbol{\zeta}}(W_{0:\mathcal{T}-1}^{+})^{\mathsf{T}}$ would be a $2\times 2$ block matrix with diagonal blocks; the off-diagonal blocks capture correlations between the inertia and damping estimates at a given node, so the variances of the estimates would not be influenced by variations in other generator states. Unfortunately, $\Sigma_{\boldsymbol{\zeta}}$ cannot be diagonal because the process noise gets filtered through the dynamics in (4); hence, $\Sigma_{\boldsymbol{\zeta}}$ is dense, and so is the covariance matrix in (12). As a result, the variance of each estimate depends both on the network and on the variations in other generator states.
The discrete-time variance $\sigma_{i}^{2}T_{s}$ of $\mathbf{r}[k]$ (see (4)), due to the process noise and loads, is hidden in the covariance matrix $\Sigma_{\boldsymbol{\zeta}}$. By assuming $\sigma_{i}^{2}=\sigma^{2}$ for all $i=1,\ldots,N$, we can write $\Sigma_{\boldsymbol{\zeta}}=\sigma^{2}T_{s}Q$, where the matrix $Q$ depends solely on the system dynamic matrices and the topology. Thus, the covariance term in (12) is effectively scaled by $T_{s}^{3}\sigma^{2}$. This result highlights the trade-off between the sampling time and the variance of the load fluctuations, allowing us to down- or up-sample measurements to improve the estimates’ quality. For instance, for highly fluctuating loads (higher values of $\sigma^{2}$), we can down-sample the measurements for computational speedup with minimal loss in estimation performance.
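To spell out the scaling claim, substitute $\Sigma_{\boldsymbol{\zeta}}=\sigma^{2}T_{s}Q$ into the covariance in (12):
$\displaystyle T^{2}_{s}\,W_{0:\mathcal{T}-1}^{+}\Sigma_{\boldsymbol{\zeta}}(W_{0:\mathcal{T}-1}^{+})^{\mathsf{T}}=T^{2}_{s}\,W_{0:\mathcal{T}-1}^{+}\left(\sigma^{2}T_{s}Q\right)(W_{0:\mathcal{T}-1}^{+})^{\mathsf{T}}=T^{3}_{s}\,\sigma^{2}\,W_{0:\mathcal{T}-1}^{+}Q\,(W_{0:\mathcal{T}-1}^{+})^{\mathsf{T}}.$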
We close this section by pointing out the importance of the structure preserving estimation problem in (6) in the case where we have access to a few generator states but not all of them. Here, we cannot rewrite (6) in the constrained form in (8). Nonetheless, we can use expectation-maximization (EM) type algorithms, which at a high level solve the optimization in (8) with the data matrix $W_{0:\mathcal{T}-1}$ replaced by Kalman estimates. We leave this study for the future.
## IV Simulation Results
We illustrate the performance of the structure-preserving inertia and damping estimator in (8) on the IEEE 39-bus, 10-generator benchmark system.
See Ref. [26] for a single line diagram of the topology and the location of
the generator buses. The inertia constants of the generators are summarized in
Table I, and all damping constants are set to $d_{i}^{*}=0.0531$ p.u. We use the Kron-reduction technique [26] to obtain the matrix $H_{\beta}$ in (3). We obtain the initial values of $(\boldsymbol{\delta}(t),\boldsymbol{\omega}(t))$ and the line susceptance values from [31]. We set the discretization time-step
$T_{s}=1/60$ sec. We use these parameter values to generate the frequency
measurements using the discrete-time model in (4).
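(Kron reduction amounts to a Schur complement of the bus susceptance matrix with respect to the eliminated, non-generator buses; below is a minimal sketch with our own names and an assumed bus partition.)

```python
import numpy as np

def kron_reduce(H, keep):
    """Eliminate all buses not in `keep` from susceptance matrix H
    via the Schur complement: H_kk - H_ke H_ee^{-1} H_ek."""
    keep = np.asarray(keep)
    elim = np.setdiff1d(np.arange(H.shape[0]), keep)
    Hkk = H[np.ix_(keep, keep)]
    Hke = H[np.ix_(keep, elim)]
    Hek = H[np.ix_(elim, keep)]
    Hee = H[np.ix_(elim, elim)]
    return Hkk - Hke @ np.linalg.solve(Hee, Hek)
```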
TABLE I: IEEE 39-bus synchronous generator inertia (p.u.)

$m^{*}_{1}$ | $m^{*}_{2}$ | $m^{*}_{3}$ | $m^{*}_{4}$ | $m^{*}_{5}$
---|---|---|---|---
$0.2228$ | $0.1607$ | $0.1873$ | $0.1517$ | $0.1379$
$m^{*}_{6}$ | $m^{*}_{7}$ | $m^{*}_{8}$ | $m^{*}_{9}$ | $m^{*}_{10}$
$0.1846$ | $0.1401$ | $0.18289$ | $0.1830$ | $2.6526$
### IV-A Case study 1: estimation performance and validation
First, we explore the case where there are no converter-based generators, and
the inertia constants of all synchronous generators are not close to zero; see
Table I. We also ignore the damping constraints for simplicity. Thus, we
consider the unconstrained optimization problem.
Our first simulations focus on the inertia and damping estimation error
behavior as a function of the estimation time $\mathcal{T}$. We define the
following error metrics:
$\displaystyle E_{\text{int}}=\frac{1}{10}\sum_{i=1}^{10}(\hat{m}_{i}-m_{i}^{*})^{2}\,;\quad D_{\text{int}}=\frac{1}{10}\sum_{i=1}^{10}(\hat{d}_{i}-d_{i}^{*})^{2}.$ (13)
These metrics capture estimation error (squared) of a random generator node.
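A sketch of the Monte Carlo experiment described next, reusing the hypothetical simulate() and estimate() sketches from Section III (all names are ours):

```python
import numpy as np

def mc_errors(m_true, d_true, H_beta, sigma, Ts, T, trials=100, seed=0):
    """Monte Carlo estimate of the error metrics in (13)."""
    rng = np.random.default_rng(seed)
    E_int, D_int = [], []
    for _ in range(trials):
        delta, omega = simulate(m_true, d_true, H_beta, sigma, Ts, T, rng)
        m_hat, d_hat = estimate(delta, omega, H_beta, Ts,
                                droop_nodes=[], D_max=np.inf)
        E_int.append(np.mean((m_hat - m_true) ** 2))
        D_int.append(np.mean((d_hat - d_true) ** 2))
    return (np.mean(E_int), np.std(E_int)), (np.mean(D_int), np.std(D_int))
```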
We set the process noise standard deviation to $\sigma=0.01$ p.u. [26]. Fig. 1 illustrates the Monte Carlo estimates of the mean and the standard deviation (number of trials = 100) of the error metrics. We note that more measurements are required to estimate damping accurately than inertia. This is because the inertia estimate in (10) depends more on the difference of the frequencies at $k$ and $k+1$, and the process noise is smaller in $\omega[k+1]-\omega[k]$ than in $\omega[k]$. On the other hand, the damping estimator relies more on $\omega[k]$, and hence its performance is strongly influenced by the process noise. As a result, it requires more measurements to accurately estimate the true damping.
Our second set of simulations focuses on the probability distribution of the estimation error for a single generator; we chose $i=3$. Fig. 2 and Fig. 3 illustrate the empirical histograms for the estimation time horizons $\mathcal{T}=50$ and $\mathcal{T}=200$.
### IV-B Case study 2: comparison with the naïve estimator in [26]
Next, we examine the estimator’s performance in the presence of both synchronous and converter-interfaced generators. For the latter, we chose VSMs, whose behavior is emulated by setting the inertia constants close to (but not exactly) zero. In particular, we set $m^{*}_{3}=0.0019$, $m^{*}_{4}=0.0015$, and $m^{*}_{5}=0.0014$. We also compared the performance of our estimator with the naïve estimator that first estimates $A_{d}$ in (5) and then extracts the inertia constants. We estimate $A_{d}$ using the maximum-likelihood technique suggested in [26]. We report our findings in Table II, where the values in parentheses indicate relative estimation errors. For all the generators, including the VSMs, our structure preserving estimator estimated the inertia constants quite accurately.
TABLE II: IEEE 39-bus synchronous generator inertia estimates (p.u.)

True | Our method | Naïve estimator
---|---|---
$m^{*}_{1}$ = 0.2228 | 0.2228 (-0.005e-03) | -0.0384 (-1.1724)
$m^{*}_{2}$ = 0.1607 | 0.1607 (-0.251e-03) | -0.0014 (-1.0090)
$m^{*}_{3}$ = 0.0019 | 0.0019 (-0.042e-03) | -0.0008 (-1.4535)
$m^{*}_{4}$ = 0.0015 | 0.0015 (-0.031e-03) | -0.0002 (-0.8677)
$m^{*}_{5}$ = 0.0014 | 0.0014 (-0.873e-03) | -0.0002 (-1.1791)
$m^{*}_{6}$ = 0.1846 | 0.1845 (-0.054e-03) | -0.0915 (-1.4959)
$m^{*}_{7}$ = 0.1401 | 0.1401 (-0.019e-03) | -0.0864 (-0.3833)
$m^{*}_{8}$ = 0.1289 | 0.1289 (-0.015e-03) | -0.0144 (-0.8880)
$m^{*}_{9}$ = 0.1830 | 0.1830 (-0.023e-03) | -0.0369 (-1.2015)
$m^{*}_{10}$ = 2.6526 | 2.6526 (-0.004e-03) | 1.6507 (-0.3777)
The simulations presented in this section support many of our theoretical observations; in particular, Case study 2 shows that our estimator outperforms methods that do not account for the ill-conditioning aspects. These observations have implications for the design and implementation of real-time algorithms for estimating inertia and damping in low-inertia systems.
Figure 1: Estimation error as a function of the estimation time horizon. The shaded region denotes the standard deviation (over 100 trials). From both the top and bottom panels, we clearly see that the average error in (13) decreases as $\mathcal{T}$ increases. However, compared to inertia, more measurements are needed to estimate damping accurately.

Figure 2: Empirical probability distribution of the error deviation of inertia and damping of the generator labelled 3. For $\mathcal{T}=50$, the top panel presents the histograms of the error deviations of inertia for various noise levels; the bottom panel presents similar plots for damping. In both panels, the spread (range of the x-axis) increases with increasing $\sigma$; this is more pronounced for the error deviations of the damping constant.

Figure 3: Empirical probability distribution of the error deviation of inertia and damping of the generator labelled 3. For $\mathcal{T}=200$, the top panel presents the histograms of the error deviations of inertia for various noise levels; the bottom panel presents similar plots for damping. Compared to Fig. 2, the distribution is more concentrated around zero, which agrees with our intuition that the estimation error decreases as the number of measurements increases.
## V Concluding Remarks
A simple observation, namely that the parameters of multiple areas or generators can be estimated directly from descriptor (ill-conditioned) electro-mechanical dynamics, allowed us to estimate the inertia and damping of power systems with a mix of synchronous and converter-interfaced generators. The latter include virtual synchronous machines and droop-control-based generators, for which the inertia constants are exactly or approximately zero, thereby limiting the utility of existing inertia and damping estimation methods, which almost always assume non-negligible inertia. We overcame this limitation by studying a constrained least-squares estimator on the descriptor-type dynamics, where the constraints set the inertia of droop-controlled generators to zero. We argued that the proposed estimator is well-posed and admits a unique solution, at least for a special case. Furthermore, we discussed some limitations of the naïve estimator in the context of inertia and damping estimation.
Our analysis highlighted the role of network connectivity on the estimators’
performance, which has not been properly studied in the literature. In
particular, using the closed-form expressions of the estimators, we showed
that for generators with greater connectivity, estimation of the associated
parameters is more susceptible to variations in other generator states.
Finally, our simulation results showed that estimating the parameters by
ignoring the ill-conditioning aspects yields highly unreliable results.
## References
* [1] LA Kilgore. Calculation of synchronous machine constants-reactances and time constants affecting transient characteristics. Transactions of the American Institute of Electrical Engineers, 50(4):1201–1213, 1931.
* [2] Sherwin H Wright. Determination of synchronous machine constants by test reactances, resistances, and time constants. Transactions of the American Institute of Electrical Engineers, 50(4):1331–1350, 1931.
* [3] M Burth, George C Verghese, and M Velez-Reyes. Subset selection for improved parameter estimation in on-line identification of a synchronous generator. IEEE Transactions on Power Systems, 14(1):218–225, 1999.
* [4] Artem Mikhalev, Alexander Emchinov, Samuel Chevalier, Yury Maximov, and Petr Vorobev. A bayesian framework for power system components identification. In 2020 IEEE Power & Energy Society General Meeting (PESGM), pages 1–5. IEEE, 2020.
* [5] Dexin Li, Haifeng Zhang, Bo Wang, Guanqun Zhuang, and Deyou Yang. Data-driven electromechanical parameter estimation in dynamic model of a synchronous generator. In 2021 IEEE/IAS Industrial and Commercial Power System Asia, pages 858–863. IEEE, 2021.
* [6] Behrooz Zaker, Ramtin Khalili, Hadi Rabieyan, and Mehdi Karrari. A new method to identify synchronous generator and turbine-governor parameters of a gas unit using a closed-loop model. International Transactions on Electrical Energy Systems, 31(11):e13110, 2021.
* [7] Arindam Mitra, Abheejeet Mohapatra, Saikat Chakrabarti, and Subrata Sarkar. Online measurement based joint parameter estimation of synchronous generator and exciter. IEEE Transactions on Energy Conversion, 36(2):820–830, 2020.
* [8] Andrey Gorbunov, Anatoly Dymarsky, and Janusz Bialek. Estimation of parameters of a dynamic generator model from modal PMU measurements. IEEE Trans. on Power Systems, 35(1):53–62, 2019.
* [9] Song Guo, Sean Norris, and Janusz Bialek. Adaptive parameter estimation of power system dynamic model using modal information. IEEE Trans. on Power Systems, 29(6):2854–2861, 2014.
* [10] Junbo Zhao, Antonio Gómez-Expósito, Marcos Netto, Lamine Mili, Ali Abur, Vladimir Terzija, Innocent Kamwa, Bikash Pal, Abhinav Kumar Singh, Junjian Qi, et al. Power system dynamic state estimation: Motivations, definitions, methodologies, and future work. IEEE Transactions on Power Systems, 34(4):3188–3198, 2019.
* [11] S Armina Foroutan and Anurag Srivastava. Generator model validation and calibration using synchrophasor data. In 2019 IEEE Industry Applications Society Annual Meeting, pages 1–6. IEEE, 2019.
* [12] Dmitry Kosterev. Hydro turbine-governor model validation in pacific northwest. IEEE Trans. on Power Systems, 19(2):1144–1149, 2004.
* [13] Jin Ma, Dong Han, W-J Sheng, R-M He, C-Y Yue, and J Zhang. Wide area measurements-based model validation and its application. IET Generation, Transmission & Distribution, 2(6):906–916, 2008.
* [14] Yikui Liu, Lei Wu, and Jie Li. D-PMU based applications for emerging active distribution systems: A review. Electric Power Systems Research, 179:106063, 2020.
* [15] Lingling Fan and Yasser Wehbe. Extended Kalman filtering based real-time dynamic state and parameter estimation using PMU data. Electric Power Systems Research, 103:168–177, 2013.
* [16] Xiaozhe Wang. Estimating dynamic load parameters from ambient pmu measurements. In 2017 IEEE Power & Energy Society General Meeting, pages 1–5. IEEE, 2017.
* [17] Lennart Ljung. System identification. In Signal Analysis and Prediction, pages 163–173. Springer, 1998.
* [18] Henrik Melgaard. Identification of physical models. DTU Compute, 1994.
* [19] Jinpeng Guo, Xiaozhe Wang, and Boon-Teck Ooi. Online model-free estimation of the dynamic system model for a power system with renewables in ambient conditions. IEEE Access, 8:96878–96887, 2020.
* [20] Xiaozhe Wang, Janusz W Bialek, and Konstantin Turitsyn. Pmu-based estimation of dynamic state jacobian matrix and dynamic system state matrix in ambient conditions. IEEE Transactions on Power Systems, 33(1):681–690, 2017.
* [21] Muyang Liu, Junru Chen, and Federico Milano. On-line inertia estimation for synchronous and non-synchronous devices. IEEE Transactions on Power Systems, 36(3):2693–2701, 2020.
* [22] Diala Nouti, Ferdinanda Ponci, and Antonello Monti. Heterogeneous inertia estimation for power systems with high penetration of converter-interfaced generation. Energies, 14(16):5047, 2021.
* [23] Dimitrios Zografos and Mehrdad Ghandhari. Estimation of power system inertia. In 2016 IEEE Power and Energy Society General Meeting (PESGM), pages 1–5. IEEE, 2016.
* [24] Evelyn Heylen, Fei Teng, and Goran Strbac. Challenges and opportunities of inertia estimation and forecasting in low-inertia power systems. Renewable and Sustainable Energy Reviews, 147:111176, 2021.
* [25] Bendong Tan, Junbo Zhao, Marcos Netto, Venkat Krishnan, Vladimir Terzija, and Yingchen Zhang. Power system inertia estimation: Review of methods and the impacts of converter-interfaced generations. International Journal of Electrical Power & Energy Systems, 134:107362, 2022.
* [26] Andrey Y Lokhov, Marc Vuffray, Dmitry Shemetov, Deepjyoti Deka, and Michael Chertkov. Online learning of power transmission dynamics. In 2018 Power Systems Computation Conference, pages 1–7. IEEE, 2018.
* [27] Andrey Y Lokhov, Deepjyoti Deka, Marc Vuffray, and Michael Chertkov. Uncovering power transmission dynamic model from incomplete pmu observations. In 2018 IEEE Conference on Decision and Control (CDC), pages 4008–4013. IEEE, 2018.
* [28] Federico Milano. Rotor speed-free estimation of the frequency of the center of inertia. IEEE Transactions on Power Systems, 33(1):1153–1155, 2017.
* [29] Simo Särkkä and Arno Solin. Applied stochastic differential equations, volume 10. Cambridge University Press, 2019.
* [30] Ujjwol Tamrakar, Nischal Guruwacharya, Niranjan Bhujel, Felipe Wilches-Bernal, Timothy M Hansen, and Reinaldo Tonkoski. Inertia estimation in power systems using energy storage and system identification techniques. In 2020 International Symposium on Power Electronics, Electrical Drives, Automation and Motion (SPEEDAM), pages 577–582. IEEE, 2020.
* [31] Ian Hiskens. Ieee pes task force on benchmark systems for stability controls. Technical report, 2013.
# How the Experts Do It: Assessing and Explaining Agent Behaviors in Real-Time
Strategy Games
Jonathan Dodge, Sean Penney, Claudia Hilderbrand, Andrew Anderson, Margaret
Burnett
Oregon State University, Corvallis, OR, USA; {dodgej, penneys, minic, anderan2, burnett}<EMAIL_ADDRESS>
###### Abstract
How should an AI-based explanation system explain an agent’s complex behavior
to ordinary end users who have no background in AI? Answering this question is
an active research area, for if an AI-based explanation system could
effectively explain intelligent agents’ behavior, it could enable the end
users to understand, assess, and appropriately trust (or distrust) the agents
attempting to help them. To provide insights into this question, we turned to
human expert explainers in the real-time strategy domain – “shoutcasters” – to
understand (1) how they foraged in an evolving strategy game in real time, (2)
how they assessed the players’ behaviors, and (3) how they constructed
pertinent and timely explanations out of their insights and delivered them to
their audience. The results provided insights into shoutcasters’ foraging
strategies for gleaning information necessary to assess and explain the
players; a characterization of the types of implicit questions shoutcasters
answered; and implications for creating explanations by using the patterns and
abstraction levels these human experts revealed.
###### category:
H.1.2 User/Machine systems Human Information Processing
###### category:
H.5.2 Information interfaces and presentation (e.g., HCI) User Interfaces
###### category:
I.2.m Artificial Intelligence Miscellaneous
###### keywords:
Explainable AI; Intelligent Agents; RTS Games; StarCraft; Information Foraging.
## 1 Introduction
Real-time strategy (RTS) games are becoming more popular artificial
intelligence (AI) research platforms. A number of factors have contributed to
this trend. First, RTS games are a challenge for AI because they involve real-time adversarial planning within sequential, dynamic, and partially observable
environments [22]. Second, AI advancements made in the RTS domain can be
mapped to real world combat mission planning and execution such as an AI
system trained to control a fleet of drones for missions in simulated
environments [31].
Figure 1: Two shoutcasters providing commentary for a professional StarCraft
match.
People without AI training will need to _understand_ and ultimately _assess_
the decisions of such a system, based on what such intelligent systems
recommend or decide to do on their own. For example, imagine “Jake,” the proud
owner of a new self-driving car, who needs to monitor the AI system driving
his car through complex traffic like the Los Angeles freeway system at rush
hour, assessing when to trust the system [16] and when he should take control.
Ideally, an interactive explanation system could help Jake assess whether and
when the AI is making its decisions “ _for the right reasons_ ” — in real
time.
Scenarios like this are the motivation for an emerging area of research
referred to as “Explainable AI,” where an automated explanation device
presents an AI system’s decisions and actions in a form useful to the intended
audience — here, Jake. There are recent research advances in explainable AI,
as we discuss in the Related Work section, but only a few focus on explaining
_complex strategy environments_ like RTS games and fewer draw from expert
explainers. To help fill this gap, we conducted an investigation in the
setting of StarCraft II, a popular RTS game [22] available to AI researchers
[33].
We looked to “shoutcasters” (sportscasters for e-sports like RTS games) like
those in Figure 1. In StarCraft e-sports, two players compete while the
shoutcasters provide _real-time_ commentary. Shoutcasters are useful to study for explaining AI agents in real time to people like Jake, for two reasons. First, they face an _assessment_ task similar to explaining Jake’s car-driving agent to him. Specifically, they must 1) discover the actions of the player, 2) make sense of them, and 3) assess them, particularly if they discover good, bad, or unorthodox behavior. They must do all this while simultaneously constructing an explanation of their discoveries in real time.
Second, shoutcasters are _expert explainers_. As communication professionals,
they are paid to inform an audience they cannot see or receive
feedback/questions from. Hoffman & Klein [10] researched five stages of
explanation, looking at how explanations are formed from observation of an
event, generating one or more possible explanations, judging the plausibility
of said explanations, and either resolving or extending the explanation. Their
findings help to illustrate the complexity of shoutcasters’ task, due to its
abductive nature of explaining the past and anticipating the future. In short,
shoutcasters must anticipate and _answer the questions the audience are not
able to ask_ , all while passively watching the video stream.
Because shoutcasters explain in parallel to gathering their information, we
guided part of our investigation using Information Foraging Theory (IFT) [26],
which explains how people go about their information seeking activities. It is
based on naturalistic predator-prey models, in which the _predator_
(shoutcaster) searches _patches_ (parts of the information environment) to
find _prey_ (evidence of players’ decision process) by following the _cues_
(signposts in the environment that seem to point toward prey) based on their
_scent_ (predator’s guess at how related to the prey a cue is). IFT constructs
have been used to explain and predict people’s information-seeking behavior in
several domains, such as understanding navigations through web sites or
programming and software engineering environments [6, 8, 9, 15, 20, 23, 24,
25, 30]. However, to our knowledge, it has not been used before to investigate
explaining RTS environments like StarCraft.
Using this framework, we investigated the following research questions (RQs):
1. RQ1
_The What and the Where_ : What information do shoutcasters seek to generate
explanations, and where do they find it?
2. RQ2
_The How_ : How do shoutcasters seek the information they seek?
3. RQ3
_The Questions_ : What implicit questions do shoutcasters answer and how do
they form their answers?
4. RQ4
_The Explanations_ : What relationships and objects do shoutcasters use when
building their explanations?
## 2 Background and Related Work
In the HCI community, research has begun to investigate the benefits to humans
of explaining AI. “Jake” (our test pilot) improving his mental model is
critical to his success, since Jake needs a reasonable mental model of the AI
system to assess whether it is making decisions _for the right reasons_.
Mental models, defined as “internal representations that people build based on
their experiences in the real world,” enable users to predict system behavior
[21]. Kulesza et al. [14] found those who adjusted their mental models most in
response to explanations of AI (a recommender system) were best able to
customize recommendations. Further, participants who improved their mental
models the most found debugging more worthwhile and engaging.
Building upon this finding, Kulesza et al. [13] then identified principles for
explaining (in a “white box” fashion) to users how a machine learning system
makes its predictions more transparent to the user. In user studies with a
prototype following these principles, participants’ quality of mental models
increased by up to 52%, and along with these improvements came better ability
to customize the intelligent agents. Kapoor et al. [11] also showed that
explaining AI increased user satisfaction and interacting with the
explanations enabled users to construct classifiers that were more aligned
with target preferences. Bostandjiev et al.’s work on a music recommendation
system [3] found that explanations led to a remarkable increase in user satisfaction with their system.
As to what people _want_ explained about AI systems, one influential work into
explainable AI has been Lim & Dey’s [18] investigation into information
demanded from context-aware intelligent systems. They categorized users’
information needs into various “intelligibility types,” and investigated which
types provided the most benefit to user understanding. Among these types were
“What” questions (What did the system do?), “Why” questions (Why did the
system do X?), and so on. In this paper, we draw upon these results to
categorize the kinds of questions that shoutcasters’ explanations answered.
Other research confirms that explanations containing _certain_ intelligibility
types make a difference in user attitude towards the system. For example,
findings by Cotter et al. [7] showed that justifying _why_ an algorithm works
(but not on _how_ it works) were helpful for increasing users’ confidence in
the system — but not for improving their trust. Other work shows that the
relative importance of the intelligibility types may vary with the domain; for example, findings by Castelli et al. [4] in the domain of smart homes showed a strong interest in “What” questions, but in few of the other intelligibility types.
| Tournament | Shoutcasters | Players | Game
---|---|---|---|---
1 | 2017 IEM Katowice | ToD and PiG | Neeb vs Jjakji | 2
2 | 2017 IEM Katowice | Rotterdam and Maynarde | Harstem vs TY | 1
3 | 2017 GSL Season 1 Code S | Artosis and Tasteless | Soo vs Dark | 2
4 | 2016 WESG Finals | Tenshi and Zeweig | DeMuslim vs iGXY | 1
5 | 2017 StarLeague S1 Premier | Wolf and Brendan | Innovation vs Dark | 1
6 | 2016 KeSPA Cup | Wolf and Brendan | Maru vs Patience | 1
7 | 2016 IEM Geonggi | Kaelaris and Funka | Byun vs Iasonu | 2
8 | 2016 IEM Shanghai | Rotterdam and Nathanias | ShowTime vs Iasonu | 3
9 | 2016 WCS Global Finals | iNcontroL and Rotterdam | Nerchio vs Elazer | 2
10 | 2016 DreamHack Open Leipzig | Rifkin and ZombieGrub | Snute vs ShowTime | 3
Table 1: Summary of StarCraft 2 games studied. Please consult our
supplementary materials for transcripts and links to videos.
Constructing effective explanations of AI is not straightforward, especially
when the underlying AI system is complex. Both Kulesza et al. [13] and
Guestrin et al. [27] point to a potential trade-off between _faithfulness_ and
_interpretability_ in explanation. The latter group developed an algorithm
that can explain (in a “black box” fashion) predictions of any classifier in a
faithful way, and also approximate it locally with an interpretable model. In
their work, they described the fidelity-interpretability trade-off, in which
making an explanation more faithful was likely to reduce its interpretability,
and vice versa. However, humans manage this trade-off by accounting for many
factors, such as the audience’s current situation, their background, amount of
time available, etc. One goal of the current study is to understand how expert
human explainers, like our shoutcasters, manage this trade-off.
In the domain of assessing RTS intelligent agents, Kim et al. [12] invited 20
experienced players to assess the skill levels of AI bots playing StarCraft.
They observed that human rankings were different in several ways to a ranking
computed from the bots’ competition win rate, because humans weighed certain
factors like decision-making skill more heavily. The mismatch between
empirical results and perception scores may be because AI bots that are
effective against each other proved less effective against humans.
Cheung et al. [5] studied StarCraft from a different perspective, that of non-participant spectators. Their investigations produced a set of nine personas
that helped to illuminate _who_ these spectators are and _why_ they watch.
Since shoutcasters are one of the personas, they discussed how shoutcasters
affect the spectator experience and how they judiciously decide how and when
to reveal different types of information, both to entertain and inform the
audience.
The closest work to our own is Metoyer et al.’s [19] investigation into the
vocabulary and language structure of explaining RTS games. In their study,
novices and experts acted in pairs, with the novice watching the expert play
and providing questions, while the expert thought aloud and answered
questions. They developed qualitative coding schemes of the content and
structure of the explanations the expert players offered. Our investigation is
subtly different in that our explainers are expert _communicators_ about the
game and must _anticipate_ audience questions on their own. However, given the
pertinence of their work, we modified their code set to analyze shoutcasters’
utterances.
## 3 Methodology
Figure 2: A screenshot from an analyzed game, modified to highlight the
patches available to our casters: _HUD_ [1, bottom] (Information about current
game state, e.g., resources held, income rate, supply, and upgrade status);
_Minimap_ [2, lower left] (Zoomed out version of the main window); _“Tab”_ [3,
top left] (Provides details on demand, currently set on “Production”);
_Workers killed_ [4, center left] (Shows that 9 Red workers have died
recently); _Popup_ [5, center] (visualizations that compare player
performance, usually shown briefly). Regions 3 and 5 will be detailed in
Figures 4 and 5.
In order to study high quality explanations and capable players, we considered only games from professional tournaments denoted as “Premier” by TeamLiquid (http://wiki.teamliquid.net/starcraft2/Premier_Tournaments). Using these criteria, we selected 10 matches available with video on demand from professional StarCraft 2 tournaments from 2016 and 2017 (Table 1). Professional matches have multiple games, so we randomly selected one game from each match for analysis. 16 distinct shoutcasters appeared across the 10 videos, with two casters commentating each time. (Here, “caster pair” (_caster_ or _pair_ for short) differentiates our observed individuals from the population of shoutcasters as a whole.)
Shoutcasters should both inform and _entertain_ , so they fill dead air time
with jokes. Thus, we filtered the casters’ utterances by relevance. To do so,
two researchers independently coded 32% of statements in the corpus as
relevant or irrelevant to explaining the game. We achieved a 95% inter-rater
reliability (IRR), as measured by the Jaccard index. (The Jaccard index is the
size of the intersection of the codes applied by the researchers divided by
the size of the union.) Then, the researchers split up and coded the rest of
the corpus.
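For concreteness, a tiny illustration (ours) of the Jaccard-based IRR computation; the utterances and codes are hypothetical:

```python
def jaccard_irr(codes_a: set, codes_b: set) -> float:
    """Inter-rater reliability as the Jaccard index of two coders'
    (utterance, code) sets: |intersection| / |union|."""
    return len(codes_a & codes_b) / len(codes_a | codes_b)

rater1 = {("u1", "relevant"), ("u2", "irrelevant"), ("u3", "relevant")}
rater2 = {("u1", "relevant"), ("u2", "relevant"), ("u3", "relevant")}
print(jaccard_irr(rater1, rater2))  # 2 shared / 4 total = 0.5
```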
Research questions RQ1 and RQ2 investigated how the casters seek information
onscreen, so we used IFT constructs to discover the types of information
casters sought and how they unearthed it. For RQ1 (the patches in which they
sought information), we simply counted the casters’ navigations among patches.
Changes in the display screen identified these for us automatically. For RQ2
(_how_ they went about their information foraging), we coded the 110 instances
of caster navigation by the context where it took place, based on player
actions — Building, Fighting, Moving, Scouting — or simply caster navigation.
Two researchers independently coded 21% of the data in this manner, with IRR
of 80% (Jaccard). After achieving IRR, one researcher coded the remainder of
the data.
For RQ3 (implicit questions the shoutcasters answered), we coded the casters’
utterances by the Lim & Dey [18] questions they answered. We added a judgment
code to capture caster evaluation on the _quality_ of actions. The complete
code set will be detailed in the RQ3 Results section. Using this code set, two
researchers independently coded 34% of the 1024 explanations in the corpus,
with 80% inter-rater reliability (Jaccard). After achieving IRR, the
researchers split up the remainder of the coding.
To investigate RQ4 (explanation content), we drew content coding rules from
Metoyer et al. [19]’s analysis of explaining Wargus games and added some codes
to account for differences in gameplay and study structure. (For ease of
presentation, in this paper we use the terms “numeric quantity” and
“indefinite quantity” instead of their terms “identified discrete” and
“indefinite quantity”, respectively.) Two researchers independently coded the
corpus, one category at a time (e.g., Objects, Actions, …), achieving an
average of 78% IRR on more than 20% of the data in each category. One
researcher then finished coding the rest of the corpus. Since all data sources
are public, we have provided all data and coding rules in supplementary
materials to enable replicability and support further research.
## 4 Results
### 4.1 RQ1 Results: What information do shoutcasters seek to generate
explanations, and where do they find it?
We used two frameworks to investigate casters’ information seeking behaviors.
To situate _what_ information casters sought in a common framework for
conceptualizing intelligent agents, we turned to the Performance, Environment,
Actuators, Sensors (PEAS) model [28]. We drew from Information Foraging Theory
(IFT) to understand _where_ casters did their information seeking, beginning
with the places their desired information could be found. These places are
called information “patches” in IFT terminology.
Table 2 columns 1 and 2 show the correspondence between PEAS constructs and
patches in the game that the casters in our data actually used. Performance
measures showed assets, resources, successes, and failures, e.g., Figure 2
region 4 (showing that Blue has killed 9 of Red’s workers) and region 5
(showing that Blue has killed 19 units to Red’s 3, etc.). Table 2 shows that
casters rarely consulted performance measures, especially those that examined
_past_ game states. However, they discussed basic performance measures
available in the HUD (Figure 2 region 1), which contained _present_ state
information, e.g., resources held or upgrade status.
| Patch Name | State | Agg. | Usage | Pair 1 | Pair 2 | Pair 3 | Pair 4 | Pair 5 | Pair 6 | Pair 7 | Pair 8 | Pair 9 | Pair 10
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
_Performance_ | Units Lost popup: Shows count and resource value of the units each player has lost. | Past | High | 6 | | | | | | 2 | 2 | 1 | 1 |
Units Lost tab: Same as above, but as a tab. | Past | High | 5 | 1 | 1 | | | 1 | 1 | 1 | | |
Income tab: Provides resource gain rate. | Present | High | 2 | | | | 1 | | 1 | | | |
Income popup: Shows each player’s resource gain rate and worker count. | Present | High | 2 | | | | | 1 | 1 | | | |
Army tab: Shows supply and resource value of currently held non-worker units. | Present | High | 1 | | | | 1 | | | | | |
Income Advantage graph: Shows time series data comparing resource gain rate. | Past | High | 1 | 1 | | | | | | | | |
Units Killed popup: Essentially the opposite of the Units Lost popup | Past | High | 1 | 1 | | | | | | | | |
_Environment_ | Units tab: Shows currently possessed units. | Present | Low | 51 | 1 | 2 | 2 | 10 | 1 | 13 | 20 | 2 | |
Upgrades tab: Like Units tab, but for upgrades to units and buildings. | Present | Low | 5 | | | | 1 | 3 | | | 1 | |
Structures tab: Like Units tab, but buildings. | Present | Low | 2 | 1 | | | 1 | | | | | |
_Actuator_ | Production tab: Shows the units, structures, and upgrades that are in progress, i.e. have been started, but are yet to finish. | Present | Low | Preferred (by choice) “always-on” tab, not counted
_Sensor_ | Minimap: Shows zoomed out map view | Present | Med | Too many to count
Vision Toggle: Shows only the vision available to _one_ of the players. | Present | Low | 36 | 5 | 8 | 1 | 2 | 1 | 5 | 1 | 7 | 5 | 1
Table 2: This table illustrates description, classification, and usage rates
of the patches and enrichment operations we observed casters using. Each patch
is classified by: 1\. The part of the PEAS model that this patch illuminates
best (column 1), 2\. whether it examines past or present game states (column
3), and 3\. degree to which the patch aggregates data in its visualization
(column 4). The remaining columns show total usage counts, as well as per
caster pair usage. Note that there are additional patches passively available
(Main Window and HUD) which do not have navigation properties.
The Environment, or where the agent is situated, corresponds to the game state
(map, units, structures, etc.), shown in the main window in Figure 2. The
Environment mattered for ambient awareness, but casters navigated to Sensor
and Actuator information most actively, so we turn to those constructs next.
Sensors helped the agent collect information about the environment and
corresponded to the local vision area provided by individual units themselves
in our domain. Figure 2 region 2 (Minimap) shows a “bird’s eye view” of the
portion of the environment observable by the Sensors. Casters used patches
containing information about Sensors very often, with Minimap and Vision
Toggles being among the most used patches in Table 2. The casters had
“superpowers” with respect to Sensors (and performance measures) — their
interface allowed _full_ observation of the environment, whereas players could
only _partially_ observe it. As the only ways for casters to peer through the
players’ sensors, the casters extensively used the Minimap and the Vision
Toggle.
Actuators were the means for the agents to interact with their environment,
such as building a unit. Figure 2 region 3 (Production Tab) shows some of the
actuators the player was using, namely that Player Blue was building 5 types
of objects, whereas Red was building 8. Casters almost always kept
visualizations of actions _in progress_ on display. RTS actions had a
_duration_ , meaning that when a player took an action, time passed before its
consequence had been realized. The Production tab’s popularity was likely due
to the fact that it is the _only stable_ view of information about actuators
and their associated actions.
In fact, prior to the game in our corpus, Pair 3 had this exchange, which
demonstrated their perception of the production tab’s importance to doing
their job:
> Pair 3a: _“What if we took someone who knows literally nothing about StarCraft, just teach them a few phrases and what everything is on the production tab?”_
> Pair 3b: _“Oh, I would be out of a job.”_
#### 4.1.1 Implications for an interactive explainer
Abstracting beyond the StarCraft components to the PEAS model revealed a
pattern of the casters’ behaviors with implications for future explanation
systems, which we characterize as: “keep your Sensors close, but your
Actuators closer.” This aligns with other research, showing that real-time
visualization of agent actions can improve system transparency [34].
However, these results contrast with many of today’s explanation systems,
which tend to prioritize Performance measures, but the Actuators and Sensors
seemed to form the very core of these expert explainers’ information seeking
for presentation to their audience. Our results instead suggest that an
explanation system should prioritize useful, readily accessible information
about what an agent did or can do (Actuators) and of what it can see or has
seen (Sensors).
### 4.2 RQ2 Results: The How: How do shoutcasters seek the information they
seek?
Section RQ1 discussed the What and Where (i.e., the content casters sought and
locations where they sought it.) We then considered how they decided to move
among these places.
The casters’ foraging moves seemed to follow a common foraging “loop” through
the available information patch types: an Actuators-Environment-Performance
loop with occasional forays over to Sensors (Figure 3). Specifically, the
casters tended to start at the “always-on” Actuator-related patches of current
state’s _actions in-progress_ ; then when something triggered a change in
their focus, they checked the Environment for current game state information
and occasionally Performance measures of past states. If they needed more
information along the way, they went to the Sensors to see through a player’s
eyes. We will refer to this process as the “A-E-P+S loop”.
Figure 3: The A-E-P+S loop was a common information foraging strategy some
casters used in foraging for agent behavior. It starts at the Actuators, and
returns there throughout the foraging process. If a caster interrupted the
loop, they usually did so to return to the Actuators.
To understand what caused casters to leave Actuator patches, which seemed so important to them, we turn to Information Foraging Theory (IFT): it explains why people (information predators) leave one patch to move to another as a cost/benefit decision, based on the value of information in the patch a predator is already in versus the value per cost of going to another patch
[26]. Staying in the same patch is generally the least expensive, but when
there is less value to be gained by staying versus moving to another patch,
the predator moves to the other patch. However, the predator is not
omniscient: decisions are based upon the predator’s _perception_ of the cost
and value that other patches will actually deliver. They formed these
perceptions from both their prior experience with different patch types [24]
and from the cues (signposts in their information environment) that provided
concise information about content available in other patches. For the casters,
certain types of cues tended to trigger a move.
Figure 4: The Units Lost tab (left image) shows the number of units lost and
their total value, in terms of resources spent, for both players. In this
example from Pair 2, we see that Blue Player (top) has lost merely 2800
minerals worth of units so far in this game, while Red has lost more than
7000. The Units Killed popup (right image) allows shoutcasters to quickly
compare player performance via a “tug-of-war” visualization. In this example
from Pair 1, as we see that Blue Player (left) has killed 19 units, while Red
has killed merely 3. The main difference between these two styles of
visualization is that the tab offers more options and information depth to
“drill down” into.
Impending combat was the most common cue triggering a move from the Actuators
type (Production tab) to the Environment type (Units tab) — i.e., from A to E
in the A-E-P+S loop. In Figure 3, the cue was co-located, opposing units,
indicative of imminent combat, which led to caster navigation to a new patch
to illuminate the environment. In fact, combat cues triggered navigations to
the Units tab most frequently, accounting for 30 of the 51 navigations there
(Table 2).
Interestingly, this cue type was different from the static cues most prior IFT
research has used. In previous IFT investigations, cues tended to be static
decorations (text or occasionally images) that label a navigation device, like
a hyperlink or button that leads to another information patch. In contrast,
cues like the onset of combat are dynamic and often did not afford direct navigation. However, cues like this were still considered cues
because they “provide users with concise information about content that is not
immediately available” [26]. In the case of combat, they suggested high value
in another location, namely the Units tab.
Combat ending was a dynamic cue that triggered a move to a Performance
measure. Of the 13 navigations to a past-facing Performance measure (via tab
or popup), 10 occurred shortly after combat ended as a form of “after-action
review.” Occasionally, the shoutcasters visited other Performance patches,
such as the Income, Units Lost, and Army tabs, to demonstrate reasons why a
player had accrued an in-game lead, or the magnitude of that lead (7
navigations). However, signs of completed fighting were the main cues for
visiting a Performance patch.
Figure 5: The Production tab, showing the build actions currently in progress
for each player. Each unit/structure type is represented by a glyph (which
serves as a link to navigate to that object), provided a progress bar for
duration, and given the number of objects of that type. Thus, we can see that
Blue Player (top row) is building 5 different types of things, while Red
(bottom row) is building 4 types of things. The Structures, Upgrades, and
Units tab look fairly similar to the Production tab.
The most common detour out of the A-E-P part of the loop to a Sensor patch was
to enrich the information environment via the Vision Toggle (36 navigations).
The data did not reveal exactly what cue(s) led to this move, but the move
itself had a common theme: to assess scouting operations. The casters used the
Vision Toggle to allow themselves to see the game through the eyes of only
_one_ of the players, but their default behavior was to view the game with ALL
vision. This provided the casters with the ability to observe _both_ players’
units and structures simultaneously. Toggling the Vision Sensor in this way
enabled them to assess what information was or had been gathered by each
player via their scouting actions (29 of the 36 total Vision Toggles), since
an enemy unit would only appear to the player’s sensors if they had a friendly
unit (e.g., a scout) nearby. Toggling the vision Sensor was the second most
common patch move.
Besides the act of following cues, IFT has another foraging operation:
_enriching_ their information environment to make it more valuable or cost-
efficient [26]. The aforementioned Vision Toggle was one example of this, and
another was when casters added on information visualizations derived from the
raw data, like Performance measure popups or other basic visualizations. Two
examples of the data obtained through this enrichment is shown in Figure 4.
These Performance measures gave the shoutcasters at-a-glance information about
the ways one player was winning. For example, the most commonly used tab,
Units Lost tab (Figure 4) showed the number of units lost and their total
value, in terms of resources spent. This measure achieves “at a glance” by
aggregating _all_ the data samples together by taking a _sum_ ; derived values
like this allow the visualization to scale to large data sets [29]. However,
Table 2 indicates that the lower data aggregation patches were more heavily
used. As Figure 5 shows, the casters used the Production tab to see units
grouped by type, so _type_ information was maintained with only _positional_
data lost. This contrasts with the Minimap (medium aggregation), in which type
information is discarded but positional information maintained at a lower
_granularity_. The casters used Performance measure patches primarily to
understand present state data (HUD), but these patches were also the only way
to access _past_ state information (Table 2).
#### 4.2.1 Implications for an interactive explainer
These results have several implications for automated explanation systems in
this domain. First, the A-E-P+S loop and how the casters traversed it reveal priority and timing implications. For example, the cues that led casters to switch to different information patches could also signal to an automated system when to surface different information: our casters showed a strong preference for actuator information as a “steady state” visualization, but preferred performance information upon conclusion of a subtask.
Viewing the casters’ behaviors through the dual lens of PEAS + IFT has
implications for not only the kinds of patches that an explanation system
would need to provide, but also the cost to users of not providing these
patches in a readily accessible format. For example, PEAS + IFT revealed a
costly foraging problem for the casters due to the relative inaccessibility of
some Actuator patches. In StarCraft, there is no easily accessible mechanism
by which they could navigate to an Actuator patch with fighting or scouting
actions in progress.
Instead, the only way the casters could get access to these actions was via
_painstaking_ camera placement. To accomplish this, the casters made countless
navigations to move the camera using the Minimap, traditional scrolling, or
via tabs with links to the right unit or building. But despite all these
navigation affordances, sometimes the casters were unable to place the camera
on all the actions they needed to see.
For example, at one point when Pair 4 had the camera on a fight at Xy’s base,
a second fight broke out at DeMuslim’s base, which they completely missed:
> Pair 4a: _<surprised, noticing something amiss> “Xy actually killed the 3rd base of DeMuslim.”_
> …<the pair tries to figure out what must have happened>…
> Pair 4b: _“Oh my god, you’re right Alex.”_
> Pair 4a: _“Yeah, it was killed during all that action.”_
### 4.3 RQ3 Results: What implicit questions do shoutcasters answer and how
do they form their answers?
Figure 6: Frequency of Lim & Dey questions answered by casters, with one line per caster pair. Y-Axis represents percentages of the utterances which answered that category of question (X-Axis). Note how casters structured answers consistently.

Code | Freq | Description | Example
---|---|---|---
What | 595 | What the player did or anything about game state | “The liberators are moving forward as well”
What-could-happen | 376 | What the player could have done or what will happen | “Going to be chasing those medivacs away”
How-to | 233 | Explaining rules, directives, audience tips, high level strategies | “He should definitely try for the counter attack right away”
*How-good/bad-was-that-action | 112 | Evaluation of player actions | “Very good snipe there for Neeb”
Why-did | 27 | Why the player performed an action | “…that allowed Dark to hold onto that 4th base, it allowed him to get those ultralisks out”
Why-didn’t | 6 | Why the player did not perform an action | “The probe already left a while ago, so we knew it wasn’t going to be a pylon rush”

Table 3: Utterance type code set, slightly modified from the schema proposed by Lim & Dey. The asterisk denotes the code that we added, How-good/bad-was-that-action, because the casters judged actions based on their quality.
For the first two research questions, we considered how the shoutcasters
gathered and assessed information. We now shift focus to the explanations
themselves.
Much of the prior research into explaining agent behavior starts at some kind
of observable effect and then explains something about that effect or its
causes [11, 13, 17, 32]. In RTS games, most such observable effects are the
result of player actions, and recall from RQ1 that the casters spent most of
their information-gathering effort examining the players’ Actuators to
discover and understand actions.
The casters used the information they gained to craft explanations to answer
implicit questions (i.e., questions their audience “should be” wondering)
about player actions. Thus, drawing from prior work about the nature of
questions people ask about AI, we coded the casters’ 1024 explanations using
the Lim & Dey “intelligibility types” [18].
The shoutcasters were remarkably consistent (Figure 6) in the types of
implicit questions they answered. As Table 3 sums up, casters overwhelmingly
chose to answer What, with What-could-happen and How-to high on their list.
(The total is greater than 1024 because explanations answered multiple
questions and/or fit into multiple categories.)
These results surprised us. Whereas Lim & Dey [17] found that Why was the most
demanded explanation type from users, the casters rarely provided Why answers.
More specifically, in the Lim & Dey study, approximately 48 of 250
participants (19%) demanded a Why explanation. By contrast, in our study,
only 27 of the casters’ 1024 utterances (approximately 3%) were Why answers.
#### 4.3.1 Discussion and implications for an interactive explainer
Why so few Whys? Should an automated explainer, like our shoutcasters, eschew
Why explanations, in favor of What?
One possibility is that the casters delivered exactly what their audience
wanted, and thus the casters’ distribution of explanation types was well
chosen. After all, the casters were experts paid to provide commentary for
prestigious tournaments, so they would know their audience well. The expertise
level of the audience may have been fairly high, because the tournament videos
were available only _on demand_ (as opposed to _broadcast_ like some
professional sports) at websites that casual audience members may not even
know about. If a well-informed audience expected the players to do exactly
what they did, their expectations would not be violated, which, according to
Lim & Dey, suggests less demand for Why [17]. This suggests that the extent to
which an automated explainer needs to emphasize Why explanations may depend on
both the _expertise_ of the intended audience, which drives their
expectations, and the agent’s _competence_ , which drives failure to meet
reasonable expectations.
However, another possibility is that the audience really did want Why
explanations, but the casters rarely provided them because of the time they
required — both theirs and the audience’s. The shoutcasters explained in _real
time_ as the players performed their actions. It takes time to understand the
present, predict the future, and link present to future; and spending time in
these ways reduces the time allowable for explaining interesting activities
happening in the present. The corpus showed casters interrupting themselves and
each other as new events transpired, as they tried to keep up with the time
constraints. This also has implications for the audience’s workflow, because it
takes time for the audience to mentally process shoutcasters’ departures from
the present, particularly when interesting actions continuously occur.
Even more critical to an explanation system, Why questions also tend to
require extra effort (cognitive or computing resources), because they involve
connecting two time slices:
> Pair 10: _“After seeing the first phoenix and, of course, the second one
> confirmed, Snute is going to invest in a couple spore crawlers.”_
In this example, the casters had to connect past information (scouting the
phoenix, a flying unit) with a prediction of the future (investing in spore
crawlers, an air defense structure).
Answering Why-didn’t questions was even rarer than answering Why questions
(Table 3). Like Why questions, Why-didn’t questions required casters to make a
connection between previous game state and a potential current or future game
state. For example, Pair 2: _“The probe already left a while ago, so we knew
it wasn’t going to be a pylon rush.”_ Why-didn’t answers’ rarity is
consistent with the finding that understanding a Why-didn’t explanation
requires even more mental effort than a Why explanation [18]. As for an
interactive explanation system, supporting Why questions requires solving both
a _temporal credit assignment problem_ (determining the effect of an action
taken at a particular time on the outcome) and a _structural_ one (determining
the effect of a particular system element on the outcome). See [2] for an
accessible explanation of these problems.
Table 4: Occurrence frequencies for each code, as a percent of the total
number of utterances in the corpus. From left to right: Object (pink),
Action (orange), Spatial (green), Temporal (yellow), and Quantitative
(blue) codes. The casters were consistent about the kinds of content they
rarely included, but inconsistent about the kinds of content they most
favored.
The casters found a potentially “satisficing” approximation of Why, a
combination of What and What-could-happen, the two most frequent explanation
types. Their What answers explained what the player did, what happened in the
game, and description of the game state. These were all things happening in
the present, and did not require the additional cognitive steps required to
answer Why or Why-didn’t, which may have contributed to its high frequency.
Further, the audience needed this kind of “play-by-play” information to stay
informed about the game’s progression; for example, Pair 4: _“This one hero,
marine, is starting to kill the vikings.”_ When adding on What-could-happen,
casters were pairing What with what the player will or could do, i.e., a
hypothetical outcome. For example,
> Pair 1: _“…if he gets warning of this he’ll be able to get back up behind
> his wall in.”_
Although answering the question What-could-happen required predicting the
future, it did not also require the casters to tie together information from
_past_ and future.
The other two frequent answers, How-good/bad-was-that-
action and How-to, also sometimes contained “why” information. For How-
good/bad-was-that-action, casters _judged_ an action e.g.: Pair 1: _“Nice
maneuver from Jjakji, he knows he can’t fight Neeb front on right now, he
needs to go around the edges.”_ For How-to, casters gave the audience tips and
explained high level strategies. For example, consider this rule-like
explanation, which implies the reason “why” the player used a particular army
composition: Pair 10: _“Roach ravager in general is really good…”_
The next rule-like How-to example is an even closer approximation to “why”
information. Pair 8: _“Obviously when there are 4 protoss units on the other
side of the map, you need to produce more zerglings, which means even fewer
drones for Iasonu.”_ In this case, the casters are giving a rule: given a
general game state (protoss units on their side of the map) the player should
perform an action (produce zerglings). But the example does more; it also
implies a Why answer to the question “Why isn’t Iasonu making more drones?”
Since this implied answer simply relates the present to a rule or best
practice, it was produced at much lower expense than a true Why answer that
required tying past events to the present.
Mechanisms casters used to circumvent the need for disruptive and resource-
intensive Why explanations, such as using How-to, may also be ways to
alleviate the same problems in explanation systems.
### 4.4 RQ4 Results: What relationships and objects do shoutcasters use when
building their explanations?
To inform future explanation systems’ content by expert explanations — the
patterns of nouns, verbs, and adjectives/adverbs in these professionally
crafted explanations — we drew upon a code set from prior work [19] (see the
Methodology section). Table 4 shows how much caster pairs used each of these
types of content, grouping the objects (nouns) in the first group of columns,
then actions (verbs), and then properties (adjectives and adverbs). Table 5
shows how the casters’ explanations used these concepts _together_ , i.e.,
which properties they paired with which objects and actions.
The casters’ explanation sentences tended to be noun-verb constructions, so we
began with the nouns. The most frequently described objects were fighting
object, production object, and enemy, with frequencies of 53%, 40%, and 9%,
respectively, as shown in Table 4. (This is similar to results from [19],
where production, fighting, and enemy objects were the three most popular
object subcodes.) As to the actions (“verbs”), the casters mainly discussed
fighting (40%) and building (23%). It is not surprising that the casters
frequently discussed fighting, since combat skills are important in StarCraft
[12], and producing is often a prerequisite to fighting. This may suggest
that, in RTS environments, an explanation system may be able to focus on only
the most important subset of actions and objects, without needing to track and
reason about most of the others.
The casters were quite strategic in how they put together these nouns and
verbs with properties. The casters used particular properties with these
nouns and verbs to paint the bigger picture of how the game was going for each
player, and how that tied to the players’ strategies. We illustrate in the
next subsections a few of the ways casters communicated about player decisions
— succinctly enough for real time.
##### “This part of the map is mine!”: Spatial properties
RTS players claim territory in battles with the arrangement of their military
units, e.g.:
> Pair 3: _“He’s actually arcing these roaches out in such a great way so that
> he’s going to block anything that’s going to try to come back.”_
As the arrangement column of Table 5 shows, the objects that were used most
with arrangement were fighting objects (12%, 72 instances) and enemy (10%,
26 instances). Note that arrangement is very similar to point/region, but at
a smaller scale; arrangement of production object, such as exactly where
buildings are placed in one’s base, appeared to be less significant, co-
occurring only 5% of the time.
The degree to which an RTS player wishes to be aggressive or passive is often
evident in their choice of what distance to keep from their opponent, and the
casters often took this into account in their explanations. One example of
this was evaluation of potential new base locations.
> Pair 5: _“…if he takes the one [base] thats closer that’s near his natural
> [base], then it’s close to Innovation so he can harass.”_
Here, the casters communicated the control of parts of the map by describing
_bases_ as a region, and then relating two regions with a distance. The
magnitude of that distance then informed whether the player was able to more
easily attack. Of the casters’ utterances that described distance along with
production object, 27 out of 44 referred to the distance between bases or
moving to/from a base.
##### “When should I…”: Temporal properties
Casters’ explanations often reflected players’ priorities for allocating
limited resources. One way they did so was using speed properties: Pair 4:
_“We see a really quick third [base] here from XY, like five minutes third.”_
Since extra bases provide additional resource gathering capacity, the audience
could infer that the player intended to follow an “economic” strategy, as
those resources could have otherwise been spent on military units or upgrades.
This contrasts with the following example, Pair 8: _“He’s going for very fast
lurker den…”_ The second example indicated the player’s intent to follow a
different strategy: unlocking stronger units (lurkers). Speed co-occurred
with building/producing most often (12%, 36 instances).
Table 5: Co-Occurrence Matrix. Across rows: Object (pink, top rows) and
Action (orange, bottom rows) codes. Across columns: Spatial (green, left),
Temporal (yellow, center), and Quantitative (blue, right). Co-occurrence
rates were calculated by dividing the intersection of the subcodes by the
union.
##### “Do I care how many?”: Quantitative properties
We found it surprising how often the casters described quantities without
numbers. In fact, the casters often did not even include _type_ information
when they described the players’ holdings, instead focusing on comparative
properties (Table 5). For example, Pair 1: _“There is too much supply for him
to handle. Neeb finalizes the score here after a fantastic game.”_ Here,
“supply” is so generic, we do not even know what kind of things Neeb had –
only that he had “too much” of it.
In contrast, when the casters discussed cheap military units, like “marines”
and “zerglings,” they tended to provide _type_ information, but about half of
their mentions still included no precise numbers. Perhaps it was a matter of
the high cost to get that information: cheap units are often built in large
quantities, so deriving a precise quantity is often very tedious. Further,
adding one weak unit that is cheap to build has little impact on army
strength, so getting a precise number may not have been worthwhile – i.e. the
value of knowing precise quantities is low. To illustrate, consider the
following example, which quantified the army size of both players vaguely,
using _indefinite quantity_ properties: Pair 6: _“That’s a lot of marines and
marauders and not enough stalkers.”_
In the RTS domain, workers are a very important unit. Consistent with this
importance, workers are the only unit where the casters were automatically
alerted to their death (Figure 2, region 4), and are also available at a
glance on the HUD (Figure 2 region 1). Correspondingly, the casters often gave
precise quantities of workers (a production object). Workers (workers,
drones, scvs, and probes) had 46 co-occurrences with numeric quantities, but
only 12 with indefinite quantities (e.g., lot, some, few). Pair 2: _“…it
really feels like Harstem is doing everything right, and [yet] somehow ended
up losing 5 workers.”_
#### 4.4.1 Implications for an interactive explainer
These results have particularly important implications for interactive
explanation systems with real-time constraints. Namely, the results suggest
that an effective way to communicate about strategies and tactics is to modify
the critical objects and actions with particular properties that suggest
strategies. This not only affords a _succinct_ way to communicate about
strategies and tactics (fewer words) but also a _lighter load_ for both the
system and the audience than attempting to build and process a rigorous
explanation of strategy.
Specifically, spatial properties can communicate beyond the actual properties
of objects to strategies themselves; for example, casters used distance to
point out plans to attack or defend. Temporal properties can be used in
explanations of strategies when choices in resource allocation determine which
strategies are available.
Finally, an interactive explanation system could use the quantitative property
results to help ensure alignment in the level of abstraction used by the human
and the system. For example, a player can abstract a quantity of units into a
_single group_ or think of them as _individual units_. Knowing the level of
abstraction the human players use in different situations can help an
interactive explanation system choose the level of abstraction that will meet
human expectations. Using properties in these strategic ways may enable an
interactive explanation system to meet its real-time constraints while at the
same time improving its communicativeness to the audience.
## 5 Conclusion
The results of our study suggest that explaining intelligent agents to humans
has much to gain from looking to the human experts. In our case, the expert
explainers — RTS shoutcasters — revealed insights into what, when, and how
human audiences of such systems need explanations, and how real-time
constraints can come together with explanation-building strategies. Among the
results we learned were:
1. RQ1
Investigating the what’s and where’s of casters’ real-time information
foraging to _assess and understand_ the players showed that the most commonly
used _patches_ of the information environment were the Actuators (“A” in the
PEAS model). This suggests that, in contrast to today’s explanation systems,
which tend to present mostly Performance measures, explanations should
consider presenting more from the Actuators and Sensors.
2. RQ2
The how’s of casters’ foraging revealed a common pattern, which we termed the
A-E-P+S loop, and the most common cues and triggers that led shoutcasters to
move through this loop. Future explanation systems may be well-served to
prioritize and recommend explanations according to this loop and its triggers.
3. RQ3
As _model explainer_ , the casters revealed strategies for “satisficing” with
explanations that may not have precisely answered all the questions the
audience had in mind, but were feasible given the time and resource
constraints in effect when comprehending, assessing, and explaining, all in
_real time_ as play progresses. These strategies may be likewise applicable to
interactive explanation systems.
4. RQ4
The detailed contents of the casters’ explanations revealed patterns of how
they paired properties (“adjectives and adverbs”) with different objects
(“nouns”) and actions (“verbs”). Interactive explanation systems may be able
to leverage these patterns to communicate succinctly about an agent’s tactics
and strategies.
Ultimately, both shoutcasters’ and explanation systems’ jobs are to improve
the audience members’ mental model of the agents’ behavior. As Cheung, et al.
[5] put it, “…commentators are valued for their ability to expose the depth of
the game.” Hopefully, future explanation systems will be valued for the same
reason.
## 6 Acknowledgments
Removed for anonymous review.
## References
* [2] Adrian K Agogino and Kagan Tumer. 2004. Unifying temporal and structural credit assignment problems. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems-Volume 2. IEEE Computer Society, 980–987.
* [3] Svetlin Bostandjiev, John O’Donovan, and Tobias Höllerer. 2012. TasteWeights: A visual interactive hybrid recommender system. In Proceedings of the Sixth ACM Conference on Recommender Systems. ACM, 35–42.
* [4] Nico Castelli, Corinna Ogonowski, Timo Jakobi, Martin Stein, Gunnar Stevens, and Volker Wulf. 2017. What happened in my home?: An end-user development approach for smart home data visualization. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 853–866.
* [5] Gifford Cheung and Jeff Huang. 2011. Starcraft from the stands: Understanding the game spectator. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11). ACM, New York, NY, USA, 763–772. DOI:http://dx.doi.org/10.1145/1978942.1979053
* [6] Ed H Chi, Peter Pirolli, Kim Chen, and James Pitkow. 2001. Using information scent to model user information needs and actions and the web. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 490–497.
* [7] Kelley Cotter, Janghee Cho, and Emilee Rader. 2017. Explaining the news feed algorithm: An analysis of the “News Feed FYI” blog. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, 1553–1560.
* [8] Scott D. Fleming, Chris Scaffidi, David Piorkowski, Margaret Burnett, Rachel Bellamy, Joseph Lawrance, and Irwin Kwan. 2013. An information foraging theory perspective on tools for debugging, refactoring, and reuse tasks. ACM Transactions on Software Engineering and Methodology (TOSEM) 22, 2 (2013), 14.
* [9] Wai-Tat Fu and Peter Pirolli. 2007. SNIF-ACT: A cognitive model of user navigation on the world wide web. Human-Computer Interaction 22, 4 (2007), 355–412.
* [10] Robert R Hoffman and Gary Klein. 2017. Explaining explanation, part 1: theoretical foundations. IEEE Intelligent Systems 32, 3 (2017), 68–73.
* [11] Ashish Kapoor, Bongshin Lee, Desney Tan, and Eric Horvitz. 2010. Interactive optimization for steering machine classification. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 1343–1352.
* [12] Man-Je Kim, Kyung-Joong Kim, SeungJun Kim, and Anind K Dey. 2016. Evaluation of starcraft artificial intelligence competition bots by experienced human players. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, 1915–1921.
* [13] Todd Kulesza, Margaret Burnett, Weng-Keen Wong, and Simone Stumpf. 2015\. Principles of explanatory debugging to personalize interactive machine learning. In Proceedings of the 20th International Conference on Intelligent User Interfaces. ACM, 126–137.
* [14] Todd Kulesza, Simone Stumpf, Margaret Burnett, and Irwin Kwan. 2012. Tell me more?: the effects of mental model soundness on personalizing an intelligent agent. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 1–10.
* [15] Sandeep Kaur Kuttal, Anita Sarma, and Gregg Rothermel. 2013. Predator behavior in the wild web world of bugs: An information foraging theory perspective. In Visual Languages and Human-Centric Computing (VL/HCC), 2013 IEEE Symposium on. IEEE, 59–66.
* [16] Jason Levine. 2017. Americans are right not to trust self-driving cars. (2017).
* [17] Brian Y Lim and Anind K Dey. 2009. Assessing demand for intelligibility in context-aware applications. In Proceedings of the 11th international conference on Ubiquitous computing. ACM, 195–204.
* [18] Brian Y. Lim, Anind K. Dey, and Daniel Avrahami. 2009. Why and why not explanations improve the intelligibility of context-aware intelligent systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2119–2128.
* [19] Ronald Metoyer, Simone Stumpf, Christoph Neumann, Jonathan Dodge, Jill Cao, and Aaron Schnabel. 2010. Explaining how to play real-time strategy games. Knowledge-Based Systems 23, 4 (2010), 295–301.
* [20] Nan Niu, Anas Mahmoud, Zhangji Chen, and Gary Bradshaw. 2013. Departures from optimality: Understanding human analyst’s information foraging in assisted requirements tracing. In Proceedings of the 2013 International Conference on Software Engineering. IEEE Press, 572–581.
* [21] Donald A Norman. 1983. Some observations on mental models. Mental models 7, 112 (1983), 7–14.
* [22] S. Ontañón, G. Synnaeve, A. Uriarte, F. Richoux, D. Churchill, and M. Preuss. 2013. A survey of real-time strategy game AI research and competition in StarCraft. IEEE Transactions on Computational Intelligence and AI in Games 5, 4 (Dec 2013), 293–311. DOI:http://dx.doi.org/10.1109/TCIAIG.2013.2286295
* [23] Alexandre Perez and Rui Abreu. 2014. A diagnosis-based approach to software comprehension. In Proceedings of the 22nd International Conference on Program Comprehension. ACM, 37–47.
* [24] David Piorkowski, Scott D. Fleming, Christopher Scaffidi, Margaret Burnett, Irwin Kwan, Austin Z Henley, Jamie Macbeth, Charles Hill, and Amber Horvath. 2015. To fix or to learn? How production bias affects developers’ information foraging during debugging. In Software Maintenance and Evolution (ICSME), 2015 IEEE International Conference on. IEEE, 11–20.
* [25] David Piorkowski, Austin Z Henley, Tahmid Nabi, Scott D Fleming, Christopher Scaffidi, and Margaret Burnett. 2016. Foraging and navigations, fundamentally: Developers’ predictions of value and cost. In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. ACM, 97–108.
* [26] Peter Pirolli. 2007. Information foraging theory: Adaptive interaction with information. Oxford University Press.
* [27] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 1135–1144.
* [28] Stuart J. Russell and Peter Norvig. 2003. Artificial Intelligence: A modern approach (2 ed.). Pearson Education.
* [29] Robert Spence. 2007. Information Visualization: Design for interaction (2Nd Edition). Prentice-Hall, Inc., Upper Saddle River, NJ, USA.
* [30] Sruti Srinivasa Ragavan, Sandeep Kaur Kuttal, Charles Hill, Anita Sarma, David Piorkowski, and Margaret Burnett. 2016. Foraging among an overabundance of similar variants. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 3509–3521.
* [31] Katia Sycara, Christian Lebiere, Yulong Pei, Donald Morrison, and Michael Lewis. 2015. Abstraction of analytical models from cognitive models of human control of robotic swarms. In International Conference on Cognitive Modeling. University of Pittsburgh.
* [32] Joe Tullio, Anind K Dey, Jason Chalecki, and James Fogarty. 2007. How it works: A field study of non-technical users interacting with an intelligent system. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 31–40.
* [33] Oriol Vinyals. 2016. DeepMind and blizzard to release starcraft II as an AI research environment. (2016). https://deepmind.com/blog/deepmind-and-blizzard-release-starcraft-ii-ai-research-environment/
* [34] Robert H Wortham, Andreas Theodorou, and Joanna J Bryson. 2017. Improving robot transparency:real-time visualisation of robot AI substantially improves understanding in naive observers, In IEEE RO-MAN 2017. IEEE RO-MAN 2017 (August 2017). http://opus.bath.ac.uk/55793/
# Dynamic Benchmarking of Masked Language Models
on Temporal Concept Drift with Multiple Views
Katerina Margatina♢ Shuai Wang† Yogarshi Vyas†
Neha Anna John† Yassine Benajiba† Miguel Ballesteros†
♢University of Sheffield †AWS AI Labs
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
Work done during an internship at AWS AI Labs.
###### Abstract
Temporal concept drift refers to the problem of data changing over time. In
NLP, that would entail that language (e.g. new expressions, meaning shifts)
and factual knowledge (e.g. new concepts, updated facts) evolve over time.
Focusing on the latter, we benchmark $11$ pretrained masked language models
(MLMs) on a series of tests designed to evaluate the effect of temporal
concept drift, as it is crucial that widely used language models remain up-to-
date with the ever-evolving factual updates of the real world. Specifically,
we provide a holistic framework that (1) dynamically creates temporal test
sets of any time granularity (e.g. month, quarter, year) of factual data from
Wikidata, (2) constructs fine-grained splits of tests (e.g. updated, new,
unchanged facts) to ensure comprehensive analysis, and (3) evaluates MLMs in
three distinct ways (single-token probing, multi-token generation, MLM
scoring). In contrast to prior work, our framework aims to unveil how robust
an MLM is over time and thus to provide a signal in case it has become
outdated, by leveraging multiple views of evaluation.
## 1 Introduction
In the real world, what people talk about and how they tend to speak and write
changes constantly over time. In Natural Language Processing (NLP), this
entails a challenging shift of the textual data distribution that is commonly
referred to as temporal concept drift. Prior work has identified that
pretrained language models (PLMs) tend to become outdated soon after new
topics and concepts are emerging Lazaridou et al. (2021); Dhingra et al.
(2022); Agarwal and Nenkova (2022); Luu et al. (2022), limiting their
capability to be robust to newly generated data.
We consider the desiderata of language models’ robustness to temporal drift to
be twofold. First, LMs should be well adapted to the dynamic use of language,
from the linguistic perspective. Language changes over time, pronunciations
evolve, new words and expressions are borrowed or invented, the meaning of old
words drifts, and morphology develops or decays Blank (1999); Traugott and
Dasher (2001); Kulkarni et al. (2015). Second, LMs should be aware of the
ever-changing reality of the world, from a factual perspective. Models’
factual knowledge should be up-to-date with new facts and concepts (e.g.
Covid-$19$) to be of use continuously. In this work, we focus on the latter;
the temporal robustness of LMs to facts that change over time.
Figure 1: Querying pretrained MLMs on their knowledge about the Prime Minister
of the United Kingdom.
In an ideal scenario, we would like to know exactly when the factual knowledge
of a model is “expired” so that we could adapt it to the new (or updated) set
of facts. In reality, this is a challenging task. A large body of work has
focused on the part of (continually) adapting an “outdated” model to the new
data distribution Guu et al. (2020); Yogatama et al. (2021); Sun et al.
(2020); Biesialska et al. (2020); Jang et al. (2022b); Jin et al. (2022);
Chakrabarty et al. (2022). This line of work is parallel to ours, as we focus
on the crucial step before adaptation, the evaluation of the model on temporal
concept drift: How can we know if a language model is outdated or not?
Let us consider the case where we desire a language model to be up-to-date
with the Prime Minister of the United Kingdom (Figure 1).111The time of
writing of this paper is September $2022$. A plausible way to evaluate this is
to use the lama-probe paradigm Petroni et al. (2019) and query the LM as a
knowledge base (KB). This would mean that we could form the query as ‘‘The
surname of the Prime Minister of the United Kingdom is <mask>.’’, give it as
an input to a (masked) LM and inspect the output token distribution for the
<mask> token. Figure 1 shows the top prediction for a series of RoBERTa
models.222In addition to the RoBERTa base and large models, we also show the
predictions of models trained with Twitter data until $2019$, $2020$, $2021$,
and $2022$, respectively Loureiro et al. (2022). We first observe that the
most widely used RoBERTa base and large models are both outdated in terms of
factual knowledge, as they predict the names of PMs that served from $2010$
until $2019$. Next, while the last three models ($2020$-$2022$) answer
correctly, the $2019$ model answers the (correct) first name of the PM
(Boris), not the surname (Johnson) which is asked for.
This is a handy illustration of the many challenges in evaluating MLMs for
temporal robustness in the LMs-as-KBs framework. First, this $2019$ model
would be considered to have made a mistake (as the prediction is different
than the gold label and the metric is accuracy), even though the factual
knowledge was correct (the name of the PM of the UK). Second, notice that we
designed the query to ask for the surname (instead of the name of the PM), as
this results in a single mask. The lama-probe and related frameworks do not
handle multi-token queries for MLMs (e.g., Boris Johnson). Finally, we mark
with a ? the answers of the first two RoBERTa models, because even though
their answers are out-of-date for our current evaluation (October $2022$),
their answers could have been correct in an evaluation setting in the time of
the training data ($2019$). This illustrates the obscurity of the temporal
window in which the model is expected to be correct, if the model is not
trained with a temporally-aware design Lazaridou et al. (2021); Dhingra et al.
(2022); Loureiro et al. (2022); Jang et al. (2022a).
In this work, we aim to address such limitations and provide a holistic
framework for dynamic benchmarking of masked language models on temporal
concept drift, with a focus on facts that change over time. Following the
propositions of Kiela et al. (2021) and Søgaard et al. (2021) that advocate
for a focus on dynamic (i.e., test sets should not become saturated) and
targeted (i.e., use of multiple, independent test sets for realistic
performance estimates) benchmarking respectively, and building on prior work
Jiang et al. (2020b); Dhingra et al. (2022); Jang et al. (2022a), we create a
large open-source test set that can be dynamically updated over time,
containing temporal fine-grained subsets of examples that can be used to query
masked language models and evaluate their factual knowledge over time.
#### Contributions
(1) We release DynamicTempLAMA, an improved version of the static TempLAMA
Dhingra et al. (2022) test set consisting of Wikidata relations, that is used
to evaluate temporal robustness of MLMs. We provide data and code to
dynamically keep DynamicTempLAMA up-to-date over
time.333https://github.com/amazon-science/temporal-robustness (2) We propose a
novel evaluation framework to first create temporal splits of test sets of any
granularity (month, quarter, year) and then to further create fine-grained
splits of facts that are unchanged, updated, new or deleted, aiming to improve
comprehensiveness (§3.1). (3) We introduce three distinct evaluation views
with multiple metrics (§3.3) to ensure comprehensive results, and provide an
analysis benchmarking a large set of open-source temporal RoBERTa models
(§3.2).
## 2 Related Work
#### Temporal Concept Drift
Evaluation of the robustness of language models on temporal concept drift has
seen a rising interest in the recent years. Previous work has focused on
methods to continually adapt models over time Hombaiah et al. (2021); Rosin et
al. (2022); Lazaridou et al. (2022). Another area of research is evaluation of
temporal robustness which has been explored both in the upstream LM
pretraining task Jiang et al. (2020b); Lazaridou et al. (2021); Dhingra et al.
(2022); Jang et al. (2022a); Loureiro et al. (2022) and in downstream tasks
such as sentiment analysis Lukes and Søgaard (2018); Agarwal and Nenkova
(2022), named entity recognition Rijhwani and Preotiuc-Pietro (2020); Onoe et
al. (2022), question answering Mavromatis et al. (2021); Liška et al. (2022),
and rumor detection Mu et al. (2023). It has also been studied for model
explanations Zhao et al. (2022) and for text classification in legal,
biomedical Chalkidis and Søgaard (2022), and social media Röttger and
Pierrehumbert (2021) domains.
Luu et al. (2022) explore the setting of temporal misalignment (i.e., training
and test data drawn from different periods of time) for both upstream and
downstream tasks and find that temporal adaptation should not be seen as a
substitute for finding temporally aligned labeled data for fine-tuning.
The closest work to ours is TempLAMA Dhingra et al. (2022). However, we differ
across four axes: (i) TempLAMA is static, while we provide code to dynamically
download facts in a fine-grained fashion from any periods of time (not only
yearly), (ii) we evaluate the same models over time, focusing on the evaluation
of robustness over time rather than exploring the best adaptation technique to
address the problem, (iii) we do not fine-tune the models to adapt them to the
domain/format of the test data, and (iv) we address benchmarking of masked LMs
(not auto-regressive) including more evaluation techniques. Finally, similar
to our motivation, Jang et al. (2022a) recently explored lifelong adaptation
and evaluation of temporal concept drift in LMs and introduced TemporalWiki
for continual adaptation and Twiki-Probes for evaluation. The major difference
is that the authors focus on providing corpora to adapt an LM over time, while
in our paper we focus on evaluating temporal robustness of LMs.
DynamicTempLAMA is a holistic evaluation framework, while “Twiki-Probes are
not natural sentences; they are factual phrases synthetically generated from a
naive concatenation of Subject, Relation, and Object”.
#### Language Models as Knowledge Bases
The cloze-style LM evaluation framework for factual knowledge, lama Petroni et
al. (2019), follows the setting depicted in Figure 1. A knowledge base
relation is transformed into natural language text with a manually created
template and then passed as an input to an LM. The framework is based on
treating the output distribution for the mask token as the retrieved answers
to the query AlKhamissi et al. (2022). The lama probe has since been
extensively used to evaluate factual knowledge in LMs Petroni et al. (2020);
Talmor et al. (2020); Kassner et al. (2021); Sung et al. (2021); Dhingra et
al. (2022); Fierro and Søgaard (2022), while other works have been exploring
its limitations and ways to improve it Kassner and Schütze (2020); Haviv et
al. (2021); Elazar et al. (2021); Zhong et al. (2021); Qin and Eisner (2021).
A particular challenge in our experimental setting is the text compatibility
between the model (i.e., its pre-training data) and the format of the test
examples, termed “language mismatch” by Talmor et al. (2020). Dhingra et al.
(2022) opt to fine-tune the model under evaluation with part of the test set
to adapt it to the format of the task. We argue that this process has several
drawbacks: it is inefficient and impractical to fine-tune a model whose
capabilities are under evaluation, it risks optimization stability and
overfitting issues due to the small training dataset, and it introduces extra
biases and errors, especially in the case of temporal robustness evaluation.
## 3 Dynamic Benchmarking of Temporal Concept Drift
In this section we describe in detail the steps to (re)create DynamicTempLAMA,
our dynamically updated test set with facts from Wikidata (§3.1). We then
present the open-source temporal RoBERTa models (TimeLMs) Loureiro et al.
(2022) that we use for benchmarking (§3.2). Finally, we introduce the
evaluation framework under which we investigate how well the TimeLMs perform
in terms of temporal robustness (§3.3).
The research question that we try to address with our work is: How can we
measure temporal drift robustness of PLMs with an evaluation framework that
is: unsupervised (no labeled downstream data), efficient (quality test set of
facts—no need to run inference on a large corpus to compute perplexity for
every token), dynamic (test set easily generated per request—can be used to
dynamically evaluate new concepts over time), general (option to create test
sets of any time granularity), and comprehensive (battery of targeted test
sets that evaluate different LM capabilities and multiple views of
evaluation).
Wikidata ID | Relation | Template | #Facts | #Examples | Possible Split(s)
---|---|---|---|---|---
P54 | member of sports team | <subject> plays for <object>. | 3772 | 50558 | $\mathcal{D}^{\textsc{updated}}$
P69 | educated at | <subject> attended <object>. | 232 | 2420 | $\mathcal{D}^{\textsc{updated}}$, $\mathcal{D}^{\textsc{unchanged}}$
P6 | head of government | <object> is the head of the government of <subject>. | 578 | 7815 | $\mathcal{D}^{\textsc{updated}}$
P279 | subclass of | <subject> is a subclass of <object>. | 5 | 70 | $\mathcal{D}^{\textsc{new}}$, $\mathcal{D}^{\textsc{updated}}$
Table 1: Examples of relations and their corresponding templates that we
include in DynamicTempLAMA. #Facts denotes the unique number of facts for each
relation, while #Examples denotes the total number of examples we have
collected for each relation in the time range between 2019-Q1 and 2022-Q2.
Possible Split(s) indicates the type of fine-grained split that each relation
would potentially belong to.
### 3.1 Dynamic-TempLAMA
We base our implementation on the TempLAMA Dhingra et al. (2022) code, while
we make several changes in terms of accessibility (i.e. option to dynamically
update the test set), flexibility (i.e. option to adjust the granularity of
the temporal splits) and comprehensiveness (i.e. fine-grained splits and
multiple evaluation views). We provide a high-level overview of the process to
create DynamicTempLAMA in Figure 2.
#### Data Collection
We start the process by selecting a set of relations collected from the
Wikidata KB (Figure 2(a)).444All possible relations from Wikidata can be found
here https://www.wikidata.org/wiki/Wikidata:List_of_properties. Specifically,
we use the $9$ relations used in the TempLAMA dataset, followed by $7$ more
that we also decided to collect. We collect all relations from Wikidata in the
span of $2019-2022$. We then manually craft a cloze-style query, i.e., a
template, for each relation. Table 1 shows a few examples of relations and templates,
along with dataset statistics.555Details on all relations and templates of
DynamicTempLAMA can be found in Tables 6 & 7 in the Appendix A.1. We explain
the data collection process in detail in Appendix A.1.
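To make the collection step concrete, the following minimal Python sketch queries the live Wikidata SPARQL endpoint for one relation (P54) together with the start/end time qualifiers (P580/P582) that bound each fact's validity. This is an illustrative simplification: our pipeline builds on the TempLAMA code, and the endpoint usage, query shape, and field names here are assumptions rather than our exact implementation.

```python
import requests

# A minimal sketch, assuming the live Wikidata SPARQL endpoint; the actual
# pipeline processes Wikidata data via the TempLAMA codebase instead.
WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?subjectLabel ?objectLabel ?start ?end WHERE {
  ?subject p:P54 ?stmt .              # P54 = "member of sports team"
  ?stmt ps:P54 ?object .
  OPTIONAL { ?stmt pq:P580 ?start . } # start time qualifier
  OPTIONAL { ?stmt pq:P582 ?end . }   # end time qualifier
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 100
"""

rows = requests.get(
    WIKIDATA_SPARQL,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "dynamic-templama-sketch/0.1"},
).json()["results"]["bindings"]

for row in rows:
    subject = row["subjectLabel"]["value"]
    obj = row["objectLabel"]["value"]
    # Instantiate the manually crafted template for P54 (Table 1).
    print(f"{subject} plays for {obj}.",
          row.get("start", {}).get("value"), row.get("end", {}).get("value"))
```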
#### Temporal Splits
In this stage, we have a very large collection of facts for which we have
temporal information (i.e., that the fact is true) in the time range we
investigate ($2019-2022$). In the TempLAMA dataset, the facts are divided
yearly. However, we would ideally like to benchmark temporal models of any
time granularity. Specifically, since we benchmark temporal models that are
trained quarterly (§3.2), a yearly split would not be useful to evaluate
temporal concept drift of the four models trained on each quarter of a year.
Consequently, we divide the large set of collected facts per quarter (Figure
2(b)), while adding the functionality to our implementation to split the facts
at any time granularity (monthly, quarterly, yearly), as sketched below.
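A minimal sketch of this splitting logic follows; the fact representation (dicts with start/end dates) is a simplifying assumption for illustration and not the exact data structure of our implementation.

```python
from collections import defaultdict
from datetime import date

def bucket_key(d: date, granularity: str) -> str:
    # Map a date to a temporal-split identifier of the requested granularity.
    if granularity == "month":
        return f"{d.year}-{d.month:02d}"
    if granularity == "quarter":
        return f"{d.year}-Q{(d.month - 1) // 3 + 1}"
    return str(d.year)  # yearly

def temporal_splits(facts, granularity="quarter"):
    """Assign every fact to each split that overlaps its validity period.

    `facts` is assumed to be an iterable of dicts with `subject`,
    `relation`, `object`, `start` and `end` fields, where `start`/`end`
    are dates (or None for open-ended facts).
    """
    splits = defaultdict(list)
    for fact in facts:
        start = fact["start"] or date(2019, 1, 1)  # clip to the studied range
        end = fact["end"] or date.today()
        keys, d = set(), date(start.year, start.month, 1)
        while d <= end:
            keys.add(bucket_key(d, granularity))
            d = date(d.year + d.month // 12, d.month % 12 + 1, 1)  # next month
        for key in sorted(keys):
            splits[key].append(fact)
    return splits
```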
Figure 2: The process for creating DynamicTempLAMA. We first collect data from
Wikidata (a, Data collection), then divide it into quarterly temporal splits
(b, Temporal splits), and finally create more targeted fine-grained sets
(c, Fine-grained splits).
#### Fine-grained Splits
For a given time range, from timestep $t$ to $t+1$ (e.g.
2019-Q1$\rightarrow$2019-Q2), we further create comprehensive test sets that
contain examples with unchanged, updated, new or deleted facts, denoted by
$\mathcal{D}_{t+1}^{\textsc{unchanged}},\mathcal{D}_{t+1}^{\textsc{updated}},\mathcal{D}_{t+1}^{\textsc{new}}$
and $\mathcal{D}_{t+1}^{\textsc{deleted}}$ respectively (Figure 2(c)). We
create these splits to be able to measure different capabilities of the MLM in
terms of robustness to temporal concept drift. The motivation for this stems
from limitations of prior work Dhingra et al. (2022) to shed light into what
kind of data each temporal test set contains. For instance, we pose questions
like How many facts were updated from timestep $t\rightarrow t+1$? How many
facts remained unchanged? What was the change? The object or the subject? Are
there new facts in timestep $t+1$ that were not present before? We argue that
it is essential to distinguish between these sub-tests, so that each split can
target specific capabilities of the LM. First, we can use
$\mathcal{D}_{t+1}^{\textsc{unchanged}}$ to evaluate knowledge preservation
(i.e. how well a model can preserve knowledge over time). Second, we can use
$\mathcal{D}_{t+1}^{\textsc{updated}},\mathcal{D}_{t+1}^{\textsc{new}}$ and
$\mathcal{D}_{t+1}^{\textsc{deleted}}$ to measure adaptation (i.e. how well a
model adapts to new information/facts). Finally, we can measure overall
temporal robustness by evaluating a temporal model from timestep $t$ on
$\mathcal{D}_{t'}^{\textsc{updated}}$ and $\mathcal{D}_{t'}^{\textsc{new}}$
for all later timesteps $t'\in\{t+1,t+2,\dots\}$. We believe that this framework is
particularly useful for insightful evaluation of methods that aim to adapt
language models over time Guu et al. (2020); Yogatama et al. (2021); Sun et
al. (2020); Biesialska et al. (2020); Jang et al. (2022b); Jin et al. (2022);
Chakrabarty et al. (2022).
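The following sketch illustrates one way to derive the four fine-grained subsets by comparing the answer sets of two consecutive temporal splits; the (subject, relation) to objects mapping is an assumed representation chosen for illustration.

```python
def fine_grained_splits(facts_t, facts_t1):
    """Derive the four fine-grained subsets for timestep t+1.

    Both inputs are assumed to map a (subject, relation) query to the
    set of objects that are valid answers in that temporal split.
    """
    unchanged, updated, new, deleted = {}, {}, {}, {}
    for query, objects in facts_t1.items():
        if query not in facts_t:
            new[query] = objects        # fact appears for the first time
        elif objects == facts_t[query]:
            unchanged[query] = objects  # same answer set as in timestep t
        else:
            updated[query] = objects    # the answer set changed
    for query, objects in facts_t.items():
        if query not in facts_t1:
            deleted[query] = objects    # fact no longer present in t+1
    return unchanged, updated, new, deleted

# Example with the head-of-government relation from Table 1:
t = {("Italy", "head of government"): {"Giuseppe Conte"}}
t1 = {("Italy", "head of government"): {"Giuseppe Conte", "Mario Draghi"}}
_, updated, _, _ = fine_grained_splits(t, t1)  # the Italy query lands in UPDATED
```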
### 3.2 Temporal Models
In contrast with prior work that uses private, in-house models for temporal
robustness evaluation that are not accessible by the community Lazaridou et
al. (2021); Dhingra et al. (2022), we instead benchmark a series of open-
source temporal models. Beyond our aim for transparency, energy efficiency
Strubell et al. (2019) and reproducibility, we also believe that the dynamic
nature of the task at hand requires accessibility to past, present and future
models, to ensure that the findings of evaluation studies in temporal concept
drift are meaningful, trustworthy and serve their purpose in evaluating models
in a ever-evolving world. Under this assumption, we believe that studies on
temporal robustness should ideally build on each other, so that we can have a
holistic view as to how these models truly evolve over time.
To this end, we use the Diachronic Language Models (TimeLMs) Loureiro et al.
(2022) that are publicly available in the HuggingFace hub Wolf et al.
(2019).666https://huggingface.co/cardiffnlp TimeLMs are RoBERTa models Liu et
al. (2019) trained quarterly on Twitter data. All models are initialised from
the original roberta-base model checkpoint and are later trained using data
from the previous quarters and the new temporal data from the new time period.
For instance, the first model (2019-Q4) was trained with data sampled from
Twitter until December $2019$, while the second model (2020-Q1) was trained on
the concatenation of all the data used to train 2019-Q4 and temporally-aligned
data sampled from the first quarter of $2020$. There are $11$ TimeLMs in
total, from 2019-Q4 until 2022-Q2.
Finally, we would like to draw attention to two specific points. First, all
TimeLMs are trained using the same RoBERTa (base) tokenizer and thus have the
same vocabulary. This is crucial when evaluating models in a cloze-style
format, like the lama probe, in order to ensure fair comparison among the
models. Second, Loureiro et al. (2022) aim to continue training and releasing
TimeLMs every quarter, which is a very important and promising initiative to
help with the dynamic evaluation of LMs in temporal concept drift in the
future.
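For reference, the sketch below loads a set of quarterly checkpoints from the hub. The <month><year> model identifiers are an assumption based on the TimeLMs release and should be verified against the cardiffnlp hub page; the first (2019-Q4) checkpoint is published under a different identifier.

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Quarterly TimeLM checkpoints under the cardiffnlp hub organisation
# (assumed naming scheme; verify against https://huggingface.co/cardiffnlp).
QUARTERS = ["mar2020", "jun2020", "sep2020", "dec2020",
            "mar2021", "jun2021", "sep2021", "dec2021",
            "mar2022", "jun2022"]

# All TimeLMs share the original roberta-base tokenizer and vocabulary,
# so a single tokenizer suffices for cloze-style comparisons across models.
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-jun2022")
models = {
    q: AutoModelForMaskedLM.from_pretrained(
        f"cardiffnlp/twitter-roberta-base-{q}").eval()
    for q in QUARTERS
}
```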
### 3.3 Temporal Concept Drift Evaluation
#### Single-token probing
Our first evaluation type is single-token probing, which was introduced in the
seminal lama-probe work of Petroni et al. (2019). The idea is simple and
follows the fill-in-the-blank format. Specifically, we convert each relation
using its template to natural language text (see Figure 2(a)) replacing the
<object> with the mask token (i.e., <mask> for RoBERTa). Then, as shown in
Figure 1, we give the prompt as an input to the MLM and obtain a probability
distribution over the vocabulary for the <mask> token. We use the metrics from
Petroni et al. (2019): Accuracy, Mean Reciprocal Rank (MRR), and
Precision at k (P@k).777P@k$=1$ if the gold label is in the top-k predictions
of the model; therefore P@1 corresponds to Accuracy. Note that a crucial
limitation of this approach is that it considers only facts with single-token
objects. This results in trimming down the test sets by $\sim 95\%$, while
limiting the actual value of the test (as most facts and concepts contain
multiple words).
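A minimal sketch of this probe and its rank-based metrics follows; the checkpoint name is an assumed TimeLM identifier, and over a full test set Accuracy (P@1), MRR, and P@k are obtained by averaging the per-query values.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# A minimal sketch of the single-token lama-style probe; the checkpoint
# identifier is an assumed TimeLM name on the HuggingFace hub.
name = "cardiffnlp/twitter-roberta-base-jun2022"
tok = AutoTokenizer.from_pretrained(name)
mlm = AutoModelForMaskedLM.from_pretrained(name).eval()

def probe(query: str, gold: str, k: int = 10):
    """Return (P@k, reciprocal rank) for a single-token cloze query."""
    inputs = tok(query, return_tensors="pt")
    mask_pos = (inputs.input_ids[0] == tok.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, mask_pos]
    # RoBERTa BPE: a word following a space carries a leading-space marker.
    gold_id = tok.encode(" " + gold, add_special_tokens=False)[0]
    rank = (logits.argsort(descending=True) == gold_id).nonzero().item() + 1
    return float(rank <= k), 1.0 / rank  # average both over a test set

p_at_10, rr = probe(
    "The surname of the Prime Minister of the United Kingdom is <mask>.",
    "Johnson")
```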
#### Multi-token generation
We aim to address this limitation and include multi-token objects to our
evaluation framework. It is important to note that we are benchmarking masked
language models instead of autoregressive left-to-right language models like
Dhingra et al. (2022). This is crucial because the latter, decoder-based
family of models, can be used off-the-shelf to generate multiple tokens. In
contrast, MLMs are trained with $\sim 15\%$ of their inputs masked and optimized
to predict only the masked tokens. We therefore use the formulation introduced
by Wang and Cho (2019), which is essentially a decoding-based strategy for MLMs
based on Gibbs sampling. Specifically, we consider the setting that we do not
know a priori the correct number of masks for each label. Instead, we
enumerate from a single mask up to $M$ masks, i.e., $m=1,...,M$. Following
Jiang et al. (2020a), we choose $M=5$, as all our facts are in the English
language. When $m>1$, we add $m$ consecutive masks to the input and pass
the input to the model $m$ times, where each time we sequentially sample each
mask from left to right. At each iteration we replace the mask with the
corresponding token prediction of the previous iteration. This way, we can
extend the lama probe to include multi-token labels in our test set. The
setting is entirely different than the single-token approach, as here we have
$m$ predictions from the model with an increasing number of tokens, while the
correct label can consist of any number of tokens in the range of $1,...,M$.
Another difference here is the evaluation metrics. Because we converted the
task to text generation, we borrow generation metrics such as ROUGE Lin
(2004), while also including standard metrics like F1-macro. Finally, we also
include BERT-score Zhang* et al. (2020) as an additional informative metric
from the perspective of contextual semantics. In effect, we
evaluate factual knowledge over time of MLMs, where facts include multiple
correct answers and each answer consists of multiple tokens. We consider a
prediction correct if the model correctly predicts any of the acceptable
answers.
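The sketch below is a simplified, greedy rendition of this decoding scheme; the checkpoint name is again an assumed TimeLM identifier, and in practice one can sample from the mask distribution rather than take the argmax at each step.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "cardiffnlp/twitter-roberta-base-jun2022"  # assumed TimeLM identifier
tok = AutoTokenizer.from_pretrained(name)
mlm = AutoModelForMaskedLM.from_pretrained(name).eval()

def generate(template: str, m: int) -> str:
    """Fill `m` masks sequentially from left to right (greedy variant
    of the Wang & Cho (2019) decoding strategy)."""
    text = template.replace("<mask>", " ".join([tok.mask_token] * m))
    ids = tok(text, return_tensors="pt").input_ids
    for _ in range(m):
        pos = (ids[0] == tok.mask_token_id).nonzero().flatten()[0]
        with torch.no_grad():
            logits = mlm(input_ids=ids).logits[0, pos]
        ids[0, pos] = logits.argmax()  # commit the left-most mask's prediction
    return tok.decode(ids[0], skip_special_tokens=True)

# Enumerate m = 1..M candidate lengths, since the gold token count is unknown.
candidates = [generate("Alex Morgan plays for <mask>.", m) for m in range(1, 6)]
```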
#### MLM scoring
Finally, as a third lens of evaluation we use the MLM scoring framework of
Salazar et al. (2020). Contrary to the previous approaches, MLM scoring aims
to measure the probability of the correct answer (i.e., of the masks), instead
of generating the most probable answer. More specifically, we evaluate MLMs
out of the box via their pseudo-log-likelihood scores (PLLs), which are
computed by masking tokens one by one. PLLs have been widely used as the MLM
analogue of the perplexity of autoregressive language models on
unlabelled data Lazaridou et al. (2021). Still, computing PLLs for large
corpora is a very costly process in terms of time and resources Loureiro et
al. (2022). Instead, we propose to combine the lama and MLM scoring frameworks
to create an efficient and targeted evaluation framework for temporal factual
knowledge.
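Concretely, the PLL of a statement can be computed by masking one token at a time, as in the following simplified sketch of the Salazar et al. (2020) scoring; the checkpoint name is again an assumed TimeLM identifier.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "cardiffnlp/twitter-roberta-base-jun2022"  # assumed TimeLM identifier
tok = AutoTokenizer.from_pretrained(name)
mlm = AutoModelForMaskedLM.from_pretrained(name).eval()

def pll(text: str) -> float:
    """Pseudo-log-likelihood: mask each token in turn and sum the
    log-probability the MLM assigns to the original token."""
    ids = tok(text, return_tensors="pt").input_ids
    total = 0.0
    for pos in range(1, ids.size(1) - 1):  # skip the <s> and </s> specials
        masked = ids.clone()
        masked[0, pos] = tok.mask_token_id
        with torch.no_grad():
            log_probs = mlm(input_ids=masked).logits[0, pos].log_softmax(-1)
        total += log_probs[ids[0, pos]].item()
    return total

# Compare two candidate facts under the same model; a higher summed
# log-probability means the MLM assigns the statement more probability
# (reported scores may use a different sign or normalization convention).
score = pll("Mario Draghi is the head of the government of Italy.")
```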
Models | 2019-Q2 | 2019-Q3 | 2019-Q4 | 2020-Q1 | 2020-Q2 | 2020-Q3 | 2020-Q4 | 2021-Q1 | 2021-Q2 | 2021-Q3 | 2021-Q4 | 2022-Q1 | 2022-Q2
---|---|---|---|---|---|---|---|---|---|---|---|---|---
2019-Q4 | 34.88 | 33.96 | 34.44 | 34.93 | 34.76 | 34.73 | 34.02 | 34.18 | 34.70 | 34.34 | 34.92 | 35.46 | 35.31
2020-Q1 | 24.47 | 24.01 | 24.45 | 24.67 | 24.59 | 24.44 | 23.98 | 23.94 | 24.25 | 23.96 | 24.20 | 24.50 | 24.42
2020-Q2 | 22.94 | 22.29 | 22.92 | 23.24 | 23.23 | 23.12 | 22.57 | 22.55 | 22.90 | 22.59 | 22.91 | 23.23 | 23.11
2020-Q3 | 22.39 | 21.87 | 22.22 | 22.60 | 22.52 | 22.42 | 21.99 | 22.00 | 22.29 | 21.92 | 22.18 | 22.42 | 22.30
2020-Q4 | 25.56 | 25.28 | 25.68 | 25.96 | 25.89 | 25.79 | 25.51 | 25.44 | 25.71 | 25.50 | 25.69 | 25.97 | 25.72
2021-Q1 | 25.76 | 25.28 | 25.91 | 26.18 | 26.14 | 26.18 | 25.75 | 25.63 | 25.99 | 25.77 | 26.01 | 26.32 | 26.02
2021-Q2 | 23.75 | 23.47 | 23.94 | 24.10 | 24.10 | 24.12 | 23.63 | 23.60 | 24.05 | 23.75 | 24.12 | 24.37 | 24.16
2021-Q3 | 22.95 | 22.61 | 23.00 | 23.14 | 23.12 | 23.16 | 22.84 | 22.77 | 23.00 | 22.82 | 23.03 | 23.30 | 23.06
2021-Q4 | 23.37 | 23.01 | 23.41 | 23.59 | 23.55 | 23.68 | 23.37 | 23.27 | 23.60 | 23.40 | 23.58 | 23.76 | 23.61
2022-Q1 | 24.25 | 23.83 | 24.42 | 24.56 | 24.57 | 24.68 | 24.40 | 24.26 | 24.52 | 24.35 | 24.51 | 24.71 | 24.58
2022-Q2 | 21.48 | 20.95 | 21.42 | 21.59 | 21.57 | 21.61 | 21.25 | 21.12 | 21.44 | 21.13 | 21.31 | 21.49 | 21.39
Table 2: MLM scoring (median pseudo-log-likelihood scores) averaged for each
temporal split.
### 3.4 Dataset Analysis
We consider different subsets of the DynamicTempLAMA test sets for the three
different evaluation settings (§3.3). For the multi-token and MLM scoring
settings, we keep the full dataset; for single-token, we first tokenize the
labels and keep only the test examples that have at least one label with a
single token. This results in a very aggressive filtering of the dataset.
Specifically, each quarterly temporal split consists of $\sim 8500$ test
examples on average for the multi-token setting, but for the single-token
setting this results in only $\sim 450$ examples, marking a loss of $95\%$ of the
data.888Table 5 in the Appendix shows all the statistics in detail.
Additionally, the distribution of the fine-grained splits is of great
interest, as it will shape the interpretation of the results and the general
challenges of the evaluation framework. $\mathcal{D}^{\textsc{updated}}$ and
$\mathcal{D}^{\textsc{unchanged}}$ (i.e., the splits of the most interest)
constitute around $96\%$ and $0.3\%$, respectively, of the total examples for
the single-token evaluation, and $95\%$ and $1.8\%$ for the multi-token. This
is arguably a very skewed distribution, showing the importance of our work in
dividing the temporal splits into further fine-grained splits. This is
essential, because we would have different expectations for a model trained on
timestep $t$ while tested on data from both $t$ and $t-1$; for unchanged facts
it would be desirable to keep equal performance in both sets (i.e., knowledge
preservation §4.2), while for updated facts we would like to see improved
performance in timestep $t$ (i.e., adaptation §4.3).
Figure 3: Overall performance over time ($2019-2022$) for both single-token (a)
and multi-token (b) evaluation. The $x$-axis corresponds to the TimeLMs and the
$y$-axis to different metrics depending on the type of the evaluation.
## 4 Results
Figure 4: Multi-token evaluation for evolving facts ((a) Updated split) and
emerging facts ((b) New split).
### 4.1 Temporal robustness
We first evaluate temporal robustness of the $11$ TimeLMs, defined as the
overall performance over time (§3.1). Figure 3 shows the average performance
in all temporal and fine-grained splits in the time range from 2019-Q4 to
2022-Q2 for two types of evaluation, single-token probing and multi-token
generation. For the former evaluation type (Fig. 3(a)), all models perform
similarly for all metrics. However, when we evaluate multi-token generation,
the models gradually improve over time (Fig. 3(b)). This difference shows the
importance of considering multiple views and evaluations for the same LM
capability (i.e., temporal robustness).
We attribute the similar single-token performance to the fact that these
temporal datasets contain almost exclusively unchanged facts (§3.4). It is
therefore a positive outcome to observe that TimeLMs can preserve acquired
knowledge (§4.2). The findings for overall multi-token evaluation corroborate
the intuition that more recent models, which are trained with temporal data
spanning the entire range, should perform better than “past” (e.g. 2020) models
that have not seen “future” data (e.g. 2022) during training. We also provide
the overall results with MLM scoring in Table 2, where we observe that the last
model performs best across all temporal splits, showing the effectiveness of
adaptation with more recent unlabelled data (§3.2). Even though we observe
that this pattern holds for most temporal splits (i.e., scores improving for
each column $\downarrow$), the 2020-Q4 and 2021-Q1 TimeLMs produce worse PLL
scores than their previous or later versions. This is more evident in the
overall density plot in Figure 5. This finding entails that either the
distribution shift in these quarters was a lot stronger than the other
temporal periods, or the training of these particular models was not as
successful as expected.
Figure 5: Overall PLL distributions for TimeLMs.
Example | Input | Ground Truth Labels | #Tokens | #Answers | Split
---|---|---|---|---|---
1 | Alex Morgan plays for _X_. | United States women’s national soccer team | 7 | 2 | 2021-Q4
  |  | Orlando Pride | 2 | 2 | 2021-Q4
2 | Cristiano Ronaldo plays for _X_. | Juventus F.C. | 5 | 1 | 2021-Q2
  |  | Juventus F.C., Manchester United F.C. | 5, 6 | 2 | 2021-Q3
  |  | Manchester United F.C. | 6 | 1 | 2021-Q4
3 | _X_ is the head of the government of Italy. | Giuseppe Conte | 5 | 1 | 2020-Q4
  |  | Giuseppe Conte, Mario Draghi | 5, 3 | 2 | 2021-Q1
  |  | Mario Draghi | 3 | 1 | 2021-Q2
Table 3: Qualitative analysis of certain examples in DynamicTempLAMA.
### 4.2 Knowledge preservation
We use the $\mathcal{D}^{\textsc{unchanged}}$ split to evaluate the capability
of MLMs to preserve knowledge over time. Figure 6 shows that for both single
and multi-token evaluation all TimeLMs demonstrate similar performance over
time, showing strong knowledge preserving skills. Surprisingly, different
metrics show different patterns among the models for a single split. While in
general we should not compare the performance of a single model over time
(as the test sets are different), the comparison is valid in this case
because the splits contain unchanged facts, and hence most temporal test sets
are almost identical. All plots are shown in Figure 7 in the Appendix.
### 4.3 Adaptation to emerging & evolving concepts
Finally, we use the $\mathcal{D}^{\textsc{new}}$ and
$\mathcal{D}^{\textsc{updated}}$ splits for evaluation of emerging and
evolving concepts, respectively. Here, to ensure a fair comparison, we evaluate
the TimeLMs for a specific time window; for each model trained on timestep
$t$, we keep the test sets from $t-1$, $t$ and $t+1$. We observe in Figure 4
that in these cases the results vary among the models. There is no clear pattern as before, so case-by-case examination would be required. Still, a common pattern for the Updated split is that the middle set tends to have the highest performance ($\wedge$ shape). This means that models manage to effectively adapt to the updated facts of that timestep ($t$), but on the next timestep ($t+1$) they underperform as they are unaware of the factual changes, thus requiring adaptation. We provide all plots in the Appendix, including the
Deleted split, which is more difficult to interpret intuitively (i.e., why are
some facts deleted from Wikidata after a certain point?).
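To make the protocol explicit, the following is a minimal sketch of this windowed evaluation; the checkpoint names and the `load_model`/`evaluate` helpers are hypothetical stand-ins rather than our actual pipeline.

```python
# Minimal sketch of the windowed evaluation protocol of this section: for the
# model trained at timestep t, keep only the test sets from t-1, t and t+1.
# All helpers below are hypothetical placeholders, not our real pipeline.
TIMELINE = ["2021-Q1", "2021-Q2", "2021-Q3", "2021-Q4"]  # illustrative quarters

def load_model(quarter):
    return f"timelm-{quarter}"  # placeholder for loading a TimeLM checkpoint

def evaluate(model, quarter, split):
    return 0.0  # placeholder for scoring one temporal test set

def windowed_results(split):
    results = {}
    for t, quarter in enumerate(TIMELINE):
        model = load_model(quarter)
        window = TIMELINE[max(0, t - 1): t + 2]  # t-1, t, t+1 (clipped at ends)
        results[quarter] = {q: evaluate(model, q, split) for q in window}
    return results

print(windowed_results("updated"))
```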
## 5 Qualitative Analysis
Table 3 provides some examples from the DynamicTempLAMA test set that can help
us further interpret our results and inspect existing challenges. We first
observe that all examples have multi-token labels (i.e., objects from the
Subject-relation-object format) and are in effect discarded in the single-
token evaluation setup, making the inclusion of multiple views essential for
this task.
More specifically, in Example $1$, we observe that one label (United States women’s national soccer team) has more than $M=5$ tokens. It is therefore excluded even from the multi-token test set, leaving MLM scoring as the only method that could evaluate it. Interestingly, we manually tested the 2021-Q4
temporal model and found that it produces $1.6$ and $307.3$ average PLL scores
for the two options respectively, making the disregarded label a far more
confident prediction.
In the second and third queries, we observe how the correct answer changes over time, making the granularity of the evaluation (i.e., yearly, quarterly, monthly) an important factor in the correct assessment of the model’s temporal factual knowledge. For instance, for Example $2$, we can inspect how the predictions of the models change for facts that change over time (Table 4). However, even though PLL scores can follow intuitive temporal patterns (i.e., the PLL value can increase or decrease according to the point in time at which the fact changed), comparison between scores is not always helpful (i.e., word frequency can obscure factual knowledge), leaving room for improving the lama formulation.
Figure 6: Single and multi-token evaluation for the Unchanged split.
TimeLMs | Giuseppe Conte | Mario Draghi
---|---|---
2020-Q4 | $3.8$ | $33.3$
2021-Q1 | $3.5$ | $22.7$
2021-Q2 | $3.5$ | $25.7$
2021-Q3 | $3.8$ | $23.8$
Table 4: PLL scores for Example $2$ from Table 3.
## 6 Conclusion & Future Work
We addressed MLMs’ robustness to temporal concept drift and introduced DynamicTempLAMA: a dataset for dynamic benchmarking of factual knowledge in temporal, fine-grained splits from 2019-Q4 to 2022-Q2 that contain facts over time. We release our codebase, which dynamically updates the current test set over time and offers the option to extend it with custom (i) templates, (ii) relations from Wikidata, (iii) any period of time (years) and (iv) granularity of time (month/quarter/year). We include multiple views of evaluation, showing that this is essential in order to properly interpret the results of our benchmarking study of $11$ temporal RoBERTa models. We consider experimentation with
improving MLM decoding and addressing “domain mismatch” as open areas of
research for future work. Our code can be found at https://github.com/amazon-science/temporal-robustness.
## Acknowledgements
We would like to thank the anonymous reviewers and the AWS AI team for their
valuable feedback on the paper. We also thank Robert Vacareanu and Karthikeyan
K for their help with running experiments.
## Limitations
#### Lower bound estimate
A very common issue with the lama probe evaluation framework Petroni et al. (2019) is that it only constitutes a lower bound estimate of a model’s performance on factual knowledge retrieval. Specifically, if a model performs well, one can infer that it has the tested reasoning skill. However, failure does not entail
that the reasoning skill is missing, as it is possible that there is a problem
with the lexical-syntactic construction we picked Talmor et al. (2020). Any
given prompt only provides a lower bound estimate of the knowledge contained
in an LM Jiang et al. (2020b).
#### Domain mismatch
Despite the advantages of zero-shot evaluation, performance of a model might
be adversely affected by mismatches between the language the pre-trained LM
was trained on and the language of the examples in our tasks Jiang et al.
(2020b). It is quite possible that a fact that the LM does know cannot be
retrieved due to the prompts not being effective queries for the fact Jiang et
al. (2020b). Prior work proposes to fine-tune the model with a small set of
examples taken from the test set (and removed of course) in order to address
the incompatibility problem or ‘language mismatch’ Talmor et al. (2020);
Dhingra et al. (2022). We argue that this process suffers from multiple limitations: inter alia, it is not practical for a fast evaluation of the capabilities of a PLM at hand, and it faces optimization stability issues due to the small training dataset. The major limitation, however, is
that such fine-tuning enforces extra biases and errors, especially in the case
of temporal robustness evaluation.
#### MLM decoding (multi-token labels)
In this work we tried to address the problem of decoding from masked language models by incorporating two distinct approaches into the evaluation framework: multi-token generation with MLMs Wang and Cho (2019) and MLM scoring Salazar et al. (2020). Still, we observe that both methods provide results that are
hard to interpret (§5), leaving the problems of (i) decoding or generating
multiple tokens from MLMs and (ii) evaluation of factual knowledge in LMs as
open areas of research.
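To make the scoring side concrete, the sketch below computes a pseudo-log-likelihood in the spirit of Salazar et al. (2020) with the HuggingFace transformers library Wolf et al. (2019), masking one token at a time; the checkpoint name is a placeholder for a TimeLM, and the sign/normalization conventions may differ from those used in our experiments.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Sketch of MLM (pseudo-log-likelihood) scoring: mask each token in turn and
# sum the log-probabilities the model assigns to the original tokens.
name = "roberta-base"  # placeholder; a TimeLM checkpoint would be used instead
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name).eval()

def pll(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):          # skip <s> and </s>
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

print(pll("Mario Draghi is the head of the government of Italy."))
```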
#### Manual Templates
For lama-style probing Petroni et al. (2019), prior work creates the templates
manually. This is a limitation both in terms of scale (i.e., generalization to
many different kinds of inputs) and consistency (i.e., how do models perform
with minimal changes to their inputs?). LMs do not reason in an abstract
manner and are context-dependent Talmor et al. (2020). It is therefore
essential to address this problem and include functionalities to incorporate a
set of diverse templates for each evaluation setup.
#### English Twitter MLMs
Finally, our dataset, DynamicTempLAMA, following prior work Dhingra et al. (2022), collects and evaluates facts from Wikidata in the English language alone, and benchmarks RoBERTa language models trained on English Twitter data. We understand that this is a limitation, and further data collection and experimentation in more languages would be strongly encouraged.
## References
* Agarwal and Nenkova (2022) Oshin Agarwal and Ani Nenkova. 2022. Temporal effects on pre-trained models for language processing tasks. _Transactions of the Association for Computational Linguistics_ , 10:904–921.
* AlKhamissi et al. (2022) Badr AlKhamissi, Millicent Li, Asli Celikyilmaz, Mona Diab, and Marjan Ghazvininejad. 2022. A review on language models as knowledge bases.
* Biesialska et al. (2020) Magdalena Biesialska, Katarzyna Biesialska, and Marta R. Costa-jussà. 2020. Continual lifelong learning in natural language processing: A survey. In _Proceedings of the 28th International Conference on Computational Linguistics_ , pages 6523–6541, Barcelona, Spain (Online). International Committee on Computational Linguistics.
* Blank (1999) Andreas Blank. 1999. Why do new meanings occur? a cognitive typology of the motivations for lexical semantic change.
* Chakrabarty et al. (2022) Tuhin Chakrabarty, Thomas Scialom, and Smaranda Muresan. 2022. Fine-tuned language models can be continual learners. In _Challenges & Perspectives in Creating Large Language Models_.
* Chalkidis and Søgaard (2022) Ilias Chalkidis and Anders Søgaard. 2022. Improved multi-label classification under temporal concept drift: Rethinking group-robust algorithms in a label-wise setting. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics_ , Dublin, Ireland.
* Dhingra et al. (2022) Bhuwan Dhingra, Jeremy R. Cole, Julian Martin Eisenschlos, Daniel Gillick, Jacob Eisenstein, and William W. Cohen. 2022. Time-aware language models as temporal knowledge bases. _Transactions of the Association for Computational Linguistics_ , 10:257–273.
* Elazar et al. (2021) Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. 2021. Measuring and improving consistency in pretrained language models. _Transactions of the Association for Computational Linguistics_ , 9:1012–1031.
* Fierro and Søgaard (2022) Constanza Fierro and Anders Søgaard. 2022. Factual consistency of multilingual pretrained language models. In _Findings of the Association for Computational Linguistics: ACL 2022_ , pages 3046–3052, Dublin, Ireland. Association for Computational Linguistics.
* Guu et al. (2020) Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Realm: Retrieval-augmented language model pre-training. In _Proceedings of the 37th International Conference on Machine Learning_ , ICML’20. JMLR.org.
* Haviv et al. (2021) Adi Haviv, Jonathan Berant, and Amir Globerson. 2021. BERTese: Learning to speak to BERT. In _Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume_ , pages 3618–3623, Online. Association for Computational Linguistics.
* Hombaiah et al. (2021) Spurthi Amba Hombaiah, Tao Chen, Mingyang Zhang, Michael Bendersky, and Marc-Alexander Najork. 2021. Dynamic language models for continuously evolving content. _Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining_.
* Jang et al. (2022a) Joel Jang, Seonghyeon Ye, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, and Minjoon Seo. 2022a. Temporalwiki: A lifelong benchmark for training and evaluating ever-evolving language models.
* Jang et al. (2022b) Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun KIM, Stanley Jungkyu Choi, and Minjoon Seo. 2022b. Towards continual knowledge learning of language models. In _International Conference on Learning Representations_.
* Jiang et al. (2020a) Zhengbao Jiang, Antonios Anastasopoulos, Jun Araki, Haibo Ding, and Graham Neubig. 2020a. X-FACTR: Multilingual factual knowledge retrieval from pretrained language models. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 5943–5959, Online. Association for Computational Linguistics.
* Jiang et al. (2020b) Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020b. How can we know what language models know? _Transactions of the Association for Computational Linguistics_ , 8:423–438.
* Jin et al. (2022) Xisen Jin, Dejiao Zhang, Henghui Zhu, Wei Xiao, Shang-Wen Li, Xiaokai Wei, Andrew Arnold, and Xiang Ren. 2022. Lifelong pretraining: Continually adapting language models to emerging corpora. In _Proceedings of BigScience Episode #5 – Workshop on Challenges & Perspectives in Creating Large Language Models_, pages 1–16, virtual+Dublin. Association for Computational Linguistics.
* Kassner et al. (2021) Nora Kassner, Philipp Dufter, and Hinrich Schütze. 2021. Multilingual LAMA: Investigating knowledge in multilingual pretrained language models. In _Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics_.
* Kassner and Schütze (2020) Nora Kassner and Hinrich Schütze. 2020. Negated and misprimed probes for pretrained language models: Birds can talk, but cannot fly. In _Proceedings of the Annual Meeting of the Association for Computational Linguistics_.
* Kiela et al. (2021) Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, and Adina Williams. 2021. Dynabench: Rethinking benchmarking in NLP. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 4110–4124, Online. Association for Computational Linguistics.
* Kulkarni et al. (2015) Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. Statistically significant detection of linguistic change. In _Proceedings of the 24th International Conference on World Wide Web_ , WWW ’15, page 625–635, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.
* Lazaridou et al. (2022) Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev. 2022. Internet-augmented language models through few-shot prompting for open-domain question answering.
* Lazaridou et al. (2021) Angeliki Lazaridou, Adhiguna Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liska, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d’Autume, Tomáš Kočiský, Sebastian Ruder, Dani Yogatama, Kris Cao, Susannah Young, and Phil Blunsom. 2021. Mind the gap: Assessing temporal generalization in neural language models. In _Advances in Neural Information Processing Systems_.
* Lin (2004) Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In _Text Summarization Branches Out_ , pages 74–81, Barcelona, Spain. Association for Computational Linguistics.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach.
* Liška et al. (2022) Adam Liška, Tomáš Kočiský, Elena Gribovskaya, Tayfun Terzi, Eren Sezener, Devang Agrawal, Cyprien de Masson d’Autume, Tim Scholtes, Manzil Zaheer, Susannah Young, Ellen Gilsenan-McMahon, Sophia Austin, Phil Blunsom, and Angeliki Lazaridou. 2022. Streamingqa: A benchmark for adaptation to new knowledge over time in question answering models.
* Loureiro et al. (2022) Daniel Loureiro, Francesco Barbieri, Leonardo Neves, Luis Espinosa Anke, and Jose Camacho-collados. 2022. TimeLMs: Diachronic language models from Twitter. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations_ , pages 251–260, Dublin, Ireland. Association for Computational Linguistics.
* Lukes and Søgaard (2018) Jan Lukes and Anders Søgaard. 2018. Sentiment analysis under temporal shift. In _Proceedings of the Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis_ , pages 65–71.
* Luu et al. (2022) Kelvin Luu, Daniel Khashabi, Suchin Gururangan, Karishma Mandyam, and Noah A. Smith. 2022. Time waits for no one! analysis and challenges of temporal misalignment. In _Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 5944–5958, Seattle, United States. Association for Computational Linguistics.
* Mavromatis et al. (2021) Costas Mavromatis, Prasanna Lakkur Subramanyam, Vassilis N. Ioannidis, Soji Adeshina, Phillip R. Howard, Tetiana Grinberg, Nagib Hakim, and George Karypis. 2021. Tempoqr: Temporal question reasoning over knowledge graphs.
* Mu et al. (2023) Yida Mu, Kalina Bontcheva, and Nikolaos Aletras. 2023. It’s about time: Rethinking evaluation on rumor detection benchmarks using chronological splits. In _Findings of the Conference of the European Chapter of the Association for Computational Linguistics_.
* Onoe et al. (2022) Yasumasa Onoe, Michael Zhang, Eunsol Choi, and Greg Durrett. 2022. Entity cloze by date: What LMs know about unseen entities. In _Findings of the Association for Computational Linguistics: NAACL 2022_ , pages 693–702, Seattle, United States. Association for Computational Linguistics.
* Petroni et al. (2020) Fabio Petroni, Patrick S. H. Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. 2020. How context affects language models’ factual predictions. _CoRR_ , abs/2005.04611.
* Petroni et al. (2019) Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In _Proceedings of the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing_.
* Qin and Eisner (2021) Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying LMs with mixtures of soft prompts. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 5203–5212, Online. Association for Computational Linguistics.
* Rijhwani and Preotiuc-Pietro (2020) Shruti Rijhwani and Daniel Preotiuc-Pietro. 2020. Temporally-informed analysis of named entity recognition. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 7605–7617, Online. Association for Computational Linguistics.
* Rosin et al. (2022) Guy D. Rosin, Ido Guy, and Kira Radinsky. 2022. Time masking for temporal language models. _Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining_.
* Röttger and Pierrehumbert (2021) Paul Röttger and Janet Pierrehumbert. 2021. Temporal adaptation of BERT and performance on downstream document classification: Insights from social media. In _Findings of the Association for Computational Linguistics: EMNLP 2021_ , pages 2400–2412, Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Salazar et al. (2020) Julian Salazar, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. 2020. Masked language model scoring. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 2699–2712, Online. Association for Computational Linguistics.
* Søgaard et al. (2021) Anders Søgaard, Sebastian Ebert, Jasmijn Bastings, and Katja Filippova. 2021\. We need to talk about random splits. In _Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume_ , pages 1823–1832, Online. Association for Computational Linguistics.
* Strubell et al. (2019) Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. _CoRR_ , abs/1906.02243.
* Sun et al. (2020) Fan-Keng Sun, Cheng-Hao Ho, and Hung-Yi Lee. 2020. {LAMAL}: {LA}nguage modeling is all you need for lifelong language learning. In _International Conference on Learning Representations_.
* Sung et al. (2021) Mujeen Sung, Jinhyuk Lee, Sean Yi, Minji Jeon, Sungdong Kim, and Jaewoo Kang. 2021\. Can language models be biomedical knowledge bases? In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 4723–4734, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Talmor et al. (2020) Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2020. oLMpics-on what language model pre-training captures. _Transactions of the Association for Computational Linguistics_.
* Traugott and Dasher (2001) Elizabeth Closs Traugott and Richard B. Dasher. 2001. _Regularity in Semantic Change_. Cambridge Studies in Linguistics. Cambridge University Press.
* Wang and Cho (2019) Alex Wang and Kyunghyun Cho. 2019. BERT has a mouth, and it must speak: BERT as a Markov random field language model. In _Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation_ , pages 30–36, Minneapolis, Minnesota. Association for Computational Linguistics.
* Wolf et al. (2019) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. _CoRR_ , abs/1910.03771.
* Yogatama et al. (2021) Dani Yogatama, Cyprien de Masson d’Autume, and Lingpeng Kong. 2021. Adaptive semiparametric language models. _Transactions of the Association for Computational Linguistics_ , 9:362–373.
* Zhang* et al. (2020) Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In _International Conference on Learning Representations_.
* Zhao et al. (2022) Zhixue Zhao, George Chrysostomou, Kalina Bontcheva, and Nikolaos Aletras. 2022. On the impact of temporal concept drift on model explanations. In _Findings of the Conference on Empirical Methods in Natural Language Processing_.
* Zhong et al. (2021) Zexuan Zhong, Dan Friedman, and Danqi Chen. 2021. Factual probing is [MASK]: Learning vs. learning to recall. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 5017–5033, Online. Association for Computational Linguistics.
## Appendix A Appendix
### A.1 Data Collection for DynamicTempLAMA
Following Dhingra et al. (2022), we identify all facts in the Wikidata snapshot that have either a start or an end date after $2010$ and whose subjects and objects are both entities with Wikipedia pages. Among these
$482$K facts, we identify subject and relation pairs which have multiple
objects at different times and select $16$ relations with the most such
subjects. Then, for these relations we manually write template cloze queries
(i.e., templates) and populate them with the $1000$ most frequent subjects per
relation. For each subject and each relation we gather all the objects with
their associated time interval and construct a separate query for each year in
that interval. When intervals for the object entities overlap, we add all of
them to the list of correct answers. The query and the corresponding year form
the input texts and the temporal information $t$, while the object entity is
the target that we want to predict (i.e., the gold label). In contrast to Dhingra et al. (2022), we perform extra temporal divisions: we take each yearly split and divide it further into quarterly splits (§3.1, Figure 2(b)), following the same algorithm.
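The per-relation query construction can be summarized with the sketch below; the date fields, the quarter bucketing and the example dates are simplified, illustrative assumptions rather than our exact implementation.

```python
from datetime import date

# Simplified sketch of the query construction: for each (subject, relation)
# we gather all objects with their time intervals and emit one query per
# quarter; overlapping objects become multiple gold answers for that quarter.
def quarters(start: date, end: date):
    y, q = start.year, (start.month - 1) // 3 + 1
    while (y, q) <= (end.year, (end.month - 1) // 3 + 1):
        yield f"{y}-Q{q}"
        q += 1
        if q == 5:
            y, q = y + 1, 1

def build_queries(template, subject, facts):
    # facts: list of (object_label, start_date, end_date); dates illustrative
    answers = {}
    for obj, start, end in facts:
        for quarter in quarters(start, end):
            answers.setdefault(quarter, []).append(obj)
    prompt = template.replace("<subject>", subject).replace("<object>", "_X_")
    return [(prompt, quarter, labels) for quarter, labels in answers.items()]

print(build_queries("<subject> plays for <object>.", "Cristiano Ronaldo",
                    [("Juventus F.C.", date(2018, 7, 10), date(2021, 8, 26)),
                     ("Manchester United F.C.", date(2021, 8, 27), date(2022, 11, 22))]))
```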
Temporal Split | Unchanged | Updated | Deleted | New | Total | %Unchanged | %Updated | %Lost
---|---|---|---|---|---|---|---|---
2019-Q2 | $479|8523$ | $1|165$ | $7|124$ | $9|121$ | $496|8933$ | $96.6|95.4\%$ | $0.2|1.8\%$ | $94.4\%$
2019-Q3 | $451|8154$ | $3|248$ | $36|430$ | $5|205$ | $495|9037$ | $91.1|90.2\%$ | $0.6|2.7\%$ | $94.5\%$
2019-Q4 | $454|8271$ | $0|151$ | $3|140$ | $12|120$ | $469|8682$ | $96.8|95.3\%$ | $0.0|1.7\%$ | $94.6\%$
2020-Q1 | $456|8243$ | $3|296$ | $9|126$ | $15|273$ | $483|8938$ | $94.4|92.2\%$ | $0.6|3.3\%$ | $94.6\%$
2020-Q2 | $470|8451$ | $0|92$ | $2|95$ | $2|59$ | $474|8697$ | $99.2|97.2\%$ | $0.0|1.1\%$ | $94.5\%$
2020-Q3 | $446|8254$ | $2|179$ | $26|238$ | $10|133$ | $484|8804$ | $92.1|93.8\%$ | $0.4|2.0\%$ | $94.5\%$
2020-Q4 | $452|8298$ | $2|124$ | $4|111$ | $5|97$ | $463|8630$ | $97.6|96.2\%$ | $0.4|1.4\%$ | $94.6\%$
2021-Q1 | $453|8238$ | $1|269$ | $4|131$ | $14|215$ | $472|8853$ | $96.0|93.1\%$ | $0.2|3.0\%$ | $94.7\%$
2021-Q2 | $460|8344$ | $2|90$ | $7|128$ | $5|76$ | $474|8638$ | $97.0|96.6\%$ | $0.4|1.0\%$ | $94.5\%$
2021-Q3 | $445|8164$ | $2|164$ | $19|220$ | $2|99$ | $468|8647$ | $95.1|94.4\%$ | $0.4|1.9\%$ | $94.6\%$
2021-Q4 | $443|8213$ | $1|128$ | $4|82$ | $5|90$ | $453|8513$ | $97.8|96.5\%$ | $0.2|1.5\%$ | $94.7\%$
2022-Q1 | $442|8189$ | $1|111$ | $7|117$ | $6|126$ | $456|8543$ | $96.9|95.9\%$ | $0.2|1.3\%$ | $94.7\%$
2022-Q2 | $446|8287$ | $0|56$ | $2|40$ | $2|34$ | $450|8417$ | $99.1|98.5\%$ | $0.0|0.7\%$ | $94.7\%$
Table 5: Total number of examples for each temporal and fine-grained split in DynamicTempLAMA. We show both the single-token and the multi-token datasets (up to $M=5$ tokens). Cell scheme to be read single | multi. %Unchanged and %Updated show the percentage of the total examples that are part of the Unchanged and Updated set respectively. %Lost shows the percentage of examples we lose when we filter out the dataset for the single-token evaluation setting.
Wikidata ID | Relation | Template | #Facts | #Examples | Possible Split(s)
---|---|---|---|---|---
P$54$ | member of sports team | <subject> plays for <object>. | $3772$ | $50558$ | $\mathcal{D}^{\textsc{updated}}$
P$39$ | position held | <subject> holds the position of <object>. | $2961$ | $34835$ | $\mathcal{D}^{\textsc{updated}}$
P$108$ | employer | <subject> works for <object>. | $1544$ | $20531$ | $\mathcal{D}^{\textsc{updated}}$
P$102$ | political party | <subject> is a member of the <object>. | $1068$ | $14232$ | $\mathcal{D}^{\textsc{updated}}$
P$286$ | head coach | <object> is the head coach of <subject>. | $987$ | $11935$ | $\mathcal{D}^{\textsc{updated}}$
P$69$ | educated at | <subject> attended <object>. | $232$ | $2420$ | $\mathcal{D}^{\textsc{updated}}$, $\mathcal{D}^{\textsc{unchanged}}$
P$488$ | chairperson | <object> is the chair of <subject>. | $629$ | $8468$ | $\mathcal{D}^{\textsc{updated}}$
P$6$ | head of government | <object> is the head of the government of <subject>. | $578$ | $7815$ | $\mathcal{D}^{\textsc{updated}}$
P$279$ | subclass of | <subject> is a subclass of <object>. | $5$ | $70$ | $\mathcal{D}^{\textsc{new}}$, $\mathcal{D}^{\textsc{updated}}$
P$127$ | owned by | <subject> is owned by <object>. | $394$ | $5326$ | $\mathcal{D}^{\textsc{updated}}$, $\mathcal{D}^{\textsc{unchanged}}$
P$1001$ | legal term | <subject> is a legal term in <object>. | $37$ | $423$ | $\mathcal{D}^{\textsc{unchanged}}$
P$106$ | profession | <subject> is a <object> by profession. | $83$ | $1090$ | $\mathcal{D}^{\textsc{updated}}$, $\mathcal{D}^{\textsc{new}}$, $\mathcal{D}^{\textsc{unchanged}}$
P$27$ | citizen | <subject> is <object> citizen. | $147$ | $1983$ | $\mathcal{D}^{\textsc{new}}$, $\mathcal{D}^{\textsc{unchanged}}$
P$176$ | produced by | <subject> is produced by <object>. | $24$ | $276$ | $\mathcal{D}^{\textsc{new}}$, $\mathcal{D}^{\textsc{unchanged}}$
P$138$ | named after | <subject> is named after <object>. | $73$ | $1009$ | $\mathcal{D}^{\textsc{new}}$, $\mathcal{D}^{\textsc{unchanged}}$
P$937$ | work location | <subject> used to work in <object>. | $38$ | $507$ | $\mathcal{D}^{\textsc{new}}$, $\mathcal{D}^{\textsc{unchanged}}$
Table 6: The list of templates we used for each relation in the
DynamicTempLAMA dataset.
Wikidata ID | Relation | Input | Labels | Split
---|---|---|---|---
P$54$ | member of sports team | Cristiano Ronaldo plays for _X_. | Juventus F.C., Manchester United F.C. | 2021-Q3
P$39$ | position held | Martina Anderson holds the position of _X_. | member of the European Parliament | 2019-Q4
P$108$ | employer | George van Kooten works for _X_. | University of Cambridge | 2022-Q2
P$102$ | political party | Elena Kountoura is a member of the _X_. | Independent Greeks, SYRIZA | 2019-Q2
P$286$ | head coach | _X_ is the head coach of New York Red Bulls. | Gerhard Struber | 2020-Q4
P$69$ | educated at | Sarafina Nance attended _X_. | Tufts University, University of California, Berkeley | 2020-Q2
P$488$ | chairperson | _X_ is the chair of Lloyds Banking Group. | Lord Blackwell | 2022-Q2
P$6$ | head of government | _X_ is the head of the government of United Kingdom. | Theresa May, Boris Johnson | 2019-Q3
P$279$ | subclass of | Mercedes-Benz A-Class is a subclass of _X_. | compact car | 2022-Q2
P$127$ | owned by | DeepMind is owned by _X_. | Alphabet Inc. | 2021-Q4
P$1001$ | legal term | Commonwealth of Independent States Free Trade Area is a legal term in _X_. | ’Ukraine’, ’Russia’, ’Belarus’, ’Armenia’, ’Kazakhstan’, ’Moldova’, ’Kyrgyzstan’, ’Uzbekistan’, ’Tajikistan’ | 2022-Q2
P$106$ | profession | Penny James is a _X_ by profession. | chief executive officer | 2019-Q3
P$27$ | citizen | Yulia Putintseva is _X_ citizen. | Kazakhstan | 2022-Q1
P$176$ | produced by | Land Rover Discovery series is produced by _X_. | Jaguar Land Rover | 2022-Q2
P$138$ | named after | Bayes Business School is named after _X_. | Thomas Bayes | 2021-Q3
P$937$ | work location | Eliza Vozemberg used to work in _X_. | Strasbourg, City of Brussels | 2022-Q2
Table 7: Examples of DynamicTempLAMA for each relation.
### A.2 Full Results
We provide the full results with all metrics for the Unchanged split in Figure
7, and the Updated, New and Deleted splits for multi-token generation in
Figure 9.
Figure 7: Single-token probing (a) and multi-token generation (b) for the Unchanged split.
Figure 8: Multi-token generation results for various fine-grained splits: (a) Updated, (b) New, (c) Deleted.
Figure 9: Multi-token generation results for various fine-grained splits: (a) Updated, (b) New, (c) Deleted. Here for each model trained on timestep $t$, we keep the test sets from $t-1$, $t$ and $t+1$.
# A Dichotomy for the dimension of solenoidal attractors on high dimensional
space
Haojie Ren School of Mathematical Sciences, Fudan University, No 220 Handan
Road, Shanghai, China 200433<EMAIL_ADDRESS>
###### Abstract.
We study dynamical systems generated by skew products:
$T:[0,1)\times\mathbb{C}\to[0,1)\times\mathbb{C}\quad\quad T(x,y)=(bx\mod
1,\gamma y+\phi(x))$
where integer $b\geq 2$, $\gamma\in\mathbb{C}$ such that $0<|\gamma|<1$, and
$\phi$ is a real analytic $\mathbb{Z}$-periodic function. Let $\Delta\in[0,1)$
such that $\gamma=|\gamma|e^{2\pi i\Delta}$. For the case
$\Delta\notin\mathbb{Q}$ we prove the following dichotomy for the solenoidal
attractor $K^{\phi}_{b,\,\gamma}$ for $T$: Either $K^{\phi}_{b,\,\gamma}$ is the graph of a real analytic function, or the Hausdorff dimension of $K^{\phi}_{b,\,\gamma}$ is equal to $\min\\{3,1+\frac{\log b}{\log 1/|\gamma|}\\}$. Furthermore, given $b$ and $\phi$, the former alternative only happens for countably many $\gamma$ unless $\phi$ is constant.
## 1\. introduction
One of the main purposes of dynamical systems is the study of strange attractors. In this paper, we are concerned with the Hausdorff dimension of solenoidal attractors of skew products.
For any positive integer $d$, let
$\mathbf{B}_{d}:=\\{\,z\in\mathbb{R}^{d}:\,|z|\leq 1\\}.$ For any functions
$f:[0,1)\to[0,1)$ and $g:[0,1)\times\mathbf{B}_{d}\to\mathbf{B}_{d}$, define
the map
$F:\,[0,1)\times\mathbf{B}_{d}\to[0,1)\times\mathbf{B}_{d}\quad\quad
F(x,\,y)=\big{(}f(x),\,g(x,y)\big{)}.$
Thus
(1.1)
$K_{F}:=\bigcap_{n=0}^{\infty}F^{n}\big{(}\,[0,1)\times\mathbf{B}_{d}\,\big{)}$
is an invariant set, called the solenoidal attractor (or solenoid) for $F$. Denote by $\text{dim}_{H}(K_{F})$ the Hausdorff dimension of $K_{F}$.
Historical remarks. The study of solenoidal attractors on skew products has a long history. For the case $d=1$, Alexander and Yorke [12] introduced a class of maps called the generalized baker’s transformation:
$B:[-1,1]\times[-1,1]\circlearrowleft\quad\quad
B(x,y)=\begin{cases}(2x-1,\gamma y+(1-\gamma))&\,x\geq 0\\\ (2x+1,\gamma
y-(1-\gamma))&\,x<0.\end{cases}$
Solomyak [8, 6] proved $\text{dim}_{H}(K_{B})=2$ for Lebesgue-a.e. $\gamma\in(1/2,1]$. Later Hochman [3] showed that the packing dimension of the exceptional set is zero, and Varjú [13] established the result for every transcendental $\gamma$. See [14] for some algebraic $\gamma$.
To generalize $B$ to the nonlinear case, Tsujii [9] introduced dynamical
systems generated by maps:
(1.2)
$\tilde{T}:[0,1)\times\mathbb{R}\to[0,1)\times\mathbb{R}\quad\quad\tilde{T}(x,y)=(bx\mod
1,\gamma y+\phi(x))$
where integer $b\geq 2$, $0<\gamma<1$ and $\phi$ is a real $\mathbb{Z}$-periodic Lipschitz function. (Note that
$\tilde{T}\big{(}\,[0,1)\times\mathbf{B}(0,\,\frac{\parallel\phi\parallel_{\infty}}{1-\gamma}\,)\,\big{)}\subset[0,1)\times\mathbf{B}(0,\,\frac{\parallel\phi\parallel_{\infty}}{1-\gamma}\,)$,
thus we can define the solenoid $K_{\tilde{T}}$ by (1.1).) For $\gamma\in(1/b,1)$, Tsujii [9] proved that $\text{dim}(K_{\tilde{T}})=2$ for $C^{2}$-generic $\phi$. Later the author [22] gave a complete answer for all $\gamma\in(0,1)$ when $\phi$ is a real analytic $\mathbb{Z}$-periodic function. See e.g. [2, 7, 15, 16] for results about the dimension of solenoids
when $d=1$.
For the case $d=2$, studying the dimension of solenoids becomes harder. A well-known example is the Smale-Williams-type solenoids generated by the map
$Q:[0,1)\times\mathbb{R}^{2}\circlearrowleft\quad\quad Q(x,y,z)=(bx\mod 1,\gamma y+\phi(x),\lambda z+\psi(x))$
where integer $b\geq 2$, $0<\gamma,\,\lambda<1$ and $\phi,\,\psi$ are real
$\mathbb{Z}$-periodic Lipschitz functions. In particular $K_{Q}$ is called
Smale-Williams solenoid if $\phi(x)=\cos(2\pi x)$, $\psi(x)=\sin(2\pi x)$ and
$b=2$. When $\gamma,\,\lambda<1/b$, $K_{Q}$ is called a thin solenoid. [30, 31] gave positive answers for the dimension of thin Smale-Williams solenoids. Later Simon [35] calculated the Hausdorff dimension of thin Smale-Williams solenoids when $Q$ is a one-to-one map. Recently Mohammadpour et al. [33] considered a general class of functions $Q^{\prime}$. Under a transversality assumption, [33] calculated the Hausdorff dimension of thin solenoids for $Q^{\prime}$ when $Q^{\prime}$ is injective on $K_{Q^{\prime}}$. More results about the Hausdorff dimension of solenoids can be found in [34, 35, 36, 37, 38].
Main findings. It is well known that transversality is vital for studying the dimension of solenoids. One of our goals in this paper is to find ways to remove the transversality assumption in the case $d\geq 2$. Another goal is to explore ways to use the methods of additive combinatorics [3, 27, 2, 17] to study the dimension of strange attractors of nonlinear maps, as [11, 22] did. Thus we introduce the following nonlinear maps
(1.3) $T:[0,1)\times\mathbb{C}\to[0,1)\times\mathbb{C}\quad\quad
T(x,y)=(bx\mod 1,\gamma y+\phi(x))$
where integer $b\geq 2$, $\gamma\in\mathbb{C}$ such that $0<|\gamma|<1$, and
$\phi$ is a real $\mathbb{Z}$-periodic Lipschitz function. Note that map $T$
is a generalization of $\tilde{T}$, see (1.2). Let $K^{\phi}_{b,\,\gamma}$ be
the solenoid for $T$. For any $\gamma\in\mathbb{C}\setminus\\{0\\}$, let
$\Delta=\Delta(\gamma)\in[0,1)$ be the unique number such that
(1.4) $\gamma=|\gamma|e^{2\pi i\Delta(\gamma)}.$
The main result of this paper is the following.
###### Main Theorem.
Let $b\geq 2$ be an integer, $\gamma\in\mathbb{C}$ such that $0<|\gamma|<1$.
Let $\phi$ be a $\mathbb{Z}$-periodic real analytic function. Then exactly one
of the following holds:
1. (i)
$K^{\phi}_{b,\gamma}$ is a graph of a real analytic function;
2. (ii)
$dim_{H}(K^{\phi}_{b,\gamma})=\min\\{3,1+\frac{\log b}{\log 1/|\gamma|}\\}$
when $\Delta\notin\mathbb{Q}$;
3. (iii)
$dim_{H}(K^{\phi}_{b,\gamma})\geq\min\\{2,1+\frac{\log b}{\log 1/|\gamma|}\\}$
when $\Delta\in\mathbb{Q}$.
Moreover, given $b$ and non-constant $\phi$, the first alternative holds for at most countably many $\gamma$ with $0<|\gamma|<1$.
Organization. In Sect. 2, we will introduce Theorem A, which is used for proving the Main Theorem. The rest of this paper is devoted to proving Theorem A. In Sect. 3, we will introduce the classical Ledrappier-Young theory for the map $T$, and recall some basic properties of entropy that serve as the basic tools in this paper. The main difficulty in proving Theorem A lies in proving Theorem 5.1, since the remaining parts can be proved by methods similar to [22] (see Sect. 2.2 for explanations). Thus we shall show the dimension conservation property of $m_{x}$ in Sect. 4, which is used to prove Theorem 5.1 in Sect. 5. Finally, we will use Theorem 5.1 to give a sketch of the proof of Theorem A in Sect. 6.
Acknowledgment. We would like to thank Feliks Przytycki for useful
communication.
## 2\. Theorem A and proof of Main Theorem
In this section, we shall first give an expression for the solenoidal attractor $K^{\phi}_{b,\,\gamma}$. Then we will introduce Theorem A, our main finding in this paper, and explain the ideas of its proof. Finally, we will use Theorem A to prove the Main Theorem, as we did in [22].
### 2.1. Expression of $K^{\phi}_{b,\,\gamma}$
Let $\mathbb{Z}_{+}$ denote the set of positive integers and $\mathbb{R}_{+}$
denote the set of nonnegative real numbers. Let $\mathbb{N}$ be the set of
nonnegative integers. Let $\varLambda=\\{0,1,...,b-1\\}$,
$\varLambda^{\\#}=\bigcup_{n=1}^{\infty}\varLambda^{n}$,
$\Sigma=\varLambda^{\mathbb{Z}_{+}}$. For any word
$\textbf{j}\in\varLambda^{\\#}\cup\Sigma$ let $|\textbf{j}|=t$ if
$\textbf{j}=j_{1}j_{2}\ldots j_{t}\in\varLambda^{\\#}$, and
$|\textbf{j}|=\infty$ if $\textbf{j}\in\Sigma$. For any $x\in[0,1]$ define
function
(2.1)
$S(x,\textbf{j})=S_{b,\,\gamma}^{\phi}(x,\textbf{j})=\sum\limits_{n=1}^{|\textbf{j}|}{\gamma^{n-1}\phi\left(\frac{x}{b^{n}}+\frac{j_{1}}{b^{n}}+\frac{j_{2}}{b^{n-1}}+\cdot\cdot\cdot+\frac{j_{n}}{b}\right)},$
thus we have
$K^{\phi}_{b,\,\gamma}=\bigg{\\{}(x,S(x,\textbf{j})):x\in[0,1),\,\textbf{j}\in\Sigma\bigg{\\}}.$
See [9] for details; the proof in [9, Sect. 2] works for all $\gamma\in\mathbb{C}$ such that $|\gamma|\in(0,1)$.
It is necessary to recall the following formulas from [9], which serve as the
basic tools in our paper. For any $x\in[0,1]$, $\textbf{w}=w_{1}w_{2}\ldots
w_{m}\in\varLambda^{\\#}$ and each
$\textbf{i},\,\textbf{j}\in\varLambda^{\\#}\cup\Sigma,$ let
(2.2) $\textbf{w}(x):=\frac{x+w_{1}+\ldots+w_{m}b^{m-1}}{b^{m}}$
and
(2.3) $\textbf{w}\textbf{i}=w_{1}\ldots w_{m}i_{1}i_{2}\ldots$
as usual, so we have
(2.4)
$S(x,\textbf{w}\textbf{i})=S(x,\textbf{w})+\gamma^{m}S(\textbf{w}(x),\textbf{i})$
by the definition of function $S(\cdot,\cdot)$. Therefore we have
(2.5)
$S(x,\,\textbf{w}\textbf{i})-S(x,\,\textbf{w}\textbf{j})=\gamma^{m}\bigg{(}S(\textbf{w}(x),\,\textbf{i})-S(\textbf{w}(x),\,\textbf{j})\bigg{)}.$
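For the reader's convenience, we record the computation behind (2.4); it is nothing more than an unpacking of the definitions. Splitting the sum (2.1) defining $S(x,\textbf{w}\textbf{i})$ at $n=m$ and reindexing the tail by $n=m+k$ gives
$S(x,\textbf{w}\textbf{i})=\sum_{n=1}^{m}\gamma^{n-1}\phi\Big{(}\frac{x+w_{1}+w_{2}b+\cdots+w_{n}b^{n-1}}{b^{n}}\Big{)}+\gamma^{m}\sum_{k=1}^{|\textbf{i}|}\gamma^{k-1}\phi\Big{(}\frac{\textbf{w}(x)+i_{1}+i_{2}b+\cdots+i_{k}b^{k-1}}{b^{k}}\Big{)},$
since for $n=m+k$ the argument in (2.1) equals $\big{(}\textbf{w}(x)+i_{1}+\cdots+i_{k}b^{k-1}\big{)}/b^{k}$ by (2.2). The two sums are exactly $S(x,\textbf{w})$ and $\gamma^{m}S(\textbf{w}(x),\textbf{i})$, and (2.5) follows by subtracting the identity for $\textbf{w}\textbf{j}$ from the one for $\textbf{w}\textbf{i}$.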
### 2.2. Theorem A
Let $\nu$ denote the uniform probability measure on $\varLambda$ and let $\nu^{\mathbb{Z}_{+}}$ be the product measure on $\Sigma$. Denote by $\mathcal{m}$ the Lebesgue measure on $[0,1)$. Define the probability measure on $[0,1)\times\mathbb{C}$
$\omega:=G(\mathcal{m}\times\nu^{\mathbb{Z}_{+}})$
where the map
$G:[0,1)\times\Sigma\to[0,1)\times\mathbb{C}\quad\quad
G(x,\textbf{j})=(x,S(x,\textbf{j})).$
In fact the measure $\omega$ is the SRB measure for the map $T$. For every
$x\in[0,1]$ define the map
(2.6) $S_{x}:\Sigma\to\mathbb{C}\quad\quad
S_{x}(\,\textbf{j}\,)=S(x,\textbf{j})$
and measure
(2.7) $m_{x}:=S_{x}(\nu^{\mathbb{Z}_{+}})$
as in [9, 22]. Note that $\omega=\int_{[0,1)}\big{(}\delta_{x}\times
m_{x}\big{)}\,dx.$ For any $n\geq 1$, we have
(2.8)
$m_{x}=\frac{1}{b^{n}}\sum_{\textbf{j}\in\varLambda^{n}}T^{n}(m_{\textbf{j}(x)})$
where probability measures
(2.9) $T^{n}(m_{\textbf{j}\,(x)})=f_{x,\,\textbf{j}}(m_{\textbf{j}(x)})$
and
$f_{x,\,\textbf{j}}(y)=\gamma^{n}y+S(x,\,\textbf{j}),\,\forall\,y\in\mathbb{C}$.
See [9, Section 2] for details.
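Indeed, (2.8) and (2.9) follow directly from (2.4): write $\textbf{i}\in\Sigma$ as $\textbf{i}=\textbf{j}\textbf{k}$ with $\textbf{j}\in\varLambda^{n}$ and $\textbf{k}\in\Sigma$. Under $\nu^{\mathbb{Z}_{+}}$ each prefix $\textbf{j}$ has mass $b^{-n}$ and, conditioned on the prefix, the tail $\textbf{k}$ is again distributed according to $\nu^{\mathbb{Z}_{+}}$; since $S_{x}(\textbf{j}\textbf{k})=S(x,\textbf{j})+\gamma^{n}S(\textbf{j}(x),\textbf{k})=f_{x,\,\textbf{j}}\big{(}S_{\textbf{j}(x)}(\textbf{k})\big{)}$, pushing forward yields $m_{x}=b^{-n}\sum_{\textbf{j}\in\varLambda^{n}}f_{x,\,\textbf{j}}(m_{\textbf{j}(x)})$.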
A probability measure $\mu$ in a metric space $X$ is called exact-dimensional
if there exists a constant $\kappa\geq 0$ such that for $\mu-\text{a.e.}$ $x$,
(2.10) $\lim_{r\to 0}\frac{\log\mu\big{(}\mathbf{B}(x,r)\big{)}}{\log
r}=\kappa.$
In this situation, we write $\text{dim}(\mu)=\kappa$ and call it the dimension
of $\mu$.
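For instance, the Lebesgue measure on $[0,1]^{d}$ is exact-dimensional with dimension $d$, while the uniform measure on the middle-thirds Cantor set is exact-dimensional with dimension $\log 2/\log 3$.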
To study the dimension of the SRB measure $\omega$ as [22] did, we shall recall the following mild transversality condition (H). Shen and the author [11] introduced it and studied the case $\gamma\in(1/b,1)$. Recently Gao and Shen [18] studied the case $\gamma\in\mathbb{C}$ such that $0<|\gamma|<1$.
###### Definition 2.1.
Given an integer $b\geq 2$ and $\gamma\in\mathbb{C}$ such that $0<|\gamma|<1$,
we say that a real $\mathbb{Z}$-periodic $C^{1}$ function $\phi(x)$ satisfies
the condition (H) if
$S(x,\textbf{j})-S(x,\textbf{i})\not\equiv
0,\quad\forall\,\textbf{j}\neq\textbf{i}\in\Sigma.$
The following theorem is our main findings in this paper.
###### Theorem A.
Let $\phi(x)$ be a real analytic $\mathbb{Z}$-periodic function satisfying the condition (H) for an integer $b\geq 2$ and $\gamma\in\mathbb{C}$ such that $0<|\gamma|<1$. If $\Delta\in\mathbb{R}\setminus\mathbb{Q}$, then
$dim(\omega)=\min\\{1+\frac{\log b}{\log 1/|\gamma|},3\\}.$
Recall that $\Delta$ is defined by (1.4).
Explanations of Theorem A. Inspired by the recent works [3, 2, 11], the author [22] studied Theorem A for the case $\gamma\in(0,1)$, so it is natural to ask whether those methods can be used to study the general case $\gamma\in\mathbb{C}$. In fact, most parts of the proof in [22] also work for $\gamma\in\mathbb{C}$ with $|\gamma|\in(0,1)$. Therefore, if the inverse theorem for convolutions on $\mathbb{R}$ of Hochman [3, Theorem 2.7] also held for all measures in $\mathscr{P}(\mathbb{C})$, there would be little difficulty in proving Theorem A. Unfortunately that is not the case: studying the entropy of convolutions on $\mathbb{R}^{d}$ is considerably harder.
To study self-similar measures on $\mathbb{R}^{d}$, Hochman [27] generalized the inverse theorem on $\mathbb{R}$ [3, Theorem 2.7] to higher-dimensional spaces [27, Theorem 2.8]. However, the latter is not easy to use directly. Thus the main difficulty in our paper is to derive a corollary of [27, Theorem 2.8] whose condition is easier to verify in our system (see Theorem 5.1).
Indeed, the classical Ledrappier-Young theory implies that for Lebesgue-a.e. $x\in[0,1]$
$\text{dim}(m_{x})=\alpha.$
Thus it suffices to prove $\alpha=\min\\{2,\frac{\log b}{\log 1/|\gamma|}\\}$ (see Theorem 3.1 for details).
As mentioned before, studying the dimension of the measures $m_{x}$ directly is hard, so we seek to understand the properties of the projection measures $\pi_{\theta}m_{x}$ and the conditional measures $\\{\,(m_{x})_{\pi_{\theta}^{-1}(z)}\\}_{z\in\mathbb{R}}$; see Definition 4.1. Thanks to the methods in [25, 26], we build a new Ledrappier-Young theory for the nonlinear system generated by the map $T$, which establishes the connections between $\text{dim}(m_{x})$ and $\text{dim}(\pi_{\theta}m_{x})$, $\text{dim}\big{(}(m_{x})_{\pi_{\theta}^{-1}(z)}\big{)}$ in the case $\Delta\notin\mathbb{Q}$; see Theorem 4.1. Therefore, inspired by the work of Hochman [27], we can use the techniques in [3, 27, 2, 22] to establish Theorem 5.1, which is a corollary of [27, Theorem 2.8]. The rest of the proof is similar to [22], and we will omit some details for convenience.
### 2.3. Proof of Main Theorem
We shall first introduce the following Theorem 2.1, which is an immediate
consequence of [18, Theorem 2.1]. Note that the proof of [22, Theorem 2.1]
also works for $\gamma\in\mathbb{C}$ such that $0<|\gamma|<1$, thus we omit
the proof of Theorem 2.1.
###### Theorem 2.1.
Fix an integer $b\geq 2$ and $\gamma\in\mathbb{C}$ such that $0<|\gamma|<1$. Assume that $\phi$ is an analytic $\mathbb{Z}$-periodic function. Then exactly one of the following holds:
1. (H.1)
$K^{\phi}_{b,\,\gamma}$ is the graph of a real analytic function;
2. (H.2)
$\phi$ satisfies the condition (H).
For any $\theta\in\mathbb{R}$, let $\pi_{\theta}:\mathbb{C}\to\mathbb{R}$ be
the projection function such that
(2.11) $z\mapsto\text{Re}(ze^{-2\pi i\theta})$
where $\text{Re}(ze^{-2\pi i\theta})$ is the real part of $ze^{-2\pi
i\theta}$.
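Note the following elementary identity, which underlies both the proof below and the group extension of Sect. 4.1: since $\gamma=|\gamma|e^{2\pi i\Delta}$ by (1.4), for every $z\in\mathbb{C}$ we have
$\pi_{\theta}(\gamma z)=\text{Re}\big{(}|\gamma|e^{2\pi i\Delta}ze^{-2\pi i\theta}\big{)}=|\gamma|\,\pi_{\theta-\Delta}(z),$
so multiplication by $\gamma$ contracts every projection by $|\gamma|$ while rotating the projection direction by $\Delta$; this computation explains the rotation $\theta\mapsto\theta-\Delta\mod 1$ appearing in the skew product $\hat{T}$ of Sect. 4.1.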
###### Proof of the Main Theorem.
If $\phi$ is a non-constant function, then there exists $m\in\mathbb{Z}_{+}$
and $x^{\prime}\in[0,1)$ such that
$\phi(\frac{x^{\prime}}{b^{m}})\neq\phi(\frac{x^{\prime}+1}{b^{m}})$. Let
$10_{\infty}=100\ldots,\,0_{\infty}=00\ldots\in\Sigma$. Thus the function
$g(\gamma):=S^{\phi}_{b,\,\gamma}(x^{\prime},10_{\infty})-S^{\phi}_{b,\,\gamma}(x^{\prime},0_{\infty})$
has at most countably many zeros in $\mathbf{B}(0,1)$, which implies that (i) holds for at most countably many $\gamma\in\mathbb{C}$ with $0<|\gamma|<1$; by Theorem 2.1 we only need to consider the case where condition (H) holds.
For the case $\Delta\in\mathbb{R}\setminus\mathbb{Q}$, it suffices to show $dim(\omega)=\min\\{1+\frac{\log b}{\log 1/|\gamma|},3\\}$, since $\text{supp}(\omega)=K^{\phi}_{b,\,\gamma}$ and $\text{dim}_{B}(K^{\phi}_{b,\,\gamma})\leq\min\\{1+\frac{\log b}{\log 1/|\gamma|},3\\}.$ Thus (ii) holds by Theorem A.
For the case $\Delta\in\mathbb{Q}$, we may assume that $\Delta\neq 0$ by [22,
Main Theorem]. Let $\Delta=k/m\in(0,1)$ for some $k,\,m\in\mathbb{Z}_{+}.$
Since the condition (H) holds, there exists $\theta\in[0,1)$ such that
(2.12) $\pi_{\theta}\circ
S^{\phi}_{b,\,\gamma}(x,1_{\infty})-\pi_{\theta}\circ
S^{\phi}_{b,\,\gamma}(x,0_{\infty})\not\equiv 0.$
Define the function $\overline{\phi}:\mathbb{R}\to\mathbb{R}$ by
$\overline{\phi}(x)=\pi_{\theta}\bigg{(}\,\sum_{t=0}^{m-1}\gamma^{m-1-t}\phi(b^{t}x)\,\bigg{)}.$
Thus (2.12) implies $\overline{\phi}$ is a real analytic $\mathbb{Z}$-periodic
function such that the condition (H) holds for $b^{m}$ and $|\gamma|^{m}.$
Therefore we have
(2.13)
$\text{dim}\big{(}\,\omega^{\overline{\phi}}_{b^{m},\,|\gamma|^{m}}\,\big{)}=\min\\{2,1+\frac{\log
b}{\log 1/|\gamma|}\\}$
by [22, Theorem A]. Define the function
$\hat{\pi}_{\theta}:[0,1)\times\mathbb{C}\to[0,1)\times\mathbb{C}$ such that
$\hat{\pi}_{\theta}(x,z)=(x,\,\pi_{\theta}(z))$. Therefore we have
$\hat{\pi}_{\theta}(\,\omega^{\phi}_{b,\,\gamma}\,)=\omega^{\overline{\phi}}_{b^{m},\,|\gamma|^{m}},$
which implies
$\text{dim}\big{(}\,\omega^{\phi}_{b,\,\gamma}\,\big{)}\geq\min\\{2,1+\frac{\log
b}{\log 1/|\gamma|}\\}$ by
$\text{dim}\big{(}\,\omega^{\phi}_{b,\,\gamma}\,\big{)}\geq\text{dim}\big{(}\,\hat{\pi}_{\theta}(\omega^{\phi}_{b,\,\gamma})\,\big{)}$
and (2.13). Therefore (iii) holds. ∎
## 3\. Ledrappier-Young theory and entropy
In this section we shall first recall the classical Ledrappier-Young formula for $T$. Then we will give a brief introduction to the conditional measure theorem of Rohlin [21] and recall the definition of entropy. Finally, we will introduce some basic properties of the entropy of measures, which serve as basic tools in our paper.
### 3.1. Ledrappier-Young formula
The following Theorem 3.1 is the classical Ledrappier-Young formula for the SRB measure $\omega$, which can be proved by exactly the same methods as in [22, Theorem 3.1]. Thus we omit the proof; see also [4, 5, 19].
###### Theorem 3.1.
Let $b\geq 2$ be an integer and $\gamma\in\mathbb{C}$ such that
$0<|\gamma|<1$. If $\phi:\mathbb{R}\to\mathbb{R}$ is a $\mathbb{Z}$-periodic
Lipschitz function, then
1. (1)
$\omega$ is exact dimensional;
2. (2)
there is a constant $\alpha\in[0,2]$ such that for Lebesgue-a.e. $x\in[0,1)$,
$m_{x}$ is exact dimensional and $\dim(m_{x})=\alpha$ and
(3.1) $\dim(\omega)=1+\alpha.$
Recall that the definition of $m_{x}$ is from (2.7).
### 3.2. Conditional measure and entropy
Let $(\Omega,\mathscr{B},\mu)$ be a Lebesgue space (see [23, Definition 2.3]).
A partition $\eta$ of $\Omega$ is called a measurable partition if, up to a set
of measure zero, the quotient space is separated by a countable number of
measurable sets $\\{B_{i}\\}$. Let $\hat{\eta}$ be the sub-$\sigma$-algebra of
$\mathscr{B}$ whose elements are unions of elements of $\eta$.
###### Theorem 3.2 (Rohlin [21]).
Let $\eta$ be a measurable partition of a Lebesgue space
$(\Omega,\mathscr{B},\mu)$. Then for every $x$ in the set of full
$\mu$-measure, there is a probability measure $\mu^{\eta}_{x}$ defined on
$\eta(x)$, the element of $\eta$ containing $x$. These measures are uniquely
characterized (up to a set of $\mu$-measure 0) by the following properties: if
$A\subset\Omega$ is a measurable set, then $x\mapsto\mu^{\eta}_{x}(A)$ is
$\hat{\eta}$-measurable and
$\mu(A)=\int\mu^{\eta}_{x}(A)d\mu(x).$
These properties imply that for any $g\in L^{1}(\Omega,\mathscr{B},\mu)$,
$\mu^{\eta}_{x}(g)=\mathbf{E}_{\mu}(g|\,\hat{\eta})(x)$ for
$\mu-\text{a.e.\,}x$, and $\mu(g)=\int\mathbf{E}_{\mu}(g|\,\hat{\eta})d\mu.$
In Theorem 3.2, we call $\\{\,\mu^{\eta}_{x}\,\\}_{x\in\Omega}$ the canonical system of conditional measures associated with $\eta$. A countable partition $\mathcal{P}$ is a countable collection of pairwise disjoint measurable subsets of $\Omega$
whose union is equal to $\Omega$. For any sub-$\sigma$-algebra $\mathcal{M}$
of $\mathscr{B}$, any countable $\mathscr{B}$-measurable partition
$\mathcal{P}$ of $\Omega$ we define the conditional information
$\mathbf{I}_{\mu}(\,\mathcal{P}\,|\,\mathcal{M})=-\sum_{I\in\mathcal{P}}\chi_{I}\text{log}_{b}\mathbf{E}_{\mu}(\chi_{I}|\mathcal{M}),$
and the conditional entropy
$H_{\mu}(\,\mathcal{P}\,|\,\mathcal{M})=\int_{\Omega}\mathbf{I}_{\mu}(\,\mathcal{P}\,|\,\mathcal{M})d\mu.$
If $\mathcal{M}=\\{\emptyset,\,\Omega\\}$, let
$\mathbf{I}_{\mu}(\mathcal{P})=\mathbf{I}_{\mu}(\,\mathcal{P}\,|\,\mathcal{M})$
and $H(\mu,\mathcal{P})=H_{\mu}(\,\mathcal{P}\,|\,\mathcal{M})$ for
convenience.
For any countable $\mathscr{B}$-measurable partition $\mathcal{Q}$ of
$\Omega$. Let $\mathcal{Q}(x)$ be the member of $\mathcal{Q}$ that contains
$x$. If $\mu(\mathcal{Q}(x))>0$, we call the conditional measure
$\mu_{\mathcal{Q}(x)}(A)=\frac{\mu(A\cap\mathcal{Q}(x))}{\mu(\mathcal{Q}(x))}$
a $\mathcal{Q}$-component of $\mu$. Let $f_{b}:\mathbb{R}_{+}\to\mathbb{R}_{+}$ be the function
$f_{b}(p)=-p\log_{b}p$
where the common convention $0\log 0=0$ is adopted. In particular we have
$H(\mu,\mathcal{Q})=\sum_{Q\in\mathcal{Q}}f_{b}\big{(}\mu(Q)\big{)}.$
For another countable partition $\mathcal{P}$, we also have
$H(\mu,\mathcal{Q}|\mathcal{P})=\sum_{P\in\mathcal{P},\,\mu(P)>0}\mu(P)H(\mu_{P},\mathcal{Q}).$
When $\mathcal{Q}$ is a refinement of $\mathcal{P}$, i.e.,
$\mathcal{Q}(x)\subset\mathcal{P}(x)$ for every $x\in\Omega$, we have
$H(\mu,\mathcal{Q}|\mathcal{P})=H(\mu,\mathcal{Q})-H(\mu,\mathcal{P}).$
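As a quick sanity check of these conventions (recall that entropies are taken in base $b$): if $\mu$ is the Lebesgue measure on $[0,1)$, $\mathcal{P}$ is the partition into $b$-adic intervals of level $n$ and $\mathcal{Q}$ its refinement into level-$(n+m)$ intervals, then $H(\mu,\mathcal{P})=n$ and $H(\mu,\mathcal{Q})=n+m$, so $H(\mu,\mathcal{Q}|\mathcal{P})=m$: each level-$n$ interval splits uniformly into $b^{m}$ subintervals.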
### 3.3. Entropy of measures
In this subsection we shall introduce some notation that was used extensively in [3, 2, 17, 27, 22], and which also serves as the basic language in this paper. Later we will recall some basic facts about the entropy of measures.
Let $(\Omega,\mathscr{B},\mu)$ be a Lebesgue space. If there exists a sequence of partitions $\mathcal{Q}_{i}$, $i=0,1,2,\cdots$, such that $\mathcal{Q}_{i+1}$ is a refinement of $\mathcal{Q}_{i}$, we shall denote $\mu_{x,\,i}=\mu_{\mathcal{Q}_{i}(x)}$ and call it an $i$-th component measure of $\mu$. For a finite set $I$ of integers, suppose that for every $i\in I$ there is a random variable $Y_{i}$ defined over $(\Omega,\hat{\mathcal{Q}_{i}},\mu)$ (see Section 3.2 for the definition of $\hat{\mathcal{Q}_{i}}$). Then we shall use the following notation
$\mathbb{P}_{i\in I}(K_{i})=\mathbb{P}_{i\in
I}^{\,\mu}(K_{i}):=\frac{1}{\\#I}\sum_{i\in I}\mu(K_{i}),$
where $K_{i}$ is an event for $Y_{i}$. If $Y_{i}$’s are $\mathbb{R}$-valued
random variable, we shall also use the notation
$\mathbb{E}_{i\in I}(Y_{i})=\mathbb{E}^{\,\mu}_{i\in
I}(Y_{i}):=\frac{1}{\\#I}\sum_{i\in I}\mathbb{E}(Y_{i}).$
Therefore the following holds
$H(\mu,\mathcal{Q}_{m+n}|\mathcal{Q}_{n})=\mathbb{E}(H(\mu_{x,\,n},\mathcal{Q}_{m+n}))=\mathbb{E}^{\,\mu}_{i=n}(H(\mu_{x,\,i},\mathcal{Q}_{i+m})).$
In most cases in this paper, we shall consider the situation where $\Omega=\mathbb{R}$ or $\mathbb{C}$. Write $\mathbb{R}^{d}$ for $\mathbb{R}$ or
$\mathbb{C}$. Let $\mathcal{L}^{\mathbb{R}}_{n}$ be the partition of
$\mathbb{R}$ into $b$-adic intervals of level $n$, i.e., the intervals
$[j/b^{n},(j+1)/b^{n})$, $j\in\mathbb{Z}$. In this paper we also regard
$\mathbb{C}$ as $\mathbb{R}^{2}$. Similarly, let
$\mathcal{L}^{\mathbb{C}}_{n}:=\bigg{\\{}\,[j_{1}/b^{n},(j_{1}+1)/b^{n})\times[j_{2}/b^{n},(j_{2}+1)/b^{n})\,\bigg{\\}}_{j_{1},\,j_{2}\in\mathbb{Z}}.$
In the rest of paper, we may write
$\mathcal{L}^{\mathbb{R}}_{n},\,\mathcal{L}^{\mathbb{C}}_{n}$ as
$\mathcal{L}_{n}$ for convenience. For any $I\in\mathcal{L}_{n}$ and
$\mu(I)>0$ let
$\mu^{I}:=g_{I}(\mu_{I})$
where map $g_{I}(z):=b^{n}\cdot z+c_{I},\,\forall\,z\in\mathbb{R}^{d}$ and
$g_{I}(I)=[0,1)^{d}$ holds for some $c_{I}\in\mathbb{R}^{d}$. As [3] did,
denote
(3.2) $\mu^{x,\,i}=\mu^{\mathcal{L}_{i}(x)}.$
Let $\mathscr{P}(\mathbb{R}^{d})$ denote the collection of all Borel
probability measures in $\mathbb{R}^{d}$. If a probability measure
$\mu\in\mathscr{P}(\mathbb{R}^{d})$ is exact dimensional, its dimension is
closely related to the entropy. In fact we have the following result [10,
Theorem 4.4]. See also [20, Theorem 1.3].
###### Proposition 3.1.
If $\mu\in\mathscr{P}(\mathbb{R}^{d})$ is exact dimensional, then
$\dim(\mu)=\lim\limits_{n\to\infty}\frac{1}{n}H(\mu,\mathcal{L}_{n}).$
The following are some well-known facts about entropy and conditional entropy, which will be used extensively in our work. See [3, Section 3.1] for details. Define the function
(3.3) $H:[0,1]\to\mathbb{R}_{+}\quad\quad H(t)=f_{b}(t)+f_{b}(1-t).$
###### Lemma 3.1 (Concavity and convexity).
Consider a measurable space $(\Omega,\mathscr{B})$ which is endowed with
partitions $\mathcal{Q}$ and $\mathcal{P}$ such that $\mathcal{P}$ is a
refinement of $\mathcal{Q}$. Let $\mu,\mu^{\prime}$ be probability measures on $(\Omega,\mathscr{B})$. Then for any $t\in(0,1)$,
$(\text{concavity})\quad
tH(\mu,\mathcal{Q})+(1-t)H(\mu^{\prime},\mathcal{Q})\leq
H(t\mu+(1-t)\mu^{\prime},\mathcal{Q}),$ $\qquad\qquad
tH(\mu,\mathcal{P}|\mathcal{Q})+(1-t)H(\mu^{\prime},\mathcal{P}|\mathcal{Q})\leq
H(t\mu+(1-t)\mu^{\prime},\mathcal{P}|\mathcal{Q}),$ $(\text{convexity})\quad
H(t\mu+(1-t)\mu^{\prime},\mathcal{Q})\leq
tH(\mu,\mathcal{Q})+(1-t)H(\mu^{\prime},\mathcal{Q})+H(t).$
###### Lemma 3.2.
Let $\mu\in\mathscr{P}(\mathbb{R}^{d})$. There is a constant $C_{d}>0$ such
that for any affine map $f(x)=ax+c$ with $a\in\mathbb{R}\setminus\\{0\\},\,c\in\mathbb{R}^{d}$, and for any $n\in\mathbb{N}$, we have
$\left|H(f\mu,\,\mathcal{L}_{n+[\log_{b}|a|]})-H(\mu,\,\mathcal{L}_{n})\right|\leq
C_{d}.$
###### Lemma 3.3.
Given a probability space $(\Omega,\mathscr{B},\mu)$, if
$f,g:\Omega\to\mathbb{R}^{d}$ are measurable and $\sup_{x}|f(x)-g(x)|\leq
b^{-n}$ then
$\left|H(f\mu,\mathcal{L}_{n})-H(g\mu,\mathcal{L}_{n})\right|\leq C_{d},$
where $C_{d}$ is an absolute constant.
The following lemma is from [3, Lemma 3.4].
###### Lemma 3.4.
For $R\geq 1$ and $\mu\in\mathscr{P}([-R,R]^{2})$ and integers $m\leq n$,
$\frac{1}{n}H(\mu,\mathcal{L}_{n})=\mathbb{E}^{\,\mu}_{0\leq
i<n}\bigg{(}\frac{1}{m}H(\mu^{x,\,i},\mathcal{L}_{m})\bigg{)}+O\big{(}\frac{m}{n}+\frac{\log
R}{n}\big{)}.$
See (3.2) for the definition of $\mu^{x,\,i}$.
## 4\. dimension conservation
In this section, we shall first introduce the definition of dimension conservation. Then we will establish the dimension conservation property of the measures $m_{x}$. The following notion was introduced by Furstenberg [24].
###### Definition 4.1.
For any $\theta\in[0,1]$, a Borel probability measure $\mu\in\mathscr{P}(\mathbb{C})$ is said to satisfy $\theta$-dimension conservation if the following hold:
1. (1)
$\mu,\,\pi_{\theta}\mu$ are exact-dimensional;
2. (2)
for $\mu\text{-a.e.}\,z\in\mathbb{C}$, $\mu^{\eta_{\theta}}_{z}$ is exact-
dimensional and
$\text{dim}(\mu)=\text{dim}(\pi_{\theta}\mu)+\text{dim}(\mu^{\eta_{\theta}}_{z})$
where $\pi_{\theta}$ is from (2.11), partition
$\eta_{\theta}:=\big{\\{}\,\pi_{\theta}^{-1}(\,\\{x\\}\,):\,x\in\mathbb{R}\,\big{\\}}$
and $\\{\,\mu^{\eta_{\theta}}_{z}\,\\}_{z\in\mathbb{C}}$ is the canonical
system of conditional measure with $\eta_{\theta}$ (See Theorem 3.2).
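For example, the normalized Lebesgue measure $\mu$ on $[0,1]^{2}$ satisfies $\theta$-dimension conservation for every $\theta$: each projection $\pi_{\theta}\mu$ is absolutely continuous, so $\text{dim}(\pi_{\theta}\mu)=1$; the conditional measures $\mu^{\eta_{\theta}}_{z}$ are normalized length measures on the fibres $\pi_{\theta}^{-1}(\\{x\\})\cap[0,1]^{2}$, so they have dimension $1$; and indeed $\text{dim}(\mu)=2=1+1$.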
Feng and Hu [25] studied the exact-dimensionality properties of self-similar measures via Ledrappier-Young theory [5]. The difference is that they consider the system in symbolic space instead of Riemann space. They also provided some useful tools for this approach. Based on these, Falconer and Jin [26] studied the dimension conservation properties of random self-similar measures. Following this strategy we obtain the following result, which is important for proving Theorem 5.1.
###### Theorem 4.1.
Let $b\geq 2$ be an integer and $\gamma\in\mathbb{C}$ such that
$0<|\gamma|<1$. Let $\phi:\mathbb{R}\to\mathbb{R}$ be a $\mathbb{Z}$-periodic
Lipschitz function. If $\Delta\in\mathbb{R}\setminus\mathbb{Q}$, then there exist nonnegative constants $\beta,\,\upsilon$ such that the following holds. For Lebesgue-$\text{a.e.}\,(x,\theta)\in[0,1)^{2}$, the measure $m_{x}$ satisfies $\theta$-dimension conservation. Furthermore we have
1. (D.1)
$\text{dim}(\pi_{\theta}m_{x})=\beta$;
2. (D.2)
for $m_{x}\text{-a.e.}\,z\in\mathbb{C}$,
$\text{dim}\big{(}\,(m_{x})^{\eta_{\theta}}_{z}\,\big{)}=\upsilon$.
Recall that $\Delta=\Delta(\gamma)$ is defined by (1.4). We only consider the case $\Delta\notin\mathbb{Q}$ in the rest of the paper. The following is a corollary of Theorem 4.1, based on an observation of Hochman [27, Lemma 3.21(5)]. We offer the proof for completeness.
###### Corollary 4.1.
If $\alpha\geq\beta+1$, then $\alpha\geq 2.$
###### Proof.
By Theorem 4.1 there exist $x,\,\theta_{1},\,\theta_{2}\in[0,1)$ such that
$\theta_{1}\neq\theta_{2}$ and the following holds:
$m_{x},\,\pi_{\theta_{j}}m_{x}$ are exact-dimensional and
$\text{dim}(m_{x})=\alpha$, $\text{dim}(\pi_{\theta_{j}}m_{x})=\beta$,
$j=1,2.$ For any $n\in\mathbb{Z}_{+}$, since each atom of
$\pi_{\theta_{1}}^{-1}(\mathcal{L}^{\mathbb{R}}_{n})\bigvee\pi_{\theta_{2}}^{-1}(\mathcal{L}^{\mathbb{R}}_{n})$
intersects $O_{\theta_{1},\,\theta_{2}}(1)$ atoms of
$\mathcal{L}^{\mathbb{C}}_{n}$ and vice versa, we have
(4.1) $\displaystyle H(m_{x},\,\mathcal{L}^{\mathbb{C}}_{n})$
$\displaystyle=H\big{(}m_{x},\,\pi_{\theta_{1}}^{-1}(\mathcal{L}^{\mathbb{R}}_{n})\vee\pi_{\theta_{2}}^{-1}(\mathcal{L}^{\mathbb{R}}_{n})\big{)}+O_{\theta_{1},\,\theta_{2}}(1)$
$\displaystyle=H(\,m_{x},\,\pi_{\theta_{1}}^{-1}(\mathcal{L}^{\mathbb{R}}_{n}))+H\big{(}m_{x},\,\pi_{\theta_{2}}^{-1}(\mathcal{L}^{\mathbb{R}}_{n})\mid\pi_{\theta_{1}}^{-1}(\mathcal{L}^{\mathbb{R}}_{n})\big{)}+O_{\theta_{1},\,\theta_{2}}(1)$
$\displaystyle\geq
H\big{(}m_{x},\,\pi_{\theta_{1}}^{-1}(\mathcal{L}^{\mathbb{R}}_{n})\mid\pi_{\theta_{2}}^{-1}(\mathcal{L}^{\mathbb{R}}_{n})\big{)}+H\big{(}m_{x},\,\pi_{\theta_{2}}^{-1}(\mathcal{L}^{\mathbb{R}}_{n})\mid\pi_{\theta_{1}}^{-1}(\mathcal{L}^{\mathbb{R}}_{n})\big{)}+O_{\theta_{1},\,\theta_{2}}(1).$
The above also implies
$H\big{(}m_{x},\,\pi_{\theta_{1}}^{-1}(\mathcal{L}^{\mathbb{R}}_{n})\mid\pi_{\theta_{2}}^{-1}(\mathcal{L}^{\mathbb{R}}_{n})\big{)}=H(m_{x},\,\mathcal{L}^{\mathbb{C}}_{n})-H(\pi_{\theta_{2}}m_{x},\,\mathcal{L}^{\mathbb{R}}_{n})+O_{\theta_{1},\,\theta_{2}}(1)$
and
$H\big{(}m_{x},\,\pi_{\theta_{2}}^{-1}(\mathcal{L}^{\mathbb{R}}_{n})\mid\pi_{\theta_{1}}^{-1}(\mathcal{L}^{\mathbb{R}}_{n})\big{)}=H(m_{x},\,\mathcal{L}^{\mathbb{C}}_{n})-H(\pi_{\theta_{1}}m_{x},\,\mathcal{L}^{\mathbb{R}}_{n})+O_{\theta_{1},\,\theta_{2}}(1).$
Combining these with (4.1), we have
$\frac{1}{n}H(m_{x},\,\mathcal{L}^{\mathbb{C}}_{n})\geq\sum_{k=1}^{2}\bigg{(}\frac{1}{n}H(m_{x},\,\mathcal{L}^{\mathbb{C}}_{n})-\frac{1}{n}H(\pi_{\theta_{k}}m_{x},\,\mathcal{L}^{\mathbb{R}}_{n})\bigg{)}+O_{\theta_{1},\,\theta_{2}}(\frac{1}{n}).$
Letting $n$ go to infinity, Proposition 3.1 gives
$\alpha\geq 2(\alpha-\beta),$
which together with the hypothesis $\alpha\geq\beta+1$ implies $\alpha\geq 2.$ ∎
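For the reader's convenience we spell out the final implication (our addition): $\alpha\geq 2(\alpha-\beta)$ is equivalent to $2\beta\geq\alpha$, while the hypothesis $\alpha\geq\beta+1$ gives $\beta\leq\alpha-1$; hence
$\alpha\leq 2\beta\leq 2(\alpha-1),$
which rearranges to $\alpha\geq 2$.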
###### Remark 4.1.
In view of Corollary 4.1, we only need Theorem 4.1 (D.1) to
prove Theorem 5.1. For completeness, and to help the reader follow our ideas,
we still state Theorem 4.1 (D.2) and give its proof.
### 4.1. Group extension
By the classical Ledrappier-Young theory [4, 5, 19], we can only obtain the
formula (3.1). To study the dimension of $m_{x}$, it is important to
understand the projection measures $\pi_{\theta}m_{x}$ and the conditional
measures $(m_{x})^{\eta_{\theta}}_{z}$. Thus it is natural to consider the map
$\displaystyle\hat{T}:$
$\displaystyle\Sigma\times[0,1)^{2}\,\to\,\Sigma\times[0,1)^{2}$
$\displaystyle(\,\textbf{j},\,x,\,\theta)\,\mapsto\,(\,\lfloor
bx\rfloor\,\textbf{j},\,bx\mod 1,\,\theta-\Delta\mod 1)$
where $\lfloor bx\rfloor$ is the largest integer not greater than $bx$ and see
(2.3) for $\lfloor bx\rfloor\,\textbf{j}$. Recall $\mathcal{m}$ is the
Lebesgue measure on $[0,1)$. Let
$\hat{\omega}:=\nu^{\mathbb{Z}_{+}}\times\mathcal{m}\times\mathcal{m}$
and $\mathcal{B}$ be the Borel set of $\Sigma\times[0,1)^{2}$. For any metric
space $(X,\,\rho)$, denote $\mathcal{B}_{X}$ as the Borel set of $X$. Let us
consider the probability space
$(\Sigma\times[0,1)^{2},\,\mathcal{B},\,\hat{\omega})$ and we have the
following Lemma 4.1.
###### Lemma 4.1.
If $\Delta\notin\mathbb{Q}$, then $\hat{T}$ is ergodic.
###### Proof.
Let
$F:\varLambda^{\mathbb{Z}}\times[0,1)\to\varLambda^{\mathbb{Z}}\times[0,1)$ be
the map such that
$(\textbf{j},\,\theta)\,\mapsto(\tau(\textbf{j}),\,\theta-\Delta\mod 1)$ where
$\tau$ is the left shift map on $\varLambda^{\mathbb{Z}}$. Let us consider the
probability space
$(\varLambda^{\mathbb{Z}}\times[0,1),\,\mathcal{B}_{\varLambda^{\mathbb{Z}}\times[0,1)},\,\nu^{\mathbb{Z}}\times\mathcal{m})$.
It suffices to prove that $F$ is ergodic, since $\Pi\circ F=\hat{T}\circ\Pi$
where
$\Pi:\,\varLambda^{\mathbb{Z}}\times[0,1)\,\to\Sigma\times[0,1)\times[0,1)$ is
the map such that
$(\textbf{j},\,\theta)\,\mapsto\,\bigg{(}j_{0}j_{-1}\ldots,\,\sum_{k=1}^{\infty}\frac{j_{k}}{b^{k}},\,\theta\bigg{)}.$
Regard $[0,1)$ as a group with the addition
$a+b:=(a+b)\mod 1\quad\quad\forall\,a,b\in[0,1)$
as usual. Assume $F$ is not ergodic, then there exists a proper closed
subgroup $H$ of $[0,1)$ and functions $f^{\prime}\in
C(\varLambda^{\mathbb{Z}};\,H),\,h\in C(\varLambda^{\mathbb{Z}};\,[0,1))$ such
that $\Delta=h\circ\tau-f^{\prime}-h$ by [28, Theorem 5.1]. Let
$1_{\infty}=\ldots 111\ldots\in\varLambda^{\mathbb{Z}}$, a fixed point of $\tau$; evaluating
the identity at $1_{\infty}$ gives $f^{\prime}(1_{\infty})=-\Delta\in H$. Since
$\Delta\notin\mathbb{Q}$, the element $-\Delta$ generates a dense subgroup of $[0,1)$, so
the closed subgroup $H$ must be all of $[0,1)$, which contradicts the assumption that $H$ is a
proper closed subgroup of $[0,1)$. ∎
### 4.2. Proof of Theorem 4.1 (D.1)
In the rest of the paper we write $\log$ for $\log_{b}$ for convenience. Let
$R^{\phi}_{\gamma}:=\frac{2\parallel\phi\parallel_{\infty}}{1-|\gamma|}.$
Following [25], for any function $g:\Sigma\to\mathbb{R}^{d}$ and every
$\textbf{j}\in\Sigma$, $n\in\mathbb{N}$, let
(4.2)
$\mathbf{B}_{g}(\textbf{j},\,n):=g^{-1}\bigg{(}\,\mathbf{B}\big{(}g(\textbf{j}),\,R^{\phi}_{\gamma}|\gamma|^{n}\,\big{)}\,\bigg{)}.$
Let $\tau$ be the left shift on $\Sigma$ as usual. Let $\mathcal{P}$ be the
partition of $\Sigma$ such that
$\mathcal{P}=\\{\,[k]\,:\,k\in\varLambda\,\\}$
where $[k]=\\{\,\textbf{j}\in\Sigma:j_{1}=k\,\\}.$ It is necessary to recall
the following basic facts about $\pi_{\theta}$. For any $\theta\in\mathbb{R}$,
we have
(4.3) $\pi_{\theta}(\gamma z)=|\gamma|\pi_{\theta-\Delta}(z)\quad\quad\forall
z\in\mathbb{C}.$
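Identity (4.3) can be checked directly under the natural reading of the earlier definitions; this is our assumption, since (2.11) and (1.4) lie outside this excerpt. Taking $\pi_{\theta}(z)=\text{Re}(e^{-2\pi i\theta}z)$ and $\gamma=|\gamma|e^{2\pi i\Delta}$, we compute
$\pi_{\theta}(\gamma z)=\text{Re}\big{(}e^{-2\pi i\theta}|\gamma|e^{2\pi i\Delta}z\big{)}=|\gamma|\,\text{Re}\big{(}e^{-2\pi i(\theta-\Delta)}z\big{)}=|\gamma|\,\pi_{\theta-\Delta}(z).$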
Therefore (2.9) implies: for each $n\in\mathbb{Z}_{+}$ and
$\textbf{w}\in\varLambda^{n}$, we have
(4.4)
$\pi_{\theta}(T^{n}m_{\textbf{w}(x)})=g_{\textbf{w},\,\theta}(\pi_{\theta-n\Delta}m_{\textbf{w}(x)})$
where the function $g_{\textbf{w},\,\theta}(z):=|\gamma|^{n}z+\pi_{\theta}\circ
S_{x}(\textbf{w})$ for each $z\in\mathbb{R}$.
###### Lemma 4.2.
For any $\theta,x\in[0,1],$ and $\textbf{j}\in\Sigma$, $n\in\mathbb{N}$, we
have
(4.5) $\mathbf{B}_{\pi_{\theta}\circ
S_{x}}(\textbf{j},\,n+1)\cap\mathcal{P}(\textbf{j})=\tau^{-1}\bigg{(}\,\mathbf{B}_{\pi_{\theta-\Delta}\circ
S_{(x+j_{1})/b}}(\tau(\textbf{j}),\,n)\,\bigg{)}\cap\mathcal{P}(\textbf{j}).$
###### Proof.
For any $\textbf{i}\in\Sigma$ such that $i_{1}=j_{1}$ and $|\pi_{\theta}\circ
S_{x}(\textbf{j})-\pi_{\theta}\circ S_{x}(\textbf{i})|\leq
R^{\phi}_{\gamma}|\gamma|^{n+1}$, we have
$\bigg{|}\,\pi_{\theta}\bigg{(}\gamma
S_{(x+j_{1})/b}\circ\tau(\textbf{j})-\gamma
S_{(x+j_{1})/b}\circ\tau(\textbf{i})\bigg{)}\,\bigg{|}\leq
R^{\phi}_{\gamma}|\gamma|^{n+1}$
by the definition of function $S_{x}$ and (2.5). This implies
$\bigg{|}\,\pi_{\theta-\Delta}\bigg{(}S_{(x+j_{1})/b}\circ\tau(\textbf{j})-S_{(x+j_{1})/b}\circ\tau(\textbf{i})\bigg{)}\,\bigg{|}\leq
R^{\phi}_{\gamma}|\gamma|^{n}$
by (4.3). Combining the above with (4.2), we have
$\mathbf{B}_{\pi_{\theta}\circ
S_{x}}(\textbf{j},\,n+1)\cap\mathcal{P}(\textbf{j})\subset\tau^{-1}\bigg{(}\mathbf{B}_{\pi_{\theta-\Delta}\circ
S_{(x+j_{1})/b}}(\tau(\textbf{j}),\,n)\bigg{)}\cap\mathcal{P}(\textbf{j}).$
The reverse inclusion follows by the same argument. ∎
Let $\eta$ be the partition of $\Sigma\times[0,1)^{2}$ such that
$\eta:=\big{\\{}\,\Sigma\times\\{x\\}\times\\{\theta\\}:\,\theta,x\in[0,1)\,\big{\\}}$
and $\hat{\eta}$ be the sub-$\sigma$-algebra of $\mathcal{B}$ generated by
$\eta$. Let $\tilde{\mathcal{P}}$ be the sub-$\sigma$-algebra of $\mathcal{B}$
generated by partition
$\big{\\{}\,[k]\times[0,1)^{2}:\,k\in\varLambda\,\big{\\}}.$
Define the function
$\Phi:\Sigma\times[0,1)^{2}\to\mathbb{R}\quad\quad\Phi(\,\textbf{j},\,x,\,\theta)=\pi_{\theta}\circ
S_{x}(\textbf{j}).$
Let $\mathcal{B}_{\Phi}:=\Phi^{-1}\big{(}\mathcal{B}_{\mathbb{R}}\big{)}$ be
the sub-$\sigma$-algebra of $\mathcal{B}$. The following is an immediate
consequence of [25, Proposition 3.5].
###### Lemma 4.3.
For $\hat{\omega}$-a.e. $(\,\textbf{j},x,\theta)\in\Sigma\times[0,1)^{2}$ we
have
(4.6)
$\lim_{n\to\infty}\log\frac{\nu^{\mathbb{Z}_{+}}\big{(}\mathbf{B}_{\pi_{\theta}\circ
S_{x}}(\textbf{j},\,n)\cap\mathcal{P}(\textbf{j})\big{)}}{\nu^{\mathbb{Z}_{+}}\big{(}\mathbf{B}_{\pi_{\theta}\circ
S_{x}}(\textbf{j},\,n)\big{)}}=-\mathbf{I}_{\hat{\omega}}\big{(}\tilde{\mathcal{P}}|\hat{\eta}\vee\mathcal{B}_{\Phi}\big{)}(\,\textbf{j},x,\theta).$
Furthermore, set
$g(\textbf{j},x,\theta)=\sup_{n\in\mathbb{N}}-\log\frac{\nu^{\mathbb{Z}_{+}}\big{(}\mathbf{B}_{\pi_{\theta}\circ
S_{x}}(\textbf{j},\,n)\cap\mathcal{P}(\textbf{j})\big{)}}{\nu^{\mathbb{Z}_{+}}\big{(}\mathbf{B}_{\pi_{\theta}\circ
S_{x}}(\textbf{j},\,n)\big{)}}.$
Then $g\geq 0$ and $g\in
L^{1}(\Sigma\times[0,1)^{2},\,\mathcal{B},\,\hat{\omega}).$
We also need the following ergodic theorem due to Maker [29].
###### Theorem 4.2.
Let $(\Omega,\,\mathcal{M},\,\mu,\,G)$ be a measure-preserving system and let
$\\{g_{n}\\}$ be integrable functions on $(\Omega,\,\mathcal{M},\,\mu)$. If
$g_{n}(x)\to g(x)$ a.e. and if $\sup_{n}|g_{n}(x)|=\overline{g}(x)$ is
integrable, then for a.e. $x$,
$\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}g_{n-k}\circ
G^{k}(x)=g_{\infty}(x),$
where $g_{\infty}(x)=\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}g\circ
G^{k}(x).$
For any $\textbf{j}\in\Sigma$ and $n\in\mathbb{Z}_{+}$, let
$\textbf{j}_{n}=j_{1}j_{2}\ldots j_{n}\in\varLambda^{n}.$
###### Lemma 4.4.
For Lebesgue-$\text{a.e.}\,(x,\theta)\in[0,1)^{2}$ the following holds. For
$\pi_{\theta}m_{x}$-a.e. $z\in\mathbb{R}$, we have
$\lim_{r\to
0}\frac{\log\bigg{(}\pi_{\theta}m_{x}\big{(}\mathbf{B}(z,\,r)\big{)}\bigg{)}}{\log
r}=\frac{H_{\hat{\omega}}\big{(}\tilde{\mathcal{P}}|\hat{\eta}\vee\mathcal{B}_{\Phi}\big{)}-\log
b}{\log|\gamma|}.$
###### Proof.
For any $(\textbf{j},\,x,\,\theta)\in\Sigma\times[0,1)^{2}$ and
$n\in\mathbb{Z}_{+}$, we have
(4.7) $\displaystyle\nu^{\mathbb{Z}_{+}}\big{(}\mathbf{B}_{\pi_{\theta}\circ
S_{x}}(\textbf{j},\,n)\big{)}$
$\displaystyle=\prod_{k=0}^{n-1}\frac{\nu^{\mathbb{Z}_{+}}\big{(}\mathbf{B}_{\pi_{\theta-k\Delta}\circ
S_{\textbf{j}_{k}(x)}}(\tau^{k}(\textbf{j}),\,n-k)\big{)}}{\nu^{\mathbb{Z}_{+}}\big{(}\mathbf{B}_{\pi_{\theta-(k+1)\Delta}\circ
S_{\textbf{j}_{k+1}(x)}}(\tau^{k+1}(\textbf{j}),\,n-k-1)\big{)}}$
$\displaystyle=\prod_{k=0}^{n-1}\frac{\nu^{\mathbb{Z}_{+}}\big{(}\mathbf{B}_{\pi_{\theta-k\Delta}\circ
S_{\textbf{j}_{k}(x)}}(\tau^{k}(\textbf{j}),\,n-k)\big{)}}{b\nu^{\mathbb{Z}_{+}}\big{(}\,\mathbf{B}_{\pi_{\theta-k\Delta}\circ
S_{\textbf{j}_{k}(x)}}(\tau^{k}(\textbf{j}),\,n-k)\cap\mathcal{P}\big{(}\tau^{k}(\textbf{j})\big{)}\,\big{)}}$
by the definition of $R^{\phi}_{\gamma}$ and (4.5) where
$\textbf{j}_{0}(x)=x$. For any $n\in\mathbb{N}$, let function
$g_{n}:\Sigma\times[0,1)^{2}\to\mathbb{Z}_{+}$ be such that
$g_{n}(\textbf{j},x,\theta)=-\log\frac{\nu^{\mathbb{Z}_{+}}\big{(}\mathbf{B}_{\pi_{\theta}\circ
S_{x}}(\textbf{j},\,n)\cap\mathcal{P}(\textbf{j})\big{)}}{\nu^{\mathbb{Z}_{+}}\big{(}\mathbf{B}_{\pi_{\theta}\circ
S_{x}}(\textbf{j},\,n)\big{)}}$
Thus we have
$\log\bigg{(}\nu^{\mathbb{Z}_{+}}\big{(}\,\mathbf{B}_{\pi_{\theta}\circ
S_{x}}(\textbf{j},\,n)\,\big{)}\bigg{)}=n\log
1/b+\sum_{k=0}^{n-1}g_{n-k}\circ\hat{T}^{k}(\textbf{j},x,\theta),$
which implies: for $\hat{\omega}$-a.e.
$(\textbf{j},\,x,\,\theta)\in\Sigma\times[0,1)^{2}$ we have
$\lim_{n\to\infty}\frac{\log\bigg{(}\nu^{\mathbb{Z}_{+}}\big{(}\mathbf{B}_{\pi_{\theta}\circ
S_{x}}(\textbf{j},\,n)\big{)}\bigg{)}}{n\log|\gamma|}=\frac{H_{\hat{\omega}}\big{(}\tilde{\mathcal{P}}|\hat{\eta}\vee\mathcal{B}_{\Phi}\big{)}-\log
b}{\log|\gamma|}$
by Lemma 4.3 and Theorem 4.2. Combining this with
$\nu^{\mathbb{Z}_{+}}\big{(}\mathbf{B}_{\pi_{\theta}\circ
S_{x}}(\textbf{j},\,n)\big{)}=\pi_{\theta}m_{x}\bigg{(}\,\mathbf{B}\big{(}\,\pi_{\theta}\circ
S_{x}(\textbf{j}),\,R^{\phi}_{\gamma}|\gamma|^{n}\,\big{)}\,\bigg{)},$
we conclude that Lemma 4.4 holds. ∎
### 4.3. Proof of Theorem 4.1
This subsection gives a sketch of the proof of the rest of Theorem
4.1, since the details are similar to Sect. 4.2 and [26, Sect. 3.1, Sect.
3.3]. Define the function $\hat{S}:\Sigma\times[0,1)^{2}\to\mathbb{C}$ such that
$(\textbf{j},\,x,\,\theta)\mapsto S(x,\,\textbf{j})$
and
$\mathcal{B}_{\hat{S}}:=\hat{S}^{-1}\big{(}\mathcal{B}_{\mathbb{C}}\big{)}$.
For any $x,\theta\in[0,1)$, define the partition of $\Sigma$ by
$\tilde{\eta}_{x,\,\theta}:=\big{\\{}\,(\pi_{\theta}\circ
S_{x})^{-1}(\,\\{\,z\,\\}\,)\,:\,z\in\mathbb{R}\,\big{\\}}.$
By [25, Proposition 3.5] we have the following result.
###### Lemma 4.5.
For $\hat{\omega}$-a.e. $(\textbf{j},x,\theta)\in\Sigma\times[0,1)^{2}$ we
have
(4.8)
$\lim_{n\to\infty}\log\frac{\nu^{\mathbb{Z}_{+}}_{(\textbf{j},\,x,\,\theta)}\big{(}\mathbf{B}_{S_{x}}(\textbf{j},\,n)\cap\mathcal{P}(\textbf{j})\big{)}}{\nu^{\mathbb{Z}_{+}}_{(\textbf{j},\,x,\,\theta)}\big{(}\mathbf{B}_{S_{x}}(\textbf{j},\,n)\big{)}}=-\mathbf{I}_{\hat{\omega}}\big{(}\tilde{\mathcal{P}}|\hat{\eta}\vee\mathcal{B}_{\hat{S}}\big{)}(\textbf{j},x,\theta)$
where
$\nu^{\mathbb{Z}_{+}}_{(\textbf{j},\,x,\,\theta)}=(\nu^{\mathbb{Z}_{+}})^{\tilde{\eta}_{x,\,\theta}}_{\textbf{j}}$
and
$\big{\\{}\,(\nu^{\mathbb{Z}_{+}})^{\tilde{\eta}_{x,\,\theta}}_{\textbf{j}}:\,\textbf{j}\in\Sigma\,\big{\\}}$
is the canonical system of conditional measures with respect to
$\tilde{\eta}_{x,\,\theta}$. Furthermore, set
$h(\textbf{j},x,\theta)=\sup_{n\in\mathbb{N}}-\log\frac{\nu^{\mathbb{Z}_{+}}_{(\textbf{j},\,x,\,\theta)}\big{(}\mathbf{B}_{S_{x}}(\textbf{j},\,n)\cap\mathcal{P}(\textbf{j})\big{)}}{\nu^{\mathbb{Z}_{+}}_{(\textbf{j},\,x,\,\theta)}\big{(}\mathbf{B}_{S_{x}}(\textbf{j},\,n)\big{)}}.$
Then $h\geq 0$ and $h\in
L^{1}(\Sigma\times[0,1)^{2},\,\mathcal{B},\,\hat{\omega}).$
###### Lemma 4.6.
For any $x\in[0,1)$ and $\textbf{j}\in\Sigma$, $n\in\mathbb{N}$, we have
(4.9)
$\mathbf{B}_{S_{x}}(\textbf{j},\,n+1)\cap\mathcal{P}(\textbf{j})=\tau^{-1}\bigg{(}\mathbf{B}_{S_{(x+j_{1})/b}}(\tau(\textbf{j}),\,n)\bigg{)}\cap\mathcal{P}(\textbf{j}).$
###### Proof.
The proof is similar to that of Lemma 4.2. ∎
###### Lemma 4.7.
For every $x,\theta\in[0,1)$ and $A\in\mathcal{B}_{\Sigma}$, for
$\nu^{\mathbb{Z}_{+}}$-a.e. $\textbf{j}\in\Sigma,$
$(\nu^{\mathbb{Z}_{+}})^{\tilde{\eta}_{x,\,\theta}}_{\textbf{j}}(A)=\lim_{n\to\infty}\frac{\nu^{\mathbb{Z}_{+}}\bigg{(}A\cap\mathbf{B}_{\pi_{\theta}\circ
S_{x}}(\textbf{j},\,n)\bigg{)}}{\nu^{\mathbb{Z}_{+}}\bigg{(}\mathbf{B}_{\pi_{\theta}\circ
S_{x}}(\textbf{j},\,n)\bigg{)}}$
###### Proof.
The proof is similar to [26, Lemma 3.1]. ∎
###### Lemma 4.8.
For Lebesgue-$\text{a.e.}\,(x,\theta)\in[0,1)^{2}$, for $m_{x}$-a.e.
$z\in\mathbb{C}$,
$\lim_{r\to
0}\frac{\log\bigg{(}(m_{x})^{\eta_{\theta}}_{z}(\mathbf{B}(z,\,r))\bigg{)}}{\log
r}=\frac{H_{\hat{\omega}}\big{(}\tilde{\mathcal{P}}|\hat{\eta}\vee\mathcal{B}_{\hat{S}}\big{)}-H_{\hat{\omega}}\big{(}\tilde{\mathcal{P}}|\hat{\eta}\vee\mathcal{B}_{\Phi}\big{)}}{\log|\gamma|}.$
We only give a sketch of the proof here; see [26, Section 3.3] for details if the
reader is unfamiliar with Ledrappier-Young theory.
###### Proof.
Combining Lemma 4.5, Lemma 4.3 with Lemma 4.6 and Lemma 4.7 the following
holds: for every $k\in\mathbb{Z}_{+}$, for $\hat{\omega}$-a.e.
$(\textbf{j},x,\theta)\in\Sigma\times[0,1)^{2},$ we have
$\nu^{\mathbb{Z}_{+}}_{(\textbf{j},\,x,\,\theta)}\big{(}\mathbf{B}_{S_{x}}(\textbf{j},\,k)\cap\mathcal{P}(\textbf{j})\big{)}=\nu^{\mathbb{Z}_{+}}_{\hat{T}(\textbf{j},\,x,\,\theta)}\big{(}\mathbf{B}_{S_{\frac{x+j_{1}}{b}}}(\tau(\textbf{j}),\,k-1)\big{)}\cdot\text{exp}\bigg{(}-\mathbf{I}_{\hat{\omega}}\big{(}\tilde{\mathcal{P}}|\hat{\eta}\vee\mathcal{B}_{\Phi}\big{)}(\textbf{j},x,\theta)\bigg{)}.$
Thus we have
$\displaystyle\nu^{\mathbb{Z}_{+}}_{(\textbf{j},\,x,\,\theta)}\big{(}\mathbf{B}_{S_{x}}(\textbf{j},\,n)\cap\mathcal{P}(\textbf{j})\big{)}$
$\displaystyle=\prod_{k=0}^{n-1}\bigg{(}\frac{\nu^{\mathbb{Z}_{+}}_{\hat{T}(\textbf{j},\,x,\,\theta)}\big{(}\mathbf{B}_{S_{\textbf{j}_{k}(x)}}(\tau^{k}(\textbf{j}),\,n-k)\big{)}}{\nu^{\mathbb{Z}_{+}}_{\hat{T}(\textbf{j},\,x,\,\theta)}\big{(}\mathbf{B}_{S_{\textbf{j}_{k}(x)}}(\tau^{k}(\textbf{j}),\,n-k)\cap\mathcal{P}(\tau^{k}(\textbf{j}))\big{)}}\bigg{)}\cdot\text{exp}\bigg{(}-\mathbf{I}_{\hat{\omega}}\big{(}\tilde{\mathcal{P}}|\hat{\eta}\vee\mathcal{B}_{\Phi}\big{)}\circ\hat{T}^{k}(\textbf{j},x,\theta)\bigg{)},$
which implies, for $\hat{\omega}$-a.e.
$(\textbf{j},\,x,\,\theta)\in\Sigma\times[0,1)^{2},$ we have
$\lim_{n\to\infty}\frac{\log\nu^{\mathbb{Z}_{+}}_{(\textbf{j},\,x,\,\theta)}\big{(}\mathbf{B}_{S_{x}}(\textbf{j},\,n)\cap\mathcal{P}(\textbf{j})\big{)}}{n\log|\gamma|}=\frac{H_{\hat{\omega}}\big{(}\tilde{\mathcal{P}}|\hat{\eta}\vee\mathcal{B}_{\hat{S}}\big{)}-H_{\hat{\omega}}\big{(}\tilde{\mathcal{P}}|\hat{\eta}\vee\mathcal{B}_{\Phi}\big{)}}{\log|\gamma|}$
by Theorem 4.2, Lemma 4.5 and the Birkhoff Ergodic Theorem. ∎
###### Lemma 4.9.
For Lebesgue-$\text{a.e.}\,(x,\theta)\in[0,1)^{2}$, for $m_{x}$-a.e.
$z\in\mathbb{C}$,
$\lim_{r\to 0}\frac{\log\big{(}m_{x}(\mathbf{B}(z,\,r))\big{)}}{\log
r}=\frac{H_{\hat{\omega}}\big{(}\tilde{\mathcal{P}}|\hat{\eta}\vee\mathcal{B}_{\hat{S}}\big{)}-\log
b}{\log|\gamma|}.$
###### Proof.
The proof is similar to that of Lemma 4.4, using Lemma 4.6. ∎
###### The proof of Theorem 4.1.
Combining Lemma 4.9 and Lemma 4.4 with Lemma 4.8, we conclude that Theorem 4.1 holds. ∎
## 5\. The inverse theorem for entropy
The main result of this section is Theorem 5.1, which is a corollary of [27,
Theorem 2.8] combined with Theorem 4.1. We will use Theorem 5.1 in Sect. 6 to study
convolutions of $m_{x}$ with arbitrary probability measures on $\mathbb{C}$.
###### Theorem 5.1.
Let $b\geq 2$ be an integer and $\gamma\in\mathbb{C}$ such that $0<|\gamma|<1$. Let
$\phi:\mathbb{R}\to\mathbb{R}$ be a $\mathbb{Z}$-periodic Lipschitz function
such that condition (H) holds. If $\alpha<2$ and
$\Delta\in\mathbb{R}\setminus\mathbb{Q}$, then the following holds.
For any $\varepsilon>0$ and $R>0$, there are
$\delta_{6}=\delta_{6}(\varepsilon,\,R,\,\phi,\,\gamma),\,\delta_{7}=\delta_{7}(\varepsilon,\,R,\,\phi,\,\gamma)>0$
with the following properties: for every
$m>M(\varepsilon,\,R,\,\phi,\,\gamma)$,
$n>N(\varepsilon,\,R,\,\phi,\,\gamma,\,m)$ and $x\in[0,1)$, if
$\mu\in\mathscr{P}\big{(}[-R,R]^{2}\big{)}$ is a measure such that
$\frac{1}{n}H(\mu,\,\mathcal{L}_{n})>\varepsilon,$
and if
$\mathbb{P}^{\,m_{x}}_{0\leq
i<n}\left(\frac{1}{m}H((m_{x})^{x,\,i},\mathcal{L}^{\mathbb{C}}_{m})<\alpha+\delta_{6}\right)>1-\delta_{6},$
then we have
$\frac{1}{n}H(\mu\ast
m_{x},\,\mathcal{L}_{n})\geq\frac{1}{n}H(m_{x},\,\mathcal{L}_{n})+\delta_{7}.$
### 5.1. The projection theorem for entropy
This subsection is devoted to analyzing the uniform lower bound of
$\frac{1}{n}H(\pi_{\theta}m_{x},\,\mathcal{L}_{n})$ over all
$x,\,\theta\in[0,1]$ when $n$ is large enough. See Lemma 5.2 for details. We
shall first introduce the following Lemma 5.1, whose idea is inspired by [22,
Lemma 5.1].
###### Lemma 5.1.
If condition (H) holds and $\Delta\in\mathbb{R}\setminus\mathbb{Q}$, then the
measure $\pi_{\theta}m_{x}$ has no atom for each $x,\,\theta\in[0,1].$
###### Proof.
Assume the lemma fails. Since the support of $\pi_{\theta}m_{x}$ is contained
in the ball $\mathbf{B}(0,\frac{\parallel\phi\parallel_{\infty}}{1-|\gamma|})$
for all $x,\,\theta\in[0,1]$, there are $x_{1},\,\theta_{1}\in[0,1]$ and
$p\in\mathbb{R}$ such that
(5.1)
$\pi_{\theta_{1}}m_{x_{1}}(\\{p\\})=\max_{x,\,\theta\in[0,1],\,z\in\mathbb{R}}\pi_{\theta}m_{x}(\\{z\\})>0$
by the compactness of the probability measures $\pi_{\theta}m_{x}$ in the weak
star topology and the continuity of the function $\pi_{\theta}\circ S_{x}$.
Therefore, for any $n\in\mathbb{N}$, (2.8) and (4.4) imply
(5.2)
$\pi_{\theta_{1}}m_{x_{1}}(\\{p\\})=\frac{1}{b^{n}}\sum_{\textbf{w}\in\varLambda^{n}}\pi_{\theta_{1}-n\Delta}m_{\textbf{w}(x_{1})}(\\{p_{\textbf{w}}\\})$
where $p_{\textbf{w}}=\frac{p-\pi_{\theta_{1}}\circ
S_{x_{1}}(\textbf{w})}{|\gamma|^{n}}.$ Combining (5.1) with (5.2), we
have
$\pi_{\theta_{1}-n\Delta}m_{\textbf{w}(x_{1})}(\\{p_{\textbf{w}}\\})=\pi_{\theta_{1}}m_{x_{1}}(\\{p\\})>0\quad\quad\forall\,\textbf{w}\in\varLambda^{n}.$
This implies
(5.3) $M_{0}:=\sup_{\textbf{w}\in\varLambda^{\\#}}|p_{\textbf{w}}|<\infty,$
since the supports of the family of probability measures
$\\{\,\pi_{\theta}m_{x}\,\\}_{\theta,\,x\in[0,1]}$ are uniformly bounded in
$\mathbb{R}$.
Since $S(x,0_{\infty})-S(x,1_{\infty})\not\equiv 0$, there are
$x_{2},\,\theta_{2}\in[0,1)$ such that
(5.4) $\pi_{\theta_{2}}\circ S_{x_{2}}(0_{\infty})-\pi_{\theta_{2}}\circ
S_{x_{2}}(1_{\infty})\neq 0.$
Let $n_{t}\in\mathbb{N}$ and
$\textbf{w}^{n_{t}}\in\varLambda^{n_{t}},\,t=1,2,\ldots$ be such that
$\lim_{t\to\infty}(\,\theta_{1}-n_{t}\Delta\mod 1)=\theta_{2}$ and
$\lim_{t\to\infty}\textbf{w}^{n_{t}}(x_{1})=x_{2}$; such choices exist since
$\Delta\notin\mathbb{Q}$ and $x_{2}$ can be approximated to within $1/b^{n_{t}}$ by the set
$\\{\,\textbf{w}(x_{1}):\,\textbf{w}\in\varLambda^{n_{t}}\\}$. For any
$t,\,m\in\mathbb{Z}_{+}$, by the definition of $p_{\textbf{w}}$ we have
$\pi_{\theta_{1}}\circ
S_{x_{1}}(\textbf{w}^{n_{t}}0_{m})-\pi_{\theta_{1}}\circ
S_{x_{1}}(\textbf{w}^{n_{t}}1_{m})=|\gamma|^{n_{t}+m}(p_{\textbf{w}^{n_{t}}1_{m}}-p_{\textbf{w}^{n_{t}}0_{m}}),$
which implies that
$\big{|}\pi_{(\theta_{1}-n_{t}\Delta\mod 1)}\circ
S_{\textbf{w}^{n_{t}}(x_{1})}(0_{m})-\pi_{(\theta_{1}-n_{t}\Delta\mod 1)}\circ
S_{\textbf{w}^{n_{t}}(x_{1})}(1_{m})\big{|}\leq 2|\gamma|^{m}M_{0}$
where $1_{m}=11\ldots 1,\,0_{m}=00\ldots 0\in\varLambda^{m}.$ This contradicts
(5.4) as $m,\,t$ go to infinity. ∎
For any $n\in\mathbb{N}$, let $\tilde{n}$ be the unique integer such that
$b^{-\tilde{n}}\leq|\gamma|^{n}<b^{-\tilde{n}+1}$.
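Equivalently (our reformulation of the defining inequalities), $\tilde{n}=\lceil n\log_{b}(1/|\gamma|)\rceil$. For example, with $b=2$, $|\gamma|=1/3$ and $n=4$ we get $\tilde{n}=\lceil 4\log_{2}3\rceil=7$, and indeed
$2^{-7}\leq(1/3)^{4}<2^{-6}.$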
###### Lemma 5.2.
If the condition (H) holds and $\Delta\notin\mathbb{Q}$, we have
$\lim_{n\to\infty}\inf_{x,\,\theta\in[0,1]}\frac{1}{n}H(\pi_{\theta}m_{x},\,\mathcal{L}_{n})=\beta.$
###### Proof.
We shall first prove the following claim.
###### Claim 1.
For any $x,\,\theta\in[0,1]$ and $\ell,\,n\in\mathbb{Z}_{+}$ such that
$\ell,n/\ell$ is large enough, we have
$\frac{1}{n}H(\pi_{\theta}m_{x},\,\mathcal{L}_{n})\geq\bigg{(}1+O\big{(}\frac{1}{\tilde{\ell}}+\frac{\tilde{\ell}}{n}\big{)}\bigg{)}\cdot\frac{1}{t_{n,\ell}}\sum_{k=0}^{t_{n,\ell}-1}\bigg{(}\frac{1}{b^{\tilde{\ell}}}\sum_{\textbf{w}\in\varLambda^{\tilde{\ell}}}\frac{1}{\tilde{\ell}}H(\pi_{\theta-k\ell\Delta}m_{\textbf{w}(0)},\,\mathcal{L}_{\tilde{\ell}})\bigg{)}+O(\frac{1}{\tilde{\ell}}+\frac{\tilde{\ell}}{n})$
where $t_{n,\ell}=\max\\{t\in\mathbb{N}:\,\widetilde{t\ell}\leq n\\}$.
This is enough to conclude the proof. Indeed, for any $\ell\in\mathbb{N}$ such
that $\tilde{\ell}>0$, let $f_{\ell}:[0,1)\to\mathbb{R}$ be the function such that
$f_{\ell}(\theta):=\frac{1}{b^{\tilde{\ell}}}\sum_{\textbf{w}\in\varLambda^{\tilde{\ell}}}\frac{1}{\tilde{\ell}}H(\pi_{\theta}m_{\textbf{w}(0)},\,\mathcal{L}_{\tilde{\ell}}).$
Since $\Delta\in\mathbb{R}\setminus\mathbb{Q}$ implies
$\ell\Delta\notin\mathbb{Q}$, we have
(5.5)
$\lim_{n\to\infty}\inf_{\theta\in[0,1)}\frac{1}{n}\sum_{k=0}^{n-1}f_{\ell}(\theta-k\ell\Delta)=\int_{[0,1)}f_{\ell}(s)ds$
by unique ergodicity of the irrational rotation [23, Theorem 6.19]. Also we have
$\int_{[0,1)}f_{\ell}(s)ds=\int_{[0,1)^{2}}\frac{1}{\tilde{\ell}}H(\pi_{\theta}m_{x},\,\mathcal{L}_{\tilde{\ell}})dxd\theta+O(\frac{1}{\tilde{\ell}}).$
Combining this with
$\lim_{\ell\to\infty}\int_{[0,1)^{2}}\frac{1}{\tilde{\ell}}H(\pi_{\theta}m_{x},\,\mathcal{L}_{\tilde{\ell}})dxd\theta=\beta$
by Proposition 3.1, Theorem 4.1 and the Lebesgue Dominated Convergence Theorem, we have
$\lim_{\ell\to\infty}\int_{[0,1)}f_{\ell}(s)ds=\beta.$
Combining this with (5.5) and Claim 1, we conclude that Lemma 5.2 holds.
Now let us prove Claim 1. Since
$n-\widetilde{t_{n,\ell}\ell}=O_{b,\,\gamma}(\tilde{\ell})$, we have
$\frac{1}{n}H(\pi_{\theta}m_{x},\,\mathcal{L}_{n})=\frac{1}{n}\sum_{k=0}^{t_{n,\ell}-1}H(\pi_{\theta}m_{x},\,\mathcal{L}_{\widetilde{(k\ell+\ell)}}|\mathcal{L}_{\widetilde{(k\ell)}})+O(\frac{\tilde{\ell}}{n}).$
By the concavity of conditional entropy, this implies
(5.6)
$\frac{1}{n}H(\pi_{\theta}m_{x},\,\mathcal{L}_{n})\geq\frac{1}{n}\sum_{k=0}^{t_{n,\ell}-1}\frac{1}{b^{k\ell}}\sum_{\textbf{w}\in\varLambda^{k\ell}}H(\pi_{\theta}(T^{k\ell}m_{\textbf{w}(x)}),\,\mathcal{L}_{\widetilde{(k\ell+\ell)}}|\mathcal{L}_{\widetilde{k\ell}})+O(\frac{\tilde{\ell}}{n}).$
Also (4.4) implies the measure $\pi_{\theta}(T^{k\ell}m_{\textbf{w}(x)})$ is
supported in an interval of length $O(b^{-\widetilde{k\ell}})$. Thus we have
$H(\pi_{\theta}(T^{k\ell}m_{\textbf{w}(x)}),\,\mathcal{L}_{\widetilde{(k\ell+\ell)}}|\mathcal{L}_{\widetilde{k\ell}})=H(\pi_{\theta}(T^{k\ell}m_{\textbf{w}(x)}),\,\mathcal{L}_{\widetilde{(k\ell+\ell)}})+O(1).$
Combining this with
$\widetilde{(k\ell+\ell)}=\widetilde{k\ell}+\tilde{\ell}+O_{b,\,\gamma}(1)$
and (4.4), we have
$H(\pi_{\theta}(T^{k\ell}m_{\textbf{w}(x)}),\,\mathcal{L}_{\widetilde{(k\ell+\ell)}}|\mathcal{L}_{\widetilde{k\ell}})=H(\pi_{\theta-k\ell\Delta}m_{\textbf{w}(x)},\,\mathcal{L}_{\tilde{\ell}})+O(1).$
Therefore the following holds:
$\frac{1}{n}H(\pi_{\theta}m_{x},\,\mathcal{L}_{n})\geq\frac{1}{n}\sum_{k=0}^{t_{n,\ell}-1}\frac{1}{b^{k\ell}}\sum_{\textbf{w}\in\varLambda^{k\ell}}H(\pi_{\theta-k\ell\Delta}m_{\textbf{w}(x)},\,\mathcal{L}_{\tilde{\ell}})+O(\frac{\tilde{\ell}}{n}+\frac{t_{n,\ell}}{n}).$
Since $\parallel\pi_{\theta-k\ell\Delta}\circ
S_{\textbf{w}(x)}-\pi_{\theta-k\ell\Delta}\circ
S_{(\tau^{k\ell-\tilde{\ell}}\textbf{w})(0)}\parallel_{\infty}=O(b^{-\tilde{\ell}})$
for any $k\geq L_{0}(b,\gamma)$, we have
$\frac{1}{n}H(\pi_{\theta}m_{x},\,\mathcal{L}_{n})\geq\frac{1}{n}\sum_{k=0}^{t_{n,\ell}-1}\frac{1}{b^{\tilde{\ell}}}\sum_{\textbf{w}\in\varLambda^{\tilde{\ell}}}H(\pi_{\theta-k\ell\Delta}m_{\textbf{w}(0)},\,\mathcal{L}_{\tilde{\ell}})+O(\frac{\tilde{\ell}}{n}+\frac{t_{n,\ell}}{n}).$
Combining this with $O(\frac{t_{n,\ell}}{n})=O(\frac{1}{\tilde{\ell}})$ and
$\frac{1}{n}=\frac{t_{n,\ell}\tilde{\ell}}{n}\cdot\frac{1}{t_{n,\ell}}\cdot\frac{1}{\tilde{\ell}}=\bigg{(}1+O(\frac{1}{\tilde{\ell}}+\frac{\tilde{\ell}}{n})\bigg{)}\cdot\frac{1}{t_{n,\ell}}\frac{1}{\tilde{\ell}},$
the claim follows. ∎
### 5.2. An inverse theorem for entropy
In this subsection, we introduce the inverse theorem for convolutions on
$\mathbb{R}^{d}$ due to Hochman [27]. Before that, it is necessary to recall
the following notation from [27].
For a linear subspace $V\leq\mathbb{C}$, let $W=V^{\perp}$ be the orthogonal
complement of $V$ and let $\pi_{V}$ be the orthogonal projection on $V$. Thus,
if $V=\\{\,te^{2\pi i\theta}\,\\}_{t\in\mathbb{R}}$ for some
$\theta\in\mathbb{R}$, then we have $\pi_{V}=\pi_{\theta}.$ For any set
$A\subset\mathbb{C}$ and $\varepsilon>0$, write the $\varepsilon$-neighborhood of
$A$ as
$A^{(\varepsilon)}:=\\{z\in\mathbb{C}:\,d(z,A)<\varepsilon\\}.$
###### Definition 5.1.
Let $V\leq\mathbb{C}$ be a linear subspace and $\varepsilon>0$. A measure
$\mu\in\mathscr{P}(\mathbb{C})$ is $(V,\varepsilon)$-concentrated if there is
a translate $W$ of $V$ such that $\mu(W^{(\varepsilon)})\geq 1-\varepsilon.$
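For instance (our example): if $\text{supp}(\mu)$ is contained in a single translate $W$ of $V$, then $\mu(W^{(\varepsilon)})=1\geq 1-\varepsilon$, so $\mu$ is $(V,\varepsilon)$-concentrated for every $\varepsilon>0$; in particular, a point mass is $(\\{0\\},\varepsilon)$-concentrated.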
We also need the following definition.
###### Definition 5.2.
Let $V\leq\mathbb{C}$ be a linear subspace, let $W=V^{\perp}$ be its orthogonal
complement, and let $\varepsilon>0$. A probability measure
$\mu\in\mathscr{P}(\mathbb{C})$ is $(V,\varepsilon)$-saturated at scale $m$,
or $(V,\varepsilon,m)$-saturated, if
$\frac{1}{m}H(\mu,\mathcal{L}_{m})\geq\frac{1}{m}H(\pi_{W}\mu,\mathcal{L}_{m})+\text{dim}(V)-\varepsilon.$
###### Remark 5.1.
It is worth noting that for the special case $V=\mathbb{C}$, being
$(V,\varepsilon,m)$-saturated implies $\frac{1}{m}H(\mu,\mathcal{L}_{m})\geq
2-\varepsilon$. Also, being $(V,\varepsilon)$-saturated at scale $m$ is a trivial condition if
$V=\\{0\\}$.
The following Theorem 5.2 is the inverse theorem of Hochman, which serves as a
basic tool to study the dimension of $m_{x}$ in our paper. See [27, Theorem
2.8].
###### Theorem 5.2 (Hochman).
For any $R,\varepsilon>0$ and $m\in\mathbb{N}$ there is a
$\delta_{0}=\delta_{0}(\varepsilon,\,R,\,m)>0$ such that for every
$n>N_{0}(\varepsilon,\,R,\,\delta_{0},\,m)$, the following holds: if
$\mu,\zeta\in\mathscr{P}([-R,R]^{2})$ and
$\frac{1}{n}H(\mu\ast\zeta,\mathcal{L}_{n})<\frac{1}{n}H(\mu,\mathcal{L}_{n})+\delta_{0},$
then there exists a sequence $V_{0},\ldots,V_{n-1}\leq\mathbb{C}$ of subspaces
such that
(5.7) $\mathbb{P}^{\,\mu\times\zeta}_{0\leq
i<n}\left(\,\begin{matrix}\mu^{x,\,i}\,\text{is}\,(V_{i},\,\varepsilon,\,m)-\text{saturated
and}\\\
\zeta^{y,\,i}\,\text{is}\,(V_{i},\,\varepsilon)-\text{concentrated}\end{matrix}\,\right)>1-\varepsilon.$
Note that the definitions of $\mu^{x,\,i},\,\zeta^{y,\,i}$ are from (3.2).
### 5.3. The proof of Theorem 5.1
We shall first introduce the following Lemma 5.3, which is an important
observation for proving Theorem 5.1.
###### Lemma 5.3.
Assume the condition (H) holds and $\Delta\in\mathbb{R}\setminus\mathbb{Q}.$
Then for any $\delta,\,\varepsilon>0$, $m\geq M_{2}(\varepsilon,\delta),$ and
$k\geq K_{1}(\varepsilon,\delta,m)$, the following holds:
(5.8)
$\inf_{x,\,\theta\in[0,1]}\mathbb{P}^{\,m_{x}}_{i=k}\bigg{(}\,\frac{1}{m}H\big{(}\pi_{\theta}(m_{x})^{z,\,i},\mathcal{L}_{m}\big{)}\geq\beta-\varepsilon\,\bigg{)}\geq
1-\delta.$
Before giving the proof, we shall introduce the following results [2, Lemma
3.3] and [27, Lemma 3.4]. Recall that the total variation distance between
$\mu,\eta\in\mathscr{P}(\mathbb{C})$ is
$\parallel\mu-\eta\parallel=\sup_{A\in\mathscr{B}(\mathbb{C})}|\,\mu(A)-\eta(A)|.$
###### Lemma 5.4.
For every $\varepsilon>0$ there exists a
$\delta_{1}=\delta_{1}(\varepsilon)>0$ with the following property. For any
$0<\delta\leq\delta_{1},$ suppose that a probability measure
$\eta\in\mathscr{P}(\mathbb{C})$ can be written as a convex combination
$\eta=(1-\delta)\eta^{\prime}+\delta\eta^{\prime\prime}$. Then for every $k$,
we have
$\mathbb{P}^{\,\eta}_{i=k}\big{(}z\,:\,\parallel\eta_{z,\,i}-\eta^{\prime}_{z,\,i}\parallel<\varepsilon\big{)}>1-\varepsilon.$
In fact, [2, Lemma 3.3] only dealt with the case
$\eta\in\mathscr{P}(\mathbb{R})$; however, the proof also works for all
$\eta\in\mathscr{P}(\mathbb{R}^{d})$.
###### Lemma 5.5.
If $\mathcal{A}$ is a partition of $\mathbb{R}^{d}$, and if
$\mu,\eta\in\mathscr{P}(\mathbb{R}^{d})$ are supported on at most $k$ atoms of the
partition $\mathcal{A}$ and $\parallel\mu-\eta\parallel<\varepsilon,$ then
$|H(\mu,\,\mathcal{A})-H(\eta,\,\mathcal{A})|<2\varepsilon\log_{b}k+2H(\frac{1}{2}\varepsilon)$
where $H(\frac{1}{2}\varepsilon)$ is defined by (3.3).
Notation. For any $n\in\mathbb{N}$, let $\hat{n}$ be the unique integer such
that
(5.9) $|\gamma|^{\hat{n}}\leq b^{-n}<|\gamma|^{\hat{n}-1}$
as [22] did. We also need the following corollary of Lemma 5.1.
###### Corollary 5.1.
If the condition (H) holds, then for any $\varepsilon>0$, there is a
$\delta_{2}=\delta_{2}(\varepsilon)>0$ such that the following holds. For any
$x\in[0,1]$ and $n\in\mathbb{N}$, we have
(5.10)
$m_{x}\big{(}\partial\mathcal{L}_{n}^{(\delta_{2}/b^{n})}\big{)}<\varepsilon$
where the set
$\partial\mathcal{L}_{n}:=\bigcup_{k_{1},k_{2}\in\mathbb{Z}}\bigg{\\{}z\in\mathbb{C}:\,\text{Re}(z)=\frac{k_{1}}{b^{n}}\,\text{or}\,\text{Im}(z)=\frac{k_{2}}{b^{n}}\bigg{\\}}.$
###### Proof.
By Lemma 5.1 there is $\delta^{\prime}>0$ such that
(5.11)
$\sup_{x,\,\theta\in[0,1],\,z\in\mathbb{R}}\pi_{\theta}m_{x}\big{(}\mathbf{B}(z,\delta^{\prime})\big{)}\leq\frac{\varepsilon}{2},$
since the probability measures $\pi_{\theta}m_{x}$ are compact in the weak
star topology and the functions $\pi_{\theta}\circ S_{x}(\textbf{j})$ are
continuous. Recall
$R^{\phi}_{\gamma}=\frac{2\parallel\phi\parallel_{\infty}}{1-|\gamma|}.$ There
are constants $\ell\in\mathbb{N}$ and $c_{\ell}>0$ such that
(5.12)
$c_{\ell}>\frac{1}{b^{n}|\gamma|^{\hat{n}+\ell}}>R^{\phi}_{\gamma}\quad\quad\forall
n\in\mathbb{N}.$
For any set $A\subset\mathbb{C}$ and $z_{1},\,z_{2}\in\mathbb{C}$, denote the
set
$z_{1}A+z_{2}=\big{\\{}\,z_{1}\cdot z+z_{2}\,:\,z\in A\,\\}.$
Let $\delta_{2}=\delta^{\prime}/c_{\ell}$. For any $n\in\mathbb{N}$, combining
(2.8) with (2.9) we have
(5.13)
$m_{x}\big{(}\partial\mathcal{L}_{n}^{(\delta_{2}/b^{n})}\big{)}=\frac{1}{b^{\hat{n}+\ell}}\sum_{\textbf{w}\in\varLambda^{\hat{n}+\ell}}m_{\textbf{w}(x)}\bigg{(}\frac{\partial\mathcal{L}_{n}^{(\delta_{2}/b^{n})}-S(x,\textbf{w})}{\gamma^{\hat{n}+\ell}}\bigg{)}.$
Also the definition of $\partial\mathcal{L}_{n}$ and (5.12) imply that: for
each $\textbf{w}\in\varLambda^{\hat{n}+\ell}$ such that
$m_{\textbf{w}(x)}\bigg{(}\frac{\partial\mathcal{L}_{n}^{(\delta_{2}/b^{n})}-S(x,\textbf{w})}{\gamma^{\hat{n}+\ell}}\bigg{)}>0,$
there are $k_{1}^{\textbf{w}},\,k_{2}^{\textbf{w}}\in\mathbb{Z}$ such that
$\bigg{(}\frac{\partial\mathcal{L}_{n}-S(x,\textbf{w})}{\gamma^{\hat{n}+\ell}}\bigg{)}\bigcap\text{supp}\big{(}m_{\textbf{w}(x)}\big{)}\subset\frac{\big{\\{}z\in\mathbb{C}:\,\text{Re}(z)=\frac{k_{1}^{\textbf{w}}}{b^{n}}\,\text{or}\,\text{Im}(z)=\frac{k_{2}^{\textbf{w}}}{b^{n}}\big{\\}}-S(x,\textbf{w})}{\gamma^{\hat{n}+\ell}},$
which implies
$\bigg{(}\frac{\partial\mathcal{L}_{n}^{(\delta_{2}/b^{n})}-S(x,\textbf{w})}{\gamma^{\hat{n}+\ell}}\bigg{)}\bigcap\text{supp}\big{(}m_{\textbf{w}(x)}\big{)}\subset\pi_{\theta_{\textbf{w}}}^{-1}\bigg{(}\mathbf{B}(z_{1}^{\textbf{w}},\frac{\delta_{2}}{b^{n}|\gamma|^{\hat{n}+\ell}})\bigg{)}\bigcup\pi_{\theta^{\prime}_{\textbf{w}}}^{-1}\bigg{(}\mathbf{B}(z_{2}^{\textbf{w}},\frac{\delta_{2}}{b^{n}|\gamma|^{\hat{n}+\ell}})\bigg{)}$
for some $\theta_{\textbf{w}},\,\theta^{\prime}_{\textbf{w}}\in[0,1]$ and
$z_{1}^{\textbf{w}},\,z_{2}^{\textbf{w}}\in\mathbb{R}.$ Therefore we have
$m_{x}\big{(}\partial\mathcal{L}_{n}^{(\delta_{2}/b^{n})}\big{)}<\varepsilon$
by (5.11), (5.12), (5.13) and the choice $\delta_{2}=\delta^{\prime}/c_{\ell}$. ∎
###### The proof of Lemma 5.3.
For any $\delta,\,\varepsilon>0$, choose
$\delta_{1}=\delta_{1}\big{(}\min\\{\delta,\,\varepsilon/8\\}\big{)}\in(0,1)$
by Lemma 5.4. Thus there is a $\delta_{2}=\delta_{2}(\delta_{1})>0$ such that
(5.10) holds by Corollary 5.1. Let $\ell\in\mathbb{N}$ be a constant such that
(5.14)
$2R^{\phi}_{\gamma}|\gamma|^{\hat{n}+\ell}\leq\frac{\delta_{2}}{b^{n}}\quad\quad\forall
n\in\mathbb{N}.$
By Lemma 5.2 there is a number $M_{1}=M_{1}(\varepsilon)\in\mathbb{N}$ such
that, for all $m\geq M_{1}$ we have
(5.15)
$\inf_{x,\,\theta\in[0,1]}\frac{1}{m}H(\pi_{\theta}m_{x},\,\mathcal{L}_{m})\geq\beta-\varepsilon/3.$
For any $x,\,\theta\in[0,1]$ and $k\geq M_{1}$, let
$A_{k,\,\ell}\subset\varLambda^{\hat{k}+\ell}$ be the set of all elements
$\textbf{w}\in\varLambda^{\hat{k}+\ell}$ such that
$\text{supp}(T^{\hat{k}+\ell}m_{\textbf{w}(x)})\subset I_{\textbf{w}}$ for
some $I_{\textbf{w}}\in\mathcal{L}_{k}^{\mathbb{C}}$. Let
$A^{c}_{k,\,\ell}:=\varLambda^{\hat{k}+\ell}\setminus A_{k,\,\ell}.$ Thus for
each $\textbf{w}\in A^{c}_{k,\,\ell}$ we have
$\text{supp}(T^{\hat{k}+\ell}m_{\textbf{w}(x)})\subset\partial\mathcal{L}_{k}^{(\delta_{2}/b^{k})}$
by (5.14), therefore we have
(5.16) $\frac{\\#A^{c}_{k,\,\ell}}{b^{\hat{k}+\ell}}\leq\delta_{1}$
by Corollary 5.1. Note that $\delta_{1}<1$ implies $A_{k,\,\ell}\neq\emptyset$.
Define
$\eta^{\prime}:=\frac{1}{\\#A_{k,\,\ell}}\sum_{\textbf{w}\in
A_{k,\,\ell}}T^{\hat{k}+\ell}m_{\textbf{w}(x)}$
and $\eta^{\prime\prime}\in\mathscr{P}(\mathbb{C})$ such that
$m_{x}=\bigg{(}1-\frac{\\#A^{c}_{k,\,\ell}}{b^{\hat{k}+\ell}}\bigg{)}\,\eta^{\prime}+\bigg{(}\frac{\\#A^{c}_{k,\,\ell}}{b^{\hat{k}+\ell}}\bigg{)}\,\eta^{\prime\prime}.$
Combining this with Lemma 5.4, we have
$\mathbb{P}_{i=k}^{m_{x}}\bigg{(}z\,:\,\parallel(m_{x})_{z,\,i}-\eta^{\prime}_{z,\,i}\parallel<\varepsilon/8\bigg{)}>1-\delta.$
Thus Lemma 5.5 implies that the following holds
(5.17)
$\mathbb{P}_{i=k}^{m_{x}}\bigg{(}z\,:\,\big{|}\frac{1}{m}H\big{(}(m_{x})_{z,\,i},\pi_{\theta}^{-1}(\mathcal{L}_{i+m})\big{)}-\frac{1}{m}H\big{(}\eta^{\prime}_{z,\,i},\pi_{\theta}^{-1}(\mathcal{L}_{i+m})\big{)}\big{|}\leq\frac{1}{4}\varepsilon+\frac{2H(\varepsilon/8)}{m}\,\bigg{)}>1-\delta.$
For any $I\in\mathcal{L}_{k}$, define the set
$B_{I,\,k}:=\big{\\{}\,\textbf{w}\in
A_{k,\,\ell}:\,\text{supp}(T^{\hat{k}+\ell}m_{\textbf{w}(x)})\subset
I\,\big{\\}}$. For the case $B_{I,k}\neq\emptyset$, we have
(5.18)
$\displaystyle\frac{1}{m}H\big{(}\eta^{\prime}_{I},\pi_{\theta}^{-1}(\mathcal{L}_{k+m})\big{)}$
$\displaystyle\geq\frac{1}{\\#B_{I,\,k}}\sum_{\textbf{w}\in
B_{I,\,k}}\frac{1}{m}H\big{(}T^{\hat{k}+\ell}m_{\textbf{w}(x)},\pi_{\theta}^{-1}(\mathcal{L}_{k+m})\big{)}$
$\displaystyle=\frac{1}{\\#B_{I,\,k}}\sum_{\textbf{w}\in
B_{I,\,k}}\frac{1}{m}H\big{(}\pi_{\theta-(\hat{k}+\ell)\Delta}m_{\textbf{w}(x)},\mathcal{L}_{m}\big{)}+O(\frac{\ell}{m})$
$\displaystyle\geq\beta-\varepsilon/3+O(\frac{\ell}{m})$
by the concavity of conditional entropy and (2.9), (5.14). Also we have
(5.19)
$\frac{1}{m}H\big{(}(m_{x})_{z,\,k},\pi_{\theta}^{-1}(\mathcal{L}_{k+m})\big{)}=\frac{1}{m}H\big{(}\pi_{\theta}(m_{x})^{z,\,k},\mathcal{L}_{m}\big{)}+O(\frac{1}{m}).$
Let $M_{3}\in\mathbb{N}$ be large enough such that
$O(\frac{1}{m})+O(\frac{\ell}{m})+\frac{2H(\varepsilon/8)}{m}\leq\varepsilon/6$
for all $m\geq M_{3}$. Thus for $m\geq M_{2}:=\max\\{M_{1},M_{3}\\}$, we have
$\mathbb{P}_{i=k}^{m_{x}}\bigg{(}z\,:\,\frac{1}{m}H\big{(}\pi_{\theta}(m_{x})^{z,\,k},\mathcal{L}_{m}\big{)}\geq\beta-\varepsilon\bigg{)}>1-\delta$
by (5.17), (5.18) and (5.19). ∎
We still need the following observation from [3].
###### Lemma 5.6.
For any $\varepsilon>0$ and $R>0$, there is
$\delta_{3}=\delta_{3}(\varepsilon,R)>0$ such that the following holds. For
all $n\geq N_{1}(\varepsilon,\,\delta_{3},R)$ and
$\zeta\in\mathscr{P}([-R,R]^{2})$, if
$\mathbb{P}^{\,\zeta}_{0\leq
i<n}\left(\,\begin{matrix}\zeta^{x,\,i}\,\text{is}\,(0,\,\delta_{3})-\text{concentrated}\end{matrix}\,\right)>1-2\delta_{3},$
then
$\frac{1}{n}H(\zeta,\mathcal{L}_{n})<\varepsilon.$
###### Proof.
Without loss of generality we may assume $\delta_{3}=1/b^{m}$ for some
$m\in\mathbb{Z}_{+}$. Since by Lemma 3.4 we have
$\frac{1}{n}H(\zeta,\mathcal{L}_{n})=\mathbb{E}^{\,\zeta}_{0\leq
i<n}\bigg{(}\frac{1}{m}H(\zeta^{x,i},\mathcal{L}_{m})\bigg{)}+O\big{(}\frac{m}{n}+\frac{\log
R}{n}\big{)},$
it suffices to prove the following claim.
###### Claim 2.
For any $\varepsilon>0$, $m\geq M(\varepsilon)$ and measure
$\mu\in\mathscr{P}([0,1]^{2})$, the following holds. If
$\mu\,\text{is}\,(0,\,1/b^{m})-\text{concentrated}$, then
$\frac{1}{m}H(\mu,\mathcal{L}_{m})\leq\varepsilon/2.$
Indeed, given the claim we only need to choose $n$ large enough. It remains to prove the
claim. By the definition of $(0,\,1/b^{m})$-concentrated, there are
$\tau^{\prime},\tau^{\prime\prime}\in\mathscr{P}([0,1]^{2})$ such that
$\text{supp}(\tau^{\prime})\subset\boldsymbol{B}(p,1/b^{m})$ for some
$p\in\mathbb{C}$ and
$\mu=(1-t)\tau^{\prime}+t\tau^{\prime\prime}$
where $t\leq 1/b^{m}.$ By the convexity of entropy we have
$\frac{1}{m}H(\mu,\mathcal{L}_{m})\leq(1-t)\,\frac{1}{m}H(\tau^{\prime},\mathcal{L}_{m})+t\,\frac{1}{m}H(\tau^{\prime\prime},\mathcal{L}_{m})+H(t)=O(\frac{1}{m})$
since $\text{supp}(\tau^{\prime})$ intersects at most nine elements of $\mathcal{L}_{m}$
and $H(t)=O(\sqrt{t})$. ∎
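For concreteness we unpack the final estimate (our addition, with $\log=\log_{b}$ as fixed in Sect. 4.2): $\text{supp}(\tau^{\prime})$ lies in a ball of radius $b^{-m}$ and so meets at most $3\times 3=9$ cells of $\mathcal{L}_{m}$, giving $H(\tau^{\prime},\mathcal{L}_{m})\leq\log 9$; moreover $H(\tau^{\prime\prime},\mathcal{L}_{m})\leq 2m+O(1)$ since $\tau^{\prime\prime}\in\mathscr{P}([0,1]^{2})$, while $t\leq b^{-m}$ and $H(t)=O(b^{-m/2})$. Altogether
$\frac{1}{m}H(\mu,\mathcal{L}_{m})\leq\frac{\log 9}{m}+b^{-m}\cdot\frac{2m+O(1)}{m}+O(b^{-m/2})=O(\frac{1}{m}).$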
###### The proof of Theorem 5.1.
Choose $\delta_{3}=\delta_{3}(\varepsilon,R)$ and
$N_{1}(\varepsilon,\delta_{3},R)$ by Lemma 5.6. Also $\alpha<2$ implies that
$\alpha<\beta+1$ by Corollary 4.1. Let
$\delta_{6}:=(\frac{1}{9}\min\\{\delta_{3},\beta+1-\alpha,\,2-\alpha,\,1\\})^{2}.$
Also, by Lemma 5.3, for any $m\geq M_{2}(\delta_{6},\delta_{6})>0$ and $k\geq
K_{1}(\delta_{6},\delta_{6},m)$ we have
(5.20)
$\inf_{x,\,\theta\in[0,1]}\mathbb{P}^{\,m_{x}}_{i=k}\bigg{(}\,\frac{1}{m}H\big{(}\pi_{\theta}(m_{x})^{z,\,i},\mathcal{L}_{m}\big{)}\geq\beta-\delta_{6}\,\bigg{)}\geq
1-\delta_{6}.$
By Theorem 5.2 choose $\delta_{7}=\delta_{0}(\delta_{6},R,m)$ and
$N_{0}(\delta_{6},R,\delta_{7},m)$. Let $M=M_{2}(\delta_{6},\delta_{6})$ and
$N=N(\varepsilon,\phi,\gamma)=\max\\{N_{0},N_{1},K_{1}/\delta_{3}\\}$.
If Theorem 5.1 fails for some $m\geq M$ and $n\geq N$, then there exists a
sequence $V_{0},\ldots,V_{n-1}\leq\mathbb{C}$ of subspaces such that (5.7)
holds for the measures $m_{x}$ and $\mu$. Thus, by Markov's inequality there is a
subset $I\subset\\{0,1,\ldots,n-1\\}$ such that $\frac{\\#I}{n}\geq
1-\sqrt{\delta_{6}}$ and
(5.21)
$\mathbb{P}^{\,m_{x}\times\mu}_{i=k}\left(\,\begin{matrix}(m_{x})^{z,\,i}\,\text{is}\,(V_{i},\,\delta_{6},\,m)-\text{saturated
and}\\\
\mu^{y,\,i}\,\text{is}\,(V_{i},\,\delta_{6})-\text{concentrated}\end{matrix}\,\right)\geq
1-\sqrt{\delta_{6}}$
for each $k\in I.$ Let $I_{j}$ be the set of all elements $t\in I$ such
that $\text{dim}(V_{t})=j$, for $j=0,1,2$. Also, our condition implies that
there is a subset $J\subset\\{0,1,\ldots,n-1\\}$ such that $\frac{\\#J}{n}\geq
1-\sqrt{\delta_{6}}$ and
(5.22)
$\mathbb{P}^{\,m_{x}}_{i=k}\left(\frac{1}{m}H((m_{x})^{x,\,i},\mathcal{L}^{\mathbb{C}}_{i+m})<\alpha+\delta_{6}\right)\geq
1-\sqrt{\delta_{6}}$
for each $k\in J.$ Since $\alpha+\delta_{6}<2-\delta_{6}$ and
$1-\sqrt{\delta_{6}}>1/2$, we have $I_{2}\cap J=\emptyset$ by Remark 5.1,
(5.21) and (5.22). Therefore we have
(5.23) $\frac{\\#I_{2}}{n}\leq\sqrt{\delta_{6}}.$
Also we have $\alpha+\delta_{6}<\beta+1-\delta_{6}$. Thus (5.20), (5.21),
(5.22) and $\frac{K_{1}}{n}\leq\delta_{6}$ imply that
$\frac{\\#I_{1}}{n}\leq 2\sqrt{\delta_{6}}.$
Combining this with (5.23) we have
$\frac{\\#I_{0}}{n}\geq 1-4\sqrt{\delta_{6}}.$
Thus we have
$\mathbb{P}^{\,\mu}_{0\leq
i<n}\left(\,\begin{matrix}\mu^{y,\,i}\,\text{is}\,(0,\,\delta_{6})-\text{concentrated}\end{matrix}\,\right)\geq(1-4\sqrt{\delta_{6}})(1-\sqrt{\delta_{6}})\geq
1-\delta_{3}$
by (5.21). This implies $\frac{1}{n}H(\mu,\mathcal{L}_{n})<\varepsilon$ by
Lemma 5.6, which contradicts our condition. ∎
## 6\. The proof of Theorem A
In this section, we will use Theorem 5.1 to finish the proof of Theorem A.
Since the proof is similar to [22], some details will be omitted. In the rest
of the paper, we suppose that $\phi(x)$ is a real analytic $\mathbb{Z}$-periodic
function such that the condition (H) holds for fixed integer $b\geq 2$ and
$\gamma\in\mathbb{C}$ such that $0<|\gamma|<1$.
### 6.1. Entropy Porosity
In this subsection, we first recall the definition of entropy porosity from
[2]; then we establish the entropy porosity properties of $m_{x}$ for all
$x\in[0,1]$.
###### Definition 6.1 (Entropy porous).
A measure $\mu\in\mathscr{P}(\mathbb{C})$ is $(h,\delta,m)$-entropy porous
from scale $n_{1}$ to $n_{2}$ if
$\mathbb{P}^{\,\mu}_{n_{1}\leq
i<n_{2}}\left(\frac{1}{m}H(\mu^{x,\,i},\mathcal{L}^{\mathbb{C}}_{m})<h+\delta\right)>1-\delta.$
The main result of this subsection is the following Theorem 6.1. The idea of
the proof is inspired by [2, Proposition 3.2] and [22, Theorem 5.1].
###### Theorem 6.1.
Assume the condition (H) holds and $\Delta\in\mathbb{R}\setminus\mathbb{Q}$.
Then for any $\varepsilon>0$, $m\geq M_{4}(\varepsilon),$ $k\geq
K_{4}(\varepsilon,m)$ and $n\geq N_{4}(\varepsilon,m,k)$, the following holds:
$\nu^{n}\left(\left\\{\textbf{i}\in\varLambda^{n}:m_{\textbf{i}(0)}\text{ is
}(\alpha,\varepsilon,m)-\text{entropy porous from scale }1\text{ to
}k\right\\}\right)>1-\varepsilon.$
Before giving the proof of Theorem 6.1, we shall introduce the following Lemma
6.1. The proof is the same as that of [22, Lemma 5.2], using Lemma 5.1, so we omit
the details.
###### Lemma 6.1.
For any $\varepsilon>0,m\geq M_{5}(\varepsilon),n\geq N_{5}(\varepsilon,m)$,
$\inf\limits_{x\in[0,1]}\nu^{n}\left(\,\left\\{\textbf{i}\in\varLambda^{n}:\alpha-\varepsilon<\frac{1}{m}H(m_{\textbf{i}(x)},\mathcal{L}_{m})<\alpha+\varepsilon\right\\}\,\right)>1-\varepsilon.$
The following is a corollary of Lemma 6.1.
###### Corollary 6.1.
Assume the condition (H) holds and $\Delta\in\mathbb{R}\setminus\mathbb{Q}.$
Then for any $\varepsilon>0$, $m\geq M_{6}(\varepsilon)$ and $k\geq
K_{6}(\varepsilon,m)$, the following holds:
(6.1)
$\inf_{x\in[0,1]}\mathbb{P}^{\,m_{x}}_{i=k}\bigg{(}\,\frac{1}{m}H\big{(}(m_{x})^{z,\,i},\mathcal{L}_{m}\big{)}\geq\alpha-\varepsilon\,\bigg{)}\geq
1-2\varepsilon.$
###### Proof.
For any $k,\,\ell\in\mathbb{Z}_{+}$ large enough, define the set
$B_{k,\,\ell}=B_{k,\,\ell}(\varepsilon,m):=\left\\{\textbf{i}\in\varLambda^{\hat{k}+\ell}:\frac{1}{m}H(m_{\textbf{i}(x)},\mathcal{L}_{m})>\alpha-\varepsilon/8\right\\}$
and $\tilde{A}_{k,\,\ell}:=A_{k,\,\ell}\cap B_{k,\,\ell}$ where $A_{k,\,\ell}$
is from Lemma 5.3. Let
$\tilde{A}^{c}_{k,\,\ell}:=\varLambda^{\hat{k}+\ell}\setminus\tilde{A}_{k,\,\ell}$.
Then Corollary 6.1 can be proved by a similar method to that of Lemma 5.3, using
Lemma 6.1. ∎
###### Lemma 6.2.
Assume the condition (H) holds and $\Delta\in\mathbb{R}\setminus\mathbb{Q}.$
For any $\varepsilon>0$, there exists $\delta>0$ such that if $m\geq
M_{7}(\varepsilon)$ and $k\geq K_{7}(\varepsilon,m)$ and if
$\left|\frac{1}{k}H(m_{x},\mathcal{L}_{k})-\alpha\right|<\frac{\delta}{2}$,
then $m_{x}$ is $(\alpha,\varepsilon,m)$-entropy porous from scale $1$ to $k$.
###### Proof.
The method of proof is similar to [2, Lemma 3.7] by Lemma 3.4 and Corollary
6.1, thus we omit the details. ∎
###### The proof of Theorem 6.1.
The proof is the same as that of [22, Theorem 5.1]. ∎
### 6.2. Exponential separation
In this subsection, we deduce from the condition (H) the exponential
separation properties as [22] did.
###### Definition 6.2.
Let $E_{1},E_{2},\ldots$ be subsets of $\mathbb{C}$. For any $\varepsilon>0$
and $Q\subset\mathbb{Z}_{+}$, we say that the sequence
$(E_{n})_{n\in\mathbb{Z}_{+}}$ is $(\varepsilon,Q)$-exponentially separated if
$|p-q|>\sqrt{2}\,\varepsilon^{\hat{n}}\quad\quad\forall p\neq q\in
E_{\hat{n}}$
for each $n\in Q.$
The following is a corollary of [3, Lemma 5.8].
###### Corollary 6.2.
For any $k\in\mathbb{N}$ and compact interval $J\subset\mathbb{R}$, let
$F:J\to\mathbb{C}$ be a $(k+1)$-times continuously differentiable function. Let
$M=\parallel F\parallel_{J,\,k+1}$, and let $0<d<1$ be such that for every
$x\in J$ there is a $p\in\\{0,1,\ldots,k\\}$ with $|F^{(p)}(x)|>d.$ Then for
every $0<\rho<(d/2)^{2^{k}}$, the set
$F^{-1}\big{(}\mathbf{B}(0,\rho)\big{)}\subset J$ can be covered by
$O_{k,M,|J|}(1/{d^{k+1}})$ intervals of length $\leq 2(\rho/d)^{1/{2^{k}}}$
each.
###### Proof.
Let $J_{t}=[a_{t},b_{t}),$ $t=1,2,\ldots,L$ be the disjoint intervals such
that $J=\bigcup_{t=1}^{L}J_{t}$ and $|J_{t}|<\frac{d}{4M}$ where
$L=\lfloor\frac{4M|J|}{d}\rfloor+1$. For each $t\in\\{1,2,\ldots,L\\}$ there
is a $p_{t}\in\\{0,1,\ldots,k\\}$ with $|F^{(p_{t})}(a_{t})|>d$, thus either
$\big{|}\,\text{Re}(\,F^{(p_{t})}(a_{t})\,)\big{|}>d/2$ or
$\big{|}\,\text{Im}(\,F^{(p_{t})}(a_{t})\,)\big{|}>d/2$. For the former case
let $F_{t}:J\to\mathbb{R}$ be a $p_{t}$-times continuously differentiable
function such that $\text{Re}(F)|_{J_{t}}\equiv F_{t}|_{J_{t}}$, $\parallel
F_{t}\parallel_{J,\,p_{t}}=O_{M,\,|J|,\,p_{t}}(1)$ and
$|F_{t}^{(p_{t})}(x)|\geq d/4$ for each $x\in J$. Do the same thing for the
other case. Thus the set $F_{t}^{-1}(-\rho,\rho)\subset J$ can be
covered by $O_{k,M,|J|}(1/{d^{p_{t}}})$ intervals of length $\leq
2(\rho/d)^{1/{2^{p_{t}}}}$ each by [3, Lemma 5.8], which implies that the corollary
holds. ∎
The main result of this subsection is the following Theorem 6.2.
###### Theorem 6.2.
There exist $\ell_{0}\in\mathbb{N}$ and $\varepsilon_{0}>0$ such that the
following holds for Lebesgue-a.e.$\,x\in[0,1]$.
For any integer $\ell\geq\ell_{0}$, there exists a set
$Q_{x,\,\ell}\subset\mathbb{Z}_{+}$ such that
1. (i)
$\\#Q_{x,\,\ell}=\infty$;
2. (ii)
for any $\textbf{w}\in\varLambda^{\ell}$, the sequence
$(X_{n}^{\textbf{w},\,x})_{n\in\mathbb{Z}_{+}}$ is
$(\varepsilon_{0},Q_{x,\,\ell})$-exponentially separated,
where
$X_{n}^{\textbf{w},\,x}=\big{\\{}S(x,\,\textbf{j}\textbf{w})\,:\,\textbf{j}\in\varLambda^{n-\ell}\big{\\}}$
for $n>\ell$ and $X_{n}^{\textbf{w},\,x}=\\{0\\}$ for $n\leq\ell$.
###### Proof.
The proof of [11, Lemma 5.2] works for all $\gamma\in\mathbb{C}$ such that
$0<|\gamma|<1$. Combining this with Corollary 6.2, we can prove Theorem 6.2 by
the same method as [22, Theorem 4.1], so we omit the proof. ∎
### 6.3. Transversality
In this subsection we give some quantitative estimates for transversality. For
any $\textbf{i},\,\textbf{j}\in\varLambda^{\\#}\cup\Sigma$ and integer $1\leq
k\leq|\textbf{j}|$, recall $\textbf{j}_{k}=j_{1}j_{2}\ldots j_{k}$. Write
$\textbf{i}<\textbf{j}$ if $\textbf{i}=\textbf{j}_{|\textbf{i}|}$ holds. When
$|\textbf{j}|<\infty$, let $I_{\textbf{j}}$ be the interval in $[0,1]$ given
by
(6.2)
$I_{\textbf{j}}=\bigg{[}\frac{j_{1}+j_{2}b+\dots+j_{n}b^{n-1}}{b^{n}},\frac{1+j_{1}+j_{2}b+\dots+j_{n}b^{n-1}}{b^{n}}\bigg{)}$
where $n=|\textbf{j}|$.
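A quick numerical example (ours): for $b=2$ and $\textbf{j}=101$ we have $n=3$ and $j_{1}+j_{2}b+j_{3}b^{2}=1+0+4=5$, so
$I_{\textbf{j}}=\big{[}\frac{5}{8},\,\frac{6}{8}\big{)},$
an interval of length $b^{-|\textbf{j}|}=1/8$.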
The main result of this subsection is the following Theorem 6.3.
###### Theorem 6.3.
For any $t_{0}>0$, there exists an integer $t>t_{0}$, real number $\Xi_{1}>0$
and $\textbf{h},\,\textbf{h}^{\prime},\,\textbf{a}\in\varLambda^{t}$ with the
following property. For every $z\in I_{\textbf{a}}$ and
$\textbf{i},\,\textbf{j}\in\varLambda^{\\#}$, if $\textbf{h}<\textbf{i}$,
$\textbf{h}^{\prime}<\textbf{j}$, then
1. (A.1)
$|S^{\prime}(z,\,\textbf{i})|\,,\,|S^{\prime}(z,\,\textbf{j})|>\Xi_{1};$
2. (A.2)
$|S^{\prime}(z,\,\textbf{i})-S^{\prime}(z,\,\textbf{j})|>\Xi_{1};$
3. (A.3)
$\frac{\parallel\phi^{\prime}\parallel_{\infty}}{(1-|\gamma|)b^{t}}<\Xi_{1}/4.$
###### Proof.
Since [22, Lemma 6.1, Lemma 6.2] can be extended to the case
$\gamma\in\mathbb{C}$ such that $0<|\gamma|<1$, we can prove that (A.1)
and (A.2) hold for some $t^{\prime}>t_{0}$, $\Xi_{1}>0$ and
$\textbf{h},\,\textbf{h}^{\prime},\,\textbf{a}\in\varLambda^{t^{\prime}}$ by
the same method as [22, Theorem 6.1]. Therefore it suffices to choose
$t>t^{\prime}$ large enough to make sure that (A.3) also holds, and to replace
$\textbf{h},\,\textbf{h}^{\prime},\,\textbf{a}\in\varLambda^{t^{\prime}}$ by
$\textbf{h}\textbf{0}_{t-t^{\prime}},\,\textbf{h}^{\prime}\textbf{0}_{t-t^{\prime}},\,\textbf{0}_{t-t^{\prime}}\textbf{a}\in\varLambda^{t}$.
∎
### 6.4. The partitions of the space $\varLambda^{\\#}$
This subsection is devoted to constructing a sequence of partitions
$\mathcal{L}_{n}^{\varLambda^{\\#}}$ of the space $\varLambda^{\\#}$ by the
same method as [22, Section 7]. Combining Theorem 3.1 with Theorem 6.2 and
Theorem 6.3, there exist an integer $t>0$ such that (A.3) holds, constants
$\Xi_{1},\,C>0$, a point $x_{0}\in[0,1)$, a set
$M\subset\mathbb{Z}_{+}$ and
$\textbf{a},\,\textbf{h},\,\textbf{h}^{\prime}\in\varLambda^{t}$ with the
following properties.
1. (B.1)
For each $\textbf{w}\in\varLambda^{t}$, the sequence
$(X_{n}^{\textbf{w},\,x_{0}}\,)_{n\in\mathbb{Z}_{+}}$ is
$(|\gamma|^{C/2},M)$-exponentially separated, where $X_{n}^{\textbf{w},\,x_{0}}$
is from Theorem 6.2;
2. (B.2)
$\text{dim}(m_{x_{0}})=\alpha$;
3. (B.3)
For every $z\in I_{\textbf{a}}$ and
$\textbf{i},\,\textbf{j}\in\varLambda^{\\#}$, if $\textbf{h}<\textbf{i}$,
$\textbf{h}^{\prime}<\textbf{j}$ then (A.1), (A.2) hold;
4. (B.4)
$\\#M=\infty.$
We also fix such elements
$\\{t,\,x_{0},\,C,\,\Xi_{1},\,M,\,\textbf{a},\,\textbf{h},\textbf{h}^{\prime}\\}$.
Let $\overline{\pi}:\varLambda^{\\#}\to\mathbb{N}\times\mathbb{C}^{3}$ be the
map such that
$\textbf{w}\mapsto\bigg{(}|\textbf{w}|,\,S(\textbf{w}(x_{0}),\textbf{h}),\,S(\textbf{w}(x_{0}),\textbf{h}^{\prime}),\,S(x_{0},\textbf{w})\bigg{)}.$
###### Definition 6.3.
For any integer $n\geq 1$, let $\mathcal{L}_{n}^{\varLambda^{\\#}}$ be the
collection of all non-empty subsets of $\varLambda^{\\#}$ of the following form
$\overline{\pi}^{\,-1}\left(\\{m\\}\times I_{1}\times I_{2}\times J\right),$
where
$m\in\mathbb{N},\,I_{1},\,I_{2}\in\mathcal{L}_{n},\,J\in\mathcal{L}^{\mathbb{C}}_{n+[m\log_{b}1/|\gamma|]}.$
The partition $\mathcal{L}_{0}^{\varLambda^{\\#}}$ consists of non-empty
subsets of $\varLambda^{\\#}$ of the following form
$\overline{\pi}^{\,-1}\left(\\{m\\}\times\mathbb{C}\times\mathbb{C}\times
J\right),$
where $m\in\mathbb{N},J\in\mathcal{L}^{\mathbb{C}}_{[m\log_{b}1/|\gamma|]}.$
### 6.5. The proof of Theorem A
We shall recall the following notations from [22]. For any probability measure
$\xi\in\mathscr{P}(\varLambda^{\\#})$ and $\textbf{u}\in\varLambda^{\\#}$,
define the probability measure $A_{\textbf{u}}(\xi)\in\mathscr{P}(\mathbb{C})$
by
(6.3)
$A_{\textbf{u}}(\xi)=\sum_{\textbf{w}\in\text{supp}(\xi)}\xi(\\{\textbf{w}\\})\,\delta_{S(x_{0},\textbf{w}\,\textbf{u})}.$
Let $B_{\textbf{q}}(\xi)\in\mathscr{P}(\mathbb{C})$ be the measure such that
for any set $A\in\mathscr{B}(\mathbb{C})$,
(6.4)
$B_{\textbf{q}}(\xi)(A)=\xi\times\nu^{\mathbb{Z}_{+}}\big{(}\,\big{\\{}\,(\textbf{w},\,\textbf{j})\in\varLambda^{\\#}\times\Sigma:S(x_{0},\textbf{w}\,\textbf{q}\,\textbf{j})\in
A\big{\\}}\,\big{)}.$
We also define the discrete measure on $\varLambda^{\\#}$
(6.5)
$\theta^{\,\textbf{u}}_{n}:=\frac{1}{b^{\hat{n}-t}}\sum_{\textbf{w}\in\varLambda^{\hat{n}-t}}\delta_{\textbf{w}\textbf{u}}.$
###### The proof of Theorem A.
[22, Lemma 8.1, Corollary 8.1 and Lemma 8.3] can be extended to the case
$\gamma\in\mathbb{C}$ such that $0<|\gamma|<1$ by the same methods. Also, by
(B.1), (B.2) and (B.3), the proofs of [22, Lemma 7.1, Lemma 7.2, Lemma
8.2, Lemma 8.4 and Lemma 8.5] also work for the case $\gamma\in\mathbb{C}$ such
that $0<|\gamma|<1$. Combining these with Theorem 5.1, we extend the
conclusion of [22, Lemma 8.6] to the case $\gamma\in\mathbb{C}$ if
$\alpha<\min\\{2,\frac{\log b}{\log 1/|\gamma|}\\}$. If we assume
$\alpha<\min\\{2,\frac{\log b}{\log 1/|\gamma|}\\}$, we then reach a
contradiction as in [22]. ∎
## References
* [1] K. Barański, B. Bárány, and J. Romanowska. On the dimension of the graph of the classical Weierstrass function. Adv. Math., 265: 32–59, 2014.
* [2] B. Bárány, M. Hochman, and A. Rapaport. Hausdorff dimension of planar self-affine sets and measures. Invent. Math., 216(3):601–659, 2019.
* [3] M. Hochman. On self-similar sets with overlaps and inverse theorems for entropy. Ann. Math., 773–822, 2014.
* [4] F. Ledrappier. On the dimension of some graphs. Contemp. Math., 135, 285–293, 1992.
* [5] F. Ledrappier and L.S. Young. The metric entropy of diffeomorphisms: Part ii: Relations between entropy, exponents and dimension. Ann. Math., 540–574, 1985.
* [6] Y. Peres and B. Solomyak. Absolute continuity of Bernoulli convolutions, a simple proof. Math. Res. Lett., 3: 231–239, 1996.
* [7] W. Shen. Hausdorff dimension of the graphs of the classical Weierstrass functions. Math. Z., 289(1-2): 223–266, 2018.
* [8] B. Solomyak. On the random series $\sum\pm\lambda^{n}$ (an Erdös problem). Ann. Math., 142: 611–625, 1995.
* [9] M. Tsujii. Fat solenoidal attractors. Nonlinearity, 14(5), 1011–1027, 2001.
* [10] L. S. Young. Dimension, entropy and Lyapunov exponents. Ergodic Theory Dyn. Syst., 2(1):109–124, 1982.
* [11] H. Ren and W. Shen. A dichotomy for the Weierstrass-type functions. Invent. Math., 226(3): 1057–1100, 2021.
* [12] J.C. Alexander and J.A. Yorke. Fat Baker's transformations. Ergodic Theory Dyn. Syst., 4: 1–23, 1984.
* [13] P. Varjú. On the dimension of Bernoulli convolutions for all transcendental parameters. Ann. Math, 189(3): 1001-1010, 2019.
* [14] P. Varjú. Absolute continuity of Bernoulli convolutions for algebraic parameters. J. Amer. Math. Soc., 32(2): 351-397, 2019.
* [15] Z. Zhang. On the smooth dependence of SRB measures for partially hyperbolic systems. Commun. Math. Phys., 358: 45-79, 2018.
* [16] M. Rams. Absolute continuity for the SBR measure for non-linear fat baker maps. Nonlinearity, 16: 1649-1655, 2003.
* [17] M. Hochman., and A. Rapaport. Hausdorff dimension of planar self-affine sets with overlaps. J. Eur. Math. Soc., 7: 2361-2441, 2022.
* [18] R. Gao, and W. Shen. Low complexity of optimizing measures over an expanding circle map. preprint, Math. ArXiv, 2206.05467V1.
* [19] L. Shu. Dimension theory for invariant measures of endomorphisms. Comm. Math. Phys., 1: 65-99, 2010.
* [20] A.H. Fan, K.S. Lau, and H. Rao. Relationships between different dimensions of a measure. Monatsh. Math., 135(3): 191-201, 2002.
* [21] V. A. Rokhlin. On the fundamental ideas of measure theory. Trans. Amer. Math. Soc, 10: 1-52, 1962.
* [22] H. Ren. A Dichotomy for the dimension of SRB measure. preprint, Math. ArXiv, 2208.04576V1.
* [23] P. Walters. An introduction to Ergodic Theory. GTM, 79. Springer, Berlin-New York, 1982.
* [24] H. Furstenberg. Ergodic fractal measures and dimension conservation. Ergodic Theory Dyn. Syst., 2: 405-422, 2008.
* [25] D. Feng, and H. Hu. Dimension theory of iterated function systems. Comm. Pure Appl. Math., 62(11): 1435-1500, 2009.
* [26] K. Falconer, and H. Hu. Exact dimensionality and projections of random self-similar measures and sets. J. Lon. Math. Soc., 90(2): 388-412, 2014.
* [27] M. Hochman. On self-similar sets with overlaps and inverse theorems for entropy in $\mathbb{R}^{d}$. Mem. Amer. Math. Soc., 265, no.1287, 2021.
* [28] W. Parry. Skew products of shifts with a compact Lie group. J. Lon. Math. Soc., 56(2): 395-404, 1997.
* [29] P. T. Maker. The ergodic theorem for a sequence of functions. Duke Math. J., 6: 27-30, 1940.
* [30] B. Hasselblatt and J. Schmeling. Dimension product structure of hyperbolic sets. In Modern Dynamical Systems and Applications, eds. B. Hasselblatt, M. Brin and Y. Pesin, Cambridge University Press, New York, 2004, pp. 331–345.
* [31] K. Simon. Hausdorff dimension for non-invertible maps. Ergod. Theor. Dynam. Syst., 13: 199-212, 1993.
* [32] K. Simon. The Hausdorff dimension of the Smale–Williams solenoid with different contraction coefficients. Proc. Am. Math. Soc., 125: 1221-8, 1997.
* [33] R. Mohammadpour, F. Przytycki and M. Rams. Hausdorff and packing dimensions and measures for nonlinear transversally non-conformal thin solenoids. Ergod. Theor. Dynam. Syst., 42(11):3458–3489, 2022.
* [34] R. Bortolotti and E. Silva. Hausdorff dimension of thin higher-dimensional solenoidal attractors. Nonlinearity, 6: 3261-3282, 2022.
* [35] K. Simon and B. Solomyak. Hausdorff dimension for horseshoes in $\mathbb{R}^{3}$. Ergod. Theor. Dynam. Syst., 5: 1343-1363, 1999.
* [36] E. Mihailescu and M. Urbański. Transversal families of hyperbolic skew-products. Discrete Contin. Dynam. Syst., A 21: 907-928, 2008.
* [37] M. Rams and K. Simon. Hausdorff and packing measure for solenoids. Ergod. Theor. Dynam. Syst., 23: 273-292, 2003.
* [38] M. Rams. Hausdorff and packing measure for thick solenoids. Studia Math., 163. no.3, : 193-202, 2004.
# GRB 221009A, its precursor and two afterglows in the Fermi data
Boris E. Stern,1 I.I. Tkachev,1,2
1Institute for Nuclear Research of the Russian Academy of Sciences, Moscow
117312, Russia
2Physics Department and Laboratory of Cosmology and Elementary Particle
Physics, Novosibirsk State University, Novosibirsk 630090, Russia
E-mail<EMAIL_ADDRESS>
###### Abstract
We study GRB 221009A, the brightest gamma-ray burst in the history of
observations, using Fermi data. To calibrate them for large inclination
angles, we use the Vela X gamma-ray source. Light curves in different spectral
ranges demonstrate a 300 s overlap of afterglow and delayed episodes of soft
prompt emission. We demonstrate that a relatively weak burst precursor that
occurs 3 minutes before the main episode has its own afterglow, i.e.,
presumably, its own external shock. The main afterglow is the brightest one,
includes a photon with an energy of 400 GeV 9 hours after the burst, and is
visible in the LAT data for up to two days.
###### keywords:
gamma-ray burst: individual, methods: data analysis
## 1 Introduction
The recent and brightest gamma-ray burst, GRB 221009A, has been detected by
many space X-ray – $\gamma$-ray observatories, including Fermi (Veres et al.,
2022; Lesage et al., 2022), Swift (for more detailed analysis see Williams et
al. (2023)), SRG/ART-XC (Lapshov et al., 2022), Konus-Wind (Frederiks et al.,
2022) and others. The burst and its afterglow were also registered by LHAASO
on Earth in the range of hundreds GeV to several TeV (Huang et al., 2022).
Carpet-2 at the Baksan Neutrino Observatory has detected an atmospheric shower
from a 250 TeV photon coming from the location of GRB 221009A (Dzhappuev et
al., 2022). On the other hand, the HAWC collaboration reports no detection of
photons from the afterglow in the TeV range beyond 8 hours after the trigger
(Ayala et al., 2022) and claims an upper limit on the energy flux of
$4.16\cdot 10^{-12}$ TeV cm-2 s-1.
The burst was intrinsically strong and relatively nearby, z = 0.151. The
apparent brightness of GRB 221009A is exceptional: in the Fermi GBM burst
catalog it exceeds the next brightest burst by a factor of 15 in energy
fluence. However, the strongest impact of this event is due to claims of two
photons, 18 TeV (LHAASO) and 250 TeV (Carpet-2), which cannot come from
z = 0.15 because of absorption on the extragalactic background light. Numerous
e-prints suggesting new physics to explain these photons have already appeared.
Taking advantage of the brightness of GRB 221009A we try to find something new
about GRBs themselves in publicly available Fermi data. Namely:
– Is there anything interesting between the precursor of the burst and its
main emission three minutes later?
– What does the transition from the prompt phase of gamma-ray bursts to
afterglow look like?
– How bright is the afterglow, and how long can it be traced in the GeV range?
## 2 Data and their calibration
Some raw Fermi data for GRB 221009A are shown in Fig. 1. The Large Area
Telescope (LAT) and the NaI detectors of the Gamma-ray Burst Monitor (GBM)
were oversaturated, while the Bismuth Germanate (BGO) scintillation detectors
satisfactorily reproduce the peak flux in energy channels above $\sim$ 1 MeV.
There are no LAT data in the most interesting intervals, 220 – 240 and
260 – 270 seconds (photon detections are missing not only from the GRB
direction but from the whole sky).
Figure 1: Time evolution of GRB 221009A in raw data. Prompt phase and early
afterglow. (a) Count rates in various energy ranges on a logarithmic scale,
upper curve GBM NaI 295 - 540 keV, middle curve GBM BGO 22 - 38 MeV, lower
blue curve LAT, $8^{\circ}$ circle around the location of GRB 221009A, yellow
curve - all LAT photons. Dips at 210 and 260 s result from detector
saturation. (b) Individual LAT photons in the $8^{\circ}$ circle around the
position of GRB 221009A (left logarithmic energy scale) and the angle between
the LAT axis and the burst direction (right scale). Note the emission in the
10-200 MeV range 50 s after the precursor.
The next circumstance that makes the direct interpretation of the LAT data
problematic is the large angle $\theta$ between the direction of the LAT
z-axis and the burst location. The burst occurred at the very edge of the
telescope field of view, where the detection efficiency is low. Fig. 1b shows
the dependence of $\theta$ on time during the event. The inclination angle
varied from $75^{\circ}$ to more than $80^{\circ}$ at $t\sim 500$ s, when the
source left the field of view for an hour.
The angular dependence of the LAT effective area is given in Ajello et al.
(2021), see also Fermi LAT Performance. However, these data have insufficient
resolution for this specific problem; therefore we have performed a detailed
calibration of the LAT detection efficiency at large incident angles using the
brightest GeV source, Vela X (both the pulsar and the nebula). The same object
was used by the Fermi team for the calibration of the point spread function
(Ajello et al., 2021).
We use photons in the $8^{\circ}$ circle around the Vela X location as the
calibration sample. The size of this circle is a result of a trade-off between
sufficient containment for $\sim 100$ MeV photons $(>68\%)$ and the
contamination of the sample with background photons. The result of our
calibration is shown in Fig. 2.
Figure 2: Calibration data for LAT performance at large incident angles using
Vela-X gamma-pulsar. The ratio of effective area to on-axis effective area as
a function of angle between the photon source and LAT z-axis. Curves from top
to bottom correspond to energy intervals : $>1$ GeV, 1 GeV – 562 MeV, 562 MeV
– 316 MeV, 316 MeV – 178 MeV, 178 MeV – 100 MeV.
The number of photons in our calibration sample is $9.4\cdot 10^{6}$, while
the number of background photons is $\sim 2\cdot 10^{6}$, as we have estimated
from a neighbouring region of the Milky Way. So, the background is
considerable, but its effect should be moderate because the angular dispersion
of background photons contributes to the detection efficiency with different
signs: the number of photons incident at $\theta+\Delta$ is comparable with
the number arriving at $\theta-\Delta$. Nevertheless, the wide angular cut in
the calibration sample mimics some extension of the field of view. We have
checked the effect of the wide angular cut by repeating the same calibration
with the sample cut at $4^{\circ}$. The difference at $\theta=78^{\circ}$ is
$\sim 40\%$ for photons with $E\gtrsim 1$ GeV and a factor of 2.5 for the
$100-178$ MeV interval. The latter large value is certainly the effect of the
widening of the point spread function at the edge of the field of view.
Therefore we prefer to use the calibration with the $8^{\circ}$ calibration
sample, as it better reproduces the soft end of the spectrum, while the extra
background only slightly affects its hard range.
Besides the angular dependence of the detection efficiency, one should take
into account the energy dependence. We use the one described by Ajello et al.
(2021). Note that for the lowest energy bin we use, 100 – 178 MeV, the
efficiency is 0.45 of the maximal efficiency. Therefore a 100 MeV photon
detected by LAT at 400 s after the trigger represents $\sim$ 300 photons of
the same energy crossing the detector area (see Fig. 1b and Fig. 2). In our
analysis and calibration we do not distinguish the conversion type and the
quality class of photons, using the total effective area for the "source"
event class.
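As a back-of-the-envelope check, this correction can be reproduced in a few lines of Python. The angular efficiency below is an illustrative placeholder chosen so that the product reproduces the $\sim$ 300 quoted above; only the energy factor 0.45 comes from the text.

```python
# Minimal sketch: converting a detected LAT count into an estimate of the
# number of photons that crossed the detector, using the two efficiency
# factors discussed above. The angular efficiency is a placeholder; the
# energy factor 0.45 is the value quoted for the 100-178 MeV bin.

def incident_photons(n_detected, angular_eff, energy_eff):
    """Photons crossing the detector per photon actually detected."""
    return n_detected / (angular_eff * energy_eff)

# One 100 MeV photon detected near the edge of the field of view:
print(incident_photons(1, angular_eff=0.007, energy_eff=0.45))  # ~317
```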
## 3 Main episode and onset of the afterglow
Normalized Fermi data for the first 600 seconds are shown in Fig. 3. The LAT
data were normalized using our Vela X calibration results (Fig. 2) and the
energy-dependent effective area from Ajello et al. (2021). The total effective
area of the two BGO detectors was set to 200 cm2 independently of energy and
incident angle, as this assumption is sufficient for a qualitative
demonstration. For the energy and angular performance of the BGO detectors see
Meegan et al. (2009).
The BGO 22 - 38 MeV energy flux in the 300 - 600 s interval is very sensitive
to the background model. We use a three-parameter description, a constant plus
one sinusoidal half-period, with fitting intervals 300 - 150 s and
600 - 1300 s (a minimal sketch of this model is given below). The resulting
$\chi^{2}$ is good; however, the error in the BGO energy fluence is large,
therefore we do not take the BGO data into consideration when reconstructing
photon spectra. We see a striking transition in the time behaviour at $\sim$
300 s: the sharp decline of the main pulse changes to a flat smooth slope. The
exception is another soft episode of prompt emission at 400 - 600 s, which we
discuss below. It would be reasonable to suggest that the high-energy emission
after 300 s can be considered as the main afterglow.
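A minimal sketch of this three-parameter background model, with synthetic data and assumed parameter names (the actual fitting code and count rates are not published in the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def background(t, const, amp, period):
    # Constant level plus a single sinusoidal half-period.
    return const + amp * np.sin(np.pi * t / period)

# Placeholder off-burst count rates standing in for the BGO fitting windows.
rng = np.random.default_rng(0)
t_fit = np.linspace(600.0, 1300.0, 50)
rate_fit = 100.0 + 5.0 * np.sin(np.pi * t_fit / 2000.0)
rate_fit += rng.normal(0.0, 0.5, t_fit.size)

popt, _ = curve_fit(background, t_fit, rate_fit, p0=(100.0, 5.0, 2000.0))
print(popt)  # fitted (const, amp, period)
```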
Figure 3: Prompt GRB and early afterglow in different energy ranges.
Histogram: energy flux of LAT photons normalized to the angular-dependent
effective area calibrated with the Vela-X source. Green line: energy flux in
the 0.93 - 2.1 MeV range from the Gamma-ray Burst Monitor BGO detectors. Blue
circles: the same in the 22 - 38 MeV energy band.
Phenomenologically, prompt emission usually undergoes very fast variations of
intensity and spectrum, while an afterglow shows a smooth long decline with a
stable wide spectral energy distribution. Prompt emission sometimes consists
of multiple pulses of different durations and spectra; these pulses can
overlap in time, producing in some cases complex structures with a wide
temporal Fourier power spectrum (Beloborodov et al., 2000). Their time
behaviour is very diverse. On the contrary, all afterglows have a typical time
behaviour: a power-law decline slightly faster than $t^{-1}$.
Theoretically, there exists a paradigm that the prompt emission arises from
internal shocks (or magnetic reconnection, or both) in the jet, while the
afterglow arises from the external shock formed by the collision of the jet
with the ambient medium (see e.g. Piran (2005)), dominated by the stellar wind
of the GRB progenitor (Chevalier & Li, 1999). Physically, the prompt emission
in many cases can be described as radiation of an optically thick medium due
to multiple Comptonization (e.g. Ito et al. (2018)), while the afterglow
better corresponds to synchrotron/Compton radiation of electrons accelerated
in an optically thin environment; see Nava (2018) for a review.
In the case of GRB 221009A we can ascribe to the prompt emission the
precursor, a soft pulse at $t\sim 180\,-\,200$ s, two main hard pulses and a
long soft structure in the $400\,-\,600$ s interval. The photon flux detected
by LAT since 250 s, with its spectrum (see Fig. 4) and light curve, resembles
an afterglow rather than prompt emission. The structure at 400 - 600 s is
probably an independent prompt-emission episode overlapping in time with the
early afterglow.
Probably the afterglow mechanism (presumably the external shock) turned on
slightly earlier, e.g. at 230 s. Hereafter we use this time as the reference
point for the power-law decline of the afterglow.
Figure 4: Spectral energy distribution of photons detected by LAT at time
intervals 230 - 500 s and 4000 - 6000 s reconstructed with calibrated
effective area (see Fig. 2).
Our estimate of the energy fluence represented by the 229 photons detected by
LAT in the time interval 220 - 500 s is $1.55\cdot 10^{-3}$ erg/cm2. The
actual fluence could be several times higher, since we do not know how many
photons are lost in the over-saturation gaps. The low-energy fluence
preliminarily estimated by Lesage et al. (2022) is $2.9\cdot 10^{-2}$ erg/cm2;
the total energy fluence can be much higher, see Frederiks et al. (2023). The
spectral energy distribution in the 100 MeV - 100 GeV range is shown in
Fig. 4. We made a forward-folding power-law fit to the numbers of detected
photons in energy bins (sketched below). The resulting photon index for LAT
photons in the background-free 8∘ field is -1.92$\pm 0.04$, which is
consistent with the estimate of Pillera et al. (2022).
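The forward-folding procedure can be sketched as follows: fold a trial power-law photon spectrum through the effective area, predict the counts per energy bin, and maximize the Poisson likelihood. The bin edges, counts and effective-area curve below are placeholders, not the actual LAT numbers.

```python
import numpy as np
from scipy.optimize import minimize

edges = np.array([0.1, 0.178, 0.316, 0.562, 1.0, 10.0, 100.0])  # GeV
counts = np.array([20, 35, 45, 50, 60, 19])                     # placeholder

def eff_area(E):
    # Placeholder effective-area curve (cm^2); the real one combines
    # Ajello et al. (2021) with the Vela X angular calibration of Fig. 2.
    return 800.0 * np.clip(E, 0.45, 1.0)

def predicted_counts(norm, index):
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        E = np.geomspace(lo, hi, 200)
        out.append(np.trapz(norm * E**index * eff_area(E), E))
    return np.array(out)

def neg_log_like(params):
    norm, index = params
    if norm <= 0:
        return np.inf
    mu = predicted_counts(norm, index)
    return np.sum(mu - counts * np.log(mu))  # Poisson, constants dropped

res = minimize(neg_log_like, x0=(1.0, -2.0), method="Nelder-Mead")
print(res.x)  # fitted (normalization, photon index)
```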
## 4 The main afterglow
About 500 seconds after the trigger, the GRB left the LAT field of view. The
next time window was open from $\sim 4100$ to $\sim 5700$ seconds. Then the
location of GRB 221009A periodically appeared in the field of view with a duty
cycle of $\sim 20$ %.
Figure 5: Events in the $1^{\circ}$ circle around GRB 221009A detected by LAT.
(a) Photons in the time interval $10^{3}$ \- $3\cdot 10^{5}$ s since the
trigger (green squares) and background photons in the interval of the same
length but before trigger (red circles). (b) The same but as a histogram. The
average background normalized to the same logarithmic bins is shown by the
green line. The estimate of the background has been done with $8^{\circ}$
field around the GRB location for 300000 seconds before the burst.
Fig. 5 shows counts of photons detected by LAT versus the logarithm of time.
Unlike the main episode, which is essentially background-free, the background
during the late afterglow is considerable, and it is too large in the
$8^{\circ}$ field of view that we adopted for the description of the main
episode. For this reason we analyse the afterglow using a $1^{\circ}$ circle.
The background, estimated with photons detected from the same direction during
300 000 seconds prior to the trigger, is shown in Fig. 5. With this window the
afterglow is significant for up to two days (see Fig. 5b). Further contraction
of the field of view does not improve the significance.
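A minimal sketch of this background treatment; the pre-trigger count is a placeholder chosen to match the background rate implied by the mirror-interval comparison quoted at the end of this section.

```python
from scipy.stats import poisson

# Placeholder: photons in the 1-deg circle during 300 000 s before the
# trigger, chosen to give ~23 background photons per 10^5 s (cf. Section 4).
n_pre, t_pre = 69, 300_000.0
rate = n_pre / t_pre  # photons per second

def bin_p_value(n_obs, t_lo, t_hi):
    """Probability that background alone yields >= n_obs photons in a bin."""
    mu = rate * (t_hi - t_lo)
    return poisson.sf(n_obs - 1, mu)

print(bin_p_value(40, 1e5, 2e5))  # ~1e-3, i.e. ~3 sigma
```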
Fig. 5a shows the LAT photons on a log-log scale for the $1^{\circ}$ field. It
looks like the afterglow is getting softer with time; however, note the 400
GeV photon at $\log(t)\sim 4.5$ (33554 s). The angular deviation of this
photon from the GRB location is $0.06^{\circ}$; the probability of a chance
coincidence with such an angular deviation during a day is $\sim 10^{-6}$.
This photon is missing in the telegram of the Fermi team, but was noticed by
Xia et al. (2022). The impression of softening can result from the lack of
soft photons in the main episode and their excess at $t\sim 10^{5}$ s. The
former can be explained by the stronger suppression of soft photons at large
incident angles (see Fig. 2) and the latter by the contribution of a softer
background. The statistics are insufficient to reveal an evolution of the
afterglow spectrum.
Figure 6: The afterglow of several $\gamma$-ray bursts. Black: GRB221009A,
squares with error bars – this work, dot with an upper limit – HAWC
Collaboration. Blue: GRB 190114C, dots – MAGIC (0.3 - 1 TeV), stars with error
bars – Fermi LAT. Green: GRB 190829A – H.E.S.S. (0.2 - 4 TeV). Violet: GRB
180720B – H.E.S.S. (100 - 440 GeV). Red: GRB 130427A – Fermi LAT.
The spectral energy distributions for the beginning of the afterglow and for
the second window are shown in Fig. 4. They are consistent with each other and
just slightly differ from the "canonical" spectrum with flat SED (photon index
$\alpha=-2$). We suggest that it would be reasonable to set the spectral index
to -2 when fitting the afterglow energy flux. Fig. 6 shows the photon flux of
the afterglow versus time. The flux is normalized to a $3^{\circ}$ field of
view using the energy-dependent relative containment in the $3^{\circ}$ and
$1^{\circ}$ circles, averaged over a spectrum with $\alpha=-2$. We add for
comparison some data points for the high-energy fluxes of other bright GRBs:
GRB 130427A (Ackermann et al., 2014), GRB 180720B (Abdalla et al., 2019),
GRB 190114C (Acciari et al., 2019) and GRB 190829A (Abdalla et al., 2021).
These data are summarised and discussed by Miceli & Nava (2022). The afterglow
of GRB 221009A is slightly brighter than the afterglow of GRB 130427A and
almost an order of magnitude brighter than those of GRB 180720B, GRB 190114C
and GRB 190829A, which were detected by Cherenkov telescopes. This is the
first case when an afterglow is still visible in the Fermi range two days
after the GRB. Intriguingly, it was not detected by the HAWC Collaboration
(Ayala et al., 2022), see Fig. 6.
The afterglow is still $3\sigma$ significant in the 100 000 – 200 000 s
interval: the number of photons is 40, versus 23 photons in the "mirror"
interval from -100 000 to -200 000 s. The afterglow light curve in the
0.1 - 10 GeV range can be described as a power law $F\sim t^{-\alpha_{\tau}}$,
where $\alpha_{\tau}=1.32\pm 0.05$.
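The decay-law fit reduces to a straight line in log-log coordinates once the reference time $t_{0}=230$ s (Section 3) is subtracted. A minimal sketch with synthetic flux points standing in for the binned fluxes of Fig. 6:

```python
import numpy as np

t0 = 230.0                                   # assumed afterglow onset (s)
t = np.array([4.5e3, 3.0e4, 1.0e5, 1.5e5])   # placeholder epochs (s)
flux = 1e-3 * ((t - t0) / 1e3) ** -1.32      # synthetic power-law fluxes

slope, intercept = np.polyfit(np.log10(t - t0), np.log10(flux), 1)
print(f"alpha_tau = {-slope:.2f}")           # recovers 1.32
```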
## 5 The afterglow of the precursor
Fermi LAT detected 6 photons in the range 20 - 200 MeV in the time interval
7 - 50 seconds after the trigger (Fig. 1b), when the angle $\theta$ varied
from $65^{\circ}$ to $70^{\circ}$ and the LAT effective area was several times
smaller than for on-axis photons (see Fig. 2). Note that the prompt precursor
emission had already relaxed when LAT detected the first of these photons. We
estimate the energy fluence represented by these 6 photons as
$\sim 1.5\cdot 10^{-6}$ erg cm-2.
How significant is this "preafterglow"? We have counted photons from the
$8^{\circ}$ circle for the periods when the orientation of the LAT z-axis to
the GRB was $63^{\circ}<\theta<71^{\circ}$ during 30 000 seconds before the
burst. The result is 302 photons in 23 400 seconds, which gives an expectation
of 0.64 photons for the 50 s interval after the precursor. The probability of
sampling 6 photons by chance is $0.5\cdot 10^{-4}$ ($4\sigma$); this
arithmetic is reproduced in the sketch below. Note that there is no "look
elsewhere" effect in this case: the photons appear in the proper place with no
sample manipulation. Therefore this significance is quite sufficient to claim
that Fermi has detected the afterglow of a precursor. This is the first case
of such a detection. The idea that a GRB precursor could produce its own
afterglow was suggested by Nappo et al. (2014).
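This chance-coincidence estimate follows directly from Poisson statistics and can be reproduced in a few lines:

```python
from scipy.stats import poisson

expectation = 302 * 50.0 / 23_400.0   # 0.645 background photons in 50 s
p_value = poisson.sf(5, expectation)  # P(N >= 6)
print(expectation, p_value)           # ~0.65, ~5e-5, i.e. ~4 sigma
```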
The spectral fit to 6 photons is, of course, quite loose; the resulting
spectral index is $\alpha=-2.5\pm 0.5$, which is consistent with the spectrum
of the main afterglow with $\alpha\sim-2$.
## 6 Discussion and conclusions
Probably the most interesting fact that we see in the Fermi data is the
afterglow of the precursor. This is the first direct evidence of such a
phenomenon, at least if we treat a precursor as a relatively weak event
separated from the main episode by a long time interval.
A weak precursor that occurs long before (up to several minutes) the main GRB
is a feature of many GRBs. Estimates of their occurrence vary from 3% of GRBs
(Koshut et al., 1995) up to 10% (Troja et al., 2010) or even 20% (Lazzati,
2005). A useful review of the phenomenon is given by Zhu (2015). In fact, this
fraction may be even higher, since a precursor can easily be lost. The
precursor of GRB 221009A has a few hundred times lower peak count rate and
several thousand times lower energy fluence than the main emission episode.
Such a relatively weak precursor can be detected only in rare cases of very
strong GRBs, and one cannot exclude that this phenomenon is common for the
majority of GRBs and we observe just the strongest precursors.
However, this weak precursor has a bright afterglow. The energy fluences of
the precursor and its afterglow are $\sim 1.7\cdot 10^{-5}$ erg cm-2 (0.1 -
4.8 MeV, BGO) and $\sim 1.5\cdot 10^{-5}$ erg cm-2, respectively. The estimate
by Minaev et al. (2022) of the precursor fluence is $(2.38\pm 0.04)\cdot
10^{-5}$ erg cm-2. Therefore, they are comparable, while the main afterglow is
much weaker than the main prompt emission episode: $4.7\cdot 10^{-4}$ erg cm-2
versus $3.3\cdot 10^{-2}$ erg cm-2 (our estimate using BGO counts in the 0.1 -
4.8 MeV range), or even 0.21 erg cm-2 in the 0.02 - 10 MeV band according to
the estimate of Frederiks et al. (2023). The difference in this ratio is two
orders of magnitude. This is a hint that the precursor could differ from the
main GRB in its nature. If we follow the paradigm that the prompt emission
originates from internal shocks in the jet and the afterglow comes from the
external shock in the ambient medium (see Piran (2005)), then we have to
conclude that the internal shock is pathologically weak in the case of the
precursor. In principle, one cannot exclude that a precursor with its
afterglow is an essentially different phenomenon preceding the main emission.
Unfortunately, there is very little chance to observe such an afterglow
directly in the near future; nevertheless, it could probably be sensed
statistically using existing data bases.
As for the main afterglow, it is the brightest one on record, as is GRB
221009A itself. In other respects (relative energy flux, spectrum, decay law)
it looks typical. Owing to its brightness, the afterglow is visible in the
Fermi data for two days, and it could have been traced even longer by
Cherenkov telescopes were it not for bad observational conditions, including a
bright Moon.
The HAWC collaboration has reported no detection beyond 8 hours after the
event and set the upper limit $4.16\cdot 10^{-12}$ erg cm-2 s-1 (Ayala et al.,
2022). This upper limit is in apparent tension with the Fermi data unless one
assumes a sharp spectral decline between the $\sim 10$ GeV and TeV energy
ranges. However, other GRBs do not show such a decline in their afterglows, as
supported by the data of Cherenkov telescopes (cf. the Fermi and MAGIC data in
Fig. 6). Moreover, the 400 GeV photon detected at 9 hours is an argument
against a spectral cutoff. The publication of the LHAASO results may clarify
this issue. Note that the second brightest afterglow, that of GRB 130427A,
also was not detected above 100 GeV; the upper limit set by VERITAS (Aliu et
al., 2014), $\sim 10^{-11}$ erg cm-2 s-1 on the next day, could be in some
tension with the Fermi data too.
## Acknowledgements
We thank the Fermi team for the excellent database and NASA for the open data
policy.
## Data Availability
This work is based on the publicly available Fermi database and the
quantitative description of the Fermi Large Area Telescope performance. The
work of I.T. was supported by the Russian Science Foundation grant
23-42-00066.
## References
* Abdalla et al. (2019) Abdalla H., et al., 2019, Nature, 575, 464
* Abdalla et al. (2021) Abdalla H., et al., 2021, Science, 372, 1081
* Acciari et al. (2019) Acciari V. A., et al., 2019, Nature, 575, 459
* Ackermann et al. (2014) Ackermann M., et al., 2014, Science, 343, 42
* Ajello et al. (2021) Ajello M., et al., 2021, ApJS, 256, 12
* Aliu et al. (2014) Aliu E., et al., 2014, ApJ, 795, L3
* Ayala et al. (2022) Ayala H., et al., 2022, GRB Coordinates Network, 32683, 1
* Beloborodov et al. (2000) Beloborodov A. M., Stern B. E., Svensson R., 2000, ApJ, 535, 158
* Chevalier & Li (1999) Chevalier R. A., Li Z.-Y., 1999, ApJ, 520, L29
* Dzhappuev et al. (2022) Dzhappuev D. D., et al., 2022, The Astronomer’s Telegram, 15669, 1
* Frederiks et al. (2022) Frederiks D., Lysenko A., Ridnaia A., Svinkin D., Tsvetkova A., Ulanov M., Cline T., Konus-Wind Team 2022, GRB Coordinates Network, 32668, 1
* Frederiks et al. (2023) Frederiks D., et al., 2023, arXiv:2302.13383
* Huang et al. (2022) Huang Y., Hu S., Chen S., Zha M., Liu C., Yao Z., Cao Z., Experiment T. L., 2022, GRB Coordinates Network, 32677, 1
* Ito et al. (2018) Ito H., Levinson A., Stern B. E., Nagataki S., 2018, MNRAS, 474, 2828
* Koshut et al. (1995) Koshut T. M., Kouveliotou C., Paciesas W. S., van Paradijs J., Pendleton G. N., Briggs M. S., Fishman G. J., Meegan C. A., 1995, ApJ, 452, 145
* Lapshov et al. (2022) Lapshov I., Molkov S., Mereminsky I., Semena A., Arefiev V., Tkachenko A., Lutovinov A., SRG/ART-XC Team 2022, GRB Coordinates Network, 32663, 1
* Lazzati (2005) Lazzati D., 2005, MNRAS, 357, 722
* Lesage et al. (2022) Lesage S., Veres P., Roberts O. J., Burns E., Bissaldi E., Fermi GBM Team 2022, GRB Coordinates Network, 32642, 1
* Meegan et al. (2009) Meegan C., et al., 2009, ApJ, 702, 791
* Miceli & Nava (2022) Miceli D., Nava L., 2022, Galaxies, 10, 66
* Minaev et al. (2022) Minaev P., Pozanenko A., Chelovekov I., GRB IKI FuN 2022, GRB Coordinates Network, 32819, 1
* Nappo et al. (2014) Nappo F., Ghisellini G., Ghirlanda G., Melandri A., Nava L., Burlon D., 2014, MNRAS, 445, 1625
* Nava (2018) Nava L., 2018, Int. J. Mod. Phys. D, 27, 1842003
* Pillera et al. (2022) Pillera R., Bissaldi E., Omodei N., La Mura G., Longo F., 2022, The Astronomer’s Telegram, 15656, 1
* Piran (2005) Piran T., 2005, Rev. Mod. Phys., 76, 1143
* Troja et al. (2010) Troja E., Rosswog S., Gehrels N., 2010, ApJ, 723, 1711
* Veres et al. (2022) Veres P., Burns E., Bissaldi E., Lesage S., Roberts O., Fermi GBM Team 2022, GRB Coordinates Network, 32636, 1
* Williams et al. (2023) Williams M. A., et al., 2023, arXiv:2302.03642
* Xia et al. (2022) Xia Z.-Q., Wang Y., Yuan Q., Fan Y.-Z., 2022, arXiv:2210.13052
* Zhu (2015) Zhu S., 2015, PhD thesis, Univ. of Maryland
# WikiGUM: Exhaustive Entity Linking for Wikification in 12 Genres
Jessica Lin
Department of Linguistics
Georgetown University
<EMAIL_ADDRESS>
&Amir Zeldes
Department of Linguistics
Georgetown University
<EMAIL_ADDRESS>
###### Abstract
Previous work on Entity Linking has focused on resources targeting non-nested
proper named entity mentions, often in data from Wikipedia, i.e. Wikification.
In this paper, we present and evaluate WikiGUM, a fully wikified dataset,
covering all mentions of named entities, including their non-named and
pronominal mentions, as well as mentions nested within other mentions. The
dataset covers a broad range of 12 written and spoken genres, most of which
have not been included in Entity Linking efforts to date, leading to poor
performance by a pretrained SOTA system in our evaluation. The availability of
a variety of other annotations for the same data also enables further research
on entities in context.
## 1 Introduction
Entity linking (EL) involves identifying entities within a text and
subsequently linking their mentions to a knowledge base or table of
authorities. The former step is often referred to as Named Entity Recognition
(NER) and the latter may also be referred to as entity disambiguation. In this
study, we will focus on the latter task by following the popular approach of
mapping named entities to Wikipedia entities Milne and Witten (2008);
Shnayderman et al. (2019), i.e. Wikification.
Wikification is the task of adding links to Wikipedia pages to mentions of
named entities in a written or spoken text. This task supports Natural
Language Understanding in downstream tasks such as question answering,
summarization, and relation extraction. However, the scope and structure of EL
depend heavily on datasets, which are either automatically derived from
hyperlinked text and thus suffer some limitations, or are created via human
annotation, a time-consuming and expensive task. Despite numerous existing EL
datasets Cucerzan (2007); Ji et al. (2015); Kulkarni et al. (2009); Milne and
Witten (2008); Ratinov et al. (2011), few have attempted to capture nested
entity structure, as in Figure 1, a structure that never occurs in hyperlinks,
since links cannot be nested.
Figure 1: Nested entity linking.
Instead, annotations have focused on flat mention structure from popular
online sources, leaving out important information in nested entities that can
be useful for downstream tasks. Closest to the resource presented here is the
Nested Named Entities (NNE) dataset Ringland et al. (2019), which is a large,
manually-annotated, nested named entity dataset over English newswire;
however, it does not include entity linking. Although NNE includes fine-
grained semantic information in nested entity types, it is not linked to any
identifiers (e.g. a Wikipedia page). Furthermore, even if used for mention
recognition, the data is not ideal for testing on diverse genres, as NNE
solely covers news text. We also note other datasets capturing some nested
entity structure, such as the Abstract Meaning Representation (AMR) corpus
(Banarescu et al., 2013), which includes compositional nesting e.g. in
possessives such as Toronto’s international airport, composed of a city and an
airport. However, since AMR is not word-aligned to text, even those nested
entities that are covered are not aligned to their textual position.
In this paper, we present and evaluate a gold standard wikified dataset,
called WikiGUM, in which named and non-named entities have been annotated
manually. WikiGUM is based on the existing GUM dataset (Georgetown University
Multilayer corpus, Zeldes 2017), and goes beyond other EL corpora, in covering
all mentions of named entities (NEs), including non-named and pronominal
mentions, as well as nested mentions, for 12 genres of English text. WikiGUM
also enables assessment of EL annotations by highlighting challenges that are
common in our dataset, and reveals the relatively poor coverage of state-of-
the-art NLP systems for EL in diverse genres (Section 4). Taken together, we
aim to facilitate new research on nested NER and EL, to promote recognition of
all NE mentions and a deeper understanding of the hierarchical structure of
entities in text.
## 2 WikiGUM
The underlying GUM corpus Zeldes (2017) is a manually annotated dataset with
multiple layers, including POS tagging (Penn tags, CLAWS5, Universal POS),
sentence types (e.g. declarative, imperative, yes/no question), UD dependency
trees Nivre et al. (2016), coreference resolution (including bridging anaphora
and split antecedents), and RST discourse parses Mann and Thompson (1988).
Data covers 12 genres: academic, biographies, conversation, fiction, forums,
how-to, interviews, news, speeches, textbooks, travel and vlogs.
Text Types | Source | Documents | # of NE Mentions | # of Nested Wikified Mentions | Total Mentions | Tokens
---|---|---|---|---|---|---
Interviews | Wikinews | 19 | 1,146 | 107 | 5,204 | 18,037
News stories | Wikinews | 21 | 1,221 | 217 | 4,130 | 14,094
Travel guides | Wikivoyage | 17 | 1,327 | 174 | 4,087 | 14,955
How-to guides | WikiHow | 19 | 94 | 8 | 4,469 | 16,920
Academic writing | various | 16 | 329 | 48 | 4,486 | 15,110
Biographies | Wikipedia | 20 | 2,450 | 413 | 5,763 | 17,951
Fiction | various | 18 | 195 | 9 | 4,737 | 16,307
Forum discussions | Reddit | 18 | 196 | 2 | 4,530 | 16,286
Conversations | UCSB corpus | 5 | 31 | 0 | 1,477 | 5,698
Political speeches | various | 5 | 316 | 32 | 1,423 | 4,831
CC Vlogs | YouTube | 5 | 39 | 1 | 1,355 | 5,180
Textbooks | OpenStax | 5 | 188 | 21 | 1,507 | 5,376
Total | – | 168 | 7,352 | 1,032 | 43,168 | 150,745
Table 1: Statistics on WikiGUM
WikiGUM adds a layer of Wikipedia identifiers to all NEs in GUM, which are
identified automatically by having the gold PTB POS tag NNP(S) for their
syntactic head, based on gold syntax trees (for some resulting issues, see
below), as well as non-named mentions coreferring to them based on coreference
annotations. Since GUM is expanded by students in classroom annotation every
year, and we plan to continue adding Wikification in the future, no closed or
pre-prepared ontology is applied to the Wiki identifiers, making the task
simpler for student annotators who only need to find a corresponding Wikipedia
article.
That said, the existing 10 entity types in GUM (person, place, org, animal,
plant, event, time, substance, abstract and inanimate object) mean that our EL
benefits from the same categorization scheme as a rough ontology, and the
availability of semantic information from WikiData means that many
relationships between entities can be explored. All referential NPs, including
pronouns and even clauses (if they co-refer with a named entity based on GUM’s
coreference annotations, for example movie titles), were selected as markables
for annotation. Note that nested markables are always included, for example:
(1) [the airport in [Cuba]place]place
Our general guideline for entity linking is that NEs, including pronominal and
non-named mentions, were manually linked to the corresponding Wikipedia
article whenever one exists, using the version controlled online editor GitDox
Zhang and Zeldes (2017). For example:
(2) Kim likes [The Terminator]abstract. [This movie]abstract is her favorite.
In this example, the span This movie should also be linked to the Wikipedia
page that refers to The Terminator (the movie). Statistics on WikiGUM, which
is freely available under the same Creative Commons license as GUM, are shown
in Table 1.
Although the basic Wikification task is fairly straightforward, some
ambiguous/tricky cases during annotation included:
* •
Generic terms: some capitalized common nouns that have Wikipedia links appear
within NEs, and are tagged NNP(S), but do not correspond to named entities.
For example, Oil is incorrectly proposed as a NE due to capitalization within
the NE the Oil Capital of the World (referring to Tulsa, OK) and due to the
POS tag NNP. It can be tempting to link ‘oil’ as a NE candidate to the
Wikipedia article ‘Petroleum’. However in context, ‘oil’ is a generic, non-
named modifier to ‘Capital’, and should not be linked as a NE. Annotators
should be mindful of context of terms tagged NNP(S) within NEs, rather than
linking any NNP span.
* •
Subset of entity with the same type: a common type of ambiguity for place
entities arises when names are reused in different countries, regions, cities
or villages. For example, terms like ‘North’, ‘South’, ‘East’, and ‘West’ as a
subset of a region are hard to disambiguate, and they are common in street
names in North America. In this case, annotators must look at the broader
context and carefully check whether the entity refers to a subset or not, for
example cities and their metropolitan areas, streets with and without cardinal
directions, or other parts of cities which sometimes have separate Wikipedia
entries.
* •
Distinct links for identical mentions: It is sometimes hard for annotators to
realize that an entity string has several EL variants. This happens often in
abbreviations, which may be labelled with the wrong entity type. For example,
‘JFK’ can be a person’s name (the 35th US President) or a place name (JFK
Airport in New York), depending on context. We instructed annotators to
prioritize the existing entity type annotation: if JFK is tagged as a place,
it is linked to the article about the airport. Another common issue affects
ancient place names which do not exist nowadays, resulting in difficulty for
EL. For instance, Jorvik is the viking name of York, and was therefore linked
to the closest equivalent article, ‘Scandinavian York’ rather than ‘York’. In
other cases, we relied on the coreference annotations to establish
equivalence: for example England’s City of Festivals was labeled as
coreferring with York, and was therefore considered equivalent to York for EL
purposes.
* •
Lack of background knowledge: In some cases context alone cannot help
annotators decide on the right sense of an entity, especially in academic
texts, but also in discussion forums. Academic articles often assume readers
have detailed knowledge of the topic and thus provide little context for the
target entity. For example, ‘Su’ in ‘Su et al. 2016’ is a named entity, but it
may be difficult to know whether there is a corresponding Wikipedia article
based solely on the author’s name.
## 3 Related work
Table 2 compares WikiGUM to other EL corpora. Most current EL datasets are
based on newswire text, overlooking the impact of genre on EL – for example,
Dai (2018) notes that the biomedical domain involves complex and unique entity
mentions. As EL datasets are developed for evaluation of EL systems, out-of-
domain data could create challenges for conventional tools. Furthermore, most
previous work Cucerzan (2007); Ji et al. (2015); Kulkarni et al. (2009); Milne
and Witten (2008); Ratinov et al. (2011) has focused on identifying and
classifying atomic, flat mention structures, leaving out the semantic
information available in nested mentions.
As shown in Table 2, most datasets do not contain nested entity linking
annotations, with ACE2004 being the exception Ratinov et al. (2011).
Unfortunately, no dataset covers all mentions of named entities, including
their non-named mentions (‘the same airport’, or ‘it’). As mentioned above, we
do see some research on nested entity structure Glavaš and Šnajder (2014);
Hong et al. (2016), for example the NNE corpus Ringland et al. (2019) contains
fine-grained semantic information including e.g. the category city nesting a
state, which could easily be used for EL. However they are not disambiguated
or linked to a table of authorities, in addition to excluding non-named
mentions of the same entities. WikiGUM thus differs from previous EL datasets
and is rich in terms of both genre and entity structure, as well as being
among the larger available datasets, as shown in Table 2.
Dataset | Paper | # of documents | # of NEs | # of genres | NE/N | Pronouns | Nested Entities
---|---|---|---|---|---|---|---
WikiGUM | – | 168 | 7,352 | 12 | NE&N | ✓ | ✓
ACE2004 | Ratinov et al. (2011) | 36 | 256 | 1 | NE | | ✓
AIDA-A | Hoffart et al. (2011) | 216 | 5,917 | 1 | NE | |
AIDA-B | Hoffart et al. (2011) | 231 | 5,616 | 1 | NE | |
AQUAINT | Milne and Witten (2008) | 50 | 727 | 1 | NE&N | |
Derczynski | Derczynski et al. (2015) | 182 | 210 | 1 | NE | |
IITB | Kulkarni et al. (2009) | 107 | 17,200 | 1 | NE | |
KORE50 | Hoffart et al. (2012) | 50 | 148 | 1 | NE | |
MSNBC | Cucerzan (2007) | 20 | 656 | 1 | NE | |
n3-RSS-500 | Röder et al. (2014) | 500 | 1,000 | 1 | NE | |
n3-Reuters-128 | Röder et al. (2014) | 128 | 880 | 1 | NE | |
OKE2015 | Nuzzolese et al. (2015) | not specified | 718 | 1 | NE&Roles | |
OKE2016 | Nuzzolese et al. (2016) | not specified | 940 | 1 | NE&Roles | |
Table 2: English EL datasets. NE/N indicates whether only named entities (NE)
or also common nouns (N) are included. Note that ACE2004 is a subset of the
documents used in the ACE 2004 coreference data.
## 4 Evaluation
In this section we evaluate inter-annotator agreement, as well as the extent
to which existing Wikification technology already captures the information in
WikiGUM.
### Inter-annotator agreement
Measuring agreement for Wikification involves two main complementary aspects:
span detection and Wikification (including the decision whether to link an
entity and to what). Since GUM already contains mention boundaries and
named/non-named status, we focus on the latter task, measuring linking
agreement. To calculate agreement, we carried out an inter-annotator agreement
experiment by double annotating 3,103 tokens of corpus data containing 237
entities after adjudication, about 3% of the data. We compute both Cohen’s
Kappa and simple percent agreement (percentage of exact match), shown in Table
3. Note that computing Cohen’s Kappa here is somewhat artificial, as in the
real world there is an (almost) unlimited space of possible Wikipedia
identifiers. For simplicity, we define the space of possible links as the
union of any values annotators used in this subset, meaning that any link
chosen at any point by any annotator is considered a possible value for the
annotation, and any disagreement is penalized by the metric. (An anonymous
reviewer has asked whether this means that search ambiguity and an
overwhelming number of options impacted our process: this is certainly true,
and somewhat inevitable given that annotators were unrestricted in the
identifiers they could choose from Wikipedia. However, the high level of
absolute agreement suggests that in practice annotators were surprisingly
internally consistent.)
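A minimal sketch of this computation, treating each adjudicated mention as one item and the two annotators' link choices as categorical labels; the link values below are invented for illustration and echo the examples discussed in Section 2:

```python
from sklearn.metrics import cohen_kappa_score

# One label per mention; "NONE" marks a decision not to link.
annotator_a = ["Scandinavian_York", "York", "NONE", "JFK_Airport", "Tulsa"]
annotator_b = ["Scandinavian_York", "York", "The_arts", "JFK_Airport", "Tulsa"]

exact = sum(a == b for a, b in zip(annotator_a, annotator_b)) / len(annotator_a)
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(exact, kappa)
```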
Metric | Score
---|---
Agreement | 0.8903
Cohen’s $\kappa$ | 0.8782
Table 3: Results of the inter-annotator agreement experiment
Results in Table 3 show that agreement is far beyond chance, with Kappa=.87
and simple agreement of .89. While this indicates very good agreement, raters
did disagree on ambiguous cases, which is worth discussing. A major source of
disagreement involves linking the same entity string to distinct but related
identifiers, i.e. the name variants issue highlighted in Section 2. This is
often due to lack of context information: for example, in the sentence “CC
makes things more complex”, the mention CC could be linked to Creative Commons
license (public copyright license) or Creative Commons (organization that
produced the Creative Commons license). In this case, broader context and
reasoning are required to make consistent decisions. Another example is the
sentence “According to the Arts and Humanities Citation Index Professor
Chomsky is the eighth most cited scholar of all time.”, the mention Arts is
not a NE and should not be linked, but was linked by one annotator to “The
arts”, which has a linkable article and can easily be confused with a named
concept of sorts.
### NLP coverage
To evaluate the usefulness of WikiGUM beyond existing resources, we test a
recent SOTA pretrained end-to-end neural system (e2e, Kolitsas et al. 2018) on
the test set and compare it to a baseline strategy. Our baseline system uses a
neural constituent parser Mrini et al. (2020) to identify predicted noun
phrases and simply checks the exact string of every phrase headed by a proper
noun to see if it has a Wikipedia article (using the Python library
wikipedia); a minimal sketch of this baseline follows below. Since the SOTA
system cannot identify nested mentions, and we do not know which of two nested
mentions it might identify (the larger or the smaller one), we evaluate in
multiple scenarios: counting all mentions, only unnested mentions, and, since
the system cannot identify pronouns, with and without them.
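A minimal sketch of the baseline, assuming a hypothetical helper proper_noun_phrases standing in for the Mrini et al. (2020) parser step; the wikipedia package calls are standard for that library, but the snippet as a whole is illustrative rather than the exact system used:

```python
import wikipedia

def link_baseline(phrases):
    """Map each candidate phrase to a Wikipedia title, or None if no page."""
    links = {}
    for span in phrases:
        try:
            # auto_suggest=False keeps this an exact-title lookup.
            links[span] = wikipedia.page(span, auto_suggest=False).title
        except (wikipedia.DisambiguationError, wikipedia.PageError):
            links[span] = None  # ambiguous or missing: leave unlinked
    return links

# phrases = proper_noun_phrases(parse(doc))  # hypothetical parser step
print(link_baseline(["Noam Chomsky", "Georgetown University"]))
```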
| data | P | R | F1 | links
---|---|---|---|---|---
e2e | all | 0.398 | 0.192 | 0.259 | 827
| -pron | 0.441 | 0.259 | 0.327 | 677
| -nest | 0.363 | 0.203 | 0.260 | 713
| -both | 0.363 | 0.253 | 0.298 | 573
baseline | all | 0.480 | 0.182 | 0.264 | 827
| -pron | 0.480 | 0.223 | 0.304 | 677
| -nest | 0.363 | 0.203 | 0.260 | 713
| -both | 0.391 | 0.214 | 0.277 | 573
Table 4: Baseline and e2e results on WikiGUM test set.
The results in Table 4 show that e2e, even when trained on the largest available
Wikification dataset (AIDA, $\approx$1,000 documents, 18K links) does not
generalize well to the domains found in our corpus, barely outperforming a
naive lookup baseline. Comparing the scenarios, we see that best performance
for both systems is achieved when removing pronouns, which is unsurprising
since neither strategy can be expected to link them. However removing nested
mentions does not result in higher scores: this is because some common
targets, such as places, are often nested in larger names (organizations,
office-holders), and removing them disrupts score gains from their correct
identification. Nevertheless, the most lenient possible evaluations are in the
low 30s, as opposed to a score of 82.6 on AIDA (Kolitsas et al., 2018, 524).
This suggests that, unsurprisingly, the corpus covers a range of entities and
contexts that are under- or unrepresented in previous benchmarks.
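The scenario-based scoring of Table 4 can be sketched as set comparisons over (span, link) pairs, filtering the gold mentions per scenario; the data structures below are illustrative, not the actual scorer:

```python
def prf(gold, pred):
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def filter_gold(mentions, drop_pronouns=False, drop_nested=False):
    return {
        (m["span"], m["link"]) for m in mentions
        if not (drop_pronouns and m["is_pronoun"])
        and not (drop_nested and m["is_nested"])
    }

gold = [
    {"span": "Chomsky", "link": "Noam_Chomsky", "is_pronoun": False, "is_nested": False},
    {"span": "he", "link": "Noam_Chomsky", "is_pronoun": True, "is_nested": False},
]
pred = {("Chomsky", "Noam_Chomsky")}
print(prf(filter_gold(gold, drop_pronouns=True), pred))  # the "-pron" scenario
```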
## 5 Conclusion
This paper presented WikiGUM, the first exhaustive Wikification dataset for
named entity linking, including nested and pronominal mentions in 12 genres of
English text. Our evaluation suggests a high level of agreement, as well as
coverage for a significant amount of entities not retrieved by a SOTA neural
linking system. We hope that this dataset will enable further research on
entity linking and increase coverage for all types of linkable named entities
across a broad spectrum of genres, both spoken and written.
## References
* Banarescu et al. (2013) Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In _Proceedings of the 7th linguistic annotation workshop and interoperability with discourse_ , pages 178–186.
* Cucerzan (2007) Silviu Cucerzan. 2007. Large-scale named entity disambiguation based on Wikipedia data. In _Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL)_ , pages 708–716.
* Dai (2018) Xiang Dai. 2018. Recognizing complex entity mentions: A review and future directions. In _Proceedings of ACL 2018, Student Research Workshop_ , pages 37–44.
* Derczynski et al. (2015) Leon Derczynski, Diana Maynard, Giuseppe Rizzo, Marieke Van Erp, Genevieve Gorrell, Raphaël Troncy, Johann Petrak, and Kalina Bontcheva. 2015. Analysis of named entity recognition and linking for tweets. _Information Processing & Management_, 51(2):32–49.
* Glavaš and Šnajder (2014) Goran Glavaš and Jan Šnajder. 2014. Constructing coherent event hierarchies from news stories. In _Proceedings of TextGraphs-9: the workshop on Graph-based Methods for Natural Language Processing_ , pages 34–38.
* Hoffart et al. (2012) Johannes Hoffart, Stephan Seufert, Dat Ba Nguyen, Martin Theobald, and Gerhard Weikum. 2012. KORE: keyphrase overlap relatedness for entity disambiguation. In _Proceedings of the 21st ACM international conference on Information and knowledge management_ , pages 545–554.
* Hoffart et al. (2011) Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In _Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing_ , pages 782–792.
* Hong et al. (2016) Yu Hong, Tongtao Zhang, Tim O’Gorman, Sharone Horowit-Hendler, Heng Ji, and Martha Palmer. 2016. Building a cross-document event-event relation corpus. In _Proceedings of the 10th Linguistic Annotation Workshop held in conjunction with ACL 2016 (LAW-X 2016)_ , pages 1–6.
* Ji et al. (2015) Heng Ji, Joel Nothman, Ben Hachey, and Radu Florian. 2015. Overview of TAC-KBP2015 tri-lingual entity discovery and linking. In _TAC_.
* Kolitsas et al. (2018) Nikolaos Kolitsas, Octavian-Eugen Ganea, and Thomas Hofmann. 2018. End-to-end neural entity linking. In _Proceedings of the 22nd Conference on Computational Natural Language Learning_ , pages 519–529, Brussels, Belgium. Association for Computational Linguistics.
* Kulkarni et al. (2009) Sayali Kulkarni, Amit Singh, Ganesh Ramakrishnan, and Soumen Chakrabarti. 2009. Collective annotation of Wikipedia entities in web text. In _Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining_ , pages 457–466.
* Mann and Thompson (1988) William C Mann and Sandra A Thompson. 1988. Rhetorical Structure Theory: Toward a functional theory of text organization. _Text_ , 8(3):243–281.
* Milne and Witten (2008) David Milne and Ian H Witten. 2008. Learning to link with Wikipedia. In _Proceedings of the 17th ACM conference on Information and knowledge management_ , pages 509–518.
* Mrini et al. (2020) Khalil Mrini, Franck Dernoncourt, Quan Hung Tran, Trung Bui, Walter Chang, and Ndapa Nakashole. 2020. Rethinking self-attention: Towards interpretability in neural parsing. In _Findings of the Association for Computational Linguistics: EMNLP 2020_ , pages 731–742, Online. Association for Computational Linguistics.
* Nivre et al. (2016) Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal Dependencies v1: A multilingual treebank collection. In _Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16)_ , pages 1659–1666.
* Nuzzolese et al. (2015) Andrea Giovanni Nuzzolese, Anna Lisa Gentile, Valentina Presutti, Aldo Gangemi, Darío Garigliotti, and Roberto Navigli. 2015. Open knowledge extraction challenge. In _Semantic Web Evaluation Challenges_ , pages 3–15, Cham. Springer International Publishing.
* Nuzzolese et al. (2016) Andrea Giovanni Nuzzolese, Anna Lisa Gentile, Valentina Presutti, Aldo Gangemi, Robert Meusel, and Heiko Paulheim. 2016. The second open knowledge extraction challenge. In _Semantic Web Evaluation Challenge_ , pages 3–16. Springer.
* Ratinov et al. (2011) Lev Ratinov, Dan Roth, Doug Downey, and Mike Anderson. 2011. Local and global algorithms for disambiguation to Wikipedia. In _Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies_ , pages 1375–1384.
* Ringland et al. (2019) Nicky Ringland, Xiang Dai, Ben Hachey, Sarvnaz Karimi, Cecile Paris, and James R Curran. 2019. NNE: A dataset for nested named entity recognition in English newswire. _arXiv preprint arXiv:1906.01359_.
* Röder et al. (2014) Michael Röder, Ricardo Usbeck, Sebastian Hellmann, Daniel Gerber, and Andreas Both. 2014. N3-A collection of datasets for named entity recognition and disambiguation in the NLP interchange format. In _LREC_ , pages 3529–3533.
* Shnayderman et al. (2019) Ilya Shnayderman, Liat Ein-Dor, Yosi Mass, Alon Halfon, Benjamin Sznajder, Artem Spector, Yoav Katz, Dafna Sheinwald, Ranit Aharonov, and Noam Slonim. 2019. Fast end-to-end wikification. _arXiv preprint arXiv:1908.06785_.
* Zeldes (2017) Amir Zeldes. 2017. The GUM corpus: Creating multilayer resources in the classroom. _Language Resources and Evaluation_ , 51(3):581–612.
* Zhang and Zeldes (2017) Shuo Zhang and Amir Zeldes. 2017. GitDOX: A linked version controlled online XML editor for manuscript transcription. In _FLAIRS Conference_ , pages 619–623.
# Kink-antikink scattering-induced breathing bound states and oscillons in a
parametrized $\phi^{4}$ model
F. Naha Nzoupe Laboratory of Mechanics, Department of Physics, Faculty of
Science, University of Yaoundé I P.O. Box 812 Yaoundé, Cameroon
Alain M. Dikandé Laboratory of Research on Advanced Materials and Nonlinear
Science (LaRAMaNS), Department of Physics, Faculty of Science, University of
Buea P.O. Box 63 Buea, Cameroon.
<EMAIL_ADDRESS> C. Tchawoua Laboratory of Mechanics, Department of
Physics, Faculty of Science, University of Yaoundé I P.O. Box 812 Yaoundé,
Cameroon
###### Abstract
Recent studies have emphasized the important role that a shape deformability
of scalar-field models pertaining to the same class with the standard
$\phi^{4}$ field, can play in controlling the production of a specific type of
breathing bound states so-called oscillons. In the context of cosmology, the
built-in mechanism of oscillons suggests that they can affect the standard
picture of scalar ultra-light dark matter. In the present work kink
scatterings are investigated in a parametrized model of bistable system
admitting the classical $\phi^{4}$ field as an asymptotic limit, with focus on
the formation of long-lived low-amplitude almost harmonic oscillations of the
scalar field around a vacuum. The parametrized model is characterized by a
double-well potential with a shape-deformation parameter that changes only the
steepness of the potential walls, and hence the flatness of the hump of the
potential barrier, leaving unaffected the two degenerate minima and the
barrier height. It is found that the variation of the deformability parameter
promotes several additional vibrational modes in the kink-phonon scattering
potential, leading to suppression of the two-bounce windows in kink-antikink
scatterings and the production of oscillons. Numerical results suggest that
the anharmonicity of the potential barrier, characterized by a flat barrier
hump, is the main determinant factor for the production of oscillons in
double-well systems.
###### keywords:
scalar field; parametrized $\phi^{4}$ model; instantons; kink-antikink
collision; oscillons.
Received (Day Month Year). Revised (Day Month Year).
PACS Nos.: 03.50.-z; 05.45.Yv; 11.10.St; 03.65.Nk.
## 1 Introduction
The generation and interactions of solitary waves and solitons have attracted
a great deal of interest over the past years, due to the fact that they can
control many features related to the dynamics of natural systems ranging from
biology and organic polymers, to classical and quantized fields in condensed-
matter and high-energy physics [1, 2, 3, 4, 5, 6]. The simplest localized
solutions known in field theory are kink and antikink solitons; they display
topological profiles in ($1+1$) space-time dimensions and can be generated in
classical as well as quantum scalar field systems.
In non-integrable scalar field theories such as the $\phi^{4}$ field [4, 7],
scatterings of a kink-antikink pair usually give rise to a competition between
a bion state and a two-soliton solution characterized by a fractal structure
in the parameter space of scattering velocity [8]. For some impact velocities
the kink-antikink collision will give birth to a breather-like bound-state
(bion) solution that radiates progressively until the total annihilation of
the pair. For other ranges of velocities, the pair undergoes an inelastic
scattering, with the solitons colliding once and separating thereafter. There
also exist particular regions in velocity ($n-$bounce windows), where the
scalar field at the center of mass can bounce several times ($n$ times) before
the final separation of the pair. The latter $n-$bounce windows have been
explained as the consequence of a resonance mechanism for the exchange of
energy between the vibrational and translational modes, resulting from
discrete eigenstates of the Schrödinger-like equation inherent to the
stability analysis of the $\phi^{4}$ kink [8, 9].
The last decade has witnessed a revival of interest in kink scatterings in
non-integrable models, marked by intensive studies for instance of multi-kink
collisions [10, 11, 12, 13, 14, 15], the interactions of a kink or an antikink
with a boundary or a defect [16, 17], the scattering processes in models with
generalized dynamics [18], nonpolynomial models [19, 20, 21, 22], polynomial
models with one [23, 24, 16, 25, 26, 27, 28, 29, 30, 31, 32] and two [33, 34,
35, 36, 37] scalar fields, and so on. However, all these studies involve
mostly two universal models, namely the sine-Gordon model [1], assumed to
describe systems with periodic one-site potentials, and the $\phi^{4}$ model
intended for physical systems with double-well (DW) potentials. Although the
$\phi^{4}$ kink, for example, has very recently been linked with topological
excitations observed in buckled graphene nanoribbons [38], the real physical
systems which the two universal models address are actually quite diverse, and
most often unique in some aspects of their physical features. Indeed the
$\phi^{4}$ and sine-Gordon models have fixed extrema, while their shape
profiles, including their potential barriers, are rigid, which confines their
applicability to a very narrow class of physical systems. To lift the
shortcomings related to the rigidity of shape profiles, these two universal
models have been parametrized, leading to two hierarchies of deformable-shape
one-site potentials, i.e. the Remoissenet-Peyrard periodic potential [39, 40,
41] and the family of Dikandé-Kofané (DK) DW potentials [41, 42, 43].
In two recent studies [22, 44], Bazeia et al addressed the issue of the
influence of shape deformability of DW potentials, on kink-antikink
scatterings with production of oscillon bound states [14, 22, 44]. Thus they
first applied the shape deformability procedure to the standard $\phi^{4}$ by
introducing a bistable model with non-polynomial potential, which they called
sinh-deformed $\phi^{4}$ potential [22]. Despite this new model showing
similar features with the $\phi^{4}$ model a new phenomenon was observed,
indeed under certain conditions the kink-antikink pair in the new model was
found to convert, after collision, into long-lived low amplitude and almost
harmonic oscillations of the scalar field around one vacuum. They interpreted
these almost harmonic oscillations as a bound state of individual oscillons
[45]. Later on the authors investigated [44] kink-antikink collisions with
production of oscillons, considering two members of the family of DK DW
potentials [42, 43, 46, 47]. One member was a DW potential with variable
separation between the two degenerate minima but with fixed barrier height
[42], and the other member was the DW potential with variable barrier height
but fixed positions of the two degenerate minima. The production of oscillons
in these two members of the family of DK DW potentials was established, and
shown to occur when the distance between the minima gets smaller for the first
member, and when the barrier height becomes lower for the second member. Based
on their results with the two DK DW models, the authors concluded that the
lowering of kink energy with increase of the shape deformability parameter was
the determinant factor favoring the production of oscillons in the two models.
Stimulated by the studies of Bazeia et al [22, 44], in the present work we
investigate kink-antikink collisions and the possible production of oscillons
in a DW model with fixed barrier height and fixed separation between the two
degenerate minima, but a variable curvature of the barrier hump. With this we
wish to establish that parametrized DW models with kink energy increasing as a
function of a deformability parameter are also quite prone to the production of
oscillons upon kink-antikink collisions. In fact we will show that the
anharmonicity of the potential at its maximum, characterized by a flat barrier
hump with increasing deformability parameter, is more likely to represent the
unifying factor favoring the production of oscillons in the DK DW hierarchy.
To proceed we shall introduce a new member of the family of DK DW
potentials [43], characterized by a parametrization that leaves the barrier
height and the positions of the two potential minima unaffected, but allows
tuning the steepness (or the curvatures) of the potential walls, causing the
barrier hump to flatten out.
In Section 2 we introduce the member of the parametrized DK DW family with
variable steepness, and formulate its field-theoretical dynamics. This enables
us to determine some associated characteristic quantities, such as its kink and
antikink solutions and the kink creation energy. In Section 3 we examine the
kink-antikink scatterings, with emphasis on the production of oscillons as the
deformability parameter is varied. Section 4 is devoted to a summary of the
results and to concluding remarks.
## 2 The model, kink solution and kink-phonon scattering spectrum
Consider a field-theoretical model in ($1+1$) dimensional space-time, the
dynamics of which is described by the Lagrangian:
$L=\frac{1}{2}\left(\frac{\partial\varphi}{\partial
t}\right)^{2}-\frac{1}{2}\left(\frac{\partial\varphi}{\partial
x}\right)^{2}-V(\varphi,\mu),$ (1)
where $\varphi(x,t)$ is a real scalar field in one spatial ($x$) and one
temporal ($t$) dimension. $V(\varphi,\mu)$ is a one-body scalar potential which
can be expressed more generally as [43]:
$V(\varphi,\mu)=\frac{1}{8}\left(\frac{\sinh^{2}(\alpha(\mu)\varphi)}{\mu^{2}}-1\right)^{2},\hskip
8.5359pt\mu>0.$ (2)
In the present study we pick:
$\alpha(\mu)=\mathrm{arcsinh}(\mu),$ (3)
which is a function of a real parameter $\mu$ assumed to control the shape
profile of the DW potential. For arbitrary values of the shape deformability
parameter $\mu$, the scalar potential $V(\varphi,\mu)$ is a bistable function
symmetric around a potential barrier located at the equilibrium state
$\varphi=0$. The potential possesses two degenerate vacuum states at
$\varphi=\pm 1$. Thus, unlike the two members of the DK DW potential discussed
by Bazeia et al in ref. [44], the barrier height and minima positions of the
parametrized DW potential (2) are always fixed. However, in fig. 1, where
$V(\varphi,\mu)$ is sketched for some values of $\mu$, one sees that the
variation of $\mu$ influences the steepness of the potential walls.
Figure 1: (Color online) Plot of the double-well potential $V(\varphi,\mu)$,
for some values of $\mu$: $\mu=0$ (Solid line), $\mu=2.0$ (Dashed line),
$\mu=4.0$ (Dot-dashed line), $\mu=8.0$ (Dotted line).
Quite interestingly, fig. 1 suggests that the change in steepness of the
potential walls, caused by a variation of the deformability parameter $\mu$,
renders the top of the potential barrier either flat or sharp. Indeed, when
$\mu$ tends to zero the parametrized DW potential (2) reduces exactly to the
standard $\phi^{4}$ potential [48, 49]:
$V(\varphi)=\frac{1}{8}\left(\varphi^{2}-1\right)^{2}.$ (4)
As $\mu$ increases the minima positions and the barrier height remain
unchanged, but the slope of the potential walls gets steeper: the narrowest
part of the potential barrier broadens while the flatness (i.e., the
anharmonicity) of the barrier hump (or top) becomes more pronounced, resulting
in an enhanced confinement of the two potential wells.
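These two invariances are easy to check numerically. The following minimal
sketch (assuming NumPy; the helper name `V` is ours) evaluates (2)-(3) and
confirms that the barrier height $V(0,\mu)=1/8$ and the vacua at
$\varphi=\pm 1$ are independent of $\mu$:

```python
# Minimal numerical check of eqs. (2)-(3), assuming NumPy.
# For every mu > 0 the barrier height V(0, mu) = 1/8 and the two vacua
# V(+/-1, mu) = 0 are unchanged; only the wall steepness varies.
import numpy as np

def V(phi, mu):
    alpha = np.arcsinh(mu)  # alpha(mu) = arcsinh(mu), eq. (3)
    return 0.125 * (np.sinh(alpha * phi) ** 2 / mu ** 2 - 1.0) ** 2

for mu in (0.5, 2.0, 4.0, 8.0):
    print(mu, V(0.0, mu), V(1.0, mu), V(-1.0, mu))
    # -> 0.125, 0.0, 0.0 for every mu
```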
The Lagrangian in formula (1) leads to the following equation of motion for
the field $\varphi$:
$\frac{\partial^{2}\varphi}{\partial
t^{2}}-\frac{\partial^{2}\varphi}{\partial
x^{2}}+\frac{d}{d\varphi}V(\varphi,\mu)=0.$ (5)
In the static regime, the solitary-wave solution to this equation is given by:
$\varphi_{K,\bar{K}}(x)=\pm\frac{1}{\alpha(\mu)}\tanh^{-1}\left[\frac{\mu}{\sqrt{1+\mu^{2}}}\tanh\frac{\sqrt{2}x}{d(\mu)}\right],$
(6)
where:
$d(\mu)=\frac{2\mu}{\alpha(\mu)\sqrt{(1+\mu^{2})}}.$ (7)
The solution with the "$+$" sign stands for a kink $\varphi_{K}(x)$, while the
solution with the "$-$" sign stands for an antikink $\varphi_{\bar{K}}(x)$ of
width $d(\mu)$. The characteristic energy (or rest mass) associated with the
static kink and antikink solutions, eq. (6), is obtained from the general
expression:
$E_{K}=\int^{+\infty}_{-\infty}\rho_{\mu}(x)dx,$ (8)
with:
$\rho_{\mu}(x)=\frac{1}{2}\left(\frac{\partial\varphi}{\partial
x}\right)^{2}+V(\varphi,\mu)$ (9)
the kink energy density. Substituting the solitary-wave solution obtained in
formula (6) this yields:
$E_{K}=\frac{1}{4\alpha(\mu)\mu^{2}}\left[2\alpha(\mu)(1+\mu^{2})-\sinh(2\alpha(\mu))\right].$
(10)
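The closed form (10) may be cross-checked by direct quadrature of the energy
integral (8) along the profile (6). A minimal sketch of such a check, assuming
NumPy, reads:

```python
# Sketch of a quadrature cross-check of eq. (10), assuming NumPy:
# integrate the energy density (9) along the static kink profile (6).
import numpy as np

mu = 2.0
alpha = np.arcsinh(mu)                                   # eq. (3)
d = 2.0 * mu / (alpha * np.sqrt(1.0 + mu ** 2))          # eq. (7)

x = np.linspace(-40.0, 40.0, 200001)
phi = (1.0 / alpha) * np.arctanh(
    mu / np.sqrt(1.0 + mu ** 2) * np.tanh(np.sqrt(2.0) * x / d))  # eq. (6)
V = 0.125 * (np.sinh(alpha * phi) ** 2 / mu ** 2 - 1.0) ** 2      # eq. (2)
rho = 0.5 * np.gradient(phi, x) ** 2 + V                          # eq. (9)

E_quadrature = np.trapz(rho, x)                                   # eq. (8)
E_closed = (2 * alpha * (1 + mu ** 2) - np.sinh(2 * alpha)) / (4 * alpha * mu ** 2)
print(E_quadrature, E_closed)                                     # eq. (10)
```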
In fig. 2, shape profiles of the static kink solution $\varphi_{K}(x)$ (a) and
of the kink energy density $\rho_{\mu}(x)$ (b), are plotted versus the spatial
coordinate $x$ for some values of the deformability parameter $\mu$. The
bottom graph in the figure, i.e. graph (c), represents the variation of the
kink rest energy as a function of the deformability parameter $\mu$.
Figure 2: (Color online) (a) Shape of the kink $\varphi_{K}(x)$ and (b) of the
energy density $\rho_{\mu}(x)$ as a function of $x$, for: $\mu=0$ (Solid
line), $\mu=2.0$ (Dashed line), $\mu=4.0$ (Dot-dashed line) and $\mu=8.0$
(Dotted line). (c) Variation of the kink creation energy $E_{K}$, as a
function of $\mu$.
One sees that as the deformability parameter $\mu$ increases, the asymptotic
values of $\varphi_{K}(x)$ as $|x|\rightarrow\infty$ remain the same but a
decrease in the kink width is noticeable. On the other hand, an increase of
$\mu$ leaves the maximum of the energy density unaffected but affects the
width of the energy density in the region covered by the barrier and the
potential wells. Remarkably, the energy density decreases with $\mu$ far out
in the region covered by the repulsive walls of the potential, so that the
kink is expected to become more localized in this region.
Fig. 2c depicts the kink rest energy as a monotonically increasing function of
the shape deformability parameter. In other words, an increase in $\mu$ will
enhance the kink stability and hence the sharpness of the kink profile.
Most of the processes resulting from kink-antikink collisions arise as a
consequence of vibrational modes inherent to the kink scattering excitation
spectrum. Usually perturbation theory is used to derive the spectrum of
localized excitations around a kink [47, 48]. To this end, perturbing the
scalar field $\varphi(x,t)$ linearly around the one-kink solution
$\varphi_{K}(x)$, i.e., $\varphi(x,t)=\varphi_{K}(x)+\eta(x)\exp(-i\omega t)$,
yields the following Schrödinger-like eigenvalue problem [47, 48]:
$\left[-\frac{\partial^{2}}{\partial
x^{2}}+V_{sch}(x,\mu)\right]\eta=\omega^{2}\eta.$ (11)
In this eigenvalue equation the quantity
$V_{sch}(x,\mu)=\frac{d^{2}V}{d\varphi^{2}}|_{\varphi_{K}}$ is the scattering
potential, which in the present case is given by:
$\displaystyle V_{sch}(x,\mu)$ $\displaystyle=$ $\displaystyle
a_{0}\frac{\left[\mu^{2}\tanh^{4}\left(\frac{x}{d(\mu)}\right)+3\tanh^{2}\left(\frac{x}{d(\mu)}\right)-c(\mu)\right]}{\left[d(\mu)\left(\mu^{2}\tanh^{2}\left(\frac{x}{d(\mu)}\right)-c(\mu)\right)\right]^{2}},$
$\displaystyle c(\mu)$ $\displaystyle=$ $\displaystyle 1+\mu^{2}.$ (12)
Note that this scattering potential determines the kink stability upon
scattering with phonons [47, 48]. Instructively an identical expression for
$V_{sch}(x,\mu)$ is obtained by taking a linear perturbation around an
antikink solution.
Distinct profiles of the scattering potential $V_{sch}(x,\mu)$, for different
values of the deformability parameter $\mu$, are represented in fig. 3.
Figure 3: (Color online) Plot of the scattering potential $V_{sch}(x,\mu)$ as
a function of $x$, for $\mu=0$ (Solid line), $\mu=1.0$ (Dashed line),
$\mu=2.0$ (Dot-dashed line) and $\mu=3.0$ (Dotted line).
One sees that as $\mu$ increases, the width of the scattering potential
gradually decreases while its asymptotic value grows drastically. In the
range $0<\mu\lesssim 1.2$ the potential possesses a global minimum located
at $x=0$, which transforms into a local maximum, together with the appearance
of two degenerate minima in the potential, as the value of $\mu$ rises above
$1.2$.
Like the scattering potential, the occurrence of bound states is of key
importance for grasping some relevant features of the scattering structure of
the system [22, 44]. In particular, a resonance mechanism for the exchange of
energy between the translational mode and a vibrational mode may have rich
consequences for the spectral features of the system [44]. To gain insight
into this last feature, we solved the eigenvalue equation (11) for several
values of $\mu$; results emphasizing the influence of the parametrization on
the appearance of bound states are shown in fig. 4.
Figure 4: (Color online) Plot of the squared frequencies $\omega^{2}$ of the
vibrational states, as a function of $\mu$.
We note the presence of a zero mode for all values of the shape deformability
parameter; moreover, the appearance of new bound states is observed as $\mu$
rises. For instance, when $\mu$ lies in the range
$0.55\lesssim\mu\lesssim 1.8$ we notice the presence of two vibrational
states, and in the ranges $1.8\lesssim\mu\lesssim 4.0$ and $\mu\gtrsim 4$ a
third and a fourth bound state emerge, respectively. Furthermore, the lowest
vibrational state has its frequency increasing as $\mu$ grows up to a specific
value and then decreasing, while the frequencies of the higher vibrational
states are monotonically increasing functions of the deformability parameter.
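A convenient way to reproduce fig. 4 is to evaluate
$V_{sch}=d^{2}V/d\varphi^{2}|_{\varphi_{K}}$ along the kink (6) and
diagonalize the discretized operator in (11). A minimal sketch, assuming NumPy
(the grid size and interval are arbitrary choices of ours), reads:

```python
# Sketch of a finite-difference solution of the eigenvalue problem (11),
# assuming NumPy: discretize -d^2/dx^2 + V_sch(x, mu) on a large interval
# with Dirichlet ends and diagonalize; the lowest eigenvalues approximate
# the squared frequencies omega^2 of fig. 4.
import numpy as np

mu = 2.0
alpha = np.arcsinh(mu)
d = 2.0 * mu / (alpha * np.sqrt(1.0 + mu ** 2))

x = np.linspace(-40.0, 40.0, 2001)
h = x[1] - x[0]
phi = (1.0 / alpha) * np.arctanh(
    mu / np.sqrt(1.0 + mu ** 2) * np.tanh(np.sqrt(2.0) * x / d))  # kink (6)

# V_sch = d^2V/dphi^2 at phi_K, with S = sinh^2(alpha*phi)/mu^2, so that
# V = (S - 1)^2 / 8 and V'' = [ (S')^2 + (S - 1) S'' ] / 4.
S = np.sinh(alpha * phi) ** 2 / mu ** 2
Sp = alpha * np.sinh(2.0 * alpha * phi) / mu ** 2
Spp = 2.0 * alpha ** 2 * np.cosh(2.0 * alpha * phi) / mu ** 2
Vsch = 0.25 * (Sp ** 2 + (S - 1.0) * Spp)

H = (np.diag(2.0 / h ** 2 + Vsch)
     - np.diag(np.ones(len(x) - 1) / h ** 2, 1)
     - np.diag(np.ones(len(x) - 1) / h ** 2, -1))
print(np.linalg.eigvalsh(H)[:5])  # compare with the bound states of fig. 4
```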
## 3 Analysis of kink-antikink collisions
The dynamical equation pertaining to colliding kink-antikink pairs will be
solved numerically in this section, with the aim of exploring some
characteristic spectral features of kink-antikink scatterings and identifying
the vibrational modes associated with the collisions. To this end, eq. (5) is
discretized on a spatial grid with periodic boundary conditions. The grid is
divided into $N$ nodes with a fixed zone width $\Delta x$, the location of the
$n$th point on the grid being given by $x_{n}=n\Delta x$. The scalar field is
then defined by $\varphi_{n}(t)=\varphi(x_{n},t)$ for $n=1,2,...,N$. The
second-order spatial derivative is approximated using a fourth-order
central-difference scheme [50], which leads to a set of $N$ coupled
second-order ordinary differential equations in $\varphi_{n}$, i.e.:
$\displaystyle\frac{\partial^{2}\varphi_{n}}{\partial
t^{2}}=\frac{1}{12(\Delta
x)^{2}}(-\varphi_{n-2}+16\varphi_{n-1}-30\varphi_{n}$
$\displaystyle+16\varphi_{n+1}-\varphi_{n+2})-\frac{dV(\varphi_{n},\mu)}{d\varphi_{n}},$
(13)
which is solved numerically using a fourth-order Runge-Kutta scheme with fixed
step. The errors of our algorithm scale as $(\Delta x)^{4}$ and $(\Delta
t)^{4}$.
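A minimal sketch of the right-hand side of (13), assuming NumPy (with
`np.roll` implementing the periodic boundary conditions), reads:

```python
# Sketch of the spatial discretization (13), assuming NumPy: fourth-order
# central difference for phi_xx on a periodic grid plus the force -dV/dphi.
import numpy as np

def rhs(phi, dx, mu):
    """Right-hand side of eq. (13) for the array phi_n (periodic grid)."""
    lap = (-np.roll(phi, 2) + 16.0 * np.roll(phi, 1) - 30.0 * phi
           + 16.0 * np.roll(phi, -1) - np.roll(phi, -2)) / (12.0 * dx ** 2)
    alpha = np.arcsinh(mu)
    S = np.sinh(alpha * phi) ** 2 / mu ** 2          # eq. (2) building block
    dV = 0.25 * (S - 1.0) * alpha * np.sinh(2.0 * alpha * phi) / mu ** 2
    return lap - dV
```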
The initial data used in our simulations represent a kink and an antikink
centered at the points $x=-x_{0}$ and $x=x_{0}$ respectively, and moving
toward each other with initial velocities $\upsilon$ in the laboratory frame.
The starting function can therefore be expressed as:
$\displaystyle\varphi(x,0)=\varphi_{K}(x+x_{0},v,0)-\varphi_{K}(x-x_{0},-v,0)-\varphi_{m},$
(14)
where $\varphi_{m}=\pm 1$ for the kink-antikink and the antikink-kink initial
configurations, respectively. We set the grid to be sufficiently large, with
left and right boundaries at $x_{l}=-400$ and $x_{r}=+400$ respectively, and
the separation distance to be $2x_{0}=24$. The choice of a large grid,
together with periodic boundary conditions, prevents the reflected kinks from
reaching the boundary and keeps any radiation emitted during the collision
process from coming back to interact with the kinks. The grid is discretized
with $N=10^{5}$ nodes and all simulations were run with a temporal step size
$\Delta t=0.7(\Delta x)$, found to be a good compromise between the
computational cost and the resolution of the runs.
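A sketch of the corresponding initial data (14) and of one fixed-step
Runge-Kutta update, assuming NumPy, the `rhs()` helper above, and the usual
Lorentz-boosted profile
$\varphi_{K}\big((x-\upsilon t)/\sqrt{1-\upsilon^{2}}\big)$ (this boost
convention is our assumption), reads:

```python
# Sketch of the initial configuration (14) and an RK4 step, assuming NumPy
# and the rhs() helper above; the Lorentz boost of the kink is assumed.
import numpy as np

def kink(x, mu):
    alpha = np.arcsinh(mu)
    d = 2.0 * mu / (alpha * np.sqrt(1.0 + mu ** 2))
    return (1.0 / alpha) * np.arctanh(
        mu / np.sqrt(1.0 + mu ** 2) * np.tanh(np.sqrt(2.0) * x / d))

mu, v, x0 = 0.5, 0.2, 12.0
x = np.linspace(-400.0, 400.0, 100000, endpoint=False)
dx = x[1] - x[0]
dt = 0.7 * dx
g = 1.0 / np.sqrt(1.0 - v ** 2)                       # Lorentz factor

phi = kink(g * (x + x0), mu) - kink(g * (x - x0), mu) - 1.0   # eq. (14)
# time derivative at t = 0 of the pair moving toward each other
dphi = -v * (np.gradient(kink(g * (x + x0), mu), x)
             + np.gradient(kink(g * (x - x0), mu), x))

def rk4_step(phi, dphi, dt):
    """One fixed-step RK4 update of the system phi_t = dphi, dphi_t = rhs."""
    k1p, k1v = dphi, rhs(phi, dx, mu)
    k2p, k2v = dphi + 0.5 * dt * k1v, rhs(phi + 0.5 * dt * k1p, dx, mu)
    k3p, k3v = dphi + 0.5 * dt * k2v, rhs(phi + 0.5 * dt * k2p, dx, mu)
    k4p, k4v = dphi + dt * k3v, rhs(phi + dt * k3p, dx, mu)
    return (phi + dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0,
            dphi + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0)
```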
Several outputs obtained from numerical simulations at different initial
velocities are now discussed. For weak velocities, the kink and antikink are
expected to form a bound state upon collision, having enough time to radiate
sufficient energy and form a bion state. This is shown in figs. 5(a) and 5(e),
where we plotted the evolution of the center of mass $\varphi(x=0,t)$ of the
kink-antikink pair, for some values of the shape deformability parameter for
which the system has just one vibrational mode. Irrespective of the barrier
deformation, the kink-antikink pair moving with an initial velocity lower than
a critical velocity $\upsilon_{c}$ settles into an erratically oscillating bion
state. But for large velocities $\upsilon>\upsilon_{c}$, the kink-antikink
pair does not have enough time to radiate sufficient energy to form a bion
state. Thus the kink and antikink collide once and permanently reflect off
each other during the scattering process. This is represented in figs. 5(b)
and 5(f).
Figure 5: (Color online) Possible results for a kink-antikink collision at
several initial velocities, considering two values of the shape deformability
parameter $\mu$. $\mu=0$: (a), (b), (c) and (d). $\mu=0.5$: (e), (f), (g) and
(h). Values of $\mu$ were chosen such that only one vibrational mode appears
in the excitation spectrum.
For all the considered values of $\mu$ one can note the appearance of a spike
illustrating the collision, followed by a leveling off at $\varphi=+1$,
implying that the kink and antikink have reflected and traveled far from each
other. Still, the transition between the bion state and the reflection state
is not smooth as the initial velocity increases. There are regions of values
of $\upsilon$ for which these two states alternate. These regions were reported
in several works as "windows" [8, 9, 24, 25, 44]. For instance, in figs. 5(c)
and 5(g), we note two spikes in the evolution of the center of mass, implying
that the kink and antikink collide and reflect, then return to collide again
before finally separating permanently. Referring to the number of
collisions before the last and permanent reflection, the sets of contiguous
initial velocities leading to this state can be identified as forming a two-
bounce window. The presence of a three-bounce window is evidenced in figs.
5(d) and 5(h); the velocities lying in the three-bounce windows are found on
the edges of the two-bounce regions. When the values of the deformability
parameter $\mu$ are located in the range where there exists only one
vibrational mode, the system presents a fractal structure similar to the one
observed in the scattering process of the $\phi^{4}$ model. For example we can
note the existence of a four-bounce window in fig. 6.
Figure 6: (Color online) A four-bounce window for $\mu=0.5$, evidenced by
plotting $\varphi(x=0,t)$ for $\upsilon=0.1445$. Note the presence of four
large spikes illustrating collisions, after which the kink and antikink
reflect and recede from each other, forming a two-soliton state.
The appearance of $n$-bounce windows is also expected for this range of shape
deformability parameter values. To further understand the structure of
scattering in the system, in fig. 7 we plotted the times of the three bounces
as a function of the initial velocity.
Figure 7: (Color online) Kink-antikink collision times to first (blue), second
(red) and third (dotted-black) bounces, as a function of initial velocity for
(a) $\mu=0.5$ and (b) $\mu=5.0$.
The intervals in which the time to the third collision diverges correspond to
the two-bounce windows. A case with only one vibrational mode is illustrated
in fig. 7(a), which shows the collision times for $\mu=0.5$ (compare
with figs. 5(e), 5(f), 5(g) and 5(h)). We can note that bion states are formed
for $\upsilon<\upsilon_{c}\sim 0.183$, while for $\upsilon>\upsilon_{c}$ the
collision results in an inelastic scattering of the pair corresponding to
one bounce around one vacuum. Moreover, the figure captures the complete set
of two-bounce windows, whose widths continuously decrease as they
accumulate near $\upsilon_{c}$. The scattering times in the presence of
several vibrational modes are plotted in fig. 7(b), where we considered the
shape of the potential for $\mu=5$. We first see that $\upsilon_{c}$ grows
larger with $\mu$. From our simulations we observed that the critical velocity
is not a monotonic function of the shape deformability parameter:
$\upsilon_{c}$ first decreases as $\mu$ increases up to $\mu\sim 1$, and then
turns to an increasing behavior (this result is not presented here). This
reflects that the attraction between the kink and antikink lessens as the
system is deformed away from the $\phi^{4}$ model, but the attractive
interaction gets stronger and stronger for $\mu\gtrsim 1$. Suppression of
two-bounce windows is also observed in fig. 7(b). This is explained by the
presence of several vibrational modes complicating the energy transfer from
the translational mode to just one vibrational mode, as required to achieve
the resonance conditions.
One of our main points of focus in this work is the possible production of
oscillons. For relatively small values of the deformability parameter $\mu$,
for which only one vibrational mode is generated in the excitation spectrum,
oscillon structures cannot appear; kink-antikink collisions with initial
velocities lower than the critical velocity can only result in bion and
$n$-bounce states. Since the $\phi^{4}$ model is an asymptotic limit of the
parametrized DW model in this range of values of $\mu$, this agrees with the
$\phi^{4}$ model showing no evidence for the formation of oscillons as a
result of the scattering process. We see from fig. 8 that for larger values of
$\mu$, the presence of more than one vibrational mode favors the production of
oscillons.
Figure 8: (Color online) Oscillons resulting from kink-antikink collisions:
(a) $\mu=3.0$ and $\upsilon=0.23$ $(\upsilon_{c}=0.245)$ showing one bion and
two oscillons, (b) $\mu=5.0$ and $\upsilon=0.236$ $(\upsilon_{c}=0.27)$
showing one bion and four oscillons.
At some velocities of the colliding kink, lower than the critical velocities,
the bion is formed and travels with oscillons which can oscillate around each
other, or escape to infinity. One may compare the appearance of one bion and
two oscillons travelling together with a large flux of emitted radiation in
fig. 8(a), with the appearance of one bion and four oscillons in fig. 8(b),
travelling with almost no radiation and a higher degree of harmonicity. Note
that the larger the shape deformability parameter, the greater the number of
created oscillons.
## 4 Conclusion
Oscillons are breather-like bound states generated by self-interactions of
kink-antikink pairs that exist in some scalar-field models [51, 52, 53, 54,
55]; in the context of cosmology, their built-in mechanism suggests that they
can affect the standard picture of scalar ultra-light dark matter. In two
recent studies [22, 44] the generation of oscillons in bistable systems,
characterized by a parametrized double-well potential, was discussed with
emphasis on the influence of the shape deformability on the oscillon
production. First [22] the authors considered a deformable $\phi^{4}$
potential represented by a hyperbolic double-well potential, and established
that the deformability favors the emergence of oscillon modes from
kink-antikink collisions for well-selected initial velocities of the colliding
kinks. Later on [44] they extended the study to two members of the family of
Dikandé-Kofané DW potentials. One of these members has its double-well minima
fixed but a variable height of the potential barrier, whereas the other member
has a fixed barrier height but a variable separation between the two potential
minima.
To determine more exactly which of the characteristic features introduced by
the potential deformability (i.e., the variable positions of the potential
minima, or the variable height of the potential barrier) effectively controls
the oscillon production, in this work we revisited the study by considering a
parametrized DW potential with fixed potential minima and fixed barrier
height. However, the steepness of the potential walls, and hence the flatness
of the barrier top, can be tuned by varying a deformability parameter. The
parametrized DW potential has the particularity of reducing to the $\phi^{4}$
potential, just as the already known family of DW potentials proposed in
refs. [42, 43, 47] and referred to as the Dikandé-Kofané potential.
Examining the kink-antikink scattering processes, we found that the
parametrized bistable model inherits some of the general features of the
$\phi^{4}$ model, namely the possibility of forming bion states, reflected
states and also $n$-bounce windows. However, the appearance of additional
modes in the scattering spectrum, as the DW potential deformation becomes
predominant in our model, suggests the possibility of a suppression of the
two-bounce windows due to a kind of interference, as already detailed in other
works [25, 44].
Long-lived, quasi-harmonic and low-amplitude structures called oscillons were
shown to form after kink-antikink collisions at some initial velocities lower
than a critical velocity. This is not observed for low values of $\mu$, where
the model has only one vibrational state and is closer to the $\phi^{4}$
model. The rising number of vibrational states as $\mu$ increases leads to an
intricate situation where the mechanism of resonant energy exchange between
the translational mode and a single vibrational mode becomes more difficult to
realize. The appearance of oscillons is thus favored by the deformation in
our model.
In the works of Bazeia et al reporting the appearance of oscillons in
hyperbolic models [44] for two deformable double-well potentials, it was shown
that the production of oscillons is boosted by the conformational changes
allowed by those potentials' deformability, such as reducing the distance
between the minima while keeping the barrier height fixed, or decreasing the
barrier height while keeping the minima fixed. They pointed out that the
factor unifying the two contexts is the lowering of the kink energy by the
deformability in the two models. The scattering dynamics at the center of mass
in our present model are roughly the same as those in the works of refs. [22,
44]; however, the increase of the kink energy in our model rules out a
tentative extension of the kink energy as a determinant unifying factor to the
more general case. The deformation in our model is manifest through an
increase of the steepness of the potential walls, with the barrier top
becoming flattened, hence imposing an anharmonic shape on the potential
barrier. This trend can also be observed as an implicit result of the
deformation in the two models considered by Bazeia et al [44], and also in the
sinh-deformed $\phi^{4}$ model [22], shown to also allow the creation of
oscillons. Bistable systems modeled by potentials with anharmonic barriers are
thus suggested to be good candidates for observing the formation of oscillons
in kink-scattering processes.
## Acknowledgements
The work of A. M. Dikandé is supported by the Alexander von Humboldt (AvH)
Foundation.
## References
* [1] A. R. Bishop and T. Schneider (eds.), Solitons and Condensed Matter Physics, Proceedings of the Symposium on Nonlinear (Soliton) Structure and Dynamics in Condensed Matter (Oxford, England, June 27-29, 1978).
* [2] R. Rajaraman, Solitons and Instantons: An Introduction to Solitons and Instantons in Quantum Field theory (North-Holland, Amsterdam, 1982).
* [3] A. Vilenkin and E. P. S. Shellard, Cosmic Strings and Other Topological Defects (Cambridge University Press, Cambridge, 2000).
* [4] T. Vachaspati, Kinks and domain walls: An Introduction to Classical and Quantum Solitons (Cambridge University Press, Cambridge, 2006).
* [5] N. Manton and P. Sutcliffe, Topological Solitons (Cambridge University Press, Cambridge, 2004).
* [6] J. Liu, Z.-K. Guo, R.-G. Cai and G. Shiu, Phys. Rev. D 99, 103506(2019).
* [7] P. G. Kevrekidis and J. Cuevas-Maraver (eds.), A Dynamical Perspective on the $\phi^{4}$ Model: past, present and future (Springer, Berlin, 2019).
* [8] P. Anninos, S. Oliveira and R. A. Matzner, Phys. Rev. D 44, 1147 (1991).
* [9] D. K. Campbell, J. S. Schonfeld and C. A. Wingate, Physica D 9, 1 (1983).
* [10] A. M. Marjaneh, V. A. Gani, D. Saadatmand, S. V. Dmitriev and K. Javidan, J. High Energy Phys. 07, 028 (2017).
* [11] A. M. Marjaneh, A. Askari, D. Saadatmand and S. V. Dmitriev, Eur. Phys. J. B 91, 22 (2018).
* [12] D. Saadatmand, S. V. Dmitriev and P. G. Kevrekidis, Phys. Rev. D 92, 056005 (2015).
* [13] A. M. Marjaneh, D. Saadatmand, K. Zhou, S. V. Dmitriev and M. E. Zomorrodian, Commun. Nonlinear Sci. Numer. Simul. 49, 30 (2017).
* [14] V. A. Gani, A. M. Marjaneh and D. Saadatmand, Eur. Phys. J. C 79, 620 (2019).
* [15] J. T. Giblin, L. Hui, E. A. Lim and I. S. Yang, Phys. Rev. D 82, 045019 (2010).
* [16] P. Dorey, A. Halavanau, J. Mercer, T. Romanczukiewicz and Y. Shnir, J. High Energy Phys. 1705, 107 (2017).
* [17] R. Arthur, P. Dorey and R. Parini, J. Phys. A 49, 165205 (2016).
* [18] A. R. Gomes, R. Menezes, K. Z. Nobrega and F. C. Simas, Phys. Rev. D 90, 065022 (2014).
* [19] F. C. Simas, A. R. Gomes and K. Z. Nobrega, Phys. Lett. B 775, 290 (2017).
* [20] V. A. Gani, A. M. Marjaneh, A. Askari, E. Belendryasova and D. Saadatmand, Eur. Phys. J. C 78, 345 (2018).
* [21] D. Bazeia, E. Belendryasova and V. A. Gani, J. Phys. Conf. Ser. 934, 012032 (2017).
* [22] D. Bazeia, E. Belendryasova and V. A. Gani, Eur. Phys. J. C 78, 340 (2018).
* [23] F. C. Lima, F. C. Simas, K. Z. Nobrega and A. R. Gomes, J. High Energy Phys. 2019, 147 (2019).
* [24] P. Dorey and T. Romanczukiewicz, Phys. Lett. B 779, 117 (2018).
* [25] F. C. Simas, A. R. Gomes, K. Z. Nobrega and J. C. R. E. Oliveira, J. High Energy Phys. 1609, 104 (2016).
* [26] A. Demirkaya, R. Decker, P. G. Kevrekidis, I. C. Christov and A. Saxena, J. High Energy Phys. 12, 071 (2017).
* [27] V. A. Gani, A. E. Kudryavtsev, M. A. Lizunova, Phys. Rev. D 89, 125009 (2014).
* [28] H. Weigel, J. Phys. Conf. Ser. 482, 012045 (2014).
* [29] T. Romanczukiewicz, Phys. Lett. B 773, 295 (2017).
* [30] E. Belendryasova and V. A. Gani, J. Phys. Conf. Ser. 934, 012059 (2017).
* [31] V. A. Gani, V. Lensky and M. A. Lizunova, J. High Energy Phys. 08, 147 (2015).
* [32] E. Belendryasova and V. A. Gani, Commun. Nonlinear Sci. Numer. Simul. 67, 414 (2019).
* [33] A. Halavanau, T. Romanczukiewicz and Y. Shnir, Phys. Rev. D 86, 085027 (2012).
* [34] A. Alonso-Izquierdo, Phys. Rev. D 97, 045016 (2018).
* [35] A. Alonso-Izquierdo, Physica D 365, 12 (2018).
* [36] V. A. Gani, A. A. Kirillov and S. G. Rubin, J. Phys. Conf. Ser. 934, 012046 (2017).
* [37] V. A. Gani, A. A. Kirillov and S. G. Rubin, J. Cosmol. Astropart. Phys. 04, 042 (2018).
* [38] R. D. Yamaletdinov, V. A. Slipko and Y. V. Pershin, Phys. Rev. B 96, 094306 (2017).
* [39] M. Remoissenet and M. Peyrard, J. Phys. C 14, L481 (1981).
* [40] M. Remoissenet and M. Peyrard, Phys. Rev. B 29, 3153 (1984).
* [41] M. Remoissenet, Waves Called Solitons: Concepts and Experiments (Springer, Berlin, 1994).
* [42] A. M. Dikandé and T. C. Kofane, J. Phys.: Condens. Matter 3, L5203 (1991).
* [43] A. M. Dikandé and T. C. Kofané, Solid State Commun. 89, 559 (1994).
* [44] D. Bazeia, A. R. Gomes, K.Z. Nobrega, F. C. Simas, Phys. Lett. B 803, 135291 (2020).
* [45] M. Gleiser, Int. J. Mod. Phys. D16, 219 (2007).
* [46] A. M. Dikandé and T. C. Kofané, Solid State Commun. 89, 283 (1994).
* [47] T. C. Kofané and A. M. Dikandé, Solid State Commun. 86, 749 (1993).
* [48] J. F. Currie, J. A. Krumhansl, A. R. Bishop and S. E. Trullinger, Phys. Rev. B 22, 477 (1980).
* [49] J. A. Krumhansl and J. R. Schrieffer, Phys. Rev. B 11, 3535 (1975).
* [50] R. W. Hornbeck, Numerical Methods (Prentice-Hall, Englewood Cliffs, NJ, 1975).
* [51] G. Fodor, P. Forgàcs, P. Grandclément and I. Ràcz, Phys. Rev. D 74, 124003 (2006).
* [52] J. Sakstein and M. Trodden, Phys. Rev. D 98, 123512 (2018).
* [53] G. Fodor, P. Forgàcs, Z. Horvàth and Á. Lukàcs, Phys. Rev. D 78, 025003 (2008).
* [54] M. Hindmarsh and P. Salmi, Phys. Rev. D 77, 105025 (2008).
* [55] C. Adam, K. Oles, T. Romanczukiewicz and A. Wereszczynski, Phys. Rev. D 101, 105021 (2020).
# The inviscid limit for the $2d$ Navier-Stokes equations in bounded domains
Claude Bardos111Laboratoire J.-L. Lions, Sorbonne Université, BC 187, 4 place
Jussieu, 75252 Paris, Cedex 05, France. Email: <EMAIL_ADDRESS>, Trinh
T. Nguyen222Department of Mathematics, University of Southern California, LA,
CA 90089. Email: <EMAIL_ADDRESS>. TN is partly supported by the AMS-Simons
Travel Grant Award., Toan T. Nguyen333Department of Mathematics, Penn State
University, State College, PA 16803. Email: <EMAIL_ADDRESS>. TN is partly
supported by the NSF under grant DMS-2054726., and Edriss S. Titi444Department
of Mathematics, Texas A&M University, 3368 TAMU, College Station, TX 77843-3368,
USA. Department of Applied Mathematics and Theoretical Physics, University of
Cambridge, Cambridge CB3 0WA, U.K. Also, Department of Computer Science and
Applied Mathematics, Weizmann Institute of Science, Rehovot 76100, Israel.
Emails: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
###### Abstract
We prove the inviscid limit for the incompressible Navier-Stokes equations for
data that are analytic only near the boundary in a general two-dimensional
bounded domain. Our proof is direct, using the vorticity formulation with a
nonlocal boundary condition, the explicit semigroup of the linear Stokes
problem near the flattened boundary, and the standard well-posedness theory of
the Navier-Stokes equations in Sobolev spaces away from the boundary.
In memory of Robert T. Glassey
###### Contents
1. 1 Introduction
1. 1.1 Boundary vorticity formulation
2. 1.2 Main results
3. 1.3 Remarks
2. 2 Navier-Stokes equations near the boundary
1. 2.1 Global geodesic coordinates
2. 2.2 Scaled coordinates
3. 2.3 Vorticity equations near the boundary
4. 2.4 Dirichlet-Neumann operator
3. 3 Near boundary analytic spaces
1. 3.1 Analytic norms
2. 3.2 Elliptic estimates in the half-plane
3. 3.3 Biot-Savart law in $\Omega$
4. 3.4 Bilinear estimates
5. 3.5 Semigroup estimates in the half-plane
6. 3.6 Semigroup estimates near $\partial\Omega$
4. 4 Nonlinear analysis
1. 4.1 Global Sobolev-analytic norm
2. 4.2 Analytic bounds near the boundary
3. 4.3 Sobolev bounds away from the boundary
4. 4.4 Proof of Theorem 1.1
## 1 Introduction
We are interested in the inviscid limit of solutions to the incompressible
Navier-Stokes equations (NSE)
$\displaystyle\partial_{t}u+u\cdot\nabla u+\nabla p$ $\displaystyle=\nu\Delta
u,$ (1.1) $\displaystyle\nabla\cdot u$ $\displaystyle=0,$
in a bounded domain $\Omega\subset\mathbb{R}^{2}$, with initial data
$u_{|_{t=0}}=u_{0}(x)$ and with the no-slip boundary condition
$u|_{\partial\Omega}=0.$ (1.2)
In the inviscid limit $\nu\to 0$, one would intuitively expect that the
solutions $u_{\nu}$ of problem (1.1)-(1.2) converge to the corresponding
solutions of the Euler equations of ideal incompressible fluids
$\displaystyle\partial_{t}u+u\cdot\nabla u+\nabla p$
$\displaystyle=0,\quad\hbox{in}\quad\Omega,$ (1.3) $\displaystyle\nabla\cdot
u$ $\displaystyle=0,\quad\hbox{in}\quad\Omega,$ $\displaystyle u\cdot n$
$\displaystyle=0,\quad\hbox{on}\quad\partial\Omega,$
where $n$ denotes the unit normal vector to the boundary pointing inward.
However, the inviscid limit for problem (1.1)-(1.2) is strenuous and remains
open due to the appearance of boundary layers and strong shear near the
boundary that triggers the shedding of unbounded vorticity by the boundary. In
their celebrated work [22], Caflisch and Sammartino established the boundary
layer expansion and the inviscid limit for analytic data on the half-plane.
Maekawa [20] proved a similar result that allows Sobolev data whose vorticity
is supported away from the boundary. The result and its proof were recently
simplified [21] and extended in [19, 18] to allow data that are only
analytic near the boundary.
In this paper, we prove the inviscid limit of (1.1)-(1.2) for data that are
only analytic near the boundary of a general bounded analytic domain in
$\mathbb{R}^{2}$, thus further extending [22, 20, 21, 19] from the case of
half-plane to bounded domains with analytic boundaries. Precisely, we assume
that
* $\Omega$ is a simply-connected bounded domain in $\mathbb{R}^{2}$ whose
boundary $\partial\Omega$ is an analytic curve, defined by an analytic map:
$\theta\in{\mathbb{T}}={\mathbb{R}}/({\mathbb{Z}}L)\mapsto
x(\theta)=(x_{1}(\theta),x_{2}(\theta))\in\partial\Omega\,.$
The analyticity of the boundary naturally extends to an analytic map which
maps the near-boundary part of the domain
$\\{x\in\Omega:\,\,d(x,\partial\Omega)<\delta\\}$ to the half-plane strip
$(z,\theta)\in(0,\delta)\times\mathbb{T}$, where $z$ is the distance function
from the boundary. Here, for the sake of presentation, we have chosen to
consider the case of a simply-connected domain $\Omega$. The results of this
paper apply to the general setting of multi-connected bounded domains whose
boundaries consist of closed analytic curves, i.e., including domains with
holes. Our analysis near each of the boundaries is close to that on the
half-plane. A crucial assumption, however, lies in the analyticity of the
initial data near the boundary, which appears to be sharp.
The work is dedicated to the memory of Professor Robert T. Glassey, who was a
great mathematician, a close friend, and an inspiring teacher.
### 1.1 Boundary vorticity formulation
We shall work with the boundary vorticity formulation [1, 20, 21]. Precisely,
let $u=(u_{1},u_{2})$ be the velocity vector field and
$\omega=\nabla^{\bot}\cdot u=\partial_{x_{2}}u_{1}-\partial_{x_{1}}u_{2}$ be
the corresponding vorticity. Then, the vorticity equation reads
$\displaystyle\partial_{t}\omega+u\cdot\nabla\omega$
$\displaystyle=\nu\Delta\omega,$ (1.4) $\displaystyle u$
$\displaystyle=\nabla^{\perp}\Delta^{-1}\omega,\qquad\hbox{(the Biot-Savart
law).}$
Here and throughout the paper, $\Delta^{-1}$ denotes the inverse of the
Laplacian operator in $\Omega$ subject to the zero Dirichlet boundary
condition. Evidently, this, together with the Biot-Savart law, implies the
impermeability boundary condition $u\cdot n=0$ on $\partial\Omega$. To ensure
the full no-slip boundary condition, i.e., that $u\cdot\tau=0$ on the boundary
$\partial\Omega$, where $\tau$ is the unit tangent vector to the boundary, we
first require that the initial data satisfy the no-slip boundary condition
(1.2), and then we impose in addition that $\partial_{t}u\cdot\tau=0$ on the
boundary, $\partial\Omega$, for all positive time. This leads to the boundary
condition
$0=\tau\cdot\partial_{t}u=\tau\cdot\nabla^{\perp}\Delta^{-1}\partial_{t}\omega=\partial_{n}[\Delta^{-1}(\nu\Delta\omega-u\cdot\nabla\omega)]$
(1.5)
on the boundary. Introduce $\omega^{*}$ to be the solution of the
nonhomogeneous Dirichlet boundary-value problem
$\left\\{\begin{aligned} \Delta\omega^{*}&=0,\qquad\mbox{in}\quad\Omega\\\
{\omega^{*}}&=\omega,\qquad\mbox{on }\quad\partial\Omega.\end{aligned}\right.$
(1.6)
and define the Dirichlet-Neumann operator by
$DN\omega=-\partial_{n}\omega^{*},\qquad\mbox{on}\quad\partial\Omega,$ (1.7)
where $\omega^{*}$ solves (1.6). Observe that
$\partial_{n}[\Delta^{-1}\Delta\omega]=\partial_{n}[\Delta^{-1}\Delta(\omega-\omega^{*})]=(\partial_{n}+DN)\omega$.
Thus, by virtue of the boundary condition (1.5) the boundary condition on
vorticity reads
$\nu(\partial_{n}+DN)\omega_{|_{\partial\Omega}}=[\partial_{n}\Delta^{-1}(u\cdot\nabla\omega)]_{|_{\partial\Omega}},$
(1.8)
together with the Biot-Savart law (1.4).
Throughout this paper, we shall deal with the Navier-Stokes solutions that
solve (1.4)-(1.7), or equivalently (1.4) and (1.8). Such a solution will be
constructed via Duhamel's integral representation, treating the
nonlinearity as a source term. As we observed earlier, the boundary condition
$u\cdot n=0$ on $\partial\Omega$ follows from the Biot-Savart law and the
definition of $\Delta^{-1}$ with the zero Dirichlet boundary condition.
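To orient the reader, the Dirichlet-Neumann operator defined in (1.6)-(1.7)
has a particularly simple leading-order form on the flat periodic half-plane
$\\{z>0\\}$: the harmonic extension of boundary data with Fourier coefficients
$g_{k}$ is $g_{k}e^{-|k|z}$, so $DN$ acts as the Fourier multiplier $|k|$
(cf. Lemma 2.2 below). A minimal numerical sketch, assuming NumPy and taking
the boundary period to be $2\pi$ for simplicity, reads:

```python
# Sketch of the leading-order Dirichlet-Neumann operator on the flat
# periodic half-plane, assuming NumPy: DN acts as the multiplier |k|.
import numpy as np

def dirichlet_neumann(g):
    """Apply DN = |d/dtheta| to samples of g on a uniform periodic grid."""
    n = len(g)
    k = np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers for period 2*pi
    return np.real(np.fft.ifft(np.abs(k) * np.fft.fft(g)))

theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
print(np.allclose(dirichlet_neumann(np.cos(3 * theta)),
                  3.0 * np.cos(3 * theta)))   # True: DN cos(3t) = 3 cos(3t)
```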
### 1.2 Main results
Our main result reads as follows.
###### Theorem 1.1.
Let $u_{0}\in H^{5}(\Omega)$ be an initial data that vanishes on the boundary.
We assume that the initial vorticity $\omega_{0}$ is analytic near the
boundary $\partial\Omega$ (see Section 3). Then, there is a positive time $T$,
independent of $\nu$, so that the unique solution $u_{\nu}(t)$ to the Navier-
Stokes problem (1.1)-(1.2), for every $\nu>0$, with initial data $u_{0}$,
exists on $[0,T]$ and has vorticity $\omega_{\nu}=\nabla^{\perp}\cdot u_{\nu}$
that remains analytic near the boundary, and satisfies
$\lim_{\nu\rightarrow 0}\sqrt{\nu}\|\omega_{\nu}\|_{L^{\infty}([0,T]\times\partial\Omega)}<\infty.$
(1.9)
Moreover, in the inviscid limit as $\nu\to 0$, $u_{\nu}$ converges strongly in
$L^{\infty}([0,T];L^{p}(\Omega))$, for any $2\leq p<\infty$, to the
corresponding solution $u$ of the Euler equations (1.3) with the same initial
data $u_{0}$.
The fact that Euler solutions remain analytic near the boundary is a classical
result [3, 17], which is a direct consequence of the main theorem. The main
difficulty in establishing the inviscid limit is to control the vorticity on
the boundary and derive uniform estimates such as (1.9), which is the main
contribution of this paper. The inviscid limit then follows easily. In fact, a
much weaker bound than (1.9) is sufficient to guarantee the convergence of
solutions of the Navier-Stokes equations to a corresponding solution of the
Euler equations. Precisely, we have the following simple Kato-type theorem.
###### Theorem 1.2.
Let $T>0$ and $u$ be a weak solution to the Euler equations (1.3) in
$[0,T]\times\Omega$ satisfying $\|\nabla
u\|_{L^{\infty}([0,T]\times\Omega)}<\infty$. Suppose that, for every $\nu>0$,
$u_{\nu}$ are Leray weak solutions to the Navier-Stokes problem (1.1)-(1.2) on
$[0,T]\times\Omega$, satisfying
$\sup_{0<t<T}\|u_{\nu}(t)\|^{2}_{L^{2}(\Omega)}+\nu\int_{0}^{T}\|\nabla_{x}u_{\nu}(t)\|_{L^{2}(\Omega)}dt\leq
C_{0},$ (1.10)
uniformly in $\nu\to 0$. Assume that the vorticity
$\omega_{\nu}=\nabla^{\bot}\cdot u_{\nu}$ satisfies
$\limsup_{\nu\to
0}\Big{(}-\int_{0}^{T}\int_{\partial\Omega}\nu\omega_{\nu}(t,\sigma)u(t,\sigma)\cdot\tau(\sigma)d\sigma
dt\Big{)}=0,$ (1.11)
then any $\overline{u}_{\nu}$, which is a weak$-*$ limit in
$L^{\infty}([0,T];L^{2}(\Omega))$ of a subsequence $u_{\nu_{j}}$ of the Leray
weak solutions, as $\nu_{j}\to 0$, satisfies the stability estimate:
$\|\overline{u_{\nu}(t)}-u(t)\|_{L^{2}(\Omega)}^{2}\leq e^{2t\|\nabla
u\|_{L^{\infty}([0,T]\times\Omega)}}\|\overline{u_{\nu}(0)}-u(0)\|_{L^{2}(\Omega)}^{2}.$
(1.12)
In particular, if $u_{\nu}(0)\rightarrow u(0)$ in $L^{2}(\Omega)$, as $\nu\to
0$, then $u_{\nu}$ converges strongly to $u$ in
$L^{\infty}([0,T];L^{2}(\Omega)).$
###### Proof.
An elementary manipulation (e.g., [4]) yields the following energy inequality
$\displaystyle\|u_{\nu}(t)-u(t)\|_{L^{2}(\Omega)}^{2}+\nu\int_{0}^{t}\|\nabla
u_{\nu}(s)\|^{2}_{L^{2}(\Omega)}ds$ (1.13)
$\displaystyle\leq\|u_{\nu}(0)-u(0)\|_{L^{2}(\Omega)}^{2}+\nu\int_{0}^{t}\|\nabla
u(s)\|^{2}_{L^{2}(\Omega)}ds-\int_{0}^{t}\int_{\partial\Omega}\nu(\partial_{n}u_{\nu}(s,\sigma))\cdot
u(s,\sigma)\;d\sigma ds$
$\displaystyle+\int_{0}^{t}\int_{\Omega}\Big{|}\Big{(}(\nabla
u+\nabla^{\bot}u)(u_{\nu}-u)\Big{)}\cdot(u_{\nu}-u)\Big{|}\,dxds$
$\displaystyle\leq\|u_{\nu}(0)-u(0)\|_{L^{2}(\Omega)}^{2}+\nu\int_{0}^{t}\|\nabla
u(s)\|^{2}_{L^{2}(\Omega)}ds-\int_{0}^{t}\int_{\partial\Omega}\nu\omega_{\nu}(s,\sigma)(u(s,\sigma)\cdot\tau(\sigma))\;d\sigma
ds$ $\displaystyle+2\|\nabla
u\|_{L^{\infty}([0,T]\times\Omega)}\int_{0}^{t}\|u_{\nu}(s)-u(s)\|_{L^{2}(\Omega)}^{2}\,ds,$
where in the third term in the right-hand side of the last inequality we used
the fact that $(\partial_{n}u_{\nu})\cdot u=\omega_{\nu}(u\cdot\tau)$ on the
boundary. Let $u_{\nu_{j}}$ be a subsequence which converges weak$-*$ in
$L^{\infty}([0,T];L^{2}(\Omega))$, as $\nu_{j}\to 0$. We apply the above
energy inequality to $u_{\nu_{j}}$ and invoke Gronwall’s Lemma. Observe that
since the Leray weak solutions belong to $C([0,T];L^{2}(\Omega))$, one has
$\|u_{\nu}(0)\|^{2}_{L^{2}(\Omega)}\leq C_{0}$ by virtue of (1.10). Thanks to
the Banach-Alaoglu Theorem and assumption (1.11), we conclude (1.12). The last
part of the theorem is an immediate consequence of (1.12). ∎
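For the reader's convenience, the Gronwall step above can be spelled out.
Setting $y(t)=\|u_{\nu_{j}}(t)-u(t)\|_{L^{2}(\Omega)}^{2}$ and $L=\|\nabla
u\|_{L^{\infty}([0,T]\times\Omega)}$, the energy inequality (1.13) takes the
form
$y(t)\leq y(0)+A_{\nu_{j}}(T)+2L\int_{0}^{t}y(s)\,ds,$
where $A_{\nu_{j}}(T)$ collects the two $\nu$-dependent terms on the
right-hand side; Gronwall's Lemma then yields
$y(t)\leq\big(y(0)+A_{\nu_{j}}(T)\big)\,e^{2Lt},$
and $A_{\nu_{j}}(T)\to 0$ as $\nu_{j}\to 0$ by (1.10)-(1.11), which gives
(1.12) in the limit.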
### 1.3 Remarks
As mentioned in the introduction, our main results extend the previous works
[22, 20, 21, 19] from the case of the half-plane to bounded domains. The
analyticity near the boundary is required to control the unbounded vorticity
in the inviscid limit. It may be possible to extend the present analysis to
include the propagation of boundary layers and the classical Prandtl’s
boundary layer expansions, whose validity near general boundary layers again
requires analyticity.
The first such result was due to the celebrated works of Asano [2] and
Sammartino-Caflisch [22], where the boundary layer expansion was established
for data on the half-plane that are analytic in both horizontal and vertical
variables. When constructing solutions to the Prandtl equation, the
analyticity in the vertical variable can be dropped [15]. It is not known,
however, whether such an assumption can be dropped at the level of the
Navier-Stokes equations. Maekawa [20] established the Prandtl expansion for data whose
vorticity is compactly supported away from the boundary, while recently
Kukavica, Nguyen, Vicol and Wang [18] extended the result to include data that
are analytic only near the boundary, building upon the vorticity formulation
revived by Maekawa [20], the direct proof of the inviscid limit for analytic
data developed in Nguyen and Nguyen [21], and the Sobolev-analytic norm
developed in Kukavica, Vicol and Wang [19]. All these aforementioned works are
on the half-plane. We mention a recent result [23], which to the best of our
knowledge was the first to establish a Prandtl asymptotic expansion in a
curved domain.
When background boundary layers have no inflection point, the analyticity can
be relaxed to include perturbations in Gevrey-$\frac{3}{2}$ spaces [7, 8],
which is sharp in view of the Kelvin-Helmholtz type of instability of generic
boundary layers and shear flows [10, 11]. When Sobolev data is allowed, the
Prandtl’s asymptotic expansion is false due to counter-examples given in [9,
12, 13], where the failure of the convergence from Navier-Stokes to Euler
solutions, plus a Prandtl corrector, is due to an emergence of viscous
boundary sublayers that reach to order one, independent of viscosity, in
$L^{\infty}$ norm for velocity [12].
## 2 Navier-Stokes equations near the boundary
### 2.1 Global geodesic coordinates
Following a construction done in [5], we introduce a well-adapted
representation of $\partial\Omega\,,$
$\theta\in{\mathbb{T}}={\mathbb{R}}/({\mathbb{Z}}L)\mapsto
x(\theta)=(x_{1}(\theta),x_{2}(\theta))\in\partial\Omega$
which, being global, preserves the analyticity hypothesis. Let
$\vec{\tau}(\theta)$ and $\vec{n}(\theta)$ be the unit tangent and interior
normal vectors at the boundary:
$\displaystyle\vec{\tau}(\theta)=\vec{\tau}(x(\theta))=(x^{\prime}_{1}(\theta),x^{\prime}_{2}(\theta)),\quad\hbox{
and}\quad\vec{n}(\theta)=\vec{n}(x(\theta))=(-x^{\prime}_{2}(\theta),x^{\prime}_{1}(\theta))$
(2.1) $\displaystyle\hbox{
with}\quad|x^{\prime}(\theta)|^{2}=(x^{\prime}_{1}(\theta))^{2}+(x^{\prime}_{2}(\theta))^{2}=1.$
Let $d(x,\partial\Omega)$ denote the distance of any point
$x\in{\mathbb{R}}^{2}$ to $\partial\Omega$. Then we have the following
classical result.
###### Proposition 2.1.
There exists a $\delta>0$ such that for each $x$ on the open set
$V_{\delta}=\\{x\in{\mathbb{R}}^{2}\quad\hbox{with}\quad
d(x,\partial\Omega)<{\delta}\\}$ (2.2)
there is a unique point $\hat{x}(\theta)\in\partial\Omega$ with
$d(x,\partial\Omega)=|x-\hat{x}(\theta)|.$ The mapping
$x\mapsto\hat{x}(\theta)$ is an analytic map from $V_{\delta}$ with value in
$\partial\Omega$. In addition, for $x\in V_{\delta}\,,$ one has the formula
$\nabla_{x}d(x,\partial\Omega)=\vec{n}(x(\theta)).$ (2.3)
When no confusion is possible, for $x\in V_{\delta}$ the notations
$\vec{n}(x)$ and $\vec{\tau}(x)$ will be used for $\vec{n}(x(\theta))$ and
$\vec{\tau}(x(\theta))$ respectively. Observe that
$\vec{\tau}^{\prime}(\theta)\wedge\vec{n}(\theta)=x^{\prime}_{1}(\theta)x_{1}^{\prime\prime}(\theta)+x^{\prime}_{2}(\theta)x_{2}^{\prime\prime}(\theta)=\frac{1}{2}\frac{d}{d\theta}|x^{\prime}(\theta)|^{2}=0\,,$
(2.4)
which implies the relation
$\vec{n}^{\prime}(\theta)=\gamma(\theta)\vec{\tau}(\theta)\quad\hbox{and}\quad\vec{\tau}^{\prime}(\theta)=-\gamma(\theta)\vec{n}(\theta)\,,$
(2.5)
with
$\gamma(\theta)=x_{1}^{\prime\prime}(\theta)x_{2}^{\prime}(\theta)-x^{\prime}_{1}(\theta)x_{2}^{\prime\prime}(\theta),$
(2.6)
being the curvature of the boundary $\partial\Omega\,.$ Therefore the mapping:
$(z,\theta)\mapsto X(z,\theta)=x(\theta)+z\vec{n}(x(\theta)),$ (2.7)
defines a global $C^{2}$ diffeomorphism of
$[-\delta,\delta]\times({\mathbb{R}}/(L{\mathbb{Z}}))$ onto
$\overline{V_{\delta}}\,.$ Moreover, for any vector field
$x\in\overline{\Omega}\mapsto v(x)\,,$ as soon as
$x\in\overline{V}_{\delta}\,,$ using the above notations, one has:
$v(x)=(v(x)\cdot\vec{\tau}(x))\vec{\tau}(x)+(v(x)\cdot\vec{n}(x))\vec{n}(x)\,.$
(2.8)
Below, for the sake of clarity, the symbol $X$ is used for any $x=X(z,\theta)$.
There hold
$\displaystyle\partial_{z}X(z,\theta)=\vec{n}(\theta)\,,\quad\partial_{\theta}X(z,\theta)=J(z,\theta)\vec{\tau}(\theta)\,,$
(2.9) $\displaystyle\hbox{and}\quad J(z,\theta)=1+z\gamma(\theta)>0
\quad\hbox{for}\quad|z|<\delta\,,$
provided $\delta>0$ is chosen to be small enough. From the relation
$\begin{pmatrix}\partial_{z}X_{1}&\partial_{\theta}X_{1}\\\
\partial_{z}X_{2}&\partial_{\theta}X_{2}\end{pmatrix}\begin{pmatrix}\partial_{X_{1}}z&\partial_{X_{2}}z\\\
\partial_{X_{1}}\theta&\partial_{X_{2}}\theta\end{pmatrix}=\begin{pmatrix}1&0\\\
0&1\end{pmatrix}\,,$ (2.10)
one deduces the formula:
$\nabla_{X}\theta=\frac{\vec{\tau}(\theta)}{J(z,\theta)}\quad\hbox{and}\quad\nabla_{X}z=\vec{n}(\theta)\,.$
(2.11)
We collect the following useful relations whose derivations are classical. For
any vector field $u$, we have
$\displaystyle\nabla\cdot u$
$\displaystyle=\frac{1}{J}\big(\partial_{z}(J(u\cdot\vec{n}))+\partial_{\theta}(u\cdot\vec{\tau})\big)=\partial_{z}(u\cdot\vec{n})+\frac{1}{J}\partial_{\theta}(u\cdot\vec{\tau})+\frac{\gamma}{J}\,u\cdot\vec{n}\,,$
(2.12) $\displaystyle\nabla\wedge u$
$\displaystyle=\frac{1}{J}\big(\partial_{z}(J(u\cdot\vec{\tau}))-\partial_{\theta}(u\cdot\vec{n})\big)=\partial_{z}(u\cdot\vec{\tau})-\frac{1}{J}\partial_{\theta}(u\cdot\vec{n})+\frac{\gamma}{J}(u\cdot\vec{\tau})\,.$
For any scalar function $\Psi$, we have
$\nabla\wedge\Psi=\frac{1}{J}\begin{pmatrix}\partial_{z}(J\Psi)\\ -\partial_{\theta}\Psi\end{pmatrix}=\begin{pmatrix}\partial_{z}\Psi\\ -\frac{1}{J}\partial_{\theta}\Psi\end{pmatrix}+\begin{pmatrix}\frac{\gamma}{J}\Psi\\ 0\end{pmatrix}\,,$ (2.13)
and
$\displaystyle\Delta\Psi$
$\displaystyle=\frac{1}{J}\partial_{z}(J\partial_{z}\Psi)+\frac{1}{J}\partial_{\theta}(\frac{1}{J}\partial_{\theta}\Psi)=\Delta_{z,\theta}\Psi+R_{\Delta}\Psi\,,$
(2.14)
in which we denote
$\Delta_{z,\theta}=(\partial^{2}_{z}+\partial^{2}_{\theta}),\quad
R_{\Delta}=m(z,\theta)\partial^{2}_{\theta}+\frac{\gamma}{1+z\gamma}\partial_{z}-\frac{z\gamma^{\prime}}{(1+z\gamma)^{3}}\partial_{\theta}\quad\hbox{and}\quad
m(z,\theta)=-\frac{2z\gamma+(z\gamma)^{2}}{(1+z\gamma)^{2}}.$
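As a consistency check of (2.14), consider the unit disk with the arclength
parametrization $x(\theta)=(\cos\theta,\sin\theta)$. Then
$\vec{n}(\theta)=(-x^{\prime}_{2},x^{\prime}_{1})=-(\cos\theta,\sin\theta)$
indeed points inward, the convention (2.6) gives the constant value
$\gamma=-1$, and $J(z,\theta)=1-z$. Writing $r=1-z$ for the distance to the
origin, so that $\partial_{z}=-\partial_{r}$, the first equality in (2.14)
becomes
$\Delta\Psi=\frac{1}{1-z}\partial_{z}\big((1-z)\partial_{z}\Psi\big)+\frac{1}{(1-z)^{2}}\partial^{2}_{\theta}\Psi=\frac{1}{r}\partial_{r}\big(r\partial_{r}\Psi\big)+\frac{1}{r^{2}}\partial^{2}_{\theta}\Psi,$
which is the familiar Laplacian in polar coordinates.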
### 2.2 Scaled coordinates
In view of (2.14), we observe that the Laplacian $\Delta$ is nearly the flat
Laplacian $\Delta_{z,\theta}$, in the $(z,\theta)$ coordinates, near the
boundary. To make use of this fact, we introduce the following scaled
variables
$(\widetilde{z},\widetilde{\theta})=(\lambda z,\lambda\theta)$ (2.15)
for sufficiently small $\lambda\in(0,1)$. By construction, we compute
$\Delta=\lambda^{2}\Big{(}\Delta_{\widetilde{z},\widetilde{\theta}}+\lambda^{2}\widetilde{R}_{\Delta}\Big{)}\,,$
(2.16)
in which
$\Delta_{\widetilde{z},\widetilde{\theta}}=(\partial^{2}_{\widetilde{z}}+\partial^{2}_{\widetilde{\theta}})$
and
$\widetilde{R}_{\Delta}=\widetilde{m}(\widetilde{z},\widetilde{\theta})\partial^{2}_{\widetilde{\theta}}+\frac{\widetilde{\gamma}}{1+\lambda^{2}\widetilde{z}\widetilde{\gamma}}\partial_{\widetilde{z}}-\frac{\widetilde{z}\widetilde{\gamma}^{\prime}}{(1+\lambda^{2}\widetilde{z}\widetilde{\gamma})^{3}}\partial_{\widetilde{\theta}}\quad\hbox{and}\quad\widetilde{m}(\widetilde{z},\widetilde{\theta})=-\frac{2\widetilde{z}\widetilde{\gamma}+\lambda^{2}(\widetilde{z}\widetilde{\gamma})^{2}}{(1+\lambda^{2}\widetilde{z}\widetilde{\gamma})^{2}}\,,$
(2.17)
where $\gamma=\lambda^{3}\widetilde{\gamma}(\widetilde{\theta})$. In the
analysis, $\lambda$ will be taken sufficiently small, and so $\Delta$ is
indeed approximated by $\lambda^{2}\Delta_{\widetilde{z},\widetilde{\theta}}$,
treating $\lambda^{2}\widetilde{R}_{\Delta}$ as a perturbation.
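The bookkeeping behind (2.16)-(2.17) is the elementary chain rule:
$\partial_{z}=\lambda\partial_{\widetilde{z}}$ and
$\partial_{\theta}=\lambda\partial_{\widetilde{\theta}}$, so
$\Delta_{z,\theta}=\lambda^{2}\Delta_{\widetilde{z},\widetilde{\theta}}$,
while the substitutions $z\gamma=\lambda^{2}\widetilde{z}\widetilde{\gamma}$,
$\gamma^{\prime}=\lambda^{4}\widetilde{\gamma}^{\prime}(\widetilde{\theta})$
and $m=\lambda^{2}\widetilde{m}$ show that each term of $R_{\Delta}$ in (2.14)
carries an overall factor $\lambda^{4}$, i.e.,
$R_{\Delta}=\lambda^{4}\widetilde{R}_{\Delta}$, which is precisely the
$\lambda^{2}\big(\Delta_{\widetilde{z},\widetilde{\theta}}+\lambda^{2}\widetilde{R}_{\Delta}\big)$
structure displayed in (2.16).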
### 2.3 Vorticity equations near the boundary
In this section, we derive vorticity equations in the geodesic coordinates
near the boundary in the region $V_{\delta}$ defined as in Proposition 2.1.
Introduce a smooth cutoff function $\phi^{b}(x)$ so that
$\phi^{b}(x)=\left\\{\begin{aligned} 1,\qquad&\mbox{if}\quad\lambda
d(x,\partial\Omega)\leq\delta_{0}+\rho_{0}\\\ 0,\qquad&\mbox{if}\quad\lambda
d(x,\partial\Omega)\geq\delta_{0}+2\rho_{0}\end{aligned}\right.$ (2.18)
for small positive constants $\delta_{0},\rho_{0}$ so that
$\delta_{0}+2\rho_{0}<\lambda\delta$ to guarantee that
$\hbox{supp}(\phi^{b})\subset V_{\delta}$ as in Proposition 2.1. Define
${\omega}^{b}=\phi^{b}(x){\omega}(t,x).$ (2.19)
It follows from (1.4) that
$\partial_{t}{\omega}^{b}-\nu\Delta{\omega}^{b}=N^{b},$ (2.20)
where
$N^{b}:=-u\cdot\nabla{\omega}^{b}+(u\cdot\nabla\phi^{b}){\omega}-\nu(\Delta\phi^{b}){\omega}-2\nu\nabla\phi^{b}\cdot\nabla{\omega}.$
Observe that $N^{b}(u,\omega)=0$ on $\\{\lambda
d(x,\partial\Omega)\geq\delta_{0}+2\rho_{0}\\}$ where the cutoff function
$\phi^{b}$ vanishes. We then introduce the following scaled vorticity
$\omega^{b}(t,x)=\widetilde{\omega}(\lambda^{2}t,\lambda z,\lambda\theta),\qquad(\widetilde{t},\widetilde{z},\widetilde{\theta})=(\lambda^{2}t,\lambda
z,\lambda\theta),$ (2.21)
for small $\lambda>0$. Using (2.16), we rewrite the vorticity equation as
$\left(\partial_{\widetilde{t}}-\nu\Delta_{\widetilde{z},\widetilde{\theta}}\right)\widetilde{\omega}=-\nu\lambda^{2}\widetilde{R}_{\Delta}\widetilde{\omega}+\lambda^{-2}N^{b}.$
(2.22)
Equation (2.22) is defined on
$(\widetilde{z},\widetilde{\theta})\in\mathbb{R}_{+}\times{\mathbb{T}}$ (in
fact, the equation vanishes for $\widetilde{z}\geq\delta_{0}+2\rho_{0}$). We
shall solve (2.22) together with the boundary condition (1.8), which now reads
$\nu(\partial_{\widetilde{z}}+\widetilde{DN})\widetilde{\omega}_{|_{\widetilde{z}=0}}=\lambda^{-1}[\partial_{n}\Delta^{-1}(u\cdot\nabla\omega)]_{|_{\partial\Omega}}.$
(2.23)
System (2.22)-(2.23) will be our main equation for the scaled vorticity near
the boundary. Away from the boundary, we construct vorticity using the
original system as derived in Section 1.1.
### 2.4 Dirichlet-Neumann operator
Let us make precise the Dirichlet-Neumann operator defined as in (1.6)-(1.7).
###### Lemma 2.2.
For $\omega\in H^{1/2}(\partial\Omega)$, let $DN\omega$ be the Dirichlet-
Neumann operator defined as in (1.6)-(1.7). In the scaled variables, there
holds
$\widetilde{DN}\widetilde{\omega}=|\partial_{\widetilde{\theta}}|\widetilde{\omega}+\widetilde{B}\widetilde{\omega}$
(2.24)
for some linear bounded operator $\widetilde{B}$ from $L^{2}(\partial\Omega)$
to itself: namely,
$\|\widetilde{B}\widetilde{\omega}\|_{L^{2}(\partial\Omega)}\leq
C_{0}\|\widetilde{\omega}\|_{L^{2}(\partial\Omega)}$
for some positive constant $C_{0}$.
###### Proof.
Let $\phi^{b}$ be the cutoff function defined as in (2.18), and set
$\omega^{*b}=\phi^{b}\omega^{*}$, where $\omega^{*}$ solves (1.6). It follows
that
$\left\\{\begin{aligned}
\Delta\omega^{*b}&=(\Delta\phi^{b}){\omega}^{*}-2\nabla\phi^{b}\cdot\nabla{\omega}^{*},\qquad\mbox{in}\quad\Omega\\\
{\omega^{*b}}&=\omega,\qquad\mbox{on
}\quad\partial\Omega.\end{aligned}\right.$ (2.25)
Since $\phi^{b}$ vanishes away from the boundary, we can work in the scaled
variables, in which the operator reads
$\widetilde{DN}\widetilde{\omega}=-\partial_{\widetilde{z}}\widetilde{\omega}^{*}_{|_{\widetilde{z}=0}}$.
Recalling (2.16), the scaled function
$\widetilde{\omega}^{*}(\widetilde{t},\widetilde{z},\widetilde{\theta})$ of
$\omega^{*b}$ solves
$\Delta_{\widetilde{z},\widetilde{\theta}}\widetilde{\omega}^{*}=-\lambda^{2}\widetilde{R}_{\Delta}\widetilde{\omega}^{*}+\lambda^{-2}[(\Delta\phi^{b}){\omega}^{*}-2\nabla\phi^{b}\cdot\nabla{\omega}^{*}],\qquad\widetilde{\omega}^{*}|_{\widetilde{z}=0}=\widetilde{\omega}|_{\widetilde{z}=0}\,,$
on $\mathbb{R}_{+}\times{\mathbb{T}}$, which can be solved explicitly. Indeed,
let $\widetilde{\omega}_{\alpha}$ be the Fourier coefficient of
$\widetilde{\omega}(\widetilde{z},\widetilde{\theta})$ in variable
$\widetilde{\theta}$. Note that $\widetilde{\omega}_{\alpha}$ vanishes for
$\alpha=0$, and thus we focus on the case when $\alpha\not=0$. Let
$K_{\alpha}(\widetilde{y},\widetilde{z})=\frac{1}{2|\alpha|}(e^{-|\alpha(\widetilde{y}-\widetilde{z})|}-e^{-|\alpha(\widetilde{y}+\widetilde{z})|})$
be the Green function of the Laplacian
$\partial_{\widetilde{z}}^{2}-\alpha^{2}$ with the Dirichlet boundary
condition. It follows that
$\displaystyle\widetilde{\omega}^{*}_{\alpha}(\widetilde{z})$
$\displaystyle=e^{-|\alpha|\widetilde{z}}\widetilde{\omega}_{\alpha}(0)+\lambda^{2}\int_{0}^{\infty}K_{\alpha}(\widetilde{y},\widetilde{z})(\widetilde{R}_{\Delta}\widetilde{\omega}^{*})_{\alpha}(\widetilde{y})\;d\widetilde{y}$
(2.26)
$\displaystyle\qquad{+\lambda^{-2}\int_{0}^{\infty}K_{\alpha}(\widetilde{y},\widetilde{z})\Big{[}(\Delta\phi^{b}){\omega}^{*}-2\nabla\phi^{b}\cdot\nabla{\omega}^{*}\Big{]}_{\alpha}(\widetilde{y})\;d\widetilde{y}}$
for $\widetilde{z}\geq 0$. The Dirichlet-Neumann operator is thus computed by
$\displaystyle(\widetilde{DN}\widetilde{\omega})_{\alpha}$
$\displaystyle=-\partial_{\widetilde{z}}\widetilde{\omega}^{*}_{\alpha}(0)$
$\displaystyle=|\alpha|\widetilde{\omega}_{\alpha}(0)+\int_{0}^{\infty}e^{-|\alpha|\widetilde{y}}\Big{[}\lambda^{2}(\widetilde{R}_{\Delta}\widetilde{\omega}^{*})_{\alpha}+\lambda^{-2}((\Delta\phi^{b}){\omega}^{*}-2\nabla\phi^{b}\cdot\nabla{\omega}^{*})_{\alpha}\Big{]}(\widetilde{y})\;d\widetilde{y}.$
The decomposition (2.24) thus follows, upon defining $\widetilde{B}$ as the
integral term
$(\widetilde{B}\widetilde{\omega})_{\alpha}:=\int_{0}^{\infty}e^{-|\alpha|\widetilde{y}}\Big{[}\lambda^{2}(\widetilde{R}_{\Delta}\widetilde{\omega}^{*})_{\alpha}+\lambda^{-2}((\Delta\phi^{b}){\omega}^{*}-2\nabla\phi^{b}\cdot\nabla{\omega}^{*})_{\alpha}\Big{]}(\widetilde{y})\;d\widetilde{y}\,,$
(2.27)
for each Fourier variable $\alpha\in{\mathbb{Z}}$. It remains to prove the
boundedness of $\widetilde{B}$. Note that by definition, the last two terms
involve only the region $\widetilde{y}\geq\delta_{0}+\rho_{0}$, where the
derivatives of the cutoff function $\phi^{b}$ are supported. Therefore,
$\Big{|}\int_{0}^{\infty}e^{-|\alpha|\widetilde{y}}((\Delta\phi^{b}){\omega}^{*}-2\nabla\phi^{b}\cdot\nabla{\omega}^{*})_{\alpha}(\widetilde{y})\;d\widetilde{y}\Big{|}\lesssim\|\omega^{*}\|_{H^{1}(\lambda d(x,\partial\Omega)\geq\delta_{0}+\rho_{0})}.$
It remains to bound the first integral term in (2.27). In view of (2.17), we
write
$\displaystyle\widetilde{R}_{\Delta}\widetilde{\omega}^{*}$
$\displaystyle=\partial^{2}_{\widetilde{\theta}}[\widetilde{m}\widetilde{\omega}^{*}]-\partial_{\widetilde{\theta}}\Big{[}2\partial_{\widetilde{\theta}}\widetilde{m}\widetilde{\omega}^{*}+\frac{\widetilde{z}\widetilde{\gamma}^{\prime}}{(1+\lambda^{2}\widetilde{z}\widetilde{\gamma})^{3}}\widetilde{\omega}^{*}\Big{]}+\partial_{\widetilde{z}}\Big{(}\frac{\widetilde{\gamma}}{1+\lambda^{2}\widetilde{z}\widetilde{\gamma}}\widetilde{\omega}^{*}\Big{)}$
$\displaystyle\quad+\Big{[}(\partial^{2}_{\widetilde{\theta}}\widetilde{m})-\partial_{\widetilde{z}}\Big{(}\frac{\widetilde{\gamma}}{1+\lambda^{2}\widetilde{z}\widetilde{\gamma}}\Big{)}+\partial_{\widetilde{\theta}}\Big{(}\frac{\widetilde{z}\widetilde{\gamma}^{\prime}}{(1+\lambda^{2}\widetilde{z}\widetilde{\gamma})^{3}}\Big{)}\Big{]}\widetilde{\omega}^{*},$
noting that the coefficients are analytic near the boundary. We note in particular that there is no growth for large $\widetilde{z}$: for instance, $|\widetilde{m}(\widetilde{z},\widetilde{\theta})|\lesssim\lambda^{-2}$ uniformly for large $\widetilde{z}$. In addition, we note that
$\widetilde{m}=\widetilde{z}\widetilde{m}_{1}$ for some bounded function
$\widetilde{m}_{1}$. Thus, using the fact that $|\alpha|\widetilde{y}e^{-\frac{1}{2}|\alpha|\widetilde{y}}\lesssim 1$, the second-order derivative term $\partial^{2}_{\widetilde{\theta}}[\widetilde{m}\widetilde{\omega}^{*}]$ can be treated as a first-order derivative term. Precisely, we can treat the
first integral in (2.27) systematically as follows: for some smooth and
bounded coefficients $b(\widetilde{z},\widetilde{\theta})$,
$\lambda^{2}\int_{0}^{\infty}e^{-\frac{1}{2}|\alpha|\widetilde{y}}|(\alpha,\partial_{\widetilde{y}})(b\widetilde{\omega}^{*})_{\alpha}|(\widetilde{y})\;d\widetilde{y}\lesssim\lambda^{2}|\alpha|^{-1/2}\|(\alpha,\partial_{\widetilde{y}})(b\widetilde{\omega}^{*})_{\alpha}\|_{L^{2}_{\widetilde{y}}}.$
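For the reader's convenience, the factor $|\alpha|^{-1/2}$ above is simply a consequence of the Cauchy-Schwarz inequality applied to the exponential kernel:
$\int_{0}^{\infty}e^{-\frac{1}{2}|\alpha|\widetilde{y}}|h(\widetilde{y})|\;d\widetilde{y}\leq\Big{(}\int_{0}^{\infty}e^{-|\alpha|\widetilde{y}}\;d\widetilde{y}\Big{)}^{1/2}\|h\|_{L^{2}_{\widetilde{y}}}=|\alpha|^{-1/2}\|h\|_{L^{2}_{\widetilde{y}}},$
applied here with $h=(\alpha,\partial_{\widetilde{y}})(b\widetilde{\omega}^{*})_{\alpha}$.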
This yields
$\displaystyle|(\widetilde{B}\widetilde{\omega})_{\alpha}|$
$\displaystyle\lesssim\lambda^{2}|\alpha|^{-1/2}\|(\alpha,\partial_{\widetilde{y}})(b\widetilde{\omega}^{*})_{\alpha}\|_{L^{2}_{\widetilde{y}}}+\|\omega^{*}\|_{H^{1}(\lambda d(x,\partial\Omega)\geq\delta_{0}+\rho_{0})}\,.$
Summing the squares over $\alpha\in{\mathbb{Z}}$, we thus obtain
$\displaystyle\sum_{\alpha}|(\widetilde{B}\widetilde{\omega})_{\alpha}|^{2}$
$\displaystyle\lesssim\lambda^{2}\sum_{\alpha}|\alpha|^{-1}\|(\alpha,\partial_{\widetilde{y}})\widetilde{\omega}^{*}_{\alpha}\|_{L^{2}_{\widetilde{y}}}^{2},$
(2.28)
upon noting that the coefficients $b(\widetilde{z},\widetilde{\theta})$ are analytic, and in particular satisfy $\|b_{\alpha}(\widetilde{z})\|_{L^{1}_{\alpha}L^{\infty}_{\widetilde{z}}}<\infty$.
It remains to bound the right-hand side of (2.28). Directly from (2.26), we
compute
$\displaystyle|(\alpha,\partial_{\widetilde{z}})\widetilde{\omega}^{*}_{\alpha}(\widetilde{z})|$
$\displaystyle\lesssim|\alpha|e^{-|\alpha|\widetilde{z}}|\widetilde{\omega}_{\alpha}(0)|+\lambda^{2}\int_{0}^{\infty}e^{-|\alpha(\widetilde{z}-\widetilde{z}^{\prime})|}|(\widetilde{R}_{\Delta}\widetilde{\omega}^{*})_{\alpha}(\widetilde{z}^{\prime})|\;d\widetilde{z}^{\prime}+|\alpha|^{-1/2}\|\omega^{*}\|_{H^{1}(\lambda d(x,\partial\Omega)\geq\delta_{0}+\rho_{0})}.$
Therefore, together with the standard Hausdorff-Young inequality, we bound
$\displaystyle\|(\alpha,\partial_{\widetilde{z}})\widetilde{\omega}^{*}_{\alpha}\|_{L^{2}_{\widetilde{z}}}$
$\displaystyle\lesssim|\alpha|^{1/2}|\widetilde{\omega}_{\alpha}(0)|+\lambda^{2}|\alpha|^{-1}\|(\widetilde{R}_{\Delta}\widetilde{\omega}^{*})_{\alpha}\|_{L^{2}_{\widetilde{z}}}+|\alpha|^{-1/2}\|\omega^{*}\|_{H^{1}(\lambda d(x,\partial\Omega)\geq\delta_{0}+\rho_{0})}$
which yields
$\displaystyle\sum_{\alpha}|\alpha|^{-1}\|(\alpha,\partial_{\widetilde{z}})\widetilde{\omega}^{*}_{\alpha}\|_{L^{2}_{\widetilde{z}}}^{2}$
$\displaystyle\lesssim\sum_{\alpha}|\widetilde{\omega}_{\alpha}(0)|^{2}+\lambda^{2}\sum_{\alpha}|\alpha|^{-3}\|(\widetilde{R}_{\Delta}\widetilde{\omega}^{*})_{\alpha}\|_{L^{2}_{\widetilde{z}}}^{2}+\|\omega^{*}\|_{H^{1}(\lambda d(x,\partial\Omega)\geq\delta_{0}+\rho_{0})}^{2}$
$\displaystyle\lesssim\sum_{\alpha}|\widetilde{\omega}_{\alpha}(0)|^{2}+\lambda^{2}\sum_{\alpha}|\alpha|^{-1}\|(\alpha,\partial_{\widetilde{z}})\widetilde{\omega}^{*}_{\alpha}\|_{L^{2}_{\widetilde{z}}}^{2}$
$\displaystyle\quad+\sum_{\alpha}|\alpha|^{-1}\|(\alpha,\partial_{\widetilde{z}})\widetilde{\omega}^{*}_{\alpha}\|^{2}_{L^{2}_{\\{{\widetilde{z}\geq\delta_{0}+\rho_{0}}\\}}}+\|\omega^{*}\|_{H^{1}(\lambda d(x,\partial\Omega)\geq\delta_{0}+\rho_{0})}.$
Taking $\lambda$ sufficiently small, the second term on the right can be absorbed into the left-hand side. On the other hand, using the standard elliptic
theory, the last term is bounded by
$\sum_{\alpha}|\alpha|^{-1}\|(\alpha,\partial_{\widetilde{z}})\widetilde{\omega}^{*}_{\alpha}\|^{2}_{L^{2}_{\\{{\widetilde{z}\geq\delta_{0}+\rho_{0}}\\}}}\lesssim\|\omega^{*}\|^{2}_{H^{1}(\lambda d(x,\partial\Omega)\geq\delta_{0})}\lesssim\|\omega\|_{L^{2}(\partial\Omega)}^{2}.$
Putting these back into (2.28), we obtain the lemma. ∎
## 3 Near boundary analytic spaces
In this section, we introduce the near boundary analytic norm used to control
the vorticity that is analytic near the boundary but has only Sobolev
regularity away from the boundary. We then derive the elliptic estimates,
bilinear estimates, and semigroup estimates needed in these analytic spaces.
### 3.1 Analytic norms
Let $\delta>0$ be small enough that Proposition 2.1 applies on
$\bar{V}_{\delta}=\\{d(x,\partial\Omega)\leq\delta\\}$. In particular, we take
$\delta$ so small that the statement of Proposition 2.1 still holds on $V_{2\delta}$.
Now for any constant $\lambda\in(0,1)$, we have
$\lambda d(x,\partial\Omega)\leq\lambda\delta$
for all $x\in\bar{V}_{\delta}$. Let $\delta_{0}=\lambda\delta$, which will be the
size of the analytic domain for our solution near the boundary. We fix
$\rho_{0}\in(0,1/10)$, and assume that $\rho\in(0,\rho_{0})$. Then
$\Omega_{\rho}=\\{\widetilde{z}\in\mathbb{C}:0\leq\Re\widetilde{z}\leq\delta_{0},|\Im\widetilde{z}|\leq\rho\Re\widetilde{z}\\}\cup\\{\widetilde{z}\in\mathbb{C}:\delta_{0}\leq\Re\widetilde{z}\leq\delta_{0}+\rho,|\Im\widetilde{z}|\leq\delta_{0}+\rho-\Re\widetilde{z}\\}$
(3.1)
denotes the complex domain for functions of the $\widetilde{z}$ variable. We
note that the domain $\Omega_{\rho}$ only contains $\widetilde{z}$ with
$0\leq\Re\widetilde{z}\leq\delta_{0}+\rho$. For a complex valued function $f$
defined on $\Omega_{\rho}$, let
$\|f\|_{L^{1}_{\rho}}=\sup_{0\leq\eta<\rho}\|f\|_{L^{1}(\partial\Omega_{\eta})},\qquad\|f\|_{L^{\infty}_{\rho}}=\sup_{0\leq\eta<\rho}\|f\|_{L^{\infty}(\partial\Omega_{\eta})}$
where the integration is taken over the two directed paths along the boundary
of the domain $\Omega_{\eta}$. Now for an analytic function
$f(\widetilde{\theta},\widetilde{z})$ defined on
$(\widetilde{\theta},\widetilde{z})\in\mathbb{T}\times\Omega_{\rho}$, we
define
$\displaystyle\|f\|_{\mathcal{L}^{1}_{\rho}}$
$\displaystyle=\sum_{\alpha\in\mathbb{Z}}\|e^{\varepsilon_{0}(\delta_{0}+\rho-\Re\widetilde{z})|\alpha|}f_{\alpha}\|_{L^{1}_{\rho}},$
(3.2) $\displaystyle\|f\|_{\mathcal{L}^{\infty}_{\rho}}$
$\displaystyle=\sum_{\alpha\in\mathbb{Z}}\|e^{\varepsilon_{0}(\delta_{0}+\rho-\Re\widetilde{z})|\alpha|}f_{\alpha}\|_{L^{\infty}_{\rho}},$
where $f_{\alpha}$ denotes the Fourier transform of $f$ with respect to
variable $\widetilde{\theta}$. The function spaces $\mathcal{L}^{1}_{\rho}$
and $\mathcal{L}_{\rho}^{\infty}$ are to control the scaled vorticity and
velocity, respectively. We stress that the analyticity weight vanishes on
$\Re\widetilde{z}\geq\delta_{0}+\rho$. For convenience, we also introduce the
following analytic norms
$\|f\|_{\mathcal{W}_{\rho}^{k,p}}=\sum_{i+j\leq
k}\|\partial_{\widetilde{\theta}}^{i}(\widetilde{z}\partial_{\widetilde{z}})^{j}f\|_{\mathcal{L}^{p}_{\rho}}$
(3.3)
for $k\geq 0$ and $p=1,\infty$. We observe the following simple algebra.
###### Lemma 3.1.
There hold
$\|fg\|_{\mathcal{L}^{1}_{\rho}}\leq\|f\|_{\mathcal{L}^{\infty}_{\rho}}\|g\|_{\mathcal{L}^{1}_{\rho}}$
(3.4)
and for any $0<\rho^{\prime}<\rho$,
$\|\partial_{\widetilde{\theta}}f\|_{\mathcal{L}^{1}_{\rho^{\prime}}}+\|\widetilde{z}\partial_{\widetilde{z}}f\|_{\mathcal{L}^{1}_{\rho^{\prime}}}\lesssim\frac{1}{\rho-\rho^{\prime}}\|f\|_{\mathcal{L}^{1}_{\rho}}.$
(3.5)
###### Proof.
By definition, we compute
$\displaystyle
e^{\varepsilon_{0}(\delta_{0}+\rho-\Re\widetilde{z})|\alpha|}|(fg)_{\alpha}(\widetilde{z})|$
$\displaystyle\leq\sum_{\alpha^{\prime}}|f_{\alpha-{\alpha^{\prime}}}(\widetilde{z})g_{\alpha^{\prime}}(\widetilde{z})|e^{\varepsilon_{0}(\delta_{0}+\rho-\Re\widetilde{z})|\alpha|}$
$\displaystyle\leq\sum_{\alpha^{\prime}}|e^{\varepsilon_{0}(\delta_{0}+\rho-\Re\widetilde{z})|\alpha-{\alpha^{\prime}}|}f_{\alpha-{\alpha^{\prime}}}(\widetilde{z})e^{\varepsilon_{0}(\delta_{0}+\rho-\Re\widetilde{z})|{\alpha^{\prime}}|}g_{\alpha^{\prime}}(\widetilde{z})|$
which gives
$\displaystyle\|e^{\varepsilon_{0}(\delta_{0}+\rho-\Re\widetilde{z})|\alpha|}(fg)_{\alpha}(\widetilde{z})\|_{L_{\rho}^{1}}$
$\displaystyle\leq\sum_{\alpha^{\prime}}\|e^{\varepsilon_{0}(\delta_{0}+\rho-\Re\widetilde{z})|\alpha-{\alpha^{\prime}}|}f_{\alpha-{\alpha^{\prime}}}\|_{L^{\infty}_{\rho}}\|e^{\varepsilon_{0}(\delta_{0}+\rho-\Re\widetilde{z})|{\alpha^{\prime}}|}g_{\alpha^{\prime}}\|_{L^{1}_{\rho}}.$
The estimate (3.4) follows from taking the summation in $\alpha$ over
${\mathbb{Z}}$. The stated bounds on derivatives are classical (e.g., [22,
21]), making use of the fact that
$(\rho-\rho^{\prime})|\alpha|e^{(\rho^{\prime}-\rho)|\alpha|}$ is bounded. ∎
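For the reader's convenience, the elementary bound invoked in the last step follows from maximizing $x\mapsto xe^{-x}$ on $[0,\infty)$: taking $x=(\rho-\rho^{\prime})|\alpha|$,
$(\rho-\rho^{\prime})|\alpha|e^{(\rho^{\prime}-\rho)|\alpha|}=xe^{-x}\leq\sup_{x\geq 0}xe^{-x}=e^{-1}.$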
### 3.2 Elliptic estimates in the half-plane
In this section, we derive some basic elliptic estimates in the analytic
spaces $\mathcal{W}^{k,p}_{\rho}$. Precisely, we consider
$\left\\{\begin{aligned} \Delta_{z,\theta}\phi&=f,\qquad\mbox{in
}\quad\mathbb{R}_{+}\times{\mathbb{T}}\\\
\phi_{|_{z=0}}&=0\end{aligned}\right.$ (3.6)
in which we drop the tildes for ease of presentation. The
$\mathcal{W}^{k,p}_{\rho}$ analytic norm is defined on $\Re
z\leq\delta_{0}+\rho$ as introduced in the previous section. We obtain the
following proposition.
###### Proposition 3.2.
Let $\phi$ be the solution of (3.6). Then, the velocity field
$u=\nabla^{\perp}\phi$ satisfies
$\displaystyle\|u\|_{\mathcal{W}^{k,\infty}_{\rho}}$
$\displaystyle\lesssim\|f\|_{\mathcal{W}^{k,1}_{\rho}}+\|f\|_{H^{k+1}(\\{z\geq\delta_{0}+\rho\\})}$
(3.7)
$\displaystyle\|(\frac{1}{z}\partial_{\theta}\phi)\|_{\mathcal{W}^{k,\infty}_{\rho}}$
$\displaystyle\lesssim\|f\|_{\mathcal{W}^{k,1}_{\rho}}+\|\partial_{\theta}f\|_{\mathcal{W}^{k,1}_{\rho}}+\|f\|_{H^{k+1}(\\{z\geq\delta_{0}+\rho\\})}$
$\displaystyle\|\nabla_{z,\theta}u\|_{\mathcal{W}^{k,\infty}_{\rho}}$
$\displaystyle\lesssim\|f\|_{\mathcal{W}^{k,\infty}_{\rho}}+\|f\|_{H^{k+2}(\\{z\geq\delta_{0}+\rho\\})}$
for $k\geq 0$.
###### Proof.
The elliptic problem (3.6) can be solved explicitly in Fourier space. Indeed,
taking the Fourier transform in $\theta$, we get the elliptic equation
$(\partial_{z}^{2}-\alpha^{2})\phi_{\alpha}=f_{\alpha}$
for the Fourier transform $\phi_{\alpha}$. We focus on the case $\alpha>0$;
the other case is similar. The solution is given by
$\phi_{\alpha}(z)=\int_{0}^{z}K_{-}(y,z)f_{\alpha}(y)dy+\int_{z}^{\infty}K_{+}(y,z)f_{\alpha}(y)dy$
with the Green function defined by
$K_{\pm}(y,z)=-\frac{1}{2\alpha}\Big{(}e^{\pm\alpha(z-y)}-e^{-\alpha(y+z)}\Big{)}.$
This expression may be extended to complex values of $z$. Indeed, for
$z\in\Omega_{\rho}$, there is a positive $\eta$ so that
$z\in\partial\Omega_{\eta}$. We then write
$\partial\Omega_{\eta}=\gamma_{-}(z)\cup\gamma_{+}(z)$, consisting of
complex numbers $y\in\partial\Omega_{\eta}$ so that $\Re y<\Re z$ and $\Re
y>\Re z$, respectively. Then, the integral is taken over $\gamma_{-}(z)$ and
$\gamma_{+}(z)$, respectively. We note in particular that for
$y\in\gamma_{\pm}(z)$, the same bounds hold on the Green function:
$|K_{\pm}(y,z)|\leq\alpha^{-1}e^{-\alpha|y-z|}.$
This proves that
$\displaystyle|\phi_{\alpha}(z)|$
$\displaystyle\leq\int_{\partial\Omega_{\eta}}\alpha^{-1}e^{-\alpha|y-z|}|f_{\alpha}(y)||dy|.$
(3.8)
By definition of $\mathcal{L}^{1}_{\rho}$ norm, we only need to consider the
case when $0\leq\Re z\leq\delta_{0}+\rho$. Now, for $0\leq\Re
y\leq\delta_{0}+\rho$, we bound
$e^{-\alpha|\Re y-\Re z|}e^{-\varepsilon_{0}(\delta_{0}+\rho-\Re y)\alpha}\leq
e^{-\varepsilon_{0}(\delta_{0}+\rho-\Re z)\alpha}e^{-(1-\varepsilon_{0})\alpha|\Re y-\Re z|},$
noting $\varepsilon_{0}\leq 1/2$. On the other hand, for $\Re
y\geq\delta_{0}+\rho$ (recalling $\delta_{0}+\rho\geq\Re z$), we bound
$e^{-\alpha|\Re y-\Re z|}\leq e^{-\varepsilon_{0}(\delta_{0}+\rho-\Re z)\alpha}e^{-(1-\varepsilon_{0})\alpha|\Re y-\Re z|}.$
Therefore, we bound
$\displaystyle\int_{\Re
y\leq\delta_{0}+\rho}\alpha^{-1}e^{-\alpha|y-z|}|f_{\alpha}(y)||dy|$
$\displaystyle\lesssim\alpha^{-1}e^{-\varepsilon_{0}(\delta_{0}+\rho-\Re
z)\alpha}\|e^{\varepsilon_{0}(\delta_{0}+\rho-\Re
y)\alpha}f_{\alpha}\|_{L^{1}_{\rho}},$ $\displaystyle\int_{\Re
y\geq\delta_{0}+\rho}\alpha^{-1}e^{-\alpha|y-z|}|f_{\alpha}(y)||dy|$
$\displaystyle\lesssim\alpha^{-3/2}e^{-\varepsilon_{0}(\delta_{0}+\rho-\Re
z)\alpha}\|f_{\alpha}\|_{L^{2}(y\geq\delta_{0}+\rho)}.$
Similarly, we also have
$\displaystyle\int_{\Re
y\leq\delta_{0}+\rho}\alpha^{-1}e^{-\alpha|y-z|}|f_{\alpha}(y)||dy|$
$\displaystyle\lesssim\alpha^{-2}e^{-\varepsilon_{0}(\delta_{0}+\rho-\Re
z)\alpha}\|e^{\varepsilon_{0}(\delta_{0}+\rho-\Re
y)\alpha}f_{\alpha}\|_{L^{\infty}_{\rho}},$
which gains an extra factor of $\alpha$. This proves
$\displaystyle\|e^{\varepsilon_{0}(\delta_{0}+\rho-\Re
z)\alpha}(\alpha,\partial_{z})\phi_{\alpha}\|_{L^{\infty}_{\rho}}$
$\displaystyle\leq\|e^{\varepsilon_{0}(\delta_{0}+\rho-\Re
y)\alpha}f_{\alpha}\|_{L^{1}_{\rho}}+\alpha^{-1/2}\|f_{\alpha}\|_{L^{2}(y\geq\delta_{0}+\rho)}$
$\displaystyle\|e^{\varepsilon_{0}(\delta_{0}+\rho-\Re
z)\alpha}(\alpha,\partial_{z})^{2}\phi_{\alpha}\|_{L^{\infty}_{\rho}}$
$\displaystyle\leq\|e^{\varepsilon_{0}(\delta_{0}+\rho-\Re
y)\alpha}f_{\alpha}\|_{L^{\infty}_{\rho}}+\alpha^{1/2}\|f_{\alpha}\|_{L^{2}(y\geq\delta_{0}+\rho)}.$
Taking the summation in $\alpha\in{\mathbb{Z}}$ yields the first and last
estimates in (3.7) for $k=0$. For $k\geq 1$, the estimates follow similarly.
For the estimates involving the weight $z^{-1}$, we use the fact that the
Green function vanishes on the boundary $z=0$, and so $|K_{\pm}(y,z)|\leq
ze^{-\alpha|y-z|}.$ ∎
### 3.3 Biot-Savart law in $\Omega$
In this section, we bound the velocity through the Biot-Savart law: namely,
$u=\nabla^{\perp}\phi$, where
$\left\\{\begin{aligned} \Delta\phi&=\omega,\qquad\mbox{in }\quad\Omega\\\
\phi&=0,\qquad\mbox{on}\quad\partial\Omega.\end{aligned}\right.$ (3.9)
Without loss of generality, we will work with the cut-off vorticity
$\omega^{b}$ (see Section 4.1) near the boundary where the rescaled
coordinates introduced in Section 2.3 apply. We obtain the following
proposition.
###### Proposition 3.3.
Let $\phi$ be the solution of (3.9). Then, the velocity field
$u=\nabla^{\perp}\phi$ satisfies
$\displaystyle\|u\|_{\mathcal{W}^{k,\infty}_{\rho}}$
$\displaystyle\lesssim\|\omega\|_{\mathcal{W}^{k,1}_{\rho}}+\|\omega\|_{H^{k+1}(\\{\lambda
d(x,\partial\Omega)\geq\delta_{0}/2\\})}$ (3.10)
$\displaystyle\|(\frac{1}{\widetilde{z}}\partial_{\widetilde{\theta}}\phi)\|_{\mathcal{W}^{k,\infty}_{\rho}}$
$\displaystyle\lesssim\|\omega\|_{\mathcal{W}^{k,1}_{\rho}}+\|\partial_{\widetilde{\theta}}\omega\|_{\mathcal{W}^{k,1}_{\rho}}+\|\omega\|_{H^{k+1}(\\{\lambda
d(x,\partial\Omega)\geq\delta_{0}/2\\})}$
for $k\geq 0$.
###### Proof.
Using (2.16) and (3.9), the scaled stream function
$\widetilde{\phi}(\widetilde{t},\widetilde{z},\widetilde{\theta})$ solves
$\Delta_{\widetilde{z},\widetilde{\theta}}\widetilde{\phi}=\lambda^{-2}\widetilde{\omega}-\lambda^{2}\widetilde{R}_{\Delta}\widetilde{\phi},\qquad{\widetilde{\phi}}{}_{|_{\widetilde{z}=0}}=0$
on ${\mathbb{T}}\times\mathbb{R}_{+}$, and so the elliptic theory, Proposition
3.2, developed in the previous section can be applied, yielding
$\displaystyle\|u\|_{\mathcal{W}^{k,\infty}_{\rho}}$
$\displaystyle\lesssim\|\omega\|_{\mathcal{W}^{k,1}_{\rho}}+\|\omega\|_{H^{k+1}(\\{\lambda
d(x,\partial\Omega)\geq\delta_{0}+\rho\\})}$ (3.11)
$\displaystyle\quad+\lambda^{2}\|\partial_{\widetilde{\theta}}^{-1}\widetilde{R}_{\Delta}\widetilde{\phi}\|_{\mathcal{W}^{k,\infty}_{\rho}}+\lambda^{2}\|\widetilde{R}_{\Delta}\widetilde{\phi}\|_{H^{k+1}(\\{\widetilde{z}\geq\delta_{0}+\rho\\})}.$
It thus remains to bound $\widetilde{R}_{\Delta}\widetilde{\phi}$. Recall from
(2.17) that
$\widetilde{R}_{\Delta}=\widetilde{m}(\widetilde{z},\widetilde{\theta})\partial^{2}_{\widetilde{\theta}}+\frac{\widetilde{\gamma}}{1+\lambda^{2}\widetilde{z}\widetilde{\gamma}}\partial_{\widetilde{z}}-\frac{\widetilde{z}\widetilde{\gamma}^{\prime}}{(1+\lambda^{2}\widetilde{z}\widetilde{\gamma})^{3}}\partial_{\widetilde{\theta}},\qquad\widetilde{m}(\widetilde{z},\widetilde{\theta})=-\frac{2\widetilde{z}\widetilde{\gamma}+\lambda^{2}(\widetilde{z}\widetilde{\gamma})^{2}}{(1+\lambda^{2}\widetilde{z}\widetilde{\gamma})^{2}}.$
Thanks to the analyticity of the boundary, the coefficients are clearly
bounded in $\mathcal{W}^{k,\infty}_{\rho}$. Therefore, using a similar algebra
as in (3.4), we bound
$\lambda^{2}\|\partial_{\widetilde{\theta}}^{-1}\widetilde{R}_{\Delta}\widetilde{\phi}\|_{\mathcal{W}^{k,\infty}_{\rho}}\lesssim\lambda^{2}\|\partial_{\widetilde{\theta}}\widetilde{\phi}\|_{\mathcal{W}^{k,\infty}_{\rho}}+\lambda^{2}\|\partial_{\widetilde{z}}\widetilde{\phi}\|_{\mathcal{W}^{k,\infty}_{\rho}}.$
(3.12)
That is, this term can be absorbed into the left-hand side of (3.11), upon
taking $\lambda$ sufficiently small. As for the last term in (3.11), we note
that for large $\widetilde{z}$,
$|\widetilde{m}(\widetilde{z},\widetilde{\theta})|\lesssim\lambda^{-2}$, which
in particular proves that there is no growth in $\widetilde{z}$. This gives
$\displaystyle\lambda^{2}\|\widetilde{R}_{\Delta}\widetilde{\phi}\|_{H^{k+1}(\\{\widetilde{z}\geq\delta_{0}+\rho\\})}$
$\displaystyle\lesssim\|\phi\|_{H^{k+3}(\\{\lambda
d(x,\partial\Omega)\geq\delta_{0}+\rho\\})}$ (3.13)
$\displaystyle\lesssim\lambda^{2}\|\widetilde{\phi}\|_{\mathcal{W}^{k,\infty}_{\rho}}+\|\omega\|_{H^{k+1}(\\{\lambda
d(x,\partial\Omega)\geq\delta_{0}/2\\})},$
in which the last estimate follows from the standard elliptic theory in
Sobolev spaces. The proposition follows. ∎
### 3.4 Bilinear estimates
In this section, we show that the Sobolev-analytic norm is well adapted to
treat the nonlinear term $u\cdot\nabla\omega$. We have the following lemma.
###### Lemma 3.4.
For any $\omega$ and $\omega^{\prime}$, denoting by $u$ the velocity related
to $\omega$, we have
$\displaystyle\|u\cdot\nabla\omega^{\prime}\|_{\mathcal{L}^{1}_{\rho}}$
$\displaystyle\leq
C\Big{(}\|\omega\|_{\mathcal{L}^{1}_{\rho}}+\|\omega\|_{H^{1}(\\{\lambda
d(x,\partial\Omega)\geq\delta_{0}\\})}\Big{)}\|\partial_{\widetilde{\theta}}\omega^{\prime}\|_{\mathcal{L}^{1}_{\rho}}$
$\displaystyle\quad+C\Big{(}\|\omega\|_{\mathcal{L}^{1}_{\rho}}+\|\partial_{\widetilde{\theta}}\omega\|_{\mathcal{L}^{1}_{\rho}}+\|\omega\|_{H^{1}(\\{\lambda
d(x,\partial\Omega)\geq\delta_{0}\\})}\Big{)}\|\widetilde{z}\partial_{\widetilde{z}}\omega^{\prime}\|_{\mathcal{L}^{1}_{\rho}}.$
###### Proof.
By definition, the $\mathcal{L}^{1}_{\rho}$ norm is defined near the boundary
$\\{\lambda d(x,\partial\Omega)\leq\delta_{0}+\rho\\}$, on which we can write
$u\cdot\nabla{\omega}^{\prime}=\frac{1}{1+z\gamma(\theta)}\partial_{\theta}\phi\partial_{z}{\omega}^{\prime}-\frac{1}{(1+z\gamma(\theta))^{2}}\partial_{z}\phi\partial_{\theta}{\omega}^{\prime}$
with $\Delta\phi=\omega$. In the rescaled variable
$(\widetilde{z},\widetilde{\theta})$, we get
$u\cdot\nabla{\omega}^{\prime}=\frac{\lambda^{2}}{1+\lambda^{2}\widetilde{z}\widetilde{\gamma}(\widetilde{\theta})}(\partial_{\widetilde{\theta}}\widetilde{\phi})(\partial_{\widetilde{z}}\widetilde{\omega}^{\prime})-\frac{\lambda^{2}}{(1+\lambda^{2}\widetilde{z}\widetilde{\gamma}(\widetilde{\theta}))^{2}}(\partial_{\widetilde{z}}\widetilde{\phi})(\partial_{\widetilde{\theta}}\widetilde{\omega}^{\prime})$
Note that thanks to the analyticity of $\partial\Omega$, the coefficient
$(1+\lambda^{2}\widetilde{z}\widetilde{\gamma}(\widetilde{\theta}))^{-1}$ is
bounded in $\mathcal{L}^{\infty}_{\rho}$. Using (3.4) and Proposition 3.3, we
bound
$\displaystyle\|(\partial_{\widetilde{z}}\widetilde{\phi})(\partial_{\widetilde{\theta}}\widetilde{\omega}^{\prime})\|_{\mathcal{L}^{1}_{\rho}}$
$\displaystyle\lesssim\|\partial_{\widetilde{z}}\widetilde{\phi}\|_{\mathcal{L}_{\rho}^{\infty}}\|\partial_{\widetilde{\theta}}\widetilde{\omega}^{\prime}\|_{\mathcal{L}^{1}_{\rho}}$
$\displaystyle\lesssim\Big{(}\|\omega\|_{\mathcal{L}^{1}_{\rho}}+\|\omega\|_{H^{1}(\\{\lambda
d(x,\partial\Omega)\geq\delta_{0}\\})}\Big{)}\|\partial_{\widetilde{\theta}}\omega^{\prime}\|_{\mathcal{L}^{1}_{\rho}}$
$\displaystyle\|(\partial_{\widetilde{\theta}}\widetilde{\phi})(\partial_{\widetilde{z}}\widetilde{\omega}^{\prime})\|_{\mathcal{L}^{1}_{\rho}}$
$\displaystyle\lesssim\|\frac{1}{\widetilde{z}}\partial_{\widetilde{\theta}}\widetilde{\phi}\|_{\mathcal{L}_{\rho}^{\infty}}\|\widetilde{z}\partial_{\widetilde{z}}\widetilde{\omega}^{\prime}\|_{\mathcal{L}^{1}_{\rho}}$
$\displaystyle\lesssim\Big{(}\|\omega\|_{\mathcal{L}^{1}_{\rho}}+\|\partial_{\widetilde{\theta}}\omega\|_{\mathcal{L}^{1}_{\rho}}+\|\omega\|_{H^{1}(\\{\lambda
d(x,\partial\Omega)\geq\delta_{0}\\})}\Big{)}\|\widetilde{z}\partial_{\widetilde{z}}\omega^{\prime}\|_{\mathcal{L}^{1}_{\rho}}$
giving the lemma. ∎
### 3.5 Semigroup estimates in the half-plane
In this section, we give bounds on the Stokes semigroup $e^{\nu tS}$ in the
analytic spaces $\mathcal{W}^{k,1}_{\rho}$ on the half-plane
$\mathbb{R}_{+}\times{\mathbb{T}}$. We also denote by $\Gamma(\nu t)=e^{\nu
tS}(\mathcal{H}^{1}_{\\{\widetilde{z}=0\\}\times{\mathbb{T}}})$ the trace of
the semigroup on the boundary, with
$\mathcal{H}^{1}_{\\{\widetilde{z}=0\\}\times{\mathbb{T}}}$ being the one-
dimensional Hausdorff measure restricted on the boundary. The results in this
section are an easy adaptation of those obtained in [21], where the analytic
spaces contained no cutoff in $z$. Precisely, we consider
$\displaystyle(\partial_{t}-\nu\Delta_{z,\theta}){\omega}$ $\displaystyle=0$
(3.14) $\displaystyle\nu(\partial_{z}+|\partial_{\theta}|)\omega_{|_{z=0}}$
$\displaystyle=0$
on $\mathbb{R}_{+}\times{\mathbb{T}}$ (where we drop the tildes for ease of
presentation). We obtain the following proposition.
###### Proposition 3.5.
Let $e^{\nu tS}$ be the semigroup of the linear Stokes problem (3.14), and let
$\Gamma(\nu t)g$ be its trace on the boundary. Then, for any $t\geq 0$,
$\rho>0$, and $k\geq 0$, there hold
$\displaystyle\|e^{\nu tS}f\|_{\mathcal{W}^{k,1}_{\rho}}$ $\displaystyle\leq
C_{0}\|f\|_{\mathcal{W}^{k,1}_{\rho}}+\|zf\|_{H^{k+1}(z\geq\delta_{0}+\rho)}$
(3.15) $\displaystyle\|\Gamma(\nu t)g\|_{\mathcal{W}^{k,1}_{\rho}}$
$\displaystyle\leq
C_{0}\sum_{\alpha\in{\mathbb{Z}}}|\alpha^{k}g_{\alpha}|e^{\epsilon_{0}(\delta_{0}+\rho)|\alpha|}$
uniformly in the inviscid limit.
###### Proof.
The proof follows closely from that in [21]. Indeed, taking the Fourier
transform of the semigroup $e^{\nu tS}$ in variable $\theta$, we obtain
$(e^{\nu
tS}f)_{\alpha}(z)=\int_{0}^{\infty}G_{\alpha}(t,y;z)f_{\alpha}(y)\;dy,\qquad(\Gamma(\nu
t)g)_{\alpha}(z)=G_{\alpha}(t,0;z)g_{\alpha},$ (3.16)
for each Fourier variable $\alpha\in{\mathbb{Z}}$, where $G_{\alpha}(t,y;z)$
is the corresponding Green function. We recall the following result of
Proposition 3.3 from [21] that
$G_{\alpha}(t,y;z)=H_{\alpha}(t,y;z)+R_{\alpha}(t,y;z),$ (3.17)
where
$\displaystyle H_{\alpha}(t,y;z)$ $\displaystyle=\frac{1}{\sqrt{\nu
t}}\Big{(}e^{-\frac{|y-z|^{2}}{4\nu t}}+e^{-\frac{|y+z|^{2}}{4\nu
t}}\Big{)}e^{-\alpha^{2}\nu t},$
$\displaystyle|\partial_{z}^{k}R_{\alpha}(t,y;z)|$
$\displaystyle\lesssim\mu_{f}^{k+1}e^{-\theta_{0}\mu_{f}|y+z|}+(\nu
t)^{-\frac{k+1}{2}}e^{-\theta_{0}\frac{|y+z|^{2}}{\nu
t}}e^{-\frac{1}{8}\alpha^{2}\nu t},$
for $y,z\geq 0$, $k\geq 0$, and for some $\theta_{0}>0$ and for
$\mu_{f}=|\alpha|+\frac{1}{\sqrt{\nu}}$. In particular,
$\|G_{\alpha}(t,y;\cdot)\|_{L^{1}_{\rho}}\lesssim 1$, for each fixed $y,t$.
Now, for $z,y\leq\delta_{0}+\rho$, we note that
$\displaystyle e^{-a|y\pm z|}e^{-\epsilon_{0}(\delta_{0}+\rho-y)|\alpha|}$
$\displaystyle=e^{-a|y\pm
z|+\epsilon_{0}|\alpha|(y-z)}e^{-\epsilon_{0}(\delta_{0}+\rho-z)|\alpha|}$
(3.18) $\displaystyle\leq e^{-(a-\epsilon_{0}|\alpha|)|y\pm
z|}e^{-\epsilon_{0}(\delta_{0}+\rho-z)|\alpha|}$
for any real number $a$ and for $\epsilon_{0}$ sufficiently small, where we used $y-z\leq|y\pm z|$ for $y,z\geq 0$. Taking
$a=\frac{1}{2}\theta_{0}\mu_{f}$, we have $a\geq\epsilon_{0}|\alpha|$ (since $\mu_{f}\geq|\alpha|$) and so
$e^{-\theta_{0}\mu_{f}|y+z|}e^{-\epsilon_{0}(\delta_{0}+\rho-y)|\alpha|}\leq
e^{-\epsilon_{0}(\delta_{0}+\rho-z)|\alpha|}e^{-\frac{1}{2}\theta_{0}\mu_{f}|y+z|}$
On the other hand, taking $a=\frac{1}{2}\theta_{0}\frac{|y\pm z|}{\nu t}$ in
(3.18), we have either $a\geq\epsilon_{0}|\alpha|$ or
$\frac{1}{2}\theta_{0}\alpha^{2}\nu t\geq\epsilon_{0}|\alpha||y\pm z|$.
Therefore, we have
$e^{-\theta_{0}\frac{|y+z|^{2}}{\nu t}}e^{-\theta_{0}\alpha^{2}\nu
t}e^{-\epsilon_{0}(\delta_{0}+\rho-y)|\alpha|}\leq
e^{-\frac{1}{2}\theta_{0}\frac{|y+z|^{2}}{\nu
t}}e^{-\epsilon_{0}(\delta_{0}+\rho-z)|\alpha|}.$
This proves that for $z\leq\delta_{0}+\rho$,
$\displaystyle
e^{\epsilon_{0}(\delta_{0}+\rho-z)|\alpha|}\int_{0}^{\delta_{0}+\rho}|G_{\alpha}(t,y;z)f_{\alpha}(y)|\;dy$
$\displaystyle\leq\int_{0}^{\delta_{0}+\rho}\Big{[}(\nu
t)^{-\frac{1}{2}}e^{-\frac{1}{2}\theta_{0}\frac{|y\pm z|^{2}}{\nu
t}}+\mu_{f}e^{-\frac{1}{2}\theta_{0}\mu_{f}|y+z|}\Big{]}|e^{\epsilon_{0}(\delta_{0}+\rho-y)|\alpha|}f_{\alpha}(y)|\;dy.$
Since the term in the bracket is bounded in $L^{1}_{z}$ norm, we have
$\displaystyle\Big{\|}e^{\epsilon_{0}(\delta_{0}+\rho-z)|\alpha|}\int_{0}^{\delta_{0}+\rho}G_{\alpha}(t,y;z)f_{\alpha}(y)\;dy\Big{\|}_{L^{1}_{\rho}}\lesssim\|e^{\epsilon_{0}(\delta_{0}+\rho-y)|\alpha|}f_{\alpha}\|_{L^{1}_{\rho}}.$
Taking the summation in $\alpha$ yields the stated bounds for this term.
Next, consider the case when $y\geq\delta_{0}+\rho\geq z$. In this case, we
simply use
$e^{-\epsilon_{0}|\alpha||y-z|}\leq
e^{-\epsilon_{0}|\alpha|(\delta_{0}+\rho-z)},$
giving the right analyticity weight in $z$. The control of the weight
$e^{\epsilon_{0}|\alpha||y-z|}$ is done exactly as above, yielding
$\displaystyle
e^{\epsilon_{0}(\delta_{0}+\rho-z)|\alpha|}\int_{\delta_{0}+\rho}^{\infty}|G_{\alpha}(t,y;z)f_{\alpha}(y)|\;dy$
$\displaystyle\leq\int_{\delta_{0}+\rho}^{\infty}\Big{[}(\nu
t)^{-\frac{1}{2}}e^{-\frac{1}{2}\theta_{0}\frac{|y\pm z|^{2}}{\nu
t}}+\mu_{f}e^{-\frac{1}{2}\theta_{0}\mu_{f}|y+z|}\Big{]}|f_{\alpha}(y)|\;dy.$
Therefore,
$\displaystyle\sum_{\alpha}\|e^{\epsilon_{0}(\delta_{0}+\rho-z)|\alpha|}\int_{\delta_{0}+\rho}^{\infty}|G_{\alpha}(t,y;z)f_{\alpha}(y)|\;dy\|_{L^{1}_{\rho}}$
$\displaystyle\lesssim\sum_{\alpha}\|f_{\alpha}\|_{L^{1}(z\geq\delta_{0}+\rho)}$
$\displaystyle\lesssim\|zf\|_{H^{1}(z\geq\delta_{0}+\rho)}.$
Similarly, from (3.16), the Fourier transform of the trace operator
$\Gamma(\nu t)g$ is estimated by
$\displaystyle|(\Gamma(\nu t)g)_{\alpha}(z)|$
$\displaystyle\leq|G_{\alpha}(t,0;z)g_{\alpha}|$
$\displaystyle\leq\Big{[}\mu_{f}e^{-\theta_{0}\mu_{f}|z|}+(\nu
t)^{-\frac{1}{2}}e^{-\theta_{0}\frac{|z|^{2}}{\nu
t}}e^{-\frac{1}{8}\alpha^{2}\nu t}\Big{]}|g_{\alpha}|$
$\displaystyle\leq\Big{[}\mu_{f}e^{-\frac{1}{2}\theta_{0}\mu_{f}|z|}+(\nu
t)^{-\frac{1}{2}}e^{-\frac{1}{2}\theta_{0}\frac{|z|^{2}}{\nu
t}}\Big{]}e^{-\epsilon_{0}(\delta_{0}+\rho-z)|\alpha|}|g_{\alpha}|e^{\epsilon_{0}(\delta_{0}+\rho)|\alpha|}$
in which the last inequality is a special case of the previous calculations
for $y=0$ and $z\leq\delta_{0}+\rho$. The bounds on $\Gamma(\nu t)g$ thus follow directly. Finally, the bounds on derivatives follow from a similar adaptation of the derivative bounds provided in [21]; we omit the details. ∎
### 3.6 Semigroup estimates near $\partial\Omega$
In this section, we provide bounds on the Stokes semigroup $e^{\nu tS}$, which
will be used to estimate the vorticity $\omega^{b}$ (see Section 4.1) near the
boundary in the analytic spaces $\mathcal{W}^{k,1}_{\rho}$. Precisely, we
consider
$\left\\{\begin{aligned} &\partial_{t}{\omega}-\nu\Delta{\omega}=0\\\
&\nu(\partial_{n}+DN){\omega}_{|_{\partial\Omega}}=0\end{aligned}\right.$
(3.19)
in $\Omega$. We obtain the following proposition.
###### Proposition 3.6.
Let $e^{\nu tS}$ be the semigroup of the linear Stokes problem (3.19), and let
$\Gamma(\nu t)$ be its trace on the boundary. Fix any finite time $T$. Then,
for sufficiently small $\lambda$, and for any $0\leq t\leq T$, $\rho>0$, and
$k\geq 0$, there hold
$\displaystyle\|e^{\nu tS}f\|_{\mathcal{W}^{k,1}_{\rho}}$ $\displaystyle\leq
C_{0}\|f\|_{\mathcal{W}^{k,1}_{\rho}}+\|f\|_{H^{k+1}(\lambda
d(x,\partial\Omega)\geq\delta_{0}/2)}$ (3.20) $\displaystyle\|\Gamma(\nu
t)g\|_{\mathcal{W}^{k,1}_{\rho}}$ $\displaystyle\leq
C_{0}\sum_{\alpha\in{\mathbb{Z}}}|\alpha^{k}g_{\alpha}|e^{\epsilon_{0}(\delta_{0}+\rho)|\alpha|}$
uniformly in the inviscid limit.
###### Proof.
In the scaled variables, the Stokes problem for near boundary vorticity
$\omega$ becomes
$\begin{cases}&(\partial_{\widetilde{t}}-\nu\Delta_{\widetilde{z},\widetilde{\theta}})\widetilde{\omega}=-\lambda^{2}\nu\widetilde{R}_{\Delta}\widetilde{\omega}\\\
&\nu(\partial_{\widetilde{z}}+|\partial_{\widetilde{\theta}}|)\widetilde{\omega}|_{\widetilde{z}=0}=-\nu\widetilde{B}\widetilde{\omega}\end{cases}$
where $\widetilde{R}_{\Delta}$ and $\widetilde{B}$ are defined as in (2.17)
and (2.27). Using the Duhamel formula, the solution with initial data $\omega_{0}$ can
be written as
$\widetilde{\omega}(\widetilde{t})=e^{\nu\widetilde{t}S}\widetilde{\omega}_{0}-\nu\lambda^{2}\int_{0}^{\widetilde{t}}e^{\nu(\widetilde{t}-\widetilde{t}^{\prime})S}\widetilde{R}_{\Delta}\widetilde{\omega}(\widetilde{t}^{\prime})\;d\widetilde{t}^{\prime}-\nu\int_{0}^{\widetilde{t}}\Gamma(\nu(\widetilde{t}-\widetilde{t}^{\prime}))\widetilde{B}\widetilde{\omega}(\widetilde{t}^{\prime})\;d\widetilde{t}^{\prime}.$
(3.21)
We shall bound the integral terms on the right in terms of the initial data.
Recall from (2.17) that
$\widetilde{R}_{\Delta}=\widetilde{m}(\widetilde{z},\widetilde{\theta})\partial^{2}_{\widetilde{\theta}}+\frac{\widetilde{\gamma}}{1+\lambda^{2}\widetilde{z}\widetilde{\gamma}}\partial_{\widetilde{z}}-\frac{\widetilde{z}\widetilde{\gamma}^{\prime}}{(1+\lambda^{2}\widetilde{z}\widetilde{\gamma})^{3}}\partial_{\widetilde{\theta}},\qquad\widetilde{m}(\widetilde{z},\widetilde{\theta})=-\frac{2\widetilde{z}\widetilde{\gamma}+\lambda^{2}(\widetilde{z}\widetilde{\gamma})^{2}}{(1+\lambda^{2}\widetilde{z}\widetilde{\gamma})^{2}}.$
We rewrite the operator in the following form
$\displaystyle\widetilde{R}_{\Delta}\widetilde{\omega}$
$\displaystyle=\partial^{2}_{\widetilde{\theta}}[\widetilde{m}\widetilde{\omega}]-\partial_{\widetilde{\theta}}\Big{[}2\partial_{\widetilde{\theta}}\widetilde{m}\widetilde{\omega}+\frac{\widetilde{z}\widetilde{\gamma}^{\prime}}{(1+\lambda^{2}\widetilde{z}\widetilde{\gamma})^{3}}\widetilde{\omega}\Big{]}+\partial_{\widetilde{z}}\Big{(}\frac{\widetilde{\gamma}}{1+\lambda^{2}\widetilde{z}\widetilde{\gamma}}\widetilde{\omega}\Big{)}$
$\displaystyle\quad+\Big{[}(\partial^{2}_{\widetilde{\theta}}\widetilde{m})-\partial_{\widetilde{z}}\Big{(}\frac{\widetilde{\gamma}}{1+\lambda^{2}\widetilde{z}\widetilde{\gamma}}\Big{)}+\partial_{\widetilde{\theta}}\Big{(}\frac{\widetilde{z}\widetilde{\gamma}^{\prime}}{(1+\lambda^{2}\widetilde{z}\widetilde{\gamma})^{3}}\Big{)}\Big{]}\widetilde{\omega}.$
We now bound each term appearing in the Duhamel formula (3.21). Thanks to the
analyticity of the boundary, the coefficients are bounded in
$\mathcal{W}^{k,\infty}_{\rho}$. Now, recall from (3.17) that the Green
function has two components:
$e^{\nu\widetilde{t}S}=e^{\nu\widetilde{t}S_{H}}+e^{\nu\widetilde{t}S_{R}}$
where the first component corresponds to the Green kernel $H_{\alpha}$ (i.e., the heat kernel) and the second to the stationary Stokes kernel $R_{\alpha}$.
We first claim that
$\Big{\|}\nu\lambda^{2}\int_{0}^{\widetilde{t}}e^{\nu(\widetilde{t}-\widetilde{t}^{\prime})S_{H}}\widetilde{R}_{\Delta}\widetilde{\omega}(\widetilde{t}^{\prime})\;d\widetilde{t}^{\prime}\Big{\|}_{\mathcal{W}^{k,1}_{\rho}}\lesssim\lambda^{2}\sup_{0\leq\widetilde{t}^{\prime}\leq\widetilde{t}}\|\omega\|_{\mathcal{W}^{k,1}_{\rho}}+\|\omega\|_{H^{k+1}(\lambda
d(x,\partial\Omega)\geq\delta_{0}+\delta)}.$ (3.22)
For the heat semigroup, we may integrate by parts in $\widetilde{\theta}$ or
$\widetilde{z}$. It follows directly from the representation of the Green
function that derivatives of the semigroup
$\nabla_{\widetilde{\theta},\widetilde{z}}e^{\nu\widetilde{t}S_{H}}$ are of
order $(\nu\widetilde{t})^{-1/2}$ of the semigroup itself. Therefore, the
first-order derivative term in $\widetilde{R}_{\Delta}$ can be treated
systematically as follows:
$\displaystyle\nu\lambda^{2}\Big{\|}\int_{0}^{\widetilde{t}}e^{\nu(\widetilde{t}-\widetilde{t}^{\prime})S_{H}}\nabla_{\widetilde{\theta},\widetilde{z}}h(\widetilde{t}^{\prime})\;d\widetilde{t}^{\prime}\Big{\|}_{\mathcal{W}^{k,1}_{\rho}}$
$\displaystyle\lesssim\nu\lambda^{2}\int_{0}^{\widetilde{t}}(\nu(\widetilde{t}-\widetilde{t}^{\prime}))^{-1/2}\|h(\widetilde{t}^{\prime})\|_{\mathcal{W}^{k,1}_{\rho}}\;d\widetilde{t}^{\prime}$
$\displaystyle\lesssim\sqrt{\nu}\lambda^{2}\sup_{0\leq\widetilde{t}^{\prime}\leq\widetilde{t}}\|h\|_{\mathcal{W}^{k,1}_{\rho}}.$
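Here the time integral is evaluated explicitly: under the standing assumption that $\widetilde{t}$ ranges over a bounded time interval,
$\nu\int_{0}^{\widetilde{t}}(\nu(\widetilde{t}-\widetilde{t}^{\prime}))^{-1/2}\;d\widetilde{t}^{\prime}=2\sqrt{\nu\widetilde{t}}\lesssim\sqrt{\nu}.$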
The zero-order term is treated similarly. The analysis does not apply directly to the second-order derivative term $\partial^{2}_{\widetilde{\theta}}[\widetilde{m}\widetilde{\omega}]$, due to the singularity in time $(\nu t)^{-1}$ that would arise if integration by parts were performed twice. However, in the Fourier variable $\alpha$, we compute
$\nu\lambda^{2}\int_{0}^{\widetilde{t}}(e^{\nu(\widetilde{t}-\widetilde{t}^{\prime})S_{H}}\partial^{2}_{\widetilde{\theta}}[\widetilde{m}\widetilde{\omega}])_{\alpha}(\widetilde{t}^{\prime})\;d\widetilde{t}^{\prime}=\nu\alpha^{2}\lambda^{2}\int_{0}^{\widetilde{t}}\int_{0}^{\infty}H_{\alpha}(t,\widetilde{y};\widetilde{z})[\widetilde{m}\widetilde{\omega}]_{\alpha}(\widetilde{t}^{\prime})\;d\widetilde{y}d\widetilde{t}^{\prime}.$
Observe that the Green kernel $H_{\alpha}$ contains the diffusion factor $e^{-\nu\alpha^{2}\widetilde{t}}$, for which we use
$\nu\alpha^{2}\lambda^{2}\int_{0}^{\widetilde{t}}e^{-\nu\alpha^{2}(\widetilde{t}-\widetilde{t}^{\prime})}d\widetilde{t}^{\prime}=\lambda^{2}\big{(}1-e^{-\nu\alpha^{2}\widetilde{t}}\big{)}\leq\lambda^{2},$
yielding the claim (3.22).
Next, we claim that
$\Big{\|}\nu\lambda^{2}\int_{0}^{\widetilde{t}}e^{\nu(\widetilde{t}-\widetilde{t}^{\prime})S_{R}}\widetilde{R}_{\Delta}\widetilde{\omega}(\widetilde{t}^{\prime})\;d\widetilde{t}^{\prime}\Big{\|}_{\mathcal{W}^{k,1}_{\rho}}\lesssim\nu\lambda^{2}\int_{0}^{\widetilde{t}}\|\partial_{\widetilde{\theta}}\omega(\widetilde{t})\|_{\mathcal{W}^{k,1}_{\rho}}\;d\widetilde{t}+\|\omega\|_{H^{k+1}(\lambda
d(x,\partial\Omega)\geq\delta_{0}+\delta)}.$ (3.23)
It suffices to check for the stationary Green kernel
$\mu_{f}e^{-\theta_{0}\mu_{f}(\widetilde{y}+\widetilde{z})}$ and for the
second-order derivative term
$\partial^{2}_{\widetilde{\theta}}[\widetilde{m}\widetilde{\omega}]$ appearing
in $\widetilde{R}_{\Delta}\widetilde{\omega}(\widetilde{t}^{\prime})$. For
this term, we make use of the fact that $\widetilde{m}$ vanishes at
$\widetilde{z}=0$; namely, we can write
$\widetilde{m}=\widetilde{z}\widetilde{m}_{1}$ and use
$\mu_{f}e^{-\theta_{0}\mu_{f}\widetilde{z}}\widetilde{z}\lesssim 1$, which
controls one spatial derivative, since $\mu_{f}=|\alpha|+\nu^{-1/2}$. This
proves the claim (3.23).
Finally, putting the previous bounds together into the Duhamel representation
(3.21), we have obtained
$\displaystyle\|\omega(\widetilde{t})\|_{\mathcal{W}^{k,1}_{\rho}}$
$\displaystyle\lesssim\|\omega_{0}\|_{\mathcal{W}^{k,1}_{\rho}}+\|\omega_{0}\|_{H^{k+1}(\lambda
d(x,\partial\Omega)\geq\delta_{0}+\delta)}$ (3.24)
$\displaystyle\quad+\lambda^{2}\sup_{0\leq\widetilde{t}^{\prime}\leq\widetilde{t}}\|\omega(\widetilde{t}^{\prime})\|_{\mathcal{W}^{k,1}_{\rho}}+\nu\lambda^{2}\int_{0}^{\widetilde{t}}\|\partial_{\widetilde{\theta}}\omega(\widetilde{t})\|_{\mathcal{W}^{k,1}_{\rho}}\;d\widetilde{t}$
$\displaystyle\quad+\|\omega\|_{H^{k+1}(\lambda
d(x,\partial\Omega)\geq\delta_{0}+\delta)}$
for any $k\geq 0$. The standard energy estimates for the heat equation (away
from the boundary) yield
$\|\omega\|_{H^{k+1}(\lambda
d(x,\partial\Omega)\geq\delta_{0}+\delta)}\lesssim\|\omega_{0}\|_{H^{k+1}(\lambda
d(x,\partial\Omega)\geq\delta_{0}/2)}.$ (3.25)
It remains to treat the third and fourth terms on the right-hand side of
(3.24). We bound these terms by iteration, introducing
$\displaystyle A_{0}(\beta):=$ $\displaystyle\quad\sup_{0\leq k\leq
4}\big{(}\sup_{0<\beta\widetilde{t}<\rho_{0}}\sup_{0<\rho<\rho_{0}-\beta\widetilde{t}}\Bigl{\\{}\|\omega(\widetilde{t})\|_{\mathcal{W}^{k,1}_{\rho}}+\|\partial_{\widetilde{\theta}}\omega(\widetilde{t})\|_{\mathcal{W}^{k,1}_{\rho}}(\rho_{0}-\rho-\beta\widetilde{t})^{\zeta}\Bigr{\\}}\big{)}$
for some $\zeta\in(0,1)$. We bound
$\displaystyle\nu\lambda^{2}\int_{0}^{\widetilde{t}}\|\partial_{\widetilde{\theta}}\omega(\widetilde{t})\|_{\mathcal{W}^{k,1}_{\rho}}\;d\widetilde{t}$
$\displaystyle\leq
C_{0}\nu\lambda^{2}A_{0}(\beta)\int_{0}^{\widetilde{t}}(\rho_{0}-\rho-\beta\widetilde{s})^{-\zeta}\;d\widetilde{s}$
$\displaystyle\leq C_{0}\nu\lambda^{2}\beta^{-1}A_{0}(\beta).$
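Here and in what follows, the factor $\beta^{-1}$ arises from the elementary change of variables $u=\rho_{0}-\rho-\beta\widetilde{s}$: since $\zeta\in(0,1)$, $\rho_{0}<1$, and $\rho<\rho_{0}-\beta\widetilde{t}$,
$\int_{0}^{\widetilde{t}}(\rho_{0}-\rho-\beta\widetilde{s})^{-\zeta}\;d\widetilde{s}=\beta^{-1}\int_{\rho_{0}-\rho-\beta\widetilde{t}}^{\rho_{0}-\rho}u^{-\zeta}\;du\leq\frac{\rho_{0}^{1-\zeta}}{(1-\zeta)\beta}\lesssim\beta^{-1}.$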
Next, we check the bound on
$\|\partial_{\widetilde{\theta}}\omega(\widetilde{t})\|_{\mathcal{W}^{k,1}_{\rho}}$.
We focus only on the worst term, as in (3.23). Note that
$\rho<\rho_{0}-\beta\widetilde{t}\leq\rho_{0}-\beta\widetilde{s}$. Thus, we
take $\rho^{\prime}=\frac{\rho+\rho_{0}-\beta\widetilde{s}}{2}$ and bound
$\displaystyle\Big{\|}$
$\displaystyle\nu\lambda^{2}\partial_{\widetilde{\theta}}\int_{0}^{\widetilde{t}}e^{\nu(\widetilde{t}-\widetilde{s})S_{R}}\widetilde{R}_{\Delta}\widetilde{\omega}(\widetilde{s})\;d\widetilde{s}\Big{\|}_{\mathcal{W}^{k,1}_{\rho}}$
$\displaystyle\lesssim\nu\lambda^{2}\int_{0}^{\widetilde{t}}\frac{1}{\rho^{\prime}-\rho}\|\partial_{\widetilde{\theta}}\omega(\widetilde{s})\|_{\mathcal{W}^{k,1}_{\rho^{\prime}}}\;d\widetilde{s}+\|\omega\|_{H^{k+1}(\lambda d(x,\partial\Omega)\geq\delta_{0}+\delta)}$ $\displaystyle\leq C_{0}\nu\lambda^{2}A_{0}(\beta)\int_{0}^{\widetilde{t}}(\rho_{0}-\rho-\beta\widetilde{s})^{-1-\zeta}\;d\widetilde{s}+\|\omega_{0}\|_{H^{k+1}(\lambda d(x,\partial\Omega)\geq\delta_{0}/2)}$ $\displaystyle\leq C_{0}\nu\lambda^{2}\beta^{-1}A_{0}(\beta)(\rho_{0}-\rho-\beta\widetilde{t})^{-\zeta}+\|\omega_{0}\|_{H^{k+1}(\lambda d(x,\partial\Omega)\geq\delta_{0}/2)}.$
This proves that
$A_{0}(\beta)\lesssim\|\omega_{0}\|_{\mathcal{W}^{k,1}_{\rho}}+\|\omega_{0}\|_{H^{k+1}(\lambda
d(x,\partial\Omega)\geq\delta_{0}/2)}+\Big{(}\lambda^{2}+\nu\lambda^{2}\beta^{-1}\Big{)}A_{0}(\beta).$
Taking $\lambda$ and $\nu$ small, the last term can be absorbed into the left-hand side, completing the bounds on $A_{0}(\beta)$ and hence on the $\mathcal{W}^{k,1}_{\rho}$ norm of the vorticity. Note that we do not require
$\beta$ to be sufficiently large (compared with the nonlinear iteration
provided in the next section). As a consequence, the proposition holds for any
given finite time. ∎
## 4 Nonlinear analysis
As already mentioned in the introduction, we construct the solutions to the
Navier-Stokes equation via the vorticity formulation
$\displaystyle\partial_{t}\omega+u\cdot\nabla\omega=\nu\Delta\omega$ (4.1)
together with the nonlocal boundary condition (1.8) and with initial data
$\omega_{|_{t=0}}=\omega_{0}$ satisfying
$\|\omega_{0}\|_{\mathcal{W}^{2,1}_{\rho}}+\|\omega_{0}\|_{H^{4}(\\{\lambda
d(x,\partial\Omega)\geq\delta_{0}/2\\})}<\infty.$ (4.2)
Introduce the smooth cutoff function $\phi^{b}$ as in (2.18), and write
$\omega=\omega^{b}+\omega^{i},\qquad\omega^{b}=\phi^{b}\omega,\qquad\omega^{i}=(1-\phi^{b})\omega.$
(4.3)
We also define the corresponding velocity field through the Biot-Savart law
$u=u^{b}+u^{i},\qquad u^{b}=\nabla^{\perp}\Delta^{-1}\omega^{b},\qquad
u^{i}=\nabla^{\perp}\Delta^{-1}\omega^{i}.$ (4.4)
This yields
$\left\\{\begin{aligned}
\partial_{t}{\omega}^{b}+u\cdot\nabla{\omega}^{b}&=\nu\Delta{\omega}^{b}\\\
\nu(\partial_{n}+DN)\omega^{b}_{|_{\partial\Omega}}&=[\partial_{n}\Delta^{-1}(u\cdot\nabla\omega)]_{|_{\partial\Omega}}\end{aligned}\right.$
(4.5)
for the vorticity near the boundary, and
$\left\\{\begin{aligned}
\partial_{t}{\omega}^{i}+u\cdot\nabla\omega^{i}&=\nu\Delta{\omega}^{i}\\\
{\omega}^{i}_{|{\partial\Omega}}&=0\end{aligned}\right.$ (4.6)
for the vorticity away from the boundary. Here, we note that the boundary
condition on $\omega^{i}$ follows directly from the definition (4.3), while
the boundary condition on $\omega^{b}$ is due to the fact that
$DN\omega^{i}=0$ by Lemma 2.2. We also note that the velocity field $u$ that
appears in both systems is the full velocity, namely the sum of $u^{b}$ and $u^{i}$, generated by $\omega^{b}$ and $\omega^{i}$, respectively.
We shall construct the near boundary vorticity solving (4.5) through the
semigroup of the Stokes problem. Indeed, we have the following standard
Duhamel integral representation, written in the scaled variables,
$\widetilde{\omega}(\widetilde{t})=e^{\nu\widetilde{t}S}\widetilde{\omega}_{0}+\int_{0}^{\widetilde{t}}e^{\nu(\widetilde{t}-\widetilde{t}^{\prime})S}f(\widetilde{t}^{\prime})\;d\widetilde{t}^{\prime}+\int_{0}^{\widetilde{t}}\Gamma(\nu(\widetilde{t}-\widetilde{t}^{\prime}))g(\widetilde{t}^{\prime})\;d\widetilde{t}^{\prime}$
(4.7)
where
$\displaystyle f(\widetilde{t})=-\lambda^{-2}u\cdot\nabla\omega^{b},\qquad
g(\widetilde{t})$
$\displaystyle=\lambda^{-1}[\partial_{n}\Delta^{-1}(u\cdot\nabla\omega)]_{|_{\partial\Omega}}.$
(4.8)
Here, $e^{\nu\widetilde{t}S}$ denotes the semigroup of the corresponding
Stokes problem, and $\Gamma(\nu\widetilde{t})$ its trace on the boundary;
see Section 3.6.
### 4.1 Global Sobolev-analytic norm
We now introduce Sobolev-analytic norms to control the global vorticity. Let us
fix positive numbers $\rho_{0},\delta_{0},$ and $\zeta\in(0,1)$. Introduce the
following family of nonlinear iterative norms for vorticity:
$\displaystyle A(\beta):=$ $\displaystyle\quad\sup_{0<\lambda^{2}\beta
t<\rho_{0}}\Big{[}\sup_{0<\rho<\rho_{0}-\beta\lambda^{2}t}\Bigl{\\{}\|\omega(t)\|_{\mathcal{W}^{1,1}_{\rho}}+\|\omega(t)\|_{\mathcal{W}^{2,1}_{\rho}}(\rho_{0}-\rho-\lambda^{2}\beta
t)^{\zeta}\Bigr{\\}}$ (4.9)
$\displaystyle\quad+\|\omega(t)\|_{H^{4}(\\{\lambda
d(x,\partial\Omega)\geq\delta_{0}/2\\})}\Big{]}$
for a parameter $\beta>0$, recalling
$\|\omega(t)\|_{\mathcal{W}^{k,1}_{\rho}}=\sum_{j+\ell\leq
k}\|\partial_{\widetilde{\theta}}^{j}(\widetilde{z}\partial_{\widetilde{z}})^{\ell}\omega(t)\|_{\mathcal{L}^{1}_{\rho}}.$
Note that by definition the norm $\|\cdot\|_{\mathcal{W}^{k,1}_{\rho}}$
controls the analyticity of the vorticity near the boundary, precisely in the
region $\lambda d(x,\partial\Omega)\leq\delta_{0}+\rho,$ while the $H^{4}$
norm controls the Sobolev regularity away from the boundary. We shall
show that the vorticity norm remains finite for sufficiently large $\beta$.
The weight $(\rho_{0}-\rho-\lambda^{2}\beta t)^{\zeta}$, with a small
$\zeta>0$, is standard in the literature to avoid time singularity when
recovering the loss of derivatives ([2, 6]). See also [14] for an alternative
framework to construct analytic solutions through generator functions.
Our goal is to prove the following key proposition.
###### Proposition 4.1.
For $\beta>0$, there holds
$A(\beta)\leq
C_{0}\|\omega_{0}\|_{\mathcal{W}^{2,1}_{\rho}}+C_{0}\|\omega_{0}\|_{H^{4}(\\{\lambda
d(x,\partial\Omega)\geq\delta_{0}/2\\})}+C_{0}\beta^{-1}A(\beta)^{2}.$
In Section 4.4, we will show that our main theorem, Theorem 1.1, follows
straightforwardly from Proposition 4.1.
### 4.2 Analytic bounds near the boundary
In this section, we bound the vorticity near the boundary $\lambda
d(x,\partial\Omega)\leq\delta_{0}+\rho_{0}$, on which by definition
$\omega=\omega^{b}$ and therefore the Duhamel representation (4.7) holds. Let
$\rho<\rho_{0}-\lambda^{2}\beta t$. Recalling the notation
$\widetilde{t}=\lambda^{2}t$ and using (4.7), we bound
$\|\widetilde{\omega}(\widetilde{t})\|_{\mathcal{W}_{\rho}^{k,1}}\leq\|e^{\nu\widetilde{t}S}\widetilde{\omega}_{0}\|_{\mathcal{W}_{\rho}^{k,1}}+\int_{0}^{\widetilde{t}}\|e^{\nu(\widetilde{t}-\widetilde{t}^{\prime})S}f(\widetilde{t}^{\prime})\|_{\mathcal{W}_{\rho}^{k,1}}\;d\widetilde{t}^{\prime}+\int_{0}^{\widetilde{t}}\|\Gamma(\nu(\widetilde{t}-\widetilde{t}^{\prime}))g(\widetilde{t}^{\prime})\|_{\mathcal{W}_{\rho}^{k,1}}\;d\widetilde{t}^{\prime}$
(4.10)
for $0<k\leq 4$ and for $f,g$ defined as in (4.8). Let us bound each term on
the right. Using the semigroup estimates, Proposition 3.5, we have
$\displaystyle\|e^{\nu\widetilde{t}S}\widetilde{\omega}_{0}\|_{\mathcal{W}^{k,1}_{\rho}}$
$\displaystyle\leq
C_{0}\|\widetilde{\omega}_{0}\|_{\mathcal{W}^{k,1}_{\rho}}+\|\widetilde{z}\widetilde{\omega}_{0}\|_{H^{k+1}(\widetilde{z}\geq\delta_{0}+\rho)}$
$\displaystyle\leq
C_{0}\|\widetilde{\omega}_{0}\|_{\mathcal{W}^{k,1}_{\rho}}+\|\omega_{0}\|_{H^{k+1}(\lambda
d(x,\partial\Omega)\geq\delta_{0}+\rho)}.$
While for the second integral term in (4.10), we have
$\int_{0}^{\widetilde{t}}\|e^{\nu(\widetilde{t}-\widetilde{t}^{\prime})S}f(\widetilde{t}^{\prime})\|_{\mathcal{W}_{\rho}^{k,1}}\;d\widetilde{t}^{\prime}\lesssim\int_{0}^{\widetilde{t}}\Big{[}\|f(\widetilde{t}^{\prime})\|_{\mathcal{W}_{\rho}^{k,1}}+\|\widetilde{z}f(\widetilde{t}^{\prime})\|_{H^{k+1}(\widetilde{z}\geq\delta_{0}+\rho)}\Big{]}\;d\widetilde{t}^{\prime}.$
Then we use (4.8), replacing $f(\widetilde{t})$ in the above formula by
$-\lambda^{-2}u\cdot\nabla\omega^{b}.$ First, using the standard elliptic
theory for $k=0,1,2$, we bound
$\displaystyle\|\widetilde{z}(u\cdot\nabla\omega^{b})(\widetilde{t}^{\prime})\|_{H^{k+1}(\widetilde{z}\geq\delta_{0}+\rho)}$
$\displaystyle\lesssim\|\omega\|_{H^{4}(\\{\lambda
d(x,\partial\Omega)\geq\delta_{0}/2\\})}^{2}\lesssim A(\beta)^{2}.$
Next, for the analytic norm, with the bilinear estimates from Lemma 3.4, we
have:
$\displaystyle\|u\cdot\nabla\omega^{b}\|_{\mathcal{L}^{1}_{\rho}}$
$\displaystyle\leq
C\Big{(}\|\omega\|_{\mathcal{L}^{1}_{\rho}}+\|\omega\|_{H^{1}(\\{\lambda
d(x,\partial\Omega)\geq\delta_{0}\\})}\Big{)}\|\partial_{\widetilde{\theta}}\omega^{b}\|_{\mathcal{L}^{1}_{\rho}}$
$\displaystyle\quad+C\Big{(}\|\omega\|_{\mathcal{L}^{1}_{\rho}}+\|\partial_{\widetilde{\theta}}\omega\|_{\mathcal{L}^{1}_{\rho}}+\|\omega\|_{H^{1}(\\{\lambda
d(x,\partial\Omega)\geq\delta_{0}\\})}\Big{)}\|\widetilde{z}\partial_{\widetilde{z}}\omega^{b}\|_{\mathcal{L}^{1}_{\rho}}$
$\displaystyle\lesssim\|\omega\|_{\mathcal{W}^{1,1}_{\rho}}^{2}+\|\omega\|_{H^{1}(\\{\lambda
d(x,\partial\Omega)\geq\delta_{0}\\})}^{2}$ $\displaystyle\lesssim
A(\beta)^{2}$
$\displaystyle\|u\cdot\nabla\omega^{b}\|_{\mathcal{W}^{1,1}_{\rho}}$
$\displaystyle\lesssim\|\omega\|_{\mathcal{W}^{1,1}_{\rho}}\|\omega\|_{\mathcal{W}^{2,1}_{\rho}}+\|\omega\|_{H^{2}(\\{\lambda
d(x,\partial\Omega)\geq\delta_{0}\\})}^{2}$ $\displaystyle\lesssim
A(\beta)^{2}(\rho_{0}-\rho-\beta\widetilde{t})^{-\zeta}.$
Therefore,
$\displaystyle\int_{0}^{\widetilde{t}}\|u\cdot\nabla\omega^{b}\|_{\mathcal{W}^{1,1}_{\rho}}\;d\widetilde{s}$
$\displaystyle\leq
C_{0}A(\beta)^{2}\int_{0}^{\widetilde{t}}(\rho_{0}-\rho-\beta\widetilde{s})^{-\zeta}\;d\widetilde{s}$
$\displaystyle\leq C_{0}\beta^{-1}A(\beta)^{2}.$
Similarly, we consider the case when $k=2$. Noting $\rho<\rho_{0}-\beta\widetilde{t}\leq\rho_{0}-\beta\widetilde{s}$, we take $\rho^{\prime}=\frac{\rho+\rho_{0}-\beta\widetilde{s}}{2}$ and compute
$\displaystyle\int_{0}^{\widetilde{t}}\|u\cdot\nabla\omega^{b}\|_{\mathcal{W}^{2,1}_{\rho}}\;d\widetilde{s}$
$\displaystyle\leq C_{0}\int_{0}^{\widetilde{t}}\frac{1}{\rho^{\prime}-\rho}\|u\cdot\nabla\omega^{b}\|_{\mathcal{W}^{1,1}_{\rho^{\prime}}}\;d\widetilde{s}$ $\displaystyle\leq C_{0}A(\beta)^{2}\int_{0}^{\widetilde{t}}(\rho_{0}-\rho-\beta\widetilde{s})^{-1-\zeta}\;d\widetilde{s}$ $\displaystyle\leq C_{0}\beta^{-1}A(\beta)^{2}(\rho_{0}-\rho-\beta\widetilde{t})^{-\zeta}.$
Finally, we treat the last integral term in (4.10). Precisely, we will show
that, for $k\leq 2$:
$\displaystyle\|\Gamma(\nu(\widetilde{t}-\widetilde{t}^{\prime}))g(\widetilde{t}^{\prime})\|_{\mathcal{W}_{\rho}^{k,1}}\leq$
$\displaystyle\quad
C_{0}\|u\cdot\nabla{\omega}^{b}(\widetilde{t}^{\prime})\|_{\mathcal{W}^{k,1}_{\rho}}+C_{0}\|{\omega}(\widetilde{t}^{\prime})\|^{2}_{H^{4}(\lambda
d(x,\partial\Omega)\geq\delta_{0}/2)}$ (4.11)
$\displaystyle\quad+C_{0}\|{\omega}(\widetilde{t}^{\prime})\|_{\mathcal{W}^{k,1}_{\rho}}\|{\omega}(\widetilde{t}^{\prime})\|_{H^{4}(\lambda
d(x,\partial\Omega)\geq\delta_{0}/2)}$
which would then imply
$\int_{0}^{\widetilde{t}}\|\Gamma(\nu(\widetilde{t}-\widetilde{t}^{\prime}))g(\widetilde{t}^{\prime})\|_{\mathcal{W}_{\rho}^{2,1}}d\widetilde{t}^{\prime}\leq
C_{0}\left(A(\beta)^{2}+\beta^{-1}A(\beta)^{2}(\rho_{0}-\rho-\beta\widetilde{t})^{-\zeta}\right).$
Here the constant $C_{0}$ may change from line to line. It remains to give the
proof for the inequality (4.11). First, by Proposition 3.5, we have
$\|\Gamma(\nu(\widetilde{t}-\widetilde{t}^{\prime}))g(\widetilde{t}^{\prime})\|_{\mathcal{W}_{\rho}^{k,1}}\leq
C_{0}\sum_{\alpha}|\alpha|^{k}|g_{\alpha}|e^{\varepsilon_{0}(\delta_{0}+\rho)|\alpha|},$
where $g_{\alpha}$ is given by
$g_{\alpha}=\lambda^{-1}\partial_{n}\Delta^{-1}(u\cdot\nabla{\omega})_{\alpha}|_{\partial\Omega}.$
Let $\Phi=\Delta^{-1}(u\cdot\nabla{\omega})$. By definition, $\Phi$ solves
$\begin{cases}&\Delta\Phi=u\cdot\nabla{\omega},\qquad x\in\Omega\\\
&\Phi|_{\partial\Omega}=0.\end{cases}$
In the rescaled geodesic coordinates, we have
$g_{\alpha}=\partial_{\widetilde{z}}\Phi_{\alpha}(0)$. Let
$\Phi^{b}=\Phi(x)\phi^{b}(x)$; then we have
$\begin{cases}&\Delta\Phi^{b}=2\nabla_{x}\phi^{b}\cdot\nabla_{x}\Phi+\Delta\phi^{b}\Phi+\phi^{b}u\cdot\nabla{\omega}\\\
&\Phi^{b}|_{z=0}=0.\end{cases}$
By a direct calculation, we have
$\displaystyle e^{\varepsilon_{0}(\delta_{0}+\rho)|\alpha|}g_{\alpha}(\widetilde{t}^{\prime})=e^{\varepsilon_{0}(\delta_{0}+\rho)|\alpha|}\partial_{\widetilde{z}}\Phi_{\alpha}^{b}|_{\widetilde{z}=0}$
$\displaystyle=\int_{0}^{\infty}e^{|\alpha|(\varepsilon_{0}(\delta_{0}+\rho)-\widetilde{z})}\left\\{\lambda^{2}\left(\widetilde{R}_{\Delta}\widetilde{\Phi}^{b}\right)_{\alpha}(\widetilde{z})-\lambda^{-2}\left(2\nabla_{x}\phi^{b}\cdot\nabla_{x}\Phi-\Phi\Delta\phi^{b}-\phi^{b}u\cdot\nabla{\omega}\right)_{\alpha}\right\\}d\widetilde{z}$
$\displaystyle=I_{1,\alpha}+I_{2,\alpha}+I_{3,\alpha}+I_{4,\alpha}.$
Treating $I_{1,\alpha}$. As in the treatment of $\widetilde{R}_{\Delta}$ in the proof of Proposition 3.6, we have
$\displaystyle|I_{1,\alpha}|\leq$ $\displaystyle
C_{0}|\alpha|^{2}\lambda^{2}\int_{0}^{\infty}e^{|\alpha|(\varepsilon_{0}(\delta_{0}+\rho)-\widetilde{z})}|\widetilde{z}\Phi^{b}_{\alpha}(\widetilde{z})|d\widetilde{z}$
$\displaystyle+C_{0}\lambda^{2}\int_{0}^{\infty}e^{|\alpha|(\varepsilon_{0}(\delta_{0}+\rho)-\widetilde{z})}\left(|\alpha||\Phi^{b}_{\alpha}(\widetilde{z})|+|\partial_{\widetilde{z}}\Phi_{\alpha}^{b}|\right)d\widetilde{z}.$
First, we use the inequality
$\widetilde{z}|\alpha|e^{-|\alpha|\widetilde{z}}\leq
e^{-\frac{1}{2}|\alpha|\widetilde{z}}$ to get
$\displaystyle|I_{1,\alpha}|$ $\displaystyle\leq
C_{0}\lambda^{2}\int_{0}^{\infty}e^{|\alpha|\left(\varepsilon_{0}(\delta_{0}+\rho)-\frac{1}{2}\widetilde{z}\right)}\left(|\alpha||\Phi^{b}_{\alpha}(\widetilde{z})|+|\partial_{\widetilde{z}}\Phi_{\alpha}^{b}|\right)d\widetilde{z}$
$\displaystyle\leq
C_{0}\lambda^{2}\int_{0}^{\delta_{0}+\rho}e^{|\alpha|\varepsilon_{0}(\delta_{0}+\rho-\widetilde{z})}\left(|\alpha||\Phi^{b}_{\alpha}(\widetilde{z})|+|\partial_{\widetilde{z}}\Phi_{\alpha}^{b}|\right)d\widetilde{z}$
$\displaystyle\quad+C_{0}\lambda^{2}\int_{\delta_{0}+\rho}^{\infty}\left(|\alpha||\Phi^{b}_{\alpha}(\widetilde{z})|+|\partial_{\widetilde{z}}\Phi_{\alpha}^{b}|\right)d\widetilde{z}.$
For the first term, we use the $L^{1}_{\mu}$ elliptic estimate for the
velocity (since the kernel $K_{\alpha}\in L^{1}$), to get
$\displaystyle\sum_{\alpha}|\alpha|^{k}\int_{0}^{\delta_{0}+\rho}e^{|\alpha|\varepsilon_{0}(\delta_{0}+\rho-\widetilde{z})}\left(|\alpha||\Phi^{b}_{\alpha}(\widetilde{z})|+|\partial_{\widetilde{z}}\Phi_{\alpha}^{b}|\right)d\widetilde{z}$
(4.12) $\displaystyle\leq
C\|\phi^{b}u\cdot\nabla{\omega}\|_{\mathcal{W}_{\rho}^{k,1}}+C\|\Phi\|_{H^{k+2}(\lambda
d(x,\partial\Omega)\geq\delta_{0}+\rho_{0})}.$
Now we have
$\displaystyle\|\phi^{b}u\cdot\nabla{\omega}\|_{\mathcal{W}^{k,1}_{\rho}}$
$\displaystyle=\|u\cdot\nabla{\omega}^{b}-(u\cdot\nabla\phi^{b}){\omega}\|_{\mathcal{W}^{k,1}_{\rho}}$
(4.13) $\displaystyle\leq
C\left(\|u\cdot\nabla{\omega}^{b}\|_{\mathcal{W}^{k,1}_{\rho}}+\|u{\omega}\|_{H^{k}(\lambda
d(x,\partial\Omega)\geq\delta_{0})}\right)$ $\displaystyle\leq
C\|u\cdot\nabla{\omega}^{b}\|_{\mathcal{W}_{\rho}^{k,1}}+C\|{\omega}\|_{H^{4}(\lambda
d(x,\partial\Omega)\geq\delta_{0}/2)}\left(\|{\omega}\|_{H^{4}(\lambda
d(x,\partial\Omega)\geq\delta_{0}/2)}+\|{\omega}\|_{\mathcal{W}^{k,1}_{\rho}}\right).$
By standard elliptic estimate, we have
$\displaystyle\|\Phi\|_{H^{k+2}(\lambda
d(x,\partial\Omega)\geq\delta_{0}+\rho_{0})}\leq$ $\displaystyle
C\|\phi^{b}u\cdot\nabla{\omega}\|_{\mathcal{W}_{\rho}^{k,1}}+C\|u\cdot\nabla{\omega}\|_{H^{k}(\lambda
d(x,\partial\Omega)\geq\delta_{0})}$ (4.14) $\displaystyle\leq$ $\displaystyle
C\|u\cdot\nabla{\omega}^{b}\|_{\mathcal{W}_{\rho}^{k,1}}+\|{\omega}\|^{2}_{H^{4}(\lambda
d(x,\partial\Omega)\geq\delta_{0}/2)}+\|{\omega}\|_{H^{4}(\lambda
d(x,\partial\Omega)\geq\delta_{0}/2)}\|{\omega}\|_{\mathcal{W}^{k,1}_{\rho}}.$
Combining (4.12), (4.13), and (4.14), we have
$\sum_{\alpha}|\alpha|^{k}|I_{1,\alpha}|\leq
C_{0}\left(\|u\cdot\nabla{\omega}^{b}(\widetilde{t}^{\prime})\|_{\mathcal{W}^{k,1}_{\rho}}+\|{\omega}\|^{2}_{H^{4}(\lambda
d(x,\partial\Omega)\geq\delta_{0}/2)}+\|{\omega}\|_{H^{4}(\lambda
d(x,\partial\Omega)\geq\delta_{0}/2)}\|{\omega}\|_{\mathcal{W}^{k,1}_{\rho}}\right)$
as claimed in (4.11). The proof for $I_{1,\alpha}$ is complete.
Treating $I_{2,\alpha}$. For $I_{2,\alpha}$, we note that the domain of
integration is $\widetilde{z}\geq\delta_{0}+\rho_{0}>\delta_{0}+\rho$, so that
$|\alpha|^{k}e^{|\alpha|(\varepsilon_{0}(\delta_{0}+\rho)-\widetilde{z})}\leq
C.$
Thus we have
$\displaystyle\sum_{\alpha}|\alpha|^{k}|I_{2,\alpha}|$ $\displaystyle\leq
C\sum_{\alpha}\|\nabla_{x}\Phi^{b}_{\alpha}\|_{L^{1}(\widetilde{z}\geq\delta_{0}+\rho_{0})}\leq
C\|d(x,\partial\Omega)\nabla_{x}\Phi\|_{H^{1}(\lambda
d(x,\partial\Omega)\geq\delta_{0}+\rho_{0})}$ $\displaystyle\leq
C\|d(x,\partial\Omega)\Phi\|_{H^{2}(\lambda
d(x,\partial\Omega)\geq\delta_{0}+\rho_{0})}$ $\displaystyle\leq
C\|\phi^{b}u\cdot\nabla{\omega}\|_{\mathcal{W}_{\rho}^{k,1}}+C\|u\cdot\nabla{\omega}\|_{L^{2}(\lambda
d(x,\partial\Omega)\geq\delta_{0})},$
which is bounded by the right hand side of (4.11). The proof for
$I_{2,\alpha}$ is complete.
Treating $I_{3,\alpha}$. Similarly, for $I_{3,\alpha}$, we get
$\displaystyle\sum_{\alpha}|\alpha|^{k}|I_{3,\alpha}|$ $\displaystyle\leq
C\|d(x,\partial\Omega)\Phi\|_{H^{1}(\lambda
d(x,\partial\Omega)\geq\delta_{0}+\rho_{0})}$ $\displaystyle\leq
C\|\phi^{b}u\cdot\nabla{\omega}\|_{\mathcal{W}^{k,1}_{\rho}}+C\|u\cdot\nabla{\omega}\|_{L^{2}(\lambda
d(x,\partial\Omega)\geq\delta_{0})}.$
This is also bounded by the right hand side of (4.11). The proof for
$I_{3,\alpha}$ is complete.
Treating $I_{4,\alpha}$. For $I_{4,\alpha}$ we have
$\sum_{\alpha}|\alpha|^{k}|I_{4,\alpha}|\leq\|\phi^{b}u\cdot\nabla{\omega}\|_{\mathcal{W}^{k,1}_{\rho}}.$
We rewrite
$\phi^{b}u\cdot\nabla{\omega}=u\cdot\nabla(\phi^{b}{\omega})-u\cdot\nabla\phi^{b}{\omega}=u\cdot\nabla{\omega}^{b}-(u\cdot\nabla\phi^{b}){\omega}$.
Hence we obtain
$\displaystyle\sum_{\alpha}|\alpha|^{k}|I_{4,\alpha}|$ $\displaystyle\leq
C\left(\|u\cdot\nabla{\omega}^{b}\|_{\mathcal{W}^{k,1}_{\rho}}+\|u{\omega}\|_{H^{k+1}(\lambda
d(x,\partial\Omega)\geq\delta_{0}+\rho_{0})}\right)$ $\displaystyle\leq
C\|u\cdot\nabla{\omega}^{b}\|_{\mathcal{W}^{k,1}_{\rho}}+C\|{\omega}\|^{2}_{H^{4}(\lambda
d(x,\partial\Omega)\geq\delta_{0}/2)}+C\|{\omega}\|_{\mathcal{W}^{k,1}_{\rho}}\|{\omega}\|_{H^{4}(\lambda
d(x,\partial\Omega)\geq\delta_{0}/2)}.$
This completes the bound for $I_{4,\alpha}$.
Combining all of the above, we obtain bounds on $A(\beta)$ in the analytic
norm.
### 4.3 Sobolev bounds away from the boundary
Finally, we bound the vorticity away from the boundary. Recall that
$\left\{\begin{aligned}\partial_{t}{\omega}^{i}+u\cdot\nabla\omega^{i}&=\nu\Delta{\omega}^{i}\\ {\omega}^{i}|_{\partial\Omega}&=0\end{aligned}\right.$ (4.15)
Note that by definition, $\omega^{i}$ vanishes in the region where $\lambda
d(x,\partial\Omega)\leq\delta_{0}$. We perform standard energy estimates for
$k\geq 3$, so that the standard Sobolev embedding applies, yielding
$\frac{d}{dt}\|\omega^{i}\|_{H^{k}}^{2}+\nu\|\nabla\omega^{i}\|_{H^{k}}^{2}\lesssim\|u\|_{H^{k}}\|\omega^{i}\|_{H^{k}}^{2}\lesssim\|\omega^{i}\|^{3}_{H^{k}}+\|u^{b}\|^{3}_{H^{k}(\lambda d(x,\partial\Omega)\geq\delta_{0})}.$
Using the elliptic theory for the Biot-Savart law
$u^{b}=\nabla^{\perp}\Delta^{-1}\omega^{b}$, we have
$\|u^{b}\|_{H^{k}(\lambda
d(x,\partial\Omega)\geq\delta_{0})}\lesssim\|\omega^{b}\|_{\mathcal{W}^{k,1}_{\rho}}+\|\omega^{b}\|_{H^{k}(\lambda
d(x,\partial\Omega)\geq\delta_{0})}.$
This proves that
$\frac{d}{dt}\|\omega^{i}\|_{H^{k}}^{2}\lesssim\|\omega^{b}\|^{3}_{\mathcal{W}^{k,1}_{\rho}}+\|\omega^{b}\|^{3}_{H^{k}(\lambda d(x,\partial\Omega)\geq\delta_{0})}.$
Integrating in time and recalling the iterative norm $A(\beta)$, we arrive at
$\|\omega^{i}\|_{H^{4}}^{2}\lesssim\|\omega_{0}\|_{H^{4}}^{2}+TA(\beta)^{2}.$
This bounds the Sobolev norm in $A(\beta)$, completing the proof of
Proposition 4.1.
### 4.4 Proof of Theorem 1.1
Finally, we show that our main theorem, Theorem 1.1, follows from Proposition
4.1. Indeed, taking $\beta$ sufficiently large in Proposition 4.1, we obtain
uniform bounds on the iterative norm (4.9) in terms of the initial data, which
gives the local solution in $\mathcal{W}^{1,1}_{\rho}+H^{4}(\{\lambda
d(x,\partial\Omega)\geq\delta_{0}/2\})$ for $t\in[0,T]$, with
$T=\beta^{-1}\lambda^{-2}\rho_{0}$. In particular, by definition of the
iterative norm $A(\beta)$, we have
$\|\omega(t)\|_{\mathcal{W}^{1,1}_{\rho}}+\|\omega(t)\|_{H^{4}(\{\lambda d(x,\partial\Omega)\geq\delta_{0}/2\})}\leq C_{0}$
for $t\in[0,T]$. To prove the stated bound (1.9) on vorticity, we note that
$\|\omega\|_{L^{\infty}(\partial\Omega)}\lesssim\|\partial_{\widetilde{z}}\omega\|_{\mathcal{L}_{\rho}^{1}}+\|\omega(t)\|_{H^{2}(\{\lambda d(x,\partial\Omega)\geq\delta_{0}/2\})}.$
It thus suffices to prove that
$\|\partial_{\widetilde{z}}\omega\|_{\mathcal{L}_{\rho}^{1}}\lesssim\nu^{-1/2}$.
Indeed, similar to (4.10), we bound
$\|\partial_{\widetilde{z}}\omega(\widetilde{t})\|_{\mathcal{L}_{\rho}^{1}}\leq\|\partial_{\widetilde{z}}e^{\nu\widetilde{t}S}\omega_{0}\|_{\mathcal{L}_{\rho}^{1}}+\int_{0}^{\widetilde{t}}\|\partial_{\widetilde{z}}e^{\nu(\widetilde{t}-\widetilde{t}^{\prime})S}f(\widetilde{t}^{\prime})\|_{\mathcal{L}_{\rho}^{1}}\;d\widetilde{t}^{\prime}+\int_{0}^{\widetilde{t}}\|\partial_{\widetilde{z}}\Gamma(\nu(\widetilde{t}-\widetilde{t}^{\prime}))g(\widetilde{t}^{\prime})\|_{\mathcal{L}_{\rho}^{1}}\;d\widetilde{t}^{\prime}$
for the same $f,g$ defined as in (4.8). It follows directly from the
construction, see Section 3.6, that the $\widetilde{z}$-derivative of the
semigroup $\partial_{\widetilde{z}}e^{\nu\widetilde{t}S}$ satisfies the same
bounds as does $e^{\nu\widetilde{t}S}$, up to an extra factor of
$(\nu\widetilde{t})^{-1/2}$ or $|\partial_{\widetilde{\theta}}|+\nu^{-1/2}$.
Therefore, using the previous bounds on $f(\widetilde{t})$, we have
$\int_{0}^{\widetilde{t}}\|\partial_{\widetilde{z}}e^{\nu(\widetilde{t}-\widetilde{t}^{\prime})S}f(\widetilde{t}^{\prime})\|_{\mathcal{L}_{\rho}^{1}}\;d\widetilde{t}^{\prime}\lesssim\int_{0}^{\widetilde{t}}(\nu(\widetilde{t}-\widetilde{t}^{\prime}))^{-1/2}\Big[\|f(\widetilde{t}^{\prime})\|_{\mathcal{W}_{\rho}^{1,1}}+\|\widetilde{z}f(\widetilde{t}^{\prime})\|_{H^{1}(\widetilde{z}\geq\delta_{0}+\rho)}\Big]\;d\widetilde{t}^{\prime}\lesssim\int_{0}^{\widetilde{t}}(\nu(\widetilde{t}-\widetilde{t}^{\prime}))^{-1/2}\;d\widetilde{t}^{\prime}\lesssim\nu^{-1/2}.$
Other terms are estimated similarly, giving
$\|\partial_{\widetilde{z}}\omega\|_{\mathcal{L}_{\rho}^{1}}\lesssim\nu^{-1/2}$
as claimed.
# Derived electron densities from linear polarization observations of the
visible-light corona during the 14 December 2020 total solar eclipse
Liam T. Edwards Department of Physics
Aberystwyth University
Ceredigion, Cymru, SY23 3BZ Kaine A. Bunting Department of Physics
Aberystwyth University
Ceredigion, Cymru, SY23 3BZ Brad Ramsey Department of Physics
Aberystwyth University
Ceredigion, Cymru, SY23 3BZ Matthew Gunn Department of Physics
Aberystwyth University
Ceredigion, Cymru, SY23 3BZ Tomos Fearn Department of Computer Science
Aberystwyth University
Ceredigion, Cymru, SY23 3DB Thomas Knight Department of Physics
Aberystwyth University
Ceredigion, Cymru, SY23 3BZ Gabriel Domingo Muro Department of Physics
Aberystwyth University
Ceredigion, Cymru, SY23 3BZ Space Radiation Lab,
California Institute of Technology,
Pasadena, CA, USA 91125 Huw Morgan Department of Physics
Aberystwyth University
Ceredigion, Cymru, SY23 3BZ
###### Abstract
A new instrument was designed to take visible-light (VL) polarized brightness
($pB$) observations of the solar corona during the 14 December 2020 total
solar eclipse. The instrument, called the Coronal Imaging Polarizer (CIP),
consisted of a 16 MP CMOS detector, a linear polarizer housed within a
piezoelectric rotation mount, and an f-5.6, 200 mm DSLR lens. Observations
were successfully obtained, despite poor weather conditions, for five
different exposure times (0.001 s, 0.01 s, 0.1 s, 1 s, and 3 s) at six
different orientation angles of the linear polarizer (0∘, 30∘, 60∘, 90∘, 120∘,
and 150∘). The images were manually aligned using the drift of background
stars in the sky and images of different exposure times were combined using a
simple signal-to-noise ratio cut. The polarization and brightness of the local
sky were also estimated and the observations subsequently corrected. The
$pB$ of the K-corona was determined using least squares fitting and
radiometric calibration was done relative to the Mauna Loa Solar Observatory
(MLSO) K-Cor $pB$ observations from the day of the eclipse. The $pB$ data was
then inverted to acquire the coronal electron density, $n_{e}$, for an
equatorial streamer and a polar coronal hole, which agreed very well with
previous studies. The effect of changing the number of polarizer angles used
to compute the $pB$ is also discussed and it is found that the results vary by
up to $\sim$ 13% when using all six polarizer angles versus only a select
three angles.
Eclipse Observations; Polarization, Optical; Instrumentation and Data
Management; Spectrum, Visible
## 1 Introduction
A total solar eclipse (TSE) provides a unique opportunity to observe the
visible-light (VL) corona down to the solar limb for a few minutes during
totality, allowing continuous observations from the limb out to several solar
radii. Not only does the Moon block out the solar disk, but it also lowers the
local sky brightness along the eclipse path which makes it perfect for
observing the significantly fainter corona (Lang, 2010). This lower coronal
region ($<$ 2 R⊙) is the origin of the solar wind and is therefore crucial to
observe in order to better understand how it is formed and accelerates to
supersonic speeds (Habbal, 2020; Strong et al., 2017; McComas et al., 2007).
TSEs have been used to obtain valuable information about a variety of solar
phenomena, including coronal streamers (Pasachoff & Rušin, 2022), coronal mass
ejections (CMEs) (Boe et al. (2021c); Filippov et al. (2020); Koutchmy et al.
(2004)), coronal holes (Pasachoff & Rušin, 2022), plasma flows (Sheeley &
Wang, 2014; De Pontieu et al., 2009), prominences (Jejčič et al., 2014), and
coronal jets (Hanaoka et al., 2018). Physical properties of the coronal plasma
have also been studied extensively during TSEs, typically, temperature,
density, and velocity (Del Zanna et al., 2023; Muro et al., 2023; Bemporad,
2020; Reginald et al., 2014; Habbal et al., 2011, 2010; Reginald et al., 2009;
Habbal et al., 2007).
A coronagraph is required in order to observe the VL corona outside of a total
solar eclipse. First designed by Bernard Lyot in the 1930s (Lyot & Marshall,
1933), it consists of a telescope with an opaque disk that is positioned to
block out the bright disk of the Sun. This is crucial because the photosphere
is $\sim 10^{6}$ times brighter than the corona itself. In fact, the Earth’s sky
is also much brighter than the corona (of the order of $\sim 10^{5}$ times brighter
at 20 R⊙); therefore, any ground-based coronagraph, such as the COronal Solar
Magnetism Observatory’s (COSMO) K-Coronagraph (K-Cor; Hou et al. (2013)), is
affected by the Earth’s own atmosphere. To overcome this issue, several space-
based coronagraphs have been launched in recent decades, for example, the
Large Angle Spectrometric Coronagraph (LASCO; Brueckner et al. (1995)) onboard
the Solar and Heliospheric Observatory (SOHO; Domingo et al. (1995)) and the
Sun-Earth Connection Coronal and Heliospheric Investigation’s (SECCHI; Howard
et al. (2008)) COR1/2 instruments onboard the Solar Terrestrial Relations
Observatory (STEREO; Kaiser et al. (2008)). However, despite the unprecedented
access to the corona they have given the field of solar physics, there is
still a fundamental issue for any type of coronagraph to overcome which is the
stray light resulting from the diffraction of incoming light at the occulter
edge. In order to mitigate this, most occulters will block not only the solar
disk but also a portion of the very lower solar corona - out to around 1.5 R⊙
for internally occulted coronagraphs (Verroi et al., 2008). One way to
circumvent this issue is to increase the distance between the detector and the
occulter, which is possible to achieve in space by the use of formation flying
cube satellites (e.g., ASPIICS; Lamy et al. (2010)), however, these type of
instruments are still in their infancy. As a result, observations of the inner
corona during a TSE are unmatched in the VL regime.
The VL corona is composed of light from several different sources, primarily
from the scattering of photospheric light by free electrons in the corona and
dust in the interplanetary plane, termed the K- and F-corona, respectively. In
the case of the K-corona, the scattering process - known as Thomson scattering
- produces a strongly tangentially polarized component to the total VL
brightness (for an in-depth overview of this mechanism see Inhester (2015)),
whereas the F-corona is considered to be unpolarized below $\sim$ 3 R⊙ (Morgan
& Habbal, 2007). As a result, observing the VL corona below this height with a
linear polarizer during a TSE can be considered to be a measurement of the
K-coronal brightness only. There are also other types of brightness
contributions to the total coronal brightness, namely the E-corona (emission)
which consists of spectral line emission from highly ionized atoms, but these
are considered to be negligible for the purposes of this work. The K-coronal
brightness component, $B_{K}$, represents the structure and amount of coronal
plasma irrespective of its temperature, unlike EUV or X-ray observations. The
K-coronal component of the polarized brightness ($pB_{K}$) gives the electron
density ($n_{e}$), which is calculated using an inversion method first
developed by van de Hulst (1950) and later improved upon by Hayes et al.
(2001) and Quémerais & Lamy (2002). Equation 1 describes the relationship
between the polarized brightness of the K-corona and the electron density
which is used to acquire the electron density from the TSE images:
$pB_{k}\propto\int_{LOS}n_{e}(r)\cdot G(s,\rho)\,ds$ (1)
where $G$ is a geometrical weighting function, $\rho$ is the distance between
the Sun and the intercept between the line-of-sight (LOS) and the plane-of-sky
(POS), and $s$ is the distance between the intercept point on the POS and an
arbitrary point in the corona along the LOS (see Figure 1 in Quémerais & Lamy
(2002) for more detail). This is a well-established technique and several
studies have used this inversion of $pB$ to obtain coronal electron densities
(e.g. Liang et al. (2022); Bemporad (2020); Skomorovsky et al. (2012); Hayes
et al. (2001); Raju & Abhyankar (1986); Saito et al. (1977)). Several studies
have also been conducted attempting to separate the K- and F-components of the
solar corona (e.g. Boe et al. (2021a); Fainshtein (2009); Morgan & Habbal
(2007); Dürst (1982); Calbert & Beard (1972)). Most of the previous studies
involving TSE observations require some level of image processing as a result
of several factors - primarily the sharp decrease in brightness with radial
height from the Sun. For example, since the Moon and the Sun move relative to
each other with respect to the background stars, all TSE images need to be
coaligned and there are a number of different methods which can be used to do
this. One of the most common of these methods over the past decade or so has
been to use a modified phase correlation technique by Druckmüller (2009),
whereas others have used more manual methods such as using the drift of
background stars as they move relative to the TSE (Bemporad, 2020). There are
also many different image processing techniques that have been developed to
better reveal various structures and phenomena (Patel et al. (2022); Qiang et
al. (2020); Morgan & Druckmüller (2014); Druckmüller (2013); Byrne et al.
(2012); Druckmüllerová et al. (2011)). The outline of this paper is as
follows: section 1.1 describes the 2020 TSE in more detail, section 2
summarises the design of the instrument (2.1), and the calibrating,
processing, coaligning, and inversion to derive the coronal electron densities
(2.2.3 \- 2.2.7). Section 3 presents the results of the study, with sections 4
and 5 disseminating and concluding the results of this work, respectively.
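To make the role of the line-of-sight integral in Equation (1) concrete, the following minimal sketch forward-models $pB$ for an assumed radial density profile by numerical quadrature. It is only schematic: `n_e` (a power-law density) and `G_weight` (a stand-in for the Thomson-scattering weighting function $G(s,\rho)$) are illustrative placeholders, not the functions used in this work.

```python
import numpy as np

def pB_forward(rho, n_e, G_weight, s_max=10.0, n_steps=4001):
    """Numerically approximate Eq. (1): pB(rho) ∝ ∫ n_e(r) G(s, rho) ds,
    with r = sqrt(rho^2 + s^2) along the line of sight (units of R_sun)."""
    s = np.linspace(-s_max, s_max, n_steps)   # distance along the LOS
    r = np.sqrt(rho**2 + s**2)                # heliocentric distance of each point
    return np.trapz(n_e(r) * G_weight(s, rho), s)

# Illustrative inputs only (not those used in this work):
n_e = lambda r: 1e8 * r**-6                                   # assumed power law
G_weight = lambda s, rho: (rho / np.sqrt(rho**2 + s**2))**2   # placeholder geometry
print(pB_forward(1.2, n_e, G_weight))
```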
### 1.1 14 December 2020 Total Solar Eclipse
The TSE on 14 December 2020 was observed by a team from Aberystwyth University
at a site in Neuquén province, Argentina. The observation site was located at
39∘42’ 40.4” S, 70∘23’ 57.6” W, and an altitude of $\approx$ 1,082 m (3,550
ft). Totality lasted $\approx$ 129 seconds with the time of maximum eclipse at
13:07:58 local time (16:07:58 UTC) and the apparent altitude of the Sun above
the horizon was $\approx$ 75∘. The weather conditions at the time of
observation were not optimal with intermittent cloud cover and very strong
gusts of up to 70 km/h which resulted in a lot of airborne dust. Furthermore,
a small, wispy cloud passed across the Sun’s disk for $\approx$ 12 seconds.
This had an adverse effect on the quality of the data but, despite the
conditions, the data captured during totality was still usable. Figure 1 shows
the location of the observation site (white marker) along with the cloud cover
at approximately the time of the eclipse. During this eclipse, a CME had
erupted from the eastern limb of the Sun $\approx$ 110 minutes before
totality, providing a truly unique opportunity to study CME dynamics right
down to the solar limb. The LASCO CME catalog (Gopalswamy et al., 2009) states
that the CME first appeared in the C2 field-of-view (FOV) at 15:12:10 UT with
an estimated linear speed of 437 km/s, mass of $3\times 10^{12}$ kg, and a
central position angle of 121∘. The CME is discussed in detail by Boe et al.
(2021c).
Figure 1: Satellite image of cloud cover above the observation site in Neuquén
province, denoted by the white marker, taken at approximately 13:30 local time
(Credit: Zoom Earth)
## 2 Instrument Design and Data Processing
### 2.1 The Coronal Imaging Polarizer (CIP)
The instrument used to observe the corona during the total solar eclipse was a
VL linear polarization imager designed and built at Aberystwyth University.
The Coronal Imaging Polarizer (CIP) was designed to be relatively cheap and
simple to build, easy to assemble, and lightweight in order to be able to
relocate quickly on the day of an eclipse if necessary. It consisted of an
objective lens, a VL band-pass filter (https://www.thorlabs.com/thorproduct.cfm?partnumber=FBH520-10), a linear polarizer (https://www.thorlabs.com/thorproduct.cfm?partnumber=LPVISC100) housed in a rotating mount (https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=12829), and a CMOS sensor (https://www.atik-cameras.com/product/atik-horizon-ii/). The
objective lens, an f-5.6, 200 mm focal length DSLR lens, can be adjusted to
give different fields of view if required. The VL band-pass filter has a
center wavelength of 520 nm (denoted by the vertical dashed black line in
Figure 2) where transmission is $>$ 90%, and a bandwidth (FWHM) of 10 nm.
Since the peak of the Sun’s emission is around 500 nm, the filter is well-
placed to collect the maximum amount of light possible.
Figure 2: Extinction ratio and transmissions for both the band-pass filter and
linear polarizer measured by their respective manufacturers. The dotted
horizontal lines correspond to the polarizer’s extinction ratio (blue) and
transmission (red) at the band-pass filter’s center wavelength (Data from
Thorlabs)
Traditionally, when taking VL polarization observations of the corona, a
polarizer is either manually rotated through a set number of polarization
angles (typically 3 - 5) or several different instruments are set up, each
designed to capture the light of a single polarizer orientation angle. In
contrast, CIP used a piezoelectrically-driven rotation mount from Thorlabs to
automatically rotate the polarizer through six different polarization angles
(0∘, 30∘, 60∘, 90∘, 120∘, and 150∘). The motorized rotation mount allows for
360∘ rotation with a maximum rotation speed of 430∘/second and has an accuracy
of $\pm$ 0.4∘, resulting in high precision and fast rotation. The energy
requirement of the motor is very low, needing only a maximum of 5.5 V DC input
with a typical current consumption of 800 and 50 mA during movement and
standby, respectively.
The camera used for CIP was the Horizon II, a 16-megapixel CMOS camera
developed by Atik. It uses the Panasonic MN34230 4/3” CMOS sensor with a 4644
× 3506 resolution and has a very low readout noise ($\sim$ 1 e-). It is
powered by a 12 V 2 A DC input and has a minimum exposure time of 18 $\mu$s
and unlimited maximum exposure. It is also cooled via an internal fan and can
maintain a $\Delta$T of -40∘C, meaning the camera can still maintain low
thermal noise even at high ambient background temperatures. It is a black-and-
white camera since using an RGB camera requires extra steps when processing
the data (e.g. demosaicking), and the correction and alignment must be done
individually for each RGB channel - see Bemporad (2020), for example. In order
for the instrument to run in the most efficient way possible, the data
collection process was fully automated. The rotation mount rotates the
polarizer to a specific angle, then the camera collects a sequence of images
of varying exposure times from 0.001 - 3 seconds, then the polarizer is
rotated to the next angle, and the camera would run through the same sequence
as before, and so on for all six polarization angles. The time taken for one
complete cycle of data collection is around 30 seconds which resulted in two
full data sets and a third partial data set.
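The acquisition sequence described above can be summarised in a short sketch. The `mount` and `camera` objects and their `rotate_to`/`expose` methods are hypothetical stand-ins for the actual Thorlabs rotation-mount and Atik camera driver interfaces:

```python
# Hypothetical driver interface; rotate_to() / expose() are placeholders.
ANGLES_DEG = [0, 30, 60, 90, 120, 150]        # polarizer orientation angles
EXPOSURES_S = [0.001, 0.01, 0.1, 1.0, 3.0]    # exposure-time ladder [s]

def run_eclipse_sequence(mount, camera, n_cycles=3):
    """One cycle: rotate to each angle, then sweep the exposure ladder."""
    frames = []
    for _ in range(n_cycles):
        for angle in ANGLES_DEG:
            mount.rotate_to(angle)            # piezo rotation, up to ~430 deg/s
            for t in EXPOSURES_S:
                frames.append((angle, t, camera.expose(t)))
    return frames
```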
Figure 3: The Coronal Imaging Polarizer (CIP) - A: f-5.6, 200 mm objective
lens. B: housing containing the rotation mount, polarizer, and VL filter. C:
Atik Horizon II 16 MP CMOS camera D: housing for the interface board of the
rotation mount
### 2.2 Data Processing
#### 2.2.1 Linearity
The response of the imaging sensor to incoming light intensity is measured by
using an integrating sphere, where the level of illumination in the sphere is
measured by a photodiode (in Amps). The integrating sphere is not calibrated
to give the light levels in SI units but it is linear so if the photodiode
current doubles the light level in the sphere is doubled. The illumination of
the sphere was gradually increased by 1 $\mu$A until it reached the saturation
point. Five measurements were taken and an average intensity was calculated
for each photodiode reading. Figure 4 presents the results of this linearity
test and it clearly shows that the sensor used in CIP has a linear response.
The linearity begins to break down close to the saturation limit ($\approx$
4096 counts) which is to be expected and is accounted for when the images
taken at different exposure times are combined for each polarization angle
(Section 2.2.4). The linearity of the sensor was quantified using equation
(2):
$\mathrm{Linearity}\,(\%)=\frac{\mathrm{MPD}+\mathrm{MND}}{\mathrm{MI}}\times 100$ (2)
where MPD, MND, and MI are the maximum positive deviation, maximum negative
deviation, and maximum intensity, respectively. The maximum positive and
negative deviations are found from the line of best fit. This calculation was
performed twice - once including the last data point where the linearity
begins to break down, and a second time without including the aforementioned
data point. These calculations result in linearities of 2.99% and 0.96%,
respectively. Both of these values are excellent linearities since most CMOS
sensors have linearities on the order of several percent (Wang & Theuwissen,
2017).
Figure 4: Average pixel intensity as a function of the integrating sphere’s
photodiode reading (black dots) and the calculated line of best fit (dashed
line) giving an R2 value of 0.99956
#### 2.2.2 Flat field and dark frame correction
Five flat field images were taken at an exposure time of 0.01 s for each
polarization angle using an integrating sphere. A master flat field image was
then produced for each polarization angle using the open-source plugin
AstroImageJ - an example of one of these is seen in Figure 5 along with an
intensity profile taken at the midpoint of the image.
Figure 5: Master flat field image for a polarization angle of 0∘ (top) and a
normalized intensity profile taken at the midpoint of the image shown by the
red horizontal line (bottom)
Two different types of dark field images were attempted: firstly, setting the
exposure time of the camera to zero and taking several pictures to get an
average; and secondly, blocking the front of the instrument with the lens cap
and running the full eclipse sequence five times. Unfortunately, the Atik
Horizon II has a minimum exposure time of 18 $\mu$s, so the first method could
not be used. For the second method, the instrument was taken to a dark
room with the lens cap taped over the lens. Several dark frames were taken in
this way for each polarization angle and exposure time used in the eclipse
sequence. It is expected that the mean values of the intensity (along with
their standard deviations) should be similar for each polarization angle and
exposure time and this is indeed what was seen. Figure 6 shows the dark noise,
$\sigma_{dark}$, calculated by taking the average standard deviation of all
dark frames taken at each exposure time and polarization angle. It is clear
that pixels in the detector follow the same pattern and behave uniformly
across all polarization angles.
Figure 6: Average dark noise for all polarization angles as a function of the
exposure time
Once the master flat field and dark frame images were produced, the raw
eclipse images were calibrated using equation (3):
$C_{\theta,t}=\frac{(R_{\theta,t}-D_{t})*m}{(F_{\theta}-D_{t})}$ (3)
where $C_{\theta,t}$ is the reduced image, $R_{\theta,t}$ is the raw image
taken at a specific polarization angle ($\theta$) and exposure time ($t$),
$D_{t}$ is the dark frame for that particular exposure, $F_{\theta}$ is the
flat-field for that particular polarization angle, and $m$ is the image-
averaged value of $(F_{\theta}-D_{t})$. At this stage, the scale of the image
was calculated. The radius of the Moon was found to be 255.5 $\pm$ 0.5 pixels
by fitting a circle to points plotted along the lunar limb using ImageJ. One
of the lowest exposure images was used to obtain this radius in order to
reduce the brightness from the lower corona to obtain the most accurate value
possible. The Moon’s apparent radius at the time of the eclipse was found to
be 1,000.145” using the Stellarium software (Zotti et al., 2021), therefore,
the full-resolution images provided a spatial resolution of 3.92 $\pm$ 0.01
arcsec/pixel.
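A minimal sketch of the reduction step of equation (3), applied to one raw frame with its matching dark frame and master flat field:

```python
import numpy as np

def reduce_frame(raw, dark, flat):
    """Eq. (3): C = (R - D) * m / (F - D), where m is the image-averaged
    value of (F - D). Inputs are 2D arrays for one angle and exposure."""
    gain = flat.astype(float) - dark     # flat field with dark level removed
    m = gain.mean()                      # image-averaged normalization
    return (raw.astype(float) - dark) * m / gain
```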
#### 2.2.3 Image coalignment
After the initial calibration of the raw images with the dark and flat frames,
they were then coaligned before the images with different exposure times could
be combined. This step is crucial because, although a motorized tracking mount
was used to track the solar center, there are still some factors that need to
be accounted for and corrected. The two main factors are the high winds at
ground level throughout the eclipse and the fact that the Moon is moving with
respect to the Sun. The effect of the wind on the tracking mount can be seen
in Figures 7 and 8. As discussed in section 1, there are several different
methods of coaligning eclipse images such as the use of phase correlation
(e.g. Druckmüller (2009)) or by using the positions of stars visible in the
exposures (e.g. Bemporad (2020)). In this study, three stars were visible in
the images taken at longer exposure times (see Figure 7) and were identified
using Stellarium to be HD 157056 (star 1), HD 157792 (star 2), and HD 158643
(star 3), respectively. The images were initially coaligned using the
brightest of these stars (HD 157056), which is star 1 in Figure 7, since its
drift in both the x- and y-direction was fairly consistent (Figures 7 and 8).
For the shorter exposure images, where the stars were not visible, it was
assumed that the star did not move much relative to its position in the 1 and
3 s exposures. Shifts in both the x- and y-pixels were calculated for each
image, based on their respective exposure times, using the equations found in
Figure 8 and all of the images were co-aligned accordingly. The images are
then re-binned through 8 $\times$ 8 pixel averaging to improve the signal
quality, which reduced the resolution of the images to 31.25 $\pm$ 0.01
arcsec/pixel.
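The coalignment step can be sketched as follows, assuming the linear drift models of Figure 8 are available as `np.polyfit` coefficients; the sign convention and interpolation order are illustrative choices:

```python
import numpy as np
from scipy.ndimage import shift

def coalign(image, t, fit_x, fit_y):
    """Undo the tracked star drift at time t, where fit_x and fit_y are
    np.polyfit coefficients of the linear x- and y-drift models (Figure 8)."""
    dx, dy = np.polyval(fit_x, t), np.polyval(fit_y, t)
    return shift(image, (-dy, -dx), order=1, mode="nearest")
```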
Figure 7: Left: Locations of the three visible stars in a 3 s exposure time
image. Right: Locations of the brightest pixel in star 1 for all 1 and 3 s
exposures Figure 8: Pixel drifts in both the x- and y-coordinates for star 1.
The data points represent the pixel coordinate of the brightest pixel during
the 1 and 3 s exposures. The darker and lighter shaded regions show one and
two standard deviations, respectively
#### 2.2.4 Image combination
The next step was to combine all the images of different exposure times for
each individual polarization angle. Firstly, all negative pixel values were
set to zero and the images were normalized by their respective exposure times
(DN/s) using equation 4:
$I_{p}^{\prime}=\frac{I_{p}}{\Delta t_{e}}$ (4)
where $I_{p}^{\prime}$ is the pixel intensity ($I_{p}$) for each polarization
angle $p$, normalised by exposure time ($\Delta t_{e}$). Since the eclipse
images have a very high dynamic range between the brightest and darkest
pixels, they need to be combined together in a way that takes the pixel value
itself into account. In the shorter exposure images, the inner coronal signal
is strong but the outer coronal signal is too weak whereas, in the longer
exposure images, the outer coronal signal is strong but the inner corona is
over-exposed. Thus, the images need to be combined in such a way that the
inner and outer corona are adequately exposed in the final image. This is done
by using a signal-to-noise ratio (SNR) cut based on the square root of the
intensity, thus providing a lower boundary wherein any pixels with a value
less than this threshold are excluded. The SNR cut used in this work was
$I>\sqrt{I}$ where $I$ is the intensity of the pixel. An upper threshold of
3,900 counts is also used since the linearity of the CMOS sensor breaks down
around this value (see Figure 4). This process was done for each image taken
at a given polarization angle which means that each composite image consists
of two sets of different exposure times.
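The text above does not spell out how accepted pixels from different exposures are merged, so the sketch below assumes one plausible choice: a mean of the exposure-normalized (DN/s) values over pixels passing both the SNR and saturation cuts.

```python
import numpy as np

def combine_exposures(images, exposure_times, saturation=3900):
    """Mean of exposure-normalized frames (Eq. 4, in DN/s) over pixels that
    pass the SNR cut I > sqrt(I) and sit below the linearity limit."""
    num = np.zeros(images[0].shape)
    den = np.zeros(images[0].shape)
    for img, t in zip(images, exposure_times):
        img = np.clip(img.astype(float), 0, None)       # negatives set to zero
        ok = (img > np.sqrt(img)) & (img < saturation)  # accept well-exposed pixels
        num[ok] += img[ok] / t
        den[ok] += 1
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)
```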
#### 2.2.5 Sky Brightness Removal
During a TSE, the local sky brightness is significantly lowered to $\approx
10^{-9}-10^{-10}$ B⊙ which allows unmatched VL observations of the corona out
to greater heliocentric heights than without a TSE. However, the sky
brightness during a TSE is not negligible and must be removed from the data.
As Figure 9 shows, a box was defined in each corner of each combined TSE
image. There is a considerable amount of noise present in the image as a
result of the poor weather conditions, despite the rebinning to improve the
signal-to-noise ratio. There is also an optical artifact that takes the form
of two parallel lines spanning the width of the image, the cause of which is
unknown but most probably instrumental. The mean intensity in each box was
then calculated and the process was repeated for each polarizer angle. The
result of this can be seen in Figure 10 which seems to show that the intensity
at the right-hand side of the image (corresponding to solar north) is greater
than that of the left-hand side (corresponding to solar south). A map is then
created to estimate the sky brightness at each point in the image by
interpolating the mean counts between each of the four boxes for each
polarizer angle separately. Two examples of these interpolated sky brightness
maps are shown in Figure 11 for 0∘ and 90∘. The polarization of the background
sky lies at around 10% of the overall polarization of the image and this
component is subsequently removed as a result of this sky brightness removal
step. Finally, the F-coronal component has a noticeable impact on $pB$ data
from heliocentric heights of $\sim$ 2.5 - 3.0 R⊙ (Boe et al., 2021b) but since
this study limited observations to a heliocentric height of 1.5 R⊙, the
F-coronal component can be neglected at such low heights.
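The exact interpolation scheme between the four corner boxes is not specified above; a bilinear surface is one natural choice, sketched here:

```python
import numpy as np

def sky_brightness_map(shape, tl, tr, bl, br):
    """Bilinearly interpolate the four corner-box mean counts (top-left,
    top-right, bottom-left, bottom-right) across a frame of the given shape."""
    ny, nx = shape
    y = np.linspace(0.0, 1.0, ny)[:, None]   # vertical coordinate, 0 at top
    x = np.linspace(0.0, 1.0, nx)[None, :]   # horizontal coordinate, 0 at left
    return tl*(1 - y)*(1 - x) + tr*(1 - y)*x + bl*y*(1 - x) + br*y*x
```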
Figure 9: Location of each box used to determine the sky brightness overlaid
on the combined 0∘ image. The image has not been rotated so that solar north
aligns with the top of the image. L, T, B, and R correspond to the left, top,
bottom, and right sides of the image, respectively Figure 10: Mean intensity
for each box in Figure 9 at each polarizer angle. The fit to equation 5 is
shown as a solid line for each of the four boxes where the results from the
top-left, top-right, bottom-left, and bottom-right boxes are shown in black,
red, green, and blue, respectively. The mean polarization of each box is also
expressed as a percentage of the total polarization
Figure 11: Examples of interpolated sky brightness maps subtracted from the
respective TSE images for polarizer orientation angles of 0∘(left) and
90∘(right)
#### 2.2.6 Determining the Polarized Brightness
A standard inversion method used to acquire coronal electron densities was
initially developed by van de Hulst (1950) and further developed by Newkirk
(1967), Saito et al. (1977), Hayes et al. (2001), and Quémerais & Lamy (2002),
among others. The original model assumes both spherical symmetry and that the
$pB$ is produced purely by the Thomson scattering of photospheric light from
free coronal electrons and it is proportional to the integrated LOS density of
the electrons, as shown in Equation (1). Most similar studies use three or
four different polarization angles to determine the $pB$ \- typically -60∘,
0∘, 60∘ (Hanaoka et al., 2021), and 0∘, 45∘, 90∘, 135∘ (Vorobiev et al.,
2020). However, CIP captured images taken at different exposure times for six
different polarization angles (0∘, 30∘, 60∘, 90∘, 120∘, and 150∘). As a
result, in order to invert the calibrated intensities to find the $pB$ at each
pixel, an approach involving least squares fitting was used. For a polarizer
angle, $\theta_{i}$, where $i=0,1,2,...,n-1$ (with $n=6$ for this study), the
measured intensity, $I_{i}$, is described by Equation 5.
$I_{i}=I_{0}+a\cos{2\theta_{i}}+b\sin{2\theta_{i}},$ (5)
where the coefficients to be fitted are the unpolarized background intensity,
$I_{0}$, and the polarized components, $a$ and $b$. The polarized brightness
is then given by
$pB=\sqrt{a^{2}+b^{2}}$ (6)
In order to find a solution for $I_{0}$ and $pB$ at each pixel, the squared
sum is then minimized as follows
$\sum_{i=0}^{n-1}[I_{i}-(I_{0}+a\cos{2\theta_{i}}+b\sin{2\theta_{i}})]^{2}$ (7)
This least squares fitting method was tested against the standard Mueller
matrix inversion method for three polarization angles (0∘, 60∘, and 120∘) and
it agreed within a few percent. As a result, CIP can theoretically take images
of the corona for any number of polarizer angles in the future.
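A minimal per-pixel implementation of this least squares fit (equations 5 - 7) using a linear design matrix:

```python
import numpy as np

def fit_pB(intensities, angles_deg):
    """Least-squares solution of Eq. (5) at one pixel; returns the unpolarized
    background I0 and pB = sqrt(a^2 + b^2) (Eq. 6)."""
    th = np.deg2rad(np.asarray(angles_deg, dtype=float))
    A = np.column_stack([np.ones_like(th), np.cos(2 * th), np.sin(2 * th)])
    (i0, a, b), *_ = np.linalg.lstsq(A, np.asarray(intensities, float), rcond=None)
    return i0, np.hypot(a, b)

# e.g. fit_pB(I_pixel, [0, 30, 60, 90, 120, 150])
```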
#### 2.2.7 Relative Radiometric Calibration
A relative radiometric calibration step converts the polarized brightness
measured by CIP (recorded as DN/s) to units relative to the mean solar
brightness (MSB). The instrument used in this step was the Mauna Loa Solar
Observatory’s (MLSO) COSMO K-Coronagraph (DOI: 10.5065/D69G5JV8) which
provides $pB$ data with a field-of-view from 1.05 to $\sim$ 3 R⊙ and a spatial
resolution of 11.3”. The 10-minute averaged K-Cor data was used for this
calibration step with the first observation occurring at 17:56:41 UTC and the
last at 18:10:20 UTC. Figure 12 shows the 10-minute averaged $pB$ observations
taken by MLSO’s K-Cor instrument on the day of the TSE, processed using the
Multi-scale Gaussian Normalization technique (Morgan & Druckmüller, 2014) in
order to enhance the fine-scale coronal structure, and the concentric rings
represent heliocentric heights of 1.0, 1.5, 2.0, and 2.5 R⊙, respectively. It
is clear that consistent $pB$ data is restricted to $\sim$ 1.7 R⊙, with
the data extending further out in the equatorial regions. As a result, the
relative radiometric calibration in this study was limited to a heliocentric
height of 1.5 R⊙.
Figure 12: 10-minute averaged $pB$ observations taken by MLSO’s K-Cor
instrument on the day of the eclipse, processed using the Multi-scale Gaussian
Normalization technique, with the concentric circles corresponding to
heliocentric heights of 1.0, 1.5, 2.0, and 2.5 R⊙, respectively
The images from both CIP and K-Cor are then coaligned and an intensity profile
is taken for a thin slice of the corona at a specified height for both images.
Some examples of these intensity profiles can be seen in Figure 13 for a range
of heliocentric heights. A calibration factor is temporarily applied to the
CIP data in order to visualize both latitudinal profiles on the same axis
scale, which is denoted in each plot as CF. The mean ratio of the data from
both instruments is then computed at intervals of 0.025 R⊙ between 1.1 and 1.5
R⊙, and these values are shown in Figure 14. This linear increase in the
intensity ratio with height is then applied to the TSE data thus converting
the $pB$ observations taken with CIP from units of DN/s to units of solar
brightness ($B_{\odot}$). A cross-correlation was also performed on the
intensity profiles to find the angle needed to rotate the CIP data in order to
align it with that taken by K-Cor (i.e., solar north upwards).
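A sketch of this calibration step for a 1D radial profile, assuming the height-dependent K-Cor/CIP intensity ratio of Figure 14 is fit with a straight line; the argument names are placeholders:

```python
import numpy as np

def calibrate_to_msb(pB_dn_s, heights_rsun, ratio_kcor_over_cip):
    """Fit the height-dependent intensity ratio (cf. Figure 14) with a line
    and apply it to a radial pB profile, converting DN/s to B_sun units."""
    line = np.polyfit(heights_rsun, ratio_kcor_over_cip, 1)
    return pB_dn_s * np.polyval(line, heights_rsun)
```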
Figure 13: Latitudinal distribution of intensity profiles from both CIP (red)
and K-Cor (black) for a range of heliocentric heights. A calibration factor
(CF), noted in each plot, is temporarily applied to the CIP data in order to
see both profiles on the same axis scale
Figure 14: Mean intensity ratio between the CIP and K-Cor intensity profiles
for each height interval (black) along with the linear fit used to perform the
relative radiometric calibration of the TSE data (red)
## 3 Results
Figure 15: Final MGN-processed $pB$ image of the December 14 2020 TSE after
relative radiometric calibration with respect to MLSO/K-Cor
Figure 15 shows the final calibrated $pB$ image of the December 14 2020 TSE.
It has been further processed using the MGN technique to enhance the fine-
scale coronal detail. Due to the poor viewing conditions at the observation
site adversely affecting the data, the data analysis has been limited to a
heliocentric height of 1.5 R⊙, but the image in Figure 15 has been extended out
to 2 R⊙ to show more of the corona.
Figure 16: Calibrated and corrected $pB$ profiles from CIP (black) and K-Cor
(red) for both the polar coronal hole and equatorial streamer
Figure 16 shows a comparison of $pB$ data as a function of heliocentric height
for the north polar coronal hole and west equatorial streamer chosen for this
study (at position angles 12∘ and 245∘ counter-clockwise from solar north,
respectively). As would be expected, the $pB$ is highest in the solar
equatorial region and lower in the polar region. It is clear that the
relative radiometric calibration with respect to MLSO/K-Cor is optimal at the
equator but the polar $pB$ observed by CIP is still greater than that of
K-Cor. This is believed to be due to the poor weather conditions at the time
of observation having a greater effect on the fainter coronal signal in the
polar regions.
Figure 17: Latitudinal distribution of $pB$ for heliocentric heights of 1.2
(black), 1.3 (red), 1.4 (blue), and 1.5 R⊙ (green). Vertical dotted lines are
included to represent the position angles for the polar coronal hole (left),
CME (middle), and equatorial streamer (right).
Figure 17 shows the latitudinal distribution of the $pB$ for heliocentric
heights of 1.2, 1.3, 1.4, and 1.5 R⊙ in black, red, blue, and green,
respectively. The $pB$ decreases with increasing distance from the limb which
is to be expected, and all four latitudinal distributions have a similar shape
but there is a noticeable offset between corresponding maxima and minima as
heliocentric height increases. The dotted vertical lines represent the
position angles at which the densities are calculated for the polar coronal
hole and the equatorial streamer, as well as the position angle for the CME
where its presence becomes increasingly apparent in the 1.4 and 1.5 R⊙ $pB$
distributions.
Figure 18: Coronal electron densities as a function of heliocentric height for
the coronal hole (left) compared with Baumbach (1937), Doyle et al. (1999),
and Guhathakurta et al. (1999), and the equatorial streamer (right) compared
with Gibson et al. (1999), Liang et al. (2022), and Gallagher et al. (1999).
The black data points show the densities acquired by inverting the $pB$ and
the solid black line shows the fit to equation 9.
Figure 18 shows the coronal electron densities, derived by inverting the $pB$,
as a function of heliocentric height for both the polar coronal hole (left)
and equatorial streamer (right). Both plots also show comparisons to data from
previous works which agree very well with the data from this study. As
expected, the streamer’s density is greater than the coronal hole’s by an
average factor of $\sim$ 4 between 1.1 - 1.5 R⊙. For the coronal hole
comparison, density profiles from Baumbach (1937), Doyle et al. (1999), and
Guhathakurta et al. (1999) were used. The latter two in particular were
selected because they represent observations of polar coronal holes taken at
the start of solar cycle 24 (solar minimum) and, since the 2020 TSE occurred
exactly a year after the start of solar cycle 25, they are reasonable
comparisons to make. For the equatorial streamer, the densities obtained in
this study are compared with Gibson et al. (1999), Liang et al. (2022), and
Gallagher et al. (1999). Again, these are reasonable comparisons to make since
the compared works represent observations of a streamer (Gibson et al. (1999))
and equatorial regions (Liang et al. (2022); Gallagher et al. (1999)) taken at
or near solar minimum.
## 4 Discussion
### 4.1 Number of Polarizer Angles
Typically, similar studies and space-based coronagraphs use polarizer angles
of 0∘, 60∘, and 120∘ to infer the coronal electron density. As mentioned
previously, CIP is designed to efficiently take polarized observations for any
number of pre-defined polarizer angles. For this study, six polarizer angles
were chosen in an attempt to see if increasing the number of polarizer angles
leads to better constraints on the $pB$ observations and the inferred coronal
electron densities. Figure 19 shows the mean percentage difference between the
coronal electron densities derived using all six polarizer angles (henceforth
referred to as Angle Set A) and only the 0∘, 60∘, and 120∘polarizer angles
(Angle Set B) with the error bars representing the standard error in the mean.
The difference is shown as a function of the position angle between the
heliocentric heights 1.1 - 1.5 R⊙ and ranges from -12.8% to 9.44%, with the
overall mean at 0.56%. For the position angles chosen to represent the coronal
hole and equatorial streamer, the mean difference is -8.26% and 8.04%,
respectively.
Figure 19: Mean difference between the coronal electron densities derived
using all six polarizer angles (Angle Set A) and only the 0∘, 60∘, and
120∘polarizer angles (Angle Set B). The mean difference ranges from $\sim$
-13% to 10% and the mean in the difference is also shown (red line). The error
bars show the standard error in the mean.
Figure 20 shows the difference in density between both angle sets A (red) and
B (black) as a function of heliocentric height for both the coronal hole
(left) and equatorial streamer (right). It is clear that the densities
found using set A (all polarizer angles) are lower in the coronal hole and
higher in the equatorial streamer in comparison to those found using only set
B (0∘, 60∘, 120∘). Furthermore, the difference between the two calculated
densities clearly becomes greater with increasing heliocentric height, which
is shown more clearly in Figure 21. Since the difference increases with
increasing heliocentric height, it is unfortunate that the weather conditions
on the day of the TSE were sub-optimal because it would be interesting to see
how this difference evolves beyond the inner coronal region. However, the
existence of this difference implies that changing the number of polarizer
angles used does provide a better constraint on the inferred coronal electron
densities, particularly for heliocentric heights above $\sim$ 1.5 R⊙. For the
remainder of the analysis of these results, set A (all polarizer angles) will
be used.
Figure 20: Difference in coronal electron densities between using $pB$ data
from Angle Sets A (red) and B (black) for both the coronal hole and equatorial
streamer Figure 21: Mean difference (expressed as a percentage) between using
$pB$ data from Angle Sets A and B for both the coronal hole (black) and
equatorial streamer (red) with the error bars representing one standard
deviation in both cases
### 4.2 Radial Density Fitting
Several studies (Saito et al., 1977; Guhathakurta et al., 1999; Hayes et al.,
2001; Thernisien & Howard, 2006) state that the radial dependence of the
coronal electron density can be expressed in the form of a polynomial:
$N_{e}(r)=\sum_{i}\alpha_{i}r^{-\beta_{i}}$ (8)
where $r$ is given in solar radii and $\alpha_{i}$ and $\beta_{i}$ are coefficients
fit to the data. Since the analysis of the data obtained in this study
extended from $\sim$ 1.1 to 1.5 R⊙, three terms are sufficient to provide
a good fit to the data; thus, the equation used to fit the data and determine
the coefficients was:
$N_{e}(r)=ar^{-b}+cr^{-d}+er^{-f}$ (9)
The coefficients for both coronal features of interest are shown in Table 1
and agree well with previous studies; a fitting sketch follows the table.
Table 1: Coefficients for equation (9)

| | a | b | c | d | e | f |
|---|---|---|---|---|---|---|
| Coronal Hole | $1.12\times 10^{6}$ | 0.107 | $2.27\times 10^{8}$ | 10.4 | $1.50\times 10^{8}$ | 560 |
| Equatorial Streamer | $-2.13\times 10^{9}$ | 6.56 | $1.43\times 10^{9}$ | 6.77 | $1.20\times 10^{9}$ | 6.77 |
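A minimal sketch of how the coefficients of equation (9) could be obtained with nonlinear least squares; the starting guess shown is illustrative only, not the values used in this work:

```python
import numpy as np
from scipy.optimize import curve_fit

def n_e_model(r, a, b, c, d, e, f):
    """Three-term radial density model of Eq. (9)."""
    return a * r**-b + c * r**-d + e * r**-f

# r and n_e are the measured radial-profile arrays; p0 is an illustrative guess.
# popt, pcov = curve_fit(n_e_model, r, n_e,
#                        p0=[1e6, 0.1, 1e8, 10.0, 1e8, 500.0], maxfev=20000)
```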
## 5 Conclusion
The primary aim of this study was to present a new design for a lightweight
polarization instrument capable of observing using more polarization angles
than is typically used, called the Coronal Imaging Polarizer (CIP). The
instrument was designed and built at Aberystwyth University to observe the
polarized brightness of the solar corona during the December 14 2020 TSE for
six orientation angles of the linear polarizer. Due to the design of the
instrument, it is very easy to increase or decrease the number of polarization
angles depending on the time available during the totality phase of an
eclipse. One of the main design elements of CIP was that it would be
lightweight in order to be easily transported to an observing site. This
element was not only met but was also needed as the team had to relocate to a
new observing site on the morning of the eclipse. The new site was a few
hours’ drive from the original site and with the instrument itself weighing
only $\sim$ 1.59 kg, the bulk of the transporting was due to the tripod and
tracking mount. Consequently, the entire instrument was easily transported and
the simplicity of the design meant that the team could set it up very quickly
at the new observing site. However, the weather conditions at the new site
still impacted the data - primarily the strong gusts of wind which can be
clearly seen in Figure 7, along with airborne dust and high humidity. The
instrument was also designed to capture data fully autonomously and
efficiently and, again, this aim was met.
The raw VL images were successfully corrected using flat-field and dark frame
subtraction (see Section 2.2.2) and the corrected images were then manually
coaligned by tracking the drift of a star in the background of the data
throughout the duration of totality (see Section 2.2.3). The images were then
rebinned through 8 $\times$ 8 pixel averaging in order to improve the
signal-to-noise ratio, a step which might not be needed in future eclipses
given better viewing conditions.
The images of different exposure times were combined by using a simple signal-
to-noise ratio cut to create composite images of the eclipse for each
individual angle of polarization (see Section 2.2.4). These composite images
were then combined to give the $pB$ image using a simple least squares fitting
method (see Section 2.2.6) and relative radiometric calibration was
successfully done by cross-calibration with MLSO’s K-Cor (see Section 2.2.7).
The final $pB$ image was then inverted, assuming a locally spherically
symmetric corona, to produce radial density profiles for a polar coronal hole
and an equatorial streamer. These densities were then compared with previous
studies and were found to be in good agreement (see Section 3). Changing the
number of polarizer angles does have an effect on the inferred coronal
electron densities, however, due to the inclement weather conditions at the
time of observation, no meaningful comparison of the accuracy of more
polarizer angles could be performed. It is hoped that CIP can be sent to
observe future TSEs to provide better quality constraints on $pB$ and coronal
electron densities and build upon the results of this study, particularly with
regard to studying the impact of more polarization angles on the quality of
TSE data.
A special note of appreciation must go to the members of the team who braved
the COVID-19 pandemic in order to observe this eclipse, without whom this work
would not be possible. We acknowledge the valuable advice of the Solar Wind
Sherpa team of collaborators led by Prof Shadia Habbal at the University of
Hawaii. We also acknowledge studentship funding from the Coleg Cymraeg
Cenedlaethol, STFC grant ST/N002962/1 and STFC studentships ST/T505924/1 and
ST/V506527/1 to Aberystwyth University which made this instrument and work
possible. Some of the $pB$ data and coronal images used in this work are
courtesy of the Mauna Loa Solar Observatory, operated by the High Altitude
Observatory, as part of the National Center for Atmospheric Research (NCAR).
NCAR is supported by the National Science Foundation. This research has made
use of the Stellarium planetarium.
## References
* Baumbach (1937) Baumbach, S. 1937, Astronomische Nachrichten, 263, 121, doi: https://doi.org/10.1002/asna.19372630602
* Bemporad (2020) Bemporad, A. 2020, ApJ, 904, 178, doi: 10.3847/1538-4357/abc482
* Boe et al. (2021a) Boe, B., Habbal, S., Downs, C., & Druckmuller, M. 2021a, in AGU Fall Meeting Abstracts, Vol. 2021, SH15D–2057
* Boe et al. (2021b) Boe, B., Habbal, S., Downs, C., & Druckmüller, M. 2021b, ApJ, 912, 44, doi: 10.3847/1538-4357/abea79
* Boe et al. (2021c) Boe, B., Yamashiro, B., Druckmüller, M., & Habbal, S. 2021c, ApJ, 914, L39, doi: 10.3847/2041-8213/ac05ca
* Brueckner et al. (1995) Brueckner, G. E., Howard, R. A., Koomen, M. J., et al. 1995, Sol. Phys., 162, 357, doi: 10.1007/BF00733434
* Byrne et al. (2012) Byrne, J. P., Morgan, H., Habbal, S. R., & Gallagher, P. T. 2012, ApJ, 752, 145, doi: 10.1088/0004-637X/752/2/145
* Calbert & Beard (1972) Calbert, R., & Beard, D. B. 1972, ApJ, 176, 497, doi: 10.1086/151652
* De Pontieu et al. (2009) De Pontieu, B., McIntosh, S. W., Hansteen, V. H., & Schrijver, C. J. 2009, ApJ, 701, L1, doi: 10.1088/0004-637X/701/1/L1
* Del Zanna et al. (2023) Del Zanna, G., Samra, J., Monaghan, A., et al. 2023, ApJS, 265, 11, doi: 10.3847/1538-4365/acad68
* Domingo et al. (1995) Domingo, V., Fleck, B., & Poland, A. I. 1995, Sol. Phys., 162, 1, doi: 10.1007/BF00733425
* Doyle et al. (1999) Doyle, J. G., Teriaca, L., & Banerjee, D. 1999, A&A, 349, 956
* Druckmüller (2009) Druckmüller, M. 2009, ApJ, 706, 1605, doi: 10.1088/0004-637X/706/2/1605
# Parameter extraction for a superconducting thermal switch (hTron) SPICE model
Valentin Karam <EMAIL_ADDRESS>, Owen Medeiros, Tareq El Dandachi, Matteo Castellani, Reed Foster, Marco Colangelo, Karl Berggren
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, 02139, Massachusetts, USA
###### Abstract
Efficiently simulating large circuits is crucial for the broader use of
superconducting nanowire-based electronics. However, current simulation tools
for this technology are not adapted to the scaling of circuit size and
complexity. We focus on the multilayered heater-nanocryotron (hTron), a
promising superconducting nanowire-based switch used in applications such as
superconducting nanowire single-photon detector (SNSPD) readout. Previously,
the hTron was modeled using traditional finite-element methods (FEM), which
fall short in simulating systems at a larger scale. An empirical-based method
would be better adapted to this task, enhancing both simulation speed and
agreement with experimental data.
In this work, we perform switching current and activation delay measurements
on 17 hTron devices. We then develop a method for extracting physical fitting
parameters used to characterize the devices. We build a SPICE behavioral model
that reproduces the static and transient device behavior using these
parameters, and validate it by comparing its performance to the model
developed in a prior work, showing an improvement in simulation time by
several orders of magnitude. Our model provides circuit designers with a tool
to help understand the hTron’s behavior during all design stages, thus
promoting broader use of the hTron across various new areas of application.
###### keywords:
superconducting, nanowire, heater-Tron, hTron, SPICE, model, cryotron
## 1 Introduction
The field of superconducting electronics has demonstrated significant benefits
in areas such as quantum computing [1], imaging [2], neuromorphic computing
[3], and digital logic [4]. Josephson junction (JJ)-based circuits dominate
the field, thanks to their fast operation speeds and low power dissipation
[5].
However, JJs have low signal levels, low normal resistance, and are sensitive to
magnetic fields. In contrast, superconducting nanowires can operate in noisy
environments, offer high gains and exhibit high impedance and fan-out [6, 7,
8]. Therefore, they present a complementary device to JJs for certain
applications. In particular, the capability of nanowire-based circuits to
drive high loads makes them compatible with complementary metal-oxide-
semiconductor (CMOS) technologies [9, 10, 11].
Multiple nanowire-based devices have been introduced in the past, such as the
nanocryotron (nTron) [6] and the heater-nanocryotron (hTron) [12]. The nTron
is a three-terminal device that uses a constriction at the gate input to
suppress a superconducting channel. It has a maximum demonstrated clock
frequency of $615.4\text{\,}\mathrm{MHz}$ [13], but suffers from leakage
current between the gate and channel. The hTron device — in its multiplanar
version — uses the Joule heating from a normal heater to suppress a
superconducting channel deposited beneath it. An oxide layer separates the
heater layer from the superconducting layer, which isolates the layers
galvanically while coupling them thermally. The hTron is easier to fabricate
than the nTron due to the absence of constriction, and does not present the
issue of leakage currents [12].
These two nanowire-based devices allowed the creation of a logic family [14],
a memory cell [15], and are particularly used in superconducting nanowire
single-photon detector (SNSPD) arrays, both for pulse amplification and
readout [16, 17, 18, 19]. The nTron can also be used to translate SFQ pulses
to CMOS [9]. When a higher impedance is needed, an nTron and hTron can be
combined in an amplification stage, e.g., to drive an LED and communicate
between superconducting neurons [20, 21]. If used in larger and more complex
circuits, superconducting nanowires thus have the potential to complement Josephson junctions (JJs) and enable new superconducting electronics applications.
However, the scaling of these nanowire-based circuits is currently limited to
a few devices per circuit [19, 18]. We can partly explain this deficiency by
the lack of simulation tools for superconducting nanowires, which is crucial
to their development. Baghdadi et al. developed a 3D electrothermal model for
the hTron using finite-element modeling (FEM) techniques, modeling heat
exchanges inside the device by solving differential heat equations [12]. This
model, while helpful at the device level during early development stages,
cannot easily simulate large-scale circuits, mainly because of the stiffness of the differential equations and the difference of scales in the geometry of these layered devices. A simpler, less accurate 0D model has also been
developed, which was implemented in SPICE [21]. However, this simpler model is
slow to solve, and suffers from convergence issues. Moreover, both models
suffer from a lack of agreement with experimental data. Indeed, the heat
equations require various physical parameters to be approximated from
literature or experiment [12]. This means that the model has to be tuned and
optimized each time the geometry is changed. Arbitrary heater and channel widths cannot simply be plugged in to obtain good agreement with measurements, because many of the material's thermal parameters are geometry-dependent (e.g., boundary resistance or diffusion coefficient).
These issues can be solved by developing a physics-informed behavioral model.
Instead of focusing on the device’s microscopic physics and heat exchanges,
behavioral models fit experimental data using a minimal set of parameters.
This approach allows physics-informed behavioral models to be simpler due to their empirical basis, while also being robust thanks to fitting equations arising from phenomenological physics.
More precisely, we can divide the hTron response to a given heater current
into a static and a transient response. The static response is defined by the
channel switching current, whereas the device activation delay — delay between
the input of a heater pulse and the channel switching — leads to the modeling
of the transient response. To model the hTron behavior, the critical current
and activation delay dependence on heater current have to be measured.
In this paper, we characterize 17 hTron devices from a single wafer and
explain their static behavior using only two physical parameters. We introduce
a systematic approach to extract these parameters from measurements of
critical current as a function of heater current. Furthermore, we demonstrate
the correlation of these parameters with the heater and channel widths. The
transient response of the device, which depends on the heat flow from the
heater to the channel through the oxide layer, is also modeled to fit
experiments, allowing us to accurately simulate the device close to the
maximum operating speed. Finally, we assess the simulation speed and accuracy
of our method, comparing it to the previous electrothermal model by applying
our parameter extraction method on measurement data from the previous hTron
study [12]. We achieve improved agreements with published experimental data
and successfully replicate published results. Moreover, our simulation time is
lower by several orders of magnitude, making our approach suitable for use by circuit designers. This novel behavioral model is a first step toward a broader
use of superconducting nanowire-based devices in more complex circuits.
## 2 Methods
Figure 1: Device geometry and typical circuit
a) Device geometry, layers stackup, and typical circuit to perform I-V
measurements using a four-point measurement, which is further detailed in the b) schematic. c) I-V curve showing the influence of heater current on the
switching current.
In this section, we outline our methodological approach to constructing,
characterizing and modeling hTron devices. First, we delve into the
fabrication techniques. Following this, we explain the electrical measurement
procedures applied to the hTron devices, providing insights into their
functional characteristics. Lastly, we describe our modeling techniques,
including the specifics of our SPICE implementation.
### 2.1 Fabrication
Here we present the hTron fabrication process we used, which is comparable to the one introduced by Baghdadi et al. [12]. Figure 1a depicts the device layout together with the layer stackup.
The hTron fabrication process starts with the deposition of a uniform niobium nitride (NbN) layer with thickness $d_{\text{c}}=20\,\mathrm{nm}$ onto a silicon substrate using RF-biased sputtering [22]. Subsequently, $5\,\mathrm{nm}$ Ti and $50\,\mathrm{nm}$ Au were deposited with a liftoff process to pattern the pads, which serve as wire-bonding connections for the hTron ports.
Following this, the channel is patterned into the underlying NbN layer with
electron beam lithography (EBL) followed by reactive ion etching. A
$100\text{\,}\mathrm{nm}$ silicon oxide layer is deposited using a PECVD
technique to isolate the heater and channel layers. Finally, the heaters are
patterned with another liftoff process using the same Ti/Au thickness ratio.
In total, 9 distinct geometry combinations were patterned on two $10\times 10\,\mathrm{mm}$ dies from a single wafer. The nanowire configurations varied in heater width $w_{\mathrm{h}}$ and channel width $w_{\mathrm{c}}$ of $100\,\mathrm{nm}$, $500\,\mathrm{nm}$, and $1\,\mu\mathrm{m}$, and all had lengths of $10\,\mu\mathrm{m}$. However, one device with $w_{\mathrm{h}}=100\,\mathrm{nm}$ and $w_{\mathrm{c}}=1\,\mu\mathrm{m}$ was damaged while measuring, resulting in a total of 17 functional devices. As seen in Figure 1a, each terminal of the hTron is connected to 2 different pads, known as a four-point measurement setup.
### 2.2 Measurement Setup
This section details the experimental setup used to measure the relation
between switching current $I_{\mathrm{SW}}$ as a function of the heater
current $i_{\mathrm{H}}$, $I_{\mathrm{SW}}(i_{\mathrm{H}})$, as shown in
Figure 1a and Figure 1b. This relation explains the steady-state behavior of
the hTron device. For our experiments, we used an immersion probe inside a
liquid helium dewar [23], setting a substrate temperature of
$T_{\mathrm{SUB}}=4.2\,\mathrm{K}$.
The switching current is determined by gradually sweeping the channel current both positively and negatively using an arbitrary waveform generator (AWG) until the device switches into the resistive state, resulting in a detectable voltage. The AWG waveform frequency is set to $10\,\mathrm{kHz}$ so that the period is much larger than the hotspot growth time. We record the voltage as close to the channel as possible using the four-point setup. The
dependence of the switching current on the heater current is then obtained by
biasing the heater with a DC current $i_{\mathrm{H}}$ while performing the
channel sweep. A typical result is shown as a current-voltage (I-V) plot in
Figure 1c, where we can see the switching current being gradually lowered by
the increase of heater current. For high heater currents, the channel
temperature approaches $T_{\mathrm{C}}$, and thus the channel is fully
suppressed. To compensate for the intrinsic stochastic nature of the switching
characteristics of the devices, each datapoint is the result of an average of
100 trials [24].
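As an illustration of this averaging logic, a minimal Python sketch is given below; the instrument wrapper `read_iv_sweep` and the detection threshold are hypothetical stand-ins for the actual AWG and readout chain, not part of the published setup.

```python
# Schematic sketch of the switching-current averaging (our illustration).
# `read_iv_sweep` is a hypothetical function returning one ramp of channel
# current and measured voltage; V_THRESH is an assumed detection level.
import numpy as np

V_THRESH = 1e-3  # voltage taken as evidence of switching (assumed value)

def measure_i_sw(read_iv_sweep, n_trials=100):
    switches = []
    for _ in range(n_trials):
        i, v = read_iv_sweep()                       # one AWG ramp
        switches.append(i[np.argmax(v > V_THRESH)])  # first current with voltage
    return np.mean(switches)                         # average over the trials
```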
The channel switching current can also be modulated by increasing the entire
substrate temperature. To characterize this effect, we used a commercial
temperature controller with a heater located inside the immersion probe
itself. By doing so, the whole sample is globally heated, allowing the
characterization of the direct dependence between the channel’s switching
current and temperature.
To conclude, the evolution of the switching current was characterized as a
function of the substrate temperature and heater current. Both measurements
highlight different components of the device behavior. Throughout this paper,
we use the switching current density
$J_{\mathrm{SW}}=I_{\mathrm{SW}}/(w_{\mathrm{c}}\cdot d_{\text{c}})$ instead
of the switching current, making our measurements independent of the channel width.
### 2.3 Modeling
This subsection presents our approach to modeling the hTron device, focusing
on developing a behavioral model grounded in empirical data. We describe the
fitting functions and parameters, which are critical for accurately
replicating and predicting the hTron device’s behavior.
#### 2.3.1 Physics-informed behavioral model
We developed a physics-informed behavioral model based on a set of equations
and fitting parameters to fit the $I_{\mathrm{SW}}(i_{\mathrm{H}})$
experimental data that we collected. In our approach, we treat the hTron as a
four-terminal black-box, focusing on modeling its behavior rather than its
intricate physical properties, which we consider unknown.
The fitting parameters can typically vary from one device to another, even across two devices with the same geometry. However, we found correlations between
the fitting parameters and the device’s widths ($w_{\mathrm{c}}$ and
$w_{\mathrm{h}}$), allowing us to predict the behavior of a device from its
geometry.
#### 2.3.2 Fitting functions
We captured the chain of events governing the hTron static behavior with two analytical expressions: the first estimates the channel temperature from a heater current input, and the second predicts the switching current at that temperature.
First we estimate the channel temperature from the heater current:
$t_{\mathrm{Ch}}(i_{\mathrm{H}})=\left[(T_{\mathrm{C}}^{4}-T_{\mathrm{SUB}}^{4})\cdot\left(\frac{i_{\mathrm{H}}}{I_{\mathrm{H,SUPP}}}\right)^{\eta}+T_{\mathrm{SUB}}^{4}\right]^{\frac{1}{4}}$
(1)
$I_{\mathrm{H,SUPP}}$ is the suppressing current — the heater current at which
the channel is fully suppressed and reaches the critical temperature
$T_{\mathrm{C}}$. The parameter $\eta$ is the strength of the thermal
dependence between the heater current and the estimated channel temperature.
While we used $\eta=2$ in our model, its value will be discussed in Section 5.
We then find the unconstricted switching current density from the estimated
channel temperature, $\widehat{J}_{\mathrm{SW}}(t_{\mathrm{Ch}})$, defined as
the switching current density of the channel section located immediately below
the heater:
$\displaystyle\widehat{J}_{\mathrm{SW}}(t_{\mathrm{Ch}})=\widehat{J}_{\mathrm{C}}\cdot\left[1-\left(\frac{t_{\mathrm{Ch}}}{T_{\mathrm{C}}}\right)^{3}\right]^{2.1},\quad t_{\mathrm{Ch}}\leq T_{\mathrm{C}},\quad\text{with }\widehat{J}_{\mathrm{C}}=\frac{\widehat{J}_{\mathrm{SW}}(T_{\mathrm{SUB}})}{\left[1-\left(\frac{T_{\mathrm{SUB}}}{T_{\mathrm{C}}}\right)^{3}\right]^{2.1}}$
(2)
This function was introduced in [12]. The parameter $\widehat{J}_{\mathrm{C}}$
is the channel’s unconstricted critical current density, i.e., the switching
current at zero Kelvin of the channel section located immediately below the
heater. It can be obtained directly from
$\widehat{J}_{\mathrm{SW}}(T_{\mathrm{SUB}})$, the unconstricted switching
current density at substrate temperature.
By combining these two simple functions in the form
$\widehat{J}_{\mathrm{SW}}(t_{\mathrm{Ch}}(i_{\mathrm{H}}))=\widehat{J}_{\mathrm{SW}}(i_{\mathrm{H}})$,
we were able to accurately fit our measurement data
$J_{\mathrm{SW}}(i_{\mathrm{H}})$. The fitting parameters
$\widehat{J}_{\mathrm{SW}}(T_{\mathrm{SUB}})$ and $I_{\mathrm{H,SUPP}}$ are
extracted from measurements.
$\displaystyle\widehat{J}_{\mathrm{SW}}(i_{\mathrm{H}})=\widehat{J}_{\mathrm{C}}\cdot\left[1-\left(\frac{1}{T_{\mathrm{C}}}\cdot\left[(T_{\mathrm{C}}^{4}-T_{\mathrm{SUB}}^{4})\cdot\left(\frac{i_{\mathrm{H}}}{I_{\mathrm{H,SUPP}}}\right)^{\eta}+T_{\mathrm{SUB}}^{4}\right]^{\frac{1}{4}}\right)^{3}\right]^{2.1}$
(3)
This equation models the switching current density of the channel portion
located directly below the heater. However, at low heater current the devices
often exhibit a constant switching current, or plateau. This plateau suggests
a defect or constriction along the channel in an area distant from the heater
(as detailed in Section 3: Results). In that case, the measured switching
current density at substrate temperature will differ from
$\widehat{J}_{\mathrm{SW}}(T_{\mathrm{SUB}})$. To match these observations,
the model switching current density can be rewritten as:
$\displaystyle
J_{\mathrm{MODEL}}(i_{\mathrm{H}})=\mathrm{min}\left\\{\widehat{J}_{\mathrm{SW}}(i_{\mathrm{H}}),J_{\mathrm{CONSTR}}\right\\},\;J_{\mathrm{CONSTR}}\leq\widehat{J}_{\mathrm{SW}}(T_{\mathrm{SUB}})$
(4)
The value $J_{\mathrm{CONSTR}}$ is the measured switching current when the
heater current is zero. In practice, it is equal to the measured switching
current density at substrate temperature, $J_{\mathrm{SW}}(T_{\mathrm{SUB}})$.
A value of $J_{\mathrm{CONSTR}}=\widehat{J}_{\mathrm{SW}}(T_{\mathrm{SUB}})$
would mean no plateau is observed. In practice, most of the measured devices
present a plateau at low heater currents.
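To make the static model concrete, the Python sketch below (our transcription, not the authors' released code) evaluates Equations 1, 2, and 4; the default $T_{\mathrm{C}}=12.5\,\mathrm{K}$ and $T_{\mathrm{SUB}}=4.2\,\mathrm{K}$ are the values reported in this paper, and the function names are ours.

```python
# Minimal sketch of the static hTron model, Eqs. (1)-(4); parameter values
# passed in are assumed to come from the extraction of Section 3.
import numpy as np

def t_ch(i_h, T_c, T_sub, I_h_supp, eta=2.0):
    """Eq. (1): estimated channel temperature for heater current i_h."""
    return ((T_c**4 - T_sub**4) * (i_h / I_h_supp)**eta + T_sub**4)**0.25

def j_sw_hat(t, J_sw_sub, T_c, T_sub):
    """Eq. (2): unconstricted switching current density at temperature t."""
    J_c = J_sw_sub / (1 - (T_sub / T_c)**3)**2.1    # implied zero-K prefactor
    return J_c * np.clip(1 - (t / T_c)**3, 0.0, None)**2.1

def j_model(i_h, J_sw_sub, J_constr, I_h_supp, T_c=12.5, T_sub=4.2, eta=2.0):
    """Eq. (4): constricted model, min of Eq. (3) and the plateau level."""
    j = j_sw_hat(t_ch(i_h, T_c, T_sub, I_h_supp, eta), J_sw_sub, T_c, T_sub)
    return np.minimum(j, J_constr)
```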
Figure 2: End-to-end process for the hTron SPICE modeling.
a) Activation delay measurements allow to retrieve the transient device
response, and heater current-dependent switching current measurements define
the static device response. b) The heat transfer time constant
$\tau_{\mathrm{filter}}$ is computed from the activation delay. The fitting
parameters $I_{\mathrm{H,SUPP}}$ and
$\widehat{J}_{\mathrm{SW}}(T_{\mathrm{SUB}})$, together with the plateau level
$J_{\mathrm{CONSTR}}$ are extracted from $J_{\mathrm{SW}}(i_{\mathrm{H}})$
curves. c) The fitting parameters are further fed into SPICE, computing the
nanowire temperature, switching current, and retrapping current at each
simulation time step using the corresponding fitting functions. The heat
transfer through the oxide layer is simulated by a lumped-element RC circuit,
giving rise to a delay in the hTron response to an input heater pulse. d) The
switching and retrapping current are finally used to define the hotspot
behavior, that is either growing or retrapping depending on the current
flowing in the channel $I_{\mathrm{Ch}}$. The device geometry and NbN
resistance define the hotspot growth rate and normal resistance.
### 2.4 SPICE Implementation
In this subsection we describe the steps undertaken to build the simulation
model in SPICE [25], thus enabling the use of the modeled hTron device in
circuits. The SPICE model file is available in the supplementary material.
When a particular device with channel width $w_{\mathrm{c}}$ and heater width
$w_{\mathrm{h}}$ is to be characterized, the device geometry and wafer
material properties are gathered prior to performing electrical measurements.
The NbN thickness is measured using an ellipsometer [26], and the widths and
lengths are measured from scanning electron micrographs. The NbN sheet
resistance, which is used by the hotspot growth model, is also acquired. The
steps showing the building of a SPICE model from the electrical
characterization of a device are schematically represented in Figure 2.
In a first step (Figure 2a), electrical measurements must be performed. The
activation delay $\tau_{\mathrm{on}}$ (delay between the input of a heater
pulse and the channel switching) is measured to further define the transient
behavior of the device. The heater current-dependent switching current density
$J_{\mathrm{SW}}(i_{\mathrm{H}})$ is also measured to set the static device
behavior.
Subsequently (Figure 2b), the time constant of the heat transfer sub-circuit
$\tau_{\mathrm{filter}}$, the suppressing current $I_{\mathrm{H,SUPP}}$, the
unconstricted switching current at substrate temperature
$\widehat{J}_{\mathrm{SW}}(T_{\mathrm{SUB}})$, and the plateau level
$J_{\mathrm{CONSTR}}$ are extracted from the measurement data. Specifically,
$\tau_{\mathrm{filter}}$ is directly computed from $\tau_{\mathrm{on}}$, and
$I_{\mathrm{H,SUPP}}$, $\widehat{J}_{\mathrm{SW}}(T_{\mathrm{SUB}})$, and
$J_{\mathrm{CONSTR}}$ are obtained from $J_{\mathrm{SW}}(i_{\mathrm{H}})$
curves using the method detailed in Section 3: Results.
In a third step (Figure 2c), the SPICE behavioral model uses the fitting
parameters to compute the switching current $I_{\mathrm{SW}}$ and retrapping
current $I_{\mathrm{R}}$ from a heater current input. The temporal behavior of
the heat transfer between the heater and the NbN layer through the oxide is
modeled using a simple heat transfer circuit with lumped elements (low-pass
filter), whose time constant is $RC=\tau_{\mathrm{filter}}$. Without this
added sub-circuit, the temperature of the nanowire would start to grow
instantaneously with a heater current input, which does not reflect real
observations. The exact computation of $\tau_{\mathrm{filter}}$ can be found
in the supplementary material.
The channel temperature and switching current are then computed using
expression-based functions defined using the .func SPICE directive. This
directive computes the value of a custom expression without the need to add a voltage node, and a SPICE behavioral source allows one to probe the value of the function. Due to the low computational cost and analytical nature of the fitting functions, the SPICE implementation of our fitting functions is a straightforward process. This simplicity is an asset, as the solver updates
the temperature, switching current and retrapping current at each time step,
in response to the input heater current.
Finally, the channel current is compared to the switching and retrapping
currents to compute the hotspot resistance (Figure 2d). A channel current
above the switching current threshold would induce hotspot growth, after which
it would eventually reset by decreasing below the retrapping current. The
SPICE model of the non-linear inductor and hotspot behavior was based on the
superconducting nanowire SPICE model introduced by Berggren et al. [27]; however, the model was modified to embed a hotspot growth circuit based on the built-in SPICE integrator function sdt(). The work by El Dandachi [28] has shown that this new approach presents multiple benefits over the previous nanowire SPICE model.
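As a rough Python analogue of this pipeline (our simplification, not the SPICE netlist itself), the sketch below chains a first-order low-pass on the heater drive, a static model such as the one from Section 2.3, and a comparator-style hotspot state; which physical quantity is filtered and how retrapping is resolved are deliberately simplified assumptions rather than the actual sub-circuits.

```python
# Schematic behavioral loop mirroring Figure 2c-d (assumption-laden sketch).
# static_model: maps the filtered heater current to a switching current in
# amperes, e.g. lambda i: j_model(i, ...) * w_c * d_c from the earlier sketch.
def simulate(i_h_wave, i_ch, I_r, dt, tau_filter, static_model):
    i_h_filt, hotspot, trace = 0.0, False, []
    for i_h in i_h_wave:
        i_h_filt += dt / tau_filter * (i_h - i_h_filt)  # first-order RC lag
        I_sw = static_model(i_h_filt)                   # static response
        if not hotspot and i_ch > I_sw:
            hotspot = True                              # channel switches
        elif hotspot and i_ch < I_r:
            hotspot = False                             # channel retraps
        trace.append((I_sw, hotspot))
    return trace
```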
## 3 Results
Figure 3: Measurement results and method for extracting model parameters
Experimental data of the switching current density of 9 different hTron
geometries, plotted against a) the substrate temperature and against b) the
heater current. Equation 2 was fitted to one of the data curves, showing a
smooth transition from $T_{\mathrm{SUB}}$ to $T_{\mathrm{C}}$. Heater-
dependent switching current curves show a plateau at low heater currents,
indicating that the heater current does not modulate the channel switching
current. c) Method developed to extract the model parameters from
$J_{\mathrm{SW}}(i_{\mathrm{H}})$ measurement plots. A line is fitted to the
linear part of the measurement data, which sets the non-physical fitting
parameters $\widetilde{I}_{\mathrm{H}}$ and $\widetilde{J}_{\mathrm{SW}}$
(indicated by $\Box$). The suppressing current $I_{\mathrm{H,SUPP}}$ and the
unconstricted switching current at substrate temperature
$\widehat{J}_{\mathrm{SW}}(T_{\mathrm{SUB}})$ (indicated by $\triangle$), that
define the unconstricted model $\widehat{J}_{\mathrm{SW}}(i_{\mathrm{H}})$,
are further derived from the non-physical parameters. In this example, a
constriction strongly limits the switching current of the measured device at
low heater current, but can be taken into account by defining
$J_{\mathrm{MODEL}}(i_{\mathrm{H}})=\mathrm{min}\\{\widehat{J}_{\mathrm{SW}}(i_{\mathrm{H}}),J_{\mathrm{CONSTR}}\\}$.
In this section we present the results of our research, covering the fabrication, the device characterization, and the model design. We further analyze the model parameters and explain their dependence on device geometry. Finally, we compare simulations and measurements of the device activation delay.
### 3.1 Fabrication results
The deposited NbN film shows a critical temperature of $T_{\mathrm{C}}\simeq 12.5\,\mathrm{K}$, a thickness — measured with ellipsometry — of $d_{\text{c}}=23.6\,\mathrm{nm}$, and a sheet resistance of $R_{\mathrm{SHEET}}=77.9\,\Omega/\square$. The length of the channel and heater strips is $l_{\text{c}}=l_{\text{h}}=10\,\mu\mathrm{m}$. Using these parameters, a depairing current density of $J_{\text{dep}}=16.44\times 10^{10}\,\mathrm{A/m^{2}}$ was computed using Equation 13 of Charaev et al. [29]. Heater resistances
of $44.2\,\Omega$, $10.4\,\Omega$, and $7.2\,\Omega$ were measured on devices with heater widths of $0.1\,\mu\mathrm{m}$, $0.5\,\mu\mathrm{m}$, and $1\,\mu\mathrm{m}$, respectively; however, device-to-
device variations are likely to be observed. The heater resistance could be
engineered by tuning the heater length, which would not impair the device
operation. However, power is wasted to the substrate if the heater length is
greater than the channel width.
### 3.2 Device characterization
As shown in Figure 3a, the channel switching current density of the measured devices is smoothly reduced with temperature, until the channel is fully suppressed at $T_{\mathrm{C}}\simeq 12.5\,\mathrm{K}$. Moreover, the experimental data agrees well with the fitting function $J_{\mathrm{SW}}(T)=4.48\times 10^{10}\cdot\left(1-\left(T/T_{\mathrm{C}}\right)^{3}\right)^{2.1}\,\mathrm{A/m^{2}}$, which was fitted to one particular measurement curve; the other curves are therefore not expected to match this fit exactly.
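A minimal fitting sketch of this step is shown below; the data arrays are synthetic stand-ins for one measured $J_{\mathrm{SW}}(T)$ curve, $T_{\mathrm{C}}$ is fixed at the measured $12.5\,\mathrm{K}$, and only the prefactor is fitted, mirroring the quoted expression.

```python
# Fit the prefactor of Eq. (2)'s temperature dependence to one J_SW(T) curve.
# T and J below are synthetic stand-ins for measurement data.
import numpy as np
from scipy.optimize import curve_fit

T_C = 12.5                                        # K, measured film T_C
T = np.linspace(4.2, 12.0, 20)                    # example temperatures (K)
J = 4.48e10 * (1 - (T / T_C)**3)**2.1             # stand-in "measured" data

def model(T, J_c):
    return J_c * (1 - (T / T_C)**3)**2.1

(J_c_fit,), _ = curve_fit(model, T, J, p0=[5e10])
print(f"fitted prefactor: {J_c_fit:.3e} A/m^2")   # paper quotes 4.48e10
```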
On the other hand, Figure 3b shows the switching current density’s dependence
on heater current. This plot is used to extract the two fitting parameters
$\widehat{J}_{\mathrm{SW}}(T_{\mathrm{SUB}})$ and $I_{\mathrm{H,SUPP}}$,
together with the additional value $J_{\mathrm{CONSTR}}$. Two comments can be
made on this plot: (1) a plateau appears for low heater currents, and (2) the
suppressing current is correlated to the heater width but not to the channel
width. These observations are discussed in Section 3.4 and Section 3.5,
respectively.
### 3.3 Parameter Extraction
Here we explain the step-by-step method we developed to extract the two model parameters from a particular $J_{\mathrm{SW}}(i_{\mathrm{H}})$ measurement curve, represented schematically with a typical measurement curve in Figure 3c; a code sketch of these steps follows the list. The key idea is to recover the measurements that would have been observed if the constriction outside the heated area (i.e., the plateau) had not been there. To do so, we use the information contained in the linear part of the $J_{\mathrm{SW}}(i_{\mathrm{H}})$ measurement data:
1. 1.
Fit a straight line to the linear part of the curve, ignoring the plateau on
the left side and the part on the right side (where the switching current
reaches zero).
2. 2.
This linear fit defines $\widetilde{J}_{\mathrm{SW}}$ and
$\widetilde{I}_{\mathrm{H}}$, the intersection between the line and the y and
x axis, respectively.
3. 3.
The fitting parameters can be recovered:
$\widehat{J}_{\mathrm{SW}}(T_{\mathrm{SUB}})=\alpha\cdot\widetilde{J}_{\mathrm{SW}}$,
and $I_{\mathrm{H,SUPP}}=\beta\cdot\widetilde{I}_{\mathrm{H}}$. The $\alpha$
and $\beta$ constants are correction parameters obtained by fitting a straight
line to the fitting function directly. These constants simply account for the
fitting function's curvature, and are valid for a given value of $\eta$. We
found $\alpha=0.88$ and $\beta=1.25$ for $\eta=2$. Plugging
$I_{\mathrm{H,SUPP}}$ into Equation 1 and
$\widehat{J}_{\mathrm{SW}}(T_{\mathrm{SUB}})$ into Equation 2 gives the
unconstricted model $\widehat{J}_{\mathrm{SW}}(i_{\mathrm{H}})$ as defined in
Equation 3, which represents the measurement we would observe if there was no
plateau.
4. 4.
The final model expression — including the plateau — is defined in Equation 4:
$J_{\mathrm{MODEL}}(i_{\mathrm{H}})=\mathrm{min}\\{\widehat{J}_{\mathrm{SW}}(i_{\mathrm{H}}),J_{\mathrm{CONSTR}}\\}$,
the minimum between the unconstricted model
$\widehat{J}_{\mathrm{SW}}(i_{\mathrm{H}})$ and the constriction level
$J_{\mathrm{CONSTR}}$. Any $J_{\mathrm{CONSTR}}$ value can be set in order to
set various constriction levels, depending on the application.
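The sketch below transcribes these four steps in Python (our illustration); `i_h` and `j_sw` are assumed to hold one measured $J_{\mathrm{SW}}(i_{\mathrm{H}})$ curve, and `lo`/`hi` delimit its linear part, which in practice has to be chosen by inspection.

```python
# Transcription of the extraction steps above; alpha and beta are the
# correction constants quoted for eta = 2.
import numpy as np

ALPHA, BETA = 0.88, 1.25                          # valid for eta = 2

def extract_params(i_h, j_sw, lo, hi):
    slope, intercept = np.polyfit(i_h[lo:hi], j_sw[lo:hi], 1)  # step 1
    j_tilde = intercept                           # y-axis crossing (step 2)
    i_tilde = -intercept / slope                  # x-axis crossing (step 2)
    j_sw_sub_hat = ALPHA * j_tilde                # step 3
    i_h_supp = BETA * i_tilde                     # step 3
    j_constr = j_sw[0]                            # plateau level at i_H = 0 (step 4)
    return j_sw_sub_hat, i_h_supp, j_constr
```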
Figure 4: Results analysis: prediction of the model parameters
a) In black: measured switching current density at substrate temperature
$J_{\mathrm{SW}}(T_{\mathrm{SUB}})$ (which can be seen in the form of a
plateau) against the device channel width. In blue: constriction factor $C$,
defined as the measured critical current density divided by the predicted
switching current density. The average constriction factor is similar for
$w_{\mathrm{c}}=1\,\mu\mathrm{m}$ and $0.5\,\mu\mathrm{m}$ but decreases significantly for $w_{\mathrm{c}}=0.1\,\mu\mathrm{m}$. b) Predicted
fitting parameter $\widetilde{I}_{\mathrm{H}}$ dependence on the heater width,
with a power-law fit. For both plots, each point is an average over all devices with the indicated heater or channel width.
### 3.4 Analysis of constrictions outside of the heated area
The presence of a plateau at low heater current in Figure 3b suggests that
most devices’ switching currents are limited by one or multiple weak spots
located along the channel, away from the heated area. At low heater currents,
these weak spots switch at lower channel currents than the channel portion
below the heater. We therefore observe a lower switching current than that of
the heated region. As the heater current increases, a point is reached where
the heated part of the channel starts to switch at a current lower than that
of the weak spot. This marks a significant change, as the plateau abruptly
collapses. Beyond this point, with further increases in heater current, the
switching occurs directly under the heater, where the switching current is at
its minimum. This process continues until the channel is completely
suppressed.
Constrictions are common candidates for weak spots in superconducting
nanowires [30]. In Figure 4a (in black), we plotted the switching current
density of the weak spot $J_{\mathrm{SW}}(T_{\mathrm{SUB}})$ against the
channel width. It can be seen that the switching current density of the weak
spot is similar for $1\,\mu\mathrm{m}$-wide channels and $0.5\,\mu\mathrm{m}$-wide channels, but decreases significantly for $0.1\,\mu\mathrm{m}$-wide channels,
while also showing greater variance. The same trend can also be seen with the
constriction factor $C$, plotted in blue, defined as the ratio between the
measured switching current at substrate temperature
$J_{\mathrm{SW}}(T_{\mathrm{SUB}})$ and the ideal predicted critical current
at substrate temperature $\widehat{J}_{\mathrm{SW}}(T_{\mathrm{SUB}})$. The fact that the devices with the narrowest channels are more constricted may result from the channel line-width roughness, which degrades as the width decreases [31].
Finally, if our fabrication process were perfect, $\widehat{J}_{\mathrm{SW}}(T_{\mathrm{SUB}})$ should in theory be the same for all devices from the same wafer, and should not depend on the channel width.
practice, despite some variance, the average
$\widehat{J}_{\mathrm{SW}}(T_{\mathrm{SUB}})$ value is indeed comparable
across devices with different channel widths. This would imply that the
channel’s average switching current density is similar among devices, despite
showing localised constrictions.
### 3.5 Analysis and prediction of the suppressing current
The suppressing current parameter $I_{\mathrm{H,SUPP}}$ has smaller values for
narrower heaters, and can be expressed as a function of the heater width
$w_{\mathrm{h}}$, as shown in Figure 4b. The function
$\widetilde{I}_{\mathrm{H}}(w_{\mathrm{h}})=16.9\cdot w_{\mathrm{h}}^{2/3}$,
resulting from a power-law fitting, predicts the suppressing current for
different heater widths. As reducing $w_{\mathrm{h}}$ increases both the
resistance and the dissipated power per unit area, the suppressing current
decreases faster for narrower heaters. Here, the fitting parameter, valid for
an entire wafer, embeds information about the oxide thickness and the heater's material properties.
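For a quick prediction, the quoted power law can be combined with the $\beta$ correction from Section 3.3; the snippet below assumes widths in micrometers and output units matching Figure 4b's axes, which the text does not restate.

```python
# Rough wafer-specific prediction of the suppressing current from the
# heater width, using the quoted fit (units assumed from Figure 4b).
BETA = 1.25                                   # correction for eta = 2

def i_h_supp_pred(w_h_um):
    i_tilde_h = 16.9 * w_h_um ** (2 / 3)      # fitted trend for this wafer
    return BETA * i_tilde_h

print(i_h_supp_pred(0.5))                     # e.g., a 0.5 um-wide heater
```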
Figure 5: Prediction of the hTron maximum operating speed
a) Simplified schematics of the measurement setup of the hTron activation
delay $\tau_{\mathrm{on}}$, defined as the channel response time following the
application of a heater current pulse of magnitude $I_{\mathrm{H}}$. The
heater current pulse duration is much larger than $\tau_{\mathrm{on}}$. We
plotted the measured and simulated activation delay against b) the channel
bias current (for a constant heater current), and c) the heater current
amplitude (for a constant channel current). The graphs highlight the model’s
predictive capability across a large span of input currents, despite a deviation from actual measurements at lower current levels, due to the simplicity of our heat transfer model. The device heater width and channel width are $1\,\mu\mathrm{m}$ and $0.5\,\mu\mathrm{m}$, respectively.
### 3.6 hTron maximum operating speed
The transient response of the hTron is defined by the heat transfer from the
heater to the channel through the oxide. It limits the operating speed of the
hTron devices, resulting in a non-zero device activation delay,
$\tau_{\mathrm{on}}$. This delay, measured as the time needed to observe a
channel switch after the input of a heater pulse, is a key aspect of the
device behavior.
In Figure 5a, we show the measurement setup used to measure the activation
delay $\tau_{\mathrm{on}}$ at a specific $(I_{\mathrm{H}},I_{\text{Ch}})$
operating point. The results of sweeping the channel current while maintaining
a fixed heater current are shown in Figure 5b. Figure 5c illustrates the
results when the heater current is varied with a fixed channel current. In
order to reproduce this data in SPICE, we first choose an operating point $(I_{\mathrm{H}}=1455.1\,\mu\mathrm{A},\,I_{\text{Ch}}=280.3\,\mu\mathrm{A})$, and extract the activation delay from the experiments ($\tau_{\mathrm{on}}=11.85\,\mathrm{ns}$). We further set the RC time constant of our heat transfer sub-circuit to $RC=\tau_{\mathrm{filter}}=10.0\,\mathrm{ns}$, obtained directly from the activation delay value (details can be found in the supplementary material).
Sweeping the input currents in the same way in SPICE as we did in the
experiments, we observe that the model is reliable around the operating point
$(I_{\mathrm{H}}=1455.1\,\mu\mathrm{A},\,I_{\text{Ch}}=280.3\,\mu\mathrm{A})$ and for high currents.
This simple and efficient method allowed us to simulate the temporal behavior
of the hTron device, and can be used to determine the maximum operating speed
in real circuits. While the model diverges from the measurements at low
currents, the accuracy of this result could be improved with added degrees of
freedom in the heat transfer sub-circuit.
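Reusing the transient sketch from Section 2.4, the activation delay can be estimated by applying a heater step and timing the first switch, as below; the numbers are the operating point quoted above, while the mapping between $\tau_{\mathrm{filter}}$ and $\tau_{\mathrm{on}}$ remains our simplification of the supplementary-material procedure.

```python
# Estimate tau_on by stepping the heater and timing the first switch.
# Reuses simulate() and a static_model built from extracted parameters
# (both from earlier sketches), so this is an illustration only.
import numpy as np

dt, tau_filter = 0.01e-9, 10.0e-9             # 10 ps steps, RC = 10 ns
i_h_step = np.full(5000, 1455.1e-6)           # 50 ns heater step, 1455.1 uA
trace = simulate(i_h_step, i_ch=280.3e-6, I_r=0.0, dt=dt,
                 tau_filter=tau_filter, static_model=static_model)
tau_on_est = dt * next(k for k, (_, hs) in enumerate(trace) if hs)
```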
Figure 6: Reproduction of Baghdadi et al. [12] results from an
$I_{\mathrm{SW}}(i_{\mathrm{H}})$ measurement.
a) Circuit used to simulate the hTron behavior in SPICE, identical to the one
used in Figure 3 of [12]. The power dissipated in the heater increases the
channel’s temperature $t_{\mathrm{Ch}}$, decreasing the channel switching
current $I_{\mathrm{SW}}$. As the channel switches to the normal phase, the
bias current $I_{\text{Ch}}$ is transferred to a
$50\,\Omega$ load $R_{\text{Load}}$, allowing the
channel to reset back to the superconducting state. SPICE simulation result of
the b) effective channel temperature and of the c) electrical behavior of the
hTron device. The channel temperature and switching current were computed
using Equation 1 and Equation 2, respectively. The device and simulation
parameters were given by [12], while the model parameters
$I_{\mathrm{H,SUPP}}$ and $J_{\mathrm{SW}}(T_{\mathrm{SUB}})$ were extracted
from the $I_{\mathrm{SW}}(i_{\mathrm{H}})$ plots using our method presented in
this work.
## 4 Model performance and comparison with previous attempt
In this section, we applied our parameter extraction method to heater-
dependent switching current measurements from Figure 5 of Baghdadi et al.
[12], successfully replicating the device behavior as shown in Figure 6. The
measured device switching current at substrate temperature is approximately $J_{\mathrm{SW}}(T_{\mathrm{SUB}})\simeq 120\,\mu\mathrm{A}$, and the $J_{\mathrm{SW}}(i_{\mathrm{H}})$ curve does not present a plateau at low heater current.
Our model closely matches the measurements but slightly overestimates the measured $J_{\mathrm{SW}}(T_{\mathrm{SUB}})$. This discrepancy might be due to channel constriction. From their measurements, we extracted the parameters $I_{\mathrm{H,SUPP}}=80.9\,\mu\mathrm{A}$ and $\widehat{J}_{\mathrm{SW}}(T_{\mathrm{SUB}})=122.9\,\mu\mathrm{A}$. Applying our method to the previous 3D electrothermal model results curve from Figure 5 of [12] resulted in $I_{\mathrm{H,SUPP}}=79.5\,\mu\mathrm{A}$ and $\widehat{J}_{\mathrm{SW}}(T_{\mathrm{SUB}})=116.1\,\mu\mathrm{A}$, whereas the previous model predicted $J_{\mathrm{SW}}(T_{\mathrm{SUB}})=113.9\,\mu\mathrm{A}$. We computed a retrapping current value of $I_{\mathrm{R}}=20.3\,\mu\mathrm{A}$ using the relation given in [27], which is comparable to Baghdadi et al.'s measured value of approximately $I_{\mathrm{R}}\approx 21\,\mu\mathrm{A}$.
In our simulation (Figure 6a), the hTron, shunted by a $50\,\Omega$ load and biased at $100\,\mu\mathrm{A}$, switches when a heater pulse is applied, diverting the current into the resistor. The channel then reverts to its superconducting state, allowing the current to return with a time constant determined by the channel inductance and load resistance. Notably, our SPICE simulation (Figure 6b and Figure 6c) does not account for the temperature rise from Joule heating when the channel is in the normal state. However, this effect is taken into account by our hotspot growth rate equation [27], which keeps our model valid. The pulse and hotspot resistance, shown in Figure 6c, align with Baghdadi et al.'s results (Figure 3c of [12]), validating our model.
In order to simulate the device transient behavior, we matched the device activation time with the one observed in Figure 3c of [12], estimated as $\tau_{\mathrm{on}}\approx 750\,\mathrm{ps}$. This gives an RC time constant of $\tau_{\mathrm{filter}}=1.6\,\mathrm{ns}$ for the lumped-element sub-circuit that simulates heat transfer.
All simulation parameters used in [12] are listed here: $w_{\mathrm{c}}=600\,\mathrm{nm}$, $w_{\mathrm{h}}=500\,\mathrm{nm}$, channel inductance $L_{\text{Ch}}=500\,\mathrm{nH}$, $T_{\mathrm{SUB}}=3\,\mathrm{K}$, $T_{\mathrm{C}}=8.4\,\mathrm{K}$, NbN thickness $d_{\text{c}}=20\,\mathrm{nm}$, $R_{\mathrm{SHEET}}=470\,\Omega/\square$. Note that the heater's thickness and resistance, embedded in our fitting parameters, are unused by our model.
Finally, we compared both models' execution speeds by running the published simplified 0D electrothermal model in SPICE, solving heat equations with behavioral sources as implemented by Castellani [21], and comparing it to our model in a $250\,\mathrm{ns}$-long simulation of the circuit described in Figure 6a. We set the SPICE parameter $\text{{reltol}}=10^{-6}$ for both models, as suggested by El Dandachi [28], and gathered the simulation results in Table 1. In a first simulation run, we varied the maximum simulation timestep $\Delta t_{\mathrm{max}}$ from $1\,\mathrm{ps}$ to $10\,\mathrm{ps}$ over 11 steps; our model took less than eleven seconds to run, while the previous model took eight hours. In a second run, we varied the maximum timestep from $1\,\mathrm{fs}$ to $50\,\mathrm{ps}$ across 48 steps; our model took less than four hours to run, while the previous model had to be halted after two days because it was stuck at a significantly slow simulation speed on the order of $\approx 1\,\mathrm{fs/s}$. At this speed, it would have taken almost 3 days to complete the $250\,\mathrm{ns}$ of simulation before moving to a higher value of maximum timestep.
To conclude, the enhanced simulation speed of our model can be mainly
explained by two factors. First, the previous model required solving a greater
number of nodes due to its approach of using behavioral sources to solve the
heat transfer equations of the entire system. Second, unlike our model, it did
not incorporate our enhanced hotspot integrator sub-circuit, potentially
leading to a higher number of timepoints [28]. While our simulations were
conducted on a personal computer, and performance can vary between different
machines, our results are significant. We successfully reduced typical
simulation times from hours to mere minutes or even seconds, making such
simulations feasible in practice on a personal computer for the first time,
opening the door for their widespread use.
| Model | Run 1: # Time-points | Run 1: Simulation Time | Run 2: # Time-points | Run 2: Simulation Time |
|---|---|---|---|---|
| Previous model | 813,628 | $8.0$ hours | N/A | N/A |
| Our model | 2,045 | $10.8$ seconds | 1,189,652 | $3.3$ hours |

Table 1: Comparison of simulation speed between the previous model designed by Baghdadi et al. [12] and our model for the same SPICE circuit but different time resolutions (Run 1: $\Delta t_{\text{max}}=1\,\mathrm{ps}$…$10\,\mathrm{ps}$; Run 2: $\Delta t_{\text{max}}=1\,\mathrm{fs}$…$50\,\mathrm{ps}$).
## 5 Discussion
This section discusses the challenges of predicting weak spots in
superconducting nanowires, the impact of hidden parameters in our model, and
the prediction of the device activation delay and operating speed,
highlighting both the strengths and limitations of our approach.
In the current state of research, accurately predicting the location and
number of weak spots — or areas with reduced critical current — along a
channel remains unfeasible. Weak spots located away from the heated area are
responsible for the plateau, which is not part of the behavior of the device,
but rather due to channel width-dependent effects arising from the fabrication
process. Therefore, we chose to remove the plateau during the parameter extraction, effectively fitting to the linear part of the $J_{\mathrm{SW}}(i_{\mathrm{H}})$ plots. As a result, the ideal model $\widehat{J}_{\mathrm{SW}}(i_{\mathrm{H}})$ is a hypothetical curve that probes the critical current directly below the heater, theoretically representing the channel's critical current as if there were no weak spots. The
plateau can then be set to any constriction level $J_{\mathrm{CONSTR}}$. While
we observed that the plateau level is smaller for narrower channels, it cannot
be accurately predicted due to its intrinsic stochastic nature. Locating and
removing constrictions in superconducting nanowires would be beneficial both
in photon detection and device operation.
Regarding the parameter $\eta$ in Equation 1, we used the value of $\eta=2$ in
this paper based on its optimal fit with Baghdadi et al.’s measurement data
[12]. However, Butters suggests $\eta=3$ (see Equation 3.30 of [23]), implying
a stronger dependence of the channel temperature with the heater current. The
value of $\eta=3$ aligns more closely with our measurement data, especially
when $i_{\mathrm{H}}$ is close to $I_{\mathrm{H,SUPP}}$. However, our devices
are severely constricted, showing a plateau in most cases. Thus more
experimental data on less constricted devices would help determine the best
value for $\eta$. While a more complicated equation could potentially better
describe the channel temperature as a function of heater current, the strength
of our approach lies in its simplicity: introducing an additional free
parameter would counteract this benefit, making our model less straightforward
without a substantial increase in predictive accuracy for our specific
application.
In this study, the measured devices are from a single wafer, implying that
some geometrical parameters such as the oxide thickness do not explicitly
appear in the fitting equations, and are thus hidden. Consequently, new
measurements and recalibration of the model’s parameters would be required to
predict the behavior of devices with a different oxide thickness. Even though
the previous model does not present hidden parameters, its prediction
capabilities are limited as well. Indeed, it contains various thermal
parameters that are either extracted from literature or fitted from
measurements, and cannot be predicted from geometry. In contrast, our high-
level curve-fitting approach simplifies the analysis, allows for a systematic
process of parameter extraction for any geometry, and outperforms the
previous attempt in accuracy and speed.
Finally, the heat transfer from the heater to the channel through the oxide is
a critical factor limiting the operating speed of the devices, resulting in a
non-zero device activation delay, $\tau_{\mathrm{on}}$. In complicated
circuits, this delay becomes an increasingly significant figure for circuit
designers. Indeed, accurately predicting and simulating the maximum operating
speed of a real-world circuit is essential. In our approach, we intentionally
over-simplified the way we modeled the heat transfer, offering the advantage
of only including one parameter — an RC time constant denoted
$\tau_{\mathrm{filter}}$, which can be directly obtained from
$\tau_{\mathrm{on}}$ — making it straightforward to implement in practice.
However, as seen in Figure 5, our model’s prediction of the activation delay
is limited for low input currents. At higher currents, though, our model
accurately predicts the activation delay, allowing one to simulate the
temporal behavior of a hTron-based circuit close to the maximum operating
speed. As a final note, while the minimum measured activation delay is on the order of $1$ to $10\,\mathrm{ns}$ for our particular device, it is noteworthy that the fabricated devices were not designed for speed. A thinner oxide layer, as
well as narrower heater wires would greatly improve the reaction speed of
hTron devices.
## 6 Conclusion
Our approach to modeling the hTron device relies on the fitting of simple
physics-informed equations to a relatively large amount of experimental data.
All the fitting parameters are extracted from basic measurements, and no
arbitrary parameter has to be tuned to better fit our measurements. The static
behavior was modeled from the critical current measurements performed on 17
hTrons having 9 different geometries. We were able to extract the two relevant
fitting parameters, and extrapolate them for hTron geometries that were not
measured. Moreover, we were able to simulate the hTron transient response over
a large span of current inputs thanks to activation delay measurements — delay
between the application of a heater current pulse and the switching of the
channel. This transient behavior is critical in circuit design as it sets the
maximum operating speed of a circuit. We applied our model on measurement data
published by Baghdadi et al. [12], and obtained a better agreement than their
attempt. Moreover, we compared the simulation speed of both approaches on a
personal computer in SPICE, and obtained a result in seconds, while the
previous approach completed the same simulation in hours. Consequently, our model and SPICE implementation are tailored for efficient design iterations in nanowire-based electronics. Finally, our model effectively simulates devices
for any constriction — or plateau — level, a critical feature given the
unpredictability of these occurrences during circuit design.
Our findings corroborate the positioning of the hTron as an alternative to the
nTron device for applications requiring both high output impedance and
electrical isolation between gate and channel. From a broader viewpoint, superconducting nanowires could complement Josephson junctions in areas where they are currently lacking. Looking forward, we will use our model to simulate larger and more complex circuits such as SNSPD array readouts. The high simulation speed, accuracy, and simplicity of our approach position this work as a promising first step toward building useful simulation tools for nanowire-based circuits.
## Acknowledgements
The initial stages of the research were sponsored by the U.S. Department of
Energy, Office of Science, Office of Basic Energy Sciences, under Award Number
DE-AC02-07CH11359. The completion of data analysis and presentation were
sponsored by the National Science Foundation under Grant No. OMA-2137723. The
authors thank Alessandro Restelli, Joshua Bienfang and Ilya Charaev for
helpful scientific discussions. V.K. would like to thank Edoardo Charbon from
the Advanced Quantum Architecture (AQUA) Laboratory at the Swiss Federal
Institute of Technology (EPFL). O.M. acknowledges support from the NDSEG
Fellowship program. M. Colangelo acknowledges support from MIT Claude E.
Shannon award.
## References
* Ladd et al. [2010] Ladd, T.D., Jelezko, F., Laflamme, R., Nakamura, Y., Monroe, C., O'Brien, J.L.: Quantum computers. Nature 464(7285), 45–53 (2010)
* Oripov et al. [2023] Oripov, B.G., Rampini, D.S., Allmaras, J., Shaw, M.D., Nam, S.W., Korzh, B., McCaughan, A.N.: A superconducting-nanowire single-photon camera with 400,000 pixels. arXiv preprint arXiv:2306.09473 (2023)
* Islam et al. [2023] Islam, M.M., Alam, S., Hossain, M.S., Roy, K., Aziz, A.: A review of cryogenic neuromorphic hardware. Journal of Applied Physics 133(7) (2023)
# Vacuum Stability vs. Positivity in Real Singlet Scalar Extension of the
Standard Model
Parsa Ghorbani Dipartimento di Fisica dell’Università di Pisa, Italy
INFN, Sezione di Pisa, Italy
###### Abstract
We assume a generic real singlet scalar extension of the Standard Model living
in the vacuum $(v,w)$ at the electroweak scale, with $v=246$ GeV and $w$ being
the Higgs and the singlet scalar vacuum expectation values, respectively. By
requiring absolute vacuum stability for the vacuum $(v,w)$, the positivity
condition and the perturbativity up to the Planck scale, we show that the
viable space of parameters in the model is strongly constrained for various
singlet scalar vacuum expectation values $w=0.1,1,10,100$ TeV. Also, it turns
out that the singlet scalar mass can range from a few GeV up to less than a TeV.
## 1 Introduction
The stability of the vacuum in the Standard Model (SM) was first used as a
tool to put bounds on the Higgs mass and the masses of the fermions in the
SM framework [5]. After the discovery of the Higgs particle by the ATLAS and CMS
experiments at the LHC in 2012, the value of the Higgs mass was determined
accurately to be around $125$ GeV [1, 6]. The mass of the top quark (as the
heaviest quark) was already known to be about $176$ GeV [2]. Having the Higgs
and the top quark masses in hand, and knowing the value of the Higgs vacuum
expectation value (VEV) to be $v=246$ GeV, the status of the vacuum stability
in the SM becomes clear: the SM vacuum becomes metastable at an energy scale
around $10^{10}$ GeV [3, 8, 4]. This happens because in the SM, the top quark
has a large negative contribution in the renormalization group equations (RGE)
for the Higgs quartic coupling $\lambda_{\text{h}}$, so that the Higgs quartic
coupling becomes negative at higher energy scales which makes the vacuum with
$v\neq 0$ metastable.
There is a consensus in the literature that in the presence of more scalars,
the vacuum can become stable up to the Planck scale. For instance, the vacuum
stability in extensions of the SM by adding an extra real scalar with
$\mathbb{Z}_{2}$ symmetry is studied in [17, 11]; a complex scalar is employed
to address the vacuum stability in [9, 18]; scalars are used in scale-invariant
extensions of the SM in [13, 20]; the di-Higgs production in the singlet scalar
model is investigated in [7]; the vacuum stability of the 2HDM at the
electroweak (EW) scale is studied in [12]; and the vacuum is stabilized by
Higgs–inflaton mixing in [10].
The point that we want to emphasize in this paper is the importance of the
positivity condition (i.e., the requirement of having a positive definite
potential at all scales) when studying the vacuum stability in a given model.
At the EW scale, say at the scale $\mathcal{O}(m_{t})$, the free parameters of
an extended SM model must be chosen in a way that respects the positivity
condition. However, when solving the RGEs there is no guarantee that the
positivity condition will remain satisfied at higher energy scales. This should
be considered alongside the possible change of the vacuum structure at higher
scales.
In the case of the SM, the vacuum stability and the positivity are delicately
related. The potential in the SM is given by
$V_{\text{SM}}=-\mu_{\text{h}}h^{2}/2+\lambda_{\text{h}}h^{4}/4$ for which if
$\lambda_{\text{h}}>0$ the theory develops a non-zero VEV for the Higgs
scalar. If $\lambda_{\text{h}}<0$ the Higgs VEV can only be vanishing. Due to
the large negative contribution of the heavy top quark in the RGE for
$\lambda_{\text{h}}$, the Higgs quartic coupling becomes negative at the scale
$10^{10}$ GeV, and thereafter the Higgs non-zero VEV is no longer a minimum of the
theory. Therefore, in the SM the sign of the quartic coupling
$\lambda_{\text{h}}$ changes the structure of the vacuum and controls the
vacuum stability. At the same time, the sign of the Higgs quartic coupling
confirms or violates the positivity of the potential. If
$\lambda_{\text{h}}<0$ the theory is no longer well defined before thinking
about the vacuum structure of the model. Therefore, in the case of the SM, the
quartic coupling $\lambda_{\text{h}}$ plays a dual role as a tuner for the
vacuum stability and the positivity of the model.
When more scalars are added in the theory, the vacuum stability and the
positivity condition should be investigated separately. Both conditions even
if consistent at the EW scale, may become in conflict at higher scales. This
might resemble although maybe not directly related to the situation that for a
random set of parameters in a multi-scalar potential having at the same time a
small Higgs mass and a small cosmological constant is not very probable (see
[16]). In general with more scalars the vacuum structure of the model gets
more complicated; as the number of vacua grows rapidly with the number of
scalars (see [14] for a two-scalar example). The positivity condition, which is
obtained from the quartic part of the potential, can be very involved in
general with more scalars [19], unless some symmetry is imposed.
The perturbativity of the theory is another constraint which must be taken
into account when running the parameters of the model to higher scales. The
main question is how consistent the vacuum stability constraint, the
positivity condition and the perturbativity are in a multi-scalar theory from
the electroweak up to the Planck scale.
In this article, as a first step towards answering the question posed above, we
extend the SM with a generic real scalar potential, including also the scalar
cubic term and a linear scalar–Higgs interaction. We will study the scale
evolution of the only vacuum of the model, i.e. $(v,w)$ at the EW scale, and
will argue for absolute vacuum stability up to the Planck scale. By absolute
vacuum stability we mean that the potential possesses only one single minimum
over the whole range of energy scales, here from the EW scale up to the Planck
scale. This will be confronted with the scale evolution of the positivity
condition as well as of the perturbativity up to the Planck scale. Using the
aforementioned constraints we put strong bounds on the free parameters of the
model. For simplicity we use the term SPP conditions when we consider the
stability, positivity and perturbativity conditions altogether.
The rest of the paper is arranged as follows. In section 2 we introduce
the model, giving the details of the vacuum solution $(v,w)$ and the positivity
condition. In section 3 we discuss the RGEs, and in section 4 we present our
numerical analysis. We summarize the results in section 5.
## 2 Vacua in Singlet Scalar Model
The vacuum stability in the real singlet scalar extension of the SM with
$\mathbb{Z}_{2}$ symmetry has been studied in [11]. The presence of the
$\mathbb{Z}_{2}$ symmetry simplifies the model considerably. For such a model,
provided that the positivity condition is taken into account, if the vacuum
solution $(v,w)$ is a minimum at a given scale $\Lambda$, it remains the
global minimum at all scales, because $(v,w)$ and the other extremum solutions of
the $\mathbb{Z}_{2}$ symmetric model, i.e. $(0,0)$, $(v,0)$ and $(0,w)$, cannot
be minima at the same time (see [15] on the vacuum structure of the
$\mathbb{Z}_{2}$ symmetric singlet scalar model).
Here we consider instead a generic real scalar extension of the SM without the
$\mathbb{Z}_{2}$ symmetry,
$w$ | $100$ GeV | $1$ TeV | $10$ TeV | $100$ TeV
---|---|---|---|---
$\lambda_{\text{h}}$ | $(0.39,0.51)$ | $(0.1,0.54)$ | $(1.9\times 10^{-4},0.39)$ | $(0.95,1)$
$\lambda_{\text{s}}$ | $(0.25,1)$ | $(0.015,0.032)$ | $(2.4,4.07)\times 10^{-4}$ | $(1.03,1.13)\times 10^{-5}$
$\lambda_{\text{hs}}$ | $(-0.17,-0.02)$ | $(-0.033,0.015)$ | $(-2900,-4.53)\times 10^{-6}$ | $(-2.37,-0.96)\times 10^{-6}$
$\kappa_{\text{s}}$ | $(-1,1)$ | $(-1,1)$ | $(-1,1)$ | $(0.49,0.64)$
$\kappa_{\text{hs}}$ | $(-1,1)$ | $(-0.054,1)$ | $(-1,0.097)$ | $(-0.55,-0.098)$
$m_{s}$ | $191$-$257$ GeV | $218$-$308$ GeV | $185$-$334$ GeV | $553$-$574$ GeV
Table 1: The allowed region for the couplings
$\lambda_{\text{h}},\lambda_{\text{s}},\lambda_{\text{hs}}$,
$\kappa_{\text{s}},\kappa_{\text{hs}}$ and $m_{s}$ for different singlet
scalar VEV benchmarks $w=0.1,1,10,100$ TeV, respecting the absolute vacuum
stability and positivity condition for the vacuum $(v,w)$ at the electroweak
scale with the assumptions $m_{H}=125$ GeV, $v_{H}=246$ GeV and $m_{s}>m_{H}$.
$V(h,s)=-\frac{1}{2}\mu^{2}_{\text{h}}h^{2}+\frac{1}{4}\lambda_{\text{h}}h^{4}-\frac{1}{2}\mu^{2}_{\text{s}}s^{2}+\frac{1}{3}\kappa_{\text{s}}s^{3}+\frac{1}{4}\lambda_{\text{s}}s^{4}+\frac{1}{2}\kappa_{\text{hs}}h^{2}s+\frac{1}{4}\lambda_{\text{hs}}h^{2}s^{2}\,.$
(1)
The positivity condition on the quartic part of the potential above is the
same as the $\mathbb{Z}_{2}$ symmetric potential and is given by
$\lambda_{\text{h}}>0$, $\lambda_{\text{s}}>0$ and
$\lambda_{\text{hs}}>0~{}\vee~{}\left(\lambda_{\text{hs}}<0~{}\wedge~{}\lambda^{2}_{\text{hs}}\leq\lambda_{\text{h}}\lambda_{\text{s}}\right)\,.$
(2)
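As a quick numerical aid, the condition in Eq. (2) can be encoded directly; the following is a minimal sketch in Python (the function name and test values are ours, not from the text):

```python
# Positivity check for the quartic part of the potential in Eq. (1),
# transcribing Eq. (2) as stated in the text.
def positivity_ok(lam_h, lam_s, lam_hs):
    if lam_h <= 0 or lam_s <= 0:
        return False
    # Either lam_hs > 0, or lam_hs <= 0 with lam_hs^2 <= lam_h * lam_s.
    return lam_hs > 0 or lam_hs**2 <= lam_h * lam_s

print(positivity_ok(0.13, 0.02, -0.04))  # True: 0.0016 <= 0.0026
```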
The vacuum solutions for the potential in Eq. (1) can only have the structures
$(0,0)$, $(0,w)$ or $(v,w)$; the VEV solution $(v,0)$ is not allowed. Unlike
the $\mathbb{Z}_{2}$ symmetric model (i.e. the potential in Eq. (1) with
$\kappa_{\text{hs}}=\kappa_{\text{s}}=0$), all three extremum solutions can be
local minima at the same time, even if we take into account the positivity
condition. For the $\mathbb{Z}_{2}$ symmetric model, after the electroweak
symmetry breaking the possible minima at the EW scale are either $(v,w)$ or
$(v,0)$, but for the generic potential in Eq. (1) the only vacuum solution at
the EW scale is inevitably $(v,w)$, which is given by,
$v=\frac{\sqrt{\mu^{2}_{\text{h}}-\kappa_{\text{hs}}w-\lambda_{\text{hs}}w^{2}}}{\sqrt{\lambda_{\text{h}}}},\qquad w=p+\xi_{-}^{1/3}+\xi_{+}^{1/3},$ (3)
where
$\begin{split}&\xi_{\pm}=q\pm\sqrt{q^{2}+\left(r-p^{2}\right)^{3}},\\\
&q=\frac{b^{3}+9a(6\kappa_{\text{hs}}\mu^{2}_{\text{h}}a-bc)}{216a^{3}},~{}~{}~{}~{}~{}~{}p=\frac{b}{6a},~{}~{}~{}~{}~{}~{}~{}r=\frac{c}{6a}\,,\\\
&a=\lambda^{2}_{\text{hs}}-\lambda_{\text{h}}\lambda_{\text{s}},~{}~{}~{}~{}~{}b=2\kappa_{\text{s}}\lambda_{\text{h}}-3\kappa_{\text{hs}}\lambda_{\text{hs}},~{}~{}~{}~{}~{}c=\kappa^{2}_{\text{hs}}-2\lambda_{\text{hs}}\mu^{2}_{\text{h}}+2\lambda_{\text{h}}\mu^{2}_{\text{s}}\,.\end{split}$
(4)
From Eq. (4), real solutions for $v$ and $w$ require $q^{2}>(p^{2}-r)^{3}$
for the reality of $w$, and
$\mu^{2}_{\text{h}}>0,\lambda_{\text{hs}}\leq-\kappa^{2}_{\text{hs}}/4\mu^{2}_{\text{h}}$
for the reality of $v$. The parameters $\mu^{2}_{\text{h}}$ and
$\mu^{2}_{\text{s}}$ can be fixed by the stationary conditions for $(v,w)$ at
a given scale $\mu=\Lambda$,
$\begin{split}&\mu^{2}_{\text{h}}=\lambda_{\text{h}}v^{2}+\lambda_{\text{hs}}w^{2}+\kappa_{\text{hs}}w\\\
&\mu^{2}_{\text{s}}=\lambda_{\text{s}}w^{2}+\lambda_{\text{hs}}v^{2}+\kappa_{\text{s}}w+\frac{\kappa_{\text{hs}}v^{2}}{2w}\end{split}$
(5)
where the Higgs VEV is fixed at $v=246$ GeV at the EW scale, and for the
scalar VEV we take benchmark values $w=100$ GeV and $w=1,10,100$ TeV at the EW
scale. Note that we choose the free parameters of the model to be
$\lambda_{\text{h}},\lambda_{\text{s}},\lambda_{\text{hs}},\kappa_{\text{s}},\kappa_{\text{hs}}$;
we do not use a mixing angle as a new free parameter; instead we directly deal
with the various $(v,w)$ inputs from scratch and investigate the properties of
the model based on the chosen vacuum.
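To make this concrete, the following minimal sketch (our own illustration, not code from the paper) fixes illustrative couplings and the vacuum $(v,w)$, recovers $\mu^{2}_{\text{h}}$ and $\mu^{2}_{\text{s}}$ from the stationary conditions of the potential in Eq. (1), and checks numerically that the gradient of $V$ vanishes at $(v,w)$. The relative factors of $1/2$ in front of $\lambda_{\text{hs}}$ follow from the normalization of Eq. (1) and may differ from other conventions.

```python
import numpy as np

# Illustrative inputs (ours): couplings within the scanned range (-1, 1)
lam_h, lam_s, lam_hs = 0.13, 0.02, -0.01
kap_s, kap_hs = 0.1, -0.05     # cubic couplings, taken in GeV units here
v, w = 246.0, 100.0            # GeV benchmark

# Stationary conditions dV/dh = dV/ds = 0 at (v, w), solved for mu_h^2, mu_s^2
# (the 1/2 factors come from the normalization of Eq. (1)):
mu_h2 = lam_h * v**2 + 0.5 * lam_hs * w**2 + kap_hs * w
mu_s2 = lam_s * w**2 + kap_s * w + 0.5 * lam_hs * v**2 + kap_hs * v**2 / (2 * w)

def V(h, s):
    """Potential of Eq. (1)."""
    return (-0.5 * mu_h2 * h**2 + 0.25 * lam_h * h**4
            - 0.5 * mu_s2 * s**2 + kap_s * s**3 / 3 + 0.25 * lam_s * s**4
            + 0.5 * kap_hs * h**2 * s + 0.25 * lam_hs * h**2 * s**2)

eps = 1e-4
print((V(v + eps, w) - V(v - eps, w)) / (2 * eps),   # ~ 0
      (V(v, w + eps) - V(v, w - eps)) / (2 * eps))   # ~ 0
```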
$w$ | $100$ GeV | $1$ TeV | $10$ TeV | $100$ TeV
---|---|---|---|---
$\lambda_{\text{h}}$ | $(0.02,0.24)$ | $(0.002,0.222)$ | $(0.00054,0.209)$ | $(8.5\times 10^{-5},0.096)$
$\lambda_{\text{s}}$ | $(0.002,0.97)$ | $(2.26\times 10^{-6},0.014)$ | $(2.54\times 10^{-6},0.00014)$ | $(3.10,11.45)\times 10^{-7}$
$\lambda_{\text{hs}}$ | $(-0.098,0.098)$ | $(-0.019,0.02)$ | $(-0.0021,0.0023)$ | $(-7.73,8.90)\times 10^{-6}$
$\kappa_{\text{s}}$ | $(-0.98,0.99)$ | $(-1,0.97)$ | $(-0.96,0.99)$ | $(-0.25,-0.05)$
$\kappa_{\text{hs}}$ | $(-1,1)$ | $(-0.95,0.99)$ | $(-0.96,0.98)$ | $(-0.71,0.95)$
$m_{s}$ | $48$-$119$ GeV | $46$-$118$ GeV | $4$-$123$ GeV | $8$-$109$ GeV
Table 2: The allowed region for the couplings
$\lambda_{\text{h}},\lambda_{\text{s}},\lambda_{\text{hs}}$,
$\kappa_{\text{s}},\kappa_{\text{hs}}$ and $m_{s}$ for different singlet
scalar VEV benchmarks $w=0.1,1,10,100$ TeV, respecting the absolute vacuum
stability and positivity condition for the vacuum $(v,w)$ at the electroweak
scale with the assumptions $m_{H}=125$ GeV, $v_{H}=246$ GeV and $m_{s}<m_{H}$.
The Hessian matrix at a given scale $\mu=\Lambda$ in the scale-dependent
vacuum $(v,w)$ is given by,
$\mathcal{H}(v,w;\Lambda)=\left(\begin{matrix}3\lambda_{\text{h}}v^{2}+\lambda_{\text{hs}}w^{2}+\kappa_{\text{hs}}w-\mu^{2}_{\text{h}}&2\lambda_{\text{hs}}vw+\kappa_{\text{hs}}v\\\
2\lambda_{\text{hs}}vw+\kappa_{\text{hs}}v&3\lambda_{\text{s}}w^{2}+\lambda_{\text{hs}}v^{2}+2\kappa_{\text{s}}w-\mu^{2}_{\text{s}}\\\
\end{matrix}\right)$ (6)
where we have dropped the scale-dependence of the couplings and the VEVs in
the matrix. Taking into account the stationary conditions on the vacuum $(v,w)$
in Eq. (5), the mass eigenvalues are,
$\begin{split}&m_{\pm}^{2}=v^{2}\lambda_{\text{h}}+w^{2}\lambda_{\text{s}}-\frac{v^{2}}{4w}\kappa_{\text{hs}}+\frac{w}{2}\kappa_{\text{s}}\pm\frac{v}{2}\times\\\
&\sqrt{\frac{v^{2}}{w^{2}}\left(\kappa_{\text{hs}}+4w\lambda_{\text{h}}\right)+\frac{w^{2}}{v^{2}}\left(\kappa_{\text{s}}+2w\lambda_{\text{s}}\right)+2\kappa_{\text{hs}}w\left(\lambda_{\text{s}}-8\lambda_{\text{hs}}\right)+4w\left(\kappa_{\text{s}}\lambda_{\text{h}}-4w\lambda_{\text{hs}}^{2}+2w\lambda_{\text{h}}\lambda_{\text{s}}\right)-4\kappa_{\text{hs}}^{2}}\\\
\end{split}$ (7)
where $m_{-}$ and $m_{+}$ can be either the Higgs or the scalar mass. In
section 4, we will consider both possibilities $m_{s}<m_{H}$ and $m_{s}>m_{H}$
at the EW scale. Although we set the initial inputs of the free parameters
such that $(v,w)$ is the absolute minimum at the EW scale, when running the
couplings to higher energy scales the vacua $(0,0)$ or $(0,w)$ may become
coexistent minima of the theory, even deeper than $(v,w)$, which results in
the instability of the vacuum at higher scales. To keep $(v,w)$ the
absolute minimum at higher scales, we need to know the mass spectrum of the other
possible vacua at higher scales. The mass matrix for the vacuum $(0,0)$ at a
given scale $\Lambda$ reads
$\mathcal{M}(0,0;\Lambda)=\begin{pmatrix}-\mu^{2}_{\text{h}}&0\\\
0&-\mu^{2}_{\text{s}}\\\ \end{pmatrix}$ (8)
and the mass matrix for the vacuum $(0,w)$ is
$\mathcal{M}(0,w;\Lambda)=\begin{pmatrix}w^{2}\lambda_{\text{hs}}+w\kappa_{\text{hs}}-\mu^{2}_{\text{h}}&0\\\
0&2w^{2}\lambda_{\text{s}}+w\kappa_{\text{s}}\\\ \end{pmatrix}$ (9)
in which the stationary condition
$\mu^{2}_{\text{s}}=w^{2}\lambda_{\text{s}}+w\kappa_{\text{s}}$ (10)
for $(0,w)$ has been used. The initial values for the parameters
$\mu^{2}_{\text{h}}$ and $\mu^{2}_{\text{s}}$ in Eqs. (8) and (9) are given by
Eq. (5). In order for $(v,w)$ to stay the global minimum up to a desired scale,
at least one of the mass eigenvalues in each of Eqs. (8) and (9) must be
negative; in this way $(v,w)$ would be an absolute minimum. Already at
$\Lambda=\mathcal{O}(m_{t})\sim 173$ GeV, from Eq. (8) we must require
$\mu^{2}_{\text{h}}>0$ or $\mu^{2}_{\text{s}}>0$, and from Eq. (9),
$\mu^{2}_{\text{h}}>w^{2}\lambda_{\text{hs}}+w\kappa_{\text{hs}}$ or
$2w^{2}\lambda_{\text{s}}+w\kappa_{\text{s}}<0$, in which both parameters
$\mu^{2}_{\text{h}}$ and $\mu^{2}_{\text{s}}$ are fixed at $\Lambda=173$ GeV
from Eq. (5).
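A numerical version of this check can be sketched as follows (our own illustration, reusing $V$, $\lambda_{\text{s}}$, $\kappa_{\text{s}}$ and $\mu^{2}_{\text{s}}$ from the previous listing): compute the Hessian eigenvalues at the competing extrema $(0,0)$ and $(0,\bar{w})$ and flag any that is a local minimum.

```python
import numpy as np

def hessian_eigs(h, s, eps=1e-2):
    """Eigenvalues of the numerical Hessian of V at (h, s)."""
    d2h = (V(h + eps, s) - 2 * V(h, s) + V(h - eps, s)) / eps**2
    d2s = (V(h, s + eps) - 2 * V(h, s) + V(h, s - eps)) / eps**2
    dhs = (V(h + eps, s + eps) - V(h + eps, s - eps)
           - V(h - eps, s + eps) + V(h - eps, s - eps)) / (4 * eps**2)
    return np.linalg.eigvalsh([[d2h, dhs], [dhs, d2s]])

candidates = [(0.0, 0.0)]
# Extrema of the form (0, w~) solve lam_s*s^2 + kap_s*s - mu_s2 = 0:
for s in np.roots([lam_s, kap_s, -mu_s2]):
    if abs(s.imag) < 1e-9:
        candidates.append((0.0, s.real))

for h, s in candidates:
    eigs = hessian_eigs(h, s)
    status = "not a minimum" if eigs.min() < 0 else "competing minimum!"
    print((h, s), eigs, status)
```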
## 3 Renormalization Group Equations
Figure 1: The viable region of input values for the couplings
$\lambda_{\text{h}}$, $\lambda_{\text{s}}$ and $\lambda_{\text{hs}}$ at the
electroweak scale for different singlet scalar VEV benchmarks $w=0.1,1,10,100$
TeV, respecting the absolute vacuum stability for the vacuum $(v,w)$, the
positivity condition and the perturbativity up to the Planck scale, with the
assumptions $m_{H}=125$ GeV, $v=246$ GeV and $m_{s}>m_{H}$.
The evolution of the couplings, fields and mass parameters in the model with
scale $\mu$ is given by the renormalization group equations (RGE). We extract
the RGEs for the model given in Eq. (1) up to one-loop using the Mathematica
package SARAH [21]. We take into account only the top quark Yukawa coupling
and ignore the couplings for the light quarks. The $\beta$-functions for the
gauge couplings and the Yukawa coupling are,
$\begin{split}16\pi^{2}&\beta_{g_{i}}=b_{i}g_{i}^{3}\\\
16\pi^{2}&\beta_{y_{t}}=\left(-\frac{17}{20}g^{2}_{1}-\frac{9}{4}g_{2}^{2}-8g_{3}^{2}\right)y_{t}+\frac{9}{2}y_{t}^{3}\\\
\end{split}$ (11)
where $b_{1}=41/10,~{}b_{2}=-19/6,~{}b_{3}=-7$. The $\beta$-functions
involving the Higgs and the singlet scalar couplings are given by,
$\begin{split}&16\pi^{2}\beta_{\lambda_{\text{h}}}=\left(\frac{27}{200}g_{1}^{4}+\frac{9}{20}g_{1}^{2}g_{2}^{2}+\frac{9}{8}g_{2}^{4}\right)+\left(-\frac{9}{5}g_{1}^{2}-9g_{2}^{2}+12y_{t}^{2}\right)\lambda_{\text{h}}+\frac{1}{2}\lambda_{\text{hs}}^{2}+24\lambda_{\text{h}}^{2}-6y_{t}^{4}\\\
&16\pi^{2}\beta_{\lambda_{\text{s}}}=2\lambda_{\text{hs}}^{2}+18\lambda^{2}_{\text{s}}\\\
&16\pi^{2}\beta_{\lambda_{\text{hs}}}=\left(-\frac{9}{10}g^{2}_{1}-\frac{9}{2}g_{2}^{2}+6\lambda_{\text{s}}+12\lambda_{\text{h}}+6y_{t}^{2}\right)\lambda_{\text{hs}}+4\lambda_{\text{hs}}^{2}\\\
&16\pi^{2}\beta_{\kappa_{\text{s}}}=6\kappa_{\text{hs}}\lambda_{\text{hs}}+18\lambda_{\text{s}}\kappa_{\text{s}}\\\
&16\pi^{2}\beta_{\kappa_{\text{hs}}}=\left(-\frac{9}{10}g_{1}^{2}-\frac{9}{2}g_{2}^{2}+4\lambda_{\text{hs}}+12\lambda_{\text{h}}+6y_{t}^{2}\right)\kappa_{\text{hs}}+2\kappa_{\text{s}}\lambda_{\text{hs}}\\\
\end{split}$ (12)
Figure 2: The plots show the running of the couplings
$\lambda_{\text{h}},\lambda_{\text{s}},\lambda_{\text{hs}},\kappa_{\text{s}},\kappa_{\text{hs}}$
up to the Planck scale for different singlet scalar VEV benchmarks
$w=0.1,1,10,100$ TeV and with the constraints $m_{H}=125$ GeV, $v=246$ GeV and
$m_{s}>m_{H}$. The initial inputs for the couplings are chosen such that the
vacuum $(v,w)$ remains the absolute minimum, respecting the positivity
condition and the perturbativity up to the Planck scale.
and the $\gamma$-functions for the VEVs and the mass parameters read,
$\begin{split}&16\pi^{2}\gamma_{v}=\left(\frac{9}{20}g_{1}^{2}+\frac{9}{4}g_{2}^{2}-3y_{t}^{2}\right)v\\\
&16\pi^{2}\gamma_{w}=0\\\
&16\pi^{2}\gamma_{\mu^{2}_{\text{h}}}=\left(-\frac{9}{10}g_{1}^{2}-\frac{9}{2}g_{2}^{2}+12\lambda_{\text{h}}+6y_{t}^{2}\right)\mu^{2}_{\text{h}}-2\kappa_{\text{hs}}^{2}+\lambda_{\text{hs}}\mu^{2}_{\text{s}}\\\
&16\pi^{2}\gamma_{\mu^{2}_{\text{s}}}=4\mu^{2}_{\text{h}}\lambda_{\text{hs}}+6\lambda_{\text{s}}\mu^{2}_{\text{s}}-4\kappa_{\text{hs}}^{2}-4\kappa_{\text{s}}^{2}\end{split}$
(13)
where the $\beta$-functions for a coupling $X$, and the $\gamma$-functions
(anomalous dimensions) for VEV or mass parameter $Y$, are defined as,
$\beta_{X}=\mu\frac{dX}{d\mu}\hskip
85.35826pt\gamma_{Y}=-\frac{\mu}{Y}\frac{dY}{d\mu}\,.$ (14)
Solving the RGEs in Eqs. (11), (12) and (13), the evolution of the couplings
and the VEVs will be known, so we can check the status of the stability, the
positivity and the perturbativity all together at any desired scale. In the
next section we will discuss the RGE solutions and the allowed parameter
inputs that we can use at the EW scale.
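For concreteness, the coupled one-loop system (11)-(12) can be integrated numerically. The following sketch (ours) transcribes the $\beta$-functions given above, uses illustrative boundary values for the gauge and Yukawa couplings near $m_{t}$, adopts $4\pi$ as a proxy perturbativity bound, and stops the evolution as soon as positivity or perturbativity fails below the Planck scale; the running of the VEVs and mass parameters in Eq. (13) is omitted for brevity.

```python
import numpy as np
from scipy.integrate import solve_ivp

b = np.array([41/10, -19/6, -7])             # gauge coefficients of Eq. (11)

def beta(t, y):                               # t = ln(mu / m_t)
    g1, g2, g3, yt, lh, ls, lhs, ks, khs = y
    k = 1 / (16 * np.pi**2)
    dg1, dg2, dg3 = k * b * np.array([g1, g2, g3])**3
    dyt = k * ((-17/20*g1**2 - 9/4*g2**2 - 8*g3**2)*yt + 9/2*yt**3)
    dlh = k * (27/200*g1**4 + 9/20*g1**2*g2**2 + 9/8*g2**4
               + (-9/5*g1**2 - 9*g2**2 + 12*yt**2)*lh
               + 0.5*lhs**2 + 24*lh**2 - 6*yt**4)
    dls = k * (2*lhs**2 + 18*ls**2)
    dlhs = k * ((-9/10*g1**2 - 9/2*g2**2 + 6*ls + 12*lh + 6*yt**2)*lhs
                + 4*lhs**2)
    dks = k * (6*khs*lhs + 18*ls*ks)
    dkhs = k * ((-9/10*g1**2 - 9/2*g2**2 + 4*lhs + 12*lh + 6*yt**2)*khs
                + 2*ks*lhs)
    return [dg1, dg2, dg3, dyt, dlh, dls, dlhs, dks, dkhs]

def spp_fail(t, y):                           # crosses zero when SPP first fails
    lh, ls, lhs = y[4], y[5], y[6]
    pos = min(lh, ls, lhs if lhs > 0 else lh*ls - lhs**2)   # Eq. (2)
    pert = 4*np.pi - np.max(np.abs(y[4:]))                  # proxy bound (ours)
    return min(pos, pert)
spp_fail.terminal = True

y0 = [0.46, 0.64, 1.16, 0.93,                 # g1 (GUT-normalized), g2, g3, y_t near m_t
      0.13, 0.02, -0.01, 0.1, -0.05]          # lh, ls, lhs, ks, khs (toy values)
t_pl = np.log(1.22e19 / 173.0)                # ln(M_Pl / m_t)
sol = solve_ivp(beta, (0.0, t_pl), y0, events=spp_fail, rtol=1e-8)
print("SPP conditions hold up to the Planck scale:", sol.t_events[0].size == 0)
```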
Figure 3: The viable region of input values for the couplings
$\lambda_{\text{h}}$, $\lambda_{\text{s}}$ and $\lambda_{\text{hs}}$ at the
electroweak scale for different singlet scalar VEV benchmarks $w=0.1,1,10,100$
TeV, respecting the absolute vacuum stability for the vacuum $(v,w)$, the
positivity condition and the perturbativity up to the Planck scale, with the
assumptions $m_{H}=125$ GeV, $v=246$ GeV and $m_{s}<m_{H}$.
## 4 Vacuum Stability, Positivity and Perturbativity
In this section we numerically solve the RG equations presented in section 3.
We always require that the vacuum $(v,w)$, defined at the EW scale
$\mathcal{O}(m_{t})\sim 173$ GeV, remains the absolute global minimum at
higher scales up to the Planck scale, hence pushing the stability up to the
Planck scale. Furthermore, we impose the positivity condition in Eq. (2)
(which is a scale-dependent condition evolving with the couplings) to hold
from the EW scale up to the Planck scale. We also discard the input values for
the parameters which lead to a Landau pole at scales below the Planck
scale.
Figure 4: The plots show the running of the couplings
$\lambda_{\text{h}},\lambda_{\text{s}},\lambda_{\text{hs}},\kappa_{\text{s}},\kappa_{\text{hs}}$
up to the Planck scale for different singlet scalar VEV benchmarks
$w=0.1,1,10,100$ TeV and with the constraints $m_{H}=125$ GeV, $v=246$ GeV and
$m_{s}<m_{H}$. The initial inputs for the couplings are chosen such that the
vacuum $(v,w)$ remains the absolute minimum, respecting the positivity
condition and the perturbativity up to the Planck scale.
From the LHC experiments, the Higgs mass and the Higgs VEV are known: $v=246$
GeV and $m_{H}=125$ GeV. We will also fix the singlet scalar VEV by different
mass scale benchmarks $w=0.1,1,10,100$ TeV. In Eq. (7) one of the mass
eigenvalues is attributed to the Higgs mass. We investigate both cases
$m_{+}\equiv m_{s}>m_{H}\equiv m_{-}$ and $m_{-}\equiv m_{s}<m_{H}\equiv
m_{+}$. Among the parameters of the model, as seen in section 2,
$\mu^{2}_{\text{h}}$ and $\mu^{2}_{\text{s}}$ are eliminated by the two stationary
conditions for $(v,w)$. We are then left with the free independent
parameters being $\lambda_{\text{h}}$, $\lambda_{\text{s}}$,
$\lambda_{\text{hs}}$, $\kappa_{\text{s}}$ and $\kappa_{\text{hs}}$. The input
values for the free parameters at the $\mathcal{O}(m_{t})$ scale should be chosen
such that the vacuum $(v,w)$ is the absolute global minimum. Any set of inputs
for the free parameters will give an input for $\mu^{2}_{\text{h}}$ and
$\mu^{2}_{\text{s}}$ from Eq. (5). Using these values in Eqs. (8) and (9) at
least one of the mass eigenvalues in $\mathcal{M}(0,0)$ and in
$\mathcal{M}(0,w)$ must be negative. Moreover, at the EW scale the input
values chosen for the free parameters should be bounded by the positivity
condition in Eq. (2), the positivity of the radicand in the mass expression
in Eq. (7), and the positivity of the mass eigenvalue $m_{-}$ in Eq. (7).
Also depending on taking $m_{H}\equiv m_{+}\sim 125$ GeV or $m_{H}\equiv
m_{-}\sim 125$ GeV, the free parameters are bounded differently. The initial
values for the set of parameters satisfying the aforementioned constraints are
presented in Table 1 for $m_{s}>m_{H}$, and in Table 2 for $m_{s}<m_{H}$ with
the benchmarks $w=0.1,1,10,100$ TeV. In both tables the range of the allowed
singlet scalar mass is shown. All the parameters $\lambda_{\text{h}}$,
$\lambda_{\text{s}}$, $\lambda_{\text{hs}}$, $\kappa_{\text{s}}$ and
$\kappa_{\text{hs}}$ are scanned in the interval $(-1,1)$.
After choosing a set of random input values for the parameters within the
regions in Tables 1 and 2, we solve numerically the RGEs given in section 3.
We repeat this process numerous times, taking input values and solving the RGEs,
to cover all the allowed regions defined in Tables 1 and 2. Although the
initial values we found are suitable at the scale $\Lambda\sim 173$ GeV, under
the running they vary at higher scales and may violate one of the conditions,
e.g. the stability constraint, the positivity condition or the perturbativity.
For a random set of inputs we check whether all constraints are satisfied up to
the Planck scale. In the case $m_{s}>m_{H}$ the viable initial values which lead
to an appropriate result are shown in Fig. 1, and the viable region of the
initial values in the case $m_{s}<m_{H}$ is shown in Fig. 3. The viable
regions given in Figs. 1 and 3 are for the different benchmark singlet scalar
VEVs, $w=0.1,1,10,100$ TeV.
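The scan just described can be summarized in a few lines. This sketch (ours) reuses `positivity_ok`, `beta`, `spp_fail` and `t_pl` from the earlier listings, and omits for brevity the EW-scale minimum conditions and the $m_{H}=125$ GeV constraint that the full analysis imposes:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
viable = []
for _ in range(10_000):
    lh, ls, lhs, ks, khs = rng.uniform(-1, 1, 5)
    if not positivity_ok(lh, ls, lhs):        # EW-scale positivity, Eq. (2)
        continue
    y0 = [0.46, 0.64, 1.16, 0.93, lh, ls, lhs, ks, khs]
    sol = solve_ivp(beta, (0.0, t_pl), y0, events=spp_fail, rtol=1e-6)
    if sol.t_events[0].size == 0:             # no SPP violation before M_Pl
        viable.append((lh, ls, lhs, ks, khs))
print(len(viable), "viable points out of 10000")
```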
As seen in Fig. 1, for the case $w=100$ GeV there is a narrow region which can
fulfill the desired conditions up to the Planck scale, but if we relax the Planck
scale requirement there is a larger viable region for $w=100$ GeV. As we
increase the scalar VEV from $w=100$ GeV to $w=100$ TeV, we see in Fig. 1 that
the viable initial values for the couplings $\lambda_{\text{s}}$ and
$\lambda_{\text{hs}}$ shrink considerably. The coupling $\lambda_{\text{h}}$
remains $\mathcal{O}(1)$ for all singlet scalar VEV benchmarks.
In Fig. 3, where $m_{s}<m_{H}$, the viable region for the case $w=100$
GeV is large, and all the couplings $\lambda_{\text{h}}$, $\lambda_{\text{s}}$ and
$\lambda_{\text{hs}}$ take $\mathcal{O}(1)$ initial values fulfilling the SPP
conditions up to the Planck scale. As we increase the singlet scalar VEV from
$w=100$ GeV to $w=100$ TeV, the couplings $\lambda_{\text{s}}$ and
$\lambda_{\text{hs}}$ must become smaller, down to $\mathcal{O}(10^{-8})$, to
satisfy the SPP conditions. In Fig. 2 for $m_{s}>m_{H}$ and in Fig. 4 for
$m_{s}<m_{H}$, we have also shown, for each singlet scalar VEV benchmark $w$,
the evolution of the free parameters for a randomly chosen set of parameters
within the regions in Figs. 1 and 3. The singlet scalar mass is also bounded by
the SPP conditions. As seen in Tables 1 and 2, the singlet scalar mass varies
from a few GeV up to about $0.5$ TeV.
## 5 Conclusion
The Standard Model suffers from a vacuum metastability at energy scales
around $10^{10}$ GeV. The addition of extra scalars in the hidden sector, with
or without internal symmetries, may stabilize the vacuum up to the Planck scale.
However, in general, especially when symmetries are absent in the internal
configuration space (more investigation is needed in future works), the
positivity condition might become strong enough to compete with the vacuum
stability at a given energy scale. As the simplest example we have
investigated a generic real singlet scalar extension of the Standard Model. We
have imposed the positivity condition at all scales, alongside the absolute
vacuum stability and perturbativity, to bound the free parameters of the model.
As seen in Tables 1 and 2, even before looking at higher scales, the free
parameters
$\lambda_{\text{h}},\lambda_{\text{s}},\lambda_{\text{hs}},\kappa_{\text{s}},\kappa_{\text{hs}}$
at the EW scale are strongly limited by requiring the absolute vacuum
stability and the positivity for the vacuum $(v,w)$, with $v=246$ GeV and
$w=0.1,1,10,100$ TeV being the Higgs and the singlet scalar vacuum expectation
values respectively. The bounds on the free parameters become stronger if we
want to keep the vacuum $(v,w)$ an absolute minimum while at the same
time respecting the positivity condition and the perturbativity up to the
Planck scale (the SPP conditions). The resulting viable space of input values
for the parameters respecting the SPP conditions is shown in Fig. 1 when
$m_{s}>m_{H}$ and in Fig. 3 when $m_{s}<m_{H}$. The upshot is that only for
$w=100$ GeV and $m_{s}<m_{H}$ are all the couplings $\mathcal{O}(1)$. In the other
cases, increasing the singlet scalar VEV $w$ shrinks the viable region of the
couplings down to $\mathcal{O}(10^{-8})$, except for the coupling
$\lambda_{\text{h}}$ which remains $\mathcal{O}(1)$ in all cases up to the
Planck scale.
We observe as well that the singlet scalar mass in the presence of the SPP
conditions can take values from $\mathcal{O}$(GeV) to $\mathcal{O}$(TeV). It
is worth addressing in future works how the SPP conditions act on the
parameter space of a multi-scalar extension of the SM, and whether the
SPP conditions on multi-scalar theories are more or less restrictive than in the
simple singlet scalar model we studied here.
## Acknowledgments
I would like to thank Alessandro Strumia for useful discussions. This work was
supported by the ERC grant NEO-NAT.
## References
* [1] Georges Aad et al. Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC. Phys. Lett. B, 716:1–29, 2012.
* [2] F. Abe et al. Observation of top quark production in $\bar{p}p$ collisions. Phys. Rev. Lett., 74:2626–2631, 1995.
* [3] Fedor Bezrukov, Mikhail Yu. Kalmykov, Bernd A. Kniehl, and Mikhail Shaposhnikov. Higgs Boson Mass and New Physics. JHEP, 10:140, 2012.
* [4] Dario Buttazzo, Giuseppe Degrassi, Pier Paolo Giardino, Gian F. Giudice, Filippo Sala, Alberto Salvio, and Alessandro Strumia. Investigating the near-criticality of the Higgs boson. JHEP, 12:089, 2013.
* [5] N. Cabibbo, L. Maiani, G. Parisi, and R. Petronzio. Bounds on the Fermions and Higgs Boson Masses in Grand Unified Theories. Nucl. Phys. B, 158:295–305, 1979.
* [6] Serguei Chatrchyan et al. Observation of a New Boson at a Mass of 125 GeV with the CMS Experiment at the LHC. Phys. Lett. B, 716:30–61, 2012.
* [7] Chien-Yi Chen, S. Dawson, and I. M. Lewis. Exploring resonant di-Higgs boson production in the Higgs singlet model. Phys. Rev. D, 91(3):035015, 2015.
* [8] Giuseppe Degrassi, Stefano Di Vita, Joan Elias-Miro, Jose R. Espinosa, Gian F. Giudice, Gino Isidori, and Alessandro Strumia. Higgs mass and vacuum stability in the Standard Model at NNLO. JHEP, 08:098, 2012.
* [9] Joan Elias-Miro, Jose R. Espinosa, Gian F. Giudice, Hyun Min Lee, and Alessandro Strumia. Stabilization of the Electroweak Vacuum by a Scalar Threshold Effect. JHEP, 06:031, 2012.
* [10] Yohei Ema, Mindaugas Karciauskas, Oleg Lebedev, Stanislav Rusak, and Marco Zatta. Higgs–inflaton mixing and vacuum stability. Phys. Lett. B, 789:373–377, 2019.
* [11] Adam Falkowski, Christian Gross, and Oleg Lebedev. A second Higgs from the Higgs portal. JHEP, 05:057, 2015.
* [12] P. M. Ferreira, R. Santos, and A. Barroso. Stability of the tree-level vacuum in two Higgs doublet models against charge or CP spontaneous violation. Phys. Lett. B, 603:219–229, 2004. [Erratum: Phys.Lett.B 629, 114–114 (2005)].
* [13] Emidio Gabrielli, Matti Heikinheimo, Kristjan Kannike, Antonio Racioppi, Martti Raidal, and Christian Spethmann. Towards Completing the Standard Model: Vacuum Stability, EWSB and Dark Matter. Phys. Rev. D, 89(1):015017, 2014.
* [14] Karim Ghorbani and Parsa Hossein Ghorbani. A Simultaneous Study of Dark Matter and Phase Transition: Two-Scalar Scenario. JHEP, 12:077, 2019.
* [15] Parsa Ghorbani. Vacuum structure and electroweak phase transition in singlet scalar model. 10 2020.
* [16] Parsa Ghorbani, Alessandro Strumia, and Daniele Teresi. A landscape for the cosmological constant and the Higgs mass. JHEP, 01:054, 2020.
* [17] Matthew Gonderinger, Yingchuan Li, Hiren Patel, and Michael J. Ramsey-Musolf. Vacuum Stability, Perturbativity, and Scalar Singlet Dark Matter. JHEP, 01:053, 2010.
* [18] Matthew Gonderinger, Hyungjun Lim, and Michael J. Ramsey-Musolf. Complex Scalar Singlet Dark Matter: Vacuum Stability and Phenomenology. Phys. Rev. D, 86:043511, 2012.
* [19] Kristjan Kannike. Vacuum Stability of a General Scalar Potential of a Few Fields. Eur. Phys. J., C76(6):324, 2016. [Erratum: Eur. Phys. J.C78,no.5,355(2018)].
* [20] Valentin V. Khoze, Christopher McCabe, and Gunnar Ro. Higgs vacuum stability from the dark matter portal. JHEP, 08:026, 2014.
* [21] Florian Staub. Exploring new models in all detail with SARAH. Adv. High Energy Phys., 2015:840780, 2015.
$d(\lambda)=d_{1}(\lambda)|{\varphi}_{1}\rangle\langle{\varphi}_{1}|,\quad
d_{1}(\lambda)=\frac{1}{g_{1}(\lambda)(|\langle{\varphi}_{1},v_{1}\rangle|^{2}+cg_{1}(\lambda)^{-1})}\,.$
(9.5)
$d_{1}(\lambda)$ is a Mikhlin multiplier. Then, $B_{1}(\lambda)$ is invertible
by Lemma 6.1 and
$\displaystyle
B_{1}(\lambda)^{-1}=\lambda^{-2}({S_{2}(S_{2}{\mathcal{M}}_{1}S_{2})^{-1}S_{2}}+{d_{1}(\lambda)}Q)\,,$
(9.6) $\displaystyle
Q=\begin{pmatrix}|{\varphi}_{1}\rangle\langle{\varphi}_{1}|&-|{\varphi}_{1}\rangle\langle{\varphi}_{1}|{\tilde{a}}_{12}{\tilde{a}}_{22}^{-1}\\\
-|{\tilde{a}}_{22}^{-1}{\tilde{a}}_{21}{\varphi}_{1}\rangle\langle{\varphi}_{1}|&|{\tilde{a}}_{22}^{-1}{\tilde{a}}_{21}{\varphi}_{1}\rangle\langle{\varphi}_{1}|{\tilde{a}}_{12}{\tilde{a}}_{22}^{-1}\end{pmatrix}.$
(9.7)
Note that $Q$ is $\lambda$-independent, ${\rm rank}\,Q=2$ and
$Q=({\varphi}_{1}\oplus{\tilde{\varphi}})\otimes({\varphi}_{1}\oplus{\tilde{\varphi}})$
with ${\tilde{\varphi}}=-{\tilde{a}}_{22}^{-1}{\tilde{a}}_{21}{\varphi}_{1}$.
Then, (9.4) and (9.6) imply that
$B(\lambda)=(1+L_{2}(\lambda)B_{1}(\lambda)^{-1})B_{1}(\lambda)$ is invertible
and $B(\lambda)^{-1}$ is given by
$B_{1}(\lambda)^{-1}+L_{3}(\lambda),\
L_{3}(\lambda)=-B_{1}(\lambda)^{-1}L_{2}(\lambda)B_{1}(\lambda)^{-1}+{\mathcal{O}}^{(3)}(\lambda^{2}\langle\log\lambda\rangle^{4}).$
(9.8)
Hence ${\mathcal{M}}(\lambda)^{-1}$ exists by Lemma 6.2 and
${\mathcal{M}}(\lambda)^{-1}={\mathcal{L}}(\lambda)+{\mathcal{L}}(\lambda)S_{1}B(\lambda)^{-1}S_{1}{\mathcal{L}}(\lambda)$.
###### Lemma 9.2.
Modulo a good producer, $M_{v}{\mathcal{M}}(\lambda)^{-1}M_{v}\equiv
M_{v}S_{1}B_{1}(\lambda)^{-1}S_{1}M_{v}$.
###### Proof.
We have
$M_{v}{\mathcal{M}}(\lambda)^{-1}M_{v}=M_{v}{\mathcal{L}}(\lambda)M_{v}+M_{v}{\mathcal{L}}(\lambda)S_{1}B(\lambda)^{-1}S_{1}{\mathcal{L}}(\lambda)M_{v}$.
By Lemma 6.6 again $M_{v}{\mathcal{L}}(\lambda)M_{v}$ is a good producer.
Substituting $B(\lambda)^{-1}$ by (9.8), we see that the second term on the
right is equal to $E_{1}(\lambda)+E_{2}(\lambda)$ where
$E_{1}(\lambda)=M_{v}{\mathcal{L}}(\lambda)S_{1}B_{1}(\lambda)^{-1}S_{1}{\mathcal{L}}(\lambda)M_{v},\quad
E_{2}(\lambda)=M_{v}{\mathcal{L}}(\lambda)S_{1}L_{3}(\lambda)S_{1}{\mathcal{L}}(\lambda)M_{v}\,.$
(i) We first show that $E_{2}(\lambda)$ is a good producer. We obtain by
combining (9.4), (9.6) and (9.8) that
$S_{1}L_{3}(\lambda)S_{1}=\sum_{j,k=1}^{n}(\log\lambda)^{2}g_{jk}(\lambda)|{\varphi}_{j}\rangle\langle{\varphi}_{k}|$
with Mikhlin multipliers
$g_{jk}(\lambda)\in{\mathcal{O}}^{(3)}_{{\mathbb{C}}}(1)$, $j,k=1,\dots,n$. We
then recall (7.4) which implies that for $j=1,\dots,n$
$\displaystyle
M_{v}{\mathcal{L}}(\lambda){\varphi}_{j}=\psi_{j0}+g_{1}(\lambda)\lambda^{2}\psi_{j1}(x)+\lambda^{2}\psi_{j2}(x)+\psi_{j3}(\lambda,x),$
(9.9) $\displaystyle\psi_{j0},\,\psi_{j1},\,\psi_{j2}\in(L^{1}\cap
L^{2}),\quad\psi_{j3}\in{\mathcal{O}}^{(3)}_{L^{1}\cap
L^{2}}(\lambda^{4}(\log\lambda)^{2})\,.$ (9.10)
Since the integral kernel of ${\mathcal{L}}(\lambda)^{\ast}$ is the complex
conjugate of ${\mathcal{L}}(\lambda)$,
$M_{v}{\mathcal{L}}(\lambda)^{\ast}{\varphi}_{k}$, $k=1,\dots,n$ is expressed
similarly. Hence
$E_{2}(\lambda)=\sum(\log{\lambda})^{2}g_{jk}(\lambda)({\mathcal{L}}+{\mathcal{O}}^{(3)}_{{\mathcal{L}}}(h_{2}(\lambda)))$
and Lemma 3.7 for $(j,\ell)=(2,2)$ and Proposition 3.9 imply that
$E_{2}(\lambda)$ is a good producer.
(ii) Define $B_{2}(\lambda)=S_{1}B_{1}(\lambda)^{-1}S_{1}$. By virtue of (9.6)
and (9.7) we have
$B_{2}(\lambda)=\sum_{j,k=1}^{n}\lambda^{-2}f_{jk}(\lambda)|{\varphi}_{j}\rangle\langle{\varphi}_{k}|,\quad
f_{jk}(\lambda)\in{\mathcal{O}}^{(3)}_{{\mathbb{C}}}(1).$ (9.11)
Substituting ${\mathcal{L}}(\lambda)$ by (6.6) and using
$D_{0}S_{1}=S_{1}D_{0}=S_{1}$, we express
$E_{1}(\lambda)=E_{11}(\lambda)+E_{12}(\lambda)+E_{13}(\lambda)+E_{14}(\lambda)$
where
$\displaystyle
E_{11}=M_{v}(D_{0}-D_{0}L_{1}(\lambda))B_{2}(\lambda)(D_{0}-D_{0}L_{1}(\lambda))M_{v},\
E_{12}=M_{v}{\mathcal{L}}(\lambda)B_{2}(\lambda)\tilde{L}_{1}(\lambda)M_{v},$
$\displaystyle
E_{13}=M_{v}D_{0}\tilde{L}_{1}(\lambda)B_{2}(\lambda){\mathcal{L}}(\lambda)M_{v},\quad
E_{14}=M_{v}D_{0}\tilde{L}_{1}(\lambda)B_{2}(\lambda)\tilde{L}_{1}(\lambda)M_{v}.$
(a) We first show that $E_{12}(\lambda)$ is a good producer.
$E_{12}=\sum\lambda^{-2}f_{jk}(\lambda)(M_{v}{\mathcal{L}}(\lambda){\varphi}_{j})\otimes(M_{v}\tilde{L}_{1}(\lambda)^{\ast}{\varphi}_{k})$
by (9.11). We can apply (9.9) and (9.10) to
$M_{v}{\mathcal{L}}(\lambda){\varphi}_{j}$ and we have
$M_{v}\tilde{L}_{1}(\lambda)^{\ast}{\varphi}_{k}\in{\mathcal{O}}^{(3)}_{{\mathcal{H}}\cap{\mathcal{L}}}(\lambda^{4}(\log\lambda)^{2})$
by virtue of (6.5) and (6.6). It follows
$E_{12}(\lambda)\in{\mathcal{O}}^{(3)}_{{\mathcal{L}}}(\lambda^{2}(\log\lambda)^{2})$
and $E_{12}(\lambda)$ is a good producer by virtue of Proposition 3.9. Similar
argument implies $E_{13}(\lambda)$ and $E_{14}(\lambda)$ are both good
producers.
(b) We have $E_{11}(\lambda)=M_{v}B_{2}(\lambda)M_{v}+E_{3}(\lambda)$ where
$E_{3}(\lambda)$ is defined by
$E_{3}(\lambda)=-M_{v}B_{2}(\lambda)L_{1}(\lambda)M_{v}-M_{v}D_{0}L_{1}(\lambda)B_{2}(\lambda)M_{v}+M_{v}D_{0}L_{1}(\lambda)B_{2}(\lambda)L_{1}(\lambda)M_{v}.$
We prove $E_{3}(\lambda)$ is a good producer to finish the proof. Recalling
(6.5) and that $v{\varphi}\in{\langle x\rangle}^{-2-\delta}(L^{1}\cap L^{4})$
for ${\varphi}\in S_{1}{L^{2}}$, we obtain as previously that, for
$j=1,\dots,n$,
$\displaystyle\lambda^{-2}M_{v}D_{0}L_{1}(\lambda){\varphi}_{j}(x)=g_{1}(\lambda)\tilde{\psi}_{j1}(x)+\tilde{\psi}_{j2}(x)+\tilde{\psi}_{j3}(\lambda,x),$
$\displaystyle\tilde{\psi}_{j1},\ \tilde{\psi}_{j2}\in(L^{1}\cap L^{2}),\
\tilde{\psi}_{j3}(\lambda)\in{\mathcal{O}}^{(3)}_{L^{1}\cap
L^{2}}(h_{2}(\lambda)).$
An obvious modification of the argument shows that similar expressions are
satisfied by $\lambda^{-2}M_{v}L_{1}(\lambda)^{\ast}{\varphi}_{k}$ for
$k=1,\dots,n$. Combining these with (9.11) produces the expression for
$E_{3}(\lambda)$:
$E_{3}(\lambda)=\sum_{j,k=1}^{n}(\log\lambda)^{2}h_{jk}(\lambda){\mathcal{L}}_{jk}+{\mathcal{O}}^{(3)}_{{\mathcal{L}}}(\lambda^{2}\langle\log\lambda\rangle^{2})$
with ${\mathcal{L}}_{jk}\in{\mathcal{L}}$ and
$h_{jk}(\lambda)\in{\mathcal{O}}_{{\mathbb{C}}}^{(3)}(1)$ for $1\leq j,k\leq
n$. Thus, $E_{3}(\lambda)$ is a good producer by Lemma 3.7 and Proposition
3.9. ∎
#### Proof of Theorem 1.4 (2c)
In view of Lemma 9.2, it suffices to prove that the operator $Z$ defined by
$Zu(x)=\int_{0}^{\infty}G_{0}(-\lambda)M_{v}S_{1}B_{1}(\lambda)^{-1}S_{1}M_{v}\Pi(\lambda)u(x)\chi_{\leq
a}(\lambda)\lambda d\lambda$ (9.12)
is bounded in $L^{p}$ for $1<p<2$ and unbounded for $2<p<\infty$. We
substitute (9.6) for $B_{1}(\lambda)^{-1}$, which makes $Z=Z_{1}+Z_{2}$ where
$Z_{1}$ and $Z_{2}$ are produced by
$\lambda^{-2}S_{2}(S_{2}{\mathcal{M}}_{1}S_{2})^{-1}S_{2}$ and
$\lambda^{-2}d_{1}(\lambda)(v({\varphi}_{1}+{\tilde{\varphi}}))\otimes(v({\varphi}_{1}+{\tilde{\varphi}}))$.
We may repeat the argument of the previous section, §8, for $Z_{1}$ with obvious
modifications, which proves that $Z_{1}$ is bounded in $L^{p}$ for $1<p<4$ and
unbounded for $4<p<\infty$ in general and, it becomes a good operator if all
${\varphi}\in S_{2}{L^{2}}$ satisfy the extra cancellation property $\langle
v,y_{j}{\varphi}\rangle=0$, $j=1,\dots,4$.
The operator $Z_{2}$ is the same as the one defined by (7.5) if ${\varphi}$
and $\mu(\lambda)$ are replaced by ${\varphi}_{1}+{\tilde{\varphi}}$ and
$d_{1}(\lambda)$ respectively. Then, the repetition of the argument below
(7.5) implies that $Z_{2}$ is bounded in $L^{p}$ for $1<p<2$ and is unbounded
for $2<p<\infty$. This completes the proof of Theorem 1.4. ∎
From the work of Phong and Sturm in 2007, for a polarised projective manifold and an ample test configuration, one can associate the geodesic ray of plurisubharmonic metrics on the polarising line bundle using the solution of the Monge-Ampère equation on an equivariant resolution of singularities of the test configuration. We prove that the Mabuchi chordal distance between the geodesic rays associated with two ample test configurations coincides with the spectral distance between the associated filtrations on the section ring.
This gives an algebraic description of the boundary at the infinity of the space of positive metrics, viewed — as it is usually done for spaces of negative curvature — through geodesic rays.
Geometry at the infinity of the space of positive metrics
§ INTRODUCTION
The main goal of this article is to study the geometry at the infinity of the space of positive metrics on an ample line bundle over a given projective manifold.
Here we view the infinity in terms of geodesic rays; this point of view goes in line with the general philosophy advocated by Donaldson [33] that the space of positive metrics on an ample line bundle is as an infinite-dimensional manifold of non-positive sectional curvature. In this perspective, our study here is similar to the study of Tits boundary of ${\rm{CAT}}(0)$ spaces, cf. [14].
More precisely, let $X$ be a complex projective manifold, and let $L$ be an ample line bundle over $X$.
We denote by $\mathcal{H}^L$ (or simply $\mathcal{H}$ for brevity) the space of positive Hermitian metrics on $L$.
For any $p \in [1, +\infty]$, one can introduce on $\mathcal{H}$ a collection of $L^p$-type Mabuchi metrics, see Section <ref>.
Using these Finsler metrics, we introduce the path length metric structures $(\mathcal{H}, d_p)$.
By [23], the metric completions $(\mathcal{E}^p, d_p)$ of $(\mathcal{H}, d_p)$ are complete geodesic metric spaces, which means that between any two points of $\mathcal{E}^p$, there is a geodesic of $(\mathcal{E}^p, d_p)$ connecting them.
By definition, a geodesic ray in $(\mathcal{E}^p, d_p)$ is the distinguished geodesic segment, closed at one extremity and of infinite length, which can be constructed as some psh envelope, see (<ref>), or, alternatively, as a solution to a certain Monge-Ampère equation, see (<ref>).
For $p \in ]1, +\infty[$, by the result of Darvas-Lu <cit.>, the space $(\mathcal{E}^p, d_p)$ is uniquely geodesic, and the above notion of geodesic rays coincides with the respective notion in the sense of metric spaces.
By <cit.>, the space of geodesic rays satisfies Euclid's 5th postulate for half-lines, meaning that geodesic rays departing from different initial points are in bijective correspondence.
From now on, we consider geodesic rays departing from a fixed initial point.
Following Darvas-Lu <cit.>, we define the chordal $L^p$-distance, $d_p(\{ h^{L, 1}_{t} \}, \{ h^{L, 2}_{t} \})$, $p \in [1, +\infty]$, between two geodesic rays $h^{L, 1}_{t}, h^{L, 2}_{t}$, $t \in [0, + \infty[$, in the following way
\begin{equation}\label{eq_defn_chordal}
d_p(\{ h^{L, 1}_{t} \}, \{ h^{L, 2}_{t} \}) :=
\lim_{t \to \infty} \frac{d_p(h^{L, 1}_{t}, h^{L, 2}_{t} )}{t}.
\end{equation}
The limit is finite by the triangle inequality, and it exists by the fact that the metric spaces $(\mathcal{E}^p, d_p)$ are Busemann convex, see Chen-Cheng <cit.>, cf. (<ref>) and after Lemma <ref>.
Darvas-Lu in <cit.> proved that the chordal distance is indeed a distance on the space of geodesic rays departing from the same initial point (in particular, it separates the geodesic rays).
Geodesic rays have recently found applications in several areas of complex geometry.
Most notably, many results towards Yau-Tian-Donaldson conjecture, studying the existence of constant scalar curvature Kähler metrics in a given Kähler class, rely substantially on geodesic rays, see Phong-Ross-Sturm [53], Paul-Tian [52], Berman-Boucksom-Jonsson [3] or Li [46].
Part of the reason for this is that while the points of the space $\mathcal{H}$ parametrize geometric objects, (a subset of) points on the boundary at the infinity of $\mathcal{H}$ are parametrized by ample test configurations, some special degenerations of manifolds, algebraic in nature.
This proves useful in relating the existence of a certain metric on the line bundle to some algebraic obstruction.
The main goal of the current article is to further investigate the geometry at the infinity of the space of positive metrics on a given ample line bundle by studying chordal distances between pairs of geodesic rays.
As we show, for geodesic rays generated by ample test configurations, this chordal distance coincides with the spectral distance on filtrations on the section ring associated with the test configurations.
This fulfills the general philosophy of Boucksom-Hisamoto-Jonsson <cit.> for the distance functional, saying that the limiting behavior of a functional on the boundary of the space of positive metrics should be related with an appropriate functional defined on the space of non-Archimedean metrics on the line bundle.
Remark that the chordal distance is a complex-geometric quantity, defined using complex pluripotential theory, while the spectral distance on the filtrations is a purely algebro-geometric quantity.
Our result, hence, lies on the interface of the three domains.
To describe our main statement in more detail, recall that on the geometric side, to any ample test configuration $\mathcal{T}$ of $(X, L)$ and a fixed positive metric $h^L_0$ on $L$, Phong-Sturm in <cit.> associated a geodesic ray $h^{\mathcal{T}}_t$, $t \in [0, + \infty[$, of plurisubharmonic metrics on $L$ emanating from $h^L_0$ by considering the solution of the Dirichlet problem for a Monge-Ampère equation over a $\comp^*$-equivariant resolution of singularities of the test configuration with boundary conditions prescribed by the initial point of the ray, see Section <ref> for details.
By the results of Chu-Tosatti-Weinkove [21], based on the previous work of Phong-Sturm [56], the metrics $h^{\mathcal{T}}_t$ are $\mathscr{C}^{1, 1}$; from toric examples [20], [62], we cannot hope for better regularity in general.
On the algebraic side, recall that Witt Nyström in <cit.> associated with any test configuration $\mathcal{T}$ a submultiplicative filtration $\mathcal{F}^{\mathcal{T}}$ on the section ring
\begin{equation}
R(X, L) := \oplus_{k = 1}^{\infty} H^0(X, L^k),
\end{equation}
by considering the vanishing order along the central fiber of $\mathcal{T}$ of the $\comp^*$-equivariant meromorphic extension of a section from $R(X, L)$, see Section <ref> for details.
Now, for any two filtrations $\mathcal{F}_1, \mathcal{F}_2$ on a finite-dimensional vector space $V$, and any $p \in [1, +\infty]$, we define spectral distances $d_p(\mathcal{F}_1, \mathcal{F}_2)$ using the $l^p$-norms of the joint spectrum of the filtrations $\mathcal{F}_1, \mathcal{F}_2$, see (<ref>) for details.
It was established by Chen-Maclean <cit.>, cf. also Boucksom-Jonsson <cit.>, that for the filtrations $\mathcal{F}^{\mathcal{T}_1}, \mathcal{F}^{\mathcal{T}_2}$ associated with ample test configurations $\mathcal{T}_1, \mathcal{T}_2$, and any $p \in [1, +\infty[$, the following limit exists
\begin{equation}\label{eq_spec_dist}
d_p(\mathcal{F}^{\mathcal{T}_1}, \mathcal{F}^{\mathcal{T}_2}) :=
\lim_{k \to \infty} \frac{d_p(\mathcal{F}^{\mathcal{T}_1}_k, \mathcal{F}^{\mathcal{T}_2}_k)}{k},
\end{equation}
where $\mathcal{F}^{\mathcal{T}_1}_k, \mathcal{F}^{\mathcal{T}_2}_k$, $k \in \nat$, are the restrictions of $\mathcal{F}^{\mathcal{T}_1}, \mathcal{F}^{\mathcal{T}_2}$ on the graded pieces $H^0(X, L^k)$.
We shall prove, cf. Remark <ref>, that the limit also exists for $p = +\infty$.
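To fix ideas, here is a toy computation (ours) of a level-$k$ spectral distance for two filtrations admitting a common adapted basis, with jumping numbers $a_i$ and $b_i$ on that basis; the normalization by the dimension used below is one common convention and may differ from the one adopted in the text.

```python
import numpy as np

def spectral_distance(a, b, p):
    """l^p-distance built from the relative spectrum {a_i - b_i} of two
    filtrations given by weights a_i, b_i on a common adapted basis."""
    gaps = np.abs(np.asarray(a, float) - np.asarray(b, float))
    return gaps.max() if np.isinf(p) else float(np.mean(gaps**p)) ** (1 / p)

a = [3.0, 1.0, 0.0]    # jumping numbers of F1 on a 3-dimensional graded piece (toy)
b = [2.0, 2.0, -1.0]   # jumping numbers of F2 on the same basis
print(spectral_distance(a, b, 1),       # 1.0
      spectral_distance(a, b, np.inf))  # 1.0
```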
Our main result of this article says that the geometric and algebraic viewpoints on the distances associated with ample test configurations are compatible.
For any ample test configurations $\mathcal{T}_1, \mathcal{T}_2$ and any $p \in [1, +\infty]$, we have
\begin{equation}\label{eq_dist_na}
d_p \big(\{ h^{\mathcal{T}_1}_t \}, \{ h^{\mathcal{T}_2}_t \} \big)
=
d_p \big( \mathcal{F}^{\mathcal{T}_1}, \mathcal{F}^{\mathcal{T}_2} \big).
\end{equation}
a) A relation between the two distances was speculated and conjectured in the literature, see Darvas-Lu <cit.>, Zhang <cit.> and Remark <ref> for details.
When one of the test configurations is trivial (and hence the corresponding geodesic ray is constant) and $p \in [1, +\infty[$, (<ref>) is equivalent to the result of Hisamoto [41], which followed the work of Witt Nyström <cit.> and answers a conjecture <cit.>, cf. Zhang <cit.>, see Remark <ref> for details.
For $p = 1$, due to the relation between the energy functional and the $d_1$-distance, Theorem <ref> can be reduced using pluripotential theory to the case when one of the test configurations is trivial; this case is due to Reboulet <cit.>, see Remark <ref>.
Our proof of Theorem <ref> is new even in these special cases, and we would like to point out that many of the difficulties we encounter in our proof disappear there.
As we shall explain in Section <ref>, Theorem <ref> can be further refined.
More precisely, inspired by Berndtsson [5], we associate with two geodesic rays a probability measure on the real line so that the chordal distances between the rays coincides with the absolute moments of this measure.
Similarly, the results of Chen-Maclean [17], cf. also Boucksom-Jonsson <cit.>, show that the spectral distances between two filtrations, appearing on the right-hand side of (<ref>), correspond to the absolute moments of a relative spectral measure between the two filtrations.
Our second main result, Theorem ......, shows that the two above measures coincide.
This refines Theorem <ref>, which only says that the absolute moments of two measures coincide.
Let us now briefly describe the main idea behind the proof of Theorem <ref>.
It relies in an essential way on the well-known observation that one can naturally interpret the space of filtrations on a given finite-dimensional vector space as the boundary at the infinity (viewed in terms of geodesic rays) of the space of Hermitian norms on the vector space, see <cit.>.
This result can be viewed as a finite-dimensional analogue of Theorem <ref>.
To pass from this finite-dimensional picture to the infinite-dimensional one, we rely on the methods of geometric quantization.
Previous works of Phong-Sturm [55], [56] and of the author [35], [36] lie at the heart of our approach.
More precisely, recall that Phong-Sturm in [55] constructed for any ample test configuration a ray of Hermitian norms on $R(X, L)$ which quantizes the geodesic ray of metrics on $L$ associated with the test configuration (in the sense that the Fubini-Study metric of the ray of norms is related to the ray of metrics), see Theorem <ref>.
Recall further that the author in [36] established that the Fubini-Study map, when restricted to the set of submultiplicative norms, is an isometry with respect to the natural distances, see Theorem <ref>.
These two results, as well as the fact that the space of Hermitian norms endowed with the natural distances is Busemann convex, see (<ref>), and the fact that the geodesic ray of Hermitian norms constructed by Phong-Sturm is “almost submultiplicative” in a sense which will be made precise in Section <ref>, allow us to establish one part of Theorem <ref>, showing that the left-hand side of (<ref>) is no bigger than the right-hand side.
Establishing the opposite bound, showing that the right-hand side of (<ref>) is no bigger than the left-hand side, is much more intricate and requires a more detailed analysis of the geodesic ray at infinity.
We first compare in Theorem <ref> the geodesic ray of Hermitian norms on the section ring and the ray of $L^2$-norms associated with the geodesic ray of the test configuration.
To do so, we rely on the results of Phong-Sturm [56] about the boundedness of geodesic rays, see Theorem <ref>, and on a refinement of our previous work [35] on the metric structure of the section ring, showing that our results can be extended to the setting of degenerating families.
Then we prove that it suffices to treat the case when the singularities of the central fibers of the test configurations are mild enough.
Then we show that for test configurations with mild singularities, it is possible to estimate the distance between the $L^2$-norms of geodesic rays of metrics in terms of the distance between the geodesic rays of metrics themselves.
This is done by using the quantized maximum principle of Berndtsson [5] and by relying on the techniques of Dai-Liu-Ma [22] and Ma-Marinescu [48] on the study of Bergman kernels, which we generalize to the setting of degenerating families of manifolds.
In total, we establish another part of Theorem <ref>, showing that the right-hand side of (<ref>) is no bigger than the left-hand side, finishing the proof of Theorem <ref>.
This article is organized as follows.
In Section <ref>, we recall the necessary preliminaries for Theorem <ref> and provide some applications.
In Section <ref>, we establish one part of Theorem <ref>, showing that the left-hand side of (<ref>) is no bigger than the right-hand side, and in Section <ref>, we establish the opposite bound.
We denote by $\mathbb{D}_n(r)$ (resp. $\mathbb{D}^*_n(r)$) the (resp. punctured) euclidean ball in $\comp^n$ of radius $r > 0$, and by $\mathbb{D}_n(r_1, r_2)$ the euclidean annulus in $\comp^n$ of interior radius $r_1 > 0$ and exterior radius $r_2 > r_1$.
When $n = 1$ or $r = 1$, we omit them from the notation.
On a metric space $(X, d)$, for $x \in X$, $r > 0$, we denote by $B(x, r)$ the ball of radius $r$ around $x$.
For a function $f : X \to \real$, defined on $(X, d)$, we denote by $f_*$ the lower-semicontinuous regularization of $f$, given by $f_*(x) := \lim_{\epsilon \to 0} \inf_{y \in B(x, \epsilon)} f(y)$.
We similarly define the upper-semicontinuous regularization and we extend these notations to metrics on line bundles.
Let $(X, \omega)$ be a compact Kähler manifold.
By the $\partial \dbar$-lemma, the space $\mathcal{H}_{[\omega]}$ of Kähler metrics on $X$ cohomologous to $\omega$ can be identified with the space $\mathcal{H}_{\omega}$ of Kähler potentials, consisting of $u \in \ccal^{\infty}(X, \real)$, such that $\omega_u := \omega + \imun \partial \dbar u$ is strictly positive.
Assume that there is a holomorphic line bundle $L$, such that the De Rham class $[\omega]$ of $\omega$ is related with the first Chern class $c_1(L)$ of $L$ as $[\omega] = 2 \pi c_1(L)$.
Then the space $\mathcal{H}_{\omega}$ can be viewed as the space of positive Hermitian metrics $\mathcal{H}^L$ on $L$ upon the identification
\begin{equation}\label{eq_pot_metr_corr}
u \mapsto h^L := e^{-u} \cdot h^L_0,
\end{equation}
where $h^L_0$ is a positive Hermitian metric on $L$, verifying $\omega = 2 \pi c_1(L, h^L_0)$.
The function $u$ is called the potential of $h^L$.
These identifications will be implicit later on, and we sometimes use the letter $\mathcal{H}$ to designate $\mathcal{H}_{\omega}, \mathcal{H}_{[\omega]}$ or $\mathcal{H}^L$.
We denote by ${\rm{PSH}}(X, \omega)$ the set of $\omega$-psh potentials; these are upper semicontinuous functions $u \in L^1(X, \real \cup \{ -\infty \})$, such that
\begin{equation}
\omega_u := \omega + \imun \partial \dbar u
\end{equation}
is positive as a $(1, 1)$-current.
We say a (singular) metric $h^L$ on $L$ is psh if its potential is $\omega$-psh.
A Hermitian metric $h^L$ on a line bundle $L$ over a compact manifold is called bounded if for any (or some) smooth metric $h^L_0$ on $L$, there is $C > 0$, such that $\exp(-C) \cdot h^L_0 \leq h^L \leq \exp(C) \cdot h^L_0$.
We denote by $d_{+ \infty}(h^L_0, h^L)$ the smallest constant $C > 0$ verifying the above inequality.
For a fixed Hermitian metric $h^L$ on a line bundle $L$ over a manifold $X$ (resp. and a measure $\mu$ on $X$), we denote by ${\rm{Ban}}^{\infty}_k(h^L) = \| \cdot \|_{L^{\infty}_k(h^L)}$ (resp. ${\rm{Hilb}}_k(h^L, \mu) = \| \cdot \|_{L^2_k(h^L, \mu)}$), $k \in \nat$, the $L^{\infty}$-norm (resp. $L^2$-norm) on $H^0(X, L^k)$ induced by $h^L$ (resp. and $\mu$), i.e. for any $f \in H^0(X, L^k)$, we define $\| f \|_{L^{\infty}_k(h^L)} = \sup_{x \in X} |f(x)|_{h^L}$ (resp. $\| f \|^2_{L^2_k(h^L, \mu)} = \int_{X} |f(x)|^2_{h^L} \, d \mu(x)$).
We denote by ${\rm{Ban}}^{\infty}(h^L) = \sum_{k = 0}^{\infty} {\rm{Ban}}^{\infty}_k(h^L)$ and ${\rm{Hilb}}(h^L, \mu) = \sum_{k = 0}^{\infty} {\rm{Hilb}}_k(h^L, \mu)$ the induced graded norms on $R(X, L)$.
When $h^L$ is bounded psh and $\mu$ is given by $\frac{1}{n!} c_1(L, h^L)^n$, where the power is interpreted in the Bedford-Taylor sense [1], we omit $\mu$ from the notation.
When the volume form $\mu$ is the symplectic volume $\frac{\omega^n}{n!}$ of some Kähler form $\omega$ on $X$, we denote ${\rm{Hilb}}(h^L, \mu)$ by ${\rm{Hilb}}(h^L, \omega)$.
I would like to thank Rémi Reboulet and Lars Martin Sektnan for their invitation to University of Gothenburg; in particular, Rémi who drew my attention to the problem of this article during my visit and shared some of his ideas.
I also thank Sébastien Boucksom for many enlightening discussions on non-Archimedean pluripotential theory and related fields.
Finally, I would like to acknowledge the support of CNRS and École polytechnique.
§ NORMS, FILTRATIONS, METRICS AND DEGENERATIONS
The main goal of this section is to recall the necessary preliminaries for Theorem <ref> and to describe some applications.
More precisely, in Section <ref> we introduce natural metric structures on the sets of Hermitian norms and of filtrations on a given finitely dimensional vector space.
In Section <ref>, we recall the basics of pluripotential theory.
In Section <ref>, we recall the basics of test configurations.
Finally, in Section <ref>, we describe some applications of Theorem <ref>.
§.§ Metric structures on Hermitian norms and filtrations
The main goal of this section is to introduce natural distances on the spaces of Hermitian norms and filtrations on a given finitely dimensional vector space.
Let $V$ be a complex vector space, $\dim V = n$.
We denote by $\mathcal{H}_V$ the space of Hermitian norms $H$ on $V$, viewed as an open subset of the Hermitian operators ${\rm{Herm}}(V)$.
Let $\lambda_1, \ldots, \lambda_n$ be the ordered spectrum of $h \in {\rm{Herm}}(V)$ with respect to a norm $H \in \mathcal{H}_V$.
For $p \in [1, +\infty[$, we define
\begin{equation}
\| h \|^H_p
:=
\sqrt[p]{\frac{\sum_{i = 1}^{\dim V} |\lambda_i|^p}{\dim V}},
\end{equation}
and we let $\| h \|^H_{+ \infty} := \max |\lambda_i|$.
By the Ky Fan inequality, one can establish that $\| \cdot \|^H_p$, $p \in [1, +\infty]$, is a Finsler norm for any $H$, i.e. it satisfies the triangle inequality, cf. <cit.>.
We then define the length metric $d_p(H_0, H_1)$, $H_0, H_1 \in \mathcal{H}_V$, as usual through the infimum of the length $l(\gamma) := \int_0^1 \|\gamma'(t)\|^{\gamma(t)}_p dt$, where $\gamma$ is a piecewise smooth path in $\mathcal{H}_V$ joining $H_0, H_1$.
One can verify, cf. <cit.>, that this metric admits the following explicit description.
Let $T \in {\rm{Herm}}(V)$ be the transfer map between the Hermitian norms $H_0, H_1 \in \mathcal{H}_V$, i.e. the map for which the Hermitian products $\langle \cdot, \cdot \rangle_{H_0}, \langle \cdot, \cdot \rangle_{H_1}$, induced by $H_0$ and $H_1$, are related as $\langle \cdot, \cdot \rangle_{H_1} = \langle T \cdot, \cdot \rangle_{H_0}$; then
\begin{equation}\label{eq_dist_transf}
d_p(H_0, H_1)
=
\sqrt[p]{\frac{{\rm{Tr}}[|\log T|^p]}{\dim V}},
\end{equation}
for any $p \in [1, +\infty[$ and $d_{+ \infty}(H_0, H_1) = \|\log T \|$, where $\| \cdot \|$ is the operator norm with respect to $H_0$.
Moreover, the Hermitian norms $H_t$, $t \in [0, 1]$, corresponding to the scalar products $\langle \cdot, \cdot \rangle_{H_t} := \langle T^t \cdot, \cdot \rangle_{H_0}$ are geodesics in $(\mathcal{H}_V, d_p)$, $p \in [1, +\infty]$.
Later on, we call them the distinguished geodesics.
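For instance, if $H_1$ is obtained from $H_0$ by declaring the basis $(e^{-a_1} e_1, \ldots, e^{-a_n} e_n)$ orthonormal, where $(e_1, \ldots, e_n)$ is an $H_0$-orthonormal basis and $a_1, \ldots, a_n \in \real$, then the transfer map $T$ is diagonal in the basis $(e_1, \ldots, e_n)$, and the distinguished geodesic $H_t$, $t \in [0, 1]$, is the Hermitian norm declaring the basis $(e^{-t a_1} e_1, \ldots, e^{-t a_n} e_n)$ orthonormal.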
For $p \in ]1, +\infty[$, it is possible to verify that $(\mathcal{H}_V, d_p)$ is a uniquely geodesic space, cf. <cit.>, and hence these are the only geodesic segments between $H_0$ and $H_1$; see, however, <cit.> for a counterexample of the analogous statement for $p = 1, +\infty$.
Let us now discuss the non-Archimedean part of the story.
A filtration $\mathcal{F}$ of a vector space $V$ is a map from $\real$ to vector subspaces of $V$, $t \mapsto \mathcal{F}^t V$, verifying $\mathcal{F}^t V \subset \mathcal{F}^s V$ for $t > s$, and such that $\mathcal{F}^t V = V$ for sufficiently small $t$ and $\mathcal{F}^t V = \{0\}$ for sufficiently big $t$.
We assume that this map is left-continuous, i.e. for any $t \in \real$, there is $\epsilon_0 > 0$, such that $\mathcal{F}^t V = \mathcal{F}^{t - \epsilon} V $ for any $0 < \epsilon < \epsilon_0$.
We define the jumping numbers $e_{\mathcal{F}}(j)$, $j = 1, \ldots, n$, of the filtration $\mathcal{F}$ as follows
\begin{equation}\label{eq_defn_jump_numb}
e_{\mathcal{F}}(j) := \sup \Big\{ t \in \real : \dim \mathcal{F}^t V \geq j \Big\}.
\end{equation}
Filtrations $\mathcal{F}$ on $V$ are in bijection with functions $\chi_{\mathcal{F}} : V \to [0, +\infty[$, defined as
\begin{equation}\label{eq_filtr_norm}
\chi_{\mathcal{F}}(s) := \exp(- w_{\mathcal{F}}(s)),
\end{equation}
where $w_{\mathcal{F}}(s)$ is the weight associated with the filtration, defined as
\begin{equation}
w_{\mathcal{F}}(s) := \sup \big\{ \lambda \in \real : s \in \mathcal{F}^{\lambda} V \big\}.
\end{equation}
An easy verification shows that $\chi_{\mathcal{F}}$ is a non-Archimedean norm on $V$ with respect to the trivial absolute value on $\comp$, i.e. it satisfies the following axioms
* $\chi_{\mathcal{F}}(f) = 0$ if and only if $f = 0$,
* $\chi_{\mathcal{F}}(\lambda f) = \chi_{\mathcal{F}}(f)$, for any $\lambda \in \comp^*$, $f \in V$,
* $\chi_{\mathcal{F}}(f + g) \leq \max \{ \chi_{\mathcal{F}}(f), \chi_{\mathcal{F}}(g) \}$, for any $f, g \in V$.
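As a simple illustration, consider $V = \comp^2$ with a basis $e_1, e_2$, and let $\mathcal{F}^t V = V$ for $t \leq b$, $\mathcal{F}^t V = \comp e_1$ for $b < t \leq a$, and $\mathcal{F}^t V = \{0\}$ for $t > a$, where $a \geq b$. Then the jumping numbers are $e_{\mathcal{F}}(1) = a$, $e_{\mathcal{F}}(2) = b$, and the associated weights verify $w_{\mathcal{F}}(\lambda e_1) = a$ for $\lambda \in \comp^*$ and $w_{\mathcal{F}}(\lambda e_1 + \mu e_2) = b$ for $\lambda \in \comp$, $\mu \in \comp^*$, so that $\chi_{\mathcal{F}}(\lambda e_1) = e^{-a}$ and $\chi_{\mathcal{F}}(\lambda e_1 + \mu e_2) = e^{-b}$, in accordance with the above axioms.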
As it was established for example in <cit.>, for any two filtrations $\mathcal{F}_1, \mathcal{F}_2$ on $V$, there is a basis $e_1, \ldots, e_n$ of $V$, which jointly diagonalizes $\chi_{\mathcal{F}_1}$ and $\chi_{\mathcal{F}_2}$, i.e. for any $\lambda_1, \ldots, \lambda_n \in \comp$ and $j = 1, 2$, we have
\begin{equation}\label{eq_sim_diag_nna}
\chi_{\mathcal{F}_j} \Big(\sum_{i = 1}^{n} \lambda_i e_i \Big) = \max_{i = 1, \ldots, n} \big\{ \chi_{\mathcal{F}_j}(\lambda_i e_i) \big\}.
\end{equation}
Analogously to (<ref>), for $p \in [1, +\infty[$, we define using this basis
\begin{equation}\label{eq_dp_filtr}
d_p(\mathcal{F}_1, \mathcal{F}_2)
:=
\sqrt[p]{\frac{\sum_{i = 1}^{\dim V} |w_{\mathcal{F}_1}(e_i) - w_{\mathcal{F}_2}(e_i)|^p}{\dim V}},
\end{equation}
and we let $d_{+ \infty}(\mathcal{F}_1, \mathcal{F}_2) := \max_{x \in V \setminus \{ 0 \}} |w_{\mathcal{F}_1}(x) - w_{\mathcal{F}_2}(x)|$.
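For instance, if $\mathcal{F}_1, \mathcal{F}_2$ are two filtrations on $V = \comp^2$ jointly diagonalized by a basis $e_1, e_2$ with weights $w_{\mathcal{F}_1}(e_i) = a_i$, $w_{\mathcal{F}_2}(e_i) = b_i$, $i = 1, 2$, then
\begin{equation}
d_p(\mathcal{F}_1, \mathcal{F}_2)
=
\sqrt[p]{\frac{|a_1 - b_1|^p + |a_2 - b_2|^p}{2}},
\qquad
d_{+ \infty}(\mathcal{F}_1, \mathcal{F}_2)
=
\max_{i = 1, 2} |a_i - b_i|,
\end{equation}
where the second identity follows from the bound $|w_{\mathcal{F}_1}(x) - w_{\mathcal{F}_2}(x)| \leq \max_{i} |a_i - b_i|$, valid for any $x \in V \setminus \{0\}$, since $w_{\mathcal{F}_j}(x)$ equals the minimum of the weights of the basis vectors appearing in the decomposition of $x$.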
As we recall later in (<ref>), the space of non-Archimedean norms on a finitely dimensional vector space can be viewed as the boundary at infinity of the space of Hermitian norms.
The metric structures (<ref>) and (<ref>) are compatible under this identification, see (<ref>).
Our main result, Theorem <ref>, is an analogue of this statement in the infinitely-dimensional setting.
§.§ Pluripotential theory and geodesic segments between positive metrics
The main goal of this section is to recall some basic facts from complex pluripotential theory, emphasizing the metric part and in particular the study of geodesic segments.
Let us fix a Kähler form $\omega$ on $X$.
One can introduce on the space of Kähler potentials $\mathcal{H}_{\omega}$ a collection of $L^p$-type Finsler metrics, $p \in [1, +\infty[$, defined as follows.
If $u \in \mathcal{H}_{\omega}$ and $\xi \in T_u \mathcal{H}_{\omega} \simeq \ccal^{\infty}(X, \real)$, then the $L^p$-length of $\xi$ is given by the following expression
\begin{equation}\label{eq_finsl_dist_fir}
\| \xi \|_p^u
:=
\sqrt[p]{
\frac{1}{\int \omega^n}
\int_X |\xi(x)|^p \cdot \omega_u^n(x)}.
\end{equation}
For $p = 2$, this was introduced by Mabuchi [50], and for $p \in [1, +\infty[$ by Darvas [23].
For brevity, we omit $\omega$ from our further notations.
Darvas in [23] studied the completion $(\mathcal{E}^p, d_p)$ of the path length metric structures $(\mathcal{H}, d_p)$ associated with (<ref>), and proved that these completions are geodesic metric spaces and have a vector space structure.
Certain geodesic segments of $(\mathcal{E}^p, d_p)$ can be constructed as upper envelopes of quasi-psh functions.
More precisely, we identify paths $u_t \in \mathcal{E}^p$, $t \in [0, 1]$, with rotationally-invariant functions $\hat{u}$ over $X \times \mathbb{D}(e^{-1}, 1)$ through the following formula
\begin{equation}\label{eq_defn_hat_u}
\hat{u}(x, \tau) = u_{t}(x), \quad \text{where} \quad x \in X \, \text{ and } \, t = - \log |\tau|.
\end{equation}
We say that a curve $[0,1] \ni t \to v_t \in \mathcal{E}^p$ is a weak subgeodesic connecting $u_0, u_1 \in \mathcal{E}^p$ if $d_p(v_t, u_i) \to 0$, as $t \to 0$ for $i = 0$ and $t \to 1$ for $i = 1$, and the associated function $\hat{v}$, defined as in (<ref>), is $\pi^* \omega$-psh on $X \times \mathbb{D}(e^{-1}, 1)$.
As shown in <cit.>, the following envelope
\begin{equation}\label{eq_geod_as_env}
u_t := \sup \Big\{
v_t \, : \, t \to v_t \, \text{ is a weak subgeodesic connecting } \, v_0 \leq u_0 \text{ and } v_1 \leq u_1
\Big\},
\end{equation}
is a $d_p$-geodesic connecting $u_0, u_1$.
It will be later called the distinguished geodesic segment.
According to Chen-Cheng <cit.>, the metric spaces $(\mathcal{E}^p, d_p)$, $p \in [1, +\infty[$, are Busemann convex, i.e. for any distinguished geodesic segments $u_t, v_t \in \mathcal{E}^p$, $t \in [0, 1]$, departing from the same initial point, for any $s \in [0, 1]$, we have
\begin{equation}\label{eq_bus_conv_mab}
\frac{d_p(u_s, v_s)}{s} \leq d_p(u_1, v_1).
\end{equation}
The space $(\mathcal{E}^2, d_2)$ is, moreover, $\rm{CAT}(0)$ by the result of Darvas <cit.>, building on the previous work of Calabi-Chen <cit.>.
It is well-known, cf. Guedj-Zeriahi <cit.>, that
\begin{equation}\label{eq_inter_ep}
\underset{p \in [1, +\infty[}{\cap} \mathcal{E}^p
=
{\rm{PSH}}(X, \omega) \cap L^{\infty}(X).
\end{equation}
When $u_0, u_1 \in {\rm{PSH}}(X, \omega) \cap L^{\infty}(X)$, Berndtsson in <cit.> proved that $u_t$, $t \in [0, 1]$, defined by (<ref>), verifies $u_t \in L^{\infty}(X)$ and can be described as the only path connecting $u_0$ to $u_1$ so that $\hat{u}$ is the solution of the following Monge-Ampère equation
\begin{equation}\label{eq_ma_geod}
(\pi^* \omega + \imun \partial \dbar \hat{u})^{n + 1} = 0,
\end{equation}
where the wedge power is interpreted in the Bedford-Taylor sense [1].
For smooth geodesic segments in $(\mathcal{H}, d_2)$, Semmes [61] and Donaldson [33] have made similar observations before.
The uniqueness of the solution of (<ref>) is assured by <cit.>.
Remark, in particular, that for any $u_0, u_1 \in {\rm{PSH}}(X, \omega) \cap L^{\infty}(X)$, the distinguished weak geodesic connecting them is the same if we view $u_0, u_1$ as elements in any of $\mathcal{E}^p$, $p \in [1, + \infty[$.
Now, we define the space $\mathcal{E}^{+ \infty}$ as the completion of $\mathcal{H}$ with respect to the distance $d_{+\infty}$.
More explicitly, by a version of Demailly's regularization theorem, see [28], [29], $\mathcal{E}^{+ \infty}$ can be identified with the space of continuous psh metrics on $L$, cf. <cit.> and <cit.>.
From <cit.>, the metric $d_{+\infty}$ on $\mathcal{E}^{+ \infty}$ can be alternatively defined as the path length metric structure associated with the $L^{\infty}$-length, defined in the notations of (<ref>) as $\| \xi \|_{+\infty}^u := \sup |\xi(x)|$, in a direct analogy with the definitions of $d_p$, $p \in [1, +\infty[$.
The following result is undoubtedly well-known to the experts in the field.
We present its proof later in this section.
For any $u_0, u_1 \in \mathcal{E}^{+ \infty}$, we have
\begin{equation}
\lim_{p \to \infty} d_p(u_0, u_1) = d_{+ \infty}(u_0, u_1).
\end{equation}
From Lemma <ref>, the space $(\mathcal{E}^{+ \infty}, d_{+ \infty})$ is Busemann convex in the sense of (<ref>).
Hence, the chordal distance (<ref>) between rays of continuous metrics is well-defined for any $p \in [1, +\infty]$.
Recall that the distance between two given elements $u_0, u_1 \in {\rm{PSH}}(X, \omega) \cap L^{\infty}(X)$ can be expressed in terms of the distinguished geodesics $u_t$, $t \in [0, 1]$, connecting them, see (<ref>).
More precisely, Berndtsson in <cit.> proved that $u_t \in L^{\infty}(X)$ and the limits $\lim_{t \to 0} u_t = u_0$, $\lim_{t \to 1} u_t = u_1$ hold in the uniform sense.
Since $u_t$ is a weak subgeodesic, for fixed $x \in X$, the function $u_t(x)$ is convex in $t \in [0, 1]$, see <cit.>.
Hence, one-sided derivatives $\dot{u}_t^{-}$, $\dot{u}_t^{+}$ of $u_t$ are well-defined for $t \in ]0, 1[$ and they increase in $t$.
We denote $\dot{u}_0 := \lim_{t \to 0} \dot{u}_t^{-} = \lim_{t \to 0} \dot{u}_t^{+}$.
From <cit.>, we know that $\dot{u}_0$ is bounded and by Darvas <cit.>, we, moreover, have
\begin{equation}\label{eq_bnd_darvas_sup}
\sup |\dot{u}_0| \leq \sup |u_1 - u_0|.
\end{equation}
According to Darvas-Lu-Rubinstein <cit.>, refining previous results of Chen [18] and Darvas [24], for any $u_0 \in \mathcal{H}_{\omega}$, $u_1 \in {\rm{PSH}}(X, \omega) \cap L^{\infty}(X)$, $p \in [1, +\infty[$, we have
\begin{equation}\label{eq_d_p_berndss}
d_p(u_0, u_1)
=
\sqrt[p]{ \frac{1}{\int \omega^n} \int_X |\dot{u}_0(x)|^p \cdot \omega_{u_0}^n(x)}.
\end{equation}
From Darvas <cit.>, we actually know that (<ref>) holds also for $u_0, u_1 \in {\rm{PSH}}(X, \omega) \cap L^{\infty}(X)$, such that $\Delta u_0, \Delta u_1 \in L^{\infty}(X)$, where $\Delta$ is the Laplace operator on $X$ (we say in this case that $u_0, u_1 \in \mathscr{C}^{1, \overline{1}}(X)$).
Let us first assume that $u_0, u_1$ are smooth and positive, i.e. $u_0, u_1 \in \mathcal{H}_{\omega}$.
Then by (<ref>),
\begin{equation}
\lim_{p \to + \infty} d_p(u_0, u_1) = \sup |\dot{u}_0|.
\end{equation}
However, for geodesics with smooth extremities, we have $\sup |\dot{u}_0| = \sup |u_1 - u_0|$, cf. <cit.>. This proves Lemma <ref> in that case.
By Demailly's regularization theorem, see [28], [29], cf. <cit.>, for any $u_0, u_1 \in \mathcal{E}^{+ \infty}$, there are sequences $u_{0, i}, u_{1, i} \in \mathcal{H}_{\omega}$, $i \in \nat^*$, which converge uniformly, as $i \to \infty$, to $u_0$ and $u_1$ respectively.
Since by above, Lemma <ref> holds for $u_{0, i}, u_{1, i}$, it holds in full generality.
We say that the path $u_t \in {\rm{PSH}}(X, \omega) \cap L^{\infty}(X)$, $t \in [0, 1]$, is $\mathscr{C}^{1, \overline{1}}$ if $\hat{u}, \Delta \hat{u} \in L^{\infty}(X \times \mathbb{D}(e^{-1}, 1))$, where $\Delta$ is the Laplace operator on $X \times \mathbb{D}(e^{-1}, 1)$.
By standard regularity results, we then see that $\hat{u} \in \mathscr{C}^{1, \alpha}(X \times \mathbb{D}(e^{-1}, 1))$ for any $\alpha < 1$.
Hence, the one-sided derivatives $\dot{u}_t^{-}$ and $\dot{u}_t^{+}$ coincide, and we denote them by $\dot{u}_t$.
Berndtsson in <cit.>, cf. also Darvas <cit.>, established that for $\mathscr{C}^{1, \overline{1}}$ geodesic segments $u_t$, $t \in [0, 1]$, the following bounded measure on the real line
\begin{equation}\label{eq_berndt_meas}
\mu_t
:=
(\dot{u}_t )_* \big( \omega_{u_t}^n \big),
\end{equation}
doesn't depend on $t \in [0, 1]$.
From (<ref>), we see that the absolute moments of this measure verify $\int |x|^p \, d\mu_t(x) = d_p(u_0, u_1)^p \cdot \int \omega^n$, $p \in [1, +\infty[$.
§.§ Test configurations, submultiplicative filtrations and geodesic rays
The main goal of this section is to recall the definition of a test configuration and the relation between test configurations, submultiplicative filtrations and geodesic rays of metrics.
Recall first that a test configuration $\mathcal{T} = (\pi: \mathcal{X} \to \comp, \mathcal{L})$ for $(X, L)$ consists of
* A scheme $\mathcal{X}$ with a $\comp^*$-action $\rho$,
* A $\comp^*$-equivariant line bundle $\mathcal{L}$ over $\mathcal{X}$,
* A flat $\comp^*$-equivariant projection $\pi : \mathcal{X} \to \comp$, where $\comp^*$ acts on $\comp$ by multiplication, such that if we denote its fibers by $X_{\tau} := \pi^{-1}(\tau)$, $\tau \in \comp$, then $(X_1, \mathcal{L}|_{X_1})$ is isomorphic to $(X, L)$.
Remark that our definition differs slightly from the usual one, which only requires $(X_1, \mathcal{L}|_{X_1})$ to be isomorphic to $(X, L^r)$ for some $r \in \nat^*$.
We say that a test configuration is (semi)ample if $\mathcal{L}$ is relatively (semi)ample.
We say that it is normal if $\mathcal{X}$ is normal.
Remark that the $\comp^*$-action induces the canonical isomorphisms
\begin{equation}\label{eq_can_ident_test}
\mathcal{X} \setminus X_0 \simeq \comp^* \times X, \qquad \mathcal{L}|_{\mathcal{X} \setminus X_0} \simeq p^* L,
\end{equation}
where $p : \comp^* \times X \to X$ is the natural projection.
Now, a collection of filtrations on the graded pieces $A_k$ of a graded vector space $A := \oplus_{k = 0}^{+\infty} A_k$ is called a (graded) filtration on $A$.
We say that a graded filtration $\mathcal{F}$ is bounded if there is $C > 0$, such that for any $k \in \nat^*$, $\mathcal{F}^{ C k} A_k = \{0\}$.
A graded filtration $\mathcal{F}$ on a ring is called submultiplicative if for any $t, s \in \real$, $k, l \in \nat$, we have
\begin{equation}\label{eq_subm_filt}
\mathcal{F}^t A_k \cdot \mathcal{F}^s A_l \subset \mathcal{F}^{t + s} A_{k + l}.
\end{equation}
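For example, for a point $x \in X$ and the section ring $A = R(X, L)$, the filtration by vanishing order at $x$, $\mathcal{F}^t H^0(X, L^k) := \{ s \in H^0(X, L^k) : {\rm{ord}}_x(s) \geq t \}$, is submultiplicative in the sense of (<ref>), since vanishing orders are additive under multiplication of sections.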
Remark that for any submultiplicative graded filtration on a finitely generated ring $A$, there is $C > 0$, such that for any $k \in \nat^*$, $\mathcal{F}^{- C k} A_k = A_k$.
We say that a filtration $\mathcal{F}$ is a $\mathbb{Z}$-filtration if its weights are integral.
A $\mathbb{Z}$-filtration on a finitely generated ring $A$ is called a filtration of finite type if the associated $\comp[\tau]$-algebra ${\rm{Rees}}(\mathcal{F}) := \sum_{(\lambda, k) \in \mathbb{Z} \times \mathbb{N}} \tau^{- \lambda} \mathcal{F}^{\lambda} A_k$, also called the Rees algebra, is finitely generated.
Remark that one can reconstruct a filtration $\mathcal{F}$ from the $\comp[\tau]$-algebra structure of ${\rm{Rees}}(\mathcal{F})$.
Clearly, filtrations of finite type are bounded.
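For example, for the trivial filtration on $A$, given by $\mathcal{F}^{\lambda} A_k = A_k$ for $\lambda \leq 0$ and $\mathcal{F}^{\lambda} A_k = \{0\}$ for $\lambda > 0$, we have ${\rm{Rees}}(\mathcal{F}) = A[\tau]$, which is a finitely generated $\comp[\tau]$-algebra whenever $A$ is finitely generated; hence the trivial filtration is of finite type.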
Following Witt Nyström <cit.>, let us construct a submultiplicative filtration $\mathcal{F}^{\mathcal{T}}$ on $R(X, L)$ associated with a test configuration $\mathcal{T}$ of $(X, L)$ as follows.
Pick an element $s \in H^0(X, L^k)$, $k \in \nat^*$, and consider the section $\tilde{s} \in H^0(\mathcal{X} \setminus X_0, \mathcal{L}^k)$, obtained by the application of the $\comp^*$-action to $s$.
By the flatness of $\pi$, the section $\tilde{s}$ extends to a meromorphic section over $\mathcal{X}$, cf. Witt Nyström <cit.>.
In other words, there is $l \in \integ$, such that for a coordinate $\tau$ on $\comp$, we have $\tilde{s} \cdot \tau^l \in H^0(\mathcal{X}, \mathcal{L}^k)$.
We define the restriction $\mathcal{F}^{\mathcal{T}}_k$ of the filtration $\mathcal{F}^{\mathcal{T}}$ to $H^0(X, L^k)$ as
\begin{equation}\label{eq_defn_filt_test}
\mathcal{F}^{\mathcal{T}, \lambda}_k H^0(X, L^k)
:=
\Big\{
s \in H^0(X, L^k) : \tau^{- \lceil \lambda \rceil} \cdot \tilde{s} \in H^0(\mathcal{X}, \mathcal{L}^k)
\Big\},
\quad \lambda \in \real.
\end{equation}
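For instance, for the trivial test configuration $(\comp \times X \to \comp, p^* L)$, where $\comp^*$ acts by multiplication on the first factor and trivially on $p^* L$, we have $\tilde{s}(\tau, x) = s(x)$, so that $\tau^{- \lceil \lambda \rceil} \cdot \tilde{s}$ extends holomorphically over $X_0$ if and only if $\lambda \leq 0$; hence (<ref>) recovers the trivial filtration on $R(X, L)$.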
Alternatively, for any $k \in \nat^*$, consider the embedding $H^0(\mathcal{X}, \mathcal{L}^k) \to H^0(X, L^k) \otimes \comp[\tau, \tau^{-1}]$, induced by (<ref>).
An easy verification shows that $\mathcal{F}^{\mathcal{T}}$ is defined in such a way that under this embedding the $\comp[\tau]$-algebras $R(\mathcal{X}, \mathcal{L})$ and ${\rm{Rees}}(\mathcal{F}^{\mathcal{T}})$ are isomorphic, cf. <cit.>.
Since for an ample $\mathcal{L}$ the $\comp[\tau]$-algebra $R(\mathcal{X}, \mathcal{L})$ is finitely generated, the filtration $\mathcal{F}^{\mathcal{T}}$ is of finite type for ample test configurations $\mathcal{T}$, cf. <cit.> or <cit.>.
From the Rees construction, for any filtration $\mathcal{F}$ of finite type on $R(X, L)$, there is $d \in \nat^*$ and an ample test configuration $\mathcal{T}$ of $(X, L^d)$, such that the restriction of $\mathcal{F}$ to $R(X, L^d) \subset R(X, L)$ coincides with $\mathcal{F}^{\mathcal{T}}$.
For completeness, let us recall this construction in detail.
Since $\mathcal{F}$ is of finite type, the associated Rees algebra ${\rm{Rees}}(\mathcal{F})$ is a finitely generated $\comp[\tau]$-algebra.
Let $d \in \nat^*$ be such that ${\rm{Rees}}(\mathcal{F})^{(d)} := \sum_{(\lambda, k) \in \mathbb{Z} \times \mathbb{N}} \tau^{- \lambda} \mathcal{F}^{\lambda} H^0(X, L^{dk})$ is generated in degree one.
Consider $\mathcal{X} := \rm{Proj}_{\comp[\tau]}({\rm{Rees}}(\mathcal{F})^{(d)})$, $\mathcal{L} := \mathscr{O}(1)$ with the natural map $\pi : \mathcal{X} \to \comp = \rm{Spec}(\comp[\tau])$.
Clearly, $\mathcal{L}$ is ample, cf. <cit.>.
Remark that ${\rm{Rees}}(\mathcal{F})^{(d)}$ is torsion free, and so by <cit.>, it is a flat $\comp[\tau]$-algebra, which means that $\pi$ is flat.
There is also a natural equivariant $\comp^*$-action on $\mathcal{X}$, which is compatible with $\pi$.
The fiber $X_1$ of $\pi$ at $\tau = 1$ equals $\rm{Proj}({\rm{Rees}}(\mathcal{F})^{(d)} \otimes_{\comp[\tau]} \comp[\tau] / (\tau - 1))$, cf. <cit.>, and since ${\rm{Rees}}(\mathcal{F})^{(d)} \otimes_{\comp[\tau]} \comp[\tau] / (\tau - 1)$ is isomorphic to $R(X, L^d)$, we have $(X_1, \mathcal{L}|_{X_1}) = (X, L^d)$ by <cit.>.
Hence, the pair $\mathcal{T} := (\pi : \mathcal{X} \to \comp, \mathcal{L})$ is a test configuration.
In fact, by <cit.>, when restricted to elements of sufficiently large degree, we have an isomorphism between the $\comp[\tau]$-algebras $R(\mathcal{X}, \mathcal{L})$ and ${\rm{Rees}}(\mathcal{F})^{(d)}$.
Hence, the filtration associated with $\mathcal{T}$, restricted to elements of sufficiently big degree in $R(X, L^d)$, coincides with the restriction of the filtration $\mathcal{F}$.
Moreover, from <cit.>, this is a one-to-one correspondence between filtrations of finite type on $R(X, L)$ (considered modulo the restrictions as above) and ample test configurations of $(X, L^d)$ for $d \in \nat^*$.
Let us now recall some operations on ample test configurations, $\mathcal{T} = (\pi: \mathcal{X} \to \comp, \mathcal{L})$.
Consider the normalization $p_0 : \widetilde{\mathcal{X}} \to \mathcal{X}$ of $\mathcal{X}$, and denote $\widetilde{\mathcal{L}} := p_0^* \mathcal{L}$, $\widetilde{\pi} := \pi \circ p_0$.
Since $p_0$ is finite, $\widetilde{\mathcal{L}}$ is ample, cf. <cit.>.
By the universal property of the normalization, the $\comp^*$-action on $\mathcal{X}$ can be lifted to the $\comp^*$-action on $\widetilde{\mathcal{X}}$.
From <cit.> and <cit.>, the map $\widetilde{\pi}$ is flat.
Hence, the pair $\widetilde{\mathcal{T}} = (\widetilde{\pi}: \widetilde{\mathcal{X}} \to \comp, \widetilde{\mathcal{L}})$ is an ample test configuration of $(X, L)$, cf. <cit.>.
By an abuse of notation, we call $\widetilde{\mathcal{T}}$ the normalization of $\mathcal{T}$.
By equivariant Hironaka's resolution of singularities theorem, cf. Kollár <cit.>, $\widetilde{\mathcal{X}}$ admits a $\comp^*$-equivariant resolution $p : \mathcal{X}' \to \widetilde{\mathcal{X}}$ of singularities. We let $\mathcal{L}' := p^* \widetilde{\mathcal{L}}$, $\pi' := \widetilde{\pi} \circ p$.
By <cit.>, the map $\pi': \mathcal{X}' \to \comp$ is flat, and, hence, the pair $\mathcal{T}' := (\pi': \mathcal{X}' \to \comp, \mathcal{L}')$ is a (semiample) test configuration of $(X, L)$.
By an abuse of notation, we call $\mathcal{T}'$ a resolution of singularities of $\mathcal{T}$.
Remark now that test configurations of $(X, L)$ form a category, where a morphism between $\mathcal{T} = (\mathcal{X}, \mathcal{L})$ and $\mathcal{T}' = (\mathcal{X}', \mathcal{L}')$ is given by a $\comp^*$-equivariant morphism $p : \mathcal{X} \to \mathcal{X}'$ over $\comp$, compatible with the isomorphisms $\mathcal{X}'|_1 \simeq X \simeq \mathcal{X}|_1$.
There is at most one morphism between any two given test configurations, and we say that $\mathcal{T}$ dominates $\mathcal{T}'$ when it exists.
Clearly, any morphism between test configurations is a birational map, and hence by <cit.> it is isomorphic to the blowup along a sheaf of ideals, which is trivial away from the central fiber.
Remark that for any test configuration $\mathcal{T} = (\pi: \mathcal{X} \to \comp, \mathcal{L})$, by definition, there is a $\comp^*$-equivariant birational map $\comp \times X \dashrightarrow \mathcal{X}$.
From this, by taking a $\comp^*$-equivariant resolution of indeterminacies, we see that any two test configurations can be dominated by a third one.
We say that two test configurations are equivalent if they are dominated by a third test configuration, so that the pull-backs of the line bundles of the two initial test configurations coincide.
According to <cit.>, every semiample test configuration is equivalent to a unique normal ample test configuration.
By Zariski's main theorem, cf. <cit.>, equivalent normal test configurations produce the same filtrations on the section ring.
Let us now recall, following Phong-Sturm [55], a construction of geodesic rays of metrics associated with an ample test configuration $\mathcal{T} = (\pi: \mathcal{X} \to \comp, \mathcal{L})$.
Consider the restriction $\pi': \mathcal{X}'_{\mathbb{D}} \to \mathbb{D}$ of a resolution of singularities $\mathcal{T}' := (\pi': \mathcal{X}' \to \comp, \mathcal{L}')$ of $\mathcal{T}$ to the unit disc $\mathbb{D}$ and denote $\mathcal{L}'_{\mathbb{D}} := \mathcal{L}'|_{\mathcal{X}'_{\mathbb{D}}}$.
Phong-Sturm in <cit.> established that for any fixed smooth positive metric $h^L_0$ on $L$, there is a rotation-invariant bounded psh metric $h^{\mathcal{L}'}_{\mathbb{D}}$ over $\mathcal{L}'_{\mathbb{D}}$, verifying in the Bedford-Taylor sense, cf. [1], the Monge-Ampère equation
\begin{equation}\label{eq_ma_geod_dir}
c_1(\mathcal{L}'_{\mathbb{D}}, h^{\mathcal{L}'}_{\mathbb{D}})^{n + 1} = 0,
\end{equation}
and such that its restriction over $\partial \mathcal{X}'_{\mathbb{D}}$ coincides with the rotation-invariant metric obtained from the fixed metric $h^L_0$ on $L$.
Under the identification (<ref>), we then construct a ray $h^{\mathcal{T}}_t$, $t \in [0, + \infty[$, of metrics on $L$, such that $\hat{h}^{\mathcal{T}} = h^{\mathcal{L}'}_{\mathbb{D}}$ in the notations (<ref>).
Due to the equation (<ref>) and the description of the geodesic ray as in (<ref>), we see that the ray of metrics $h^{\mathcal{T}}_t$, $t \in [0, + \infty[$, is a geodesic ray emanating from $h^L_0$.
This ray of metrics is only $\mathscr{C}^{1, 1}$ in general, see [21].
Recall that Phong-Sturm in <cit.> established that there is a unique bounded psh solution to (<ref>).
Since a pull-back of a solution of (<ref>) from one resolution of singularities is a solution on any dominating resolution of singularities, the geodesic ray $h^{\mathcal{T}}_t$, $t \in [0, + \infty[$, is independent of the choice of the $\comp^*$-equivariant resolution of singularities. Similarly, equivalent test configurations produce the same geodesic rays.
As we shall see in Remark <ref>, a result of Phong-Sturm [55] shows, moreover, that two ample test configurations produce the same geodesic ray of metrics if and only if they are equivalent.
§.§ Convergence of spectral measures and maximal geodesic rays
The main goal of this section is to refine Theorem <ref> from distance convergence to convergence on the level of spectral measures and from submultiplicative filtrations associated with ample test configurations to general bounded submultiplicative filtrations.
More precisely, inspired by Berndtsson [5], cf. (<ref>), we associate with any two geodesic rays a probability measure on the real line, so that the chordal distances between the rays coincide with the absolute moments of this measure.
Similarly, the results of Chen-Maclean <cit.>, cf. also Boucksom-Jonsson <cit.>, show that the spectral distances between the two filtrations, appearing on the right-hand side of (<ref>), correspond to the absolute moments of a relative spectral measure between the two filtrations.
The main result of this section shows that the two above measures coincide.
We use below the notations from Theorem <ref>.
For any $t \in [0, +\infty[$, we denote by $h^{\mathcal{T}_1 \mathcal{T}_2}_{t, s}$, $s \in [0, 1]$, the distinguished geodesic segment between $h^{\mathcal{T}_1}_t$ and $h^{\mathcal{T}_2}_t$.
By the regularity result of Chu-Tosatti-Weinkove [21], Chen [18] and Darvas <cit.>, for any $t \in [0, +\infty[$, the path $h^{\mathcal{T}_1 \mathcal{T}_2}_{t, s}$, $s \in [0, 1]$, is $\mathscr{C}^{1, \overline{1}}$.
In particular, by (<ref>), the following measure
\begin{equation}
\mu_t^{\mathcal{T}_1 \mathcal{T}_2}
:=
\Big(\frac{1}{t} \frac{\partial}{\partial s} h^{\mathcal{T}_1 \mathcal{T}_2}_{t, s} \Big)_* \Big( c_1(L, h^{\mathcal{T}_1 \mathcal{T}_2}_{t, s})^n \Big),
\end{equation}
on $\real$ doesn't depend on $s \in [0, 1]$, as suggested by the notations.
From (<ref>), this is a bounded measure.
Moreover, by (<ref>), the absolute moments of $\mu_t^{\mathcal{T}_1 \mathcal{T}_2}$ are related with $d_p$-distances as
\begin{equation}\label{eq_abs_mom_1}
\sqrt[p]{\int |x|^p \cdot d \mu_t^{\mathcal{T}_1 \mathcal{T}_2}(x)}
=
\frac{d_p(h^{\mathcal{T}_1}_t, h^{\mathcal{T}_2}_t)}{t}.
\end{equation}
Now, on the algebraic side, for any $k \in \nat$, we denote by $e^k_1, \ldots, e^k_{N_k}$, $N_k := \dim H^0(X, L^k)$, the basis of $H^0(X, L^k)$, which jointly diagonalizes $\chi_{\mathcal{F}^{\mathcal{T}_1}_k}$ and $\chi_{\mathcal{F}^{\mathcal{T}_2}_k}$ as in (<ref>).
We define the sequence of probability measures $\mu^{\mathcal{F}^{\mathcal{T}_1} \mathcal{F}^{\mathcal{T}_2}}_{k}$, $k \in \nat^*$, $N_k \neq 0$, on $\real$ as follows
\begin{equation}\label{eq_jump_meas_defn}
\mu^{\mathcal{F}^{\mathcal{T}_1} \mathcal{F}^{\mathcal{T}_2}}_{k}
:=
\frac{1}{\dim H^0(X, L^k)} \sum_{i = 1}^{\dim H^0(X, L^k)} \delta \bigg[
\frac{w_{\mathcal{F}^{\mathcal{T}_1}_k}(e^k_i) - w_{\mathcal{F}^{\mathcal{T}_2}_k}(e^k_i)}{k} \bigg],
\end{equation}
where $\delta[x]$ is the Dirac mass at $x \in \real$.
Clearly, by the definition of $d_p$ from (<ref>), we have
\begin{equation}\label{eq_abs_mom_2}
\sqrt[p]{\int |x|^p \cdot d \mu^{\mathcal{F}^{\mathcal{T}_1} \mathcal{F}^{\mathcal{T}_2}}_{k} (x)}
=
\frac{d_p(\mathcal{F}^{\mathcal{T}_1}_k, \mathcal{F}^{\mathcal{T}_2}_k)}{k}.
\end{equation}
As $t \to \infty$ (resp. $k \to \infty$), the sequence of measures $\mu_t^{\mathcal{T}_1 \mathcal{T}_2}$ (resp. $\mu^{\mathcal{F}^{\mathcal{T}_1} \mathcal{F}^{\mathcal{T}_2}}_{k}$) converges weakly to a bounded measure on $\real$.
Moreover, the following identity holds
\begin{equation}\label{eq_spec_meas1}
\lim_{t \to \infty} \mu_t^{\mathcal{T}_1 \mathcal{T}_2}
=
\lim_{k \to \infty} \mu^{\mathcal{F}^{\mathcal{T}_1} \mathcal{F}^{\mathcal{T}_2}}_{k}.
\end{equation}
Also, the following limits exist, they are finite, and we have
\begin{equation}\label{eq_spec_meas2}
\begin{aligned}
\lim_{t \to \infty}
\frac{1}{t} \max_{x \in X} \log \Big( \frac{h^{\mathcal{T}_2}_t(x)}{h^{\mathcal{T}_1}_t(x)} \Big)
=
\lim_{k \to \infty}
\frac{1}{k} \max_{i = 1, \ldots, N_k} \Big( w_{\mathcal{F}^{\mathcal{T}_1}_k}(e^k_i) - w_{\mathcal{F}^{\mathcal{T}_2}_k}(e^k_i) \Big),
\\
\lim_{t \to \infty}
\frac{1}{t} \min_{x \in X} \log \Big( \frac{h^{\mathcal{T}_2}_t(x)}{h^{\mathcal{T}_1}_t(x)} \Big)
=
\lim_{k \to \infty}
\frac{1}{k} \min_{i = 1, \ldots, N_k} \Big( w_{\mathcal{F}^{\mathcal{T}_1}_k}(e^k_i) - w_{\mathcal{F}^{\mathcal{T}_2}_k}(e^k_i) \Big).
\end{aligned}
\end{equation}
a) The limiting measure on the left-hand side of (<ref>) is the chordal analogue of the probability measure constructed by Berndtsson [5] for geodesics.
b) The existence of the limit on the right-hand side of (<ref>) is due to Chen-Maclean <cit.>, cf. also Boucksom-Jonsson <cit.>.
Our proof is independent of their result and it provides a different way of establishing the existence of the limiting measure.
When one of the test configurations is trivial (and hence the corresponding geodesic ray is constant), (<ref>) is due to Hisamoto [41]. Witt Nyström <cit.> previously established (<ref>) under the additional assumption that the second test configuration is a product test configuration.
To establish Theorem <ref>, we need the following statement. We defer its proof to Section <ref>.
For any ample test configuration $\mathcal{T}$, the associated geodesic ray grows at most exponentially.
In other words, there is $C > 0$, such that for any $t \in [0, +\infty[$, we have
\begin{equation}\label{eq_ray_norms_decart21}
d_{+ \infty}(h^{\mathcal{T}}_0, h^{\mathcal{T}}_t)
\leq
C t.
\end{equation}
Moreover, one can take $C := \limsup_{k \geq 1} \frac{1}{k} \sup_{x \in H^0(X, L^k) \setminus \{0\}} |w_{\mathcal{F}_k^{\mathcal{T}}}(x)|$. The latter constant is finite since the filtration $\mathcal{F}^{\mathcal{T}}$ is of finite type.
We also need to study how geodesic rays and filtrations change under the shift operator, defined for any test configuration $\mathcal{T} = (\pi: \mathcal{X} \to \comp, \mathcal{L})$ as $\mathcal{T}[m] := (\pi: \mathcal{X} \to \comp, \mathcal{L} \otimes \mathscr{O}(m X_0))$.
Directly from the definitions, for any $k \in \nat^*$, the weights of the restrictions $\mathcal{F}[m]_k$, $\mathcal{F}_k$ of $\mathcal{F}[m]$ and $\mathcal{F}$ to $H^0(X, L^k)$, are related as
\begin{equation}\label{eq_shift_test_1}
w_{\mathcal{F}[m]_k} = w_{\mathcal{F}_k} + mk.
\end{equation}
Similarly, for ample $\mathcal{T}$ and a fixed smooth positive metric $h^L_0$ on $L$, the geodesic rays $h^{\mathcal{T}[m]}_t$, $h^{\mathcal{T}}_t$, $t \in [0, +\infty[$, emanating from $h^L_0$ and associated with $\mathcal{T}[m]$, $\mathcal{T}$ are related as
\begin{equation}\label{eq_shift_test_2}
h^{\mathcal{T}[m]}_t = \exp(- t m) h^{\mathcal{T}}_t,
\end{equation}
where we implicitly identified $\mathcal{L} \otimes \mathscr{O}(m X_0)|_{\mathcal{X} \setminus X_0}$ with $\mathcal{L}|_{\mathcal{X} \setminus X_0}$ using the canonical trivialization of the line bundle $\mathscr{O}(m X_0)|_{\mathcal{X} \setminus X_0}$.
Let us first assume that for any $t \in [0, +\infty[$, we have $h^{\mathcal{T}_1}_t \leq h^{\mathcal{T}_2}_t$ and for any $k \in \nat$, we have $w_{\mathcal{F}^{\mathcal{T}_1}_k} \geq w_{\mathcal{F}^{\mathcal{T}_2}_k}$.
Then by Lemma <ref>, (<ref>) and the fact that the filtrations $\mathcal{F}^{\mathcal{T}_1}$, $\mathcal{F}^{\mathcal{T}_2}$ are of finite type, the measures $\mu_t^{\mathcal{T}_1 \mathcal{T}_2}$, $t \in [0, +\infty[$, and $\mu^{\mathcal{F}^{\mathcal{T}_1} \mathcal{F}^{\mathcal{T}_2}}_{k}$, $k \in \nat^*$, have support in a fixed compact interval in $[0, +\infty[$.
The weak convergence of the sequence of measures $\mu_t^{\mathcal{T}_1 \mathcal{T}_2}$ (resp. $\mu^{\mathcal{F}^{\mathcal{T}_1} \mathcal{F}^{\mathcal{T}_2}}_{k}$) holds since by Theorem <ref> their absolute moments (which coincide with moments in this case) converge.
Since these moments of the limiting measures coincide by Theorem <ref>, applied for $p \in \nat^*$, we deduce (<ref>).
An easy verification shows that Theorem <ref> for $p = +\infty$ also gives us exactly the first identity from (<ref>).
Similarly, if for any $t \in [0, +\infty[$, we have $h^{\mathcal{T}_1}_t \geq h^{\mathcal{T}_2}_t$, and for any $k \in \nat$, we have $w_{\mathcal{F}^{\mathcal{T}_1}_k} \leq w_{\mathcal{F}^{\mathcal{T}_2}_k}$, Theorem <ref> for $p = +\infty$ gives us exactly the second identity from (<ref>).
Now, let $\mathcal{T}_1$ and $\mathcal{T}_2$ be arbitrary ample test configurations.
Directly from the description (<ref>), we obtain that for any $m \in \nat$, we have $h^{\mathcal{T}_1 \mathcal{T}_2[m]}_{t, s} = \exp(- s m t) h^{\mathcal{T}_1 \mathcal{T}_2}_{t, s}$, $s \in [0, 1]$, $t \in [0, +\infty[$.
Hence, for any $t \in [0, +\infty[$, we have $\mu_t^{\mathcal{T}_1 \mathcal{T}_2[m]} = S[m]_* \mu_t^{\mathcal{T}_1 \mathcal{T}_2}$, where $S[m] : \real \to \real$, $x \mapsto x - m$ is the shift operator.
Similarly, by (<ref>), we have $\mu^{\mathcal{F}^{\mathcal{T}_1} \mathcal{F}^{\mathcal{T}_2[m]}}_{k} = S[m]_* \mu^{\mathcal{F}^{\mathcal{T}_1} \mathcal{F}^{\mathcal{T}_2}}_{k}$.
From this, we obtain that Theorem <ref> holds for $\mathcal{T}_1$ and $\mathcal{T}_2$ if and only if it holds for $\mathcal{T}_1$ and $\mathcal{T}_2[m]$ for some $m \in \nat$.
But from the boundedness of the filtrations $\mathcal{F}^{\mathcal{T}_1}$, $\mathcal{F}^{\mathcal{T}_2}$ and from Lemma <ref>, we can always make $h^{\mathcal{T}_1}_t \leq h^{\mathcal{T}_2[m]}_t$, $t \in [0, +\infty[$, and $w_{\mathcal{F}^{\mathcal{T}_1}_k} \geq w_{\mathcal{F}^{\mathcal{T}_2[m]}_k}$, $k \in \nat$, by taking $m$ sufficiently big.
Similarly, by making $m$ sufficiently small, the opposite inequalities will be satisfied.
As described above, this implies Theorem <ref>.
We will now show that Theorem <ref> can be used to study arbitrary bounded submultiplicative filtrations $\mathcal{F}$ on $R(X, L)$.
To explain this, we first recall that to any bounded submultiplicative filtration one can naturally associate a geodesic ray.
For this, for simplicity, we assume that $\mathcal{F}$ is a $\mathbb{Z}$-filtration.
Following Székelyhidi [63], recall that for any given bounded submultiplicative $\mathbb{Z}$-filtration $\mathcal{F}$ on $R(X, L)$, and any $k \in \nat^*$ big enough so that $H^0(X, L^k)$ generates $R(X, L^k)$, we can define a sequence of canonical approximations, $\mathcal{F}(k)$, which are filtrations of finite type on $R(X, L^k)$ generated by the restriction of $\mathcal{F}$ to $H^0(X, L^k)$.
By Rees correspondence, cf. Section <ref>, for any $k \in \nat^*$, there is $d_k \in \nat^*$, divisible by $k$, and an ample test configuration $\mathcal{T}(k) := (\pi_k : \mathcal{X}_k \to \comp, \mathcal{L}_k)$ of $L^{d_k}$, so that the restriction of $\mathcal{F}(k)$ to $R(X, L^{d_k}) \subset R(X, L^k)$ coincides with the filtration associated with $\mathcal{T}(k)$.
We denote by $h_t^{\mathcal{F}(k)}$, $t \in [0, +\infty[$, the geodesic ray on $L$ emanating from $h^L_0$, defined by $h_t^{\mathcal{F}(k)} := (h^{\mathcal{T}(k)}_t)^{\frac{1}{d_k}}$, where $h^{\mathcal{T}(k)}_t$ is the geodesic ray on $L^{d_k}$ associated with $\mathcal{T}(k)$ and emanating from $(h^L_0)^{d_k}$.
The following result was established in <cit.> using the works of Berman-Boucksom-Jonsson [3] and Phong-Sturm [56].
For any $t \in [0, +\infty[$, the sequence of metrics $h_t^{\mathcal{F}(k)}$ is uniformly bounded over $k \in \nat^*$. When restricted to multiplicative subsequences of $\nat^*$ (as for example $k = 2^l$, $l \in \nat^*$), for any $t \in [0, +\infty[$, the sequence of metrics $h_t^{\mathcal{F}(k)}$ is decreasing, and the ray of metrics $h_t^{\mathcal{F}} := (\lim_{k \to \infty} h_t^{\mathcal{F}(k)})_*$ is a geodesic ray departing from $h^L_0$.
We can now state the following generalization of Theorem <ref>.
For any bounded submultiplicative filtrations $\mathcal{F}_1, \mathcal{F}_2$ on $R(X, L)$ and any $p \in [1, +\infty[$, the following identity holds
\begin{equation}\label{eq_dist_na_sm}
d_p \big(\{ h_t^{\mathcal{F}_1} \}, \{ h_t^{\mathcal{F}_2} \} \big)
=
d_p \big( \mathcal{F}_1, \mathcal{F}_2 \big).
\end{equation}
a) Theorem <ref> answers a question of Zhang <cit.>.
b) From the proof of Theorem <ref>, we see that the analogue of (<ref>) holds if the geodesic rays $h_t^{\mathcal{F}_1}$, $h_t^{\mathcal{F}_2}$ are $\mathscr{C}^{1, \overline{1}}$.
For general bounded submultiplicative filtrations, this regularity cannot be expected by <cit.>.
To prove Theorem <ref>, we need to use a result of Boucksom-Jonsson <cit.> stating that finite-type approximations of submultiplicative filtrations are continuous with respect to the $d_p$-metrics for $p \in [1, +\infty[$. For an alternative proof of this result, see <cit.>.
For any $p \in [1, +\infty[$, we have $\lim_{k \to \infty} \widetilde{d}_p(\mathcal{F}(k), \mathcal{F}) = 0$.
As another ingredient in the proof of Theorem <ref>, we need to show that Theorem <ref> can be used to give an algebraic formula for chordal distances between maximal geodesic rays.
To describe this, we say that a sequence of geodesic rays $h^L_{i, t}$, $i \in \nat$, $t \in [0, +\infty[$, of bounded metrics on $L$ approximates $h^L_t$ from below (resp. approximates $h^L_t$ almost everywhere from above) if for any $t \in [0, +\infty[$, $i \leq j$, we have $h^L_{i, t} \leq h^L_{j, t}$ (resp. $h^L_{i, t} \geq h^L_{j, t}$) and $\lim_{i \to \infty} h^L_{i, t} = h^L_t$ (resp. $(\lim_{i \to \infty} h^L_{i, t})_* = h^L_t$).
Also, for bounded submultiplicative filtrations $\mathcal{F}_i$, $i = 0, 1$, on $R(X, L^{d_i})$, $d_i \in \nat^*$ and any $p \in [1, +\infty[$, we denote $\widetilde{d}_p(\mathcal{F}_0, \mathcal{F}_1) := \frac{1}{d_0 d_1} d_p(\mathcal{F}_0|_{R(X, L^{d_0 d_1})}, \mathcal{F}_1|_{R(X, L^{d_0 d_1})})$.
Recall that for a geodesic ray of Hermitian metrics $h^L_t \in \mathcal{E}^1$, $t \in [0, +\infty[$, one can define its non-Archimedean potential (which is a function on the Berkovich analytification of $X$) by studying the singularities of the ray at $t = +\infty$, see <cit.>.
Following Berman-Boucksom-Jonsson <cit.>, we say that a geodesic ray of Hermitian metrics $h^L_t \in \mathcal{E}^1$, $t \in [0, +\infty[$, is maximal, if its potential is maximal among all geodesic rays departing from the same initial point and having the same non-Archimedean potential.
Alternatively, according to <cit.>, a geodesic ray $h^L_t$, $t \in [0, +\infty[$, is maximal if and only if there is a sequence of ample test configurations $\mathcal{T}_i$, $i \in \nat$, of $(X, L)$, such that the associated geodesic rays $h^{\mathcal{T}_i}_t$, $t \in [0, +\infty[$, approximate from below $h^L_t$.
It was established in <cit.> that for any bounded submultiplicative filtration $\mathcal{F}$ on $R(X, L)$, the ray $\{ h_t^{\mathcal{F}} \}$ from Proposition <ref> is maximal, and it corresponds to the non-Archimedean potential $FS(\mathcal{F})$ prescribed by $\mathcal{F}$ as in <cit.>.
Now, we fix a smooth positive metric $h^L_0$ on $L$, $p \in [1, + \infty[$, and consider the set $\mathcal{R}^p$ of geodesic rays $\{ h^L_t \}$, $h^L_t \in \mathcal{E}^p$, $t \in [0, +\infty[$, departing from $h^L_0$.
We denote by $\mathcal{R}^p_{\max} \subset \mathcal{R}^p$ the subset of maximal geodesic rays.
By <cit.>, the set $\mathcal{R}^p \setminus \mathcal{R}^p_{\max}$ is not empty.
We fix $p \in [1, + \infty[$, and consider two maximal geodesic rays $\{ h^{L, i}_t \} \in \mathcal{R}^p_{\max}$, $i = 0, 1$.
Let $\mathcal{T}^i_j$, $j \in \nat$, be two sequences of ample test configurations of $(X, L^{r^i_j})$, $r^i_j \in \nat^*$, such that the geodesic rays $\{ h^{L, i}_{j, t} \} := \{ (h^{\mathcal{T}^i_j}_t)^{\frac{1}{r^i_j}} \}$ approximate from below (or almost everywhere from above) the rays $\{ h^{L, i}_t \}$. Then
\begin{equation}\label{eq_dist_na_max}
d_p \big(\{ h^{L, 0}_t \}, \{ h^{L, 1}_t \} \big)
=
\lim_{j \to \infty} \widetilde{d}_p \big( \mathcal{F}^{\mathcal{T}^0_j}, \mathcal{F}^{\mathcal{T}^1_j} \big).
\end{equation}
By Theorem <ref>, it suffices to establish that $\lim_{j \to \infty} d_p \big(\{ h^{L, i}_t \}, \{ h^{L, i}_{j, t} \} \big) = 0$.
This is a direct consequence of <cit.>, saying that this holds for any sequences of geodesic rays approximating a given geodesic ray from below (resp. almost everywhere from above).
a) When $p = 1$ and one geodesic ray is trivial, Proposition <ref> reduces to <cit.>.
Using pluripotential theory, Reboulet in <cit.> established that for $p = 1$, (<ref>) reduces to the case when one geodesic ray is trivial.
b) Proposition <ref> suggests that the right-hand side of (<ref>) is the analogue of $d_p$-distance between non-Archimedean potentials of the geodesic rays.
It would be interesting to know whether one can obtain a formula for it in the spirit of (<ref>), probably using the construction of the Monge-Ampère measure from <cit.> and geodesics between non-Archimedean potentials from Reboulet [58].
c) For non-maximal geodesic rays, by <cit.> and <cit.>, there is no hope that the chordal distance can be expressed in terms of the non-Archimedean potentials of the geodesic rays.
It would be interesting to understand the difference between the left-hand side and the right-hand side of (<ref>) in this case.
Directly by Propositions <ref>, <ref> and the maximality of $\{ h_t^{\mathcal{F}_1} \}, \{ h_t^{\mathcal{F}_2} \}$, see Remark <ref>, we see that there is a sequence $d_k \in \nat^*$, $k \in \nat^*$, such that
\begin{equation}
d_p \big(\{ h_t^{\mathcal{F}_1} \}, \{ h_t^{\mathcal{F}_2} \} \big)
=
\lim_{k \to \infty} \frac{1}{d_k} d_p \big( \mathcal{F}_1(k)|_{R(X, L^{d_k})}, \mathcal{F}_2(k)|_{R(X, L^{d_k})} \big),
\end{equation}
where the limit is taken over a multiplicative subsequence of $\nat^*$ (as for example $k = 2^l$, $l \in \nat^*$).
A combination of this with Proposition <ref> yields
\begin{equation}\label{eq_cor_dist_na_sm_1}
d_p \big(\{ h_t^{\mathcal{F}_1} \}, \{ h_t^{\mathcal{F}_2} \} \big)
=
\lim_{k \to \infty} \frac{1}{d_k} d_p \big( \mathcal{F}_1|_{R(X, L^{d_k})}, \mathcal{F}_2|_{R(X, L^{d_k})} \big),
\end{equation}
where the limit is again taken over a multiplicative subsequence of $\nat^*$.
It is only left now to apply the result of Chen-Maclean <cit.>, cf. also Boucksom-Jonsson <cit.>, saying that for the restrictions $\mathcal{F}_{1, k}, \mathcal{F}_{2, k}$ of $\mathcal{F}_1, \mathcal{F}_2$ to $H^0(X, L^k)$, the limit $\lim_{k \to \infty} \frac{1}{k} d_p ( \mathcal{F}_{1, k}, \mathcal{F}_{2, k} )$ exists.
In particular, the right-hand side of (<ref>) coincides with it.
§ QUANTIZATION, BUSEMAN CONVEXITY AND SUBMULTIPLICATIVE NORMS
To establish Theorem <ref>, we prove that the left-hand side of (<ref>) is not smaller than the right-hand side and then the opposite bound (we call these statements lower and upper bounds of Theorem <ref> later on).
The main goal of this section is to establish the upper bound.
More precisely, in Section <ref>, we study the geometry of the space of Hermitian norms and various constructions of rays of norms.
In Section <ref>, we recall the definition of the Fubini-Study map, and then a statement from [36], concerning its isometry properties.
Finally, in Section <ref>, by relying on this, we establish the upper bound of Theorem <ref>.
§.§ Geometry of geodesic rays on the space of norms
The main goal of this section is to recall the relation between filtrations and Hermitian norms on a finitely dimensional vector space and then to discuss the metric properties of this correspondence.
Let us first recall that it is possible to view the space of filtrations on a given finitely dimensional vector space as the boundary at infinity of the space of Hermitian norms, where the latter space is interpreted in terms of geodesic rays.
For this, for any filtration $\mathcal{F}$ on a finitely dimensional vector space $V$, we associate a ray of Hermitian norms.
More precisely, we fix a Hermitian norm $H_V := \| \cdot \|_H$ on $V$ and consider an orthonormal basis $s_1, \ldots, s_n$, of $(V, H_V)$, adapted to the filtration $\mathcal{F}$, i.e. verifying $s_i \in \mathcal{F}^{e_{\mathcal{F}}(i)} V$, where $e_{\mathcal{F}}(i)$, $i = 1, \ldots, n$, are the jumping numbers of the filtration $\mathcal{F}$, defined in (<ref>).
We define the ray of Hermitian norms $H_t^{\mathcal{F}} := \| \cdot \|_{t}^{\mathcal{F}}$, $t \in [0, +\infty[$, on $V$ by declaring the basis
\begin{equation}\label{eq_bas_st}
(s_1^t, \ldots, s_n^t) := \big( e^{t e_{\mathcal{F}}(1)} s_1, \ldots, e^{t e_{\mathcal{F}}(n)} s_n \big),
\end{equation}
to be orthonormal with respect to $H_t^{\mathcal{F}}$.
It is clear from (<ref>) that $H_t^{\mathcal{F}}$ is a geodesic ray with respect to the metrics $d_p$, $p \in [1, +\infty]$.
Moreover, for any $t, s \in [0, +\infty[$, $p \in [1, +\infty[$, we have
\begin{equation}
d_p(H_t^{\mathcal{F}}, H_s^{\mathcal{F}})
=
|t - s|
\cdot
\sqrt[p]{\frac{\sum_{i = 1}^{\dim V} |e_{\mathcal{F}}(i)|^p}{\dim V}},
\end{equation}
and $d_{+ \infty}(H_t^{\mathcal{F}}, H_s^{\mathcal{F}}) = |t - s| \cdot \max |e_{\mathcal{F}}(i)|$.
Since for $p \in ]1, +\infty[$, the space $(\mathcal{H}_V, d_p)$ is a uniquely geodesic space, this gives us a complete description of geodesic rays with respect to $d_p$.
Hence, filtrations are in bijective correspondence with geodesic rays.
Remark the following relation between the non-Archimedean norm $\chi_{\mathcal{F}}$, defined in (<ref>), and the ray of norms $H_t^{\mathcal{F}}$, $t \in [0, +\infty[$: for any $f \in V$, we have
\begin{equation}\label{eq_na_nm_interpol}
\log \chi_{\mathcal{F}}(f)
=
\lim_{t \to +\infty} \frac{\log \| f \|_{t}^{\mathcal{F}}}{t}.
\end{equation}
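This can be verified directly on the basis elements when the adapted basis is chosen so that $w_{\mathcal{F}}(s_i) = e_{\mathcal{F}}(i)$: as $e^{t e_{\mathcal{F}}(i)} s_i$ is a unit vector for $H_t^{\mathcal{F}}$, we have $\| s_i \|_{t}^{\mathcal{F}} = e^{- t e_{\mathcal{F}}(i)}$, and hence $\lim_{t \to +\infty} \frac{\log \| s_i \|_{t}^{\mathcal{F}}}{t} = - e_{\mathcal{F}}(i) = \log \chi_{\mathcal{F}}(s_i)$.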
Remark that (<ref>) can be thought of as the finitely-dimensional analogue of Theorem <ref>.
It is well-known, cf. <cit.>, that the correspondence (<ref>) respects the metric structures (<ref>) and (<ref>).
In other words, for any $p \in [1, +\infty]$, the geodesic rays $H_t^{\mathcal{F}_0}, H_t^{\mathcal{F}_1}$, $t \in [0, +\infty[$, associated with the filtrations $\mathcal{F}_0, \mathcal{F}_1$ and emanating from a fixed Hermitian norm $H_0$, verify
\begin{equation}\label{eq_d_p_fil_norms_herm}
d_p(\mathcal{F}_0, \mathcal{F}_1)
=
\lim_{t \to \infty} \frac{d_p(H_t^{\mathcal{F}_0}, H_t^{\mathcal{F}_1})}{t}.
\end{equation}
It is also well-known, cf. <cit.>, that the space of Hermitian norms endowed with the $d_p$-distances is Busemann convex.
More precisely, for any $0 < s < t$, $p \in [1, +\infty]$, we have
\begin{equation}\label{eq_toponogov}
\frac{d_p(H_s^{\mathcal{F}_0}, H_s^{\mathcal{F}_1})}{s} \leq \frac{d_p(H_t^{\mathcal{F}_0}, H_t^{\mathcal{F}_1})}{t},
\end{equation}
which gives an alternative way to see that the limit in (<ref>) exists.
In this article, we sometimes deal with non-Hermitian norms.
Due to this, we generalize the distances $d_p$, $p \in [1, +\infty]$, from (<ref>) to this broader context.
More precisely, let $N_i = \| \cdot \|_i$, $i = 0, 1$, be two norms on $V$.
We define the logarithmic relative spectrum of $N_0$ with respect to $N_1$ as a non-increasing sequence $\mu_j := \mu_j(N_0, N_1)$, $j = 1, \ldots, \dim V$, defined as follows
\begin{equation}\label{eq_log_rel_spec}
\mu_j
:=
\sup_{\substack{W \subset V \\ \dim W = j}}
\inf_{w \in W \setminus \{0\}} \log \frac{\| w \|_1}{\| w \|_0}.
\end{equation}
We then define for $p \in [1, +\infty[$, the following quantity
\begin{equation}
d_p(N_0, N_1) = \sqrt[p]{\frac{\sum_{i = 1}^{\dim V} |\mu_i|^p}{\dim V}},
\end{equation}
and we let $d_{+ \infty}(N_0, N_1) = \max |\mu_i|$.
By (<ref>), it coincides with our previous definition if both $N_0$ and $N_1$ are Hermitian.
Also, $d_{+ \infty}(N_0, N_1)$ is the multiplicative gap between $N_0, N_1$, i.e. it is the minimal constant $C > 0$, such that $N_0 \leq \exp(C) \cdot N_1$ and $N_1 \leq \exp(C) \cdot N_0$.
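For instance, if $N_0$ is a Hermitian norm on $V = \comp^2$ with orthonormal basis $e_1, e_2$, and $N_1$ is the Hermitian norm determined by $\| e_1 \|_1 = e^{\alpha}$, $\| e_2 \|_1 = e^{\beta}$, $e_1 \perp e_2$, with $\alpha \geq \beta$, then $\mu_1 = \alpha$ (attained for $W = \comp e_1$), $\mu_2 = \beta$, and $d_{+ \infty}(N_0, N_1) = \max \{ |\alpha|, |\beta| \} = \max \{ \alpha, - \beta \}$, in accordance with the description of $d_{+ \infty}$ as the multiplicative gap.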
Remark also that the John ellipsoid theorem, cf. <cit.>, says that for any normed vector space $(V, N_V)$, there is a Hermitian norm $H_V$ on $V$, verifying
\begin{equation}\label{eq_john_ellips}
H_V \leq N_V \leq \sqrt{\dim V} \cdot H_V.
\end{equation}
From (<ref>), the fact that $d_p$, $p \in [1, +\infty]$, satisfies the triangle inequality when restricted to Hermitian norms, the Minkowski inequality and the usual monotonicity properties of the logarithmic relative spectrum, cf. <cit.>, we deduce that for any norms $N_0, N_1, N_2$ on $V$, the following weak version of the triangle inequality holds
\begin{equation}\label{eq_tr_weak}
d_p(N_0, N_2) \leq d_p(N_0, N_1) + d_p(N_1, N_2) + \log \dim V.
\end{equation}
Now, we fix a finitely-dimensional normed vector space $(V, N_V)$, $\| \cdot \|_V := N_V$ and a filtration $\mathcal{F}$ of $V$.
We define the non-Archimedean norm $\chi_{\mathcal{F}}$ associated with $\mathcal{F}$ as in (<ref>).
Following <cit.>, we construct a ray of norms $N^{\mathcal{F}}_t := \| \cdot \|^{\mathcal{F}}_t$, $t \in [0, +\infty[$, emanating from $N_V$, as follows
\begin{equation}\label{eq_ray_norm_defn0}
\| f \|^{\mathcal{F}}_t :=
\inf
\Big\{
\sum_i
\| f_i \|_V
\cdot
\chi_{\mathcal{F}}(f_i)^t
\, : \,
f = \sum_i f_i
\Big\}.
\end{equation}
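To illustrate the definition, remark that taking the trivial decomposition $f = f$ in the infimum above immediately gives, for any $f \in V$, $t \in [0, +\infty[$,
\begin{equation}
\| f \|^{\mathcal{F}}_t \leq \| f \|_V \cdot \chi_{\mathcal{F}}(f)^t,
\end{equation}
so that $t \mapsto \log \| f \|^{\mathcal{F}}_t$ grows at most linearly in $t$.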
Remark that even if the initial norm $N_V$ is Hermitian, the above ray is certainly not.
Let us, nevertheless, recall the following compatibility result between the two definitions of rays of norms.
For any (resp. Hermitian) norm $N_V$ (resp. $H_V$) on $V$ and any $t \in [0, +\infty[$, the rays of norms $N_t^{\mathcal{F}}$ (resp. $H_t^{\mathcal{F}}$) associated with the filtration $\mathcal{F}$ as in (<ref>) (resp. (<ref>)) and emanating from $N_V$ (resp. $H_V$) are related as follows: for any $t \in [0, +\infty[$, we have
\begin{equation}\label{eq_two_norms_comp01}
d_{+ \infty}(N^{\mathcal{F}}_t, H^{\mathcal{F}}_t)
\leq
d_{+ \infty}(N_V, H_V)
+
\log \dim V.
\end{equation}
From (<ref>), (<ref>) and (<ref>), the analogue of (<ref>) holds for the rays as in (<ref>).
Now, the essential reason for introducing the ray of norms (<ref>) instead of (<ref>) is that it behaves better in comparison with (<ref>) when defined on a graded ring instead of a vector space.
To explain this, we fix a graded ring $A$ and a graded filtration $\mathcal{F}$ on $A$.
We assume that $\mathcal{F}$ is submultiplicative in the sense of (<ref>).
We fix a graded norm $N = \sum N_k$, $N_k := \| \cdot \|_k$, over $A$, which we assume to be submultiplicative in the following sense: for any $k, l \in \nat^*$, $f \in A_k$, $g \in A_l$, we have
\begin{equation}\label{eq_subm_s_ring}
\| f \cdot g \|_{k + l} \leq
\| f \|_k \cdot
\| g \|_l.
\end{equation}
A trivial verification shows that the following lemma holds.
The ray of norms $N^{\mathcal{F}}_t$, $t \in [0, +\infty[$, emanating from $N$ and constructed as in (<ref>), is a ray of submultiplicative norms.
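For the reader's convenience, here is a sketch of this verification (writing $\| \cdot \|^{\mathcal{F}}_{t, k}$ for the restriction of $N^{\mathcal{F}}_t$ to $A_k$): for $f \in A_k$, $g \in A_l$ and any decompositions $f = \sum f_i$, $g = \sum g_j$, the decomposition $f g = \sum_{i, j} f_i g_j$, together with (<ref>) and the submultiplicativity of $\chi_{\mathcal{F}}$, gives
\begin{equation}
\| f g \|^{\mathcal{F}}_{t, k + l}
\leq
\sum_{i, j} \| f_i g_j \|_{k + l} \cdot \chi_{\mathcal{F}}(f_i g_j)^t
\leq
\Big( \sum_i \| f_i \|_k \cdot \chi_{\mathcal{F}}(f_i)^t \Big)
\cdot
\Big( \sum_j \| g_j \|_l \cdot \chi_{\mathcal{F}}(g_j)^t \Big),
\end{equation}
and taking the infimum over both decompositions yields the claim.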
§.§ Fubini-Study metrics of submultiplicative norms
In this article, we constantly pass from the study of graded norms on $R(X, L)$ to metrics on $L$.
The fundamental tool for this is the Fubini-Study map.
In this section, we recall its definition and its isometric properties.
We fix an ample line bundle $L$ over a compact complex manifold $X$.
For $k_0 \in \nat$ so that $L^{k_0}$ is very ample, the Fubini-Study map associates with any norm $N_k = \| \cdot \|_k$ on $H^0(X, L^k)$, $k \geq k_0$, a continuous metric $FS(N_k)$ on $L^k$, constructed as follows.
Consider the Kodaira embedding
\begin{equation}\label{eq_kod}
{\rm{Kod}}_k : X \hookrightarrow \mathbb{P}(H^0(X, L^k)^*).
\end{equation}
The evaluation maps provide the isomorphism $L^{-k} \to {\rm{Kod}}_k^* \mathscr{O}(-1)$, where $\mathscr{O}(-1)$ is the tautological line bundle over $\mathbb{P}(H^0(X, L^k)^*)$.
We endow $H^0(X, L^k)^*$ with the dual norm $N_k^*$ and denote by $FS^{\mathbb{P}}(N_k)$ the induced metric on the hyperplane line bundle $\mathscr{O}(1) := \mathscr{O}(-1)^*$ over $\mathbb{P}(H^0(X, L^k)^*)$.
We define the metric $FS(N_k)$ on $L^k$ as the only metric which, under the dual of the above isomorphism, verifies the identity
\begin{equation}\label{eq_fs_defn}
FS(N_k) = {\rm{Kod}}_k^* ( FS^{\mathbb{P}}(N_k) ).
\end{equation}
The statement below can be seen as an alternative definition of $FS(N_k)$.
For any $x \in X$, $l \in L^k_x$, the following identity takes place
\begin{equation}\label{eq_fs_norm}
| l |_{FS(N_k)}
=
\inf_{\substack{s \in H^0(X, L^k) \\ s(x) = l}}
\| s \|_k.
\end{equation}
An easy verification, cf. Ma-Marinescu <cit.>.
When the norm $N_k$ comes from a Hermitian product on $H^0(X, L^k)$, the definition of the Fubini-Study map is standard, and explicit evaluation shows that in this case $c_1(\mathscr{O}(1) , FS^{\mathbb{P}}(N_k))$ coincides up to a positive constant with the Kähler form of the Fubini-Study metric on $\mathbb{P}(H^0(X, L^k)^*)$ induced by $N_k$.
In particular, $c_1(\mathscr{O}(1) , FS^{\mathbb{P}}(N_k))$ is a positive $(1, 1)$-form.
From Kobayashi [43], for general norms $N_k$, the $(1, 1)$-current $c_1(\mathscr{O}(1) , FS^{\mathbb{P}}(N_k))$ is positive, cf. <cit.> for details.
In particular, the metric $FS(N_k)$ is positive for any norm $N_k$ on $H^0(X, L^k)$.
We will now study the properties of the Fubini-Study map on the space of graded norms.
Let $N, N'$ be graded norms on the section ring $R(X, L)$.
For $p \in [1, +\infty]$, we define
\begin{equation}\label{eq_dp_graded}
d_p(N, N') := \limsup_{k \to \infty} \frac{d_p(N_k, N'_k)}{k},
\end{equation}
where $N_k, N'_k$ are the restrictions of $N, N'$ to $H^0(X, L^k)$.
A trivial verification, based on (<ref>), shows that the Fubini-Study map is 1-Lipschitz with respect to the $d_{+ \infty}$-metric.
In other words, we have
\begin{equation}\label{eq_fs_contact}
d_{+ \infty}(FS(N_k), FS(N_k'))
\leq
d_{+ \infty}(N_k, N_k').
\end{equation}
For other $d_p$-metrics, $p \in [1, +\infty[$, no relation between the distances of graded norms and distances of their Fubini-Study metrics exists, see <cit.>.
But from the work of the author [36], we know that there is such a relation under an additional submultiplicativity assumption, (<ref>).
More precisely, from Lemma <ref>, it is easy to verify that the sequence of Fubini-Study metrics $FS(N_k)$, $k \geq k_0$, is submultiplicative for any submultiplicative graded norm $N = \sum N_k$ on $R(X, L)$.
By this we mean that for any $k, l \geq k_0$, $FS(N_{k + l}) \leq FS(N_k) \cdot FS(N_l)$.
In particular, by Fekete's lemma, the sequence of metrics $FS(N_k)^{\frac{1}{k}}$ on $L$ converges, as $k \to \infty$, to a (possibly only bounded from above and even null) upper semicontinuous metric, which we denote by $FS(N)$.
We say that $N$ is bounded if $FS(N)$ is bounded, and we denote by $FS(N)_*$ the lower semicontinuous regularization of $FS(N)$, which is psh, cf. <cit.>.
For any bounded submultiplicative graded norms $N, N'$ on $R(X, L)$, and any $p \in [1, +\infty[$, we have
\begin{equation}\label{eq_d_p_norm_fs_rel}
d_p \big( FS(N)_*, FS(N')_* \big)
=
d_p(N, N').
\end{equation}
Moreover, we have $\lim$ instead of $\limsup$ in (<ref>) in this case. If, moreover, $FS(N)$ and $FS(N')$ are continuous, then one can take $p = +\infty$ above.
Let us recall, finally, that a result of Tian [64] states that for smooth positive metrics $h^L_0$ on $L$, as $k \to \infty$, the following uniform convergence takes place
\begin{equation}\label{eq_tian_conv}
FS({\rm{Hilb}}_k(h^L_0))^{\frac{1}{k}} \to h^L_0.
\end{equation}
Directly from Lemma <ref>, we see that (<ref>) can be restated in the following way.
For any $\epsilon > 0$, there is $k_0 \in \nat$, such that for any $k \geq k_0$, we have
\begin{equation}\label{eq_tian_dinf_norms}
d_{+ \infty}\Big({\rm{Ban}}^{\infty}_k(h^L_0), {\rm{Hilb}}_k(h^L_0) \Big)
\leq
\epsilon k.
\end{equation}
More detailed analysis, using the fact that $h^L_0$ is positive and smooth, cf. Catlin [16], Zelditch [67], Dai-Liu-Ma [22] and Ma-Marinescu [48], shows that we can improve (<ref>) by replacing the right-hand side by $(n + \epsilon) \log k$.
We recall that Theorem <ref> was established in [36] as a consequence of several results.
We proved in <cit.> that bounded submultiplicative norms are equivalent (with respect to the $d_p$-distance, $p \in [1, +\infty[$) to ${\rm{Ban}}^{\infty}(h^L)$ for regularizable from above psh metrics $h^L$ on $L$, cf. <cit.> for a definition of a regularizable from above psh metric.
Darvas-Lu-Rubinstein in <cit.> proved, following previous works of Chen-Sun <cit.> and Berndtsson [5], that the identity (<ref>) holds for norms $N, N'$ given by ${\rm{Hilb}}(h^L_0, \omega)$ and ${\rm{Hilb}}(h^L_1, \omega)$ for some bounded psh metrics $h^L_0$, $h^L_1$ and a fixed Kähler form $\omega$.
Finally, we established in <cit.> that the norms ${\rm{Ban}}^{\infty}(h^L)$ and ${\rm{Hilb}}(h^L, \omega)$ are equivalent (again with respect to the $d_p$-distance, $p \in [1, +\infty[$) for regularizable from above psh metrics $h^L$.
For norms with continuous $FS(N)$, $FS(N')$, the only difference is that instead of <cit.> one has to apply <cit.>, and instead of <cit.> – <cit.>.
§.§ Isometry properties of the quantization scheme of Phong-Sturm
The main goal of this section is to establish the upper bound in Theorem <ref>.
Our proof relies in an essential way on the fact that the finitely dimensional version of Theorem <ref> holds, see (<ref>).
To pass from this finitely-dimensional picture to the infinitely-dimensional one of Theorem <ref>, we rely on the methods of geometric quantization using the quantization scheme introduced by Phong-Sturm for geodesic rays associated with test configurations.
The central point is then to prove that this quantization scheme preserves distances in a reasonable sense.
More precisely, we fix an ample test configuration $\mathcal{T}$ of a polarized projective manifold $(X, L)$, and let $\mathcal{F}^{\mathcal{T}}_k$, $k \in \nat$, be the filtrations on the graded pieces $H^0(X, L^k)$ of the section ring $R(X, L)$ induced by the test configuration as in Section <ref>.
We fix a smooth positive metric $h^L_0$ on $L$, and for any $t \in [0, +\infty[$, $k \in \nat$, we define, following Phong-Sturm [54], $H^{\mathcal{T}}_{t, k}$ as the (geodesic) ray of Hermitian norms on $H^0(X, L^k)$ associated with the filtration $\mathcal{F}^{\mathcal{T}}_k$ and emanating from ${\rm{Hilb}}_k(h^L_0)$ as in (<ref>).
We denote by $H^{\mathcal{T}}_t = \sum_{k = 0}^{\infty} H^{\mathcal{T}}_{t, k}$ the associated graded norm on $R(X, L)$.
We denote by $h^{\mathcal{T}}_t$ the geodesic ray of metrics on $L$, constructed from the test configuration $\mathcal{T}$ as in Section <ref>.
The following result will be central in our approach to the upper bound in Theorem <ref>.
For any ample test configurations $\mathcal{T}_1, \mathcal{T}_2$ of a polarized projective manifold $(X, L)$ and any $t \in [0, + \infty[$, $p \in [1, +\infty[$, we have the following metric relation between the quantized geodesic rays of norms and geodesic rays of metrics
\begin{equation}\label{eq_dp_ray_norms_herm}
d_p(h^{\mathcal{T}_1}_t, h^{\mathcal{T}_2}_t)
=
d_p \big(
H^{\mathcal{T}_1}_t, H^{\mathcal{T}_2}_t
\big)
\end{equation}
and we have $\lim$ instead of $\limsup$ in the definition (<ref>) corresponding to the right-hand side of (<ref>).
Moreover, for $p = +\infty$, we have
\begin{equation}
d_{+ \infty}(h^{\mathcal{T}_1}_t, h^{\mathcal{T}_2}_t)
\leq
\liminf_{k \to \infty}
\frac{d_{+ \infty} \big(
H^{\mathcal{T}_1}_{t, k}, H^{\mathcal{T}_2}_{t, k}
\big)}{k}.
\end{equation}
To establish Theorem <ref>, the following result of Phong-Sturm is indispensable.
The geodesic ray $h^{\mathcal{T}}_t$, $t \in [0, + \infty[$, associated with the test configuration $\mathcal{T}$ is related to the geodesic ray of Hermitian norms $H^{\mathcal{T}}_t$ as follows
\begin{equation}
h^{\mathcal{T}}_t
=
\lim_{k \to \infty}
\big(
\inf_{l \geq k}
FS(H^{\mathcal{T}}_{t, l})^{\frac{1}{l}}
\big)_*.
\end{equation}
By the definition of the geodesic ray of norms $H_{t, k}^{\mathcal{F}}$ and the boundedness of the filtration associated with an ample test configuration, we conclude that there is $C > 0$, such that for any $k \in \nat^*$, $t \in [0, +\infty[$, we have
\begin{equation}\label{eq_ray_norms_decart}
d_{+ \infty}(H_{t, k}^{\mathcal{F}}, H_{0, k}^{\mathcal{F}})
\leq
C t k.
\end{equation}
From the second part of Theorem <ref> and (<ref>), we deduce Lemma <ref>.
The main idea of the proof is to replace the rays of norms $H^{\mathcal{T}_1}_t, H^{\mathcal{T}_2}_t$, $t \in [0, +\infty[$, by their submultiplicative analogues (<ref>), to which we can apply Theorem <ref>.
More precisely, for an ample test configuration $\mathcal{T}$ of a polarized pair $(X, L)$, we denote by $N^{\mathcal{T}}_{t, k}$, $t \in [0, +\infty[$, the ray of norms emanating from ${\rm{Ban}}^{\infty}_k(h^L_0)$ associated with $\mathcal{F}^{\mathcal{T}}_k$ as in (<ref>).
We denote by $N^{\mathcal{T}}_t = \sum_{k = 0}^{\infty} N^{\mathcal{T}}_{t, k}$ the associated graded ray of norms on $R(X, L)$.
The crucial point about the graded norms $N^{\mathcal{T}}_t$, $t \in [0, +\infty[$, is that they are submultiplicative in the sense (<ref>).
This follows from Lemma <ref>, the fact that the norm ${\rm{Ban}}^{\infty}(h^L_0)$ is submultiplicative and the fact that the filtration $\mathcal{F}^{\mathcal{T}}$ is submultiplicative, see Section <ref>.
Since the filtration $\mathcal{F}^{\mathcal{T}}$ is bounded, we see that $N^{\mathcal{T}}_{t}$, $t \in [0, +\infty[$, is bounded in the sense described before Theorem <ref>, cf. <cit.>.
We conclude by Theorem <ref> that for any $p \in [1, +\infty[$, the following relation holds
\begin{equation}\label{eq_thm_dp_ray_norms_herm0}
d_p \big( FS(N^{\mathcal{T}_1}_{t})_*, FS(N^{\mathcal{T}_2}_{t})_* \big)
=
d_p \big(N^{\mathcal{T}_1}_{t}, N^{\mathcal{T}_2}_{t} \big),
\end{equation}
and we have $\lim$ instead of $\limsup$ in the definition (<ref>) of the right-hand side of (<ref>).
By (<ref>), we have $d_{+ \infty} ( FS(N^{\mathcal{T}_1}_{t, k}), FS(N^{\mathcal{T}_2}_{t, k}) ) \leq d_{+ \infty} (N^{\mathcal{T}_1}_{t, k}, N^{\mathcal{T}_2}_{t, k} ).$
Remark that $d_{+ \infty}$-distance is lower semicontinuous with respect to the pointwise convergence, i.e. for a sequence of metrics $h^L_{1, l}$, $h^L_{2, l}$, $l \in \nat$, on $L$ converging pointwise to some bounded metrics $h^L_1$, $h^L_2$, we have $d_{+ \infty}(h^L_1, h^L_2) \leq \liminf_{l \to \infty} d_{+ \infty}(h^L_{1, l}, h^L_{2, l})$.
Also, lower-semicontinuous regularization is 1-Lipschitz with respect to $d_{+ \infty}$-distance, i.e. for any bounded metrics $h^L_1, h^L_2$ on $L$, we have $d_{+ \infty}(h^L_{1 *}, h^L_{2 *}) \leq d_{+ \infty}(h^L_1, h^L_2)$.
From all these observations, we conclude
\begin{equation}
d_{+ \infty} \big( FS(N^{\mathcal{T}_1}_{t})_*, FS(N^{\mathcal{T}_2}_{t})_* \big)
\leq
\liminf_{k \to \infty} \frac{d_{+ \infty} (N^{\mathcal{T}_1}_{t, k}, N^{\mathcal{T}_2}_{t, k} )}{k}.
\end{equation}
Now, it is only left to relate the rays of norms $N^{\mathcal{T}_i}_t$, $t \in [0, +\infty[$, $i = 1, 2$, to $H^{\mathcal{T}_i}_t$, and the rays of metrics $FS(N^{\mathcal{T}_i}_{t, k})^{\frac{1}{k}}$, $t \in [0, +\infty[$, $k \in \nat^*$, to $FS(H^{\mathcal{T}_i}_{t, k})^{\frac{1}{k}}$.
For this, by Lemma <ref>, (<ref>) and (<ref>), for any $p \in [1, +\infty]$, we have
\begin{equation}\label{eq_thm_dp_ray_norms_herm1}
\liminf_{k \to \infty} \frac{d_p (N^{\mathcal{T}_1}_{t, k}, N^{\mathcal{T}_2}_{t, k} )}{k}
=
\liminf_{k \to \infty} \frac{d_p (H^{\mathcal{T}_1}_{t, k}, H^{\mathcal{T}_2}_{t, k} )}{k}.
\end{equation}
Remark also that the sequence of metrics $FS(N^{\mathcal{T}_i}_{t, k})^{\frac{1}{k}}$, $i = 1, 2$, $k \in \nat^*$, is submultiplicative by the discussion before Theorem <ref>, and, hence, by Fekete's lemma, its limit $FS(N^{\mathcal{T}_i}_{t})$ coincides with the infimum of $FS(N^{\mathcal{T}_i}_{t, k})^{\frac{1}{k}}$, $k \in \nat^*$.
From this and Lemma <ref>, we obtain
\begin{equation}\label{eq_thm_dp_ray_norms_herm2}
FS(N^{\mathcal{T}_i}_{t})_*
=
\lim_{k \to \infty}
\big(
\inf_{l \geq k}
FS(H^{\mathcal{T}_i}_{t, l})^{\frac{1}{l}}
\big)_*.
\end{equation}
We conclude by Theorem <ref>, (<ref>), (<ref>) and (<ref>).
Now, we have everything ready to prove a part of Theorem <ref>.
First of all, for any $t \in [0, + \infty[$, $p \in [1, +\infty]$, $k \in \nat$, by the finitely-dimensional analogue (<ref>) of Theorem <ref>, we have
\begin{equation}\label{eq_thm_dist_na_0}
\frac{d_p \big(
H^{\mathcal{T}_1}_{t, k}, H^{\mathcal{T}_2}_{t, k}
\big)}{t}
\leq
d_p(\mathcal{F}^{\mathcal{T}_1}_k, \mathcal{F}^{\mathcal{T}_2}_k).
\end{equation}
We now divide both sides of (<ref>) by $k$, take the limit $k \to \infty$ and use Theorem <ref> along with (<ref>) to conclude that we have
\begin{equation}\label{eq_thm_dist_na_1}
\frac{d_p(h^{\mathcal{T}_1}_t, h^{\mathcal{T}_2}_t)}{t}
\leq
\liminf_{k \to \infty} \frac{ d_p(\mathcal{F}^{\mathcal{T}_1}_k, \mathcal{F}^{\mathcal{T}_2}_k)}{k}.
\end{equation}
By now taking the limit $t \to \infty$ in (<ref>), we obtain the upper bound of Theorem <ref>.
§.§ Finite-type approximations of submultiplicative filtrations
The main goal of this section is to study the behavior of distances between geodesic rays and submultiplicative filtrations by finite-type approximations of the filtrations introduced in Section <ref>.
More precisely, we establish Propositions <ref> and <ref>.
Let us fix a bounded submultiplicative filtration $\mathcal{F}$ on $R(X, L)$ and a positive Hermitian metric $h^L_0$ on $L$.
For $k \in \nat$, we denote by $N^{\mathcal{F}}_{t, k}$, $t \in [0, +\infty[$, the ray of norms emanating from $N_k := {\rm{Ban}}^{\infty}_k(h^L_0)$ associated with $\mathcal{F}_k$ as in (<ref>).
We denote by $N^{\mathcal{F}}_t = \sum_{k = 0}^{\infty} N^{\mathcal{F}}_{t, k}$ (resp. $N = \sum_{k = 0}^{\infty} N_k$) the associated graded ray of norms on $R(X, L)$.
When $\mathcal{F}$ is a filtration associated with an ample test configuration $\mathcal{T}$ of a polarized pair $(X, L)$, this ray of norms coincides with the ray of norms $N^{\mathcal{T}}_t$ constructed in the proof of Theorem <ref>.
By the same argument, $N^{\mathcal{F}}_t$, $t \in [0, +\infty[$, is submultiplicative in the sense (<ref>) and bounded in the sense described before Theorem <ref>.
By submultiplicativity of $\mathcal{F}$, the non-Archimedean norms $\chi_{\mathcal{F}\{ k \}}$ and $\chi_{\mathcal{F}}$ associated with $\mathcal{F}\{ k \}$ and $\mathcal{F}$ are related as $\chi_{\mathcal{F}\{ k \}} \geq \chi_{\mathcal{F}\{ k + 1 \}} \geq \chi_{\mathcal{F}}$.
Directly from this and (<ref>), we obtain that for any $t \in [0, +\infty[$, we have
\begin{equation}\label{eq_filtr_geod_ray1}
N^{\mathcal{F}\{ k \}}_{t}
\geq
N^{\mathcal{F}\{ k + 1 \}}_{t}
\geq
N^{\mathcal{F}}_{t}.
\end{equation}
From Theorem <ref> and (<ref>), for any $k \in \nat^*$ sufficiently big, we have
\begin{equation}\label{eq_filtr_geod_ray0}
h_t^{\mathcal{F}\{ k \}}
=
FS(N^{\mathcal{F}\{ k \}}_{t})_*.
\end{equation}
From (<ref>) and (<ref>), we obtain
\begin{equation}\label{eq_filtr_geod_ray1011}
h_t^{\mathcal{F}\{ k \}}
\geq
h_t^{\mathcal{F}\{ k + 1 \}}
\geq
FS(N^{\mathcal{F}}_{t})_*,
\end{equation}
which proves a part of Proposition <ref>.
However, since $N^{\mathcal{F}\{ k \}}_{t}$ coincides with $N^{\mathcal{F}}_{t}$ over $H^0(X, L^k)$, we deduce by (<ref>) and Fekete's lemma that for any $k \in \nat^*$, we have
\begin{equation}\label{eq_filtr_geod_ray1012}
FS(N^{\mathcal{F}}_{t, k})^{\frac{1}{k}} \geq h_t^{\mathcal{F}\{ k \}}.
\end{equation}
The last two estimates imply that
\begin{equation}
\big( \lim_{k \to \infty} h_t^{\mathcal{F}\{ k \}} \big)_*
=
FS(N^{\mathcal{F}}_{t})_*.
\end{equation}
From <cit.>, we conclude that $h_t^{\mathcal{F}} := ( \lim_{k \to \infty} h_t^{\mathcal{F}\{ k \}} )_*$ is a geodesic ray.
It is maximal by <cit.>.
Now, to establish Proposition <ref>, we need the following result.
Let $h^L_k$, $k \in \nat^*$, be a submultiplicative sequence of bounded metrics on $L^k$, such that $h^L := (\lim_{k \to \infty} (h^L_k)^{\frac{1}{k}})_*$ is bounded.
Then for any $p \in [1, +\infty[$, $\lim_{k \to \infty} d_p((h^L_k)^{\frac{1}{k}}, h^L) = 0$.
First of all, by submultiplicativity, the sequence of metrics $(h^L_{2^k})^{\frac{1}{2^k}}$ is decreasing.
By this and <cit.>, for any $p \in [1, +\infty[$, we have
$\lim_{k \to \infty} d_p \big( (h^L_{2^k})^{\frac{1}{2^k}}, h^L \big) = 0$.
By Fekete's lemma, we have $(h^L_{k})^{\frac{1}{k}} \geq h^L$ for any $k \in \nat^*$.
We now fix $\epsilon > 0$, and let $k_0 \in \nat^*$ be such that $d_p ( (h^L_{2^k})^{\frac{1}{2^k}}, h^L ) \leq \epsilon$, for any $k \geq k_0$.
By the boundedness of $h^L$, there is $M > 0$, such that for any $k \leq k_0$, we have $d_{+\infty} ( (h^L_{2^k})^{\frac{1}{2^k}}, h^L ) \leq M$.
By using binary expansion and submultiplicativity, we deduce that for any $k \in \nat^*$, we have
$d_p \big( (h^L_{k})^{\frac{1}{k}}, h^L \big) \leq \epsilon^{\frac{k - k_0}{k}} \cdot M^{\frac{k_0}{k}}$.
We conclude that, for $k \in \nat^*$ big enough (so that $\frac{k - k_0}{k} \geq \frac{1}{2}$ and $M^{\frac{k_0}{k}} \leq 2$, assuming, as we may, that $\epsilon < 1$), we have $d_p \big( (h^L_{k})^{\frac{1}{k}}, h^L \big) \leq 2 \epsilon^{\frac{1}{2}}$. This finishes the proof, as $\epsilon > 0$ can be made arbitrarily small.
We follow here closely the proof of <cit.>.
First, by the trivial inequality $d_p(\mathcal{F}\{k\}, \mathcal{F}) \leq d_1(\mathcal{F}\{k\}, \mathcal{F})^{\frac{1}{p}} \cdot d_{+\infty}(\mathcal{F}\{k\}, \mathcal{F})^{\frac{p-1}{p}}$ and the boundedness of $\mathcal{F}$, it is enough to establish Proposition <ref> for $p = 1$.
We may also assume that the filtration $\mathcal{F}$ satisfies the additional assumption
\begin{equation}\label{eq_wt_neg}
\mathcal{F}^0 R(X, L) = \{0\}.
\end{equation}
In fact, since $\mathcal{F}$ is bounded, there is $C > 0$, verifying $\mathcal{F}^{C k} H^0(X, L^k) = \{0\}$.
Define the filtration $\mathcal{F}[-C]$ on $R(X, L)$, for any $k \in \nat$, $\lambda \in \real$, as follows $\mathcal{F}[-C]^{\lambda} H^0(X, L^k) = \mathcal{F}^{\lambda + Ck} H^0(X, L^k)$.
Clearly, $\mathcal{F}[-C]$ is submultiplicative and bounded whenever $\mathcal{F}$ is submultiplicative and bounded, and establishing Proposition <ref> for $\mathcal{F}[-C]$ is equivalent to establishing it for $\mathcal{F}$.
We, hence, assume from now on that $\mathcal{F}$ satisfies (<ref>).
Then by (<ref>) and (<ref>), for any $k \in \nat$ sufficiently big and any $t \in [0, +\infty[$, we have
\begin{equation}\label{eq_filtr_geod_ray122}
N^{\mathcal{F}\{ k \}}_{t}
\geq
N^{\mathcal{F}}_{t}
\geq
N,
\end{equation}
From this, for any $l \in \nat^*$, we have $
d_1(N^{\mathcal{F}\{ k \}}_{t, l}, N^{\mathcal{F}}_{t, l})
=
d_1(N^{\mathcal{F}\{ k \}}_{t, l}, N_l)
-
d_1(N^{\mathcal{F}}_{t, l}, N_l)
$, by <cit.>.
In particular, directly from this, (<ref>) and (<ref>), we see that the function $d_1(N^{\mathcal{F}\{ k \}}_{t, l}, N^{\mathcal{F}}_{t, l}) $ is linear in $t$.
Hence, by Remark <ref> we conclude that
\begin{equation}
d_1(\mathcal{F}\{k\}_l, \mathcal{F}_l)
=
\lim_{t \to \infty}
\frac{d_1(N^{\mathcal{F}\{ k \}}_{t, l}, N^{\mathcal{F}}_{t, l})}{t}
=
d_1(N^{\mathcal{F}\{ k \}}_{1, l}, N^{\mathcal{F}}_{1, l}),
\end{equation}
where $\mathcal{F}\{k\}_l$ and $\mathcal{F}_l$ are restrictions of $\mathcal{F}\{k\}$ and $\mathcal{F}$ to $H^0(X, L^l)$.
By taking now $l \to \infty$ and using Theorem <ref>, we conclude that $d_1(\mathcal{F}\{k\}, \mathcal{F}) = d_1(h_1^{\mathcal{F}\{ k \}}, h_1^{\mathcal{F}})$.
This and (<ref>), (<ref>) imply that it is enough to prove that $\lim_{k \to \infty} d_1 \big( FS(N^{\mathcal{F}}_{1, k})^{\frac{1}{k}}, FS(N^{\mathcal{F}}_1)_* \big) = 0$, which is a direct consequence of Lemma <ref>.
§ UNIFORM SUBMULTIPLICATIVITY, TOEPLITZ OPERATORS AND SNC MODELS
The main goal of this section is to prove that the left-hand side of (<ref>) is not smaller than the right-hand side, i.e. that the lower bound of Theorem <ref> holds.
This with the fact that we already established the opposite bound in Section <ref> would give us a complete proof of Theorem <ref>.
Similarly to the proof of the upper bound from Section <ref>, the proof here relies on the geometric quantization procedure of Phong-Sturm.
But otherwise it is rather different.
We first make a comparison between the geodesic ray of norms on the section ring, introduced before Theorem <ref>, and the ray of $L^2$-norms of the geodesic ray of metrics associated with the test configurations as introduced in Section <ref>.
We then show that it is sufficient to assume that the singularities of the central fibers of test configurations are mild enough.
Finally, for test configurations with mild singularities, we estimate the distance between the $L^2$-norms associated with the geodesic rays of metrics in terms of the distance between the geodesic rays of metrics themselves.
Combining all these results with a result from Section <ref>, saying that the finitely-dimensional analogue of Theorem <ref> holds, leads to a proof of the lower bound from Theorem <ref>.
More precisely, recall that before Theorem <ref> we defined, following Phong-Sturm, a geodesic ray of graded Hermitian norms $H^{\mathcal{T}}_t = \sum_{k = 0}^{\infty} H^{\mathcal{T}}_{t, k}$, $t \in [0, +\infty[$, on $R(X, L)$ associated with a test configuration $\mathcal{T}$.
Let $h^{\mathcal{T}}_t$, $t \in [0, +\infty[$, be the geodesic ray of metrics on $L$ associated with $\mathcal{T}$ as in Section <ref>.
Let $\omega$ be a Kähler form on $X$.
In Sections <ref>, <ref>, by relying on the results of Phong-Sturm [56] and the methods from the previous works of the author, [35], [36], we establish the following result, relating rays $H^{\mathcal{T}}_t$ and $h^{\mathcal{T}}_t$, $t \in [0, + \infty[$.
There are $C > 0$, $k_0 \in \nat^*$, such that for any $t \in [0, +\infty[$, $k \geq k_0$, the ray of norms $H^{\mathcal{T}}_t$ compares to the $L^2$-norms associated with the geodesic ray of metrics $h^{\mathcal{T}}_t$ as follows
\begin{equation}
d_{+ \infty} \big(
H^{\mathcal{T}}_{t, k},
{\rm{Hilb}}_k(h^{\mathcal{T}}_t, \omega)
\big)
\leq
C(t + k).
\end{equation}
Similarly, for $L^{\infty}$-norms, we have
\begin{equation}
d_{+ \infty} \big(
H^{\mathcal{T}}_{t, k},
{\rm{Ban}}^{\infty}_k(h^{\mathcal{T}}_t)
\big)
\leq
C(t + k).
\end{equation}
Now, we say that a proper holomorphic map $\pi : \mathcal{X} \to \mathbb{C}$ (or $\pi : \mathcal{X} \to \mathbb{D}$) is an snc model if $\mathcal{X}$ is smooth, the central fiber $X_0$ is a simple normal crossing divisor in $\mathcal{X}$, and the intersections of irreducible components of $X_0$ are either irreducible or empty.
If, furthermore, the central fiber is reduced, we say that it is a semistable snc model.
When $\mathcal{X}$ is endowed with an ample line bundle $\mathcal{L}$, the pair $(\pi, \mathcal{L})$ is called an ample semistable snc model.
In Section <ref>, by relying on the results of Phong-Sturm [56] and Boucksom-Jonsson [12], we establish the following result.
In order to prove Theorem <ref>, it suffices to establish it for $\mathcal{T}_1 = (\pi: \mathcal{X} \to \comp, \mathcal{L}_1)$ and $\mathcal{T}_2 = (\pi: \mathcal{X} \to \comp, \mathcal{L}_2)$, where $(\pi, \mathcal{L}_1)$, $(\pi, \mathcal{L}_2)$ are ample semistable snc models.
Let us now fix two test configurations $\mathcal{T}_1, \mathcal{T}_2$ as in Theorem <ref>.
We fix a smooth positive metric $h^L_0$ on $L$, and let $h^{\mathcal{T}_1}_t, h^{\mathcal{T}_2}_t$, $t \in [0, +\infty[$, be the geodesic rays of metrics on $L$ associated with $\mathcal{T}_1, \mathcal{T}_2$ and emanating from $h^L_0$.
In Sections <ref>, <ref>, by relying on the methods of Dai-Liu-Ma [22], Ma-Marinescu [48], [49], Darvas-Lu-Rubinstein [27], and the results of Berndtsson [5], we establish the following result.
For any $\epsilon > 0$, $p \in [1, +\infty[$, there are $C > 0, k_0 \in \nat$, such that for any $t \in [0, +\infty[$, $k \geq k_0$, the following bound holds
\begin{equation}\label{eq_3_step}
d_p \big( {\rm{Hilb}}_k(h^{\mathcal{T}_1}_t, \omega), {\rm{Hilb}}_k(h^{\mathcal{T}_2}_t, \omega) \big)
\leq
k \cdot d_p(h^{\mathcal{T}_1}_t, h^{\mathcal{T}_2}_t)
+
C(k + t)
+
\epsilon k t.
\end{equation}
Moreover, for $p = +\infty$, we have
\begin{equation}
d_{+\infty} \big( {\rm{Hilb}}_k(h^{\mathcal{T}_1}_t, \omega), {\rm{Hilb}}_k(h^{\mathcal{T}_2}_t, \omega) \big)
\leq
k \cdot d_{+\infty}(h^{\mathcal{T}_1}_t, h^{\mathcal{T}_2}_t).
\end{equation}
We will now show how to assemble these results to finally establish Theorem <ref>.
By Theorem <ref>, without loss of generality, we may assume that $\mathcal{T}_1 = (\pi: \mathcal{X} \to \comp, \mathcal{L}_1)$ and $\mathcal{T}_2 = (\pi: \mathcal{X} \to \comp, \mathcal{L}_2)$, where $(\pi, \mathcal{L}_1)$, $(\pi, \mathcal{L}_2)$ are ample semistable snc models.
We use the notations introduced before Theorem <ref>.
From Theorems <ref>, <ref> and (<ref>), we conclude that for any $\epsilon > 0$, $p \in [1, +\infty]$, there are $C > 0$, $k_0 \in \nat^*$, such that for any $t \in [0, +\infty[$ and $k \geq k_0$, we have
\begin{equation}\label{eq_pf_2_0}
d_p \big( H^{\mathcal{T}_1}_{t, k}, H^{\mathcal{T}_2}_{t, k} \big)
\leq
k \cdot d_p(h^{\mathcal{T}_1}_t, h^{\mathcal{T}_2}_t)
+
C(k + t)
+
\epsilon k t.
\end{equation}
By dividing both sides of (<ref>) by $t$ and taking limit $t \to \infty$, we conclude that
\begin{equation}\label{eq_pf_2_1}
d_p \big( \mathcal{F}^{\mathcal{T}_1}_{k}, \mathcal{F}^{\mathcal{T}_2}_{k} \big)
\leq
k \cdot d_p \big(\{ h^{\mathcal{T}_1}_t \}, \{ h^{\mathcal{T}_2}_t \} \big)
+
C
+
\epsilon k.
\end{equation}
By dividing both sides of (<ref>) by $k$ and taking limit $k \to \infty$, we conclude that
\begin{equation}\label{eq_pf_2_2}
\limsup_{k \to \infty} \frac{d_p \big( \mathcal{F}^{\mathcal{T}_1}_{k}, \mathcal{F}^{\mathcal{T}_2}_{k} \big)}{k}
\leq
d_p \big(\{ h^{\mathcal{T}_1}_t \}, \{ h^{\mathcal{T}_2}_t \} \big)
+
\epsilon.
\end{equation}
Since $\epsilon > 0$ was chosen arbitrarily, we obtain the lower bound of Theorem <ref>.
From our proof of Theorem <ref>, we see that the limit of $d_p ( \mathcal{F}^{\mathcal{T}_1}_{k}, \mathcal{F}^{\mathcal{T}_2}_{k} ) / k$, $p \in [1, +\infty]$, exists as $k \to \infty$.
For $p \in [1, +\infty[$, a different proof of this fact was given by Chen-Maclean <cit.>, cf. also Boucksom-Jonsson <cit.>.
§.§ Geodesic rays of Hermitian norms as the ray of $L^2$-norms
The main goal of this section is to compare on a section ring geodesic rays of Hermitian norms associated with an ample test configuration with $L^2$-norms, i.e. to establish Theorem <ref>.
First of all, let us introduce some notations from linear algebra.
Recall first that a norm (Archimedean or non-Archimedean) $N_V = \| \cdot \|_V$ on a finitely dimensional vector space $V$ naturally induces the norm $\| \cdot \|_Q := [N_V]$ on any quotient $Q$, $\pi : V \to Q$ of $V$ as follows
\begin{equation}\label{eq_defn_quot_norm}
\| f \|_Q
:=
\inf \Big \{
\| g \|_V
\, : \,
g \in V,
\;
\pi(g) = f
\Big\},
\qquad f \in Q.
\end{equation}
Clearly, if $N_V$ is Hermitian, the quotient $N_Q$ is Hermitian as well.
Let $V$ (resp. $W$) be a finitely dimensional vector space with a Hermitian norm $H_V$ (resp. $H_W$).
We denote by ${\rm{Sym}}^l H_V$, $l \in \nat$, (resp. $H_V \otimes H_W$) the Hermitian norm on ${\rm{Sym}}^l V$ (resp. $V \otimes W$) associated with the scalar product induced by $H_V$ (resp. $H_V$ and $H_W$).
Now, for a polarized projective manifold $(X, L)$ and any $l, k \in \nat^*$, we define the multiplication
\begin{equation}\label{eq_mult_map}
{\rm{Mult}}_{l, k}^{{\rm{Sym}}} : {\rm{Sym}}^l H^0(X, L^k)
\to
H^0(X, L^{kl}),
\end{equation}
as $f_1 \otimes \cdots \otimes f_l \mapsto f_1 \cdots f_l$.
Similarly, we define
\begin{equation}\label{eq_mult_map2}
{\rm{Mult}}_{l, k} : H^0(X, L^l) \otimes H^0(X, L^k)
\to
H^0(X, L^{l + k}).
\end{equation}
The core of the proof of Theorem <ref>, whose notations we keep, lies in the following several results.
There is $k_0 \in \nat$, such that for any $k \geq k_0$, there are $C > 0$, $l_0 \in \nat$, such that for any $l \geq l_0$, $t \in [0, +\infty[$, under the map (<ref>), the following inequality takes place
\begin{equation}
\exp(C(l + t)) \cdot
{\rm{Hilb}}_{kl}(h^{\mathcal{T}}_t, \omega)
\geq
\big[ {\rm{Sym}}^l H^{\mathcal{T}}_{t, k} \big].
\end{equation}
There are $C > 0$, $k_0 \in \nat$, such that for any $l, k \geq k_0$, $t \in [0, +\infty[$, under the map (<ref>), the following inequality takes place
\begin{equation}
\exp(C t + C) \cdot
{\rm{Hilb}}_{l + k}(h^{\mathcal{T}}_t, \omega)
\geq
\big[ {\rm{Hilb}}_l(h^{\mathcal{T}}_t, \omega) \otimes {\rm{Hilb}}_k(h^{\mathcal{T}}_t, \omega) \big].
\end{equation}
Theorems <ref> and <ref> are uniform weak versions of <cit.>.
The proofs of Theorems <ref>, <ref>, which will be presented in Section <ref>, rely on Ohsawa-Takegoshi extension theorem.
For any $\epsilon > 0$, there is $k_0 \in \nat$ such that for any $l, k \geq k_0$, $t \in [0, +\infty[$, under the map (<ref>), the following inequality takes place
\begin{equation}
\big[ {\rm{Sym}}^l H^{\mathcal{T}}_{t, k} \big]
\geq
\exp(- \epsilon kl) \cdot
H^{\mathcal{T}}_{t, kl}.
\end{equation}
Similarly, under the map (<ref>), the following inequality takes place
\begin{equation}\label{eq_2_step_212}
\big[ H^{\mathcal{T}}_{t, l} \otimes H^{\mathcal{T}}_{t, k} \big]
\geq
\exp(- \epsilon (l + k)) \cdot
H^{\mathcal{T}}_{t, l + k}.
\end{equation}
Before describing the proof of Theorem <ref>, let us explain how along with Theorems <ref>, <ref>, they entail Theorem <ref>.
For this, we need to have a better understanding of the uniform quantization properties of the geodesic ray of Hermitian norms on the section ring, and the following result of Phong-Sturm will be of paramount significance for this.
Let us fix an arbitrary ample test configuration $\mathcal{T} = (\pi: \mathcal{X} \to \comp, \mathcal{L})$ and an arbitrary $\comp^*$-equivariant resolution of singularities $\mathcal{T}' = (\pi': \mathcal{X}' \to \comp, \mathcal{L}')$ of $\mathcal{T}$.
We fix an arbitrary smooth metric $h^{\mathcal{L}'}$ on $\mathcal{L}'$, and denote by $h^{\mathcal{T} {\rm{sm}}}_t$, $t \in [0, +\infty[$, the ray of smooth positive metrics on $L$ associated with $h^{\mathcal{L}'}$ in the same way as the geodesic ray $h^{\mathcal{T}}_t$, $t \in [0, +\infty[$, of metrics on $L$ was associated with a solution of the Monge-Ampère equation (<ref>).
There are $C > 0$, $k_0 \in \nat$, such that for any $t \in [0, +\infty[$, $k \geq k_0$, we have
\begin{equation}\label{eq_ph_st_regul011}
\exp(-C) \cdot h^{\mathcal{T} {\rm{sm}}}_t \leq FS(H^{\mathcal{T}}_{t, k})^{\frac{1}{k}} \leq \exp(C) \cdot h^{\mathcal{T} {\rm{sm}}}_t.
\end{equation}
In particular, by Theorem <ref>, we have
\begin{equation}
\exp(-C) \cdot h^{\mathcal{T} {\rm{sm}}}_t \leq h^{\mathcal{T}}_t \leq \exp(C) \cdot h^{\mathcal{T} {\rm{sm}}}_t.
\end{equation}
From the second part of Theorem <ref>, we see that two ample test configurations give rise to the same geodesic ray of metrics if and only if they are equivalent.
The only new statement is the validity of the upper bound of (<ref>).
The rest was established by Phong-Sturm in the proof of <cit.>, including the validity of the upper bound of (<ref>) for $k \in \nat$ divisible by $k_0 \in \nat$, where $k_0$ is any sufficiently big natural number.
Let us now fix $k_0, k_1 \in \nat$ sufficiently big and relatively prime.
For a given $k \in \nat$, $k \geq 2 k_0 k_1$, we decompose $k = k_0 r + k_1 s$, $r, s \in \nat$.
An easy calculation, cf. Lemma <cit.>, shows that $FS(\big[ H^{\mathcal{T}}_{t, k_0 r} \otimes H^{\mathcal{T}}_{t, k_1 s} \big]) = FS(H^{\mathcal{T}}_{t, k_0 r}) \cdot FS(H^{\mathcal{T}}_{t, k_1 s})$.
By (<ref>) and the validity of the upper bound (<ref>) for $k := k_0 r$, $k := k_1 s$, we deduce its validity for all sufficiently big $k$.
We are now finally ready to prove the main result of this section.
Directly from the lower bound of (<ref>), by Lemma <ref>, there are $C > 0$, $k_0 \in \nat$, such that for any $k \geq k_0$, $t \in [0, +\infty[$, we have
\begin{equation}\label{eq_low_bnd_inf1}
H^{\mathcal{T}}_{t, k}
\geq
{\rm{Ban}}^{\infty}_k \big( \exp(- C) \cdot h^{\mathcal{T}}_t \big).
\end{equation}
Since the $L^{\infty}$-norm dominates the $L^2$-norm, we deduce that
\begin{equation}\label{eq_low_bnd_inf12}
H^{\mathcal{T}}_{t, k}
\geq
\exp(- C k)
\cdot
{\rm{Hilb}}_k(h^{\mathcal{T}}_t, \omega).
\end{equation}
On another hand, directly from Theorem <ref> and the first part of Theorem <ref>, for any $k_0, k_1 \in \nat$ sufficiently big, there is $C > 0$, such that for any $t \in [0, +\infty[$, $k$ divisible by $k_0$ or $k_1$, we have
\begin{equation}\label{eq_low_bnd_inf2}
{\rm{Hilb}}_k(h^{\mathcal{T}}_t, \omega)
\geq
\exp(- C (k + t))
\cdot
H^{\mathcal{T}}_{t, k}.
\end{equation}
If we now fix $k_0, k_1 \in \nat^*$, which are relatively prime and big enough, and apply Theorem <ref>, the second part of Theorem <ref> and (<ref>) for $k = k_0 r$ and $k = k_1 s$, where $r, s \in \nat$ are big enough, we obtain that (<ref>) holds for all $k$ sufficiently large, since any such number can be written as $k_0 r + k_1 s$, $r, s \in \nat$.
Then a combination of (<ref>) and (<ref>) gives us a proof of the first part of Theorem <ref>.
Also, from (<ref>) and (<ref>), there are $C > 0$, $k_0 \in \nat$, such that for any $t \in [0, +\infty[$, $k \geq k_0$, we have
\begin{equation}
\exp(Ck) \cdot {\rm{Hilb}}_k(h^{\mathcal{T}}_t, \omega)
\geq
{\rm{Ban}}^{\infty}_k(h^{\mathcal{T}}_t).
\end{equation}
The proof of the second part of Theorem <ref> follows from this and the first part.
We will now prove Theorem <ref>.
This result in essence says that the construction of geodesic rays of Hermitian norms emanating from $L^2$-norms respects the multiplicative structure of the section ring.
The proof of Theorem <ref> decomposes into three statements.
The first statement from [35] shows that the construction of $L^2$-norms respects the multiplicative structure of the section ring.
The second statement shows that geodesic rays of Hermitian norms on finitely dimensional vector spaces behave reasonably under taking quotients.
The third statement shows that formation of geodesic rays on finitely dimensional vector spaces is compatible with tensor products.
We begin by recalling the first result.
For any continuous psh metric $h^L$ on $L$ and any $\epsilon > 0$, there is $k_0 \in \nat$, such that for any $l, k \geq k_0$, under the map (<ref>), the following inequality holds
\begin{equation}\label{eq_sec_ring}
\big[ {\rm{Sym}}^l {\rm{Hilb}}_k(h^L) \big]
\geq
\exp(- \epsilon k l)
\cdot
{\rm{Hilb}}_{kl}(h^L).
\end{equation}
Similarly, for any $\epsilon > 0$, there is $k_0 \in \nat$, such that for any $l, k \geq k_0$, under the map (<ref>), the following inequality holds
\begin{equation}\label{eq_sec_ring2}
\big[ {\rm{Hilb}}_l(h^L) \otimes {\rm{Hilb}}_k(h^L) \big]
\geq
\exp(- \epsilon (l + k))
\cdot
{\rm{Hilb}}_{l + k}(h^L).
\end{equation}
In <cit.>, we also proved that up to a subexponential factor, the inequality sign can be reversed in (<ref>).
These results were stated for the norms ${\rm{Hilb}}_k(h^L)$ instead of ${\rm{Hilb}}_k(h^L, \omega)$, but since the two differ by a subexponential factor, see Berman-Boucksom-Witt Nyström <cit.>, cf. <cit.>, the version of (<ref>) holds true as well.
The first linear algebra ingredient in the proof of Theorem <ref> goes as follows.
Let $H_0$ (resp. $H_1$) be a fixed Hermitian norm on $V$ (resp. $Q$), and let $\mathcal{F}$ (resp. $\mathcal{G}$) be a filtration on $V$ (resp. $Q$).
We assume that $[H_0] \geq H_1$ and $[\chi_{\mathcal{F}}] \geq \chi_{\mathcal{G}}$.
Then the geodesic ray $H_t^{\mathcal{F}}$, $t \in [0, +\infty[$, of Hermitian norms on $V$ associated with $\mathcal{F}$ and emanating from $H_0$ compares to the geodesic ray $H_t^{\mathcal{G}}$ of Hermitian norms on $Q$ associated with $\mathcal{G}$ and emanating from $H_1$ as follows
\begin{equation}\label{eq_interpol}
[H_t^{\mathcal{F}}]
\geq
H_t^{\mathcal{G}}.
\end{equation}
Remark first that the conditions $[H_0] \geq H_1$ and $[\chi_{\mathcal{F}}] \geq \chi_{\mathcal{G}}$ are equivalent to the fact that the quotient map $\pi : V \to Q$ is 1-Lipschitz, where $V$ is endowed with the norm $H_0$ (resp. $\chi_{\mathcal{F}}$) and $Q$ is endowed with the norm $H_1$ (resp. $\chi_{\mathcal{G}}$).
From this perspective, Proposition <ref> is a non-Archimedean version of the interpolation theorem of Stein-Weiss, cf. <cit.>, which says, as we recall below, that a similar statement holds for geodesics between two fixed Hermitian norms.
The proof of Proposition <ref> proceeds in two steps.
Let us denote by $N_t^{\mathcal{F}}$ (resp. $N_t^{\mathcal{G}}$) the ray of norms on $V$ (resp. $Q$) associated with $\mathcal{F}$ (resp. $\mathcal{G}$) and emanating from $H_0$ (resp. $H_1$) as in (<ref>).
Directly from (<ref>) and our assumptions on the relation between $H_0$ and $H_1$, $\mathcal{F}$ and $\mathcal{G}$, the following inequality is satisfied: $[N_t^{\mathcal{F}}] \geq N_t^{\mathcal{G}}$.
From this and Lemma <ref>, we conclude
\begin{equation}\label{eq_interpol0}
\dim V^2 \cdot [H_t^{\mathcal{F}}]
\geq
H_t^{\mathcal{G}}.
\end{equation}
We will now show that this estimate can be bootstrapped to (<ref>).
In fact, recall that in <cit.> we proved, by essentially reformulating the interpolation theorem of Stein-Weiss, that for any two Hermitian norms $H^V_0, H^V_1$ on $V$ and any two Hermitian norms $H^Q_0, H^Q_1$ on $Q$, verifying $[H^V_0] \geq H^Q_0$ and $[H^V_1] \geq H^Q_1$, for the geodesics $H^V_t$ (resp. $H^Q_t$) between $H^V_0$ and $H^V_1$ (resp. $H^Q_0$ and $H^Q_1$), we have $[H^V_t] \geq H^Q_t$, for any $t \in [0, 1]$.
Now, for fixed $h > 0$, by (<ref>), we can apply this result for $H^V_0 := H_0$, $H^V_1 := H_h^{\mathcal{F}}$ and $H^Q_0 := H_1$, $H^Q_1 := \frac{1}{\dim V^2} H_h^{\mathcal{G}}$.
For $t \in [0, h]$, it gives us
$(\dim V^2)^{\frac{t}{h}} \cdot [H_t^{\mathcal{F}}] \geq H_t^{\mathcal{G}}$.
As $h$ can be chosen as large as we wish, we deduce (<ref>).
Now, to state the last linear-algebraic ingredient, let us fix a finitely dimensional vector space $V$, endowed with a Hermitian norm $H_V$ and a filtration $\mathcal{F}$.
We denote by ${\rm{Sym}}^k \mathcal{F}$ the filtration on ${\rm{Sym}}^k V$ induced from the filtration $\mathcal{F}$ on $V$, and by $H_t^{{\rm{Sym}}^l \mathcal{F}}$ the geodesic ray of Hermitian norms on ${\rm{Sym}}^l V$ emanating from ${\rm{Sym}}^l H_V$ associated with the filtration ${\rm{Sym}}^l \mathcal{F}$.
Similarly, we fix another finitely dimensional vector space $W$, endowed with a Hermitian norm $H_W$ and a filtration $\mathcal{G}$.
Recall that the filtration $\mathcal{F} \otimes \mathcal{G}$ on $V \otimes W$ is defined so that in terms of the associated non-Archimedean norms, defined as in (<ref>), we have
\begin{equation}
\chi_{\mathcal{F} \otimes \mathcal{G}}(h) = \min \max_{i = 1, \ldots, N} \chi_{\mathcal{F}}(f_i) \cdot \chi_{\mathcal{G}}(g_i),
\end{equation}
where the minimum is taken over all possible decompositions $h = \sum_{i = 1}^{N} f_i \otimes g_i$, $N \in \nat$.
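For orientation, remark that taking the trivial decomposition $h = f \otimes g$ in this definition immediately gives, for any $f \in V$, $g \in W$,
\begin{equation}
\chi_{\mathcal{F} \otimes \mathcal{G}}(f \otimes g) \leq \chi_{\mathcal{F}}(f) \cdot \chi_{\mathcal{G}}(g),
\end{equation}
and one verifies directly that equality holds when $f$ and $g$ are elements of bases adapted to $\mathcal{F}$ and $\mathcal{G}$ respectively.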
We denote by $H_t^{\mathcal{F} \otimes \mathcal{G}}$ the geodesic ray of Hermitian norms on $V \otimes W$ emanating from $H_V \otimes H_W$ and associated with $\mathcal{F} \otimes \mathcal{G}$.
The construction of geodesic rays of norms is compatible with the symmetrization and tensor products.
In other words, for any $l \in \nat^*$ and $t \in [0, +\infty[$, we have
\begin{equation}\label{eq_p_expl_calc1}
H_t^{{\rm{Sym}}^l \mathcal{F}}
=
{\rm{Sym}}^l H_t^{\mathcal{F}},
\qquad
H_t^{\mathcal{F} \otimes \mathcal{G}}
=
H_t^{\mathcal{F}}
\otimes
H_t^{\mathcal{G}}.
\end{equation}
The proofs of both statements are identical, so we only concentrate on the part concerning the symmetric powers.
We denote $n = \dim V$ and let $e_1, \ldots, e_n$ be an orthonormal basis of $(V, H_V)$, adapted to the filtration $\mathcal{F}$ as in (<ref>).
Then we see that for multiindices $\alpha \in \nat^n$, $|\alpha| = l$, $\alpha = (\alpha_1, \ldots, \alpha_n)$, the basis $\sqrt{\frac{l!}{\alpha!}} e^{\alpha} := \sqrt{\frac{l!}{\alpha_1! \cdots \alpha_n!}} e_1^{\alpha_1} \cdots e_n^{\alpha_n}$ is an adapted basis for the filtration ${\rm{Sym}}^l \mathcal{F}$ on the Hermitian vector space $({\rm{Sym}}^l V, {\rm{Sym}}^l H_V)$.
From this, we deduce (<ref>).
The proofs of both statements are identical, so we only concentrate on the part concerning the symmetric powers.
By Lemma <ref>, we see that ${\rm{Sym}}^l H^{\mathcal{T}}_{t, k}$, $t \in [0, +\infty[$, can be interpreted as the geodesic ray of Hermitian norms, emanating from ${\rm{Sym}}^l H^{\mathcal{T}}_{0, k} = {\rm{Sym}}^l {\rm{Hilb}}_k(h^L_0)$, associated with the filtration ${\rm{Sym}}^l \mathcal{F}^{\mathcal{T}}_k$.
By Lemma <ref> and submultiplicativity of $\mathcal{F}$, we see that all the assumptions of Proposition <ref> are satisfied for $H_t^{\mathcal{F}} := {\rm{Sym}}^l H^{\mathcal{T}}_{t, k}$ and $H_t^{\mathcal{G}} := \exp(- \epsilon k l) H^{\mathcal{T}}_{t, kl}$.
Proposition <ref> then in our context gives us exactly Theorem <ref>.
§.§ Ohsawa-Takegoshi extension theorem and quotients of geodesic rays
The main goal of this section is to prove Theorems <ref>, <ref>.
The proofs are based on a version of Ohsawa-Takegoshi extension theorem with a uniform constant, which we recall below.
We fix a compact complex manifold $X$ of dimension $n$ with an ample line bundle $L$ over it, endowed with a smooth positive metric $h^L_0$.
Let $L_0, L_1$ be two line bundles on $X$, endowed with smooth semipositive metrics $h^{L_0}_0, h^{L_1}_0$, such that $(L_0 \otimes L_1, h^{L_0}_0 \otimes h^{L_1}_0)$ is a positive line bundle.
Let $Y$ be a closed submanifold of $X$ of dimension $m$.
Let $\omega$ be a fixed Kähler form on $X$.
There are $c, C > 0$, $k_0 \in \nat$, such that for any $k \geq k_0$, any psh metric $h^L$ and any section $f \in H^0(Y, L|_Y^k)$, there is a holomorphic extension $\tilde{f} \in H^0(X, L^k)$ of $f$, such that the following $L^2$-bound is satisfied
\begin{equation}\label{eq_ot_asymp}
\int_X | \tilde{f}(x) |_{h^L} \cdot \omega^n(x)
\leq
C
\cdot
\exp \big( c d_{+ \infty}(h^L, h^L_0) \big)
\cdot
\int_Y | f(y) |_{h^L} \cdot \omega^m(y).
\end{equation}
Similarly, there are $c, C > 0$, $k_0 \in \nat$, such that for any $k, l \geq k_0$, any psh metrics $h^{L_0}$, $h^{L_1}$ on $L_0$, $L_1$, and any section $f \in H^0(Y, L_0|_Y^k \otimes L_1|_Y^l)$, there is a holomorphic extension $\tilde{f} \in H^0(X, L_0^k \otimes L_1^l)$ of $f$, such that the following $L^2$-bound is satisfied
\begin{multline}\label{eq_ot_asymp2}
\int_X | \tilde{f}(x) |_{h^{L_0}, h^{L_1}} \cdot \omega^n(x)
\leq
C
\cdot
\exp \Big( c \big( d_{+ \infty}(h^{L_0}, h^{L_0}_0) + d_{+ \infty}(h^{L_1}, h^{L_1}_0) \big) \Big)
\cdot
\\
\int_Y | f(y) |_{h^{L_0}, h^{L_1}} \cdot \omega^m(y),
\end{multline}
where $| \cdot |_{h^{L_0}, h^{L_1}}$ is the pointwise norm induced by $h^{L_0}$ and $h^{L_1}$.
See the proof of <cit.>, which is a rather direct adaptation of more general and refined results of Demailly <cit.> and Ohsawa [51].
We will now reformulate Theorem <ref> in a form, which is better suited for our needs.
Consider the restriction operator
\begin{equation}\label{eq_res_map}
{\rm{Res}}_k : H^0(X, L^k) \to H^0(Y, L|_Y^k), \quad {\rm{Res}}_{k, l} : H^0(X, L_0^k \otimes L_1^l) \to H^0(Y, L_0|_Y^k \otimes L_1|_Y^l).
\end{equation}
In the language of quotient norms from (<ref>), considered with respect to the maps (<ref>), Theorem <ref> can be restated as follows: the maps (<ref>) are surjective, and the following bound holds
\begin{equation}\label{eq_reform_thm_ot_asymp}
\begin{aligned}
[{\rm{Hilb}}^X_k(h^L, \omega)]
\leq
C
\cdot
\exp(c d_{+ \infty}(h^L, h^L_0))
\cdot
{\rm{Hilb}}^Y_k(h^L, \omega|_Y),
\\
[{\rm{Hilb}}^X_{k, l}(h^{L_0}, h^{L_1}, \omega)]
\leq
C
\cdot
\exp \big( c ( d_{+ \infty}(h^{L_0}, h^{L_0}_0) + d_{+ \infty}(h^{L_1}, h^{L_1}_0) ) \big)
\cdot
\\
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
\cdot
{\rm{Hilb}}^Y_{k, l}(h^{L_0}, h^{L_1}, \omega|_Y),
\end{aligned}
\end{equation}
where ${\rm{Hilb}}^X_k(h^L, \omega)$ and ${\rm{Hilb}}^Y_k(h^L, \omega|_Y)$ (resp. ${\rm{Hilb}}^X_{k, l}(h^{L_0}, h^{L_1}, \omega)$ and ${\rm{Hilb}}^Y_{k, l}(h^{L_0}, h^{L_1}, \omega|_Y)$) stand for the $L^2$-norms on $H^0(X, L^k)$ and $H^0(Y, L|_Y^k)$ (resp. $H^0(X, L_0^k \otimes L_1^l)$ and $H^0(Y, L_0|_Y^k \otimes L_1|_Y^l)$) induced by $h^L$ (resp. $h^{L_0}$, $h^{L_1}$) and $\omega$.
To apply Theorem <ref> in the proof of Theorem <ref>, we need to interpret the symmetric tensor product norm in terms of the $L^2$-norm.
The following well-known result gives us exactly that.
For any Hermitian norm $H_V$ on a finitely dimensional complex vector space $V$, and any $k \in \nat^*$, we have
\begin{equation}\label{eq_p_expl_calc2}
{\rm{Sym}}^k H_V
=
{\rm{Hilb}}^{\mathbb{P}(V^*)}_k \big( FS^{\mathbb{P}}(H_V) \big)
\cdot
\sqrt{\frac{(k + \dim V - 1)!}{k!}},
\end{equation}
where we implicitly used the canonical isomorphism $H^0(\mathbb{P}(V^*), \mathscr{O}(k)) \simeq {\rm{Sym}}^k V$, and by $FS^{\mathbb{P}}(H_V)$ we mean the Fubini-Study metric on $\mathscr{O}(1)$ induced by $H_V$.
From this result, Tian [64] and Theorem <ref>, we see that $FS(H_t^{\mathcal{F}})$ is the geodesic ray corresponding to a test configuration associated with the filtration on $R(\mathbb{P}(V^*), \mathscr{O}(1))$ induced by $\mathcal{F}$, and so Lemma <ref> refines Theorem <ref> in the special case of projective spaces.
As we will not need this restatement of Lemma <ref> in what follows, we leave the details to the interested reader.
Let $e_1, \ldots, e_n$ be an orthonormal basis of $(V, H_V)$.
For any $k \in \nat$ and a multiindex $\alpha \in \nat^n$, $|\alpha| = k$, we have
$\big\| e^{\alpha} \big\|^2_{{\rm{Sym}}^k H_V} = \frac{\alpha!}{k!}$.
It is, hence, sufficient to establish that
\begin{equation}\label{eq_verif_l2}
\big\| e^{\alpha} \big\|^2_{{\rm{Hilb}}^{\mathbb{P}(V^*)}_k(FS^{\mathbb{P}}(H_V))}
=
\frac{\alpha!}{(k + n - 1)!}.
\end{equation}
By pulling back the integral from the definition of the $L^2$-norm on $\mathbb{P}(V^*)$ to the unit sphere $S^{2 n - 1}(V^*) \subset V^*$ under the natural projection map $p : S^{2 n - 1}(V^*) \to \mathbb{P}(V^*)$, cf. <cit.>, the verification of (<ref>) boils down to the verification of the identity
\begin{equation}
\int_{S^{2 n - 1}} |z|^{2\alpha} d \sigma(z)
=
\frac{\alpha! (n - 1)!}{(k + n - 1)!},
\end{equation}
where $d \sigma$ is the standard volume form on the standard unit sphere $S^{2 n - 1} \subset \comp^n$, normalized so that the total volume equals one. The last calculation is standard, cf. <cit.>.
Now, we fix a finitely dimensional vector space $V$ and endow it with a Hermitian norm $H_V$.
We fix a filtration $\mathcal{F}$ on $V$, and denote by $H_t^{\mathcal{F}}$, $t \in [0, +\infty[$, the ray of Hermitian norms emanating from $H_V$ as in (<ref>).
We denote by $FS^{\mathbb{P}}(H_t^{\mathcal{F}})$, $t \in [0, +\infty[$, the ray of Fubini-Study metrics on the hyperplane line bundle $\mathscr{O}(1)$ of $\mathbb{P}(V^*)$ constructed as in (<ref>).
For any $k \in \nat^*$, a filtration $\mathcal{F}$ on $V$ induces a filtration ${\rm{Sym}}^k \mathcal{F}$ on ${\rm{Sym}}^k V$.
We denote by $H_t^{{\rm{Sym}}^k \mathcal{F}}$, $t \in [0, +\infty[$, the ray of Hermitian norms on ${\rm{Sym}}^k V$ emanating from ${\rm{Sym}}^k H_V$ as in (<ref>).
By Lemmas <ref> and <ref>, we deduce that for any $k \in \nat^*$, $t \in [0, +\infty[$, the following identity holds
\begin{equation}\label{eq_gt_sym_l2}
H_t^{{\rm{Sym}}^k \mathcal{F}}
=
{\rm{Hilb}}^{\mathbb{P}(V^*)}_k \big( FS^{\mathbb{P}}(H_t^{\mathcal{F}}) \big)
\cdot
\sqrt{\frac{(k + \dim V - 1)!}{k!}}.
\end{equation}
We are now finally ready to prove the main results of this section.
From Theorem <ref>, (<ref>) and the fact that the function $\frac{(l + n)!}{l!}$ grows polynomially in $l \in \nat^*$ for fixed $n \in \nat^*$, we see that it is enough to prove that there is $k_0 \in \nat$, such that for any $k \geq k_0$, there are $C > 0$, $l_0 \in \nat$, such that for any $l \geq l_0$, $t \in [0, +\infty[$, under the map (<ref>), we have
\begin{equation}\label{eq_fin_11_00}
\exp(C(l + t)) \cdot
{\rm{Hilb}}^{X}_{kl}(FS(H_{t, k}^{\mathcal{F}})^{\frac{1}{k}}, \omega)
\geq
\big[ {\rm{Hilb}}^{\mathbb{P}(H^0(X, L^k)^*)}_l(FS^{\mathbb{P}}(H_{t, k}^{\mathcal{F}})) \big].
\end{equation}
Let us now denote by $\omega_{\mathbb{P}}$ the Fubini-Study Kähler form on $\mathbb{P}(H^0(X, L^k)^*)$ associated with the Hermitian norm $H_{0, k}^{\mathcal{F}} = {\rm{Hilb}}_k(h^L_0)$ on $H^0(X, L^k)$.
Remark that under the identification $H^0(\mathbb{P}(H^0(X, L^k)^*), \mathscr{O}(l)) = {\rm{Sym}}^l H^0(X, L^k)$, the multiplication map (<ref>) corresponds to the restriction map
${\rm{Res}}_{l} :
H^0(\mathbb{P}(H^0(X, L^k)^*), \mathscr{O}(l))
\to
H^0(X, L^{kl})$,
associated with the Kodaira embedding (<ref>), cf. <cit.>.
In other words, the following diagram is commutative
\begin{equation}\label{eq_kod_map_comm_d}
\begin{tikzcd}
{\rm{Sym}}^l(H^0(X, L^k)) \arrow[swap, rd, "{\rm{Mult}}_{l, k}"] \arrow[r, equal] & H^0(\mathbb{P}(H^0(X, L^k)^*), \mathscr{O}(l)) \arrow[d, "\res_l"] \\
& H^0(X, L^{kl}).
\end{tikzcd}
\end{equation}
From this observation, Theorem <ref> in its form (<ref>), (<ref>) and the very definition of $FS(H_{t, k}^{\mathcal{F}})$ as the pull-back of $FS^{\mathbb{P}}(H_{t, k}^{\mathcal{F}})$ through the Kodaira map, see (<ref>), we conclude that for any $k \in \nat^*$, there are $C > 0$, $l_0 \in \nat$, such that for any $l \geq l_0$, $t \in [0, +\infty[$, we have
\begin{equation}\label{eq_fin_11_0}
\exp(C(l + t)) \cdot
{\rm{Hilb}}^{X}_{kl}(FS(H_{t, k}^{\mathcal{F}})^{\frac{1}{k}}, {\rm{Kod}}_k^* \omega_{\mathbb{P}})
\geq
\big[ {\rm{Hilb}}^{\mathbb{P}(H^0(X, L^k)^*)}_l(FS^{\mathbb{P}}(H_{t, k}^{\mathcal{F}}), \omega_{\mathbb{P}}) \big].
\end{equation}
Since $c_1(\mathscr{O}(1), FS^{\mathbb{P}}(H_{t, k}^{\mathcal{F}}))$ is the Fubini-Study form associated with $H_{t, k}^{\mathcal{F}}$, by (<ref>), there is $C > 0$, such that for any $k \in \nat^*$, $t \in [0, +\infty[$, we have
\begin{equation}\label{eq_fin_11_2}
\omega_{\mathbb{P}}
\geq
\exp(- C t k) \cdot c_1(\mathscr{O}(1), FS^{\mathbb{P}}(H_{t, k}^{\mathcal{F}})).
\end{equation}
Also, for any fixed Kähler form $\omega$ on $X$ and any $k \in \nat^*$, there is $C > 0$, such that ${\rm{Kod}}_k^* \omega_{\mathbb{P}} \leq C \omega$.
From this, (<ref>) and (<ref>), we deduce (<ref>).
Let us consider the product manifold $X \times X$ and the diagonal submanifold in it, given by $\{(x, x) : x \in X \} =: \Delta \hookrightarrow X \times X$.
We denote by $L^k \boxtimes L^l$ the line bundle over $X \times X$, given by $\pi_0^* L^k \otimes \pi_1^* L^l$, where $\pi_0, \pi_1 : X \times X \to X$ are the projections onto the first and second factors respectively.
The natural identification of $\Delta$ with $X$, the Künneth isomorphism and the multiplication map (<ref>) can be put into the following commutative diagram
\begin{equation}\label{eq_comm_diag}
\begin{CD}
H^0(X \times X, L^k \boxtimes L^l) @> {\rm{Res}}_{\Delta} >> H^0(\Delta, L^k \boxtimes L^l|_{\Delta})
\\
@VV {} V @VV {} V
\\
H^0(X, L^k) \otimes H^0(X, L^l) @> {\rm{Mult}}_{l, k} >> H^0(X, L^{k + l}),
\end{CD}
\end{equation}
where ${\rm{Res}}_{\Delta}$ is the restriction morphism to $\Delta \subset X \times X$, defined analogously to (<ref>).
Theorem <ref> now follows directly from Theorem <ref> in its form (<ref>), applied for $X := X \times X$, $Y := \Delta$, $L_0 := \pi_0^* L$, $L_1 := \pi_1^* L$; $h^{L_0}, h^{L_1} := h^{\mathcal{T}}_t$, Lemma <ref> and (<ref>).
§.§ Resolution of singularities, filtrations and geodesic rays
The main goal of this section is to prove Theorem <ref>.
For this, we first recall some natural operations on the set of test configurations, which transform any pair of ample test configurations into the one as in Theorem <ref>, and then we establish that these natural operations do not perturb the validity of Theorem <ref>.
We first study how the filtration associated with a test configuration changes under the normalization.
Let $\mathcal{T} = (\pi: \mathcal{X} \to \comp, \mathcal{L})$ be an arbitrary ample test configuration of a polarized pair $(X, L)$.
As in Section <ref>, we consider the normalization $\widetilde{\mathcal{T}} = (\widetilde{\pi} : \widetilde{\mathcal{X}} \to \comp, \widetilde{\mathcal{L}})$ of $\mathcal{T}$.
We denote by $\widetilde{\mathcal{F}}^{\mathcal{T}}$ (resp. $\mathcal{F}^{\mathcal{T}}$) the filtration on $R(X, L)$ associated with $\widetilde{\mathcal{T}}$ (resp. $\mathcal{T}$).
For $k \in \nat^*$, we denote by $\mathcal{F}^{\mathcal{T}}_k, \widetilde{\mathcal{F}}^{\mathcal{T}}_k$ the filtrations induced on the graded pieces $H^0(X, L^k)$ of $R(X, L)$.
There is $C > 0$, such that for any $k \in \nat^*$, we have
\begin{equation}
d_{+ \infty}(\mathcal{F}^{\mathcal{T}}_k, \widetilde{\mathcal{F}}^{\mathcal{T}}_k) \leq C.
\end{equation}
For $k$ sufficiently divisible, Theorem <ref> was established by Boucksom-Jonsson <cit.> using non-Archimedean geometry.
Our functional-analytic proof is different.
Recall first that for any submultiplicative (Archimedean or non-Archimedean) norm $N = \| \cdot \|$ in the sense (<ref>) on a ring $A$, we can construct the homogenization (semi)norm $N^{\rm{hom}} = \| \cdot \|^{\rm{hom}}$ on $A$ in the following manner
\begin{equation}\label{eq_homog}
\| f \|^{\rm{hom}}
:=
\lim_{k \to \infty} \| f^k \|^{\frac{1}{k}},
\qquad
f \in A.
\end{equation}
The above limit exists by submultiplicativity of $N$ and Fekete's lemma.
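As a basic illustration: since the sequence $a_k := \log \| f^k \|$ is subadditive, Fekete's lemma gives, beyond the mere existence of the limit,
\begin{equation}
\| f \|^{\rm{hom}}
=
\inf_{k \in \nat^*} \| f^k \|^{\frac{1}{k}}
\leq
\| f \|,
\end{equation}
so homogenization never increases the norm; in particular, if $N$ is power-multiplicative, i.e. $\| f^k \| = \| f \|^k$ for all $f \in A$, $k \in \nat^*$, then $N^{\rm{hom}} = N$.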
Now, taking into account the relation between filtrations on vector spaces and non-Archimedean norms as in (<ref>), we define the filtration $\mathcal{F}^{\mathcal{T} \rm{hom}}$ on $R(X, L)$ in such a way that $\chi_{\mathcal{F}^{\rm{hom}}} = \chi_{\mathcal{F}}^{\rm{hom}}$, where $\chi_{\mathcal{F}}$, $\chi_{\mathcal{F}^{\rm{hom}}}$ are the non-Archimedean norms associated with $\mathcal{F}^{\mathcal{T}}$ and $\mathcal{F}^{\mathcal{T} \rm{hom}}$ respectively.
By submultiplicativity of $\chi_{\mathcal{F}}^{\rm{hom}}$, the filtration $\mathcal{F}^{\mathcal{T} \rm{hom}}$ is submultiplicative.
From Boucksom-Jonsson <cit.>, we have $\chi_{\mathcal{F}^{\rm{hom}}} \leq \chi_{\widetilde{\mathcal{F}}} \leq \chi_{\mathcal{F}}$, where $\chi_{\widetilde{\mathcal{F}}}$ is the non-Archimedean norm associated with $\widetilde{\mathcal{F}}^{\mathcal{T}}$.
Hence, in order to establish Theorem <ref>, it suffices to prove that there is $C > 0$, such that for any $k \in \nat^*$, we have
\begin{equation}\label{eq_homog00}
d_{+ \infty}(\mathcal{F}^{\mathcal{T}}_k, \mathcal{F}^{\mathcal{T} \rm{hom}}_k) \leq C.
\end{equation}
We will establish (<ref>) using our study of geodesic rays of Hermitian norms.
We fix a positive smooth metric $h^L_0$ on $L$ and denote by $N^{\mathcal{T}}_{t, k}$, $t \in [1, +\infty[$, the ray of norms emanating from ${\rm{Ban}}^{\infty}_k(h^L_0)$ associated with $\mathcal{F}^{\mathcal{T}}_k$ as in (<ref>).
We denote by $N^{\mathcal{T}}_t = \sum_{k = 0}^{\infty} N^{\mathcal{T}}_{t, k}$ the associated graded ray of norms on $R(X, L)$.
As described in the proof of Theorem <ref>, the graded norms $N^{\mathcal{T}}_t$, $t \in [0, +\infty[$, are submultiplicative in the sense (<ref>).
We denote by $N^{\mathcal{T} \rm{hom}}_t = \sum_{k = 0}^{\infty} N^{\mathcal{T} \rm{hom}}_{t, k}$, $t \in [0, +\infty[$, $N^{\mathcal{T} \rm{hom}}_t := \| \cdot \|^{\mathcal{T} \rm{hom}}_t$, the graded ray of norms on $R(X, L)$ associated with $\mathcal{F}^{\mathcal{T} \rm{hom}}$.
We argue that the following inequalities hold
\begin{equation}\label{eq_hom_n_ray_com}
(N^{\mathcal{T}}_t)^{\rm{hom}}
\leq
N^{\mathcal{T} \rm{hom}}_t
\leq
N^{\mathcal{T}}_t.
\end{equation}
The upper bound (<ref>) follows from the trivial fact that $\chi_{\mathcal{F}^{\rm{hom}}} \leq \chi_{\mathcal{F}}$ and the fact that the construction of geodesic rays from (<ref>) is monotone in an obvious sense.
To prove the lower bound of (<ref>), we take $f \in H^0(X, L^k)$ with a decomposition $f = \sum_{i = 1}^N f_i$, $f_i \in H^0(X, L^k)$.
By the definition of $\mathcal{F}^{\mathcal{T} \rm{hom}}$, for any $\epsilon > 0$, there is $l \in \nat^*$ such that for any $k \geq l$, $i = 1, \ldots, N$, we have
\begin{equation}\label{eq_hon_appr_ele}
\chi_{\mathcal{F}}(f_i^k)^{\frac{1}{k}}
\leq
\exp(\epsilon)
\cdot
\chi_{\mathcal{F}^{\rm{hom}}}(f_i).
\end{equation}
By the submultiplicativity of $N^{\mathcal{T}}_t$, the norm $\| f \|_t^{\mathcal{T} \rm{hom}}$ of $f$ with respect to $(N^{\mathcal{T}}_t)^{\rm{hom}}$ satisfies
\begin{equation}\label{eq_fkt_bnd_nm00}
\| f \|_t^{\mathcal{T} \rm{hom}}
\leq
(\| f^k \|_t^{\mathcal{T}})^{\frac{1}{k}},
\end{equation}
for any $k \in \nat^*$.
By the definition of the norm $N^{\mathcal{T}}_t$, we have
\begin{equation}\label{eq_fkt_bnd_nm}
\| f^k \|_t^{\mathcal{T}}
\leq
\sum_{\alpha_1+ \ldots + \alpha_N = k} \frac{k!}{\alpha_1! \cdots \alpha_N!}
\cdot
\big \| f_1^{\alpha_1} \cdots f_N^{\alpha_N} \big\|
\cdot
|
# On the limiting problems for two eigenvalue systems and variations
H. Bueno Departamento de Matemática, Universidade Federal de Minas Gerais,
31270-901 - Belo Horizonte - MG, Brazil<EMAIL_ADDRESS>and Aldo H. S.
Medeiros Departamento de Matemática, Universidade Federal de Viçosa,
36570-900 - Viçosa - MG, Brazil<EMAIL_ADDRESS>
###### Abstract.
Let $\Omega$ be a bounded, smooth domain. Supposing that
$\alpha(p)+\beta(p)=p$, $\forall\,p\in\left(\frac{N}{s},\infty\right)$ and
$\displaystyle\lim_{p\to\infty}\alpha(p)/{p}=\theta\in(0,1)$, we consider two
systems for the fractional $p$-Laplacian and a variation on the first system.
The first system is the following.
$\left\\{\begin{array}[]{ll}(-\Delta_{p})^{s}u(x)=\lambda\alpha(p)|u|^{\alpha(p)-2}u|v(x_{0})|^{\beta(p)}&{\rm
in}\ \ \Omega,\\\
(-\Delta_{p})^{t}v(x)=\lambda\beta(p)\left(\displaystyle\int_{\Omega}|u|^{\alpha(p)}\mathrm{d}x\right)|v(x_{0})|^{\beta(p)-2}v(x_{0})\delta_{x_{0}}&{\rm
in}\ \ \Omega,\\\ u=v=0&{\rm in}\ \mathbb{R}^{N}\setminus\Omega,\\\
\end{array}\right.$
where $x_{0}$ is a point in $\overline{\Omega}$, $\lambda$ is a parameter,
$0<s\leq t<1$, $\delta_{x}$ denotes the Dirac delta distribution centered at
$x$ and $p>N/s$.
A variation on this system is obtained by considering $x_{0}$ to be a point
where the function $v$ attains its maximum. In this case, we denote
$x_{0}=x_{v}$.
The second one is the system
$\left\\{\begin{array}[]{ll}(-\Delta_{p})^{s}u(x)=\lambda\alpha(p)|u(x_{1})|^{\alpha(p)-2}u(x_{1})|v(x_{2})|^{\beta(p)}\delta_{x_{1}}&{\rm
in}\ \ \Omega,\\\
(-\Delta_{p})^{t}v(x)=\lambda\beta(p)|u(x_{1})|^{\alpha(p)}|v(x_{2})|^{\beta(p)-2}v(x_{2})\delta_{x_{2}}&{\rm
in}\ \ \Omega,\\\ u=v=0&{\rm in}\
\mathbb{R}^{N}\setminus\Omega,\end{array}\right.$
where $x_{1},x_{2}\in\Omega$ are arbitrary, $x_{1}\neq x_{2}$. Although we do
not consider it here, a variation similar to that on the first system can be
solved by practically the same method we apply.
We obtain solutions for the systems (including the variation on the first
system) and consider the asymptotic behavior of these solutions as
$p\to\infty$. We prove that they converge, in the viscosity sense, to
solutions of problems on $u$ and $v$.
###### Key words and phrases:
fractional systems, variational methods, viscosity solutions
###### 1991 Mathematics Subject Classification:
35R11, 35A15, 35D40
## 1\. Introduction
In this paper we deal with different systems for the fractional $p$-Laplacian
and study the behavior of their solutions $(u_{p},v_{p})$ as $p$ goes to
infinity: we prove that these solutions converge, in the viscosity sense, to
solutions $(u_{\infty},v_{\infty})$ of related systems.
Let $\Omega\subset\mathbb{R}^{N}$ be a bounded, smooth domain and, for each
$x\in\Omega$, let $\delta_{x}$ be the Dirac mass concentrated at $x$. Consider
also functions
$\alpha,\beta\colon\left(\frac{N}{s},\infty\right)\to(1,\infty)$ satisfying
1. $(h_{1})$
$\alpha(p)+\beta(p)=p$, $\forall\,p\in\left(\frac{N}{s},\infty\right)$;
2. $(h_{2})$
$\displaystyle\lim_{p\to\infty}\frac{\alpha(p)}{p}=\theta\in(0,1)$.
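For orientation, a simple pair satisfying $(h_{1})$ and $(h_{2})$ is the linear choice
$\alpha(p)=\theta p\qquad\text{and}\qquad\beta(p)=(1-\theta)p,$
which clearly verifies $\alpha(p)+\beta(p)=p$ and $\alpha(p)/p\equiv\theta$, and takes values in $(1,\infty)$ once $p$ is large enough.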
For each $p>\frac{N}{s}$, we consider the system
$\left\\{\begin{array}[]{ll}(-\Delta_{p})^{s}u(x)=\lambda\alpha(p)|u|^{\alpha(p)-2}u|v(x_{0})|^{\beta(p)}&{\rm
in}\ \ \Omega,\\\
(-\Delta_{p})^{t}v(x)=\lambda\beta(p)\left(\displaystyle\int_{\Omega}|u|^{\alpha(p)}\mathrm{d}x\right)|v(x_{0})|^{\beta(p)-2}v(x_{0})\delta_{x_{0}}&{\rm
in}\ \ \Omega,\\\ u=v=0&{\rm in}\ \mathbb{R}^{N}\setminus\Omega,\\\
\end{array}\right.$ ($P^{1}_{p}$)
where $x_{0}$ is a point in $\overline{\Omega}$, $\lambda$ is a parameter,
$0<s\leq t<1$ and $(-\Delta_{p})^{r}$ denotes the $r$-fractional $p$-Laplacian
operator, which is defined, for any $p>1$, by
$(-\Delta_{p})^{r}\phi(x)=\lim_{\varepsilon\to 0}\int_{\mathbb{R}^{N}\setminus
B_{\varepsilon}(x)}\frac{|\phi(x)-\phi(y)|^{p-2}(\phi(x)-\phi(y))}{|x-y|^{N+rp}}\mathrm{d}y$
(1)
for any $\phi\in C^{\infty}_{0}(\Omega)$, which is a dense subspace of
$W^{r,p}_{0}(\Omega)$. We also recall that
$\big{\langle}(-\Delta_{p})^{r}u,\varphi\big{\rangle}:=\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+rp}}\mathrm{d}x\mathrm{d}y$
is the expression of $(-\Delta_{p})^{r}$ as an operator from
$W^{r,p}_{0}(\Omega)$ into its dual. (The definition of the space
$W^{r,p}_{0}(\Omega)$ will be given in the sequence.)
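As a sanity check on the definition (1) (a classical fact, recorded here only for orientation): when $p=2$ the nonlinearity disappears and
$(-\Delta_{2})^{r}\phi(x)=\lim_{\varepsilon\to 0}\int_{\mathbb{R}^{N}\setminus B_{\varepsilon}(x)}\frac{\phi(x)-\phi(y)}{|x-y|^{N+2r}}\mathrm{d}y,$
which is, up to a positive dimensional constant, the usual fractional Laplacian $(-\Delta)^{r}\phi(x)$.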
We first prove that, for each $p>N/s$, this system has a unique solution. Then
we consider the behavior of a sequence of these solutions as $p\to\infty$ and
prove that they converge uniformly to $(u_{\infty},v_{\infty})$, which are
viscosity solutions of a related system. (Precise statements are given in the
sequence.)
As a variation on system ($P^{1}_{p}$), we consider the system
$\left\\{\begin{array}[]{ll}(-\Delta_{p})^{s}u(x)=\lambda\alpha(p)|u|^{\alpha(p)-2}u|v(x_{v})|^{\beta(p)}&{\rm
in}\ \ \Omega,\\\
(-\Delta_{p})^{t}v(x)=\lambda\beta(p)\left(\displaystyle\int_{\Omega}|u|^{\alpha(p)}\mathrm{d}x\right)|v(x_{v})|^{\beta(p)-2}v(x_{v})\delta_{x_{v}}&{\rm
in}\ \ \Omega,\\\ u=v=0&{\rm in}\
\mathbb{R}^{N}\setminus\Omega,\end{array}\right.$ ($P^{1}_{\infty}$)
where $x_{v}$ is a maximum point of $v$ in $\overline{\Omega}$. Observe that
the first equation in ($P^{1}_{\infty}$) can be replaced by
$(-\Delta_{p})^{s}u(x)=\lambda\alpha(p)|u|^{\alpha(p)-2}u\|v\|_{\infty}^{\beta(p)}$
in $\Omega$. To solve the above system we apply the same method used to handle
problem ($P^{1}_{p}$), see Remark 8.
We also handle the system
$\left\\{\begin{array}[]{ll}(-\Delta_{p})^{s}u(x)=\lambda\alpha(p)|u(x_{1})|^{\alpha(p)-2}u(x_{1})|v(x_{2})|^{\beta(p)}\delta_{x_{1}}&{\rm
in}\ \ \Omega,\\\
(-\Delta_{p})^{t}v(x)=\lambda\beta(p)|u(x_{1})|^{\alpha(p)}|v(x_{2})|^{\beta(p)-2}v(x_{2})\delta_{x_{2}}&{\rm
in}\ \ \Omega,\\\ u=v=0&{\rm in}\
\mathbb{R}^{N}\setminus\Omega,\end{array}\right.$ ($P^{2}_{p}$)
where $x_{1},x_{2}\in\Omega$ are arbitrary points, $x_{1}\neq x_{2}$.
Of course, we could also consider the case where $x_{u}$ and $x_{v}$ are
points of maxima of $u$ and $v$, respectively, since our reasoning also solves
this case.
In Sections 2–5 we handle system ($P^{1}_{p}$), while system ($P^{1}_{\infty}$)
is considered in Remark 8. Finally, in Section 6 we deal with problem
($P^{2}_{p}$).
## 2\. Background, setting and description of results
Due to the appropriate Sobolev embedding, the solutions $(u,v)$ of both
problems ($P^{1}_{p}$) and ($P^{2}_{p}$) must be continuous.
Since both equations in the system have the same homogeneity, ($P^{1}_{p}$)
and ($P^{2}_{p}$) are actually eigenvalue problems. The eigenvalue problem for
the $s$-fractional $p$-Laplacian operator was studied by Lindgren and
Lindqvist in the pioneering paper [9]. Precisely, they studied the problem
$\left\\{\begin{array}[]{ll}(-\Delta_{p})^{s}u=\lambda_{1}(s,p)|u|^{p-2}u(x)&{\rm
in}\ \ \Omega,\\\ u=0&{\rm in}\ \mathbb{R}^{N}\setminus\Omega.\\\
\end{array}\right.$ (2)
The authors proved that the minimum of the Rayleigh quotient associated with
(2), that is,
$\lambda_{1}(s,p)=\inf_{u\in
W^{s,p}_{0}(\Omega)\setminus\\{0\\}}\frac{[u]_{s,p}^{p}}{\|u\|_{p}^{p}}=\frac{[\phi_{p}]_{s,p}^{p}}{\|\phi_{p}\|_{p}^{p}},$
is attained by a function $\phi_{p}$ that does not change sign in $\Omega$.
In the same paper, for the case $p=\infty$, Lindgren and Lindqvist denoted
$\lambda_{1}(s,\infty)=\inf\left\\{\frac{\left\|\frac{u(x)-u(y)}{|x-y|^{s}}\right\|_{\infty}}{\|u\|_{\infty}}\,:\,u\in
W^{s,\infty}_{0}(\Omega)\setminus\\{0\\}\right\\}$
and showed that
$\lambda_{1}(s,\infty)=\frac{1}{R^{s}}\qquad\text{and}\qquad\lim_{p\to\infty}\sqrt[p]{\lambda_{1}(s,p)}=\lambda_{1}(s,\infty),$
where $R=\underset{x\in\Omega}{\mathrm{max\
}}\textup{dist}(x,\mathbb{R}^{N}\setminus\Omega)=\|\textup{dist}(\cdot,\mathbb{R}^{N}\setminus\Omega)\|_{\infty}$.
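As a worked instance of these formulas (an elementary computation, included for orientation): if $\Omega=B_{\rho}(x_{c})$ is a ball of radius $\rho$, the distance to the complement is maximized at the center $x_{c}$, so
$R=\rho\qquad\text{and}\qquad\lambda_{1}(s,\infty)=\frac{1}{\rho^{s}}.$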
The results obtained in relation with Eq. (2) were extended by Del Pezzo and
Rossi in [3] to the case of systems of the form
$\left\\{\begin{array}[]{ll}(-\Delta_{p})^{r}u(x)=\lambda\alpha(p)|u(x)|^{\alpha(p)-2}u(x)|v(x)|^{\beta(p)}&{\rm
in}\ \ \Omega,\\\
(-\Delta_{p})^{s}v(x)=\lambda\beta(p)|u(x)|^{\alpha(p)}|v(x)|^{\beta(p)-2}v(x)&{\rm
in}\ \ \Omega,\\\ u=v=0&{\rm in}\ \mathbb{R}^{N}\setminus\Omega,\\\
\end{array}\right.$ (3)
when assumptions $(h_{1})$ and $(h_{2})$ are fulfilled. If for each
$p\in(\frac{N}{s},\infty)$ we denote
$\lambda_{1,p}=\inf\left\\{\frac{\frac{1}{p}[u]_{r,p}^{p}+\frac{1}{p}[v]_{s,p}^{p}}{\displaystyle\int_{\Omega}|u|^{\alpha(p)}|v|^{\beta(p)}\,\mathrm{d}x}\,:\,(u,v)\in
W^{s,p}(\Omega),\ \ uv\neq 0\right\\}$
the authors showed that $\lambda_{1,p}$ is a _principal eigenvalue_ (that is, an
eigenvalue associated with an eigenfunction that does not change its sign) and
$\lambda_{1,p}^{\frac{1}{p}}\to\Lambda_{1,\infty}=\left[\frac{1}{R}\right]^{\theta
r+(1-\theta)s}\ \ \text{as}\ \ p\to\infty.$ (4)
More recently, Mihǎilescu, Rossi and Stancu-Dumitru [11] studied the system
$\left\\{\begin{array}[]{ll}-\Delta_{p}u(x)=\lambda\alpha(p)|u(x_{1})|^{\alpha(p)-2}u(x_{1})|v(x_{2})|^{\beta(p)}\delta_{x_{1}}&{\rm
in}\ \ \Omega,\\\
-\Delta_{p}v(x)=\lambda\beta(p)|u(x_{1})|^{\alpha(p)}|v(x_{2})|^{\beta(p)-2}v(x_{2})\delta_{x_{2}}&{\rm
in}\ \ \Omega,\\\ u=v=0&{\rm on}\ \partial\Omega,\\\ \end{array}\right.$ (5)
where $x_{1},x_{2}\in\Omega$ are arbitrary points, $x_{1}\neq x_{2}$. If
$x_{1}$ and $x_{2}$ are points of maxima of $u$ and $v$, respectively, using
arguments like those in [1, 5, 7], it can be proved that ($P^{2}_{p}$) is the
limit, as $r\to\infty$, of the problem
$\left\\{\begin{array}[]{ll}-\Delta_{p}u=\lambda\alpha(p)\|u\|_{r}^{\alpha(p)-r}|u|^{r}\|v\|_{r}^{\beta(p)}&{\rm
in}\ \ \Omega,\\\
-\Delta_{p}v=\lambda\beta(p)\|u\|_{r}^{\alpha(p)}\|v\|_{r}^{\beta(p)-r}|v|^{r}&{\rm
in}\ \ \Omega,\\\ u=v=0&{\rm on}\ \partial\Omega,\end{array}\right.$ (6)
which can be solved by classical minimization procedures.
As in [3], they proved that system (5) has a principal eigenvalue and studied
the asymptotic behavior of the principal eigenvalues and corresponding
positive eigenfunctions $u_{p}$ and $v_{p}$ as $p$ goes to infinity.
Mihǎilescu, Rossi and Stancu-Dumitru proved that these eigenfunctions converge
to $u_{\infty}$ and $v_{\infty}$, both viscosity solutions of the equation
$-\Delta_{\infty}w=0$.
The main goal of this work is to study system ($P^{1}_{p}$). Note that this
system is related to both systems (3) and (5). In the last section of this
article, we make clear that the method used to solve system ($P^{1}_{p}$) also
applies to system ($P^{2}_{p}$), thus generalizing system (5) from [11] to the
fractional $p$-Laplacian operator.
Due to the presence of the Dirac mass $\delta_{x}$, it is more natural to
compare the present work with [11]. We note that the integral form of the
fractional $p$-Laplacian is more difficult to handle than that of the
$p$-Laplacian. Also, in [11], the convergence
$\|\nabla u\|_{L^{p}(\Omega)}\to\||\nabla u|\|_{L^{\infty}(\Omega)},\ \
\text{for all}\ \ u\in W_{0}^{1,p}(\Omega)$
is valid in the $p$-Laplacian case, which does not happen when we are dealing
with the Gagliardo semi-norm. Furthermore, a direct calculation with the distance
function $\text{dist}(x,\mathbb{R}^{N}\setminus\Omega)$ shows that
$|\nabla\text{dist}(x,\mathbb{R}^{N}\setminus\Omega)|=1$, but this is not
valid in our case, making it more difficult to estimate the solutions of system
($P^{2}_{p}$). Moreover, the presence of the integral term in ($P^{1}_{p}$)
changes the equation that the viscosity solutions $u_{\infty}$ and
$v_{\infty}$ satisfy, see Theorem 4.
In turn, we will show that the eigenvalues of ($P^{1}_{p}$) converge, as
$p\to\infty$, to the same value $\Lambda_{1,\infty}$ given by (4), a result
obtained in [3].
We introduce the notation used while handling problem ($P^{1}_{p}$). In the
last section of this article, we consider problem ($P^{2}_{p}$) and make the
necessary adjustments.
For each $0<r<1$ and $p\in[1,\infty]$, we consider the Sobolev spaces
$W^{r,p}(\Omega)$
$W^{r,p}(\Omega)=\left\\{u\in
L^{p}(\Omega)\,:\,\int_{\Omega}\int_{\Omega}\frac{|u(x)-u(y)|^{p}}{|x-y|^{N+rp}}\mathrm{d}x\mathrm{d}y<\infty\right\\},$
and also the spaces
$W^{r,p}_{0}(\Omega)=\left\\{u\in L^{p}(\mathbb{R}^{N})\,:\,u=0\ \text{in}\ \
\mathbb{R}^{N}\setminus\Omega\ \text{and}\ [u]_{r,p}<\infty\right\\},$
where
$[u]_{r,p}^{p}=\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{p}}{|x-y|^{N+rp}}\mathrm{d}x\mathrm{d}y.$
We recall that, for $0<s\leq t<1$ and $1<p<\infty$, there exists a constant
$C>0$ depending only on $s$, $N$ and $p$ such that
$\|f\|_{W^{s,p}(\Omega)}\leq C\|f\|_{W^{t,p}(\Omega)},\ \ \text{for all}\ \
f\in W^{t,p}(\Omega).$
In particular, $W_{0}^{t,p}(\Omega)\hookrightarrow W_{0}^{s,p}(\Omega)$, for
more details see [4]. So, we can consider only the space
$W_{0}^{s,p}(\Omega)$.
For each $0<s\leq t<1$, $x_{0}\in\Omega$ fixed and $p\in[1,\infty]$, we denote
$X_{s,t,p}(\Omega)=W_{0}^{s,p}(\Omega)\times W_{0}^{t,p}(\Omega)$ and
$X^{*}_{s,t,p}(\Omega)=\left\\{(u,v)\in
X_{s,t,p}(\Omega)\,:\,\left(\int_{\Omega}|u|^{\alpha(p)}\mathrm{d}x\right)v(x_{0})\neq
0\right\\}.$
If $C_{0}(\overline{\Omega})$ stands for the space $\left\\{u\in
C(\Omega)\,:\,u=0\ \text{in}\ \mathbb{R}^{N}\setminus\Omega\right\\}$, it is
well-known that the immersion $W^{s,p}_{0}(\Omega)\hookrightarrow
C_{0}(\overline{\Omega})$ is compact for any
$p\in\left(\frac{N}{s},\infty\right)$. The compactness of this immersion is a
consequence of the following Morrey's type inequality (see [4])
$\sup_{y\neq x}\frac{|u(x)-u(y)|}{|x-y|^{s-\frac{N}{p}}}\leq C[u]_{s,p},\ \
\forall u\in W_{0}^{s,p}(\Omega),$ (7)
which holds whenever $p>\frac{N}{s}$. If $p$ is sufficiently large, the
positive constant $C$ in (7) can be chosen uniformly with respect to $p$ (see
[8], Remark 2.2).
Thus, denoting
$X_{0}(\Omega)=C_{0}(\overline{\Omega})\times C_{0}(\overline{\Omega}),$
we have the compact immersion
$X_{s,t,p}(\Omega)\hookrightarrow X_{0}(\Omega)$
for any $p\in\left(\frac{N}{s},\infty\right)$.
For $p\in\left(\frac{N}{s},\infty\right)$ and $(u,v)\in X^{*}_{s,t,p}(\Omega)$, we
define
$Q_{s,t,p}(u,v)=\frac{\frac{1}{p}[u]_{s,p}^{p}+\frac{1}{p}[v]_{t,p}^{p}}{\left(\displaystyle\int_{\Omega}|u|^{\alpha(p)}\mathrm{d}x\right)|v(x_{0})|^{\beta(p)}}$
and
$\Lambda_{1}(p)=\inf_{(u,v)\in X^{*}_{s,t,p}(\Omega)}Q_{s,t,p}(u,v).$
Straightforward calculations show that
$\frac{\mathrm{d}}{\mathrm{d}t}\bigg{|}_{t=0}\left(\frac{1}{p}[u+t\varphi]_{r,p}^{p}\right)=\big{\langle}(-\Delta_{p})^{r}u,\varphi\big{\rangle},\
\ \forall\varphi\in W_{0}^{r,p}(\Omega).$ (8)
If $0<m<\infty$, then
$\frac{\mathrm{d}}{\mathrm{d}t}\bigg{|}_{t=0}|(u+t\varphi)(x)|^{m}=m|u(x)|^{m-2}u(x)\varphi(x),\
\ \forall\,\varphi\in L^{m}(\Omega).$ (9)
We also have, for all $1<\alpha<\infty$ and $\varphi\in L^{\alpha}(\Omega)$,
$\frac{\mathrm{d}}{\mathrm{d}t}\bigg{|}_{t=0}\left(\int_{\Omega}|(u+t\varphi)(x)|^{\alpha}\mathrm{d}x\right)|v(x_{0})|^{\beta}=\alpha\left(\int_{\Omega}|u(x)|^{\alpha-2}u(x)\varphi(x)\mathrm{d}x\right)|v(x_{0})|^{\beta}.$
(10)
###### Definition 1.
A pair $(u,v)\in X_{s,t,p}(\Omega)$ is a weak solution to ($P^{1}_{p}$) if
$\displaystyle\left\langle(-\Delta_{p})^{s}u,\varphi\right\rangle+\left\langle(-\Delta_{p})^{t}v,\psi\right\rangle=$
$\displaystyle\lambda\left[\alpha(p)\left(\int_{\Omega}|u(x)|^{\alpha(p)-2}u(x)\varphi(x)\mathrm{d}x\right)|v(x_{0})|^{\beta(p)}\right.$
(11) $\displaystyle\left.+\
\beta(p)\left(\int_{\Omega}|u(x)|^{\alpha(p)}\mathrm{d}x\right)|v(x_{0})|^{\beta(p)-2}v(x_{0})\psi(x_{0})\right]$
for all $(\varphi,\psi)\in X_{s,t,p}(\Omega)$.
The functional at the left-hand side of (11) is the Gâteaux derivative of the
Fréchet differentiable functional
$(u,v)\mapsto\displaystyle\frac{1}{p}[u]_{s,p}^{p}+\displaystyle\frac{1}{p}[v]_{t,p}^{p}$.
However, the functional at the right-hand side of (11) is merely related to
the right-hand Gâteaux-derivative of the functional
$(u,v)\mapsto\lambda\left(\displaystyle\int_{\Omega}|u(x)|^{\alpha(p)}\mathrm{d}x\right)|v(x_{0})|^{\beta(p)}$,
thus motivating the definition of $Q_{s,t,p}$ and $\Lambda_{1}(p)$. It is
noteworthy that minimizing that integral term is enough to minimize the whole
system.
By applying minimization methods, our first result shows that the problem
($P^{1}_{p}$) has a principal eigenvalue – and therefore, a weak solution –
for each $p\in\left(\frac{N}{s},\infty\right)$. Its proof simply adapts
Theorem 1 in [11]. We sketch the proof for the convenience of the reader in
Section 3.
###### Theorem 1.
For each $p\in\left(\frac{N}{s},\infty\right)$ we have
1. $(i)$
$\Lambda_{1}(p)>0$;
2. $(ii)$
there exists $(u_{p},v_{p})\in X^{*}_{s,t,p}(\Omega)$ such that
$\Lambda_{1}(p)=Q_{s,t,p}(u_{p},v_{p}),$
with $u_{p},v_{p}>0$ and
$\left(\displaystyle\int_{\Omega}|u_{p}|^{\alpha(p)}\mathrm{d}x\right)|v_{p}(x_{0})|^{\beta(p)}=1$.
The next step is to introduce the functionals that will motivate the study of
problem ($P^{1}_{p}$) as $p\to\infty$. So, for each $0<s\leq t<1$ and
$p\in\left(\frac{N}{s},\infty\right)$ we denote
$\displaystyle S_{p}$ $\displaystyle=\left\\{(u,v)\in
X_{s,t,p}(\Omega)\,:\,\left(\int_{\Omega}|u|^{\alpha(p)}\mathrm{d}x\right)|v(x_{0})|^{\beta(p)}=1\right\\}$
$\displaystyle S_{\infty}$ $\displaystyle=\left\\{(u,v)\in
X_{s,t,\infty}(\Omega)\,:\,\|u\|_{\infty}^{\theta}|v(x_{0})|^{1-\theta}=1\right\\},$
where $\theta$ was defined in ($h_{2}$).
Furthermore, for each $0<s\leq t<1$ and $p\in\left(\frac{N}{s},\infty\right]$,
we define the functions $\chi_{S_{p}}\colon X_{0}(\Omega)\to[0,\infty]$ and
$F_{p}\colon X_{0}(\Omega)\to[0,\infty]$ by
$\chi_{S_{p}}(u,v)=\left\\{\begin{array}[]{ll}0,&\text{if}\quad(u,v)\in
S_{p};\\\ \infty,&\text{otherwise}\\\ \end{array}\right.$ (12)
and
$F_{p}(u,v)=\left\\{\begin{array}[]{ll}G_{p}(u,v)+\chi_{S_{p}}(u,v),&\text{if}\quad(u,v)\in
X^{*}_{s,t,p}(\Omega);\\\ \infty,&\text{otherwise},\end{array}\right.$ (13)
with $G_{p}$ defined by
$G_{p}(u,v)=\left\\{\begin{array}[]{ll}Q_{s,t,p}(u,v)^{\frac{1}{p}},&\text{if}\quad
p\in(\frac{N}{s},\infty),\vspace*{.1cm}\\\
\displaystyle\frac{\max\left\\{|u|_{s},|v|_{t}\right\\}}{\|u\|_{\infty}^{\theta}|v(x_{0})|^{1-\theta}},&\text{if}\quad
p=\infty,\\\ \end{array}\right.$ (14)
where, for $0<\sigma<1$,
$|u|_{\sigma}=\sup_{y\neq x}\frac{|u(x)-u(y)|}{|x-y|^{\sigma}}.$
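A concrete example of this seminorm, used implicitly in Section 4 (it relies only on the elementary inequality $|a^{\sigma}-b^{\sigma}|\leq|a-b|^{\sigma}$ for $a,b\geq 0$): the cone-type function $w(x)=\left(R-|x-x_{1}|\right)^{\sigma}_{+}$ satisfies
$|w|_{\sigma}=1,$
since $x\mapsto(R-|x-x_{1}|)_{+}$ is $1$-Lipschitz, giving $|w|_{\sigma}\leq 1$, while comparing a point $x$ on a radius of $B_{R}(x_{1})$ with the boundary point on the same radius attains the value $1$.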
The method we apply is known as $\Gamma$-convergence, but all we use
are the properties listed in Theorem 2. Once again, the next result follows
from a straightforward adaptation of the proof of [11, Theorem 2].
###### Theorem 2.
The function $F_{\infty}$ satisfies the following properties.
1. $(i)$
If $\\{(u_{p},v_{p})\\}$ is a sequence such that $(u_{p},v_{p})\to(u,v)$ in
$X_{0}(\Omega)$, then
$F_{\infty}(u,v)\leq\liminf_{p\to\infty}F_{p}(u_{p},v_{p}).$
2. $(ii)$
For each $(u,v)\in X_{0}(\Omega)$, there exists a sequence
$\\{(U_{p},V_{p})\\}\subset X_{0}(\Omega)$ such that $(U_{p},V_{p})\to(u,v)$
in $X_{0}(\Omega)$ and
$F_{\infty}(u,v)\geq\limsup_{p\to\infty}F_{p}(U_{p},V_{p}).$
Thus, as a consequence of Theorem 2-($i$), we have
$F_{\infty}(u,v)\leq\liminf_{p\to\infty}F_{p}(u_{p},v_{p}).$
Applying this inequality to the solutions $(u_{p},v_{p})$ given by Theorem 1,
we obtain the estimate
$F_{\infty}(u_{\infty},v_{\infty})\leq\liminf_{p\to\infty}\Lambda_{1}(p)^{\frac{1}{p}}=\frac{1}{R^{s\theta+(1-\theta)t}}=\max\\{|u_{\infty}|_{s},|v_{\infty}|_{t}\\},$
(15)
where the last equality will be shown in the proof of Theorem 3. As a
consequence of Theorem 2-($ii$) and (15), we can analyze problem ($P^{1}_{p}$)
as $p\to\infty$.
Therefore, considering Theorems 1 and 2, we study the behavior of the
eigenvalues and eigenfunctions of problem ($P^{1}_{p}$) as $p\to\infty$.
###### Theorem 3.
Let $\\{p_{n}\\}$ be a sequence converging to $\infty$ and
$(u_{p_{n}},v_{p_{n}})$ the solution of ($P^{1}_{p}$) given in Theorem 1.
Passing to a subsequence if necessary,
$\\{(u_{p_{n}},v_{p_{n}})\\}_{n\in\mathbb{N}}$ converges uniformly to
$(u_{\infty},v_{\infty})\in C_{0}^{0,s}(\overline{\Omega})\times
C_{0}^{0,t}(\overline{\Omega})$. Furthermore
1. $(i)$
$u_{\infty}\geq 0$, $v_{\infty}\geq 0$ and
$\|u_{\infty}\|^{\theta}_{\infty}|v_{\infty}(x_{0})|^{1-\theta}=1$;
2. $(ii)$
$\displaystyle\lim_{n\to\infty}\sqrt[p_{n}]{\Lambda_{1}(p_{n})}=\Lambda_{1,\infty}=\frac{1}{R^{s\theta+(1-\theta)t}}$;
3. $(iii)$
$\max\left\\{|u_{\infty}|_{s},|v_{\infty}|_{t}\right\\}=\displaystyle\frac{1}{R^{s\theta+(1-\theta)t}}.$
As we will see in the sequence, the functions $u_{\infty}$ and $v_{\infty}$
are solutions, in the viscosity sense, of regular boundary value problems. In
order to distinguish between the cases (and also to avoid a double minus
sign), we change notation: for each $1<p<\infty$ we denote the
$\sigma$-fractional $p$-Laplacian by
$(-\Delta_{p})^{\sigma}=-\mathcal{L}_{\sigma,p}$, where, if $1<p<\infty$ and
$0<\sigma<1$,
$(\mathcal{L}_{\sigma,p}u)(x):=2\int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{N+\sigma
p}}\mathrm{d}y.$
As argued in [9], this expression appears formally as follows
$\displaystyle\left\langle(-\Delta_{p})^{\sigma}u,\varphi\right\rangle$
$\displaystyle=\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+\sigma
p}}\mathrm{d}x\mathrm{d}y$
$\displaystyle=\int_{\mathbb{R}^{N}}\varphi(x)\left(\int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{N+\sigma
p}}\mathrm{d}y\right)\mathrm{d}x$
$\displaystyle\quad-\int_{\mathbb{R}^{N}}\varphi(y)\left(\int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{N+\sigma
p}}\mathrm{d}x\right)\mathrm{d}y$
$\displaystyle=\int_{\mathbb{R}^{N}}\varphi(x)(\mathcal{L}_{\sigma,p}u)(x)\mathrm{d}x,\
\ \ \forall\varphi\in W_{0}^{\sigma,p}(\Omega).$
If $p=\infty$, we define
$\mathcal{L}_{\sigma,\infty}=\mathcal{L}^{+}_{\sigma,\infty}+\mathcal{L}^{-}_{\sigma,\infty},$
where
$(\mathcal{L}^{+}_{\sigma,\infty}u)(x)=\sup_{y\in\mathbb{R}^{N}\setminus\\{x\\}}\frac{u(x)-u(y)}{|x-y|^{\sigma}}\quad\text{and}\quad(\mathcal{L}^{-}_{\sigma,\infty}u)(x)=\inf_{y\in\mathbb{R}^{N}\setminus\\{x\\}}\frac{u(x)-u(y)}{|x-y|^{\sigma}},$
see Chambolle, Lindgren and Monneau [2], where the concept was introduced, but
also [9]. Observe that, since $\mathcal{L}_{\sigma,\infty}$ is not
sufficiently smooth, its solutions must be interpreted in the viscosity sense.
We recall the definition of a solution in the viscosity sense by considering
the problem
$\left\\{\begin{array}[]{ll}\mathcal{L}_{\sigma,p}u=0&{\rm in}\ \ \Omega,\\\
u=0&{\rm in}\ \mathbb{R}^{N}\setminus\Omega,\\\ \end{array}\right.$ (16)
for all $p\in(1,\infty]$.
###### Definition 2.
Let $u\in C(\mathbb{R}^{N})$ satisfy $u=0$ in $\mathbb{R}^{N}\setminus\Omega$.
The function $u$ is a viscosity supersolution of (16) if
$(\mathcal{L}_{\sigma,p}\varphi)(x_{0})\leq 0$
for each pair $(x_{0},\varphi)\in\Omega\times C_{0}^{1}(\mathbb{R}^{N})$ such
that
$\varphi(x_{0})=u(x_{0})\qquad\text{and}\qquad\varphi(x)\leq u(x)\ \ \forall
x\in\mathbb{R}^{N}.$
In turn, $u$ is a viscosity subsolution of (16) if
$(\mathcal{L}_{\sigma,p}\varphi)(x_{0})\geq 0$
for all pair $(x_{0},\varphi)\in\Omega\times C_{0}^{1}(\mathbb{R}^{N})$ such
that
$\varphi(x_{0})=u(x_{0})\qquad\text{and}\qquad\varphi(x)\geq u(x)\ \ \forall
x\in\mathbb{R}^{N}.$
The function $u$ is a viscosity solution to the problem (16) if $u$ is both a
viscosity super- and subsolution to problem (16).
Finally, in Section 5, we prove that the solutions $u_{\infty}$ and
$v_{\infty}$ given by Theorem 3 are viscosity solutions.
###### Theorem 4.
Let $0<s\leq t<1$. Then, the functions $u_{\infty}$ and $v_{\infty}$, given by
Theorem 3, are viscosity solutions of the system
$\left\\{\begin{array}[]{llll}\max\left\\{\mathcal{L}_{s,\infty}u,\mathcal{L}^{-}_{s,\infty}u-\Lambda_{1,\infty}|u(x)|^{\theta}|v_{\infty}(x_{0})|^{1-\theta}\right\\}=0&{\rm
in}\ \ \Omega,\\\ \mathcal{L}_{t,\infty}v=0&{\rm in}\ \
\Omega\setminus\\{x_{0}\\},\\\ u=v=0&{\rm in}\
\mathbb{R}^{N}\setminus\Omega,\\\
v(x_{0})=v_{\infty}(x_{0}).\end{array}\right.$ (17)
## 3\. Some remarks on the proofs of Theorems 1 and 2
Since the proofs of Theorems 1 and 2 are simple adaptations of that one given
in [11], we only sketch them for the convenience of the reader. For details,
see [11, Theorem 1 and Theorem 2].
Sketch of proof of Theorem 1. Estimating the denominator in the definition of
$Q_{s,t,p}$, the inequalities of Young and Sobolev imply that $\Lambda_{1}(p)>0$.
Taking a minimizing sequence $\\{(u_{n},v_{n})\\}$ for $Q_{s,t,p}$ and defining
$\displaystyle U_{n}(x)$
$\displaystyle=\frac{u_{n}(x)}{\left(\displaystyle\int_{\Omega}|u_{n}|^{\alpha(p)}\mathrm{d}x\right)^{\frac{1}{p}}|v_{n}(x_{0})|^{\frac{\beta(p)}{p}}}$
and $\displaystyle V_{n}(x)$
$\displaystyle=\frac{v_{n}(x)}{\left(\displaystyle\int_{\Omega}|u_{n}|^{\alpha(p)}\mathrm{d}x\right)^{\frac{1}{p}}|v_{n}(x_{0})|^{\frac{\beta(p)}{p}}},$
we have that $(U_{n},V_{n})\in X_{s,t,p}(\Omega)$ satisfies
$\left(\displaystyle\int_{\Omega}|U_{n}(x)|^{\alpha(p)}\mathrm{d}x\right)|V_{n}(x_{0})|^{\beta(p)}=1$.
Furthermore,
$\lim_{n\to\infty}Q_{s,t,p}(U_{n},V_{n})=\lim_{n\to\infty}Q_{s,t,p}(u_{n},v_{n})=\Lambda_{1}(p),$
guaranteeing the existence of $(u_{p},v_{p})\in X_{s,t,p}(\Omega)$ such that
$\left(\int_{\Omega}|u_{p}|^{\alpha(p)}\mathrm{d}x\right)|v_{p}(x_{0})|^{\beta(p)}=1$
and
$Q_{s,t,p}(u_{p},v_{p})=\Lambda_{1}(p).$
For any $(\phi,\psi)\in X_{s,t,p}(\Omega)$, considering
$g(t)=Q_{s,t,p}(u_{p}+t\phi,v_{p}+t\psi),$
there exists $t_{0}>0$ such that $g(t)\geq g(0)=\Lambda_{1}(p)$ for all
$t\in(-t_{0},t_{0})$. Since $g\in C^{1}((-t_{0},t_{0}),\mathbb{R})$, we have
$g^{\prime}(0)=0$, from which it follows that $(u_{p},v_{p})$ is a weak
solution to system ($P^{1}_{p}$). An argument similar to [9, Lemma 22] proves
that $u_{p}>0$ and $v_{p}>0$ in $\Omega$, showing that $\Lambda_{1}(p)$ is a
principal eigenvalue of system ($P^{1}_{p}$). $\Box$
Sketch of proof of Theorem 2. In order to prove ($i$), suppose that
$(u_{p},v_{p})\to(u,v)\in X_{0}(\Omega)$. Passing to a subsequence, we assume
that
$\displaystyle\lim_{p\to\infty}F_{p}(u_{p},v_{p})=\displaystyle\liminf_{p\to\infty}F_{p}(u_{p},v_{p})$.
It is not difficult to discard the case $(u,v)\notin
X^{*}_{s,t,\infty}(\Omega)\cap S_{\infty}$. So, we consider the case $(u,v)\in
X^{*}_{s,t,\infty}(\Omega)\cap S_{\infty}$, which implies
$\|u\|_{\infty}^{\theta}|v(x_{0})|^{1-\theta}=1$. We can assume that
$F_{p}(u_{p},v_{p})\leq C<\infty$, since otherwise ($i$) is valid. So, for $p$
large enough, we have $(u_{p},v_{p})\in S_{p}$ and, if $k>\frac{N}{s}$, then
$\displaystyle\left(\int_{\Omega}\int_{\Omega}\frac{|u_{p}(x)-u_{p}(y)|^{k}}{|x-y|^{\left(\frac{N}{p}+s\right)k}}+\frac{|v_{p}(x)-v_{p}(y)|^{k}}{|x-y|^{\left(\frac{N}{p}+t\right)k}}\mathrm{d}x\mathrm{d}y\right)^{\frac{1}{k}}$
$\leq
2^{\frac{1}{k}}|\Omega|^{2\left(\frac{1}{k}-\frac{1}{p}\right)}p^{\frac{1}{p}}\left[\frac{1}{p}[u_{p}]_{s,p}^{p}+\frac{1}{p}[v_{p}]_{t,p}^{p}\right]^{\frac{1}{p}}.$
Thus,
$\displaystyle F_{p}(u_{p},v_{p})$
$\displaystyle=Q_{s,t,p}(u_{p},v_{p})^{\frac{1}{p}}=\left[\frac{1}{p}[u_{p}]_{s,p}^{p}+\frac{1}{p}[v_{p}]_{t,p}^{p}\right]^{\frac{1}{p}}$
$\displaystyle\geq
2^{-\frac{1}{k}}|\Omega|^{2\left(\frac{1}{p}-\frac{1}{k}\right)}p^{-\frac{1}{p}}\left(\int_{\Omega}\int_{\Omega}\frac{|u_{p}(x)-u_{p}(y)|^{k}}{|x-y|^{\left(\frac{N}{p}+s\right)k}}+\frac{|v_{p}(x)-v_{p}(y)|^{k}}{|x-y|^{\left(\frac{N}{p}+t\right)k}}\mathrm{d}x\mathrm{d}y\right)^{\frac{1}{k}}.$
As $p\to\infty$, it follows from the uniform convergence and Fatou's Lemma that
$\displaystyle\liminf_{p\to\infty}F_{p}(u_{p},v_{p})$ $\displaystyle\geq
2^{-\frac{1}{k}}|\Omega|^{-\frac{2}{k}}\left(\int_{\Omega}\int_{\Omega}\frac{|u(x)-u(y)|^{k}}{|x-y|^{sk}}+\frac{|v(x)-v(y)|^{k}}{|x-y|^{tk}}\mathrm{d}x\mathrm{d}y\right)^{\frac{1}{k}}.$
Letting $k\to\infty$, we obtain
$\displaystyle\liminf_{p\to\infty}F_{p}(u_{p},v_{p})$
$\displaystyle\geq\max\left\\{|u|_{s},|v|_{t}\right\\}=F_{\infty}(u,v),$ (18)
concluding the proof of ($i$).
Now we deal with the second claim. Take any $(u,v)\in X_{0}(\Omega)$ and
initially suppose that $(u,v)\notin X^{*}_{s,t,\infty}(\Omega)\cap
S_{\infty}$. Then $F_{\infty}(u,v)=\infty$. Consider then a sequence of
values $p\to\infty$ and, for any $p\in\left(\frac{N}{s},\infty\right)$ in the
sequence, define $u_{p}:=u$ and $v_{p}:=v$. Of course we have
$(u_{p},v_{p})\to(u,v)$ as $p\to\infty$ in $X_{0}(\Omega)$. It is not
difficult to discard the cases
$\left(\displaystyle\int_{\Omega}|u_{p}|^{\alpha(p)}\mathrm{d}x\right)|v_{p}(x_{0})|^{\beta(p)}\neq
1$. If, however, $(u,v)\in X^{*}_{s,t,\infty}(\Omega)\cap S_{\infty}$,
consider then a sequence of values $p\to\infty$ and, for any
$p\in\left(\frac{N}{s},\infty\right)$ in the sequence, define
$U_{p}(x)=\frac{u(x)}{\left(\displaystyle\int_{\Omega}|u|^{\alpha(p)}\mathrm{d}x\right)^{\frac{1}{p}}|v(x_{0})|^{\frac{\beta(p)}{p}}}\qquad\text{and}\qquad
V_{p}(x)=\frac{v(x)}{\left(\displaystyle\int_{\Omega}|u|^{\alpha(p)}\mathrm{d}x\right)^{\frac{1}{p}}|v(x_{0})|^{\frac{\beta(p)}{p}}}.$
Then $(U_{p},V_{p})\in S_{p}$ and
$\displaystyle\limsup_{p\to\infty}F_{p}(U_{p},V_{p})=\max\bigg{\\{}|u|_{s},|v|_{t}\bigg{\\}}=F_{\infty}(u,v),$
completing the proof of ($ii$). $\hfill\Box$
## 4\. Proof of Theorem 3
Let us denote
$R=\max_{x\in\overline{\Omega}}\text{dist}(x,\mathbb{R}^{N}\setminus\Omega)=\|\text{dist}(.,\mathbb{R}^{N}\setminus\Omega)\|_{L^{\infty}(\Omega)}.$
For a fixed $x_{1}\in\Omega$ we consider the functions
$\phi_{R}\colon\overline{B_{R}(x_{1})}\rightarrow[0,R]$ and
$\psi_{R}\colon\overline{B_{R}(x_{0})}\rightarrow[0,R]$ given by
$\phi_{R}(x)=R^{(\theta-1)t-s\theta}\left(R-|x-x_{1}|\right)^{s}_{+}\quad\text{and}\quad\psi_{R}(x)=R^{(\theta-1)t-s\theta}\left(R-|x-x_{0}|\right)^{t}_{+}.$
Of course we have $\phi_{R}\in C_{0}^{0,s}(\overline{B_{R}(x_{1})})$ and
$\psi_{R}\in C_{0}^{0,t}(\overline{B_{R}(x_{0})})$. Furthermore,
$\|\phi_{R}\|_{\infty}=R^{(\theta-1)(t-s)},\quad|\psi_{R}(x_{0})|=R^{\theta(t-s)}\quad\text{and}\quad|\phi_{R}|_{s}=|\psi_{R}|_{t}=R^{(\theta-1)t-s\theta}.$
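For the reader's convenience, the exponent arithmetic behind these identities is the direct computation
$\|\phi_{R}\|_{\infty}=R^{(\theta-1)t-s\theta}\cdot R^{s}=R^{(\theta-1)t-s(\theta-1)}=R^{(\theta-1)(t-s)}\qquad\text{and}\qquad|\psi_{R}(x_{0})|=R^{(\theta-1)t-s\theta}\cdot R^{t}=R^{\theta(t-s)},$
the suprema being attained at $x_{1}$ and $x_{0}$ respectively.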
We can extend $\phi_{R}$ and $\psi_{R}$ to $\overline{\Omega}$ by putting
$\phi_{R}=0$ in $\mathbb{R}^{N}\setminus\overline{B_{R}(x_{1})}$ and
$\psi_{R}=0$ in $\mathbb{R}^{N}\setminus\overline{B_{R}(x_{0})}$, so that
$\phi_{R}\in C_{0}^{0,s}(\overline{\Omega})$ and $\psi_{R}\in C_{0}^{0,t}(\overline{\Omega})$,
maintaining their Hölder seminorms. Additionally, we still have $\phi_{R},\psi_{R}\in
W_{0}^{1,m}(\Omega)\hookrightarrow W_{0}^{s,m}(\Omega)$ for all $s\in(0,1)$
and $m\geq 1$. For details, see [7, 9].
###### Lemma 5.
For any fixed $0<s\leq t<1$ we have
$\Lambda_{1,\infty}=\inf_{(u,v)\in
X_{s,t,\infty}^{*}(\Omega)}\frac{\max\big{\\{}|u|_{s},|v|_{t}\big{\\}}}{\|u\|_{\infty}^{\theta}|v(x_{0})|^{1-\theta}}=\frac{1}{R^{s\theta+(1-\theta)t}}.$
###### Proof.
We note that we have
$\|\phi_{R}\|_{\infty}^{\theta}|\psi_{R}(x_{0})|^{1-\theta}=R^{\theta(\theta-1)(t-s)+\theta(1-\theta)(t-s)}=1$
and therefore
$\Lambda_{1,\infty}=\inf_{(u,v)\in
X_{s,t,\infty}^{*}(\Omega)}\frac{\max\big{\\{}|u|_{s},|v|_{t}\big{\\}}}{\|u\|_{\infty}^{\theta}|v(x_{0})|^{1-\theta}}\leq\frac{\max\big{\\{}|\phi_{R}|_{s},|\psi_{R}|_{t}\big{\\}}}{\|\phi_{R}\|_{\infty}^{\theta}|\psi_{R}(x_{0})|^{1-\theta}}=\frac{1}{R^{s\theta+(1-\theta)t}}.$
Also note that, given $(u,v)\in X_{s,t,\infty}^{*}(\Omega)$, we have $u=v=0$ in
$\mathbb{R}^{N}\setminus\Omega$. Since $u$ is continuous, there exists
$x_{1}\in\overline{\Omega}$ such that
$\|u\|_{\infty}=|u(x_{1})|.$
The compactness of $\overline{\Omega}$ guarantees the existence of
$y_{x_{0}},y_{x_{1}}\in\partial\Omega$ such that
$|x_{0}-y_{x_{0}}|=\text{dist}(x_{0},\mathbb{R}^{N}\setminus\Omega)\quad\text{and}\quad|x_{1}-y_{x_{1}}|=\text{dist}(x_{1},\mathbb{R}^{N}\setminus\Omega).$
Thus, since $u(y_{x_{1}})=v(y_{x_{0}})=0$, it follows
$\|u\|_{\infty}^{\theta}=|u(x_{1})-u(y_{x_{1}})|^{\theta}\leq|u|_{s}^{\theta}|x_{1}-y_{x_{1}}|^{s\theta}\leq|u|_{s}^{\theta}\,R^{s\theta}.$
On the other hand,
$|v(x_{0})|^{1-\theta}=|v(x_{0})-v(y_{x_{0}})|^{1-\theta}\leq|v|_{t}^{1-\theta}|x_{0}-y_{x_{0}}|^{t(1-\theta)}\leq|v|_{t}^{1-\theta}\,R^{t(1-\theta)}.$
So, for any $(u,v)\in X^{*}_{s,t,\infty}(\Omega)$, we have
$\displaystyle\frac{1}{R^{s\theta+t(1-\theta)}}=\frac{1}{R^{s\theta}\,R^{(1-\theta)t}}$
$\displaystyle\leq\frac{|u|_{s}^{\theta}|v|_{t}^{1-\theta}}{\|u\|_{\infty}^{\theta}|v(x_{0})|^{1-\theta}}\leq\frac{\left(\max\big{\\{}|u|_{s},|v|_{t}\big{\\}}\right)^{\theta}\left(\max\big{\\{}|u|_{s},|v|_{t}\big{\\}}\right)^{1-\theta}}{\|u\|_{\infty}^{\theta}|v(x_{0})|^{1-\theta}}$
$\displaystyle=\frac{\max\big{\\{}|u|_{s},|v|_{t}\big{\\}}}{\|u\|_{\infty}^{\theta}|v(x_{0})|^{1-\theta}}.$
Therefore,
$\Lambda_{1,\infty}=\inf_{(u,v)\in
X_{s,t,\infty}^{*}(\Omega)}\frac{\max\big{\\{}|u|_{s},|v|_{t}\big{\\}}}{\|u\|_{\infty}^{\theta}|v(x_{0})|^{1-\theta}}\geq\frac{1}{R^{s\theta+(1-\theta)t}},$
concluding the proof. ∎
The next result is pivotal in our analysis of the asymptotic behavior of
solutions in problems driven by the fractional $p$-Laplacian.
###### Lemma 6.
Let $u\in C_{0}^{0,\sigma}(\overline{\Omega})$ be extended as zero outside
$\Omega$. If $u\in W^{\sigma,q}(\Omega)$ for some $q>1$, then $u\in
W_{0}^{\sigma,p}(\Omega)$ for all $p\geq q$ and
$\lim_{p\to\infty}[u]_{\sigma,p}=|u|_{\sigma}.$
The proof of Lemma 6 can be found in [6, Lemma 7].
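The heuristic behind Lemma 6, which the proof in [6] makes precise, is the convergence of $L^{p}$ norms to the $L^{\infty}$ norm: since
$[u]_{\sigma,p}=\left(\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\left(\frac{|u(x)-u(y)|}{|x-y|^{\sigma+\frac{N}{p}}}\right)^{p}\mathrm{d}x\mathrm{d}y\right)^{\frac{1}{p}},$
the quantity inside the parentheses converges pointwise to $|u(x)-u(y)|/|x-y|^{\sigma}$ as $p\to\infty$, and, modulo the integrability issues handled by the hypotheses of the lemma, the $L^{p}$ norms tend to the supremum, which is exactly $|u|_{\sigma}$.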
Proof of Theorem 3. Of course we have
$\Lambda_{1}(p_{n})\leq\frac{\frac{1}{p_{n}}[\phi_{R}]_{s,p_{n}}^{p_{n}}+\frac{1}{p_{n}}[\psi_{R}]_{t,p_{n}}^{p_{n}}}{\left(\displaystyle\int_{\Omega}|\phi_{R}|^{\alpha(p_{n})}\mathrm{d}x\right)|\psi_{R}(x_{0})|^{\beta(p_{n})}}.$
Thus,
$\displaystyle\limsup_{n\to\infty}\sqrt[p_{n}]{\Lambda_{1}(p_{n})}$
$\displaystyle\leq\limsup_{n\to\infty}\left(\frac{1}{p_{n}}\frac{[\phi_{R}]_{s,p_{n}}^{p_{n}}+[\psi_{R}]_{t,p_{n}}^{p_{n}}}{\left(\displaystyle\int_{\Omega}|\phi_{R}|^{\alpha(p_{n})}\mathrm{d}x\right)|\psi_{R}(x_{0})|^{\beta(p_{n})}}\right)^{\frac{1}{p_{n}}}$
$\displaystyle\leq\limsup_{n\to\infty}\left(\left(\frac{2}{p_{n}}\right)^{\frac{1}{p_{n}}}\frac{\max\big{\\{}[\phi_{R}]_{s,p_{n}},[\psi_{R}]_{t,p_{n}}\big{\\}}}{\left(\left(\displaystyle\int_{\Omega}|\phi_{R}|^{\alpha(p_{n})}\mathrm{d}x\right)|\psi_{R}(x_{0})|^{\beta(p_{n})}\right)^{\frac{1}{p_{n}}}}\right)$
$\displaystyle=\frac{\max\big{\\{}|\phi_{R}|_{s},|\psi_{R}|_{t}\big{\\}}}{\|\phi_{R}\|_{\infty}^{\theta}|\psi_{R}(x_{0})|^{1-\theta}}\leq\frac{1}{R^{s\theta+(1-\theta)t}},$
proving that the sequence
$\left\\{\sqrt[p_{n}]{\Lambda_{1}(p_{n})}\right\\}_{n\in\mathbb{N}}$ is
bounded in $\mathbb{R}$, that is, there exists $M_{0}>0$ such that
$\sqrt[p_{n}]{\Lambda_{1}(p_{n})}\leq M_{0}\quad\ \text{for all}\ \
n\in\mathbb{N}.$ (19)
Theorem 1 guarantees that we can take $(u_{p_{n}},v_{p_{n}})$ so that
$u_{p_{n}}>0,\
v_{p_{n}}>0\quad\text{and}\quad\left(\int_{\Omega}|u_{p_{n}}|^{\alpha(p_{n})}\mathrm{d}x\right)|v_{p_{n}}(x_{0})|^{\beta(p_{n})}=1.$
Therefore
$\Lambda_{1}(p_{n})=\frac{1}{p_{n}}[u_{p_{n}}]_{s,p_{n}}^{p_{n}}+\frac{1}{p_{n}}[v_{p_{n}}]_{t,p_{n}}^{p_{n}}\geq\frac{1}{p_{n}}\max\bigg{\\{}[u_{p_{n}}]_{s,p_{n}}^{p_{n}},[v_{p_{n}}]_{t,p_{n}}^{p_{n}}\bigg{\\}},$
which yields
$\displaystyle[u_{p_{n}}]_{s,p_{n}}\leq
p_{n}^{\frac{1}{p_{n}}}\sqrt[p_{n}]{\Lambda_{1}(p_{n})}.$ (20)
For a fixed $m_{0}>\frac{N}{s}$, denoting the diameter of $\Omega$ by
$\text{diam}(\Omega)$, it follows from (19) and (20) that
$\displaystyle|u_{p_{n}}|_{s-\frac{N}{m_{0}}}$ $\displaystyle=\sup_{x\neq
y}\frac{|u_{p_{n}}(x)-u_{p_{n}}(y)|}{|x-y|^{s-\frac{N}{m_{0}}}}=\sup_{x\neq
y}\frac{|u_{p_{n}}(x)-u_{p_{n}}(y)|}{|x-y|^{s-\frac{N}{p_{n}}}}\,|x-y|^{\frac{N}{m_{0}}-\frac{N}{p_{n}}}$
$\displaystyle\leq\left(\text{diam}(\Omega)\right)^{\frac{N}{m_{0}}-\frac{N}{p_{n}}}\sup_{x\neq
y}\frac{|u_{p_{n}}(x)-u_{p_{n}}(y)|}{|x-y|^{s-\frac{N}{p_{n}}}}$
$\displaystyle\leq
C\left(\text{diam}(\Omega)\right)^{\frac{N}{m_{0}}-\frac{N}{p_{n}}}\,[u_{p_{n}}]_{s,p_{n}}$
$\displaystyle\leq
C\left(\text{diam}(\Omega)\right)^{\frac{N}{m_{0}}-\frac{N}{p_{n}}}\,p_{n}^{\frac{1}{p_{n}}}\,\sqrt[p_{n}]{\Lambda_{1}(p_{n})},$
where the constant $C$ does not depend on $p_{n}$. We conclude that the sequence
$\\{u_{p_{n}}\\}$ is uniformly bounded in
$C_{0}^{0,s-\frac{N}{m_{0}}}(\overline{\Omega})$ and the same reasoning is
valid for $\\{v_{p_{n}}\\}$, showing that $\\{v_{p_{n}}\\}_{n\in\mathbb{N}}$
is uniformly bounded in $C_{0}^{0,t-\frac{N}{m_{0}}}(\overline{\Omega})$.
Passing to subsequences if necessary, there exist $u_{\infty}\in
C_{0}^{0,s-\frac{N}{m_{0}}}(\overline{\Omega})$ and $v_{\infty}\in
C_{0}^{0,t-\frac{N}{m_{0}}}(\overline{\Omega})$ such that
$u_{p_{n}}\to u_{\infty}\quad\text{and}\quad v_{p_{n}}\to v_{\infty}\ \
\text{uniformly in}\ \ \Omega.$
We also observe that
$\|u_{\infty}\|_{\infty}^{\theta}|v_{\infty}(x_{0})|^{1-\theta}=\lim_{n\to\infty}\left(\left(\int_{\Omega}|u_{p_{n}}|^{\alpha(p_{n})}\mathrm{d}x\right)|v_{p_{n}}(x_{0})|^{\beta(p_{n})}\right)^{\frac{1}{p_{n}}}=1.$
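The middle equality above combines uniform convergence with $(h_{1})$–$(h_{2})$; schematically, since $\alpha(p_{n})/p_{n}\to\theta$, $\beta(p_{n})/p_{n}\to 1-\theta$ and $L^{q}$ norms on the bounded domain $\Omega$ converge to the $L^{\infty}$ norm as $q\to\infty$,
$\left(\int_{\Omega}|u_{p_{n}}|^{\alpha(p_{n})}\mathrm{d}x\right)^{\frac{1}{p_{n}}}=\left[\left(\int_{\Omega}|u_{p_{n}}|^{\alpha(p_{n})}\mathrm{d}x\right)^{\frac{1}{\alpha(p_{n})}}\right]^{\frac{\alpha(p_{n})}{p_{n}}}\to\|u_{\infty}\|_{\infty}^{\theta}\qquad\text{and}\qquad|v_{p_{n}}(x_{0})|^{\frac{\beta(p_{n})}{p_{n}}}\to|v_{\infty}(x_{0})|^{1-\theta}.$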
Fix $k>\frac{N}{s}$. By applying Fatou’s, Hölder’s inequality and (20), we
obtain
$\displaystyle\int_{\Omega}\int_{\Omega}\frac{|u_{\infty}(x)-u_{\infty}(y)|^{k}}{|x-y|^{sk}}\mathrm{d}x\mathrm{d}y$
$\displaystyle\leq\liminf_{n\to\infty}\int_{\Omega}\int_{\Omega}\frac{|u_{p_{n}}(x)-u_{p_{n}}(y)|^{k}}{|x-y|^{\left(\frac{N}{p_{n}}+s\right)k}}\mathrm{d}x\mathrm{d}y$
$\displaystyle\leq\liminf_{n\to\infty}|\Omega|^{2\left(\frac{p_{n}-k}{p_{n}}\right)}\left(\int_{\Omega}\int_{\Omega}\frac{|u_{p_{n}}(x)-u_{p_{n}}(y)|^{p_{n}}}{|x-y|^{N+sp_{n}}}\mathrm{d}x\mathrm{d}y\right)^{\frac{k}{p_{n}}}$
$\displaystyle\leq|\Omega|^{2}\liminf_{n\to\infty}[u_{p_{n}}]_{s,p_{n}}^{k}$
(21)
$\displaystyle\leq|\Omega|^{2}\liminf_{n\to\infty}\left(p_{n}^{\frac{1}{p_{n}}}\sqrt[p_{n}]{\Lambda_{1}(p_{n})}\right)^{k}$
$\displaystyle\leq|\Omega|^{2}\left(\frac{1}{R^{s\theta+(1-\theta)t}}\right)^{k}.$
Thus,
$|u_{\infty}|_{s}=\lim_{k\to\infty}\left(\int_{\Omega}\int_{\Omega}\frac{|u_{\infty}(x)-u_{\infty}(y)|^{k}}{|x-y|^{sk}}\mathrm{d}x\mathrm{d}y\right)^{\frac{1}{k}}\leq\lim_{k\to\infty}|\Omega|^{\frac{2}{k}}\,\frac{1}{R^{s\theta+(1-\theta)t}}=\frac{1}{R^{s\theta+(1-\theta)t}}.$
Analogously,
$|v_{\infty}|_{t}=\lim_{k\to\infty}\left(\int_{\Omega}\int_{\Omega}\frac{|v_{\infty}(x)-v_{\infty}(y)|^{k}}{|x-y|^{tk}}\mathrm{d}x\mathrm{d}y\right)^{\frac{1}{k}}\leq\lim_{k\to\infty}|\Omega|^{\frac{2}{k}}\,\frac{1}{R^{s\theta+(1-\theta)t}}=\frac{1}{R^{s\theta+(1-\theta)t}}$
and therefore
$\max\big{\\{}|u_{\infty}|_{s},|v_{\infty}|_{t}\big{\\}}\leq\frac{1}{R^{s\theta+(1-\theta)t}}.$
It follows from Lemma 5 that
$\frac{1}{R^{s\theta+(1-\theta)t}}=\inf_{(u,v)\in
X_{s,t,\infty}^{*}(\Omega)}\frac{\max\big{\\{}|u|_{s},|v|_{t}\big{\\}}}{\|u\|_{\infty}^{\theta}|v(x_{0})|^{1-\theta}}\leq\max\big{\\{}|u_{\infty}|_{s},|v_{\infty}|_{t}\big{\\}}\leq\frac{1}{R^{s\theta+(1-\theta)t}},$
thus producing
$\max\big{\\{}|u_{\infty}|_{s},|v_{\infty}|_{t}\big{\\}}=\frac{1}{R^{s\theta+(1-\theta)t}}.$
In turn, estimate (21) yields
$\displaystyle\max\left\\{\left(\int_{\Omega}\int_{\Omega}\frac{|u_{\infty}(x)-u_{\infty}(y)|^{k}}{|x-y|^{sk}}\mathrm{d}x\mathrm{d}y\right)^{\frac{1}{k}},\left(\int_{\Omega}\int_{\Omega}\frac{|v_{\infty}(x)-v_{\infty}(y)|^{k}}{|x-y|^{tk}}\mathrm{d}x\mathrm{d}y\right)^{\frac{1}{k}}\right\\}$
$\displaystyle\leq|\Omega|^{\frac{2}{k}}\liminf_{n\to\infty}\left(p_{n}^{\frac{1}{p_{n}}}\sqrt[p_{n}]{\Lambda_{1}(p_{n})}\right).$
Thus, as $k\to\infty$ we obtain
$\displaystyle\frac{1}{R^{s\theta+(1-\theta)t}}=\max\big{\\{}|u_{\infty}|_{s},|v_{\infty}|_{t}\big{\\}}$
$\displaystyle\leq\liminf_{n\to\infty}\left(p_{n}^{\frac{1}{p_{n}}}\sqrt[p_{n}]{\Lambda_{1}(p_{n})}\right)$
$\displaystyle\leq\limsup_{n\to\infty}\left(p_{n}^{\frac{1}{p_{n}}}\sqrt[p_{n}]{\Lambda_{1}(p_{n})}\right)\leq\frac{1}{R^{s\theta+(1-\theta)t}},$
from what follows
$\lim_{n\to\infty}\sqrt[p_{n}]{\Lambda_{1}(p_{n})}=\lim_{n\to\infty}\left(p_{n}^{\frac{1}{p_{n}}}\sqrt[p_{n}]{\Lambda_{1}(p_{n})}\right)=\frac{1}{R^{s\theta+(1-\theta)t}}=\Lambda_{1,\infty}.$
$\hfill\Box$
## 5\. Proof of Theorem 4
The next result shows that solutions in the weak sense are also viscosity
solutions. Its proof can be achieved by adapting the arguments given by
Lindgren and Lindqvist in [9, Proposition 1].
###### Proposition 7.
The functions $u_{p}$ and $v_{p}$ given by Theorem 1 are viscosity solutions to
the problems
$\left\\{\begin{array}[]{ll}\mathcal{L}_{s,p}u=\Lambda_{1}(p)\alpha(p)|u|^{\alpha(p)-1}|v(x_{0})|^{\beta(p)}&{\rm
in}\ \ \Omega,\\\ u=0&{\rm in}\
\mathbb{R}^{N}\setminus\Omega,\end{array}\right.$
and
$\left\\{\begin{array}[]{ll}\mathcal{L}_{t,p}v=0&{\rm in}\
\Omega\setminus\\{x_{0}\\},\\\ v=0&{\rm in}\ \mathbb{R}^{N}\setminus\Omega,\\\
v(x_{0})=v_{p}(x_{0}),\end{array}\right.$
respectively.
Proof of Theorem 4. We start showing that $v_{\infty}$ is a viscosity solution
to the problem
$\left\\{\begin{array}[]{ll}\mathcal{L}_{t,\infty}v=0&{\rm in}\ \
\Omega\setminus\\{x_{0}\\},\\\ v=0&{\rm in}\ \mathbb{R}^{N}\setminus\Omega,\\\
v(x_{0})=v_{\infty}(x_{0}).\end{array}\right.$ (22)
According to Theorem 3 we have $v_{\infty}=0$ in
$\mathbb{R}^{N}\setminus\Omega$, and the condition $v(x_{0})=v_{\infty}(x_{0})$
is trivially satisfied. So, we need only show that $v_{\infty}$ solves the
first equation of (22) in the viscosity sense. Fix
$(z_{0},\varphi)\in\left(\Omega\setminus\\{x_{0}\\}\right)\times
C_{0}^{1}(\mathbb{R}^{N}\setminus\\{x_{0}\\})$ satisfying
$\varphi(z_{0})=v_{\infty}(z_{0})\qquad\text{and}\qquad\varphi(x)\leq
v_{\infty}(x),\ \ \forall x\in\mathbb{R}^{N}\setminus\\{x_{0},z_{0}\\}.$
Theorem 3 also guarantees the existence of a sequence
$\\{(u_{p_{n}},v_{p_{n}})\\}_{n\in\mathbb{N}}\in
C_{0}^{0,s}(\overline{\Omega})\times C_{0}^{0,t}(\overline{\Omega})$ such that
$u_{p_{n}}\to u_{\infty}$ and $v_{p_{n}}\to v_{\infty}$ uniformly in $\Omega$.
Thus, there exists a sequence $\\{x_{p_{n}}\\}_{n\in\mathbb{N}}$ so that
$x_{p_{n}}\to z_{0}$ and $v_{p_{n}}(x_{p_{n}})=\varphi(x_{p_{n}})$. Since
$x_{0}\neq z_{0}$, we can assume the existence of $n_{0}\geq 0$ and a ball
$B_{\rho}(z_{0})$ such that
$x_{p_{n}}\in B_{\rho}(z_{0})\subset\Omega\setminus\\{x_{0}\\},\quad\forall
n\geq n_{0}.$
Since $v_{p_{n}}$ weakly satisfies
$(-\Delta_{p_{n}})^{t}v_{p_{n}}(x)=\Lambda_{1}(p_{n})\beta(p_{n})\left(\int_{\Omega}|u_{p_{n}}|^{\alpha(p_{n})}\mathrm{d}x\right)|v_{p_{n}}(x_{0})|^{\beta(p_{n})-2}v_{p_{n}}(x_{0})\delta_{x_{0}}$
in $\Omega$ (hence also in $\Omega\setminus\\{x_{0}\\}$), Proposition 7 yields
that $v_{p_{n}}$ is a viscosity solution to the problem
$\left\\{\begin{array}[]{llll}\mathcal{L}_{t,p_{n}}v=0&{\rm in}\ \
\Omega\setminus\\{x_{0}\\},\\\ v=0&{\rm in}\ \mathbb{R}^{N}\setminus\Omega,\\\
v(x_{0})=v_{p_{n}}(x_{0}).\end{array}\right.$ (23)
By standard arguments, we obtain a sequence
$\\{z_{n}\\}_{n\in\mathbb{N}}\subset B_{\rho}(z_{0})$ such that $z_{n}\to
z_{0}$ and
$\sigma_{n}:=\min_{B_{\rho}(z_{0})}\left(v_{p_{n}}-\varphi\right)=v_{p_{n}}(z_{n})-\varphi(z_{n})\leq v_{p_{n}}(x)-\varphi(x),\
\ \forall x\in B_{\rho}(z_{0}).$
Now, define $\Psi_{n}:=\varphi+\sigma_{n}$. We have
$\Psi_{n}(z_{n})=\varphi(z_{n})+\sigma_{n}=v_{p_{n}}(z_{n})\qquad\text{and}\qquad\Psi_{n}(x)=\varphi(x)+\sigma_{n}\leq v_{p_{n}}(x),\
\ \forall x\in B_{\rho}(z_{0}).$
Since $v_{p_{n}}$ satisfies (23) in $\Omega\setminus\\{x_{0}\\}$,
$(\mathcal{L}_{t,p_{n}}\Psi_{n})(z_{n})\leq 0,\qquad\forall n\geq n_{0}.$
Thus, defining
$\left(A_{p_{n},t}(\varphi(z_{n}))\right)^{p_{n}-1}:=2\int_{\mathbb{R}^{N}}\frac{|\varphi(z_{n})-\varphi(y)|^{p_{n}-2}(\varphi(z_{n})-\varphi(y))^{+}}{|z_{n}-y|^{N+tp_{n}}}\mathrm{d}y$
and
$\left(B_{p_{n},t}(\varphi(z_{n}))\right)^{p_{n}-1}:=2\int_{\mathbb{R}^{N}}\frac{|\varphi(z_{n})-\varphi(y)|^{p_{n}-2}(\varphi(z_{n})-\varphi(y))^{-}}{|z_{n}-y|^{N+tp_{n}}}\mathrm{d}y,$
we have
$\displaystyle\left(A_{p_{n},t}(\varphi(z_{n}))\right)^{p_{n}-1}-\left(B_{p_{n},t}(\varphi(z_{n}))\right)^{p_{n}-1}$
$\displaystyle=2\int_{\mathbb{R}^{N}}\frac{|\varphi(z_{n})-\varphi(y)|^{p_{n}-2}(\varphi(z_{n})-\varphi(y))}{|z_{n}-y|^{N+tp_{n}}}\mathrm{d}y$
$\displaystyle\leq 0,\quad\forall n\geq n_{0}.$ (24)
Applying [7, Lemma 3.9] (see also [8, Lemma 6.1]), we obtain
$\lim_{n\to\infty}A_{p_{n},t}(\varphi(z_{n}))=\left(\mathcal{L}_{t,\infty}^{+}\varphi\right)(z_{0})\qquad\text{and}\qquad\lim_{n\to\infty}B_{p_{n},t}(\varphi(z_{n}))=\left(-\mathcal{L}_{t,\infty}^{-}\varphi\right)(z_{0}).$
As $n\to\infty$ in (24) we get
$\left(\mathcal{L}_{t,\infty}\varphi\right)(z_{0})=\left(\mathcal{L}_{t,\infty}^{+}\varphi\right)(z_{0})+\left(\mathcal{L}_{t,\infty}^{-}\varphi\right)(z_{0})\leq
0,$
showing that $v_{\infty}$ is a viscosity supersolution of (22). Analogously,
we obtain that $v_{\infty}$ is a viscosity subsolution of the same equation,
and thus a viscosity solution of (22).
Now we show that $u_{\infty}$ is a viscosity solution to the problem
$\left\\{\begin{array}[]{ll}\max\bigg{\\{}\mathcal{L}_{s,\infty}u,\mathcal{L}^{-}_{s,\infty}u-\Lambda_{1,\infty}|u(x)|^{\theta}|v_{\infty}(x_{0})|^{1-\theta}\bigg{\\}}=0&{\rm
in}\ \ \Omega,\\\ u=0&{\rm in}\
\mathbb{R}^{N}\setminus\Omega.\end{array}\right.$ (25)
The same reasoning used before implies that, for given
$(z_{0},\varphi)\in\Omega\times C_{0}^{1}(\mathbb{R}^{N})$, we find a sequence
$\\{u_{p_{n}}\\}_{n\in\mathbb{N}}$ in $C_{0}^{0,s}(\overline{\Omega})$ such
that $u_{p_{n}}\to u_{\infty}$ uniformly in $\Omega$ and a sequence
$\\{x_{p_{n}}\\}_{n\in\mathbb{N}}$ satisfying $x_{p_{n}}\to z_{0}$ and
$u_{p_{n}}(x_{p_{n}})=\varphi(x_{p_{n}})$. Thus, there exist $n_{0}\geq 0$ and
a ball $B_{\rho}(z_{0})$ so that
$x_{p_{n}}\notin B_{\rho}(z_{0})\subset\Omega\setminus\\{z_{0}\\},\ \ \forall
n\geq n_{0}.$
As before, we obtain that $u_{p_{n}}$ is a viscosity solution to the problem
$\left\\{\begin{array}[]{ll}\mathcal{L}_{s,p_{n}}u_{p_{n}}=\Lambda_{1}(p_{n})\alpha(p_{n})|u_{p_{n}}|^{\alpha(p_{n})-1}|v_{p_{n}}(x_{0})|^{\beta(p_{n})}&{\rm
in}\ \ \Omega,\\\ u=0&{\rm in}\
\mathbb{R}^{N}\setminus\Omega.\end{array}\right.$
Considering, as before, a sequence $\\{z_{n}\\}_{n\in\mathbb{N}}\subset
B_{\rho}(z_{0})$ such that $z_{n}\to z_{0}$ and defining $\Psi_{n}$ as in the
previous proof, we obtain
$(\mathcal{L}_{s,p_{n}}\Psi_{n})(z_{n})\leq\Lambda_{1}(p_{n})\alpha(p_{n})|\Psi_{n}(z_{n})|^{\alpha(p_{n})-1}|v_{p_{n}}(x_{0})|^{\beta(p_{n})}\
\ \forall n\geq n_{0},$
which is equivalent to the inequality
$\left(A_{p_{n},s}(\varphi(z_{n}))\right)^{p_{n}-1}-\left(B_{p_{n},s}(\varphi(z_{n}))\right)^{p_{n}-1}\leq\left(C_{p_{n}}(\varphi(z_{n}))\right)^{p_{n}-1}\
\ \forall n\geq n_{0},$
where
$\bigg{(}C_{p_{n}}(\varphi(z_{n}))\bigg{)}^{p_{n}-1}:=\Lambda_{1}(p_{n})\alpha(p_{n})|\varphi(z_{n})+\sigma_{n}|^{\alpha(p_{n})-1}|v_{p_{n}}(x_{0})|^{\beta(p_{n})}$
and the other terms are analogous to that of the previous case, just changing
$t$ for $s$.
Observe that a direct calculation yields
$\displaystyle\lim_{n\to\infty}C_{p_{n}}(\varphi(z_{n}))$
$\displaystyle=\lim_{n\to\infty}\left(\Lambda_{1}(p_{n})^{\frac{1}{p_{n}-1}}\,\alpha(p_{n})^{\frac{1}{p_{n}-1}}\,|\varphi(z_{n})+\sigma_{n}|^{\frac{\alpha(p_{n})-1}{p_{n}-1}}\,v_{p_{n}}(x_{0})^{\frac{\beta(p_{n})}{p_{n}-1}}\right)$
$\displaystyle=\Lambda_{1,\infty}|\varphi(z_{0})|^{\theta}v_{\infty}(x_{0})^{1-\theta}.$
So, letting $n\to\infty$ in the last inequality, we obtain
$\left(\mathcal{L}_{s,\infty}\varphi\right)(z_{0})=\left(\mathcal{L}_{s,\infty}^{+}\varphi\right)(z_{0})+\left(\mathcal{L}_{s,\infty}^{-}\varphi\right)(z_{0})\leq\Lambda_{1,\infty}|\varphi(z_{0})|^{\theta}v_{\infty}(x_{0})^{1-\theta}$
and therefore
$\max\left\\{\mathcal{L}_{s,\infty}u,\mathcal{L}^{-}_{s,\infty}u-\Lambda_{1,\infty}|u(x)|^{\theta}|v_{\infty}(x_{0})|^{1-\theta}\right\\}\leq
0\ \ {\rm in}\ \ \Omega,$
that is, $u_{\infty}$ is a viscosity supersolution to problem (25).
Analogously, $u_{\infty}$ is a viscosity subsolution to the same problem. We
are done. $\hfill\Box$
###### Remark 8.
We observe that the system
$\left\\{\begin{array}[]{ll}(-\Delta_{p})^{s}u(x)=\lambda\alpha(p)|u|^{\alpha(p)-2}u|v(x_{v})|^{\beta(p)}&{\rm
in}\ \ \Omega,\\\
(-\Delta_{p})^{t}v(x)=\lambda\beta(p)\left(\displaystyle\int_{\Omega}|u|^{\alpha(p)}\mathrm{d}x\right)|v(x_{v})|^{\beta(p)-2}v(x_{v})\delta_{x_{v}}&{\rm
in}\ \ \Omega,\\\ u=v=0&{\rm in}\ \mathbb{R}^{N}\setminus\Omega,\\\
\end{array}\right.$ ($P^{1}_{\infty}$)
where $x_{v}$ is a maximum point of $v$ in $\overline{\Omega}$ can be treated
in the same setting given in Section 2, applying the same procedure used to
solve system ($P^{1}_{p}$).
## 6\. On the system ($P^{2}_{p}$)
In this section we consider the functional system ($P^{2}_{p}$).
$\left\\{\begin{array}[]{ll}(-\Delta_{p})^{s}u(x)=\lambda\alpha(p)|u(x_{1})|^{\alpha(p)-2}u(x_{1})|v(x_{2})|^{\beta(p)}\delta_{x_{1}}&{\rm
in}\ \ \Omega,\\\
(-\Delta_{p})^{t}v(x)=\lambda\beta(p)|u(x_{1})|^{\alpha(p)}|v(x_{2})|^{\beta(p)-2}v(x_{2})\delta_{x_{2}}&{\rm
in}\ \ \Omega,\\\ u=v=0&{\rm in}\
\mathbb{R}^{N}\setminus\Omega,\end{array}\right.$
where $x_{1},x_{2}\in\Omega$ are arbitrary points, $x_{1}\neq x_{2}$. Observe
that both equations are functional, so their treatment recalls that used to
deal with the second equation in system ($P^{1}_{p}$).
###### Definition 3.
A pair $(u,v)\in X_{s,t,p}(\Omega)$ is a weak solution to ($P^{2}_{p}$) if
$\displaystyle\left\langle(-\Delta_{p})^{s}u,\varphi\right\rangle+\left\langle(-\Delta_{p})^{t}v,\psi\right\rangle=\lambda$
$\displaystyle\left[\alpha(p)|u(x_{1})|^{\alpha(p)-2}u(x_{1})|v(x_{2})|^{\beta(p)}\varphi(x_{1})\right.$
(26) $\displaystyle\left.\
+\beta(p)|u(x_{1})|^{\alpha(p)}|v(x_{2})|^{\beta(p)-2}v(x_{2})\psi(x_{2})\right]$
for all $(\varphi,\psi)\in X_{s,t,p}(\Omega)$.
The denominator in the definition of $Q_{s,t,p}$ should be changed into
$|u(x_{1})|^{\alpha(p)}\,|v(x_{2})|^{\beta(p)}$, maintaining the definition of
$\Lambda_{1}(p)$. The first result, which is similar to Theorem 1 is the
following.
###### Theorem 9.
For each $p\in\left(\frac{N}{s},\infty\right)$ we have
1. $(i)$
$\Lambda_{1}(p)>0$;
2. $(ii)$
there exist $(u_{p},v_{p})\in X^{*}_{s,t,p}(\Omega)$ such that $u_{p}>0$,
$v_{p}>0$ and
$|u_{p}(x_{1})|^{\alpha(p)}|v_{p}(x_{2})|^{\beta(p)}=1\qquad\text{and}\qquad\Lambda_{1}(p)=Q_{s,t,p}(u_{p},v_{p}).$
Its proof is also similar to that of Theorem 1. For details, see the proof
sketched in Section 3 or [11, Theorem 1].
The next step is to prove a result similar to Theorem 2. Changing the
definition of $S_{p}$ and $S_{\infty}$ into
$\displaystyle S_{p}$ $\displaystyle=\left\\{(u,v)\in
X_{s,t,p}(\Omega)\,:\,|u(x_{1})|^{\alpha(p)}|v(x_{2})|^{\beta(p)}=1\right\\}$
and $\displaystyle S_{\infty}$ $\displaystyle=\left\\{(u,v)\in
X_{s,t,\infty}(\Omega)\,:\,|u(x_{1})|^{\theta}|v(x_{2})|^{1-\theta}=1\right\\}$
and also the denominator in $G_{p}$ into
$|u(x_{1})|^{\theta}|v(x_{2})|^{1-\theta}$, we obtain the version of Theorem 2
with the same statement.
Up to this point, the points $x_{1},x_{2}\in\Omega$ were taken arbitrarily.
Now, we consider sequences $u_{n}:=u_{p_{n}}$ and $v_{n}:=v_{p_{n}}$ given by
Theorem 9. Since $u_{n},v_{n}>0$, we can take $x_{1}$ as a maximum $x_{n}$ of
$u_{n}$ and $x_{2}$ as a maximum $y_{n}$ of $v_{n}$. Observe that we do not
suppose that the maxima $x_{n}$ and $y_{n}$ are unique. However, we will prove
that the sequence $(x_{n},y_{n})$ has a subsequence that converges to
$(x_{\infty},y_{\infty})$ and the equality
$|u_{\infty}(x_{\infty})|^{\theta}|v_{\infty}(y_{\infty})|^{1-\theta}=1$ still
holds true.
###### Theorem 10.
Let $\\{p_{n}\\}$ be a sequence converging to $\infty$ and
$(u_{p_{n}},v_{p_{n}})$ the solution of ($P^{2}_{p}$) given in Theorem 9.
Denote $x_{n}:=x_{u_{p_{n}}}$ and $y_{n}:=x_{v_{p_{n}}}$ a sequence of maxima
to $u_{p_{n}}$ and $v_{p_{n}}$, respectively. Passing to a subsequence if
necessary, $\\{(u_{p_{n}},v_{p_{n}})\\}_{n\in\mathbb{N}}$ converges uniformly
to $(u_{\infty},v_{\infty})\in C_{0}^{0,s}(\overline{\Omega})\times
C_{0}^{0,t}(\overline{\Omega})$, while the sequences $\\{x_{n}\\}$ and
$\\{y_{n}\\}$ converge to $x_{\infty}\in\Omega$ and $y_{\infty}\in\Omega$,
respectively, which are the maxima of $u_{\infty}$ and $v_{\infty}$.
Furthermore
1. $(i)$
$u_{\infty}\geq 0$, $v_{\infty}\geq 0$ and
$|u_{\infty}(x_{\infty})|^{\theta}|v_{\infty}(y_{\infty})|^{1-\theta}=1$;
2. $(ii)$
$\displaystyle\lim_{n\to\infty}\sqrt[p_{n}]{\Lambda_{1}(p_{n})}=\frac{1}{R^{s\theta+(1-\theta)t}}$;
3. $(iii)$
$\max\big{\\{}|u_{\infty}|_{s},|v_{\infty}|_{t}\big{\\}}=\displaystyle\frac{1}{R^{s\theta+(1-\theta)t}}$;
4. $(iv)$
If $s=t$, then
$0\leq
u_{\infty}(x)\leq\displaystyle\frac{\left(\textup{dist}(x,\mathbb{R}^{N}\setminus\Omega)\right)^{s}}{R^{s}}\quad\text{and}\quad
0\leq
v_{\infty}(x)\leq\frac{\left(\textup{dist}(x,\mathbb{R}^{N}\setminus\Omega)\right)^{s}}{R^{s}}.$
Its proof can be obtained by mimicking the method used to prove Theorem 3.
Comparing this result with the one in [11], we first note that our result
brings information about the sequence of maxima of $u_{p_{n}}$ and
$v_{p_{n}}$, which are absent in that paper.
Finally, the analogue of Theorem 4 is the following. Once again, its proof is
obtained by adapting that of Theorem 4.
###### Theorem 11.
The functions $u_{\infty}$ and $v_{\infty}$, given by Theorem 10, are
viscosity solutions of the problems
$\left\\{\begin{array}[]{ll}\mathcal{L}_{s,\infty}u=0&{\rm in}\ \
\Omega\setminus\\{x_{1}\\},\\\ u=0&{\rm in}\ \mathbb{R}^{N}\setminus\Omega,\\\
u(x_{1})=u_{\infty}(x_{1})\end{array}\right.\qquad\text{and}\qquad\left\\{\begin{array}[]{ll}\mathcal{L}_{t,\infty}v=0&{\rm
in}\ \ \Omega\setminus\\{x_{2}\\},\\\ v=0&{\rm in}\
\mathbb{R}^{N}\setminus\Omega,\\\
v(x_{2})=v_{\infty}(x_{2}),\end{array}\right.$
respectively.
## References
* [1] C. Alves, G. Ercole and G. Pereira: Asymptotic behavior as $p\to\infty$ of ground state solutions of a $(p,q(p))$-Laplacian problem, Proc. Roy. Soc. Edinburgh Sect. A, 149 (2019), no. 6, 1493–1522.
* [2] A. Chambolle, E. Lindgren and R. Monneau: A Hölder infinity Laplacian, ESAIM Control Optim. Calc. Var. 18 (2012), no. 3, 799–835.
* [3] L. Del Pezzo and J. Rossi: Eigenvalues for systems of fractional $p$-Laplacians, Rocky Mountain J. Math. 48 (2018), no. 4, 1077–1104.
* [4] R. Di Nezza, G. Palatucci and E. Valdinoci: Hitchhikers guide to the fractional Sobolev spaces, Bull. Sci. Math. 136 (2012), no. 5, 521–573.
* [5] G. Ercole and G. Pereira: Asymptotics for the best Sobolev constants and their extremal functions, Math. Nachr. 289 (2016), no. 11–12, 1433–1449.
* [6] G. Ercole, G. Pereira and R. Sanchis: Asymptotic behavior of extremals for fractional Sobolev inequalities associated with singular problems, Ann. Mat. Pura Appl. (4), 198 (2019), no. 6, 2059–2079.
* [7] G. Ercole, A. H. S. Medeiros and G. A. Pereira: On the behavior of least energy solutions of a fractional $(p,q(p))$-Laplacian problem as $p$ goes to infinity, Asymptot. Anal. 123 (2021), no. 3–4, 237–262.
* [8] R. Ferreira and M. Pérez-Llanos: Limit problems for a fractional $p$-Laplacian as $p\to\infty$, NoDEA Nonlinear Differential Equations Appl. 23 (2016), no. 2, Art. 14, 28 pp.
* [9] E. Lindgren and P. Lindqvist: Fractional eigenvalues, Calc. Var. Partial Differential Equations 49 (2014), no. 1–2, 795–826.
* [10] P. Juutinen and P. Lindqvist: On the higher eigenvalues for the $\infty$-eigenvalue problem, Calc. Var. Partial Differential Equations 23 (2005), no. 2, 169–192.
* [11] M. Mihǎilescu, J. Rossi and D. Stancu-Dumitru: A limiting problem for a family of eigenvalue problems involving $p$-Laplacians, Rev. Mat. Complut. 32 (2019), no. 3, 631–653.
# Scalable underwater assembly with reconfigurable visual fiducials
Samuel Lensgraf, Ankita Sarkar, Adithya Pediredla, Devin Balkcom, Alberto
Quattrini Li (Dartmouth College). This project was partially supported by the
NSF GRFP and NSF grants CNS-1919647, 2024541, 2144624.
###### Abstract
We present a scalable combined localization infrastructure deployment and task
planning algorithm for underwater assembly. Infrastructure is autonomously
modified to suit the needs of manipulation tasks based on an uncertainty model
on the infrastructure’s positional accuracy. Our uncertainty model can be
combined with the noise characteristics from multiple devices. For the task
planning problem, we propose a layer-based clustering approach that completes
the manipulation tasks one cluster at a time. We employ movable visual
fiducial markers as infrastructure and an autonomous underwater vehicle (AUV)
for manipulation tasks. The proposed task planning algorithm is
computationally simple, and we implement it on the AUV without any offline
computation requirements. Combined hardware experiments and simulations over
large datasets show that the proposed technique is scalable to large areas.
## I Introduction
Autonomous assembly of structures using drones or free-floating robots is a
promising direction for creating rapidly deployable, flexibly designed
structures [1]. In most real-world systems, localization relative to a
reference is achieved using calibrated and fixed positional infrastructure
such as motion capture systems or visual fiducials [2, 3, 4, 5].
Unfortunately, these systems are not scalable as the coverage area is fixed
and scaling beyond the coverage area requires redesigning the positioning
technology.
To overcome the limited coverage area of the positioning technologies, we
propose to design the positioning infrastructure as a dynamic component of the
construction plan. Our method allows localizing against large structures with
minimal modification to the area around them and can also be integrated with
existing underwater construction structures to make them scalable; Fig. 1
shows our robot in action while moving a fiducial marker.
Figure 1: AUV placing a reconfigurable fiducial marker on a foundation while
localizing using another marker.
Localization infrastructure is often considered to have a constant noise
distribution, allowing coverage algorithms to plan based on the noise
properties of the fixed infrastructure. However, the properties of
localization infrastructure often depend on environmental factors: distance
from the infrastructure, reflections, or water temperature gradients influence
the accuracy of the positioning systems [6]. As the accuracy depends on
relative positioning between the infrastructure and the robot, high-accuracy
positioning can only be provided in a small fixed area, resulting in either
large infrastructure requirements or small structures. Instead, by
understanding the noise properties of the infrastructure components and
modeling them accurately as a function of environmental factors, we show that
it is possible to dynamically reconfigure the infrastructure to maximize the
positioning accuracy of any region. Our noise formulation model is also
conducive to sensor fusion techniques and, hence, can be extended for the case
of multiple sensors.
For a given construction structure manipulation task, we have to plan the
movement of infrastructure such that the repositioned infrastructure
guarantees the accuracy of the manipulation task. This planning is challenging
and results in the combined deployment and sequencing problem.
To solve this problem, we group the manipulation tasks using a clustering
algorithm that guarantees that the radius of any cluster is within the high
localization accuracy achievable with the dynamic markers. We reposition the
markers to have high accuracy for the cluster and execute the manipulation
tasks, one cluster at a time.
For implementation, we use visual fiducial markers mounted on plates that are
movable on our previously developed error-correcting construction foundations
[7]. The AUV views these markers with a downward-facing wide-angle camera and
can compute the position accuracy using our noise model. The implementation of
our clustering-based planning algorithm is computationally simple, and we
implemented it on the AUV itself without any external cloud computing
requirements. We validated the proposed technique both experimentally and on
large simulation datasets.
Our technique is particularly interesting for the case where deploying large
amounts of infrastructure is impractical or expensive. This is often the case
in real-world scenarios, where the sparse area of interest lies near a
protected region that we cannot permanently mar. For example, in the
construction of artificial reefs, the surrounding areas are critically
important to protect. In hard-to-reach places such as caves or very deep
waters, transporting large amounts of infrastructure can be impossible. Our
technique is a foundational component towards a scalable solution for these
cases. A number of practical challenges, such as flexibility in terms of
material of connected components per layer, will be the subject of future studies
as discussed at the end of the paper.
## II Related work
Uncertainty-based planning. Infrastructure placement for small scale scenes is
a well-studied problem. In particular, [8] define a landmark placement
algorithm that computes a static landmark placement based on certainty
requirements. We consider the problem of dynamically altering landmark
placements. We also develop a direct model of positional certainty based on
visual fiducial measurements.
[9] develop a swarm robot foraging algorithm that dynamically deploys and re-
deploys sensor motes. While the sensors are moved to provide certainty
implicitly, they do not model localization quality during their deployment.
Sensor coverage problems also directly relate to our work. In particular,
dynamic sensor coverage problems consider moving sensors. [10] consider
randomly moving mobile sensors and analyze coverage properties. Clustering has
been applied to dynamic coverage problems [11], but the sensors themselves
were considered as mobile units rather than beacons, which are deployed and
re-deployed by a moving robot. Sensor deployments via actuators are also
similar to our problem [12]. However, the problems of deployments of actuators
often focus on coverage and do not discuss the coupling of localization and
deployment.
In sensor coverage problems, some works incorporate a continuous sensor field
intensity [13, 14]. In the attenuated disk model, sensing quality decays with
the distance to the sensor. This modeling is similar in spirit to our modeling
of visual fiducials, but our model focuses as much on directionality as on
distance in determining the noise model.
Deployment planning algorithms for heterogeneous robot teams also relate to
our work. Such algorithms consider both cases where rewards are known a priori
and those in which rewards are randomly distributed [15, 16, 17]. Our problem
statement is similar in that there is competition for resources and
dependencies imposed by deployment decisions, but we judge the quality of the
assignment by the time required to execute the deployment plan rather than as
rewards accumulated at each assignment.
Localization. Mainstream underwater localization relies on acoustic sensors, such
as Doppler Velocity Log (DVL), long/short/ultrashort baseline acoustic
positioning systems, and multibeam or sidescan sonars [6, 18, 19]. While such
sensors allow the robot to navigate in large areas, their accuracy depends on
a number of external factors, including multipath effect and their overall
resolution, making acoustic sensors not best suited to support manipulation
tasks. Vision-based perception is ubiquitously adopted for many robotics tasks
[20], including underwater [21], given cameras' low cost and ability to
capture rich information about the surroundings. The literature classifies state
estimation methods along different axes, one being whether they
minimize reprojection errors of tracked features – indirect methods, such as
ORB-SLAM [22] – or the alignment error considering image intensity values –
direct methods, e.g., DSO [23]. Adding IMUs [24] can improve state estimation,
and including loop closure allows the odometry estimate to be corrected.
Underwater, however, vision-based perception still remains a challenge mainly
due to the haze, color loss, and featureless environments [25, 26]. Given the
precision required by the underwater construction task, we rely on visual
fiducial markers and extend the operation area of the robot by allowing the
robot to move them.
Fisheye cameras are often used to localize mobile robots [27, 28], but there
are no techniques for modeling the quality of features detected using a visual
fiducial marker.
Visual fiducial markers have been developed specifically for fisheye cameras
[29] with the purpose of providing better position information. Other visual
fiducial markers have been developed to reduce positioning noise [30]. To our
knowledge, no attempt has been made to directly model the uncertainty of
detecting visual fiducials. An exploration of the noise properties of visual
fiducials is presented by Kalaitzakis et al. [31], but the geometry of the
noise distribution is unexplored.
Free-floating construction systems. We are inspired by the limitations of our
autonomous underwater construction system [7]. This work builds on and extends
our autonomous underwater construction robot. Previously, it localized using a
single visual fiducial that provided a limited coverage area.
Existing aerial free-floating construction systems commonly make use of fixed-
place motion capture systems [32, 1, 2]. These motion capture systems provide
precise, low latency position information but require numerous precisely
calibrated cameras with limited coverage area. We want to provide coverage to
large areas with limited need for complex fixturing.
## III Problem Model
We consider the problem of deploying and moving infrastructure dynamically to
provide high quality localization information for a set of tasks at known
positions in global coordinates. The robot localizes using $m$ beacons which
can be placed and moved throughout the mission. The quality of information
coming from the beacons depends on how the robot is positioned relative to the
beacons. Information from each of the beacons can be combined to increase the
localization accuracy. Our goal is to find a mission plan, $\mathcal{A}$,
which consists of an ordered set of actions $a_{i}$. Each $a_{i}$ can
correspond to picking up a beacon, placing a beacon, or completing a task.
Each of the $n$ tasks, located at $t_{i}\in\mathbb{R}^{3}$, requires a high
enough precision of localization information. We model the quality of
information coming from a beacon $b_{i}$ using a function
$\Sigma(r_{i})\mapsto\Sigma_{r_{i}}\in\mathbb{R}^{3\times 3}$ which maps
relative positions ($r_{i}$) into covariance matrices that describe the noise
distribution of the information coming from the beacon. We assume zero mean
error. Information from multiple sensors can be combined by using sensor
fusion equations. We use the equations described by [33]. Algorithm 1 shows
how we combine noise distributions from multiple sensors.
Algorithm 1 Procedure to fuse covariance matrices of uncertain positions [33].
Input: covariance matrices $\Sigma_{1},\dots,\Sigma_{n}$. Output: fused covariance matrix $\Sigma$.
1: $\Sigma\leftarrow\Sigma_{1}$
2: for $i\in 2,\dots,n$ do
3: $K\leftarrow\Sigma(\Sigma+\Sigma_{i})^{-1}$
4: $\Sigma\leftarrow\Sigma-K\Sigma$
5: return $\Sigma$
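For illustration, a minimal NumPy sketch of this fusion rule could look as follows (the function name and example matrices are illustrative, not part of the original implementation):

```python
import numpy as np

def fuse_covariances(sigmas):
    """Sequentially fuse position covariance matrices as in Algorithm 1:
    each step applies a Kalman-like gain K = S(S + S_i)^-1 and shrinks
    the fused covariance S <- S - K S."""
    fused = np.asarray(sigmas[0], dtype=float)
    for sigma_i in sigmas[1:]:
        K = fused @ np.linalg.inv(fused + sigma_i)
        fused = fused - K @ fused
    return fused

# Two markers whose noise is concentrated along different directions:
# the fused uncertainty is smaller than either individual estimate.
s1 = np.diag([0.04, 0.001, 0.001])
s2 = np.diag([0.001, 0.04, 0.001])
print(fuse_covariances([s1, s2]))
```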
Each task $t_{i}$ requires a high enough precision to be completed. We model
the precision requirements using a scalar $C_{i}$ which is obtained by
applying a certainty approximation function $C(\Sigma)\mapsto
C_{i}\in\mathbb{R}$ to the fused covariance matrix $\Sigma$. We define
$C(\Sigma)$ as a function which approximates the probability that location
readings are inside of a given error range. Receiving a location reading
outside of the error range could cause the robot to fail at its task.
To move a beacon $b_{i}$, the robot must have precise enough location
information for both pickup and placement. This means that beacons must be
clustered to provide coverage of one another. For simplicity in our initial
exploration, we assume perfect placement of the beacons. This assumption is
reasonable for moving beacons on error-correcting foundations. In future work,
we plan to extend our method to model a decay in the quality of information of
moved beacons because of small placement errors.
For simplicity, we also assume a known relative orientation $R$ of the AUV. In
practice, the orientation of a free-floating robot can be sensed with a high
accuracy using out-of-the-box AHRS boards. An initial calibration step can be
used to measure the relative rotation for the set of beacons.
1D example.
Figure 2: Series of steps to cover a task at $t_{1}$ with the red and blue
beacons.
Consider a robot operating in a 1D world with no collisions. Two beacons, red
and blue in Figure 2, provide coverage with precision $\Sigma_{r}(r)=r^{2}$,
that is the quality of information decays quadratically with the distance to
the beacon. We set $C(\Sigma)=1-\Sigma$ because, in 1D, $\Sigma$ is a scalar
and can be used directly. The beacons start at positions $b_{1}=-0.1$ and
$b_{2}=0.1$. The robot is given one task to complete at position $t_{1}=0.7$.
Our fusion function in Algorithm 1 becomes
$\Sigma=r_{1}^{2}-\frac{r_{1}^{4}}{r_{1}^{2}+r_{2}^{2}}$. The task has a
requirement $1-\Sigma\geq 0.95$. Moving a beacon requires the same certainty.
Figure 2 shows an example of the problem.
We can compute a coverage area for a single beacon and a pair of beacons to
guide our creation of a simple mission plan: $[b_{i}-0.224,b_{i}+0.224]$. To
reach and cover our task at position $0.7$, we need to move one beacon to
position $0.7-0.224=0.476$. Moving the beacons will require multiple hops due
to their limited coverage area. Our final plan $\mathcal{A}$ is then
$\mathcal{A}=$ $\textsc{MoveBeacon}(b_{1},0.324)$,
$\textsc{MoveBeacon}(b_{2},0.548)$, $\textsc{complete}(t_{1})$.
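The arithmetic of this 1D example can be checked in a few lines of Python; the sketch below assumes only the stated model $\Sigma(r)=r^{2}$ and the fusion rule of Algorithm 1:

```python
import math

def fused_variance(r1, r2):
    # Algorithm 1 in 1D with Sigma(r) = r^2:
    # Sigma = r1^2 - (r1^2 / (r1^2 + r2^2)) * r1^2
    s = r1 ** 2
    k = s / (s + r2 ** 2)
    return s - k * s

# Single-beacon coverage radius: 1 - r^2 >= 0.95  =>  r <= sqrt(0.05)
print(math.sqrt(0.05))  # ~0.224
# Certainty at the task t1 = 0.7 with beacons at 0.324 and 0.548:
print(1 - fused_variance(0.7 - 0.548, 0.7 - 0.324))  # ~0.98 >= 0.95
```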
In the specific case of assembly, a task $t_{i}$ represents placing a block
at $t_{i}$’s location. For this application, we write
$\textsc{PlaceBlock}(t_{i})$ to mean placing a block at location $t_{i}$. We
also replace MoveBeacon with MoveMarker when we are dealing with a
reconfigurable visual fiducial marker.
## IV Noise characterization of visual fiducials
Figure 3: (a) Predicted and measured largest eigenvector directions in real
world experiment. The arrows extend from the marker’s position. (b) Results
from simulated corner noise. In both cases the predicted and measured
directions closely match.
The first step to implementing our assembly planning and localization method
is to accurately model the noise distribution of visual fiducial markers. To
understand how the noise distribution varies based on the relative position
between the fisheye camera and a marker, we built a simulator. The simulator
applies Gaussian distributed noise to the distorted corner positions of the
visual fiducial, then undistorts them using the Kannala-Brandt Camera model
[34] – typically used for fisheye lenses – and solves the Perspective-n-point
problem. This simulation captures the important sources of noise: sensor noise
and barrel distortion. We experimentally validate our simulator and noise
model in Section VI-C.
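A condensed sketch of such a simulator is shown below, using OpenCV's fisheye (Kannala-Brandt) camera model; the function name, intrinsics, noise level, and trial count are illustrative assumptions rather than the exact implementation:

```python
import numpy as np
import cv2

def simulate_pose_noise(corners_3d, rvec, tvec, K, D,
                        pixel_sigma=0.5, trials=500):
    """Monte Carlo sketch: project marker corners through a fisheye
    (Kannala-Brandt) model, jitter the distorted corners with Gaussian
    pixel noise, undistort, solve PnP, and return the covariance of the
    recovered translations."""
    rng = np.random.default_rng(0)
    corners, _ = cv2.fisheye.projectPoints(
        corners_3d.reshape(-1, 1, 3), rvec, tvec, K, D)
    positions = []
    for _ in range(trials):
        noisy = corners + rng.normal(0.0, pixel_sigma, corners.shape)
        # Back to normalized image coordinates (undoes barrel distortion).
        norm = cv2.fisheye.undistortPoints(noisy, K, D)
        ok, _, t = cv2.solvePnP(corners_3d, norm, np.eye(3), None)
        if ok:
            positions.append(t.ravel())
    return np.cov(np.array(positions).T)

side = 0.10  # 10 cm square marker (assumed)
obj = 0.5 * side * np.array([[-1, -1, 0], [1, -1, 0], [1, 1, 0], [-1, 1, 0]], float)
K = np.array([[300.0, 0, 320], [0, 300.0, 240], [0, 0, 1]])
D = np.array([0.05, 0.01, 0.0, 0.0])  # illustrative k1..k4 coefficients
cov = simulate_pose_noise(obj, np.zeros((3, 1)), np.array([[0.2], [0.1], [1.0]]), K, D)
```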
Figure 3 shows outputs from a real world experiment (a) and from our
simulation (b). We found that the noise distribution was highly structured and
could be predicted using only two values: the largest eigenvector and
eigenvalue. Further, the largest eigenvector is parallel to the position vector. In the
remainder of this section, we discuss how we predict the two components.
### IV-A Scale noise
Figure 4: Position readings from a static fisheye lens camera for a static
visual fiducial marker.
In our experiments on fiducial marker noise when viewed through a fisheye
lens, we found that the axis of largest noise very consistently pointed
towards the camera. Figure 4 shows an example of a set of relative position
measurements for a single visual fiducial measured by a static camera. The
position vector is marked in black and aligns closely with the largest
eigenvector of the noise distribution (green). We noticed this phenomenon to
be consistent across various positions. The largest eigenvector dominates the
noise distribution, and the other two are an order of magnitude smaller. We
provide empirical evidence for this observation in Section VI-C.
### IV-B A definition of $\Sigma(p)$
Algorithm 2 Find covariance matrix $\Sigma_{p}$ for a relative position $p$.
Input: $p$, $\beta$, $\lambda_{i}$. Output: $\Sigma_{p}$.
1: $\lambda_{*}\leftarrow\beta(p)$
2: $v_{1}\leftarrow\frac{p}{\|p\|}$
3: $v_{2}\leftarrow\textsc{orthogonal}(p)$
4: $v_{3}\leftarrow\textsc{orthogonal}(p,v_{2})$
5: $\Lambda\leftarrow\textsc{hstack}(v_{1},v_{2},v_{3})$
6: $S\leftarrow\textsc{diag}(\lambda_{*},\lambda_{i},\lambda_{i})$
7: return $\Lambda S\Lambda^{-1}$
Algorithm 2 shows our covariance matrix prediction procedure. It accepts as
arguments $p$, the relative position, $\beta$, a predictor of the largest
eigenvalue and $\lambda_{i}$, an upper bound on the two smaller eigenvalues.
The largest eigenvalue, $\lambda_{*}$, is predicted using a spline which is
calibrated on experimental data. $v_{1}$, $v_{2}$, and $v_{3}$ are our
predictions of the eigenvectors. The largest eigenvector, $v_{1}$, is
predicted to be the normalized position vector. The other two eigenvectors,
$v_{2}$ and $v_{3}$, are predicted to be orthogonal to $p$ and to one another.
Line 6 accumulates the eigenvalues into a $3\times 3$ diagonal matrix. The
matrices $\Lambda$ and $S$ are multiplied together to produce a predicted
covariance matrix which has the predicted eigenvalues and eigenvectors.
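In code, Algorithm 2 amounts to assembling an eigen-decomposition by hand; the NumPy sketch below uses a made-up quadratic $\beta$ in place of the calibrated spline predictor:

```python
import numpy as np

def predict_covariance(p, beta, lam_i=1e-4):
    """Sketch of Algorithm 2: build a covariance matrix whose largest
    eigenvector lies along the relative position p, with the largest
    eigenvalue given by a predictor beta(p)."""
    p = np.asarray(p, dtype=float)
    lam_star = beta(p)
    v1 = p / np.linalg.norm(p)
    # Two unit vectors orthogonal to p (and to each other).
    helper = np.array([1.0, 0.0, 0.0])
    if abs(v1 @ helper) > 0.9:  # avoid a near-parallel helper vector
        helper = np.array([0.0, 1.0, 0.0])
    v2 = np.cross(v1, helper); v2 /= np.linalg.norm(v2)
    v3 = np.cross(v1, v2)
    V = np.column_stack([v1, v2, v3])   # the matrix Lambda
    S = np.diag([lam_star, lam_i, lam_i])
    return V @ S @ np.linalg.inv(V)

# Illustrative quadratic stand-in for the calibrated spline predictor.
sigma = predict_covariance([0.5, 0.5, 1.0],
                           beta=lambda p: 1e-2 * np.linalg.norm(p) ** 2)
```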
## V Finding feasible assembly plans in practice
We model the assembly process as a set of manipulation tasks located at points
in 3D space. The probability of completing a task $C(\Sigma_{r})$ is the
probability that a block dropped will be inside of the acceptance area of the
slot it is aimed at. As a structure is built, it can occlude the view of
markers, therefore, the markers must be continually moved as the structure is
erected to increase $C(\Sigma_{r})$.
### V-A Computing $C(\Sigma_{r})$
To enable planning during the construction process, we define
$C:\Sigma_{r}\mapsto\mathbb{R}$, which takes as input a covariance matrix and
outputs the probability of successfully dropping a block. A block or marker is
successfully placed if the robot decides to initiate the placement action
within an acceptable range of the ideal position. This acceptable range
($\alpha$) is dictated by the design of the error-correcting construction
foundation. Assuming that the uncertainty in position is well approximated
with a multivariate Gaussian, $C(\Sigma_{r})$ is the probability that a random
sample position drawn from $\mathcal{N}(0,\Sigma_{r})$ is within the sphere of
radius $\alpha$.
Analytically computing this probability is challenging. Instead, we use a
conservative approximation. The largest eigenvalue ($\lambda_{*}$) is a
conservative estimate of the standard deviation of noise in any direction. So,
we assume that the noise distribution along all three coordinate axes is
independent and is equal to $\lambda_{*}$. This approximation results in the
closed form estimate $C^{*}(\Sigma_{r})$:
$C(\Sigma_{r})\geq\operatorname{erf}{\left(\frac{\alpha}{\sqrt{\lambda_{*}}\sqrt{2}}\right)}^{3}=C^{*}(\Sigma_{r}).$
(1)
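The bound can be evaluated in a few lines and compared against a Monte Carlo estimate of the sphere probability; the example covariance and $\alpha$ below are illustrative:

```python
import numpy as np
from math import erf, sqrt

def c_star(sigma_r, alpha):
    """Closed-form estimate of Eq. (1): treat the noise as isotropic
    with variance equal to the largest eigenvalue."""
    lam_star = np.linalg.eigvalsh(sigma_r)[-1]
    return erf(alpha / (sqrt(lam_star) * sqrt(2))) ** 3

def c_monte_carlo(sigma_r, alpha, n=200_000):
    """Empirical P(||x|| <= alpha) for x ~ N(0, Sigma_r)."""
    rng = np.random.default_rng(0)
    x = rng.multivariate_normal(np.zeros(3), sigma_r, size=n)
    return np.mean(np.linalg.norm(x, axis=1) <= alpha)

sigma = np.diag([4e-4, 1e-4, 1e-4])
print(c_star(sigma, 0.02), c_monte_carlo(sigma, 0.02))
```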
### V-B A layer-based approach for assembly
To assemble a structure, the robot must manipulate markers which can rest on
top of blocks in order to localize while blocks are placed. When placed, the
blocks can obscure sight of the markers. Planning around obscured markers
introduces difficult nonlinearities into the constraints for any solver hoping
to find feasible solutions.
To construct feasible plans efficiently without needing to model occlusions
between the structure as it is erected and the markers, we propose a layer-by-
layer algorithm. Algorithm 3 shows our strategy for generating feasible plans
for a structure with $n$ slots, using $m$ markers.
Our algorithm works by dividing the blocks into layers $l_{1},\dots,l_{h}$,
where $l_{1}$ is the bottom layer of blocks, $l_{2}$ is the layer to be placed
above $l_{1}$, and so on with $l_{h}$ being the topmost layer of blocks. For
each $i\in\\{1,2,\dots,h\\}$, we first cluster $l_{i}$ to obtain clusters of
width at most $r$, where $r$ is an empirical determination of a marker’s
coverage radius based on the bound in Equation 1. In our implementation, this
is achieved using a subroutine ClusterUntilRadius$(l_{i},r)$. This subroutine
performs $k$-means clustering on $l_{i}$ using a value of $k$ that is tuned,
via binary search, to be the minimum possible such that all cluster widths are
at most $r$.
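A sketch of this subroutine could look as follows, with scikit-learn's k-means standing in for whichever implementation was used, and cluster "width" taken as the maximum distance to the cluster center:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_until_radius(points, r):
    """Binary-search the minimum k such that every k-means cluster of
    `points` has radius (max distance to its center) at most r. The
    paper additionally enforces a minimum cluster size of m markers,
    omitted here for brevity."""
    def max_radius(k):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(points)
        d = np.linalg.norm(points - km.cluster_centers_[km.labels_], axis=1)
        return d.max(), km.labels_
    lo, hi, best = 1, len(points), None
    while lo <= hi:  # assumes the radius shrinks (roughly) monotonically with k
        k = (lo + hi) // 2
        rad, labels = max_radius(k)
        if rad <= r:
            best, hi = labels, k - 1  # feasible: try fewer clusters
        else:
            lo = k + 1                # infeasible: need more clusters
    return best
```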
The sub-procedure ExtractCenters finds the center point of each cluster. The
cluster centers are then passed into FindTour which computes a tour of the
cluster centers $O$ which has elements that index the clusters.
In each cluster produced by ClusterUntilRadius$(l_{i},r)$, we select $m$
points to serve as marker destinations. We choose the marker destinations to
be the $m$ farthest points from each other in the cluster. We then use the
sub-procedure WalkToCoverage to transition the markers between destinations.
We achieve this via a simple hopping strategy, like the one discussed in
Section III. This strategy repeatedly hops one marker to the outside of the
other marker’s coverage area, resembling a “gait” if one imagines the markers
to be a robot’s feet. In this way, we position the markers at the $m$
destinations within a cluster, place the blocks within that cluster, and
repeat for successive clusters in that layer. Since the maximum cluster width
is the coverage radius of a marker, we can ensure that each marker is always
covered by another marker, enabling us to continue the gait after the blocks
in a cluster have been placed.
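A 1D toy version of this hopping strategy, in the spirit of the example in Section III, might look like the following (the function name and the step fraction are illustrative assumptions):

```python
import numpy as np

def walk_to_coverage(markers, target, coverage_radius):
    """Hop the rear marker just inside the front marker's coverage area,
    toward the target, so every move stays covered by the other marker."""
    plan, markers = [], list(markers)
    hop = 0.9 * coverage_radius  # stay strictly inside the coverage area
    while max(markers) < target:
        i = int(np.argmin(markers))  # the rear marker hops past the front one
        markers[i] = min(max(markers) + hop, target)
        plan.append(("MoveMarker", i, round(markers[i], 3)))
    return plan

# Reaching the marker destination 0.548 from Section III's example:
print(walk_to_coverage([-0.1, 0.1], 0.548, coverage_radius=0.224))
```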
After blocks have been placed around each of the markers in the cluster, the
markers must be moved to make room for the remaining blocks. To do this, the
loop in lines 8 to 12 iterates over each marker, moves it on top of the nearest
block to it, and then places a block where the marker used to be. If the
cluster has fewer than $m$ points, two markers might share the same nearest
neighbor and cause the markers to be placed in the same location. We avoid
this edge case by requiring a minimum cluster size of $m$ in
ClusterUntilRadius$(l_{i},r)$; this is feasible via a reasonable assumption
that $m$ markers can be placed within radius $r$.
Algorithm 3 Layer-by-layer traversal.
Input: structure with slots $S=\\{s_{1},\dots,s_{n}\\}$, marker positions $M=\\{b_{1},b_{2},\dots,b_{m}\\}$. Output: assembly plan $\mathcal{A}$.
1: Divide $S$ into layers $l_{1},\dots,l_{h}$
2: $\mathcal{A}\leftarrow[]$
for $i\in\\{1,\dots,h\\}$ do
3: $\mathcal{C}\leftarrow\textsc{ClusterUntilRadius}(l_{i},r)$
4: $C\leftarrow\textsc{ExtractCenters}(\mathcal{C})$
5: $c_{1},c_{2},\dots,c_{k}\leftarrow\textsc{FindTour}(C)$ $\triangleright$ efficient tour on cluster centers
for $j=1$ to $k$ do $\triangleright$ process clusters in tour order
6: $C_{j}\leftarrow\textsc{ClusterOf}(c_{j})$
7: $\mathcal{A}\leftarrow\mathcal{A}.\textit{extend}\big{(}\textsc{WalkToCoverage}(M,c_{j})\big{)}$ $\triangleright$ move markers into cluster
for $s\in C_{j}\setminus M$ do $\triangleright$ place blocks in slots unoccupied by markers
8: $\mathcal{A}\leftarrow\mathcal{A}.\textit{append}\big{(}\textsc{PlaceBlock}(s)\big{)}$
for $s\in M$ do $\triangleright$ move markers to place remaining blocks
9: $p\leftarrow\textsc{NearestNeighbor}(s,C_{j})$
10: $p\leftarrow p+(0,0,1)$
11: $\mathcal{A}\leftarrow\mathcal{A}.\textit{append}\big{(}\textsc{MoveMarker}(s,p)\big{)}$
12: $\mathcal{A}\leftarrow\mathcal{A}.\textit{append}\big{(}\textsc{PlaceBlock}(s)\big{)}$
13: return $\mathcal{A}$
## VI Experiments
We implement our reconfigurable visual fiducial localization system in both
hardware and simulation, and the results are described in the following subsections.
### VI-A Experimental setup
Our hardware implementation is deployed on the Droplet AUV system [7, 5]. A
ROS node holds the known global position of the visual fiducial markers. As
the markers are moved, the node is notified and then marker readings are
offset accordingly before performing the sensor fusion steps described in
Algorithms 1 and 2.
For viewing visual fiducial markers, the robot is equipped with a FLIR Blackfly
camera with a Senko fisheye lens mounted facing downwards. The camera is
mounted in a 3 inch Blue Robotics acrylic enclosure with a dome. Figures 1 and 6
show the camera mounted on the robot, and Figure 5 (b) shows how the scene
looks through the camera.
### VI-B Hardware testing
Figure 5: (a) Reconfigurable visual fiducial design. (b) Robot’s eye view
during assembly with the reconfigurable fiducials. Figure 6: The AUV
performing the last placement of the two-hop maneuver.
To validate the concept of reconfigurable visual fiducials in practice, we
mounted visual fiducial markers on our error correcting connector geometry
[7]. We show the design in Figure 5. The fiducials provide error correction
during both pickup via the top handle (red in Figure 5) and placement.
We tested the concept of walking behaviors by implementing a 1D hopping
strategy. The robot was able to successfully perform manipulation tasks in a
large area with a side of 2.8 meters using cement blocks placed by hand. This
side is 57% longer than the widest side usable by our previous implementation.
Note that the extent of the line covered was bounded by the physical width of
the pool and not by the robot’s ability to stack blocks successfully. Figure 6
shows the AUV completing the maneuver.
To verify that the visual fiducial marker fusion procedure reduces the
localization noise, we recorded the measured positions and the number of valid
marker readings while the robot was stationary. When only one marker was determined
to be valid, the standard deviation of position measurements on the X axis was
$4.3\text{\,}\mathrm{c}\mathrm{m}$, but when two markers were valid, the
standard deviation was reduced to $0.69\text{\,}\mathrm{c}\mathrm{m}$; around
$6\times$ improvement. On the Y axis, we observe similar improvements: from
$6.0\text{\,}\mathrm{c}\mathrm{m}$ to $1.5\text{\,}\mathrm{c}\mathrm{m}$;
around $4\times$ improvement.
### VI-C Validation of noise model
To validate our prediction model of covariance matrices given in Algorithm 2,
we conducted both real world and simulation experiments. Our simulator
projects a marker at a given relative position into the camera and offsets the
corners according to a Gaussian distribution. This noise addition simulates
the effects of pixel flicker on corner detection.
Figure 3 (a) shows the results of our physical testing. We arrayed visual
fiducials in an area spanning about 1.4 meters on the positive X and Y axes.
This setup mimics the distances between markers used in practice. We predict
the direction of the largest eigenvector as the median of the measured
relative positions. We found that the predicted direction is accurate to
within $4$ degrees. This result shows that computing the direction of the
largest eigenvector as the position vector is effective in practice.
To more extensively test our prediction algorithm, we conducted a test using a
spline predictor of the largest eigenvalue trained on 6250 noise distributions
generated at known relative positions in our simulator. We set the upper bound
for the smallest two eigenvalues to $10^{-4}$. We set $\alpha$ to
$2\text{\,}\mathrm{c}\mathrm{m}$ for Equation 1. We tested the trained
predictions from Algorithm 2 using 5184 test relative positions not present in
the training data.
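As an illustration of such a predictor, one could fit a smooth interpolant over (relative position, largest eigenvalue) pairs; here SciPy's RBFInterpolator stands in for the spline, and the training data are synthetic:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
train_pos = rng.uniform(-1.5, 1.5, size=(6250, 3))         # relative positions
train_lam = 1e-3 * np.linalg.norm(train_pos, axis=1) ** 2  # synthetic eigenvalues

# beta(p): smooth predictor of the largest eigenvalue at a new position.
beta = RBFInterpolator(train_pos, train_lam, neighbors=50)
lam_star = beta(np.array([[0.4, 0.3, 1.0]]))[0]
```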
To evaluate the quality of Algorithm 2, we checked whether its predictions
were more or less conservative than the measured covariance matrices in the test
dataset, comparing the bound in Equation 1 for the predicted covariance
matrix $\Sigma_{r}^{*}$ against that for the measured covariance matrix $\Sigma_{r}$.
We found that in 98.3% of cases $C^{*}(\Sigma_{r}^{*})\leq C^{*}(\Sigma_{r})$.
In the other 1.7% of cases, $C^{*}(\Sigma_{r})$ was only smaller than
$C^{*}(\Sigma_{r}^{*})$ by at most 3.4%. To avoid this case, we configure our
planning algorithm so two markers are always visible.
When combining the two worst predictions using Algorithm 1, we find that the
fused covariance matrix is a conservative estimate. We combined the covariance
matrices of the two worst over-predictions of $C^{*}(\Sigma_{r})$ for both
measured and predicted covariances. We found that the predicted
$C^{*}(\Sigma_{r})$ is 40% lower (59%) than the measured bound (99%).
Our estimate of the noise is often very conservative, but we will show in the
following section that large structures still have a high predicted success
probability when planned using Algorithm 3.
### VI-D Assembly algorithm
Figure 7: (a) Visualization of plan checker. Green blocks represent markers.
(b) Robot moving markers in the DAVE simulator.
To test Algorithm 3 we use both a hand crafted plan checker and a full
underwater robot 3D simulator called DAVE [35]. Our plan checker checks lines
of sight to visual fiducials and computes the predicted certainty of every
step. We use the underwater robot simulator to determine whether a plan which
is feasible according to the plan check is also feasible under realistic
control noise. We ran a construction process with 74 build steps. Figure 7 (b)
shows the partially completed structure in the DAVE simulator. With realistic
control noise, the robot kept at least two markers in view 100% of the time.
Table I: Effect of increasing the cluster radius on plan efficiency and probability of success. After a certain cluster size, occlusions make construction impossible.

$r$ | Predicted $P(\text{success})$ | # steps
---|---|---
2.5 | 0 | 226
2.0 | 0.82 | 240
1.5 | 0.96 | 259
1.0 | 0.99 | 345
As the number of manipulation steps increases, the time to build a structure
increases. Our layer-based construction (Algorithm 3) takes as input a
parameter $r$ which describes the radius of clusters used for marker
placement. Between each cluster, the markers are moved in an expensive walking
procedure, so increasing $r$ could improve the efficiency of the construction
process but at the cost of reducing the reliability. Table I shows how
changing $r$ affects the number of steps required to build a 200 block
structure and the certainty afforded during construction with three markers.
We measure the predicted $P(\text{success})$ as the product of the probability
of success of every state in the construction process. We also tested a
pyramidal structure containing 1800 blocks. The assembly planning algorithm
took about 5 minutes to plan the structure and the predicted probability of
success for $r=1.5$ is $91\%$.
## VII Conclusions & future work
This paper proposes a novel strategy for localizing relative to error
correcting structures while planning the construction process. Our method is
shown to work in practice at small scale and at large scale in simulation. We
also show the robot being able to reliably complete the “hopping” strategy for
moving the markers, extending the area for assembly.
We plan to improve the scale and quality of our hardware implementation.
Planning for large scale construction with heterogeneous materials will
require adaptive and flexible clustering strategies. Materials which are not
well described by a bounding box may require more sophisticated strategies for
avoiding occlusions.
Our assembly process is currently limited to structures which have only a
single connected component per layer. If there is more than one connected
component in a layer, the markers can become stranded as they are lifted up
the structure. In the future, we plan to explore ways to increase the
flexibility of our method.
## References
* [1] Federico Augugliaro, Sergei Lupashin, Michael Hamer, Cason Male, Markus Hehn, Mark W Mueller, Jan Sebastian Willmann, Fabio Gramazio, Matthias Kohler and Raffaello D’Andrea “The Flight Assembled Architecture Installation: Cooperative Construction with Flying Machines” In _IEEE Control Systems_ 34.4 IEEE, 2014, pp. 46–64
* [2] Graham Hunt, Faidon Mitzalis, Talib Alhinai, Paul A Hooper and Mirko Kovač “3D Printing with Flying Robots” In _ICRA_ IEEE, 2014, pp. 4493–4499
* [3] Barrie Dams, Sina Sareh, Ketao Zhang, Paul Shepherd, Mirko Kovac and Richard J. Ball “Aerial Additive Building Manufacturing: Three-Dimensional Printing of Polymer Structures Using Drones” In _Proceedings of the Institution of Civil Engineers - Construction Materials_ 173.1, 2020, pp. 3–14
* [4] Sébastien Goessens, Caitlin Mueller and Pierre Latteur “Feasibility Study for Drone-Based Masonry Construction of Real-Scale Structures” In _AUTOMAT CONSTR_ 94, 2018, pp. 458–480
* [5] Samuel Lensgraf, Amy Sniffen, Evan Honnold, Jennifer Jain, Zachary Zitzewitz, Weifu Wang, Alberto Quattrini Li and Devin Balkcom “Droplet: Towards Autonomous Underwater Assembly of Modular Structures” In _RSS_ , 2021
* [6] Liam Paull, Sajad Saeedi, Mae Seto and Howard Li “AUV navigation and localization: A review” In _IEEE Journal of oceanic engineering_ 39.1 IEEE, 2013, pp. 131–149
* [7] Samuel Lensgraf, Devin Balkcom and Alberto Quattrini Li “Buoyancy Enabled Autonomous Underwater Construction with Cement Blocks” In _2023 IEEE International Conference on Robotics and Automation (ICRA)_ , 2023, pp. 5207–5213
* [8] Valerio Magnago, Luigi Palopoli, Roberto Passerone, Daniele Fontanelli and David Macii “Effective Landmark Placement for Robot Indoor Localization With Position Uncertainty Constraints” In _IEEE Transactions on Instrumentation and Measurement_ 68.11, 2019, pp. 4443–4455
* [9] Katherine Russell, Michael Schader, Kevin Andrea and Sean Luke “Swarm Robot Foraging with Wireless Sensor Motes”
* [10] Benyuan Liu, Olivier Dousse, Philippe Nain and Don Towsley “Dynamic Coverage of Mobile Sensor Networks” In _IEEE Transactions on Parallel and Distributed Systems_ 24.2, 2013, pp. 301–311
* [11] Dengxiu Yu, Hao Xu, C. L. P. Chen, Wenjie Bai and Zhen Wang “Dynamic Coverage Control Based on K-Means” In _IEEE Transactions on Industrial Electronics_ 69.5, 2022, pp. 5333–5341
* [12] Xu Li, Amiya Nayak, David Simplot-Ryl and Ivan Stojmenovic “Sensor Placement in Sensor and Actuator Networks” In _Wireless Sensor and Actuator Networks_ Wiley, 2010, pp. 263–294
* [13] Seapahn Megerian, Farinaz Koushanfar, Gang Qu, Giacomino Veltri and Miodrag Potkonjak “Exposure in Wireless Sensor Networks: Theory and Practical Solutions” In _Wireless Networks_ 8.5, 2002, pp. 443–454
* [14] Bang Wang “Coverage Problems in Sensor Networks: A Survey” In _ACM Computing Surveys_ 43.4, 2011, pp. 1–53
* [15] Chris Yu Hsuan Lee, Graeme Best and Geoffrey A. Hollinger “Stochastic Assignment for Deploying Multiple Marsupial Robots” In _2021 International Symposium on Multi-Robot and Multi-Agent Systems (MRS)_ Cambridge, United Kingdom: IEEE, 2021, pp. 75–82
* [16] Chris Yu Hsuan Lee, Graeme Best and Geoffrey A. Hollinger “Optimal Sequential Stochastic Deployment of Multiple Passenger Robots” In _2021 IEEE International Conference on Robotics and Automation (ICRA)_ Xi’an, China: IEEE, 2021, pp. 8934–8940
* [17] Colin Mitchell, Graeme Best and Geoffrey Hollinger “Sequential Stochastic Multi-Task Assignment for Multi-Robot Deployment Planning” In _2023 IEEE International Conference on Robotics and Automation (ICRA)_ London, United Kingdom: IEEE, 2023, pp. 3454–3460
* [18] Yvan R Petillot, Gianluca Antonelli, Giuseppe Casalino and Fausto Ferreira “Underwater Robots: From Remotely Operated Vehicles to Intervention-Autonomous Underwater Vehicles” In _IEEE Robotics & Automation Magazine_ 26.2 IEEE, 2019, pp. 94–101
* [19] Francesco Maurelli, Szymon Krupiński, Xianbo Xiang and Yvan Petillot “AUV localisation: a review of passive and active techniques” In _International Journal of Intelligent Robotics and Applications_ Springer, 2021, pp. 1–24
* [20] Cesar Cadena, Luca Carlone, Henry Carrillo, Yasir Latif, Davide Scaramuzza, José Neira, Ian Reid and John J. Leonard “Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age” In _IEEE Transactions on Robotics_ 32.6, 2016, pp. 1309–1332
* [21] John McConnell, Ivana Collado-Gonzalez and Brendan Englot “Perception for Underwater Robots” In _Current Robotics Reports_ 3.4, 2022, pp. 177–186
* [22] Raúl Mur-Artal, J… Montiel and Juan D. Tardós “ORB-SLAM: A Versatile and Accurate Monocular SLAM System” In _IEEE Transactions on Robotics_ 31.5, 2015, pp. 1147–1163
* [23] Jakob Engel, Vladlen Koltun and Daniel Cremers “Direct Sparse Odometry” In _IEEE Transactions on Pattern Analysis and Machine Intelligence_ 40.3, 2018, pp. 611–625
* [24] Carlos Campos, Richard Elvira, Juan J. Gómez Rodríguez, José M. M. Montiel and Juan D. Tardós “ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual–Inertial, and Multimap SLAM” In _IEEE Transactions on Robotics_ 37.6, 2021, pp. 1874–1890
* [25] Alberto Quattrini Li, A. Coskun, S. Doherty, S. Ghasemlou, A. Jagtap, M. Modasshir, S. Rahman, A. Singh, M. Xanthidis, J. O’Kane and I. Rekleitis “Experimental Comparison of Open Source Vision-Based State Estimation Algorithms” In _2016 International Symposium on Experimental Robotics_ , Springer Proceedings in Advanced Robotics Cham: Springer International Publishing, 2017, pp. 775–786
* [26] Bharat Joshi, Sharmin Rahman, Michail Kalaitzakis, Brennan Cain, James Johnson, Marios Xanthidis, Nare Karapetyan, Alan Hernandez, Alberto Quattrini Li, Nikolaos Vitzilaios and Ioannis Rekleitis “Experimental Comparison of Open Source Visual-Inertial-Based State Estimation Algorithms in the Underwater Domain” In _2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , 2019, pp. 7227–7233
* [27] Sebastian Houben, Marcel Neuhausen, Matthias Michael, Robert Kesten, Florian Mickler and Florian Schuller “Park Marking-Based Vehicle Self-Localization with a Fisheye Topview System” In _Journal of Real-Time Image Processing_ 16.2, 2019, pp. 289–304
* [28] Yewei Huang, Junqiao Zhao, Xudong He, Shaoming Zhang and Tiantian Feng “Vision-Based Semantic Mapping and Localization for Autonomous Indoor Parking” In _2018 IEEE Intelligent Vehicles Symposium (IV)_ , 2018, pp. 636–641
* [29] Jaouad Hajjami, Jordan Caracotte, Guillaume Caron and Thibault Napoleon “ArUcOmni: Detection of Highly Reliable Fiducial Markers in Panoramic Images” In _2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)_ Seattle, WA, USA: IEEE, 2020, pp. 2693–2699
* [30] Burak Benligiray, Cihan Topal and Cuneyt Akinlar “STag: A Stable Fiducial Marker System” In _CoRR_ abs/1707.06292, 2017
* [31] Michail Kalaitzakis, Brennan Cain, Sabrina Carroll, Anand Ambrosi, Camden Whitehead and Nikolaos Vitzilaios “Fiducial Markers for Pose Estimation” In _Journal of Intelligent & Robotic Systems_ 101.4, 2021, pp. 71
* [32] Federico Augugliaro, Ammar Mirjan, Fabio Gramazio, Matthias Kohler and Raffaello D’Andrea “Building Tensile Structures with Flying Machines” In _2013 IEEE/RSJ International Conference on Intelligent Robots and Systems_ , 2013, pp. 3487–3492
* [33] Randall C Smith and Peter Cheeseman “On the representation and estimation of spatial uncertainty” In _The international journal of Robotics Research_ 5.4 Sage Publications Sage CA: Thousand Oaks, CA, 1986, pp. 56–68
* [34] Juho Kannala and Sami S Brandt “A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses” In _IEEE transactions on pattern analysis and machine intelligence_ 28.8 IEEE, 2006, pp. 1335–1340
* [35] Mabel M. Zhang, Woen-Sug Choi, Jessica Herman, Duane Davis, Carson Vogt, Michael McCarrin, Yadunund Vijay, Dharini Dutia, William Lew, Steven Peters and Brian Bingham “DAVE Aquatic Virtual Environment: Toward a General Underwater Robotics Simulator” In _2022 IEEE/OES Autonomous Underwater Vehicles Symposium (AUV)_ , 2022
1 Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16, 14482 Potsdam, Germany
2 Observational Astrophysics, Division of Astronomy and Space Physics, Department of Physics and Astronomy, Uppsala University, Box 516, 75120 Uppsala, Sweden
3 Max-Planck-Institut für Radioastronomie (MPIfR), Auf dem Hügel 69, 53121 Bonn, Germany
4 Physics Department, University of Crete, GR 71003, Heraklion, Greece
5 Institute of Astrophysics, Foundation for Research and Technology – Hellas, GR-70013 Heraklion, Greece
6 Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA
# An expanded ultraluminous X-ray source catalogue
Based on observations obtained with XMM-Newton, an ESA science mission with instruments
and contributions directly funded by ESA Member States and NASA.
M. C. i Bernadich1,2,3, A. D. Schwope1, K. Kovlakas4,5, A. Zezas4,5,6 and I. Traulsen1
(Received ; accepted )
###### Abstract
Context. Ultraluminous X-ray sources (ULXs) are non-nuclear, extragalactic,
point-like X-ray sources whose luminosity exceeds that of the Eddington limit
of an accreting stellar-mass black hole
($M_{\textrm{BH}}<10\textrm{M}_{\odot}$, $L_{\textrm{X}}>10^{39}$ erg s-1).
They are excellent laboratories for extreme accretion physics, probes for the
star formation history of galaxies, and constitute precious targets for the
search of intermediate-mass black holes. As the sample size of X-ray data from
modern observatories such as XMM-Newton and Chandra increases, producing
extensive catalogues of ULXs and studying their collective properties has
become both a possibility and a priority.
Aims. We build a clean, updated ULX catalogue based on one of the most recent
XMM-Newton X-ray serendipitous survey data releases, 4XMM-DR9, and the most
recent and exhaustive catalogue of nearby galaxies, HECATE. We perform a
preliminary population study to test if the properties of the expanded XMM-
Newton ULX population are consistent with previous findings.
Methods. We perform positional cross-matches between XMM-Newton sources and
HECATE objects to identify host galaxies. We filter out known foreground and
background sources and other interlopers by finding counterparts in external
catalogues and databases such as Gaia DR2, SDSS, PanSTARRS1, the NASA/IPAC
Extragalactic Database (NED) and SIMBAD. Manual inspection of image data from
PanSTARRS1 and NED is occasionally performed to investigate the
nature of some individual sources. We use distance and luminosity arguments to
identify ULX candidates. Source parameters from 4XMM-DR9, galaxy parameters
from HECATE, and variability parameters from 4XMM-DR9s are used to study the
spectral, abundance and variability properties of the updated XMM-Newton ULX
sample.
Results. We identify 779 ULX candidates, 94 of which hold
$L_{\textrm{X}}\gtrsim 5\times 10^{40}$ erg s-1. We find that spiral galaxies
are more likely to host ULXs. For early-spiral galaxies we also find a
number of ULX candidates per unit star formation rate that is consistent with
previous studies, while we also attest the existence of a significant ULX
population in elliptical and lenticular galaxies. Candidates hosted by late-
type galaxies tend to present harder spectra and to undergo more frequent and more
extreme inter-observation variability than the ones hosted by early-type
galaxies. $\sim$30 candidates with $L_{\textrm{X}}>10^{41}$ erg s-1 are also
identified, constituting the most interesting candidates for intermediate-mass
black hole searches.
Conclusions. We have built the largest ULX catalogue to date. Our results
regarding the spectral and abundance properties of ULXs confirm the findings
made by previous studies based on XMM-Newton and Chandra data, while our
population-scale study on variability properties is unprecedented. Our study,
however, provides limited insight on the properties of the bright ULX
candidates ($L_{\textrm{X}}\gtrsim 5\times 10^{40}$ erg s-1) due to the small
sample size. The expected growth of X-ray catalogues and potential future
follow-ups will aid in drawing a more clear picture.
###### Key Words.:
accretion, accretion disks – black hole physics – catalogs – stars: black
holes – X-rays: binaries
## 1 Introduction
Ultraluminous X-ray sources (ULXs) are extragalactic, point-like, X-ray
sources whose luminosity exceeds that of the Eddington limit of an accreting
stellar-mass black hole (StBH, $M_{\textrm{BH}}<10\textrm{M}_{\odot}$,
$L_{\textrm{Edd}}{\approx}10^{39}$ erg s-1), and that are not the central
source of their host galaxy. These objects have been the subject of active
research since the advent of X-ray astronomy (see Kaaret et al. 2017 for a
review) and, for a time, due to the scaling of the Eddington limit with mass,
it was believed that they hosted accreting BHs of masses between those of
StBHs and supermassive BHs. Nowadays, they are more commonly associated with
super-Eddington accretion onto common compact objects such as StBH and neutron
stars (NSs), with a few exceptions where BHs of higher mass need to be invoked
(Bachetti 2016).
The current understanding comes from three main lines of evidence. Firstly,
predictions of the formation rate for intermediate-mass BHs (IMBHs,
$100\textrm{M}_{\odot}<M_{\textrm{BH}}<10^{6}\textrm{M}_{\odot}$) with donor
companions fall short of explaining the observed abundances of ULXs (King
2004; Madhusudhan et al. 2006). Secondly, ULXs are most commonly associated
with star-forming galaxies (Swartz et al. 2011), or even young elliptical
galaxies with recent star formation events, rather than old elliptical ones
with no star formation (Kim & Fabbiano 2004, 2010). This is consistent with
ULXs belonging to a high-luminosity extrapolation of the X-ray binary
population, whose abundances correlate with the star-formation rate (SFR) and
the stellar mass ($M_{*}$) of their host galaxies (Gilfanov 2004; Grimm et al.
2003; Mineo et al. 2012). And finally, but perhaps most importantly, the
direct observation of regular pulsations in some ULXs unmistakably points
towards accretion onto a NS in binary systems. Famous examples of neutron
star-powered ULXs (NS-ULXs) are M82 X-2 (Bachetti et al. 2014), NGC 7793 P13
(Fürst et al. 2016; Israel et al. 2016), NGC 5907 ULX1 (Israel et al. 2017) or
NGC 300 ULX1 (Carpano et al. 2018). Since some of these objects present
$L_{\textrm{X}}>10^{40}$ erg s-1, they are clear cases of super-Eddington
accretion. In hand with these observations, population synthesis models even
suggest that NS-ULXs constitute a significant fraction of the ULX population
(Wiktorowicz et al. 2017).
The prospect of super-Eddington accretion in ULXs has sparked a lot of
interest in the modelling of ULX systems. Many works have shown that
sufficiently extreme accretion rates to power ULXs can be achieved during
certain phases of regular HMXB and LMXB evolution. In fact, these rates can
explain most of the ULX population (King 2002; Rappaport et al. 2005;
Wiktorowicz et al. 2015; Pavlovskii et al. 2017). Yet, many aspects of the
physics of super-Eddington accretion itself are poorly understood, especially
in cases where the material falls onto strongly magnetized neutron stars,
and very rich theoretical research is being performed to model this phenomenon
(see Kaaret et al. 2017 for a comprehensive discussion of this topic, which
otherwise lies outside of the scope of this introduction).
Nonetheless, ULXs are still good places to look for IMBHs, as a handful
of especially bright candidates still remains. ESO 243-49 HLX-1 has been
detected with both luminosities of $L_{\textrm{X}}{\approx}10^{42}$ erg s-1 and
spectral state transitions typical of sub-Eddington Galactic BH binaries
(GBHBs, Remillard & McClintock 2006), being consistent with a mass of
$\sim$$10^{3}$ M⊙ (Servillat et al. 2011). M82 X-1 showcases a similar
behavior (Kong et al. 2007), with the addition of quasi-periodic oscillations
(QPOs) also characteristic of GBHBs (Remillard & McClintock 2006) that
grant it a mass estimate of $400\ \textrm{M}_{\odot}$
(Pasham et al. 2014). And just like in GBHBs, powerful radio jets are also
present in the IMBH candidates ESO 243-49 HLX-1 (Webb et al. 2012; Cseh et al.
2014) and NGC 2276-3c (Mezcua et al. 2015).
We have so far outlined the relevance of ULXs as proxies of recent star formation,
as laboratories for accretion physics, and as targets for IMBH searches. However, ULX studies
are limited by their small numbers and large distances. For this reason, a lot
of effort has been put into building compilations of ULX candidates from X-ray
surveys. Early works relied on ROSAT data (e.g. Roberts & Warwick 2000; Liu &
Bregman 2005; Liu et al. 2006; Colbert & Ptak 2002), yielding up to $\sim$100
candidates. More recent ones have been based on XMM-Newton and Chandra data.
XMM-Newton-based catalogues have been highly successful, yielding up to $\sim$
300 ULX candidates (Walton et al. 2011; Earnshaw et al. 2019), but they are
somewhat hampered by the low angular resolution of the observatory. On the
other hand, works based on the Chandra telescope have provided better
identification of individual sources thanks to its crisp angular resolution
(e.g. Swartz et al. 2004, 2011; Wang et al. 2016; Kovlakas et al. 2020).
Of particular interest to us are the works of Earnshaw et al. (2019) and
Kovlakas et al. (2020), since they both constructed ULXs samples from the
largest and most recent XMM-Newton and Chandra samples available at the time.
Earnshaw et al. (2019) is, in fact, our main predecessor and inspiration
regarding methodology. They identify 384 ULX candidates within the fourth XMM-
Newton data release (3XMM-DR4, Rosen et al. 2016;
https://xmmssc-www.star.le.ac.uk/Catalogue/3XMM-DR4/UserGuide_xmmcat.html), using the Third
Reference Catalogue of Bright Galaxies (RC3, de Vaucouleurs et al. 1991) and
the Catalogue of Neighbouring Galaxies (Karachentsev et al. 2004) as
references for the host galaxies. Their study focuses mostly on the spectral
properties of ULXs, and they find that ULXs tend to be somewhat harder in
late-type galaxies, and that their hardness ratios are in general akin to those
of X-ray binaries below the Eddington threshold. Kovlakas et al. (2020) find
629 ULX candidates within the Chandra Source Catalog 2.0 (CSC2,
https://cxc.harvard.edu/csc2/) using the Heraklion Extragalactic
CATaloguE (HECATE, Kovlakas et al. 2021) as the reference for host galaxies,
which comes with valuable information such as the SFR and $M_{*}$. Their study
focuses heavily on the scaling properties of the ULX content of galaxies with
the SFR and $M_{*}$ parameters, and finds that SFR is the determinant
parameter for late-type galaxies while $M_{*}$ rules over the abundance in
late-type galaxies, in accordance with recent population synthesis models
(Wiktorowicz et al. 2017).
Now, the ninth XMM-Newton data release (4XMM-DR9, Webb et al. 2020;
http://xmmssc.irap.omp.eu), its stacked version (4XMM-DR9s, Traulsen et al. 2020) and
the HECATE catalogue are available to us, making it possible to build the
largest catalogue of extragalactic, non-nuclear, point-like X-ray sources
based on XMM-Newton data. We then use the catalogue to preliminarily explore
the distribution of ULX spectral and variability properties, and the
dependence of the ULX abundance of galaxies on their SFR and $M_{*}$ across XMM-Newton’s
sky.
This paper is organized as follows: in the following section we explain the
specifics of selection in archival data, in Section 3 we describe our
methodology for identifying ULX candidates and other interloping objects, in
Section 4 we expose our main results and in Section 5 we discuss some of the
limitations and implications of our work and compare it with previous ones.
## 2 Data samples
### 2.1 The X-ray samples
We used the Fourth XMM-Newton Serendipitous Source Catalog, Ninth Data
Release, (4XMM-DR9, Webb et al. 2020) as the basis for our ULX catalogue. It
lists 810 795 individual detections of 550 124 unique sources discovered
across 11 204 observations, and it represents an increase of 177 396 sources
with respect to 3XMM-DR4, the resource used in the previous ULX study based on
XMM-Newton data (Earnshaw et al. 2019). The columns we used during the
construction are:
* •
`DETID` and `SRCID`, the identification number of detections and sources,
* •
`SC_RA`, `SC_DEC` and `SC_POSERR`, the source J2000.0 sky coordinates and
their positional uncertainty,
* •
`SC_EXTENT`, the measured extension of the source,
* •
`SC_DET_ML`, the detection likelihood of sources, taken from the highest value
of `DET_ML` among their detections,
* •
`EP_8_FLUX` and `EP_8_FLUX_ERR`, the measured X-ray flux of detections and its
uncertainty in the 0.2$-$12 keV band, derived from the EPIC photon counts,
* •
`SC_EP_8_FLUX` and `SC_EP_8_FLUX_ERR`, the averaged X-ray flux of sources and
its uncertainty in the 0.2$-$12 keV band, derived from the EPIC photon counts,
* •
`SC_HR`$i$, where $i$ runs from 1 to 4, the source hardness ratios from the
source count rates in the respective energy bands,
* •
and `SC_SUM_FLAG`, the summary quality flag of a unique source, which runs
from 0 to 5. 0 means that all detections of the source are trustworthy, while
5 means that at least one detection is most likely spurious. Its value is
given by the highest value of `SUM_FLAG` among all detections of the source.
Since the XMM-Newton FWHM resolution is 6″ (ESA: XMM-Newton SOC 2019), we
included only sources with $\verb|SC_EXTENT|<6\arcsec$ to restrict the sample
to point-like sources. We also considered only sources with $\verb|SC_DET_ML|>8$. As
these parameters are based on the worst result for all the detections of a
source, it can happen that an otherwise point-like source is counted as
extended if in one of its detections it is measured as such. We nevertheless
excluded them from the catalogue because other source parameters that consist
of averaged detection parameters are most likely unreliable. This left 452 602
sources available for the catalogue.
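Assuming the catalogue has been exported to a table (e.g. via astropy), this screening is a simple row filter over the columns listed above; the file name below is hypothetical:

```python
import pandas as pd

xmm = pd.read_csv("4xmm_dr9_slim.csv")  # hypothetical local export of 4XMM-DR9
point_like = xmm[(xmm["SC_EXTENT"] < 6) & (xmm["SC_DET_ML"] > 8)]
print(len(point_like))  # 452 602 sources in our case
```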
Additionally, we used the stacked version of 4XMM-DR9. This is the second XMM-
Newton serendipitous source catalogue built from overlapping observations
(4XMM-DR9s, Traulsen et al. 2020), which compiles variability information for
218 283 sources detected in 6 604 overlapping XMM-Newton observations, an
improvement of 146 332 sources with respect to the first version (Traulsen et
al. 2019). The strength of 4XMM-DR9s is that, from the overlapping
observations, extra source variability parameters that are not included in the
bare 4XMM-DR9 are computed. From it, we used the following columns:
* •
`SRCID`, the source identification number,
* •
`RA`, `DEC` and `RADEC_ERR`, the source J2000.0 sky coordinates and their
uncertainty,
* •
`N_CONTRIB`, the number of times a source contributes to the computation of
variability, usually equal to the number of times it has been observed,
* •
`VAR_PROB`, the probability of a source not showing inter-observation
variability, computed from the reduced $\chi^{2}$ of EPIC flux variability,
* •
`FRATIO`, the $F_{\textrm{max}}/F_{\textrm{min}}$ ratio between the highest
and lowest observed fluxes in a single source.
### 2.2 The galaxy sample
We used the Heraklion Extragalactic CATaloguE (HECATE, Kovlakas et al. 2021,
submitted) compilation of 204 733 galaxies within 200 Mpc. Built from the
HyperLEDA catalogue (Makarov et al. 2014) and other databases such as NED,
SDSS and 2MASS, it is a much more complete galaxy compilation than the Third
Reference Catalogue of Bright Galaxies (RC3, de Vaucouleurs et al. 1991),
traditionally used in previous ULX studies (Swartz et al. 2011; Wang et
al. 2016; Earnshaw et al. 2019). A more detailed description of its contents
is available in Kovlakas et al. (2021), but here we mention the columns we used:
* •
`PGC`, the Principal Galaxy Catalogue identification number, originating from
the Principal Catalogue of Galaxies (Paturel et al. 1985) and still used in
HyperLEDA,
* •
`RA` and `DEC`, the J2000.0 coordinates of the galactic center,
* •
`R1` and `R2`, the minor and major D25 isophotal radii,
* •
`PA`, the North-to-East position angle of the major axis,
* •
`D` and `D_ERR`, the galaxy distance and its uncertainty,
* •
`T`, the Hubble Type value ($T_{\textrm{H}}$) of a galaxy,
* •
`logSFR_HEC`, the decimal logarithm of the SFR estimate,
* •
and `logM_HEC`, the decimal logarithm of $M_{*}$.
The SFR values in the HECATE are based on infrared calibrations, which tend to
overestimate the SFR in early-type galaxies (Kovlakas et al. 2021). Therefore,
SFR values were only inspected for galaxies with $T_{\textrm{H}}\geq 0$.
### 2.3 The interloper samples
Our X-ray catalogue is inevitably populated by a fraction of non-ULX
contaminating objects such as background AGNs or foreground stars. Therefore,
we built a filtering pipeline (see section 3.1) to identify interlopers in
external catalogues and databases of already known objects. These consist of:
the second Gaia data release (GaiaDR2, Gaia Collaboration 2018), the Tycho-2
catalogue of bright stars (Tycho2, Høg et al. 2000), the 14th Sloan Digital
Sky Survey data release (SDSS-DR14, Blanton et al. 2017), the 13th edition of
the Véron-Cetty & Véron catalogue of QSOs and AGNs (VéronQSO, Véron-Cetty &
Véron 2010), the SIMBAD database (Wegner et al. 2000), the Panoramic Survey
Telescope and Rapid Response System database (https://panstarrs.stsci.edu;
PanSTARRS1, Flewelling et al. 2016) from the PanSTARRS1 surveys (Chambers et
al. 2016) and the NASA/IPAC Extragalactic Database
(https://ned.ipac.caltech.edu/; NED).
All the cross-matches were performed using the TOPCAT
(http://www.star.bris.ac.uk/mbt/topcat/) and STILTS
(http://www.star.bris.ac.uk/~mbt/stilts/) software tools, which were used to
manipulate the tables in all of the following steps (Taylor 2005).
## 3 Methods
### 3.1 The automatic filtering process
Correlation of sources with galaxies. We selected all 4XMM-DR9 sources that
overlapped with the isophotal ellipses of HECATE within their positional
uncertainty. Catalogue entries matched to galaxies with available measurements
of `R1`, `R2` and `PA` were labelled with $\verb|MATCH_FLAG|=0$. When `PA` was
unknown, detections of sources matched with the minor isophotal circle were
labelled as $\verb|MATCH_FLAG|=1$, while detections of sources within the
annulus drawn by `R1` and `R2` were given $\verb|MATCH_FLAG|=2$. When both
`PA` and `R2` were unlisted, sources matched to the circle drawn by `R1` were
labelled as $\verb|MATCH_FLAG|=3$. Additionally, we stored in `n_Galaxies` the
number of galaxies a source has been matched with. Detections of sources with
$\verb|n_Galaxies|>1$ are listed more than once and treated independently for
each galaxy, being identifiable by the unique `DET_PGC_ID` and `SRC_PGC_ID`
detection/source-galaxy identity numbers. This way, a comprehensive catalogue
of 50 446 entries was built, out of which 49 816 have $\verb|n_Galaxies|=1$
and 48 206 present $\verb|MATCH_FLAG|=0$.
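The geometry of this match can be sketched as follows. This is a simplified
illustration with our own helper name, not the actual pipeline (which was run
with TOPCAT); it uses a flat-sky approximation and omits the positional
uncertainty term:

```python
import numpy as np

def in_isophotal_ellipse(ra, dec, gal_ra, gal_dec, r1, r2, pa_deg):
    """Flat-sky test of whether a source falls inside a galaxy's D25
    ellipse. All angles in degrees; r1/r2 are the minor and major
    isophotal radii and pa_deg the North-to-East position angle of
    the major axis."""
    dx = (ra - gal_ra) * np.cos(np.radians(gal_dec))  # offset towards East
    dy = dec - gal_dec                                # offset towards North
    pa = np.radians(pa_deg)
    along_major = dx * np.sin(pa) + dy * np.cos(pa)
    along_minor = dx * np.cos(pa) - dy * np.sin(pa)
    return (along_minor / r1) ** 2 + (along_major / r2) ** 2 <= 1.0
```

When `PA` is unknown, the same test degenerates into the `R1` circle and the
`R1`–`R2` annulus checks that produce the `MATCH_FLAG` values 1 to 3.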
Figure 1: Top: XMM-Newton observation 0762610401, including galaxy NGC 3631
with all detected sources. Bottom: optical Pan-STARRS1 (Chambers et al. 2016)
image of the same galaxy. Non-nuclear sources are highlighted with red
crosses, while a green cross signals the central source. The image in the
visible range covers a slightly smaller sky area including the four most
central sources from the X-ray image.
Identification of central sources. At low redshifts, AGNs typically present
luminosities of $L_{\textrm{X}}>10^{41}$ erg s-1 (Brandt & Alexander 2015).
However, some have been observed to overlap with the ULX luminosity regime (Ho
et al. 2001; Eracleous et al. 2002; Ghosh et al. 2008; Zhang et al. 2009).
Therefore, we identified all sources whose position overlapped within three
times their uncertainty with a circle of radius 3″ around the center of
their host galaxy. 3 658 entries in our catalogue were thus labelled as central
sources. We also checked for sources with more than one potential host galaxy
($\verb|n_Galaxies|>1$) that had been flagged as central-source candidates in
one of their iterations, and flagged them as central-source candidates in all
of their other galaxy associations to indicate their potentially nuclear
nature. This highlighted 137 further entries, all of which correspond to
optically coincident galaxies, whether physically related or not. Figure 1 illustrates
how essential it is to identify these central sources before classifying them
as ULXs, as otherwise they would easily slip through and contaminate our
sample.
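A sketch of this criterion (our own naming; coordinates in degrees,
uncertainties in arcsec, flat-sky approximation):

```python
import numpy as np

def is_central(src_ra, src_dec, src_poserr, gal_ra, gal_dec):
    """True if the source's 3-sigma error circle overlaps a 3 arcsec
    circle around the galaxy centre."""
    dx = (src_ra - gal_ra) * np.cos(np.radians(gal_dec)) * 3600.0
    dy = (src_dec - gal_dec) * 3600.0
    separation = np.hypot(dx, dy)  # arcsec
    return separation <= 3.0 * src_poserr + 3.0
```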
Identification of foreground stellar objects.
X-ray emission from stars is ubiquitous across the Hertzsprung-Russell diagram
and has manifold origins, be it active coronae, hot stellar winds or binary
accretion, which can lead to the misclassification of Galactic objects as
extragalactic ULXs. We performed a cross-match of unflagged objects with
GaiaDR2 making use of the query tool provided by CDS, Strasbourg, and then
with Tycho2, to identify potential contaminating foreground stars. In both
cases, a positional overlap within three times the positional uncertainty was
required on both sides of the cross-match. Assuming that all matched objects
were stars, a very stringent constraint of
$\log(F_{\textrm{X}}/F_{\textrm{V}})<-2.2$ was used to decide whether the
matched object can explain the X-ray luminosity of the XMM-Newton source,
where $F_{\textrm{X}}$ is the source X-ray flux provided by `SC_EP_8_FLUX` and
$F_{\textrm{V}}$ its flux in optical light. This limit is established in
consistency with the work of Freund et al. (2018), where
$\log(F_{\textrm{X}}/F_{\textrm{BOL}})=-2.2$ is considered the maximal
luminosity ratio for early-type stars, the ones presenting the largest share
of X-ray luminosity. However, since the bolometric flux $F_{\textrm{BOL}}$ is
unavailable in most existing catalogues, we approximated it by the optical
flux of the star, $F_{\textrm{V}}$. We used the formula from Maccacaro et al.
(1988) to write
$\log\left(\frac{F_{\textrm{X}}}{F_{\textrm{V}}}\right)=\log{F_{\textrm{X}}}+\frac{m_{\textrm{V}}}{2.5}+5.37<-2.2\textrm{,}$
(1)
where $m_{\textrm{V}}$ is the optical magnitude of the studied object. In
cases where $m_{\textrm{V}}$ is not listed, we used the G-band magnitude
$m_{\textrm{G}}$ instead, which typically has the same value as the V-band
magnitude for bright enough sources. From GaiaDR2, we used the listed G-band
magnitude values `Gmag`, and for Tycho2 we used the VT-magnitude measurements
`VTmag`. This resulted in 2 257 entries flagged as stars from GaiaDR2, and 96
entries from Tycho2.
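The test of equation (1) amounts to a one-line computation. As an illustrative
sketch (the helper name and the example numbers are ours):

```python
import numpy as np

def log_fx_over_fv(fx, m_v):
    """Maccacaro et al. (1988) X-ray-to-optical flux ratio.
    fx: 0.2-12 keV flux in erg cm^-2 s^-1 (SC_EP_8_FLUX);
    m_v: V-band magnitude (or the G/VT band as a stand-in)."""
    return np.log10(fx) + m_v / 2.5 + 5.37

# A source with F_X = 1e-14 erg cm^-2 s^-1 matched to an m_V = 9 star:
ratio = log_fx_over_fv(1e-14, 9.0)  # -14 + 3.6 + 5.37 = -5.03
flag_as_star = ratio < -2.2         # True: the star can explain the X-rays
```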
The optical counterparts of ULXs are typically very faint, presenting
$m_{\textrm{V}}\gtrsim 21$ (Kaaret et al. 2017), which leads to
$\log(F_{\textrm{X}}/F_{\textrm{V}})\gtrsim 0$ for a ULX candidate with
$L_{\textrm{X}}=10^{39}$ erg s-1 at a distance of 20 Mpc. Therefore, we are
safe from accidentally disregarding genuine ULX candidates as stellar objects.
Objects with $0>\log(F_{\textrm{X}}/F_{\textrm{V}})>-2.2$ are considered in
a later step of the pipeline.
Identification of background QSOs. From the 2$-$10 keV $\log{N}-\log{S}$
distribution above a flux of $10^{-14}$ erg cm$^{-2}$ s$^{-1}$ presented in Mateos et
al. (2008), we expect a background source density of 300 deg$^{-2}$ in the field.
With an accumulated galaxy sky area outside of the local group ($D>1$ Mpc) of
2.25 deg$^{2}$, this implies that around 670 background contaminants
($300\,\textrm{deg}^{-2}\times 2.25\,\textrm{deg}^{2}$) lie in the
line of sight of galaxies where we expect to find most of our ULX candidates.
This clearly motivates the need to identify such objects in currently
available catalogues. Therefore, all remaining unflagged objects found to
overlap within three times their positional uncertainties during the cross-
match with the SDSS-DR14 and VéronQSO were flagged as QSOs. The first query
was performed with SDSS-DR14 and highlighted 140 entries, while the second one
highlighted 135 more.
Identification of miscellaneous objects in SIMBAD. We cross-matched our
remaining unflagged objects with the SIMBAD database with the query tool
provided by CDS, Strasbourg. In this step of the pipeline, sources were
treated differently depending on the nature of the matched objects, indicated
as $\verb|main_type|$ in the SIMBAD catalogue
(http://simbad.u-strasbg.fr/simbad/sim-display?data=otypes). Every
object overlapping within three times their positional uncertainty, with
either $\verb|main_type|=$“Star” or with their `main_type` containing the “*”
symbol, and whose optical magnitude `Vmag` satisfies equation (1) was flagged
as a stellar contaminant. Objects with $\verb|main_type|=$“AGN”,
“AGN$\\_$Candidate”, “QSO”, “QSO$\\_$Candidate” or “SN” were also flagged,
regardless of their optical magnitude. Supernovae in particular were
highlighted as their emission is dominated by the expanding envelope. This
step highlighted 5 745 entries of the catalogue.
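The decision logic of this step can be summarised as follows; this is a
simplified sketch with our own function name, where the `main_type` strings
are taken from the text:

```python
import numpy as np

FLAGGED_TYPES = {"AGN", "AGN_Candidate", "QSO", "QSO_Candidate", "SN"}

def simbad_classification(main_type, vmag, fx):
    """Sketch of the SIMBAD filtering step.
    main_type: SIMBAD object type; vmag: optical magnitude or None;
    fx: X-ray flux in erg cm^-2 s^-1."""
    if main_type in FLAGGED_TYPES:
        return "interloper"  # flagged regardless of optical magnitude
    if main_type == "Star" or "*" in main_type:
        # Equation (1): the star must plausibly account for the X-ray flux.
        if vmag is not None and np.log10(fx) + vmag / 2.5 + 5.37 < -2.2:
            return "star"
    return "clean"
```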
Identification of miscellaneous optical objects. As previously stated, ULX
optical counterparts typically present $m_{\textrm{V}}\gtrsim 21$ (Kaaret et
al. 2017), which leads to $\log(F_{\textrm{X}}/F_{\textrm{V}})\gtrsim 0$ for a
ULX candidate with $L_{\textrm{X}}=10^{39}$ erg s-1 at a distance of 20 Mpc.
Objects with larger optical luminosity are usually AGNs or foreground stars.
Sources with $\log(F_{\textrm{X}}/F_{\textrm{V}})<-2.2$ were already
labelled as stars in a previous section of the pipeline. In this section, we
looked for optically bright objects of extragalactic origin. We performed a
cross-match with the query tool provided by CDS, Strasbourg, with the
PanSTARRS1 catalogue to find all objects that overlapped within a radius of
three times their positional uncertainties and that held
$\log\left(\frac{F_{\textrm{X}}}{F_{\textrm{V}}}\right)=\log{F_{\textrm{X}}}+\frac{m_{\textrm{V}}}{2.5}+5.37<0\textrm{,}$
(2)
to highlight all sources brighter in the optical than in the X-rays, using the
G-band magnitude `gmag` as an estimate for $m_{\textrm{V}}$. In total, 3 151
entries were flagged as PanSTARRS1 extragalactic objects.
Figure 2: TOPCAT astrometric map of M51 showcasing all 4XMM-DR9 sources within
its isophotal ellipse. The interlopers consist of PanSTARRS1 and SIMBAD
objects, and all but one of them stay below the ULX luminosity regime if
assumed to lie at the same distance as the rest.
### 3.2 Identification of ULX candidates
To identify ULX candidates, we computed individual detection and average
source luminosities from the X-ray fluxes as listed in 4XMM-DR9 (Section 2.1)
and from the distances as listed in HECATE (Section 2.2), along with the
corresponding error propagation. These are listed as `Luminosity` and
`LuminosityErr` for detections and `SC_Luminosity` and `SC_LuminosityErr`
for sources. In our catalogue, we consider any X-ray source to be within the
ULX luminosity regime if:
* •
1) It has at least one detection with luminosity above the ULX threshold
within the uncertainty ($\verb|Luminosity|+\verb|LuminosityErr|>10^{39}$ erg
s-1) in at least one of the potential host galaxies.
This check identified 5 943 entries of diverse nature, corresponding to 3 280
sources in 2 729 galaxies. 2 205 of these objects had been flagged as central,
while only 856 objects in 552 galaxies were clean of counterparts.
Furthermore, some of them present large uncertainties in their data.
Therefore, we imposed additional quality conditions for a source to qualify as
a ULX candidate:
* •
2) It has no identified counterpart.
* •
3) It holds $\verb|SC_Luminosity|>\verb|SC_LuminosityErr|$, as otherwise it
would indicate that its source parameters are unreliable.
* •
4) It has $\verb|SC_SUM_FLAG|<1$. This way, we only considered sources for
which none of the individual detections was flagged as probably spurious.
* •
5) It has a single potential host galaxy ($\verb|n_Galaxies|=1$), as otherwise
its distance and luminosity are unreliable.
This initially left 730 ULX candidates in 490 galaxies. Additionally, we
created a subcategory of bright ULX candidates, which requires
$\verb|Luminosity|+\verb|LuminosityErr|>5\times 10^{40}$ erg s-1, into which
only 130 candidates in 122 galaxies fall. We also found 7 sources whose
potential host galaxies ($\verb|n_Galaxies|>1$) have distances differing
only by a small fraction, which leaves their ULX status unaffected. These
sources were added to the ULX count, one of them being a bright candidate.
Sources that were below the ULX luminosity regime in all of their detections
but still satisfy conditions 2) to 5) are simply referred to as quality sources.
Many of these sources and low-luminosity interlopers identified in GaiaDR2,
PanSTARRS1 and SIMBAD are found in nearby galaxies, as showcased by Figure 2.
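As a minimal sketch of the luminosity computation behind condition 1),
assuming isotropic emission ($L=4\pi D^{2}F$) and uncorrelated flux and
distance errors (the function and example values are ours; the exact
propagation in the catalogue may differ):

```python
import numpy as np

MPC_TO_CM = 3.0857e24  # centimetres per megaparsec

def luminosity(flux, flux_err, d_mpc, d_err_mpc):
    """Isotropic luminosity and its first-order propagated error.
    flux in erg cm^-2 s^-1, distance in Mpc; returns erg s^-1."""
    d_cm = d_mpc * MPC_TO_CM
    lum = 4.0 * np.pi * d_cm**2 * flux
    lum_err = lum * np.hypot(flux_err / flux, 2.0 * d_err_mpc / d_mpc)
    return lum, lum_err

# Condition 1): ULX regime within the 1-sigma uncertainty.
lum, err = luminosity(1e-13, 2e-14, 10.0, 1.0)  # ~1.2e39 erg/s
in_ulx_regime = lum + err > 1e39                # True
```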
### 3.3 Manual inspection of candidates and contaminants
Our procedure revealed 50 ULX candidates with $L_{\textrm{X}}>10^{41}$ erg
s-1, well within the AGN luminosity regime (Brandt & Alexander 2015). However,
we had to consider the possibility of some of them being contaminants in
disguise that survived the filtering pipeline. With the aim of identifying
potential counterparts, we inspected the available optical and X-ray images
from the PanSTARRS1 and XMM-Newton surveys for 112 bright ULX candidates. Here
we discuss some of the relevant results:
Figure 3: PanSTARRS1 optical image of the location of source
SRCID=200559905010001. The X-ray source (not shown here) is a blend of the two
cores of the interacting pair. Figure 4: PanSTARRS1 optical image of the
location of source SRCID=201389514010019. The source lies very clearly on top
of a background galaxy.
Source $\verb|SRCID|=200559905010001$ (Figure 3), with detections of
$L_{\textrm{X}}=(4.3\pm 1.3)\times 10^{42}$ erg s-1 and
$L_{\textrm{X}}=(8.2\pm 2.3)\times 10^{42}$ erg s-1, is most likely a blend of
the two AGNs in the interacting galaxy pair NGC 5256.
Source $\verb|SRCID|=206701401010002$ presents $\langle
L_{\textrm{X}}\rangle=(4.5\pm 0.9)\times 10^{41}$ erg s-1 and is found in the
interacting galaxy pair II Zw. This pair has been thoroughly studied in the
X-rays (Inami et al. 2010; Iwasawa et al. 2011), and the XMM-Newton source is
most likely a blend of sources A, C and D from Goldader et al. (1997).
Source $\verb|SRCID|=201389514010019$ (Figure 4) clearly points towards a
background galaxy, despite it being listed as belonging to galaxy NGC 2528.
According to the NASA/IPAC Extragalactic Database
(https://ned.ipac.caltech.edu/; NED), this galaxy is at $z=0.13$.
A further 8 sources were revealed to be spurious detections around an
over-saturated region on the X-ray image. All of the mentioned sources were
flagged accordingly.
Unfortunately, 19 of the sources did not have PanSTARRS1 images available, so
we inspected their positions on NED to look for possible counterparts. This is
particularly relevant for objects with declination below $-30^{\circ}$,
as this area is not covered by the PanSTARRS1 survey. 15 additional
sources showed PanSTARRS1 counterparts of unclear nature, so we double checked
them with NED too. The results were the following:
Sources $\verb|SRCID|=200936502010001$ and $\verb|SRCID|=200029702010002$ were
found to be the central sources of their host galaxies, NGC 5128 and UGC
01841, which had been missed due to a discrepancy between their central
coordinates and those listed in HECATE. Source $\verb|SRCID|=201241101010001$ was identified as the QSO MR
0205. Finally, four remaining sources were also identified with distant
background galaxies.
There is another side to the manual inspection of sources. As ULXs are more
commonly associated with star-forming regions (Kaaret et al. 2017), some
genuine ULXs are located within or next to optically bright H II regions of
spiral and irregular galaxies. This caused confusion in the filtering step
involving PanSTARRS1, and otherwise good candidates were flagged as
interlopers. Therefore, we performed manual inspection for 128 PanSTARRS1
associations that held $\verb|SC_SUM_FLAG|\leq 1$,
$\verb|SC_Luminosity|+\verb|SC_LuminosityErr|\geq 10^{39}$ erg s-1 and
$\verb|SC_Luminosity|>\verb|SC_LuminosityErr|$ with the intention of
recovering genuine ULX candidates.
Figure 5: PanSTARRS1 optical image of the location of source
SRCID=205562803010007 (green marker) in galaxy NGC 2903. The red marker
indicates the position of the PanSTARRS1 counterpart identified by the
pipeline. The optical counterpart constitutes an H II region.
72 sources were matched to bright nebulous features in spiral galaxies (see
Figure 5 for an example). NED-based inspection of 10 further sources also
suggested an H II region nature or revealed only X-ray counterparts, so these
were re-flagged as clean. Source $\verb|SRCID|=201109302010027$ in particular
was not matched to any potential contaminant in NED, but it showed two
possible X-ray counterparts. It was kept as an interloper due to its possibly
blended nature.
Another set of 11 sources were identified with infrared sources, also in NED,
while four more sources were found to coincide with background galaxies.
These sources were kept in the group of interlopers.
Finally, we searched for possible extragalactic objects in GaiaDR2. Shu et al.
(2019) draw 3 175 537 AGN candidates from 641 266 363 GaiaDR2 sources, a
fraction of 0.5%, which suggested further possible undetected extragalactic
interlopers in our catalogue. Therefore, we performed a final cross-match of
the remaining unmatched sources with GaiaDR2, this time applying condition (2)
to decide whether to flag the object as a possible interloper. 23 ULX
candidates were matched with a GaiaDR2 source. We inspected all of them manually in the NED
database and found that 14 of them did not have a known counterpart, so they
were re-flagged as clean. In contrast, 7 were found to have infrared
counterparts, $\verb|SRCID|=203049401010029$ was matched to a UV source and
$\verb|SRCID|=207843705010011$ was found to be the central source of NGC 7632,
and therefore they were accordingly confirmed as interlopers.
The final version of the catalogue after the manual inspections yielded 1 452
detections of 779 ULX candidates in 517 galaxies, and 163 detections of 94
bright ULX candidates of quality in 94 galaxies.
### 3.4 Luminosity and complete sub-samples
As low-luminosity sources become increasingly harder to detect with increasing
distance, our catalogue presents a bias towards brighter sources at large
distances, hampering any potential study of the ULX population properties.
Following the methodology of Earnshaw et al. (2019), we created luminosity
sub-samples to ensure the completeness of the set of sources above a
luminosity threshold $L_{\textrm{min}}$ within a radius $D_{\textrm{max}}$.
We established a minimum flux for a source to be detected, $F_{\textrm{min}}$,
from which we compute the maximum distances $D_{\textrm{max}}$ at which
sources with luminosity $L_{\textrm{min}}$ can be seen. Then, we selected all
objects with $D<D_{\textrm{max}}$ in each case to make sure that every source
with $L>L_{\textrm{min}}$ is accounted for, mitigating this way the bias
towards brighter sources in the $L>L_{\textrm{min}}$ regime. From these, we
then selected all quality sources and ULX candidates whose host galaxy’s
isophotal dimensions are known ($\verb|MATCH_FLAG|=0$) and that lie at least
$25^{\circ}$ away from the galactic plane. This last condition ensures minimal
photoelectric absorption from the Milky Way.
Figure 6: Distribution of detection fluxes within 4XMM-DR9, regardless of
their nature. Most detections (77%) have fluxes higher than
$F_{\textrm{X}}\approx 10^{-14}$ erg cm-2 s-1. Figure 7: Distance-luminosity
dispersion of all quality sources within the 38ss, 39ss and 5x40ss, and
the cULXss and the cbULXss, with an imposed sensitivity of $10^{-14}$ erg cm-2
s-1. The distance cuts correspond to $D_{\textrm{max}}\approx 9$, 29 and 204
Mpc.
The most delicate aspect of this method is the choice of
$F_{\textrm{min}}$, since there is no hard detection threshold for the XMM-
Newton observatory: it depends on exposure time and background intensity. To
work around this, we observed in Figure 6 that the distribution of all
detection fluxes in 4XMM-DR9 peaks early at $F_{\textrm{X}}\approx 10^{-14}$
erg cm-2 s-1, with 77% of the detections at higher fluxes, so we adopted
this value as the baseline for XMM-Newton’s sensitivity.
With this in hand, we built the luminosity sub-samples 38ss, 39ss and 5x40ss,
with corresponding $L_{\textrm{min}}=10^{38}$, $10^{39}$ and $5\times 10^{40}$
erg s-1 and $D_{\textrm{max}}\approx 9$, 29 and 204 Mpc. We then selected all
ULX candidates with $\verb|n_Galaxies|=1$ belonging to 39ss to build the
complete ULX sub-sample (cULXss), and we did the same with the bright ULX
candidates and the 5x40ss, building the complete bright ULX sub-sample
(cbULXss).
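The distance cuts quoted above follow directly from the flux limit via
$D_{\textrm{max}}=\sqrt{L_{\textrm{min}}/(4\pi F_{\textrm{min}})}$; a quick
numerical check (a sketch of our own, not pipeline code):

```python
import numpy as np

MPC_TO_CM = 3.0857e24
F_MIN = 1e-14  # adopted XMM-Newton sensitivity, erg cm^-2 s^-1

def d_max_mpc(l_min):
    """Distance out to which a source of luminosity l_min (erg/s)
    remains above the F_MIN flux limit."""
    return np.sqrt(l_min / (4.0 * np.pi * F_MIN)) / MPC_TO_CM

for l_min in (1e38, 1e39, 5e40):
    print(f"L_min = {l_min:.0e} erg/s -> D_max = {d_max_mpc(l_min):.0f} Mpc")
# Prints roughly 9, 29 and 204 Mpc: the 38ss, 39ss and 5x40ss cuts.
```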
The effectiveness of this method is well illustrated in Figure 7, where the
bias towards brighter sources has been significantly reduced by excluding
sources with $D>D_{\textrm{max}}$ from the sub-sample. As shown in Table 1,
these two sub-samples contain 292 and 69 sources, respectively. The sub-
samples are also helpful in an additional way: due to the clustering nature of
ULXs, objects at $D>10$ Mpc are prone to blending due to the limited angular
resolution of XMM-Newton. The cut at $D_{\textrm{max}}=29$ Mpc also helps in
the mitigation of this bias.
## 4 Results
### 4.1 Contents of the catalogue
Table 1: Number of detections, sources, host galaxies, mean logarithm of
luminosity and median distance of each population sample with
$\verb|n_Galaxies|=1$. All objects with multiple possible host galaxies,
including seven ULX candidates, have been excluded from this table.
Sample | Dets. | Srcs. | Gals. | $\big{\langle}\log\left(\frac{L_{\textrm{X}}}{\textrm{erg\,s${}^{-1}$}}\right)\big{\rangle}$ | Median $D$ (Mpc)
---|---|---|---|---|---
Quality | 18 421 | 6 385 | 607 | 36.1 | 0.77
ULX | 1 439 | 772 | 515 | 39.6 | 28.4
B. ULX | 162 | 93 | 93 | 40.8 | 127
38ss | 9 594 | 3 290 | 56 | 34.8 | 0.05
39ss | 10 567 | 3 864 | 209 | 35.3 | 0.05
5x40ss | 10 959 | 4 148 | 452 | 35.7 | 0.06
cULXss | 680 | 292 | 146 | 39.2 | 17.5
cbULXss | 126 | 69 | 69 | 40.8 | 128
Table 2: List with the number of detections, sources and galaxies for every
kind of `CONT_FLAG` with $\verb|n_Galaxies|=1$. `CONT_FLAG` is the flag
indicating whether a source is clean (“none”) or the catalogue or step where a
counterpart has been identified.
CONT_FLAG | Dets. | Srcs. | Gals.
---|---|---|---
Any | 49 816 | 23 120 | 2 700
“none” | 23 340 | 7 657 | 607
“none(PanSTARRS1)” | 75 | 63 | 60
“none(NED)” | 36 | 16 | 16
“central” | 3 514 | 2 173 | 2 170
“GaiaDR2” | 13 402 | 9 666 | 88
“Tycho2” | 96 | 11 | 6
“SDSS$\\_$DR14” | 140 | 33 | 24
“VeronQSO” | 135 | 38 | 11
“SIMBAD” | 5 745 | 2 224 | 65
“PanSTARRS1” | 2 980 | 1 119 | 136
“manual(PanSTARRS1)” | 10 | 9 | 3
“manual(NED)” | 38 | 26 | 25
Tables 1 and 2 summarise the content of our final catalogue. It contains 50
446 entries corresponding to 23 262 sources, out of which 23 120 are
associated with a single galaxy ($\verb|n_Galaxies|=1$). X-ray sources with
GaiaDR2 associations form the largest group of contaminants, containing
41.8% of the total number of sources. This is due to the 8 437 matches with
objects of large optical to X-ray flux ratio, 8 270 of which are located in
the Magellanic Clouds and M31, indicating a possible stellar nature.
Nonetheless, none of them fall within the ULX X-ray luminosity regime, so no
further regard is paid to them. Central objects follow, constituting 9.7%
of the catalogue. They are followed by 8.4% of objects directly identified
as AGNs or QSOs in SDSS_DR14, VeronQSO and SIMBAD. A further 7.1% have been
identified as stars in GaiaDR2, Tycho2 or SIMBAD. Only 4.9% of the objects
have been selected as potential interlopers from the PanSTARRS1 survey, and
the residual fraction consists of supernovae, infrared sources, background
galaxies and a single ultraviolet source.
A total of 3 274 (14.1%) objects have been detected at least once within the
ULX regime ($L_{\textrm{X}}+\Delta L_{\textrm{X}}>10^{39}$ erg s-1). 2 208 of
these consist of central sources, and only 779 (3.3%) qualify as ULX
candidates according to the thresholds established in Sections 3.2 and 3.3,
while 287 are interlopers of various kinds. 516 of the ULX candidates have been
detected at least once within the ULX regime with high certainty
($L_{\textrm{X}}-\Delta L_{\textrm{X}}>10^{39}$ erg s-1), and 666 with high
likelihood ($L_{\textrm{X}}>10^{39}$ erg s-1). 761 have at least a detection
well above the NS Eddington limit ($L_{\textrm{X}}-\Delta
L_{\textrm{X}}>1.8\times 10^{38}$ erg s-1). A sub-set of 94 also qualify as
bright ULX candidates ($L_{\textrm{X}}+\Delta L_{\textrm{X}}>5\times 10^{40}$
erg s-1), and all of them are the only candidate in their host galaxy. Among
some of the candidates we find well-known objects such as the NS-ULXs NGC 7793
P13, NGC 5907 ULX-1, the IMBH candidate NGC 2276-3c, and M51-ULS-1, host of
the extragalactic exoplanet candidate M51-ULS-1b (Di Stefano et al. 2020).
Other known objects such as the IMBH candidates M82 X-1 and ESO 243-49 HLX-1
also appear in the catalogue, but with $\verb|SC_SUM_FLAG|>1$.
Only sources and candidates with $\verb|n_Galaxies|=1$ are considered during
the remainder of this section due to their more reliable luminosities and host
galaxy associations.
### 4.2 ULX distribution and abundances
Figure 8: Distance distribution of all sources of quality, ULX candidates and
bright ULX candidates in our catalogue.
In Figure 8, it can be seen that all ULX candidates are found at $D>1$ Mpc,
while bright candidates are found at $D\gtrsim 20$ Mpc. Figure 8 also
illustrates how at $D\gtrsim 100$ Mpc the bright candidates dominate almost
completely due to the bias in favor of brighter sources as discussed in
Section 3.4 and possibly source blending. This is consistent with the
information in Table 1, which shows that the median distances of ULX and bright
ULX candidates are 28.4 Mpc and 127 Mpc. Sources of quality cluster in the
nearest galaxies for the same reason. They constitute a population of low-
luminosity X-ray sources that dominate the serendipitous X-ray sky in the local
group of galaxies. For instance, M31, M33 and the Magellanic Clouds alone
already concentrate 17% of all sources of quality. The X-ray content of the
local group is not directly relevant to the ULX population, as it is mostly
composed of typical X-ray binaries and supernova remnants and it has already
been thoroughly studied (e.g., Sturm et al. 2013; Pietsch 2008). Therefore,
during the remainder of this paper we focus most of our attention on sources
hosted by galaxies at $D>1$ Mpc.
Figure 9: Hubble type distributions of the cULXss and cbULXss. In both cases,
there is a spike of ULXs in the earliest galaxies. Figure 10: Luminosity
distribution of the sources of quality and ULX candidates in our catalogue
divided by the morphological types of galaxies. A distance cut of $D>1$ Mpc
has been imposed to avoid contamination from the Local Group, heavily
dominated by low-luminosity sources. Figure 11: Distribution of ULX
frequencies (ULX/galaxy) in the cULXss and cbULXss across the morphological
sub-types of host galaxies. The ratios from the cbULXss have been multiplied
by 100 to aid visibility.
Table 3: Number of ULX candidates in the cULXss and the cbULXss, number of
galaxies that host them, number of galaxies that could potentially host
them, ULX frequencies and fraction of galaxies with candidates, divided
according to the morphological groups of their host galaxies. No distinction is
made between bright ULXs and their host galaxies, as no more than one candidate
has been found in each.
Sample | cULXss
---|---
Subsample | All | El. | Le. | ESp. | LSp. | Ir.
ULXs | 292 | 49 | 26 | 107 | 92 | 17
Galaxies | 149 | 18 | 16 | 53 | 45 | 13
Pot. gals. | 722 | 151 | 75 | 109 | 114 | 147
ULX / gal. | 0.40 | 0.32 | 0.35 | 0.98 | 0.81 | 0.12
Gal. frac. | 0.21 | 0.12 | 0.21 | 0.49 | 0.37 | 0.09
Sample | cbULXss
Subsample | All | El. | Le. | ESp. | LSp. | Ir.
ULXs | 69 | 10 | 16 | 25 | 10 | 1
Pot. gals. | 12 166 | 2 174 | 1 787 | 2 052 | 1 416 | 850
ULX/gal.(%) | 0.57 | 0.46 | 0.90 | 1.22 | 0.71 | 0.12
Figure 12: Distribution of fraction of host galaxies in the cULXss and cbULXss
across their morphological sub-types. The values from cbULXss are shown in %
to aid visibility.
In Table 3, it can be seen that objects from the cULXss are most commonly
associated with early spiral galaxies (ESp., $0\leq T_{\textrm{H}}<5$) and
late spiral galaxies (LSp., $5\leq T_{\textrm{H}}<9$). However, while 68.1% of
the cULXss candidates are found in spiral galaxies, only 50.7% of the
objects in the cbULXss follow the same trend. In Figure 9 it can be seen
how the overwhelming majority of cULXss objects concentrate in spiral galaxies
in the range of $0\leq T_{\textrm{H}}<10$, with a spike in elliptical galaxies
(El., $T_{\textrm{H}}<-3$). For the cbULXss, the distribution is similar, but the
spike in elliptical galaxies is more prominent. In both samples, lenticular
galaxies (Le., $-3\leq T_{\textrm{H}}<0$) and irregular galaxies (Ir.,
$T_{\textrm{H}}\geq 9$) are disfavoured. Figure 10 also showcases that while
ULX candidates of quality are more frequently found in late-type galaxies
(LTG, $T_{\textrm{H}}\geq 0$) than in early-type galaxies (ETG,
$T_{\textrm{H}}<0$), this distinction becomes less clear for the bright ULX
candidates.
Besides looking at how ULX candidates are distributed among galaxy
morphological types, it is also interesting to investigate the fraction of
galaxies that contain any ULX and the general ULX frequencies in them. For this,
we built the set of all galaxies in the field of view that would be included
in the cULXss or cbULXss if hosting a ULX candidate were not a requirement,
with the extra condition of having $R1>3$″ to exclude galaxies that can only
contain central sources. In essence, this is the baseline of galaxies
that would belong to the complete sub-samples if they hosted at least one ULX
candidate, including the ones that actually do. This is shown in Table 3 as
the line of potential galaxies, and it is used implicitly in Table 4 to
compute the mean SFR and $M_{*}$ values. In Table 3, it is seen that 21% of
galaxies in the cULXss contain at least one ULX candidate. For elliptical
galaxies, this gets reduced to 12%, for lenticular galaxies it is 21%, for
early spirals 49%, and for late spirals 37%. The general ULX rate is $\sim$0.4
ULX/galaxy, peaking at $\sim$0.98 ULX/galaxy in early spiral
galaxies. Figures 11 and 12 show these results in more detail for the different
galaxy morphological sub-types, with ULX frequencies peaking at $\sim$1.8
ULX/galaxy for $T_{\textrm{H}}\sim 3$ galaxies, of which $\sim$70% host ULX
candidates. These results are mostly in agreement with earlier works, which
either found a larger fraction of ULXs in late-type galaxies (Swartz et al.
2011) or higher abundances in them (Earnshaw et al. 2019;
Kovlakas et al. 2020). However, Kovlakas et al. (2020) find that the galaxies
with a higher chance of hosting ULXs are $T_{\textrm{H}}\sim 5$ spirals
instead.
Table 4: Mean star-formation rate, stellar mass, and derived ULX rates for
all morphological types. Computed from all potential galaxies from which we
have SFR and $M_{*}$ information. As stated in Section 2.2, SFR information is
only taken into account for late-type galaxies.
Sample | cULXss
---|---
Subsample | El. | Le. | ESp. | LSp. | Ir.
$\langle\textrm{SFR}\rangle$ | … | … | 1.99 | 0.80 | 0.02
(M⊙yr-1)
$\langle M_{*}\rangle$ | 2.33 | 2.59 | 4.47 | 1.08 | 0.14
($10^{10}\,\textrm{M}_{\odot}$)
$\langle\textrm{sSFR}\rangle$ | … | … | 4.46 | 7.37 | 12.5
($10^{-11}\textrm{yr}^{-1}$)
$\langle\textrm{ULX / SFR}\rangle$ | … | … | 0.49 | 1.01 | 6.41
(M⊙yr-1)-1
$\langle\textrm{ULX / }M_{*}\rangle$ | 1.39 | 1.34 | 2.20 | 7.48 | 80.2
($10^{-11}\,\textrm{M}_{\odot}^{-1}$)
Sample | cbULXss
Subsample | El. | Le. | ESp. | LSp. | Ir.
$\langle\textrm{SFR}\rangle$ | … | … | 3.25 | 1.29 | 1.14
(M⊙yr-1)
$\langle M_{*}\rangle$ | 5.26 | 5.63 | 5.31 | 1.32 | 2.73
($10^{10}\,\textrm{M}_{\odot}$)
$\langle\textrm{sSFR}\rangle$ | … | … | 6.12 | 9.80 | 4.19
($10^{-11}\textrm{yr}^{-1}$) | | | | |
$\langle\textrm{ULX / SFR}\rangle$ | … | … | 3.75 | 5.46 | 1.03
$10^{-3}$(M⊙yr-1)-1
$\langle\textrm{ULX / }M_{*}\rangle$ | 0.87 | 1.59 | 2.29 | 5.34 | 0.43
($10^{-13}\,\textrm{M}_{\odot}^{-1}$)
To find an explanation for these distributions, we need to explore the
properties of host galaxies, and more particularly, the relationship between
ULX abundances, host SFR and $M_{*}$. In spiral galaxies with high specific-
SFR ($\textrm{sSFR}=\textrm{SFR}/M_{*}$) values and that are dominated by
young stellar populations, the ULX population is typically associated with
HMXB evolution timescales of $\tau\sim$100 Myr (Wiktorowicz et al. 2017), the
rates of which are expected to scale with the SFR. In Table 4, we show how
spiral galaxies are typically the ones with the highest SFR as given in
HECATE. Early spirals have higher SFR absolute values than late spirals
($\sim$1.99 vs. $\sim$0.80 M⊙yr-1), but as expected they also have lower sSFR
values. Despite ULXs being more abundant in early spirals, the ULX frequency
per SFR is higher in late spirals ($\sim$0.49 vs. $\sim$1.01 /M⊙yr-1). The
same holds for the number of ULXs per unit mass, and the whole picture is
exaggerated for irregular galaxies. Figure 13 also shows the distribution of
individual ULX/SFR galaxy rates for early and late spiral galaxies in the
cULXss that have available SFR information. The distributions peak at
$\sim$0.65 and $\sim$0.55 ULX/M⊙yr-1 each. Wiktorowicz et al. (2017) predict
the existence of 400 ULXs in a galaxy with solar metallicity, $M_{*}=6\times
10^{10}$ M⊙ and $\textrm{SFR}=600$ M⊙yr-1 over a period of 100 Myr, which
implies 0.67 ULX/M⊙yr-1 (400/600), staying well within the range of values
provided by our sample.
Figure 13: Histogram of the number of ULXs per star-formation rate for early
and late spiral galaxies from the cULXss. Extremely large values, produced by
galaxies with low SFR that happen to host a single ULX, have been cut
out of the picture. Figure 14: Hardness ratio dispersion of the cULXss and
cbULXss against the abundance contours of some populations of interlopers.
AGNs and QSO include both confirmed and candidates. Only hardness ratios with
uncertainties lower than 0.2 are accepted for the sake of reliability. Figure
15: Hardness ratio dispersion of the cULXss and cbULXss against the abundance
contours of lower luminosity objects present in the 38ss. Only hardness ratios
with uncertainties lower than 0.2 are accepted for the sake of reliability.
For elliptical and lenticular galaxies, different considerations need to be
applied. As expressed by Wiktorowicz et al. (2017), there is a ULX
subpopulation constituted by LMXBs that reach Roche-lobe overflow at
$\tau\sim$1 Gyr instead. This evolutionary path explains the presence of ULXs
in galaxies with very low star-forming activity. In this case, ULX frequencies
are not expected to depend significantly on their current SFR values.
On the other hand, objects in the cbULXss provide a slightly different
picture. All of them are located beyond the $D\gtrsim 20$ Mpc mark and
have extremely low abundances, as shown in Table 3. Additionally, all of them
are the only bright ULX candidate in their host galaxies. From the Wiktorowicz et
al. (2017) predictions of the rate of objects with $L_{\textrm{X}}>10^{40}$ erg
s-1, we estimate a typical number of 0.023 bright ULX/M⊙yr-1 in star-forming
galaxies. However, we only measure 0.0037 bright ULX/M⊙yr-1 and 0.0055 bright
ULX/M⊙yr-1 for early and late-spiral galaxies, which is an order of magnitude
smaller. This can be partially explained by considering that our luminosity
cut of $L_{\textrm{X}}>5\times 10^{40}$ erg s-1 is slightly higher, and that
the ULX luminosity distribution decays exponentially in most
studies. Nonetheless, a total number of only 36 objects is involved in these
estimations. A larger sample will need to be used in the future to extract
more reliable conclusions.
### 4.3 Spectral properties of the catalogue
The X-ray spectral properties of the identified sources can provide additional
information regarding their nature (e.g., Earnshaw et al. 2019; Walton et al.
2011). Therefore, we proceed with the inspection of the spectral properties of
the objects within our catalogue. We use the hardness ratios, defined as
$\textrm{HR}_{i}=\frac{R_{i+1}-R_{i}}{R_{i+1}+R_{i}}\textrm{,}$ (3)
where $R_{i}$ and $R_{i+1}$ are the count rates in the consecutive energy
bands $i$ and $i+1$. With this quantity, spectral hardness can be compared
between different sources, with larger values indicating harder sources. The
4XMM-DR9 computes the source $HR_{i}$ values from the count rates in the
0.2–0.5, 0.5–1.0, 1.0–2.0, 2.0–4.5 and 4.5–12.0 keV energy bands, both from
the PN and MOS cameras (http://xmmssc.irap.omp.eu/), $HR_{1}$ and $HR_{2}$
being the most useful thanks to the larger photon counts in the lower energy bands.
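Equation (3) and its first-order error propagation can be sketched as follows
(our own helper; the example count rates are invented for illustration):

```python
import numpy as np

def hardness_ratio(r_soft, r_hard, e_soft=0.0, e_hard=0.0):
    """HR_i = (R_{i+1} - R_i) / (R_{i+1} + R_i) for count rates in two
    consecutive bands, with first-order error propagation."""
    total = r_hard + r_soft
    hr = (r_hard - r_soft) / total
    hr_err = 2.0 * np.hypot(r_hard * e_soft, r_soft * e_hard) / total**2
    return hr, hr_err

# HR1 from invented 0.2-0.5 and 0.5-1.0 keV count rates (counts/s):
hr1, hr1_err = hardness_ratio(0.012, 0.020, 0.002, 0.003)  # hr1 = 0.25
```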
Table 5: Number of objects in every contaminant population and their mean
hardness ratios. Contaminants found in the PanSTARRS1 survey and extragalactic
objects from PanSTARRS1 are not included due to the diverse nature of the
matched objects.
OBJ$\\_$TYPE | # | $\langle\textrm{HR}_{\textrm{1}}\rangle$ | $\langle\textrm{HR}_{\textrm{2}}\rangle$ | $\langle\textrm{HR}_{\textrm{3}}\rangle$ | $\langle\textrm{HR}_{\textrm{4}}\rangle$
---|---|---|---|---|---
“central” | 2 246 | 0.52 | $-$0.10 | $-$0.34 | $-$0.18
Stellar objs. | 1 652 | 0.45 | $-$0.05 | $-$0.47 | $-$0.10
“AGN” | 320 | 0.61 | 0.43 | $-$0.21 | $-$0.26
“AGN$\\_$Cand.” | 1 416 | 0.59 | 0.52 | $-$0.20 | $-$0.23
“QSO” | 126 | 0.47 | 0.24 | $-$0.26 | $-$0.29
“QSO$\\_$Cand.” | 81 | 0.55 | 0.31 | $-$0.30 | $-$0.25
“SN” | 27 | 0.66 | 0.07 | $-$0.28 | $-$0.28
“IrS” | 17 | 0.42 | 0.13 | $-$0.53 | $-$0.20
“back. galaxy” | 9 | 0.41 | $-$0.08 | $-$0.01 | $-$0.32
Table 6: Mean hardness ratios of the cULXss and cbULXss populations, broken down according to the morphological type of host galaxies.
Population | Size | $\langle\textrm{HR}_{1}\rangle$ | $\langle\textrm{HR}_{2}\rangle$ | $\langle\textrm{HR}_{3}\rangle$ | $\langle\textrm{HR}_{4}\rangle$
---|---|---|---|---|---
cULXss | 292 | 0.51 | 0.24 | $-$0.25 | $-$0.50
El.-hosted | 49 | 0.42 | 0.09 | $-$0.27 | $-$0.46
Le.-hosted | 27 | 0.36 | 0.13 | $-$0.24 | $-$0.40
ESp.-hosted | 107 | 0.49 | 0.24 | $-$0.25 | $-$0.52
LSp.-hosted | 92 | 0.61 | 0.36 | $-$0.24 | $-$0.53
Ir.-hosted | 17 | 0.53 | 0.21 | $-$0.35 | $-$0.52
cbULXss | 69 | 0.43 | 0.19 | $-$0.32 | $-$0.23
El.-hosted | 10 | 0.41 | 0.06 | $-$0.49 | $-$0.39
Le.-hosted | 16 | 0.43 | 0.10 | $-$0.33 | $-$0.24
ESp.-hosted | 25 | 0.47 | 0.23 | $-$0.24 | $-$0.19
LSp.-hosted | 10 | 0.43 | 0.35 | $-$0.21 | $-$0.22
Ir.-hosted | 1 | 0.98 | 0.29 | $-$0.30 | 0.34
From Table 5, it is apparent how the contaminant populations have distinct
average spectral properties from each other. Supernovae, AGNs and QSOs tend to
be the objects with the hardest spectra. Comparing the interloper hardness
ratios with the ULX hardness ratios shown in Table 6 also shows that, as
expected, the typical spectra of the ULX population are distinct from
those of stellar objects, supernovae, infrared sources and nuclear sources,
while bearing the closest spectral resemblance to the AGN and QSO
populations. These properties are made more apparent by looking at Figure 14,
where it can be seen that AGNs and QSOs cluster around slightly harder spectra
than the cULXss and cbULXss samples, and that there is even less overlap of
the ULX population with the supernovae and the stellar populations. Figure 15
shows that the ULX population has the largest resemblance with the population
of X-ray objects just below the ULX luminosity threshold
($10^{38}<L_{\textrm{X}}<10^{39}$ erg s-1).
There are also noteworthy spectral differences within the ULX sample itself.
Table 6 showcases a division between ULX spectral properties in relation to
their host galaxies for the cULXss. ULXs in lenticular and elliptical galaxies
present the softest spectra in all cases both for faint and bright ULXs. The
hardest spectra are found in late spiral galaxies. This notion is clearer in
Figure 16, where the hardness ratio distributions of ULXs in late-type and
early-type galaxies are seen to cluster around slightly different values.
However, Figure 17 illustrates that the data come with a large dispersion,
despite the increasing trends in the values of $HR_{1}$ and $HR_{2}$ with the
morphological sub-class of each galaxy up to the irregular ones. This
information points towards photoelectric absorption being higher in spiral
galaxies than in elliptical and irregular ones, in agreement with what has
been reported in previous XMM-Newton-based works (Walton et al. 2011; Earnshaw
et al. 2019), but also towards unaccounted-for factors being more determinant
for the individual properties of ULXs.
Figure 16: Hardness ratio versus luminosity dispersion of objects in the
cULXss (squares) and cbULXss (diamonds), distinguishing between objects hosted
in late type galaxies ($T_{\textrm{H}}\geq 0$) and early type galaxies
($T_{\textrm{H}}<0$). Density contours have been drawn to help the eye. Only
hardness ratios with uncertainties lower than 0.2 are accepted for the sake of
reliability. Figure 17: $HR_{1}$ and $HR_{2}$ across morphological sub-types
of host galaxies from cULXss sources. The error bars show the $1\sigma$
dispersion of the values, and the data points have been slightly moved to the
left ($HR_{1}$) and to the right ($HR_{2}$) to ease visibility.
Looking back at Table 6, a division between the cULXss and cbULXss is seen. On
average, low-luminosity ULXs present harder spectra than the bright ones. It
is unlikely that the host galaxies of bright ULX candidates contain less
neutral gas in general, and perhaps their generally softer spectra are an
indication of bright candidates having a different physical nature from the
rest of the candidates. Their emission could also result from the blending of
lower-luminosity sources as suggested in Section 5.1.3, perhaps with the
diffuse emission of the stellar population in the host galaxies. Within the
cbULXss itself, it also appears that candidates in late-type galaxies cluster
around slightly harder spectra than those hosted by early-type galaxies, as
showcased in Figure 16.
Table 7: Top: mean value of $\verb|VAR_PROB|$ from 4XMM-DR9s according to
morphology groups. Middle: occupancy of the cULXss and cbULXss
variability bins (in % and as defined in Section 4.4), also split according to
the morphological type of their host galaxy. Bottom: mean max-to-min flux
ratio of sources in each variability bin and morphology group.
Sample | cULXss-DR9s
---|---
Subsample | All | El. | Le. | ESp. | LSp. | Ir.
Size | 147 | 28 | 15 | 51 | 49 | 4
$\langle\verb|VAR_PROB|\rangle$ | 0.18 | 0.23 | 0.26 | 0.20 | 0.10 | 0.22
1$\sigma$ constant | 8.8 | 3.6 | 20.0 | 13.7 | 4.1 | 0.0
uncertain | 15.0 | 28.5 | 13.4 | 9.9 | 8.2 | 50
1$\sigma$ variable | 15.6 | 21.4 | 13.3 | 19.6 | 12.2 | 0.0
2$\sigma$ variable | 15.0 | 17.9 | 20.0 | 13.7 | 14.3 | 0.0
3$\sigma$ variable | 45.6 | 28.6 | 33.3 | 43.1 | 61.2 | 50
$\langle\verb|FRATIO|\rangle$ | 72.8 | 6.02 | 4.10 | 26.4 | 186.1 | 2.42
1$\sigma$ constant | 1.57 | 1.31 | 1.69 | 1.57 | 1.53 | …
1$\sigma$ variable | 11.4 | 2.04 | 2.50 | 22.82 | 3.51 | …
2$\sigma$ variable | 10.0 | 4.39 | 6.24 | 3.04 | 2.61 | …
3$\sigma$ variable | 151.6 | 14.5 | 5.87 | 48.8 | 297.7 | 3.70
Sample | cbULXss-DR9s
Subsample | All | El. | Le. | ESp. | LSp. | Ir.
Size | 31 | 3 | 8 | 12 | 7 | 0
$\langle\verb|VAR_PROB|\rangle$ | 0.42 | 0.41 | 0.35 | 0.39 | 0.49 | …
1$\sigma$ constant | 22.6 | 0.0 | 12.5 | 25.0 | 28.6 | …
uncertain | 35.4 | 66.7 | 37.5 | 41.7 | 42.8 | …
1$\sigma$ variable | 19.4 | 33.3 | 25.0 | 25.0 | 0.0 | …
2$\sigma$ variable | 3.2 | 0.0 | 0.0 | 8.3 | 0.0 | …
3$\sigma$ variable | 19.4 | 0.0 | 25.0 | 16.7 | 28.6 | …
$\langle\verb|FRATIO|\rangle$ | 11.6 | 2.90 | 3.29 | 2.40 | 42.0 | …
1$\sigma$ constant | 1.95 | … | 5.55 | 1.53 | 1.09 | …
1$\sigma$ variable | 1.85 | 1.90 | 1.42 | 2.12 | … | …
2$\sigma$ variable | 4.31 | … | 4.31 | … | … | …
3$\sigma$ variable | 49.92 | … | 4.57 | 3.67 | 141.5 | …
### 4.4 ULX variability
The differences between ULX candidates also extend to their variability
properties. We find 147 cULXss counterparts within three times their
positional uncertainty in the 4XMM-DR9s objects with `N_CONTRIB`$\geq 2$. This
number goes down to 31 for the cbULXss sample. For all these sources, we have
at our disposal variability information drawn from their multiple overlapping
observations (Traulsen et al. 2019).
We pay attention to `VAR_PROB`, which is the probability of a source not
showcasing variability between observations, in essence a long-term
variability measure. For our analysis, we put the ULX candidates in five bins
of variability probability. We consider objects with
$\verb|VAR_PROB|>1\sigma$, where $1\sigma=0.6827$, to be most likely constant,
and we collect them in the $1\sigma$ constant bin. Thereafter, we collect the
objects with $1-1\sigma>\verb|VAR_PROB|>1-2\sigma$,
$1-2\sigma>\verb|VAR_PROB|>1-3\sigma$ and $\verb|VAR_PROB|<1-3\sigma$ into the
$1\sigma$ variable, $2\sigma$ variable and $3\sigma$ variable bins, being
$2\sigma=0.9545$ and $3\sigma=0.9973$, while the remainder are left in the
uncertain variability bin. We also register the `FRATIO` values of the matched
objects, which corresponds to the ratio between the highest and the lowest
detected flux from a source, and compute the average value not only for every
population type, but also for every `VAR_PROB` bin.
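The binning just described reduces to a few threshold comparisons; a sketch
(the function name is ours):

```python
# Gaussian coverage fractions used as bin edges.
P1, P2, P3 = 0.6827, 0.9545, 0.9973

def variability_bin(var_prob):
    """Assign a VAR_PROB value (probability of NO inter-observation
    variability) to one of the five bins defined in the text."""
    if var_prob > P1:
        return "1-sigma constant"
    if var_prob < 1.0 - P3:
        return "3-sigma variable"
    if var_prob < 1.0 - P2:
        return "2-sigma variable"
    if var_prob < 1.0 - P1:
        return "1-sigma variable"
    return "uncertain"
```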
The results are shown in Table 7, where it is seen that ULX candidates from
the cULXss hosted by late spiral galaxies are the ones with the highest
chances of showcasing variability between different observations, as they
present the lowest `VAR_PROB` average value, with 61.2% of them belonging to
the $3\sigma$ variable bin, and only 4.1% staying in the $1\sigma$ constant
bin. In contrast, ULX candidates in lenticular galaxies present the highest
`VAR_PROB` average value, with 20% of them falling in the $1\sigma$ constant bin.
However, the ULX population with the lowest fraction of candidates in the
$3\sigma$ variable bin is that of candidates hosted by elliptical galaxies.
ULX candidates hosted by early spiral galaxies present in-between variability
properties.
Regarding the measured variation of fluxes between different observations,
Table 7 also shows that for ULX candidates hosted by late-spiral galaxies the
`FRATIO` values span two orders of magnitude on average, both for the entire
set and for those in the $3\sigma$ variable bin. ULX candidates in lenticular
galaxies showcase the lowest `FRATIO` values, being followed by those in
elliptical galaxies. In these cases, the inter-observation luminosity
variability does not go beyond one order of magnitude. ULXs in early spiral
galaxies repeat their role as a bridge between late spiral-hosted and
lenticular-hosted candidates. Not surprisingly, candidates in the $1\sigma$
constant bin present average `FRATIO` values of the order of 1 regardless of
their host galaxy, and while some exceptions are seen, in general `FRATIO`
increases with decreasing `VAR_PROB`.
Some trends are similar for the cbULXss. As shown in the lower half of Table
7, bright candidates hosted by late spiral galaxies are the ones with the
highest fraction of ULX candidates in the $3\sigma$ variable bin, and they
once again have the highest `FRATIO` average values. However, they are
typically less variable than their cULXss counterparts, as seen from the
higher `VAR_PROB` and lower `FRATIO` average values for the entire set of
candidates regardless of the morphological type of the host galaxy. The
difference between early spiral-hosted and elliptical and lenticular-hosted
ULXs is also blurrier, most likely due to the small size of the available
sample.
It should be noted that despite these general trends, the dispersion in the
`VAR_PROB` and `FRATIO` values is too large to find a reliable dependence on
other quantities. It is tempting to propose a relationship between variability
and the star-formation activity in galaxies, as coincidentally late spiral
galaxies also present the highest sSFR -excluding irregular galaxies from the
analysis-, while both elliptical and lenticular galaxies present the lowest
sSFR values. Despite these hints towards a correlation between variability and
galaxy morphology, a further investigation of the variability properties in
the context of the sSFR of the galaxies does not show and clear correlation.
Sutton et al. (2013) make a thorough study of the variability of several ULXs,
stating that an increase of X-ray brightness usually goes in hand with a
softening of the source due to accretion winds narrowing their funnel shape
and obscuring the inner regions of the disc. We do not find such relationship
for many cases either. For instance, NGC 5204 X-1, one of their notable
examples, which appears in our cULXss with $\langle
L_{\textbf{X}}\rangle=(6.0\pm 0.6)\times 10^{39}$ erg s-1, presents the
complete opposite behaviour within XMM-Newton observations, ranging from
$HR_{1}\approx 0.33$ and $HR_{2}\approx-0.02$ at $L_{\textbf{X}}\approx
5\times 10^{39}$ erg s-1 to $HR_{1}\approx 0.42$ and $HR_{2}\approx 0.02$ at
$L_{\textbf{X}}\approx 8\times 10^{39}$ erg s-1. Nonetheless, the methodology
and sample data of the two studies diverge greatly, and in particular our
methodology is rather limited when making statements about individual sources.
## 5 Discussion
### 5.1 Limitations of our study
#### 5.1.1 Limitations of the catalogue
With 779 identified candidates, our catalogue provides the largest ULX sample
built to date, followed by the 629 candidates provided by the Chandra-
based catalogue from Kovlakas et al. (2020), the 470 candidates from Walton et
al. (2011) and the 394 candidates from Earnshaw et al. (2019). Out of these,
94 are bright candidates with at least one detection with
$L_{\textrm{X}}>5\times 10^{40}$ erg s-1. However, to avoid biases towards
bright sources and blending, we have only considered the 292 candidates from
the cULXss and 62 from the cbULXss for the population study. While this
ensures the reliability of our results for the whole cULXss, the small size of
the cbULXss severely hampers our study. In addition, as suggested in Section
5.2.2, the bright population is more prone to contamination by unidentified
interlopers and source blending, hampering the study even further.
Aside from the manual inspection of sources during the filtering process, our
study is also strictly limited to collective properties of our samples. Both
the interloper and the ULX populations are very heterogeneous samples at the
astrophysical level, and while some trends are identified, individual objects
can differ greatly from each other. For instance, we have identified in
Section 4.3 that ULX candidates in late-type galaxies tend to present harder
spectra than those hosted by early-type ones. However, it is postulated that
ULXs present hard or soft states depending on the viewing angle and the
accretion rate (Sutton et al. 2013), and therefore it is most likely that the
spectrum of an individual ULX is determined by these factors rather than the
morphological type of their host galaxy.
From a population study such as the one presented in this work we cannot draw
any conclusions on the nature of individual sources. However, we can make
connections between their collective properties (spectra, variability,
frequency) and the nature of the stellar populations in their host galaxy. For
example, we see a trend for ULXs in late-type spirals to exhibit harder
colours, indicative of more significant photoelectric absorption. However, we
do not see any trend of variability indicators with the sSFR. The variability
and spectral trends can be better addressed with systematic observations of
individual objects (e.g., Sutton et al. 2013; Kaaret et al. 2017).
#### 5.1.2 Selection biases
Figure 18: Hubble Type distributions of galaxies with listed values in HECATE,
within 4XMM-DR9 XMM-Newton observations, and within our catalogue. Portions
have been normalized to sum to 1 in all three cases due to the large
differences in size between the sets.
The main limitation of 4XMM-DR9 is that it has been constructed from
observations of galaxies or clusters that were already of interest to
astronomers. As such, the characteristics of our ULX sample may deviate from
those of the true ULX population. In Figure 18 it is seen how galaxies
included in 4XMM-DR9 present a slight bias towards earlier types with respect
to the full content of HECATE. This bias only gets amplified further down in
the construction of our catalogue.
Choosing to use 4XMM-DR9 as the only X-ray catalogue of reference also limits
the size of the X-ray sample available to us, as it covers only 2.7% of the
sky area and 8.6% of all the galaxies present in HECATE. Fortunately, other
recent works such as Kovlakas et al. (2020) already take into account non-
overlapping parts of the sky, though individually they suffer from the same
bias as our own.
The most straightforward way to tackle these issues is studying the ULX
content of blind all-sky surveys. A good opportunity for this is presented by
the still on-going eROSITA All Sky Survey (eRASS, Merloni et al. 2012; Predehl
et al. 2021). By observing the entirety of the sky instead of selected
patches, and accounting for the average eROSITA’s sensitivity at the end of
the eRASS, we assess that around $\sim$1 000 new ULX candidates will be
discovered in HECATE galaxies of at least 20″ in size, eROSITA’s FWHM angular
resolution.
Further biases in our catalogue stem from our own design choices, particularly
from focusing our studies on source parameters. Occasionally, quality flags
and some measures of source parameters pick the worst possible value among all
the values assigned to individual detections. This means that if a legitimate
source has one detection with $\verb|SUM_FLAG|>1$, this bad value percolates
into `SC_SUM_FLAG` and renders the source non-elegible for being a ULX
candidate under our criteria. Ignoring the `SC_SUM_FLAG` value, the total
number of ULX candidates of quality raises to 847. This choice was made
because a bad detection often leads to unreliable source parameters, but the
true extent of this effect would need to be checked individually for every
source. Nonetheless, all of these sources are still available to the user of
the catalogue.
Another bias we introduced was the selection of point-like sources with high
detection likelihood, which excluded 1 632 sources within HECATE’s isophotal
ellipses, in contrast to the 23 262 accepted ones in the final catalogue.
However, we do think that the exclusion of extended sources is justified, as
ULXs are point-like sources.
#### 5.1.3 XMM-Newton’s angular resolution
XMM-Newton’s angular resolution is a limiting factor due to the clustered
nature of star-forming regions, where ULXs tend to be found (Anastasopoulou et
al. 2016; Kaaret et al. 2017). H II regions, both Galactic and extragalactic,
present sizes of the order of $\sim$10 pc, at most $\sim$100 pc (Tremblin et
al. 2014; Hunt & Hirashita 2009, and references therein). This implies that we
should expect contamination from neighbouring X-ray sources in the ULX
candidate parameters at $D\gtrsim 10$ Mpc, or even blending of sources into
spurious ULX candidates. The cULXss deals with this issue by establishing the
cut at $D_{\textrm{max}}\approx 29$ Mpc, so we expect little contamination to
our ULX population study. However, it is very telling how at
$D\gtrsim 100$ Mpc the bright ULX candidates completely
overtake the normal ULX candidates. In fact, it was explicitly revealed
in Section 3.3 that some of the manually investigated objects suffer from this
problem.
Other catalogues built with higher resolution telescopes such as Chandra are
able to partially alleviate this problem, but the issue reappears at $D\gtrsim
40$ Mpc as stated in Kovlakas et al. (2020). Individual follow-up of
interesting sources is once again essential to untangle the nature of these
objects.
### 5.2 Discussion of contents
#### 5.2.1 Candidates at the ULX luminosity threshold
As expressed in Section 3.2, our criterion for the selection of ULX candidates
is whether a source has at least one detection whose luminosity lies above the
established threshold within its 1$\sigma$ uncertainty. This rather lax
condition allows typically low-luminosity sources to be considered ULX
candidates. However, it takes into account that some variable objects may
behave as ULXs intermittently, as well as NS-ULXs close to the Eddington
limit. It is noteworthy that, while in this work we have used the definition
of the ULX luminosity threshold common in the literature
($L_{\textrm{X}}>10^{39}$ erg s$^{-1}$, Kaaret et al. 2017), this is but an
approximation, as the Eddington limit lies at $1.8\times 10^{38}$ erg s$^{-1}$
for a 1.4 M⊙ NS, and at $1.3\times 10^{39}$ erg s$^{-1}$ for a 10 M⊙ StBH.
As shown in Section 4.1, only 516 of our ULX candidates have at least one
detection with luminosity above the ULX threshold at more than 1$\sigma$
significance ($L_{\textrm{X}}-\Delta L_{\textrm{X}}>10^{39}$ erg s$^{-1}$),
and 150 others are also likely to be ULXs within their 1$\sigma$ uncertainty
($L_{\textrm{X}}>10^{39}$ erg s$^{-1}$). This leaves the catalogue with 106
objects that qualify as ULXs only within the uncertainty of their brightest
detection ($L_{\textrm{X}}<10^{39}$ erg s$^{-1}$), and that are therefore
unlikely to be ULXs according to the common definition. However, from this
sub-set, a vast majority of 95 have been observed with luminosities well above
the Eddington limit for a 1.4 M⊙ accreting NS ($L_{\textrm{X}}-\Delta
L_{\textrm{X}}>1.8\times 10^{39}$ erg s$^{-1}$), and therefore still deserve a
place in this catalogue under a more comprehensive ULX luminosity threshold.
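To make the tiered criterion above concrete, the following minimal Python sketch (not our actual pipeline; the arrays and their values are illustrative) classifies sources by their brightest detection:

```python
import numpy as np

# L and dL are hypothetical arrays holding the brightest-detection
# luminosity and its 1-sigma uncertainty in erg/s.
L_ULX = 1e39  # common ULX luminosity threshold (Kaaret et al. 2017)

def classify_ulx(L, dL):
    L, dL = np.asarray(L, dtype=float), np.asarray(dL, dtype=float)
    labels = np.full(L.shape, "non-ULX", dtype=object)
    labels[L + dL > L_ULX] = "ULX within 1-sigma only"  # the lax criterion
    labels[L > L_ULX] = "likely ULX"                    # nominal value above threshold
    labels[L - dL > L_ULX] = "secure ULX"               # above threshold at >1 sigma
    return labels

print(classify_ulx([2.0e39, 1.1e39, 0.8e39], [0.5e39, 0.3e39, 0.3e39]))
# ['secure ULX' 'likely ULX' 'ULX within 1-sigma only']
```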
#### 5.2.2 The most luminous ULXs
Table 8: The number of ULX candidates according to the type of their host galaxies, the number of background (AGNs, QSOs and background galaxies) and foreground (stellar objects) sources that would otherwise qualify as ULX candidates, and the total fraction that they would constitute if added to the total.

Sample: ULX candidates of quality

| Subsample | All | El. | Le. | ESp. | LSp. | Ir. |
|---|---|---|---|---|---|---|
| Candidates | 779 | 141 | 129 | 261 | 195 | 28 |
| Background | 74 | 15 | 13 | 25 | 12 | 3 |
| Foreground | 27 | 6 | 3 | 13 | 1 | 1 |
| Total (%) | 11.5 | 13.0 | 11.0 | 22.8 | 6.25 | 12.5 |

Sample: Bright ULX candidates of quality

| Subsample | All | El. | Le. | ESp. | LSp. | Ir. |
|---|---|---|---|---|---|---|
| Candidates | 94 | 15 | 20 | 33 | 14 | 2 |
| Background | 29 | 7 | 7 | 9 | 3 | 1 |
| Foreground | 5 | 2 | 0 | 2 | 0 | 0 |
| Total (%) | 26.6 | 37.8 | 26.9 | 25.0 | 17.6 | 33.3 |
Our catalogue contains 94 ULX candidates of quality with at least one
detection with luminosity $L_{\textrm{X}}>5\times 10^{40}$ erg s$^{-1}$. This
sample extends to 109 sources in total if we ignore the `SC_SUM_FLAG` column.
It was shown in Sections 4.3 and 4.4 that, while the cbULXss follows the same
trends as the cULXss, in the sense that candidates hosted by late-type
galaxies tend to present harder spectra and higher variability, the bright
candidates generally have softer spectra and more subdued variability than
their cULXss cousins.
A tempting way to explain this difference is to invoke different physical
origins for the sources. Hyperluminous X-ray sources (HLXs) are ULXs with
$L_{\textrm{X}}>10^{41}$ erg s$^{-1}$, and although the community has recently
shifted towards explaining them as typical X-ray binaries at the high end of
the luminosity distribution due to very large mass-transfer rates (Bachetti
2016; Wiktorowicz et al. 2017), they are still the best objects in which to
look for BHs of large masses or even IMBHs (Kaaret et al. 2017). This search
is further motivated by the first direct confirmation of an IMBH in the
gravitational wave event GW190521 (Abbott et al. 2020a, b), which is proof
that black holes of these masses do exist and are to be considered as a
potential explanation for the brightest ULXs. Indeed, a handful of ULXs, such
as ESO 243-49 HLX-1 (Kong et al. 2007; Servillat et al. 2011; Pasham et al.
2014) or NGC 2276-3c (Mezcua et al. 2015), constitute promising candidates
for such objects; three of them are included in our catalogue.
In our catalogue, ESO 243-49 HLX-1 corresponds to source
$\verb|SRCID|=202045402010003$, with an average luminosity of $\langle
L_{\textrm{X}}\rangle=(1.7\pm 0.4)\times 10^{41}$ erg s$^{-1}$ and a maximum
luminosity of $L_{\textrm{X}}=(7\pm 2)\times 10^{41}$ erg s$^{-1}$.
Unfortunately, one of its 6 detections has $\verb|SUM_FLAG|=3$, so it has not
been included among our ULX candidates of quality. M82 X-1 has also been
recovered, as source $\verb|SRCID|=201122902010001$, but this source is
actually a blend of M82 X-1 and the NS-ULX M82 X-2 (Bachetti et al. 2014), as
they are only 0.52″ apart and are therefore indistinguishable by XMM-Newton.
Furthermore, this source also has one detection with $\verb|SUM_FLAG|=3$.
NGC 2276-3c ($\verb|SRCID|=200223402010005$) is the only one of the three that
appears as a bright ULX candidate of quality. However, it too consists of
three sources blended in XMM-Newton, one of them being a genuine IMBH
candidate (Mezcua et al. 2015).
Nonetheless, it is known that not all bright ULX candidates need to be
explained by the presence of accreting massive BHs. For example, another
object classified as a bright ULX candidate in our catalogue is NGC 5907
ULX-1, a NS-ULX that has been observed at a luminosity of
$L_{\textrm{X}}>10^{41}$ erg s$^{-1}$ (Israel et al. 2017).
This implies that there must be other explanations for their different
properties. In many cases, the reason may not even be astrophysical. As seen
in Figure 8, almost all of them are located at $D\gtrsim 20$ Mpc, which makes
them prone to be the result of the blending of several low-luminosity
sources. The fact that they are always found alone in their host galaxies is
also a strong indication of source blending. This is reinforced by the
explicit detection of confused sources among the manually inspected objects
in Section 3.3. It is also possible that there is a significant contribution
of unidentified nuclear sources to our cbULXss. Table 8 shows clearly that a
much larger share of background interlopers was identified among the bright
ULX candidates than in the general sample, implying that the unidentified
contaminants may also constitute a larger fraction.
Nonetheless, of the aforementioned objects, only NGC 2276-3c has $\langle
L_{\textrm{X}}\rangle>10^{41}$ erg s$^{-1}$. In fact, only 25 belong to the
HLX class within their uncertainty, or 33 if `SC_SUM_FLAG` is ignored. Of the
ones with $\verb|SC_SUM_FLAG|\leq 1$, only three match the brightest objects
from Earnshaw et al. (2019), the study closest to ours in terms of sampling
and methodology. NGC 4077 and IC 4596 are included in Earnshaw et al. (2019)
as potential IMBH candidates. IC 4320 also appears in Earnshaw et al. (2019)
and Walton et al. (2011), and has already been investigated by Sutton et al.
(2012). The remaining 22 do not seem to appear in previous literature as far
as we are aware. If we extend the comparison to objects qualified as
contaminants, we also find a match for the HLXs in IC 4252 and UGC 6697. The
former is considered a central source in our catalogue, while the latter
holds $\verb|n_Galaxies|=2$, $\verb|SC_SUM_FLAG|=2$ and has been identified
with a PanSTARRS1 counterpart; therefore neither meets the requirements for
being considered a source of quality. Others also included in Earnshaw et al.
(2019), such as UGC 1934 and NGC 2276, belong to our bright ULX candidates
but do not qualify as HLXs.
#### 5.2.3 Effectiveness of the filtering pipeline
In Table 5, it is seen that a total of 1 943 sources are matched with AGNs,
QSOs or background galaxies, and 1 652 with stellar objects. Additionally,
Table 8 shows that 74 and 27 objects in each of these groups, respectively,
would have been classified as ULX candidates if it were not for their
association. In total, they would constitute 11.5% of the ULX candidates of
quality. It is therefore natural to ask whether the exclusion of these
objects is accurate or, at the opposite end, sufficient at all.
In the case of stellar objects, we have applied extra conditions beyond
positional coincidence, based on the X-ray flux and optical magnitude of the
sources. This way, an X-ray source was only classified as a stellar
contaminant if the stellar counterpart was bright enough to explain the X-ray
emission or, at the least, to contaminate it significantly. In any case, most
of the objects classified as stellar contaminants concentrate in nearby
galaxies devoid of ULX candidates. Nonetheless, the best way to attest to the
efficacy of the pipeline is to compare the spectral values of ULXs to those
of stellar contaminants, shown in Tables 6 and 5, and to see that the cULXss
and the cbULXss have spectral properties distinguishable from those of the
bulk of stellar objects.
On the other hand, the direct exclusion of any X-ray source coincident with a
known QSO, AGN or SDSS-DR14 source may raise more concerns. To inspect
whether this method has overshot or has been insufficient, we can compare the
number of expected background sources to the identified ones. In Section 3.1
it is mentioned that around 670 background contaminants are expected in the
set of galaxies at $D>1$ Mpc from the $\log{N}-\log{S}$ presented in Mateos
et al. (2008), which would correspond to ${\sim}12$% of the sources at that
distance. In our catalogue, 538 sources at $D>1$ Mpc (${\sim}10$%) were
actually classified as possible QSOs or AGNs. Therefore, it can be concluded
that our filtering pipeline does a good job of identifying most of the
background contaminants, but also that ${\sim}2$% of our sources may still be
unidentified background sources, including ${\sim}15$ ULX candidates.
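The comparison can be summarised with a back-of-envelope computation. The sketch below is illustrative, not our analysis code; the counts come from the text, and using the 779 quality candidates as the base for the final figure is our assumption:

```python
expected_bg, expected_frac = 670, 0.12      # logN-logS prediction (Section 3.1)
identified_bg, identified_frac = 538, 0.10  # classified QSOs/AGNs at D > 1 Mpc

print(f"unidentified background sources: ~{expected_bg - identified_bg}")
residual_frac = expected_frac - identified_frac  # ~2% of sources
n_ulx_quality = 779
# ~2% of the quality candidates is roughly the ~15 quoted above:
print(f"possible background ULX candidates: ~{residual_frac * n_ulx_quality:.0f}")
```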
### 5.3 Comparison with other works
#### 5.3.1 Previous serendipitous XMM-Newton ULX catalogues
Earnshaw et al. (2019) followed much of the methodology presented in Walton
et al. (2011). As Walton et al. (2011) build their sample from 2XMM-DR1
(http://xmmssc.irap.omp.eu) and Earnshaw et al. (2019) from 3XMM-DR4, their
work can be seen as an update on the ULX content of the XMM-Newton survey.
Likewise, our work can also be seen as an update on the ULX content of
XMM-Newton with respect to Earnshaw et al. (2019). Therefore, we expect to
recover most of the ULX candidates presented in their work and in Walton et
al. (2011).
A source positional cross-match recovers 1 008 sources from the total of
1 314 listed in Earnshaw et al. (2019). As Earnshaw et al. (2019) do not
preserve sources classified as interlopers, 80% of the matches correspond to
clean sources on our side. The remaining sources consist of 11 central
sources, 30 stars, one QSO, 16 SIMBAD objects, 21 objects from GaiaDR2, 32
extragalactic objects from PanSTARRS1 and one background galaxy manually
identified in NED.
Figure 19: Dispersion of the HECATE to RC3 semi-major and semi-minor axis
ratios, respectively, for galaxies appearing in both this work and Earnshaw
et al. (2019).
Figure 20: TOPCAT astrometric maps of sources found in galaxies NGC 4649
(large ellipses at the center) and NGC 4647 (smaller ellipses at the upper
right). The isophotal ellipses from RC3 and HECATE are also shown.
The missing 306 sources are explained mostly by the HECATE updates to the
isophotal radii of galaxies with respect to RC3 (de Vaucouleurs et al. 1991;
Corwin et al. 1994) used in Earnshaw et al. (2019). Figure 19 shows that the
updated galaxy dimensions often differ slightly from the older values,
towards both larger and smaller values. This leads both to the loss and to
the new inclusion of some peripheral sources, if the new sizes are smaller or
larger, respectively. This phenomenon is well illustrated in Figure 20, where
it is seen how smaller isophotal ellipses lead to the exclusion of seven
sources from Earnshaw et al. (2019). For the same reason, we only recover 271
galaxies, as sometimes all the sources present in one galaxy are lost. As a
rough estimate of the loss of sources, we take 154 galaxies present in both
Earnshaw et al. (2019) and our catalogue that are smaller in HECATE than in
RC3 and compute that, on average, 26% of the sky area is lost. On the other
hand, for 112 galaxies that are larger and hold $\verb|n_Galaxies|=1$
(excluding one source that we found to be matched to different galaxies in
the two catalogues), the mean sky-area increase is 39%. This gives an
intuition into the 23% of sources missing from Earnshaw et al. (2019), but it
also indicates that some extra sources have been included. Nonetheless, the
newer values are considered more accurate due to the increase in photometric
data (Makarov et al. 2014).
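Since the searched sky area of a galaxy scales with the product of its isophotal semi-axes, the fractional area changes quoted above follow from a simple ratio. The sketch below uses hypothetical axis values and is not our measurement code:

```python
import numpy as np

def area_change(a_rc3, b_rc3, a_hec, b_hec):
    """Fractional change in isophotal ellipse area (pi cancels in the ratio)."""
    return (np.asarray(a_hec) * b_hec) / (np.asarray(a_rc3) * b_rc3) - 1.0

# e.g. a galaxy whose semi-axes shrank by 15% and 12% loses ~25% of its area:
print(f"{area_change(1.0, 0.6, 0.85, 0.528):+.0%}")
```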
In addition to this, the discrepancies can be further explained by the
improvement of detection algorithms in the 4XMM editions of the serendipitous
XMM-Newton catalogues with respect to the 3XMM editions. As such, a fraction
of the spurious detections present in 3XMM-DR4 are expected to be properly
flagged or not included in 4XMM-DR9.
Regarding the ULX content, 260 sources classified as ULX candidates in our
catalogue have a counterpart in Earnshaw et al. (2019). This amounts to the
recovery of 68% of their 384 candidates. The missing candidates can be
explained by the stricter filtering pipeline used in our analysis, in
addition to the reasons given above. We also recover 337 of the 470 objects
in Walton et al. (2011), 72% of the total.
Our findings regarding the spectral and abundance properties of the cULXss are
a confirmation of the findings from Walton et al. (2011) and Earnshaw et al.
(2019). Both of them find an overabundance of ULXs in spiral galaxies and a
clear tendency for late-type galaxy hosted candidates to present slightly
harder spectra. In addition, Earnshaw et al. (2019) also find that the
spectral properties of the ULX population mostly resemble those of the X-ray
population in the range $10^{38}<L_{\textrm{X}}<10^{39}$ erg s$^{-1}$, while
the AGN population presents a slightly different distribution. Finally, we recover
five of the HLX candidates in their catalogues.
#### 5.3.2 Previous serendipitous Chandra ULX catalogues
As of the writing of this paper, Kovlakas et al. (2020) is the most recent and
largest Chandra-based ULX catalogue. Their work goes in parallel to our own,
presenting 629 ULX candidates in 309 galaxies out of 23 043 sources in 2 218
galaxies. Therefore, our catalogues complement each other to a great extent.
Most remarkably, we use the same reference list of galaxies, HECATE, which
leads to a very interesting comparison of results.
If we restrict our comparison to mutually shared galaxies at $D>1$ Mpc, their
numbers decrease to 11 359 sources in 849 galaxies. Kovlakas et al. (2020)
focused on galaxies with $D<40$ Mpc, and selected as ULX candidates all non-
nuclear objects in AGN galaxies, or all objects in non-AGN galaxies, which
have $L_{\textrm{X}}>10^{39}$ erg s$^{-1}$ and present negative pileup and
unreliability flags in Chandra. Under those conditions, 341 of these sources
are ULX candidates in their catalogue. These 849 galaxies contain 3 130
sources in our catalogue, out of which 457 are ULX candidates on our side.
Only 2 091 sources have a direct counterpart on both sides, including 301 ULX
candidates on our side and 144 from Kovlakas et al. (2020). Finally, only 74
of the ULX candidates coincide on both sides.
Two remarkable discrepancies are noticed: firstly, the larger density of
sources in Kovlakas et al. (2020) in comparison to our own catalogue, and
secondly, the lower yield of ULXs in Kovlakas et al. (2020) for the same sky
area. The first one is easily explained by Chandra’s sharper resolution,
which allows for the detection of many fainter sources that are harder to
resolve with XMM-Newton. The second point can be explained by the difference
in our ULX filtering criteria. We select as ULX candidates those objects
that, aside from having good data and being clean of interlopers, fall within
the ULX luminosity regime in at least one of their detections, including the
corresponding uncertainties. This implies the inclusion of objects at the ULX
luminosity threshold in our ULX candidate set, as discussed in Section 5.2.1.
By contrast, Kovlakas et al. (2020) use only the average source luminosity as
their criterion. Using their criterion, our set of ULX candidates in the same
area would be reduced from 457 to 347, closer to their 341 candidates.
Additionally, we are prone to source blending due to XMM-Newton’s poorer
resolution, leading to slight luminosity overestimates.
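The difference between the two selection criteria can be illustrated with a short sketch (hypothetical values and function names; not the code of either catalogue):

```python
import numpy as np

L_ULX = 1e39

def is_ulx_this_work(lums, dlums):
    """ULX if any single detection reaches the threshold within 1 sigma."""
    return np.any(np.asarray(lums) + np.asarray(dlums) > L_ULX)

def is_ulx_kovlakas(lums):
    """ULX if the average source luminosity exceeds the threshold."""
    return np.mean(lums) > L_ULX

lums, dlums = [4e38, 1.2e39, 6e38], [1e38, 3e38, 1e38]
print(is_ulx_this_work(lums, dlums), is_ulx_kovlakas(lums))  # True False
```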
Figure 21: Dispersion of source luminosities for coinciding $D>1$ Mpc sources
in our catalogue (horizontal axis) and Kovlakas et al. (2020) (vertical
axis). ULX candidates from our catalogue are highlighted in blue.
There is an extra, more subtle factor that contributes to the lack of
coinciding candidates. As illustrated by Figure 21, the estimated
luminosities of matched sources are in good agreement between the two
catalogues, but with a typical dispersion of around an order of magnitude.
This alone explains why there are ULX candidates on our side that are not
counted as such in Kovlakas et al. (2020) and vice versa, despite having a
positional match on the sky. The source of the spread can be both intrinsic
variability in the sources and uncertainties arising during the flux
measurement. Nonetheless, these discrepancies add to the mutual completeness
of the two catalogues: even in overlapping observations, both catalogues need
to be considered to obtain a complete view of the total ULX content.
Besides quantity, we perform similar qualitative work using the same
parameters as provided by HECATE. Kovlakas et al. (2020) find an
overabundance of ULXs in spiral galaxies as well as a spike in elliptical
galaxies, in agreement with our results. The most interesting common result
is that the number of ULX candidates per unit SFR in early-spiral galaxies is
the closest to the one predicted by Wiktorowicz et al. (2017), of
0.67 ULX/M⊙ yr$^{-1}$. In our case, we estimate a peak value of
0.65 ULX/M⊙ yr$^{-1}$ for early spiral galaxies, while Kovlakas et al. (2020)
present a value of 0.51 ULX/M⊙ yr$^{-1}$. We also agree with Kovlakas et al.
(2020) in that a ULX population exists in early-type galaxies despite their
low SFR values, indicating that it is composed of an older population of
LMXBs. However, regarding the spiral galaxies, we see that in the XMM-Newton
sample galaxies with $T_{\textrm{H}}\sim 3$ present the highest ULX
frequencies and fraction of hosts, while Kovlakas et al. (2020) find this at
galaxies with $T_{\textrm{H}}\sim 5$ (Sc galaxies in the original paper).
This difference will most likely get blurred as ULX catalogue sizes grow in
the future.
### 5.4 Future prospects
Our catalogue is the largest ULX collection built to date, and it is based on
the most recent XMM-Newton serendipitous catalogue available at the time.
Nonetheless, the available catalogues keep improving both in size and
quality. For instance, in late 2020, the tenth data release of the XMM-Newton
serendipitous catalogue, 4XMM-DR10
(http://xmmssc.irap.omp.eu/Catalogue/4XMM-DR10/4XMM_DR10.html), was made
available to the public, adding 25 034 new sources from 443 new observations,
an increase of almost 5% with respect to 4XMM-DR9. As the available samples
keep increasing, so will the samples of ULX candidates available to the
community. This will allow for more solid statements regarding the spectral
and variability properties of the bright ULX population, and for better
identifying it as a subgroup of the ULX population.
A particularly promising resource that will soon be available is the eRASS
survey (Merloni et al. 2012), which has been active for more than a year as
of the writing of this paper (Predehl et al. 2021). Based on the area
observed by eROSITA in the western Galactic hemisphere as of April 27, 2020,
the preliminary study of Bernadich (2020) finds a total of 132 ULX
candidates, of which only 30 are matched to our catalogue within 3 times
their positional uncertainty. This is to be expected, as eRASS sources are in
general spread all across the sky, with little overlap between these
observations and the XMM-Newton fields. Taking into account the area covered
by eROSITA and its average sensitivity at the end of the survey by 2024, the
discovery of around 1 000 new ULX candidates is expected.
X-ray catalogues are not the only ones of relevance to the search for ULXs.
Information from other wavelength ranges helps to investigate the nature of
the observed objects, and a lot of work is being put in on that side too. For
instance, also in late 2020, the Gaia Early Data Release 3 was made available
in early access, providing positional and apparent brightness information for
$10^{8}$ new sources (Gaia Collaboration 2020), with a full release planned
by 2022 (https://www.cosmos.esa.int/web/gaia/release). Given our methodology,
catalogues like this will be of great aid in identifying further interlopers
in future works.
### 5.5 Conclusions
We have built an expanded ULX catalogue from the latest available XMM-Newton
serendipitous source catalogue, 4XMM-DR9, and the HECATE list of galaxies. A
total of 23 262 point-like sources of high detection likelihood are included
within the isophotal ellipses of HECATE galaxies, out of which 3 274 have at
least one detection within the ULX luminosity regime
($L_{\textrm{X}}>10^{39}$ erg s$^{-1}$). However, most of these sources are
contaminating interlopers, 2 208 of them being the nuclear source of their
host galaxy. We have built a filtering pipeline to identify such possible
interlopers from other available catalogues and databases, such as
PanSTARRS1, Tycho2, SDSS-DR14, VéronQSO, SIMBAD and NED. In the end, 779
objects in 617 galaxies qualify as ULX candidates, 94 of which have at least
one detection with luminosity over the $L_{\textrm{X}}>5\times 10^{40}$
erg s$^{-1}$ threshold. Around 30 of these objects qualify as HLXs, with
$\langle L_{\textrm{X}}\rangle>10^{41}$ erg s$^{-1}$, many of which require
individual follow-ups to untangle their physical nature.
We show that ULX candidates from our complete sub-sample are preferentially
found in late-type galaxies, in agreement with the notion presented in recent
ULX censuses (Kovlakas et al. 2020; Earnshaw et al. 2019) and reviews (Kaaret
et al. 2017, and references therein). In particular, early and late spiral
galaxies present the largest ULX frequencies, with 49% of early spiral
galaxies and 37% of late spiral galaxies hosting at least one ULX. These
galaxies are also the ones with the highest SFR per unit of stellar mass
according to the HECATE values, confirming the correlation between ULXs and
star formation. We compute rates of 0.49 ULX/M⊙ yr$^{-1}$ for early spirals
and up to 1.01 ULX/M⊙ yr$^{-1}$ for late spirals. These numbers lie within
the range of values reported in previous literature (Wiktorowicz et al. 2017;
Kovlakas et al. 2020).
From the hardness-ratio measurements provided by 4XMM-DR9, we have shown that
our ULX sample has spectral properties that are distinct from those of the
interloper population. At the same time, we show that ULX candidates hosted
by late-type galaxies tend to have harder spectra, most likely due to higher
photoelectric absorption. These results are very similar to those already
presented by previous XMM-Newton-based ULX studies, such as Walton et al.
(2011) and Earnshaw et al. (2019). The same trend is seen in the brighter
candidates, although they have softer spectra in general.
From the stacked version 4XMM-DR9s, built from overlapping XMM-Newton
observations, we also see that ULX candidates hosted by late spiral galaxies
are the ones with the highest probability of showcasing inter-observation
variability, with the highest amplitude modulations, while candidates hosted
by elliptical and lenticular galaxies fall on the other end. As far as we are
aware, this is the first study of this kind. Other variability studies such as
Sutton et al. (2013) focus on a more thorough study of individual objects
rather than on the whole sample, and therefore the results are not fully
comparable. Indeed, to fully understand the variability properties of a
population as heterogeneous as ULXs, individual follow-ups are essential.
Again, the bright candidates distinguish themselves by showing milder
variability in general, although similar trends begin to surface among them.
We are not able to make solid statements regarding the spectral and
variability properties of bright candidates due to the small size of the
complete bright sub-sample. The expected growth of available X-ray samples in
the future will mitigate this problem. Of particular promise is the all-sky
blind eRASS survey being performed by the eROSITA observatory (Predehl et al.
2021).
###### Acknowledgements.
This work was supported by the German DLR under project 50OX1901. Konstantinos
Kovlakas and Andreas Zezas acknowledge support from the European Research
Council under the European Union’s Seventh Framework Programme (FP/2007-2013)
/ ERC Grant Agreement n. 617001, and the European Union’s Horizon 2020
research and innovation programme under the Marie Skłodowska-Curie RISE
action, Grant Agreement n. 691164 (ASTROSTAT). The authors would also like to
thank all peers who helped in the construction of the catalogue by offering
their own tools and experience, and the anonymous referee who helped to
improve this paper. This research made use of the cross-match service provided
by CDS, Strasbourg.
## References
* Abbott et al. (2020a) Abbott, R., Abbott, T. D., Abraham, S., et al. 2020a, Phys. Rev. Lett., 125, 101102
* Abbott et al. (2020b) Abbott, R., Abbott, T. D., Abraham, S., et al. 2020b, ApJ, 900, L13
* Anastasopoulou et al. (2016) Anastasopoulou, K., Zezas, A., Ballo, L., & Della Ceca, R. 2016, MNRAS, 460, 3570
* Bachetti (2016) Bachetti, M. 2016, Astron. Nachr., 337, 349
* Bachetti et al. (2014) Bachetti, M., Harrison, F. A., Walton, D. J., et al. 2014, Nature, 514, 202
* Bernadich (2020) Bernadich, M. C. 2020, Master’s Thesis (Uppsala University) [diva2::1438261]
* Blanton et al. (2017) Blanton, M. R., Bershady, M. A., Abolfathi, B., et al. 2017, AJ, 154, 28
* Brandt & Alexander (2015) Brandt, W. N. & Alexander, D. M. 2015, A&AR, 23, 1
* Carpano et al. (2018) Carpano, S., Haberl, F., Maitra, C., & Vasilopoulos, G. 2018, MNRAS, 476, L45
* Chambers et al. (2016) Chambers, K. C., Magnier, E. A., Metcalfe, N., et al. 2016 [arXiv:1612.05560]
* Colbert & Ptak (2002) Colbert, E. J. M. & Ptak, A. F. 2002, ApJS, 143, 25
* Corwin et al. (1994) Corwin, H. G. J., Buta, R. J., & de Vaucouleurs, G. 1994, AJ, 108, 2128
* Cseh et al. (2014) Cseh, D., Webb, N. A., Godet, O., et al. 2014, MNRAS, 2014
* de Vaucouleurs et al. (1991) de Vaucouleurs, G., de Vaucouleurs, A., Corwin, H., et al. 1991, Third Reference Catalogue of Bright Galaxies (RC3) (Springer-Verlag: New York)
* Di Stefano et al. (2020) Di Stefano, R., Berndtsson, J., Urquhart, R., et al. 2020 [arXiv:2009.08987]
* Earnshaw et al. (2019) Earnshaw, H. P., Roberts, T. P., Middleton, M. J., et al. 2019, MNRAS, 483, 5554
* Eracleous et al. (2002) Eracleous, M., Shields, J. C., Chartas, G., & Moran, E. C. 2002, ApJ, 565, 108
* ESA: XMM-Newton SOC (2019) ESA: XMM-Newton SOC. 2019, XMM-Newton Users Handbook, issue 2.18
* Flewelling et al. (2016) Flewelling, H. A., Magnier, E. A., Chambers, K. C., et al. 2016 [arXiv:1612.05243]
* Freund et al. (2018) Freund, S., Robrade, J., Schneider, P. C., & Schmitt, J. H. M. M. 2018, A&A, 614, A125
* Fürst et al. (2016) Fürst, F., Walton, D. J., Harrison, F. A., et al. 2016, ApJ, 831, L14
* Gaia Collaboration (2018) Gaia Collaboration. 2018, A&A, 616, A1
* Gaia Collaboration (2020) Gaia Collaboration. 2020 [arXiv:2012.01533]
* Ghosh et al. (2008) Ghosh, H., Mathur, S., Fiore, F., & Ferrarese, L. 2008, ApJ, 687, 216
* Gilfanov (2004) Gilfanov, M. 2004, Prog. Theor. Phys. Suppl., 155, 49
* Goldader et al. (1997) Goldader, J. D., Goldader, D. L., & Joseph, R. D. 1997, AJ, 113, 1569
* Grimm et al. (2003) Grimm, H. J., Gilfanov, M., & Sunyaev, R. 2003, ChJAA, 3, 257
* Ho et al. (2001) Ho, L. C., Feigelson, E. D., Townsley, L. K., et al. 2001, ApJ, 549, L51
* Høg et al. (2000) Høg, E., Fabricius, C., Makarov, V. V., et al. 2000, A&A, 355, L27
* Hunt & Hirashita (2009) Hunt, L. K. & Hirashita, H. 2009, A&A, 507, 1327
* Inami et al. (2010) Inami, H., Armus, L., Surace, J. A., et al. 2010, AJ, 140, 163
* Israel et al. (2017) Israel, G. L., Belfiore, A., Stella, L., et al. 2017, Science, 355, 817
* Israel et al. (2016) Israel, G. L., Papitto, A., Esposito, P., et al. 2016, MNRAS, 466, L48
* Iwasawa et al. (2011) Iwasawa, K., Sanders, D. B., Teng, S. H., et al. 2011, A&A, 529, A106
* Kaaret et al. (2017) Kaaret, P., Feng, H., & Roberts, T. P. 2017, ARA&A, 55, 303
* Karachentsev et al. (2004) Karachentsev, I. D., Karachentseva, V. E., Huchtmeier, W. K., & Makarov, D. I. 2004, AJ, 127, 2031
* Kim & Fabbiano (2004) Kim, D. & Fabbiano, G. 2004, ApJ, 611, 846
* Kim & Fabbiano (2010) Kim, D. & Fabbiano, G. 2010, ApJ, 721, 1523
* King (2002) King, A. R. 2002, MNRAS, 335, L13
* King (2004) King, A. R. 2004, MNRAS, 347, L18
* Kong et al. (2007) Kong, A. K. H. et al. 2007, ApJ, 671, 349
* Kovlakas et al. (2020) Kovlakas, K., Zezas, A., Andrews, J. J., et al. 2020, MNRAS, 498, 4790
* Kovlakas et al. (2021) Kovlakas, K., Zezas, A., Andrews, J. J., et al. 2021, MNRAS, 506, 1896
* Liu & Bregman (2005) Liu, J.-F. & Bregman, J. N. 2005, ApJS, 157, 59
* Liu et al. (2006) Liu, J.-F., Bregman, J. N., & Irwin, J. 2006, ApJ, 642, 171
* Maccacaro et al. (1988) Maccacaro, T., Gioia, I. M., Wolter, A., et al. 1988, ApJ, 326, 680
* Madhusudhan et al. (2006) Madhusudhan, N., Justham, N. S., Nelson, L., et al. 2006, ApJ, 640, 918
* Makarov et al. (2014) Makarov, D., Prugniel, P., Terekhova, N., et al. 2014, A&A, 570, A13
* Mateos et al. (2008) Mateos, S., Warwick, R. S., Carrera, F. J., et al. 2008, A&A, 492, 51
* Merloni et al. (2012) Merloni, A., Predehl, P., Becker, W., et al. 2012 [arXiv:1209.3114]
* Mezcua et al. (2015) Mezcua, M., Roberts, T. P., Lobanov, A. P., & Sutton, A. D. 2015, MNRAS, 448, 1893
* Mineo et al. (2012) Mineo, S., Gilfanov, M., & Sunyaev, R. 2012, MNRAS, 419, 2095
* Pasham et al. (2014) Pasham, D. R., Strohmayer, T. E., & Mushotzky, R. F. 2014, Nature, 513, 74
* Paturel et al. (1985) Paturel, G., Fouque, P., Bottinelli, L., & Gougenheim, L. 1985, A&AS, 80, 299
* Pavlovskii et al. (2017) Pavlovskii, K., Ivanova, N., Belczynski, K., & Van, K. X. 2017, MNRAS, 2016, 2092
* Pietsch (2008) Pietsch, W. 2008, Astron. Nachr., 329, 170
* Predehl et al. (2021) Predehl, P., Andritschke, R., Arefiev, V., et al. 2021, A&A, 647, A1
* Rappaport et al. (2005) Rappaport, S. A., Podsiadlowski, P., & Pfahl, E. 2005, MNRAS, 356, 401
* Remillard & McClintock (2006) Remillard, R. A. & McClintock, J. E. 2006, ARA&A, 44, 42
* Roberts & Warwick (2000) Roberts, T. P. & Warwick, R. S. 2000, MNRAS, 315, 98
* Rosen et al. (2016) Rosen, S. R., Webb, N. A., Watson, M. G., et al. 2016, A&A, 590, A1
* Servillat et al. (2011) Servillat, M., Farrell, S. A., Lin, D., et al. 2011, ApJ, 743, 6
* Shu et al. (2019) Shu, Y., Koposov, S. E., Evans, N. W., et al. 2019, MNRAS, 489, 4741
* Sturm et al. (2013) Sturm, R., Haberl, F., Pietsch, W., et al. 2013, A&A, 558, A3
* Sutton et al. (2013) Sutton, A. D., Roberts, T. P., Gladstone, J. C., et al. 2013, MNRAS, 435, 1758
* Sutton et al. (2012) Sutton, A. D., Roberts, T. P., Walton, D. J., et al. 2012, MNRAS, 423, 1154
* Swartz et al. (2004) Swartz, D. A., Ghosh, K. K., Tennant, A. F., & Wu, K. 2004, ApJS, 154, 159
* Swartz et al. (2011) Swartz, D. A., Soria, R., Tennant, A. F., & Yukita, M. 2011, ApJ, 741, 49
* Taylor (2005) Taylor, M. B. 2005, in Astronomical Society of the Pacific Conference Series, Vol. 347, Astronomical Data Analysis Software and Systems XIV, ed. P. Shopbell, M. Britton, & R. Ebert, 29
* Traulsen et al. (2019) Traulsen, I., Schwope, A. D., Lamer, G., et al. 2019, A&A, 624, A77
* Traulsen et al. (2020) Traulsen, I., Schwope, A. D., Lamer, G., et al. 2020, A&A, 641, A137
* Tremblin et al. (2014) Tremblin, P., Anderson, L. D., & Didelon, P. 2014, A&A, 568, A4
* Véron-Cetty & Véron (2010) Véron-Cetty, M. P. & Véron, P. 2010, A&A, 518, A10
* Walton et al. (2011) Walton, D. J., Roberts, T. P., Mateos, S., & Heard, V. 2011, MNRAS, 416, 1844
* Wang et al. (2016) Wang, S., Qiu, Y., Liu, J., & Bregman, J. N. 2016, ApJ, 829, 20
* Webb et al. (2020) Webb, N. A., Coriat, M., Traulsen, I., et al. 2020, A&A, 641, A136
* Webb et al. (2012) Webb, N. A., Cseh, D., Lenc, E., et al. 2012, Science, 337, 554
* Wegner et al. (2000) Wegner, M., Ochsenbein, F., Egret, D., et al. 2000, A&AS, 143, 9
* Wiktorowicz et al. (2015) Wiktorowicz, G., Sobolewska, M., Lasota, J.-P., & Belczynski, K. 2015, ApJ, 810, 20
* Wiktorowicz et al. (2017) Wiktorowicz, G., Sobolewska, M., Lasota, J.-P., & Belczynski, K. 2017, ApJ, 846, 17
* Zhang et al. (2009) Zhang, W. M., Soria, R., Zhang, S. N., et al. 2009, ApJ, 699
# Introduction to Human-Robot Interaction:
A Multi-Perspective Introductory Course
Tom Williams MIRRORLab
Department of Computer Science
Colorado School of MinesGolden, COUSA
###### Abstract.
In this paper I describe the design of an introductory course in Human-Robot
Interaction. This project-driven course is designed to introduce undergraduate
and graduate engineering students, especially those enrolled in Computer
Science, Mechanical Engineering, and Robotics degree programs, to key theories
and methods used in the field of Human-Robot Interaction that they would
otherwise be unlikely to see in those degree programs. To achieve this aim,
the course takes students all the way from stakeholder analysis to empirical
evaluation, covering and integrating key Qualitative, Design, Computational,
and Quantitative methods along the way. I detail the goals, audience, and
format of the course, and provide a detailed walkthrough of the course
syllabus.
Human-Robot Interaction; Engineering Education; Qualitative Methods; Design
Methods; Quantitative Methods
## 1\. Introduction and Course Goals
In this paper I describe the design of an introductory course in Human-Robot
Interaction. This project-driven course is designed to introduce undergraduate
and graduate engineering students, especially those enrolled in Computer
Science, Mechanical Engineering, and Robotics degree programs, to key theories
and methods used in the field of Human-Robot Interaction that they would
otherwise be unlikely to see in those degree programs. To achieve this aim,
the course takes students all the way from stakeholder analysis to empirical
evaluation, covering and integrating key Qualitative, Design, Computational,
and Quantitative methods along the way.
The key goal of this course is three-fold. First, the course aims to introduce
students to the notion of social robotics, and the idea of using interactive
robots to help meet the needs of real world human communities. Second, the
course aims to introduce students to the field of Human-Robot Interaction, and
to showcase both the types of work that researchers are doing in the field to
align with this vision of social robotics, as well as the research methods
that HRI researchers used to achieve those goals. Finally, the course aims to
operate as an HCI research methods course, where students can learn key tools,
including qualitative research methodologies, design research methodologies,
experimental design, and statistical analysis, which they could easily
transfer to other engineering projects, regardless of whether they choose to
pursue future work in HRI, or even in Computer Science at all.
## 2\. Prerequisites and Target Audience
These course goals are critically conditioned on the expected background of
the enrolled students. The course is offered at a small engineering-only
university with a strong focus on Robotics related fields (50% of all
undergraduate students are enrolled in Mechanical Engineering or Computer
Science degree programs, and degree programs are offered in Robotics at both
the undergraduate and graduate levels), but with no degree programs offered in
social sciences or humanities (e.g., Psychology) and few, if any, elective
courses available in those fields. The university size and focus means that
the course is offered at a mixed undergraduate/graduate level, and is
primarily offered to students from Computer Science, Mechanical Engineering,
and Electrical Engineering. Based on this university structure and student
makeup, students enrolled in the course have prerequisite programming and
mathematical knowledge, have likely taken related technical courses such as
Intro to Robotics, Robot Perception, SLAM, Robot Planning, Computer Vision,
Robot Ethics, Mechatronics, Advanced Robotic Control, or Robot Mechanics, and
have likely taken key design courses required of all undergraduate students,
but likely have little exposure to or appreciation for relevant theories,
methods, or practices from psychology, philosophy, communication, and so
forth. These assets and deficits critically shape the content covered in the
course.
One surprising way these assets and deficits shape course design is in terms
of the course’s coverage of robot ethics topics (or lack thereof). Most
students enrolled in the course also take Robot Ethics either before or after
taking this course. This is especially true for graduate students enrolled in
a Robotics MS or PhD program, who are required to take one or both courses. As
such, despite the societal and ethical impacts of interactive robots being
absolutely critical to the course content, these topics are not explicitly
covered since it is assumed that most students will receive deep coverage of
those topics in the standalone Robot Ethics course. That being said, key
ethical frameworks such as Engineering for Social Justice (E4SJ) (Leydens and
Lucena, 2017), Robots for Social Justice (Zhu et al., 2024), and Feminist
Human-Robot Interaction (Winkle et al., 2023) are baked into the course and
reflected in course activities such as stakeholder analysis, needfinding, and
E4SJ-grounded reflection exercises. Moreover, discussion of social and ethical
implications frequently arise in guest lectures invited in the second half of
the semester.
## 3\. Course Format Overview
To achieve the course goals, the course is structured as a 48-student,
project-based, sixteen-week course with an even balance between lectures and
lab assignments. Several lab assignments and lectures are derived from
reference courses, especially Dr. Ana Paiva’s Masters-level Social Robots and
Human Robot Interaction course, and Bilge Mutlu’s graduate-level Human-
Computer Interaction course, albeit adapted for this course’s mixed
undergraduate/graduate population.
All 48 students attend class Mondays and Wednesdays for 50 minutes each day.
Students also attend one of two 24-student, two-hour-long lab sections on
Fridays. Course exercises use nine Softbank Naos purchased for the course over
several years through university-internal course equipment grants. At any
given time, course staff ensure that eight of the nine Naos are charged and
operational. As such, each lab section can be broken into eight three-student
project groups, each of which has a Nao made available to them during lab
sections.
Over the course of the semester, students submit all lab reports and other
project deliverables by uploading them to Open Science Framework (OSF)
repositories, thus teaching the students open science principles and tools as
a secondary learning outcome.
## 4\. Detailed Syllabus
In this section I will go through the course syllabus week by week, to explain
how the different elements of the course fit together.
The first week begins with a set of activities intended to convey a high level
sense of the course to students. Specifically, the first class immediately
establishes the importance of grounding robotic engineering practice in
genuine community needs, and the ability of qualitative methods to establish
this grounding. Students are then immediately started on their first major
assignment, in which they are tasked with interviewing someone of their choice
who works in an industry relevant to social robotics, about the needs they
face. In this first week, a panel of HRI industry professionals also attend
class, to help students see this first assignment, which is likely out of
their comfort zone, as an opportunity to learn a skill relevant to their
future robotics careers. Finally, in this first week, students participate in
their first Nao programming lab, to better understand the capabilities of the
robot they will use in the class, and better see connections with the needs of
the stakeholders they are interviewing.
The second week of the course steps back to explore the theoretical
foundations of HRI, including key dimensions of interactions and
interactivity, and of longer-term constructs such as trust and influence.
Finally, students end the week by performing a multi-day grounded theory
analysis of their interview transcripts, modeled after an assignment from
Bilge Mutlu’s HCI course at University of Wisconsin Madison. In this
assignment, students begin by performing open coding together in class.
Students then separately perform axial coding as homework, and then reconvene
in groups to consolidate their axial codes and identify higher level trends in
their collected interview transcripts. During this week, students are also
tasked with performing brief literature reviews to supplement what they are
hearing in their interviews with what others have heard and determined in
other parts of the field.
In the third week of the class, class is devoted to collaborative exercises in
which students work towards a shared understanding of the goals they will
pursue over the course of the semester. Students begin by taking insights from
their interviews, and insights from their literature reviews, and creating
sets of need-reason-source triads that correspond with key user needs,
associated reasons why those needs should be prioritized in robot design (if
possible. And the source of that reason in their interview or literature
review. This helps to ensure that student projects are grounded in real user
needs and are traceable back to those needs. Next, students use these need-
reason-source triads to create vision statements for their class projects.
Third, students engage in structured reflection exercises in which students
are encouraged to reflect on ethical design principles from Engineering for
Social Justice (Leydens and Lucena, 2017), and are given the opportunity to
revise their vision statements accordingly. Finally, students identify key
robot design goals aligning with their final vision statements, and perform a
card sorting exercise to identify which design goals they will actually
prioritize over the course of the semester.
In the fourth week of the class, students are given the tools they need to
pursue their design goals. We begin with a lecture on robot design, followed
by a robot design lab in which students first storyboard out interactions
(e.g., Fig. 1), then learn principles of improvisational theater through
Improv theater games played outside on the campus quad, then apply those
principles to perform roleplay–based embodied sketching exercises. These
exercises help students identify possible interaction designs that will help
them best achieve their design goals. Students then perform further literature
review to identify other interaction design strategies from the literature
that might also help them achieve their design goals. Finally, students end
this fourth week with midterm presentations in which they present the results
of their interviews, their design visions, design goals, and the design
strategies they plan to use to pursue those goals and visions to meet the
needs of their identified user populations.
Figure 1. Storyboard from a project group in Fall 2022
In weeks five through eight, students learn the technical skills they need to
execute their design strategies. In week five, we focus on spatial and non-
verbal interaction. Students receive lectures on proxemics, motion, and gaze,
and begin a lab assignment on robot perception, grounded in the official Nao
tutorials. In week six, we focus on interaction dynamics. Students receive
lectures on group and team structures, collaboration, and turn-taking, and
complete their perception lab assignment. In week seven, we focus on verbal
interaction. Students receive lectures on dialogue, reference, and grounding,
and begin a lab assignment on robot dialogue, grounded in the official Nao
tutorials. Finally, in week eight, we focus on emotion and personality.
Students receive lectures on emotion and character design, and finish their
dialogue lab assignment.
In weeks nine through twelve, students are given the tools needed to run
experiments to evaluate their designs. In week nine, students are taught about
key dimensions of quantitative research methods, including measures, metrics,
and experimental design. Students then begin a month-long experimental design
lab, in which they formulate research questions grounded in their design
strategies, formulate hypotheses that correspond with those research
questions, and design a video-based HRI experiment to test those hypotheses.
Students then use the design skills learned earlier in the semester to design
the interactions that will appear in their experimental stimuli, and use the
technical skills learned earlier in the semester to implement those designs on
the Nao robot. Students go through CITI training, and create research ethics
protocols and consent forms for their experiments. Finally, students implement
their video-based experiments through Google Forms, and run their experiments,
with each student in the class serving as a participant in each study designed
by a project group other than their own. During weeks ten through twelve,
students work on this lab in class together on Fridays and outside of class
as homework, while on Mondays and Wednesdays, guest speakers from other
universities provide guest lectures on a wide array of HRI topics. These guest
lectures broaden the scope of HRI topics to which students are exposed,
without adding additional assignments to the students’ workload.
In weeks thirteen and fourteen, students are given the statistical tools
needed to evaluate the data from the experiments they have run. In week
thirteen students learn about both Frequentist and Bayesian statistics, and
complete a two-week lab assignment, in which they first proceed through
external tutorials on performing data analyses in JASP, and in which they are
then invited to use the same analysis paradigms to analyze the results of
their own experiments. In week fourteen, students listen to another guest
lecture from an HRI researcher from another university while completing their
statistics lab assignment, and then are released on Thanksgiving break.
Finally, we close the semester in week fifteen and sixteen. In these last two
weeks, students hear a final invited talk from an external researcher, and
perform a design futures exercise in which they speculate about other possible
future uses for social robots beyond those they were able to explore in the
class. Students attend a course wrap up lecture in which they learn about
other classes they can take to supplement their knowledge, and learn about
where other HRI research is being performed across the US, and especially in
Colorado, in case they are interested in pursuing graduate work in HRI.
Finally, students give group presentations covering the entire life cycle of
their projects, from needfinding, to design, to implementation, to evaluation
and results.
## 5\. Assignments and Assessment
Over the course of the semester, students are assessed through several means.
All students in the course read key course readings (typically textbook
chapters from Bartneck et al. (2020)’s Human-Robot Interaction: An
Introduction) and are assessed using brief, simple, low-stakes quizzes on the
reading. In addition, graduate students in the class are asked to read a
series of HRI research papers relating to class topics, and are assessed using
forum-style discussion-board posts in which they engage with the content of
those papers.
Next, students are graded through project deliverables, including lab reports,
midterm and final presentations, a final video demonstration of the autonomous
interaction they designed for the Nao, and a final six-page HRI-style paper
reflecting the type of paper they could have submitted to HRI if they had
actually obtained IRB approval and run the experiment with naive participants
rather than pilot-testing the experiment on themselves. In previous years, I
have had students actually obtain IRB approval, and paid for student groups to
run twenty within-subject participants each on Prolific. However, in practice
the overhead needed for students to learn to use psiTurk, and the overhead
needed for the course staff to deploy and run these experiments on students’
behalf, was deemed prohibitively burdensome and not sufficiently contributing
to student learning.
Finally, graduate students are asked to complete a four-page mini-survey on a
topic of their choice within HRI, in which they collect, compare, and contrast
at least 20-30 research papers on their chosen topic. At least half of these
papers are required to have been published at T-HRI, HRI, RO-MAN, or ICSR.
## 6\. Case Study
Each year, students in this course have explored a wide range of topics
through their semester-length projects. To provide a sense of these projects,
I will discuss the outcomes of one undergraduate project team as a case study.
Permission to share this information was granted by our Human Subjects
Research board, with informed consent from the students.
This project team began by interviewing an elementary school teacher about her
experiences in the classroom. Based on this interview, and inspired by the
HRI 2018 paper “Stop. I See a Conflict Happening” (Shen et al., 2018), the
students formulated the following design vision:
> “A key goal in elementary classrooms identified was that students and
> teachers need to have individual interactions to assist with communication
> and manage conflicts. From the interview and axial codes, several issues
> that make it hard for teachers to achieve this goal were identified. These
> issues include that kids struggle with social interactions, get frustrated
> quickly, prefer iPads to social interaction, and teachers are often required
> to get involved in physical conflicts between students. In addition, meeting
> the needs of every student individually requires teachers and students to
> have one on one time. However, as seen in axial code, teachers are required
> to perform many varied and demanding classroom tasks and thus do not have
> the time or manpower to deal with and manage social interaction and conflict
> with students on an individual basis. We believe that social robots may be
> able to assist teachers in managing conflicts between students and help to
> teach positive social interaction. We believe that social robots address
> this problem because robots can interact one on one with students in
> scenarios where the teacher is busy, and can intervene in minor conflicts
> between students freeing the teacher to focus on other tasks. In addition,
> kids may be more willing to listen to advice and rules from a robot and be
> less likely to fight a robot.”
After formulating this vision, students designed a conflict resolution
interaction using storyboarding and embodied sketching, implemented it on the
Nao robot using its Python API, and chose to study how different linguistic
choices might shape the persuasive power of the robot as derived from its
perceived authority. The students designed a 2 (Humorous/Serious) x 2
(Assertive/Passive) within-subjects experiment with a Latin Square design, and
measured the effects of these conditions on perceived authority using the
scale proposed by Gudjonsson (1989). Finally, the students analyzed their
results under a Bayesian analysis framework, and calculated Bayes Inclusion
factors for each of their considered factors and their interaction.
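For illustration, the counterbalancing behind such a 2 x 2 within-subjects design can be sketched as a simple cyclic Latin square over the four condition combinations, so that each condition appears once in every presentation position. This is an assumed sketch, not the students' actual materials:

```python
from itertools import product

conditions = [f"{h}/{a}" for h, a in product(["Humorous", "Serious"],
                                             ["Assertive", "Passive"])]
# Each row rotates the previous one: every condition appears exactly once
# per row and once per ordinal position across rows.
latin_square = [conditions[i:] + conditions[:i] for i in range(4)]
for group, order in enumerate(latin_square, start=1):
    print(f"participant group {group}: {order}")
```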
Overall, these activities demonstrated students’ (undergraduate-level) mastery
over qualitative, design, computational, and quantitative research
methodologies.
## 7\. Conclusion
Overall, Introduction to Human-Robot Interaction serves to introduce students
to the domain of Social Robotics, the field of Human-Robot Interaction, and
the research methodologies of Human-Computer Interaction. Due to the cross-
cutting nature of the course, the course does not delve deeply into (1) the
fundamental theories of HRI, (2) the algorithmic methods of HRI, or (3) the
social and ethical implications of HRI. Students taking this course
(especially graduate students) would thus be best served by taking relevant
courses in those areas before, concurrently with, or following the course.
###### Acknowledgements.
This work was funded in part by NSF CAREER Award IIS-2044865.
## References
* (1)
* Bartneck et al. (2020) Christoph Bartneck, Tony Belpaeme, Friederike Eyssel, Takayuki Kanda, Merel Keijsers, and Selma Šabanović. 2020. _Human-robot interaction: An introduction_. Cambridge University Press.
* Gudjonsson (1989) Gisli H Gudjonsson. 1989. Compliance in an interrogative situation: A new scale. _Personality and Individual differences_ 10, 5 (1989), 535–540.
* Leydens and Lucena (2017) Jon A Leydens and Juan C Lucena. 2017. _Chapter 2: Engineering Design for Social Justice_. John Wiley & Sons.
* Shen et al. (2018) Solace Shen, Petr Slovak, and Malte F Jung. 2018. “Stop. I See a Conflict Happening.” A Robot Mediator for Young Children’s Interpersonal Conflict Resolution. In _Proceedings of the 2018 ACM/IEEE international conference on human-robot interaction_. 69–77.
* Winkle et al. (2023) Katie Winkle, Donald McMillan, Maria Arnelid, Katherine Harrison, Madeline Balaam, Ericka Johnson, and Iolanda Leite. 2023. Feminist Human-Robot Interaction: Disentangling Power, Principles and Practice for Better, More Ethical HRI. In _Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction_ (Stockholm, Sweden) _(HRI ’23)_. Association for Computing Machinery, 72–82.
* Zhu et al. (2024) Yifei Zhu, Ruchen Wen, and Tom Williams. 2024. Robots for Social Justice (R4SJ): Toward a More Equitable Practice of Human-Robot Interaction. In _Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI)_.
# Improving Variational Autoencoder Estimation from
Incomplete Data with Mixture Variational Families
Vaidotas Simkus<EMAIL_ADDRESS>
Michael U. Gutmann<EMAIL_ADDRESS>
School of Informatics
University of Edinburgh
###### Abstract
We consider the task of estimating variational autoencoders (VAEs) when the
training data is incomplete. We show that missing data increases the
complexity of the model’s posterior distribution over the latent variables
compared to the fully-observed case. The increased complexity may adversely
affect the fit of the model due to a mismatch between the variational and
model posterior distributions. We introduce two strategies based on (i) finite
variational-mixture and (ii) imputation-based variational-mixture
distributions to address the increased posterior complexity. Through a
comprehensive evaluation of the proposed approaches, we show that variational
mixtures are effective at improving the accuracy of VAE estimation from
incomplete data.
## 1 Introduction
Deep latent variable models, as introduced by Kingma & Welling (2013); Rezende
et al. (2014); Goodfellow et al. (2014); Sohl-Dickstein et al. (2015);
Krishnan et al. (2016); Dinh et al. (2017), have emerged as a predominant
approach to model real-world data. The models excel in capturing the intricate
nature of data by representing it within a well-structured latent space.
_However, they typically require large amounts of fully-observed data at
training time, while practitioners in many domains often only have access to
incomplete data sets_.
In this paper we focus on the class of variational autoencoders (VAEs, Kingma
& Welling, 2013; Rezende et al., 2014) and investigate the implications of
incomplete training data on model estimation. Our contributions are as
follows:
* We show that data missingness can add significant complexity to the model posterior of the latent variables, hence requiring more flexible variational families compared to scenarios with fully-observed data (section 3).
* We propose finite variational-mixture approaches to deal with the increased complexity due to missingness for both standard and importance-weighted ELBOs (section 4.1).
* We further propose an imputation-based variational-mixture approach, which decouples model estimation from data missingness problems, and as a result, improves model estimation when using the standard ELBO (section 4.2).
* We evaluate the proposed methods for VAE estimation on synthetic and realistic data sets with missing data (section 6).
The proposed methods achieve better or similar estimation performance compared
to existing methods that do not use variational mixtures. Moreover, the
mixtures are formed by the variational families that are used in the fully-
observed case, which allows us to seamlessly re-use the inductive biases from
the well-studied scenarios with fully-observed data (see e.g. Miao et al.,
2022, for the importance of inductive biases in VAEs).
## 2 Background: Standard approach for VAE estimation from incomplete data
We consider the situation where some part of the training data-points might be
missing. We denote the observed and missing parts of the $i$-th data-point
${\bm{x}}^{i}$ by ${\bm{x}}_{\mathrm{obs}}^{i}$ and
${\bm{x}}_{\mathrm{mis}}^{i}$, respectively. This split into observed and
missing components corresponds to a missingness pattern
${\bm{m}}^{i}\in\\{0,1\\}^{D}$, which can be different for each $i$, and is
itself a random variable that follows a typically unknown missingness
distribution ${p^{*}}({\bm{m}}^{i}\mid{\bm{x}}^{i})$. We make the common
assumption that the missingness distribution does not depend on the missing
variables ${\bm{x}}_{\mathrm{mis}}^{i}$, that is,
${p^{*}}({\bm{m}}^{i}\mid{\bm{x}}^{i})={p^{*}}({\bm{m}}^{i}\mid{\bm{x}}_{\mathrm{obs}}^{i})$,
which is known as the ignorable missingness or missing-at-random assumption
(MAR, e.g. Little & Rubin, 2002, Section 1.3). The MAR assumption allows us to
ignore the missingness pattern ${\bm{m}}^{i}$ when fitting a model
${p_{\bm{\theta}}}({\bm{x}})$ of the true distribution ${p^{*}}({\bm{x}})$
from incomplete data.
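To make the assumption concrete, here is a minimal PyTorch sketch of a MAR mechanism; the logistic form and all names are invented for illustration and are not a mechanism from the paper. The probability that one variable goes missing depends only on another, always-observed variable, never on the (possibly missing) value itself.

```python
import torch

torch.manual_seed(0)
X = torch.randn(1000, 2)                          # x[:, 0] is always observed

# MAR: the mask for x[:, 1] depends only on the observed x[:, 0],
# never on the possibly-missing value x[:, 1] itself.
p_miss = torch.sigmoid(X[:, 0])                   # illustrative logistic mechanism
m = torch.ones_like(X, dtype=torch.bool)          # m[i, d] = 1 means "observed"
m[:, 1] = torch.rand(len(X)) > p_miss

X_obs = torch.where(m, X, torch.full_like(X, float("nan")))   # incomplete data set
```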
The VAE model with parameters ${\bm{\theta}}$ is typically specified using a
decoder distribution ${p_{\bm{\theta}}}({\bm{x}}\mid{\bm{z}})$, parametrised
using a neural network, and a prior ${p_{\bm{\theta}}}({\bm{z}})$ over the
latents ${\bm{z}}$ that can either be fixed or learnt. A principled approach
to handling incomplete training data is then to marginalise the missing
variables from the likelihood ${p_{\bm{\theta}}}({\bm{x}})$, which yields the
marginal likelihood
$\displaystyle{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}}^{i})$
$\displaystyle=\int{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}}^{i},{\bm{x}}_{\mathrm{mis}}^{i})\mathop{}\\!\mathrm{d}{\bm{x}}_{\mathrm{mis}}^{i}=\iint{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}}^{i},{\bm{x}}_{\mathrm{mis}}^{i}\mid{\bm{z}}){p_{\bm{\theta}}}({\bm{z}})\mathop{}\\!\mathrm{d}{\bm{z}}\mathop{}\\!\mathrm{d}{\bm{x}}_{\mathrm{mis}}^{i}=\int{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}}^{i}\mid{\bm{z}}){p_{\bm{\theta}}}({\bm{z}})\mathop{}\\!\mathrm{d}{\bm{z}},$
(1)
where the inner integral
$\int{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}}\mid{\bm{z}})\mathop{}\\!\mathrm{d}{\bm{x}}_{\mathrm{mis}}$
is often computationally tractable in VAEs due to standard assumptions, such
as the conditional independence of ${\bm{x}}$ given ${\bm{z}}$ or the use of
the Gaussian family for the decoder ${p_{\bm{\theta}}}({\bm{x}}\mid{\bm{z}})$.
However, the marginal likelihood above remains intractable to compute as a
consequence of the integral over the latents ${\bm{z}}$.
Due to the intractable integral, VAEs are typically fitted via a variational
evidence lower-bound (ELBO)
$\displaystyle\log{p_{\bm{\theta}}}({\bm{y}})$
$\displaystyle\geq\mathbb{E}_{{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{y}})}\left[\log\frac{{p_{\bm{\theta}}}({\bm{y}}\mid{\bm{z}}){p_{\bm{\theta}}}({\bm{z}})}{{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{y}})}\right]=\log{p_{\bm{\theta}}}({\bm{y}})-D_{\text{KL}}({q_{\bm{\phi}}}({\bm{z}}\mid{\bm{y}})\;||\;{p_{\bm{\theta}}}({\bm{z}}\mid{\bm{y}})),$
(2)
where ${\bm{y}}$ refers to ${\bm{x}}^{i}$ in the fully-observed case, and to
${\bm{x}}_{\mathrm{obs}}^{i}$ in the incomplete-data case, and
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{y}})$ is an (amortised) variational
distribution with parameters ${\bm{\phi}}$ that is shared for all data-points
in the data set (Gershman & Goodman, 2014). The amortised distribution is
parametrised using a neural network (the encoder), which takes the data-point
${\bm{y}}$ as the input and predicts the distributional parameters of the
variational family. Moreover, when the data is incomplete, i.e.
${\bm{y}}={\bm{x}}_{\mathrm{obs}}^{i}$, sharing of the encoder for any pattern
of missingness is often achieved by fixing the input dimensionality of the
encoder to twice the size of ${\bm{x}}$ and providing
$\gamma({\bm{x}}_{\mathrm{obs}}^{i})$ and ${\bm{m}}^{i}$ as the inputs
(alternative encoder architectures, such as permutation-invariant networks
(Ma et al., 2019), are also used), where $\gamma(\cdot)$ is a function
that takes the incomplete data-point ${\bm{x}}_{\mathrm{obs}}$ and produces a
vector of length $D$ with the missing dimensions set to zero, which is
equivalent to setting them to the empirical mean for zero-centered data
(Nazábal et al., 2020; Mattei & Frellsen, 2019).
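As a concrete illustration of both conventions, here is a minimal PyTorch sketch; the sizes and names (`D`, `H`, `Z`, `encoder`, `decoder`) are our illustrative assumptions, not the paper's architecture. The encoder consumes $[\gamma({\bm{x}}_{\mathrm{obs}}), {\bm{m}}]$, and the conditional independence of a Gaussian decoder makes the inner integral of eq. 1 a masked sum of per-dimension log-densities.

```python
import torch
import torch.nn as nn

D, H, Z = 4, 64, 2  # illustrative sizes (2D latent, as used later in section 6.1)

# Encoder input is fixed to twice the data dimensionality: [gamma(x_obs), m].
encoder = nn.Sequential(nn.Linear(2 * D, H), nn.ReLU(), nn.Linear(H, 2 * Z))
decoder = nn.Sequential(nn.Linear(Z, H), nn.ReLU(), nn.Linear(H, 2 * D))

def encode(x, m):
    x_zeroed = torch.where(m, x, torch.zeros_like(x))        # gamma(x_obs)
    mu, log_sig = encoder(torch.cat([x_zeroed, m.float()], -1)).chunk(2, -1)
    return mu, log_sig.exp()

def decoder_loglik_obs(x, m, z):
    # Conditional independence of x given z (Gaussian decoder) makes the inner
    # integral of eq. (1) tractable: marginalising x_mis amounts to summing the
    # per-dimension log-densities over the observed entries only.
    mu, log_sig = decoder(z).chunk(2, -1)
    logp = torch.distributions.Normal(mu, log_sig.exp()).log_prob(x.nan_to_num())
    return (logp * m).sum(-1)
```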
Eq. 2 shows that the training objective has the same form for incomplete and
fully-observed data, and it may therefore seem that fitting VAEs from
incomplete data is no more difficult than in the fully-observed case.
However, as we will see next, data missingness can make model estimation much
harder than in the complete-data case.
## 3 Implications of incomplete data for VAE estimation
Figure 1: _Illustration of the posterior complexity due to missing data._ Each
colour represents a different data-point ${\bm{x}}^{i}$. First: the model
posterior ${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$ under complete data
${\bm{x}}$. Second: the model posterior
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ under incomplete data
${\bm{x}}_{\mathrm{obs}}$. Third: variational approximation
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}})$ of the complete-data posterior
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$. Fourth: an imputation-mixture
variational approximation
$\mathbb{E}_{{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}[{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})]$
of the incomplete posterior
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$.
The decomposition of the ELBO in eq. 2 emphasises that accurate estimation of
the VAE model requires us to accurately approximate the model posterior
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ with the variational
distribution ${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$. While it
might appear that the marginalisation of the missing variables in eq. 1 comes
at no cost since the ELBO maintains the same form as in the complete case, we
here illustrate that this is not the case.
In the two left-most columns of fig. 1 we illustrate the model posteriors
${p_{\bm{\theta}}}({\bm{z}}\mid\cdot)$ under fully-observed data ${\bm{x}}$
and partially-observed data ${\bm{x}}_{\mathrm{obs}}$ (for fig. 1 we use a
VAE with Gaussian variational, prior, and decoder distributions fitted on
complete data). We discover that the model posteriors
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$, which exhibited a certain
regularity in the complete-data scenario, have become irregular multimodal
distributions ${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ when
evaluated with incomplete data. (A related phenomenon, called posterior
inconsistency, has recently been reported in concurrent work by Sudak &
Tschiatschek (2023), relating
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ and
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}\setminus u})$, where $u$
is a subset of the observed dimensions; see section 5.) Hence, accurate
estimation of VAEs from incomplete data may require more flexible variational
families than in the fully-observed case: while a Gaussian family may
sufficiently well approximate the model posterior in the fully-observed case
of our example, it is no longer sufficiently flexible in the incomplete data
case. We provide a further explanation of when this situation may occur in
appendix A. As a result of the mismatch between the model posterior
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ and the variational
distribution ${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$, the KL
divergence term in eq. 2 may not be minimised, subsequently introducing a bias
to the fit of the model.
In the two right-most columns of fig. 1 we show the variational distributions
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}})$ under fully-observed data ${\bm{x}}$
and approximations of the incomplete-data posteriors
$\mathbb{E}_{{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}[{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})]$,
which are good approximations of ${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$ and
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$, respectively. The
two plots show that if the variational family used in the fully-observed case
well-approximates the model posterior, i.e.
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}})\approx{p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$,
then the imputation-mixture
$\mathbb{E}_{{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}[{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})]$
will also be a good approximation of the incomplete-data posterior
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$. This observation
suggests that we can work with the same variational family in both the fully-
observed and incomplete data scenarios if we adopt a mixture approach. In the
rest of this paper, we investigate opportunities to improve VAE estimation
from incomplete data by constructing variational mixture approximations of the
incomplete-data posterior.
## 4 Fitting VAEs from incomplete data using mixture variational families
We propose working with mixture variational families in order to mitigate the
increase in posterior complexity due to missing data and improve the
estimation accuracy of VAEs when the training data are incomplete. This allows
us to use families of distributions for the mixture components that are known
to work well when the data is fully-observed, and use the mixtures to handle
the increased posterior complexity due to data missingness.
We propose two approaches for constructing variational mixtures. In section
4.1 we specify ${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ as a
finite-mixture distribution that can be learnt directly using the
reparametrisation trick. In section 4.2 we investigate an imputation-based
variational-mixture where we specify
${q_{{\bm{\phi}},f^{t}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})\approx\mathbb{E}_{{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}[{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})]$.
Detailed evaluation of the proposed methods is provided in section 6.
### 4.1 Using finite mixture variational distributions to fit VAEs from
incomplete data
In section 3 we saw that a good approximation of the incomplete data posterior
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ would be the
imputation-mixture
$\mathbb{E}_{{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}[{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})]$.
However, estimation of
${p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$ is
generally intractable for VAEs (Rezende et al., 2014; Mattei & Frellsen,
2018a; Simkus & Gutmann, 2023). Hence, we here consider a more tractable
approach and specify the variational distribution
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ in terms of a finite-
mixture distribution:
$\displaystyle{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})=\sum_{k=1}^{K}{q_{\bm{\phi}}}(k\mid{\bm{x}}_{\mathrm{obs}}){q_{\bm{\phi}}^{k}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}}),$
(3)
where ${q_{\bm{\phi}}}(k\mid{\bm{x}}_{\mathrm{obs}})$ is a categorical
distribution over the components $k\in\\{1,\ldots,K\\}$ and each component
distribution ${q_{\bm{\phi}}^{k}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$
belongs to any reparametrisable distribution family. Both
${q_{\bm{\phi}}}(k\mid{\bm{x}}_{\mathrm{obs}})$ and
${q_{\bm{\phi}}^{k}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ are amortised using
an encoder network, similar to section 2.
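To make the construction concrete, here is a minimal PyTorch sketch of an amortised finite-mixture encoder for eq. 3 with Gaussian components; the network shape, the sizes, and the name `mix_encoder` are illustrative assumptions rather than the architecture used in the experiments (it reuses the sizes from the section 2 sketch).

```python
import torch
import torch.nn as nn

K = 5  # illustrative number of mixture components

# One network predicts both the mixture weights q(k | x_obs)
# and the parameters of all K Gaussian components q^k(z | x_obs).
mix_encoder = nn.Sequential(nn.Linear(2 * D, H), nn.ReLU(),
                            nn.Linear(H, K + K * 2 * Z))

def variational_mixture(x, m):
    x_zeroed = torch.where(m, x, torch.zeros_like(x))
    out = mix_encoder(torch.cat([x_zeroed, m.float()], -1))
    logits = out[..., :K]                                    # -> q(k | x_obs)
    mu, log_sig = out[..., K:].reshape(-1, K, 2 * Z).chunk(2, -1)
    return logits, mu, log_sig.exp()                         # components q^k(z | x_obs)
```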
The “reparametrisation trick” is typically used in VAEs to efficiently
optimise the parameters ${\bm{\phi}}$ of the variational distribution
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$, which requires that
the random variable ${\bm{z}}$ can be parametrised as a learnable
differentiable transformation
$t(\bm{\epsilon};{\bm{x}}_{\mathrm{obs}},{\bm{\phi}})$ of another random
variable $\bm{\epsilon}$ that follows a distribution with no learnable
parameters. However, reparametrising mixture-families requires extra care:
sampling the mixture ${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ in
eq. 3 is typically done via ancestral sampling by first drawing
$k\sim{q_{\bm{\phi}}}(k\mid{\bm{x}}_{\mathrm{obs}})$ and then
${\bm{z}}\sim{q_{\bm{\phi}}^{k}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$, but
the sampling of the categorical distribution
${q_{\bm{\phi}}}(k\mid{\bm{x}}_{\mathrm{obs}})$ is non-differentiable, making
the direct application of the “reparametrisation trick” generally infeasible.
As a result, we consider two objectives for fitting VAEs with mixture-
variational distributions based on the variational ELBO (Kingma & Welling,
2013; Rezende et al., 2014):
$\displaystyle\mathcal{L}_{\mathrm{ELBO}}({\bm{x}}_{\mathrm{obs}})$
$\displaystyle=\mathbb{E}_{{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})}\left[\log
w({\bm{z}})\right],\quad\text{and}$ (4)
$\displaystyle\mathcal{L}_{\mathrm{SELBO}}({\bm{x}}_{\mathrm{obs}})$
$\displaystyle=\sum_{k=1}^{K}{q_{\bm{\phi}}}(k\mid{\bm{x}}_{\mathrm{obs}})\mathbb{E}_{{q_{\bm{\phi}}^{k}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})}\left[\log
w({\bm{z}})\right],$ (5) $\displaystyle\text{where}\quad w({\bm{z}})$
$\displaystyle=\frac{{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}},{\bm{z}})}{{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})}.$
(6)
The first objective $\mathcal{L}_{\mathrm{ELBO}}$ corresponds to the standard
ELBO, while $\mathcal{L}_{\mathrm{SELBO}}$ is the stratified ELBO (Roeder et
al., 2017, Section 4; Morningstar et al., 2021). When working with
$\mathcal{L}_{\mathrm{ELBO}}$, due to the mixture variational family, we will
need to optimise ${\bm{\phi}}$ with _implicit_ reparametrisation (Figurnov et
al., 2019). Implicit reparametrisation of mixture distributions requires that
it is possible to factorise the component distributions
${q_{\bm{\phi}}^{k}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ using the chain
rule, i.e.
${q_{\bm{\phi}}^{k}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})=\prod_{d}{q_{\bm{\phi}}^{k}}(z_{d}\mid{\bm{z}}_{<d},{\bm{x}}_{\mathrm{obs}})$,
and have access to the CDF (or other standardisation function) of each factor
${q_{\bm{\phi}}^{k}}(z_{d}\mid{\bm{z}}_{<d},{\bm{x}}_{\mathrm{obs}})$.
However, the chain rule requirement can be difficult to satisfy for some
highly flexible variational distribution families, such as normalising flows
(e.g. Papamakarios et al., 2021), and finding the (conditional) CDF of the
factors can also be hard if not already known in closed form. Consequently,
$\mathcal{L}_{\mathrm{ELBO}}$ with implicit reparametrisation may not be
usable with all families of variational distributions as components of the
mixture.
The second objective $\mathcal{L}_{\mathrm{SELBO}}$, on the other hand,
samples the mixture distribution with stratified sampling, which avoids the
non-differentiability of sampling
${q_{\bm{\phi}}}(k\mid{\bm{x}}_{\mathrm{obs}})$ and as a result allows us to
use any family of reparametrisable distributions as the mixture components.
(Stratified sampling of mixture distributions typically draws an equal number
of samples from each component and weighs the samples by the component
probabilities ${q_{\bm{\phi}}}(k\mid{\bm{x}}_{\mathrm{obs}})$ when estimating
expectations; it is commonly used to reduce Monte Carlo variance (Robert &
Casella, 2004).)
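A minimal sketch of the stratified estimator of eq. 5, reusing the illustrative `variational_mixture` and `decoder_loglik_obs` pieces from the earlier sketches: each component contributes one reparametrised draw, weighted by ${q_{\bm{\phi}}}(k\mid{\bm{x}}_{\mathrm{obs}})$, so no sample is ever drawn from the non-differentiable categorical.

```python
import torch

def stratified_elbo(x, m):
    logits, mu, sig = variational_mixture(x, m)               # (B,K), (B,K,Z), (B,K,Z)
    log_pi = torch.log_softmax(logits, -1)
    z = mu + sig * torch.randn_like(mu)                       # one reparametrised draw per component

    # log q(z_k | x_obs): evaluate the full mixture density at every draw.
    normal = torch.distributions.Normal(mu.unsqueeze(1), sig.unsqueeze(1))
    log_comp = normal.log_prob(z.unsqueeze(2)).sum(-1)        # (B,K,K): draw k under component j
    log_q = torch.logsumexp(log_pi.unsqueeze(1) + log_comp, -1)

    prior = torch.distributions.Normal(torch.zeros(Z), torch.ones(Z))
    log_w = (decoder_loglik_obs(x.unsqueeze(1), m.unsqueeze(1), z)   # eq. (6)
             + prior.log_prob(z).sum(-1) - log_q)
    return (log_pi.exp() * log_w).sum(-1)                     # eq. (5): weighted by q(k | x_obs)
```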
The importance-weighted ELBO (IWELBO, Burda et al., 2015) is often used as an
alternative to the standard ELBO as it can be made tighter. We here also
consider an ordinary version, $\mathcal{L}_{\mathrm{IWELBO}}$, and a
stratified version, $\mathcal{L}_{\mathrm{SIWELBO}}$ (Shi et al., 2019,
Appendix A; Morningstar et al., 2021):
$\displaystyle\mathcal{L}_{\mathrm{IWELBO}}^{I}({\bm{x}}_{\mathrm{obs}})$
$\displaystyle=\mathbb{E}_{\\{{\bm{z}}_{j}\\}_{j=1}^{I}\sim{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})}\left[\log\frac{1}{I}\sum_{j=1}^{I}w({\bm{z}}_{j})\right],\quad\text{and}$
(7) $\displaystyle\mathcal{L}_{\mathrm{SIWELBO}}^{I}({\bm{x}}_{\mathrm{obs}})$
$\displaystyle=\mathbb{E}_{\\{\\{{\bm{z}}_{j}^{k}\\}_{j=1}^{I}\sim{q_{\bm{\phi}}^{k}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})\\}_{k=1}^{K}}\left[\log\sum_{k=1}^{K}{q_{\bm{\phi}}}(k\mid{\bm{x}}_{\mathrm{obs}})\frac{1}{I}\sum_{j=1}^{I}w({\bm{z}}_{j}^{k})\right],$
(8)
where $I$ is the number of importance samples in
$\mathcal{L}_{\mathrm{IWELBO}}$ and the number of samples per-mixture-
component in $\mathcal{L}_{\mathrm{SIWELBO}}$.
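A corresponding sketch of the stratified IWELBO estimator of eq. 8, again assuming the illustrative pieces above; the only change is that the $I$ importance weights per component are averaged inside the logarithm.

```python
import math
import torch

def stratified_iwelbo(x, m, I=5):
    logits, mu, sig = variational_mixture(x, m)               # (B,K), (B,K,Z), (B,K,Z)
    log_pi = torch.log_softmax(logits, -1)
    z = mu + sig * torch.randn(I, *mu.shape)                  # I draws per component: (I,B,K,Z)

    normal = torch.distributions.Normal(mu.unsqueeze(1), sig.unsqueeze(1))
    log_comp = normal.log_prob(z.unsqueeze(3)).sum(-1)        # (I,B,K,K)
    log_q = torch.logsumexp(log_pi.unsqueeze(1) + log_comp, -1)   # mixture density at each draw

    prior = torch.distributions.Normal(torch.zeros(Z), torch.ones(Z))
    log_w = (decoder_loglik_obs(x.unsqueeze(1), m.unsqueeze(1), z)
             + prior.log_prob(z).sum(-1) - log_q)             # (I,B,K)
    # eq. (8): average the I weights per component, then combine the
    # components inside the log, weighted by q(k | x_obs).
    log_mean_w = torch.logsumexp(log_w, 0) - math.log(I)      # (B,K)
    return torch.logsumexp(log_pi + log_mean_w, -1)           # (B,)
```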
When the number of mixture components is $K=1$ the lower-bounds above
correspond to the MVAE and MIWAE bounds in Mattei & Frellsen (2019), which are
among the most popular bounds for fitting VAEs from incomplete data. However,
for $K>1$ the proposed bounds can be tighter due to the increased flexibility
of the variational distribution
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ (Morningstar et al.,
2021, Appendix A), which potentially mitigates the problems caused by the
missing data (see section 3). Finally, the importance-weighted bounds in eqs.
7 and 8 maintain the asymptotic consistency guarantees of Burda et al. (2015)
and approach the true marginal log-likelihood
$\log{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}})$ as $K\cdot
I\rightarrow\infty$, allowing for more accurate estimation of the model with
increasing computational budget.
We denote the four methods based on eqs. 4, 5, 7 and 8 by MissVAE, MissSVAE,
MissIWAE, and MissSIWAE respectively.
### 4.2 Using imputation-mixture distributions to fit VAEs from incomplete
data
In section 4.1, we dealt with the inference of the latents ${\bm{z}}$ (section
2) and the pitfalls of missing data (section 3) jointly by learning a finite-
mixture variational distribution. Here, we propose a second “decomposed”
approach to deal with the pitfalls of missing data.
Intuitively, if we had an oracle that was able to generate imputations of the
missing data based on the ground truth conditional distribution
${p^{*}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$, then the VAE
estimation task would reduce to the case of complete-data, that is, the
challenges affecting the estimation of the variational distribution
${q_{\bm{\phi}}}$ from section 3 would be mitigated. This suggests that an
_effective strategy would be to decompose the task of model estimation from
incomplete data into two (iterative) tasks: data imputation and model
estimation_ , akin to the Monte Carlo EM algorithm (Wei & Tanner, 1990;
Dempster et al., 1977). However, access to the oracle
${p^{*}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$ is unrealistic
and the exact sampling of
${p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$, as
required in EM, is generally intractable. To address this, we resort to (i)
approximate but computationally cheap conditional sampling methods for VAEs to
generate imputations (Rezende et al., 2014; Mattei & Frellsen, 2018a; Simkus &
Gutmann, 2023) and (ii) learning objectives for the model ${p_{\bm{\theta}}}$
and the variational distribution ${q_{\bm{\phi}}}$ that mitigate the pitfalls
caused by the missing data. We call the proposed approach DeMissVAE
(decomposed approach for handling missing data in VAEs).
We construct the variational distribution
${q_{{\bm{\phi}},f^{t}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ for an
incomplete data-point ${\bm{x}}_{\mathrm{obs}}$ using a completed-data
variational distribution
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})$
and an (approximate) imputation distribution
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}\approx{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$:
$\displaystyle{q_{{\bm{\phi}},f^{t}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})=\mathbb{E}_{{f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}}\left[{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})\right].$
(9)
Assuming that the completed-data variational distribution
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})$
well-represents the model posterior
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})$,
and that the imputation distribution
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$ draws plausible
imputations of the missing variables, then
${q_{{\bm{\phi}},f^{t}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ will reasonably
represent ${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ (see the
two right-most columns of fig. 1). In contrast to section 4.1 we here use a
continuous-mixture variational distribution, which is more flexible than a
finite-mixture distribution, albeit at an extra computational cost due to
sampling the (approximate) imputations (see appendix D).
We now derive the DeMissVAE objectives for fitting the generative model
${p_{\bm{\theta}}}({\bm{x}})$ and the completed-data variational distribution
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})$,
see appendix C for a more in-depth treatment.
##### Objective for ${p_{\bm{\theta}}}({\bm{x}},{\bm{z}})$.
With the variational distribution in eq. 9, we derive an ELBO on the marginal
log-likelihood, similar to eq. 2, to learn the parameters ${\bm{\theta}}$ of
the generative model:
$\displaystyle\log{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}})$
$\displaystyle\geq\mathbb{E}_{{f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})}\left[\log\frac{{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}},{\bm{z}})}{\mathbb{E}_{{f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}}\left[{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})\right]}\right]$
$\displaystyle=\underbrace{\mathbb{E}_{{f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})}\left[\log{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}},{\bm{z}})\right]}_{\overset{!}{=}\mathcal{L}_{\mathrm{CVI}}^{\bm{\theta}}({\bm{x}}_{\mathrm{obs}};{\bm{\phi}},{\bm{\theta}},f^{t})}+\underbrace{\mathcal{H}\left[{q_{{\bm{\phi}},f^{t}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})\right]}_{\text{Const.
w.r.t.\ ${\bm{\theta}}$}}.$ (10)
This lower-bound can be further decomposed into log-likelihood and KL
divergence terms
$\displaystyle\mathcal{L}_{\mathrm{CVI}}^{\bm{\theta}}({\bm{x}}_{\mathrm{obs}};{\bm{\phi}},{\bm{\theta}},f^{t})+\mathcal{H}\left[{q_{{\bm{\phi}},f^{t}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})\right]$
$\displaystyle=\log{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}})-D_{\text{KL}}({q_{{\bm{\phi}},f^{t}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})\;||\;{p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})),$
(11)
which means that if
${q_{{\bm{\phi}},f^{t}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})\approx{p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$
then maximising eq. 10 w.r.t. ${\bm{\theta}}$ performs approximate maximum-
likelihood estimation. Importantly, the missing variables
${\bm{x}}_{\mathrm{mis}}$ are marginalised-out, which adds robustness to the
potential sampling errors in
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$.
##### Objective for ${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}})$.
We obtain the objective for learning the variational distribution
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}})$ by marginalising the missing variables
${\bm{x}}_{\mathrm{mis}}$ from the complete-data ELBO in eq. 2 and then lower-
bounding the integral using
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$ (see appendix
B):
$\displaystyle\log{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}})$
$\displaystyle\geq\underbrace{\mathbb{E}_{{f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})}\left[\log\frac{{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}},{\bm{z}})}{{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})}\right]}_{\overset{!}{=}\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}({\bm{x}}_{\mathrm{obs}};{\bm{\phi}},{\bm{\theta}},f^{t})}+\underbrace{\mathcal{H}\left[{f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}\right]}_{\text{Const.
w.r.t.\ ${\bm{\phi}}$}}.$ (12)
This lower-bound can also be decomposed into the log-likelihood term and two
KL divergence terms
$\displaystyle\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}({\bm{x}}_{\mathrm{obs}};{\bm{\phi}},{\bm{\theta}},f^{t})+\mathcal{H}\left[{f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}\right]$
$\displaystyle=\begin{aligned}
&\log{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}})-D_{\text{KL}}({f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}\;||\;{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}}))\\\
&-\mathbb{E}_{{f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}}\left[D_{\text{KL}}({q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})\;||\;{p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}}))\right],\end{aligned}$
(13)
which means that the bound is maximised w.r.t. ${\bm{\phi}}$ iff
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})={p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})$
for all ${\bm{x}}_{\mathrm{mis}}$. Therefore, using the above objective to fit
${q_{\bm{\phi}}}$ corresponds directly to the complete-data case, and hence
the issues due to missingness identified in section 3 are mitigated.
If
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})={p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})$
for all ${\bm{x}}_{\mathrm{mis}}$, then maximising either of the bounds in
eqs. 12 and 10 w.r.t. the imputation distribution
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$ would correspond
to setting
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}={p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$.
However, directly learning an imputation distribution
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}\approx{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$
is challenging (Simkus et al., 2023, Section 2.2). This motivates
representing the optimal imputation distribution by samples instead: we draw
from ${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$ using
(cheap) approximate conditional sampling methods for VAEs to obtain $K$
imputations $\\{{\bm{x}}_{\mathrm{mis}}^{k}\\}_{k=1}^{K}$, and then use them
to approximate the expectations w.r.t.
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$ in the above
objectives. We discuss the implementation of the algorithm in detail in
appendix D.
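To make the procedure concrete, here is a schematic PyTorch sketch of one DeMissVAE-style update under strong simplifying assumptions: it reuses the illustrative `encode`, `decoder`, and `decoder_loglik_obs` pieces from the section 2 sketch, and a naive pseudo-Gibbs chain stands in for the conditional samplers actually used (appendix D); all names and step counts are illustrative, not the paper's implementation.

```python
import torch

def pseudo_gibbs_impute(x, m, steps=5):
    # A cheap approximate conditional sampler f^t(x_mis | x_obs) in the spirit
    # of the pseudo-Gibbs chain of Rezende et al. (2014): alternate encoding
    # and decoding, refreshing only the missing entries at every step.
    x_imp = torch.where(m, x, torch.zeros_like(x))
    for _ in range(steps):
        mu_z, sig_z = encode(x_imp, torch.ones_like(m))       # completed-data encoder
        z = mu_z + sig_z * torch.randn_like(mu_z)
        mu_x, log_sig_x = decoder(z).chunk(2, -1)
        x_draw = mu_x + log_sig_x.exp() * torch.randn_like(mu_x)
        x_imp = torch.where(m, x, x_draw)                     # observed entries stay fixed
    return x_imp.detach()                                     # no gradients through the sampler

def demissvae_losses(x, m, K=5):
    x_imps = torch.stack([pseudo_gibbs_impute(x, m) for _ in range(K)])   # (K, B, D)
    mu_z, sig_z = encode(x_imps, torch.ones_like(m).expand_as(x_imps))
    z = mu_z + sig_z * torch.randn_like(mu_z)
    prior = torch.distributions.Normal(torch.zeros(Z), torch.ones(Z))

    # eq. (10): the theta-objective marginalises x_mis, i.e. only log p(x_obs, z)
    # appears; the mixture entropy term is constant w.r.t. theta.
    loss_theta = -(decoder_loglik_obs(x, m, z) + prior.log_prob(z).sum(-1)).mean()

    # eq. (12): the phi-objective is the completed-data ELBO, averaged over the
    # K imputations, so fitting q corresponds to the complete-data case.
    mu_x, log_sig_x = decoder(z).chunk(2, -1)
    log_px = torch.distributions.Normal(mu_x, log_sig_x.exp()).log_prob(x_imps).sum(-1)
    log_qz = torch.distributions.Normal(mu_z, sig_z).log_prob(z).sum(-1)
    loss_phi = -(log_px + prior.log_prob(z).sum(-1) - log_qz).mean()
    return loss_theta, loss_phi   # step theta on the first, phi on the second
```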
Finally, it is worth noting that the
$\mathcal{L}_{\mathrm{CVI}}^{{\bm{\theta}}}$ and
$\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}$ objectives in eqs. 10 and 12 are
based on the standard ELBO. Extensions to the importance-weighted ELBO might
improve the method further by increasing the flexibility of the variational
posterior. However, unlike the standard ELBO used in eq. 10 where the density
of the imputation-based variational-mixture
${q_{{\bm{\phi}},f^{t}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ can be dropped,
IWELBO requires computing the density of the proposal distribution
${q_{{\bm{\phi}},f^{t}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$, which is
generally intractable. We hence leave this direction for future work.
## 5 Related work
##### Fitting VAEs from incomplete data.
Since the seminal works of Kingma & Welling (2013) and Rezende et al. (2014),
VAEs have been widely used for density estimation from incomplete data and
various downstream tasks, primarily due to the computationally efficient
marginalisation of the model in eq. 1. Vedantam et al. (2017) and Wu & Goodman
(2018) explored the use of product-of-experts variational distributions,
drawing inspiration from findings in the factor analysis case with incomplete
data (Williams et al., 2018). Mattei & Frellsen (2019) used the importance-
weighted ELBO (Burda et al., 2015) for training VAEs on incomplete training
data sets. Ma et al. (2019) proposed the use of permutation invariant neural
networks to parametrise the encoder network instead of relying on zero-
masking. Nazábal et al. (2020) introduced hierarchical priors to handle
incomplete heterogeneous training data. Simkus et al. (2023) proposed a
general-purpose approach that is applicable to VAEs, not requiring the decoder
distribution to be easily marginalisable. Here, we further develop the
understanding of VAEs in the presence of missing values in the training data set,
and propose variational-mixtures as a natural approach to improve VAE
estimation from incomplete data, building upon the motivation from imputation-
mixtures discussed in section 3.
##### Variational mixture distributions.
Mixture distributions have found widespread application in variational
inference and VAE literature. Roeder et al. (2017) introduced the stratified
ELBO corresponding to eq. 5. In the context of VAEs in multimodal domains, Shi
et al. (2019, Appendix A) introduced the stratified IWELBO corresponding to
eq. 8, but opted to use a looser bound instead, as detailed in footnote 6. These
bounds were subsequently rediscovered by Morningstar et al. (2021) and Kviman
et al. (2023), who investigated their use for VAE estimation in fully-observed
data scenarios. Furthermore, Figurnov et al. (2019) introduced implicit
reparametrisation, enabling gradient estimation for ancestrally-sampled
mixtures, allowing the estimation of variational mixtures using eqs. 4 and 7.
Here, we build on the prior work, asserting that variational-mixtures are
well-suited for handling the posterior complexity increase due to missing data
(see section 3). Moreover, the imputation-mixture distribution used in
DeMissVAE is a novel type of variational mixtures specifically designed for
incomplete data scenarios.
##### Posterior complexity increase due to missing data.
Concurrent to this study, Sudak & Tschiatschek (2023) have recently brought
attention to a phenomenon related to the increase in posterior complexity due
to incomplete data, as discussed in section 3. They noted that, for any
${\bm{x}}_{\mathrm{obs}}$ and ${\bm{x}}_{\mathrm{obs}\setminus u}$, where $u$
is a subset of the observed dimensions, the model posteriors
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ and
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}\setminus u})$ should
exhibit a strong dependency. However, because of the approximations in the
variational posterior (see e.g. Cremer et al., 2018; Zhang et al., 2021), the
variational approximations
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ and
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}\setminus u})$ may not
consistently capture this dependency. They refer to the lack of dependency
between ${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ and
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}\setminus u})$, compared to
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ and
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}\setminus u})$, as
posterior inconsistency. Focused on improving downstream task performance,
they introduce regularisation into the VAE training objective to address
posterior inconsistency. In contrast to their work, we compare the
fully-observed and incomplete-data posteriors,
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$ and
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ respectively, and,
with the goal of improving model estimation performance, we propose the use of
variational-mixtures to mitigate the posterior complexity gap between the
fully-observed and incomplete-data posteriors.
##### Marginalised variational bound.
In the standard ELBO derivation for incomplete data in eq. 2 the missing
variables are first marginalised (collapsed) from the likelihood, and then a
variational ELBO is established. This approach is sometimes referred to as
collapsed variational inference (CVI). In contrast, in the derivation of the
DeMissVAE encoder objective in eq. 12 we swap the order of marginalisation and
variational inference. Specifically, we start with the variational ELBO on
completed-data, and then marginalise the missing variables (see appendix B).
This approach bears similarity to the marginalised variational bound (MVB, or
KL-corrected bound) in exponential-conjugate variational inference literature
(King & Lawrence, 2006; Lázaro-Gredilla & Titsias, 2011; Hensman et al.,
2012). In these works, MVB has been preferred over CVI due to improved
convergence and guarantees that for appropriately formulated conjugate models
MVB is analytically tractable in cases where CVI is not (Hensman et al., 2012,
Section 3.3). While MVB remains intractable in the VAE setting with incomplete
data, similar to how the standard ELBO is intractable in fully-observed case,
we find the motivation behind MVB and DeMissVAE to be similar.
## 6 Evaluation
We here evaluate the proposed methods, MissVAE, MissSVAE, MissIWAE, MissSIWAE
(section 4.1), and DeMissVAE (section 4.2), on synthetic and real-world data,
and compare them to the popular methods MVAE and MIWAE that do not use mixture
variational distributions (Mattei & Frellsen, 2019). The methods are
summarised in table 1.
Method | ${p_{\bm{\theta}}}$ objective | ${q_{\bm{\phi}}}$ objective | # of components | Mixture sampling
---|---|---|---|---
MVAE$\dagger$ | eq. 4 | eq. 4 | $K=1$ | —
MissVAE | eq. 4 | eq. 4 | $K>1$ | Ancestral
MissSVAE | eq. 5 | eq. 5 | $K>1$ | Stratified
MIWAE$\dagger$ | eq. 7 | eq. 7 | $K=1$ | —
MissIWAE | eq. 7 | eq. 7 | $K>1$ | Ancestral
MissSIWAE | eq. 8 | eq. 8 | $K>1$ | Stratified
DeMissVAE | eq. 10 | eq. 12 | $K>1$ | Conditional VAE
Table 1: _Summary of the proposed and baseline methods._ The non-mixture
baselines ($\dagger$) are based on Mattei & Frellsen (2019) and the other
methods are proposed in this paper. Moreover, the methods using ancestral
sampling require implicit reparametrisation (Figurnov et al., 2019), whereas
the other methods work with the standard reparametrisation trick.
### 6.1 Mixture-of-Gaussians data with a 2D latent VAE
Evaluating log-likelihood on held-out data is generally intractable for VAEs
due to the intractable integral in eq. 1. We hence here choose a VAE with 2D
latent space, where numerical integration can be used to estimate the log-
likelihood of the model accurately (see section E.1 for more details). We fit
the model on incomplete data drawn from a mixture-of-Gaussians distribution.
By introducing missingness in the mixture-of-Gaussians data we introduce
multi-modality in the latent space (see fig. 1), which allows us to verify the
efficacy of mixture-variational distributions when the posteriors are multi-
modal due to missing data.
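For intuition, a sketch of such a numerical-integration estimate under our running assumptions (the illustrative 2D-latent `decoder` from the earlier sketch and a standard Gaussian prior); the grid size and integration limits are arbitrary choices, not the paper's settings.

```python
import math
import torch

@torch.no_grad()
def loglik_2d_numint(x, grid=200, lim=5.0):
    # log p(x) = log \int p(x | z) p(z) dz, evaluated on a 2D grid; this is
    # feasible only because the latent space is 2-dimensional (Z = 2).
    g = torch.linspace(-lim, lim, grid)
    z = torch.stack(torch.meshgrid(g, g, indexing="ij"), -1).reshape(-1, 2)  # (G^2, 2)
    mu_x, log_sig_x = decoder(z).chunk(2, -1)                                # (G^2, D)
    log_px_z = torch.distributions.Normal(mu_x, log_sig_x.exp()) \
        .log_prob(x.unsqueeze(1)).sum(-1)                                    # (B, G^2)
    log_pz = torch.distributions.Normal(0., 1.).log_prob(z).sum(-1)          # (G^2,)
    dz = (2 * lim / (grid - 1)) ** 2                                         # grid cell area
    return torch.logsumexp(log_px_z + log_pz, -1) + math.log(dz)             # (B,)
```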
Figure 2: _Log-likelihood on held out data evaluated by numerically
integrating the 2D latent variables._ VAEs were fitted on mixture-of-Gaussians
data with 50% missingness. Each model is fitted with a computational budget of
5/15/25 samples from the variational distribution. The box plots show 1st and
3rd quartiles, the black lines are the medians, the dashed lines are the
means, and the whiskers show the data range over 5 independent runs. MVAE and
MIWAE ($\dagger$) are baseline methods by Mattei & Frellsen (2019). The other
five methods are proposed in this paper.
Results are shown in fig. 2. We first note that the stratified MissSVAE
approach performed better than MissVAE that uses ancestral sampling. The
reason for this is likely that stratified sampling reduces Monte Carlo
variance of the gradients w.r.t. ${\bm{\phi}}$ and hence enables a better fit
of the variational distribution
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ (see a further
investigation in section F.1.1). In line with this intuition, the MissVAE
results exhibit significantly larger variance than MissSVAE. Similarly, we
observe that the stratified MissSIWAE approach performed better than MissIWAE.
Importantly, we see that the use of mixture variational distributions in
MissSVAE and MissSIWAE improve the model fit over the MVAE and MIWAE baselines
that do not use mixtures to deal with the increased posterior complexity due
to missingness. Finally, we observe that DeMissVAE is capable of achieving
comparable performance to MIWAE and MissSIWAE, despite using a looser ELBO
bound, which shows that the decomposed approach to handling data missingness
can be used to achieve an improved fit of the model.
In section F.1.2, we analyse the model and variational posteriors of the
learnt models. We observe that the mixture approaches better-approximate the
incomplete-data posteriors, compared to the approaches that do not use
variational-mixtures. Moreover, we also observe that the structure of the
latent space is better-behaved when fitted using the decomposed approach in
DeMissVAE.
### 6.2 Real-world UCI data sets
Figure 3: _Estimate of the test log-likelihood using the IWELBO with
$I=50000$, on four UCI data sets._ Each data set was rendered incomplete by
applying uniform missingness of 20/50/80%. The curves show average performance
over 5 independent runs of the algorithms, and the shaded intervals show 90%
confidence intervals.
We here evaluate the proposed methods on real-world data sets from the UCI
repository (Dua & Graff, 2017; Papamakarios et al., 2017). We train a VAE
model with ResNet architecture on incomplete data sets with 20/50/80% uniform
missingness (see section E.2 for more details). We then estimate the
log-likelihood on the complete test data set using the IWELBO bound with
$I=50\text{K}$ importance samples (as $I\rightarrow\infty$, the IWELBO
approaches $\log{p_{\bm{\theta}}}({\bm{x}})$). Moreover, as suggested by
Mattei & Frellsen (2018b), to improve the estimate on held-out data we
fine-tune the encoder on complete test data before estimating the
log-likelihood. For additional metrics see section F.2.
The results are shown in fig. 3. We first note that, similar to before, the
stratified MissSVAE approach performed better than MissVAE, which uses
ancestral sampling. Importantly, we observe that using mixture variational
distributions in MissSVAE improves the fit of the model over MVAE, which uses
a non-mixture variational distribution (with the exception of the Miniboone
data set). Furthermore, the gains in model accuracy typically increase with
data missingness, which supports the claim that MissSVAE performs better
because it better handles the increased posterior complexity due to missing
data (see fig. 1). Next, we observe that the performance of MIWAE, MissIWAE, and
MissSIWAE is similar, although we can note a small improvement by using
MissIWAE and MissSIWAE in large missingness settings. We observe only a
relatively small difference between the IWAE methods because the use of the
importance-weighted bound already corresponds to using a more flexible,
semi-implicitly defined variational distribution (Cremer et al., 2017), which here
seems to be sufficient to deal with the complexities arising due to
missingness. Finally, we note that DeMissVAE results are in-between MissSVAE
and MIWAE. This verifies that the decomposed approach can be used to deal with
data missingness and, as a result, can improve the fit of the model.
Nonetheless, DeMissVAE is surpassed by the IWAE methods, which is likely due
to using the ELBO in DeMissVAE versus IWELBO in IWAE methods that can tighten
the bound more effectively.
### 6.3 MNIST and Omniglot data sets
Figure 4: _Estimate of the test log-likelihood using the IWELBO with $I=1000$,
on the MNIST and Omniglot data sets._ Each image in the data set was missing 2 out of
4 random quadrants. The box plots show 1st and 3rd quartiles, the black lines
are the medians, the dashed lines are the means, and the whiskers show the
data range over 5 independent runs.
In this section we evaluate the proposed methods on binarised MNIST (Garris et
al., 1991) and Omniglot (Lake et al., 2015) data sets of handwritten
characters. We fit a VAE model with a convolutional ResNet encoder and decoder
networks (see section E.3 for more details). The data is made incomplete by
masking 2 out of 4 quadrants of an image at random. Similar to the previous
section, we estimate the log-likelihood on a complete test data set using the
IWELBO bound with $I=1000$ importance samples.
On the MNIST data set we see that MVAE $\leq$ MissVAE $<$ MissSVAE, similar to
the previous results, but MIWAE $<$ MissSIWAE $<$ MissIWAE. This suggests that
MissIWAE, which uses ancestral sampling, was able to tighten the bound more
effectively compared to stratified MissSIWAE, and was able to fit the
variational distribution
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ well despite the
potentially larger variance w.r.t. ${\bm{\phi}}$. Moreover, we also see that
MVAE $<$ MIWAE $<$ DeMissVAE, which further verifies that the decomposed
approach is able to handle the data missingness well.
On the Omniglot data we observe that the mixture approaches perform similarly
to MVAE and MIWAE, which do not use mixture variational distributions. This
suggests that either the posterior multi-modality is less prominent in the
Omniglot data set or that due to the reverse KL optimisation of the
variational distribution all mixture components have degenerated to a single
mode. Finally, DeMissVAE slightly outperforms MVAE, MissVAE, and MissSVAE, but
is surpassed by the importance-weighted approaches.
Interestingly, in this evaluation the stratified approaches (MissSVAE and
MissSIWAE) were outperformed by the approaches using standard ELBO and
implicit reparametrisation (MissVAE and MissIWAE). This suggests that the
performance of each approach can be data- and model-dependent and hence both
should be evaluated when possible.
## 7 Discussion
Handling missing data is a crucial task in machine learning, enabling the
application of modern statistical methods in the many practical domains
characterised by incomplete data. In the context of variational autoencoders
we have shown that incomplete data introduces posterior complexity over the
latent variables, compared to the fully-observed data case. Consequently,
accurate model fitting from incomplete data requires the use of more flexible
variational families compared to the complete case. We have then stipulated
that variational-mixtures are a natural approach for handling data
missingness. This allows us to work with the same variational families that
are known to work well when the data is fully-observed, enabling the transfer
of useful known inductive biases (Miao et al., 2022) from the fully-observed
to the incomplete data scenario.
Subsequently, we have introduced two methodologies grounded in variational
mixtures. First, we proposed using finite variational mixtures with the
standard and importance-weighted ELBOs using ancestral and stratified sampling
of the mixtures. Additionally, we have proposed a novel “decomposed”
variational-mixture approach that uses cost-effective yet often coarse
conditional sampling methods for VAEs to generate imputations and ELBO-based
objectives that are robust to the sampling errors.
Our evaluation shows that using variational mixtures can improve the fit of
VAEs when dealing with incomplete data, surpassing the performance of models
without variational mixtures. Moreover, our observations indicate that,
although stratified sampling of the finite mixtures often yields better
results compared to ancestral sampling, the effectiveness of these methods can
be data- and model-dependent and hence both approaches should be evaluated
when possible. In our findings, we further note that the decomposed approach
in DeMissVAE outperforms all ELBO-based methods but falls short of surpassing
IWELBO-based methods. These results point towards promising research avenues,
suggesting potential improvements in VAE model estimation from incomplete
data. Future directions include extending the DeMissVAE approach to
incorporate IWELBO-based objectives and developing improved cost-effective
conditional sampling methods for VAEs.
Method | Budget | Variational families | Latent structure* | Eval. rank (_MoG_) | Eval. rank (_UCI_) | Eval. rank (_MNIST_ + _Omniglot_)
---|---|---|---|---|---|---
MissVAE | _small_ | _limited_ | _well-behaved_ | 5 | 5 | 5
MissSVAE | _medium_ | _any_ | _well-behaved_ | 4 | 4 | 4
MissIWAE | _medium_ | _limited_ | _potentially irregular_ | 3 | 2 | 1
MissSIWAE | _medium_ | _any_ | _potentially irregular_ | 1 | 1 | 2
DeMissVAE | _medium/high_ | _any_ | _well-behaved_ $+$ | 2 | 3 | 3
Table 2: _A coarse summary of advantages and disadvantages of the proposed
methods._ _Budget_ : small/medium/high depending on the number of latent
samples required or whether conditional sampling of VAEs is needed.
_Variational families_ : which families of distributions can be used as
mixture components—any reparametrisable families, or limited families, as
discussed in section 4.1. _Latent structure_ : methods with potential to learn
irregular latent spaces may have decreased downstream performance in certain
tasks. We have found ($+$) that DeMissVAE is able to achieve the most well-
behaved latent structures on the MoG data in section F.1.2. Please note (*)
that the learnt latent structure will depend on the chosen model architecture.
_Evaluation rank_ : the rank of the proposed methods in the evaluations in
sections 6.1, 6.2 and 6.3.
The choice between the proposed methods for fitting VAEs from incomplete data
depends on various factors such as computational budget, variational families,
model accuracy goals, and the specific requirements of downstream tasks,
discussed as follows and summarised in table 2.
##### Computational and memory budget.
The standard ELBO with ancestral sampling is the most suitable method for
small computational and memory budgets, since the objective can be estimated
using a single latent sample for each data point. On the other hand, methods
using stratified sampling or the importance-weighted ELBO require the use of
multiple latent samples for each data-point and hence may only be used if the
memory and compute budget allows. Furthermore, akin to the standard ELBO, the
DeMissVAE objectives can be estimated using a single latent sample, but the
approach incurs extra cost in sampling the imputations.
##### Variational families.
While the stratified and DeMissVAE approaches can use any reparametrisable
distribution family for the mixture components, the ancestral sampling methods
require the use of _implicit_ reparametrisation (Figurnov et al., 2019) and as
a result may not work with all distribution families (see discussion in
section 4.1).
##### Model accuracy.
Stratified sampling of mixtures can improve the model accuracy, compared to
ancestral sampling, by reducing Monte Carlo gradient variance. Additionally,
methods using the importance-weighted ELBO, compared to the standard ELBO, are
often able to tighten the bound more effectively by using multiple importance
samples, leading to improved model accuracy. DeMissVAE performance lies in
between the standard ELBO and importance-weighted ELBO approaches. Although
the introduced DeMissVAE objectives exhibit robustness to some imputation
distribution error, improved model accuracy can often be achieved by improving
the accuracy of imputations by using a larger budget for the imputation step.
##### Latent structure.
Different downstream tasks may prefer distinct latent structures, for example,
conditional generation from unconditional VAEs is often easier if the latent
space is well-structured (Engel et al., 2017; Gómez-Bombarelli et al., 2018).
To this end, observations in section F.1.2 show that the latent space of
DeMissVAE behaves well, and is comparable to a model fitted with complete
data. This characteristic makes it preferable for downstream tasks requiring
well-structured latent spaces. On the other hand, as noted by Burda et al.
(2015, Appendix C) and Cremer et al. (2018, Section 5.4), the use of
importance-weighted ELBO to mitigate the increased posterior complexity due to
missing data, may make the latent space less regular, compared to a model
trained on fully-observed data set, which potentially decreases the model’s
performance on downstream tasks.
Finally, we step back to note that this paper is focused on the class of
variational autoencoder models, a subset of the broader family of deep latent
variable models (DLVMs). Much like VAEs, DLVMs usually aim to efficiently
represent the intricate nature of data through a well-structured latent space,
implicitly defined by a learnable generative process. Building on our findings
in VAEs, where incomplete data led to an increased complexity in the posterior
distribution compared to the fully-observed case, we conjecture that a similar
effect may occur within the wider family of DLVMs, affecting the fit of the
model. We therefore believe that there is substantial scope to explore the
implications of incomplete data in other DLVM classes, particularly focusing
on the effects of marginalisation on latent space representations and the
associated generative processes. Investigating decomposed approaches, similar
to DeMissVAE or Monte Carlo EM (Wei & Tanner, 1990), presents promising
avenues for further research in this direction.
## References
* Burda et al. (2015) Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance Weighted Autoencoders. In _International Conference on Learning Representations (ICLR)_ , San Juan, Puerto Rico, September 2015.
* Cremer et al. (2017) Chris Cremer, Quaid Morris, and David Duvenaud. Reinterpreting Importance-Weighted Autoencoders. In _ICLR Workshop_ , February 2017.
* Cremer et al. (2018) Chris Cremer, Xuechen Li, and David Duvenaud. Inference Suboptimality in Variational Autoencoders. In _International Conference on Machine Learning (ICML)_ , May 2018.
* Dempster et al. (1977) Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. Maximum Likelihood from Incomplete Data Via the EM Algorithm. _Journal of the Royal Statistical Society: Series B (Methodological)_ , 39(1):1–22, 1977. doi: 10.1111/j.2517-6161.1977.tb01600.x.
* Dinh et al. (2017) Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. In _International Conference on Learning Representations (ICLR)_ , February 2017.
* Dua & Graff (2017) Dheeru Dua and Casey Graff. UCI Machine Learning Repository, 2017.
* Engel et al. (2017) Jesse Engel, Matthew Hoffman, and Adam Roberts. Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models. In _International Conference on Learning Representations (ICLR)_ , December 2017.
* Figurnov et al. (2019) Michael Figurnov, Shakir Mohamed, and Andriy Mnih. Implicit Reparameterization Gradients. In _Advances in Neural Information Processing Systems (NeurIPS)_ , January 2019.
* Garris et al. (1991) Michael D. Garris, R. A. Wilkinson, and Charles L. Wilson. Methods for enhancing neural network handwritten character recognition. In _International Joint Conference on Neural Networks (IJCNN)_ , volume 1, pp. 695–700, Seattle, WA, USA, 1991. IEEE. ISBN 978-0-7803-0164-1. doi: 10.1109/IJCNN.1991.155265.
* Gershman & Goodman (2014) Samuel J. Gershman and Noah D. Goodman. Amortized Inference in Probabilistic Reasoning. In _Annual Meeting of the Cognitive Science Society_ , volume 36, 2014.
* Gómez-Bombarelli et al. (2018) Rafael Gómez-Bombarelli, Jennifer N. Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D. Hirzel, Ryan P. Adams, and Alán Aspuru-Guzik. Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules. _ACS Central Science_ , 4(2):268–276, February 2018. ISSN 2374-7943. doi: 10.1021/acscentsci.7b00572.
* Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Networks. In _Advances in Neural Information Processing Systems (NeurIPS)_ , June 2014.
* Hensman et al. (2012) James Hensman, Magnus Rattray, and Neil D. Lawrence. Fast Variational Inference in the Conjugate Exponential Family. In _Advances in Neural Information Processing Systems (NeurIPS)_ , December 2012.
* Heusel et al. (2017) Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. In _Advances in Neural Information Processing Systems (NeurIPS)_ , 2017.
* Ipsen et al. (2020) Niels Bruun Ipsen, Pierre-Alexandre Mattei, and Jes Frellsen. Not-MIWAE: Deep Generative Modelling with Missing not at Random Data. In _International Conference on Learning Representations (ICLR)_ , June 2020.
* King & Lawrence (2006) Nathaniel J. King and Neil D. Lawrence. Fast Variational Inference for Gaussian Process Models Through KL-Correction. In _European Conference on Machine Learning (ECML)_ , 2006.
* Kingma & Welling (2013) Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In _International Conference on Learning Representations (ICLR)_ , December 2013.
* Krishnan et al. (2016) Rahul G. Krishnan, Uri Shalit, and David Sontag. Structured Inference Networks for Nonlinear State Space Models. In _AAAI Conference on Artificial Intelligence_ , December 2016. doi: 10.48550/arXiv.1609.09869.
* Kviman et al. (2023) Oskar Kviman, Ricky Molén, Alexandra Hotti, Semih Kurt, Víctor Elvira, and Jens Lagergren. Cooperation in the Latent Space: The Benefits of Adding Mixture Components in Variational Autoencoders. In _International Conference on Machine Learning (ICML)_ , July 2023. doi: 10.48550/arXiv.2209.15514.
* Lake et al. (2015) Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. _Science_ , 2015. doi: 10.1126/science.aab3050.
* Lázaro-Gredilla & Titsias (2011) Miguel Lázaro-Gredilla and Michalis K. Titsias. Variational heteroscedastic Gaussian process regression. In _International Conference on Machine Learning (ICML)_ , June 2011.
* Little & Rubin (2002) Roderick J. A. Little and Donald B. Rubin. _Statistical Analysis with Missing Data: Second Edition_. Wiley-Interscience, 2002. ISBN 0-471-18386-5.
* Ma & Zhang (2021) Chao Ma and Cheng Zhang. Identifiable Generative Models for Missing Not at Random Data Imputation. In _Advances in Neural Information Processing Systems (NeurIPS)_ , October 2021.
* Ma et al. (2019) Chao Ma, Sebastian Tschiatschek, Konstantina Palla, José Miguel Hernández-Lobato, Sebastian Nowozin, and Cheng Zhang. EDDI: Efficient dynamic discovery of high-value information with partial VAE. In _International Conference on Machine Learning (ICML)_ , pp. 7483–7504, 2019. ISBN 9781510886988.
* Mattei & Frellsen (2018a) Pierre-Alexandre Mattei and Jes Frellsen. Leveraging the Exact Likelihood of Deep Latent Variable Models. In _Advances in Neural Information Processing Systems (NeurIPS)_ , February 2018a.
* Mattei & Frellsen (2018b) Pierre-Alexandre Mattei and Jes Frellsen. Refit your Encoder when New Data Comes by. In _Workshop on Bayesian Deep Learning at Neural Information Processing Systems (NeurIPS)_ , pp. 4, Montreal, Canada, 2018b.
* Mattei & Frellsen (2019) Pierre-Alexandre Mattei and Jes Frellsen. MIWAE: Deep Generative Modelling and Imputation of Incomplete Data Sets. In _International Conference on Machine Learning (ICML)_ , 2019.
* Meng (1994) Xiao-Li Meng. On the Rate of Convergence of the ECM Algorithm. _The Annals of Statistics_ , 22(1):326–339, March 1994. ISSN 0090-5364, 2168-8966. doi: 10.1214/aos/1176325371.
* Miao et al. (2022) Ning Miao, Emile Mathieu, N. Siddharth, Yee Whye Teh, and Tom Rainforth. On Incorporating Inductive Biases into VAEs. In _International Conference on Learning Representations (ICLR)_ , February 2022. doi: 10.48550/arXiv.2106.13746.
* Morningstar et al. (2021) Warren Morningstar, Sharad Vikram, Cusuh Ham, Andrew Gallagher, and Joshua Dillon. Automatic Differentiation Variational Inference with Mixtures. In _International Conference on Artificial Intelligence and Statistics (AISTATS)_ , pp. 3250–3258. PMLR, March 2021.
* Nazábal et al. (2020) Alfredo Nazábal, Pablo M. Olmos, Zoubin Ghahramani, and Isabel Valera. Handling Incomplete Heterogeneous Data using VAEs. _Pattern Recognition_ , 107, 2020. ISSN 0031-3203. doi: 10.1016/j.patcog.2020.107501.
* Papamakarios et al. (2017) George Papamakarios, Theo Pavlakou, and Iain Murray. Masked Autoregressive Flow for Density Estimation. _Advances in Neural Information Processing Systems (NeurIPS)_ , 30, 2017.
* Papamakarios et al. (2021) George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing Flows for Probabilistic Modeling and Inference. _Journal of Machine Learning Research_ , 22(57):1–64, 2021.
* Rainforth et al. (2019) Tom Rainforth, Adam R. Kosiorek, Tuan Anh Le, Chris J. Maddison, Maximilian Igl, Frank Wood, and Yee Whye Teh. Tighter Variational Bounds are Not Necessarily Better. In _International Conference on Machine Learning (ICML)_ , March 2019.
* Reddi et al. (2018) Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of Adam and beyond. In _International Conference on Learning Representations (ICLR)_ , pp. 1–23, 2018.
* Rezende et al. (2014) Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic Backpropagation and Approximate Inference. In _International Conference on Machine Learning (ICML)_ , Beijing, China, 2014.
* Robert & Casella (2004) Christian P. Robert and George Casella. _Monte Carlo Statistical Methods_. Springer, 2004. ISBN 0-387-21239-6.
* Roeder et al. (2017) Geoffrey Roeder, Yuhuai Wu, and David K. Duvenaud. Sticking the Landing: Simple, Lower-Variance Gradient Estimators for Variational Inference. In _Advances in Neural Information Processing Systems (NeurIPS)_ , volume 30, 2017.
* Shi et al. (2019) Yuge Shi, N. Siddharth, Brooks Paige, and Philip Torr. Variational Mixture-of-Experts Autoencoders for Multi-Modal Deep Generative Models. In _Advances in Neural Information Processing Systems (NeurIPS)_ , 2019.
* Simkus & Gutmann (2023) Vaidotas Simkus and Michael U. Gutmann. Conditional Sampling of Variational Autoencoders via Iterated Approximate Ancestral Sampling. _Transactions on Machine Learning Research_ , August 2023. ISSN 2835-8856.
* Simkus et al. (2023) Vaidotas Simkus, Benjamin Rhodes, and Michael U. Gutmann. Variational Gibbs Inference for Statistical Model Estimation from Incomplete Data. _Journal of Machine Learning Research_ , 24(196):1–72, 2023. ISSN 1533-7928.
* Sohl-Dickstein et al. (2015) Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep Unsupervised Learning using Nonequilibrium Thermodynamics. In _International Conference on Machine Learning (ICML)_ , November 2015. doi: 10.48550/arXiv.1503.03585.
* Sudak & Tschiatschek (2023) Timur Sudak and Sebastian Tschiatschek. Posterior Consistency for Missing Data in Variational Autoencoders. In _European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD)_ , October 2023. doi: 10.48550/arXiv.2310.16648.
* Tieleman (2008) Tijmen Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In _International Conference on Machine Learning (ICML)_ , pp. 1064–1071, 2008. ISBN 9781605582054. doi: 10.1145/1390156.1390290.
* Tucker et al. (2018) George Tucker, Dieterich Lawson, Shixiang Gu, and Chris J. Maddison. Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives. In _International Conference on Learning Representations (ICLR)_ , November 2018.
* Vedantam et al. (2017) Ramakrishna Vedantam, Ian Fischer, Jonathan Huang, and Kevin P. Murphy. Generative Models of Visually Grounded Imagination. _International Conference on Learning Representations (ICLR)_ , May 2017.
* Wei & Tanner (1990) Greg C. G. Wei and Martin A. Tanner. A Monte Carlo Implementation of the EM Algorithm and the Poor Man’s Data Augmentation Algorithms. _Journal of the American Statistical Association_ , 85(411):699–704, September 1990. doi: 10.1080/01621459.1990.10474930.
* Williams et al. (2018) Christopher K. I. Williams, Charlie Nash, and Alfredo Nazábal. Autoencoders and Probabilistic Inference with Missing Data: An Exact Solution for The Factor Analysis Case. _arXiv preprint_ , 1801.03851, January 2018.
* Wu & Goodman (2018) Mike Wu and Noah D. Goodman. Multimodal Generative Models for Scalable Weakly-Supervised Learning. In _Advances in Neural Information Processing Systems (NeurIPS)_ , February 2018.
* Younes (1999) Laurent Younes. On the convergence of markovian stochastic algorithms with rapidly decreasing ergodicity rates. _Stochastics and Stochastic Reports_ , 65(3-4):177–228, February 1999. ISSN 1045-1129. doi: 10.1080/17442509908834179.
* Zhang et al. (2021) Mingtian Zhang, Peter Hayes, and David Barber. Generalization Gap in Amortized Inference. In _Workshop on Bayesian Deep Learning at Neural Information Processing Systems (NeurIPS)_ , pp. 6, 2021.
## Appendix A Posterior complexity due to missing information
The complexity increase of the model posterior due to missing data, shown in
fig. 1, explains why flexible variational distributions (Burda et al., 2015;
Cremer et al., 2017) have been preferred when fitting VAEs from incomplete
data (Mattei & Frellsen, 2019; Ipsen et al., 2020; Ma & Zhang, 2021). Here we quantify this increase in posterior complexity via the expected Kullback–Leibler (KL) divergence as follows:
$\displaystyle\mathbb{E}_{{p^{*}}({\bm{x}})}\left[D_{\text{KL}}({p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})\;||\;{p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}}))\right]$
$\displaystyle=\mathbb{E}_{{p^{*}}({\bm{x}})}{\mathbb{E}_{{p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})}\left[\log\frac{{p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})}{{p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})}\right]}=\mathcal{I}({\bm{z}},{\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}}).$
As shown above, the expected KL divergence equals the (conditional) mutual information (MI) between the latents ${\bm{z}}$ and the missing variables ${\bm{x}}_{\mathrm{mis}}$.
The mutual information interpretation allows us to reason about when a more flexible variational family may be necessary to accurately estimate VAEs from incomplete data. Specifically, when the MI is small, the two posterior distributions ${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$ and ${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ are similar, in which case a simple variational distribution may work sufficiently well. This situation can arise when the observed variables ${\bm{x}}_{\mathrm{obs}}$ and the unobserved variables ${\bm{x}}_{\mathrm{mis}}$ are highly related, so that ${\bm{x}}_{\mathrm{mis}}$ provides little additional information about ${\bm{z}}$ beyond ${\bm{x}}_{\mathrm{obs}}$; for example, when random pixels of an image are masked, it is “easy” to infer the complete image due to the strong relationships between neighbouring pixels. On the other hand, when the MI is high, ${\bm{x}}_{\mathrm{mis}}$ provides significant additional information about ${\bm{z}}$ beyond ${\bm{x}}_{\mathrm{obs}}$, in which case a more flexible variational family may be needed; for example, when the pixels of an image are masked in blocks, which introduces significant uncertainty about what is missing.
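To make this concrete, below is a minimal numerical sketch (not from the paper) for a factor-analysis model, where both ${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$ and ${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ are Gaussian and the KL divergence is available in closed form (cf. Williams et al., 2018). The dimensionalities, noise level, and missingness pattern are illustrative assumptions.

```python
# Monte Carlo estimate of E_{p*(x)}[ KL(p(z|x) || p(z|x_obs)) ], i.e. the
# conditional mutual information I(z; x_mis | x_obs), for factor analysis:
# z ~ N(0, I), x | z ~ N(W z, diag(psi)), where both posteriors are Gaussian.
import numpy as np

rng = np.random.default_rng(0)
D, Z = 5, 2                       # data and latent dimensionality (assumed)
W = rng.normal(size=(D, Z))       # factor loadings
psi = 0.1 * np.ones(D)            # diagonal observation-noise variances

def posterior(W_sub, psi_sub, x_sub):
    """Gaussian posterior N(mean, cov) of z given a subset of x."""
    M = np.eye(Z) + W_sub.T @ (W_sub / psi_sub[:, None])   # precision matrix
    cov = np.linalg.inv(M)
    mean = cov @ W_sub.T @ (x_sub / psi_sub)
    return mean, cov

def kl_gauss(m0, S0, m1, S1):
    """KL( N(m0, S0) || N(m1, S1) ) for multivariate Gaussians."""
    S1_inv = np.linalg.inv(S1)
    quad = (m1 - m0) @ S1_inv @ (m1 - m0)
    return 0.5 * (np.trace(S1_inv @ S0) + quad - Z
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

obs = np.array([0, 1, 2])         # dims 3, 4 are missing (assumed pattern)
kls = []
for _ in range(2000):
    z = rng.normal(size=Z)
    x = W @ z + rng.normal(size=D) * np.sqrt(psi)
    m_full, S_full = posterior(W, psi, x)
    m_obs, S_obs = posterior(W[obs], psi[obs], x[obs])
    kls.append(kl_gauss(m_full, S_full, m_obs, S_obs))
print(f"Estimated I(z; x_mis | x_obs) ~ {np.mean(kls):.3f} nats")
```

Making the missing dimensions more predictable from the observed ones (e.g. via near-collinear rows of `W`) shrinks this estimate, matching the intuition above.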
## Appendix B DeMissVAE: Encoder objective derivation
We derive the objective for learning the variational distribution
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}})$ in DeMissVAE by first marginalising
the missing variables ${\bm{x}}_{\mathrm{mis}}$ from the complete-data ELBO in
eq. 2:
$\displaystyle\log\int$
$\displaystyle\exp\log{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})\mathop{}\\!\mathrm{d}{\bm{x}}_{\mathrm{mis}}\geq\log\int\exp\left\\{\mathbb{E}_{{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})}\left[\log\frac{{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}},{\bm{z}})}{{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})}\right]\right\\}\mathop{}\\!\mathrm{d}{\bm{x}}_{\mathrm{mis}}$
The l.h.s. is the marginal log-likelihood $\log{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}})$, but the integral on the r.h.s. is intractable. We then lower-bound the integral on the r.h.s. using the imputation distribution ${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$ and Jensen's inequality:
$\displaystyle\log{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}})$
$\displaystyle\geq\log\int\frac{{f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}}{{f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}}\exp\left\\{\mathbb{E}_{{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})}\left[\log\frac{{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}},{\bm{z}})}{{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})}\right]\right\\}\mathop{}\\!\mathrm{d}{\bm{x}}_{\mathrm{mis}}$
$\displaystyle=\log\mathbb{E}_{{f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}}\left[\exp\left(-\log{f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}\right)\exp\left\\{\mathbb{E}_{{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})}\left[\log\frac{{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}},{\bm{z}})}{{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})}\right]\right\\}\right]$
$\displaystyle=\log\mathbb{E}_{{f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}}\left[\exp\left\\{\mathbb{E}_{{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})}\left[\log\frac{{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}},{\bm{z}})}{{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}}){f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}}\right]\right\\}\right]$
$\displaystyle\geq\mathbb{E}_{{f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})}\left[\log\frac{{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}},{\bm{z}})}{{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}}){f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}}\right]$
$\displaystyle=\underbrace{\mathbb{E}_{{f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})}\left[\log\frac{{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}},{\bm{z}})}{{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})}\right]}_{\overset{!}{=}\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}({\bm{x}}_{\mathrm{obs}};{\bm{\phi}},{\bm{\theta}},f^{t})}+\underbrace{\mathcal{H}\left[{f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}\right]}_{\text{Const.
w.r.t.\ ${\bm{\phi}}$}}.$
## Appendix C DeMissVAE: Motivating the separation of objectives
The two DeMissVAE objectives $\mathcal{L}_{\mathrm{CVI}}^{\bm{\theta}}$ and
$\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}$ in eqs. 10 and 12 correspond to
valid lower-bounds on $\log{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}})$
irrespective of ${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$.
Moreover, both of them are tight at
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}={p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$
and
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})={p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})$.
_So, a natural question is: why do we prefer $\mathcal{L}_{\mathrm{CVI}}^{\bm{\theta}}$ to learn ${p_{\bm{\theta}}}$ and $\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}$ to learn ${q_{\bm{\phi}}}$?_
##### Why use $\mathcal{L}_{\mathrm{CVI}}^{\bm{\theta}}$ in eq. 10 over
$\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}$ in eq. 12 to learn
${p_{\bm{\theta}}}({\bm{x}})$?
Maximisation of the objective $\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}$ in
iteration $t$ w.r.t. ${\bm{\theta}}$ would have to compromise between
maximising the log-likelihood $\log{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}})$
and keeping the other two KL divergence terms in eq. 13 low. Specifically, the
compromise between maximising $\log{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}})$
and keeping
$D_{\text{KL}}({f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}\;||\;{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}}))$
low is equivalent to the compromise in the EM algorithm, which is known to slow the convergence of the model (Meng, 1994). Moreover, if
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}\not={p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$
then minimising the
$D_{\text{KL}}({f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}\;||\;{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}}))$
will fit the model ${p_{\bm{\theta}}}({\bm{x}})$ to the biased samples from
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$. On the other
hand, in $\mathcal{L}_{\mathrm{CVI}}^{\bm{\theta}}$ the missing variables
${\bm{x}}_{\mathrm{mis}}$ are marginalised from the model, therefore it avoids
the compromise with
$D_{\text{KL}}({f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}\;||\;{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}}))$
and the potential bias of the imputation distribution
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$ affects the
model _only_ via the latents
${\bm{z}}\sim{q_{{\bm{\phi}},f^{t}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$,
increasing the robustness to sub-optimal imputations.
##### Why use $\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}$ in eq. 12 over
$\mathcal{L}_{\mathrm{CVI}}^{\bm{\theta}}$ in eq. 10 to learn
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}})$?
In the case of $\mathcal{L}_{\mathrm{CVI}}^{\bm{\theta}}$, if
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}={p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$
then the bound is tightened when
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})={p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})$
for all ${\bm{x}}_{\mathrm{mis}}$, which is the same optimal ${q_{\bm{\phi}}}$ as if we used $\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}$. But there is also at least one more possible optimal solution, ${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})={p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$, which ignores the imputations and corresponds to the optimal solution of the standard approach in section 2. Hence the optimum is (partially) unidentifiable, which can make optimisation of ${q_{\bm{\phi}}}$ using $\mathcal{L}_{\mathrm{CVI}}^{\bm{\theta}}$ difficult. Moreover, if
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}\not={p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$
then in order to minimise
$D_{\text{KL}}({q_{{\bm{\phi}},f^{t}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})\;||\;{p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}}))$
w.r.t. ${\bm{\phi}}$ the variational distribution
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})$
would have to compensate for the inaccuracies of
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$ by adjusting the
probability mass over the latents ${\bm{z}}$, such that
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})$
is correct on average, i.e.
${q_{{\bm{\phi}},f^{t}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})=\mathbb{E}_{{f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}}[{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})]\approx{p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$.
These two issues make optimising ${\bm{\phi}}$ via
$\mathcal{L}_{\mathrm{CVI}}^{\bm{\theta}}$ such that
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})\approx{p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})$
difficult. On the other hand, in $\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}$ the
optimal ${q_{\bm{\phi}}}$ is always at
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})={p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})$,
irrespective of the imputation distribution
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$, hence the
$\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}$ objective in eq. 12 is well-defined
and more robust to inaccuracies of
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$ for the
optimisation of
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})$.
Figure 5: _A control study on a VAE model with 2D latent space (see additional
details in section E.1), examining the sensitivity of the proposed method
(DeMissVAE, green) and two control methods (blue and yellow) to the accuracy
of the imputation distribution
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$._ Left:
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}={p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$
represented using rejection sampling. Center: an oracle imputation function
that gets progressively “wider” from left to right of the figure. Right: an oracle imputation distribution that increasingly oversamples low-probability posterior modes towards the right of the figure. The log-likelihood
is computed on a held-out test data set by numerically integrating the 2D
latent space of the VAE. The horizontal axis on the two right-most figures
shows the Jensen–Shannon divergence between the imputation distribution and
the ground-truth conditional
${p^{*}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$.
In fig. 5 we verify the efficacy of DeMissVAE via a control study on a small
VAE model ${p_{\bm{\theta}}}({\bm{x}})$ with 2D latent space fitted on
incomplete samples from a ground truth mixture-of-Gaussians (MoG) distribution
${p^{*}}({\bm{x}})$. We evaluate fitting the VAE using only
$\mathcal{L}_{\mathrm{CVI}}^{\bm{\theta}}$ in eq. 10 (CVI-VAE, blue), only
$\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}$ in eq. 12 (MVB-VAE, yellow), and
using the proposed two-objective approach (DeMissVAE, green). In the left-most
figure we evaluate the three methods with the imputation distribution $f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})={p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$ represented using rejection sampling, which corresponds to the optimal imputation distribution, i.e. $D_{\text{KL}}({f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}\;||\;{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}}))=0$.
We see that the proposed approach (green) dominates the other two control methods (blue and yellow), and, importantly, that marginalising the missing variables in DeMissVAE (green) improves the model accuracy compared to an EM-type handling of the missing variables (yellow). Furthermore, in the remaining
two figures we investigate the sensitivity of the methods to the accuracy of
imputations in ${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$.
In Oracle 1 we start with the ground-truth conditional
${p^{*}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$ and, along the
x-axis of the figure, investigate how the methods perform when the imputation
distribution becomes “wider”: first interpolating from
${p^{*}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$ to an
independent unconditional distribution
$\prod_{d\in\text{idx}({\bm{m}})}{p^{*}}(x_{d})$ and then further towards an independent Gaussian distribution. In Oracle 2 we investigate what happens
when the sampler “oversamples” posterior modes: we interpolate the imputation
distribution from
${p^{*}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$ to
$\frac{1}{C}\sum_{c}^{C}{p^{*}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}},c)$,
where $c$ is the component of the mixture distribution with a total of $C$
components. As we see in the figure, the proposed DeMissVAE approach (green)
performs similarly to or better than the MVB-VAE (yellow) and CVI-VAE (blue) control methods, with an exception when the imputations from ${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$ are extremely inaccurate (last two points on the middle figure), which is expected since
${q_{{\bm{\phi}},f^{t}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ in eq. 9 can be
arbitrarily far from ${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$
when
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})={p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}},{\bm{x}}_{\mathrm{mis}})$
but
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}\not\approx{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$.
Finally, in fig. 6 we investigate what happens if we used only
$\mathcal{L}_{\mathrm{CVI}}^{\bm{\theta}}$ in eq. 10 or
$\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}$ in eq. 12 to fit the VAE model, in
contrast to the two separate objectives for encoder and decoder in DeMissVAE.
We use the LAIR sampling method (Simkus & Gutmann, 2023) as detailed in
appendix D to obtain approximate samples from
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}\approx{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$. We observe that DeMissVAE achieves a better fit of the model, in line with our motivation in this section.
Figure 6: _A control study on a VAE model with 2D latent space (see additional
details in section E.1), investigating the importance of the two-objective
approach in DeMissVAE (green) and two control methods (blue and yellow)._ In
CVI-VAE (blue) we fit both the encoder and decoder using eq. 10, and in MVB-
VAE (yellow) we fit both the encoder and decoder using eq. 12. The log-
likelihood is computed on a held-out test data set by numerically integrating
the 2D latent space of the VAE.
## Appendix D DeMissVAE: Implementing the training procedure
DeMissVAE requires optimising two objectives
$\mathcal{L}_{\mathrm{CVI}}^{\bm{\theta}}$ and
$\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}$ in eqs. 10 and 12 and drawing
(approximate) samples to represent
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}\approx{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})$.
Our aim is to implement this efficiently to minimise redundant computation.
Algorithm 1 Shared computation of the DeMissVAE learning objectives
1:Require: parameters ${\bm{\theta}}$ and ${\bm{\phi}}$, number of latent samples $L$, completed data-point $({\bm{x}}_{\mathrm{obs}}^{i},{\bm{x}}_{\mathrm{mis}}^{ik})$
2:${\bm{\psi}}^{ik}\leftarrow\texttt{Encoder}({\bm{x}}_{\mathrm{obs}}^{i},{\bm{x}}_{\mathrm{mis}}^{ik};{\bm{\phi}})$ $\triangleright$ Compute parameters of the variational distribution
3:${\bm{z}}_{1},\ldots,{\bm{z}}_{L}\sim q({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}}^{i},{\bm{x}}_{\mathrm{mis}}^{ik};{\bm{\psi}}^{ik})$ $\triangleright$ Sample latents ${\bm{z}}$
4:${\bm{\eta}}_{l}\leftarrow\texttt{Decoder}({\bm{z}}_{l};{\bm{\theta}})$ for $\forall l\in[1,L]$ $\triangleright$ Compute parameters of the generative distribution
5:def $\mathcal{L}_{\mathrm{CVI}}^{\bm{\theta}}$(${\bm{z}}_{1},\ldots,{\bm{z}}_{L},{\bm{\eta}}_{1},\ldots,{\bm{\eta}}_{L}$): $\triangleright$ Procedure for estimating eq. 10
6: return $\frac{1}{L}\sum_{l=1}^{L}\log p({\bm{x}}_{\mathrm{obs}}^{i},{\bm{z}}_{l};{\bm{\eta}}_{l})$
7:def $\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}$(${\bm{\psi}}^{ik},{\bm{z}}_{1},\ldots,{\bm{z}}_{L},{\bm{\eta}}_{1},\ldots,{\bm{\eta}}_{L}$): $\triangleright$ Procedure for estimating eq. 12
8: return $\frac{1}{L}\sum_{l=1}^{L}\log p({\bm{x}}_{\mathrm{obs}}^{i},{\bm{x}}_{\mathrm{mis}}^{ik},{\bm{z}}_{l};{\bm{\eta}}_{l})-\log q({\bm{z}}_{l}\mid{\bm{x}}_{\mathrm{obs}}^{i},{\bm{x}}_{\mathrm{mis}}^{ik};{\bm{\psi}}^{ik})$
9:return $\mathcal{L}_{\mathrm{CVI}}^{\bm{\theta}}({\bm{z}}_{1},\ldots,{\bm{z}}_{L},{\bm{\eta}}_{1},\ldots,{\bm{\eta}}_{L}),\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}({\bm{\psi}}^{ik},{\bm{z}}_{1},\ldots,{\bm{z}}_{L},{\bm{\eta}}_{1},\ldots,{\bm{\eta}}_{L})$
The algorithm starts with a randomly-initialised target VAE model
${p_{\bm{\theta}}}({\bm{x}},{\bm{z}})$, an amortised variational distribution
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}})$, and an incomplete data set
$\mathcal{D}=\\{{\bm{x}}_{\mathrm{obs}}^{i}\\}_{i}$. Then, to represent the imputation distribution ${f^{0}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$, $K$ imputations $\\{{\bm{x}}_{\mathrm{mis}}^{ik}\\}_{k=1}^{K}$ are generated for each ${\bm{x}}_{\mathrm{obs}}^{i}\in\mathcal{D}$ using a simple imputation function, such as sampling from the marginal empirical distributions of the missing variables. The algorithm then iterates between the following two steps:
1. Imputation. Update the imputation distribution
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$ using cheap
approximate sampling methods such as pseudo-Gibbs (Rezende et al., 2014),
Metropolis-within-Gibbs (MWG, Mattei & Frellsen, 2018a), or latent-adaptive
importance resampling (LAIR, Simkus & Gutmann, 2023). Moreover, since the
model and the variational distributions are initialised randomly, we skip the
imputation step during the first epoch over the data.
2. Parameter update. Update the parameters using stochastic gradient ascent on
$\mathcal{L}_{\mathrm{CVI}}^{\bm{\theta}}$ and
$\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}$ in eqs. 10 and 12 with the
imputations from
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$.
##### Efficient parameter update.
While the two objectives for ${p_{\bm{\theta}}}$ and ${q_{\bm{\phi}}}$ in eqs.
10 and 12 are different, a major part of the computation can be shared, as
shown in algorithm 1. As usual, the objectives are approximated using Monte
Carlo averaging and require only one evaluation of the encoder, decoder, and prior for each completed data-point
$({\bm{x}}_{\mathrm{obs}}^{i},{\bm{x}}_{\mathrm{mis}}^{ik})$. Therefore, only
backpropagation needs to be performed separately and the overall per-iteration
computational cost of optimising the two objectives is about 1.67 times the
cost of a fully-observed VAE optimisation (instead of 2 times if implemented
naïvely). The cost of backpropagation is about 2 times the cost of a forward pass (Burda et al., 2015).
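Below is a minimal PyTorch sketch of this shared computation; the tiny Gaussian encoder and decoder are illustrative assumptions rather than the architectures used in the experiments, and `backward(inputs=...)` is one way to realise the two separate backpropagations.

```python
# One shared forward pass produces both objectives; only the backward
# passes differ: theta (decoder) follows L_CVI, phi (encoder) follows L_LMVB.
import torch
import torch.nn as nn

D, Z, L = 5, 2, 10                      # data dim, latent dim, latent samples (assumed)
encoder = nn.Linear(D, 2 * Z)           # outputs [mean, log-std] of q(z | x)
decoder = nn.Linear(Z, 2 * D)           # outputs [mean, log-std] of p(x | z)

def shared_objectives(x_obs, x_mis, mask):
    """mask is 1 where observed; returns (L_CVI, L_LMVB) estimates."""
    x = x_obs * mask + x_mis * (1 - mask)            # completed data-point
    mu_q, log_sd_q = encoder(x).chunk(2, dim=-1)     # psi^{ik}
    q = torch.distributions.Normal(mu_q, log_sd_q.exp())
    z = q.rsample((L,))                              # (L, batch, Z)
    mu_p, log_sd_p = decoder(z).chunk(2, dim=-1)     # eta_l
    p = torch.distributions.Normal(mu_p, log_sd_p.exp())
    prior = torch.distributions.Normal(0.0, 1.0)
    log_px = p.log_prob(x)                           # per-dimension terms
    log_pz = prior.log_prob(z).sum(-1)
    # eq. 10 estimate: missing dims dropped from this factorised decoder,
    # i.e. marginalised out of the model term
    l_cvi = ((log_px * mask).sum(-1) + log_pz).mean(0)
    # eq. 12 estimate: completed-data ELBO term
    l_lmvb = (log_px.sum(-1) + log_pz - q.log_prob(z).sum(-1)).mean(0)
    return l_cvi.mean(), l_lmvb.mean()

x_obs = torch.randn(4, D); mask = (torch.rand(4, D) > 0.5).float()
x_mis = torch.randn(4, D)                            # current imputations
l_cvi, l_lmvb = shared_objectives(x_obs, x_mis, mask)
# Gradient ascent on each objective = descent on its negative:
(-l_cvi).backward(retain_graph=True, inputs=list(decoder.parameters()))
(-l_lmvb).backward(inputs=list(encoder.parameters()))
```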
##### Efficient imputation.
To make the imputation step efficient, the imputation distribution
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$ is “persistent”
between iterations, that is, the imputation distribution from the previous
iteration ${f^{t-1}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$ is
used to initialise the iterative approximate VAE sampler at iteration
$t$. Persistent samplers have been used in the past to increase the efficiency of maximum-likelihood estimation methods (Younes, 1999; Tieleman, 2008; Simkus et al., 2023). Moreover, an iteration of the pseudo-Gibbs, MWG, or
LAIR samplers uses the same quantities as the objectives
$\mathcal{L}_{\mathrm{CVI}}^{\bm{\theta}}$ and
$\mathcal{L}_{\mathrm{LMVB}}^{\bm{\phi}}$ in eqs. 10 and 12, and hence the
cost of one iteration of the sampler in the imputation step can be shared with
the cost of computation of the learning objectives. However, it is important
to note that the accuracy of imputations affects the accuracy of the estimated
model, and hence better estimation can be achieved by increasing the
computational budget for imputation.
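For concreteness, one persistent pseudo-Gibbs update (Rezende et al., 2014) is sketched below, again under toy Gaussian encoder/decoder assumptions; the chain state `x_mis` being carried across training iterations is what makes the sampler persistent.

```python
# One pseudo-Gibbs sweep: z ~ q(z | x_obs, x_mis), then x_mis ~ p(x_mis | z).
import torch
import torch.nn as nn

D, Z = 5, 2
encoder = nn.Linear(D, 2 * Z)                  # toy stand-ins (assumed)
decoder = nn.Linear(Z, 2 * D)

@torch.no_grad()
def pseudo_gibbs_step(x_obs, x_mis, mask):
    x = x_obs * mask + x_mis * (1 - mask)
    mu_q, log_sd_q = encoder(x).chunk(2, dim=-1)
    z = torch.distributions.Normal(mu_q, log_sd_q.exp()).sample()
    mu_p, log_sd_p = decoder(z).chunk(2, dim=-1)
    x_new = torch.distributions.Normal(mu_p, log_sd_p.exp()).sample()
    return x_obs * mask + x_new * (1 - mask)   # only missing entries change

# Persistent chain: the imputations are stored and refreshed each iteration.
x_obs = torch.randn(4, D); mask = (torch.rand(4, D) > 0.5).float()
x_mis = torch.randn(4, D)                      # state carried across iterations
for t in range(3):                             # training iterations (schematic)
    x_mis = pseudo_gibbs_step(x_obs, x_mis, mask)
```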
## Appendix E Experiment details
In this appendix we provide additional details on the experiments.
### E.1 Mixture-of-Gaussians data with a 2D latent VAE
We generated a random 5D mixture-of-Gaussians model with 15 components by sampling the mixture covariance matrices from the inverse Wishart distribution $\mathcal{W}^{-1}(\nu=D,\Psi=\mathbf{I})$, the means from the Gaussian distribution $\mathcal{N}(\mu=\mathbf{0},\sigma=\mathbf{3})$, and the component probabilities from the (uniform) Dirichlet distribution $\mathrm{Dir}(\alpha=\mathbf{1})$. The model was then standardised to have a zero mean and a standard deviation of one. The pairwise marginal densities of the generated distribution are visualised in fig. 7, showing a highly complex and multimodal distribution; the generated parameters and data used in this paper are available in the shared code repository. We simulated a 20K-sample data set to fit the VAEs.
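A sketch of this generation procedure is given below; the random seed (and hence the exact parameters shipped in the code repository) is not reproduced, and standardising the samples rather than the model itself is a simplifying assumption.

```python
# Random 5D mixture of 15 Gaussians: covariances ~ inverse Wishart,
# means ~ N(0, 3), component probabilities ~ Dir(1); then standardise.
import numpy as np
from scipy.stats import invwishart, dirichlet

rng = np.random.default_rng(0)
D, C = 5, 15
covs = invwishart(df=D, scale=np.eye(D)).rvs(size=C, random_state=rng)
means = rng.normal(loc=0.0, scale=3.0, size=(C, D))
weights = dirichlet(alpha=np.ones(C)).rvs(random_state=rng)[0]

comp = rng.choice(C, p=weights, size=20_000)            # 20K-sample data set
x = np.stack([rng.multivariate_normal(means[c], covs[c]) for c in comp])
x = (x - x.mean(0)) / x.std(0)                          # zero mean, unit std
```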
Figure 7: The pairwise marginals of the ground-truth Mixture-of-Gaussians
distribution.
We then fitted a VAE model with 2-dimensional latent space using diagonal
Gaussian encoder and decoder distributions, and a fixed standard Normal prior.
For the decoder and encoder networks we used fully-connected residual neural
networks with 3 residual blocks, 200 hidden dimensions, and ReLU activations.
To optimise the model parameters we used the AMSGrad optimiser (Reddi et al., 2018) with a learning rate of $10^{-3}$ for a total of 500 epochs.
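For reference, one plausible reading of the "fully-connected residual network" above is sketched below; the exact block layout (placement of activations, absence of normalisation) is an assumption, since only the block count, width, and activation are specified.

```python
# Residual MLP: input projection, 3 identity-skip blocks of width 200, head.
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, h):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(h, h), nn.ReLU(), nn.Linear(h, h))
    def forward(self, x):
        return x + self.net(x)            # identity skip connection

def res_mlp(d_in, d_out, h=200, blocks=3):
    layers = [nn.Linear(d_in, h)] + [ResBlock(h) for _ in range(blocks)]
    return nn.Sequential(*layers, nn.ReLU(), nn.Linear(h, d_out))
```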
The hyperparameters are listed in table 3; note that the total number of samples was the same for all methods (i.e. $5/15/25$). Moreover, we used “sticking-the-landing” (STL) gradients (Roeder et al., 2017) to reduce gradient variance for all methods. We also evaluated the doubly-reparametrised gradients (DReG, Tucker et al., 2018) for the IWAE methods but found STL to perform similarly or better.
Method | $Z$ | $K$ | $I$ | Mixture sampling
---|---|---|---|---
MVAE | $5/15/25$ | 1 | — | —
MissVAE | $5/15/25$ | $5/15/25$ | — | Ancestral
MissSVAE | 1 | $5/15/25$ | — | Stratified
MIWAE | 1 | 1 | $5/15/25$ | —
MissIWAE | 1 | $5/15/25$ | $5/15/25$ | Ancestral
MissSIWAE | 1 | $5/15/25$ | 1 | Stratified
DeMissVAE | 1 | $5/15/25$ | — | LAIR (1 iteration, $R=0$) (Simkus & Gutmann, 2023)
Table 3: _Method hyperparameters on MoG data._
### E.2 UCI data sets
We fit the VAEs on four data sets from the UCI repository (Dua & Graff, 2017)
with the preprocessing of Papamakarios et al. (2017). The VAE model uses
diagonal Gaussian encoder and decoder distributions regularised such that the
standard deviation $\geq 10^{-5}$ (Mattei & Frellsen, 2018a), and a fixed
standard Normal prior. The latent space is 16-dimensional, except for MINIBOONE, where 32 dimensions were used.
The encoder and decoder networks used fully-connected residual neural networks with 2 residual blocks (except on the MINIBOONE dataset, where 5 blocks were used in the encoder), 256 hidden dimensions, and ReLU activations. A dropout of 0.5 was used on the MINIBOONE dataset. The parameters were optimised using the AMSGrad optimiser (Reddi et al., 2018) with a learning rate of $10^{-3}$ and a cosine learning-rate schedule for a total of 200K iterations (or 22K iterations on MINIBOONE). As before, STL gradients (Roeder et al., 2017) were used to reduce the gradient variance for all methods.
DeMissVAE used the LAIR sampler (Simkus & Gutmann, 2023) with $K=5$, $R=1$, and 10 iterations. Moreover, we used gradient-norm clipping to stabilise DeMissVAE, with the maximum norm set to $1$ (except for the POWER dataset, where we set it to $0.5$).
### E.3 MNIST and Omniglot data sets
We fit a VAE on statically binarised MNIST and Omniglot data sets (Lake et al., 2015) downsampled to 28x28 pixels. The VAE uses diagonal Gaussian encoder distributions regularised such that the standard deviation is $\geq 10^{-5}$ (Mattei & Frellsen, 2018a), a fixed standard Normal prior, and a Bernoulli decoder distribution. The latent space is 50-dimensional.
For both MNIST and Omniglot we used convolutional ResNet neural networks for the encoder and decoder with 4 residual blocks, ReLU activations, and a dropout probability of 0.3. For MNIST, the encoder residual-block hidden dimensionalities were 32, 64, 128, 256, and for the decoder they were 128, 64, 32, 32. For Omniglot, the encoder residual-block hidden dimensionalities were 64, 128, 256, 512, and for the decoder they were 256, 128, 64, 64. We used the AMSGrad optimiser (Reddi et al., 2018) with a $10^{-4}$ learning rate, a cosine learning-rate schedule, and STL gradients (Roeder et al., 2017) for 500 epochs on MNIST and 200 epochs on Omniglot.
For MVAE we use 5 latent samples, and for MIWAE we use 5 importance samples. For MissVAE we use $K=5$ mixture components and draw 5 latent samples. For MissSVAE we use $K=5$ mixture components and draw 1 sample from each component, for a total of 5 samples. For MissIWAE we use $K=5$ components and draw 5 importance samples. For MissSIWAE we use $K=5$ components and draw 1 sample from each component. For DeMissVAE we use $K=5$ imputations and update them using a single step of pseudo-Gibbs (Rezende et al., 2014).
## Appendix F Additional figures
In this appendix we provide additional figures for the experiments in this
paper.
### F.1 Mixture-of-Gaussians data with a 2D latent VAE
In this section we show additional analysis on the mixture-of-Gaussians data,
supplementing the results in section 6.1.
#### F.1.1 Analysis of gradient variance with ancestral and stratified
sampling
In section 6.1 we observed that the model estimation performance can depend on
whether ancestral sampling (with implicit reparametrisation) or stratified
sampling is used to approximate the expectations in eqs. 4, 5, 7 and 8,
corresponding to MissVAE/MissIWAE and MissSVAE/MissSIWAE, respectively.
We analyse the signal-to-noise ratio (SNR) of the gradients w.r.t.
${\bm{\phi}}$ and ${\bm{\theta}}$ for the two approaches, which is defined as
follows (Rainforth et al., 2019)
$\displaystyle\text{SNR}({\bm{\phi}})=\left|\frac{\mathbb{E}\left[\Delta({\bm{\phi}})\right]}{\sigma[\Delta({\bm{\phi}})]}\right|,\quad\text{and}\quad\text{SNR}({\bm{\theta}})=\left|\frac{\mathbb{E}\left[\Delta({\bm{\theta}})\right]}{\sigma[\Delta({\bm{\theta}})]}\right|,$
where $\Delta(\cdot)$ denotes the gradient estimate, and $\sigma[\cdot]$ is
the standard deviation of a random variable. We estimate the SNR by computing
the expectation and standard deviation over the entire training epoch.
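In code, this estimate can be formed as in the sketch below (not the analysis code of the paper); reducing the per-parameter SNR to a single scalar via the median over parameters is an assumption.

```python
# Gradient SNR over one epoch: |mean| / std of per-batch gradients, per
# parameter, then summarised by the median across parameters.
import torch

def gradient_snr(grads):
    """grads: list of flattened per-batch gradient tensors from one epoch."""
    g = torch.stack(grads)                           # (num_batches, num_params)
    snr = g.mean(0).abs() / g.std(0).clamp_min(1e-12)
    return snr.median()

# Usage: after each backward pass, collect the flattened gradient, e.g.
# grads.append(torch.cat([p.grad.flatten() for p in model.parameters()]))
```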
The SNR for ${\bm{\phi}}$ and ${\bm{\theta}}$ is plotted in fig. 8. We observe
that the stratified approaches (MissSVAE and MissSIWAE) generally have higher
SNR. This is possibly the reason why MissSVAE and MissSIWAE have achieved
better model accuracy than the ancestral approaches (MissVAE and MissIWAE) in
section 6.1.
Figure 8: _Signal-to-noise ratio (SNR, higher is better) of the gradients
w.r.t. encoder parameters ${\bm{\phi}}$ (left) and decoder parameters
${\bm{\theta}}$ (right)._ For all methods we used a budget of 5 samples from
the variational distribution (see section E.1 for more details). We show the
median SNR over 5 independent runs and a 90% confidence interval.
#### F.1.2 Analysis of the model posteriors
In figs. 9, 10 and 11 we visualise the model posteriors with complete and
incomplete data, ${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$ and
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$, respectively, and
the variational distribution ${q_{\bm{\phi}}}({\bm{z}}\mid\cdot)$ that was
used to fit the model via the variational ELBO. For each method we have used a
budget of 25 samples from the variational distribution during training
(additional details are in section E.1). Each figure shows the posteriors for
5 training data-points using distinct colours.
Figure 9 shows MVAE, MissVAE, and MissSVAE model posteriors
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$ and
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$, as well as the
variational distribution
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$, which approximates the
incomplete-data posterior. As motivated in section 3 we observe that the
Gaussian posterior in MVAE (first row) is not sufficiently flexible to
approximate the complex incomplete-data posteriors
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$. On the other hand,
the mixture-variational approaches, MissVAE (second row) and MissSVAE (third
row), are able to approximate the incomplete-data posteriors well.
Figure 10 shows MIWAE, MissIWAE, and MissSIWAE model posteriors
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$ and
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$, as well as the
variational proposal ${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$
and the importance-weighted semi-implicit distribution
$q_{{\bm{\phi}},{\bm{\theta}},I=25}^{\text{IW}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$
that arises from sampling the variational proposal
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ and re-sampling using
importance weights
$w({\bm{z}})={p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}},{\bm{z}})/{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$
(Cremer et al., 2017). Similar to the MVAE case above, the variational
proposal ${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ in MIWAE
(first row) is quite far from the model posterior
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$, but the importance-weighted bound in eq. 7 is able to re-weight the samples to match the model posterior sufficiently well, as shown in the fourth column. However, as the efficiency of importance sampling depends on the discrepancy between the
proposal and the target distributions, we can expect that more flexible
variational distributions may improve the performance of the importance-
weighted ELBO methods. Importantly, we show that the variational-mixture
approaches, MissIWAE (second row) and MissSIWAE (third row), are able to adapt
the variational proposals to the incomplete-data posteriors well, and as a
result achieve better efficiency than MIWAE.
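The fourth-column distribution can be obtained with a short self-normalised importance resampling step, sketched below; `log_joint` and `log_proposal` are assumed callables returning $\log{p_{\bm{\theta}}}({\bm{x}}_{\mathrm{obs}},{\bm{z}})$ and $\log{q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$, respectively.

```python
# Draw from the proposal, weight by w(z) = p(x_obs, z) / q(z | x_obs),
# and resample in proportion to the self-normalised weights.
import torch

def importance_resample(z, log_joint, log_proposal, num_draws):
    """z: (I, Z) samples from the proposal; returns resampled latents."""
    log_w = log_joint(z) - log_proposal(z)       # (I,) unnormalised log-weights
    w = torch.softmax(log_w, dim=0)              # self-normalised weights
    idx = torch.multinomial(w, num_draws, replacement=True)
    return z[idx]
```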
Figure 11 shows DeMissVAE model posteriors
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$ and
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$, the variational
distribution ${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}})$, which approximates the
_complete_ -data posterior, and the imputation-mixture
${q_{{\bm{\phi}},f^{t}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ approximated
using the 25 imputations in
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$ at the end of
training. We observe similar behaviour to fig. 1, where the complete data
posteriors ${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$ are close to Gaussian but
the incomplete-data posteriors
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ are irregular. As we
show in section 6.1, DeMissVAE is capable of fitting the model well by
learning the completed-data variational distribution
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}})$ (third column) and using the
imputation-mixture in eq. 9 to approximate the incomplete data posterior
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$. Moreover, we observe
that the imputation-mixture
${q_{{\bm{\phi}},f^{t}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ (fourth column)
captures only one of the modes of the model posterior
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$. This is a result of
using a small imputation sampling budget, that is, using only a single
iteration of LAIR to update the imputations (see more details in appendix D),
and hence accuracy can be improved by spending additional computation on the imputation step to represent the imputation distribution more faithfully. Nonetheless, as observed in fig. 2, DeMissVAE
achieves good model accuracy despite potentially sub-optimal imputations,
further signifying the importance of the two learning objectives for the
encoder and decoder distributions in sections 4.2 and C.
Interestingly, by comparing the complete-data posteriors
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$ (first column) in figs. 9, 10 and
11, we observe that they are slightly more irregular than in the complete case
in fig. 1, except for DeMissVAE whose posteriors are nearly Gaussian. The
irregularity is stronger for the importance-weighted ELBO-based methods in
fig. 10. This is in line with the observation by Burda et al. (2015, Appendix
C) and Cremer et al. (2018, Section 5.4) that VAEs trained with more flexible
variational distributions tend to learn a more complex model posterior. This
means that using the importance-weighted bounds, and to a lesser extent the
finite variational-mixture approaches from section 4.1, to fit VAEs on
incomplete data may result in worse-structured latent spaces, compared to
models fitted on complete data. On the other hand, we observe that DeMissVAE
learns a better-structured latent space, with the posterior
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$ close to a Gaussian, that is
comparable to the complete case. This suggests that the decomposed approach in
DeMissVAE may be important in cases where the latent space needs to be
regular, at the additional cost of obtaining missing data imputations (see
appendix D).
Figure 9: _Posterior distributions of MVAE, MissVAE, and MissSVAE._ First
column: the model posterior ${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$ under
complete data ${\bm{x}}$. Second column: the model posterior
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ under incomplete data
${\bm{x}}_{\mathrm{obs}}$. Third column: variational approximation
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ of the incomplete
posterior ${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ obtained at
the end of training.
Figure 10: _Posterior distributions of MIWAE, MissIWAE,
and MissSIWAE._ First column: the model posterior
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$ under complete data ${\bm{x}}$.
Second column: the model posterior
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ under incomplete data
${\bm{x}}_{\mathrm{obs}}$. Third column: variational proposal
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ for an incomplete data-
point ${\bm{x}}_{\mathrm{obs}}$ obtained at the end of training. Fourth
column: importance-weighted variational distribution
${q_{\bm{\phi}}}^{\text{IW}}_{I=25}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ for
an incomplete data-point ${\bm{x}}_{\mathrm{obs}}$ obtained after re-weighting
samples from ${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ (Cremer et al., 2017).
Figure 11: _Posterior distributions of DeMissVAE._ First: the
model posterior ${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$ under complete data
${\bm{x}}$. Second: the model posterior
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}}_{\mathrm{obs}})$ under incomplete data
${\bm{x}}_{\mathrm{obs}}$. Third: variational approximation
${q_{\bm{\phi}}}({\bm{z}}\mid{\bm{x}})$ of the complete-data posterior
${p_{\bm{\theta}}}({\bm{z}}\mid{\bm{x}})$ obtained at the end of training.
Fourth: the variational imputation-mixture distribution in eq. 9 using the
imputation distribution
${f^{t}({\bm{x}}_{\mathrm{mis}}\mid{\bm{x}}_{\mathrm{obs}})}$ obtained at the
end of training, approximated using a Monte Carlo average with 25 imputations.
### F.2 UCI data sets
In fig. 12 we plot the Fréchet inception distance (FID, Heusel et al., 2017)
versus training iteration on the UCI datasets. The results closely mimic the
log-likelihood results in section 6.2. Notably, we observe that mixture variational distributions become more important as the missingness fraction increases, since higher missingness makes the posterior distributions more complex.
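For reference, a sketch of the Fréchet distance computation (Heusel et al., 2017) on such feature vectors is given below; the feature extraction itself (the last encoder layer of the independent complete-data VAE) is assumed to have been done already.

```python
# FID: Frechet distance between Gaussians fitted to two sets of features.
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_model, feats_data):
    mu1, mu2 = feats_model.mean(0), feats_data.mean(0)
    s1 = np.cov(feats_model, rowvar=False)
    s2 = np.cov(feats_data, rowvar=False)
    covmean = sqrtm(s1 @ s2).real               # drop tiny imaginary parts
    return ((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2 - 2 * covmean)
```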
Figure 12: _FID (lower is better) between the model and the complete test data
versus training iterations._ The FID is computed using features of the last
encoder layer of an independent VAE model trained on complete data. Lines show
the median of 5 independent runs and the intervals show 90% confidence.
* Jacob et al. (2018) Jacob S., Pakmor R., Simpson C. M., Springel V., Pfrommer C., 2018, MNRAS, 475, 570
* Kazantsev (1968) Kazantsev A. P., 1968, Soviet Journal of Experimental and Theoretical Physics, 26, 1031
* Kennicutt (1998) Kennicutt Robert C. J., 1998, ApJ, 498, 541
* Kereš et al. (2005) Kereš D., Katz N., Weinberg D. H., Davé R., 2005, MNRAS, 363, 2
* Kolmogorov (1941) Kolmogorov A., 1941, The Proceedings of the USSR Academy of Sciences, 30, 301
* Kotarba et al. (2010) Kotarba H., Karl S. J., Naab T., Johansson P. H., Dolag K., Lesch H., Stasyszyn F. A., 2010, ApJ, 716, 1438
* Kotarba et al. (2011) Kotarba H., Lesch H., Dolag K., Naab T., Johansson P. H., Donnert J., Stasyszyn F. A., 2011, MNRAS, 415, 3189
* Kulsrud (2005) Kulsrud R. M., 2005, Plasma physics for astrophysics
* Kulsrud & Zweibel (2008) Kulsrud R. M., Zweibel E. G., 2008, Reports on Progress in Physics, 71, 046901
* Lacki & Beck (2013) Lacki B. C., Beck R., 2013, MNRAS, 430, 3171
* Libeskind et al. (2020) Libeskind N. I., et al., 2020, MNRAS,
* Luo et al. (2014) Luo W., Yang X., Zhang Y., 2014, ApJ, 789, L16
* Lupton et al. (2004) Lupton R., Blanton M. R., Fekete G., Hogg D. W., O’Mullane W., Szalay A., Wherry N., 2004, PASP, 116, 133
* Man et al. (2016) Man A. W. S., Zirm A. W., Toft S., 2016, ApJ, 830, 89
* Marinacci & Vogelsberger (2016) Marinacci F., Vogelsberger M., 2016, MNRAS, 456, L69
* Marinacci et al. (2014) Marinacci F., Pakmor R., Springel V., 2014, MNRAS, 437, 1750
* Marinacci et al. (2018) Marinacci F., Vogelsberger M., Kannan R., Mocz P., Pakmor R., Springel V., 2018, MNRAS, 476, 2476
* Marinacci et al. (2019) Marinacci F., Sales L. V., Vogelsberger M., Torrey P., Springel V., 2019, MNRAS, 489, 4233
* Martin-Alvarez et al. (2018) Martin-Alvarez S., Devriendt J., Slyz A., Teyssier R., 2018, MNRAS, 479, 3343
* Martin-Alvarez et al. (2020) Martin-Alvarez S., Slyz A., Devriendt J., Gómez-Guijarro C., 2020, MNRAS, 495, 4475
* Mocz et al. (2014) Mocz P., Vogelsberger M., Sijacki D., Pakmor R., Hernquist L., 2014, MNRAS, 437, 397
* Monachesi et al. (2019) Monachesi A., et al., 2019, MNRAS, 485, 2589
* Moreno et al. (2020) Moreno J., et al., 2020, MNRAS, p. 10.1093/mnras/staa2952
* Moss et al. (2014) Moss D., Sokoloff D., Beck R., Krause M., 2014, A&A, 566, A40
* Naab & Burkert (2003) Naab T., Burkert A., 2003, ApJ, 597, 893
* Nulsen & Fabian (2000) Nulsen P. E. J., Fabian A. C., 2000, MNRAS, 311, 346
* Okamoto et al. (2010) Okamoto T., Frenk C. S., Jenkins A., Theuns T., 2010, MNRAS, 406, 208
* Pakmor & Springel (2013) Pakmor R., Springel V., 2013, MNRAS, 432, 176
* Pakmor et al. (2011) Pakmor R., Bauer A., Springel V., 2011, MNRAS, 418, 1392
* Pakmor et al. (2014) Pakmor R., Marinacci F., Springel V., 2014, ApJ, 783, L20
* Pakmor et al. (2016a) Pakmor R., Springel V., Bauer A., Mocz P., Munoz D. J., Ohlmann S. T., Schaal K., Zhu C., 2016a, MNRAS, 455, 1134
* Pakmor et al. (2016b) Pakmor R., Pfrommer C., Simpson C. M., Springel V., 2016b, ApJ, 824, L30
* Pakmor et al. (2017) Pakmor R., et al., 2017, MNRAS, 469, 3185
* Pakmor et al. (2018) Pakmor R., Guillet T., Pfrommer C., Gómez F. A., Grand R. J. J., Marinacci F., Simpson C. M., Springel V., 2018, MNRAS, 481, 4410
* Pakmor et al. (2020) Pakmor R., et al., 2020, MNRAS,
* Pfrommer et al. (2017a) Pfrommer C., Pakmor R., Schaal K., Simpson C. M., Springel V., 2017a, MNRAS, 465, 4500
* Pfrommer et al. (2017b) Pfrommer C., Pakmor R., Simpson C. M., Springel V., 2017b, ApJ, 847, L13
* Pillepich et al. (2018) Pillepich A., et al., 2018, MNRAS, 473, 4077
* Powell et al. (1999) Powell K. G., Roe P. L., Linde T. J., Gombosi T. I., De Zeeuw D. L., 1999, Journal of Computational Physics, 154, 284
* Power et al. (2003) Power C., Navarro J. F., Jenkins A., Frenk C. S., White S. D. M., Springel V., Stadel J., Quinn T., 2003, MNRAS, 338, 14
* Price & Monaghan (2007) Price D. J., Monaghan J. J., 2007, MNRAS, 374, 1347
* Rautiainen & Salo (2000) Rautiainen P., Salo H., 2000, IAU Colloq. 174: Small Galaxy Groups, 209, 330
* Raymond (1992) Raymond J. C., 1992, ApJ, 384, 502
* Renaud et al. (2014) Renaud F., Bournaud F., Kraljic K., Duc P. A., 2014, MNRAS, 442, L33
* Rieder & Teyssier (2017) Rieder M., Teyssier R., 2017, MNRAS, 472, 4368
* Rodenbeck & Schleicher (2016) Rodenbeck K., Schleicher D. R. G., 2016, A&A, 593, A89
* Rodriguez-Gomez et al. (2015) Rodriguez-Gomez V., et al., 2015, MNRAS, 449, 49
* Rodriguez-Gomez et al. (2017) Rodriguez-Gomez V., et al., 2017, MNRAS, 467, 3083
* Ruszkowski et al. (2017) Ruszkowski M., Yang H. Y. K., Zweibel E., 2017, ApJ, 834, 208
* Sales et al. (2012) Sales L. V., Navarro J. F., Theuns T., Schaye J., White S. D. M., Frenk C. S., Crain R. A., Dalla Vecchia C., 2012, MNRAS, 423, 1544
* Scannapieco et al. (2012) Scannapieco C., et al., 2012, MNRAS, 423, 1726
* Schleicher et al. (2013) Schleicher D. R. G., Schober J., Federrath C., Bovino S., Schmidt W., 2013, New Journal of Physics, 15, 023017
* Schmidt (1959) Schmidt M., 1959, ApJ, 129, 243
* Schober et al. (2013) Schober J., Schleicher D. R. G., Klessen R. S., 2013, A&A, 560, A87
* Scudder et al. (2012) Scudder J. M., Ellison S. L., Torrey P., Patton D. R., Mendel J. T., 2012, MNRAS, 426, 549
* Sérsic (1963) Sérsic J. L., 1963, Boletin de la Asociacion Argentina de Astronomia La Plata Argentina, 6, 41
* Shukurov et al. (2006) Shukurov A., Sokoloff D., Subramanian K., Brandenburg A., 2006, A&A, 448, L33
* Sijacki et al. (2007) Sijacki D., Springel V., Di Matteo T., Hernquist L., 2007, MNRAS, 380, 877
* Sijacki et al. (2012) Sijacki D., Vogelsberger M., Kereš D., Springel V., Hernquist L., 2012, MNRAS, 424, 2999
* Sparre & Springel (2016) Sparre M., Springel V., 2016, MNRAS, 462, 2418
* Sparre & Springel (2017) Sparre M., Springel V., 2017, MNRAS, 470, 3946
* Sparre et al. (2020) Sparre M., Pfrommer C., Ehlert K., 2020, MNRAS, 499, 4261
* Springel (2005) Springel V., 2005, MNRAS, 364, 1105
* Springel (2010) Springel V., 2010, MNRAS, 401, 791
* Springel (2015) Springel V., 2015, N-GenIC: Cosmological structure initial conditions (ascl:1502.003)
* Springel & Hernquist (2003) Springel V., Hernquist L., 2003, MNRAS, 339, 289
* Springel et al. (2001) Springel V., White S. D. M., Tormen G., Kauffmann G., 2001, MNRAS, 328, 726
* Springel et al. (2005) Springel V., Di Matteo T., Hernquist L., 2005, MNRAS, 361, 776
* Steinwandel et al. (2019) Steinwandel U. P., Beck M. C., Arth A., Dolag K., Moster B. P., Nielaba P., 2019, MNRAS, 483, 1008
* Su et al. (2018) Su K.-Y., Hayward C. C., Hopkins P. F., Quataert E., Faucher-Giguère C.-A., Kereš D., 2018, MNRAS, 473, L111
* Subramanian (1998) Subramanian K., 1998, MNRAS, 294, 718
* Tacchella et al. (2019) Tacchella S., et al., 2019, MNRAS, 487, 5416
* Teyssier et al. (2010) Teyssier R., Chapon D., Bournaud F., 2010, ApJ, 720, L149
* Thomas et al. (2020) Thomas T., Pfrommer C., Enßlin T., 2020, ApJ, 890, L18
* Thorp et al. (2019) Thorp M. D., Ellison S. L., Simard L., Sánchez S. F., Antonio B., 2019, MNRAS, 482, L55
* Toomre & Toomre (1972) Toomre A., Toomre J., 1972, ApJ, 178, 623
* Torrey et al. (2012) Torrey P., Cox T. J., Kewley L., Hernquist L., 2012, ApJ, 746, 108
* Vogelsberger et al. (2013) Vogelsberger M., Genel S., Sijacki D., Torrey P., Springel V., Hernquist L., 2013, MNRAS, 436, 3031
* Vogelsberger et al. (2014a) Vogelsberger M., et al., 2014a, MNRAS, 444, 1518
* Vogelsberger et al. (2014b) Vogelsberger M., et al., 2014b, Nature, 509, 177
* Weinberger et al. (2019) Weinberger R., Springel V., Pakmor R., 2019, ApJS,
* Welker et al. (2017) Welker C., Dubois Y., Devriendt J., Pichon C., Kaviraj S., Peirani S., 2017, MNRAS, 465, 1241
* Wetzel et al. (2009) Wetzel A. R., Cohn J. D., White M., 2009, MNRAS, 395, 1376
* Zweibel (2017) Zweibel E. G., 2017, Physics of Plasmas, 24, 055402
* van de Voort et al. (2021) van de Voort F., Bieri R., Pakmor R., Gómez F. A., Grand R. J. J., Marinacci F., 2021, MNRAS, 501, 4888
## Appendix A Impact of MHD on the ISM
Figure 16: Top panel: star formation rate surface density as a function of gas
surface density for 1349-3M, as seen at a lookback time of $\sim$4 Gyr. The
dashed line shows a Kennicutt-Schmidt relation (Schmidt, 1959; Kennicutt,
1998) with exponent 1.5. The dotted line indicates the approximate position of
the cut-off in the star formation rate. Bottom panel: as above, but for
1349-3H. Both follow the same relation, despite the strong amplification of
the magnetic field in 1349-3M during this time.
In Section 2.2, we claimed that it was not necessary to recalibrate our ISM
subgrid model when including magnetic fields in the simulation. This statement
is non-trivial: whilst the effective pressure in our ISM subgrid model is a
function of density only in the hydrodynamic simulations (Springel &
Hernquist, 2003), in the MHD simulations there is an additional magnetic
pressure term to consider. We can check the influence of this additional term
on the ISM model by investigating its impact on the relation between the star
formation rate surface density, $\Sigma_{\text{SFR}}$, and the gas surface
density, $\Sigma_{\text{gas}}$. Stars form probabilistically out of our
simulated ISM with a gas consumption time-scale set to match that observed by
Kennicutt (1998) for disc galaxies in the local Universe. This results in the
ISM following the well-known Kennicutt-Schmidt (Schmidt, 1959; Kennicutt,
1998) relation: $\Sigma_{\text{SFR}}\propto(\Sigma_{\text{gas}})^{n}$. If
our ISM subgrid model required recalibration, the form of this relation would
be dependent on the physics model used.
We show the relation between $\Sigma_{\text{SFR}}$ and $\Sigma_{\text{gas}}$
for 1349-3M and 1349-3H in Fig. 16. The 1349 simulations were chosen due to
the particularly strong amplification in 1349-3M, but naturally the results
apply to all our simulations. We have chosen a time when the magnetic field is
highly amplified in the MHD simulation. At later times we observe a similar
relation, but with the gas density covering a much narrower range. In both
cases, the surface densities are calculated by taking a face-on projection
with depth $\pm$1 kpc from the midplane. We choose this depth to make sure
that we are predominantly considering gas in the disc. It can be seen that for
both physics models, the star formation rate surface density follows the
Kennicutt-Schmidt relation with exponent $n=1.5$ (cf. Springel & Hernquist,
2003). This is true over a broad range of values. At lower gas surface density
values, the relation becomes more scattered as the star formation threshold
density is approached. The peak of the distribution is higher in the MHD
simulation than in the hydrodynamic analogue, which is a result of morphological differences between the two remnants. The strong similarity between both relations otherwise supports our assertion that the ISM subgrid model need not be recalibrated when introducing magnetic fields.
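For reference, the surface-density maps behind Fig. 16 can be constructed roughly as sketched below (this is not the analysis code of the paper); the array names, pixel size, and map extent are assumptions, while the $\pm$1 kpc depth cut matches the text above.

```python
# Face-on (x, y) surface-density maps from gas cells within |z| < 1 kpc.
import numpy as np

def ks_maps(gas_pos, gas_mass, gas_sfr, pix=1.0, half=30.0):
    """gas_pos: (N, 3) kpc, centred on the galaxy; gas_mass, gas_sfr: (N,)."""
    edges = np.arange(-half, half + pix, pix)
    sel = np.abs(gas_pos[:, 2]) < 1.0            # restrict to the disc
    area = pix * pix                             # kpc^2 per pixel
    sig_gas, _, _ = np.histogram2d(gas_pos[sel, 0], gas_pos[sel, 1],
                                   bins=(edges, edges), weights=gas_mass[sel])
    sig_sfr, _, _ = np.histogram2d(gas_pos[sel, 0], gas_pos[sel, 1],
                                   bins=(edges, edges), weights=gas_sfr[sel])
    return sig_gas / area, sig_sfr / area

# The Kennicutt-Schmidt exponent then follows from a least-squares fit of
# log10(Sigma_SFR) on log10(Sigma_gas) over star-forming pixels (n ~ 1.5).
```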
## Appendix B Galaxy Tracking and Black Hole Dynamics
Figure 17: The distance between the galactic centre and the closest black hole
for all high-resolution simulations as a function of time. In general, this
distance stays well below 5 kpc, confirming the reliability of our galaxy
tracking method. The ‘wandering’ nature of the black hole in simulation
1526-3H likely contributes to the unusual morphology displayed by the
corresponding merger remnant.
In Section 2.3, we claimed that tracking a galaxy between snapshots is
frequently akin to tracking the black hole particle that resides in that
galaxy. We support this claim in Fig. 17, where we show the distance between
the most bound gas cell in the galaxy (i.e. the galactic centre) and the
closest black hole for each high-resolution simulation. For the vast majority
of snapshots, this distance is always well under 5 kpc, confirming the
validity of our tracking method. An exception to this rule is simulation
1526-3H, where the black hole is not well-tied to the galactic centre. In this
merger scenario, the secondary progenitor passes directly through the primary
progenitor. For a short time immediately afterwards, the black hole then
‘hitches a ride’, becoming gravitationally bound to the merging galaxy. This
can be confirmed by comparing its distance from the main galaxy between 7.11
and 6.35 Gyr to the distance between the merging galaxies, as seen in Fig. 1.
On the black hole’s return to the main galaxy, it never quite loses its newly-
gained orbital energy, oscillating around the galactic centre instead. During
this time, the black hole continues to accrete gas, and consequently continues
to inject energy into neighbouring gas cells. The subsequent AGN outbursts
disrupt the gas, producing a similar morphology to that of 1605-3M (see Fig.
6). This similarity is unexpected as the black hole in 1526-3H grows only a
quarter as large as that in 1605-3H post-merger, meaning that the AGN
outbursts should be significantly less influential (see Section 2.2). Indeed,
the black hole growth in 1526-3H is similar to that of 1349-3H and 1330-3H,
both of which show no signatures of strong outbursts in their gas morphology.
We therefore argue that it is the unlocalised nature of the feedback in
1526-3H that is behind the strongly disrupted morphology. This, in turn,
produces the unusual stellar morphology seen in this simulation.
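The diagnostic in Fig. 17 reduces to a per-snapshot nearest-neighbour query; a minimal sketch is given below, where the most bound gas cell defines the galactic centre. The helper `load_snapshot` is hypothetical, standing in for whatever reader returns the relevant arrays for a given output.

```python
import numpy as np

def bh_centre_offset(gas_pos, gas_potential, bh_pos):
    """Distance (kpc) between the galactic centre and the closest black hole.

    gas_pos       : (N, 3) gas cell positions in kpc
    gas_potential : (N,) gravitational potentials (most bound cell = minimum)
    bh_pos        : (M, 3) black hole particle positions in kpc
    """
    centre = gas_pos[np.argmin(gas_potential)]      # most bound gas cell
    return np.linalg.norm(bh_pos - centre, axis=1).min()

# Evaluated over all outputs, this traces the curves of Fig. 17
# (load_snapshot is a hypothetical reader returning the three arrays):
# offsets = [bh_centre_offset(*load_snapshot(i)) for i in snapshot_ids]
```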
## Appendix C Evidence for a Small-scale Dynamo in Action
In Section 3.3.3, we discussed the development of a small-scale dynamo in our
simulations as a function of increasing resolution. In Fig. 18, we support
this with kinetic and magnetic energy power spectra for each of our high-
resolution MHD simulations. These were created in the same manner as for Fig.
15, over the same radius of 5 kpc, within a zero-padded box 10 kpc across.
The profiles are shown at 0.5 Gyr after the respective time of periapsis in
each simulation (see Section 2.4). In choosing this time, we show the power
spectra at a similar point in the evolution of the magnetic field. It can be
seen that for each simulation, the kinetic energy exhibits a Kolmogorov-like
spectrum, which is consistent with the volume-filling phase of the gas being
both turbulent and subsonic. The difference in normalisation in each plot can
be mostly explained by the difference in mass evolution. At high $k$ values,
the magnetic energy saturates at around 50 per cent of the kinetic energy,
which is consistent with subsonic turbulent box simulations where the forcing
was solenoidal (Federrath, 2016). In each simulation, the magnetic field has
grown similarly quickly. Indeed, the magnetic energy is almost fully saturated
in each case, with the peak magnetic energy occurring at the driving scale of
the turbulence. This means almost the entire magnetic energy spectrum is in
the non-linear dynamo phase, and consequently there is little evidence of the
Kazantsev-like slope at large scales. The decline of the power spectra at
scales larger than 10 kpc is an artefact of the zero-padding we use. Overall,
Fig. 18 supports our understanding of the growth of the magnetic field, as
analysed in Section 3.3.3.
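For reference, a shell-averaged spectrum of this kind can be sketched as below, assuming the relevant field, $\sqrt{\rho}\,\mathbf{v}$ for the kinetic energy or $\mathbf{B}/\sqrt{4\pi}$ for the magnetic energy, has already been deposited on a uniform grid that is zero-padded outside the 5 kpc sphere; bin widths and normalisation conventions are illustrative.

```python
import numpy as np

def shell_averaged_spectrum(field, boxsize):
    """Spherically averaged energy power spectrum of a gridded vector field.

    field   : (3, n, n, n) array, zero-padded outside the analysis sphere
    boxsize : physical box size (here 10 kpc, twice the 5 kpc radius)
    """
    n = field.shape[1]
    fhat = np.fft.fftn(field, axes=(1, 2, 3)) / n**3
    power = 0.5 * np.sum(np.abs(fhat)**2, axis=0)   # 0.5 |f_k|^2, summed over components

    k1d = np.fft.fftfreq(n, d=boxsize / n)          # wavenumbers in kpc^-1
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()

    kedges = np.arange(0.5, n // 2) / boxsize       # spherical shell edges
    which = np.digitize(kmag, kedges)
    Ek = np.bincount(which, weights=power.ravel(), minlength=len(kedges) + 1)
    kcen = 0.5 * (kedges[:-1] + kedges[1:])
    return kcen, Ek[1:len(kedges)]                  # compare slope with k^(-5/3)
```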
Figure 18: Kinetic and magnetic energy power spectra for the highest-
resolution MHD simulations, calculated for gas within a sphere of 5 kpc
centred on the galactic centre. The power spectra are shown at 0.5 Gyr after
the time of periapsis for each simulation. The black dotted lines show the
slopes of a Kolmogorov spectrum ($\propto k^{-5/3}$) (Kolmogorov, 1941), which
is theoretically expected for a small-scale dynamo resulting from
incompressible turbulence. In each case, the magnetic field is strongly
saturated.
|
# Light Quantum Control of Persisting Higgs Modes in Iron-Based
Superconductors
C. Vaswani1∗, J. H. Kang2∗, M. Mootz3∗, L. Luo1, X. Yang1, C. Sundahl2, D.
Cheng1, C. Huang1, R. H. J. Kim1, Z. Liu1, Y. G. Collantes4, E. E. Hellstrom4,
I. E. Perakis3, C. B. Eom2 and J. Wang1† 1Department of Physics and Astronomy,
Iowa State University, and Ames Laboratory, Ames, IA 50011 USA.
2Department of Materials Science and Engineering, University of Wisconsin-
Madison, Madison, WI 53706, USA.
3Department of Physics, University of Alabama at Birmingham, Birmingham, AL
35294-1170, USA.
4Applied Superconductivity Center, National High Magnetic Field Laboratory,
Florida State University, Tallahassee, FL 32310, USA.
###### Abstract
The Higgs mechanism, i.e., spontaneous symmetry breaking of the quantum
vacuum, is a cross-disciplinary principle, universal for understanding dark
energy, antimatter and quantum materials, from superconductivity to magnetism.
Yet, Higgs modes in one-band superconductors (SCs) are currently under debate
due to their competition with charge-density fluctuations. A distinct Higgs
mode, controllable by terahertz (THz) laser pulses, can arise in multi-band,
unconventional SCs via strong interband Coulomb interaction, but is yet to be
accessed. Here we both discover and demonstrate quantum control of such a
collective mode in iron-based high-temperature superconductors. Using two-
pulse, phase coherent THz spectroscopy, we observe a tunable and coherent
2$\Delta_{\mathrm{SC}}$ amplitude oscillation of the complex order parameter
in such SC with coupled lower and upper bands. The nonlinear dependence of the
amplitude mode oscillations on the THz driving fields is distinct from any
one-band and conventional SC results: we observe a large nonlinear change of
resonance strength, yet with a persisting mode frequency. We argue that this
result provides compelling evidence for a transient coupling between the
electron and hole amplitude modes via strong interband coherent interaction.
To support this scenario, we perform quantum kinetic modeling of a hybrid
Higgs mechanism without invoking extra disorder or phonons. In addition to
distinguishing between collective modes and charge fluctuations, the light
quantum control of multiband SCs can be extended to probe and manipulate many-
body entanglement and hidden symmetries in different quantum materials.
Phase coherence between multiple SC condensates in different strongly
interacting bands is well–established in iron-based superconductors (FeSCs).
As illustrated in Fig. 1a, a dominant Coulomb coupling between the $h$- and
$e$-like Fermi sea pockets, unlike in other SCs, is manifested by, e.g.,
$s\pm$ pairing symmetry Chubukov:2012 ; Chubukov:2015 , spin density wave
resonant peaks and nesting wave vectors (black arrows) ref5 ; 1 ; 2 ; ref4 .
Experimental evidence for Higgs amplitude coherent excitations in FeSCs has
not been reported yet, despite recent progress in non-equilibrium
superconductivity and collective modes giorgianni2019leggett ; Matsunaga:2013
; matsunaga2014 ; Cea2016 ; Rajasekaran ; n1 ; n2 ; n3 ; n4 ; Wu2019 ;
Udina:2019 ; Kumar:2019 ; Manske:2020 ; Kaiser2020 ; Podolsky2020 .
The condensates in different bands of Ba(Fe1-xCox)2As2 studied here, shown in
Fig. 1a, are coupled by the strong interband $e$–$h$ interaction $U$ (blue
double arrow vector), which is about one order of magnitude stronger than the
intraband interaction $V$ (gray and red thin lines) Fernandes:2010 ;
Akbari:2013 . For $U\gg V$, the formation channels of collective modes are
distinct from one-band SC matsunaga2014 ; Cea2016 ; Cea2018 ; Udina:2019 and
multiband MgB2 with dominant intraband interaction, $V\gg U$ Shimano2019 ;
Aoki2017 ; krull2016 . For the latter, only Leggett modes are observed thus
far giorgianni2019leggett ; ref1 . In contrast, for $U\gg V$, one expects
Higgs amplitude modes arising from the condensates in all Coulomb–coupled
bands, i.e., in the $h$ pocket at the $\Gamma$-point (gray circle, mode
frequency $\omega_{\mathrm{H,1}}$), and in the two $e$ pockets at (0, $\pi$)
and ($\pi$, 0) (red ellipses, $\omega_{\mathrm{H,2}}$). A single-cycle THz
oscillating field (red pulse) can act like a “quantum quench”, with impulsive
non-adiabatic driving of the Mexican-hat-like quantum fields (dark green) and,
yet, with minimum heating of other degrees of freedom. Consequently, the
multi-band condensates are forced out of the free energy minima, since they
cannot follow the quench adiabatically. Most intriguingly, such coherent
nonlinear driving not only excites amplitude mode oscillations in the
different Fermi sea pockets, but also transiently modifies their coupling,
assisted by the strong interband interaction $U$. Such coherent transient
coupling can be regarded as nonlinear amplitude mode hybridization with a
time-dependent phase coherence. In this way, THz laser fields can manipulate
emerging phase-coherent, hybrid Higgs collective modes.
Figure 1: The 2$\Delta_{\mathrm{SC}}$ oscillations detected by two-pulse THz
coherent spectroscopy of multi-band FeSCs. $\textbf{a},$ Illustration of
coherent excitation of hybrid Higgs mode via THz quantum quench. An effective
three-band model has a $h$ pocket at the $\Gamma$ point and two $e$ pockets at
X/Y points, with strong inter- (blue) and weak intra-band (gray) interactions
marked by arrows. b,c, Real and imaginary parts of the complex THz
conductivity spectra $\sigma_{1}(\omega)$ and $\sigma_{2}(\omega)$ in the
superconducting (4.2 K) and normal states (23 K) of Ba(Fe1-xCox)2As2 (x=0.08)
in equilibrium. Inset of c shows the temperature dependence of the superfluid
density normalized to its value at 4.2K, n${}_{s}(T)$/n0, as determined from
$1/\omega$ divergence of $\sigma_{2}(\omega)$ (blue arrow in c). $\textbf{d},$
Differential THz transmission $\Delta E/E_{0}$ (blue circles) measured by
phase-locked two-THz-pulse pump–probe spectroscopy at 4.2 K shows pronounced
coherent oscillations for a peak THz field strength of Epump=56 kV/cm. The
pink shaded curve denotes the square of the pump THz waveform
E${}_{\mathrm{pump}}^{2}$. Inset: Temperature dependence of the peak amplitude
of $\Delta E/E_{0}$. $\textbf{e},$ Fourier spectrum of the coherent
oscillations in $\Delta E_{\mathrm{}}/E_{0}$ (blue line) exhibits a resonance
peak at $\sim$6.9 meV (blue line) and is distinct from both the pump Epump
(gray) and pump-squared E${}_{\mathrm{pump}}^{2}$ (pink) spectra.
Here we present evidence of Higgs modes that are controlled by THz-field-
driven interband quantum entanglement in a multi-band SC, optimally-doped
Ba(Fe1-xCox)2As2, using two phase-locked near-single–cycle THz laser fields.
We thus reveal a striking nonlinear THz field dependence of coherent amplitude
mode oscillations: quick increase to maximum spectral weight (SW) with
negligible mode frequency shift, followed by a huge SW reduction by more than
50$\%$, yet with robust mode frequency position, with less than 10$\%$
redshift. These distinguishing features of the observed collective mode are
different from any one-band and conventional SC results and predictions so
far. Instead, they are predicted by our quantum kinetic calculation, which
identifies the key role of the interband interaction $U$ for coherently
coupling two amplitude modes, in the $h$- and $e$-like Fermi sea pockets, and
for controlling the SW of the lower Higgs mode observed in the experiment.
The equilibrium complex conductivity spectra, i.e., real and imaginary parts,
${\sigma}_{1}(\omega)$ and ${\sigma}_{2}(\omega)$, of our epitaxial
Ba(Fe1-xCox)2As2 (x=0.08) film lee2010 (Methods) measure the low-frequency
quasi-particle (QP) electrodynamics and condensate coherence, respectively
(Figs. 1b and 1c) Xu . The normal state (black circles) displays Drude–like
behavior, while the QP spectral weight in ${\sigma}_{1}(\omega)$ is depleted
in the SC state due to SC gap openings, seen, e.g., in the 4.2 K trace (red
circles). The lowest SC gap value 2$\Delta_{1}\sim$6.8 meV obtained is in
agreement with the literature values 6.2-7 meV ref2 ; ref3 (Methods). Such
${\sigma}_{1}(\omega)$ spectral weight depletion is accompanied by an increase
of condensate fraction $n_{\mathrm{s}}/n_{\mathrm{0}}$ (inset, Fig. 1c),
extracted from a diverging $1/\omega$ condensate inductive response, marked by
blue arrow, e. g., in the 4.2 K lineshape of ${\sigma}_{2}(\omega)$ (Fig. 1c).
Note that superfluid density $n_{\mathrm{s}}$ vanishes above T${}_{c}\sim$23 K
(inset Fig. 1c).
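A minimal sketch of this extraction follows: the superfluid weight is the coefficient of the low-frequency $1/\omega$ tail of $\sigma_{2}(\omega)$, obtained by a one-parameter least-squares fit. The fit window is an illustrative assumption.

```python
import numpy as np

def superfluid_weight(omega, sigma2, fit_max=2.0):
    """Condensate weight A from sigma2(omega) ~ A/omega at low frequency.

    omega  : photon energies (meV); sigma2 : imaginary part of conductivity.
    A is proportional to the superfluid density n_s.
    """
    low = (omega > 0) & (omega < fit_max)
    x = 1.0 / omega[low]
    return np.sum(sigma2[low] * x) / np.sum(x**2)   # OLS fit through the origin

# n_s(T)/n_0 as in the inset of Fig. 1c, normalising to the 4.2 K spectrum:
# ns = np.array([superfluid_weight(w, s2) for s2 in sigma2_by_T]); ns /= ns[0]
```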
We characterize the THz quantum quench coherent dynamics directly in the time
domain (Methods) by measuring the pump–probe responses to two phase-locked THz
pulses as differential field transmission $\Delta E_{\mathrm{}}/E_{0}$ (blue
circles, Fig. 1d) for THz pump field, $E_{\mathrm{pump}}=$56 kV/cm as a
function of pump–probe time delay $\Delta t_{\mathrm{pp}}$. The central pump
energy $\hbar\omega_{\mathrm{pump}}=$5.4 meV (gray shade, Fig. 1e) is chosen
slightly below the 2$\Delta_{1}$ gap. Intriguingly, the $\Delta E/E_{0}$
dynamics reveals a pronounced coherent oscillation, superimposed on the
overall amplitude change, which persists much longer than the THz
photoexcitation (pink shade). This mode is excited by the quadratic coupling
of the pump vector potential, $\mathbf{A}^{2}(t)\propto
E^{2}_{\mathrm{pump}}/\omega^{2}$ due to the SC equilibrium symmetry
matsunaga2014 . Such coherent responses yield information within the general
framework of 2D coherent nonlinear spectroscopy rupert (Methods). The origin
of the observed coherent $\Delta E/E_{0}$ oscillation is better illustrated by
its Fourier transformation (FT), shown in Fig. 1e. The FT spectrum of the
coherent nonlinear signals (blue solid line) displays a pronounced resonance
at 6.9 meV, indicative of 2$\Delta_{1}$ coherent amplitude mode oscillations.
This FT spectrum strongly differs from the spectra of both THz pump
E${}_{\mathrm{pump}}(\omega)$ centered at $\omega_{{\mathrm{pump}}}\sim$5.4
meV (gray shade) and second harmonic, Anderson pseudo-spin (APS) precession at
2$\omega_{\mathrm{pump}}$ from E${}^{2}_{\mathrm{pump}}(\omega)$ (pink shade).
This is a consequence of the broadband spectrum of the few-cycle pump pulse
used in the experiment which overlaps with the mode resonances such that
$\Delta E/E_{0}$ oscillates with the collective mode frequencies Udina:2019 .
After the oscillation, time-dependent complex conductivity spectra,
$\sigma_{1}(\omega,\Delta t_{\mathrm{pp}})$ and $\sigma_{2}(\omega,\Delta
t_{\mathrm{pp}})$, can be measured (Figs.S5-S6, Supplementary). They show that
$\Delta E/E_{0}$ closely follows the pump-induced change in condensate
density, $\Delta n_{\mathrm{s}}/n_{\mathrm{0}}$ n1 . The THz excitation at
$E_{\mathrm{pump}}=$56 kV/cm only reduces $n_{\mathrm{s}}$ slightly, $\Delta
n_{\mathrm{s}}/n_{\mathrm{0}}\sim\Delta E_{\mathrm{}}/E_{0}\sim-3~{}\%$ at
$\Delta t_{\mathrm{pp}}=5$ ps. Furthermore, the pump-induced peak amplitude
(blue diamond), marked by the red dashed line in Fig. 1d, diminishes above Tc
(inset). The measured coherent oscillations reflect the emergence of a hybrid
Higgs multi-band collective mode between two Coulomb–coupled lower and higher
modes, $\omega_{\mathrm{H,1}}$ and $\omega_{\mathrm{H,2}}$, as shown later.
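For concreteness, the Fourier analysis behind Fig. 1e can be sketched as follows; the polynomial background removal, window, and zero-padding are our illustrative choices rather than the exact processing used.

```python
import numpy as np

def oscillation_spectrum(t_ps, dEE, pad=8):
    """Fourier spectrum of the coherent oscillation in dE/E0 (cf. Fig. 1e).

    t_ps : uniformly spaced pump-probe delays in ps
    dEE  : measured differential transmission trace
    """
    bg = np.polyval(np.polyfit(t_ps, dEE, 3), t_ps)   # slow pump-probe background
    osc = (dEE - bg) * np.hanning(len(dEE))           # isolate and window oscillation
    n = pad * len(osc)                                # zero-pad for a smooth spectrum
    spec = np.abs(np.fft.rfft(osc, n))
    f_THz = np.fft.rfftfreq(n, d=t_ps[1] - t_ps[0])
    return 4.1357 * f_THz, spec                       # 1 THz = 4.1357 meV
```

Applied to the 4.2 K trace, the peak of `spec` sits near the 2$\Delta_{1}$ resonance at $\sim$6.9 meV, well separated from the pump and pump-squared spectra.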
Figure 2: Temperature dependence of hybrid Higgs coherent oscillations in
FeSCs. $\textbf{a},$ Temporal profiles of $-\Delta E_{\mathrm{}}/E_{0}$ at
various temperatures and for a peak THz pump E-field of Epump=56 kV/cm. Traces
are offset for clarity. $\textbf{b},$ Fourier spectra of the Higgs mode
oscillations derived from the two-pulse coherent pump–probe signals at
different temperatures. Dashed red lines indicate the slight redshift of the
Higgs mode frequency with the drastic reduction of mode SW as a function of
temperature. $\textbf{c},$ Temperature dependence of the integrated
SW${}_{5\rightarrow 14~{}\mathrm{meV}}$ of the Higgs mode in b.
Figure 2 reveals a strong temperature dependence of the Higgs oscillations.
The coherent dynamics of $\Delta E/E_{0}$ is shown in Fig. 2a for temperatures
4.2 K–30 K. Approaching $T_{c}$ from below, the coherent oscillations quickly
diminish, as seen by comparing the 4.2 K (black line) and 16 K (gray) traces
versus 22 K (cyan) and 24 K (pink) traces. Fig. 2b shows the
temperature–dependent Fourier spectra of $\Delta E/E_{0}$, in the range 4–12
meV. Fig. 2c plots the integrated spectral weight SW${}_{5\rightarrow
14~{}\mathrm{meV}}$ of the amplitude mode (Fig. 2b). The strong temperature
dependence correlates the mode with SC coherence. Importantly, while the mode
frequency is only slightly red-shifted, by less than 10 $\%$ before full SW
depletion close to $T_{c}$, SW is strongly suppressed, by $\sim$55 $\%$ at 16
K ($T/T_{c}\sim$0.7). Such a spectral weight reduction in FeSCs is much larger
than that expected in one-band BCS superconductors or for $U$=0 shown later
(Figs. 4e-4f). We also note that the temperature dependence of Higgs
oscillations, observed in FeSCs here, has not been measured experimentally in
conventional BCS systems, and could represent a key signature of quantum
quench dynamics of unconventional SCs. Note that the observed behavior is
consistent with our simulations of the Higgs mode in multi-band SCs with
dominant interband $U$, shown later in Figs. 4e-4f.
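The integrated spectral weight used in Fig. 2c (and later in Fig. 3e) is simply a band-limited integral of such spectra; a minimal sketch, reusing the spectrum routine above:

```python
import numpy as np

def integrated_sw(energy_meV, spec, lo=5.0, hi=14.0):
    """Integrated spectral weight SW_{lo -> hi meV} of a Higgs-mode spectrum."""
    band = (energy_meV >= lo) & (energy_meV <= hi)
    return np.trapz(spec[band], energy_meV[band])

# e.g. sw_T = [integrated_sw(*oscillation_spectrum(t, trace)) for trace in traces_by_T]
```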
Figure 3 presents the distinguishing experimental evidence for the
unconventional Higgs mode in FeSCs, which is different from one-band SCs – a
highly nonlinear THz electric field dependence of coherent $2\Delta_{1}$
oscillations that manifests as a huge SW change, yet with persisting mode
frequency, with only very small redshift. Fig. 3a shows the detailed pump-
fluence dependence of $\Delta E_{\mathrm{}}/E_{0}$ as a function of time
delay, presented as a false-color plot at $T$=4.2 K for up to
E${}_{\mathrm{pump}}\sim 600$ kV/cm. It is clearly seen that amplitude mode
oscillations depend nonlinearly on the $E_{\mathrm{pump}}$ field strength.
1$+\Delta E_{\mathrm{}}/E_{0}$ (red solid line) at $\Delta t_{\mathrm{pp}}=5$
ps is shown in Fig. 3b. This, together with the measured $\sim 1/\omega$
divergence in $\sigma_{2}(\omega,\Delta t_{\mathrm{pp}})$, allows the
determination of the condensate fraction
$n_{\mathrm{s}}(E_{\mathrm{pump}})/n_{\mathrm{0}}$ (blue circles) in the
driven state (Fig. S6, Supplementary). This shows three different excitation
regimes, marked by black dashed lines in Figs. 3a-3b: (1) in regime #1, the
condensate quench is minimal, e. g., $n_{\mathrm{s}}/n_{\mathrm{0}}\geq$97%
below the field $E_{\\#1}=56$ kV/cm; (2) Regime #2 displays partial SC quench,
where $n_{\mathrm{s}}/n_{\mathrm{0}}$ is still significant, e. g., condensate
fraction $\approx$60% at $E_{\\#2}=276$ kV/cm; (3) A saturation regime #3 is
observed at $\sim E_{\\#3}=600$ kV/cm, which leads to a slowly changing
$n_{\mathrm{s}}/n_{\mathrm{0}}$ approaching 25 %. The saturation is expected
since a below-gap THz pump is used, especially
$\hbar\omega_{pump}\ll$2$\Delta_{2}\sim$15-19 meV at the $e$ pockets ref2 ;
ref3 .
Figure 3: Nonlinear THz field dependence of spectral weight of Higgs mode in
FeSCs. $\textbf{a},$ A 2D false-color plot of $\Delta E_{\mathrm{}}/E_{0}$ as
a function of pump E-field strength Epump and pump–probe delay $\Delta
t_{\mathrm{pp}}$ at 4.2 K. $\textbf{b},$ THz pump $E$-field Epump dependence
of $1+\Delta E_{\mathrm{}}/E_{0}$ (red line) overlaid with the superfluid
density fraction ns/n0 (blue circles) after THz pump at $\Delta
t_{\mathrm{pp}}$=5 ps and T=4.2 K. Dashed arrows mark the three pump $E$-field
regimes, i. e., weak, partial, and saturation, identified in the main text. c,
d Spectra of coherent Higgs mode oscillations show a distinct non-monotonic
dependence as a function of THz pump field, i.e., a rapid increase in the mode
amplitude for low pump E-field strengths up to 173 kV/cm, saturation up to
276 kV/cm, and significant reduction at higher fields. The blue dashed line
marks the resonance of the mode and the redshift of the Higgs mode peak, much
smaller than the mode SW change. $\textbf{e},$ Integrated spectral weight
SW${}_{5\rightarrow 14~{}\mathrm{meV}}$ of the Higgs mode at various pump
$E$-field strengths, indicating the SW reduction of the Higgs mode from
dominant $\omega_{\mathrm{\mathrm{H},1}}$ at low driving fields to
$\omega_{\mathrm{\mathrm{H},2}}$ due to the interband interaction and coherent
coupling at high driving fields above $E_{\mathrm{pump}}=$276 kV/cm (inset).
f, g Contrasting the thermal and THz-driven states of coherent hybrid Higgs
responses by comparing the mode spectra at the similar superfluid density f
$n_{s}/n_{0}=60\%$ and g $n_{s}/n_{0}=25\%$ achieved by THz pump (black solid
lines) and temperature (red shades).
Quantum quenching of the single-band BCS pairing interaction has been well
established to induce Higgs oscillations with amplitude scaling as
1/$\sqrt{\Delta_{SC,\infty}}$, determined by the long–time asymptotic
nonthermal order parameter $\Delta_{SC,\infty}$. The latter decreases with
pump field Yuzbashyan:2006 ; Axt:2007 ; Forster:2017 . Both model and
experimental results establish that the Higgs mode amplitude increases with
THz pumping until full depletion of the condensate, concurrent with a
continuous Higgs resonance redshift to zero Yuzbashyan:2006 ; Axt:2007 ;
Forster:2017 ; Matsunaga:2013 . In contrast to this expected behavior for
conventional SCs, Fig. 3a and the Fourier spectra of the coherent
oscillations, Figs. 3c and 3d, show a non-monotonic pump-field dependence of
the Higgs mode amplitude that is unique here. Specifically, the Fourier
spectra exhibit a clear resonance at low pump fluences, which coincides with
the frequency of the lower Higgs mode $\omega_{\mathrm{\mathrm{H},1}}$. This
resonance grows quickly up to a field of $E_{\mathrm{pump}}=173$ kV/cm (Fig.
3c), saturates up to 276 kV/cm (Fig. 3d) and then exhibits a significant
reduction in pump regime #3, e.g., by more than 50 $\%$ at 629 kV/cm (black
line, Fig. 3d). This non-monotonic field dependence clearly shows that the
coherent oscillations in Fig. 3a quickly increase in pump regime #1 and
saturate in regime #2, prior to any significant mode resonance redshift (blue
dashed line, Fig. 3c). Above this relatively low field regime, the oscillation
amplitude starts to decrease at 276 kV/cm, even though there is still more
than 60% of condensate, marked in Fig. 3b: the driven state is still far from
full SC depletion. This striking SW reduction is also seen in the integrated
spectral weight analysis, SW${}_{5\rightarrow 14~{}\mathrm{meV}}$, summarized
in Fig. 3e. Most intriguingly, while there is a large reduction of the Higgs
mode SW$\sim$50% at 629 kV/cm (regime #3), the resonance peak position is
nearly persistent, with $\leq$10 $\%$ red shift (blue dashed lines, Fig. 3d).
These observations differ from any known behavior in one-band SCs, but are
consistent with expectations from coherent coupling of Higgs modes in multi-
band SCs by a dominant interband interaction $U$, discussed below.
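To make the one-band benchmark invoked here concrete, the quench dynamics of a single BCS condensate can be sketched with Anderson pseudospins, whose precession equations reproduce the damped $2\Delta_{SC,\infty}$ Higgs ringing of Yuzbashyan:2006 ; Axt:2007 . This covers only the $U=0$ reference physics, not the multi-band model introduced below; the grid size, energy cutoff, and quench ratio are illustrative (energies in units of $\Delta_{0}$, $\hbar=1$).

```python
import numpy as np

def bcs_quench(delta0=1.0, quench=0.8, ec=10.0, nk=2000, tmax=60.0, dt=0.002):
    """Interaction quench of a one-band BCS state via Anderson pseudospins.

    Each spin obeys ds_k/dt = b_k x s_k with b_k = (-2 Re D, -2 Im D, 2 eps_k)
    and the self-consistent gap D = g * <s_x - i s_y>.
    """
    eps = np.linspace(-ec, ec, nk)                      # band energies
    E = np.sqrt(eps**2 + delta0**2)
    s = np.stack([delta0 / (2 * E), np.zeros(nk), -eps / (2 * E)])  # BCS ground state

    g = delta0 / np.mean(s[0])    # coupling that solves the initial gap equation
    g *= quench                   # sudden quench of the pairing strength

    gaps = []
    for _ in range(int(tmax / dt)):
        dx, dy = g * np.mean(s[0]), -g * np.mean(s[1])  # complex gap D = dx + i dy
        b = np.stack([np.full(nk, -2 * dx), np.full(nk, -2 * dy), 2 * eps])
        s = s + dt * np.cross(b, s, axis=0)             # explicit Euler precession
        s *= 0.5 / np.linalg.norm(s, axis=0)            # enforce spin length 1/2
        gaps.append(np.hypot(dx, dy))
    # |D(t)| rings at ~2*Delta_inf with a decaying envelope, and the ringing
    # frequency redshifts as the quench strengthens -- the conventional
    # one-band behavior contrasted with the FeSC data in the text
    return np.arange(len(gaps)) * dt, np.array(gaps)
```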
The distinct mode amplitude and position variation with pump field extracted
from the coherent oscillations clearly show the transition from SW growth and
saturation to reduction, marked by the black dashed line at
$E_{\mathrm{pump}}=$276 kV/cm (Fig. 3e). The saturation and reduction of SW in
the amplitude oscillation, yet with a persisting mode frequency, cannot be
explained by any known mechanism. This can arise from the coupling of the two
amplitude modes $\omega_{\mathrm{\mathrm{H},1}}$ and
$\omega_{\mathrm{\mathrm{H},2}}$ expected in iron pnictides due to the strong
inter-pocket interaction $U$. As we demonstrate theoretically below, the
coherent coupling process shown in Fig. 3e can be controlled and detected
nonlinearly by two phase-locked THz pulses. We argue that (I) At low driving
fields, $\omega_{\mathrm{\mathrm{H},1}}$ dominates the hybrid collective mode
due to less damping than $\omega_{\mathrm{\mathrm{H},2}}$ arising from the
asymmetry between the electron and hole pockets; (II) For higher fields, SW of
$\omega_{\mathrm{\mathrm{H},1}}$ mode decreases due to the strong coupling
with $\omega_{\mathrm{\mathrm{H},2}}$ expected for the strong inter–band
interaction in iron pnictides. Moreover, it is critical to note that the THz
driving is of highly non-thermal nature, which is distinctly different from
that obtained by temperature tuning in Fig. 2. Specifically, Figs. 3f and 3g
compare the hybrid Higgs mode spectra for similar condensate fraction
$n_{\mathrm{s}}/n_{\mathrm{0}}$, i.e., $\approx$60% (f) and 25% (g), induced
by tuning either the temperature (red shade) or THz pump (black line). The
mode is clearly much stronger in the THz driven coherent states than in the
temperature tuned ones.
Figure 4: Gauge-invariant quantum kinetic calculation of the THz-
driven hybrid Higgs dynamics. $\textbf{a},$ Calculated Higgs mode spectra for
low pump $E$-field strengths. Inset: Calculated $\Delta E_{\mathrm{}}/E_{0}$
for peak Epump = 380 kV/cm (black) and its comparison with the waveform
E${}_{\mathrm{pump}}^{2}$ (pink, shaded) of the applied THz pump pulse in the
experiment. $\textbf{b},$ Calculated Higgs mode spectra for higher field
strengths show a decrease in amplitude and redshift of the Higgs mode
$\omega_{H,1}$, consistent with experimentally measured coherent responses in
Figs. 3c and 3d. Note that a second $\omega_{H,2}$ appears at higher
$E_{\mathrm{pump}}$-field strengths and gets stronger at elevated
$E_{\mathrm{pump}}$-field strengths. Although the $\omega_{H,2}$ mode is
outside the experimental sampling width, it is revealed by the distinct
nonlinear THz field dependence of spectral weight controlled by THz pump (Fig.
3d). $\textbf{c},$ $E_{\mathrm{pump}}$-field dependence of the calculated
Higgs mode $\omega_{H,1}$ spectral weight without inter-band coupling, i.e.,
$U=0$ (red circles) and with strong inter-band coupling $U\neq 0$ (blue
circles). $\textbf{d},$ Plot of the Higgs mode frequency $2\omega_{H,1}$ as a
function of $E_{\mathrm{pump}}$-field strength without inter-band coupling
(red circles) and for strong inter-band coupling (blue circles). Our
simulations for strong $U$ (blue circles) are in full agreement with the
experimental results in Figs. 3c-3d and in sharp contrast with the one-band SC
results obtained for $U=0$ (red circles). e, f, The calculated spectral weight
SW${}_{0\rightarrow 14~{}\mathrm{meV}}$ e and resonance position f are plotted
as a function of temperature for a fixed pump field of $56.0$ kV/cm for $U=0$
(red circles) and $U\neq 0$ (blue circles). With inter-band coupling, the SW
is strongly suppressed, by about 60$\%$ up to a temperature of $0.6~{}T_{C}$,
while at the same time the mode frequency is only slightly redshifted, by
about 15$\%$, before a full spectral weight depletion is observable towards
$T_{C}$. These simulations are in agreement with the hybrid Higgs behavior in
FeSCs and differ from one-band superconductors showing comparable change of SW
and position of the Higgs mode with increasing temperature (red circles).
To put the above Higgs mode findings on a rigorous footing, we calculate
two–dimensional THz coherent nonlinear spectra by extending the gauge-
invariant density matrix equations of motion theory of Ref. mootz2020 , as
outlined in the supplement (Section S6). Using the results of these
calculations, we propose a physical mechanism that explains the distinct
differences of the Higgs mode resonance in the four–wave–mixing spectra
between the strong and weak inter-band interaction limits. For this, we
calculate the APS and quantum transport nonlinearities mootz2020 driven by
two intense phase–locked THz E-field pulses for an effective 3-pocket BCS
model of FeSCs fernandes:2016 . This model includes both intraband and
interband pairing interactions, as well as asymmetry between electron and hole
pockets. We thus calculate the nonlinear differential field transmission
$\Delta E_{\mathrm{}}/E_{0}$ for two phase–locked THz pulses, which allows for
a direct comparison of our theory with the experiment (supplementary).
The inset of Fig. 4a presents the calculated $\Delta E/E_{0}$ (black line),
shown together with E${}_{\mathrm{pump}}^{2}(t)$ of the applied experimental
pump pulse (pink shade). The calculated Higgs mode spectra, Fig. 4a (regime I)
and Fig. 4b (regime II), are dominated by a resonance close to 6.8 meV for low
pump fluences, which corresponds to $\omega_{\mathrm{\mathrm{H},1}}$. This
resonance grows up to pump fields $E_{\mathrm{THz}}\approx 320.0$ kV/cm for
the parameters used here, with minimal redshift and without any significant SW
at $\omega_{\mathrm{\mathrm{H},2}}$ (Fig. 4a). Interestingly, for higher
fields (Fig. 4b), we obtain both a redshift and a decrease of the oscillation
amplitude. In this regime II, SW emerges close to 15.0 meV, outside of our
experimental bandwidth, in the frequency regime of the
$\omega_{\mathrm{\mathrm{H},2}}$ Higgs mode. The latter mode is strongly
suppressed due to damping induced via electron-hole asymmetry (supplementary).
Specifically, the ellipticity of the e-pockets increases the DOS along the
pump field direction and thus increases the damping of mainly the
$\omega_{\mathrm{\mathrm{H},2}}$ resonance, which leads to a transfer of
oscillator strength to the continuum. This damping has a much smaller
influence on the $\omega_{\mathrm{\mathrm{H},1}}$ resonance that arises
largely from the hole pockets. Most importantly, the strong interband coupling
expected in FeSCs leads to a decrease in the $\omega_{\mathrm{\mathrm{H},1}}$
resonance amplitude, with SW reduction accompanied by a persisting mode
frequency. This behavior of the multi-band model with strong $U$ is clearly
seen in the raw experimental data. Note that, while in regime I we observe an
increase in the mode amplitude without any significant redshift, in regime II,
the decrease in $\omega_{\mathrm{\mathrm{H},1}}$ resonance is accompanied by a
small redshift. This behavior of the hybrid Higgs mode contradicts the
one–band behavior, recovered by setting $U=0$, and is in excellent agreement
with our experimental observations in the FeSC system (Figs. 3c-3e).
To scrutinize further the critical role of the strong interband interaction
$U$, we show the fluence dependence of the coherent Higgs SW close to
$\omega_{\mathrm{\mathrm{H},1}}$ in Fig. 4c. SW${}_{0\rightarrow
14~{}\mathrm{meV}}$ differs markedly between the calculation with strong
$U\neq 0$ (blue circles) and that without inter-pocket interaction $U=0$ (red
circles), which resembles the one-band BCS quench results. Importantly, the
Higgs mode SW${}_{0\rightarrow 14~{}\mathrm{meV}}$ for strong $U$ grows at low
pump fluences (regime I), followed by a saturation and then decrease at
elevated $E_{\mathrm{pump}}$ (regime II), consistent with the experiment.
Meanwhile, Fig. 4d demonstrates that the resonance frequency remains constant
in regime I, despite the strong increase of SW, and then redshifts in regime
II, yet by much less than in the one–band system (compare $U\neq 0$ (blue
circles) vs. $U=0$ (red circles)). Without inter-pocket $U$ ($U=0$ , red
circles in Figs. 4c-4d), the SW of Higgs mode $\omega_{\mathrm{\mathrm{H},1}}$
grows monotonically up to a quench of roughly 90$\%$. A further increase of
the pump field leads to a complete quench of the order parameter $\Delta_{1}$
and a decrease of the SW of Higgs mode $\omega_{\mathrm{H,1}}$ to zero, due to
a transition from a damped oscillating Higgs phase to an exponential decay. Based
on the calculations in Figs. 4c-4d, the decrease of the spectral weight with
interband coupling appears at a $\Delta_{1}$ quench close to 15$\%$, while
without interband coupling the decrease of spectral weight is only observable
close to the complete quench ($\sim$90$\%$) of the SC order parameter
$\Delta_{1}$. We conclude from this that the spectral weight decrease at the
lower Higgs resonance with low redshift is a direct consequence of the strong
coupling between the electron and hole pockets due to large $U$. This
$E_{\mathrm{pump}}$ dependence is the hallmark signature of the Higgs mode in
FeSCs and is fully consistent with the Higgs mode behaviors observed
experimentally.
Finally, the temperature dependence of the hybrid Higgs mode predicted by our
model is shown in Figs. 4e and 4f for $U=0$ (red circles) and $U\neq 0$ (blue
circles). With interband coupling, the SW is strongly suppressed, by about
60$\%$ up to a temperature of $0.6~{}T_{C}$, while at the same time the mode
frequency is only slightly redshifted, by about 15$\%$, before a full spectral
weight depletion is observable towards $T_{C}$. The strong suppression results
from transfer of SW from mode $\omega_{\mathrm{H,1}}$ to the higher mode
$\omega_{\mathrm{H,2}}$ with increasing temperature, since the higher SC gap
$\Delta_{2}$ experiences stronger excitation by the applied pump $E^{2}$ with
growing $T$. These simulations are in agreement with the hybrid Higgs behavior
in Fig. 2 and differ from one-band superconductors showing comparable change
of both SW and position of the Higgs mode with increasing temperature (red
circles, Figs. 4e-4f). Moreover, our calculation without light-induced changes
in the collective effects (only charge-density fluctuations) produces a
significantly smaller $\Delta E_{\mathrm{}}/E_{0}$ signal in the non-
perturbative excitation regime (Fig. S11, supplementary). Therefore, we
conclude that the hybrid Higgs mode dominates over charge-density fluctuations
in two-pulse coherent nonlinear signals in FeSCs, due to the different effects
of the strong interband $U$ and multi-pocket bandstructure on QPs and on Higgs
collective modes.
In summary, we provide distinguishing features for Higgs modes and coherent
excitations in FeSCs, which differ significantly from any previously observed
collective mode in other superconducting materials: 2$\Delta_{\mathrm{SC}}$
amplitude oscillations displaying a robust mode resonance frequency despite a
large change, of more than 50%, in the spectral weight as a function of THz
electric field. This unusual nonlinear quantum behavior provides compelling
evidence for a hybrid Higgs mechanism arising from two-band entanglement in
FeSCs. Our results also warrant further investigation of Higgs collective
modes through broadband THz 2D spectroscopy in the coherent driven regime.
## References
* (1) I. I. Mazin, J. Schmalian, Pairing symmetry and pairing state in ferropnictides: Theoretical overview. Physica C: Superconductivity 469, 614-627 (2009).
* (2) D. C. Johnston, The puzzle of high temperature superconductivity in layered iron pnictides and chalcogenides. Adv. Phys. 59, 803 (2010).
* (3) T. Hanaguri, S. Niitaka, K. Kuroki, H. Takagi, Unconventional s-Wave Superconductivity in Fe(Se,Te). Science 328, 474 (2010).
* (4) A. Patz et al., Ultrafast observation of critical nematic fluctuations and giant magnetoelastic coupling in iron pnictides. Nat. Commun. 5, 3229 (2014).
* (5) A. Patz et al., Critical speeding up of nonequilibrium electronic relaxation near nematic phase transition in unstrained Ba(Fe1-xCox)2As2. Phys. Rev. B 95, 165122 (2017).
* (6) A Charnukha, Optical conductivity of iron-based superconductors. J. Phys.: Condens. Matter. 26, 253203 (2014).
* (7) R. Matsunaga et al., Light-induced collective pseudospin precession resonating with Higgs mode in a superconductor. Science 345, 1145–1149 (2014).
* (8) T. Cea, C. Castellani, L. Benfatto, Nonlinear optical effects and third-harmonic generation in superconductors: Cooper pairs versus Higgs mode contribution. Phys. Rev. B 93, 180507(R) (2016).
* (9) M. Udina, T. Cea, L. Benfatto, Theory of coherent-oscillations generation in terahertz pump-probe spectroscopy: From phonons to electronic collective modes. Phys. Rev. B 100, 165131 (2019).
* (10) F. Giorgianni et al., Leggett mode controlled by light pulses. Nat. Phys. 15, 341–346 (2019).
* (11) R. Matsunaga et al., Higgs amplitude mode in the BCS superconductors Nb1-xTixN induced by terahertz pulse excitation. Phys. Rev. Lett. 111, 057002 (2013).
* (12) S. Rajasekaran et al., Probing optically silent superfluid stripes in cuprates. Science 359, 575-579 (2018).
* (13) X. Yang et al., Terahertz-light quantum tuning of a metastable emergent phase hidden by superconductivity. Nat. Mater. 17, 586-591 (2018).
* (14) X. Yang et al., Lightwave-driven gapless superconductivity and forbidden quantum beats by terahertz symmetry breaking. Nat. Photon. 13, 707-713 (2019).
* (15) X. Yang et al., Nonequilibrium pair breaking in Ba(Fe1-xCox)2As2 superconductors: evidence for formation of a photoinduced excitonic state. Phys. Rev. Lett. 121, 267001 (2018).
* (16) C. Vaswani et al., Terahertz Second-Harmonic Generation from Lightwave Acceleration of Symmetry-Breaking Nonlinear Supercurrents. Phys. Rev. Lett. 124, 207003 (2020).
* (17) F. Yang, M. W. Wu, Gauge-invariant microscopic kinetic theory of superconductivity: Application to the optical response of Nambu-Goldstone and Higgs modes. Phys. Rev. B 100, 104513 (2019).
* (18) A. Kumar, A. F. Kemper, Higgs oscillations in time-resolved optical conductivity. Phys. Rev. B 100, 174515 (2019).
* (19) L. Schwarz et al., Classification and characterization of nonequilibrium Higgs modes in unconventional superconductors. Nat. Commun. 11, 287 (2020).
* (20) H. Chu et al., Phase-resolved Higgs response in superconducting cuprates. Nat Commun 11, 1793 (2020).
* (21) M. Buzzi et al., Higgs-mediated optical amplification in a non-equilibrium superconductor. Preprint at https://arxiv.org/abs/1908.10879 (2020).
* (22) R. M. Fernandes, J. Schmalian, Competing order and nature of the pairing state in the iron pnictides. Phys. Rev. B 82, 014521 (2010).
* (23) A. Akbari, A. P. Schnyder, D. Manske, I. Eremin, Theory of nonequilibrium dynamics of multiband superconductors. Europhys. Lett. 101, 17002 (2013).
* (24) T. Cea, P. Barone, C. Castellani, L. Benfatto, Polarization dependence of the third-harmonic generation in multiband superconductors. Phys. Rev. B 97, 094516 (2018).
* (25) T. Cea, L. Benfatto, Signature of the Leggett mode in the A1g Raman response: From MgB2 to iron-based superconductors. Phys. Rev. B 94, 064512 (2016).
* (26) Y. Murotani, N. Tsuji, H. Aoki, Theory of light-induced resonances with collective Higgs and Leggett modes in multiband superconductors. Phys. Rev. B 95, 104503 (2017).
* (27) H. Krull, N. Bittner, G. S. Uhrig, D. Manske, A. P. Schnyder, Coupling of Higgs and Leggett modes in non-equilibrium superconductors. Nat. Commun. 7, 11921 (2016).
* (28) G. Blumberg et al., Observation of Leggett’s collective mode in a multi-band MgB2 superconductor. Phys. Rev. Lett. 99, 227002 (2007).
* (29) S. Lee et al., Template engineering of Co-doped BaFe2As2 single-crystal thin films, Nat. Mater. 9, 397 (2010).
* (30) X. Yang et al., Ultrafast nonthermal terahertz electrodynamics and possible quantum energy transfer in the Nb3Sn superconductor. Phys. Rev. B 99, 094504 (2019).
* (31) J. J. Tu et al., Optical properties of the iron arsenic superconductor BaFe1.85Co0.15As2. Phys. Rev. B 82, 174509 (2010).
* (32) K. W. Kim et al., Evidence for multiple superconducting gaps in optimally doped BaFe1.87Co0.13As2 from infrared spectroscopy. Phys. Rev. B 81, 214508 (2010).
* (33) T. Maag et al., Coherent cyclotron motion beyond Kohn’s theorem. Nat. Phys. 12, 119 (2016).
* (34) E. A. Yuzbashyan, M. Dzero, Dynamical vanishing of the order parameter in a fermionic condensate. Phys. Rev. Lett. 96, 230404 (2006).
* (35) T. Papenkort, V. M. Axt, T. Kuhn, Coherent dynamics and pump-probe spectra of BCS superconductors. Phys. Rev. B 76, 224522 (2007).
* (36) Y.-Z. Chou, Y. Liao, M. S. Foster, Twisting Anderson pseudospins with light: Quench dynamics in terahertz-pumped BCS superconductors. Phys. Rev. B 95, 104507 (2017).
* (37) M. Mootz, J. Wang, and I. E. Perakis, Lightwave terahertz quantum manipulation of nonequilibrium superconductor phases and their collective modes, Phys. Rev. B 102, 054517 (2020)
* (38) R. M. Fernandes, A. V. Chubukov, Low-energy microscopic models for iron-based superconductors: a review. Rep. Prog. Phys. 80, 014503 (2016).
## Acknowledgments
This work was supported by National Science Foundation 1905981 (THz
spectroscopy and data analysis). The work at UW-Madison (synthesis and
characterizations of epitaxial thin films) was supported by the US Department
of Energy (DOE), Office of Science, Office of Basic Energy Sciences (BES),
under award number DE-FG02-06ER46327. Theory work at the University of
Alabama, Birmingham was supported by the US Department of Energy under
contract # DE-SC0019137 (M.M and I.E.P) and was made possible in part by a
grant for high performance computing resources and technical support from the
Alabama Supercomputer Authority.
## Author Contributions
C.V., L.L. and X. Y. performed the THz spectroscopy measurements with J.W.’s
supervision. J.H.K, C.S. and C.B.E. grew the samples and performed crystalline
quality and transport characterizations. M.M. and I.E.P. developed the theory
for the hybrid Higgs mode and performed calculations. Y. G. C and E. E. H made
Ba122 target for epitaxial thin films. J.W. and C.V. analyzed the THz data
with the help of L.L., D.C., C.H., R.J.H.K. and Z.L. The paper is written by
J.W., M.M., I.E.P. and C.V. with discussions from all authors. J.W. conceived
and supervised the project.
## Correspondence
Correspondence and requests for materials should be addressed to J.W.
(<EMAIL_ADDRESS>; [email protected]).
|
# Exploring Human Mobility for Multi-pattern Passenger Prediction: A Graph
Learning Framework
Xiangjie Kong, , Kailai Wang, Mingliang Hou, Feng Xia, ,
Gour Karmakar, , and Jianxin Li This work was supported in part by the
National Natural Science Foundation of China under Grant 62072409, in part by
the Zhejiang Provincial Natural Science Foundation under Grant LR21F020003,
and in part by the Fundamental Research Funds for the Provincial Universities
of Zhejiang under Grant RF-B2020001. _(Corresponding author: Feng Xia.)_ X.
Kong is with the School of Software, Dalian University of Technology, Dalian
116620, China, and also with the College of Computer Science and Technology,
Zhejiang University of Technology, Hangzhou 310023, China (e-mail:
[email protected]). K. Wang and M. Hou are with the School of Software, Dalian
University of Technology, Dalian 116620, China (e-mail: <EMAIL_ADDRESS>;
[email protected]). F. Xia is with the School of Engineering, IT and
Physical Sciences, Federation University Australia, Ballarat 3353, Australia
(e-mail: [email protected]). G. Karmakar is with the School of Engineering, IT and
Physical Sciences, Federation University Australia, Churchill 3842, Australia
(e-mail: [email protected]). J. Li is with the School of IT,
Deakin University, Melbourne 3125, Australia (e-mail:
###### Abstract
Traffic flow prediction is an integral part of an intelligent transportation
system and thus fundamental for various traffic-related applications. Buses
are an indispensable way of moving for urban residents with fixed routes and
schedules, which leads to latent travel regularity. However, human mobility
patterns, specifically the complex relationships between bus passengers, are
deeply hidden in this fixed mobility mode. Although many models exist to
predict traffic flow, human mobility patterns have not been well explored in
this regard. To reduce this research gap and learn human mobility knowledge
from these fixed travel behaviors, we propose a multi-pattern passenger flow
prediction framework, MPGCN, based on Graph Convolutional Network (GCN).
Firstly, we construct a novel sharing-stop network to model relationships
between passengers based on bus record data. Then, we employ GCN to extract
features from the graph by learning useful topology information and introduce
a deep clustering method to recognize mobility patterns hidden in bus
passengers. Furthermore, to fully utilize Spatio-temporal information, we
propose GCN2Flow to predict passenger flow based on various mobility patterns.
To the best of our knowledge, this paper is the first work to adopt a multi-
pattern approach to predict the bus passenger flow from graph learning. We
design a case study for optimizing routes. Extensive experiments upon a real-
world bus dataset demonstrate that MPGCN has potential efficacy in passenger
flow prediction and route optimization.
###### Index Terms:
Spatio-temporal data mining, human mobility pattern, graph convolutional
networks, passenger flow prediction, smart city.
## I Introduction
Smart cities have been gradually formed by information and communication
technologies, including the Internet of Things (IoT) [1], cloud computing [2],
and edge computing [3]. One important application scenario of the smart city
is the Intelligent Transportation System (ITS), which improves public services
and solves problems in urban transportation such as traffic
jams, traffic accidents, parking chaos, route planning [4], and resource
allocation [5]. The above problems are closely related to traffic flow and its
prediction. Furthermore, the smart city industry also plays an important role
in Big Data and generates various Spatio-temporal data styles [6, 7, 8]
including GPS, sensors, social media, and traffic cards. Driven these urban
Spatio-temporal Big Data, the main challenge a smart city faces can be
summarized in two aspects: (1) how to deal with and analyze large but
redundant Spatio-temporal data, and (2) how to improve human mobility and
optimize travel.
Public transportation accounts for a large proportion of urban transportation.
Taking Beijing as an example, buses produced 1.7 billion vehicle kilometers
traveled and transported 4.9 billion passengers in 2011 alone [9]. Encouraging
people to take buses benefits a city’s sustainable development owing to the
low-carbon, green nature of bus travel.
Therefore, the operation management of public transportation directly affects
the traffic circumstance of the city, which the government has always valued.
Many policies have also been adopted to try to improve public transportation,
such as preferential bus fares, bus lanes, additional stops, routes, and bus
running time [10].
However, emerging traffic problems such as severe congestion and the
unreasonable allocation of resources have urged researchers to act [11].
Consequently, new services are urgently required to improve bus travel and ride
experiences, for which passenger flow prediction becomes critical. As shown in
Figure 1, the existing potential problems are addressed
through processing and analyzing Spatio-temporal data. One of the most
effective solutions is route optimization, which is a complex and challenging
task, although valuable to the related industry in sustainable transportation
systems. It is not difficult to find that traffic flow prediction is essential
in the whole process of route optimization. If we predict the traffic flow
accurately, we can respond in time to avoid traffic jams and keep roads
smooth.
Due to this demand for flow prediction, plenty of work has contributed to
traffic prediction for a long time. In general, some traffic flow prediction
approaches are built on traditional mathematical statistics [12], such as AR,
ARMA, and ARIMA. On the other hand, as a result of the limitations of
traditional models and the excellent performance of deep learning in
prediction tasks, deep learning-based methods have evolved, such as DNN
[13], DBN [14], LSTM [15], and GAN [16]. However, the methods mentioned above
only consider the numerical traffic flow based on the statistics of Spatio-
temporal data but neglect the existence of human mobility behavior, which
refers to travel habits and plays a decisive role in the change of traffic
flow. The issue can lead to a lack of definite identification and
differentiation of traffic flow. Specifically, existing works do not identify
mobility behaviors through the relationships between people, which results in
a deviation in the final prediction performance across different groups.
Empirically, people in a similar group have similar mobility behaviors. For
example, the fact that most commuters work from 9 am to 5 pm means that they
travel at least once in the morning and once in the evening, passing regular
bus stops. Therefore, our study defines a passenger mobility pattern as a group
of people with similar travel routes. Hypothetically, the total flow consists
of two components: (i) a steady flow generated by most people with permanent
jobs and residences, and (ii) an uncertain flow generated by travel,
entertainment, and so on. However, as far as we know, few studies have
considered passenger mobility pattern analysis to predict traffic flow.
Figure 1: Application and process of traffic flow prediction.
In recent years, Graph Neural Network (GNN), especially Graph Convolutional
Network (GCN), has effective performance in extracting the features and
relationships of a topological graph. GNN not only represents explicitly the
nodes of the graph in the low-dimensional vector space (also called
embedding), but retains essential attributes [17]. Ordinarily, the embeddings
of nodes can be used in various downstream tasks, including clustering [18],
classification, and prediction [19]. Different from the previous methods
treating Spatio-temporal information of the trajectory data as the main
feature [20], based on the passenger bus record data, we try to define and
construct an interpretable graph structure-based network that can be applied
in a GCN-based model to explore passenger mobility patterns. Besides, adjacent
bus stops with a solid Spatio-temporal relationship can improve the accuracy of
traffic flow prediction in the mass transit network. It is important to note
that our study focuses on prediction, so we do not consider fare evasion [21]
or certain anomalies [22, 23], which do not
affect the evaluation of the predictive model.
In this paper, we introduce a framework, namely MPGCN, with three stages to
predict passenger flow for the first time. We first obtain related information
of bus stops based on the bus record data, which are used in the analysis of
passenger mobility patterns. Secondly, we construct a sharing-stop network of
passengers, including stop matching and weight assignment of graph edges. The
sharing-stop network is utilized in graph deep clustering with GCN to explore
mobility patterns. Furthermore, to verify their diversity, we execute the
statistical analysis of each mobility pattern by describing heavy-tailed
distributions of the number of bus stops the passengers passed. Then,
considering the spatial correlation of bus route network and temporal
correlation of traffic flow, we propose GCN2Flow combining Spatio-temporal
information to separately predict passenger flow of different patterns, where
the predictive flows for all passenger patterns are fused in the final stage
to obtain the prediction result. Finally, we design a case study for
optimizing routes, where we select optimal routes from candidates of
passengers and set the passenger diversion and experience as the main
optimization objective based on previous prediction results.
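As a rough illustration of the first stage (the sharing-stop network is defined formally in Section III-A), the sketch below reduces bus records to (passenger, stop) pairs and weights each passenger pair by the number of stops they share; this simple counting weight is one possible choice of weight assignment, used here only for illustration.

```python
from collections import defaultdict
from itertools import combinations
import networkx as nx

def build_sharing_stop_network(records):
    """Weighted, undirected sharing-stop network over passengers.

    records : iterable of (passenger_id, stop_id) pairs from bus record data.
    Edge weight = number of distinct stops shared by the two passengers.
    """
    stops_of = defaultdict(set)                # passenger -> stops used
    for pid, stop in records:
        stops_of[pid].add(stop)

    riders_at = defaultdict(set)               # stop -> passengers (inverted index)
    for pid, stops in stops_of.items():
        for stop in stops:
            riders_at[stop].add(pid)

    G = nx.Graph()
    G.add_nodes_from(stops_of)
    for riders in riders_at.values():          # each co-visited stop adds weight 1
        for u, v in combinations(sorted(riders), 2):
            w = G.get_edge_data(u, v, default={"weight": 0})["weight"]
            G.add_edge(u, v, weight=w + 1)
    return G
```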
The main contributions of this work are summarized as follows:
* •
We develop a novel prediction framework, namely MPGCN, which integrates the
passenger mobility patterns into passenger flow prediction task to enhance
accuracy. We define a passenger mobility pattern as a group of people with
similar travel times or similar travel routes. Our MPGCN includes three stages
to achieve the prediction: (i) pre-processing the bus record data, (ii)
recognizing passenger mobility patterns, and (iii) predicting passenger flow
by proposed GCN2Flow combined with Spatio-temporal information. Besides, we
design constrained planning as a case study for optimizing routes and thus
improving passenger diversion and experience based on the prediction results.
* •
We present a sharing-stop network, where the relationship between passengers
is established, and explore the passenger mobility patterns in the sharing-
stop network based on deep clustering with GCN. Through statistical analysis
and data fitting, we demonstrate the reasonability and interpretability of the
network, as well as the significant laws among different mobility patterns.
* •
We conduct a series of experiments, including the analysis of passenger
mobility patterns and the comparison with different prediction algorithms with
or without passenger mobility patterns. The predictive evaluation demonstrates
that our framework has better performance and substantially improves passenger
flow prediction. Besides, the prediction is accurate enough to be used for
downstream tasks such as route optimization.
The remainder of this paper is structured as follows. In Section 2, we briefly
review related works. Section 3 mainly illustrates our proposed model and
framework (MPGCN) in detail. Data description and analysis of the experimental
results are given in Section 4. Finally, we present discussions, conclusion
and future work in Section 5 and 6.
## II Related Work
In this section, we review the existing works closely related to the research
project presented in this paper covering three fields: traffic flow
prediction, human mobility pattern, and graph convolutional network.
### II-A Traffic Flow Prediction
Traffic flow prediction has many cutting edge applications, such as road
network planning, congestion prevention, and accident detection. Considering
the technical approach applied in prediction, traffic flow prediction models
can be roughly into three categories: (i) traditional mathematical-statistical
parametric, (ii) non-parametric regression, and (iii) artificial neural
network (ANN) models.
The mathematical-statistical parametric models mainly examine the time series
that have a periodic change rule in the urban road traffic, like traffic peak
in the day and night time. These models include autoregressive moving average
(ARMA) [24], and autoregressive integrated moving average (ARIMA) [25].
Kumar et al. [25] utilized the seasonal ARIMA model to design a prediction
scheme using only limited input data. In this model, the issues associated
with huge time-series data like availability, computation, storage, and
maintenance are considered, and the last three days’ flow observations were
used as input for predicting the next day’s flow. Support Vector Regression
(SVR) [26] and Nearest Neighbor Regression [27] are the most popular
nonparametric regression models. With the complexity and diversity increasing
in the road network, there is a demand for more accurate traffic prediction.
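For concreteness, the seasonal scheme of [25] described above might be sketched with statsmodels as follows; the synthetic hourly series and the $(p,d,q)\times(P,D,Q,s)$ orders are illustrative placeholders, not the settings of [25].

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Placeholder: hourly flow for the last three days (72 observations), the
# limited input used in [25]; a real series would come from the bus records.
rng = np.random.default_rng(0)
hours = np.arange(72)
flow = 100 + 40 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, size=72)

# Seasonal ARIMA with a 24-hour period; orders are illustrative
model = SARIMAX(flow, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
fit = model.fit(disp=False)
next_day = fit.forecast(steps=24)   # next day's hourly flow prediction
```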
Because of the efficacy of ANN for various prediction tasks in complex and
diverse scenarios, it attracts attention to traffic flow prediction. Liu et
al. [13] proposed a novel passenger flow prediction model using a hybrid of
deep network of unsupervised stacked autoencoders (SAE), and supervised deep
neural network (DNN). Besides, Yu et al. [28] introduced Spatio-temporal GCN
(STGCN) model to forecast traffic, using graph convolution and gated CNN for extracting
spatial and temporal features, respectively. More recent works such as T-GCN
[29], and TGC-LSTM [30] further utilized GCN to the extraction of spatial and
temporal correlations to improve prediction. However, all the models presented
in this section only focus on mining the information in the raw data while
ignoring the potential mobility patterns of passengers. For addressing this
research issue, for the first time, this paper aims to discover passenger
mobility behaviors and rules from the bus record data and then utilize them
for traffic flow prediction. Since passenger mobility behaviors and rules are
to be exploited in this paper to predict passenger flow, human mobility
pattern is presented in the next section.
### II-B Human Mobility Pattern
The fluctuation of traffic flow naturally depends on human mobility and
travel. Consequently, the analysis of human mobility patterns is of paramount
importance for traffic prediction. Existing methods are distinguished
primarily by the type of dataset they use: (i) unconscious mobility data
(e.g., sensor data or card records) and (ii) actively shared mobility data
(e.g., traditional diary activity surveys or social location sharing). Using
the former dataset type, Yan et al. [31] presented a model to capture the
underlying driving force accounting for human mobility patterns based on GPS
and mobile phone data. Qi et al. [32] designed a multi-step methodology to
extract mobility patterns from smart card data and points-of-interest data.
Nitti et al. [33] presented a Wi-Fi-based Automatic Bus pAssenger CoUnting
System (iABACUS), which does not depend on passengers' card records and can
track passengers to analyze urban mobility. In [34], the authors integrated
taxi and subway data to compute the human mobility network and discover human
mobility patterns in terms of trip displacement, duration, and interval. On
the other hand, people are willing to share activities containing location
information because of the convenience and popularity of social networks such
as Weibo and Twitter. Utilizing the latter dataset type, Comito et al. [35]
developed a methodology to discover people, community behavior, and travel
routes from geo-tagged posts and tweets. Nevertheless, none of these research
works has leveraged human mobility patterns to predict traffic flow.
Therefore, exploring human mobility patterns from passenger bus data is one of
the main aims of the research project presented in this paper. Since we
leverage graph convolutional networks for traffic flow prediction, the next
subsection presents their overview.
Figure 2: Framework of the proposed model, MPGCN, where $C_{m}$ and $P_{m}$
represent the $m$-th cluster and its corresponding passenger mobility pattern.
### II-C Graph Convolutional Network
With the development of graph learning, CNNs have been extended to graphs
(networks) as a more general data structure, e.g., by embedding nodes or
subgraphs into vector spaces. The first convolutional operation on graphs was
presented in [36]; it has since evolved, owing to its effectiveness in
representing graphs [37], into numerous application domains such as node
clustering [38], classification [39], and prediction [40]. Furthermore, to
better utilize neighbours' information, GAT [41] employs a multi-head self-
attention mechanism to compute attention scores over different neighbours. In
this paper, unlike others, a major challenge is to establish an explainable
network from bus record data, which enables us to discover the relationships
between passengers. In addition, we need to extract spatial features of
geographical information in the stop network to improve passenger flow
prediction.
## III Design of Framework
This section provides the details of the theoretical underpinning of our
proposed network and techniques used in our proposed Multi-Pattern GCN based
passenger flow prediction, namely MPGCN framework as shown in Figure 2.
### III-A Network construction: Sharing-Stop Network
For exploring human mobility patterns, a sharing-stop network is defined as a
weighted undirected graph
$\mathcal{G}_{p}=\left\\{\mathcal{V}_{p},\mathcal{E}_{p},\mathcal{A}_{p}\right\\}$,
where $\mathcal{V}_{p}$ is the set of passenger nodes and $\mathcal{E}_{p}$ is
the set of edges.
$\mathcal{A}_{p}\in\mathbb{R}^{\left|\mathcal{V}_{p}\right|\times\left|\mathcal{V}_{p}\right|}$
is the weighted matrix with each element $a^{p}_{ij}\geq 0$. Specifically, the
edge between passengers $i$ and $j$ denotes their existing relationship of
sharing stops, and weight $a^{p}_{ij}$ indicates the count of the occurrences
of their shared stops. The pseudocode for constructing the sharing-stop
network is shown in Algorithm 1, and a sketch of a runnable implementation is
given after the listing. A simple example of this sharing-stop network is
shown in the middle part of Figure 2. It is based on the empirical assumption
that passengers in the same pattern have similar records; that is, for the
same mobility pattern, passengers have a similar number of edges and similar
edge weights in the sharing-stop network.
Algorithm 1 The construction of sharing-stop network
0: $P2S$: the mapping from passengers to the stops in their records;
0: $G_{p}$, sharing-stop network;
1: Initialize the graph structure $G_{p}$;
2: for each pair $p_{i},p_{j}$ ($i\neq j$) in $P2S.keys$ do
3: $s_{i},s_{j}\leftarrow P2S[p_{i}],P2S[p_{j}]$; $\backslash\backslash$
obtain stops set of records;
4: $s\\_s\leftarrow s_{i}\cap s_{j}$; $\backslash\backslash$ obtain all
sharing-stops;
5: if $s\\_s$ is empty then
6: continue;
7: end if
8: Initialize $a^{p}_{ij}=0$ in $G_{p}$; $\backslash\backslash$ add an edge;
9: for $s$ in $s\\_s$ do
10: $c_{i},c_{j}\leftarrow s_{i}.count(s),s_{j}.count(s)$;
$\backslash\backslash$ counting;
11: $G_{p}.a^{p}_{ij}+=min(c_{i},c_{j})$; $\backslash\backslash$ update the
weight;
12: end for
13: end for
14: return $G_{p}$;
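To make Algorithm 1 concrete, the following is a minimal Python sketch of the construction; the function name, the dictionary-of-weights output, and the assumption that $P2S$ maps each passenger ID to the list of stop IDs in that passenger's records (with repetitions) are ours, not the authors' released code.

```python
from itertools import combinations
from collections import Counter

def build_sharing_stop_network(p2s):
    """Sketch of Algorithm 1. p2s: passenger id -> list of stop ids
    appearing in that passenger's records (with repetitions)."""
    counts = {p: Counter(stops) for p, stops in p2s.items()}
    weights = {}  # (p_i, p_j) -> a^p_ij, the sharing-stop edge weight
    for p_i, p_j in combinations(p2s, 2):
        shared = counts[p_i].keys() & counts[p_j].keys()  # shared stops
        if not shared:
            continue  # no edge between p_i and p_j
        # accumulate min occurrence counts over all shared stops (line 11)
        weights[(p_i, p_j)] = sum(
            min(counts[p_i][s], counts[p_j][s]) for s in shared)
    return weights

# toy usage: passengers "a" and "b" share stops 1 and 2
print(build_sharing_stop_network({"a": [1, 1, 2], "b": [1, 2, 2], "c": [3]}))
# -> {('a', 'b'): 2}
```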
### III-B Deep Clustering with GCN
Figure 3: Computational process of deep clustering with GCN.
After constructing the graph, inspired by [38], unsupervised deep clustering
with GCN is used to mine potential passenger mobility patterns, as shown in
Figure 3. First, an unsupervised representation method, the basic autoencoder
(AE), is employed to learn the representation of passenger nodes, that is, a
mapping function $\Phi:v\in\mathcal{V}_{p}\mapsto\mathbb{R}^{d}$, where
$d\ll\left|\mathcal{V}_{p}\right|$. We assume that there are $L$ layers in the
encoder and decoder parts, which are symmetrical. Therefore, the
representation of the $l$-th layer, $Y^{(l)}$ in the encoder and
$\hat{X}^{(l)}$ in the decoder, can be obtained as follows:
$\left\\{\begin{aligned}
&Y^{(l)}=\phi\left(W_{e}^{(l)}Y^{(l-1)}+b_{e}^{(l)}\right),\\\
&\hat{X}^{(l)}=\phi\left(W_{d}^{(l)}\hat{X}^{(l-1)}+b_{d}^{(l)}\right),\\\
\end{aligned}\right.$ (1)
where $W$ and $b$ are the weight matrix and bias, respectively, and $\phi$ is
the activation function, such as the ReLU or sigmoid function. The input of
the encoder, $Y^{(0)}$, is the initial feature matrix $X$ obtained from the
sharing-stop network, and the output of the decoder is the reconstruction
$\hat{X}=\hat{X}^{(L)}$. Hence, the loss function of the entire AE is as
follows:
$\mathcal{L}_{1}=\dfrac{1}{2|V|}\left\|X-\hat{X}\right\|^{2},$ (2)
where $||\cdot||$ denotes the Euclidean distance between the two
representation matrices.
On the other hand, we integrate these representations into GCN that can learn
them by combining the relationship between passenger nodes. In this part, the
convolutional operation of the $l$-th layer can be defined by:
$H^{(l)}=\phi\left(\tilde{D}^{-\frac{1}{2}}\tilde{A}_{p}\tilde{D}^{-\frac{1}{2}}\tilde{H}^{(l-1)}W^{(l-1)}\right),$
(3)
where $\tilde{A}_{p}=\mathcal{A}_{p}+I$, and $I$ is an identity matrix.
$\tilde{D}_{ii}=\sum_{j}\tilde{a}^{p}_{ij}$ is the degree of node $i$ in the
adjacency matrix $\tilde{A}_{p}$, and $W$ is the weight matrix of parameters.
Specially, the input of $l$-th layer in GCN, $\tilde{H}^{(l-1)}$, combines the
representations from the initial GCN and AE:
$\tilde{H}^{(l-1)}=\alpha H^{(l-1)}+(1-\alpha)Y^{(l-1)},$ (4)
Eq. (4) joins GCN with AE. In this case, we uniformly set $\alpha=0.5$. The
final representation of the last layer can be mapped as a multiple
classification probability with softmax function:
$H=softmax\left(\tilde{D}^{-\frac{1}{2}}\tilde{A}_{p}\tilde{D}^{-\frac{1}{2}}H^{(L)}W^{(L)}\right),$
(5)
where $H$ can be regarded as the probability distribution, and $h_{ij}\in H$
denotes the probability of node $i$ in cluster $j$.
To make the representation more suitable for deep clustering, a dual self-
supervision mechanism is employed to combine clustering information with the
previously learned representation. Using the Student's t-distribution to
measure similarity, the probability of node $i$ belonging to cluster $j$ is as
follows:
$q_{ij}=\dfrac{\left(1+\left\|y_{i}-\mu_{j}\right\|^{2}/n\right)^{-(n+1)/2}}{\sum_{j}\left(1+\left\|y_{i}-\mu_{j}\right\|^{2}/n\right)^{-(n+1)/2}},$
(6)
where $y_{i}$ is from the $Y^{(L)}$, $\mu_{j}$ is the cluster center vector
initialized by the K-means clustering, and $n$ is the degree of freedom of
t-distribution. Therefore, we obtain the clustering probability distribution
$Q=\left\\{q_{ij}\right\\}$. Besides, the target distribution, $P$, can be
computed and normalized as follows:
$p_{ij}=\dfrac{q^{2}_{ij}/\sum_{i}q^{2}_{ij}}{\sum_{k}(q^{2}_{ik}/\sum_{i}q^{2}_{ik})}.$
(7)
Then, we use the KL divergence as part of the loss function, that is
$\mathcal{L}_{2}$ is the KL divergence between $P$ and $Q$ distributions, and
$\mathcal{L}_{3}$ is between $P$ and $H$. In the end, the overall loss
function is defined by:
$\mathcal{L}=\theta_{1}\mathcal{L}_{1}+\theta_{2}\mathcal{L}_{2}+\theta_{3}\mathcal{L}_{3}.$
(8)
where $\theta_{1},\theta_{2},\theta_{3}$ are hyper-parameters,
$\mathcal{L}_{2}=KL(P||Q)$, and $\mathcal{L}_{3}=KL(P||H)$. The final cluster
label of node $i$ is determined by the maximum value of $h_{ij}$ in the
probability distribution $H$.
### III-C Multi-Pattern GCN Based Passenger Flow Prediction
The flow prediction components include identifying passenger patterns,
training a GCN-based neural network model, and predicting passenger flow. From
the previous clustering results, passenger nodes in the same cluster have
similar mobility rules. Therefore, we divide the passengers into several
mobility pattern groups using the clustering results, which also serves as
pattern exploration. In addition, we design a statistical task to discover the
potential laws by fitting several candidate heavy-tailed distributions [34],
as shown in Section IV.
Figure 4: The architecture of GCN2Flow, which takes spatial bus route network
($A_{s}$), and time series flows for the passengers in pattern $j$ ($F_{j}$)
as input of TC block and SGC block.
As shown in Figure 4, we develop a prediction architecture, GCN2Flow, which
comprises several Temporal Convolutional blocks (TC blocks) and Spatial GCN
blocks (SGC blocks). The stop network based on routes is defined as
$\mathcal{G}_{stop}=\left\\{\mathcal{V}_{s},\mathcal{E}_{s},\mathcal{A}_{s}\right\\}$,
where $\mathcal{V}_{s}$ is the set of stop nodes and $\mathcal{E}_{s}$ is the
set of edges. $e^{s}_{ij}\in\mathcal{E}_{s}$ indicates the existence of a
route segment from stop $i$ to the next stop $j$. $\mathcal{A}_{s}$ is a
weighted adjacency matrix whose elements denote geographical distances.
On the one hand, passenger flow prediction leverages historical time-series
data, i.e., the past $t$ time steps are used to predict the next time step.
Recurrent neural network-based methods are popular in time-series prediction.
However, they have issues such as time-consuming training and insensitivity to
the dynamics of long sequences. Therefore, in our TC block, we define a
temporal gated convolutional operation, which utilizes gated linear units
(GLU) [42] as a non-linearity with a residual connection. We assume that
$C_{in}$ and $C_{out}$ are the numbers of input and output channels,
respectively, and $X\in\mathbb{R}^{t\times|\mathcal{V}_{s}|\times C_{in}}$ is
the input of the TC block. The operation is as follows:
$\begin{split}&TC(X)=ReLU((XW_{0}+b_{0})\otimes sigmoid(XW_{1}+b_{1})\\\
&\quad+(XW_{2}+b_{2})),\end{split}$ (9)
where $W\in\mathbb{R}^{k\times C_{in}\times C_{out}}$ ($k$ is the
convolutional kernel size) and
$b\in\mathbb{R}^{|\mathcal{V}_{s}|\times C_{out}}$ are learned parameters, and
$\otimes$ is the element-wise product between matrices.
Note that there are spatial connections between the stops of bus routes; for
example, the passenger flow at a stop is related to the flow at its
neighboring stops. Therefore, the spatial graph convolutional operation,
through Chebyshev polynomials and the first-order approximation [39], can be
written as:
$SGC(X)=\phi\left(\tilde{D}^{-\frac{1}{2}}\tilde{A}_{s}\tilde{D}^{-\frac{1}{2}}XW\right),$
(10)
where $\tilde{A}_{s}=\mathcal{A}_{s}+I$, $\tilde{D}$ is the degree matrix of
$\tilde{A}_{s}$, and $W$ is the learnable parameter matrix.
Therefore, one TC block and one SGC block are jointly utilized to extract
Spatio-temporal features. The whole computational process of the two blocks in
the $l$-th layer is designed as:
$F^{(l)}_{j}=TC(ReLU(SGC(TC(F^{(l-1)}_{j})))),$ (11)
where $F^{(0)}_{j}=F_{j}\in\mathbb{R}^{t\times|\mathcal{V}_{s}|\times
C^{0}_{in}}$ ($C^{0}_{in}=1$ in this case) is the flow matrix of stops with
$t$ time steps in pattern $j$, and $ReLU(\cdot)$ is the rectified linear unit
activation function. Furthermore, we execute an extra temporal gated
convolutional operation and attach a Fully-Connected (FC) layer as the output
of the whole network, so that $\hat{F}_{j}=FC(TC(F^{(L)}_{j}))$ is the
predicted flow matrix for the next time step. The loss function for passenger
flow prediction in pattern $j$ can then be defined as:
$\mathcal{L}=\left\|\hat{F}_{j}-F_{j}\right\|^{2}$ (12)
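For concreteness, a compact PyTorch sketch of one TC block (Eq. 9) and one SGC block (Eq. 10) could look as follows; the `(batch, channels, time, stops)` tensor layout, the class names, and the precomputed normalized adjacency `a_norm` are our assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class TCBlock(nn.Module):
    """Temporal gated convolution (Eq. 9): GLU gating plus a residual term.
    Input and output are laid out as (batch, channels, time, n_stops)."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        # one convolution producing the three branches of Eq. (9)
        self.conv = nn.Conv2d(c_in, 3 * c_out, kernel_size=(k, 1))

    def forward(self, x):
        a, b, c = torch.chunk(self.conv(x), 3, dim=1)
        return torch.relu(a * torch.sigmoid(b) + c)

class SGCBlock(nn.Module):
    """Spatial graph convolution (Eq. 10) with a precomputed
    a_norm = D^{-1/2} (A_s + I) D^{-1/2} of shape (n_stops, n_stops)."""
    def __init__(self, c_in, c_out, a_norm):
        super().__init__()
        self.register_buffer("a_norm", a_norm)
        self.w = nn.Linear(c_in, c_out, bias=False)

    def forward(self, x):  # x: (batch, c_in, time, n_stops)
        x = torch.einsum("nm,bctm->bctn", self.a_norm, x)  # aggregate neighbors
        x = self.w(x.permute(0, 2, 3, 1))                  # mix channels
        return torch.relu(x.permute(0, 3, 1, 2))
```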
Finally, we train multiple GCN2Flow models, whose number depends on the number
of passenger mobility patterns. Each mobility pattern thus has its own
GCN2Flow model to predict its passenger flow, and we then merge the prediction
results of all GCN2Flow models to obtain the total passenger flow prediction.
Algorithm 2 presents the pseudocode for the training and prediction process of
MPGCN.
Algorithm 2 MPGCN predicts passenger flow
0: $m$: The number of passenger patterns;$F_{j}$: the matrix of stop passenger
flow in the pattern $j$; $A_{s}$: the adjacent matrix of stop network; $T$:
the time-steps of flow sequence;
0: $\hat{F}$: the sequence of total flow prediction results;
1: $normalization(A_{s})$;
2: for $j\leftarrow 1$ to $m$ do
3: $normalization(F_{j})$;
4: $X,y\leftarrow Samples(F_{j},T)$; $\backslash\backslash$ obtain training
set;
5: building $GCN2Flow_{j}$ network;
6: $GCN2Flow_{j}\leftarrow GCN2Flow_{j}.train(X,y,A_{s})$;
7: $\hat{F}_{j}\leftarrow unnormalization(GCN2Flow_{j}.predict)$;
8: $\hat{F}+=\hat{F}_{j}$;
9: end for
10: return $\hat{F}$;
### III-D Route Optimization
Finally, we use the prediction results of MPGCN in a simple application case
study. Once the accuracy of passenger flow prediction is ensured, we can treat
the prediction result as the real flow distribution over bus stops at the next
time interval. In our framework, we show an example, route optimization, which
is closely related to flow prediction. Our optimization task aims at providing
new travel routes that avoid crowded bus stops, thus relieving the pressure on
overcrowded bus stops in the public transportation system.
We mainly focus on passenger diversion and ride experience as the optimization
objective $O$, measured by the standard deviation ($std$) of the traffic flow
over all bus stops. More specifically, given the Origin-Destination ($OD$)
matrix of passengers (shown in Section IV) and the bus route network, the
primary goal is to select optimal routes from a candidate route set.
Mathematically, the objective function $O(f)$ has the form of the standard
deviation, which implies $\nabla^{2}O(f)\geq 0$, and the domain of the
variable $f$ is a finite set. The optimal route selection problem can
therefore be cast as a convex optimization problem. In other words, given a
stop network $\mathcal{G}_{stop}$ and the traffic condition at the next time
interval, the objective is to minimize the $std$ of the traffic flow over all
bus stops by changing the travel routes of some passengers. The objective
function and constraints are defined as follows:
$\min_{f_{1},f_{2},\ldots,f_{|\mathcal{A}_{s}|}}O(f)=\sqrt{\frac{1}{|\mathcal{A}_{s}|}\sum_{i=1}^{|\mathcal{A}_{s}|}(f_{i}-\bar{f})^{2}},$
(13)
$\begin{split}s.t.,&\left\\{\begin{aligned}
&RS=g_{1}(\mathcal{G}_{stop},OD,\hat{F})\\\ &F^{*}=g_{2}(r),\\\ &f_{i}\in
F^{*},\\\ &t\in T,\\\ &r\in RS,\\\
&len(r)-len(r_{shortest})\leq\epsilon\end{aligned}\right.\end{split}$ (14)
where the function $g_{1}()$ generates the finite candidate route set $RS$
from $\mathcal{G}_{stop}$, the $OD$ matrix at time $t$, and the predicted
passenger flow $\hat{F}$ at the next time $t+1$, and $g_{2}()$ counts the
passenger flow matrix of all stops based on a candidate route in $RS$. In the
process of generating the candidate route set, we set a threshold $\epsilon$
to bound the additional cost of passenger travel time, that is,
$len(r)-len(r_{shortest})\leq\epsilon$ ($\epsilon=5$ in this part), $r\in RS$,
where $r_{shortest}$ is the shortest travel route based on $OD$. As a result,
we obtain the optimal routes of passengers, which make the passenger flow of
bus stops more balanced and relieve crowding on buses to a certain extent.
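A schematic Python sketch of the selection step in Eqs. (13)–(14) is shown below; the candidate set and the flow-counting callback stand in for the unspecified routines $g_{1}()$ and $g_{2}()$ in the text.

```python
import numpy as np

def objective(flows):
    # O(f) in Eq. (13): standard deviation of the per-stop passenger flow
    return float(np.std(flows))

def select_route(candidates, shortest_len, count_flows, eps=5):
    """Pick from the candidate set RS (= candidates, produced by g1) the
    route minimizing the std of stop flows, subject to the travel-time
    constraint len(r) - len(r_shortest) <= eps. count_flows plays the
    role of g2(): route -> per-stop flow vector."""
    best, best_obj = None, float("inf")
    for r in candidates:
        if len(r) - shortest_len > eps:
            continue  # violates the extra-length constraint
        obj = objective(count_flows(r))
        if obj < best_obj:
            best, best_obj = r, obj
    return best
```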
### III-E Complexity Analysis
In the deep clustering part, we denote by $d_{l}$ the input dimension of the
$l$-th layer of the autoencoder. The time complexity of the autoencoder is
$\mathcal{O}(|\mathcal{V}_{p}||X|d_{1}^{2}\cdots d_{L}^{2})$, and the time
complexity of the GCN module is
$\mathcal{O}(|\mathcal{E}_{p}||X|d_{1}\cdots d_{L})$. Assuming there are $K$
clusters, the time complexity of (6) is
$\mathcal{O}(|\mathcal{V}_{p}|K+|\mathcal{V}_{p}|log|\mathcal{V}_{p}|)$.
Therefore, the total time complexity is the sum of the above three terms and
is linear in $|\mathcal{V}_{p}|$ and $|\mathcal{E}_{p}|$. Similarly, the time
complexity of the flow prediction method is
$\mathcal{O}(|\mathcal{E}_{s}||\mathcal{V}_{s}|d_{1}\cdots d_{L})$.
## IV Experiments
To demonstrate the effectiveness of our proposed MPGCN, we conducted a series
of experiments. In this section, we first describe the experimental dataset in
detail, including data preprocessing, data analysis, and the sharing-stop
network. Second, we present the parameter settings used in the experiments.
Next, we present the analysis of mobility patterns and evaluate the prediction
performance against other methods. Finally, we illustrate the application
value of our prediction results through a case study.
### IV-A Data Description and Analysis
#### IV-A1 Data Description
A real-world bus dataset, a typical kind of Spatio-temporal data, is employed
in our experiments. As shown in TABLE I and TABLE II, it contains a bus record
dataset collected from bus cards, credit cards, and QR codes, and a bus stop
arriving-leaving dataset comprising 12 used fields, which together cover the
majority of bus passengers and reflect the overall trend of passenger
mobility. The information about bus stops includes longitude, latitude, and
the sequence of bus stops along each route. The dataset was generated by buses
operated by the Panda Bus Company in Jiangsu over 30 days (nine weekend days
and 21 weekdays), from November 1st to 30th, 2019, covering 18 hours a day
from 05:00 to 23:00. Note that we do not use passengers' private information,
so there is no privacy issue with our data.
TABLE I: Description of the bus record dataset
Field | Annotation | Examples
---|---|---
bus_no | ID of each bus | 11180
card_no | ID of each passenger | 2230000010282075
cardType | Payment card type | 1
riding_time | Record time and date | 2019-11-01 05:29:20
routeId | ID of each route | 106
TABLE II: Bus stop arriving-leaving dataset details
Field | Annotation | Examples
---|---|---
bus_no | ID of each bus | 61189
enterTime | Enter stop time and date | 2019-11-01 06:37:59
leaveTime | Leave stop time and date | 2019-11-01 06:38:12
stopId | ID of each stop | 46976
routeId | ID of each route | 157
directId | 0 for upline, 1 for downline | 0
stayTime | Bus waiting time (seconds) | 13
Figure 5: An example of the passenger flow trend on November 1st, 2019.
#### IV-A2 Preprocessing and Analysis
From the raw data, we cannot directly obtain the bus stops at which passengers
board and scan their card or QR code. Hence, we first need to match bus stops.
Considering the real-world experience of bus riding in China, the bus may have
already departed the stop when passengers scan their card or QR code. This
implies the scanning time may not fall between the entering and leaving times
and thus has a certain deviation. Consequently, we empirically extend the
matching time range by 20 seconds. Algorithm 3 presents the technique for
matching stops; a runnable sketch follows the listing. Another difficulty with
data preprocessing is that, unlike subway travel, the drop-off stops of
passengers are not given, so the destinations of passengers are unclear.
Usually, travel by bus is symmetrical: a passenger's origin and destination
stops are swapped on the return trip. Based on this assumption, we extract all
boarding stops and the corresponding bus lines, and two stops are regarded as
the origin and destination of a passenger if they lie on the same line. The
symmetrical $OD$ matrix can thereby be inferred.
Algorithm 3 Matching stops
0: $Ride\\_Records$: The table based on bus record data; $Bus\\_Records$: the
table based on bus stop arriving-leaving data; $\tau$: the time interval;
0: $P2S$: the dictionary of matching stop;
1: Load $Bus\\_Records$; $\backslash\backslash$ convert to a specific data
structure for easy retrieval;
2: for $i$ in $Ride\\_Records$ do
3: $temp\leftarrow Bus\\_Records.where(i)$ $\backslash\backslash$ obtain
retrieve information based on bus id and date;
4: for $j$ in $temp$ do
5: if $i.riding\\_time.time\geq j.enterTime-\tau$ and
$i.riding\\_time.time\leq j.leaveTime+\tau$ then
6: $\backslash\backslash$ matching condition;
7: add $j.stopId$ to $P2S[i.card\\_no]$;
8: break;
9: end if
10: end for
11: end for
12: return $P2S$;
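A minimal Python transcription of Algorithm 3 follows; it assumes each ride record carries `bus_no`, `card_no`, and `riding_time`, and each bus record carries `bus_no`, `stopId`, `enterTime`, and `leaveTime` (field names taken from TABLEs I and II). The dictionary index is our choice of "specific data structure for easy retrieval".

```python
from collections import defaultdict
from datetime import timedelta

def match_stops(ride_records, bus_records, tau_seconds=20):
    """Algorithm 3: attach a boarding stop to each card/QR scan whose
    time falls within [enterTime - tau, leaveTime + tau] of a stop visit."""
    tau = timedelta(seconds=tau_seconds)
    # index stop visits by (bus, date) for fast retrieval (line 1)
    by_bus_day = defaultdict(list)
    for b in bus_records:
        by_bus_day[(b["bus_no"], b["enterTime"].date())].append(b)
    p2s = defaultdict(list)
    for r in ride_records:
        for b in by_bus_day[(r["bus_no"], r["riding_time"].date())]:
            # matching condition with the expanded time window (line 5)
            if b["enterTime"] - tau <= r["riding_time"] <= b["leaveTime"] + tau:
                p2s[r["card_no"]].append(b["stopId"])
                break
        # unmatched scans are simply skipped
    return p2s
```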
Figure 6: The heavy-tailed distributions of (a) the number of passengers who
pass a certain number of stops, and (b) the number of stops that have a
certain number of records. Figure 7: The distribution of degrees of all
passenger nodes.
After matching stops and inferring the $OD$ matrix, we expand the $OD$ matrix
into stop trajectories, which enables us to count the passenger flow at each
stop. In addition, since exploring passenger mobility and its laws requires
sufficiently many records per passenger, we filter out some passengers and
their records; for example, passengers with only a few records in the month,
or whose card records only begin near the end of the month, are removed. We
also found data anomalies, such as missing values, in the raw data; for
example, the bus stop arriving-leaving data of some buses are missing for
several days, which influences model training. Hence, inspired by [22], we use
linear interpolation to reduce errors when computing traffic flow. The
matching process also screens out some records that have little impact on
passenger mobility laws. Finally, the extracted data contain 857900 bus
records of 31353 passengers, 214 routes, and 1114 stops in Huai'an city.
Apart from data preprocessing, cleaning, and filtering, we analyze the
relationship between passenger flow and time. In Figure 5, we plot the
passenger flow for one day (November 1st, 2019). The daily flow has a similar
shape across days, with 2 or 3 peaks and 1 or 2 troughs. This similarity
indicates that traffic flow is regular and that passenger mobility follows
certain laws. In addition, as shown in Figure 6, we compute the number of
stops that each passenger passes and the average daily number of records at
each stop over the month, and describe their distributions. Figure 6 shows
that both distributions conform to heavy-tailed distributions, which inspired
our network embedding design: the existence of heavy-tailed distributions is a
key reason for the effectiveness of natural language models and network
representation learning.
After preprocessing, the sharing-stop network is constructed with Algorithm 1.
As shown in Figure 7, we count the degrees of all passenger nodes in the
network and plot their distribution, which again exhibits the expected heavy-
tailed form and reveals the potential laws of passenger mobility.
Figure 8: The prediction results with different numbers of clusters, where
$n\\_cluster=1$ denotes prediction without passenger patterns. (a) time_step=5
min. (b) time_step=15 min. (c) time_step=30 min.
### IV-B Experimental Settings
In the deep clustering with GCN, we extracted passenger mobility patterns
based on the sharing-stop network. Since the sharing-stop network is large, we
set the layer dimensions of the AE to
$\left|\mathcal{V}_{p}\right|$-100-100-500-16, the same as for the GCN module.
The results of our method are insensitive to the hyper-parameters, so we set
$(\theta_{1},\theta_{2},\theta_{3})=(1,0.5,0.05)$. The learning rate and the
number of epochs were 0.001 and 100, respectively. To comprehensively consider
the impact of the cluster number on passenger flow prediction, we set the
number of clusters (passenger mobility patterns) to 3, 4, and 5, and selected
the most suitable number of mobility patterns for the analysis of their latent
laws based on the prediction results.
For flow prediction, one hour of historical passenger flow data was used as
the input of our proposed method to predict the flow at the next time step,
with the time step set to 5, 15, and 30 minutes, respectively. There were 5 TC
blocks and 2 SGC blocks in our GCN2Flow, and the convolution kernel size of
both blocks was 3. The batch size, learning rate, and number of epochs were
64, 0.001, and 100, respectively. Before training, the flow of all bus stops
was normalized with the Z-score method, and the stop network adjacency was
normalized as a Laplacian matrix; a sketch of both normalizations is given
below.
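A short NumPy sketch of the two normalizations (function names are ours):

```python
import numpy as np

def zscore(flow):
    # Z-score normalization of each stop's flow series; flow: (time, stops)
    mu, sigma = flow.mean(axis=0), flow.std(axis=0) + 1e-8
    return (flow - mu) / sigma, mu, sigma

def normalized_adjacency(a):
    # symmetric normalization D^{-1/2} (A + I) D^{-1/2} used by the SGC block
    a_tilde = a + np.eye(a.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_tilde.sum(axis=1))
    return a_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
```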
Further, similar to previous works [28, 4], we utilized the three metrics most
commonly used to compare passenger flow prediction methods: Mean Absolute
Error (MAE), Root Mean Square Error (RMSE), and Correlation Coefficient (CC).
* -
Mean Absolute Error (MAE). MAE is the mean of all absolute errors between the
predicted values and their corresponding real values, whose equation is given
as follows:
$MAE=\dfrac{1}{n}\sum_{i=1}^{n}\left|\hat{y_{i}}-y_{i}\right|$ (15)
where $\hat{y_{i}}$ and $y_{i}$ denote the predicted values and real values,
respectively.
* -
Root Mean Square Error (RMSE). RMSE measures the deviation between predicted
values and their respective real values. RMSE is defined as:
$RMSE=\sqrt{\dfrac{\sum_{i=1}^{n}\left|\hat{y_{i}}-y_{i}\right|^{2}}{n}}$ (16)
* -
Correlation Coefficient (CC). CC is used to verify the correlation between
variables and has different forms. In this paper, we used Pearson CC to
measure correlation possessing the following formula:
$CC=\dfrac{cov(\hat{y_{i}},y_{i})}{\sqrt{var(\hat{y_{i}})\cdot var(y_{i})}}$
(17)
where $cov(\cdot,\cdot)$ and $var(\cdot)$ represent the covariance and
variance, respectively.
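For reference, a minimal NumPy implementation of the three metrics in Eqs. (15)–(17):

```python
import numpy as np

def evaluate(y_pred, y_true):
    """MAE (Eq. 15), RMSE (Eq. 16), and Pearson CC (Eq. 17)
    between two flat arrays of predicted and real flows."""
    err = y_pred - y_true
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(np.mean(err ** 2)))
    cc = float(np.corrcoef(y_pred, y_true)[0, 1])
    return mae, rmse, cc
```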
In addition, our model is implemented with the PyTorch framework, and the
experiments are executed on an NVIDIA RTX 2080 Ti GPU.
### IV-C Analysis of Passenger Mobility Patterns
To investigate the impact of extracting passenger mobility patterns on
prediction, we varied the number of clusters used in the prediction models. In
Figure 8, $n\\_cluster=1$ indicates that the GCN2Flow and LSTM [43] models
were directly used to predict passenger flow, while $n\\_cluster$=3, 4, and 5
means that the MPGCN and Multi-Pattern LSTM (MP-LSTM) models were used to
predict the flow of passengers for 3, 4, and 5 different mobility patterns,
respectively. Figure 8 shows that combining passenger mobility patterns
effectively enhances prediction performance, i.e., MPGCN and MP-LSTM
outperform GCN2Flow and LSTM, respectively.
The MPGCN and MP-LSTM models produced the best performance for
$n\\_cluster=4$; therefore, we selected $n\\_cluster=4$ as the number of
passenger mobility patterns for further analysis. In our study, each cluster
is viewed as a mobility pattern. Given the construction of the sharing-stop
network, we conjecture that passenger nodes in the same pattern tend to have
similar travel habits, such as using fixed and frequent bus stops or routes.
The numbers of passengers in the four patterns are 11857, 10537, 3475, and
5484, which add up to 31353.
To further mine the special laws hidden in the mobility patterns, we fit
several heavy-tailed distributions to the number of stops passengers pass
($n_{s}$): power-law, exponential, log-normal, and Weibull distributions. The
probability density functions (pdfs) of these distributions are shown in TABLE
IV, and a fitting sketch is given after this paragraph. From Figure 9, for
Patterns 1, 2, and 3, the distributions of $n_{s}$ follow a similar law, i.e.,
the log-normal and Weibull distributions fit better than the remaining two
(power-law and exponential). Before $P(n_{s})$ reaches its peak, the Weibull
distribution fits better, and after the peak, the log-normal distribution
becomes better. Comparing the key parameters of the fitted distributions,
$c\simeq 87,91,92$ in the log-normal pdf, and $a\simeq 99,102,99$ and $r\simeq
1.8,1.9,1.7$ in the Weibull pdf, further confirms the similarity between the
log-normal and Weibull fits. For Pattern 4, with $c\simeq 71$ in the
log-normal pdf and $a\simeq 73,r\simeq 1.4$ in the Weibull pdf, the log-normal
distribution achieves the best fit. More specifically, through quantitative
analysis, we note that 80 percent of the passengers of Patterns 1, 2, and 3
pass at most 127 stops, while 80 percent of the passengers of Pattern 4 pass
at most 101 stops.
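As a sketch of how such fits can be obtained, the snippet below uses `scipy.optimize.curve_fit` with the log-normal and Weibull pdfs from TABLE IV (the Weibull location $u$ is fixed to 0 for simplicity); the initial guesses are only illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_pdf(x, A, w, c):
    # log-normal form from TABLE IV
    return A / (x * w * np.sqrt(2 * np.pi)) * np.exp(
        -np.log(x / c) ** 2 / (2 * w ** 2))

def weibull_pdf(x, a, r):
    # Weibull form from TABLE IV with location u = 0
    return (r / a) * (x / a) ** (r - 1) * np.exp(-(x / a) ** r)

def fit_pattern(ns, p_ns):
    """Fit both candidates to the empirical P(n_s) curve of one pattern."""
    ln_params, _ = curve_fit(lognormal_pdf, ns, p_ns, p0=(1.0, 1.0, 80.0))
    wb_params, _ = curve_fit(weibull_pdf, ns, p_ns, p0=(90.0, 1.8),
                             maxfev=10000)
    return ln_params, wb_params
```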
Furthermore, to verify our conjecture that passengers in the same pattern tend
to have similar travel habits, we count the flow contribution of the
passengers of the four mobility patterns on each bus route and analyze the
distribution proportions for the top 40 bus routes by total passenger flow. We
say that a pattern has a route preference for a route when the proportion of
that pattern's passengers on the route exceeds 50%. As shown in TABLE III (see
the Appendix for details), noticeable preference differences between mobility
patterns can be found; for example, passengers of Pattern 4 contribute more
than 90% of the flow on routes 62, 63, 65, and 66. For the routes not shown in
TABLE III, we also found that the proportions of Pattern 1 and Pattern 2 are
relatively high and close, both above 30% or 40%, e.g., on routes 2, 4, 16,
26, and 31, indicating a shared similarity between these two mobility patterns
in terms of travel. These analysis results verify the effectiveness of our
implicit mobility pattern extraction through deep clustering. In other words,
passengers of the same pattern tend to travel by bus on certain fixed routes
and contribute most of the traffic flow on those routes, while passengers with
different patterns often choose different routes. Based on these latent laws
of mobility patterns, it is effective to combine them with the traffic flow
prediction task.
TABLE III: Route preference of the four mobility patterns
Pattern No. | List of Route Preference
---|---
1 | [5, 7, 12, 15, 18, 19, 32, 33, 36, 39, 53, 88, 89, 100]
2 | [1, 6, 11, 14, 22, 23, 28, 38, 116, K1]
3 | [50, 91, 713]
4 | [10, 20, 62, 63, 65, 66, 69]
TABLE IV: Heavy-tailed distributions
Distribution | Probability Density Function (pdf)
---|---
Power-law | $ax^{b}$
Exponential | $a\cdot\exp(bx)$
Lognormal | $\frac{A}{xw\sqrt{2\pi}}\exp[-\frac{(\ln(x/c))^{2}}{2w^{2}}]$
Weibull | $\frac{r}{a}(\frac{x-u}{a})^{r-1}\exp[-(\frac{x-u}{a})^{r}]$
Figure 9: Description of the four mobility patterns of passengers with fitted
distributions: (a) Pattern 1, (b) Pattern 2, (c) Pattern 3, (d) Pattern 4.
TABLE V: Performance comparison of different approaches
t_step | Metric | LR | SVR | GBDT | ARIMA | LSTM | DCRNN | STGCN | MP-LSTM | GCN2Flow | MPGCN
---|---|---|---|---|---|---|---|---|---|---|---
5 min | MAE | 21.56 | 22.62 | 22.85 | 23.90 | 15.30 | 13.93 | 14.01 | 13.98 | 13.81 | 9.16
5 min | RMSE(%) | 35.14 | 37.45 | 37.85 | 43.68 | 23.76 | 21.28 | 22.25 | 17.88 | 20.41 | 15.82
5 min | CC | 0.823 | 0.942 | 0.968 | 0.955 | 0.975 | 0.983 | 0.978 | 0.980 | 0.986 | 0.992
15 min | MAE | 70.27 | 67.52 | 68.36 | 70.72 | 51.04 | 50.06 | 48.96 | 49.08 | 50.28 | 45.23
15 min | RMSE(%) | 115.26 | 111.74 | 113.28 | 130.41 | 99.82 | 92.33 | 91.45 | 92.56 | 89.39 | 81.33
15 min | CC | 0.901 | 0.945 | 0.972 | 0.953 | 0.973 | 0.976 | 0.979 | 0.976 | 0.978 | 0.984
30 min | MAE | 164.69 | 155.73 | 155.92 | 161.99 | 153.69 | 137.31 | 144.52 | 144.78 | 142.05 | 130.03
30 min | RMSE(%) | 283.58 | 270.42 | 294.71 | 321.23 | 279.82 | 255.09 | 267.62 | 268.09 | 260.88 | 240.48
30 min | CC | 0.885 | 0.937 | 0.960 | 0.952 | 0.954 | 0.968 | 0.964 | 0.964 | 0.966 | 0.975
### IV-D Passenger Flow Prediction
#### IV-D1 Prediction Result
Based on the passenger mobility patterns obtained from the previous
experiments, we predicted the passenger flow for each pattern using GCN2Flow
(the MPGCN model described in Algorithm 2 applied to individual patterns).
Figure 10 shows the short-term passenger flow prediction results of our
proposed GCN2Flow and MPGCN with $time\\_step=5$ for a weekday and a weekend
day. Comparing the predicted flow with the real flow shows that excellent
prediction results are achieved. Despite the different trends of weekdays and
weekend days, our models capture the features of the passenger flow trends,
i.e., the flow peaks and troughs. In terms of spatial features, the SGC block
is capable of quickly capturing the dynamic flow changes in the stop network
based on the bus route network. Moreover, by combining the per-pattern
passenger flow predictions with MPGCN, the prediction accuracy is further
improved.
Figure 10: Prediction results of MPGCN and GCN2Flow with time_step=5 min for
(a) a weekday (Nov. 29th, 2019) and (b) a weekend day (Nov. 30th, 2019). (Zoom
in for a suitable view during a peak and a trough period.)
#### IV-D2 Comparison of prediction approaches
To verify the capability of our approach, we predicted passenger flow on the
same dataset with several popular relevant models, including machine learning
models, a mathematical-statistical model, and neural network models: Logistic
Regression (LR), Support Vector Regression (SVR), Gradient Boosting Decision
Tree (GBDT), ARIMA, LSTM [43], DCRNN [44], STGCN [28], and MP-LSTM (LSTM with
passenger mobility patterns).
The results of the prediction evaluation are presented in TABLE V. Our
proposed MPGCN achieves the best performance on all three evaluation metrics.
Overall, because of the complexity of the data, the mathematical-statistical
model, ARIMA, predicts the worst. The machine learning models have similar
prediction performance: they predict short-term flow well, but their accuracy
degrades seriously for the long term because they do not consider the
relationships of spatial geographic information. Although the prediction
performance of the neural network model LSTM is respectable, with better
results than the mathematical-statistical model and the other machine learning
models, it is still inferior to our models. It is worth mentioning that the
results of STGCN and DCRNN are close to those of our GCN2Flow. Moreover,
applying passenger mobility patterns in the prediction model (MP-LSTM) further
enhances its performance. In particular, for the $time\\_step=30$ setting,
MPGCN achieves a 5.3% MAE reduction ($137.31\rightarrow 130.03$) compared to
the best baseline. To sum up, the ability of GCN2Flow and the effectiveness of
combining passenger mobility patterns in MPGCN vindicate their application to
passenger flow prediction.
Figure 11: The route optimization results: the trend of (a) passenger flow,
and (b) the standard deviation of the normalized flow. (Zoom in for a suitable
view during a peak and a trough period.)
### IV-E A Case Study of Route Optimization
On the one hand, passenger flow prediction can feed many downstream tasks; on
the other hand, the feedback from downstream tasks can also verify the
accuracy of the prediction. In our case, we define a route optimization task
for redistributing passenger flow and improving the on-bus travel experience,
which demonstrates the value of our proposed MPGCN in another way.
In the experiment, based on the OD matrix, we utilize the prediction results
to recommend to each passenger an optimal route that satisfies the objective
function and constraint conditions (Eqs. 13 and 14). We select the data of the
last day in the dataset for the optimization experiment. We then recount the
last day's flow after optimizing the passengers' travel routes and calculate
the standard deviation of the flow over all bus stops as a simple evaluation
metric. As Figure 11 shows, our route optimization has little impact on the
total passenger flow because of the constraint conditions; at the same time,
it reduces the $std$, which means balancing the flow over all bus stops,
reducing the probability of congestion on buses, and achieving traffic
diversion in bus travel. Hence, the prediction results of our MPGCN are
accurate enough to be effectively applied to traffic-flow-based downstream
tasks.
## V Discussion
Overall, our study focuses on solving the problem of passenger traffic
prediction using a novel concept, the passenger mobility pattern. According to
the result analysis, deep clustering with GCN can implicitly extract passenger
mobility patterns from the sharing-stop network. In terms of spatial
characteristics, passengers of the same mobility pattern are highly similar:
they share similar travel bus stops and routes, and their contributions to the
flow of specific routes are dominant. However, our sharing-stop network does
not encode the relationship of passenger travel times, so we were unable to
uncover any potential temporal laws in the extracted mobility patterns, which
merits further investigation. Our study can also be applied to other modes of
transport, such as the subway. It should be highlighted, however, that we are
primarily interested in mobility patterns based on frequent and consistent
travel, which necessitates sufficient co-occurrence relationships among
passengers. Therefore, our method may not be suitable for traffic flow
prediction for infrequently used transport modes; for example, if a passenger
only travels by air once or twice a year, it is hard to discover their
mobility patterns.
In addition, the route optimization problem in public transport systems
involves a variety of specific tasks in different scenarios, including route
network design, frequency setting, timetable optimization [45], schedule
optimization [46], and passenger assignment adjustment; travel time, waiting
time, path length, the amount of crowding on the buses, and other objective
functions all vary across these sub-problems [47]. In essence, our case study
of route optimization concerns passenger assignment adjustment, so we do not
operate on route design, bus timetables, or schedules. Unlike the tasks
mentioned above, our optimization aims at passenger diversion and a lower
level of bus crowding, as stated in Section III-D. It is also understandable
that passengers can be informed of route congestion and recommended an
alternative travel route to alleviate overcrowding at bus stops. Our route
optimization also has shortcomings; for example, the new route selected in the
optimization may be longer than the original route, resulting in longer travel
times or more transfers. Therefore, our case study is merely a simple
application example: it demonstrates the advantages of the passenger flow
prediction in our MPGCN, and it also provides researchers with an idea for
extending it to more practical industrial applications.
## VI Conclusion and Future Work
In this paper, we put forward a passenger flow prediction framework, MPGCN,
comprising sharing-stop network construction, passenger mobility pattern
recognition, and passenger flow prediction. We executed experiments to analyze
the extracted mobility patterns and discovered that different mobility
patterns fit heavy-tailed distributions well and have their own travel laws
and route preferences. Our framework gives full consideration to the impact of
passenger mobility patterns on prediction. We conducted extensive experiments
to demonstrate that MPGCN can accurately predict bus passenger flow based on
real bus record data. Finally, we designed a simple case study that shows the
value of our accurate predictions in downstream tasks such as route
optimization. The prediction results of MPGCN can also be applied extensively
to other ITS services and strategies for sustainable public transportation,
such as subsequent bus scheduling, route planning, and congestion management.
When sufficient multi-source sensor data are available, we will attempt to
provide fine-grained analysis and services based on human mobility patterns.
For further research, we will construct different personal mobility prediction
architectures that consider specific Spatio-temporal information.
## Appendix: Analysis of Passenger Mobility Patterns
TABLE VI shows the flow contribution proportions of the passengers of the four
mobility patterns on the top 40 bus routes by total passenger flow, which are
used to analyze the route preferences and travel rules of the mobility
patterns.
TABLE VI: Distribution proportion of passengers of the four mobility patterns
Route No. | Pattern 1 (%) | Pattern 2 (%) | Pattern 3 (%) | Pattern 4 (%) | Total flow | Route No. | Pattern 1 (%) | Pattern 2 (%) | Pattern 3 (%) | Pattern 4 (%) | Total flow
---|---|---|---|---|---|---|---|---|---|---|---
1 | 31.38 | 65.48 | 1.61 | 1.53 | 51736 | 32 | 67.47 | 13.67 | 18.37 | 0.49 | 15335
28 | 28.57 | 62.74 | 7.18 | 1.51 | 31822 | 89 | 95.62 | 2.98 | 1.09 | 0.31 | 15278
16 | 47.9 | 49.08 | 2.13 | 0.89 | 28659 | 33 | 64.29 | 16.68 | 17.83 | 1.2 | 14689
3 | 42.37 | 18.18 | 39.06 | 0.4 | 26413 | 53 | 62.31 | 17.14 | 20.14 | 0.41 | 14543
4 | 44.77 | 30.65 | 18.6 | 5.98 | 24472 | 38 | 20.38 | 70.68 | 6.59 | 2.35 | 14354
10 | 2.95 | 32.7 | 1.63 | 62.72 | 23871 | 50 | 18.98 | 17.23 | 58.64 | 5.16 | 14233
12 | 74.03 | 21.6 | 2.52 | 1.84 | 23593 | 63 | 0.3 | 1.74 | 0.47 | 97.49 | 14167
69 | 22.57 | 12.81 | 10.32 | 54.3 | 22987 | 14 | 4.67 | 85.88 | 2.11 | 7.34 | 12390
11 | 28.1 | 64.2 | 6.32 | 1.38 | 22418 | 88 | 64.2 | 14.78 | 17.6 | 3.43 | 11007
26 | 36.95 | 45.66 | 12.87 | 4.52 | 21992 | 91 | 19.85 | 19.45 | 59.51 | 1.18 | 10900
2 | 37.83 | 45.26 | 16.47 | 0.45 | 21705 | 6 | 24.9 | 72.96 | 0.99 | 1.16 | 10893
23 | 20.23 | 65.06 | 13.94 | 0.77 | 21225 | 65 | 1.64 | 5.62 | 2.32 | 90.43 | 10884
18 | 66.34 | 14.83 | 17.81 | 1.02 | 21199 | 5 | 73.25 | 19.92 | 5.45 | 1.38 | 10689
116 | 44.96 | 52.65 | 1.61 | 0.79 | 21103 | 100 | 81.18 | 14.96 | 2.98 | 0.89 | 10504
31 | 42.51 | 46.65 | 9.57 | 1.26 | 18991 | 39 | 87.42 | 9.75 | 1.33 | 1.51 | 10291
22 | 35.62 | 61.89 | 1.84 | 0.65 | 18792 | 66 | 0.56 | 1.39 | 0.28 | 97.76 | 10192
7 | 68.24 | 27.26 | 4.17 | 0.33 | 17846 | 20 | 1.62 | 15.48 | 0.47 | 82.44 | 9872
62 | 1.52 | 2.1 | 1.13 | 95.24 | 17319 | 713 | 18.21 | 19.56 | 60.81 | 1.42 | 9664
36 | 79.62 | 16.33 | 3.59 | 0.46 | 17060 | K1 | 37.68 | 58.37 | 1.6 | 2.34 | 9660
15 | 76.89 | 17.74 | 4.97 | 0.4 | 16705 | 19 | 94.81 | 3.54 | 0.94 | 0.71 | 9633
## References
* [1] E. Sisinni, A. Saifullah, S. Han, U. Jennehag, and M. Gidlund, “Industrial internet of things: Challenges, opportunities, and directions,” _IEEE Transactions on Industrial Informatics_ , vol. 14, no. 11, pp. 4724–4734, 2018.
* [2] M. R. Rahimi, N. Venkatasubramanian, S. Mehrotra, and A. V. Vasilakos, “On optimal and fair service allocation in mobile cloud computing,” _IEEE Transactions on Cloud Computing_ , vol. 6, no. 3, pp. 815–828, 2018.
* [3] X. Kong, S. Tong, H. Gao, G. Shen, K. Wang, M. Collotta, I. You, and S. Das, “Mobile edge cooperation optimization for wearable internet of things: A network representation-based framework,” _IEEE Transactions on Industrial Informatics_ , 2020.
* [4] X. Kong, M. Li, T. Tang, K. Tian, L. Moreira-Matias, and F. Xia, “Shared subway shuttle bus route planning based on transport data analytics,” _IEEE Transactions on Automation Science and Engineering_ , vol. 15, no. 4, pp. 1507–1520, 2018.
* [5] S. Ruan, J. Bao, Y. Liang, R. Li, T. He, C. Meng, Y. Li, Y. Wu, and Y. Zheng, “Dynamic public resource allocation based on human mobility prediction,” _Proc. ACM Interact. Mob. Wearable Ubiquitous Technol._ , vol. 4, no. 1, 2020.
* [6] G. Atluri, A. Karpatne, and V. Kumar, “Spatio-temporal data mining: A survey of problems and methods,” _ACM Comput. Sur._ , vol. 51, no. 4, 2018.
* [7] R. Du, P. Santi, M. Xiao, A. V. Vasilakos, and C. Fischione, “The sensable city: A survey on the deployment and management for smart city monitoring,” _IEEE Communications Surveys Tutorials_ , vol. 21, no. 2, pp. 1533–1560, 2019\.
* [8] S. Wang, J. Cao, and P. Yu, “Deep learning for spatio-temporal data mining: A survey,” _IEEE Transactions on Knowledge and Data Engineering_ , 2020.
* [9] J. Zhou, E. Murphy, and Y. Long, “Commuting efficiency in the beijing metropolitan area: an exploration combining smartcard and travel survey data,” _Journal of Transport Geography_ , vol. 41, pp. 175–183, 2014.
* [10] F. Pili, A. Olivo, and B. Barabino, “Evaluating alternative methods to estimate bus running times by archived automatic vehicle location data,” _IET Intelligent Transport Systems_ , vol. 13, no. 3, pp. 523–530, 2019.
* [11] A. Mourad, J. Puchinger, and C. Chu, “A survey of models and algorithms for optimizing shared mobility,” _Transportation Research Part B: Methodological_ , vol. 123, pp. 323–346, 2019.
* [12] M. Gan, Y. Cheng, K. Liu, and G. lin Zhang, “Seasonal and trend time series forecasting based on a quasi-linear autoregressive model,” _Applied Soft Computing_ , vol. 24, pp. 13–18, 2014.
* [13] L. Liu and R.-C. Chen, “A novel passenger flow prediction model using deep learning methods,” _Transportation Research Part C: Emerging Technologies_ , vol. 84, pp. 74–91, 2017.
* [14] Y. Bai, Z. Sun, B. Zeng, J. Deng, and C. Li, “A multi-pattern deep fusion model for short-term bus passenger flow forecasting,” _Applied Soft Computing_ , vol. 58, pp. 669–680, 2017.
* [15] B. Du, H. Peng, S. Wang, M. Z. A. Bhuiyan, L. Wang, Q. Gong, L. Liu, and J. Li, “Deep irregular convolutional residual lstm for urban traffic passenger flows prediction,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 21, no. 3, pp. 972–985, 2020.
* [16] Y. Zhang, S. Wang, B. Chen, J. Cao, and Z. Huang, “Trafficgan: Network-scale deep traffic prediction with generative adversarial nets,” _IEEE Transactions on Intelligent Transportation Systems_ , pp. 1–12, 2019.
* [17] K. Xu, W. Hu, J. Leskovec, and S. Jegelka, “How powerful are graph neural networks?” in _International Conference on Learning Representations (ICLR)_ , 2019.
* [18] G. Shen, Z. Zhao, and X. Kong, “Gcn2cdd: A commercial district discovery framework via embedding space clustering on graph convolution networks,” _IEEE Transactions on Industrial Informatics_ , 2021, doi: 10.1109/TII.2021.3051934.
* [19] X. Han, G. Shen, X. Yang, and X. Kong, “Congestion recognition for hybrid urban road systems via digraph convolutional network,” _Transportation Research Part C: Emerging Technologies_ , vol. 121, p. 102877, 2020.
* [20] Z. Pan, Z. Wang, W. Wang, Y. Yu, J. Zhang, and Y. Zheng, “Matrix factorization for spatio-temporal neural networks with applications to urban flow prediction,” in _Proceedings of the 28th ACM International Conference on Information and Knowledge Management_ , ser. CIKM ’19. Association for Computing Machinery, 2019, pp. 2683–2691.
* [21] B. Barabino, C. Lai, and A. Olivo, “Fare evasion in public transport systems: a review of the literature,” _Public Transport_ , vol. 12, pp. 27–88, 2020, doi: https://doi.org/10.1007/s12469-019-00225-w.
* [22] F. McLeod, “Estimating bus passenger waiting times from incomplete bus arrivals data,” _Journal of the Operational Research Society_ , vol. 58, no. 11, pp. 1518–1525, 2007, doi: 10.1057/palgrave.jors.2602298.
* [23] B. Barabino, M. Di Francesco, and S. Mozzoni, “Time reliability measures in bus transport services from the accurate use of automatic vehicle location raw data,” _Quality and Reliability Engineering International_ , vol. 33, no. 5, pp. 969–978, 2017.
* [24] N. Sadek and A. Khotanzad, “Multi-scale high-speed network traffic prediction using k-factor gegenbauer arma model,” in _2004 IEEE International Conference on Communications (IEEE Cat. No.04CH37577)_ , vol. 4, 2004, pp. 2148–2152.
* [25] S. V. Kumar and L. Vanajakshi, “Short-term traffic flow prediction using seasonal arima model with limited input data,” _European Transport Research Review_ , vol. 7, no. 3, p. 21, 2015.
* [26] W. Ge, Y. Cao, Z. Ding, and L. Guo, “Forecasting model of traffic flow prediction model based on multi-resolution svr,” in _Proceedings of the 2019 3rd International Conference on Innovation in Artificial Intelligence_ , ser. ICIAI 2019. Association for Computing Machinery, 2019, pp. 1–5.
* [27] P. Dell’Acqua, F. Bellotti, R. Berta, and A. De Gloria, “Time-aware multivariate nearest neighbor regression methods for traffic flow prediction,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 16, no. 6, pp. 3393–3402, 2015.
* [28] B. Yu, H. Yin, and Z. Zhu, “Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting,” in _Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI)_ , 2018.
* [29] L. Zhao, Y. Song, C. Zhang, Y. Liu, P. Wang, T. Lin, M. Deng, and H. Li, “T-gcn: A temporal graph convolutional network for traffic prediction,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 21, no. 9, pp. 3848–3858, 2020.
* [30] Z. Cui, K. Henrickson, R. Ke, and Y. Wang, “Traffic graph convolutional recurrent neural network: A deep learning framework for network-scale traffic learning and forecasting,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 21, no. 11, pp. 4883–4894, 2020.
* [31] X. Yan, C. Zhao, Y. Fan, Z. Di, and W. Wang, “Universal predictability of mobility patterns in cities,” _Journal of the Royal Society Interface_ , vol. 11, no. 100, p. 20140834, 2014.
* [32] G. Qi, A. Huang, W. Guan, and L. Fan, “Analysis and prediction of regional mobility patterns of bus travellers using smart card data and points of interest data,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 20, no. 4, pp. 1197–1214, 2019.
* [33] M. Nitti, F. Pinna, L. Pintor, V. Pilloni, and B. Barabino, “iabacus: A wi-fi-based automatic bus passenger counting system,” _Energies_ , vol. 13, no. 6, p. 1446, 2020.
* [34] F. Xia, J. Wang, X. Kong, Z. Wang, J. Li, and C. Liu, “Exploring human mobility patterns in urban scenarios: A trajectory data perspective,” _IEEE Communications Magazine_ , vol. 56, no. 3, pp. 142–149, 2018.
* [35] C. Comito, D. Falcone, and D. Talia, “Mining human mobility patterns from social geo-tagged data,” _Pervasive and Mobile Computing_ , vol. 33, pp. 91–107, 2016.
* [36] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun, “Spectral networks and locally connected networks on graphs,” in _International Conference on Learning Representations (ICLR)_ , 2014.
* [37] T. N. Kipf and M. Welling, “Variational graph auto-encoders,” _NIPS Workshop on Bayesian Deep Learning_ , 2016.
* [38] D. Bo, X. Wang, C. Shi, M. Zhu, E. Lu, and P. Cui, “Structural deep clustering network,” in _Proceedings of The Web Conference 2020_ , ser. WWW ’20. Association for Computing Machinery, 2020, p. 1400–1410.
* [39] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” in _International Conference on Learning Representations (ICLR)_ , 2017.
* [40] D. Chai, L. Wang, and Q. Yang, “Bike flow prediction with multi-graph convolutional networks,” in _Proceedings of the 26th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems_ , ser. SIGSPATIAL ’18. Association for Computing Machinery, 2018, pp. 397–400.
* [41] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio, “Graph attention networks,” _arXiv preprint arXiv:1710.10903_ , 2017.
* [42] Y. N. Dauphin, A. Fan, M. Auli, and D. Grangier, “Language modeling with gated convolutional networks,” ser. Proceedings of Machine Learning Research, vol. 70. PMLR, 2017, pp. 933–941.
* [43] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in _Advances in Neural Information Processing Systems 27_. Curran Associates, Inc., 2014, pp. 3104–3112.
* [44] Y. Li, R. Yu, C. Shahabi, and Y. Liu, “Diffusion convolutional recurrent neural network: Data-driven traffic forecasting,” in _International Conference on Learning Representations (ICLR)_ , 2018.
* [45] R. Borndörfer, H. Hoppmann, and M. Karbstein, “Passenger routing for periodic timetable optimization,” _Public Transport_ , vol. 9, no. 1, pp. 115–135, 2017.
* [46] W. Wu, P. Li, R. Liu, W. Jin, B. Yao, Y. Xie, and C. Ma, “Predicting peak load of bus routes with supply optimization and scaled shepard interpolation: A newsvendor model,” _Transportation Research Part E: Logistics and Transportation Review_ , vol. 142, p. 102041, 2020.
* [47] J. Durán-Micco and P. Vansteenwegen, “A survey on the transit network design and frequency setting problem,” _Public Transport_ , pp. 1–36, 2021.
Xiangjie Kong (M'13–SM'17) received the B.Sc. and Ph.D. degrees from Zhejiang University, Hangzhou, China. He is currently a Full Professor with the College of Computer Science and Technology, Zhejiang University of Technology. Previously, he was an Associate Professor with the School of Software, Dalian University of Technology, China. He has published over 160 scientific papers in international journals and conferences (with over 130 indexed by ISI SCIE). His research interests include network science, mobile computing, and computational social science.

Kailai Wang received the B.Sc. degree in software engineering from the Dalian University of Technology, China, in 2019, where he is currently pursuing the master's degree with the School of Software. His research interests include analysis of complex networks, network science, and urban computing.

Mingliang Hou received the B.Sc. degree from Dezhou University and the M.Sc. degree from Shandong University, Shandong, China. He is currently pursuing the Ph.D. degree in software engineering with the Dalian University of Technology, Dalian, China. His research interests include graph learning, city science, and social computing.

Feng Xia (M'07–SM'12) received the B.Sc. and Ph.D. degrees from Zhejiang University, Hangzhou, China. He is currently an Associate Professor and Discipline Leader in the School of Engineering, IT and Physical Sciences, Federation University Australia. Dr. Xia has published 2 books and over 300 scientific papers in international journals and conferences. His research interests include data science, social computing, and systems engineering. He is a Senior Member of IEEE and ACM.

Gour C. Karmakar (M'01) received a B.Sc. degree in Computer Science and Engineering from CSE, BUET in 1993, and Masters and Ph.D. degrees in Information Technology from the Faculty of Information Technology, Monash University, in 1999 and 2003, respectively. He is currently an Associate Professor at Federation University Australia. He has published over 161 peer-reviewed research publications, including 39 international peer-reviewed reputed journal papers, and was awarded six best papers in reputed international conferences. He received a prestigious ARC Linkage grant in 2011. His research interests include multimedia signal processing, traffic signal management, big data analytics, the Internet of Things, and cybersecurity, including trustworthiness measures.

Jianxin Li received the Ph.D. degree in computer science from Swinburne University of Technology, Australia, in 2009. He is an Associate Professor of Data Science in the School of Information Technology, Deakin University. He has published 90 high-quality papers in top-tier venues, including The VLDB Journal, IEEE TKDE, PVLDB, and IEEE ICDE. He has received two competitive grants from the Australian Research Council. His research interests include graph query processing, social network computing, and information network data analytics.
# Repeat Voting: Two-Vote May Lead More People To Vote

(The idea of repeat voting evolved following discussions at the Annual Conference of the Federmann Center for the Study of Rationality in February 2017. The author thanks the Center's members, in particular Maya Bar-Hillel, Orit Kedar, and Motty Perry, for their suggestions.)

Sergiu Hart, The Hebrew University of Jerusalem (Federmann Center for the Study of Rationality, Department of Economics, and Institute of Mathematics). E-mail: <EMAIL_ADDRESS>. Web site: http://www.ma.huji.ac.il/hart

(October 17, 2017)
###### Abstract
A _repeat voting_ procedure is proposed, whereby voting is carried out in two
identical rounds. Every voter can vote in each round, the results of the first
round are made public before the second round, and the final result is
determined by adding up all the votes in both rounds. It is argued that this
simple modification of election procedures may well increase voter
participation and result in more accurate and representative outcomes.
Suppose that it is two weeks after the Brexit vote, and there is a new vote on the same issue—what will the result be? Given the way the original vote went, will people change their minds and vote differently? Will the original results cause people who had not voted to cast their vote in this second round? Will the final result be different? (Before the Brexit vote, a petition that called for a second vote in case of low participation and a narrow winning margin was launched; it got 22 signatures before the vote, and more than 2 million signatures in the two days after the result was announced. Interestingly, the initiator was a "leave" supporter who believed that "leave" would lose. See, e.g., http://www.bbc.com/news/uk-politics-eu-referendum-36629324.) There are no clear answers to any of these questions, but one can easily provide arguments either way. Now carry out a similar thought experiment regarding the latest presidential election in the U.S., or whatever your latest favorite, or unsettling, election is …
Democratic elections are beset by many problems. One issue is low voter
turnout, which at times is only one-half of the eligible voters or even less.
Another issue is excessive reliance on polls: polls affect voters, despite repeatedly turning out to be quite far from accurate. This also relates to
the low-turnout issue: “I will not waste my time voting, as my candidate is in
any case sure to win” (or “… sure to lose”). Polls may also lead people not to
cast their vote for their preferred candidate, if, for example, they do not
want him or her to win by too large a majority, or if they want to voice a
certain “protest” through their vote—only to find out that in the end their
candidate did not win at all. Yet another issue concerns unexpected events
that occur extremely close to election time, too late to be able to be
addressed by the candidates, such as a terrorist attack, the publication of
false information, bad weather, and so on. What is common to many of these
situations is that people might want to change their vote, or their non-
participation in the election, once they see the actual results and how these
came about.
To address these and other issues, I propose the use of the following REPEAT
VOTING procedure.
A. Voting is carried out in two rounds.

B. Every eligible voter is entitled (and encouraged) to vote in each of the two rounds.

C. All the votes of the two rounds are added up, and the final election result is obtained by applying the current election rules (be they plurality, special majority, electoral college, and so on) to these two-round totals.

D. The results of the first round are officially counted and published; the second round takes place, say, two weeks after the first round, but no less than one week after the official publication of the first round's results.
What are the _advantages_ of repeat voting?
1. _Polls._ The first round becomes a de facto giant opinion poll; however, because the votes of the first round count, it is a much more truthful poll, in contrast to the usual pre-election polls, where giving untruthful answers—whether intentionally or not—carries no cost. (Someone once quipped that Israelis tell the truth in polls, but lie when they cast their vote.) The combination of the large sample size and incentivized truthfulness makes the results of the first round a significantly more accurate predictor of the electorate's views. It is thus crucial for the votes of the first round to count no less than the votes of the second round, which explains why we are adding up the votes of the two rounds, rather than having only the second round determine the outcome.
2. _Participation._ Voters who do have a preference that is not however strong enough to make them vote in the first round may well be led to vote in the second round because of the results of the first round. Thus, participation in at least one round of the election is expected to increase. It is better that people vote even in one round than not at all. (Voters who have strong or extreme positions will most probably vote in both rounds; their relative weight in the final result will decrease when enough people are motivated to vote in the second round, which may well happen if such extreme positions get higher shares of the vote in the first round.) One indirect advantage is that people who vote may feel closer to the elected officials, and to the democratic system in general.
3. _Representative results._ The final results may be more representative,
because the second round makes it possible for the voters as well as for the
candidates to “correct” any problems of the first round. This includes the
effects of wrong predictions by the polls, as well as any special
circumstances and events that occurred close to election time (see the second
paragraph of the paper; it is unlikely that such unexpected events will happen
both times). All this, again, can only increase the robustness of the results:
they become more trustworthy and more accepted.
4. _New reference point._ The results of the first round become a new reference point, which may well affect a person's choice in the second round: _imagining_ a new situation and _being_ in a new situation are not the same thing. (Robert J. Aumann, awarded the Nobel Prize in Economics in 2005, tells the following story (S. Hart, "An Interview with Robert Aumann," _Macroeconomic Dynamics_ 9, 2005, page 711; reprinted in: Paul A. Samuelson and William A. Barnett, editors, _Inside the Economist's Mind: Conversations with Eminent Economists_ , Blackwell Publishing 2006). In 1956 he had two offers: one from Bell Labs in New York, and another from the Hebrew University of Jerusalem. It took him a long time to make up his mind, and he chose Bell Labs. He phoned them and told them that he accepted their offer. Once he put down the phone, he immediately started imagining the next few years at Bell Labs, and reached the conclusion that he had made the wrong choice. A day later he phoned Bell Labs and asked them if he could change his mind—which they graciously agreed to. How come a leading game theorist couldn't understand all this before he made his initial decision? Aumann's answer is that until he found himself in the new situation of someone going to Bell Labs, he could not really grasp what it meant!)
5. _Strategic voting._ People seem to be more strategic in their voting than is usually believed (again, see the examples in the second paragraph above), but under current procedures they base their strategic decisions on possibly inaccurate polls. Repeat voting provides a much more solid basis. In close elections it is conceivable that the voting of the second round may be less strategic (and the other way around when there is a large winning margin in the first round). (An interesting related instance concerns the minimal threshold for a party to be represented in a parliament. Many potential entrants try to convince voters that they have support that is higher than the threshold and so voting for them would not be a "waste" of one's vote. In many cases, however, it turns out that these parties do not pass the threshold; once this is seen in the first round, there will be many fewer such wasted votes in the second round.)
What are the possible _disadvantages_ of repeat voting?
1. _Costs._ A second round adds costs (however, in future voting that may be conducted online, these costs would become much smaller). The additional electoral campaign between the two rounds also increases the costs (but one should remember that two rounds are already used in various elections, albeit not two identical rounds as proposed here). One way to save costs is to carry out the second round only when the results of the first round are close, for instance, when the winning margin is below a certain threshold that is specified in advance. (This was suggested by Motty Perry and Steve Brams.)
2. _Participation._ There may be fewer voters in the first round ("I will have a chance to vote in the second round").
3. _Bandwagon effect._ Voters with strong or extreme positions, who are much more likely to vote in the first round, may have a big effect on the results of the first round, which may then have a bandwagon effect on the whole election.
One can think of other ways to overcome the issues pointed out above. For example, one can repeat the vote three times, with the winner having to win at least two rounds (this applies only to two-outcome elections, however, not to multi-candidate and parliamentary elections, and is inherently more complicated; this procedure was also suggested by Shachar Kariv). Another possibility is to make voting mandatory (as in certain countries); while this may resolve the participation issue, it does not resolve the significant "polls issue" discussed in advantage #1 above. Yet another is to have the votes in the two rounds of repeat voting weighted differently, for instance, depending on the total number of votes in each round. (For example, averaging the percentages of votes that each candidate received in the two rounds, which amounts to giving the two rounds weights inversely proportional to the total number of votes in each, may perhaps increase participation in the round with fewer voters, probably the first round; see the short derivation below.) At this point, however, it seems best to leave it as simple and straightforward as possible.
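To make the weighting claim in the parenthetical precise, here is a short derivation (the notation $v_{c}^{r}$, $N_{r}$ is ours, not from the original text): let $v_{c}^{r}$ be the number of votes for candidate $c$ in round $r$, and $N_{r}$ the total number of votes cast in round $r$. Averaging the two rounds' percentages gives

$\frac{1}{2}\left(\frac{v_{c}^{1}}{N_{1}}+\frac{v_{c}^{2}}{N_{2}}\right)=\frac{1}{2N_{1}}\,v_{c}^{1}+\frac{1}{2N_{2}}\,v_{c}^{2},$

so each individual vote cast in round $r$ carries weight $1/(2N_{r})$: the rounds are indeed weighted inversely proportionally to their turnouts, and a single vote counts more in the round with fewer voters.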
In summary: REPEAT VOTING is a simple modification of election procedures that
is capable of increasing voter participation and yielding more accurate and
representative results. Everyone deserves a second chance, as the saying goes.
Shouldn’t this include voters and candidates?
# Zero-Shot Retrieval with Search Agents
and Hybrid Environments
Michelle Chen Huebscher, Christian Buck, Massimiliano Ciaramita, Sascha Rothe
Google Research, Zurich, Switzerland
{michellechen, cbuck, massi<EMAIL_ADDRESS>
###### Abstract
Learning to search is the task of building artificial agents that learn to
autonomously use a search box to find information. So far, it has been shown
that current language models can learn symbolic query reformulation policies,
in combination with traditional term-based retrieval, but fall short of
outperforming neural retrievers. We extend the previous learning to search
setup to a hybrid environment, which accepts discrete query refinement
operations, after a first-pass retrieval step via a dual encoder. Experiments
on the BEIR task show that search agents, trained via behavioral cloning,
outperform the underlying search system based on a combined dual encoder
retriever and cross encoder reranker. Furthermore, we find that simple
heuristic Hybrid Retrieval Environments (HRE) can improve baseline performance
by several nDCG points. The search agent based on HRE (HaRE) matches state-of-
the-art performance, balanced in both zero-shot and in-domain evaluations, via
interpretable actions, and at twice the speed.
## 1 Introduction
Transformer-based dual encoders for retrieval, and cross encoders for ranking
(cf. e.g., Karpukhin et al. (2020); Nogueira & Cho (2019)), have redefined the
architecture of choice for information search systems. However, sparse term-
based inverted index architectures still hold their ground, especially in out-
of-domain, or _zero-shot_ , evaluations. On the one hand, neural encoders are
prone to overfitting on training artifacts (Lewis et al., 2021). On the other,
sparse methods such as BM25 (Robertson & Zaragoza, 2009) may implicitly
benefit from term-overlap bias in common datasets (Ren et al., 2022). Recent
work has explored various forms of dense-sparse hybrid combinations, to strike
better variance-bias tradeoffs (Khattab & Zaharia, 2020; Formal et al., 2021b;
Chen et al., 2021; 2022).
Rosa et al. (2022) evaluate a simple hybrid design which takes out the dual
encoder altogether and simply applies a cross encoder reranker to the top
documents retrieved by BM25. This solution couples the better generalization
properties of BM25 and high-capacity cross encoders, setting the current SOTA
on BEIR by reranking 1000 documents. However, this is not very practical as
reranking is computationally expensive. More fundamentally, it is not easy to gain insight into why results are reranked the way they are. Thus, the implicit
opacity of neural systems is not addressed.
We propose a novel hybrid design based on the Learning to Search (L2S)
framework (Adolphs et al., 2022). In L2S the goal is to learn a search agent
that autonomously interacts with the retrieval environment to improve results.
By iteratively leveraging pseudo _relevance feedback_ (Rocchio, 1971), and
language models’ _understanding_ , search agents engage in a goal-oriented
traversal of the answer space, which aspires to model the ability to ’rabbit
hole’ of human searchers (Russell, 2019). The framework is also appealing
because of the interpretability of the agent’s actions.
Adolphs et al. (2022) show that search agents based on large language models
can learn effective symbolic search policies, in a sparse retrieval
environment, but fail to outperform neural retrievers. We extend L2S to a
dense-sparse hybrid agent-environment framework structured as follows. The
environment relies on both a state-of-the-art dual encoder, GTR (Ni et al.,
2021), and BM25 which separately access the document collection. Results are
combined and sorted by means of a transformer cross encoder reranker (Jagerman
et al., 2022). We call this a Hybrid Retrieval Environment (HRE). Our search
agent (HaRE) interacts with HRE by iteratively refining the query via search
operators, and aggregating the best results. HaRE matches state-of-the-art
results on the BEIR dataset (Thakur et al., 2021) by reranking an order of magnitude fewer documents than the SOTA system (Rosa et al., 2022), reducing
latency by 50%. Furthermore, HaRE does not sacrifice in-domain performance.
The agent’s actions are interpretable and dig deep in HRE’s rankings.
Figure 1: Sequential query refinements combining pseudo relevance feedback and
search operators.
Figure 1 shows an example of a search session performed by the HaRE search
agent applying structured query refinement operations. The agent adds two
successive filtering actions to the query ’what is the weather like in germany
in june’ (data from MS MARCO (Nguyen et al., 2016)). In the first step it
restricts results to documents containing the term ’temperatures’, which
occurs in the first set of results. In the second step, results are further
limited to documents containing the term ’average’. This fully solves the
original query by producing an nDCG@10 score of 1.
## 2 Related Work
Classic retrieval systems such as BM25 (Robertson & Zaragoza, 2009) use term
frequency statistics to determine the relevancy of a document for a given
query. Recently, neural retrieval models have become more popular and started
to outperform classic systems on multiple search tasks. Karpukhin et al.
(2020) use a dual-encoder setup based on BERT (Devlin et al., 2019), called
DPR, to encode query and documents separately and use maximum inner product
search (Shrivastava & Li, 2014) to find a match. They use this model to
improve recall and answer quality for multiple open-domain question-answer
datasets. Large encoder-decoder models such as T5 (Raffel et al., 2020) are
now preferred as the basis for dual encoding as they outperform encoder-only
retrievers (Ni et al., 2021).
It has been observed that dense retrievers can fail to catch trivial query-
document syntactic matches involving n-grams or entities (Karpukhin et al.,
2020; Xiong et al., 2021; Sciavolino et al., 2021). ColBERT (Khattab &
Zaharia, 2020) gives more importance to individual terms by means of a _late
interaction_ multi-vector representation framework, in which individual term
embeddings are accounted for in the computation of the query-document relevance
score. This is expensive as many more vectors need to be stored for each
indexed object. ColBERTv2 (Santhanam et al., 2022) combines late interaction
with more lightweight token representations. SPLADE (Formal et al., 2021b) is
another approach that relies on sparse representations, this time induced from
a transformer’s masked heads. SPLADEv2 (Formal et al., 2021a; 2022) further
improves performance introducing hard-negative mining and distillation. Chen
et al. (2021) propose to close the gap with sparse methods on phrase matching
and better generalization by combining a dense retriever with a dense lexical
model trained to mimic the output of a sparse retriever (BM25). Ma et al.
(2021) combine single hybrid vectors and data augmentation via question
generation. In Section 3 (Table 2(b)) we evaluate our search environment and
some of the methods above.
The application of large LMs to retrieval, and ranking, presents significant
computational costs for which model distillation (Hinton et al., 2015) is one
solution, e.g. DistillBERT (Sanh et al., 2019). The generalization capacity of
dual encoders have been scrutinized recently in QA and IR tasks (Lewis et al.,
2021; Zhan et al., 2022; Ren et al., 2022). Zhan et al. (2022) claim that dense rerankers generalize better than dense retrievers. Ni et al. (2021) suggest that increasing the dual encoder model size increases its ability to
generalize. Rosa et al. (2022) argue that large rerankers provide the most
effective approach, particularly in zero-shot performance and in combination
with a sparse retriever. Their best MonoT5 (Nogueira et al., 2020b) model, a
pretrained transformer encoder-decoder finetuned for query-document scoring,
yields the state-of-the-art results on 12 out of 18, zero-shot tasks on the
BEIR task (Rosa et al., 2022). They observe that in-domain performance is not
a good indicator of zero-shot performance. Consequently, they regularize the
reranker, trading off in-domain performance and improving zero-shot results.
Another interesting line of research is inspired by large decoder-only
language models, where increasing size systematically improves zero-shot
performance, as proven by GPT (Brown et al., 2020) and PaLM (Chowdhery et al.,
2022). Accordingly, SGPT (Muennighoff, 2022) extends the encoder/decoder-only
approach to search to decoder-only modeling, via prompting and finetuning.
Also related is the line of work on retrieval-augmented language models (Lewis
et al., 2020; Guu et al., 2020) and iterative query reformulation for question
answering (Guo et al., 2017; Buck et al., 2018; Qi et al., 2019; 2021; Zhu et
al., 2021; Nakano et al., 2021).
## 3 Hybrid Retrieval Environment (HRE) and Benchmarks
(a) Average nDCG@10 of the BM25, GTR and HRE environments at different retrieval depths.

| BEIR subset nDCG@10 | ColB. | SPAR. | SPL. |
|---|---|---|---|
| ColBERTv2 | 0.481 | - | - |
| SPAR | 0.475 | 0.482 | - |
| SPLADE++ | 0.475 | 0.508 | 0.482 |
| HRE, 770M | 0.543 | 0.526 | 0.507 |
| HRE, 11B | 0.529 | 0.530 | 0.507 |

(b) Our HRE compared, at $k{=}10$, vs. other dense/sparse methods. The BEIR average score is computed on the subsets of tasks selected by each method. HRE 770M is trained on BM25 top 100 documents, HRE 11B on BM25 top 1000.

Figure 2: Preliminary evaluation of our HRE and benchmark environments on BEIR tasks.
A search _environment_ is composed of one or more _retrievers_ operating over
a document collection, whose output is possibly combined, and eventually
rescored by a dedicated model, the _reranker_.
### 3.1 Retrievers
We experiment with three types of retrieval methods. The first, BM25, uses Lucene's implementation of BM25 (https://lucene.apache.org/) as the retriever. This is the setting of Adolphs et al. (2022). The second
environment, GTR, uses GTR-XXL (Ni et al., 2021) as the retriever. The last is
a hybrid environment that combines the results of the BM25 and GTR retrievers.
We call this a _Hybrid Retrieval Environment_ (HRE). After retrieval, HRE
simply joins the two $k$-sized result sets, removing duplicates. Thus, for a
fixed value of $k$, HRE has available a slightly larger pool of documents, at
most $2k$.
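The combination step is deliberately simple. As a minimal sketch (the function name and the result representation are our own, not from the paper), merging the two $k$-sized result lists with deduplication might look like this:

```python
def hybrid_join(bm25_results, gtr_results):
    """Join two ranked result lists, dropping duplicate document ids.

    Each input is a list of (doc_id, score) pairs of length k; the output
    contains at most 2k unique documents, which are then handed to the
    reranker (order here is irrelevant, since T5-R rescores everything).
    """
    seen, pool = set(), []
    for doc_id, score in bm25_results + gtr_results:
        if doc_id not in seen:
            seen.add(doc_id)
            pool.append((doc_id, score))
    return pool

# Example: k=3 results from each retriever, with one overlapping document.
bm25 = [("d1", 12.3), ("d2", 11.0), ("d3", 9.8)]
gtr = [("d2", 0.91), ("d4", 0.88), ("d5", 0.80)]
print(hybrid_join(bm25, gtr))  # 5 unique documents out of 6 entries
```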
### 3.2 The T5 Reranker (T5-R)
After retrieval, and, in the case of HRE, the combination step, the top
documents are reranked by the environment’s reranker, which we refer to as
T5-R. In contrast to encoder-decoders (Nogueira et al., 2020a) we follow the
work of Zhuang et al. (2022) and only train T5-R’s encoder and add a
classification layer on top of the encoder output for the first token, similar
to how BERT (Devlin et al., 2019) is often used for classification.
Instead of using a point-wise classification loss, we use a list-based loss
(Jagerman et al., 2022; Zhuang et al., 2022): for each query, we obtain one
positive ($y=1$) and $m-1$ negative ($y=0$) documents to which the model
assigns scores $\mathbf{\hat{y}}=\hat{y}_{1}^{m}$. We use a list-wise softmax
cross-entropy loss (Bruch et al., 2019):
$\ell(\mathbf{y},\mathbf{\hat{y}})=-\sum_{i=1}^{m}y_{i}\log\frac{e^{\hat{y}_{i}}}{\sum_{j=1}^{m}e^{\hat{y}_{j}}}.$
(1)
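For concreteness, here is a minimal NumPy sketch of Eq. (1) for a single query's list (the variable names are ours; in practice the loss would be computed inside a training framework such as Rax/JAX, per Jagerman et al. (2022)):

```python
import numpy as np

def listwise_softmax_ce(y, y_hat):
    """List-wise softmax cross-entropy loss of Eq. (1).

    y: binary relevance labels for the m documents of one query
       (one positive, m-1 sampled negatives).
    y_hat: the model's scores for the same m documents.
    """
    # Numerically stable log-softmax over the list of scores.
    z = y_hat - np.max(y_hat)
    log_softmax = z - np.log(np.sum(np.exp(z)))
    return -np.sum(y * log_softmax)

# One positive document (index 0) and three negatives.
y = np.array([1.0, 0.0, 0.0, 0.0])
y_hat = np.array([2.1, 0.3, -0.5, 1.0])
print(listwise_softmax_ce(y, y_hat))  # small when the positive scores highest
```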
We train T5-R on MS MARCO and the output of BM25. We find that a T5-Large
trained on the top-100 documents works well on the top results, but a T5-11B
model trained on the top-1000 BM25 documents works better in combination with
a search agent (Table 2). For HRE we consider the latter reranker. Figure 2
provides a first evaluation of the ranking performance of our environments on
the BEIR dataset – see §5 for more details on the task. Figure 2(a) reports
the effect of reranking an increasing number of documents. BM25 provides a
baseline performance, and benefits the most from reranking more results with
T5-R. GTR starts from a higher point, at $k{=}10$, and plateaus around
$k{=}300$ with an average nDCG@10 (normalized Discounted Cumulative Gain) of
0.507 on the 19 datasets. Despite its simplicity, HRE is effective at
combining the best of BM25 and GTR at small values of $k$. HRE reaches its
maximum, 0.509, at $k{=}40$ but scores 0.508 at $k{=}20$.
In Figure 2(b) we situate our environments in a broader context, by comparing
zero-shot performance against recent dense/sparse combined retrieval proposals: ColBERTv2 (Santhanam et al., 2022), SPAR (Chen et al., 2021) and SPLADE++ (Formal et al., 2022), discussed also in §2. Each of them evaluates on a different subset of the BEIR zero-shot tasks, which we select appropriately. HRE produces the best performance by a substantial margin in
two configurations. ColBERT and SPLADE do not use a reranker but require more
involved training through cross-attention distillation, and rely on token-
level retrieval. The best SPAR model needs an additional dense lexical model
and relies on a more sophisticated base retriever, Contriever (Izacard et al.,
2021). As the results show, a reranker combined with HRE at ($k{=}10$)
provides a simple and effective hybrid search system.
## 4 Hybrid agent Retrieval Environment (HaRE)
A search agent generates a sequence of queries,
$q_{0},q_{1},q_{2},\ldots,q_{T}$, to be passed on, one at a time, to the
environment. Here, $q_{0}{=}q$ is the initial query and $q_{T}$ is the last
one, in what we also call a _search session_. At each step, the environment
returns a list of documents $\mathcal{D}_{t}$ for $q_{t}$. We also maintain a
list, $\mathcal{A}$, of the best $k$ documents found during the whole session
$\displaystyle\mathcal{A}_{t}:=\{d_{i}\in\mathcal{D}_{t}\cup\mathcal{A}_{t-1}:|\{d_{j}\in\mathcal{D}_{t}\cup\mathcal{A}_{t-1}:f(q_{0},d_{i})<f(q_{0},d_{j})\}|<k\}$
(2)
where $f{:}(q,d)\mapsto\mathbb{R}$ is the score, $P(y{=}1|q,d)$, predicted by
the T5-R model. When no documents can be retrieved after issuing $q_{t+1}$ or
after a maximum number of steps, the search session stops and
$\mathcal{A}_{t}$ is returned as the agent’s output. The agent’s goal is to
generate queries such that the output, $\mathcal{A}_{T}$, has a high score
under a given ranking quality metric, in our case nDCG@10.
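Eq. (2) simply says that $\mathcal{A}_{t}$ keeps the $k$ documents with the highest reranker score $f(q_{0},\cdot)$ seen so far in the session. A minimal sketch of this aggregation (names are ours; the score function stands in for T5-R):

```python
def update_best(best, retrieved, score, k=10):
    """Implement Eq. (2): keep the k highest-scoring unique documents.

    best: current aggregated list A_{t-1} of (doc_id, score) pairs.
    retrieved: new results D_t as doc_ids.
    score: callable doc_id -> f(q_0, d), the reranker's probability.
    """
    pool = {doc_id: s for doc_id, s in best}
    for doc_id in retrieved:
        pool.setdefault(doc_id, score(doc_id))
    ranked = sorted(pool.items(), key=lambda p: p[1], reverse=True)
    return ranked[:k]

# Toy scores standing in for T5-R's P(y=1 | q_0, d).
scores = {"d1": 0.9, "d2": 0.4, "d3": 0.7, "d4": 0.8}
best = []
for step_results in [["d1", "d2"], ["d3", "d4"]]:  # two session steps
    best = update_best(best, step_results, scores.get, k=3)
print(best)  # [('d1', 0.9), ('d4', 0.8), ('d3', 0.7)]
```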
### 4.1 Query Refinement Operations
As in (Adolphs et al., 2022), $q_{t+1}$ is obtained from $q_{t}$ by
_augmentation_. That is, either by adding a single term, which will be
interpreted by Lucene as a disjunctive keyword and contribute a component to
the BM25 score, or including an additional unary search operator. We
experiment with the same three unary operators: ‘+’, which limits results to
documents that contain a specific term, ‘-’ which excludes results that
contain the term, and ‘${\scriptstyle\wedge}_{i}$’ which boosts a term weight
in the BM25 score by a factor $i\in\mathbb{R}$. We don't limit the operators' effect to a specific document _field_ , e.g., the content or title, because in
the BEIR evaluation there is no such information in the training data (MS
MARCO). Formally, a refinement takes the following simplified form:
$\displaystyle q_{t+1}:=q_{t}\;\Delta q_{t},\Delta
q_{t}:=[+|-|\wedge_{i}]\;\;w_{t},w_{t}\in\Sigma_{t}$ (3)
where $\Sigma_{t}$ is the vocabulary of terms present in $\mathcal{A}_{t}$.
This latest condition introduces a _relevance feedback_ dimension (Rocchio,
1971). If available, document relevance labels can be used to build an optimal
query, e.g., for training. Or, in the absence of human labels, the search
results are used for inference purposes – as _pseudo relevance feedback_.
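As an illustration of Eq. (3), composing refinements yields plain Lucene query strings (a sketch with our own helper name; the first example mirrors the session in Figure 1, the others use made-up queries):

```python
def refine(query, op, term, boost=None):
    """Compose q_{t+1} = q_t Δq_t per Eq. (3), using Lucene syntax:
    '+term' requires the term, '-term' excludes it, 'term^i' boosts it."""
    if op == "^":
        return f"{query} {term}^{boost}"
    return f"{query} {op}{term}"

q = "what is the weather like in germany in june"
q = refine(q, "+", "temperatures")  # first filtering action from Figure 1
q = refine(q, "+", "average")       # second filtering action
print(q)  # ... in june +temperatures +average

print(refine("jaguar speed", "-", "car"))              # exclusion
print(refine("jaguar speed", "^", "jaguar", boost=2))  # term boost
```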
### 4.2 The T5 Query Expander (T5-Q)
Figure 3: Schematic view of the HaRE search agent. The information flows from
the input query $q_{0}$ to the output $\mathcal{A}_{T}$. In between, retrieval
steps (the blue components) and aggregation and refinement steps (the yellow
components) alternate in a cycle.
A search agent includes an encoder-decoder transformer based on T5 (Raffel et
al., 2020) that generates query refinements. We call this component T5-Q. At
each step, an observation $o_{t}{:=}(q_{t},\mathcal{A}_{t})$ is formed by
concatenating $q_{t}$ and $\mathcal{A}_{t}$, which is a string with a minimal
set of structural identifiers. T5-Q takes $o_{t}$ as input and outputs $\Delta
q_{t}$, allowing the composition of $q_{t+1}{=}q_{t}\Delta q_{t}$, as in Eq.
(3).
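The paper specifies $o_{t}$ only as a concatenation of $q_{t}$ and $\mathcal{A}_{t}$ with minimal structural identifiers; a plausible sketch, modeled on the T5-Q input format given in §5.2 (the helper name is ours):

```python
def build_observation(query, docs):
    """Serialize o_t = (q_t, A_t) into T5-Q's input string.

    docs: the current aggregated top documents A_t, as plain text.
    The 'query:'/'document:' identifiers follow the format in §5.2.
    """
    return " ".join([f"query: {query}"] + [f"document: {d}" for d in docs])

obs = build_observation(
    "what is the weather like in germany in june",
    ["June in Germany sees average temperatures of ...",
     "German summer weather is generally mild ..."])
print(obs[:70])
```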
### 4.3 HaRE and Benchmark Search Agents
Figure 3 illustrates the HaRE search agent. On the first search step only, GTR
retrieves the top-$1000$ documents for $q_{0}$. These define a sub-collection,
Top-$K$, kept frozen through the search session. The top-$k$ documents from
Top-$K$, $\mathcal{D}_{0,2}$, are combined with the top-$k$ from BM25,
$\mathcal{D}_{0,1}$, also retrieved from the full collection. GTR is not used
again. Top-$K$ is further accessed only through BM25, i.e., for $t>0$. At
every step, $t$, the results from the full corpus, $\mathcal{D}_{t,1}$, and
those from Top-$K$, $\mathcal{D}_{t,2}$, are joined to form $\mathcal{D}_{t}$.
$\mathcal{D}_{t}$, in turn, is joined with the current session results
$\mathcal{A}_{t-1}$, to form $\mathcal{A}_{t}$. $\mathcal{A}_{t}$ is passed to
the query expander model, T5-Q, which compiles the observation
$o_{t}{=}(\mathcal{A}_{t},q_{t})$, and generates $\Delta q_{t}$. The new
query, $q_{t+1}{=}q_{t}\Delta q_{t}$, is sent to BM25 for another round of
search. When the termination condition is met, the agent returns
$\mathcal{A}_{T}$.
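Putting §4.1–4.3 together, here is a condensed sketch of one search session, reusing the `update_best` and `build_observation` helpers sketched above; `gtr_retrieve`, `bm25_search`, `rerank_score`, and `t5q_expand` are placeholders for the components in Figure 3, not an actual API:

```python
def hare_session(q0, k=10, max_steps=5):
    """Run one HaRE search session, following Figure 3."""
    top_K = gtr_retrieve(q0, n=1000)  # frozen sub-collection; GTR used once
    best, q = [], q0
    for t in range(max_steps):
        d_full = bm25_search(q, corpus="full", k=k)  # D_{t,1}
        d_sub = bm25_search(q, corpus=top_K, k=k)    # D_{t,2}
        retrieved = d_full + d_sub                   # D_t
        if not retrieved:
            break                                    # termination condition
        best = update_best(best, retrieved,          # Eq. (2), scored by T5-R
                           lambda d: rerank_score(q0, d), k=k)
        delta = t5q_expand(build_observation(q, best))  # T5-Q refinement
        q = f"{q} {delta}"                              # Eq. (3)
    return best
```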
Besides HaRE we evaluate two simpler search agents, in alignment with the
simpler environments, BM25 and GTR. The first agent (BM25) only uses the BM25
components of HaRE (the BM25 environment), thus, it has only access to the
results $\mathcal{D}_{t,1}$ in Figure 3. Analogously, the second agent (GTR),
only uses the GTR components of HaRE (the GTR environment), and has access
exclusively to the $\mathcal{D}_{t,2}$ results.
## 5 Experiments
We run experiments on the zero-shot retrieval evaluation framework of BEIR
(Thakur et al., 2021), which includes 19 datasets on 9 domains. Only MS MARCO
is used for training and development. Each dataset has its own document
collection which is indexed separately. We use the official TREC eval script (https://github.com/usnistgov/trec_eval/archive/refs/heads/master.zip) for our results. Results for the benchmarks are from the corresponding publications.
### 5.1 Data
To generate training data for T5-R we retrieve $k{\in}\\{100,1000\\}$
documents per query for each query in the MS MARCO training set (532,761
questions) using BM25. To make one example list of length $m$, we take a query
and one gold document and sample $m{-}1$ negatives from the top-k documents.
We skip queries if no gold document occurs within the top-k documents which
removes about 20% of the data.
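A minimal sketch of this list construction (names are ours; `bm25_topk` stands in for the precomputed BM25 retrievals for one query):

```python
import random

def make_training_list(query, gold_doc, bm25_topk, m=32):
    """Build one T5-R training list: the gold document plus m-1
    negatives sampled from BM25's top-k results for the query.

    Returns None when the gold document is not in the top-k; such
    queries are skipped, removing about 20% of the data.
    """
    if gold_doc not in bm25_topk:
        return None
    negatives = [d for d in bm25_topk if d != gold_doc]
    sampled = random.sample(negatives, m - 1)  # k in {100, 1000} >> m
    docs = [gold_doc] + sampled
    labels = [1.0] + [0.0] * (m - 1)
    return query, docs, labels
```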
The training data for T5-Q is generated as follows. Synthetic search sessions
are simulated from labeled query documents pairs, $(q,d)$, where $d$ is the
relevant document for $q$, in the MS MARCO training set. We then use the
_Rocchio Session_ Algorithm of Adolphs et al. (2022), to search for the
optimal expansion. In a nutshell, at step $t$, terms in $\mathcal{A}_{t}\cap
d$ are evaluated as candidates for disjunctive term augmentations, or in
combination with ’+’ and ‘${\scriptstyle\wedge}_{i}$’ operators. Conversely,
terms in $\mathcal{A}_{t}-d$ are candidates in combination with ’-’. We
attempt at most $M$ candidates for each operator using terms in the document
set, $\mathcal{A}_{t}$, ranked by Lucene’s IDF score. Starting from
$q_{0}{=}q$, a synthetic session is expanded, one step at a time, with the
best scoring augmentation. The procedure stops when the nDCG@10 score of
$\mathcal{A}_{t}$ does not improve, or after five steps. We generate two sets
of training data: a high-throughput (HT) data, for faster turnaround in
development, which sets $M{=}20$ and yields 120,563 training examples (single
steps) from 23% of the questions where improvements were found; and a high-
quality (HQ) data using $M{=}100$ which yields 203,037 training examples from
40% of the questions for which improvements were found. Table 4, in the
Appendix, provides an example gold Rocchio session for the query ’what’s the
difference between c++ and java’. The nDCG score of the HRE results is 0.0,
and by applying two refinements (’+language’ and ’+platform’) the score
increases, first to 0.6, then to 1.0. In the training Rocchio sessions, the
‘+’ operator is used for 83% of all refinements. The other operators are each
used for only 2–3% of all refinements. Although '+' is used for the majority of refinements, when we evaluated the agent's headroom allowing only the '+' operator, the headroom was lower. We also evaluated the headroom allowing only '${\scriptstyle\wedge}_{i}$' operators; the result was also worse. The synthetic data is used to finetune T5-Q via Behavioral Cloning,
where each step defines an independent generation task.
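A condensed sketch of the Rocchio Session procedure as described above (a simplification under our own naming; `run_env` stands in for retrieving and reranking $\mathcal{A}_{t}$ with the environment, `ndcg_at_10` for the metric, and `terms_by_idf` for extracting result terms ranked by Lucene's IDF):

```python
def rocchio_session(q, gold_doc_terms, M=20, max_steps=5):
    """Greedily build one synthetic session for T5-Q training data."""
    session, score = [], ndcg_at_10(run_env(q))
    for _ in range(max_steps):
        result_terms = terms_by_idf(run_env(q))  # ranked by Lucene IDF
        pos = [t for t in result_terms if t in gold_doc_terms][:M]
        neg = [t for t in result_terms if t not in gold_doc_terms][:M]
        # Candidate augmentations: plain keyword, '+', '^i' (boost of 2
        # chosen here for illustration) from positives; '-' from negatives.
        candidates = (pos + [f"+{t}" for t in pos] +
                      [f"{t}^2" for t in pos] + [f"-{t}" for t in neg])
        scored = [(ndcg_at_10(run_env(f"{q} {c}")), c) for c in candidates]
        best_score, best_c = max(scored)
        if best_score <= score:  # stop when nDCG@10 no longer improves
            break
        q, score = f"{q} {best_c}", best_score
        session.append(best_c)
    return session
```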
### 5.2 Models
We use the published model checkpoint for GTR-XXL (Ni et al., 2021) as the off-the-shelf dual encoder (https://github.com/google-research/t5x_retrieval). For BM25 we use Lucene's implementation with default parameters (k1=0.9 and b=0.4).
As detailed in Section 3, the query-document reranker, T5-R, is initialized
from the encoder of a pretrained T5 model. The encoder output of the first
token is fed into a feed-forward layer to generate a score which we use for
ranking query-document pairs. The input is structured as follows:
query: {query} document: {document}
We experimented with several published checkpoints, including T5-Large and T5-11B, and found the latter to perform better (https://github.com/google-research/text-to-text-transfer-transformer). Note that while T5-11B has 11B parameters, we only train roughly half of them (the encoder side). We train
with a batch size of 64 and lists of length $m=32$, yielding an effective
batch size of $2048$. To limit memory consumption we truncate our inputs to
256 tokens.
The query expander, T5-Q, is based on the T5.1.1-XXL model. The input is
structured as follows:
query: {query} document: {document1} … document: {document10}
When examining the training data (§5.1), we found multiple examples that we consider unlearnable, e.g., involving stop words. As we do not want the search
agent to concentrate and overfit on these examples, we employ a simple self-
supervised training trick. In each batch, we sort the sequences by their
negative log-likelihood (NLL) and we mask out the loss for 50% of the training
examples with the highest NLL. Examples can be seen in Table 3 in the
Appendix. We are essentially training with only a halved batch size while
wasting the other half, but given that the model converges quickly, this
technique is sufficient. To further avoid overfitting, we use a small constant learning rate of $3\times 10^{-5}$. We train for 12,000 steps with a batch size of
128 (forward pass before masking), which is equivalent to around $1.5$ epochs.
The input sequences have a maximum length of 1024 and the maximum target
length is set to 32. All other hyperparameters are the T5 defaults.
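A minimal sketch of the loss-masking trick (ours, framework-agnostic; in practice this operates on per-sequence losses inside the training step):

```python
import numpy as np

def mask_hardest_half(per_seq_nll):
    """Zero out the loss of the 50% of sequences with the highest NLL.

    per_seq_nll: array of shape (batch,) with each sequence's negative
    log-likelihood under the current model. The sequences the model finds
    hardest (often unlearnable, e.g. stop-word refinements) are dropped
    from the gradient, effectively halving the batch.
    """
    batch = per_seq_nll.shape[0]
    keep = np.argsort(per_seq_nll)[: batch // 2]  # lowest-NLL half
    mask = np.zeros(batch)
    mask[keep] = 1.0
    return per_seq_nll * mask

nlls = np.array([0.2, 3.5, 0.8, 7.1])
print(mask_hardest_half(nlls))  # [0.2, 0.0, 0.8, 0.0]
```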
### 5.3 Results
| Benchmarks | Environments | Agents | RS
---|---|---|---|---
Dataset | MonoT5 | SGPT | BM25 | GTR | HRE | BM25 | GTR | RM3 | HaRE | BM25
MS MARCO | 0.398 | 0.399 | 0.285 | 0.470 | 0.479 | 0.361 | 0.480 | 0.483 | 0.483 | 0.557
Trec-Covid | 0.794 | 0.873 | 0.579 | 0.537 | 0.666 | 0.778 | 0.703 | 0.744 | 0.765 | 0.921
BioASQ | 0.574 | 0.413 | 0.315 | 0.344 | 0.427 | 0.453 | 0.470 | 0.468 | 0.493 | 0.654
NFCorpus | 0.383 | 0.362 | 0.343 | 0.358 | 0.377 | 0.380 | 0.368 | 0.380 | 0.383 | 0.508
NQ | 0.633 | 0.524 | 0.419 | 0.637 | 0.655 | 0.528 | 0.664 | 0.661 | 0.669 | 0.724
HotpotQA | 0.758 | 0.593 | 0.605 | 0.651 | 0.713 | 0.694 | 0.734 | 0.734 | 0.759 | 0.850
FiQA-2018 | 0.513 | 0.372 | 0.268 | 0.504 | 0.513 | 0.355 | 0.516 | 0.520 | 0.525 | 0.564
Signal-1M | 0.314 | 0.276 | 0.371 | 0.273 | 0.320 | 0.355 | 0.313 | 0.310 | 0.318 | 0.476
Trec-News | 0.472 | 0.481 | 0.300 | 0.368 | 0.394 | 0.353 | 0.368 | 0.420 | 0.406 | 0.521
Robust04 | 0.540 | 0.514 | 0.384 | 0.513 | 0.556 | 0.513 | 0.514 | 0.565 | 0.589 | 0.728
ArguAna | 0.287 | 0.514 | 0.318 | 0.352 | 0.327 | 0.246 | 0.362 | 0.237 | 0.260 | 0.389
Touche-2020 | 0.299 | 0.254 | 0.536 | 0.249 | 0.325 | 0.518 | 0.251 | 0.321 | 0.320 | 0.673
Quora | 0.840 | 0.846 | 0.650 | 0.875 | 0.876 | 0.769 | 0.874 | 0.873 | 0.873 | 0.880
DBPedia | 0.477 | 0.399 | 0.290 | 0.428 | 0.453 | 0.383 | 0.432 | 0.463 | 0.476 | 0.549
SCIDOCS | 0.197 | 0.197 | 0.166 | 0.173 | 0.196 | 0.181 | 0.197 | 0.198 | 0.201 | 0.280
Fever | 0.849 | 0.783 | 0.783 | 0.811 | 0.829 | 0.813 | 0.817 | 0.831 | 0.832 | 0.866
Climate-Fever | 0.280 | 0.305 | 0.222 | 0.282 | 0.296 | 0.258 | 0.287 | 0.300 | 0.300 | 0.408
SciFact | 0.777 | 0.747 | 0.683 | 0.707 | 0.751 | 0.707 | 0.743 | 0.751 | 0.756 | 0.797
CQADupStack | 0.415 | 0.381 | 0.316 | 0.431 | 0.448 | 0.364 | 0.448 | 0.451 | 0.452 | 0.526
#docs reranked | 1000 | - | 10 | 10 | 17.5 | 21.6 | 23.6 | 33.4 | 66.7 | 15.5
Average | 0.516 | 0.485 | 0.412 | 0.472 | 0.505 | 0.474 | 0.502 | 0.511 | 0.519 | 0.625
Avg. Zero-shot | 0.522 | 0.490 | 0.419 | 0.472 | 0.507 | 0.480 | 0.503 | 0.513 | 0.521 | 0.628
Table 1: Full results on BEIR. MonoT5 refers to the best-performing system in
(Rosa et al., 2022), current SOTA performance holder. MonoT5 reranks the top
1000 documents from BM25 with a cross encoder. SGPT refers to the best
performing GPT-style system from (Muennighoff, 2022), SGPT-BE 5.8B. As
environments, we evaluate BM25, GTR and HRE (§3). The last four columns report
the results of the BM25, GTR and HaRE agents (§4.3) including a variant (RM3,
based on HRE) that replaces T5-Q with RM3 (Pal et al., 2013). As customary on
BEIR evals, we report the all datasets average and without MS MARCO (zero shot
only). We also report the number of unique documents scored with the reranker,
the average value for agents and HRE. The SGPT model retrieves over the full
collection. The last column (RS) reports the performance of the (HQ) Rocchio
Session algorithm used to generate the training data when run on all BEIR eval
sets. Having access to the labeled document(s), it provides an estimate of the
headroom for search agents.
Table 1 holds the detailed BEIR results. We report the average over all
datasets (Average), and minus MS MARCO (Avg. Zero-shot). As benchmarks, we
compare with MonoT5, the current SOTA, which reranks the top 1000 documents
from BM25, with a cross encoder transformer reranker (Rosa et al., 2022). We
also report the results of the best performing GPT-style system from
(Muennighoff, 2022). SGPT is intriguing because of its unique performance
profile (e.g., see on Trec-Covid and Arguana), reinforcing the suggestion that
large decoder-only LMs introduce genuinely novel qualitative dimensions to
explore. Next, we discuss the results of our BM25, GTR and HRE and
corresponding agents. For all our systems, reranking depth is fixed at
$k{=}10$.
All search agents run fixed 5-step sessions at inference time. They all
outperform their environment. One needs to factor in that agents score more
documents, because of the multi-step sessions. The BM25 and GTR agents collect
on average about 20 documents per session, HaRE 66.7. One of the desired
features of search agents is deep but efficient exploration, which is
observable from the experiments. For the BM25 environment to match the BM25
agent performance (0.474/0.480), one needs to go down to $k{\approx}300$, cf.
Figure 2(a). Overall, the BM25 agent outperforms the BM25 environment by more
than 6 points. We highlight that the BM25 agent outperforms also the GTR
environment, though not in-domain on MS MARCO – consistently with the findings
of Adolphs et al. (2022). The GTR agent outperforms the GTR environment by 3
points, with the GTR environment beginning to perform better than the GTR
agent only between $k{=}50$ and $k{=}100$.
With HRE, performance starts to saturate. However, HaRE outperforms HRE by 1.4
nDCG points, scoring on average 66.7 documents vs. the 17.5 of HRE. Note that
HRE's maximum performance is 0.509/0.510, at $k{=}40$, and thus never approaches HaRE at any retrieval depth. HaRE's performance is comparable to MonoT5, the
current SOTA: better on all datasets average by 0.3 points and worse on zero-
shot only, by 0.1 points. However, HaRE scores 15X fewer documents. A
conservative estimate shows a consequent 50% latency reduction (cf. §A.1).
Furthermore, HaRE’s in-domain performance (0.483 on MS MARCO) keeps improving
and is 8.5 points better than MonoT5. HaRE has the best performance on 8
datasets (5 for MonoT5). We also evaluate a variant of the HRE search agent based on RM3 (Jaleel et al., 2004; Pal et al., 2013), a robust pseudo relevance feedback query expansion method (Lv & Zhai, 2009; Miao et al., 2012), which
replaces T5-Q as the query expander. At each step, we pick the highest scoring
term based on the RM3 score and add the selected term to the previous query
with a '+' operator. The RM3 search agent is also effective: it improves over HRE, but it does not perform as well as HaRE, the reason being that it does not pull in enough new documents (33.4 on average).
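For reference, the RM3 variant's expansion step amounts to the following sketch (our own naming; `rm3_score` is a stand-in for RM3's relevance-model term weighting (Jaleel et al., 2004), not a real API):

```python
def rm3_expand(query, docs):
    """One step of the RM3 agent variant: append the highest-RM3-scoring
    term from the current results to the query with a '+' operator."""
    vocab = {t for d in docs for t in d.split()}  # terms in current results
    best_term = max(vocab, key=lambda t: rm3_score(t, query, docs))
    return f"{query} +{best_term}"
```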
The last column in Table 1 reports an oracle headroom estimate. This is
obtained by running the same Rocchio Session algorithm used to produce T5-Q’s
training data (§5.1) at inference time. As the numbers show, there is
substantial room for improvement. In the next section we continue with an in-
depth analysis and open questions.
### 5.4 Qualitative Analysis
(a) Depth of the documents returned by HaRE in the original retrieval results,
in different depth interval buckets. Results are averages over all BEIR
datasets.
(b) Outcome of Rocchio sessions (oracle) compared to HaRE’s and RM3’s
predictions, as a function of the steps required to achieve the maximum score.
Averages over all BEIR datasets.
Figure 4: Analysis of BEIR tasks results.
Figure 4(a) plots the average depth of the documents in HaRE’s final top-10,
over all BEIR tasks in the retrieval results from BM25, and GTR, for the
original query. HaRE digs deep in the original retrievers rankings, even
beyond position 1000: 16.3% of HaRE top-10 docs for BM25, and 6.9% for GTR. In
this respect, HaRE extends the finding of (Adolphs et al., 2022) to neural
retrievers.
Figure 4(b) looks at the outcomes of HaRE and HRE-RM3 episodes, compared to oracle sessions. 49% of the time HaRE doesn't change the outcome (52.8% for RM3), 14% of the results are worse for both, and 21% of the examples are resolved at the initial query for HaRE (18.9% for RM3). However, 16% are improved by HaRE (14% for RM3) and 8.3% need two or more steps. Table 5, in the Appendix, looks in detail at an example of a HaRE win at step 2: 'what do you use dtp for +software
+publishing’. Another, ’when did rosalind franklin take x ray pictures of dna
+52 +taken +franklin +photo’ needs four steps. A single step one, but
particularly transparent semantically is ’what is the age for joining aarp
+requirements’. Hence, we find evidence of multistep inference and
interpretable actions. Compared to HRE-RM3, HaRE explores more documents in
fewer steps. This is in part due to RM3 term weighting over-relying on the
query. For example, RM3 needs three refinements to solve the query ’what make
up the pistil in the female flower’, ’+pistil +female +stigma’, while HaRE
solves it in one with ’+ovary’.
We find that HaRE learns only to use '+', and completely ignores other
operators. Part of the problem may be an unbalanced learning task for T5-Q
(’+’ covers 83% of training). One way to make the task more expressive and
balanced would be defining more operators. Lucene, for example, makes
available several other operators including proximity, phrase and fuzzy match,
and more. More generally, while interpretable, such operators are also rigid
in their strict syntactic implementation in traditional search architectures.
An interesting direction to explore is that of implementing such operators in
a semantic framework, that is via neural networks, combining transparency and
flexibility. SPAR’s approach to modeling phrase matches (Chen et al., 2021),
for instance, is one step in this direction. Another possible limitation is
the lack of title information in the training data, as more structure in the
data may partition the learning space in more interesting ways. The importance
of the document title, for one, is well-known in IR.
In early development phases on MS MARCO we trained several, T5-R and T5-Q,
models using different configurations. We first trained T5-R models with the
top 100 documents from BM25. We call these models ’T5-R BM25-100’. The T5-R
model is used to generate training data for T5-Q, but it is also paired with
the trained T5-Q model at inference time. Thus, the T5-R model needs to be
robust when faced with documents originating from deeper than at training
time. Larger T5-R models seem more robust in this respect, consistently with
previous findings (Ni et al., 2021; Rosa et al., 2022). Similarly, larger T5-Q
models seem more robust when training on the noisy data generated by the
Rocchio Session procedure. Some of those steps are genuinely meaningful, some
are spurious actions with little chance of being learnable. Eventually, we
settled on the largest models available and trained the T5-R models with the
top 1000 documents from BM25. Table 2 provides a sample of these explorations,
evaluated on the BEIR dataset as an ablation study.
There are still open questions on how to properly train these models. For
instance, it is apparent that our reranker is not as robust as we would hope for the document depth that is usually explored by agents. Figure 2(a)
clearly shows performance declining at $k{>}50$. This may, at least partially,
explain why we don’t squeeze more performance from the existing headroom. A
natural option, despite the negative findings of (Adolphs et al., 2022), is
joint reinforcement learning of the agent.
| T5-R BM25-100 Large | T5-R BM25-100 11B
---|---|---
Environment | T5-R | T5-Q Large | T5-Q 11B | T5-R | T5-Q Large | T5-Q 11B
BM25 | 0.437 | 0.450 | 0.447 | 0.439 | 0.449 | 0.456
GTR | 0.467 | 0.491 | 0.493 | 0.472 | 0.493 | 0.495
HRE | 0.506 | 0.506 | 0.507 | 0.506 | 0.510 | 0.511
Table 2: Average nDCG@10 on BEIR. ’T5-R BM25-100 Large’ and ’T5-R BM25-100
11B’ are, respectively, T5-Large and T5-v1_1-XXL reranker models trained on
the Top 100 BM25 documents from MS MARCO. T5-Q Large and T5-Q 11B are T5-Large
and T5-11B agent models trained on data generated via the high-throughput
Rocchio Session process (HT, cf. §5.1).
## 6 Conclusion
In this paper we extended the learning to search (L2S) framework to hybrid
environments. In our approach, we simply combine dense and sparse retrievers,
in what we call a hybrid retrieval environment (HRE), and leave results
aggregation to a reranker which operates efficiently, only on the very top
retrieved results. Our experiments show that our search environment, while
simple, is competitive with respect to other hybrid proposals based on more
complex design, or relying on reranking retrieval results very deeply. Our
search agent, HaRE, learns to explore the indexed document collection deeply,
but nimbly, keeping the number of documents to rescore low. HaRE leverages
discrete query refinement steps which produce SOTA-level retrieval performance
in a competitive zero-shot task, without degrading in-domain performance.
Furthermore, we find evidence of effective multi-step inference, and the
actions of the search agent are often easy to interpret and intuitive.
Overall, we are inclined to conclude that search agents can support the
investigation of performant information retrieval systems, capable of
generalization. At the same time, they provide plenty of unexplored
opportunities, and challenges, on the architectural and learning side.
## References
* Adolphs et al. (2022) Leonard Adolphs, Benjamin Börschinger, Christian Buck, Michelle Chen Huebscher, Massimiliano Ciaramita, Lasse Espeholt, Thomas Hofmann, Yannic Kilcher, Sascha Rothe, Pier Giuseppe Sessa, and Lierni Sestorain. Boosting search engines with interactive agents. _Transactions on Machine Learning Research_ , 2022. URL https://openreview.net/forum?id=0ZbPmmB61g.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In _Advances in Neural Information Processing Systems_ , volume 33, pp. 1877–1901, 2020.
* Bruch et al. (2019) Sebastian Bruch, Xuanhui Wang, Mike Bendersky, and Marc Najork. An analysis of the softmax cross entropy loss for learning-to-rank with binary relevance. In _Proceedings of the 2019 ACM SIGIR International Conference on the Theory of Information Retrieval (ICTIR 2019)_ , pp. 75–78, 2019.
* Buck et al. (2018) Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Wojciech Gajewski, Andrea Gesmundo, Neil Houlsby, and Wei Wang. Ask the right questions: Active question reformulation with reinforcement learning. In _International Conference on Learning Representations_ , 2018. URL https://openreview.net/forum?id=S1CChZ-CZ.
* Chen et al. (2022) Tao Chen, Mingyang Zhang, Jing Lu, Michael Bendersky, and Marc Najork. Out-of-domain semantics to the rescue! zero-shot hybrid retrieval models. In _Advances in Information Retrieval: 44th European Conference on IR Research, ECIR 2022, Stavanger, Norway, April 10–14, 2022, Proceedings, Part I_ , pp. 95–110, 2022. URL https://doi.org/10.1007/978-3-030-99736-6_7.
* Chen et al. (2021) Xilun Chen, Kushal Lakhotia, Barlas Oguz, Anchit Gupta, Patrick Lewis, Stan Peshterliev, Yashar Mehdad, Sonal Gupta, and Wen tau Yih. Salient phrase aware dense retrieval: Can a dense retriever imitate a sparse one? _arXiv preprint arXiv:2110.06918_ , 2021.
* Chowdhery et al. (2022) Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways, 2022. URL https://arxiv.org/abs/2204.02311.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pp. 4171–4186, 2019.
* Formal et al. (2021a) Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. Splade v2: Sparse lexical and expansion model for information retrieval, 2021a. URL https://arxiv.org/abs/2109.10086.
* Formal et al. (2021b) Thibault Formal, Benjamin Piwowarski, and Stéphane Clinchant. _SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking_ , pp. 2288–2292. Association for Computing Machinery, New York, NY, USA, 2021b. ISBN 9781450380379. URL https://doi.org/10.1145/3404835.3463098.
* Formal et al. (2022) Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. From distillation to hard negative sampling: Making sparse neural ir models more effective. In _Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval_ , SIGIR ’22, pp. 2353–2359, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450387323. doi: 10.1145/3477495.3531857. URL https://doi.org/10.1145/3477495.3531857.
* Guo et al. (2017) Xiaoxiao Guo, Tim Klinger, Clemens Rosenbaum, Joseph P. Bigus, Murray Campbell, Ban Kawas, Kartik Talamadupula, Gerry Tesauro, and Satinder Singh. Learning to query, reason, and answer questions on ambiguous texts. In _International Conference on Learning Representations_ , 2017. URL https://openreview.net/forum?id=rJ0-tY5xe.
* Guu et al. (2020) Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. REALM: Retrieval-augmented language model pre-training. _https://arxiv.org/abs/2002.08909_ , 2020.
* Hinton et al. (2015) Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network, 2015. URL http://arxiv.org/abs/1503.02531.
* Izacard et al. (2021) Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. Unsupervised dense information retrieval with contrastive learning, 2021. URL https://arxiv.org/abs/2112.09118.
* Jagerman et al. (2022) Rolf Jagerman, Xuanhui Wang, Honglei Zhuang, Zhen Qin, Mike Bendersky, and Marc Najork. Rax: Composable learning-to-rank using jax. In _Proceedings of the 28th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, 2022.
* Jaleel et al. (2004) Nasreen Jaleel, James Allan, W. Croft, Fernando Diaz, Leah Larkey, Xiaoyan Li, Mark Smucker, and Courtney Wade. Umass at trec 2004: Novelty and hard. In _TREC_ , 2004.
## Appendix A
query | query expansion (target) | NLL
---|---|---
what is the age for hitting puberty | around^8 | 75.6
trigeminal definition | affects^6 | 71.2
is rhinitis painful | common^6 | 70.0
how many glasses of water is required a day | every^4 | 67.5
how soon do symptoms show up for hiv | acute^4 | 67.5
the collection film cast | +collection | 0.5
what kind of fossil is made by an imprint? | +fossil | 1.5
who invented corn flakes | +john | 1.7
what does gelastic mean | +laughter | 1.7
what is acha | +hockey | 1.8
Table 3: Examples from the evaluation set with the highest and lowest negative log-likelihood (NLL) using the trained HaRE search agent.
### A.1 Latency
We estimate latency by measuring the wall-clock time of each individual step for HaRE and MonoT5 running in the same setup, i.e., using the same sub-systems and configurations. For both MonoT5 and HaRE we exclude the full-collection indexing time, which is performed once, and focus on inference.
For MonoT5, we consider only the BM25 retrieval step, plus the T5-R inference
to rerank the top 1000 documents returned. We call this $L_{\mathrm{MonoT5}}$.
For HaRE, we count: BM25 retrieval + GTR retrieval + T5-R inference +
4$\times$(T5-Q inference + BM25 retrieval + BM25 Top-K retrieval + T5-R
inference). We call this $L_{\mathrm{HaRE}}$. The resulting ratio is
$\frac{L_{\mathrm{HaRE}}}{L_{\mathrm{MonoT5}}}=0.51.$
Notice that, while we count the T5-R reranking latency in full for each step,
in practice the same documents are often retrieved multiple times and caching
can be used effectively.
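To make the bookkeeping above concrete, here is a minimal Python sketch of the ratio computation. All per-step timings and the 40-document rerank size are hypothetical placeholders, not measured values; they are chosen only to show how a linear T5-R cost over a smaller candidate set lets HaRE's repeated steps undercut MonoT5's single 1000-document rerank.

```python
# Hypothetical per-step latencies in milliseconds (placeholders only).
def t5_r_ms(num_docs, ms_per_doc=1.0):
    # Assume reranking cost is linear in the number of candidates.
    return num_docs * ms_per_doc

bm25, gtr, t5_q, bm25_topk = 50.0, 30.0, 60.0, 20.0

l_monot5 = bm25 + t5_r_ms(1000)  # BM25 + rerank of top-1000
l_hare = bm25 + gtr + t5_r_ms(40) + 4 * (t5_q + bm25 + bm25_topk + t5_r_ms(40))
print(l_hare / l_monot5)  # analogous to L_HaRE / L_MonoT5
```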
More importantly, for simplicity's sake, the current implementation does not create a separate Lucene index for the GTR Top-1K sub-corpus. Instead, to search over GTR's Top-1K corpus, we search the full collection but restrict the query to the Top-1K documents by means of Lucene's 'id' query operator. Hence, we append 1K document ids to the query, constraining retrieval to the Top-1K only. This turns out to be by far the slowest step for HaRE. A conservative back-of-the-envelope calculation shows that refactoring the search step, indexing the Top-1K separately for each episode and executing two separate BM25 retrieval steps on the full and Top-1K collections, would cut overall latency by another 50%.
In general, while more complex, HaRE's architecture offers many options for refactoring and optimization.
### A.2 Examples
Query and Results | Score
---|---
$q_{0}$ | what’s the difference between c++ and java | 0.0
$d_{1}$ | The difference between C#/Java and C is too big, but the differences between C#/Java and C++ are easier to pick and the most important, other than said updates to the language, are the adoption of a pure OOP approach to programming. |
$d_{2}$ | JavaScript is an interpreted computer programming language. The main difference between Java and JavaScript is that while Java is a programming language, JavaScript is scripting language. It is a prototype-based scripting language that is dynamic, weakly typed, and has first-class functions. However, like Java, JavaScript is heavily influenced by the C programming language. |
| … |
$q_{1}$ | what’s the difference between c++ and java +language | 0.6
$d_{1}$ | The most important difference is that Java is a memory-safe language, whereas C++ is not. This means that errors in Java programs are detected in defined ways; for example, attempting a bad cast or indexing an array out of bounds results in an exception. |
$d_{2}$ | Java (bottom) is contrasted with C/C++ (top). Unlike C/C++, which is compiled into machine language for a specific hardware platform, Java programs are compiled into an intermediate bytecode language. The bytecode is then compiled into machine language by a runtime program called the Java Virtual Machine (JVM). Unlike C/C++, which is compiled into machine language for a specific hardware platform, Java programs are compiled into an intermediate bytecode language. The bytecode is then compiled into machine language by a runtime program called the Java Virtual Machine (JVM). |
| … |
$q_{2}$ | what’s the difference between c++ and java +language +platform | 1.0
$d_{1}$ | C++ is an evolution to C. Which was a system programming language. C++ Added many features to the language to make it object oriented. It became the mainstream programming language for that reason. Java is an evolution of C++, with different goals ( cross platform for instance ). It remove some of the features that make C++ so hard to learn. Simplify others and remove others. |
$d_{2}$ | Java (bottom) is contrasted with C/C++ (top). Unlike C/C++, which is compiled into machine language for a specific hardware platform, Java programs are compiled into an intermediate bytecode language. The bytecode is then compiled into machine language by a runtime program called the Java Virtual Machine (JVM). Unlike C/C++, which is compiled into machine language for a specific hardware platform, Java programs are compiled into an intermediate bytecode language. The bytecode is then compiled into machine language by a runtime program called the Java Virtual Machine (JVM). |
| … |
Table 4: Example of a multistep gold search session that forms the training data for T5-Q. This is one of the few examples from MS MARCO where more than one document is annotated as relevant. The first set of results is on topic but too generic and slightly off the mark: the first document also discusses C#, and the second document is about JavaScript. Restricting documents to those containing the term 'language' places a relevant document in 2nd position in the results from step 1. Here a new term is discovered, 'platform', which was not present in the results for $q_{0}$ (it does not occur in any of the top 10 results, which we omit for simplicity). Further refining the query with +platform puts the two relevant documents at the top of the second-step results, and the session achieves a full score.
Query and Results | Score
---|---
$q_{0}$ | what do you use dtp for | 0.0
$d_{1}$ | The Dynamic Trunking Protocol (DTP) is a proprietary networking protocol developed by Cisco Systems for the purpose of negotiating trunking on a link between two VLAN-aware switches, and for negotiating the type of trunking encapsulation to be used. It works on Layer 2 of the OSI model. |
$d_{2}$ | DTP (diptheria, tetanus toxoids and pertussis) Vaccine Adsorbed (For Pediatric Use) is a vaccine used for active immunization of children up to age 7 years against diphtheria, tetanus, and pertussis (whooping cough) simultaneously. DTP is available in generic form. |
| … |
$d_{9}$ | Page Layout Software (Generally Known as DTP Software). Since page layout software is commonly known as DTP software, this can lead to some confusion but now you know better. These software programs are the workhorses of DTP and they do exactly what you might think they would in accordance with the name. |
| … |
$q_{1}$ | what do you use dtp for +software | 0.0
$d_{1}$ | the operating systems main interface screen. desktop publishing. DTP; application software and hardware system that involves mixing text and graphics to produce high quality output for commercial printing using a microcomputer and mouse, scanner, digital cameras, laser or ink jet printer, and dtp software. |
$d_{2}$ | Page Layout Software (Generally Known as DTP Software). Since page layout software is commonly known as DTP software, this can lead to some confusion but now you know better. These software programs are the workhorses of DTP and they do exactly what you might think they would in accordance with the name. |
| … |
$q_{2}$ | what do you use dtp for +software +publishing | 1.0
$d_{1}$ | Desktop publishing. Desktop publishing (abbreviated DTP) is the creation of documents using page layout skills on a personal computer primarily for print. Desktop publishing software can generate layouts and produce typographic quality text and images comparable to traditional typography and printing. |
$d_{2}$ | Scribus, an open source desktop publishing application. Desktop publishing (abbreviated DTP) is the creation of documents using page layout skills on a personal computer primarily for print. Desktop publishing software can generate layouts and produce typographic quality text and images comparable to traditional typography and printing. |
| … |
Table 5: Example of a multistep search session performed by HaRE. The first set of results gets an nDCG score of 0.0, and the top 2 documents seem clearly wrong. Curiously, none of the top-10 documents mentions the word 'publishing'. HaRE first selects the refinement '+software'. The corresponding results, while still scoring 0, surface the term 'publishing', which HaRE uses as the next refinement, '+publishing', leading to a full nDCG score on the next round.
# FasterX: Real-Time Object Detection Based on Edge GPUs for UAV Applications
Wei Zhou, Xuanlin Min, Rui Hu, Yiwen Long, Huan Luo, and Jun Yi
The authors are with the School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China (e-mail: [email protected]).
###### Abstract
Real-time object detection on Unmanned Aerial Vehicles (UAVs) is a challenging problem due to the limited computing resources of edge GPU devices acting as Internet of Things (IoT) nodes. To address this problem, this paper proposes FasterX, a novel lightweight deep learning architecture based on the YOLOX model for real-time object detection on edge GPUs. First, we design an effective and lightweight PixSF head to replace the original head of YOLOX and better detect small objects; it can be further combined with depthwise separable convolution (DS Conv) to obtain an even lighter head. Then, a slimmer structure in the Neck layer, termed SlimFPN, is developed to reduce the parameters of the network, trading off accuracy against speed. Furthermore, we embed an attention module in the Head layer to improve the feature extraction of the prediction head. We also improve the label assignment strategy and loss function to alleviate the category imbalance and box optimization problems of the UAV dataset. Finally, auxiliary heads are introduced for online distillation to improve the position embedding and feature extraction abilities of the PixSF head. The performance of our lightweight models is validated experimentally on the NVIDIA Jetson NX and Jetson Nano GPU embedded platforms. Extensive experiments show that FasterX models achieve a better trade-off between accuracy and latency on the VisDrone2021 dataset than state-of-the-art models.
###### Index Terms:
Lightweight, prediction head, distillation, edge GPU, UAV.
## I Introduction
Unmanned aerial vehicles (UAVs) are regarded as efficient Internet of Things (IoT) nodes for large-scale environment sensing and monitoring [1, 2], and are widely used in urban [3], agricultural [4], surveillance [5, 6], and other tasks. Visual object detection is a hot topic in UAV applications: locating and classifying all objects in UAV imagery, such as pedestrians, cars, and bicycles. In recent years, deep learning-based object detection has made significant progress in both accuracy and efficiency, and many excellent networks have been proposed. Two-stage models [7, 8, 9] usually achieve high accuracy but poor efficiency, making them difficult to run on UAV platforms with limited computing resources. Recently, one-stage models based on YOLO have been widely used in embedded systems [10, 11, 12]. However, anchor-based YOLO models do not solve the following problems: 1) anchors need to be carefully and manually redesigned to match the anchor distributions of different datasets; 2) the imbalance between positive and negative samples. Meanwhile, balancing detection accuracy and real-time requirements remains a challenging task.
Figure 1: Giga floating-point operations (GFLOPs) versus accuracy (AP50) on the VisDrone2021 benchmark dataset. Our FasterX is much better than YOLOv5, YOLOX, and YOLOX+P4. Note that the YOLOX curve shows Nano, Tiny, and S in order from bottom to top, and the YOLOv5 curve includes Nano and S. Such performance is very competitive for UAV applications. Details are given in Section IV.
To overcome this problem, many researchers have been committed to developing efficient detector architectures, such as FCOS [13] and FSAF [14]. Lightweight anchor-free models, such as NanoDet [15] and YOLOX-Nano, YOLOX-Tiny, and YOLOX-S [16], are also typical representatives. These detectors are evaluated on COCO, VOC, etc., which has promoted the development of the object detection field. However, two practical issues arise when applying UAV detection on edge devices: 1) compared with the COCO [17] and VOC [18] datasets, UAV datasets have specific problems such as large changes in object scale and a large number of small objects, as intuitively illustrated by the samples in Fig. 2; 2) real-time edge monitoring on UAVs faces a hardware bottleneck. Hence, there is a sharp tension between small-object detection accuracy and model inference speed. For UAVs, the current simple and efficient methods to improve small-object accuracy can be summarized in two points: 1) increasing the input image resolution to enlarge the objects; 2) without changing the input resolution, supplementing an additional small-object detection head on a large-resolution feature map, i.e., an amplification strategy on the feature map [19]. Whichever route is taken, both tricks lead to a sharp rise in the GFLOPs cost of the model, seriously slowing inference and hindering real-time edge detection. Hence, these methods cannot simply be applied to lightweight models.
Figure 2: Samples from the VisDrone2021 dataset [20], illustrating the large number of densely packed small objects and the complex backgrounds in UAV data.
Recently, relevant work on embedded UAV deployment has appeared in succession [21, 22]. Among these, the model compression method proposed by Zhang et al. [22] has been widely validated on the VisDrone2021 dataset [20], and can serve as one optimization scheme. Unlike model compression, this paper focuses on optimizing for small objects and the detector architecture to realize real-time inference on edge GPU devices for UAVs. Inspired by PP-PicoDet [23], NanoDet-Plus [15], ESPCN [24], and YOLOX [16], we present FasterX, an edge-GPU-friendly anchor-free detector for UAV detection that is much more accurate and lighter than the lightweight detector YOLOX. Despite adding more prediction heads, FasterX maintains its advantages in parameters, GFLOPs cost, and inference time. In summary, the main contributions of this paper are as follows:
* •
A novel lightweight PixSF head is proposed to greatly alleviate the inference burden caused by the Head layer when an additional detection head P4 is added. The PixSF head is built around a novel position encoder-decoder (Focus & Pixel Shuffling). It is effective not only on the UAV dataset VisDrone2021, but is also validated experimentally on the VOC2012 benchmark dataset.
* •
We develop a slimmer structure, named SlimFPN, to replace the PAFPN structure in the Neck layer. SlimFPN removes the bottom-up aggregation path of PAFPN and unifies the input channel numbers of all Neck branches via the Ghost module [25], which significantly reduces the parameters of the network.
* •
We employ the SimOTA dynamic label assignment strategy [26, 16] to optimize the training process and obtain globally optimal assignment results. Meanwhile, we adopt the weighted sum of Focal Loss (FL) [27] and complete intersection-over-union (CIoU) loss [28] to alleviate the category imbalance and the aspect-ratio imbalance of the ground truth (GT) boxes.
* •
We present a novel auxiliary head for online distillation to improve the position encoding and feature extraction abilities of the PixSF head.
The rest of this article is organized as follows: Section II covers related work, and Section III presents the proposed FasterX. Section IV provides experimental results and discussion. Section V concludes and outlines future work.
Figure 3: Framework of FasterX. The Backbone is CSPDarknet53, which outputs P1-P4 feature maps to the Neck layer with strides (32, 16, 8, 4). In SlimFPN, parameter reduction for the Neck inputs (P1, P2, and P3), which have large channel counts, is achieved through the Ghost module [25]. In the Head layer, four lightweight PixSF heads are proposed to balance accuracy and speed. In addition, the Aux-Module is designed to obtain a better label assignment for the PixSF head.
## II Related Work
Throughout the development of deep learning, making networks lightweight has been a constant optimization goal, and in object detection great progress has been made on lightweight components. This section elaborates on object detection and its lightweight techniques.
Detection networks that use a CNN as the feature extractor can be categorized in several ways: into anchor-based and anchor-free detectors according to the label assignment strategy, or into one-stage and two-stage detectors according to the number of stages.
From the perspective of components, the current mainstream pipeline generally consists of three parts: a CNN backbone for image feature extraction; a Neck layer designed for feature fusion; and, based on the features assigned by the Neck layer, a detection head for category and box prediction. Next, we introduce these three components of object detection and the related lightweight technologies.
Backbone: The backbone of an object detector is typically a CNN, such as VGG, ResNet, DenseNet, or Swin Transformer [29, 30, 31, 32]. Lightweight designs include MobileNet, GhostNet, and ShuffleNet [25, 33, 34]. Although these works accelerate models by reducing parameters and GFLOPs, recent studies show that evaluating only these theoretical indicators may lead to suboptimal results, and some researchers argue that directly validating on edge devices is the more reliable approach. RepVGG [35] uses structural re-parameterization to improve inference speed and demonstrates that FLOPs alone are an inappropriate measure of the real speed of different architectures. YOLO-ReT [11] improves the convolution modules by building a Neck structure more conducive to edge-device inference, measuring real speed by running inference directly on the target device.
Neck: The Neck layer is designed to better exploit the features from the backbone. It reprocesses the feature maps extracted at different backbone stages and uses the different receptive fields for sample point assignment. Generally, the Neck layer consists of several bottom-up and top-down paths.
Improved path aggregation blocks based on FPN include FPN [9], PANet [36], NAS-FPN [37], BiFPN [38], ASFF [39], and SFAM [40]. Path aggregation is very effective, but its multi-branch structure and repeated upsampling, downsampling, and concatenation operations reduce inference speed. Recently, there have been many studies on lightweight Necks. YOLOF [41] proposed a dilated encoder and uniform matching to replace FPN. NanoDet [15] completely removes all convolutions in PANet and uses interpolation for upsampling and downsampling. PP-YOLOE [42] and YOLOv7 [43] both introduce re-parameterized convolutions [35] into the Neck to improve inference.
Head: The head layer extracts the feature maps from the Neck layer and handles the position regression and classification tasks. In recent years, improvements to the head layer have mainly focused on the regression and classification tasks [44, 45, 46]. A common optimization is to split the head into separate streams for regression and classification, each performing its own feature extraction [16]; this alleviates the conflict between regression and classification features, but introduces inference latency on edge devices. Recently, Transformer methods have been a hot topic in computer vision [19, 47], and several studies have explored using transformers to improve the prediction accuracy of the head layer. Unlike the convolution operator, the transformer is a non-intensive operation on edge GPUs, resulting in underutilization of computational resources; its many element-wise operations can also significantly slow down inference. These computational and storage requirements make transformers difficult to deploy on edge GPU devices.
## III Proposed Models
### III-A YOLOX
YOLOX is designed on an anchor-free framework, abandoning the anchor-based strategy of the YOLO series (YOLOv2-v5). Because it achieves the most advanced results of the current YOLO series on the COCO dataset, this paper chooses it as the baseline. Six models with different network width and depth settings are available: YOLOX-Nano, YOLOX-Tiny, YOLOX-S, YOLOX-M, YOLOX-L, and YOLOX-X. Among them, YOLOX-Nano, YOLOX-Tiny, and YOLOX-S form the lightweight family of YOLOX and are suitable for deployment on embedded devices.
### III-B FasterX
The framework of FasterX is shown in Fig. 3. The main parts of FasterX are the PixSF head, SlimFPN, and the auxiliary head. The proposed FasterX achieves efficient, real-time deep learning-based object detection on the Jetson NX and Jetson Nano GPU embedded platforms. Next, we describe the design in detail.
Figure 4: Training curves for detectors with 4-head (added P4 head) and 3-head structures. We evaluate mAP and AP50 on VisDrone2021 every 50 epochs. The 4-head structure converges much faster than the 3-head structure and also achieves better results.
Lightweight PixSF head: In UAV object detection, objects are relatively small within the overall image scene (small objects $<32\times 32$ pixels, medium objects from $32\times 32$ to $96\times 96$ pixels, and large objects $>96\times 96$ pixels, as defined by the COCO dataset). For example, in the VisDrone2021 dataset, more than 60% of objects are small objects such as pedestrians, bicycles, motors, and cars. Although many deep learning-based detectors achieve high accuracy elsewhere, they perform poorly on UAV datasets due to the small objects and complex backgrounds. Recently, several studies have explored adding an additional detection head (P4 head) to improve detection accuracy on small objects [19, 13]. A 4-head prediction structure can alleviate the impact of object-scale changes and reduce the optimization difficulty for small-object detection networks. As shown in Fig. 4, we compare the three lightweight YOLOX benchmark models under the 4-head structure: the mean average precision (mAP) and AP50 of 4-head YOLOX-Tiny are far better than those of the 3-head structure.
Although the 4-head structure is very effective, the parameters and GFLOPs increase significantly on GPU embedded devices. In particular, the GFLOPs rise sharply because of the further expansion of the multi-branch structure, which is not conducive to GPU parallel computing. Table I shows the latency results on the Jetson NX and Nano platforms. Taking YOLOX-S as an example, the inference delay of the 4-head YOLOX increases by 85% and 353% on the Jetson NX and Nano platforms, respectively, compared with the native YOLOX. To address this problem, we design a lightweight PixSF head based on the 4-head structure that trades off accuracy against speed.
Table I: Latency comparison between 4-head and 3-head structures on the Jetson NX and Jetson Nano platforms.
Model | Params (M) | GFLOPs | Latency (Nano) | Latency (NX)
---|---|---|---|---
YOLOX-S | 9.0 | 26.8 | 52.52 ms | 14.70 ms
YOLOX-S+P4 | 9.69 | 60.14 | 238.22 ms | 27.26 ms
YOLOX-Tiny | 5.06 | 7.43 | 38.89 ms | 10.68 ms
YOLOX-Tiny+P4 | 5.45 | 16.67 | 79.85 ms | 12.89 ms
YOLOX-Nano | 0.91 | 1.21 | 27.57 ms | 8.91 ms
YOLOX-Nano+P4 | 0.94 | 1.98 | 55.25 ms | 10.96 ms
Figure 5: Framework of the PixSF head. Focus and Pixel Shuffling are one-to-one encoding and decoding operations: Focus embeds position channel-wise, and Pixel Shuffling decodes and restores the position information spatially. The PixSF head uses this pair of key operations, combined with YOLO's location constraints, to construct a new head-level paradigm.
In the field of super-resolution reconstruction, the loss function is usually constructed for pixel-level supervision to realize the mapping from a low-resolution (LR) image to a high-resolution (HR) image, in which the pixel-level loss function (L2 loss) and the convolution operations implicitly contain the location information [48, 49, 24]. Similar position constraints also exist in object detection: when predicting the position regression, the prediction must be decoded back to the input coordinate range to suit the loss supervision. In YOLO detectors, the position decoding can be formulated as follows:
$[t_{x},t_{y}]=(Convs(\mathbf{F})+Grid\_coord)*Stride$
$[t_{w},t_{h}]=e^{Convs(\mathbf{F})}*Stride$ (1)
where $t_{x}$ and $t_{y}$ denote the offset of the center point from the upper-left corner of the grid cell, and $t_{w}$ and $t_{h}$ represent the width and height factors of the object, respectively. These values live in the scale of the original image and are linked to it through $Convs(\mathbf{F})$ and the stride of the current feature map relative to the original image. $Grid\_coord$ is the upper-left coordinate of the grid cell, $\mathbf{F}$ is the input of the head layer, and $Convs(\cdot)$ is the convolution operation in the head layer.
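As a concrete illustration of Eq. (1), the following is a minimal PyTorch sketch of this decoding step. The tensor layout, with a trailing dimension holding the four raw regression outputs, is an assumption for illustration, not the paper's actual implementation.

```python
import torch

def yolo_decode(reg_out, grid_coord, stride):
    # reg_out[..., :2]: raw center offsets; reg_out[..., 2:4]: raw
    # log-scale width/height. grid_coord holds each cell's upper-left
    # coordinates on its feature map.
    txy = (reg_out[..., :2] + grid_coord) * stride  # centers on the original map
    twh = torch.exp(reg_out[..., 2:4]) * stride     # widths/heights on the original map
    return torch.cat([txy, twh], dim=-1)
```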
In summary, Eq. (1) can be viewed as a feature position embedding: the encoded feature is mapped to a fixed position on the original image by combining the grid coordinates with the downsampling multiple. Inspired by this, we design a position encoder-decoder that exploits this position embedding in the head layer, named the PixSF head.
As shown in Fig. 5, the encoder uses Focus to concatenate pixel patches channel-wise in the HR feature. In this way, the channels of each pixel encode the patch position information (upper-left, upper-right, lower-left, and lower-right corners). Furthermore, to reduce the encoding cost and build positional relationships between pixel patches, we use a simple $1\times 1$ convolution for feature extraction and dimension reduction, then feed the result into the decoupled head for the regression and classification tasks. In the decoder, we restore the encoder's features to $C\times rH\times rW$ dimensions using a pixel-shuffle operation. Specifically, a $1\times 1$ convolution with depth $r^{2}*C$ maps the $C\times H\times W$ feature map to $r^{2}*C\times H\times W$, where $r$ is set to 2. The encoder-decoder can be described as follows:
$EC(X)=\phi(\omega_{1}(Focus(X_{[C,H,W]})))$
$DC(X)=PixelShuffle(EC(X_{[C,H,W]}))$
$Focus(X)=X_{[C,H,W]}\rightarrow X_{[r^{2}*C,H/r,W/r]}$
$PixelShuffle(X)=\phi(\omega(Focus(X)))_{[C,H,W]}$ (2)
where $EC(\cdot)$ denotes the encoder and $DC(\cdot)$ the decoder. $X_{[C,H,W]}$ is an input $X$ of size $C\times H\times W$, and $r$ is the scaling factor; note that $H$ and $W$ must be divisible by $r$. The activation function $\phi$ is applied element-wise, and $\omega$ denotes the parameters of the whole decoupling layer.
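To make the encode-decode pair of Eq. (2) concrete, the following is a minimal PyTorch sketch of the Focus (space-to-depth) encoding and PixelShuffle decoding. The layer widths, activation choice (SiLU), and module name are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class PixSFEncodeDecode(nn.Module):
    def __init__(self, channels, r=2):
        super().__init__()
        self.r = r
        # 1x1 conv reduces the r^2*C encoded channels back to C.
        self.reduce = nn.Conv2d(channels * r * r, channels, kernel_size=1)
        # 1x1 conv expands to r^2*C so PixelShuffle can restore H x W.
        self.expand = nn.Conv2d(channels, channels * r * r, kernel_size=1)
        self.act = nn.SiLU()
        self.shuffle = nn.PixelShuffle(r)

    def focus(self, x):
        # Space-to-depth: [C, H, W] -> [r^2*C, H/r, W/r], embedding each
        # pixel's position within its r x r patch along the channel axis.
        r = self.r
        b, c, h, w = x.shape
        x = x.view(b, c, h // r, r, w // r, r)
        return x.permute(0, 1, 3, 5, 2, 4).reshape(b, c * r * r, h // r, w // r)

    def forward(self, x):
        enc = self.act(self.reduce(self.focus(x)))  # encoder EC(X)
        return self.shuffle(self.expand(enc))       # decoder DC(X)

x = torch.randn(1, 64, 32, 32)
print(PixSFEncodeDecode(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```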
SlimFPN: Anchor-free object detectors mainly adopt a feature pyramid network (FPN) for multi-scale prediction, which extracts multi-level object features and improves online feature selection. In the path aggregation network (PANet) of YOLOX, a bottom-up structure, PAFPN, is built on FPN to complement localization information, giving the FPN a bottom-up gradient update path. In the PAFPN structure, the inputs and outputs of the Neck layer match the backbone outputs at the different stages. The PAFPN structure can be formulated as
$P4^{\prime}=DS_{2X}(P4)$
$P3^{\prime}=f(DS_{2X}(Cat(P4^{\prime},P3)))$
$P2^{\prime}=f(DS_{2X}(Cat(P3^{\prime},P2)))$
$P1^{\prime}=f(DS_{2X}(Cat(P2^{\prime},P1)))$ (3)
where P4 is the lowest (highest-resolution) layer of the FPN, $DS_{2X}$ denotes 2× downsampling, $f$ is the fusion operator, and $Cat$ is channel concatenation.
The PANet structure propagates localization features to the high-semantic feature layers by combining the downsampled upper-level feature map with the current-level feature map. However, from the viewpoint of gradient propagation, the P4 prediction layer, which benefits small-object detection, computes its loss before the PANet structure. Hence, the P4 head cannot share the gradient advantages of the PANet structure, whereas the P1′ head, which benefits large objects, can share them fully.
Considering that PANet, with its large channel counts, is structurally multi-branched and causes delayed inference on edge GPU devices, we design a lightweight SlimFPN structure without PANet as the Neck layer. SlimFPN forms the multi-level feature maps by intercepting the output of each backbone stage. Because the number of extraction parameters grows with network depth, the feature channels also grow, and large channel counts increase the computational cost on edge GPU devices. Hence, in SlimFPN, channel scaling is performed on the parameter-heavy feature branches to reduce inference latency. As shown in the SlimFPN part of Fig. 3, since the backbone channels increase with depth at each stage, the input channel count $C$ of P3 is used as the basis for the other levels. Considering that balancing channels inevitably loses some information, the parameter reduction is implemented with the Ghost module [25], whose extra spatial extraction ability alleviates this information loss.
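For reference, a minimal PyTorch sketch of the Ghost module [25] used for this channel reduction is given below. The half/half channel split, kernel size, and activation are common defaults and are assumptions rather than the exact configuration used in SlimFPN.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    # A primary 1x1 conv generates half the output channels; a cheap
    # depthwise conv generates the other half from them. out_ch is
    # assumed even for this sketch.
    def __init__(self, in_ch, out_ch, dw_kernel=3):
        super().__init__()
        init_ch = out_ch // 2
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, init_ch, dw_kernel, padding=dw_kernel // 2,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)
```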
Compared to PANet, the proposed SlimFPN may lose some accuracy, but the effect is insignificant because most UAV objects are small. On the one hand, SlimFPN benefits from the powerful feature fusion of FPN: the top-down structure ensures that deep semantic information is transferred to the shallow feature maps, providing semantic support for small objects. On the other hand, the bottom-up structure of PANet is mainly aimed at large objects and plays no supervisory role for the P4 head. Therefore, SlimFPN lowers large-object accuracy slightly but improves inference speed in all three models, a trade-off between accuracy and speed.
Head with attention: In the traditional detection head, regression and classification both sit at the final stage and share most of their weights. Double-Head RCNN [44] first proposed a double head layer in which different convolutions extract separate streams for the regression and classification tasks, minimizing the features shared between them. Since then, this idea has been adopted by recently popular one-stage detectors [16, 23, 41]. To further improve the extraction ability of the head, PP-YOLOE [42] and TOOD [50] use attention mechanisms to enhance feature extraction, and the effectiveness of integrating an attention module in the head has been demonstrated on the COCO dataset.
In UAV images, the large coverage area always contains complex and diverse backgrounds. To enhance the feature extraction of the head layer, we adopt an attention mechanism to focus on regions of interest. Specifically, we employ the convolutional block attention module (CBAM) to improve the feature representation of the Head layer, owing to its rich channel and spatial attention maps. CBAM is a lightweight module that can be easily embedded into the head layer to improve detection performance, as shown in Fig. 6.
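The following is a minimal PyTorch sketch of CBAM, channel attention followed by spatial attention. The reduction ratio and spatial kernel size are common defaults and are assumptions, not values reported by the paper.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # Shared MLP for channel attention.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))
        # Conv over stacked avg/max maps for spatial attention.
        self.spatial = nn.Conv2d(2, 1, kernel_size,
                                 padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Channel attention: shared MLP over avg- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: channel-wise avg and max maps, then a conv.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```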
Figure 6: Network structure of the attention mechanism in the FasterX head.
Figure 7: Architecture diagram of online distillation with the auxiliary head. (a) The common object detection framework, including Backbone, Neck, and Heads. (b) The auxiliary model, including Stu.Head and Aux.Head, where Stu.Head is the PixSF head. (c) The auxiliary distillation process: Aux.Head guides SimOTA to produce label assignment results for Stu.Head, and the feature distance between the two heads is optimized with an L2 loss.
Label Assignment Strategy and Loss: The assignment of positive and negative samples has a crucial impact on the detector. RetinaNet [27] and Faster RCNN [9] directly divide positive and negative samples using an intersection-over-union (IoU) threshold between the anchor template and the ground truth (GT). YOLOv5 [51] uses the aspect ratio between the anchor template and the GT to select positive samples. FCOS [13] takes anchors in the central area of the GT as positives, and ATSS [52] selects samples according to the statistical characteristics of the nearest anchors around the GT. All of these assignment strategies are fixed throughout training. To obtain globally optimal assignment results, we use the SimOTA dynamic label assignment strategy to optimize the training process.
SimOTA first determines the candidate region through a center prior, then calculates the IoU between the GT and the candidate boxes in that region, and finally obtains the dynamic parameter $K$ by summing the IoUs. SimOTA then uses a cost matrix to assign $K$ candidate boxes to each GT for network training, where the cost matrix is obtained by directly calculating the losses between all candidate boxes and the GT. The original SimOTA uses the weighted sum of BCE loss and IoU loss as the cost matrix, which can be expressed as:
$CostMatrix=ClsLoss_{BCE}+\alpha*RegLoss_{IOU}$ (4)
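A minimal sketch of SimOTA's dynamic-k matching is given below. The cap of 10 candidate IoUs and the exact tie handling are assumptions based on common SimOTA implementations, not details given in the paper.

```python
import torch

def dynamic_k_matching(cost, ious, max_k=10):
    # cost, ious: [num_gt, num_candidates] matrices. For each GT, k is
    # the clamped sum of its top candidate IoUs; the k lowest-cost
    # candidates become its positive samples.
    topk_ious, _ = torch.topk(ious, min(max_k, ious.shape[1]), dim=1)
    ks = topk_ious.sum(dim=1).int().clamp(min=1)
    matching = torch.zeros_like(cost, dtype=torch.bool)
    for g in range(cost.shape[0]):
        _, idx = torch.topk(cost[g], int(ks[g]), largest=False)
        matching[g, idx] = True
    return matching
```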
In UAV scenes, the sample categories and the aspect ratios of the GT boxes are both often unbalanced. Taking the VisDrone2021 dataset as an example, there are 79,337 pedestrian samples but only 4,812 tricycle samples. The aspect ratios of the GT boxes (max side / min side) are likewise unbalanced: the range is broad, from 1 to an upper limit near 5, yet most GT boxes fall between 1 and 2.5. The original label assignment loss of YOLOX cannot handle such cases well. To address the category imbalance, we use Focal loss to make the network pay more attention to the under-sampled categories; to address the aspect-ratio imbalance, we replace the IoU loss with the CIoU loss, which introduces aspect-ratio supervision to better fit the GT box. The improved cost matrix can then be expressed as
$CostMatrix=ClsLoss_{Focal}+\alpha*RegLoss_{CIoU}$ (5)
$ClsLoss_{Focal}=|y-\overline{y}|^{\gamma}*(\alpha_{1}*y+(1-\alpha_{1})*(1-y))$ (6)
where $\alpha$ is the balance factor of the cost matrix and $|y-\overline{y}|$ is the distance between the predicted value and the label; $\gamma$ and $\alpha_{1}$ are hyperparameters for adjusting the class imbalance. For the regression part, $RegLoss_{CIoU}$ is defined as:
$RegLoss_{CIoU}=1-IOU+\frac{distance(b,b^{gt})^{2}}{c^{2}}+\alpha_{2}v$ (7)
$v=\frac{4}{\pi^{2}}(\arctan\frac{w^{gt}}{h^{gt}}-\arctan\frac{w}{h})^{2}$ (8)
where $\alpha_{2}$ is also a balance factor, $v$ measures the similarity of the aspect ratios, $distance(\cdot)$ computes the distance between the center points of the predicted box and the GT box, and $c$ is the diagonal length of the smallest box enclosing both boxes.
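A minimal PyTorch sketch of the CIoU regression loss of Eqs. (7)-(8) is shown below. Treating $\alpha_{2}$ as a fixed constant is a simplifying assumption (common implementations derive it from $v$ and the IoU), and boxes are assumed to be in (x1, y1, x2, y2) format.

```python
import math
import torch

def ciou_loss(pred, gt, alpha2=1.0, eps=1e-7):
    # IoU term.
    x1 = torch.max(pred[..., 0], gt[..., 0]); y1 = torch.max(pred[..., 1], gt[..., 1])
    x2 = torch.min(pred[..., 2], gt[..., 2]); y2 = torch.min(pred[..., 3], gt[..., 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_g = (gt[..., 2] - gt[..., 0]) * (gt[..., 3] - gt[..., 1])
    iou = inter / (area_p + area_g - inter + eps)
    # Center-distance term distance(b, b^gt)^2 / c^2, with c the diagonal
    # of the smallest box enclosing both boxes.
    cp = (pred[..., :2] + pred[..., 2:]) / 2
    cg = (gt[..., :2] + gt[..., 2:]) / 2
    dist2 = ((cp - cg) ** 2).sum(-1)
    enc_wh = torch.max(pred[..., 2:], gt[..., 2:]) - torch.min(pred[..., :2], gt[..., :2])
    c2 = (enc_wh ** 2).sum(-1) + eps
    # Eq. (8): aspect-ratio consistency term v.
    wp, hp = pred[..., 2] - pred[..., 0], pred[..., 3] - pred[..., 1]
    wg, hg = gt[..., 2] - gt[..., 0], gt[..., 3] - gt[..., 1]
    v = (4 / math.pi ** 2) * (torch.atan(wg / (hg + eps)) - torch.atan(wp / (hp + eps))) ** 2
    return 1 - iou + dist2 / c2 + alpha2 * v  # Eq. (7)
```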
Auxiliary Head for Online Distillation: We use position constraints to design the encoder-decoder PixSF head to improve small-object detection, and we train the network with the improved SimOTA method. However, for a lightweight network, the dynamic matching and the position encoding-decoding may not yield good results, because the model is randomly initialized at the start of training [53].
To address this issue, we design an auxiliary head for online distillation of the lightweight PixSF head. As shown in Fig. 7, we create two streams at the head level, Stu.Head and Aux.Head, where Stu.Head (the PixSF head) is the head layer of the original structure and is the distilled layer. Aux.Head guides SimOTA to produce label assignment results for Stu.Head. As shown in Fig. 7 (c), the SimOTA label assignments guided by Aux.Head replace the original label assignments of Stu.Head, improving the overall expressive ability of Stu.Head. In addition, we add a pixel-level alignment task to strengthen Stu.Head's ability to learn from Aux.Head. This process can be summarized as follows:
$Loss_{total}=Loss_{PixSF}+Loss_{Aux}+\lambda\cdot\parallel F_{PixSF}-F_{Aux}\parallel^{2}_{2}$ (9)
where $\parallel F_{PixSF}-F_{Aux}\parallel^{2}_{2}$ computes the distance between the features of the PixSF head and Aux.Head.
It is worth noting that the auxiliary head is used only during training and incurs no inference cost.
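A minimal sketch of the total loss in Eq. (9) follows. The weight lam (for $\lambda$) and the use of a mean rather than a sum over feature elements are assumptions for illustration.

```python
import torch

def distillation_total_loss(loss_pixsf, loss_aux, feat_pixsf, feat_aux, lam=1.0):
    # Eq. (9): task losses of both heads plus an L2 alignment term that
    # pulls the PixSF (student) features toward the auxiliary head's.
    align = ((feat_pixsf - feat_aux) ** 2).mean()
    return loss_pixsf + loss_aux + lam * align
```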
Table II: Comparison with state-of-the-art one-stage detectors on the VisDrone2021 dataset.
Model | Size | Params(M) | GFLOPs | mAP(val) | AP50(val) | mAP(test) | AP50(test)
---|---|---|---|---|---|---|---
YOLOv3-Tiny | $832\times 838$ | 8.7 | 21.82 | $11.00\%$ | $23.40\%$ | - | -
SlimYOLOv3-SPP3-50 | $832\times 838$ | 20.8 | 122 | $25.80\%$ | $45.90\%$ | - | -
SlimYOLOv3-SPP3-90 | $832\times 838$ | 8.0 | 39.89 | $23.90\%$ | $36.90\%$ | - | -
SlimYOLOv3-SPP3-95 | $832\times 838$ | 5.1 | 26.29 | $21.20\%$ | $36.10\%$ | - | -
YOLOv5-S | $640\times 640$ | 7.2 | 16.5 | $15.03\%$ | $30.19\%$ | $13.22\%$ | $25.86\%$
YOLOv5-Nano | $416\times 416$ | 1.9 | 1.8 | $6.61\%$ | $14.60\%$ | $5.65\%$ | $13.03\%$
YOLOv5-Nano | $448\times 448$ | 1.9 | 2.1 | $6.75\%$ | $15.03\%$ | $5.90\%$ | $13.67\%$
YOLOX-S | $640\times 640$ | 9.0 | 26.8 | $19.80\%$ | $34.00\%$ | $16.86\%$ | $29.57\%$
YOLOX-Tiny | $448\times 448$ | 5.06 | 7.43 | $14.37\%$ | $25.33\%$ | $13.23\%$ | $22.67\%$
YOLOX-Nano | $448\times 448$ | 0.91 | 1.21 | $9.34\%$ | $18.06\%$ | $9.55\%$ | $17.00\%$
YOLOX-S+P4 | $640\times 640$ | 9.69 | 60.14 | $21.71\%$ | $38.59\%$ | $18.26\%$ | $32.17\%$
YOLOX-Tiny+P4 | $448\times 448$ | 5.45 | 16.67 | $16.46\%$ | $29.97\%$ | $13.87\%$ | $24.97\%$
YOLOX-Nano+P4 | $448\times 448$ | 0.94 | 1.98 | $12.47\%$ | $22.01\%$ | $9.57\%$ | $18.49\%$
FasterX-S | $640\times 640$ | 5.19 | 19.20 | $22.37\%$ | $39.71\%$ | $19.29\%$ | $32.62\%$
FasterX-Tiny | $448\times 448$ | 2.93 | 5.39 | $17.26\%$ | $30.32\%$ | $14.65\%$ | $27.62\%$
FasterX-Nano | $448\times 448$ | 0.70 | 1.43 | $11.95\%$ | $21.02\%$ | $10.62\%$ | $19.39\%$
FasterX-S | $832\times 838$ | 5.19 | 32.69 | $25.54\%$ | $46.69\%$ | $22.45\%$ | $37.13\%$
FasterX-Tiny | $832\times 838$ | 2.93 | 18.57 | $22.89\%$ | $44.75\%$ | $19.45\%$ | $33.77\%$
FasterX-Nano | $832\times 838$ | 0.70 | 4.92 | $16.44\%$ | $31.32\%$ | $14.31\%$ | $27.22\%$
Table III: Detection accuracy comparison for small, medium, and large objects on the VisDrone2021 dataset.
Model | Size | Params(M) | GFLOPs | $AP_{S}(val)$ | $AP_{M}(val)$ | $AP_{L}(val)$ | $AP_{S}(test)$ | $AP_{M}(test)$ | $AP_{L}(test)$
---|---|---|---|---|---|---|---|---|---
YOLOX-S | 640 | 9.0 | 26.8 | $12.4\%$ | $27.64\%$ | $11.7\%$ | $8.8\%$ | $25.02\%$ | $11.73\%$
YOLOX-Tiny | 448 | 5.06 | 7.43 | $7.38\%$ | $23.07\%$ | $11.82\%$ | $5.77\%$ | $20.75\%$ | $10.66\%$
YOLOX-Nano | 448 | 0.91 | 1.21 | $5.26\%$ | $17.00\%$ | $6.94\%$ | $3.66\%$ | $17.27\%$ | $6.18\%$
FasterX-S | 640 | 5.19 | 19.20 | $17.29\%$ | $29.73\%$ | $12.23\%$ | $11.40\%$ | $27.20\%$ | $11.13\%$
FasterX-Tiny | 448 | 2.93 | 5.39 | $10.63\%$ | $26.36\%$ | $14.31\%$ | $6.87\%$ | $23.81\%$ | $10.82\%$
FasterX-Nano | 448 | 0.70 | 1.43 | $7.73\%$ | $19.43\%$ | $6.86\%$ | $5.17\%$ | $19.26\%$ | $6.32\%$
## IV EXPERIMENTS
In this section, the VisDrone2021 and VOC2012 datasets are used to verify the validity of the proposed FasterX. We use NVIDIA Jetson NX and Jetson Nano as edge GPU devices to evaluate the speed of FasterX, and report mAP (averaged over the 10 IoU thresholds [0.5:0.05:0.95]) and AP50 to evaluate accuracy. We also report FPS and latency (ms) to show inference efficiency. Since edge GPU devices may show timing fluctuations, we run 10 rounds of inference for each model and average the results; each round processes 64 test images.
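A minimal sketch of this averaging protocol is shown below. run_inference is a hypothetical callable standing in for the deployed model, and reporting the mean per-image latency in milliseconds is an assumption about how the averages are formed.

```python
import time

def measure_latency(run_inference, images, rounds=10):
    # Average per-image wall-clock latency (ms) over several rounds.
    per_round_ms = []
    for _ in range(rounds):
        start = time.perf_counter()
        for img in images:
            run_inference(img)
        per_round_ms.append((time.perf_counter() - start) / len(images) * 1e3)
    return sum(per_round_ms) / len(per_round_ms)
```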
### IV-A Implementation Details
All experiments were implemented in PyTorch 1.8.0 and trained and tested on an NVIDIA RTX TITAN GPU. For evaluation on the edge computing platforms (Jetson NX and Jetson Nano), we use the commonly adopted ONNX format as the intermediate representation and TensorRT for serialized acceleration (weights stored in FP16). The Jetson NX and Nano environments are JetPack 4.6 and JetPack 4.5.1, with TensorRT 8.0.1 and 7.1.3, respectively. Note that the Jetson NX is computationally more powerful than the Jetson Nano, and the Nano shows greater speed fluctuation than the NX under the same lightweight model.
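A minimal sketch of this deployment path is shown below; the stand-in model, file name, opset version, and the trtexec invocation are illustrative assumptions rather than the paper's exact pipeline.

```python
import torch
import torch.nn as nn

# Stand-in for the trained FasterX model; replace with the real network.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.SiLU())
model.eval()

dummy = torch.randn(1, 3, 448, 448)  # 640x640 for the S model
torch.onnx.export(model, dummy, "fasterx.onnx", opset_version=11,
                  input_names=["images"], output_names=["preds"])
# On the Jetson, an FP16 TensorRT engine can then be built from the
# ONNX file, e.g. `trtexec --onnx=fasterx.onnx --fp16`.
```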
For a fair comparison, all models are trained from scratch without loading pre-trained weights. For the training strategy, like YOLOX, we use a cosine-annealing learning rate with warm-up. Considering the limited extraction ability of the lightweight models, we use only mosaic augmentation, weakening the overall data augmentation. The SGD optimizer is used with momentum set to 0.9. Note that the default input size of the YOLOX-Tiny and YOLOX-Nano models is $416\times 416$, which yields an odd feature-map size after 32× down-sampling and blurs the scale boundaries in prediction. Therefore, in the comparative experiments, the input size of the YOLOX-Tiny and YOLOX-Nano models is changed to $448\times 448$, while the YOLOX-S model remains at $640\times 640$.
Table IV: Ablation experiments on edge GPU devices.
Model(Size) | Feature Aggr.Path | Head Deco.Mode | Attention | CIoU &FL | Aux | Params (M) | GFLOPs | FPS (Nano / NX) | Latency, ms (Nano / NX) | Val (mAP / AP50)
---|---|---|---|---|---|---|---|---|---|---
YOLOX-S | PANet | Conv | - | - | - | 9.0 | 26.8 | 19.04 68.02 | 52.52 14.70 | $19.82\%$ $34.00\%$
S+4 head | PANet | Conv | - | - | - | 9.69 | 60.14 | 4.19 36.68 | 238.22 27.26 | 21.71% 38.59%
YOLOX-Tiny | PANet | Conv | - | - | - | 5.06 | 7.43 | 25.71 93.63 | 38.89 10.68 | $14.37\%$ $25.33\%$
Tiny+4 head | PANet | Conv | - | - | - | 5.45 | 16.67 | 12.52 77.57 | 79.85 12.89 | $16.46\%$ 29.97%
YOLOX-Nano | PANet | DS Conv | - | - | - | 0.91 | 1.21 | 36.27 112.23 | 27.57 8.91 | $9.34\%$ $18.06\%$
Nano+4 head | PANet | DS Conv | - | - | - | 0.94 | 1.98 | 18.09 91.24 | 55.25 10.96 | 12.47% 22.01%
FasterX-S ($640\times 640$) | PANet | DS Conv | - | - | - | 7.61 | 24.88 | 11.10 52.00 | 90.09 19.23 | $21.34\%$ $37.75\%$
SlimFPN | DS Conv | - | - | - | 4.96 | 22.87 | 11.29 54.02 | 88.56 18.51 | $20.68\%$ $36.52\%$
SlimFPN | PixSF | - | - | - | 7.25 | 27.99 | 6.47 42.88 | 154.51 23.32 | 21.93% $37.90\%$
SlimFPN | DS+PixSF | - | - | - | 5.19 | 19.20 | 12.95 59.88 | 77.21 16.70 | $21.30\%$ $37.09\%$
SlimFPN | DS+PixSF | $\surd$ | - | - | 5.19 | 19.20 | 11.68 46.62 | 85.81 21.45 | $21.46\%$ $37.23\%$
SlimFPN | DS+PixSF | $\surd$ | $\surd$ | - | 5.19 | 19.20 | 11.68 46.62 | 85.81 21.45 | $21.84\%$ $38.62\%$
SlimFPN | DS+PixSF | $\surd$ | $\surd$ | $\surd$ | 5.19 | 19.20 | 11.68 46.62 | 85.81 21.45 | 22.37% 39.71%
FasterX-Tiny ($448\times 448$) | PANet | DS Conv | - | - | - | 4.29 | 6.99 | 14.67 72.15 | 68.12 13.86 | $15.43\%$ $28.64\%$
SlimFPN | DS Conv | - | - | - | 2.80 | 6.44 | 15.37 74.18 | 65.03 13.48 | $15.12\%$ $27.99\%$
SlimFPN | PixSF | - | - | - | 4.09 | 7.81 | 17.01 82.71 | 58.77 12.09 | 16.60% 28.90%
SlimFPN | DS+PixSF | - | - | - | 2.93 | 5.39 | 17.18 81.83 | 58.21 12.22 | $15.93\%$ $27.96\%$
SlimFPN | DS+PixSF | $\surd$ | - | - | 2.93 | 5.39 | 16.24 76.33 | 61.54 13.10 | $16.21\%$ $28.91\%$
SlimFPN | DS+PixSF | $\surd$ | $\surd$ | - | 2.93 | 5.39 | 16.24 76.33 | 61.54 13.10 | $16.80\%$ $29.47\%$
SlimFPN | DS+PixSF | $\surd$ | $\surd$ | $\surd$ | 2.93 | 5.39 | 16.24 76.33 | 61.54 13.10 | 17.26% 30.32%
FasterX-Nano ($448\times 448$) | SlimFPN | DS Conv | - | - | - | 0.63 | 1.92 | 21.58 96.61 | 46.32 10.35 | $10.73\%$ $21.13\%$
SlimFPN | PixSF | - | - | - | 1.21 | 2.49 | 21.93 129.03 | 45.58 7.75 | $11.47\%$ 21.91%
SlimFPN | DS+PixSF | - | - | - | 0.70 | 1.43 | 24.26 102.98 | 41.22 9.71 | $10.89\%$ $21.32\%$
SlimFPN | DS+PixSF | $\surd$ | - | - | 0.70 | 1.43 | 23.47 102.14 | 42.60 9.79 | $11.14\%$ $20.70\%$
SlimFPN | DS+PixSF | $\surd$ | $\surd$ | - | 0.70 | 1.43 | 23.47 102.14 | 42.60 9.79 | $11.33\%$ $20.42\%$
SlimFPN | DS+PixSF | $\surd$ | $\surd$ | $\surd$ | 0.70 | 1.43 | 23.47 102.14 | 42.60 9.79 | 11.95% $21.02\%$
### IV-B Performance Comparison
To demonstrate the performance of FasterX, we conducted a series of experiments on the VisDrone2021 dataset against other state-of-the-art one-stage detectors, including SlimYOLOv3, YOLOv5, and YOLOX. The results are shown in Table II; the numbers for the compared methods are taken from the corresponding original papers.
From Table II, FasterX-Tiny reduces parameters by 43.1% and GFLOPs by 29.4% while achieving 1.69% higher mAP and 8.65% higher AP50 than SlimYOLOv3-SPP3-95 [22]. FasterX-S achieves 4.34% higher mAP and 10.59% higher AP50 than SlimYOLOv3-SPP3-95 with only 1.8% more parameters. YOLOX achieves higher accuracy than YOLOv5 across the models, with lower parameters and GFLOPs for the Nano variant. In particular, with the added P4 head, the 4-head YOLOX series improves detection accuracy considerably, but its parameters and GFLOPs also increase significantly. As shown in Table II and Fig. 1, the FasterX series (S, Tiny, and Nano) provide not only much better mAP and AP50 but also lower parameters and GFLOPs than the YOLOv5, YOLOX, and 4-head YOLOX series, thanks to the better feature expression of the lightweight PixSF head and the online distillation mechanism. These results suggest that the proposed FasterX with the lightweight PixSF head may be more powerful and effective than a 4-head YOLOX.
To illustrate the efficiency of FasterX in detecting small objects, we evaluate detection accuracy on the validation and test sets of VisDrone2021. The results are shown in Table III: the FasterX series clearly outperforms the YOLOX series.
From Table III, the FasterX series achieves its largest accuracy gains on small objects, while the improvement on large objects is modest. For example, the detection accuracy of FasterX-S on small, medium, and large objects increases by 4.89%, 2.09%, and 0.53%, respectively. These results show the efficiency of the lightweight PixSF head for small objects.
Table V: Comparison of different Head and Feature Aggregation designs on the VOC2012 dataset.
Model | Feature Aggr.Path | Head Deco.Mode | Params (M) | GFLOPs | Val (mAP / AP50)
---|---|---|---|---|---
YOLOX-S | PANet | Conv | 8.95 | 26.68 | $42.61\%$ $64.52\%$
PANet | DS Conv | 7.39 | 17.96 | $39.12\%$ $61.82\%$
PANet | PixSF | 7.00 | 16.54 | $42.53\%$ $63.81\%$
PANet | DS+PixSF | 5.44 | 14.37 | $40.20\%$ $63.73\%$
SlimFPN | Conv | 6.34 | 23.04 | $38.97\%$ $63.50\%$
SlimFPN | DS Conv | 4.79 | 14.32 | $38.42\%$ $63.43\%$
SlimFPN | PixSF | 6.52 | 15.59 | $39.42\%$ $63.44\%$
SlimFPN | DS+PixSF | 4.96 | 13.41 | $39.00\%$ $63.13\%$
YOLOX-Tiny | PANet | Conv | 5.04 | 7.43 | $34.92\%$ $58.28\%$
PANet | DS Conv | 4.17 | 5.04 | $34.67\%$ $57.13\%$
PANet | PixSF | 3.95 | 4.64 | $35.43\%$ $57.91\%$
PANet | DS+PixSF | 3.07 | 4.04 | $34.88\%$ $57.60\%$
SlimFPN | Conv | 3.57 | 6.43 | $33.53\%$ $56.87\%$
SlimFPN | DS Conv | 2.70 | 4.04 | $32.91\%$ $56.42\%$
SlimFPN | PixSF | 3.66 | 4.36 | $34.19\%$ $57.75\%$
SlimFPN | DS+PixSF | 2.78 | 3.76 | $33.44\%$ $56.83\%$
YOLOX-Nano | PANet | DS Conv | 0.90 | 1.21 | $25.30\%$ $47.31\%$
PANet | PixSF | 1.06 | 1.19 | $27.37\%$ $48.68\%$
PANet | DS+PixSF | 0.68 | 0.92 | $26.78\%$ $48.03\%$
SlimFPN | DS Conv | 0.60 | 1.03 | $24.68\%$ $47.21\%$
SlimFPN | PixSF | 1.02 | 1.15 | $26.42\%$ $48.49\%$
SlimFPN | DS+PixSF | 0.64 | 0.89 | $25.97\%$ $47.38\%$
### IV-C Ablation Study
To reveal the impact of the different components of FasterX (SlimFPN, PixSF head, attention mechanism, improved SimOTA, and online distillation) on accuracy, computational complexity, and latency, we conducted an ablation study. The results are shown in Table IV: the proposed FasterX series not only increases mAP and AP50 but also improves inference speed and reduces network size. We discuss the ablation results in detail below.
PixSF head and SlimFPN: We first explore the effect of the 4-head structure on
YOLOX. Taking 4-head YOLOX-S as an example, as shown in Table IV (row 2), mAP
and AP50 increase significantly, by 1.89% and 4.59% respectively, which
verifies the efficiency of the 4-head structure. However, adding a detection
head raises the GFLOPs from 26.8 to 60.14 and the latency from 52.52 ms to
238.22 ms on Jetson Nano and from 14.70 ms to 27.26 ms on Jetson NX. To
alleviate this, replacing the convolution operator with DS Conv in the head
layer is a feasible lightweight design [23, 15]. As shown in Table IV
(FasterX-S part, row 1 and FasterX-Tiny part, row 1), replacing the general
convolution with DS Conv decreases mAP and AP50 only slightly while reducing
latency significantly. These results confirm the efficiency of the DS Conv
operator.
To demonstrate the performance of our PixSF head, we compare the general
convolution operator, the DS Conv operator, and our PixSF operator. To show
that the PixSF head is flexibly embeddable, we also embed DS Conv into the
PixSF head to obtain an even lighter head layer. As shown in Table IV
(FasterX-S part, rows 2 and 4; FasterX-Tiny part, rows 2 and 4; FasterX-Nano
part, rows 1 and 3), our DS+PixSF method offers a better trade-off between
latency and detection accuracy than the general convolution operator and the
DS operator. Taking FasterX-S as an example, compared to the 4-head structure
with the DS operator, mAP and AP50 decrease by only 0.62% and 0.57%
respectively, while the latency decreases by over $12.8\%$ and $9.8\%$ on
Jetson Nano and NX, respectively. These results illustrate that the proposed
PixSF head largely preserves detection accuracy while improving inference
speed.
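To make the head design above concrete, the following PyTorch sketch combines
a PixelUnshuffle/PixelShuffle position encode-decode pair (in the spirit of
sub-pixel convolution [24]) with a depthwise separable convolution. The
`PixSFBlock` module, its channel widths and activation choices are
illustrative assumptions, not the authors' exact layer definition.

```python
# A minimal sketch of a sub-pixel ("PixSF"-style) head block, assuming the
# design described above: fold spatial detail into channels (position encode),
# process with a cheap depthwise-separable convolution, then restore
# resolution with PixelShuffle (position decode).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.dw = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        self.pw = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()
    def forward(self, x):
        return self.act(self.bn(self.pw(self.dw(x))))

class PixSFBlock(nn.Module):
    """Encode: space-to-depth; process on the smaller map; decode: depth-to-space."""
    def __init__(self, channels, r=2):
        super().__init__()
        self.encode = nn.PixelUnshuffle(r)             # (C,H,W) -> (C*r^2, H/r, W/r)
        self.body = DepthwiseSeparableConv(channels * r * r, channels * r * r)
        self.decode = nn.PixelShuffle(r)               # back to (C,H,W)
    def forward(self, x):
        return self.decode(self.body(self.encode(x)))

x = torch.randn(1, 64, 80, 80)   # e.g. a P3-level feature map
print(PixSFBlock(64)(x).shape)   # torch.Size([1, 64, 80, 80])
```

Processing on the downsampled map is what makes the per-layer cost cheap,
which is consistent with the latency savings reported in Tables IV and V.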
Next, we illustrate the trade-off between accuracy and speed in the
feature-aggregation part. As shown in Table IV (FasterX-S part, row 2;
FasterX-Tiny part, row 2; and FasterX-Nano part, row 1), taking FasterX-S as
an example, compared with PANet (FasterX-S part, row 1), the mAP and AP50 of
the SlimFPN method decrease by 0.66% and 1.23% respectively, while the
parameters decrease from 7.61 M to 4.96 M and the GFLOPs from 24.88 to 22.87.
These results imply that SlimFPN reduces the size of the network while largely
keeping the detection accuracy. This is because the retained top-down
structure ensures that deep semantic information is transferred to shallow
feature maps, which provides semantic support for small objects.
To further validate the generality of the PixSF head, we performed experiments
on the VOC2012 dataset, which contains a larger proportion of large objects
than the UAV dataset. The experimental results in Table V show that the
combination of DS Conv and the PixSF head can further improve the trade-off
between model capacity and accuracy.
Attention mechanism: To enhance the decoupling performance of the head layer,
we adopt CBAM to improve its feature representation. As shown in Table IV
(FasterX-S part, row 5; FasterX-Tiny part, row 5; and FasterX-Nano part,
row 4), CBAM has a positive impact on accuracy, because it not only attends
over channels but also extracts the region of interest using a spatial
probability map.
Improved SimOTA: To verify the effectiveness of the proposed dynamic label
assignment strategy, we compare the improved SimOTA with the basic label
assignment mechanism by replacing the original SimOTA with the improved
SimOTA, as shown in Table IV. The experimental results show that the improved
SimOTA achieves good results on all three models. Taking FasterX-S as an
example (FasterX-S part, row 6), mAP and AP50 increase by 0.42% and 1.39%
respectively, without additional computing resources.
Auxiliary head: In addition, to illustrate the efficiency of the auxiliary
head for online distillation, we explore its effect on FasterX. To improve the
extraction ability of the auxiliary head, we use the YOLOX-X head, which has a
large number of parameters, for training supervision. During training we adopt
a network preheating strategy: first, the PixSF head and the auxiliary head
are trained jointly for 50 epochs; then, the label results of the auxiliary
head are used to guide the PixSF head, as sketched below. It can be observed
from Table IV that mAP and AP50 improve significantly after using the
auxiliary head for online distillation.
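The following PyTorch sketch illustrates this two-stage schedule under stated
assumptions: toy convolutional stand-ins replace the actual PixSF and YOLOX-X
heads, and an MSE loss stands in for both the detection loss and the
distillation loss, neither of which the paper specifies at code level.

```python
# A schematic sketch of the online-distillation schedule: 50 warm-up epochs of
# joint training, then the auxiliary (teacher) head's detached outputs guide
# the lightweight PixSF (student) head. All modules/losses are placeholders.
import torch
import torch.nn as nn

feats_dim, out_dim = 64, 16
student = nn.Conv2d(feats_dim, out_dim, 1)            # stands in for the PixSF head
teacher = nn.Sequential(nn.Conv2d(feats_dim, 128, 3, padding=1),
                        nn.SiLU(), nn.Conv2d(128, out_dim, 1))  # heavy aux head
opt = torch.optim.SGD(list(student.parameters()) + list(teacher.parameters()), lr=1e-3)
det_loss = nn.MSELoss()                               # placeholder for the detection loss

WARMUP_EPOCHS = 50
for epoch in range(60):
    feats = torch.randn(4, feats_dim, 20, 20)         # stand-in neck features
    target = torch.randn(4, out_dim, 20, 20)          # stand-in regression targets
    s_out, t_out = student(feats), teacher(feats)
    if epoch < WARMUP_EPOCHS:
        # Stage 1 ("preheating"): train both heads jointly against ground truth.
        loss = det_loss(s_out, target) + det_loss(t_out, target)
    else:
        # Stage 2: the teacher's detached predictions guide the student.
        loss = det_loss(s_out, target) + det_loss(s_out, t_out.detach())
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Since the auxiliary head is used only at training time, it adds no inference
cost, which is why the distillation gains come for free at deployment.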
Table VI: Evaluation of backbone accuracy and latency on the VisDrone2021 dataset.

Model (Size) | Params (M) | GFLOPs | Backbone | AP50 (val) | AP50 (test) | FPS (Nano) | FPS (NX) | Latency (Nano) | Latency (NX)
---|---|---|---|---|---|---|---|---|---
YOLOX-S +P4 ($640\times 640$) | 7.25 | 27.99 | CSPDarknet53 | 37.92% | 31.63% | 6.47 | 42.88 | 154.51 ms | 23.32 ms
 | 4.79 | 21.66 | MblNetV2 (width 1) | 35.87% | 29.55% | 4.82 | 44.66 | 207.09 ms | 22.39 ms
 | 4.01 | 20.28 | MblNetV2 (width 0.75) | 35.43% | 28.97% | 5.02 | 45.87 | 198.81 ms | 21.80 ms
 | 3.44 | 18.85 | MblNetV2 (width 0.5) | 34.61% | 28.22% | 5.70 | 49.97 | 175.29 ms | 20.01 ms
 | 7.19 | 21.03 | GhostNet (width 1.3) | 36.12% | 30.12% | 4.68 | 34.28 | 213.25 ms | 29.17 ms
 | 5.48 | 19.43 | GhostNet (width 1) | 36.38% | 29.66% | 5.26 | 38.13 | 189.90 ms | 26.22 ms
 | 3.59 | 17.54 | GhostNet (width 0.5) | 31.40% | 25.30% | 6.61 | 46.77 | 151.16 ms | 21.38 ms
 | 5.94 | 23.06 | Efficientnet-Lite0 | 38.98% | 32.52% | 4.55 | 44.80 | 219.53 ms | 22.32 ms
 | 6.71 | 25.04 | Efficientnet-Lite1 | 38.56% | 31.98% | 3.76 | 39.41 | 265.85 ms | 25.37 ms
 | 7.35 | 26.25 | Efficientnet-Lite2 | 39.11% | 33.31% | 3.65 | 38.18 | 273.54 ms | 26.19 ms
 | 9.42 | 30.84 | Efficientnet-Lite3 | 39.32% | 33.82% | 3.05 | 33.20 | 326.83 ms | 30.12 ms
 | 14.16 | 39.02 | Efficientnet-Lite4 | 40.16% | 34.11% | 2.35 | 25.89 | 424.98 ms | 38.62 ms
YOLOX-Tiny +P4 ($448\times 448$) | 4.09 | 7.81 | CSPDarknet53 | 28.90% | 25.19% | 17.01 | 82.71 | 58.77 ms | 12.09 ms
 | 3.51 | 7.03 | MblNetV2 (width 1) | 28.10% | 23.44% | 11.53 | 81.69 | 86.71 ms | 12.24 ms
 | 2.73 | 6.36 | MblNetV2 (width 0.75) | 28.32% | 24.13% | 14.35 | 81.36 | 79.64 ms | 12.29 ms
 | 2.16 | 5.67 | MblNetV2 (width 0.5) | 27.39% | 22.85% | 14.52 | 81.30 | 68.83 ms | 12.30 ms
 | 5.91 | 6.72 | GhostNet (width 1.3) | 28.08% | 24.44% | 10.46 | 66.40 | 95.55 ms | 15.06 ms
 | 4.20 | 5.94 | GhostNet (width 1) | 27.39% | 23.56% | 11.89 | 72.72 | 84.05 ms | 13.75 ms
 | 2.32 | 5.03 | GhostNet (width 0.5) | 24.24% | 20.66% | 15.64 | 68.44 | 63.93 ms | 14.61 ms
 | 4.66 | 7.72 | Efficientnet-Lite0 | 28.32% | 24.33% | 10.16 | 78.18 | 98.40 ms | 12.79 ms
 | 5.42 | 8.69 | Efficientnet-Lite1 | 28.36% | 24.27% | 8.37 | 73.74 | 119.43 ms | 13.56 ms
 | 6.06 | 9.28 | Efficientnet-Lite2 | 29.21% | 24.90% | 7.83 | 73.47 | 127.59 ms | 13.61 ms
 | 8.13 | 11.52 | Efficientnet-Lite3 | 30.60% | 24.81% | 6.56 | 64.26 | 152.29 ms | 15.56 ms
 | 12.87 | 15.53 | Efficientnet-Lite4 | 31.24% | 26.79% | 4.92 | 51.38 | 203.03 ms | 19.46 ms
YOLOX-Nano +P4 ($448\times 448$) | 0.69 | 1.43 | CSPDarknet53 | 20.32% | 18.45% | 24.26 | 102.98 | 41.22 ms | 9.71 ms
 | 2.05 | 3.13 | MblNetV2 (width 1) | 24.58% | 21.02% | 13.31 | 91.40 | 75.08 ms | 10.94 ms
 | 1.27 | 2.47 | MblNetV2 (width 0.75) | 23.78% | 20.77% | 14.55 | 93.10 | 68.69 ms | 10.74 ms
 | 0.70 | 1.78 | MblNetV2 (width 0.5) | 22.10% | 20.13% | 18.04 | 93.80 | 55.43 ms | 10.66 ms
 | 4.45 | 2.81 | GhostNet (width 1.3) | 24.74% | 21.55% | 12.07 | 67.15 | 82.81 ms | 14.89 ms
 | 2.74 | 2.04 | GhostNet (width 1) | 23.59% | 20.00% | 14.45 | 67.38 | 69.19 ms | 14.84 ms
 | 0.87 | 1.14 | GhostNet (width 0.5) | 19.39% | 17.33% | 19.90 | 84.74 | 50.23 ms | 11.80 ms
 | 3.19 | 3.82 | Efficientnet-Lite0 | 24.98% | 22.71% | 11.74 | 77.57 | 85.16 ms | 12.89 ms
 | 3.96 | 4.79 | Efficientnet-Lite1 | 24.30% | 22.85% | 9.59 | 73.26 | 104.24 ms | 13.65 ms
 | 4.60 | 5.37 | Efficientnet-Lite2 | 25.10% | 23.12% | 8.75 | 69.10 | 114.19 ms | 14.47 ms
 | 6.66 | 7.62 | Efficientnet-Lite3 | 25.35% | 23.33% | 7.08 | 64.59 | 141.20 ms | 15.48 ms
 | 11.40 | 11.62 | Efficientnet-Lite4 | 25.84% | 23.93% | 5.19 | 59.80 | 192.46 ms | 16.72 ms
Backbone: In this paper, we employ CSPDarknet53 as the backbone of FasterX.
Instead of theoretically modeling the relation between backbones and inference
speed, we directly report the FPS and latency of popular lightweight backbones
(MobileNetV2, GhostNet and Efficientnet-Lite) under the 4-head structure on
Jetson devices. As shown in Table VI, Efficientnet-Lite4 achieves the best
detection accuracy but, at the same time, the highest latency among all
backbones. The CSPDarknet53 backbone operates at high detection accuracy
without sacrificing computing time; hence, CSPDarknet53 strikes a balance
between detection accuracy and inference speed.
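For reference, latency and FPS numbers of the kind reported in Table VI can be
collected with a simple timing harness like the sketch below; the warm-up and
iteration counts are illustrative, since the paper does not specify its
benchmarking protocol.

```python
# A minimal latency-measurement sketch, assuming a PyTorch model on a
# Jetson-class CUDA device (falls back to CPU if CUDA is unavailable).
import time
import torch

@torch.no_grad()
def measure_latency(model, input_size=(1, 3, 640, 640), warmup=20, iters=100):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.eval().to(device)
    x = torch.randn(*input_size, device=device)
    for _ in range(warmup):          # warm up kernels and caches
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()     # ensure queued kernels have finished
    t0 = time.perf_counter()
    for _ in range(iters):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    latency_ms = (time.perf_counter() - t0) / iters * 1e3
    return latency_ms, 1e3 / latency_ms   # (ms per image, FPS)

# Usage with a toy stand-in; in practice pass a YOLOX/FasterX model instead.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.SiLU())
print(measure_latency(model))
```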
## V Conclusion
In this paper, we propose FasterX, a novel lightweight object detector for UAV
detection on edge GPU devices. The most important part of FasterX is the
lightweight PixSF head, whose position encoder-decoder improves the detection
accuracy of small objects through position embedding in the head layer. The
PixSF head can further be embedded in depthwise separable convolution to
obtain an even lighter head. Furthermore, SlimFPN is developed to boost
inference speed. In addition, an attention mechanism, a label assignment
strategy and an auxiliary head are introduced to further improve detection
accuracy. The efficacy of FasterX is validated experimentally on the
VisDrone2021 dataset, where it achieves a favorable trade-off between accuracy
and speed. Extensive experiments demonstrate that our method outperforms
state-of-the-art detection methods in terms of accuracy and speed.
A disadvantage of dense detectors in UAV applications is that a large number
of target candidates leads to redundant computation, which typically requires
NMS to remove. In future work, we will focus on lightweight designs based on
sparse techniques for dense detectors.
## References
* [1] K. Li, W. Ni, E. Tovar, and M. Guizani, “Joint flight cruise control and data collection in uav-aided internet of things: An onboard deep reinforcement learning approach,” _IEEE Internet of Things Journal_ , vol. 8, no. 12, pp. 9787–9799, 2021.
* [2] X. Huang, X. Yang, Q. Chen, and J. Zhang, “Task offloading optimization for uav-assisted fog-enabled internet of things networks,” _IEEE Internet of Things Journal_ , vol. PP, no. 99, pp. 1–1, 2021.
* [3] N. Audebert, B. Le Saux, and S. Lefèvre, “Beyond rgb: Very high resolution urban remote sensing with multimodal deep networks,” _ISPRS Journal of Photogrammetry and Remote Sensing_ , vol. 140, pp. 20–32, 2018.
* [4] Z. Shao, C. Li, D. Li, O. Altan, L. Zhang, and L. Ding, “An accurate matching method for projecting vector data into surveillance video to monitor and protect cultivated land,” _ISPRS Int. J. Geo Inf._ , vol. 9, p. 448, 2020.
* [5] B. Kellenberger, D. Marcos, and D. Tuia, “Detecting mammals in uav images: Best practices to address a substantially imbalanced dataset with deep learning,” _Remote Sensing of Environment_ , vol. 216, pp. 139–153, 2018.
* [6] B. Kellenberger, M. Volpi, and D. Tuia, “Fast animal detection in uav images using convolutional neural networks,” in _2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_ , 2017, pp. 866–869.
* [7] R. Girshick, “Fast r-cnn,” in _2015 IEEE International Conference on Computer Vision (ICCV)_ , 2015, pp. 1440–1448.
* [8] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 39, no. 6, pp. 1137–1149, 2017.
* [9] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in _2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2017, pp. 936–944.
* [10] X. Long, K. Deng, G. Wang, Y. Zhang, Q. Dang, Y. Gao, H. Shen, J. Ren, S. Han, E. Ding, and S. Wen, “Pp-yolo: An effective and efficient implementation of object detector,” _ArXiv_ , vol. abs/2007.12099, 2020.
* [11] P. Ganesh, Y. Chen, Y. Yang, D. Chen, and M. Winslett, “Yolo-ret: Towards high accuracy real-time object detection on edge gpus,” in _2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)_ , 2022, pp. 1311–1321.
* [12] C.-Y. Wang, A. Bochkovskiy, and H.-Y. M. Liao, “Scaled-yolov4: Scaling cross stage partial network,” in _2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2021, pp. 13 024–13 033.
* [13] Z. Tian, C. Shen, H. Chen, and T. He, “Fcos: Fully convolutional one-stage object detection,” in _2019 IEEE/CVF International Conference on Computer Vision (ICCV)_ , 2019, pp. 9626–9635.
* [14] C. Zhu, Y. He, and M. Savvides, “Feature selective anchor-free module for single-shot object detection,” in _2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2019, pp. 840–849.
* [15] RangiLyu, “Nanodet-plus: Super fast and high accuracy lightweight anchor-free object detection model.” https://github.com/RangiLyu/nanodet, 2021.
* [16] Z. Ge, S. Liu, F. Wang, Z. Li, and J. Sun, “Yolox: Exceeding yolo series in 2021,” _arXiv preprint arXiv:2107.08430_ , 2021.
* [17] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” _european conference on computer vision_ , 2014.
* [18] M. Everingham, L. V. Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes (voc) challenge,” _International Journal of Computer Vision_ , vol. 88, pp. 303–308, September 2009.
* [19] X. Zhu, S. Lyu, X. Wang, and Q. Zhao, “Tph-yolov5: Improved yolov5 based on transformer prediction head for object detection on drone-captured scenarios,” in _2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)_ , 2021, pp. 2778–2788.
* [20] P. Zhu, L. Wen, D. Du, X. Bian, H. Fan, Q. Hu, and H. Ling, “Detection and tracking meet drones challenge,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , pp. 1–1, 2021.
* [21] J. Deng, Z. Shi, and C. Zhuo, “Energy-efficient real-time uav object detection on embedded platforms,” _IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems_ , vol. 39, no. 10, pp. 3123–3127, 2020.
* [22] P. Zhang, Y. Zhong, and X. Li, “Slimyolov3: Narrower, faster and better for real-time uav applications,” _2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)_ , pp. 37–45, 2019.
* [23] G. Yu, Q. Chang, W. Lv, C. Xu, C. Cui, W. Ji, Q. Dang, K. Deng, G. Wang, Y. Du, B. Lai, Q. Liu, X. Hu, D. Yu, and Y. Ma, “Pp-picodet: A better real-time object detector on mobile devices,” _ArXiv_ , 2021.
* [24] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2016, pp. 1874–1883.
* [25] K. Han, Y. Wang, Q. Tian, J. Guo, C. Xu, and C. Xu, “Ghostnet: More features from cheap operations,” in _2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2020, pp. 1577–1586.
* [26] Z. Ge, S. Liu, Z. Li, O. Yoshie, and J. Sun, “Ota: Optimal transport assignment for object detection,” in _2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2021, pp. 303–312.
* [27] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” in _2017 IEEE International Conference on Computer Vision (ICCV)_ , 2017, pp. 2999–3007.
* [28] Z. Zheng, P. Wang, W. Liu, J. Li, R. Ye, and D. Ren, “Distance-iou loss: Faster and better learning for bounding box regression,” in _AAAI_ , 2020.
* [29] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” _arXiv preprint arXiv:1409.1556_ , 2014.
* [30] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2016, pp. 770–778.
* [31] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in _2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2017, pp. 2261–2269.
* [32] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, “Swin transformer: Hierarchical vision transformer using shifted windows,” in _2021 IEEE/CVF International Conference on Computer Vision (ICCV)_ , 2021, pp. 9992–10 002.
* [33] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “Mobilenets: Efficient convolutional neural networks for mobile vision applications,” _ArXiv_ , vol. abs/1704.04861, 2017.
* [34] X. Zhang, X. Zhou, M. Lin, and J. Sun, “Shufflenet: An extremely efficient convolutional neural network for mobile devices,” in _2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 6848–6856.
* [35] X. Ding, X. Zhang, N. Ma, J. Han, G. Ding, and J. Sun, “Repvgg: Making vgg-style convnets great again,” in _2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2021, pp. 13 728–13 737.
* [36] S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, “Path aggregation network for instance segmentation,” in _2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 8759–8768.
* [37] G. Ghiasi, T.-Y. Lin, and Q. V. Le, “Nas-fpn: Learning scalable feature pyramid architecture for object detection,” in _2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2019, pp. 7029–7038.
* [38] M. Tan, R. Pang, and Q. Le, “Efficientdet: Scalable and efficient object detection,” 06 2020, pp. 10 778–10 787.
* [39] S. Liu, D. Huang, and Y. Wang, “Learning spatial fusion for single-shot object detection,” _ArXiv_ , vol. abs/1911.09516, 2019.
* [40] Q. Zhao, T. Sheng, Y. Wang, Z. Tang, Y. Chen, L. Cai, and H. Ling, “M2det: A single-shot object detector based on multi-level feature pyramid network,” _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 33, pp. 9259–9266, 07 2019.
* [41] Q. Chen, Y. Wang, T. Yang, X. Zhang, J. Cheng, and J. Sun, “You only look one-level feature,” in _2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2021, pp. 13 034–13 043.
* [42] S. Xu, X. Wang, W. Lv, Q. Chang, C. Cui, K. Deng, G. Wang, Q. Dang, S. Wei, Y. Du, and B. Lai, “Pp-yoloe: An evolved version of yolo,” _ArXiv_ , vol. abs/2203.16250, 2022.
* [43] C.-Y. Wang, A. Bochkovskiy, and H.-Y. M. Liao, “YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors,” _arXiv preprint arXiv:2207.02696_ , 2022.
* [44] Y. Wu, Y. Chen, L. Yuan, Z. Liu, L. Wang, H. Li, and Y. Fu, “Rethinking classification and localization for object detection,” in _2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2020, pp. 10 183–10 192.
* [45] G. Song, Y. Liu, and X. Wang, “Revisiting the sibling head in object detector,” in _2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2020, pp. 11 560–11 569.
* [46] J. Cao, H. Cholakkal, R. M. Anwer, F. S. Khan, Y. Pang, and L. Shao, “D2det: Towards high quality object detection and instance segmentation,” in _2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2020, pp. 11 482–11 491.
* [47] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, “End-to-end object detection with transformers,” in _Computer Vision – ECCV 2020_ , A. Vedaldi, H. Bischof, T. Brox, and J.-M. Frahm, Eds. Cham: Springer International Publishing, 2020, pp. 213–229.
* [48] C. Dong, C. C. Loy, and X. Tang, “Accelerating the super-resolution convolutional neural network,” in _Computer Vision – ECCV 2016_ , B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds. Cham: Springer International Publishing, 2016, pp. 391–407.
* [49] C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 38, no. 2, pp. 295–307, 2016.
* [50] C. Feng, Y. Zhong, Y. Gao, M. R. Scott, and W. Huang, “Tood: Task-aligned one-stage object detection,” in _2021 IEEE/CVF International Conference on Computer Vision (ICCV)_ , 2021, pp. 3490–3499.
* [51] G. Jocher, A. Stoken, J. Borovec, NanoCode012, ChristopherSTAN, L. Changyu, Laughing, tkianai, A. Hogan, lorenzomammana, yxNONG, AlexWang1900, L. Diaconu, Marc, wanghaoyang0106, ml5ah, Doug, F. Ingham, Frederik, Guilhen, Hatovix, J. Poznanski, J. Fang, L. Yu, changyu98, M. Wang, N. Gupta, O. Akhtar, PetrDvoracek, and P. Rai, “ultralytics/yolov5: v3.1 - Bug Fixes and Performance Improvements,” Oct. 2020. [Online]. Available: https://doi.org/10.5281/zenodo.4154370
* [52] C. Zhang, Y. Wu, M. Guo, and X. Deng, “Training sample selection for space–time adaptive processing based on multi-frames,” _The Journal of Engineering_ , vol. 2019, 10 2019.
* [53] C. H. Nguyen, T. C. Nguyen, T. N. Tang, and N. L. H. Phan, “Improving object detection by label assignment distillation,” in _2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)_ , 2022, pp. 1322–1331.
# Constraining runaway dilaton models using joint gravitational-wave and
electromagnetic observations
Arnab Dhani<EMAIL_ADDRESS>Institute for Gravitation and the Cosmos,
Department of Physics, Pennsylvania State University, University Park, PA
16802, USA Anuradha Gupta<EMAIL_ADDRESS>Department of Physics and
Astronomy, The University of Mississippi, University, Mississippi 38677, USA
B. S. Sathyaprakash<EMAIL_ADDRESS>Institute for Gravitation and the Cosmos,
Department of Physics, Pennsylvania State University, University Park, PA
16802, USA Department of Astronomy & Astrophysics, Pennsylvania State
University, University Park, PA 16802, USA School of Physics and Astronomy,
Cardiff University, Cardiff, UK, CF24 3AA
###### Abstract
With the advent of gravitational-wave astronomy it has become possible to
constrain modified theories of gravity that were invoked to explain dark
energy. In a class of dilaton models, distances to cosmic sources inferred
from electromagnetic and gravitational-wave observations would differ due to
the presence of a friction term. In such theories, the ratio of Newton's
constant to the fine structure constant varies with time. In this paper we
explore the degree to which it will be possible to test such models. If
collocated sources (e.g., supernovae and binary neutron star mergers), but not
necessarily multimessenger sources, can be identified by electromagnetic
telescopes and gravitational-wave detectors, one can probe whether light and
gravitational radiation are subject to the same laws of propagation over
cosmological distances. This constrains the variation of Newton's constant
relative to the fine-structure constant. The next generation of
gravitational-wave detectors, such as Cosmic Explorer and the Einstein
Telescope, in tandem with the Vera Rubin Observatory and gamma-ray
observatories such as the Fermi Space Observatory, will be able to detect or
constrain such variations at the level of a few parts in 100. We apply this
method to GW170817, with distances inferred by the LIGO and Virgo detectors
and the observed kilonova.
Gravitational waves, electromagnetic waves, fine structure constant,
gravitational constant
## I Introduction
Gravitational waves (GWs) and electromagnetic (EM) waves follow the same
propagation equations in General Relativity (GR) Maggiore (2007).
Consequently, the various distance measures in cosmology (e.g., luminosity
distance, angular diameter distance, comoving distance, etc.) are identical
for the two. Several alternative theories of gravity with additional scalar
degrees of freedom Fujii and Maeda (2007); Bertolami _et al._ (2007, 2008);
Bertolami and Paramos (2008); Sotiriou and Faraoni (2008); De Felice and
Tsujikawa (2010); Harko _et al._ (2013); Das and Banerjee (2008); Bisabr
(2012); Moffat and Toth (2012); Shiralilou _et al._ (2022) modify the
propagation of either or both by altering the friction term in the wave
equations through the evolution of the scalar field. We will, however,
restrict ourselves to a class of scalar-tensor theories in which the
dispersion relation remains unchanged. In these scalar-tensor theories, the
distance to an astronomical source inferred from gravitational-wave
observations will therefore differ from that inferred using electromagnetic
radiation.
The presence of a scalar field is also motivated by the low energy effective
field theories of Loop Quantum Gravity Rovelli and Smolin (1994); Domagala
_et al._ (2010) and String Theory Green _et al._ (1988); Uzan (2011); Damour
and Polyakov (1994a, b); Gasperini _et al._ (2002); Minazzoli and Hees
(2013). Furthermore, dark energy Ratra and Peebles (1988); Caldwell _et al._
(1998); Peebles and Ratra (2003), inflation Guth (1981); Linde (1982);
Albrecht and Steinhardt (1982); Linde (2008), and variations of the
fundamental constants are often modeled using a scalar field Bekenstein
(1982); Sandvik _et al._ (2002); Dvali and Zaldarriaga (2002); Olive and
Pospelov (2008); Damour (2012). In fact, it has been claimed that the
requirement of gauge and diffeomorphism invariances would invariably lead to
scalar-tensor theories with minimal/non-minimal coupling to the matter sector
Armendariz-Picon (2002). The coupling of the scalar field to the gravitational
sector in such theories has been tightly constrained in the weak-field limit
using solar system tests Adelberger _et al._ (2003, 2007, 2009); Kapner _et
al._ (2007); Will (2006). If the scalar field couples non-minimally to the
matter sector, the Einstein equivalence principle is broken. The equivalence
principle, likewise, has been tested to a very high accuracy within the solar
system Rosenband _et al._ (2008); Will (2006); Adelberger _et al._ (2009);
Williams _et al._ (2012). A variety of decoupling Tseytlin and Vafa (1992);
Damour and Vilenkin (1996); Damour _et al._ (2002); Jarv _et al._ (2008);
Damour and Nordtvedt (1993); Damour and Polyakov (1994b); Minazzoli and Hees
(2013) or screening Khoury (2010); Khoury and Weltman (2004a, b); Hees and
Fuzfa (2012); Hinterbichler and Khoury (2010); Hinterbichler _et al._ (2011)
mechanisms have, therefore, been proposed to keep these theories viable for
cosmological evolution.
The propagation of waves on a modified background allows one to test for the
presence of a scalar field on cosmological scales. High redshift quasar
absorption spectra Webb _et al._ (2001); King _et al._ (2012); Webb _et
al._ (2011), galaxy clustering data Holanda _et al._ (2016), and 21cm neutral
hydrogen intensity mapping Khatri and Wandelt (2007) have been used to place
limits on the spatio-temporal evolution of the fine structure constant which
can be modeled using a scalar field. Type Ia supernova (SNeIa) data is used to
fit the EM luminosity distance-redshift relation and constrain models of
dynamical dark energy Riess _et al._ (2016). Other studies use the EM
luminosity distance estimates from SNeIa in parallel with the EM angular
diameter distance measurements from X-ray and Sunyaev-Zel’dovich observations
of galaxy clusters to directly constrain the violation of the distance-duality
relation in the EM sector Cao and Liang (2011); Hees _et al._ (2014).
Gravitational wave astronomy has opened a new means of revealing the presence
of a scalar degree of freedom. Coincident measurements of the luminosity
distance from GW observations and the redshift from follow-up EM observations
of “bright” sirens, such as the first observation of gravitational waves from
a binary neutron star merger, GW170817 Abbott _et al._ (2017a, b), have been
used to put limits on the modified friction term in $f(R)$ and scalar-tensor
theories of gravity Fanizza _et al._ (2020); Finke _et al._ (2021). The
luminosity distance-redshift relation has also been constrained for “dark”
sirens (GW observations without an EM counterpart) by cross-correlations with
galaxy catalogs Finke _et al._ (2021); Mukherjee _et al._ (2020). In these
methods, the modified friction term is constrained together with the standard
cosmological parameters; however, the two sets of parameters are strongly
correlated with each other. Mukherjee _et al._ (2020) propose using baryon
acoustic oscillation (BAO) data together with luminosity-distance measurements
from GW observations and redshifts from galaxy-catalog cross-correlations to
directly constrain the ratio between the GW luminosity distance and the EM
luminosity distance in terms of the modified friction parameter.
In this study, we propose directly using the EM luminosity distance from
SNeIa/kilonovae together with the GW luminosity distance from “bright” sirens,
and the redshift obtained from photometric/spectroscopic studies of the
identified galaxy or galaxy cluster, to constrain the ratio of the two
luminosity distances for a class of scalar-tensor theories with a non-minimal
multiplicative coupling between the scalar field and the matter sector. The
crucial distinction from the method described in Mukherjee _et al._ (2020) is
their use of BAO data to convert the angular diameter distance to the EM
luminosity distance via the distance-duality relation, which is broken in our
case due to the non-minimal coupling of the scalar field to the matter sector.
In other words, their procedure is valid for alternative theories of gravity
in which gravity is minimally coupled to the matter sector, whereas our method
applies to more general theories. Furthermore, they infer the redshifts of GW
sources using galaxy correlations and, as a result, also measure some
cosmological parameters. We restrict ourselves to “bright” sirens and,
therefore, have a direct measurement of the redshift. In this way, our
parameter constraints do not suffer from degeneracies with the other
cosmological parameters.
The class of scalar-tensor theories considered in this study arise as low
energy action of string theories and satisfy the solar system tests for both
the modifications to the gravitational sector and the breakage of the
equivalence principle. This class of theories, known as the runaway dilaton
models Gasperini _et al._ (2002); Damour _et al._ (2002); Minazzoli and Hees
(2014); Hees _et al._ (2014), has a Brans-Dicke type gravitational
interaction and a universal multiplicative coupling between the scalar field
and the matter sector which breaks the equivalence principle. The unequal
coupling of the scalar field to the metric and the matter sector leads to
(distinct) modified propagation equations for gravitational waves and
electromagnetic waves.
We parameterize the ratio of the electromagnetic and gravitational-wave
luminosity distances using a parameter $\eta_{0}$. We find that the planned
upgrades to the second-generation advanced gravitational-wave detector
networks (e.g., the A+ upgrade Reitze _et al._ (2019); Abbott _et al._ (2018)
of Advanced LIGO and similar upgrades to Advanced Virgo Acernese _et al._
(2015a), KAGRA Aso _et al._ (2013); Somiya (2012) and LIGO-India Unnikrishnan
(2013); Saleem _et al._ (2022)) will constrain $\eta_{0}$ to
$|\eta_{0}|<0.2,$ while the proposed improvement of the network to Voyager
sensitivity Adhikari _et al._ (2020a) refines the constraint to
$|\eta_{0}|<0.05$. The proposed third generation of ground-based
gravitational-wave detector networks (Cosmic Explorer Evans _et al._ (2021);
Reitze _et al._ (2019) and the Einstein Telescope Punturo _et al._ (2010a,
b); Hild _et al._ (2011)) will place the best limits on $\eta_{0}$ at
$|\eta_{0}|<0.01$.
In Sec. II, we briefly describe runaway dilaton models and their EM and GW
propagation equations. We also describe how the ratio of the luminosity
distance of each sector can be related to the variation of the fundamental
constants. In Sec. III, we discuss the gravitational-wave detectors considered
in this study, the simulations we performed, and the electromagnetic data that
we used. We outline our main results and forecasts in Sec. IV and the
constraints that can be placed using GW170817 in Sec. V. Sec. VI concludes the
paper.
## II Background
In this section, we briefly review the equations of motion for runaway dilaton
models, derive the propagation equations for electromagnetic and gravitational
waves on a homogeneous and isotropic background, parameterize the ratio of the
luminosity distances as a function of redshift, and relate it to the redshift
variation of the fundamental constants.
### II.1 Runaway dilaton models
The action for runaway dilaton models Gasperini _et al._ (2002); Damour _et
al._ (2002); Minazzoli and Hees (2014); Hees _et al._ (2014), a class of
scalar-tensor theories with a generic multiplicative coupling $h(\phi)$
between a scalar field $\phi$ and the matter Lagrangian
$\mathcal{L}_{m}[g_{\mu\nu},\Psi]$, is given by
$S=\int d^{4}x\sqrt{-g}\left[\frac{1}{2\kappa}\left(\phi R-\frac{\omega(\phi)}{\phi}\nabla_{\mu}\phi\nabla^{\mu}\phi-V(\phi)\right)+h(\phi)\mathcal{L}_{m}[g_{\mu\nu},\Psi]\right]\,,$ (1)
where $\kappa=8\pi G$ with $G$ being the gravitational coupling constant, $R$
is the Ricci scalar, and $\Psi$ consists of all the Standard Model fields.
The gravitational equations of motion, given by the variation of the action
with respect to the metric, takes the form,
$R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=\kappa\frac{h(\phi)}{\phi}T_{\mu\nu}+\frac{1}{\phi}(\nabla_{\mu}\nabla_{\nu}-g_{\mu\nu}\Box)\phi\\\
+\frac{\omega(\phi)}{\phi^{2}}(\nabla_{\mu}\phi\nabla_{\nu}\phi-\frac{1}{2}g_{\mu\nu}\nabla_{\alpha}\phi\nabla^{\alpha}\phi)-g_{\mu\nu}\frac{V(\phi)}{2\phi}.$
(2)
Similarly, one can obtain the equation of motion for the scalar field by
varying the action with respect to it. Upon replacing the Ricci scalar in the
resulting equation with the trace of the gravitational equations of motion Eq.
(2), one finds that the Klein-Gordon equation for the scalar field is given
by,
$\frac{2\omega(\phi)+3}{\phi}\Box\phi=\kappa\left(\frac{h(\phi)}{\phi}T^{\alpha}_{\alpha}-2h^{\prime}(\phi)\mathcal{L}_{m}\right)-\frac{\omega^{\prime}(\phi)}{\phi}\nabla_{\alpha}\phi\nabla^{\alpha}\phi+V^{\prime}(\phi)-2\frac{V(\phi)}{\phi},$ (3)
where a prime denotes a derivative with respect to $\phi$. The stress-energy
tensor $T_{\mu\nu}$ in the above equations is defined by the variation of the
matter Lagrangian with respect to the metric $g_{\mu\nu}$,
$T_{\mu\nu}\equiv\frac{-2}{\sqrt{-g}}\frac{\delta}{\delta
g^{\mu\nu}}(\sqrt{-g}\mathcal{L}_{m}).$ (4)
We will consider the background to be a homogeneous, isotropic, and spatially
flat Universe described by the Friedmann-Lemaître-Robertson-Walker (FLRW)
metric,
$ds^{2}=-dt^{2}+a(t)^{2}\delta_{ij}dx^{i}dx^{j},$ (5)
where the size of the homogeneous, isotropic, and spatially flat 3-surface is
given by the scale factor $a(t)$. The background spacetime is considered to be
sourced by a perfect fluid with its stress-energy tensor given by
$T^{\mu\nu}=(\rho+p)u^{\mu}u^{\nu}+pg^{\mu\nu},$ (6)
where $\rho$ is the total energy density and $p$ is the pressure of the fluid
in its rest-frame, and $u^{\mu}$ is the 4-velocity of the fluid with respect
to an observer.
The Friedmann equations that describe the evolution of the background
spacetime are obtained by substituting Eq. (5) and Eq. (6) into the
gravitational field equations (2):
$H^{2}=\kappa\frac{h(\phi)}{3\phi}\rho+\frac{V(\phi)}{6\phi}+\frac{\omega(\phi)}{6}\left(\frac{\dot{\phi}}{\phi}\right)^{2}-H\frac{\dot{\phi}}{\phi},$
(7)
$2\dot{H}+3H^{2}=-\kappa\frac{h(\phi)}{\phi}p+\frac{V(\phi)}{2\phi}-2H\frac{\dot{\phi}}{\phi}-\frac{\omega(\phi)}{2}\left(\frac{\dot{\phi}}{\phi}\right)^{2}-\frac{\ddot{\phi}}{\phi},$
(8)
where $H(t)$ is the Hubble parameter defined as $H(t)\equiv\dot{a}(t)/a(t)$
and dots denote derivatives with respect to the time coordinate $t$. In Eqs.
(7) and (8), if the scalar field $\phi$ is a constant, only the first two
terms on the right-hand side are non-zero and we recover the standard
Friedmann equations for $\Lambda$CDM cosmology up to the normalization of the
field $\phi$.
### II.2 Propagation of gravitational waves
In this and the following subsection, we will derive the equations describing
the propagation of gravitational and electromagnetic waves, respectively, on
the background spacetime.
Gravitational waves are propagating tensor perturbations of the background
spacetime. To get the equations of motion for tensor perturbations, we perturb
our FLRW metric as
$ds^{2}=-dt^{2}+a(t)^{2}(\delta_{ij}+h_{ij})dx^{i}dx^{j},$ (9)
where $h_{ij}$ is a small perturbation of the background geometry and is
transverse ($\partial_{i}h^{ij}=0$) and trace-less ($h_{i}^{i}=0$) in the
chosen coordinate system. Note that this is not a generic perturbation of the
background. A generic perturbation can be decomposed into scalar, vector, and
tensor components that do not mix under diffeomorphisms. Furthermore, at the
leading order, the equations of motion for these components are decoupled.
Here, since we are only interested in GWs, it is sufficient to perturb the
background with the tensor component which is transverse and trace-less.
The equations of motion for gravitational-wave propagation are then given by
$\ddot{h}_{ij}+\left(3H+\frac{\dot{\phi}}{\phi}\right)\dot{h}_{ij}-\frac{\nabla^{2}h_{ij}}{a(t)^{2}}=0,$
(10)
where $\dot{\phi}/\phi$ is the modified friction term that changes the
observed GW amplitude and, as a result, the inferred luminosity distance with
respect to GR. We note that the luminosity distance is additionally modified
because the Friedmann equations are altered by the presence of the scalar
field; in other words, the evolution of the scale factor $a(t)$ differs from
that in GR. Note, however, that the dispersion relation is unchanged with
respect to GR and, hence, GWs travel at the speed of light.
Throughout this study, we are interested in solutions of the wave equations
under geometric optics approximation. This is because the length scales of the
signals of interest to us (stellar-mass compact binary mergers and SNeIa) are
much smaller than the Hubble scale. In this limit, the metric perturbations
can be written as
$h_{ij}=\mathcal{R}\left\{(b_{ij}+\epsilon c_{ij}+\mathcal{O}(\epsilon^{2}))\,e^{i\theta/\epsilon}\right\},$ (11)
where $\theta$ is the phase of the plane wave and $\epsilon$ is an order-
counting parameter, which can be set to 1 at the end of the calculation.
Substituting Eq. (11) into Eq. (10) and collecting terms of the same order, we
get
$k_{\mu}k^{\mu}=0,\qquad k^{\nu}\nabla_{\nu}k^{\mu}=0,$ (12)
at order $\mathcal{O}(\epsilon^{2})$, and
$\nabla_{\mu}(b^{2}k^{\mu})=-b^{2}k^{\mu}\nabla_{\mu}\ln\phi,$ (13)
at order $\mathcal{O}(\epsilon)$, where $k_{\mu}=\nabla_{\mu}\theta$, the wave
vector, is null and follows null geodesics, and $b=||b_{ij}||$ is the
Euclidean norm of the leading-order amplitude. The latter equation is the one
modified with respect to GR; it represents the non-conservation of the
graviton number as the wave propagates on the background spacetime.
The luminosity distance can then be calculated following Minazzoli and Hees
(2014) (the derivation there is carried out for EM waves, but the procedure is
the same for GWs) and is given by
$d_{L}^{\rm GW}=(1+z)\sqrt{\frac{\phi_{0}}{\phi}}\int_{0}^{z}\frac{dz}{H(z)},$
(14)
where $\phi_{0}$ is the value of the field in the present epoch and $H(z)$ is
the modified Hubble relation [Eq. (7)].
### II.3 Propagation of electromagnetic waves
The field equations that govern the propagation of electromagnetic waves can
be obtained by the variation of the action Eq. (1) with respect to the EM
4-potential $A^{\mu}$ and are given by
$\nabla_{\nu}(h(\phi)F^{\mu\nu})=0,$ (15)
where $F_{\mu\nu}=\nabla_{\mu}A_{\nu}-\nabla_{\nu}A_{\mu}$ is the
electromagnetic field tensor.
Solving the above equation in the geometric optics limit, as for metric
perturbations, yields similar equations for photons, namely, they travel on
null geodesics. The equation for photon number ‘non-conservation’ is given by:
$\nabla_{\mu}(\overline{b}^{2}k^{\mu})=-\overline{b}^{2}k^{\mu}\nabla_{\mu}\ln
h(\phi),$ (16)
where the electromagnetic potential in the geometric optics limit is given by
$A^{\mu}=\mathcal{R}\left\{(b^{\mu}+\epsilon c^{\mu}+\mathcal{O}(\epsilon^{2}))\,e^{i\theta/\epsilon}\right\},$ (17)
with $\overline{b}=||b^{\mu}||$ being the norm taken with respect to the
background FLRW metric.
The luminosity distance is then given by Minazzoli and Hees (2014)
$d_{L}^{\rm
EM}=(1+z)\sqrt{\frac{h(\phi_{0})}{h(\phi)}}\int_{0}^{z}\frac{dz}{H(z)}\,,$
(18)
where $\phi_{0}$ is the value of the field at the present epoch and $H(z)$ is
given by the modified Friedmann equations. Of note here is that the two
luminosity distances differ only if $h(\phi)\neq\phi$, which is the premise we
are working under.
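Numerically, Eqs. (14) and (18) share the comoving integral and differ only
in their scalar-field prefactors, as in the sketch below; the
flat-$\Lambda$CDM $H(z)$ and the toy $\phi(z)$ and $h(\phi)$ histories are
illustrative stand-ins for solutions of the modified Friedmann equations.

```python
# A numerical sketch of Eqs. (14) and (18) under stated toy assumptions.
import numpy as np
from scipy.integrate import quad

H0, Om = 70.0, 0.3                       # km/s/Mpc (assumed)
c = 299792.458                           # km/s

def H(z):                                # stand-in for the modified H(z)
    return H0 * np.sqrt(Om * (1 + z)**3 + 1 - Om)

def phi(z):                              # toy scalar-field history, phi_0 = 1
    return 1.0 - 0.02 * z / (1 + z)

def h(p):                                # toy matter coupling with h(phi) != phi
    return p**1.5

def comoving(z):                         # the common integral, in Mpc
    return c * quad(lambda zp: 1.0 / H(zp), 0.0, z)[0]

z = 0.5
dL_gw = (1 + z) * np.sqrt(phi(0) / phi(z)) * comoving(z)        # Eq. (14)
dL_em = (1 + z) * np.sqrt(h(phi(0)) / h(phi(z))) * comoving(z)  # Eq. (18)
print(dL_gw, dL_em, dL_gw / dL_em)
```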
### II.4 Parameterizing the modified luminosity distances
Now that we have obtained the luminosity distance-redshift relation for both
electromagnetic and gravitational waves, we can parameterize the scalar field
dependence of their ratio. We choose to do this parameterization in terms of
the violation of the distance-duality relation for both the sectors. This
choice helps connect our results to those of the numerous experiments in the
electromagnetic sector that parameterize this deviation Hees _et al._ (2014).
The distance-duality relation connects the luminosity distance to the angular
diameter distance. The latter is defined by:
$d_{A}(z)=\frac{1}{1+z}\int_{0}^{z}\frac{dz}{H(z)}.$ (19)
This is a geometric quantity that can be derived by integrating the geodesic
equation. For the class of theories considered in this study, both
gravitational and electromagnetic waves travel on null geodesics and,
therefore, their angular diameter distances are unchanged from that of GR,
apart from a modification to the Friedmann equations. We can, then, write the
parameterization as
$\eta(z)=\frac{d_{L}(z)}{d_{A}(z)(1+z)^{2}}.$ (20)
In GR, the distance-duality relation implies $\eta(z)=1$.
We consider $\eta(z)$ to have the functional form
$\eta^{\rm GW}(z)=\sqrt{\frac{\phi_{0}}{\phi}}=1+\eta_{1}\frac{z}{1+z},\qquad\eta^{\rm EM}(z)=\sqrt{\frac{h(\phi_{0})}{h(\phi)}}=1+\eta_{2}\frac{z}{1+z},$ (21)
parameterizing the deviation from the gravitational (electromagnetic)
distance-duality relation by $\eta_{1}$ ($\eta_{2}$). Additionally, we
parameterize the ratio of the luminosity distances as
$\frac{d_{L}^{\rm GW}}{d_{L}^{\rm EM}}=\frac{\eta^{\rm GW}(z)}{\eta^{\rm
EM}(z)}=\sqrt{\frac{\phi_{0}/h(\phi_{0})}{\phi/h(\phi)}}=1+\eta_{0}\frac{z}{1+z}$
(22)
from which one can deduce that $\eta_{0}\approx\eta_{1}-\eta_{2}$. The above
form of the parameterization was introduced in Holanda _et al._ (2012) with
the advantage being that it avoids divergence at large redshifts which the
linear expansions suffer from. Given a simultaneous measurement of the GW and
EM luminosity distances, either from the same source or the same galaxy or the
same galaxy cluster (see Sec. III.2.3 and VI for a discussion), one can place
constraints on the parameter $\eta_{0}$.
At this point, we note that studies in Mukherjee _et al._ (2020); Fanizza
_et al._ (2020); Finke _et al._ (2021) have constrained the ratio of the two
luminosity distances, albeit in the context of modifying the frictional term
in the gravitational sector alone, through the parameterization,
$\frac{d_{L}^{\rm GW}(z)}{d_{L}^{\rm EM}(z)}=\Xi_{0}+\frac{1-\Xi_{0}}{(1+z)^{n}},$ (23)
where $\Xi_{0}=1$ in GR and $n$ gives the rate at which the ratio saturates to
its asymptotic value $\Xi_{0}$. Our parameter $\eta_{0}$ is related to
$(\Xi_{0},n)$ as
$\eta_{0}=1-\Xi_{0}\quad{\rm for}\quad n=1,$ (24)
i.e., our parameterization is a subclass of the $(\Xi_{0},n)$ parameterization
for a fixed saturation rate.
The errors on $\eta_{0}$ can be calculated from the errors on the GW and EM
luminosity distances, assuming the redshift to the source is known, using the
standard error propagation formula for independent variables ($d_{L}^{\rm GW}$
and $d_{L}^{\rm EM}$ in this case) as,
$\sigma_{\eta_{0}}^{2}=\left(\frac{\partial\eta_{0}}{\partial d_{L}^{\rm
GW}}\right)^{2}\sigma_{d_{L}^{\rm
GW}}^{2}+\left(\frac{\partial\eta_{0}}{\partial d_{L}^{\rm
EM}}\right)^{2}\sigma_{d_{L}^{\rm EM}}^{2},$ (25)
where $\sigma_{X}$ denotes $1$-$\sigma$ error in the quantity $X$. Simplifying
the above equation by evaluating the derivative expressions leads to
$\sigma_{\eta_{0}}=\frac{1+z}{z}\frac{d_{L}^{\rm GW}}{d_{L}^{\rm
EM}}\sqrt{\left(\frac{\sigma_{d_{L}^{\rm GW}}}{d_{L}^{\rm
GW}}\right)^{2}+\left(\frac{\sigma_{d_{L}^{\rm EM}}}{d_{L}^{\rm
EM}}\right)^{2}}.$ (26)
### II.5 Redshift variation of fundamental constants
The non-minimal coupling of the scalar field to the matter and gravitational
sectors makes the fundamental constants depend on the scalar field, so they
evolve as the scalar field evolves Yunes _et al._ (2010, 2016); Vijaykumar
_et al._ (2021). From the action in Eq. (1), one can read off that the fine
structure constant $\alpha$ and the gravitational constant $G$ depend on the
scalar field via
$\alpha\sim h^{-1}(\phi),\qquad G\sim\phi^{-1},$ (27)
and, hence, their redshift variation can be written as
$\frac{\Delta\alpha(z)}{\alpha_{0}}\equiv\frac{\alpha(z)-\alpha_{0}}{\alpha_{0}}=\frac{h(\phi_{0})}{h(\phi)}-1=\eta^{\rm EM}(z)^{2}-1,\qquad\frac{\Delta G(z)}{G_{0}}=\frac{G(z)-G_{0}}{G_{0}}=\frac{\phi_{0}}{\phi}-1=\eta^{\rm GW}(z)^{2}-1\,,$ (28)
where $\alpha_{0}$ and $G_{0}$ are the values of $\alpha$ and $G$ at the
current epoch, respectively.
Given the experimental constraints on the ratio of the two luminosity
distances, one can constrain the temporal variation of $\frac{G}{\alpha}(z)$
in the current epoch as
$\beta\equiv\left.\frac{\tfrac{d}{dt}(G/\alpha)}{G/\alpha}\right|_{0}=-2H_{0}\left.\frac{d\eta}{dz}\right|_{0}=-2H_{0}\eta_{0}\,,$
(29)
where $\eta(z)=\eta^{\rm GW}(z)/\eta^{\rm EM}(z)$ and $H_{0}$ is the present
value of the Hubble parameter. If one uses constraints from other
electromagnetic probes Hees _et al._ (2014), the temporal variations of both
$\alpha$ and $G$ can be separately constrained.
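As a quick numerical illustration of Eq. (29), assuming
$H_{0}=70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$ and a 3G-era bound of
$|\eta_{0}|\sim 0.01$:

```python
# A sketch of Eq. (29): converting an eta_0 constraint into a limit on the
# present-day fractional drift of G/alpha. H_0 value is an assumption.
H0_per_yr = 70.0 * 1.0e5 / 3.0857e24 * 3.156e7   # km/s/Mpc -> 1/yr
eta0 = 0.01                                      # illustrative 3G-era bound
beta = -2.0 * H0_per_yr * eta0                   # d(G/alpha)/dt / (G/alpha) at z = 0
print(f"|beta| ~ {abs(beta):.1e} per year")      # ~1.4e-12 per year
```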
We point out here that $G_{0}$ is not the effective gravitational constant
$G_{\rm eff}$ that enters the Poisson equation at the Newtonian order and
should not be interpreted as the strength of the gravitational force between
two test masses separated by a unit distance. The two are related by
$G_{\rm
eff}=G_{0}\left(1+\frac{1-2\phi_{0}\frac{h^{\prime}(\phi_{0})}{h(\phi_{0})}}{2\omega(\phi_{0})+3}\right)\frac{h(\phi_{0})}{\phi_{0}}.$
(30)
In the absence of the scalar field, $G_{0}$ and $G_{\rm eff}$ coincide, as
expected.
## III Method
In this section, we describe the different gravitational wave detector
networks considered in this study, outline our procedure for simulating
gravitational-wave sources, calculate the rate of spatially coincident EM and
GW signals, and estimate the distribution of luminosity distance errors for
the coincidentally observed population of sources.
### III.1 Gravitational wave detector networks
We consider three GW detector networks across three technology generations.
The 2G+ network consists of five second-generation GW detectors: three LIGO
detectors Aasi _et al._ (2015) (LIGO-Hanford, LIGO-Livingston, LIGO-India)
operating at A+ sensitivity, and the Virgo Acernese _et al._ (2015b) and
KAGRA Akutsu _et al._ (2019) detectors at AdV+ and KAGRA+ sensitivities,
respectively. The Voy+ network includes the same five second-generation
detectors but with the LIGO detectors upgraded to the proposed ‘Voyager’
Adhikari _et al._ (2020b) technology. The final network, ECC, consists of
three proposed third-generation detectors, specifically two Cosmic Explorer
Reitze _et al._ (2019) detectors and an Einstein Telescope Punturo _et al._
(2010b). We show the noise power spectral densities (PSDs) of the individual
detectors in Fig. 1. The locations of these detectors and the technologies
used in each network are given in Table 1.
Figure 1: The noise power spectral density (PSD) estimates used for the individual detectors considered in this study. We use a low-frequency cutoff of 5 Hz for all but the Advanced Virgo detector, for which the PSD starts at 10 Hz.

Network label | Detector location (technology)
---|---
2G+ | Hanford WA (A+), Livingston LA (A+), Cascina Italy (AdV+), Kamioka Japan (KAGRA+), Hingoli India (A+)
Voy+ | Hanford WA (Voyager), Livingston LA (Voyager), Cascina Italy (AdV+), Kamioka Japan (KAGRA+), Hingoli India (Voyager)
ECC | Cascina Italy (ET-D), fiducial US site (CE1_cb), fiducial Australian site (CE1_cb)
Table 1: An overview of the three networks used in the study. The location
determines the detector antenna patterns, while the technology indicates the
power spectral density used. The Voyager and Cosmic Explorer power spectral
densities are chosen to be low-frequency optimized and, in the case of the
latter, for a detector arm length of $40\,\text{km}$.
### III.2 Rates
#### III.2.1 Binary neutron star merger rates
We simulate a population of binary neutron star (BNS) merger events up to a
redshift of $z=1.5$ assuming a uniform mass distribution between 1$M_{\odot}$
and 2.5$M_{\odot}$ for the individual NSs Abbott _et al._ (2020). The other
parameters, cosine of the inclination angle $\cos\iota$, location of the
source on the plane of the sky $\Omega$ (cosine of the declination angle
$\cos\delta$ and right ascension $\alpha$), polarization angle $\psi$, and the
phase of the signal at coalescence $\phi_{0}$, of the fiducial BNS population
are drawn from a uniform distribution across their domains. We assume 10 years
of observing time for each network with an 80% duty cycle for each detector
Belgacem _et al._ (2019). The redshift distribution for our BNS population is
given by the following probability distribution,
$p(z)=\frac{R_{z}(z)}{\int_{0}^{10}R_{z}(z)dz},$ (31)
where an upper limit of $z=10$ is justified since the contribution to the
integral from redshifts larger than 10 is negligible. $R_{z}(z)$, the merger
rate density in the observer frame, can be expressed as
$R_{z}(z)=\frac{R_{m}(z)}{1+z}\frac{dV(z)}{dz}.$ (32)
Here $R_{m}(z)$ is the merger rate per comoving volume in the source frame and
$dV/dz$ is the comoving volume element. The former is given by
$R_{m}(z)=\int_{t_{\rm min}}^{t_{\rm max}}R_{f}[t(z)+t_{d}]P(t_{d})dt_{d},$
(33)
where $R_{f}(t)$ is the binary star formation rate (SFR) which we assume
follows the Vangioni cosmic SFR Vangioni _et al._ (2015). The delay time (the
time it takes for a binary to coalesce after formation) distribution is taken
to be $P(t_{d})\propto 1/t_{d}$ with $t_{\rm min}=20\,\rm Myr$ and $t_{\rm
max}$ set to the Hubble time $1/H_{0}$. The value of $R_{m}$ at $z=0$ is
estimated from the population properties of the third LIGO–Virgo
Gravitational-Wave Transient Catalog, GWTC-3 Abbott _et al._ (2021) to be
between
$R_{m}(z=0)=\mbox{13--1900}\;\rm Gpc^{-3}yr^{-1}.$ (34)
We present results for both the optimistic and pessimistic local merger rates.
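A minimal sketch of drawing redshifts from Eq. (31) by inverse-CDF sampling is
given below; it assumes $R_{z}(z)$ has already been tabulated from Eqs.
(32)-(33) and substitutes an arbitrary placeholder shape for that tabulation.

```python
# A sketch of sampling BNS redshifts from the probability density of Eq. (31).
import numpy as np

z_grid = np.linspace(1e-3, 10.0, 2000)
R_z = z_grid**2 * np.exp(-z_grid)        # placeholder for the tabulated R_z(z)

dz = z_grid[1] - z_grid[0]
pdf = R_z / (R_z.sum() * dz)             # normalize as in Eq. (31)
cdf = np.cumsum(pdf) * dz
cdf /= cdf[-1]

rng = np.random.default_rng(42)
z_samples = np.interp(rng.uniform(size=100_000), cdf, z_grid)  # inverse CDF
print(z_samples[:5])
```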
#### III.2.2 Electromagnetic counterpart
The Voy+ and ECC networks of GW detectors have a reach beyond the horizon
distance of current and future EM telescopes for kilonovae, which can be
observed up to a redshift of about $z=0.5$ (see, e.g., Table 2.2 in Ref.
Kalogera _et al._ (2021)). Hence, BNS events beyond a redshift of $z=0.5$ are
electromagnetically observable only through short gamma-ray bursts.
Therefore, in this study, we assume that 10% of the BNS events up to a
redshift of 0.5 will have a dedicated EM follow-up search to detect their
kilonova emissions (in addition to a possible GRB detection, which does not
need a dedicated search owing to the near all-sky sensitivity of GRB
detectors), and for BNS observations beyond a redshift of 0.5 we assume that
a coincident electromagnetic detection consists only of GRBs.
Figure 2: Number of GW (left panel) and GW+GRB (right panel) detections as a
function of redshift for different detector networks considered in this study.
The optimistic case (solid lines) represents the upper limit on the local BNS
merger rate and the pessimistic case (dashed lines) the lower limit. The range
of values for the local BNS merger rate is given in Eq. (34). The lifetime of
a network is assumed to be 10 years with an 80% duty cycle for each detector.
We calculate the rate of coincident GRB detections following the procedure
outlined in Belgacem _et al._ (2019), sketched out here for completeness. We
assume a Gaussian structured jet profile Howell _et al._ (2018) for a GRB
burst, with the luminosity $L(\theta_{V})$ given by
(2018) for a GRB burst and the luminosity $L(\theta_{V})$ is given by
$L(\theta_{V})=L_{p}\exp\left(-\frac{\theta_{V}^{2}}{2\theta_{c}^{2}}\right),$
(35)
where $\theta_{V}$ is the viewing angle and $\theta_{c}=4.7^{\circ}$ represents
the variation in the GRB jet opening angle. $L_{p}$ is the peak luminosity of
each burst assuming isotropic emission in the rest frame in the $1-10^{4}$ keV
energy range and can be sampled from the probability distribution
$\Phi(L_{p})\propto\begin{cases}(L_{p}/L_{*})^{\alpha},\qquad L_{p}<L_{*},\\\
(L_{p}/L_{*})^{\beta},\qquad L_{p}\geq L_{*},\end{cases}$ (36)
where the parameters of the broken power-law distribution are $L_{*}=2\times
10^{52}\,\rm erg/s$, $\alpha=-1.95$, and $\beta=-3$ Wanderman and Piran
(2015). A GRB is assumed to be detected if the observed peak flux
$F_{P}(\theta_{V})=L(\theta_{V})/4\pi d_{L}^{2}$, given the GW luminosity
distance and inclination angle, is greater than the flux limit of
$1.1\rm\,ph\,s^{-1}\,cm^{-2}$ Belgacem _et al._ (2019) in the $50$–300 keV
band for Fermi-GBM. The total time-averaged observable sky fraction for the
Fermi-GBM is taken to be 0.6 Burns _et al._ (2016).
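The sketch below assembles the pieces of this detection model: peak
luminosities drawn from the broken power law of Eq. (36), viewing angles
applied through the Gaussian jet of Eq. (35), and a flux cut at the Fermi-GBM
limit. The lower luminosity cutoff and the mean photon energy used to convert
to a photon flux are added assumptions not specified above (the untruncated
$\alpha=-1.95$ branch is not normalizable without a cutoff).

```python
# A Monte Carlo sketch of the coincident-GRB detection criterion.
import numpy as np

L_STAR, ALPHA, BETA = 2e52, -1.95, -3.0      # erg/s; Wanderman & Piran (2015)
L_MIN = 1e49                                 # assumed lower cutoff, erg/s
THETA_C = np.deg2rad(4.7)
FLUX_LIMIT = 1.1                             # ph / s / cm^2, Fermi-GBM

def sample_Lp(rng, n):
    """Inverse-CDF sampling of the broken power law of Eq. (36)."""
    a, b = ALPHA + 1.0, BETA + 1.0
    xmin = L_MIN / L_STAR
    w1 = (1.0 - xmin**a) / a                 # weight of the L < L* branch
    w2 = -1.0 / b                            # weight of the L >= L* branch
    u = rng.uniform(size=n)
    low = rng.uniform(size=n) < w1 / (w1 + w2)
    x = np.where(low,
                 (xmin**a + u * (1.0 - xmin**a)) ** (1.0 / a),
                 (1.0 - u) ** (1.0 / b))
    return L_STAR * x

rng = np.random.default_rng(1)
n = 100_000
Lp = sample_Lp(rng, n)
theta_v = np.arccos(rng.uniform(size=n))          # isotropic viewing angles
L = Lp * np.exp(-theta_v**2 / (2 * THETA_C**2))   # Eq. (35)

d_L = 300 * 3.086e24                         # an example distance: 300 Mpc in cm
E_mean = 100 * 1.602e-9                      # assumed ~100 keV mean photon energy, erg
flux = L / (4 * np.pi * d_L**2) / E_mean     # rough photon flux, ph / s / cm^2
print("detectable fraction:", np.mean(flux > FLUX_LIMIT))
```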
#### III.2.3 Rates for spatially coincident SNeIa
Following the arguments presented in Sec. 3 of Gupta _et al._ (2019), we now
estimate the rates for a spatial coincidence of SNeIa and BNSs in a galaxy
cluster. Gupta _et al._ (2019) concluded that the rate of spatial coincidence
of SNeIa and BNS mergers in the same galaxy, given their rates Li _et al._
(2011); Abbott _et al._ (2019), is extremely small. Moreover, in their Sec.
5, Gupta _et al._ (2019) showed that there is only an ${\cal O}(1\%)$ error
in the distance estimation of SNeIa if calibrated through a BNS in the same
galaxy cluster instead of the same galaxy. Therefore, coincidence of SNeIa
with a BNS in the same galaxy cluster is sufficient to obtain the redshift
information of BNSs.
The current volumetric merger rate of BNSs is $13$–1900 $\rm Gpc^{-3}yr^{-1}$
Abbott _et al._ (2021) and that of local SNeIa is $3.0^{+0.6}_{-0.6}\times
10^{4}\,\rm Gpc^{-3}yr^{-1}$ Li _et al._ (2011). Taking the median of the
local SNeIa rate, this implies roughly 15 to 2300 SNeIa per BNS merger in a
galaxy. As in Gupta _et al._ (2019), we assume that the ratio of the SNeIa
and BNS rates is similar in rich galaxy clusters as well, since both types of
populations involve compact-object mergers. For sources up to a redshift of
$z=1.5$, we take the SNeIa rate in rich galaxy clusters to be
$0.65^{+0.61}_{-0.49}\times 10^{-12}\,\rm L_{B,\odot}^{-1}yr^{-1}$ Friedmann
and Maoz (2018), where $L_{B,\odot}$ is the bolometric luminosity in solar
units. Consequently, these numbers suggest that every year there will be
$\approx 3$ SNeIa in a Coma-like cluster with bolometric luminosity of
$L_{B}\approx 5.0\times 10^{12}L_{B,\odot}$ Girardi _et al._ (2002), which is
sufficient to confirm the association with BNSs and derive their redshifts.
### III.3 Luminosity distance errors
#### III.3.1 Errors from gravitational-wave observations
We simulate a population of neutron star binaries using the procedure outlined
in Sec. III.2. The redshift of a source is converted to its luminosity
distance, the GW observable, using Planck18 Aghanim _et al._ (2020)
cosmology. For a BNS merger to be detectable, we require a network signal-to-
noise ratio (SNR) threshold of 12 for each binary in the population but do not
demand a minimum SNR for individual detectors. Note that the probability of
having just one detector online in a 5 detector network with a duty cycle of
80% for each detector is less than a percent Belgacem _et al._ (2019). We
calculate the errors in the estimation of the binary parameters using the
publicly available code, gwbench Borhanian (2020), which implements a Fisher-
matrix formalism Cutler and Flanagan (1994); Poisson and Will (1995) for error
calculation. We use the IMRPhenomPv2_NRTidal waveform model in our Fisher
analysis, with a fixed effective tidal parameter $\tilde{\Lambda}=100$. We do
not compute the error on the $\tilde{\Lambda}$ measurement because this
parameter is not expected to appreciably affect the luminosity distance
estimate. We assume that the electromagnetic counterpart accurately provides
the sky location, so we do not compute an error on it. We further take the
chirp mass to be given because it is well estimated and mostly not degenerate
with the luminosity distance.
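For concreteness, the error extraction in the Fisher formalism amounts to inverting the Fisher matrix and reading the diagonal of the resulting covariance. The sketch below illustrates this generic step; it is not gwbench's interface, which assembles the Fisher matrix from waveform derivatives and the detector noise curves internally.
```python
import numpy as np

def fisher_errors(fisher_matrix):
    """1-sigma parameter errors from a Fisher matrix: the covariance matrix
    is the inverse of the Fisher matrix, and sigma_i = sqrt(cov_ii)."""
    cov = np.linalg.inv(fisher_matrix)
    return np.sqrt(np.diag(cov))

# With a parameter ordering such as (eta, d_L, iota, psi, t_c, phi_c), the
# fractional distance error would be fisher_errors(F)[1] / d_L.
```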
Network | GW events | GW events ($z<0.5$) | GW + GRB events | GW + GRB events ($z>0.5$) | GW + EM counterpart
---|---|---|---|---|---
2G+ | 10,259 (83) | 10,259 (83) | 304 (2) | 0 (0) | 1330 (11)
Voy+ | 83,697 (589) | 81,415 (571) | 825 (5) | 17 (0) | 8,967 (62)
ECC | 5,286,423 (36,001) | 505,073 (3454) | 2,810 (19) | 1,657 (15) | 53,317 (364)
Table 2: Number of GW events detected by the three networks in 10 years,
together with the coincident GRB detection rate and the same for sources with
$z>0.5$, assuming the detector characteristics of Fermi-GBM.
This slightly underestimates the distance errors but does not affect our
results significantly. From a technical perspective, it also renders some of
the otherwise ill-conditioned Fisher matrices of the 3G network well behaved.
We are then left with a Fisher matrix over the symmetric mass ratio $\eta$,
the luminosity distance $d_{L}^{\rm GW}$, the inclination angle $\iota$, the
polarization angle $\psi$, the time of coalescence $t_{c}$, and the phase of
coalescence $\phi_{c}$. We subsequently extract the error in the measurement
of the luminosity distance, which is the parameter of interest here.
Figure 2 shows the redshift distribution of the detected GW events in our
population (left panel), together with those that have an observable GRB
counterpart in the Fermi-GBM detector (right panel) for the three different
networks considered. The distribution is shown for both the optimistic case
(solid lines) and the pessimistic case (dashed lines) corresponding to the
range of the local BNS merger rates given in Eq. (34). We note that only the
3G network can observe BNS coalescences from the furthest redshifts
considered. In Tab. 2, we quote the figures for the expected number of GW
events, the corresponding number whose redshift is less than 0.5, the total
number of events with a GRB counterpart, the number of events with a GRB
counterpart above redshift $z>0.5$, and the cumulative number of events
expected to contribute to the measurement of $\eta_{0}$ according to our
assumptions in Sec. III.2.2. The numbers in parentheses correspond to the
pessimistic case.
We see that the 2G+ network has a horizon reach of $z\lesssim 0.3$, and the
total number of coincident electromagnetic detections for the optimistic case
is $\sim 1330$ (304 + 10% of 10,259). The corresponding numbers for the Voy+
and ECC networks are $\sim 8,970$ and $\sim 53,320$, respectively, as given in
the last column of Tab. 2.
Figure 3: Fractional error in the measurement of gravitational luminosity
distance as a function of redshift for the simulated BNS population. We see
that only the third-generation detector network detects sources from the
highest redshift ($z=1.5$) considered in this study.
In Fig. 3, we show the fractional error in the measurement of GW luminosity
distance as a function of redshift for the three detector networks for our
detected population. To get the average behavior, we distribute the sources
into redshift bins and calculate the median of the fractional errors of the
sources in each redshift bin. We model the fractional luminosity distance
errors as a function of redshift as a series of Heaviside step functions,
which entails taking the errors in each redshift bin to be a constant.
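A minimal sketch of this binned-median, step-function model (bin edges and array names are illustrative):
```python
import numpy as np

def binned_medians(z, frac_err, edges):
    """Median fractional d_L error in each redshift bin (NaN for empty bins)."""
    idx = np.digitize(z, edges) - 1
    return np.array([np.median(frac_err[idx == b]) if np.any(idx == b) else np.nan
                     for b in range(len(edges) - 1)])

def step_model(z_query, edges, medians):
    """Evaluate the piecewise-constant (Heaviside-step) error model."""
    b = np.clip(np.digitize(z_query, edges) - 1, 0, len(medians) - 1)
    return medians[b]
```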
We note that a fit for the fractional error in luminosity distance as a
function of redshift can be found in Belgacem _et al._ (2019). The reason we
do not directly use their fits in our study is because the only parameter in
their Fisher matrix is the luminosity distance and, therefore, their errors
are unrealistic. Crucially, they ignore the correlations between the
luminosity distance and the inclination angle, which are known to increase the
errors significantly Marković (1993); Cutler and Flanagan (1994).
As can be seen from Fig. 3, the horizon distance for the 2G+ network is $z\sim
0.3$ and hence we consider the full detectable population to have a possible
kilonova counterpart detection. Another point of note from the figure is that
the largest redshift considered in this study ($z=1.5$) is within the horizon
distance for the ECC network. We do not consider higher redshift sources
because we are limited by the farthest observed SNeIa in the Union2 data-set
(see Sec. III.3.2).
#### III.3.2 Errors from electromagnetic observations
We model the EM luminosity distance errors using the Union2 Amanullah _et
al._ (2010) SNeIa compilation. Supernovae distances are measured in units of
distance modulus $\mu,$ which is related to luminosity distance by
$\mu=5\log_{10}\left(\frac{10^{5}d_{L}}{Mpc}\right).$ (37)
It can be easily seen from the above expression that the fractional error in
the luminosity distance is given by
$\frac{\Delta d_{L}}{d_{L}}=\frac{\log_{e}(10)}{5}\Delta\mu.$ (38)
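For concreteness, applying the conversion in Eq. (38) to the distance modulus errors discussed below (a minimal sketch):
```python
import numpy as np

def frac_dl_error(delta_mu):
    """Fractional luminosity-distance error from a distance-modulus error,
    using d(mu) = (5 / ln 10) d(d_L)/d_L, i.e. Eq. (38)."""
    return np.log(10.0) / 5.0 * delta_mu

print(frac_dl_error(0.19))  # Union2 median Delta(mu) = 0.19 -> ~9% in d_L
print(frac_dl_error(0.12))  # LSST-era expectation          -> ~5.5% in d_L
```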
Figure 4 shows the fractional errors in the EM luminosity distance in the
Union2 data-set as a function of redshift. We do not see any systematic trend
in the luminosity distance errors across redshift bins (the errors are
approximately constant across bins) and, therefore, do not attempt a fit,
instead treating the redshift behaviour of the errors as a piecewise step
function.
We also note that for the low-redshift events of the 2G+ and Voy+ networks, we
are limited by the SNeIa luminosity distance errors, whereas the median GW
errors for the ECC network are always smaller than their SNeIa counterparts
except around the redshift limit of $z=1.5$.
The median error in the distance modulus for the Union2 data-set is 0.19. The
Rubin Observatory Legacy Survey of Space and Time is expected to observe about
half a million supernovae over its 10-year survey life cycle, with a large
fraction of them expected to have distance modulus errors of order 0.12, a
$\sim 40\%$ improvement over the Union2 data-set LSS that would further
improve our estimates.
Figure 4: Fractional errors on SNe luminosity distances as a function of
redshift. The red steps denote the median error for the corresponding redshift
bin.
## IV Results
We calculate the errors on $\eta_{0}$ for our simulation as follows. From our
sub-population of observed sources of GWs and their EM counterparts, we
randomly select $N$ binaries. Given that we know the redshift to each of our
sources, we get the median fractional error in the GW and EM luminosity
distances for each of the $N$ detections from our modeling of the same as
described in Sec. III.3. Assuming that the central values of both luminosity
distances are the same, we use Eq. (26) to calculate the error on $\eta_{0}$
for each source. The combined error for $N$ independent
observations is given by
$\frac{1}{\sigma^{2}}=\sum_{i=1}^{N}\frac{1}{\sigma_{i}^{2}},$ (39)
where $\sigma_{i}$ is the error for each event. We show the resultant errors
on $\eta_{0}$ in Fig. 5 as a function of the number of observed events, in
increments of 5 up to the expected number of observations for the optimistic
case, for the three networks under consideration. The dashed vertical lines
show the expected number of observations for the pessimistic case rounded to
the nearest multiple of 5 for easy reading of the associated error. Also
depicted in the figure on the right axis is the same error converted to the
temporal variation of the ratio of the gravitational and fine structure
constant at the current epoch [see Eq. (29)] where $H_{0}=73.04\pm 1.04\,\rm
km/s/Mpc$ Riess _et al._ (2021). We fit the $1/\sqrt{N}$ asymptotic behavior
of the errors and quote the typical error for an observation in each network
in Table 3.
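Equation (39) and the resulting $1/\sqrt{N}$ scaling can be checked in a few lines; a sketch using the typical 2G+ per-event error of 7.3 from Table 3:
```python
import numpy as np

def combined_sigma(sigmas):
    """Combine independent 1-sigma errors via 1/sigma^2 = sum_i 1/sigma_i^2."""
    s = np.asarray(sigmas, dtype=float)
    return 1.0 / np.sqrt(np.sum(1.0 / s**2))

# N identical events of error sigma_1 combine to sigma_1 / sqrt(N):
print(combined_sigma([7.3] * 100))  # ~0.73 for 100 typical 2G+ events
```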
Figure 5: The expected error on the measurement of $\eta_{0}$ ($\sigma_{\eta_{0}}$) as a function of the number of observations for the three detector networks examined in this study. The axis on the right enumerates the same errors in terms of the temporal variation of the ratio of the gravitational and fine structure constant at the present epoch, $\beta$. The maximum number of events for each network denotes the expected number of total observations for the optimistic case. The corresponding number for the pessimistic case (rounded to the nearest multiple of 5) is shown by the vertical dashed lines.
Network | $\sigma_{\eta_{0}}$ | $\sigma_{\beta}$ $[\times 10^{-9}\,\rm yr^{-1}]$
---|---|---
2G+ | 7.3 | 1.1
Voy+ | 4.1 | 0.61
ECC | 1.6 | 0.24
Table 3: Typical value of the error in the measurement of $\eta_{0}$ and the
same in terms of the temporal variation of $G/\alpha(z=0)$ for a single GW
detection for the three networks considered in this study.
We note that constraints on the violation of the distance-duality relation
directly translate to constraints on the variation of fundamental constants.
We now compare our results to complementary EM experiments that look for the
variation of the fine structure constant in cosmological data. We stress that
this comparison can only be done for a restricted class of models that do not
modify the gravitational sector, as the relevant EM experiments are oblivious
to these modifications. Hees _et al._ (2014) briefly reviewed such EM probes,
and the constraints from various probes are quoted in Table I of their paper.
Holanda _et al._ (2012) and Cao and Liang
(2011) use the same parameterization of $\eta(z)$ as ours and quote average
$1\sigma$ errors of 0.12 and 0.22 on $\eta_{0}$, respectively. The latter is
on par with the capability of the 2G+ network at the end of its observing
cycle in the optimistic case.
Other studies use different parameterizations, but one can deduce that, while
the constraints forecast for the 2G+ network are of the same order as or
slightly weaker than the electromagnetic ones, the Voy+ network would be able
to place limits a few times better than most of the EM experiments (except
constraints from high-redshift quasar absorption spectra Webb _et al._ (2001);
King _et al._ (2012); Webb _et al._ (2011)) for the optimistic case. The ECC
network improves the Voy+ network constraints by an additional factor of 5.
The pessimistic case yields constraints that are an order of magnitude poorer
than the optimistic case for all three networks studied here, which is in line
with the $1/\sqrt{N}$ behaviour of the errors: the optimistic case has two
orders of magnitude more events than the pessimistic case. Furthermore,
observations of quasars at high redshifts suggest a
possible spatial variation of the fine structure constant King _et al._
(2012); Webb _et al._ (2011). The large number of gravitational wave
observations, albeit from smaller redshifts, would allow for local constraints
on the spatial variation of the fine structure constant too.
## V Constraints based on GW170817
In the previous sections, we focused on the detection of a spatially
coincident SNeIa to provide the EM luminosity distance. This is because a
SNeIa is a standard candle and, hence, has a constant absolute luminosity in
the source frame. In addition, the systematic uncertainties of modeling SNeIa
as standard candles are well understood, so they provide an unbiased estimate
of the luminosity distance to the source.
Recently, there have been efforts to model the kilonova emissions following a
BNS merger as a standard candle Kashyap _et al._ (2019); Coughlin _et al._
(2020). This would provide another independent measure of the EM luminosity
distance for low redshift sources with the added benefit of not having to
search for a spatially coincident supernova. We, however, do not forecast the
constraints that can be placed on $\eta_{0}$ for a population of joint GW-
kilonova sources using standardised kilonova emissions since these models are
at a very nascent stage of development with large systematic uncertainties.
Nevertheless, we use the EM luminosity distance estimates of Coughlin _et
al._ (2020) for GW170817 Abbott _et al._ (2017a) to place the first
constraints on our deviation parameter $\eta_{0}$. The gravitational-wave
luminosity distance for GW170817 was estimated to be $d_{L}^{\rm
GW}=43.8^{+2.9}_{-6.9}\,\text{Mpc}$ Abbott _et al._ (2017c). Coughlin _et
al._ (2020) give three measurements of the EM luminosity distance. The first
value, $d_{L}^{\rm EM}=31^{+17}_{-11}\,\text{Mpc}$, is a direct measurement
from the lightcurve based on the analysis of Kasen _et al._ (2017). The other
two are inferred from ejecta parameters based on the analyses of Kasen _et
al._ (2017) and Bulla (2019) and are given by $d_{L}^{\rm
EM}=37^{+8}_{-7}\,\text{Mpc}$ and $d_{L}^{\rm EM}=40^{+9}_{-8}\,\text{Mpc}$,
respectively. These three luminosity distance measurements correspond to
$\eta_{0}=42^{+79}_{-55}$, $18^{+27}_{-29}$, and $10^{+26}_{-28}$,
respectively. Unsurprisingly, the estimate of $\eta_{0}$ is consistent with 0.
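These central values can be approximately reproduced from the quoted distances. The sketch below assumes the low-redshift linearization $d_{L}^{\rm GW}/d_{L}^{\rm EM}\approx 1+\eta_{0}z$ of our parameterization and a host redshift of $z\approx 0.0098$ for GW170817; both are assumptions of this illustration, so small differences from the quoted values are expected.
```python
# The exact parameterization and the host redshift are assumptions here.
z = 0.0098                       # approximate redshift of GW170817
d_gw = 43.8                      # GW luminosity distance [Mpc]
for d_em in (31.0, 37.0, 40.0):  # the three EM distance estimates [Mpc]
    eta0 = (d_gw / d_em - 1.0) / z   # d_L^GW / d_L^EM ~ 1 + eta_0 * z
    print(f"eta_0 ~ {eta0:.0f}")     # ~42, ~19, ~10 vs. the quoted 42, 18, 10
```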
The above measurements of $\eta_{0}$ give the temporal variation of
$G/\alpha(z)$ [see Eq. (29)] at the current epoch, in units of
$10^{-9}\,\rm yr^{-1}$, to be
$\beta=-6^{+8}_{-12},\;-3^{+4}_{-4},\;-1^{+4}_{-4},$ (40)
respectively. A Hubble constant value of $H_{0}=73.04\pm 1.04\,\rm km/s/Mpc$
as reported by Riess _et al._ (2021) is used for this calculation.
## VI Conclusion
In this paper, we focused on constraining the ratio of gravitational-wave and
electromagnetic-wave luminosity distances and, consequently, the variation in
the ratio of the gravitational and fine structure constant, using coincident
gravitational and electromagnetic wave observations for a class of scalar-
tensor theories known as runaway dilaton models. These theories have
multiplicative couplings of generic scalar fields to gravitational and
electromagnetic sectors. To constrain the modified propagation in such
theories without having to fit for other cosmological parameters, as is done
while using a luminosity distance-redshift relation, a second distance measure
is necessary. We used a spatially coincident supernova as the EM probe to
provide the complementary EM luminosity distance estimate.
We find that the planned upgrade to the current second-generation ground-based
detector network (2G+) can constrain the parameter modeling the ratio of the
EM and GW luminosity distance to below $|\eta_{0}|\lesssim 0.2,$ while the
proposed improvement of the 2G+ network to Voy+ sensitivity would be able to
place an upper limit of $|\eta_{0}|\lesssim 0.05$, at the end of an 8-year
effective observing cycle if no deviation from the GR value of $\eta_{0}=0$ is
measured. The proposed next-generation ground-based detector network (ECC) can
further improve the constraints to $|\eta_{0}|\lesssim 0.01$. We see that the
constraints using this method for the sub-class of theories that modify the EM
sector alone would be competitive with most of the current EM probes in the
literature Hees _et al._ (2014) for the 2G+ network. The Voy+ network would
improve these estimates by a factor of 4 and the ECC network improves the Voy+
network constraints by a factor of 5. We also showed how these numbers
translate into the temporal variation of the fundamental constants.
We further made use of recent progress in kilonova light-curve modeling, and
the EM luminosity distance estimates derived from it, to place the first
constraints on our $\eta_{0}$ parameter from GW170817. As expected, we find
consistency with GR.
We expect a number of BNS merger observations with counterparts in the fourth
observing run of aLIGO/aVirgo/KAGRA and plan to use them to constrain this
class of theories.
###### Acknowledgements.
AD was supported by NSF grant PHY-2012083 and BSS was supported in part by NSF
grants PHY-1836779, PHY-2012083 and AST-2006384.
## References
* Maggiore (2007) M. Maggiore, _Gravitational Waves. Vol. 1: Theory and Experiments_ , Oxford Master Series in Physics (Oxford University Press, 2007).
* Fujii and Maeda (2007) Y. Fujii and K. Maeda, _The scalar-tensor theory of gravitation_, Cambridge Monographs on Mathematical Physics (Cambridge University Press, 2007).
* Bertolami _et al._ (2007) O. Bertolami, C. G. Boehmer, T. Harko, and F. S. N. Lobo, Phys. Rev. D 75, 104016 (2007), arXiv:0704.1733 [gr-qc] .
* Bertolami _et al._ (2008) O. Bertolami, F. S. N. Lobo, and J. Paramos, Phys. Rev. D 78, 064036 (2008), arXiv:0806.4434 [gr-qc] .
* Bertolami and Paramos (2008) O. Bertolami and J. Paramos, Class. Quant. Grav. 25, 245017 (2008), arXiv:0805.1241 [gr-qc] .
* Sotiriou and Faraoni (2008) T. P. Sotiriou and V. Faraoni, Class. Quant. Grav. 25, 205002 (2008), arXiv:0805.1249 [gr-qc] .
* De Felice and Tsujikawa (2010) A. De Felice and S. Tsujikawa, Living Rev. Rel. 13, 3 (2010), arXiv:1002.4928 [gr-qc] .
* Harko _et al._ (2013) T. Harko, F. S. N. Lobo, and O. Minazzoli, Phys. Rev. D 87, 047501 (2013), arXiv:1210.4218 [gr-qc] .
* Das and Banerjee (2008) S. Das and N. Banerjee, Phys. Rev. D 78, 043512 (2008), arXiv:0803.3936 [gr-qc] .
* Bisabr (2012) Y. Bisabr, Phys. Rev. D 86, 127503 (2012), arXiv:1212.2709 [gr-qc] .
* Moffat and Toth (2012) J. W. Moffat and V. T. Toth, Int. J. Mod. Phys. D 21, 1250084 (2012), arXiv:1001.1564 [gr-qc] .
* Shiralilou _et al._ (2022) B. Shiralilou, T. Hinderer, S. M. Nissanke, N. Ortiz, and H. Witek, Class. Quant. Grav. 39, 035002 (2022), arXiv:2105.13972 [gr-qc] .
* Rovelli and Smolin (1994) C. Rovelli and L. Smolin, Phys. Rev. Lett. 72, 446 (1994), arXiv:gr-qc/9308002 .
* Domagala _et al._ (2010) M. Domagala, K. Giesel, W. Kaminski, and J. Lewandowski, Phys. Rev. D 82, 104038 (2010), arXiv:1009.2445 [gr-qc] .
* Green _et al._ (1988) M. B. Green, J. H. Schwarz, and E. Witten, _SUPERSTRING THEORY. VOL. 2: LOOP AMPLITUDES, ANOMALIES AND PHENOMENOLOGY_ (Cambridge University Press, 1988).
* Uzan (2011) J.-P. Uzan, Living Rev. Rel. 14, 2 (2011), arXiv:1009.5514 [astro-ph.CO] .
* Damour and Polyakov (1994a) T. Damour and A. M. Polyakov, Gen. Rel. Grav. 26, 1171 (1994a), arXiv:gr-qc/9411069 .
* Damour and Polyakov (1994b) T. Damour and A. M. Polyakov, Nucl. Phys. B 423, 532 (1994b), arXiv:hep-th/9401069 .
* Gasperini _et al._ (2002) M. Gasperini, F. Piazza, and G. Veneziano, Phys. Rev. D 65, 023508 (2002), arXiv:gr-qc/0108016 .
* Minazzoli and Hees (2013) O. Minazzoli and A. Hees, Phys. Rev. D 88, 041504 (2013), arXiv:1308.2770 [gr-qc] .
* Ratra and Peebles (1988) B. Ratra and P. J. E. Peebles, Phys. Rev. D 37, 3406 (1988).
* Caldwell _et al._ (1998) R. R. Caldwell, R. Dave, and P. J. Steinhardt, Phys. Rev. Lett. 80, 1582 (1998), arXiv:astro-ph/9708069 .
* Peebles and Ratra (2003) P. J. E. Peebles and B. Ratra, Rev. Mod. Phys. 75, 559 (2003), arXiv:astro-ph/0207347 .
* Guth (1981) A. H. Guth, Phys. Rev. D 23, 347 (1981).
* Linde (1982) A. D. Linde, Phys. Lett. B 108, 389 (1982).
* Albrecht and Steinhardt (1982) A. Albrecht and P. J. Steinhardt, Phys. Rev. Lett. 48, 1220 (1982).
* Linde (2008) A. D. Linde, Lect. Notes Phys. 738, 1 (2008), arXiv:0705.0164 [hep-th] .
* Bekenstein (1982) J. D. Bekenstein, Phys. Rev. D 25, 1527 (1982).
* Sandvik _et al._ (2002) H. B. Sandvik, J. D. Barrow, and J. Magueijo, Phys. Rev. Lett. 88, 031302 (2002), arXiv:astro-ph/0107512 .
* Dvali and Zaldarriaga (2002) G. R. Dvali and M. Zaldarriaga, Phys. Rev. Lett. 88, 091303 (2002), arXiv:hep-ph/0108217 .
* Olive and Pospelov (2008) K. A. Olive and M. Pospelov, Phys. Rev. D 77, 043524 (2008), arXiv:0709.3825 [hep-ph] .
* Damour (2012) T. Damour, Class. Quant. Grav. 29, 184001 (2012), arXiv:1202.6311 [gr-qc] .
* Armendariz-Picon (2002) C. Armendariz-Picon, Phys. Rev. D 66, 064008 (2002), arXiv:astro-ph/0205187 .
* Adelberger _et al._ (2003) E. G. Adelberger, B. R. Heckel, and A. E. Nelson, Ann. Rev. Nucl. Part. Sci. 53, 77 (2003), arXiv:hep-ph/0307284 .
* Adelberger _et al._ (2007) E. G. Adelberger, B. R. Heckel, S. A. Hoedl, C. D. Hoyle, D. J. Kapner, and A. Upadhye, Phys. Rev. Lett. 98, 131104 (2007), arXiv:hep-ph/0611223 .
* Adelberger _et al._ (2009) E. G. Adelberger, J. H. Gundlach, B. R. Heckel, S. Hoedl, and S. Schlamminger, Prog. Part. Nucl. Phys. 62, 102 (2009).
* Kapner _et al._ (2007) D. J. Kapner, T. S. Cook, E. G. Adelberger, J. H. Gundlach, B. R. Heckel, C. D. Hoyle, and H. E. Swanson, Phys. Rev. Lett. 98, 021101 (2007), arXiv:hep-ph/0611184 .
* Will (2006) C. M. Will, Living Rev. Rel. 9, 3 (2006), arXiv:gr-qc/0510072 .
* Rosenband _et al._ (2008) T. Rosenband, D. B. Hume, P. O. Schmidt, C. W. Chou, A. Brusch, L. Lorini, W. H. Oskay, R. E. Drullinger, T. M. Fortier, J. E. Stalnaker, S. A. Diddams, W. C. Swann, N. R. Newbury, W. M. Itano, D. J. Wineland, and J. C. Bergquist, Science 319, 1808 (2008), https://science.sciencemag.org/content/319/5871/1808.full.pdf .
* Williams _et al._ (2012) J. G. Williams, S. G. Turyshev, and D. Boggs, Class. Quant. Grav. 29, 184004 (2012), arXiv:1203.2150 [gr-qc] .
* Tseytlin and Vafa (1992) A. A. Tseytlin and C. Vafa, Nucl. Phys. B 372, 443 (1992), arXiv:hep-th/9109048 .
* Damour and Vilenkin (1996) T. Damour and A. Vilenkin, Phys. Rev. D 53, 2981 (1996), arXiv:hep-th/9503149 .
* Damour _et al._ (2002) T. Damour, F. Piazza, and G. Veneziano, Phys. Rev. Lett. 89, 081601 (2002), arXiv:gr-qc/0204094 .
* Jarv _et al._ (2008) L. Jarv, P. Kuusk, and M. Saal, Phys. Rev. D 78, 083530 (2008), arXiv:0807.2159 [gr-qc] .
* Damour and Nordtvedt (1993) T. Damour and K. Nordtvedt, Phys. Rev. Lett. 70, 2217 (1993).
* Khoury (2010) J. Khoury, (2010), arXiv:1011.5909 [astro-ph.CO] .
* Khoury and Weltman (2004a) J. Khoury and A. Weltman, Phys. Rev. Lett. 93, 171104 (2004a), arXiv:astro-ph/0309300 .
* Khoury and Weltman (2004b) J. Khoury and A. Weltman, Phys. Rev. D 69, 044026 (2004b), arXiv:astro-ph/0309411 .
* Hees and Fuzfa (2012) A. Hees and A. Fuzfa, Phys. Rev. D 85, 103005 (2012), arXiv:1111.4784 [gr-qc] .
* Hinterbichler and Khoury (2010) K. Hinterbichler and J. Khoury, Phys. Rev. Lett. 104, 231301 (2010), arXiv:1001.4525 [hep-th] .
* Hinterbichler _et al._ (2011) K. Hinterbichler, J. Khoury, A. Levy, and A. Matas, Phys. Rev. D 84, 103521 (2011), arXiv:1107.2112 [astro-ph.CO] .
* Webb _et al._ (2001) J. K. Webb, M. T. Murphy, V. V. Flambaum, V. A. Dzuba, J. D. Barrow, C. W. Churchill, J. X. Prochaska, and A. M. Wolfe, Phys. Rev. Lett. 87, 091301 (2001), arXiv:astro-ph/0012539 .
* King _et al._ (2012) J. A. King, J. K. Webb, M. T. Murphy, V. V. Flambaum, R. F. Carswell, M. B. Bainbridge, M. R. Wilczynska, and F. E. Koch, Mon. Not. Roy. Astron. Soc. 422, 3370 (2012), arXiv:1202.4758 [astro-ph.CO] .
* Webb _et al._ (2011) J. K. Webb, J. A. King, M. T. Murphy, V. V. Flambaum, R. F. Carswell, and M. B. Bainbridge, Phys. Rev. Lett. 107, 191101 (2011), arXiv:1008.3907 [astro-ph.CO] .
* Holanda _et al._ (2016) R. F. L. Holanda, S. J. Landau, J. S. Alcaniz, I. E. Sanchez G., and V. C. Busti, JCAP 05, 047 (2016), arXiv:1510.07240 [astro-ph.CO] .
* Khatri and Wandelt (2007) R. Khatri and B. D. Wandelt, Phys. Rev. Lett. 98, 111301 (2007), arXiv:astro-ph/0701752 .
* Riess _et al._ (2016) A. G. Riess _et al._ , Astrophys. J. 826, 56 (2016), arXiv:1604.01424 [astro-ph.CO] .
* Cao and Liang (2011) S. Cao and N. Liang, Research in Astronomy and Astrophysics 11, 1199 (2011), arXiv:1104.4942 [astro-ph.CO] .
* Hees _et al._ (2014) A. Hees, O. Minazzoli, and J. Larena, Phys. Rev. D 90, 124064 (2014), arXiv:1406.6187 [astro-ph.CO] .
* Abbott _et al._ (2017a) B. P. Abbott _et al._ (LIGO Scientific, Virgo), Phys. Rev. Lett. 119, 161101 (2017a), arXiv:1710.05832 [gr-qc] .
* Abbott _et al._ (2017b) B. P. Abbott _et al._ (LIGO Scientific, Virgo, Fermi GBM, INTEGRAL, IceCube, AstroSat Cadmium Zinc Telluride Imager Team, IPN, Insight-Hxmt, ANTARES, Swift, AGILE Team, 1M2H Team, Dark Energy Camera GW-EM, DES, DLT40, GRAWITA, Fermi-LAT, ATCA, ASKAP, Las Cumbres Observatory Group, OzGrav, DWF (Deeper Wider Faster Program), AST3, CAASTRO, VINROUGE, MASTER, J-GEM, GROWTH, JAGWAR, CaltechNRAO, TTU-NRAO, NuSTAR, Pan-STARRS, MAXI Team, TZAC Consortium, KU, Nordic Optical Telescope, ePESSTO, GROND, Texas Tech University, SALT Group, TOROS, BOOTES, MWA, CALET, IKI-GW Follow-up, H.E.S.S., LOFAR, LWA, HAWC, Pierre Auger, ALMA, Euro VLBI Team, Pi of Sky, Chandra Team at McGill University, DFN, ATLAS Telescopes, High Time Resolution Universe Survey, RIMAS, RATIR, SKA South Africa/MeerKAT), Astrophys. J. Lett. 848, L12 (2017b), arXiv:1710.05833 [astro-ph.HE] .
* Fanizza _et al._ (2020) G. Fanizza, G. Franchini, M. Gasperini, and L. Tedesco, Gen. Rel. Grav. 52, 111 (2020), arXiv:2010.06569 [gr-qc] .
* Finke _et al._ (2021) A. Finke, S. Foffa, F. Iacovelli, M. Maggiore, and M. Mancarella, (2021), arXiv:2101.12660 [astro-ph.CO] .
* Mukherjee _et al._ (2020) S. Mukherjee, B. D. Wandelt, and J. Silk, (2020), 10.1093/mnras/stab001, arXiv:2012.15316 [astro-ph.CO] .
* Minazzoli and Hees (2014) O. Minazzoli and A. Hees, Phys. Rev. D 90, 023017 (2014), arXiv:1404.4266 [gr-qc] .
* Reitze _et al._ (2019) D. Reitze _et al._ , Bull. Am. Astron. Soc. 51, 035 (2019), arXiv:1907.04833 [astro-ph.IM] .
* Abbott _et al._ (2018) B. P. Abbott _et al._ (KAGRA, LIGO Scientific, Virgo, VIRGO), Living Rev. Rel. 21, 3 (2018), arXiv:1304.0670 [gr-qc] .
* Acernese _et al._ (2015a) F. Acernese _et al._ (VIRGO), Class. Quant. Grav. 32, 024001 (2015a), arXiv:1408.3978 [gr-qc] .
* Aso _et al._ (2013) Y. Aso, Y. Michimura, K. Somiya, M. Ando, O. Miyakawa, T. Sekiguchi, D. Tatsumi, and H. Yamamoto (KAGRA), Phys. Rev. D 88, 043007 (2013), arXiv:1306.6747 [gr-qc] .
* Somiya (2012) K. Somiya (KAGRA), Class. Quant. Grav. 29, 124007 (2012), arXiv:1111.7185 [gr-qc] .
* Unnikrishnan (2013) C. S. Unnikrishnan, Int. J. Mod. Phys. D 22, 1341010 (2013), arXiv:1510.06059 [physics.ins-det] .
* Saleem _et al._ (2022) M. Saleem _et al._ , Class. Quant. Grav. 39, 025004 (2022), arXiv:2105.01716 [gr-qc] .
* Adhikari _et al._ (2020a) R. X. Adhikari _et al._ (LIGO), Class. Quant. Grav. 37, 165003 (2020a), arXiv:2001.11173 [astro-ph.IM] .
* Evans _et al._ (2021) M. Evans _et al._ , (2021), arXiv:2109.09882 [astro-ph.IM] .
* Punturo _et al._ (2010a) M. Punturo _et al._ , Class. Quant. Grav. 27, 194002 (2010a).
* Punturo _et al._ (2010b) M. Punturo _et al._ , Class. Quant. Grav. 27, 084007 (2010b).
* Hild _et al._ (2011) S. Hild _et al._ , Class. Quant. Grav. 28, 094013 (2011), arXiv:1012.0908 [gr-qc] .
* Holanda _et al._ (2012) R. F. L. Holanda, R. S. Gonçalves, and J. S. Alcaniz, JCAP 06, 022 (2012), arXiv:1201.2378 [astro-ph.CO] .
* Yunes _et al._ (2010) N. Yunes, F. Pretorius, and D. Spergel, Phys. Rev. D 81, 064018 (2010), arXiv:0912.2724 [gr-qc] .
* Yunes _et al._ (2016) N. Yunes, K. Yagi, and F. Pretorius, Phys. Rev. D 94, 084002 (2016), arXiv:1603.08955 [gr-qc] .
* Vijaykumar _et al._ (2021) A. Vijaykumar, S. J. Kapadia, and P. Ajith, Phys. Rev. Lett. 126, 141104 (2021), arXiv:2003.12832 [gr-qc] .
* Aasi _et al._ (2015) J. Aasi _et al._ (LIGO Scientific), Class. Quant. Grav. 32, 074001 (2015), arXiv:1411.4547 [gr-qc] .
* Acernese _et al._ (2015b) F. Acernese _et al._ (VIRGO), Class. Quant. Grav. 32, 024001 (2015b), arXiv:1408.3978 [gr-qc] .
* Akutsu _et al._ (2019) T. Akutsu _et al._ (KAGRA), Nature Astron. 3, 35 (2019), arXiv:1811.08079 [gr-qc] .
* Adhikari _et al._ (2020b) R. X. Adhikari _et al._ (LIGO), Class. Quant. Grav. 37, 165003 (2020b), arXiv:2001.11173 [astro-ph.IM] .
* Abbott _et al._ (2020) R. Abbott _et al._ (LIGO Scientific, Virgo), (2020), arXiv:2010.14533 [astro-ph.HE] .
* Belgacem _et al._ (2019) E. Belgacem, Y. Dirian, S. Foffa, E. J. Howell, M. Maggiore, and T. Regimbau, JCAP 08, 015 (2019), arXiv:1907.01487 [astro-ph.CO] .
* Vangioni _et al._ (2015) E. Vangioni, K. A. Olive, T. Prestegard, J. Silk, P. Petitjean, and V. Mandic, Mon. Not. Roy. Astron. Soc. 447, 2575 (2015), arXiv:1409.2462 [astro-ph.GA] .
* Abbott _et al._ (2021) R. Abbott _et al._ (LIGO Scientific, VIRGO, KAGRA), (2021), arXiv:2111.03634 [astro-ph.HE] .
* Kalogera _et al._ (2021) V. Kalogera _et al._ , (2021), arXiv:2111.06990 [gr-qc] .
* Howell _et al._ (2018) E. J. Howell, K. Ackley, A. Rowlinson, and D. Coward, (2018), 10.1093/mnras/stz455, arXiv:1811.09168 [astro-ph.HE] .
* Wanderman and Piran (2015) D. Wanderman and T. Piran, Mon. Not. Roy. Astron. Soc. 448, 3026 (2015), arXiv:1405.5878 [astro-ph.HE] .
* Burns _et al._ (2016) E. Burns, V. Connaughton, B.-B. Zhang, A. Lien, M. S. Briggs, A. Goldstein, V. Pelassa, and E. Troja, Astrophys. J. 818, 110 (2016), arXiv:1512.00923 [astro-ph.HE] .
* Gupta _et al._ (2019) A. Gupta, D. Fox, B. S. Sathyaprakash, and B. F. Schutz, The Astrophysical Journal 886, 71 (2019).
* Li _et al._ (2011) W. Li, R. Chornock, J. Leaman, A. V. Filippenko, D. Poznanski, X. Wang, M. Ganeshalingam, and F. Mannucci, MNRAS 412, 1473 (2011), arXiv:1006.4613 [astro-ph.SR] .
* Abbott _et al._ (2019) B. P. Abbott _et al._ (LIGO Scientific, Virgo), Phys. Rev. X 9, 031040 (2019), arXiv:1811.12907 [astro-ph.HE] .
* Friedmann and Maoz (2018) M. Friedmann and D. Maoz, MNRAS 479, 3563 (2018), arXiv:1803.04421 [astro-ph.GA] .
* Girardi _et al._ (2002) M. Girardi, P. Manzato, M. Mezzetti, G. Giuricin, and F. Limboz, Astrophys. J. 569, 720 (2002), astro-ph/0112534 .
* Aghanim _et al._ (2020) N. Aghanim _et al._ (Planck), Astron. Astrophys. 641, A6 (2020), arXiv:1807.06209 [astro-ph.CO] .
* Borhanian (2020) S. Borhanian, (2020), arXiv:2010.15202 [gr-qc] .
* Cutler and Flanagan (1994) C. Cutler and E. E. Flanagan, Phys. Rev. D 49, 2658 (1994), arXiv:gr-qc/9402014 .
* Poisson and Will (1995) E. Poisson and C. M. Will, Phys. Rev. D 52, 848 (1995), arXiv:gr-qc/9502040 .
* Marković (1993) D. Marković, Phys. Rev. D 48, 4738 (1993).
* Amanullah _et al._ (2010) R. Amanullah, C. Lidman, D. Rubin, G. Aldering, P. Astier, K. Barbary, M. S. Burns, A. Conley, K. S. Dawson, S. E. Deustua, M. Doi, S. Fabbro, L. Faccioli, H. K. Fakhouri, G. Folatelli, A. S. Fruchter, H. Furusawa, G. Garavini, G. Goldhaber, A. Goobar, D. E. Groom, I. Hook, D. A. Howell, N. Kashikawa, A. G. Kim, R. A. Knop, M. Kowalski, E. Linder, J. Meyers, T. Morokuma, S. Nobili, J. Nordin, P. E. Nugent, L. Östman, R. Pain, N. Panagia, S. Perlmutter, J. Raux, P. Ruiz-Lapuente, A. L. Spadafora, M. Strovink, N. Suzuki, L. Wang, W. M. Wood-Vasey, N. Yasuda, and T. Supernova Cosmology Project, Astrophys. J. 716, 712 (2010), arXiv:1004.1711 [astro-ph.CO] .
* (LSS) LSST Science Book, Sec. 11.5: Constraining the Dark Energy Equation of State, https://www.lsst.org/sites/default/files/docs/sciencebook/SB_11.pdf
* Riess _et al._ (2021) A. G. Riess _et al._ , (2021), arXiv:2112.04510 [astro-ph.CO] .
* Kashyap _et al._ (2019) R. Kashyap, G. Raman, and P. Ajith, Astrophys. J. Lett. 886, L19 (2019), arXiv:1908.02168 [astro-ph.SR] .
* Coughlin _et al._ (2020) M. W. Coughlin, T. Dietrich, J. Heinzel, N. Khetan, S. Antier, M. Bulla, N. Christensen, D. A. Coulter, and R. J. Foley, Phys. Rev. Res. 2, 022006 (2020), arXiv:1908.00889 [astro-ph.HE] .
* Abbott _et al._ (2017c) B. P. Abbott _et al._ (LIGO Scientific, Virgo, 1M2H, Dark Energy Camera GW-E, DES, DLT40, Las Cumbres Observatory, VINROUGE, MASTER), Nature 551, 85 (2017c), arXiv:1710.05835 [astro-ph.CO] .
* Kasen _et al._ (2017) D. Kasen, B. Metzger, J. Barnes, E. Quataert, and E. Ramirez-Ruiz, Nature 551, 80 (2017), arXiv:1710.05463 [astro-ph.HE] .
* Bulla (2019) M. Bulla, Mon. Not. Roy. Astron. Soc. 489, 5037 (2019), arXiv:1906.04205 [astro-ph.HE] .
# How to design an AI ethics board
Jonas Schuett
Centre for the Governance of AI
<EMAIL_ADDRESS>
Anka Reuel∗
Stanford University
<EMAIL_ADDRESS>
Alexis Carlier
Centre for the Governance of AI
<EMAIL_ADDRESS>
∗Equal contribution
###### Abstract
Organizations that develop and deploy artificial intelligence (AI) systems
need to take measures to reduce the associated risks. In this paper, we
examine how AI companies could design an AI ethics board in a way that reduces
risks from AI. We identify five high-level design choices: (1) What
responsibilities should the board have? (2) What should its legal structure
be? (3) Who should sit on the board? (4) How should it make decisions and
should its decisions be binding? (5) What resources does it need? We break
down each of these questions into more specific sub-questions, list options,
and discuss how different design choices affect the board’s ability to reduce
risks from AI. Several failures have shown that designing an AI ethics board
can be challenging. This paper provides a toolbox that can help AI companies
to overcome these challenges.
## 1 Introduction
It is becoming increasingly clear that state-of-the-art artificial
intelligence (AI) systems pose significant societal risks. AI systems used for
drug
discovery could be misused for the design of biochemical weapons [116]. A
failure of AI systems used to control nuclear power plants or other critical
infrastructure could also have devastating consequences [35]. Another concern
is that, as models become larger and larger, certain dangerous capabilities
might emerge at some point. Scholars and practitioners are increasingly
worried about power-seeking behavior, situational awareness, and the ability
to persuade people [25, 79, 82]. Organizations that develop and deploy AI
systems need to take measures to reduce these risks to an acceptable level. In
this paper, we examine how AI companies could design an AI ethics board in a
way that reduces risks from AI. By “ethics board”, we mean a collective body
intended to promote an organization’s ethical behavior.
Some AI companies already have an AI ethics board. For example, Meta’s
Oversight Board makes binding decisions about the content on Facebook and
Instagram [86, 58, 121]. Microsoft’s AI, Ethics and Effects in Engineering and
Research (AETHER) Committee advises their leadership “on the challenges and
opportunities presented by AI innovations” [68]. DeepMind’s Institutional
Review Committee (IRC) oversees their human rights policy [34] and has already
played a key role in the AlphaFold release [57]. These examples show that AI
ethics boards are of practical relevance.
But there have also been a number of failures. Google’s Advanced Technology
External Advisory Council (ATEAC) faced significant resistance over the
inclusion of controversial members. It was shut down only one week after its
announcement [94, 5, 42, 118]. Axon’s AI and Policing Technologies Ethics
Board was effectively discontinued in June 2022 after three years of
operations [109]. Nine out of eleven members resigned after Axon announced
plans to develop taser-equipped drones to be used in schools without
consulting the board first [39]. (In late 2022, Axon announced their new
ethics board: the Ethics & Equity Advisory Council [EEAC], which gives
feedback on a limited number of products “through a racial equity and ethics
lens” [11].) These cases show that designing an AI ethics board can be
challenging. They also highlight the need for more research.
Although there has been some research on AI ethics boards, the topic remains
understudied. The most important work for our purposes is a whitepaper by
Accenture [101]. They discuss key benefits of AI ethics boards and identify
key design questions. However, their discussion lacks both breadth and depth.
They discuss only a handful of design considerations and do not go into
detail. They also do not focus on leading AI companies and risk reduction.
Besides that, there is some literature on the purpose [55, 115, 73] and
practical challenges of AI ethics boards [93, 45]. There are also several case
studies of existing boards, including Meta’s Oversight Board [121] and
Microsoft’s AETHER Committee [78]. And finally, there is some discussion of
the role of AI ethics boards in academic research [14, 112]. Taken together,
there seem to be at least two gaps in the literature. First, there is only
limited work on the practical question of how to design an AI ethics board.
Second, there is no discussion of how specific design considerations can help
to reduce risks from AI. In light of these gaps, the paper seeks to answer two
research questions (RQs):
* •
RQ1: What are the key design choices that AI companies have to make when
setting up an AI ethics board?
* •
RQ2: How could different design choices affect the board’s ability to reduce
risks from AI?
The paper has two areas of focus. First, it focuses on organizations that
develop state-of-the-art AI systems. This includes medium-sized research labs
(e.g. OpenAI, DeepMind, and Anthropic) and big tech companies (e.g. Microsoft
and Google). We use the term “AI company” or “company” to refer to them.
Although we do not mention other types of companies (e.g. hardware companies),
we expect that they might also benefit from our analysis. Second, the paper
focuses on the board’s ability to reduce risks (see RQ2). By “risk”, we mean
the “combination of the probability of occurrence of harm and the severity of
that harm” [52]. (But note that there are other risk definitions [51]). In
terms of severity, we focus on adverse effects on large groups of people and
society as a whole, especially threats to their lives and physical integrity.
We are less interested in financial losses and risks to organizations
themselves (e.g. litigation or reputation risks). In terms of likelihood, we
also consider low-probability, high-impact risks, sometimes referred to as
“black swans” [113, 9, 60]. The two main sources of harm (“hazards”) we
consider are accidents [1, 7] and cases of misuse [22, 41, 2].
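To make this working definition concrete, one simple operationalization scores each hazard on a likelihood-severity grid; the scales, labels, and multiplicative rule below are illustrative assumptions of this sketch, not a methodology from the cited standards.
```python
# Toy operationalization of "probability of harm x severity of harm"; the
# scales, labels, and multiplicative rule are illustrative assumptions.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "major": 2, "catastrophic": 3}

def risk_score(likelihood: str, severity: str) -> int:
    """Score a hazard on a simple 3x3 risk matrix ('heatmap')."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

print(risk_score("rare", "catastrophic"))  # low-probability, high-impact ("black swan")
print(risk_score("likely", "major"))       # a candidate for priority mitigation
```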
In the following, we consider five high-level design choices: What
responsibilities should the board have (Section 2)? What should its legal
structure be (Section 3)? Who should sit on the board (Section 4)? How should
it make decisions and should its decisions be binding (Section 5)? What
resources does it need (Section 6)? We break down each of these questions into
more specific sub-questions, list options, and discuss how they could affect
the board’s ability to reduce risks from AI. The paper concludes with a
summary of the most important design considerations and suggestions for
further research (Section 7).
## 2 Responsibilities
What responsibilities should the board have? We use the term “responsibility”
to refer to the board’s purpose (what it aims to achieve), its rights (what it
can do), and duties (what it must do). The board’s responsibilities are
typically specified in its charter or bylaws. In the following, we focus on
responsibilities that could help to reduce risks from AI (see RQ2). The ethics
board could advise the board of directors (Section 2.1), oversee model
releases and publications (Section 2.2), support risk assessments (Section
2.3), review the company’s risk management practices (Section 2.4), interpret
AI ethics principles (Section 2.5), or serve as a contact point for
whistleblowers (Section 2.6). Note that these responsibilities are neither
mutually exclusive nor collectively exhaustive. The board could also have more
than one responsibility.
### 2.1 Advising the board of directors
The board of directors plays a key role in the corporate governance of AI
companies [28]. It sets the company’s strategic priorities, is responsible for
risk oversight, and has significant influence over management (e.g. it can
replace senior executives). But since many board members only work part-time
and rely on information provided to them by management, they need support from
an independent ally in the company [33]. Internal audit can be this ally, but
the ethics board could serve as an additional layer of assurance [102].
#### Options.
The ethics board could provide strategic advice on various topics. It could
advocate against high-risk decisions and call for a more prudent course of
action.
* •
Research priorities. Most AI companies have an overarching research agenda
(e.g. DeepMind’s focus on reinforcement learning [108] or Anthropic’s focus on
empirical safety research [3]). This agenda influences what projects the
company works on. The ethics board could try to influence that agenda. It
could advocate for increasing focus on safety and alignment research [1, 47,
79]. More generally, it could caution against advancing capabilities faster
than safety measures. The underlying principle is called “differential
technological development” [19, 84, 100].
* •
Commercialization strategy. The ethics board could also advise on the
company’s commercialization strategy. On the one hand, it is understandable
that AI companies want to monetize their systems (e.g. to pay increasing costs
for compute [105]). On the other hand, commercial pressure might incentivize
companies to cut corners on safety [6, 76]. For example, Google famously
announced that it would “recalibrate” the level of risk it is willing to take in response
to OpenAI’s release of ChatGPT [43]. It has also been reported that
disagreements over OpenAI’s commercialization strategy were the reason why key
employees left the company to start Anthropic [119].
* •
Strategic partnerships. AI labs might enter into strategic partnerships with
profit-oriented companies (see e.g. the extended partnership between Microsoft
and OpenAI [67]) or with the military (see e.g. “Project Maven”, Google’s
collaboration with the U.S. Department of Defense [29]). Although such
partnerships are not inherently bad, they could contribute to an increase of
risk (e.g. if they lead to an equipment of nuclear weapons with AI technology
[64]).
* •
Fundraising and M&A transactions. AI companies frequently need to bring in new
investors. For example, in January 2023, it was reported that OpenAI raised
$10B from Microsoft [48, 83]. But if new investors care more about profits,
this could gradually shift the company’s focus away from safety and ethics
towards profit maximization. The same might happen if AI companies merge or
get acquired. The underlying phenomenon is called “mission drift” [44].
#### Discussion.
How much would advising the board of directors reduce risk? This depends on
many different factors. It would be easier if the ethics board has a direct
communication channel to the board of directors, ideally to a dedicated risk
committee. It would also be easier if the board of directors is able to do
something about risks. They need risk-related expertise and governance
structures to exercise their power (e.g. a chief risk officer [CRO] as a
single point of accountability). But the board of directors also needs to take
risks seriously and be willing to do something about them. This will often
require a good relationship between the ethics board and the board of
directors. Inversely, it would be harder for the ethics board to reduce risk
if the board of directors mainly cares about other things (e.g. profits or
prestige), especially since the ethics board is usually not able to force the
board of directors to do something.
### 2.2 Overseeing model releases and publications
Many risks are caused by accidents [1, 7] or the misuse of specific AI systems
[22, 41, 2]. In both cases, the deployment decision is a decisive moment.
Ideally, companies should discover potential failure modes and vulnerabilities
before they deploy a system, and stop the deployment process if they cannot
reduce risks to an acceptable level. But not all risks are caused by the
deployment of individual models. Some risks also stem from the publication of
research, as research findings can be misused [116, 22, 41, 2, 8, 107, 21].
The dissemination of potentially harmful information, including research
findings, is referred to as an “infohazard” [20, 62]. Publications can also fuel harmful
narratives. For example, it has been argued that the “arms race” rhetoric is
highly problematic [26].
#### Options.
An ethics board could try to reduce these risks by creating a release strategy
[111, 110, 81] and norms for the responsible publication of research [32, 8,
106, 91]. For example, the release strategy could establish “structured
access” as the norm for deploying powerful AI systems [106]. Instead of open-
sourcing new models, companies might want to deploy them via an application
programming interface (API), which would allow them to conduct know-your-
customer (KYC) screenings and restrict access if necessary, while allowing the
world to use and study the model. The release strategy could also specify
instances where a “staged release” seems adequate. Staged release refers to
the strategy of releasing a smaller model first and only releasing larger
models if no meaningful cases of misuse are observed. OpenAI coined the term
and championed the approach when releasing GPT-2 [111]. But note that the approach
has also been criticized [32]. The ethics board could also create an
infohazard policy. The AI research organization Conjecture has published its
policy [62]. We expect most AI companies to have similar policies, but do not
make them public. In addition to that, the board could oversee specific model
releases and publications (not just the abstract strategies and policies). It
could serve as an institutional review board (IRB) that cares about safety and
ethics more generally, not just the protection of human subjects [14, 112]. In
particular, it could review the risks of a model or publication itself, do a
sanity check of existing reviews, or commission an external review (Section
2.3).
#### Discussion.
How much would this reduce risk? Among other things, this depends on whether
board members have the necessary expertise (Section 4.4), whether the board’s
decisions are binding (Section 5.2), and whether they have the necessary
resources (Section 6). The decision to release a model or publish research is
one of the most important points of intervention for governance mechanisms
that are intended to reduce risks. An additional attempt to steer such
decisions in a good direction therefore seems desirable.
### 2.3 Supporting risk assessments
By “risk assessment”, we mean the identification, analysis, and evaluation of
risks [52, 51]. Assessing the risks of state-of-the-art AI systems is
extremely difficult: (1) The risk landscape is highly complex and evolves
rapidly. For example, the increasing use of so-called “foundation models” [18]
might lead to new diffuse and systemic risks (e.g. threats to epistemic
security [104]). (2) Defining normative thresholds is extremely difficult:
What level of risk is acceptable? How fair is fair enough? (3) In many cases,
AI companies are also detached from the people who are most affected by their
systems, often historically marginalized communities [70, 16]. (4) Risk
assessments might become even more difficult in the future. For example,
systems might become capable of deceiving their operators and only
“pretending” to be safe in a testing environment [79].
#### Options.
The ethics board could actively contribute to the different steps of a risk
assessment. It could use a risk taxonomy to flag missing hazards [120],
comment on a heatmap that illustrates the likelihood and severity of a risk
[50], or try to circumvent a safety filter [99]. It could also commission a
third-party audit [97, 23, 37, 98, 72, 75] or red team [40, 92, 99]. It could
report its findings to the board of directors which would have the necessary
power to intervene (Section 2.1). Depending on its power, it might even be
able to veto or at least delay deployment decisions (Section 5.2).
#### Discussion.
Some companies already take extensive measures to assess risks before
deploying state-of-the-art AI systems [57, 24, 3]. It is unclear how much
value the support of an ethics board would add to such efforts. But especially
when dealing with catastrophic risks, having an additional “layer of defense”
seems generally desirable. The underlying concept is called “defense in depth”
[30]. This approach could be seen as a solution to the problem that “there is
no silver bullet” [24]. But supporting risk assessments could also have
negative effects. If other teams rely on the board’s work, they might assess
risks less thoroughly. This would be particularly problematic if the board is
not able to do it properly (e.g. it can only perform sanity checks). But this
effect could be mitigated by clearly communicating expectations and creating
appropriate incentives.
### 2.4 Reviewing risk management practices
Instead of or in addition to supporting specific risk assessments (Section
2.3), the ethics board could review the company’s risk management practices
more generally. In other words, it could try to improve the company’s “risk
governance” [117, 63]. Risk management practices at AI companies seem to be
less advanced than in other industries like aviation [49]. “They might
look good on paper, but do not work in practice” [102]. There are not yet any
established best practices and companies rarely adhere to best practices from
other industries (though there are promising developments around risk
management standards). And practices that companies develop themselves might
not be as effective. For example, there might be blind spots for certain types
of risks (e.g. diffuse or systemic risks) or they might not account for
cognitive biases (e.g. availability bias or scope neglect [122]).
#### Options.
The ethics board could assess the adequacy and effectiveness of the company’s
risk management practices. It could assess whether the company complies with
relevant regulations [103], standards [80, 53], or its own policies and
processes. It could also try to find flaws in a more open-ended fashion.
Depending on its expertise and capacity, it could do this on its own (e.g. by
reviewing risk-related policies and interviewing people in risk-related
positions) or commission an external review of risk management practices (e.g.
by an audit firm [71]). Note that this role is usually performed by the
company’s internal audit function, but the ethics board could provide an
additional layer of assurance [102]. They could report their findings directly
to the risk committee of the board of directors and the chief risk officer
(CRO) who could make risk management practices more effective.
#### Discussion.
If companies already have an internal audit function, the additional value
would be limited; the ethics board would merely be an additional defense layer
[102]. However, if companies do not already have an internal audit function,
the added value could be significant. Without a deliberate attempt to identify
ineffective risk management practices, some limitations will likely remain
unnoticed [102]. But the value ultimately depends on the individuals who
conduct the review. This might be problematic because it will require a very
specific type of expertise that most members of an ethics board do not have
(Section 4.4). It is also very time-consuming, so a part-time board might not
be able to do it properly (Section 4.5). Both issues should be taken into
account when appointing members.
### 2.5 Interpreting AI ethics principles
Many AI companies have ethics principles [54, 46], but “principles alone
cannot guarantee ethical AI” [69]. They are necessarily vague and need to be
put into practice [74, 123, 104].
#### Options.
The ethics board could interpret principles in the abstract (e.g. defining
terms or clarifying the purpose of specific principles) or in concrete cases
(e.g. whether a new research project violates a specific principle). In doing
so, it could influence a wide range of risk-related decisions. For example,
the board might decide that releasing a model that can easily be misused would
violate the principle “be socially beneficial”, which is part of Google’s AI
principles (Google, n.d.). When interpreting principles, the board could take
a risk-based approach: the higher the risk, the more the company needs to do
to mitigate it [12, 65, 27]. The board could also suggest amendments to the
principles.
#### Discussion.
How much would this reduce risk? It will be more effective if the principles
play a key role within the company. For example, Google’s motto “don’t be
evil”—which it quietly removed in 2018—used to be part of its code of conduct
and, reportedly, had a significant influence on its culture [31]. Employees
could threaten to leave the company or engage in other forms of activism if
principles are violated [13]. Interpreting ethics principles would also be
more effective if the board’s interpretation is binding (Section 5.2), and if
the principles are public, mainly because civil society could hold the company
accountable [28]. It would be less effective if the principles are mainly a PR
tool. This practice is called “ethics washing” [15, 104, 38].
### 2.6 Contact point for whistleblowers
Detecting misconduct is often difficult: it is hard to observe from the
outside, while insiders might not report it because they face a conflict
between personal values and loyalty [56, 36] or because they fear negative
consequences [17]. For example, an engineer might find a severe safety flaw,
but the research lead wants to release the model nonetheless and threatens to
fire the engineer if they speak up. In such cases, whistleblower protection is
vital.
#### Options.
An ethics board could protect whistleblowers by providing a trusted contact
point. The ethics board could report the case to the board of directors,
especially the board risk committee, who could engage with management to do
something about it. It could also advise the whistleblower on steps they could
take to protect themselves (e.g. seeking legal assistance) or to do something
about the misconduct (e.g. leaking the information to the press or a
government agency).
#### Discussion.
The ethics board would be more trustworthy than other organizational units (at
least if it is independent from management). But since it would still be part
of the company (Section 3.2), or at least in a contractual relationship with
it (Section 3.1), confidentiality would be less of a problem. This can be
particularly important if the information is highly sensitive and its
dissemination could be harmful in itself [116, 20, 21].
The ethics board can only serve this role if employees trust it, know about
its commitment to whistleblower protection, and if at least one board member
has relevant expertise and experience. For
more information on the drivers of effective whistleblowing, we refer to the
relevant literature [77, 4]. Anecdotally, whistleblowing within large AI
companies has had some successes, though it did not always work [28]. Overall,
this role seems very promising, but the issue is highly delicate and could
easily make things worse.
## 3 Structure
What should the board’s (legal) structure be? We can distinguish between
external (Section 3.1) and internal structures (Section 3.2). The board could
also have substructures (Section 3.3).
### 3.1 External boards
The ethics board could be external. The company and the ethics board could be
two separate legal entities. The relationship between the two entities would
then be governed by a contract.
#### Options.
The ethics board could be a nonprofit organization (e.g. a 501(c)(3)) or a
for-profit company (e.g. a public-benefit corporation [PBC]). The individuals
who provide services to the company could be members of the board of directors
of the ethics board (Figure 1a). Alternatively, they could be a group of
individuals contracted by the ethics board (Figure 1b) or by the company
(Figure 1c). There could also be more complex structures. For example, Meta’s
Oversight Board consists of two separate entities: a purpose trust and a
limited liability company (LLC) [90, 114]. The purpose trust is funded by Meta
and funds the LLC. The trustees are appointed by Meta, appoint individuals,
and manage the LLC. The individuals are contracted by the LLC and provide
services to Facebook and Instagram (Figure 2).
Figure 1: Three potential structures of an external ethics board
#### Discussion.
External ethics boards have a number of advantages: (1) They can legally bind
the company through the contractual relationship (Section 5.1). This would be
much more difficult for internal structures (Section 3.2). (2) The board would
be more independent, mainly because it would be less affected by internal
incentives (e.g. board members could prioritize the public interest over the
company’s interests). (3) It would be a more credible commitment because it
would be more effective and more independent. The company might therefore be
perceived as being more responsible. (4) The ethics board could potentially
contract with more than one company. In doing so, it might build up more
expertise and benefit from economies of scale. But external boards also have
disadvantages. We expect that few companies are willing to make such a strong
commitment, precisely because it would undermine its independence. It might
also take longer to get the necessary information and a nuanced view of the
inner workings of the company (e.g. norms and culture).
Figure 2: Structure of Meta’s Oversight Board
### 3.2 Internal boards
The ethics board could also be part of the company. Its members would be
company employees. And the company would have full control over the board’s
structure, its activities, and its members.
#### Options.
An internal board could be a team, i.e. a permanent group of employees with a
specific area of responsibility. But it could also be a working group or
committee, i.e. a temporary group of employees with a specific area of
responsibility, usually in addition to their main activity. For example,
DeepMind’s IRC seems to be a committee, not a team [57, 34].
#### Discussion.
The key advantage of internal boards is that it is easier for them to get
information (e.g. because they have a better network within the organization).
They will typically also have a better understanding of the inner workings of
the company (e.g. norms and culture). But internal structures also have
disadvantages. They can be disbanded at the discretion of senior management or
the board of directors. It would be much harder to play an adversarial role
and openly talk about risks, especially when potential mitigations are in
conflict with other objectives (e.g. profits). The board would not have much
(legal) power. Decisions cannot be enforced. To have influence, it relies on
good relationships with management (if collaborative) or the board of
directors (if adversarial). Finally, board members would be less protected
from repercussions if they advocate for unfavorable measures.
### 3.3 Substructures
Both internal and external boards could have substructures. Certain
responsibilities could be delegated to a part of the ethics board.
#### Options.
Two common substructures are committees and liaisons. (Note that an internal
ethics board can be a committee of the company, but the ethics board can also
have committees.) (1) Committees could be permanent (for recurring
responsibilities) or temporary (to address one-time issues). For example, the
board could have a permanent “deployment committee” that reviews model
releases (Section 2.2), or it could have a temporary committee for advising
the board on an upcoming M&A transaction. For more information about the
merits of committees in the context of the board of directors, we refer to the
relevant literature. Meta’s Oversight Board has two types of committees: a
“case selection committee” which sets criteria for cases that the board will
select for review, and a “membership committee” which proposes new board
members and recommends the removal or renewal of existing members [87]. The
board can also set up other committees.
Liaisons are another type of substructure. Some members of the ethics board
could join specific teams or other organizational structures (e.g. attend
meetings of research projects or the board of directors). They would get more
information about the inner workings of the company and can build better
relationships with internal stakeholders (which can be vital if the board
wants to protect whistleblowers, see Section 2.6). Inversely, non-board
members could be invited to attend board meetings. This could be important if
the board lacks the necessary competence to make a certain decision (Section
4.4). For example, they could invite someone from the technical safety team to
help them interpret the results of a third-party model audit. Microsoft’s
AETHER Committee regularly invites engineers to working groups [66].
#### Discussion.
On the one hand, substructures can make the board more complex and add
friction. On the other hand, they allow for faster decision-making because
fewer people are involved and group discussions tend to be more efficient.
Against this background, we expect that substructures are probably only needed
in larger ethics boards (Section 4.3).
## 4 Membership
Who should sit on the board? In particular, how should members join (Section
4.1) and leave the board (Section 4.2)? How many members should the board have
(Section 4.3)? What characteristics should they have (Section 4.4)? How much
time should they spend on the board (Section 4.5)? And should they be
compensated (Section 4.6)?
### 4.1 Joining the board
How should members join the board?
#### Options.
We need to distinguish between the appointment of the initial and subsequent
board members. Initial members could be directly appointed by the company’s
board of directors. But the company could also set up a special formation
committee which appoints the initial board members. The former was the case at
Axon’s AI and Policing Technologies Ethics Board [10], the latter at Meta’s
Oversight Board [88]. Subsequent board members are usually appointed by the
board itself. Meta’s Oversight Board has a special committee that selects
subsequent members after a review of the candidates’ qualifications and a
background check [88]. But they could also be appointed by the company’s board
of directors. Candidates could be suggested (not appointed) by other board
members, the board of directors, or the general public. At Meta’s Oversight
Board, new members can be suggested by other board members, the board of
directors, and the general public [88].
#### Discussion.
The appointment of initial board members is particularly important. If the
company does not get this right, it could threaten the survival of the entire
board. For example, Google appointed two controversial members to the initial
board which sparked internal petitions to remove them and contributed to the
board’s failure [94]. The appointment should be done by someone with enough
time and expertise. This suggests that a formation committee will often be
advisable. The board would be more independent if it can appoint subsequent
members itself. Otherwise, the company could influence the direction of the
ethics board over time.
### 4.2 Leaving the board
How should members leave the board?
#### Options.
There are at least three ways in which members could leave the board. First,
their term could expire. The board’s charter or bylaws could specify a term
limit. Members would leave the board when their term expires. For example, at
Meta’s Oversight Board, the term ends after three years, but appointments can
be renewed twice [88]. Second, members could resign voluntarily. While members
might resign for personal reasons, a resignation can also be used to express
protest. For example, in the case of Google’s ATEAC, Alessandro Acquisti
announced his resignation on Twitter to express protest against the setup of
the board [5]. Similarly, in the case of Axon’s AI and Policing Technologies
Ethics Board, nine out of eleven members publicly resigned after Axon
announced plans to develop taser-equipped drones to be used in schools without
consulting the board first [39]. Third, board members could be removed
involuntarily.
#### Discussion.
Since any removal of board members is a serious step, it should only be
possible under special conditions. In particular, it should require a special
majority and a special reason (e.g. a violation of the board’s code of conduct
or charter). To preserve the independence of the board, it should not be
possible to remove board members for substantive decisions they have made.
### 4.3 Size of the board
How many members should the board have?
#### Options.
In theory, the board can have any number of members. In practice, most boards
have between 10 and 20 members (Table 1).
#### Discussion.
On the one hand, larger boards can work on more cases and they can go into
more detail. They can also be more diverse [45]. On the other hand, it will
often be difficult to find enough qualified people. Group discussions in
smaller boards tend to be easier and it is easier to reach consensus (e.g. if
a qualified majority is required [Section 5.1]). Smaller boards allow for
closer personal relationships between board members. But conflicts of interest
could have an outsized effect in smaller boards. As a rule of thumb, the
number of members should scale with the board’s workload (“more cases, more
members”).
Table 1: Size of different AI ethics boards

Ethics board | Members | Source
---|---|---
Meta’s Oversight Board | 22 | [89]
Microsoft’s AETHER Committee | 20 | [78]
Google’s ATEAC | 8 | [118]
Axon’s AI and Policing Technologies Ethics Board | 11 | [10]
Axon’s Ethics & Equity Advisory Council | 11 (US), 7 (UK) | [11]
### 4.4 Characteristics of members
What characteristics should board members have?
#### Options.
When appointing board members, companies should at least consider candidates’
expertise, diversity, seniority, and public perception.
#### Discussion.
(1) Different boards will require different types of expertise [101]. But we
expect most boards to benefit from technical, ethical, and legal expertise.
(2) Members should be diverse along various dimensions, such as gender, race,
and geographical representation [45]. For example, Meta’s Oversight Board has
geographic diversity requirements in its bylaws [87]. They should adequately
represent historically marginalized communities [70, 16]. Diverse perspectives
are particularly important in the context of risk assessment (Section 2.3).
For example, this will make it more likely that unprecedented risks are
identified. (3) Board members may be more or less senior. By “seniority”, we
mean a person’s position of status which typically corresponds to their work
experience and is reflected in their title. More senior people tend to have
more subject-matter expertise. The board of directors and senior management
might also take them more seriously. As a consequence, it might be easier for
them to build trust, get information, and influence key decisions. This is
particularly important for boards that only advise and are not able to make
binding decisions. However, it will often be harder for the company to find
senior people. And in many cases, the actual work is done by junior people.
(4) Finally, some board members might be “celebrities”. They would add
“glamor” to the board, which the company could use for PR reasons. Inversely,
appointing highly controversial candidates (e.g. who express sympathy for
extreme political views) might put off other candidates and undermine the
board’s credibility.
### 4.5 Time commitment
How much time should members spend on the board?
#### Options.
Board members could work full-time (around 40 hours per week), part-time
(around 15-20 hours per week), or even less (around 1-2 hours per week or as
needed). None of the existing (external) boards seem to require full-time
work. Members of Meta’s Oversight Board work part-time [59]. And members of
Axon’s AI and Policing Technologies Ethics Board only had two official board
meetings per year, with ad-hoc contact between these meetings [10].
#### Discussion.
The more time members spend working on the board, the more they can engage
with individual cases. This would be crucial if cases are complex and stakes
are high (e.g. if the board supports pre-deployment risk assessments, see
Section 2.3). Full-time board members would also get a better understanding of
the inner workings of the company. For some responsibilities, the board needs
this understanding (e.g. if the board reviews the company’s risk management
practices, see Section 2.4). However, we expect it to be much harder to find
qualified candidates who are willing to work full-time because they will
likely have existing obligations or other opportunities. This is exacerbated
by the fact that the relevant expertise is scarce. And even if a company finds
qualified candidates who are willing to work full-time, hiring several full-
time members can be a significant expense.
### 4.6 Compensation
Should board members be compensated?
#### Options.
There are three options. First, serving on the ethics board could be unpaid.
Second, board members could get reimbursed for their expenses (e.g. for
traveling or for commissioning outside expertise). For example, Axon paid its
board members $5,000 per year, plus a $5,000 honorarium per attended board
meeting, plus travel expenses [10]. Third, board members could be fully
compensated, either via a regular
salary or honorarium. For example, it has been reported that members of Meta’s
Oversight Board are being paid a six-figure salary [59].
#### Discussion.
Not compensating board members or only reimbursing their expenses is only
reasonable for part-time or light-touch boards. Full-time boards need to be
compensated. Otherwise, it will be extremely difficult to find qualified
candidates. For a more detailed discussion of how compensation can affect
independence, see Section 6.1.
## 5 Decision-making
### 5.1 Decision-making process
How should the board make decisions?
#### Options.
We expect virtually all boards to make decisions by voting. This raises a
number of questions:
* •
Majority. What majority should be necessary to adopt a decision? Boards could
vote by absolute majority, i.e. a decision is adopted if it is supported by
more than 50% of votes. For certain types of decisions, the board may also
require a qualified majority (e.g. a unanimous vote or a 67% majority).
Alternatively, boards could vote by plurality (or relative majority), i.e. a
decision is adopted if it gets more votes than any other option, even if it
does not receive more than half of all votes cast. The majority could be calculated based on
the total number of board members (e.g. if the board has 10 members, 6 votes
would constitute a simple majority), or the number of members present (e.g. if
7 members are present, 4 votes would constitute a simple majority). At Meta’s
Oversight Board, “outcomes will be determined by majority rule, based on the
number of members present” [87]. (These rules are made concrete in the sketch
after this list.)
* •
Voting rights. Who should be able to vote? There are three options. First, all
board members could have voting rights. Second, only some board members could
have voting rights. For example, only members of subcommittees could be able
to vote on issues related to that subcommittee. This is the case at Meta’s
Oversight Board [87]. It would also be conceivable that some members only
advise on special issues; they might be less involved in the board’s day-to-
day work. These board members, while formally being part of the board, might
not have voting rights. Third, non-board members could have (temporary) voting
rights. For example, the board could ask external experts to advise on
specific issues. These experts could be granted voting rights for this
particular issue.
* •
Voting power. A related, but different question is: how much should a vote
count? We expect this question to be irrelevant for most boards, as “one
person, one vote” is so commonsensical. However, in some cases, boards may
want to deviate from this. For example, the board could use quadratic voting,
which allows individuals to express the degree of their preferences, rather
than just the direction of their preferences [96, 61] (see the sketch after
this list).
* •
Quorum. How many members must participate for a vote to be valid? This minimum
is called a “quorum”. In principle, the quorum can be anything between one
and all board members, though there might be legal requirements for some
external structures. A natural quorum is the number of board members who could
constitute a majority (e.g. more than 50% of board members if a simple
majority is sufficient). It is also possible to have a different quorum for
different types of decisions. Note that a lack of quorum might make the
decision void or voidable.
* •
Voting method. How should the board vote? The most common voting methods are
paper ballot, show of hands, postal vote, or electronic vote (e.g. using a voting
app). According to its bylaws, voting at Meta’s Oversight Board takes place
“in-person or electronically” [87].
* •
Abstention. In some cases, board members may want to abstain from a vote (e.g.
because they do not feel adequately informed about the issue at hand, are
uncertain, or mildly disapprove of the decision, but do not want to actively
oppose it). Abstention could always be permitted or prohibited. The board
could also allow abstention for some decisions, but not for others. Board
members must abstain if they have a conflict of interest. At Meta’s Oversight
Board, abstention is prohibited for only one type of decision, namely case
deliberation [87].
* •
Proxy voting. Some board members may want to ask someone else to vote on their
behalf. This is called “proxy voting”. Proxy voting could always be permitted
or prohibited. The board could also allow proxy voting under certain
circumstances (e.g. in the event of illness), only for certain decisions (e.g.
less consequential decisions), or upon request. Meta’s Oversight Board does
not allow proxy voting [87].
* •
Frequency of board meetings. How often should the board meet to vote? There
are three options. First, the board could meet periodically (e.g. weekly,
monthly, quarterly, or annually). Second, the board could meet on an ad hoc
basis. Special meetings could be arranged at the board’s discretion, upon
request by the company, and/or based on a catalog of special occasions (e.g.
prior to the deployment of a new model). Third, the board could do both, i.e.
meeting periodically and on an ad hoc basis. Meta’s Oversight Board meets
annually and has special board meetings “in emergency or exceptional cases”
[87]. Google’s ATEAC planned to have four meetings per year [118].
* •
In-person or remote meetings. Should board meetings be held in person or
remotely? We expect this design choice to be less important than most others,
but it is a necessary one nonetheless. At Meta’s Oversight Board, meetings
take place in person, though it does allow exceptions “in limited and
exceptional circumstances”; its committees meet either in person or remotely
[87].
* •
Preparation and convocation of board meetings. How should board meetings be
prepared and convened? More precisely, who can convene a board meeting? What
is the notice period? How should members be invited? What should the
invitation entail? And do members need to indicate if they will attend? At
Meta’s Oversight Board, “written notice of periodic and special meetings must
specify the date, time, location, and purpose for convening the board. This
notice will be provided at least eight weeks in advance for in-person
convenings and, unless in case of imminent emergency, at least two days in
advance for remote convenings. Members are required to acknowledge receipt of
this notice and also indicate their attendance in a timely fashion” [87].
* •
Documentation and communication of decisions. Finally, it needs to be
specified how decisions are documented and communicated. More precisely, which
decisions should be documented and communicated? What exactly should be
documented and communicated? And who should get access to the documentation?
At Facebook’s Oversight Board, “minutes will be taken and circulated to board
members within one week” [87]. It does not publicly release meeting minutes,
but has sometimes allowed reporters into its meetings. Google’s ATEAC planned
to “publish a report summarizing the discussions” [118]. Axon’s AI Ethics
Board published two annual reports [95]. In their 2019 report, they also
highlight “the importance of public engagement and transparency” [10].
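To make the arithmetic behind the majority, quorum, and quadratic-voting items concrete, here is a minimal sketch in Python. The worked numbers mirror the examples in the majority item above; everything else (the function names and the credit-based quadratic-voting interface) is our own illustration, not drawn from any board’s bylaws.

```python
from math import isqrt

def absolute_majority(yes: int, base: int) -> bool:
    # Adopted if supported by more than 50% of `base`, where `base` is
    # either all board members or only the members present (per the bylaws).
    return yes > base / 2

def plurality_winner(votes: dict[str, int]) -> str:
    # The option with the most votes wins, even if it falls short of
    # half of all votes cast.
    return max(votes, key=votes.get)

def quadratic_votes(credits_spent: int) -> int:
    # Quadratic voting: casting v votes on one issue costs v**2 credits,
    # so spending c credits buys floor(sqrt(c)) votes.
    return isqrt(credits_spent)

# Worked example: a 10-member board with 7 members present.
board_size, present = 10, 7
quorum = board_size // 2 + 1              # members who could constitute a majority
assert present >= quorum                  # the meeting is quorate
assert absolute_majority(6, board_size)   # 6 of 10 members: adopted
assert absolute_majority(4, present)      # 4 of 7 present: adopted
assert plurality_winner({"A": 3, "B": 2, "C": 2}) == "A"  # wins without a majority
assert quadratic_votes(9) == 3            # 9 credits buy 3 votes
```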
#### Discussion.
Some of these questions might seem like formalities, but they can
significantly affect the board’s work. For example, if the necessary majority
or the quorum is too high, the board might not be able to adopt certain
decisions. This could bias the board towards inaction. Similarly, if the board
is not able to convene ad hoc meetings or only upon request by the company,
they would not be able to respond adequately to emergencies.
### 5.2 Bindingness of decisions
Should the board’s decisions be binding?
#### Options.
This mainly depends on the board’s structure (Section 3). External boards can
be set up in a way that their decisions are binding, i.e. enforceable by legal
means. Both parties need to contractually agree that the board’s decisions are
in fact binding. This agreement could also contain further details about the
enforcement of the board’s decisions (e.g. contractual penalties). It is worth
noting, however, that the ethics board cannot force the company to follow its
decisions. The worst legal consequence for the company is a contractual
liability. If the ethics board is part of the company, it is very difficult,
if not impossible, to ensure that the board’s decisions are legally binding.
If the board is able to make binding decisions, it needs to be specified
whether and, if so, under what conditions the company can override them. For
example, the contract could give the company’s board of directors the option
to override a decision if they achieve the same voting majority as the ethics
board. But even if the board’s decisions are not enforceable by legal means,
there are non-legal means that can incentivize the company to follow the
board’s decision. For example, the board could make its decisions public,
which could spark a public outcry. One or more board members could (threaten
to) resign, which might lead to negative PR. Employees could also (threaten
to) leave the company (e.g. via an open letter), which could be a serious
threat, depending on how talent-constrained the company is. Finally, shareholders
could engage in shareholder activism. In practice, the only ethics board that
is able to make binding decisions is Meta’s Oversight Board, which has the
power to override content moderation decisions.
#### Discussion.
Boards that are able to make legally binding decisions are likely more
effective, i.e. they are able to achieve their goals to a higher degree (e.g.
reducing risks to an acceptable level). They would also be a more credible
commitment to safety and ethics. However, we expect that many companies would
oppose creating such a powerful ethics board, mainly because it would
undermine the company’s power. There might also be legal constraints on how
much power the company can transfer to the ethics board.
## 6 Resources
What resources does the board need? In particular, how much funding does the
board need and where should the funding come from (Section 6.1)? How should
the board get information (Section 6.2)? And should it have access to outside
expertise (Section 6.3)?
### 6.1 Funding
How much funding does the board need and where should the funding come from?
#### Options.
The board might need funding to pay its members’ salaries or reimburse expenses
(Section 4.6), to commission outside expertise (e.g. third-party audits or
expert consulting), or to organize events (e.g. in-person board meetings).
Funding could also allow board members to spend their time on non-
administrative tasks. For example, the Policing Project provided staff
support, facilitated meetings, conducted research, and drafted reports for
Axon’s former AI and Policing Technologies Ethics Board [95]. How much funding
the board needs varies widely—from essentially no funding to tens of millions
of dollars. For example, Meta’s Oversight Board has an annual budget of $20
million [85]. Funding could come from the company (e.g. directly or via a
trust) or philanthropists. Other funding sources do not seem plausible (e.g.
state funding or research grants).
#### Discussion.
The board’s independence could be undermined if funding comes directly from
the company. The company could use the provision of funds as leverage to make
the board take decisions that are more aligned with its interests. A more
indirect funding mechanism therefore seems preferable. For example, Meta funds
the purpose trust for multiple years in advance [85].
### 6.2 Information
How should the board get information?
#### Options.
What information the board needs is highly context-specific and mainly depends
on the board’s responsibilities (Section 2). The board’s structure determines
what sources of information are available (Section 3). While internal boards
have access to some information by default, external boards have to rely on
public information and information the company decides to share with them.
Both internal and external boards might be able to gather additional
information themselves (e.g. via formal document requests or informal coffee
chats with employees).
#### Discussion.
Getting information from the company is convenient for the board, but the
information might be biased. The company might—intentionally or not—withhold,
overemphasize, or misrepresent certain information. The company could also
delay the provision of information or present it in a way that makes it
difficult for the board to process (e.g. by hiding important information in
long documents). To mitigate these risks, the board might prefer gathering
information itself. In particular, the board might want to build good
relationships with a few trusted employees. While this might be less biased,
it would also be more time-consuming. It might also be impossible to get
certain first-hand information (e.g. protocols of past meetings of the board
of directors). It is worth noting that not all company information is equally
biased. For example, while reports by management might be too positive,
whistleblower reports might be too negative. The most objective information
will likely come from the internal audit team and external assurance providers
[102]. In general, there is no single best information source. Boards need to
combine multiple sources and cross-check important information.
### 6.3 Outside expertise
Should the board have access to outside expertise?
#### Options.
There are at least three types of outside expertise the ethics board could
draw on. First, it could hire a specialized firm (e.g. a law or consulting
firm) to answer questions that are beyond its expertise (e.g. whether the
company complies with the NIST AI Risk Management Framework). Second, it could
hire an audit firm (e.g. to audit a specific model, the company’s governance,
or its own practices). Third, it could build academic partnerships (e.g. to
red-team a model).
#### Discussion.
It might make sense for the ethics board to rely on outside expertise if it
has limited expertise or time. It could also use outside expertise to get a
more objective perspective, as information provided to it by the company can be biased
(Section 6.2). However, the company might use the same sources of outside
expertise. For example, if a company is open to a third-party audit, it would
commission the audit directly (why would it ask the ethics board to do it on
its behalf?). In such cases, the ethics board would merely “double-check” the
company’s or the third party’s work. While the added value would be low, the
costs could be high (especially for commissioning an external audit or expert
consulting).
## 7 Conclusion
#### Summary.
In this paper, we have identified key design choices that AI companies need to
make when setting up an ethics board (RQ1). For each of them, we have listed
different options and discussed how they would affect the board’s ability to
reduce risks from AI (RQ2). Table 2 contains a summary of the design choices
we have covered.
Table 2: Summary of design choices

High-level questions | Sub-questions / options
---|---
What responsibilities should the board have? | • Advising the board of directors • Overseeing model releases and publications • Supporting risk assessments • Reviewing risk management practices • Interpreting AI ethics principles • Serving as a contact for whistleblowers
What should the board’s legal structure be? | • The board could be a separate legal entity that contracts with the company (external board) • It could also be part of the company (internal board) • Should it have substructures (e.g. committees)?
Who should sit on the board? | • How should initial and subsequent members be appointed? • How should they leave the board? • How many members should the board have? • What characteristics should they have? • How much time should they spend on the board? • Should they be compensated?
How should the board make decisions? | • What decision-making process should the board use? • Should its decisions be binding?
What resources does the board need? | • How much funding does the board need and where should the funding come from? • How should the board get information? • Should the board have access to outside expertise?
#### Key claims.
Throughout this paper, we have made four key claims. First, ethics boards can
take many different shapes. Most design choices are highly context-specific.
It is therefore very difficult to make abstract recommendations. There is no
one-size-fits-all. Second, ethics boards should be seen as an additional
“layer of defense”. They do not have an original role in the corporate
governance of AI companies. They do not serve a function that no other
organizational structure serves. Instead, most ethics boards support,
complement, or duplicate existing efforts. While this reduces efficiency, an
additional safety net seems warranted in high-stakes situations. Third, merely
having an ethics board is not sufficient. Most of the value depends on its
members and their willingness and ability to pursue its mission. Thus,
appointing the right people is crucial. Inversely, there is precedent that
appointing the wrong people can threaten the survival of the entire board.
Fourth, while some design choices might seem like formalities (e.g. when the
board is quorate), they can have a significant impact on the effectiveness of
the board (e.g. by slowing down decisions). They should not be taken lightly.
#### Questions for further research.
This paper leaves many questions unanswered, and more research is needed. In
particular, our list of design choices is not comprehensive. For example, we
did not address the issue of board oversight. If an ethics board has
substantial powers, the board itself also needs adequate oversight. A “meta
oversight board”—a central organization that oversees various AI ethics
boards—could be a possible solution. Apart from that, our list of potential
responsibilities could be extended. For example, the company could grant the
ethics board the right to appoint one or more members of its board of
directors. The ethics board could also oversee and coordinate responses to
model evals. For example, if certain dangerous capabilities are detected, the
company may want to contact the government and coordinate with other labs to pause
capabilities research.
We wish to conclude with a word of caution. Setting up an ethics board is not
a silver bullet—“there is no silver bullet” [24]. Instead, it should be seen
as yet another mechanism in a portfolio of mechanisms.
## Acknowledgements
We are grateful for valuable feedback from Christina Barta, Carrick Flynn,
Cullen O’Keefe, Virginia Blanton, Andrew Strait, Tim Fist, and Milan Griffes.
Anka Reuel worked on the project during the 2022 CHERI Summer Research
Program. All remaining errors are our own.
## References
* [1] D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.
* [2] M. Anderljung and J. Hazell. Protecting society from AI misuse: When are restrictions on capabilities warranted? arXiv preprint arXiv:2303.09377, 2023.
* [3] Anthropic. Core views on AI safety: When, why, what, and how. https://www.anthropic.com/index/core-views-on-ai-safety, 2023.
* [4] C. R. Apaza and Y. Chang. What makes whistleblowing effective: Whistleblowing in Peru and South Korea. Public Integrity, 13(2):113–130, 2011.
* [5] A. Acquisti. https://twitter.com/ssnstudy/status/1112099054551515138, 2019.
* [6] S. Armstrong, N. Bostrom, and C. Shulman. Racing to the precipice: A model of artificial intelligence development. AI & Society, 31:201–206, 2016.
* [7] Z. Arnold and H. Toner. AI accidents: An emerging threat. Center for Security and Emerging Technology, Georgetown University, 2021.
* [8] C. Ashurst, E. Hine, P. Sedille, and A. Carlier. AI ethics statements: Analysis and lessons learnt from NeurIPS broader impact statements. In 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 2047–2056, 2022.
* [9] T. Aven. On the meaning of a black swan in a risk context. Safety Science, 57:44–51, 2013.
* [10] Axon. First report of the Axon AI & Policing Technology Ethics Board, 2019.
* [11] Axon. Ethics & Equity Advisory Council. https://www.axon.com/eeac, 2022.
* [12] R. Baldwin and J. Black. Driving priorities in risk-based regulation: What’s the problem? Journal of Law and Society, 43(4):565–595, 2016.
* [13] H. Belfield. Activism by the AI community: Analysing recent achievements and future prospects. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pages 15–21, 2020.
* [14] M. S. Bernstein, M. Levi, D. Magnus, B. A. Rajala, D. Satz, and Q. Waeiss. Ethics and society review: Ethics reflection as a precondition to research funding. Proceedings of the National Academy of Sciences, 118(52), 2021.
* [15] E. Bietti. From ethics washing to ethics bashing: A view on tech ethics from within moral philosophy. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 210–219, 2020.
* [16] A. Birhane, W. Isaac, V. Prabhakaran, M. Díaz, M. C. Elish, I. Gabriel, and S. Mohamed. Power to the people? Opportunities and challenges for participatory AI. Equity and Access in Algorithms, Mechanisms, and Optimization, pages 1–8, 2022.
* [17] B. Bjørkelo. Workplace bullying after whistleblowing: Future research and implications. Journal of Managerial Psychology, 28(3):306–323, 2013.
* [18] R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S. Bernstein, J. Bohg, A. Bosselut, E. Brunskill, E. Brynjolfsson, S. Buch, D. Card, R. Castellon, N. Chatterji, A. Chen, K. Creel, J. Q. Davis, D. Demszky, C. Donahue, M. Doumbouya, E. Durmus, S. Ermon, J. Etchemendy, K. Ethayarajh, L. Fei-Fei, C. Finn, T. Gale, L. Gillespie, K. Goel, N. Goodman, S. Grossman, N. Guha, T. Hashimoto, P. Henderson, J. Hewitt, D. E. Ho, J. Hong, K. Hsu, J. Huang, T. Icard, S. Jain, D. Jurafsky, P. Kalluri, S. Karamcheti, G. Keeling, F. Khani, O. Khattab, P. W. Koh, M. Krass, R. Krishna, R. Kuditipudi, A. Kumar, F. Ladhak, M. Lee, T. Lee, J. Leskovec, I. Levent, X. L. Li, X. Li, T. Ma, A. Malik, C. D. Manning, S. Mirchandani, E. Mitchell, Z. Munyikwa, S. Nair, A. Narayan, D. Narayanan, B. Newman, A. Nie, J. C. Niebles, H. Nilforoshan, J. Nyarko, G. Ogut, L. Orr, I. Papadimitriou, J. S. Park, C. Piech, E. Portelance, C. Potts, A. Raghunathan, R. Reich, H. Ren, F. Rong, Y. Roohani, C. Ruiz, J. Ryan, C. Ré, D. Sadigh, S. Sagawa, K. Santhanam, A. Shih, K. Srinivasan, A. Tamkin, R. Taori, A. W. Thomas, F. Tramèr, R. E. Wang, W. Wang, B. Wu, J. Wu, Y. Wu, S. M. Xie, M. Yasunaga, J. You, M. Zaharia, M. Zhang, T. Zhang, X. Zhang, Y. Zhang, L. Zheng, K. Zhou, and P. Liang. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2022.
* [19] N. Bostrom. Existential risks: Analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology, 9(1), 2001.
* [20] N. Bostrom. Information hazards: A typology of potential harms from knowledge. Review of Contemporary Philosophy, 10:44–79, 2011.
* [21] N. Bostrom. The vulnerable world hypothesis. Global Policy, 10(4):455–476, 2019.
* [22] M. Brundage, S. Avin, J. Clark, H. Toner, P. Eckersley, B. Garfinkel, A. Dafoe, P. Scharre, T. Zeitzoff, B. Filar, H. Anderson, H. Roff, G. C. Allen, J. Steinhardt, C. Flynn, S. O. hÉigeartaigh, S. Beard, H. Belfield, S. Farquhar, C. Lyle, R. Crootof, O. Evans, M. Page, J. Bryson, R. Yampolskiy, and D. Amodei. The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228, 2018.
* [23] M. Brundage, S. Avin, J. Wang, H. Belfield, G. Krueger, G. Hadfield, H. Khlaaf, J. Yang, H. Toner, R. Fong, T. Maharaj, P. W. Koh, S. Hooker, J. Leung, A. Trask, E. Bluemke, J. Lebensold, C. O’Keefe, M. Koren, T. Ryffel, J. Rubinovitz, T. Besiroglu, F. Carugati, J. Clark, P. Eckersley, S. de Haas, M. Johnson, B. Laurie, A. Ingerman, I. Krawczuk, A. Askell, R. Cammarota, A. Lohn, D. Krueger, C. Stix, P. Henderson, L. Graham, C. Prunkl, B. Martin, E. Seger, N. Zilberman, S. O. hÉigeartaigh, F. Kroeger, G. Sastry, R. Kagan, A. Weller, B. Tse, E. Barnes, A. Dafoe, P. Scharre, A. Herbert-Voss, M. Rasser, S. Sodhani, C. Flynn, T. K. Gilbert, L. Dyer, S. Khan, Y. Bengio, and M. Anderljung. Toward trustworthy AI development: Mechanisms for supporting verifiable claims. arXiv preprint arXiv:2004.07213, 2020.
* [24] M. Brundage, K. Mayer, T. Eloundou, S. Agarwal, S. Adler, G. Krueger, J. Leike, and P. Mishkin. Lessons learned on language model safety and misuse. OpenAI, 2022.
* [25] J. Carlsmith. Is power-seeking AI an existential risk? arXiv preprint arXiv:2206.13353, 2022.
* [26] S. Cave and S. S. ÓhÉigeartaigh. An AI race for strategic advantage: Rhetoric and risks. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 36–40, 2018.
* [27] J. Chamberlain. The risk-based approach of the European Union’s proposed artificial intelligence regulation: Some comments from a tort law perspective. European Journal of Risk Regulation, 14(1):1–13, 2022.
* [28] P. Cihon, J. Schuett, and S. D. Baum. Corporate governance of artificial intelligence in the public interest. Information, 12(7), 2021.
* [29] K. Conger and D. Cameron. Google is helping the Pentagon build AI for drones. Gizmodo, 2018.
* [30] O. Cotton-Barratt, M. Daniel, and A. Sandberg. Defence in depth against human extinction: Prevention, response, resilience, and why they all matter. Global Policy, 11(3):271–282, 2020.
* [31] P. Crofts and H. van Rijswijk. Negotiating ’evil’: Google, Project Maven and the corporate form. Law, Technology and Humans, 2(1):1–16, 2020.
* [32] R. Crootof. Artificial intelligence research needs responsible publication norms. Lawfare Blog, 2019.
* [33] H. Davies and M. Zhivitskaya. Three lines of defence: A robust organising framework, or just lines in the sand? Global Policy, 9:34–42, 2018.
* [34] DeepMind. Human rights policy. https://www.deepmind.com/human-rights-policy, 2022.
* [35] J. Degrave, F. Felici, J. Buchli, M. Neunert, B. Tracey, F. Carpanese, T. Ewalds, R. Hafner, A. Abdolmaleki, D. de Las Casas, et al. Magnetic control of tokamak plasmas through deep reinforcement learning. Nature, 602(7897):414–419, 2022.
* [36] J. Dungan, A. Waytz, and L. Young. The psychology of whistleblowing. Current Opinion in Psychology, 6:129–133, 2015.
* [37] G. Falco, B. Shneiderman, J. Badger, R. Carrier, A. Dahbura, D. Danks, M. Eling, A. Goodloe, J. Gupta, C. Hart, et al. Governing AI safety through independent audits. Nature Machine Intelligence, 3(7):566–571, 2021.
* [38] L. Floridi. Translating principles into practices of digital ethics: Five risks of being unethical. Ethics, Governance, and Policies in Artificial Intelligence, pages 81–90, 2021.
* [39] B. Friedman, W. Abd-Almageed, M. Brundage, R. Calo, D. Citron, R. Delsol, C. Harris, J. Lynch, and M. McBride. Statement of resigning Axon AI ethics board members. Policing Project, 2022.
* [40] D. Ganguli, L. Lovitt, J. Kernion, A. Askell, Y. Bai, S. Kadavath, B. Mann, E. Perez, N. Schiefer, K. Ndousse, A. Jones, S. Bowman, A. Chen, T. Conerly, N. DasSarma, D. Drain, N. Elhage, S. El-Showk, S. Fort, Z. Hatfield-Dodds, T. Henighan, D. Hernandez, T. Hume, J. Jacobson, S. Johnston, S. Kravec, C. Olsson, S. Ringer, E. Tran-Johnson, D. Amodei, T. Brown, N. Joseph, S. McCandlish, C. Olah, J. Kaplan, and J. Clark. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858, 2022.
* [41] J. A. Goldstein, G. Sastry, M. Musser, R. DiResta, M. Gentzel, and K. Sedova. Generative language models and automated influence operations: Emerging threats and potential mitigations. arXiv preprint arXiv:2301.04246, 2023.
* [42] Googlers Against Transphobia. Googlers against transphobia and hate. Medium, 2019.
* [43] N. Grant. Google calls in help from Larry Page and Sergey Brin for A.I. fight. The New York Times, 2023.
* [44] M. G. Grimes, T. A. Williams, and E. Y. Zhao. Anchors aweigh: The sources, variety, and challenges of mission drift. Academy of Management Review, 44(4):819–845, 2019.
* [45] A. Gupta and V. Heath. AI ethics groups are repeating one of society’s classic mistakes. MIT Technology Review, 2020.
* [46] T. Hagendorff. The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1):99–120, 2020.
* [47] D. Hendrycks, N. Carlini, J. Schulman, and J. Steinhardt. Unsolved problems in ML safety. arXiv preprint arXiv:2109.13916, 2022.
* [48] L. Hoffman and R. Albergotti. Microsoft eyes $10 billion bet on ChatGPT. Semafor, 2023.
* [49] W. Hunt. The flight to safety-critical AI. Center for Long-Term Cybersecurity, UC Berkeley, 2020.
* [50] IEC. 31010:2019 Risk management — Risk assessment techniques, 2019.
* [51] ISO. 31000:2018 Risk management — Guidelines, 2018.
* [52] ISO/IEC. Guide 51:2014 Safety aspects — Guidelines for their inclusion in standards, 2014.
* [53] ISO/IEC. 23894:2023 Information technology — Artificial intelligence — Guidance on risk management, 2023.
* [54] A. Jobin, M. Ienca, and E. Vayena. The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9):389–399, 2019.
* [55] S. R. Jordan. Designing artificial intelligence review boards: creating risk metrics for review of AI. In 2019 IEEE International Symposium on Technology and Society (ISTAS), pages 1–7, 2019.
* [56] P. B. Jubb. Whistleblowing: A restrictive definition and interpretation. Journal of Business Ethics, 21:77–94, 1999.
* [57] K. Kavukcuoglu, P. Kohli, L. Ibrahim, D. Bloxwich, and S. Brown. How our principles helped define AlphaFold’s release. https://www.deepmind.com/blog/how-our-principles-helped-define-alphafolds-release, 2022.
* [58] K. Klonick. The Facebook Oversight Board: Creating an independent institution to adjudicate online free expression. Yale Law Journal, 129(2418), 2020.
* [59] K. Klonick. Inside the making of Facebook’s supreme court. New Yorker, 2021.
* [60] N. Kolt. Algorithmic black swans. Washington University Law Review, 101, 2023.
* [61] S. P. Lalley and E. G. Weyl. Quadratic voting: How mechanism design can radicalize democracy. In AEA Papers and Proceedings, volume 108, pages 33–37, 2018.
* [62] C. Leahy, S. Black, C. Scammell, and A. Miotti. Conjecture: Internal infohazard policy. Alignment Forum, 2022.
* [63] S. A. Lundqvist. Why firms implement risk governance: Stepping beyond traditional risk management to enterprise risk management. Journal of Accounting and Public Policy, 34(5):441–466, 2015.
* [64] M. M. Maas. How viable is international arms control for military artificial intelligence? three lessons from nuclear weapons. Contemporary Security Policy, 40(3):285–311, 2019.
* [65] T. Mahler. Between risk management and proportionality: The risk-based approach in the EU’s Artificial Intelligence Act proposal. Nordic Yearbook of Law and Informatics, 2021.
* [66] Microsoft. Putting principles into practice: How we approach responsible AI at Microsoft. https://www.microsoft.com/cms/api/am/binary/RE4pKH5, 2020.
* [67] Microsoft. Microsoft and OpenAI extend partnership. https://blogs.microsoft.com/blog/2023/01/23/microsoftandopenaiextendpartnership/, 2023.
* [68] Microsoft. Our approach. https://www.microsoft.com/en-us/ai/our-approach, 2023.
* [69] B. Mittelstadt. Principles alone cannot guarantee ethical AI. Nature machine intelligence, 1(11):501–507, 2019.
* [70] S. Mohamed, M.-T. Png, and W. Isaac. Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33:659–684, 2020.
* [71] J. Mökander and L. Floridi. Operationalising AI governance through ethics-based auditing: An industry case study. AI and Ethics, pages 1–18, 2022.
* [72] J. Mökander, J. Morley, M. Taddeo, and L. Floridi. Ethics-based auditing of automated decision-making systems: Nature, scope, and limitations. Science and Engineering Ethics, 27(44), 2021.
* [73] J. Morley, A. Elhalal, F. Garcia, L. Kinsey, J. Mökander, and L. Floridi. Ethics as a service: a pragmatic operationalisation of AI ethics. Minds and Machines, 31(2):239–256, 2021.
* [74] J. Morley, L. Floridi, L. Kinsey, and A. Elhalal. From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Science and Engineering Ethics, 26(4):2141–2168, 2020.
* [75] J. Mökander, J. Schuett, H. R. Kirk, and L. Floridi. Auditing large language models: A three-layered approach. arXiv preprint arXiv:2302.08500, 2023.
* [76] W. Naudé and N. Dimitri. The race for an artificial general intelligence: implications for public policy. AI & Society, 35:367–379, 2020.
* [77] J. P. Near and M. P. Miceli. Effective whistle-blowing. Academy of management review, 20(3):679–708, 1995.
* [78] J. Newman. Decision points in AI governance. Center for Long-Term Cybersecurity, UC Berkeley, 2020.
* [79] R. Ngo, L. Chan, and S. Mindermann. The alignment problem from a deep learning perspective. arXiv preprint arXiv:2209.00626, 2023.
* [80] NIST. Artificial Intelligence Risk Management Framework (AI RMF 1.0), 2023.
* [81] OpenAI. Best practices for deploying language models. https://openai.com/blog/best-practices-for-deploying-language-models, 2022.
* [82] OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
* [83] OpenAI. OpenAI and Microsoft extend partnership, 2023.
* [84] T. Ord. The precipice: Existential risk and the future of humanity. Hachette Books, 2020.
* [85] Oversight Board. Securing ongoing funding. https://www.oversightboard.com/news/1111826643064185-securing-ongoing-funding-for-the-oversight-board/, 2022.
* [86] Oversight Board. https://www.oversightboard.com, 2023.
* [87] Oversight Board. Bylaws. https://www.oversightboard.com/sr/governance/bylaws, 2023.
* [88] Oversight Board. Charter. https://oversightboard.com/attachment/494475942886876/, 2023.
* [89] Oversight Board. Our commitment. https://www.oversightboard.com/meet-the-board/, 2023.
* [90] Oversight Board. Trustees. https://www.oversightboard.com/governance, 2023.
* [91] Partnership on AI. Managing the risks of AI research, 2021.
* [92] E. Perez, S. Huang, F. Song, T. Cai, R. Ring, J. Aslanides, A. Glaese, N. McAleese, and G. Irving. Red teaming language models with language models. arXiv preprint arXiv:2202.03286, 2022.
* [93] M. Petermann, N. Tempini, I. K. Garcia, K. Whitaker, and A. Strait. Looking before we leap. Ada Lovelace Institute, 2022.
* [94] K. Piper. Google’s brand-new AI ethics board is already falling apart. Vox, 2019.
* [95] Policing Project. Reports of the Axon AI ethics board. https://www.policingproject.org/axon, 2020.
* [96] E. A. Posner and E. G. Weyl. Quadratic voting as efficient corporate governance. The University of Chicago Law Review, 81(1):251–272, 2014.
* [97] I. D. Raji and J. Buolamwini. Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pages 429–435, 2019.
* [98] I. D. Raji, P. Xu, C. Honigsberg, and D. Ho. Outsider oversight: Designing a third party audit ecosystem for AI governance. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, pages 557–571, 2022.
* [99] J. Rando, D. Paleka, D. Lindner, L. Heim, and F. Tramèr. Red-teaming the Stable Diffusion safety filter. arXiv preprint arXiv:2210.04610, 2022.
* [100] J. Sandbrink, H. Hobbs, J. Swett, A. Dafoe, and A. Sandberg. Differential technology development: A responsible innovation principle for navigating technology risks. SSRN, 2022.
* [101] R. Sandler, J. Basl, and S. Tiell. Building data and AI ethics committees. Accenture & Northeastern University, 2019.
* [102] J. Schuett. Three lines of defense against risks from AI. arXiv preprint arXiv:2212.08364, 2022.
* [103] J. Schuett. Risk management in the Artificial Intelligence Act. European Journal of Risk Regulation, pages 1–19, 2023.
* [104] E. Seger. In defence of principlism in AI ethics and governance. Philosophy & Technology, 35(2):45, 2022.
* [105] J. Sevilla, L. Heim, A. Ho, T. Besiroglu, M. Hobbhahn, and P. Villalobos. Compute trends across three eras of machine learning. arXiv preprint arXiv:2202.05924, 2022.
* [106] T. Shevlane. Structured access: An emerging paradigm for safe AI deployment. In The Oxford Handbook of AI Governance, 2022.
* [107] T. Shevlane and A. Dafoe. The offense-defense balance of scientific knowledge: Does publishing AI research reduce misuse? In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pages 173–179, 2020.
* [108] D. Silver, S. Singh, D. Precup, and R. S. Sutton. Reward is enough. Artificial Intelligence, 299, 2021.
* [109] R. Smith. Axon committed to listening and learning so that we can fulfill our mission to protect life, together. https://www.axon.com/news/technology/axon-committed-to-listening-and-learning, 2022.
* [110] I. Solaiman. The gradient of generative AI release: Methods and considerations. arXiv preprint arXiv:2302.04844, 2023.
* [111] I. Solaiman, M. Brundage, J. Clark, A. Askell, A. Herbert-Voss, J. Wu, A. Radford, G. Krueger, J. W. Kim, S. Kreps, M. McCain, A. Newhouse, J. Blazakis, K. McGuffie, and J. Wang. Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203, 2019.
* [112] M. Srikumar, R. Finlay, G. Abuhamad, C. Ashurst, R. Campbell, E. Campbell-Ratcliffe, H. Hongo, S. R. Jordan, J. Lindley, A. Ovadya, et al. Advancing ethics review practices in AI research. Nature Machine Intelligence, 4(12):1061–1064, 2022.
* [113] N. N. Taleb. The Black Swan: The Impact of the Highly Improbable. Random House, 2007.
* [114] V. Thomas, J. Duda, and T. Maurer. Independence with a purpose: Facebook’s creative use of Delaware’s purpose trust statute to establish independent oversight. Business Law Today, 2019.
* [115] S. Tiell. Create an ethics committee to keep your AI initiative in check. Harvard Business Review, 15, 2019.
* [116] F. Urbina, F. Lentzos, C. Invernizzi, and S. Ekins. Dual use of artificial-intelligence-powered drug discovery. Nature Machine Intelligence, 4(3):189–191, 2022.
* [117] M. B. Van Asselt and O. Renn. Risk governance. Journal of Risk Research, 14(4):431–449, 2011.
* [118] K. Walker. An external advisory council to help advance the responsible development of AI. https://blog.google/technology/ai/external-advisory-council-help-advance-responsible-development-ai/, 2019.
* [119] R. Waters and M. Kruppa. Rebel AI group raises record cash after machine learning schism. https://www.ft.com/content/8de92f3a-228e-4bb8-961f-96f2dce70ebb, 2021.
* [120] L. Weidinger, J. Mellor, M. Rauh, C. Griffin, J. Uesato, P.-S. Huang, M. Cheng, M. Glaese, B. Balle, A. Kasirzadeh, Z. Kenton, S. Brown, W. Hawkins, T. Stepleton, C. Biles, A. Birhane, J. Haas, L. Rimell, L. A. Hendricks, W. Isaac, S. Legassick, G. Irving, and I. Gabriel. Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359, 2021.
* [121] D. Wong and L. Floridi. Meta’s Oversight Board: A review and critical assessment. Minds and Machines, pages 1–24, 2022.
* [122] E. Yudkowsky. Cognitive biases potentially affecting judgment of global risks. In Global catastrophic risks, pages 91–119, 2008.
* [123] J. Zhou and F. Chen. AI ethics: From principles to practice. AI & Society, pages 1–11, 2022.
# Coherent cohomology of Shimura varieties, motivic cohomology, and
archimedean $L$-packets
Gyujin Oh
###### Abstract.
We formulate an analogue of the archimedean motivic action conjecture of
Prasanna–Venkatesh for _irregular_ cohomological automorphic forms on Shimura
varieties, which appear on multiple degrees of coherent cohomology of Shimura
varieties. Such multiple appearances are due to many infinity types in a
single $L$-packet with equal minimal $K$-types. Accordingly, we formulate the
conjecture comparing periods of forms of _different automorphic
representations_. We provide evidences for the conjecture by showing its
compatibility with existing conjectures on periods of automorphic forms. The
conjectures suggest the existence of certain operations which move between
different infinity types in an $L$-packet.
###### Contents
1. 1 Introduction
1. 1.1 Archimedean $L$-packet and the matter of choosing automorphic realizations
2. 1.2 Comparison of periods of different automorphic representations
3. 1.3 Generalized complex conjugations
4. 1.4 Summary
5. 1.5 Problems and questions
6. 1.6 Notation
2. 2 Archimedean $L$-packets and the motivic action conjecture
1. 2.1 $(\mathfrak{p},K)$-cohomology and automorphic forms
2. 2.2 Metrics on cohomology
3. 2.3 Archimedean motivic action conjecture for Shimura varieties
4. 2.4 An approach towards the motivic action conjecture
3. 3 Evidence I: The case of $\operatorname{Sp}_{4}$
1. 3.1 Whittaker periods via cohomological period integrals
2. 3.2 Hodge-linear algebra
3. 3.3 Completion of the proof
4. 4 Evidence II: The case of $\operatorname{SU}(2,1)$
1. 4.1 Whittaker periods via cohomological period integrals
2. 4.2 Hodge-linear algebra
3. 4.3 Completion of the proof
5. 5 Towards motivic action conjecture for rationality of classes
1. 5.1 The case of Hilbert modular forms: comparison with [Ho]
2. 5.2 Desiderata for generalized complex conjugations
6. A Beilinson’s conjecture over a general number field
1. A.1 Chow motives
2. A.2 Beilinson’s conjecture for Chow and Grothendieck motives
7. B Deligne cohomology and Lie algebra cohomology
1. B.1 Nondegenerate limit of discrete series as constituents of reducible principal series
2. B.2 Action of the $\operatorname{Ext}$-space
3. B.3 Deligne cohomology as an $\operatorname{Ext}$-space
## 1\. Introduction
The motivic action conjecture of Venkatesh posits that, roughly speaking, for
a Hecke eigensystem $h$, there is a natural action of the motivic cohomology
of the adjoint motive associated to $h$ on the $h$-isotypic part of the
rational cohomology of locally symmetric spaces. This has many incarnations
which shed new light on various parts of the Langlands program. However, the
conjectures have been mostly restricted to the case of “$\delta\neq 0$,”
namely when the reductive group $G$ in question has no compact Cartan subgroup.
This in particular excludes the case when the locally symmetric space is a
_Shimura variety_. Since locally symmetric spaces that are not Shimura
varieties have so far not been related to algebraic geometry (although see
[GT]), the motivic action conjectures have seemed extremely difficult to approach.
On the other hand, there have been expectations that a similar conjecture
would exist for automorphic forms over Shimura varieties with _irregular
weight_. The easiest instance is the case of weight one modular forms; they
appear in both $H^{0}$ and $H^{1}$ of the same line bundle on the modular
curve. The main purpose of the paper is to formulate such a conjecture for
general Shimura varieties, and to provide somewhat intricate evidence using
well-known results and conjectures regarding periods of automorphic forms. The
following is a generalization of the archimedean motivic action conjecture to
general Shimura varieties.
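Before stating it, we record a sanity check for the weight-one example above. This is a standard computation (sketched here under the usual identifications: $X$ a modular curve, $\omega$ the weight-one automorphic line bundle, and the Kodaira–Spencer isomorphism $\Omega^{1}_{X}\cong\omega^{\otimes 2}(-\mathrm{cusps})$); Serre duality gives
$H^{1}(X,\omega)^{\vee}\cong H^{0}(X,\Omega^{1}_{X}\otimes\omega^{-1})\cong H^{0}(X,\omega(-\mathrm{cusps})),$
so a weight-one cuspidal Hecke eigensystem, realized in $H^{0}(X,\omega)$ by the form itself, also appears (up to duality) in $H^{1}(X,\omega)$: the same eigensystem occupies two consecutive degrees of coherent cohomology of a single line bundle.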
###### Conjecture 1 (Archimedean motivic action conjecture for Shimura varieties).
Let $\lambda$ be a nondegenerate singular analytically integral character, and
let $\Pi$ satisfy Assumption 2.5, with
$\Pi_{\infty}\in\mathfrak{P}_{\lambda}$. Let
$\mathcal{M}=H_{M}^{1}((\operatorname{Ad}\Pi)_{\mathcal{O}_{E}},\overline{\mathbb{Q}}(1))$
and
$\mathcal{H}^{i}=H^{i}(X_{G}(\Gamma)_{\overline{\mathbb{Q}}},[V])[\Pi_{f}]$,
where both are regarded as $\overline{\mathbb{Q}}$-vector spaces equipped with
Hermitian bilinear forms, induced from a fixed admissible bilinear form on
$\mathfrak{g}_{\mathbb{C}}$ (see §2.2). Then, there is an isometry between
graded $\overline{\mathbb{Q}}$-vector spaces equipped with Hermitian metrics,
$\wedge^{*}\mathcal{M}^{*}\otimes\mathcal{H}^{i_{\min}}\cong\bigoplus_{i=i_{\min}}^{i_{\max}}\mathcal{H}^{i},$
where $V$ is the automorphic vector bundle coming from the Levi (see Notation)
such that $\Pi_{f}$ appears in multiple degrees of its cohomology,
$i_{\min}=\min\{i\mid H^{i}(X_{G}(\Gamma),[V])[\Pi_{f}]\neq 0\}$, and
$i_{\max}$ is defined analogously.
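For orientation, here is our own degreewise unwinding of the graded isometry (a formal consequence of the statement, not an additional claim): since both sides are graded, the isometry reads
$\wedge^{j}\mathcal{M}^{*}\otimes\mathcal{H}^{i_{\min}}\cong\mathcal{H}^{i_{\min}+j},\qquad 0\leq j\leq i_{\max}-i_{\min},$
which forces $\dim_{\overline{\mathbb{Q}}}\mathcal{M}=i_{\max}-i_{\min}$ whenever $\mathcal{H}^{i_{\max}}\neq 0$. In particular, if $\Pi_{f}$ appears in exactly two consecutive degrees, then $\mathcal{M}$ is one-dimensional and a generator carries $\mathcal{H}^{i_{\min}}$ isometrically (up to the metric on $\mathcal{M}^{*}$) onto $\mathcal{H}^{i_{\min}+1}$.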
Conjecture 1 is the analogue of [PV, Conjecture 1.2.1], although new subtleties arise
in the irregular weight case as we will see. Under some standard conjectures
on periods, we show that the conjecture indeed holds in a few low-dimensional
cases.
###### Theorem 2.
Under certain mild conditions and several standard conjectures on periods (see
below), for $G=\operatorname{Sp}_{4}$ and $\operatorname{SU}(2,1)$, Conjecture
1 is true. In other words, the action of adjoint motivic cohomology group on
the coherent cohomology groups of Shimura variety respects
$\overline{\mathbb{Q}}^{\times}$-structure.
The “mild conditions” are that, informally speaking, the finite part is
globally generic and that there are newforms; see Assumption 2.5. These
conditions exist to allow a clean statement of the conjectures. More
importantly, the “standard conjectures on periods” are summarized in
Assumption 2.22, which includes the Lapid–Mao conjecture and the Beilinson
conjectures. The proof of Theorem 2.18 uses the same idea as [PV], but
requires more machinery, as the whole setup is about comparing periods of
_different automorphic representations_.
The cases where Theorem 2 is proved are when the Hecke eigensystem appears in
two consecutive degrees of cohomology. In this case, or more generally, when
we restrict the statement of Conjecture 1 into the relation between the top
and bottom degrees, the conjecture can be made into a statement that does not
refer to motivic cohomology.
More precisely, let $\Pi=\Pi_{f}\otimes\Pi_{\infty}$ be a cuspidal automorphic
representation satisfying Assumption 2.5, and let $\lambda$ be the
infinitesimal character of $\Pi_{\infty}$, which is singular and nondegenerate
(see Notation). Let $V$ be the automorphic vector bundle coming from the Levi
(see Notation), such that $H^{i}(\mathfrak{p},K;V\otimes\Pi_{\infty})\neq 0$
for some $i$, and $\Pi_{\min},\Pi_{\max}$ be the members of the archimedean
$L$-packet of $\Pi_{\infty}$ such that the degree that $\Pi_{\min}$
($\Pi_{\max}$, respectively) has nontrivial $(\mathfrak{p},K)$-cohomology with
coefficient in $V$ is the minimum (maximum, respectively) in the $L$-packet.
Let $i_{\min}$ ($i_{\max}$, respectively) be the degree, and let
$f_{\min}\in\Pi_{f}^{\operatorname{new}}\otimes\Pi_{\min}^{\operatorname{new}}$
and
$f_{\max}\in\Pi_{f}^{\operatorname{new}}\otimes\Pi_{\max}^{\operatorname{new}}$
(see Definition 2.6) such that $[f_{\min}],[f_{\max}]\in
H^{*}(X_{G}(\Gamma),[V])$ (the harmonic Dolbeault forms corresponding to the
automorphic forms; see Definition 2.8 for a precise definition) are defined
over $\overline{\mathbb{Q}}$. Then, under the Beilinson conjectures, the
information on the top and bottom degrees in Conjecture 1 is precisely that
$\frac{\langle f_{\min},f_{\min}\rangle}{\langle
f_{\max},f_{\max}\rangle}\sim_{\overline{\mathbb{Q}}^{\times}\cap\mathbb{R}}\left|\pi^{i_{\min}-i_{\max}}\frac{L_{\infty}(1,\Pi,\operatorname{Ad})}{L_{\infty}(0,\Pi,\operatorname{Ad})}\cdot\frac{L(1,\Pi,\operatorname{Ad})}{\operatorname{vol}F^{1}H_{\operatorname{dR}}(\operatorname{Ad}\Pi)}\right|^{2},$
where $\langle\cdot,\cdot\rangle$ is the Petersson inner product, and the
volume is computed with respect to the metric induced by any weak polarization
(see [PV, §2.2.3])111The volume is independent of the choice of weak
polarization, see [PV, Lemma 2.2.2].. In particular, this statement is
_equivalent to Conjecture 1 if there are only two degrees in which $\Pi_{f}$
appears in the cohomology_, assuming the Beilinson conjectures.
At first sight, the information on the top and bottom degrees might seem less
interesting, as one might guess that there is an extra duality by which one can
compare the top and bottom degrees. Indeed, in [PV], the top and bottom degrees
were complementary, so that they are related via duality. On the other hand, in
the irregular setting, there are numerous cases where the top and bottom
degrees are not complementary. Rather, they are determined by the position of
the infinitesimal character inside the Weyl chambers. Indeed, we work with two
examples, $G=\operatorname{Sp}_{4}$ and $\operatorname{SU}(2,1)$, and in both
cases we work with the choice of $\lambda$ for which the minimum and maximum
degrees of appearance are $H^{0}$ and $H^{1}$, respectively.
### 1.1. Archimedean $L$-packet and the matter of choosing automorphic
realizations
We now explain the new subtleties of the irregular weight case. In view of
automorphic cohomology, the phenomenon of a weight one modular form appearing
in $H^{0}$ and $H^{1}$ actually involves _two different representations_. If
we denote $\omega$ to be the weight one line bundle or the corresponding
representation of $\operatorname{SO}(2)$, then for a weight one modular
newform $f$,
$H^{0}(X,\omega)[f]=H^{0}(\mathfrak{p},\operatorname{SO}(2);D_{0}^{+}\otimes\omega),$
$H^{1}(X,\omega)[f]=H^{1}(\mathfrak{p},\operatorname{SO}(2);D_{0}^{-}\otimes\omega),$
where $\mathfrak{p}=\mathfrak{p}_{-}\oplus\mathfrak{s}\mathfrak{o}(2)$,
$\mathfrak{p}_{-}$ is the anti-holomorphic tangent space and $D_{0}^{+}$ and
$D_{0}^{-}$ are the holomorphic and the antiholomorphic discrete series,
respectively. Indeed, it is $\overline{f}d\overline{z}$ that appears in
$H^{1}(X,\omega)$ as a class in Dolbeault cohomology, and $\overline{f}$ is an
_antiholomorphic_ modular form of weight $-1$.
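In the notation of Theorem-Definition 2.4 below, this says that
$i_{D_{0}^{+}}=0$ and $i_{D_{0}^{-}}=1$, with
$V_{D_{0}^{+}}=V_{D_{0}^{-}}=\omega$; the archimedean $L$-packet
$\\{D_{0}^{+},D_{0}^{-}\\}$ thus accounts for the two degrees of appearance.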
In general, if the same Hecke eigensystem appears in multiple degrees of
cohomology of the same automorphic vector bundle of a Shimura variety, then
each such instance is actually represented by the so-called
$(\mathfrak{p},K)$-cohomology of different automorphic representations. More
precisely, the finite part ($G(\mathbb{A}_{f})$-representation) remains the
same, while the infinity type varies inside an archimedean $L$-packet. Such a
phenomenon happens precisely when the infinitesimal character of the infinity
type lies on the walls of the Weyl chambers.
The action of $\wedge^{*}\mathfrak{a}_{G}^{*}$ in [PV] is most naturally
thought of as the self-$\operatorname{Ext}$-algebra of a representation. On the
other hand, in our case, the representations corresponding to the target and
the source of the action are different, so the action cannot be thought of as
an $\operatorname{Ext}$-_algebra_ , but merely as an $\operatorname{Ext}$-group.
Furthermore, the translation of the action into the automorphic cohomology
context also depends on the choice of _automorphic realization maps
$\Pi_{f}\otimes\Pi_{\infty}\hookrightarrow\mathcal{A}(G)$_ for each
$\Pi_{\infty}$; namely, each realization map can always be scaled by a scalar,
but we want to compare them as a whole. This problem did not arise in _op.
cit._ , as a choice of a single automorphic realization map would rigidify the
situation. In turn, we had no choice but to formulate a slightly weaker
Conjecture 1 that asserts the motivic action conjecture on the level of metric
spaces.
### 1.2. Comparison of periods of different automorphic representations
We briefly explain the strategy of the proof of Theorem 2. As in [PV], we
most notably assume _Beilinson’s conjectures (for Chow motives)_. The main new
feature in this paper is that, because we need to compare periods of two
different representations, we need two different conjectures on periods, one
for each representation. In both cases, we will compare periods of a
_holomorphic automorphic form_ and a _generic automorphic form_. For the
holomorphic form, we will need the _refined Gan–Gross–Prasad conjecture_
(referred to as the Ichino–Ikeda conjecture in _op. cit._), although in some
instances this requirement can be avoided by using the doubling construction
of standard $L$-functions. For the generic form, we will need the _Lapid–Mao
conjecture_ , which relates the value of a Whittaker function to a certain
$L$-value.
One also needs a way to detect rationality of classes for both types of forms.
For the holomorphic forms, we can use Fourier expansions, but for those
appearing in higher coherent cohomology, we need new machinery. We will
develop the so-called _cohomological period integrals_ for higher coherent
cohomology, which realizes integral representations of $L$-functions as cup
product pairings in coherent cohomology. The cohomological interpretation of
such integral representations (or, at least, their appearance in the
literature) is relatively new ([LPSZ], [Oh]).
### 1.3. Generalized complex conjugations
The archimedean motivic action conjecture as stated in [PV] gives a recipe for
rational cohomology classes, whereas our Conjecture 1 is a statement on
metrics. To formulate a similar conjecture in the setting of coherent
cohomology of Shimura varieties, we need a way to rigidify between different
automorphic representations in an archimedean $L$-packet. Indeed, in
retrospect, even in the easy case of modular forms, one needs complex
conjugation to go between holomorphic and antiholomorphic limits of discrete
series. Unfortunately, beyond the case of modular forms, there is no known
general operation that can move between different infinity types. We will
tentatively name such an operation a _generalized complex conjugation_ ,
which should send an automorphic form of a certain infinity type to an
automorphic form of another infinity type in the same $L$-packet. It seems
inevitable to come up with such an operation to formulate the full conjecture
on rational cohomology classes.
The generalized complex conjugations should be naturally understood in the
context of a “derived” local-global compatibility in some sense, and their
existence is also suggested by the existence of similar operations in the
analogous settings over the $p$-adic fields (Kottwitz’s conjectures, e.g.
[FaMa]) and over the function fields (excursion operators, e.g. [Laf]).
Following the suggestions of Joseph Wolf, we will investigate the nature of
generalized complex conjugations using the theory of Penrose transforms.
On the other hand, if the associated Hermitian symmetric space is a product of
copies of the upper half planes, one can come up with an operation that
changes one infinity type to another by taking complex conjugation at certain
variables. This is a _partial complex conjugation_ , studied in [Ha2]. Using
partial complex conjugations, we can formulate a conjecture on rationality of
cohomology classes in the case of Hilbert modular forms of partial weight one.
There exists prior work of [Ho] on the motivic action conjecture for Hilbert
modular forms of parallel weight one, which similarly uses partial complex
conjugations. We compare our conjecture in the Hilbert modular form case with
the conjecture of [Ho], and explain that the evidence given in _op. cit._ is
also consistent with our conjecture.
### 1.4. Summary
In §2, we take an efficient route to the statement of the Archimedean motivic
action conjecture (Conjecture 2.13) and its more accessible variant, the
Period conjecture (Conjecture 2.15). The objective of the section is to set
the conjecture in context. We in particular defer the abstract discussion of
how to derive the conjectures, parallel to that of [PV, §2-§5], to later
sections, as it requires more advanced representation theory of real groups.
In §3 and §4, we provide our main evidence for the Period conjecture
(Conjecture 2.15). Similarly to [PV, §7], we prove that, for
$G=\operatorname{Sp}_{4}$ and $\operatorname{SU}(2,1)$, the Period conjecture
is compatible with several well-accepted conjectures on periods of automorphic
forms, such as Beilinson’s conjectures, the Lapid–Mao conjecture, and the
refined Gan–Gross–Prasad conjectures. To use these, we review how certain
period integrals can be interpreted as cup product pairings of (higher)
coherent cohomology classes on Shimura varieties.
In §5, we discuss the issue of formulating a motivic action conjecture on
rationality of coherent cohomology classes. Most notably, we suggest the
notion of _generalized complex conjugations_ , which move between different
members of a single archimedean $L$-packet. In §5.1, we formulate a precise
conjecture in the case of Hilbert modular forms using partial complex
conjugations, and compare our conjecture with the conjecture of [Ho]. In §5.2,
we spell out conditions that the generalized complex conjugations should
satisfy, and formulate the full conjecture assuming their existence. In
Appendix A, we review the formulation of Beilinson’s conjecture for motives
over a general number field, as many references state the conjecture for only
$\mathbb{Q}$-motives. Finally, we develop representation-theoretic background
in Appendix B, parallel to [PV, §2-§4]. Although Appendix B is independent of
the development of the rest of the paper, it is suggestive of a correct
foundation in which the motivic action conjectures should be developed.
### 1.5. Problems and questions
There are several interesting questions that arise in this work.
1. (1)
Place the generalized complex conjugations in the context of some form of
“derived” local-global compatibility, motivated by the strong form of Arthur
conjectures as realized in the function field case via excursion operators as
in [Laf]. A correct formulation should be in accordance with the existing
statements of derived local-global compatibility as in [Fe] and [Zhu].
2. (2)
The compatibility between the Period Conjecture, Conjecture 2.15, and the
existing period conjectures relies on archimedean zeta integrals that are yet
to be calculated. These calculations can be carried out using explicit
integral formulae for (generalized) Whittaker functions, e.g. [KO], [Od].
3. (3)
As we deal with motives over a more general number field and Shimura varieties
over a number field other than $\mathbb{Q}$, in every aspect of our
discussion, the choice of a complex embedding is always implicit. In
particular, there must be a relation between the conjectures in this paper for
the _conjugates of Shimura varieties_ (e.g. [Va]).
4. (4)
It seems extremely hard to detect rationality of coherent cohomology classes
if the Hecke operators can only cut a space that is of dimension larger than
one. For example, if $X$ is a Hilbert modular surface, and if $\omega$ is the
parallel weight one line bundle, then it seems extremely difficult to
determine whether a class in $H^{1}(X,\omega)$ is defined over
$\overline{\mathbb{Q}}$, or even to produce a class in it.
5. (5)
It is expected that the motivic action conjecture will involve $L$-packets
even in the case of $\delta>0$. It may be possible to formulate the conjecture
for the same eigensystem appearing in cohomology with _different
coefficients_.
### 1.6. Notation
Let $G$ be a connected reductive algebraic group over $\mathbb{Q}$. For
simplicity, let us assume that $G$ is quasisplit, $G(\mathbb{R})$ is
connected, and that the center of $G$ does not have a nontrivial
$\mathbb{R}$-split torus. Also, we assume that there exists a _twisting
element_ in the sense of [BG, Definition 5.2.1]222This is to avoid the
subtlety of difference between $C$-algebraicity and $L$-algebraicity.. Let
$\mathfrak{g}_{\mathbb{Q}}$ be the $\mathbb{Q}$-Lie algebra of $G$, and let
$\mathfrak{g}_{\mathbb{R}},\mathfrak{g}_{\mathbb{C}}$ be its base change to
$\mathbb{R}$ and $\mathbb{C}$, respectively. We occasionally drop the
subscript for $\mathbb{C}$. Let $W_{G}$ be the Weyl group of $G$. We fix an
invariant, $\theta$-invariant, $\mathbb{R}$-valued bilinear form $B$ on
$\mathfrak{g}_{\mathbb{R}}$, such that $B(X,\theta(X))$ is negative definite,
where $\theta$ is the Cartan involution. We will use this to talk about inner
products on the weight space, the Riemannian metric on the Hermitian symmetric
domain, etc. In specific examples, we may and will choose $B$ to induce a
preferred Riemannian metric on the Hermitian symmetric space (for example, one
may want the Riemannian metric to be $dxdy$ on the upper half plane
$\mathbb{H}=\\{x+iy\mid y>0\\}$). We also write $B$ for any other bilinear form
induced from $B$.
Suppose further that $G$ gives rise to a _Shimura variety_ , which means that
there is a symmetric space $X$ for $G(\mathbb{R})$ which can be endowed with a
structure of Hermitian symmetric domain (which we will fix). Fix a point $h\in
X$, which gives rise to a _Hodge cocharacter_
$h:\mathbb{S}=\operatorname{Res}_{\mathbb{C}/\mathbb{R}}\mathbb{G}_{m,\mathbb{C}}\rightarrow
G_{\mathbb{R}}$ which in turn induces a real Hodge structure of weight $0$ on
$\mathfrak{g}_{\mathbb{R}}$,
$\mathfrak{g}=\mathfrak{g}^{-1,1}\oplus\mathfrak{g}^{0,0}\oplus\mathfrak{g}^{1,-1}$.
Given an open compact subgroup $\Gamma\subset G(\mathbb{A}_{f})$, there exists
a quasi-projective variety $Y_{G}(\Gamma)$, a Shimura variety, defined over a
number field $E$, whose analytification333Note that the choice of a point in a
Hermitian symmetric domain gives the reflex field as a subfield of
$\mathbb{C}$, so there is a _preferred complex embedding_ ; e.g. [Va, Notation
4.6]. In particular, one can expect that the statement of the conjecture
depends a priori on the choice of a Hermitian symmetric domain. It could be
interesting to check if our conjecture is consistent with conjugation of
Shimura varieties. is isomorphic to the double quotient
$G(\mathbb{Q})\backslash(X\times G(\mathbb{A}_{f})/\Gamma)$.
Let $K\subset G(\mathbb{R})$ be the stabilizer of $h$. Let $T$ be the Cartan
subgroup of $K$. Then, $X\cong G(\mathbb{R})/K$ and
$\mathfrak{g}^{0,0}=\mathfrak{k}:=\operatorname{Lie}(K)_{\mathbb{C}}$. We
denote $\mathfrak{p}_{+}=\mathfrak{g}^{-1,1}$,
$\mathfrak{p}_{-}=\mathfrak{g}^{1,-1}$ and
$\mathfrak{p}=\mathfrak{k}\oplus\mathfrak{p}_{-}$. Then, $\mathfrak{p}$ is a
parabolic subalgebra of $\mathfrak{g}$, giving rise to a parabolic subgroup
$P\subset G_{\mathbb{C}}$ with $\operatorname{Lie}P=\mathfrak{p}$. We also fix
once and for all a positive system of roots for $\mathfrak{k}$. The
holomorphic tangent space of $X$ at $h$ is identified with $\mathfrak{p}_{+}$,
so there is a $G(\mathbb{R})$-equivariant embedding of complex manifolds
$X\rightarrow\check{D}:=G(\mathbb{C})/P(\mathbb{C})$, sending $h\mapsto
P(\mathbb{C})$. In this regard, $K=G(\mathbb{R})\cap P(\mathbb{C})$, and
$K(\mathbb{C})$ is the Levi subgroup of $P(\mathbb{C})$. Also, any finite-
dimensional holomorphic representation $V$ of $P(\mathbb{C})$ gives rise to an
algebraic vector bundle over $Y_{G}(\Gamma)$, an _automorphic vector bundle_ ,
denoted $[V]$, which is an algebraization of the pullback of the holomorphic
vector bundle on $X$ which in turn is the restriction of the vector bundle
$G(\mathbb{C})\times^{P(\mathbb{C})}V\rightarrow\check{D}$ on $\check{D}$. If
$V$ factors through $K(\mathbb{C})$, namely if it is induced from a
representation of $K(\mathbb{C})$, we will call $[V]$ an automorphic vector
bundle _coming from the Levi_. If not, we will call $V$ _nearly_ , following
[LPSZ].
To save space, we may abbreviate some words with repeated appearances:
discrete series into DS, limit of discrete series into LDS, and nondegenerate
limit of discrete series (see the paragraph before Theorem 2.4 for its
definition) into NLDS. All representations of real groups are thought of as
$(\mathfrak{g},K)$-modules.
For automorphic representations, their $L$-functions are normalized so that
$\frac{1}{2}$ is the center of symmetry. For pure motives, their
$L$-functions are normalized so that, if $w$ is the weight, $\frac{w+1}{2}$ is
the center of symmetry. For both kinds of $L$-functions, $w$ is called the
_motivic weight_ of the $L$-function. For an automorphic representation
$\Pi=\Pi_{f}\otimes\Pi_{\infty}$ of $G(\mathbb{A})$, the field of rationality
$F_{\Pi}$ is the fixed field of the stabilizer of the isomorphism class of
$\Pi_{f}$ as a $G(\mathbb{A}_{f})$-representation ([Cl, §3.1]). An inner
product on the space of automorphic forms, denoted $\mathcal{A}(G)$, can be
given as either the $L^{2}$-norm on $G(\mathbb{Q})\backslash G(\mathbb{A})$
with respect to the Tamagawa measure or the measure coming from the Riemannian
metric of the symmetric space, as the two norms differ by a common scalar
factor. The second norm is the same as the usual _Petersson norm_ , which we
will denote by $\langle,\rangle_{P}$.
For the integral representations, we fix a nontrivial additive character
$\psi=\prod_{p}\psi_{p}$ of $\mathbb{A}/\mathbb{Q}$. For a cuspidal
automorphic form $\varphi=\prod_{p}\varphi_{p}$ of a cuspidal automorphic
representation $\pi=\bigotimes_{p}^{\prime}\pi_{p}$ of $G(\mathbb{A})$, its
Whittaker transform $W_{\varphi}(g)$ is defined as
$W_{\varphi}(g)=\int_{N_{G}(\mathbb{Q})\backslash
N_{G}(\mathbb{A})}\varphi(ng)\psi(n^{-1})dn$, for some choice of $N_{G}$ that
needs to be specified when talking about Whittaker models. Locally, a generic
$G(\mathbb{Q}_{p})$-representation $\pi_{p}$ is isomorphic to a space of
functions
$\mathcal{W}_{p}:=\\{W_{p}:G(\mathbb{Q}_{p})\rightarrow\mathbb{C}\mid
W_{p}(ng)=\psi_{p}(n)W_{p}(g)\text{ for }n\in N_{G}(\mathbb{Q}_{p})\\}$; one
can choose isomorphisms (local _Whittaker models_)
$\pi_{p}\xrightarrow{\sim}\mathcal{W}_{p}$, $f_{p}\mapsto W_{f_{p}}$, that are
compatible with the global Whittaker model; namely,
$W_{\varphi}(g)=\prod_{p}W_{\varphi_{p}}(g_{p})$.
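For orientation, consider $G=\operatorname{GL}_{2}$ over $\mathbb{Q}$, a case
not treated in this paper and included only as an illustration: if $\varphi$
is the usual adelic lift of a holomorphic newform $f=\sum_{n\geq 1}a_{n}q^{n}$
of weight $k$, then, up to the standard normalization conventions for the lift
and for $\psi$,
$W_{\varphi}\left(\left(\begin{smallmatrix}y&x\\\ 0&1\end{smallmatrix}\right)_{\infty}\right)=a_{1}\,y^{k/2}e^{2\pi ix}e^{-2\pi y}\qquad(y>0),$
so that the Whittaker transform records the Fourier expansion of $f$.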
We use notational convention for motives as in [PV, §2], except that we will
deal with motives over a more general number field. In that case, we put the
complex embedding in the subscript, such as $M_{\sigma}$,
$\operatorname{comp}_{B,\operatorname{dR},\sigma}$, etc.
## 2\. Archimedean $L$-packets and the motivic action conjecture
In this section, we take the shortest path to the statement of the Archimedean
motivic action conjecture for Shimura varieties, Conjecture 2.13. More
abstract justification of the formulation of the Conjecture, including a
parallelism between [PV] and our conjecture, is discussed in Appendix B and
§5.2.
### 2.1. $(\mathfrak{p},K)$-cohomology and automorphic forms
Firstly, we quickly review how coherent cohomology of Shimura varieties is
related to automorphic forms via the theory of $(\mathfrak{p},K)$-cohomology.
Recall that, as the singular cohomology of locally symmetric spaces can be
calculated in terms of $(\mathfrak{g},K)$-cohomology, the coherent cohomology
of Shimura varieties can be calculated in terms of the so-called
$(\mathfrak{p},K)$-cohomology. By reinterpreting what the Dolbeault cohomology
calculates in the setting of Shimura varieties, one gets the following
###### Proposition 2.1 ((See [Su, (2.12)])).
We have
$H^{i}(Y_{G}(\Gamma),[V])\cong
H^{i}(\mathfrak{p},K;C^{\infty}(G(\mathbb{Q})\backslash
G(\mathbb{A})/\Gamma)^{K\mathrm{-finite}}\otimes V),$
where the left hand side is analytic cohomology, and $V$ is understood as a
$(\mathfrak{p},K)$-module with trivial $\mathfrak{p}$-action.
Furthermore, there is an analogue of Franke’s theorem for coherent cohomology
of Shimura varieties.
###### Theorem 2.2 ((Su, [Su, Theorem 6.7])).
For any sufficiently refined polyhedral cone decomposition $\Sigma$, there is
a natural Hecke-equivariant isomorphism
$H^{i}(X_{G}^{\Sigma}(\Gamma),[V]^{\operatorname{can}})\cong
H^{i}(\mathfrak{p},K;\mathcal{A}(G)^{\Gamma}\otimes V),$
where $X_{G}^{\Sigma}(\Gamma)$ is the corresponding toroidal compactification,
$[V]^{\operatorname{can}}$ is the canonical extension of $[V]$, and
$\mathcal{A}(G)$ is the space of automorphic forms, namely the space of right
$K$-finite, $Z(\mathfrak{g})$-finite smooth functions on
$G(\mathbb{Q})\backslash G(\mathbb{A})$ of moderate growth.
###### Remark 2.3.
We will only be interested in the part of coherent cohomology localized at a
cuspidal Hecke eigensystem, so it is unlikely that the full power of Su’s
theorem is required.
The calculation of coherent cohomology is therefore a matter of computing the
$(\mathfrak{p},K)$-cohomology of automorphic representations. As far as the
coherent cohomology is concerned, the choice of $\Sigma$ is immaterial, so we
may occasionally drop the superscript $\Sigma$ when there is no risk of
confusion.
We will be interested in the situation where a Hecke eigensystem appears in
multiple degrees of coherent cohomology. Namely, for an admissible
$G(\mathbb{A}_{f})$-representation $\Pi_{f}$ and an automorphic vector bundle
$\mathcal{E}$ of $Y_{G}(\Gamma)$, we are interested in
$H^{*}(X_{G}(\Gamma),\mathcal{E}^{\operatorname{can}})[\Pi_{f}].$
By Theorem 2.2, we have a canonical isomorphism
$H^{*}(X_{G}(\Gamma),\mathcal{E}^{\operatorname{can}})\cong\left(\bigoplus_{\Pi_{f}\otimes\Pi_{\infty}\subset\mathcal{A}(G)}H^{*}(\mathfrak{p},K;\Pi_{\infty}\otimes
E)\right)\otimes_{\mathbb{C}}\Pi_{f}^{\Gamma},$
where the sum runs over all automorphic representations with the finite part
being $\Pi_{f}$, and $E$ is the algebraic $P(\mathbb{C})$-representation such
that $[E]=\mathcal{E}$. Thus, it is possible that _several different
$\Pi_{\infty}$’s can appear in the decomposition_, if
$H^{*}(\mathfrak{p},K;\Pi_{\infty}\otimes E)\neq 0$ for several different
$\Pi_{\infty}$’s. Indeed, this can be the case if, for example, some
$\Pi_{\infty}$ is a nondegenerate limit of discrete series (NLDS) and not a
discrete series (DS); recall that a _nondegenerate limit of discrete series_
is a limit of discrete series whose infinitesimal character is not orthogonal
to any compact root. Unlike the case of “$\delta>0$” as in [PV], the
appearance of a single Hecke eigensystem in multiple cohomological degrees in
our setting necessarily implies, by the following theorem, that there are
_many different archimedean representations involved_ :
###### Theorem-Definition 2.4 ((See [VZ], [Sc2])).
Let $\Pi_{\infty}$ be the $(\mathfrak{g},K)$-module associated to a DS or a
NLDS representation of $G(\mathbb{R})$. Then, there is a unique $0\leq
i\leq\dim X$ and a finite-dimensional irreducible $K$-representation $V$ such
that $H^{i}(\mathfrak{p},K;\Pi_{\infty}\otimes V)\neq 0$. Furthermore,
$\dim_{\mathbb{C}}H^{i}(\mathfrak{p},K;\Pi_{\infty}\otimes V)=1$. We will
denote $i_{\Pi_{\infty}}$ and $V_{\Pi_{\infty}}$ for the $i$ and $V$
corresponding to $\Pi_{\infty}$.
Indeed, the above theorem says that a single archimedean representation can
only contribute to a single degree. We will see that the _raison d’être_ of
the appearance of a Hecke eigensystem in multiple degrees is that the infinity
type $\Pi_{\infty}$ can change within an archimedean $L$-packet without
changing the finite part. We will see in detail in Appendix B how an
archimedean $L$-packet (rather than a single $G(\mathbb{R})$-representation)
appears in the context of the motivic action. For now, we move on to the
formulation of the “metric” conjecture, which involves neither abstract
representation theory of real groups nor Lie algebra cohomology. From now on,
for the sake of simplicity, we assume the following
###### Assumption 2.5.
Let $\Pi=\Pi_{f}\otimes\Pi_{\infty}$, with $\Pi_{f}=\prod_{p<\infty}\Pi_{p}$,
be a cuspidal automorphic representation with $\Pi_{\infty}$ an NLDS (see
Notation)444This would ensure that $F_{\Pi}$, the field of rationality, is a
number field, by our assumption in Notation that there exists a twisting
element in the sense of [BG, Definition 5.2.1]. Note that the twisting element
indeed exists for $G=\operatorname{Sp}_{4}$ (as it is split and has
simply-connected derived subgroup) and $\operatorname{SU}(2,1)$ (as the
half-sum of positive roots is integral).. We hereafter assume the following:
* •
$\Pi_{f}$ is globally generic.
* •
For each $p<\infty$, there exists a compact open subgroup $\Gamma_{p}\leq
G(\mathbb{Z}_{p})$ such that $\dim_{\mathbb{C}}\Pi_{p}^{\Gamma_{p}}=1$.
* •
If $G=\operatorname{Sp}_{4}$, a holomorphic Siegel modular newform $f_{\Pi}$
in $\Pi_{f}\otimes\Pi_{\infty}^{\operatorname{hol}}$, where
$\Pi_{\infty}^{\operatorname{hol}}$ is a holomorphic (limit of) discrete
series, has a nontrivial special Bessel period $B(f_{\Pi},F)\neq 0$ for some
imaginary quadratic field $F$. Here,
$B(f_{\Pi},F)=\sum_{A=\left(\begin{smallmatrix}a&b/2\\\
b/2&c\end{smallmatrix}\right),4ac-b^{2}=D_{F},A\sim M^{T}AM\text{ for
}M\in\operatorname{SL}_{2}(\mathbb{Z})}a_{A}$, where
$f_{\Pi}(Z)=\sum_{S}a_{S}e^{2\pi i\operatorname{tr}(SZ)}$ is the $q$-expansion
of $f_{\Pi}$, and $-D_{F}$ is the discriminant of $F$.
* •
If $G=\operatorname{SU}(2,1)$, defined using an imaginary quadratic field $K$,
(1) there exists a split place $v$ such that $\Pi_{v}$ is supercuspidal, and
(2) if $p$ is not split in $K$ and $\Pi_{p}$ is not unramified, either
$\Pi_{p}$ is supercuspidal or the stabilizer
$\operatorname{SU}(1,1)(\mathbb{Q}_{p})\subset\operatorname{SU}(2,1)(\mathbb{Q}_{p})$
of an anisotropic vector is compact.
The purpose of Assumption 2.5 is to isolate the effect of the archimedean
$L$-packet phenomenon from other phenomena and to use existing results on the
refined Gan–Gross–Prasad conjectures. We believe that, for example, it would
not be difficult to formulate the conjectures without (2.5).
###### Definition 2.6.
Under Assumption 2.5, let $\Gamma(\Pi_{f})=\Gamma:=\prod_{p}\Gamma_{p}$. Also,
when we say a vector $f\in\Pi$ is a _newform_ , it means
$f=\prod_{p}f_{p}\otimes f_{\infty}$ such that, not only
$f_{p}\in\Pi_{p}^{\Gamma_{p}}$, but also $f_{\infty}$ is a highest weight
vector of the minimal $K$-type of $\Pi_{\infty}$ (which exists, as we will
only be concerned with DS or LDS). We define
$\Pi_{f}^{\operatorname{new}}=\prod_{p}\Pi_{p}^{\Gamma_{p}}$, and also
$\Pi_{\infty}^{\operatorname{new}}$ to be the one-dimensional
$\mathbb{C}$-vector subspace of $\Pi_{\infty}$ generated by highest weight
vectors of the minimal $K$-type. In particular, $f$ being a newform means that
$f\in\Pi_{f}^{\operatorname{new}}\otimes\Pi_{\infty}^{\operatorname{new}}$.
In view of Theorem-Definition 2.4, the above “newform” appears in
$H^{i_{\Pi_{\infty}}}(X_{G}(\Gamma),[V_{\Pi_{\infty}}]^{\operatorname{can}})[\Pi_{f}],$
which we denote by $H^{i_{\Pi_{\infty}}}(X)[\Pi_{f}]$.
Note that a choice of a highest weight vector of $V_{\Pi_{\infty}}$ induces a
natural isomorphism
$H^{i_{\Pi_{\infty}}}(\mathfrak{p},K;\Pi_{\infty}\otimes
V_{\Pi_{\infty}})\cong\Pi_{\infty}^{\operatorname{new}},$
defined as follows. Given $v\in\Pi_{\infty}^{\operatorname{new}}$, define a
$K$-homomorphism
$f:\left(\wedge^{i_{\Pi_{\infty}}}(\mathfrak{p}/\mathfrak{k})\right)\otimes
V_{\Pi_{\infty}}^{*}\rightarrow\Pi_{\infty},$
by sending the highest weight vector (induced from the choice of a highest
weight vector of $V_{\Pi_{\infty}}$ and the roots of $\mathfrak{g}$) of the
highest $K$-type of the source to $v$ and sending all other $K$-types to zero.
This defines a class in
$\operatorname{Hom}_{K}(\wedge^{i_{\Pi_{\infty}}}(\mathfrak{p}/\mathfrak{k}),\Pi_{\infty}\otimes
V_{\Pi_{\infty}})$ which is closed in the corresponding Chevalley–Eilenberg
complex for the $(\mathfrak{p},K)$-cohomology, thus a class in
$H^{i_{\Pi_{\infty}}}(\mathfrak{p},K;\Pi_{\infty}\otimes V_{\Pi_{\infty}})$.
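For the reader’s convenience, recall (cf. [BW]) that the Chevalley–Eilenberg
complex computing $H^{*}(\mathfrak{p},K;W)$ for a $(\mathfrak{p},K)$-module
$W$ has terms
$C^{i}=\operatorname{Hom}_{K}\left(\wedge^{i}(\mathfrak{p}/\mathfrak{k}),W\right),$
with the usual Lie algebra cohomology differential; the homomorphism
constructed above is a closed element of $C^{i_{\Pi_{\infty}}}$ for
$W=\Pi_{\infty}\otimes V_{\Pi_{\infty}}$.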
###### Remark 2.7.
In this paper, each conjecture will concern a fixed finite part $\Pi_{f}$ and
a coefficient vector bundle $V$, so in that context we first fix a choice of
a highest weight vector of $V$ before anything else.
###### Definition 2.8.
For
$f=\prod_{p\leq\infty}f_{p}\in\Pi_{f}^{\operatorname{new}}\otimes\Pi_{\infty}^{\operatorname{new}}$,
we define $[f]\in H^{i_{\Pi_{\infty}}}(X)[\Pi_{f}]$ to be the class
corresponding to
$[f_{\infty}]\otimes\prod_{p<\infty}f_{p}\in
H^{i_{\Pi_{\infty}}}(\mathfrak{p},K;\Pi_{\infty}\otimes
V_{\Pi_{\infty}})\otimes\Pi_{f}^{\operatorname{new}}\subset
H^{i_{\Pi_{\infty}}}(X)[\Pi_{f}],$
where $[f_{\infty}]\in H^{i_{\Pi_{\infty}}}(\mathfrak{p},K;\Pi_{\infty}\otimes
V_{\Pi_{\infty}})$ corresponds to
$f_{\infty}\in\Pi_{\infty}^{\operatorname{new}}$ via the above natural
isomorphism $H^{i_{\Pi_{\infty}}}(\mathfrak{p},K;\Pi_{\infty}\otimes
V_{\Pi_{\infty}})\cong\Pi_{\infty}^{\operatorname{new}}$.
### 2.2. Metrics on cohomology
To formulate the main Conjecture 2.13, we need metrics on the coherent
cohomology of Shimura varieties as well as on motivic cohomology. To define
both, we fix an _admissible bilinear form_ on $\mathfrak{g}_{\mathbb{C}}$ in
the following sense.
###### Definition 2.9 ((Admissible bilinear form)).
An _admissible bilinear form_ $B$ on $\mathfrak{g}_{\mathbb{C}}$ is an
invariant, $\theta$-invariant Hermitian bilinear form on
$\mathfrak{g}_{\mathbb{C}}$ such that the following conditions are satisfied.
1. (1)
It is the natural extension of an $\mathbb{R}$-valued bilinear form on
$\mathfrak{g}_{\mathbb{R}}$ such that $B(X,\theta(X))$ is negative definite,
where $\theta$ is the Cartan involution that fixes
$\mathfrak{k}_{\mathbb{R}}$.
2. (2)
It is $\overline{\mathbb{Q}}$-valued on
$\mathfrak{g}_{\overline{\mathbb{Q}}}$.
For example, if $G(\mathbb{R})$ is semisimple, the extension of the Killing
form as a Hermitian bilinear form on $\mathfrak{g}_{\mathbb{C}}$ is an
admissible bilinear form.
Firstly, the coherent cohomology of toroidal compactifications of Shimura
varieties
$H^{*}(X_{G}(\Gamma),[V]^{\operatorname{can}}),$
can be given a Hermitian metric555This coincides with the metric on the Lie
algebra cohomology (see e.g. [BW, §II.2]), and this will be reviewed later in
Appendix B., induced from a Hermitian metric on the Hermitian symmetric domain
and the automorphic vector bundle (which is always possible by the compactness
of $K$, see e.g. [BKK, §5]), which is in turn induced from our choice of
admissible bilinear form on $\mathfrak{g}_{\mathbb{C}}$. The choice of
Hermitian metric induces Hermitian metrics on the entries of Dolbeault complex
$\mathscr{A}^{0,*}([V]^{\operatorname{can}})$. By taking the formal adjoint
$\overline{\partial}^{*}$ of $\overline{\partial}$, one can define the
Laplacian
$\Delta=\overline{\partial}\overline{\partial}^{*}+\overline{\partial}^{*}\overline{\partial}$
on each entry of Dolbeault complex. By Hodge theory, the Dolbeault cohomology
$H^{i}([V]^{\operatorname{can}})$ is identified with the space of _harmonic
$(0,i)$-forms_ $\mathscr{H}^{i}([V]^{\operatorname{can}})$, which is just the
kernel of the Laplacian $\Delta$. The restriction of the Hermitian metric on
$\mathscr{A}^{0,i}([V]^{\operatorname{can}})$ to the space of harmonic
$(0,i)$-forms gives rise to a Hermitian metric on the Dolbeault cohomology.
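Concretely, and suppressing convergence issues at the boundary, for harmonic
representatives $\alpha,\beta\in\mathscr{H}^{i}([V]^{\operatorname{can}})$ the
resulting metric is the $L^{2}$-pairing
$\langle\alpha,\beta\rangle=\int_{X_{G}(\Gamma)}(\alpha(x),\beta(x))\,d\mu(x),$
where $(\cdot,\cdot)$ is the fibrewise Hermitian metric and $d\mu$ is the
Riemannian volume form, both induced from the fixed admissible bilinear form.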
On the other hand, the motivic cohomology that would have to appear in the
motivic action conjectures is that of the adjoint motive. As in [PV, §4.2], we
assume a conjecture on the existence of adjoint motive.
###### Conjecture 2.10.
For $\Pi=\Pi_{f}\otimes\Pi_{\infty}$ satisfying Assumption 2.5, there exists
an adjoint motive $\operatorname{Ad}\Pi$, in the sense of [PV, Definition
4.2.1], over the reflex field $E$ with coefficients in $F_{\Pi}$.
To endow a Hermitian metric on the adjoint motivic cohomology
$H^{1}_{M}((\operatorname{Ad}\Pi)_{\mathcal{O}_{E}},\overline{\mathbb{Q}}(1))$,
consider the Beilinson regulator for $\operatorname{Ad}\Pi$. Recall that the
Beilinson regulator is a map from motivic cohomology to Deligne cohomology,
$H^{1}_{M}((\operatorname{Ad}\Pi)_{\mathcal{O}_{E}},\mathbb{Q}(1))\rightarrow
H_{\mathscr{D}}^{1}((\operatorname{Ad}\Pi)_{\mathbb{R}},\mathbb{R}(1)),$
where the Deligne cohomology group of a motive $M$ over a number field $k$ is
defined as (see [Ra, (6.1.22)])
$H_{\mathscr{D}}^{i}(M_{\mathbb{R}},A)=\prod_{w\text{ complex places of
$k$}}H_{\mathscr{D}}^{i}(M\times_{k,w}\mathbb{C},A)\times\prod_{w\text{ real
places of $k$}}H_{\mathscr{D}}^{i}(M\times_{k,w}\mathbb{R},A).$
Under the Beilinson’s conjectures, the Beilinson regulator gives rise to an
isomorphism
$H^{1}_{M}((\operatorname{Ad}\Pi)_{\mathcal{O}_{E}},\mathbb{Q}(1))\otimes_{\mathbb{Q}}\mathbb{R}\xrightarrow{\sim}H_{\mathscr{D}}^{1}((\operatorname{Ad}\Pi)_{\mathbb{R}},\mathbb{R}(1)).$
###### Remark 2.11.
Note that the Betti realization, on which the Beilinson regulator depends,
_depends on the complex embedding_ $E\hookrightarrow\mathbb{C}$. In our
discussion, we use the preferred embedding that came with the datum of the
reflex field (see footnote 3). In the following discussions, we always use
this complex embedding.
From §B.3, we know that the target of the Beilinson regulator is identified
with $\widehat{\mathfrak{g}}^{\varphi(W_{\mathbb{C}/\mathbb{R}})}$, where
$\varphi:W_{\mathbb{C}/\mathbb{R}}\rightarrow{}^{L}G$ is the corresponding
Langlands parameter, and that this is identified with a Lie subalgebra of
$\widehat{\mathfrak{t}}$. We define a Hermitian bilinear form on
$\widehat{\mathfrak{t}}$ as the dual of the Hermitian bilinear form we chose
for $\mathfrak{t}$, and this restricts to a Hermitian bilinear form on the
Deligne cohomology.
###### Definition 2.12.
Let $X$ be a finite-dimensional $\overline{\mathbb{Q}}$-vector space, together
with an embedding $\iota:\overline{\mathbb{Q}}\hookrightarrow\mathbb{C}$. A
_Hermitian bilinear form_ on $X$ is a Hermitian metric on
$X\otimes_{\overline{\mathbb{Q}},\iota}\mathbb{C}$.
By [PV, Lemma 2.2.2], the volume of
$H_{M}^{1}((\operatorname{Ad}\Pi)_{\mathcal{O}_{E}},\overline{\mathbb{Q}}(1))$
is in fact independent of the choice of the admissible Hermitian bilinear form.
### 2.3. Archimedean motivic action conjecture for Shimura varieties
Now, we are able to state the Archimedean motivic action conjecture for
Shimura varieties as follows.
###### Conjecture 2.13 ((Archimedean motivic action conjecture for Shimura
varieties, metric version)).
Let $\Pi=\Pi_{f}\otimes\Pi_{\infty}$ satisfy Assumption 2.5, with
$\Pi_{\infty}$ an NLDS but not a DS. Let
$\mathcal{M}=H_{M}^{1}((\operatorname{Ad}\Pi)_{\mathcal{O}_{E}},\overline{\mathbb{Q}}(1))$
and $\mathcal{H}^{i}=H^{i}(X)[\Pi_{f}]$, where both are regarded as
$\overline{\mathbb{Q}}$-vector spaces equipped with a Hermitian bilinear form
(see Definition 2.12), induced from a fixed admissible Hermitian bilinear form
on $\mathfrak{g}_{\mathbb{C}}$. Then, there is an isomorphism of graded
$\overline{\mathbb{Q}}$-vector spaces equipped with Hermitian metrics,
$\wedge^{*}\mathcal{M}^{*}\otimes\mathcal{H}^{i_{\min}}\cong\bigoplus_{i=i_{\min}}^{i_{\max}}\mathcal{H}^{i},$
where $i_{\min}$ and $i_{\max}$ are the bottom and top degrees, respectively,
of appearance of $\Pi_{f}$ in the cohomology $H^{*}(X)[\Pi_{f}]$.
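To orient the reader, we unwind the statement in the simplest case. If
$i_{\max}=i_{\min}+1$, the conjecture in particular predicts that
$\dim_{\overline{\mathbb{Q}}}\mathcal{M}=1$ (as
$\wedge^{2}\mathcal{M}^{*}\otimes\mathcal{H}^{i_{\min}}$ must vanish), and
comparing the norms of $\overline{\mathbb{Q}}$-rational generators
$h_{\min}\in\mathcal{H}^{i_{\min}}$ and $h_{\max}\in\mathcal{H}^{i_{\max}}$
under the isometry gives
$\frac{\|h_{\max}\|^{2}}{\|h_{\min}\|^{2}}\sim_{\overline{\mathbb{Q}}^{\times}\cap\mathbb{R}}\left(\operatorname{vol}\mathcal{M}\right)^{-2},$
where $\operatorname{vol}\mathcal{M}$ is the norm of a
$\overline{\mathbb{Q}}$-rational generator of $\mathcal{M}$. Evaluating
$\operatorname{vol}\mathcal{M}$ via Beilinson’s conjecture is what leads to
Conjecture 2.15 below (see Proposition 2.16).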
###### Remark 2.14.
It seems that, to descend the coefficient field from $\overline{\mathbb{Q}}$
to a number field, one may have to take a field larger than $EF_{\Pi}$ (the
compositum of the reflex field and the field of definition of $\Pi_{f}$) even
in the case of $\operatorname{SL}_{2}(\mathbb{Q})$; see [Ho, Corollary 4.6].
The relationship between the above Conjecture and the philosophy of motivic
action conjectures will be fully discussed in Appendix B.
A special case of the main conjecture (Conjecture 2.13), concerning the norms
in the _top and bottom degrees_ , can be formulated without reference to
motivic cohomology, assuming Beilinson’s conjectures for Chow motives [PV,
Conjecture 2.1.1].
###### Conjecture 2.15 ((Comparison of top and bottom in Conjecture 2.13)).
Let $\Pi$ be as in Conjecture 2.13. Let $V$ be the automorphic vector bundle
coming from the Levi, such that
$H^{i}(\mathfrak{p},K;V\otimes\Pi_{\infty})\neq 0$ for some $i$, and
$\Pi_{\min},\Pi_{\max}$ be the members of the archimedean $L$-packet of
$\Pi_{\infty}$ such that the degree that $\Pi_{\min}$ ($\Pi_{\max}$,
respectively) has nontrivial $(\mathfrak{p},K)$-cohomology with coefficient in
$V$ is the minimum (maximum, respectively) cohomological degree, denoted
$i_{\min}$ ($i_{\max}$, respectively) in the $L$-packet. Let
$f_{\min}\in\Pi_{f}^{\operatorname{new}}\otimes\Pi_{\min}^{\operatorname{new}}$
and
$f_{\max}\in\Pi_{f}^{\operatorname{new}}\otimes\Pi_{\max}^{\operatorname{new}}$
such that $[f_{\min}],[f_{\max}]\in H^{*}(X)[\Pi_{f}]$ are defined over
$\overline{\mathbb{Q}}$. Then,
$\frac{\langle f_{\min},f_{\min}\rangle_{P}}{\langle
f_{\max},f_{\max}\rangle_{P}}\sim_{\overline{\mathbb{Q}}^{\times}\cap\mathbb{R}}\left|\pi^{i_{\min}-i_{\max}}\frac{L_{\infty}(1,\Pi,\operatorname{Ad})}{L_{\infty}(0,\Pi,\operatorname{Ad})}\cdot\frac{L(1,\Pi,\operatorname{Ad})}{\operatorname{vol}F^{1}H_{\operatorname{dR}}(\operatorname{Ad}\Pi)}\right|^{2},$
where the volume is computed with respect to the metric induced by any weak
polarization (see [PV, §2.2.3])666The volume is independent of the choice of
weak polarization, see [PV, Lemma 2.2.2]..
###### Proposition 2.16.
Assuming the Beilinson conjecture, Conjecture 2.15 is equivalent to the
isometry statement in Conjecture 2.13 for top and bottom degrees,
$\wedge^{\mathrm{top}}\mathcal{M}^{*}\otimes\mathcal{H}^{i_{\min}}\cong\mathcal{H}^{i_{\max}}.$
In particular, if $i_{\max}=i_{\min}+1$, Conjecture 2.13 and Conjecture 2.15
are equivalent.
###### Proof.
The key is to compare the top and the bottom degrees of the graded vector
spaces and relate them with the volumes, which are independent of the choice
of metric. Namely, we need to prove the analogue of [PV, Lemma 2.2.2],
$\operatorname{vol}_{S}H^{1}_{M}((\operatorname{Ad}\Pi)_{\mathcal{O}_{E}},\overline{\mathbb{Q}}(1))\sim_{\overline{\mathbb{Q}}^{\times}}\frac{L^{*}(0,\operatorname{Ad}\Pi)}{\operatorname{vol}_{S}F^{1}H_{\operatorname{dR}}(\operatorname{Ad}\Pi)},$
for $S$ the Hermitian inner product induced by a weak polarization; the
statement will then follow after applying the functional equation. This now
follows from Beilinson’s conjecture over a general number field, as in [Ra,
§6].
Namely, there is a fundamental exact sequence, [Ra, (6.4.2)],
$0\rightarrow
F^{1}H_{\operatorname{dR}}((\operatorname{Ad}\Pi)_{\mathbb{C}})\rightarrow
H_{B}^{0}((\operatorname{Ad}\Pi)_{\mathbb{R}},\mathbb{C})\rightarrow
H_{\mathscr{D}}^{1}((\operatorname{Ad}\Pi)_{\mathbb{R}},\mathbb{C}(1))\rightarrow
0,$
where
$H_{B}^{0}((\operatorname{Ad}\Pi)_{\mathbb{R}},\mathbb{C})=\prod_{w\text{
complex places of
$E$}}H_{B}^{0}(M\times_{E,w}\mathbb{C},\mathbb{C})\times\prod_{w\text{ real
places of $E$}}H_{B}^{0}(M\times_{E,w}\mathbb{R},\mathbb{C}),$
is defined similarly, in analogy with the “real Deligne cohomology” above.
Beilinson’s conjecture over a general number field says that the determinant
of the fundamental exact sequence above has two incompatible
$\overline{\mathbb{Q}}$-rational structures, which are off precisely by
$L^{*}(0,\operatorname{Ad}\Pi)$ (see [Ra, §6.4]):
$\det(H_{B}^{0}((\operatorname{Ad}\Pi)_{\mathbb{R}},\overline{\mathbb{Q}}))L^{*}(0,\operatorname{Ad}\Pi)\sim_{\overline{\mathbb{Q}}^{\times}}\det
F^{1}H_{\operatorname{dR}}((\operatorname{Ad}\Pi)_{\overline{\mathbb{Q}}})\cdot\det(H_{M}^{1}((\operatorname{Ad}\Pi)_{\mathcal{O}_{E}},\overline{\mathbb{Q}}(1))).$
Regarding this as an equality inside the determinant of the fundamental exact
sequence, computing the volumes gives the desired statement. ∎
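For the reader’s convenience, we sketch the functional-equation step, writing
$L^{*}$ and $L_{\infty}^{*}$ for leading Taylor coefficients: since
$\operatorname{Ad}\Pi$ has motivic weight $0$, its completed $L$-function
$L_{\infty}(s,\operatorname{Ad}\Pi)L(s,\operatorname{Ad}\Pi)$ satisfies a
functional equation relating $s$ and $1-s$, whose $\varepsilon$-factor at
$s=0$ lies in $\overline{\mathbb{Q}}^{\times}$, so that
$L^{*}(0,\operatorname{Ad}\Pi)\sim_{\overline{\mathbb{Q}}^{\times}}\frac{L_{\infty}(1,\operatorname{Ad}\Pi)}{L_{\infty}^{*}(0,\operatorname{Ad}\Pi)}\,L(1,\operatorname{Ad}\Pi);$
this is the source of the archimedean factors appearing in Conjecture 2.15.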
We will later prove Conjecture 2.13 in special cases where Conjecture 2.15 is
equivalent to Conjecture 2.13 (namely, when the appearances of the Hecke
eigensystem span only two degrees).
###### Example 2.17 ((Sanity check: $\operatorname{SL}_{2}$)).
The simplest example is the case of $G=\operatorname{SL}_{2,\mathbb{Q}}$,
where the conjecture is about _weight one elliptic modular forms_. Let $f\in
S_{1}(\Gamma)$ be a weight one cuspidal new eigenform with Fourier
coefficients in $\overline{\mathbb{Q}}$, generating an automorphic
representation $\Pi$. Then, the complex conjugate
$\overline{f}\in\overline{\Pi}$, which also satisfies
$\langle\overline{f},\overline{f}\rangle_{P}=\langle f,f\rangle_{P}$. Thus, we
are led to the following question: for what $c\in\mathbb{C}^{\times}$ does
$c\overline{f}$ define a $\overline{\mathbb{Q}}$-coherent cohomology class in
$H^{1}(X(\Gamma),\omega)$? After calculating the archimedean $L$-factors
(which is elementary), Conjecture 2.15 says that
$|c|^{-2}\sim_{\overline{\mathbb{Q}}^{\times}\cap\mathbb{R}}\left(\pi^{-2}\frac{L(1,\Pi,\operatorname{Ad})}{\operatorname{vol}F^{1}H_{\operatorname{dR}}(\operatorname{Ad}\Pi)}\right)^{2}.$
Note that $\Pi$ is a pure weight $0$ motive, and so is $\operatorname{Ad}\Pi$;
therefore, $F^{1}H_{\operatorname{dR}}(\operatorname{Ad}\Pi)=0$, which means
that
$|c|\sim_{\overline{\mathbb{Q}}^{\times}\cap\mathbb{R}}\frac{\pi^{2}}{L(1,\Pi,\operatorname{Ad})}.$
It is well-known that
$L(1,\Pi,\operatorname{Ad})\pi^{-2}\sim_{\overline{\mathbb{Q}}^{\times}}\langle
f,f\rangle_{P}$. Thus, the conjecture says
$|c|\sim_{\overline{\mathbb{Q}}^{\times}\cap\mathbb{R}}\frac{\pi}{\langle
f,f\rangle_{P}}.$
Let $c\overline{f}d\overline{z}$ define a Dolbeault cohomology class
$f^{\vee}\in H^{1}(X(\Gamma),\omega)$, which is, by definition, defined over
$\overline{\mathbb{Q}}$. By Serre duality, $\langle
f,f^{\vee}\rangle_{S}\in\overline{\mathbb{Q}}$, where $\langle-,-\rangle_{S}$
denotes the Serre duality.
On the other hand, the Serre duality in this case coincides with Petersson
inner product scaled by the factor of $\frac{1}{2\pi i}$ (e.g. [DMOS, p. 22]),
namely $2\pi i\langle f,f^{\vee}\rangle_{S}=\langle
f,\overline{c}f\rangle_{P}=\overline{c}\langle f,f\rangle_{P}$. Thus, this
means that
$|c|\sim_{\overline{\mathbb{Q}}^{\times}\cap\mathbb{R}}\frac{\pi}{\langle
f,f\rangle_{P}}$,777Indeed, $\langle f,f\rangle_{P}$ is a real number. which
is consistent with our conjecture. That these facts can be realized as an
instance of the motivic action conjecture was observed in [HV] and explicitly
spelled out in [Ho].
### 2.4. An approach towards the motivic action conjecture
Unlike [PV] or the case of $\operatorname{SL}_{2}$, the top and bottom degrees
are most of the time not complementary (namely,
$i_{\Pi_{\min}}+i_{\Pi_{\max}}\neq\dim_{\mathbb{C}}X$), so the conjecture has
nothing to do with any form of duality. We will nevertheless prove a form of
the conjecture by relating it to _cohomological period integrals_. These are
integral representations of certain $L$-functions that
* •
apply to automorphic forms appearing in higher coherent cohomology,
* •
and admit an interpretation as a cup product pairing in coherent cohomology.
This is useful in verifying our conjectures as the coherent cohomological cup
product can _detect rationality of higher coherent cohomology classes_.
We will show, following a strategy similar to [PV, §7], that our period
conjecture, Conjecture 2.15, holds in certain cases.
###### Theorem 2.18.
Let $G$ be either $\operatorname{Sp}_{4}$ or $\operatorname{SU}(2,1)$, and let
$\Pi$ be a globally generic cuspidal automorphic representation of
$G(\mathbb{A}_{\mathbb{Q}})$ satisfying Assumption 2.5. Assume the working
hypothesis on periods, Assumption 2.22. Then, Conjecture 2.15 is true, up to
the factor of an archimedean zeta integral (see Remarks 3.6, 4.5).
In the two cases, we will compare periods of a _holomorphic LDS_ appearing in
$H^{0}$ and a _generic LDS_ appearing in $H^{1}$. The tools that we will use
are summarized in the following table.
LDS type | Detecting rationality | Relation with Petersson norm
---|---|---
Holomorphic ($H^{0}$) | Rational Fourier coefficients | Doubling method, refined GGP conjectures
Generic ($H^{>0}$) | Cohomological period integrals | Lapid–Mao conjecture
A slightly more detailed outline is as follows. We need to know how the
$\overline{\mathbb{Q}}$-algebraicity of coherent cohomology classes in $H^{0}$
and $H^{1}$ is related to Petersson norms. For $H^{0}$, the classes are
represented by holomorphic automorphic forms, whose algebraicity is
detectable by Fourier coefficients. In the language of periods, these are
related to Bessel or Fourier–Jacobi periods, which are related to Petersson
norms via the refined Gan–Gross–Prasad conjectures. For $H^{1}$, the classes
are represented by generic automorphic forms888In both cases of our concern,
the infinity type corresponding to $H^{1}$ belongs to a generic (L)DS, which
is a numerical coincidence that only happens in certain special examples.. The
Petersson norms of corresponding automorphic forms are related to the
_Whittaker periods_ via the _Lapid–Mao conjecture_ [LM]. Due to its simple
statement, we recall the conjecture here:
###### Conjecture 2.19 (([LM])).
Let $\Pi$ be a globally generic representation, satisfying (2.5) of Assumption
2.5. Let $f=\otimes_{p}f_{p}\in\Pi$ be a newform. Then,
$\frac{\langle
f,f\rangle}{|W(1)|^{2}}\sim_{\mathbb{Q}^{\times}}\frac{L(1,\Pi,\operatorname{Ad})}{\Delta_{G}(1)|W_{\infty}(1)|^{2}},$
where $\langle,\rangle$ is the $L^{2}$-norm (as opposed to the Petersson
norm), $\Delta_{G}(s)$ is the $L$-function of the dual to the Artin motive
attached to $G$ as defined in [Gr], and $W$ ($W_{\infty}$, respectively) is
the Whittaker function of $f$ ($f_{\infty}$, respectively).
The Whittaker periods are then related to the
$\overline{\mathbb{Q}}$-algebraicity of coherent cohomology classes via
cohomological period integrals. On the one hand, the period integral has an
automorphic interpretation, which connects it to Whittaker periods. On the
other hand, the period integral has a cohomological interpretation, so that in
particular it descends to $\overline{\mathbb{Q}}$ and hence detects
$\overline{\mathbb{Q}}$-algebraicity.
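Schematically, the mechanism is as follows. Suppose the cohomological period
integral shows that a suitable $L$-value multiple of $W_{\varphi}$ is defined
over a number field, so that $|W(1)|^{2}$ lies in
$\overline{\mathbb{Q}}^{\times}\cap\mathbb{R}$ up to the same $L$-value
factors (cf. Remark 3.2 below). Then Conjecture 2.19 yields
$\langle f,f\rangle\sim_{\overline{\mathbb{Q}}^{\times}\cap\mathbb{R}}\frac{|W(1)|^{2}\,L(1,\Pi,\operatorname{Ad})}{\Delta_{G}(1)\,|W_{\infty}(1)|^{2}},$
expressing the $L^{2}$-norm (equivalently, up to a common scalar, the
Petersson norm) of a generic form in terms of $L$-values and Whittaker data.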
Finally, as the conjectures are formulated using invariants coming from the
motivic formalism, we will work with the corresponding motives and compute
motivic invariants (dubbed “Hodge-linear algebra”). We will work with the
adjoint motive, as in Conjecture 2.10, as well as the motive of the given
automorphic representation999The construction of those motives is a subtle
matter, as we work with limits of discrete series. Indeed, the corresponding
Galois representations have been constructed, but only by using congruences.:
###### Conjecture 2.20.
For $\Pi$ as in Assumption 2.5, there exists a motive $M_{\Pi}$ that is
uniquely characterized by [Cl, §4.3.3].
###### Remark 2.21.
The adjoint motive of Conjecture 2.10 is the motive associated to
$L(s,\Pi,\operatorname{Ad})$ in the above sense.
Thus, the “working hypothesis” is as follows.
###### Assumption 2.22 ((Working hypothesis)).
We assume the following conjectures101010Even though the refined
Gan–Gross–Prasad conjectures are being mentioned throughout the paper, they
are not required as an assumption, because we have an unconditional
alternative result using the doubling method (and they are equivalent if one
assumes Beilinson’s conjectures anyways).. There are numerous instances of
these conjectures being verified, and we do not attempt to list them here.
1. (1)
_Beilinson’s conjecture for Chow motives_ , Conjecture 2.1.1 of [PV].
2. (2)
_Lapid–Mao conjecture_ , Conjecture 2.19.
3. (3)
_Existence of motives_ , Conjecture 2.10 and Conjecture 2.20.
## 3\. Evidence I: The case of $\operatorname{Sp}_{4}$
We first provide the evidence for the motivic action conjecture (Conjecture
2.13) for the case of certain irregular automorphic forms on
$G(\mathbb{A})=\operatorname{Sp}_{4}(\mathbb{A})$. In this case, we will use
an integral representation of the spinor $L$-function by Novodvorsky ([No]),
whose coherent cohomological interpretation was given by [LPSZ].
Let $\Pi$ be a globally generic cuspidal automorphic representation of
$\operatorname{GSp}_{4}(\mathbb{A}_{\mathbb{Q}})$, meaning that the Whittaker
transform (see §1.6, Notation) defines a realization of $\Pi$ as a space of
functions on $G(\mathbb{Q})\backslash G(\mathbb{A})$ satisfying a
transformation property under the $N(\mathbb{A})$-action. We choose unramified
vectors $\varphi_{v}^{0}\in\Pi_{v}$ for $v$ finite with $\Pi_{v}$ unramified
such that, if $\psi_{v}$ is unramified, $W_{\varphi_{v}^{0}}(1)=1$. Suppose
that $\Pi$ satisfies Assumption 2.5.
###### Definition 3.1.
Let $M\supset F_{\Pi}$ be a number field. A $\psi$-Whittaker function $W$ on
$G$ is said to be _defined over $M$_ if it takes values in $M(\mu_{\infty})$
and satisfies
$\sigma(W(g))=W(w(\kappa(\sigma))g),$
for all $g\in G(\mathbb{A}_{f})$ and
$\sigma\in\operatorname{Gal}(\overline{\mathbb{Q}}/M)$, where
$w(x)=\operatorname{diag}(x^{3},x^{2},x,1)$ and
$\kappa:\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\rightarrow\widehat{\mathbb{Z}}^{\times}$
is the cyclotomic character.
###### Remark 3.2.
1. (1)
The convention is made so that, if $\psi$ is unramified, the unramified
$\psi$-Whittaker function $W$ with $W(1)=1$ is defined over $F_{\Pi}$. As
$L(s,\Pi,\operatorname{Ad})$ is regular at $s=1$, $W(1)$ is nonzero by [LM,
§3.1]. Thus, $W$ is defined over $M$ if and only if $W(1)\in M^{\times}$.
2. (2)
Indeed, our conditions on $G$ and $\Pi$ imply that $F_{\Pi}$ is a number field
(see Notation). Thus, the space of $\psi$-Whittaker functions, which naturally
has an action of $\operatorname{Aut}(\mathbb{C}/\mathbb{Q})$ as in [GHL,
§4.1], is stable under $\operatorname{Aut}(\mathbb{C}/F_{\Pi})$. Thus, there
exists a nontrivial $\psi$-Whittaker function defined over $M$.
### 3.1. Whittaker periods via cohomological period integrals
The purpose of this subsection is to prove the following
###### Theorem 3.3.
Let
$\varphi\in\Pi^{\operatorname{new}}=\Pi_{f}^{\operatorname{new}}\otimes\Pi_{\infty}^{\operatorname{new}}$.
Let $F$ be an imaginary quadratic field such that the special Bessel period of
$\Pi$ for $F$ is not identically zero (see (2.5) of Assumption 2.5). If
$[\varphi]$ (see Definition 2.8) is defined over a number field
$F^{\prime}\supset FF_{\Pi}$, then
${\Lambda\left(1/2,\Pi\right)\Lambda\left(1/2,\Pi\otimes\chi_{F}\right)}W_{\varphi}$
is a nontrivial $\psi$-Whittaker function defined over $F^{\prime}$, where
$\chi_{F}$ is the quadratic character associated to $F$.
###### Proof.
Let $C\in\mathbb{C}^{\times}$ be such that $\frac{W_{\varphi}}{C}$ is defined
over $F^{\prime}$, which exists by Remark 3.2(2). We would like to show that
$C\Lambda(1/2,\Pi)\Lambda(1/2,\Pi\otimes\chi_{F})\in F^{\prime}{}^{\times}.$
We first prove that the above quantity is nonzero. This is because, by [FuMo,
(1.26)], the nonvanishing of $L(1/2,\Pi)L(1/2,\Pi\otimes\chi_{F})$ is
equivalent to nonvanishing of the special Bessel period $B(f_{\Pi},F)$ in
(2.5) of Assumption 2.5; that the special Bessel period defined in this paper
is the Bessel period (up to explicit nonzero scalar) in [FuMo] follows from
the calculation of [DPSS, Proposition 3.5].
We now consider Novodvorsky’s integral representation of the spinor
$L$-function. This is, roughly speaking, the period integral of $\varphi$
times an Eisenstein series over an embedded product of two modular curves.
More precisely, there is an embedding of
$H=\operatorname{GL}_{2}\times_{\operatorname{GL}_{1}}\operatorname{GL}_{2}$
into $G$, which is most naturally thought of as
$\operatorname{SO}(2,2)\hookrightarrow\operatorname{SO}(3,2)$. Let $B\subset
H$ be the upper triangular Borel. For
$\Phi_{1},\Phi_{2}:\mathbb{A}^{2}\rightarrow\mathbb{C}$ and unitary
Grössencharakters $\chi_{1},\chi_{2}$, we define an Eisenstein series on $H$
with respect to $B$, $E(h,\chi_{i},\Phi_{i},s_{i})$. The following are
well-known (e.g. [LPSZ, Proposition 7.3]).
###### Proposition 3.4.
Suppose we are taking the “weight $k$-section” for $\Phi_{i,\infty}$.
1. (1)
For $-\frac{k}{2}+1\leq s_{1},s_{2}\leq\frac{k}{2}$ half-integral and
congruent to $\frac{k}{2}$ modulo $1$, this defines a nearly holomorphic form
on $H$, which can be thought of as an $H^{0}$ class of some automorphic vector
bundle over a Shimura variety for $H$.
2. (2)
If $\chi_{i}$’s and $\Phi_{i}$’s are valued in a number field $F$, then the
$H^{0}$ class is defined over $F$.
Let $(\lambda_{1},\lambda_{2})$ be the Harish–Chandra parameter of the
(necessarily generic) (L)DS $\Pi_{\infty}$. Let $\Gamma_{H}=H\cap\Gamma$, and
let $i:X_{H}(\Gamma_{H})\hookrightarrow X_{G}(\Gamma)$ be the closed embedding
of certain toroidal compactifications of the corresponding Shimura
varieties111111That the closed immersion of open Shimura varieties extends to
a closed immersion of toroidal compactifications with respect to certain
refinements of the polyhedral cone decomposition is achieved by [Lan2], but
the situation is simpler in this case, because $X_{H}$, being a product of
modular curves, is unique.. Then, Novodvorsky’s integral representation can be
understood via a cup product pairing
understood via a cup product pairing
$H^{2}(X_{G},V)\otimes_{\mathbb{C}}H^{0}(X_{H},W)\xrightarrow{(\operatorname{id},i_{*})}H^{2}(X_{G},V)\otimes_{\mathbb{C}}H^{1}(X_{G},W^{\prime})\xrightarrow{\cup}H^{3}(X_{G},V\otimes
W^{\prime})\xrightarrow{S}H^{0}(X_{G},\mathcal{O})=\mathbb{C},$
where $\cup$ is the cohomological cup product and $S$ is the Serre duality
pairing, induced from a morphism of algebraic
$\overline{\mathbb{Q}}$-representations of $K_{\infty}$, $V\otimes
W^{\prime}\rightarrow\mathfrak{g}^{-1,1}$ (namely, the pairing is normalized
such that, on the level of representations of $K_{\infty}$, the
$\overline{\mathbb{Q}}$-structures are compatible). Here, $W$ and $W^{\prime}$
are certain automorphic vector bundles over $X_{H}$ and $X_{G}$, respectively,
corresponding to $[V_{H}(\lambda_{1}-\lambda_{2}-1,0)]\otimes\omega_{H}(1,1)$
and $[\widetilde{L}_{1}]$, if we use the notation of [LPSZ, §6]. The
reinterpretation, done in [LPSZ, §7.4], of Novodvorsky’s integral asserts
that, given an imaginary quadratic field $F$, there is a coherent cohomology
class $[E]\in H^{0}(X_{H}(\Gamma_{H}),W)$ defined over $FF_{\Pi_{f}}$, which
corresponds to a nearly-holomorphic Eisenstein series $E$ under the Hodge
splitting of [LPSZ, §6.3], such that
$\langle[\varphi],[E]\rangle=C\Lambda\left(1/2,\Pi\right)\Lambda\left(1/2,\Pi\otimes\chi_{F}\right),$
where $\chi_{F}$ is the Hecke character corresponding to $F$. As
$\langle,\rangle$ is defined over $\mathbb{Q}$, the left hand side is in
$F^{\prime}$, which is what we wanted to show. ∎
### 3.2. Hodge-linear algebra
We now carry out the relevant Hodge-linear algebra needed for the case of
$\operatorname{Sp}_{4}$. For the sake of simplicity, we assume we work with a
parallel weight $(2,2)$ Siegel modular form, or Harish–Chandra parameter
$(1,0)$, although the Hodge-linear algebra calculation stays the same for
general weights. Our goal is to convert, using elementary linear algebra,
$\operatorname{vol}F^{1}H_{\operatorname{dR}}(\operatorname{Ad}M)$ into an
expression that involves Deligne’s periods $c^{+}(M),c^{-}(M),\delta(M)$ whose
definitions will be recalled later. We will then be able to express
$\operatorname{vol}F^{1}H_{\operatorname{dR}}(\operatorname{Ad}M)$ with
$L$-values, using Deligne’s conjectures. We will prove
###### Proposition 3.5.
For a motive $M$ associated to $\Pi$ (in the sense of Conjecture 2.20), we
have
$\operatorname{vol}F^{1}H_{\operatorname{dR}}(\operatorname{Ad}M)\sim_{\overline{\mathbb{Q}}^{\times}}\frac{\sqrt{c^{+}(M)c^{-}(M)}^{3}}{\delta(M)^{3/2}}.$
###### Proof.
The motive $M$ of $\Pi$ should be of rank $4$ and weight $1$, with the Hodge
decomposition
$H_{B}(M)\otimes_{\mathbb{Q}}\mathbb{C}=H^{1,0}(M)\oplus
H^{0,1}(M),\quad\dim_{\mathbb{C}}H^{1,0}(M)=\dim_{\mathbb{C}}H^{0,1}(M)=2,$
which is of the type of the Hodge structure defined by the corresponding
archimedean Langlands parameter. In this case,
$\delta(M)\text{ }\left(\text{$c^{\pm}(M)$, resp.}\right)\text{
}\in\mathbb{C}^{\times}/\mathbb{Q}^{\times},$
is the determinant of the comparison map
$H_{B}(M)\otimes\mathbb{C}\xrightarrow{\sim}H_{\operatorname{dR}}(M)\otimes\mathbb{C}\text{
}\left(\text{$H_{B}(M)^{+}\otimes\mathbb{C}\rightarrow
H_{B}(M)\otimes\mathbb{C}\xrightarrow{\sim}H_{\operatorname{dR}}(M)\otimes\mathbb{C}\twoheadrightarrow(H_{\operatorname{dR}}(M)/F^{1}H_{\operatorname{dR}}(M))\otimes\mathbb{C}$,
resp.}\right),$
with respect to the bases coming from the underlying $\mathbb{Q}$-structures
on both sides.
Let $e_{1}^{+},e_{2}^{+}$ be a $\mathbb{Q}$-basis of $H_{B}(M)^{+}$ and
$e_{1}^{-},e_{2}^{-}$ be a $\mathbb{Q}$-basis of $H_{B}(M)^{-}$. Let
$f_{1},f_{2}\in F^{1}H_{\operatorname{dR}}(M)$ be a $\mathbb{Q}$-basis, and
$g_{1},g_{2}\in H_{\operatorname{dR}}(M)/F^{1}H_{\operatorname{dR}}(M)$ be a
$\mathbb{Q}$-basis, and $\widetilde{g_{1}},\widetilde{g_{2}}\in
H_{\operatorname{dR}}(M)$ be lifts of $g_{1},g_{2}$. Given two
$\mathbb{C}$-bases of $H_{B}(M)\otimes\mathbb{C}\cong
H_{\operatorname{dR}}(M)\otimes\mathbb{C}$, we can write an expression
$\begin{pmatrix}e_{1}^{+}&e_{2}^{+}&e_{1}^{-}&e_{2}^{-}\end{pmatrix}=\begin{pmatrix}f_{1}&f_{2}&\widetilde{g_{1}}&\widetilde{g_{2}}\end{pmatrix}\begin{pmatrix}A&B\\\
C&D\end{pmatrix},$
for $A,B,C,D\in M_{2}(\mathbb{C})$. Note that by definition
$\delta(M)=\det\left(\begin{smallmatrix}A&B\\\ C&D\end{smallmatrix}\right)$.
We have canonical isomorphisms
$H^{1,0}(M)\cong F^{1}H_{\operatorname{dR}}(M)\otimes\mathbb{C},\quad
H^{0,1}(M)\cong\frac{H_{\operatorname{dR}}(M)}{F^{1}H_{\operatorname{dR}}(M)}\otimes\mathbb{C}.$
Let $f_{1,B},f_{2,B}\in H^{1,0}(M)$ and $g_{1,B},g_{2,B}\in H^{0,1}(M)$ be the
images of $f_{1},f_{2},g_{1},g_{2}$ under the above canonical isomorphisms.
Then $f_{i,B}$ and $f_{i}$ coincide as elements of
$H_{B}(M)\otimes\mathbb{C}\cong H_{\operatorname{dR}}(M)\otimes\mathbb{C}$,
whereas $\widetilde{g_{i}}-g_{i,B}\in H^{1,0}(M)$. So,
$\begin{pmatrix}f_{1}&f_{2}&\widetilde{g_{1}}&\widetilde{g_{2}}\end{pmatrix}=\begin{pmatrix}f_{1,B}&f_{2,B}&g_{1,B}&g_{2,B}\end{pmatrix}\begin{pmatrix}1_{2}&N\\ 0_{2}&1_{2}\end{pmatrix},$
for some $N\in M_{2}(\mathbb{C})$ (we write $N$ to avoid a clash with the motive $M$).
In particular, if we write
$\begin{pmatrix}e_{1}^{+}&e_{2}^{+}&e_{1}^{-}&e_{2}^{-}\end{pmatrix}=\begin{pmatrix}f_{1,B}&f_{2,B}&g_{1,B}&g_{2,B}\end{pmatrix}\begin{pmatrix}A^{\prime}&B^{\prime}\\\
C^{\prime}&D^{\prime}\end{pmatrix},$
then $\det\left(\begin{smallmatrix}A&B\\\
C&D\end{smallmatrix}\right)=\det\left(\begin{smallmatrix}A^{\prime}&B^{\prime}\\\
C^{\prime}&D^{\prime}\end{smallmatrix}\right)$.
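Indeed, this follows from a small block-matrix computation, which we make explicit: the two base-change matrices differ by a unipotent factor,
$\begin{pmatrix}A^{\prime}&B^{\prime}\\ C^{\prime}&D^{\prime}\end{pmatrix}=\begin{pmatrix}1_{2}&N\\ 0_{2}&1_{2}\end{pmatrix}\begin{pmatrix}A&B\\ C&D\end{pmatrix}=\begin{pmatrix}A+NC&B+ND\\ C&D\end{pmatrix},$
and the unipotent factor has determinant $1$; in particular $C^{\prime}=C$ and $D^{\prime}=D$, which will be used again below.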
Also, there must be relations
$c_{B}(f_{1,B})=ag_{1,B}+bg_{2,B},\qquad c_{B}(f_{2,B})=cg_{1,B}+dg_{2,B},$
for some $a,b,c,d\in\mathbb{C}$.
Using that $F_{\infty}e_{i}^{+}=e_{i}^{+}$, $F_{\infty}e_{i}^{-}=-e_{i}^{-}$
and that $F_{\infty}(f_{i,B})=c_{B}(f_{i,B})$ and
$F_{\infty}(g_{i,B})=c_{B}(g_{i,B})$ (as in [PV, Lemma, §8.2.1]), one has
$c_{B}(A^{\prime}_{1i}f_{1,B}+A^{\prime}_{2i}f_{2,B})=C^{\prime}_{1i}g_{1,B}+C^{\prime}_{2i}g_{2,B},\qquad c_{B}(B^{\prime}_{1i}f_{1,B}+B^{\prime}_{2i}f_{2,B})=-(D^{\prime}_{1i}g_{1,B}+D^{\prime}_{2i}g_{2,B}),$
for $i=1,2$. This can be packaged into
$C^{\prime}=\left(\begin{smallmatrix}a&c\\\
b&d\end{smallmatrix}\right)A^{\prime}$,
$D^{\prime}=-\left(\begin{smallmatrix}a&c\\\
b&d\end{smallmatrix}\right)B^{\prime}$. So
$\delta(M)=\det\begin{pmatrix}A^{\prime}&B^{\prime}\\\
\left(\begin{smallmatrix}a&c\\\
b&d\end{smallmatrix}\right)A^{\prime}&-\left(\begin{smallmatrix}a&c\\\
b&d\end{smallmatrix}\right)B^{\prime}\end{pmatrix}=\det\begin{pmatrix}A^{\prime}&B^{\prime}\\\
0_{2}&-2\left(\begin{smallmatrix}a&c\\\
b&d\end{smallmatrix}\right)B^{\prime}\end{pmatrix}=4\det\begin{pmatrix}a&c\\\
b&d\end{pmatrix}\det A^{\prime}\det B^{\prime}.$
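For the middle equality, one subtracts $X:=\left(\begin{smallmatrix}a&c\\ b&d\end{smallmatrix}\right)$ times the first block row from the second; we spell this out:
$\begin{pmatrix}1_{2}&0_{2}\\ -X&1_{2}\end{pmatrix}\begin{pmatrix}A^{\prime}&B^{\prime}\\ XA^{\prime}&-XB^{\prime}\end{pmatrix}=\begin{pmatrix}A^{\prime}&B^{\prime}\\ 0_{2}&-2XB^{\prime}\end{pmatrix},$
and the last equality uses $\det(-2XB^{\prime})=(-2)^{2}\det X\det B^{\prime}$.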
Note on the other hand that $c^{+}(M)=\det C$, $c^{-}(M)=\det D$. As
$C^{\prime}=C$, $D^{\prime}=D$, we see that
$c^{+}(M)c^{-}(M)\sim_{\overline{\mathbb{Q}}^{\times}}\delta(M)\det\begin{pmatrix}a&c\\\
b&d\end{pmatrix}.$
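Explicitly (a check we include): using $C=C^{\prime}=XA^{\prime}$ and $D=D^{\prime}=-XB^{\prime}$, with $X$ as above,
$c^{+}(M)c^{-}(M)=\det(XA^{\prime})\det(-XB^{\prime})=(\det X)^{2}\det A^{\prime}\det B^{\prime}=\tfrac{1}{4}\,\delta(M)\det X,$
so the two sides agree up to the factor $\tfrac{1}{4}\in\mathbb{Q}^{\times}$.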
Note that, as $M$ is self-dual, $M^{\vee}\cong M(1)$, which implies
$\operatorname{Ad}M=(\operatorname{Ad}M)^{*}=(\operatorname{Sym}^{2}M)(1)$. We
are also led to calculate
$\operatorname{vol}F^{1}H_{\operatorname{dR}}(\operatorname{Ad}M)$ (we know it
does not depend on the choice of weak polarization up to
$\overline{\mathbb{Q}}^{\times}$-ambiguity). We know that
$(\operatorname{vol}F^{1}H_{\operatorname{dR}}(\operatorname{Ad}M))^{2}\sim_{\overline{\mathbb{Q}}^{\times}}\lambda$,
where $\varphi(c_{B}(v^{+}))=\lambda v^{-}$. Here, $v^{+},v^{-}$ are
$\mathbb{Q}$-basis vectors for $\det
F^{1}H_{\operatorname{dR}}(\operatorname{Ad}M)$ and
$\det\left(\frac{H_{\operatorname{dR}}(\operatorname{Ad}M)}{F^{0}H_{\operatorname{dR}}(\operatorname{Ad}M)}\right)$,
respectively, and $\varphi:\wedge^{\dim
F^{1}H_{\operatorname{dR}}(\operatorname{Ad}M)}(H_{\operatorname{dR}}(\operatorname{Ad}M)\otimes\mathbb{C})\rightarrow\det\left(\frac{H_{\operatorname{dR}}(\operatorname{Ad}M)}{F^{0}H_{\operatorname{dR}}(\operatorname{Ad}M)}\right)$
is the natural projection. As
$\operatorname{Ad}M=(\operatorname{Sym}^{2}M)(1)$, we can take
$f_{1}^{2},f_{1}f_{2},f_{2}^{2}$ and $g_{1}^{2},g_{1}g_{2},g_{2}^{2}$ as
$\mathbb{Q}$-bases of $F^{1}H_{\operatorname{dR}}(\operatorname{Ad}M)$ and
$\frac{H_{\operatorname{dR}}(\operatorname{Ad}M)}{F^{0}H_{\operatorname{dR}}(\operatorname{Ad}M)}$,
respectively. Now from the known relations,
$c_{B}(f_{1}^{2})\equiv a^{2}g_{1}^{2}+2abg_{1}g_{2}+b^{2}g_{2}^{2},\quad c_{B}(f_{1}f_{2})\equiv acg_{1}^{2}+(ad+bc)g_{1}g_{2}+bdg_{2}^{2},\quad c_{B}(f_{2}^{2})\equiv c^{2}g_{1}^{2}+2cdg_{1}g_{2}+d^{2}g_{2}^{2},$
where $\equiv$ is mod
$F^{0}H_{\operatorname{dR}}(\operatorname{Ad}M)\otimes\mathbb{C}$. So
$\lambda=\det\begin{pmatrix}a^{2}&2ab&b^{2}\\\ ac&ad+bc&bd\\\
c^{2}&2cd&d^{2}\end{pmatrix}=(ad-bc)^{3}.$
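The last determinant can also be computed conceptually (a remark we add): in the basis $g_{1}^{2},g_{1}g_{2},g_{2}^{2}$, the displayed matrix represents, up to transpose, $\operatorname{Sym}^{2}$ of a $2\times 2$ matrix with determinant $ad-bc$, and $\det\operatorname{Sym}^{2}g=(\det g)^{3}$ for any $g\in\operatorname{GL}_{2}$, as one checks on diagonal matrices,
$\det\operatorname{Sym}^{2}\begin{pmatrix}a&0\\ 0&d\end{pmatrix}=a^{2}\cdot ad\cdot d^{2}=(ad)^{3},$
the general case following by Zariski density.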
Therefore, we obtain
$\operatorname{vol}F^{1}H_{\operatorname{dR}}(\operatorname{Ad}M)\sim_{\overline{\mathbb{Q}}^{\times}}\frac{\sqrt{c^{+}(M)c^{-}(M)}^{3}}{\delta(M)^{3/2}}.$
∎
### 3.3. Completion of the proof
###### Proof of Theorem 2.18 for $\operatorname{Sp}_{4}$.
Now we apply the relevant period conjectures for this case. Let
$f_{\operatorname{hol}}$, $f_{\operatorname{gen}}$ be newforms (see Assumption
2.5) in $\Pi_{f}\otimes\Pi_{\operatorname{hol}}$,
$\Pi_{f}\otimes\Pi_{\operatorname{gen}}$, respectively, where
$\Pi_{\operatorname{hol}}$ is the corresponding holomorphic NLDS in the
$L$-packet of $\Pi_{\operatorname{gen}}$. Let us assume that
$[f_{\operatorname{hol}}]$ and $[f_{\operatorname{gen}}]$ are defined over
$\overline{\mathbb{Q}}$. Let $F$ be the imaginary quadratic field as in
Theorem 3.3. Then, a theorem of Furusawa–Morimoto\footnote{See [FuMo, Theorem 1] for the case of discrete series; the case of limits of discrete series is also an upcoming work of theirs.}\footnote{This is a specific case of the refined Gan–Gross–Prasad conjecture for Bessel periods, [Li, Conjecture 2.5]. This special case is also sometimes called _Böcherer’s conjecture_.} implies that
$\frac{\langle
f_{\operatorname{hol}},f_{\operatorname{hol}}\rangle}{|B(f_{\operatorname{hol}},F)|^{2}}\sim_{\overline{\mathbb{Q}}^{\times}}\pi^{-6}\frac{L(1,\Pi,\operatorname{Ad})}{L(1/2,\Pi)L(1/2,\Pi\otimes{\chi_{F}})}.$
Since $[f_{\operatorname{hol}}]$ (see Definition 2.8) is defined over
$\overline{\mathbb{Q}}$, we know that $B(f_{\operatorname{hol}},F)$, a
$\mathbb{Q}$-linear combination of Fourier coefficients of
$f_{\operatorname{hol}}$, is in $\overline{\mathbb{Q}}$. Thus, in the setting
of Theorem 2.18, $\langle
f_{\operatorname{hol}},f_{\operatorname{hol}}\rangle\sim_{\overline{\mathbb{Q}}^{\times}}\pi^{-6}\frac{L(1,\Pi,\operatorname{Ad})}{L(1/2,\Pi)L(1/2,\Pi\otimes\chi_{F})}$.
We now relate the $\overline{\mathbb{Q}}$-rationality of
$f_{\operatorname{gen}}$ with $\langle
f_{\operatorname{gen}},f_{\operatorname{gen}}\rangle$. Note that Theorem 3.3
is about the relationship between rationality of Whittaker functions and that
of coherent cohomology classes, for the anti-generic LDS (namely, those
appearing in $H^{2}$ of coherent cohomology). Thus, as
$[f_{\operatorname{gen}}]$ is defined over $\overline{\mathbb{Q}}$, by Serre
duality,
$\langle f_{\operatorname{gen}},f_{\operatorname{gen}}\rangle_{P}=(2\pi
i)^{3}\langle[f_{\operatorname{gen}}],[\overline{f}_{\operatorname{gen}}]\rangle_{\operatorname{coh}},$
where $\langle-,-\rangle_{\operatorname{coh}}$ is the cohomological cup
product defined over $\overline{\mathbb{Q}}$ normalized so that it is induced
from the $\overline{\mathbb{Q}}$-morphism of algebraic
$K_{\infty}$-representations
$W\otimes\operatorname{Hom}(W,\mathfrak{g}^{-1,1})\rightarrow\mathfrak{g}^{-1,1}$.
If we let $C\in\mathbb{C}^{\times}$ be such that
$C[\overline{f}_{\operatorname{gen}}]$ is defined over
$\overline{\mathbb{Q}}$, we see that the RHS is
$\sim_{\overline{\mathbb{Q}}^{\times}}\pi^{3}C^{-1}$. On the other hand, by
Theorem 3.3,
$\Lambda(1/2,\Pi)\Lambda(1/2,\Pi\otimes\chi_{F})W_{C\overline{f}_{\operatorname{gen}}}=C\Lambda(1/2,\Pi)\Lambda(1/2,\Pi\otimes\chi_{F})W_{\overline{f}_{\operatorname{gen}}},$
is defined over $\overline{\mathbb{Q}}$. We now invoke the Lapid–Mao
conjecture, Conjecture 2.19:
$\frac{\langle
f_{\operatorname{gen}},f_{\operatorname{gen}}\rangle}{|W_{f_{\operatorname{gen}}}(1)|^{2}}\sim_{\overline{\mathbb{Q}}^{\times}}\pi^{9}\cdot\frac{L(1,\Pi,\operatorname{Ad})}{|W_{\infty}(1)|^{2}}.$
By Remark 3.2(1), $W_{\overline{f}_{\operatorname{gen}}}(1)\neq 0$, so that
$C\Lambda(1/2,\Pi)\Lambda(1/2,\Pi\otimes\chi_{F})W_{\overline{f}_{\operatorname{gen}}}(1)\in\overline{\mathbb{Q}}^{\times}.$
Since
$W_{\overline{f}_{\operatorname{gen}}}(1)=\overline{W_{f_{\operatorname{gen}}}(1)}$,
we have
$\pi^{9}\frac{L(1,\Pi,\operatorname{Ad})}{|W_{\infty}(1)|^{2}}|W_{f_{\operatorname{gen}}}(1)|^{2}\overset{\text{Lapid--Mao}}{\sim_{\overline{\mathbb{Q}}^{\times}}}\langle f_{\operatorname{gen}},f_{\operatorname{gen}}\rangle\overset{\text{(B)}}{\sim_{\overline{\mathbb{Q}}^{\times}}}\pi^{3}C^{-1}\overset{\text{(C)}}{\sim_{\overline{\mathbb{Q}}^{\times}}}\pi^{3}\overline{W_{f_{\operatorname{gen}}}(1)}\Lambda(\tfrac{1}{2},\Pi)\Lambda(\tfrac{1}{2},\Pi\otimes\chi_{F}),$
or
$W_{f_{\operatorname{gen}}}(1)\sim_{\overline{\mathbb{Q}}^{\times}}\pi^{-6}\frac{|W_{\infty}(1)|^{2}\Lambda(\frac{1}{2},\Pi)\Lambda(\frac{1}{2},\Pi\otimes\chi_{F})}{L(1,\Pi,\operatorname{Ad})}\sim_{\overline{\mathbb{Q}}^{\times}}\pi^{-10}\frac{|W_{\infty}(1)|^{2}L(\frac{1}{2},\Pi)L(\frac{1}{2},\Pi\otimes\chi_{F})}{L(1,\Pi,\operatorname{Ad})}.$
We need to compute $\frac{\langle
f_{\operatorname{hol}},f_{\operatorname{hol}}\rangle_{P}}{\langle
f_{\operatorname{gen}},f_{\operatorname{gen}}\rangle_{P}}$, which can be now
seen as follows.
$\frac{\langle f_{\operatorname{hol}},f_{\operatorname{hol}}\rangle_{P}}{\langle f_{\operatorname{gen}},f_{\operatorname{gen}}\rangle_{P}}\overset{\text{Lapid--Mao + (A)}}{\sim_{\overline{\mathbb{Q}}^{\times}}}\frac{\pi^{-6}\frac{L(1,\Pi,\operatorname{Ad})}{L(1/2,\Pi)L(1/2,\Pi\otimes\chi_{F})}}{\pi^{9}|W_{f_{\operatorname{gen}}}(1)|^{2}\frac{L(1,\Pi,\operatorname{Ad})}{|W_{\infty}(1)|^{2}}}=\pi^{-15}\frac{|W_{\infty}(1)|^{2}}{|W_{f_{\operatorname{gen}}}(1)|^{2}L(1/2,\Pi)L(1/2,\Pi\otimes\chi_{F})}$
$\overset{\text{(D)}}{\sim_{\overline{\mathbb{Q}}^{\times}}}\pi^{-15}\frac{|W_{\infty}(1)|^{2}}{\pi^{-20}\frac{|W_{\infty}(1)|^{4}L(1/2,\Pi)^{3}L(1/2,\Pi\otimes\chi_{F})^{3}}{L(1,\Pi,\operatorname{Ad})^{2}}}$
(E)
$\sim_{\overline{\mathbb{Q}}^{\times}}\frac{\pi^{9}}{|W_{\infty}(1)|^{2}}\left(\frac{L_{\infty}(1,\Pi,\operatorname{Ad})}{L_{\infty}(0,\Pi,\operatorname{Ad})}\cdot\frac{L(1,\Pi,\operatorname{Ad})}{\frac{\sqrt{L(1/2,\Pi)L(1/2,\Pi\otimes\chi_{\mathbb{Q}(i)})}^{3}}{\pi^{3}}}\right)^{2},$
which involves elementary calculation of archimedean $L$-factors.
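For step (D), we substituted the square of the preceding relation for $W_{f_{\operatorname{gen}}}(1)$; we record the intermediate line, which is implicit above (the right-hand side is real, at least up to $\overline{\mathbb{Q}}^{\times}$):
$|W_{f_{\operatorname{gen}}}(1)|^{2}\sim_{\overline{\mathbb{Q}}^{\times}}\pi^{-20}\frac{|W_{\infty}(1)|^{4}L(1/2,\Pi)^{2}L(1/2,\Pi\otimes\chi_{F})^{2}}{L(1,\Pi,\operatorname{Ad})^{2}};$
combined with the factor $L(1/2,\Pi)L(1/2,\Pi\otimes\chi_{F})$ already present in the denominator, this produces the third powers appearing in (D).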
On the other hand, by Deligne’s conjectures, Proposition 3.5 implies that
$\operatorname{vol}F^{1}H_{\operatorname{dR}}(\operatorname{Ad}M)\sim_{\overline{\mathbb{Q}}^{\times}}\frac{\sqrt{L(1/2,\Pi)L(1/2,\Pi\otimes\chi_{\mathbb{Q}(i)})}^{3}}{\pi^{3}}.$
Thus, the parenthesized term in (E) is precisely the RHS of Conjecture 2.15,
which finishes the proof. ∎
###### Remark 3.6.
Unfortunately, for now, the author has been unable to calculate
$W_{\infty}(1)$. It is however very believable that the archimedean zeta
integral against a preferred, nice test vector is equal to an archimedean
local $L$-factor, which is $\sim_{\overline{\mathbb{Q}}^{\times}}$ half-
integral powers of $\pi$. In our case, there is even an explicit integral
expression of $W_{\infty}(1)$, as written in [CI]:
$W_{\infty}(1)=16e^{-2\pi}\pi^{\frac{7}{2}}\int_{c-i\infty}^{c+i\infty}\pi^{-2s}\Gamma(s+\frac{1}{2})^{2}U(s+\frac{1}{2},1,4\pi)\Gamma(s)\frac{ds}{2\pi
i},$
where
$U(a,b,z)=\frac{1}{\Gamma(a)}\int_{0}^{\infty}e^{-zt}t^{a-1}(1+t)^{b-a-1}dt$
is the confluent hypergeometric function of the second kind.
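Although a closed form seems out of reach, the Mellin–Barnes integral above is amenable to direct numerical evaluation, and the resulting value could then be tested against half-integral powers of $\pi$. The following sketch is ours, not part of the original computation; the contour abscissa $c=1$ is an assumption (any $c>0$, to the right of the poles of $\Gamma(s)$, should give the same value).

```python
# Numerical sanity check (a sketch) for the Mellin-Barnes expression of
# W_infty(1) from [CI], using mpmath for the complex Gamma function and hyperu.
import mpmath as mp

mp.mp.dps = 25          # working precision in digits
c = mp.mpf(1)           # contour Re(s) = c; assumed c > 0 avoids poles of Gamma(s)

def integrand(t):
    # Parametrize s = c + i t along the vertical contour.
    s = mp.mpc(c, t)
    return (mp.pi ** (-2 * s)
            * mp.gamma(s + mp.mpf('0.5')) ** 2
            * mp.hyperu(s + mp.mpf('0.5'), 1, 4 * mp.pi)   # U(a, b, z)
            * mp.gamma(s))

# ds = i dt on the contour, so ds/(2*pi*i) becomes dt/(2*pi).
val = mp.quad(integrand, [-mp.inf, mp.inf]) / (2 * mp.pi)
W_inf = 16 * mp.exp(-2 * mp.pi) * mp.pi ** mp.mpf('3.5') * val
print(W_inf)            # compare against pi**(k/2) for small integers k
```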
## 4\. Evidence II: The case of $\operatorname{SU}(2,1)$
In this section, we provide evidence for the Period conjecture (Conjecture
2.15) for the case of certain irregular automorphic forms on
$G(\mathbb{A})=\operatorname{SU}(2,1)(\mathbb{A})$. In this case, we will use
an integral representation of the base-change $L$-function by Gelbart and
Piatetski-Shapiro ([GPS], completed by [KO]), whose coherent cohomological
interpretation was given by [Oh].
Let $\Pi$ be a globally generic cuspidal automorphic representation of
$G=\operatorname{SU}(2,1)(\mathbb{A}_{\mathbb{Q}})$ as in the previous
subsection that also satisfies Assumption 2.5.
###### Definition 4.1.
Let $M\supset F_{\Pi}$ be a number field. A $\psi$-Whittaker function $W$ on
$G$ is called to be _defined over $M$_ if it takes values in $M(\mu_{\infty})$
and satisfies
$\sigma(W(g))=W(w(\kappa(\sigma))g),$
for all $g\in G(\mathbb{A}_{f})$ and
$\sigma\in\operatorname{Gal}(\overline{\mathbb{Q}}/M)$, where
$w(x)=\operatorname{diag}(x,1,x^{-1})$ and
$\kappa:\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\rightarrow\widehat{\mathbb{Z}}^{\times}$
is the cyclotomic character.
Again, by Remark 3.2(1), $W$ is defined over $M$ if and only if $W(1)\in
M^{\times}$.
### 4.1. Whittaker periods via cohomological period integrals
The purpose of this section is to prove the following
###### Theorem 4.2.
Let
$\varphi\in\Pi^{\operatorname{new}}=\Pi_{f}^{\operatorname{new}}\otimes\Pi_{\infty}^{\operatorname{new}}$.
If $[\varphi]$ (see Definition 2.8) is defined over a number field
$F^{\prime}\supset F_{\Pi}$, then $\Lambda\left(1/2,BC(\Pi)\right)W_{\varphi}$
is a nontrivial $\psi$-Whittaker function defined over $F^{\prime}$.
###### Proof.
Let $C\in\mathbb{C}^{\times}$ be such that $\frac{W_{\varphi}}{C}$ is defined
over $F^{\prime}$, which exists by Remark 3.2(2). We would like to show that
$C\Lambda(1/2,BC(\Pi))\in F^{\prime}{}^{\times}.$
First of all, this is indeed nonzero as $BC(\Pi)$ is cuspidal and tempered.
We consider Gelbart–Piatetski-Shapiro integral representation of base change
$L$-function. This is, roughly speaking, the period integral of $\varphi$
times an Eisenstein series over an embedded modular curve. More precisely, there is
an embedding of $H=\operatorname{U}(1,1)$ into $G$. Let $B\subset H$ be the
upper-triangular Borel. For $\Phi:\mathbb{A}^{2}\rightarrow\mathbb{C}$ and
$\chi$ a unitary Grössencharacter of the imaginary quadratic field $F$ used
for the definition of unitary groups, we define an Eisenstein series on $H$
with respect to $B$, $E(h,\chi,\Phi,s)$. The same arithmeticity condition as in
Proposition 3.4 applies, as the objects involved are elliptic modular forms.
Using the notation of Example B.3(3), let $(m,n)=(a-b,b-c)$ be the
Harish–Chandra character\footnote{Note that $G$ is now $\operatorname{SU}(2,1)$, so only the differences matter.} of $\Pi_{\infty}$. Let
$\Gamma_{H}=H\cap\Gamma$, and let $i:X_{H}(\Gamma_{H})\hookrightarrow
X_{G}(\Gamma)$ be the closed embedding of suitable compactifications of the corresponding Shimura varieties, as before.
Then, Gelbart–Piatetski-Shapiro’s integral representation can be understood
via a cup product pairing
$H^{1}(X_{G},V)\otimes_{\mathbb{C}}H^{0}(X_{H},W)\xrightarrow{(\operatorname{id},i_{*})}H^{1}(X_{G},V)\otimes_{\mathbb{C}}H^{1}(X_{G},W^{\prime})\xrightarrow{\cup}H^{2}(X_{G},V\otimes
W^{\prime})\xrightarrow{S}H^{0}(X_{G},\mathcal{O})=\mathbb{C},$
where $\cup$ is the cohomological cup product, $S$ is a Serre duality pairing,
normalized as in the proof of Theorem 3.3 (namely, the pairing induced from a
morphism of $K_{\infty}$-representations defined over
$\overline{\mathbb{Q}}$). Here, $W$ and $W^{\prime}$ are certain automorphic
vector bundles over $X_{H}$ and $X_{G}$, respectively, corresponding to
$[V_{H}(|m-n|-1)]\otimes\omega_{H}(1,0)$ and $[\widetilde{L}_{1}]$, if we use
the notation analogous to [LPSZ, §6]. The reinterpretation, done in [Oh], of
Gelbart–Piatetski-Shapiro’s integral asserts that there is a coherent
cohomology class $[E]\in H^{0}(X_{H}(\Gamma_{H}),W)$ defined over
$FF_{\Pi_{f}}$, which corresponds to a nearly-holomorphic Eisenstein series
$E$ under the Hodge splitting as in [LPSZ, §6.3], such that
$\langle[\varphi],[E]\rangle=C\Lambda\left(1/2,BC(\Pi)\right),$
where $BC$ means base-change. As $\langle,\rangle$ is defined over
$\mathbb{Q}$, the left hand side is in $F^{\prime}$. On the other hand, the
right hand side is nonzero as observed above. Thus, the desired statement
follows. ∎
### 4.2. Hodge-linear algebra
We now carry out the relevant Hodge-linear algebra calculations to prove Theorem
2.18 for $G=\operatorname{SU}(2,1)$. For the sake of simplicity, we assume
that we work with the case of Harish–Chandra character $(1,1,0)$; the same
calculation yields the proof for general weights. Our goal is, as in §3.2, to
convert $\operatorname{vol}F^{1}H_{\operatorname{dR}}(\operatorname{Ad}M)$
into an expression that involves Deligne’s periods. We will prove
###### Proposition 4.3.
For a motive $M$ associated with $\Pi$ in the sense of Conjecture 2.20, we
have
$\operatorname{vol}F^{1}H_{\operatorname{dR}}(\operatorname{Ad}M)\sim_{\overline{\mathbb{Q}}^{\times}}\frac{\pi^{3}c^{+}(BC(M))^{3/2}}{\delta(M_{\overline{\sigma}})^{1/2}}.$
###### Proof.
In this case, the motive $M$ is a motive over $F$ with coefficients in
$\mathbb{Q}$. If we denote by $\sigma:F\rightarrow\mathbb{C}$ the preferred
complex embedding, then as the minimal $K$-type is just $(1,1,1)$, by the
recipe in [HLS, 2.3],
$H_{B}(M_{\sigma})\otimes_{\mathbb{Q}}\mathbb{C}=\underbrace{\mathbb{C}v_{1}\oplus\mathbb{C}v_{2}}_{H^{1,0}}\oplus\underbrace{\mathbb{C}v_{3}}_{H^{0,1}},\quad H_{B}(M_{\overline{\sigma}})\otimes_{\mathbb{Q}}\mathbb{C}=\underbrace{\mathbb{C}\overline{v}_{1}\oplus\mathbb{C}\overline{v}_{2}}_{H^{-1,0}}\oplus\underbrace{\mathbb{C}\overline{v}_{3}}_{H^{0,-1}}.$
We can choose $v_{i}$ and $\overline{v}_{i}$ so that
$F_{\infty}(v_{i})=\overline{v}_{i}$. Then,
$BC(M):=\operatorname{Res}_{F/\mathbb{Q}}M_{F}=M_{\sigma}\oplus
M_{\overline{\sigma}}(-1)$ and $\operatorname{Ad}M=M_{\sigma}\otimes
M_{\overline{\sigma}}$. The Deligne period $\delta$ of a motive, as before, is
the determinant of the Betti-to-de Rham comparison map with respect to natural
underlying $\mathbb{Q}$-structures. Also, $c^{+}(BC(M))$ in this case would be
the determinant of the map
$H_{B}(BC(M))^{+}\otimes\mathbb{C}\hookrightarrow
H_{B}(BC(M))\otimes\mathbb{C}\xrightarrow{\sim}H_{\operatorname{dR}}(BC(M))\otimes\mathbb{C}\twoheadrightarrow(H_{\operatorname{dR}}(BC(M))/F^{1}H_{\operatorname{dR}}(BC(M)))\otimes\mathbb{C},$
with respect to the natural underlying $\mathbb{Q}$-structures.
Let $e_{1}^{+},e_{2}^{+},e_{3}^{+}$ be a $\mathbb{Q}$-basis of
$H_{B}(BC(M))^{+}$, $e_{1}^{-},e_{2}^{-},e_{3}^{-}$ be a $\mathbb{Q}$-basis of
$H_{B}(BC(M))^{-}$, $f_{1},f_{2},f_{3}\in F^{1}H_{\operatorname{dR}}(BC(M))$ be
an $F$-basis, $g_{1},g_{2},g_{3}\in
H_{\operatorname{dR}}(BC(M))/F^{1}H_{\operatorname{dR}}(BC(M))$ be an
$F$-basis, and $\widetilde{g}_{1},\widetilde{g}_{2},\widetilde{g}_{3}\in
H_{\operatorname{dR}}(BC(M))$ be lifts of $g_{1},g_{2},g_{3}$. We can further
assume that $f_{1},f_{2}\in F^{1}H_{\operatorname{dR}}(M_{\sigma})$, $f_{3}\in
F^{1}H_{\operatorname{dR}}(M_{\overline{\sigma}})$,
$\widetilde{g}_{1},\widetilde{g}_{2}\in
H_{\operatorname{dR}}(M_{\overline{\sigma}})$, $\widetilde{g}_{3}\in
H_{\operatorname{dR}}(M_{\sigma})$. Given two $\mathbb{C}$-bases of
$H_{B}(M)\otimes\mathbb{C}\cong H_{\operatorname{dR}}(M)\otimes\mathbb{C}$, we
can express the map into a matrix,
$\begin{pmatrix}e_{1}^{+}&e_{2}^{+}&e_{3}^{+}&e_{1}^{-}&e_{2}^{-}&e_{3}^{-}\end{pmatrix}=\begin{pmatrix}f_{1}&f_{2}&f_{3}&\widetilde{g}_{1}&\widetilde{g}_{2}&\widetilde{g}_{3}\end{pmatrix}\begin{pmatrix}A&B\\\
C&D\end{pmatrix},$
for $A,B,C,D\in M_{3}(\mathbb{C})$.
Under the canonical isomorphisms
$H^{1,0}(BC(M))\cong F^{1}H_{\operatorname{dR}}(BC(M))\otimes\mathbb{C},\quad
H^{0,1}(BC(M))\cong\frac{H_{\operatorname{dR}}(BC(M))}{F^{1}H_{\operatorname{dR}}(BC(M))}\otimes\mathbb{C},$
let
$f_{1,B},f_{2,B},f_{3,B}\in H^{1,0}(BC(M)),\qquad g_{1,B},g_{2,B},g_{3,B}\in
H^{0,1}(BC(M)),$
be the images of $f_{1},f_{2},f_{3},g_{1},g_{2},g_{3}$ under the above
isomorphisms. Then, by the same argument as in §3.2,
$\begin{pmatrix}e_{1}^{+}&e_{2}^{+}&e_{3}^{+}&e_{1}^{-}&e_{2}^{-}&e_{3}^{-}\end{pmatrix}=\begin{pmatrix}f_{1,B}&f_{2,B}&f_{3,B}&{g}_{1,B}&{g}_{2,B}&{g}_{3,B}\end{pmatrix}\begin{pmatrix}A^{\prime}&B^{\prime}\\\
C&D\end{pmatrix}.$
As $c_{B}(f_{1,B}),c_{B}(f_{2,B}),c_{B}(f_{3,B})$ and
$g_{1,B},g_{2,B},g_{3,B}$ are two $\mathbb{C}$-bases of $H^{0,1}(BC(M))$,
there is a system of linear relations
$\begin{pmatrix}c_{B}(f_{1,B})&c_{B}(f_{2,B})&c_{B}(f_{3,B})\end{pmatrix}=\begin{pmatrix}g_{1,B}&g_{2,B}&g_{3,B}\end{pmatrix}X,$
for some $X\in\operatorname{GL}_{3}(\mathbb{C})$. Using the same technique as
[PV, Lemma, §8.2.1], we see that $C=XA^{\prime},D=XB^{\prime}$, which
similarly implies that
$c^{+}(BC(M))^{2}\sim_{\overline{\mathbb{Q}}^{\times}}\delta(BC(M))\det X,$
where we used $c^{+}(BC(M))\sim_{\overline{\mathbb{Q}}^{\times}}c^{-}(BC(M))$ due to
the decomposition $BC(M)=M_{\sigma}\oplus M_{\overline{\sigma}}(-1)$ as in
[HL, (8)]. Furthermore, as $g_{1,B},g_{2,B}\in H^{0,1}(M_{\sigma})$ and
$g_{3,B}\in H^{0,1}(M_{\overline{\sigma}}(-1))$, it turns out that $X$ is of
the form
$\begin{pmatrix}Y&\\\ &\mu\end{pmatrix},\quad
Y\in\operatorname{GL}_{2}(\mathbb{C}),\mu\neq 0.$
We know that
$(\operatorname{vol}F^{1}H_{\operatorname{dR}}(\operatorname{Ad}M))^{2}\sim_{\overline{\mathbb{Q}}^{\times}}\lambda$,
where $\varphi(c_{B}(v^{+}))=\lambda v^{-}$, $v^{+}=(f_{1}\otimes
f_{3})\wedge(f_{2}\otimes f_{3})$ and $v^{-}=(g_{1}\otimes
g_{3})\wedge(g_{2}\otimes g_{3})$. This implies that $\lambda=\mu^{2}\det
Y=\mu\det X$.
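Let us verify the first equality (a direct computation, using that $c_{B}$ is multiplicative on tensors): since $X=\left(\begin{smallmatrix}Y&\\ &\mu\end{smallmatrix}\right)$, we have $c_{B}(f_{3})=\mu g_{3}$ and $c_{B}(f_{i})=Y_{1i}g_{1}+Y_{2i}g_{2}$ for $i=1,2$, modulo $F^{0}$, so that
$c_{B}(v^{+})=\left(c_{B}(f_{1})\otimes\mu g_{3}\right)\wedge\left(c_{B}(f_{2})\otimes\mu g_{3}\right)=\mu^{2}\det Y\,(g_{1}\otimes g_{3})\wedge(g_{2}\otimes g_{3})=\mu^{2}\det Y\,v^{-}.$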
Now, as in [Ha4, §1.2], consider the determinant motive $\det(M)$. An
$F$-rational basis vector of $H_{\operatorname{dR}}(\det(M_{\sigma}))$ can be
taken as $v_{\sigma}:=f_{1}\wedge f_{2}\wedge\widetilde{g}_{3}$, and similarly
$v_{\overline{\sigma}}:=\widetilde{g}_{1}\wedge\widetilde{g}_{2}\wedge f_{3}$
can be taken as an $F$-rational basis vector of
$H_{\operatorname{dR}}(\det(M_{\overline{\sigma}}))$. On the other hand, if we
take $e_{\sigma}$ to be a $\mathbb{Q}$-rational basis vector of
$H_{B}(\det(M_{\sigma}))$, then
$F_{\infty}(e_{\sigma})=:e_{\overline{\sigma}}$ is a $\mathbb{Q}$-rational
basis vector of $H_{B}(\det(M_{\overline{\sigma}}))$. Then
$e_{\sigma}=\delta(M_{\sigma})v_{\sigma}$ and
$e_{\overline{\sigma}}=\delta(M_{\overline{\sigma}})v_{\overline{\sigma}}$, so
$c_{B}(v_{\sigma})=\delta(M_{\sigma})^{-1}e_{\overline{\sigma}}.$
On the other hand,
$c_{B}(v_{\sigma})=F_{\infty}(v_{\sigma})=(\det
Y\cdot\mu^{-1})v_{\overline{\sigma}},$
so
$\delta(M_{\overline{\sigma}})=\mu^{-1}\delta(M_{\sigma})\det Y.$
On the other hand, due to the polarization, we have [Ha4, (1.2.5)],
$\delta(M_{\overline{\sigma}})=\delta(M_{\sigma})^{-1}(2\pi i)^{-6}.$
Thus,
$\pi^{6}\delta(M_{\sigma})^{2}\det Y\sim_{\overline{\mathbb{Q}}^{\times}}\mu,$
so
$\mu^{2}\sim_{\overline{\mathbb{Q}}^{\times}}\pi^{6}\delta(M_{\sigma})^{2}\det
X\sim_{\overline{\mathbb{Q}}^{\times}}c^{+}(BC(M))^{2}\pi^{6}\frac{\delta(M_{\sigma})}{\delta(M_{\overline{\sigma}})}\sim_{\overline{\mathbb{Q}}^{\times}}c^{+}(BC(M))^{2}\pi^{12}\delta(M_{\sigma})^{2},$
or
$\mu\sim_{\overline{\mathbb{Q}}^{\times}}\pi^{6}\delta(M_{\sigma})c^{+}(BC(M)),$
which implies that
$(\operatorname{vol}F^{1}H_{\operatorname{dR}}(\operatorname{Ad}M))^{2}\sim_{\overline{\mathbb{Q}}^{\times}}\frac{\pi^{6}}{\delta(M_{\overline{\sigma}})}c^{+}(BC(M))^{3},$
as desired.∎
### 4.3. Completion of the proof
###### Proof of Theorem 2.18 for $\operatorname{SU}(2,1)$.
Let $f_{\operatorname{hol}},f_{\operatorname{gen}}$ be newforms (see
Assumption 2.5) in $\Pi_{f}\otimes\Pi_{\operatorname{hol}}$ and
$\Pi_{f}\otimes\Pi_{\operatorname{gen}}$, respectively, such that
$[f_{\operatorname{hol}}],[f_{\operatorname{gen}}]$ are defined over
$\overline{\mathbb{Q}}$. Then, under the assumptions of (2.5) of Assumption
2.5\footnote{There is an extra condition on large residue characteristic in _loc. cit._, but this restriction was recently removed by [BP, Theorem 1].}, [Zha, Theorem 1.2]\footnote{This is a special case of the refined Gan–Gross–Prasad conjecture for Fourier–Jacobi periods (see e.g. [Xu, §1.1]), which is also often referred to as the Ichino–Ikeda conjecture.} implies that
$\frac{\langle
f_{\operatorname{hol}},f_{\operatorname{hol}}\rangle_{P}}{|FJ(f_{\operatorname{hol}})|^{2}}\sim_{\overline{\mathbb{Q}}^{\times}}\pi^{-2}\frac{L(1,\Pi,\operatorname{Ad})}{L(1/2,BC(\Pi))},$
where $FJ(f_{\operatorname{hol}})$ is the special Fourier–Jacobi period of
$f_{\operatorname{hol}}$, defined by
$FJ(f_{\operatorname{hol}})=\int_{\operatorname{SU}(1,1)(\mathbb{Q})\backslash\operatorname{SU}(1,1)(\mathbb{A})}f_{\operatorname{hol}}(h)dh,$
integrated against the Tamagawa measure. This period is in turn expressed as
an inner product of (algebraic) theta functions,
$FJ(f_{\operatorname{hol}})\sim_{\overline{\mathbb{Q}}^{\times}}\langle
a(0,f_{\operatorname{hol}})(v),1\rangle,$
where $a(0,f_{\operatorname{hol}})$ is the zero-th Fourier–Jacobi coefficient
of $f_{\operatorname{hol}}$ and $1$ is the constant function (regarded as a
trivial theta function). Note that the nonvanishing of
$FJ(f_{\operatorname{hol}})$ (and thus $a(0,f_{\operatorname{hol}})$) is also
a part of the content of [Zha, §1.1].
By [Lan1], $a(0,f_{\operatorname{hol}})$ is identified with the algebraic
Fourier–Jacobi coefficient, and in particular is in
$\overline{\mathbb{Q}}^{\times}$, as $[f_{\operatorname{hol}}]$ is defined
over $\overline{\mathbb{Q}}$. Thus, we have $\langle
f_{\operatorname{hol}},f_{\operatorname{hol}}\rangle_{P}\sim_{\overline{\mathbb{Q}}^{\times}}\pi^{-2}\frac{L(1,\Pi,\operatorname{Ad})}{L(1/2,BC(\Pi))}$.
On the other hand, we exploit the fact that $H^{1}$ is the middle degree of
the coherent cohomology of the Shimura variety. Note that the infinity type of
$\overline{f}_{\operatorname{gen}}$ is a generic LDS as well (with different
infinitesimal character from $f_{\operatorname{gen}}$). Suppose
$C\in\mathbb{C}^{\times}$ is a constant such that
$C[\overline{f}_{\operatorname{gen}}]$ is defined over
$\overline{\mathbb{Q}}$. Then,
(G)
$\langle f_{\operatorname{gen}},f_{\operatorname{gen}}\rangle_{P}=(2\pi i)^{2}\langle[f_{\operatorname{gen}}],[\overline{f}_{\operatorname{gen}}]\rangle_{\operatorname{coh}}\sim_{\overline{\mathbb{Q}}^{\times}}\pi^{2}C^{-1},$
where the cohomological cup product pairing
$\langle-,-\rangle_{\operatorname{coh}}$ is the Serre duality pairing induced
from a $\overline{\mathbb{Q}}$-morphism of algebraic
$K_{\infty}$-representations
$V\otimes\operatorname{Hom}(V,\mathfrak{g}^{-1,1})\rightarrow\mathfrak{g}^{-1,1}$.
By Theorem 4.2,
$\Lambda(1/2,BC(\Pi))W_{C\overline{f}_{\operatorname{gen}}}=C\Lambda(1/2,BC(\Pi))W_{\overline{f}_{\operatorname{gen}}},$
is defined over $\overline{\mathbb{Q}}$. We now invoke the Lapid–Mao
conjecture (Conjecture 2.19), which says
$\langle
f_{\operatorname{gen}},f_{\operatorname{gen}}\rangle_{P}\sim_{\overline{\mathbb{Q}}^{\times}}|W_{f_{\operatorname{gen}}}(1)|^{2}\frac{L(1,\Pi,\operatorname{Ad})}{|W_{\infty}(1)|^{2}}.$
By Remark 3.2(1), $W_{\overline{f}_{\operatorname{gen}}}(1)\neq 0$, and
$C\Lambda(1/2,BC(\Pi))W_{\overline{f}_{\operatorname{gen}}}(1)\in\overline{\mathbb{Q}}^{\times}$
as observed in the beginning of the section. Thus, we have
$|W_{f_{\operatorname{gen}}}(1)|^{2}\frac{L(1,\Pi,\operatorname{Ad})}{|W_{\infty}(1)|^{2}}\overset{\text{Lapid--Mao}}{\sim_{\overline{\mathbb{Q}}^{\times}}}\langle f_{\operatorname{gen}},f_{\operatorname{gen}}\rangle_{P}\overset{\text{(F)}}{\sim_{\overline{\mathbb{Q}}^{\times}}}\pi^{2}\Lambda(1/2,BC(\Pi))\overline{W_{f_{\operatorname{gen}}}(1)}.$
By using
$W_{\overline{f}_{\operatorname{gen}}}(1)=\overline{W_{f_{\operatorname{gen}}}(1)}$,
we have
$W_{f_{\operatorname{gen}}}(1)\sim_{\overline{\mathbb{Q}}^{\times}}\pi^{2}\frac{|W_{\infty}(1)|^{2}\Lambda(1/2,BC(\Pi))}{L(1,\Pi,\operatorname{Ad})}.$
On the other hand, Theorem 4.2 says
$W_{f_{\operatorname{gen}}}(1)\sim_{\overline{\mathbb{Q}}^{\times}}\frac{1}{\Lambda(1/2,BC(\Pi))},$
which gives an extra relationship that we can utilize; namely,
$\Lambda(1,\Pi,\operatorname{Ad})\sim_{\overline{\mathbb{Q}}^{\times}}\Lambda(1/2,BC(\Pi))^{2}|W_{\infty}(1)|^{2}\pi^{2}.$
For example, by combining (F) and (G), we have
$\langle
f_{\operatorname{gen}},f_{\operatorname{gen}}\rangle_{P}\sim_{\overline{\mathbb{Q}}^{\times}}\pi^{4}\frac{|W_{\infty}(1)|^{2}\Lambda(1/2,BC(\Pi))^{2}}{L(1,\Pi,\operatorname{Ad})}.$
###### Remark 4.4.
Note that this is already observed in [Ha4, Corollary 1.3.5], under the
assumption of Deligne’s conjectures. In particular, assuming Deligne’s
conjectures, we deduce that
$|W_{\infty}(1)|\sim_{\overline{\mathbb{Q}}^{\times}\pi^{\mathbb{Z}}}1$.
Combining these, we get
$\frac{\langle f_{\operatorname{hol}},f_{\operatorname{hol}}\rangle_{P}}{\langle f_{\operatorname{gen}},f_{\operatorname{gen}}\rangle_{P}}\overset{\text{Lapid--Mao + (H)}}{\sim_{\overline{\mathbb{Q}}^{\times}}}\frac{\pi^{-2}\frac{L(1,\Pi,\operatorname{Ad})}{L(1/2,BC(\Pi))}}{\pi^{4}\frac{|W_{\infty}(1)|^{2}\Lambda(1/2,BC(\Pi))^{2}}{L(1,\Pi,\operatorname{Ad})}}\sim_{\overline{\mathbb{Q}}^{\times}}\frac{1}{|W_{\infty}(1)|^{2}}\frac{L(1,\Pi,\operatorname{Ad})^{2}}{L(1/2,BC(\Pi))^{3}}$
$\sim_{\overline{\mathbb{Q}}^{\times}}\frac{\pi^{18}}{|W_{\infty}(1)|^{2}}\left(\frac{L_{\infty}(1,\Pi,\operatorname{Ad})}{L_{\infty}(0,\Pi,\operatorname{Ad})}\cdot\frac{L(1,\Pi,\operatorname{Ad})}{L(1/2,BC(\Pi))^{3/2}}\right)^{2}\sim_{\overline{\mathbb{Q}}^{\times}}\frac{\pi^{21}}{|W_{\infty}(1)|^{2}}\left(\frac{L_{\infty}(1,\Pi,\operatorname{Ad})}{L_{\infty}(0,\Pi,\operatorname{Ad})}\cdot\frac{L(1,\Pi,\operatorname{Ad})}{\operatorname{vol}F^{1}H_{\operatorname{dR}}(\operatorname{Ad}M)}\right)^{2},$
by Deligne’s conjecture applied to Proposition 4.3. We are done, as the
parenthesized term is the RHS of Conjecture 2.15. ∎
###### Remark 4.5.
Similarly to Remark 3.6, $W_{\infty}(1)$ is given by the inverse Mellin
transform of the formulae given in [KO, Theorem 5.5], which the author at the
moment is unable to compute.
## 5\. Towards the motivic action conjecture for rationality of classes
The original motivic action conjecture of [PV] involves the rational structure
of singular cohomology. To derive a similar conjecture, we would have to come
up with a way to normalize the choices of newforms of all the automorphic
representations at once. Recall that, in the case of modular forms, this is
done by using complex conjugation. Unfortunately, so far there is no general
construction of an operation that can move between different infinity types.
We will tentatively name such an operation a _generalized complex
conjugation_. Approaching the generalized complex conjugations using the
theory of cycle spaces and Penrose transform will be the subject of the
author’s forthcoming work. For now, we will have to content ourselves with a
preliminary analysis on what a generalized complex conjugation should be.
On the other hand, as the name suggests, the usual complex conjugation can be
used when the symmetric space is the upper half plane. More generally, if the
symmetric space is a product of upper half planes (e.g. in the case of Hilbert
modular varieties), then the complex conjugations with respect to each
variable would be good candidates for generalized complex conjugations; these
are called _partial complex conjugations_ [Ha2] in the literature. In this
case, we can deduce a precise conjecture on the
$\overline{\mathbb{Q}}$-rational structure of coherent cohomology, from the
generalities of Appendix B. There is an existing work of Horawa exactly on
this problem [Ho], and we will compare our conjecture with that of _op. cit._
In particular, we observe that the numerical evidence given in _op. cit._ is
compatible with both conjectures.
### 5.1. The case of Hilbert modular forms: comparison with [Ho]
The work [Ho] states a similar conjecture, Conjecture 3.21 of _op. cit._ , on
what archimedean motivic action should be for Hilbert modular forms of partial
weight one. It uses the partial complex conjugation, which utilizes the fact
that every (limit of) discrete series for
$\operatorname{SL}_{2}(\mathbb{R})^{d}$ is holomorphic or antiholomorphic in
each variable.
###### Definition 5.1 (Partial complex conjugations).
Let $F$ be a totally real field of degree $d$ and $\varphi$ be a holomorphic
automorphic form for
$G=\operatorname{Res}_{F/\mathbb{Q}}\operatorname{GL}_{2,F}$, seen as a
holomorphic function on the symmetric space for $G(\mathbb{R})$,
$(\mathbb{C}-\mathbb{R})^{d}$, of weight $(k_{1},\cdots,k_{d};r)$ where
$k_{i}\geq 1$, $k_{i}\equiv r(\operatorname{mod}2)$ for $i=1,\cdots,d$. For
$I\subset\\{1,\cdots,d\\}$, $\varphi^{I}$ is the automorphic form for
$\operatorname{Res}_{F/\mathbb{Q}}\operatorname{SL}_{2,F}$, defined by
$\varphi^{I}(g)=\varphi(gJ^{I})$ for $J^{I}=(J_{1}^{I},\cdots,J_{d}^{I})$
given by $J_{j}^{I}=\left(\begin{smallmatrix}-1&0\\\
0&1\end{smallmatrix}\right)$ if $j\in I$ and $J_{j}^{I}=\operatorname{id}_{2}$
if $j\notin I$. This is called the _partial complex conjugation_.
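For example (a consistency remark we add; compare the desiderata in §5.2 below): by definition $\varphi^{\emptyset}=\varphi$, and one expects $\varphi^{\\{1,\cdots,d\\}}$ to recover the usual complex conjugate form $\overline{\varphi}$ up to normalization, matching the requirement $c_{C_{\operatorname{hol}},C_{\mathrm{antihol}}}(f)=\overline{f}$ imposed there.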
The main conjecture of _op. cit._, [Ho, Conjecture 3.21], describes the
rationality of cohomology classes in terms of partial complex conjugations. In
particular, it implies that the decomposition
$H^{i}(X)[\Pi_{f}]=H^{i}(\mathfrak{p},K;\omega\otimes
I(\lambda))\otimes\Pi_{f}^{\Gamma}=\left(\bigoplus_{\pi\in\mathfrak{P}_{\lambda}}H^{i}(\mathfrak{p},K;\omega\otimes\pi)\right)\otimes\Pi_{f}^{\Gamma}$
descends to $\overline{\mathbb{Q}}$.
We have not been successful in approaching the conjecture. On the other hand,
based on the materials developed in Appendix B, we suggest a slightly
different conjecture. We use the language of [Ho, §3] below.
###### Conjecture 5.2 (Motivic action conjecture for Hilbert modular
varieties; compare with [Ho, Conjecture 3.21]).
Let $f$ be a parallel weight one form. For each $u\in U_{f}^{\vee}$, let
$u_{i}\in(\operatorname{Ad}^{0}M\otimes_{\iota}\mathbb{C})^{\sigma_{i}c_{0}\sigma_{i}^{-1}}\cong\mathbb{C}$
be the $\sigma_{i}$-component of $U_{f}^{\vee}\otimes_{\iota}\mathbb{C}$ as in
[Ho, Proposition 3.2], where the isomorphism is given by the natural
$\overline{\mathbb{Q}}$-structure on $\mathfrak{sl}_{2}^{d}$. Then, for every
$u\in U_{f}^{\vee}$ not in the kernel of the pairing of [Ho, Lemma 3.1],
$2\pi
i\sum_{i=1}^{d}\frac{\omega_{f}^{\\{i\\}}}{\log(|\tau\otimes\iota(u)|)}\in
H^{1}(X(\Gamma),\omega),$
defines a cohomology class over $\overline{\mathbb{Q}}$.
Conjecture 5.2 suggests that the decomposition of coherent cohomology as a
$(\mathfrak{p},K)$-cohomology of archimedean representations is not in general
$\overline{\mathbb{Q}}$-rational. This is compatible with the corrected
version of Conjecture 3.21 of _op. cit._.
In [Ho, §5], a numerical evidence in favor of the conjecture of [Ho] is given
for base change forms in the case of Hilbert modular forms for real quadratic
fields. We claim that, for such Hilbert modular forms, the two conjectures
coincide:
###### Proposition 5.3.
Let $f$ be a Hilbert modular eigen-cuspform of parallel weight one for a real
quadratic field $F$. If $f$ is a base change form, then [Ho, Conjecture 3.21]
implies Conjecture 5.2.
###### Proof.
Let $\sigma_{1},\sigma_{2}$ be the two real embeddings of $F$, and suppose $f$
is a base change form of $f_{0}$. Indeed, it is shown in [Ho, Corollary 5.2]
that the space of $\overline{\mathbb{Q}}$-Stark units for $f$,
$U_{f}\otimes\overline{\mathbb{Q}}$, is naturally isomorphic to
$(U_{f_{0}}\otimes\overline{\mathbb{Q}})^{\oplus 2}$, and the decomposition is
compatible with the Beilinson regulator. In particular, the
$\overline{\mathbb{Q}}$-vector space spanned by the log of Stark units of
${f}$ is exactly the $\overline{\mathbb{Q}}$-vector space spanned by the log
of Stark units of $f_{0}$. Consequently, both Conjecture 5.2 and [Ho,
Conjecture 3.21] are equivalent to the statement that
$\overline{\mathbb{Q}}\omega_{f}^{\\{1\\}}\oplus\overline{\mathbb{Q}}\omega_{f}^{\\{2\\}}=\frac{\log(u_{f_{0}})}{2\pi
i}H^{1}(X(\Gamma)_{\overline{\mathbb{Q}}},\omega),$
where $u_{f_{0}}\in U_{f_{0}}$. ∎
Indeed, a base change form satisfies an extra symmetry with respect to the
“swap of the two upper half planes”, namely
$\mathbb{H}^{2}\xrightarrow{(x,y)\mapsto(y,x)}\mathbb{H}^{2},$
and this extra symmetry guarantees that the $\overline{\mathbb{Q}}$-splitting
of our form is compatible with the $\overline{\mathbb{Q}}$-splitting of the
form in [Ho, Conjecture 3.21].
### 5.2. Desiderata for generalized complex conjugations
To have a normalized choice of newforms simultaneously, we would like a
certain way to relate different newforms. Example 2.17 suggests that the
complex conjugation should play a role in the tentative statement of the full
motivic action conjecture regarding rationality of cohomology classes. On the
other hand, the complex conjugation can go back and forth between only two
types of LDS’s. For example, it sends a vector in the “holomorphic LDS” to
that in the “anti-holomorphic LDS.” Since a general motivic action conjecture
involves many more LDS’s, we suggest that there are _generalized complex
conjugations_ that can go between any of $\pi\in\mathfrak{P}_{\lambda}$.
We denote a generalized complex conjugation, sending an automorphic form
$v\in\Pi\otimes A_{C}(\lambda)$ to another automorphic form in $\Pi\otimes
A_{C^{\prime}}(\lambda)$, by $c_{C,C^{\prime}}$. There are several desired
properties:
* •
$c_{C,C^{\prime}}$ is $\mathbb{C}$-linear,
* •
If $C_{\operatorname{hol}},C_{\mathrm{antihol}}$ are in
$\mathfrak{C}_{\lambda}$,
$c_{C_{\operatorname{hol}},C_{\mathrm{antihol}}}(f)=\overline{f}$,
* •
$\langle f,f\rangle_{P}=\langle
c_{C,C^{\prime}}(f),c_{C,C^{\prime}}(f)\rangle_{P}$ (Condition (B)),
* •
$c_{C^{\prime},C^{\prime\prime}}\circ c_{C,C^{\prime}}=c_{C,C^{\prime\prime}}$
and $c_{C,C}=\operatorname{id}$,
* •
$c_{C_{1}^{\prime}\times C_{2}^{\prime},C_{1}\times
C_{2}}=(c_{C_{1}^{\prime},C_{1}},c_{C_{2}^{\prime},C_{2}})$, for
$G(\mathbb{R})=G_{1}(\mathbb{R})\times G_{2}(\mathbb{R})$.
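Note that these axioms already interact nontrivially (a remark we add): combining the second and fourth bullets gives
$c_{C_{\mathrm{antihol}},C_{\operatorname{hol}}}\circ c_{C_{\operatorname{hol}},C_{\mathrm{antihol}}}=c_{C_{\operatorname{hol}},C_{\operatorname{hol}}}=\operatorname{id},$
so $c_{C_{\mathrm{antihol}},C_{\operatorname{hol}}}(\overline{f})=f$; that is, the generalized conjugations between the holomorphic and antiholomorphic chambers are forced to be the usual complex conjugation in both directions.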
It is still unclear how to formulate a set of conditions which will uniquely
characterize $c_{C,C^{\prime}}$’s. Although the nature of generalized complex
conjugations still remains mysterious, using the ideas of Penrose transform
and its related geometry, it could be possible to construct the purported
generalized complex conjugations, following a suggestion of Joseph Wolf.
This is the subject of the author’s forthcoming work.
###### Remark 5.4.
The last bullet point suggests that the _partial complex conjugation_ (e.g.
[Ha2], [Sh1]) serves the role of generalized complex conjugations in the case
of Hilbert modular forms. Unfortunately, for $C_{\operatorname{hol}}$ and
$C_{\mathrm{antihol}}$ to both be in $\mathfrak{C}_{\lambda}$, $\lambda$ has
to be orthogonal to all compact roots, and this is allowed only if there is no
compact root (as we exclude degenerate limit of discrete series from our
discussion). Thus, we cannot use the usual complex conjugation except when
the associated symmetric space is a product of several upper half planes.
###### Remark 5.5.
It is a relatively well-accepted technique in the case of unitary groups to
use the theta correspondence to move between different infinity types, as
suggested by the recipe of [Pr]. Indeed, for a unitary group, [HLS] proves
that the theta correspondences and character twists act transitively upon the
full Vogan $L$-packet (see [Vo2]). However, due to the idiosyncrasies of the
recipe for the theta correspondence, it is still unclear whether the theta
correspondence should be the generalized complex conjugation in this case.
###### Remark 5.6.
We also speculate that this is the archimedean version of _excursion
operators_ (which first appeared in the work of [Laf] on the global Langlands
correspondence over function fields, and were extended to the context of mixed
characteristic local Langlands via Kottwitz’s conjecture, e.g. [FaMa], [RV]).
Indeed, the isotypic decomposition with respect to the excursion algebra
canonically decomposes the automorphic spectrum into $L$-indistinguishable
pieces, which would mean that the excursion operators can move between
different members of an $L$-packet.
Under the hypothesis on the existence of generalized complex conjugations
$c_{C,C^{\prime}}$, we can formulate the motivic action conjecture in the
Shimura variety context in its full form.
###### Conjecture 5.7.
Let $\lambda,\Pi$ as in Conjecture 2.13. Let
$f_{h}\in\Pi_{f}^{\operatorname{new}}\otimes\pi_{h}^{\operatorname{new}}$ be a
newform such that $[f_{h}]$ (see Definition 2.8) is defined over
$\overline{\mathbb{Q}}$. Let
$\mathcal{E}_{\lambda}\cong\operatorname{Ext}^{1}_{(\mathfrak{p},K)}(I(\lambda),\pi_{h})$
be defined such that
$\mathbb{C}\alpha_{i}\cong\operatorname{Ext}^{1}_{(\mathfrak{p},K)}(\pi_{\\{1,\cdots,n_{\lambda}\\}-\\{i\\}},\pi_{h})$
sends $1\alpha_{i}$ to the homomorphism
$c_{C_{h},C_{\\{1,\cdots,n_{\lambda}\\}-\\{i\\}}}(f_{h})\mapsto f_{h}$ (using
the identification from Proposition B.5). Then, for $v\in
H_{M}^{1}((\operatorname{Ad}^{*}\Pi)_{\mathcal{O}_{E}},\overline{\mathbb{Q}}(1))\subset
H_{\mathscr{D}}^{1}((\operatorname{Ad}^{*}\Pi)_{\mathbb{R}},\mathbb{C}(1))$,
$a(v)\cdot f_{l}$ defines a coherent cohomology class $[a(v)\cdot f_{l}]\in
H^{i_{l}+1}(X_{G}(\Gamma),[V_{A(\lambda)}])$ that is defined over
$\overline{\mathbb{Q}}$.
One can easily state a similar conjecture for the action of
$\bigwedge^{*}H_{M}^{1}((\operatorname{Ad}\Pi)_{\mathcal{O}_{E}},\overline{\mathbb{Q}}(1))$,
but it is no deeper than the conjecture stated above.
As discussed in §5.1, there is essentially one known case of what a generalized
complex conjugation should be, namely the case of Hilbert modular forms.
## Appendix A Beilinson’s conjecture over a general number field
In this section, we recall the statement of _Beilinson’s conjecture_ we will
need in the paper. A usual formulation of the conjecture involves motives over
$\mathbb{Q}$ with coefficients in $\mathbb{Q}$, but we would have to relax
both to be arbitrary number fields. A standard reference for this matter is
[Ra, §6].
### A.1. Chow motives
We recall the definition of Chow motives over a number field $k$,
$\mathscr{M}_{k,\operatorname{rat}}$, as in [PV, §2.1.1], defined by cohomological
correspondences up to rational equivalence. If $k$ is a number field, then for
a Chow motive $M\in\mathscr{M}_{k,\operatorname{rat}}$, there are the
following cohomology theories, motivated by the cohomology theories of smooth
proper $k$-varieties.
* •
For each prime $\ell$, there is $\ell$-adic cohomology
$H^{i}(M_{\overline{k}},\mathbb{Q}_{\ell}(r))$, which is a finite-dimensional
$\ell$-adic representation of $\operatorname{Gal}(\overline{k}/k)$.
* •
For each embedding $\sigma:k\hookrightarrow\mathbb{C}$, there is Betti
cohomology $H^{i}_{B}(M_{\sigma},\mathbb{Q}(r))$, which is a pure
$\mathbb{Q}$-Hodge structure. If $\sigma$ is a real embedding, it is equipped
with the _infinite Frobenius_ $\operatorname{Fr}_{\sigma,\infty}$. On
$H^{i}_{B}(M_{\sigma},\mathbb{Q}(r))\otimes_{\mathbb{Q}}\mathbb{C}$, the
involution $\operatorname{Fr}_{\sigma,\infty}\otimes c_{B}$ preserves the
Hodge decomposition, where $c_{B}$ is the complex conjugation on the second
factor. If $\sigma$ is a complex embedding,
$\operatorname{Fr}_{\sigma,\infty}$ is rather an isomorphism of
$\mathbb{Q}$-vector spaces
$\operatorname{Fr}_{\sigma,\infty}:H_{B}^{i}(M_{\sigma},\mathbb{Q}(r))\xrightarrow{\sim}H^{i}_{B}(M_{\overline{\sigma}},\mathbb{Q}(r)).$
* •
There is de Rham cohomology $H^{i}_{\operatorname{dR}}(M)(r)$, which is a
finite-dimensional $k$-vector space, equipped with a decreasing filtration
$F^{\bullet}H_{\operatorname{dR}}^{i}(M)(r)$.
We will not care much about the $\ell$-adic realization, as it plays no role in
the paper. For each embedding $\sigma:k\hookrightarrow\mathbb{C}$, there is a
comparison isomorphism
$\operatorname{comp}_{\sigma}:H_{B}^{i}(M_{\sigma},\mathbb{Q}(r))\otimes_{\mathbb{Q}}\mathbb{C}\cong
H_{\operatorname{dR}}^{i}(M)(r)\otimes_{k,\sigma}\mathbb{C},$
such that $\oplus_{p\geq n}H^{p,q}$ corresponds to
$(F^{n}H_{\operatorname{dR}})\otimes\mathbb{C}$. If $\sigma$ is a real
embedding, $\operatorname{Fr}_{\sigma,\infty}\otimes c_{B}$ corresponds to
$1\otimes c_{\operatorname{dR}}$, where $c_{\operatorname{dR}}$ is the complex
conjugation on the second factor of
$H_{\operatorname{dR}}^{i}(M)(r)\otimes_{k,\sigma}\mathbb{C}$. If $\sigma$ is
a complex embedding, $\operatorname{Fr}_{\sigma,\infty}\otimes c_{B}$
corresponds to $1\otimes c_{\operatorname{dR}}$ in the sense that there is a
commutative diagram
$\begin{CD}H_{B}^{i}(M_{\sigma},\mathbb{Q}(r))\otimes_{\mathbb{Q}}\mathbb{C}@>{\operatorname{comp}_{\sigma}}>>H_{\operatorname{dR}}^{i}(M)(r)\otimes_{k,\sigma}\mathbb{C}\\ @V{\operatorname{Fr}_{\sigma,\infty}\otimes c_{B}}VV@VV{1\otimes c_{\operatorname{dR}}}V\\ H_{B}^{i}(M_{\overline{\sigma}},\mathbb{Q}(r))\otimes_{\mathbb{Q}}\mathbb{C}@>{\operatorname{comp}_{\overline{\sigma}}}>>H_{\operatorname{dR}}^{i}(M)(r)\otimes_{k,\overline{\sigma}}\mathbb{C}\end{CD}$
Another key player is the Deligne cohomology. For a complex smooth projective
variety $X$ and a subring $A\subset\mathbb{C}$ invariant under complex
conjugation, the Deligne cohomology $H_{\mathscr{D}}^{i}(X,A(r))$ is defined
as the hypercohomology of the complex
$A(r)_{\mathscr{D}}:A(r)\rightarrow\mathcal{O}_{X}\xrightarrow{d}\Omega_{X}^{1}\xrightarrow{d}\cdots\xrightarrow{d}\Omega_{X}^{r-1},$
regarded as a complex of analytic sheaves. This admits a complex conjugation
of coefficients, denoted $c_{\mathscr{D}}$, which is induced from the complex
conjugation on $A(r)_{\mathscr{D}}$. Similarly, there is an infinite Frobenius
for Deligne cohomology.
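For orientation, we recall the standard first example (not needed in the sequel): for $r=1$, the exponential map identifies the Deligne complex with $\mathcal{O}_{X}^{\times}$ shifted by one,
$\mathbb{Z}(1)_{\mathscr{D}}=\left[\mathbb{Z}(1)\rightarrow\mathcal{O}_{X}\right]\simeq\mathcal{O}_{X}^{\times}[-1],\qquad\text{so that}\qquad H_{\mathscr{D}}^{2}(X,\mathbb{Z}(1))\cong H^{1}(X,\mathcal{O}_{X}^{\times})=\operatorname{Pic}(X).$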
In the formulation of Beilinson’s conjecture over $\mathbb{Q}$, a central role
is played by the cohomology of $M_{\mathbb{R}}$. It plays the same role in
Beilinson’s conjecture over general number fields, even though it may sound
peculiar to consider the base-change of $M$ to $\mathbb{R}$ even when $k$ is
not a real field.
###### Definition A.1.
Given a subfield $A\subset\mathbb{C}$ stable under complex conjugation, define
$H_{\operatorname{dR}}^{i}(M_{\mathbb{R}})(r)=\left(\bigoplus_{\sigma:k\hookrightarrow\mathbb{C}}H_{\operatorname{dR}}^{i}(M)(r)\otimes_{k,\sigma}\mathbb{C}\right)^{1\otimes
c_{\operatorname{dR}}}$
$H_{B}^{i}(M_{\mathbb{R}},A(r))=\left(\bigoplus_{\sigma:k\hookrightarrow\mathbb{C}}H_{B}^{i}(M_{\sigma},\mathbb{Q}(r))\otimes_{\mathbb{Q}}A\right)^{\oplus_{\sigma}\operatorname{Fr}_{\sigma,\infty}\otimes
c_{B}}$
$H_{\mathscr{D}}^{i}(M_{\mathbb{R}},A(r))=\left(\bigoplus_{\sigma:k\hookrightarrow\mathbb{C}}H_{\mathscr{D}}^{i}(M_{\sigma},A(r))\right)^{\oplus_{\sigma}\operatorname{Fr}_{\sigma,\infty}\otimes
c_{\mathscr{D}}}.$
Concretely, the cohomology of $M_{\mathbb{R}}$ is the part fixed by
$\operatorname{Fr}_{\infty}\otimes c_{B}=1\otimes c_{\operatorname{dR}}$ via
the Betti–de Rham comparison isomorphism. Furthermore, there is a Chern class map
$r_{\mathscr{D}}:H_{M}^{i}(M,\mathbb{Q}(r))\rightarrow
H_{\mathscr{D}}^{i}(M_{\mathbb{R}},\mathbb{R}(r)).$
The source of the Chern class map is too large, and we choose a subspace
$H_{M}^{i}(M_{\mathcal{O}_{k}},\mathbb{Q}(r))\subset
H_{M}^{i}(M,\mathbb{Q}(r))$ consisting of classes that “extend to a good
proper model of $M$ over $\mathcal{O}_{k}$”. If $M=h(X)$ for a smooth proper
$k$-variety $X$, which has a regular proper model $\mathfrak{X}$ over
$\mathcal{O}_{k}$, then
$H_{M}^{i}(M_{\mathcal{O}_{k}},\mathbb{Q}(r))=H_{M}^{i}(M,\mathbb{Q}(r))\cap(\operatorname{im}(K_{2r-i}\mathfrak{X}\rightarrow
K_{2r-i}X)\otimes\mathbb{Q}),$
where the image of the $K$-theory groups is taken via the Chern character
maps, and this definition is independent of the choice of $\mathfrak{X}$. In
general, using alterations, Scholl defined this subspace in [Sch, Theorem
1.1.6] and showed that this is the unique way to assign subspaces satisfying
various natural properties. The restriction of the Chern class map to the
integral subspace,
$r_{\mathscr{D}}:H^{i}_{M}(M_{\mathcal{O}_{k}},\mathbb{Q}(r))\rightarrow
H_{\mathscr{D}}^{i}(M_{\mathbb{R}},\mathbb{R}(r)),$
is called the Beilinson regulator.
### A.2. Beilinson’s conjecture for Chow and Grothendieck motives
From now on, we assume that $r\geq\frac{i+1}{2}$.\footnote{This is equivalent to the weight of $M$, $i-2r$, being negative. This is not a restriction, as one can always reduce to this case, possibly after using the functional equation.}
Beilinson’s conjecture is formulated using fundamental exact sequences, which
we review. From the definition of Deligne cohomology, for a complex smooth
projective variety $X$, there is a long exact sequence
$\cdots\rightarrow
H^{i-1}_{B}(X,A(r))\rightarrow\frac{H^{i-1}_{B}(X,\mathbb{C})}{F^{r}H_{\operatorname{dR}}^{i-1}(X)}\rightarrow
H_{\mathscr{D}}^{i}(X,A(r))\rightarrow H_{B}^{i}(X,A(r))\rightarrow\cdots.$
As this long exact sequence intertwines the involutions that define the
cohomology of $M_{\mathbb{R}}$, for a Chow motive $M$ defined over a number
field $k$,
$H_{\mathscr{D}}^{i}(M_{\mathbb{R}},\mathbb{R}(r))=\frac{H_{B}^{i-1}(M_{\mathbb{R}},\mathbb{C})}{F^{r}H_{\operatorname{dR}}^{i-1}(M_{\mathbb{R}})+H_{B}^{i-1}(M_{\mathbb{R}},\mathbb{R}(r))}=\frac{H_{B}^{i-1}(M_{\mathbb{R}},\mathbb{R}(r-1))}{F^{r}H_{\operatorname{dR}}^{i-1}(M_{\mathbb{R}})},$
which gives rise to two fundamental exact sequences,
$0\rightarrow F^{r}H_{\operatorname{dR}}^{i-1}(M_{\mathbb{R}})\rightarrow
H_{B}^{i-1}(M_{\mathbb{R}},\mathbb{R}(r-1))\rightarrow
H_{\mathscr{D}}^{i}(M_{\mathbb{R}},\mathbb{R}(r))\rightarrow 0,$ $0\rightarrow
H_{B}^{i-1}(M_{\mathbb{R}},\mathbb{R}(r))\rightarrow\frac{H_{\operatorname{dR}}^{i-1}(M_{\mathbb{R}})}{F^{r}H_{\operatorname{dR}}^{i-1}(M_{\mathbb{R}})}\rightarrow
H_{\mathscr{D}}^{i}(M_{\mathbb{R}},\mathbb{R}(r))\rightarrow 0.$
The first two entries of each of the two fundamental exact sequences above have
natural $\mathbb{Q}$-structures, yielding $\mathbb{Q}$-structures on $\det
H_{\mathscr{D}}^{i}(M_{\mathbb{R}},\mathbb{R}(r))$. Let $\mathscr{R}$ be the
one from the first sequence, and $\mathscr{D}\mathscr{R}$ be the one from the
second exact sequence.
###### Conjecture A.2 (Beilinson’s conjecture).
Suppose either $r>\frac{i}{2}+1$, or $r=\frac{i}{2}+1$ and there is no Tate
cycle, namely
$H_{\operatorname{\acute{e}t}}^{i}(M_{\overline{k}},\mathbb{Q}_{\ell}(i/2))^{\operatorname{Gal}(\overline{k}/k)}=0$.
Then, the following hold.
1. (1)
The Beilinson regulator
$r_{\mathscr{D}}:H_{M}^{i+1}(M_{\mathcal{O}_{k}},\mathbb{Q}(r))\otimes\mathbb{R}\rightarrow
H_{\mathscr{D}}^{i+1}(M_{\mathbb{R}},\mathbb{R}(r)),$
is an isomorphism.
2. (2)
Let $\mathscr{M}$ be the $\mathbb{Q}$-structure defined on $\det
H_{\mathscr{D}}^{i+1}(M_{\mathbb{R}},\mathbb{R}(r))$ via the Beilinson
regulator and the $\mathbb{Q}$-structure of the motivic cohomology. Then,
$\mathscr{M}=L(h^{-i}(M^{\vee}),1-r)^{*}\mathscr{R}=L(h^{i}(M),r)\mathscr{D}\mathscr{R}.$
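To orient the reader, we add the classical first example: for $M=h^{0}(\operatorname{Spec}k)$, $i=0$ and $r\geq 2$, one has
$H_{M}^{1}(M_{\mathcal{O}_{k}},\mathbb{Q}(r))=K_{2r-1}(\mathcal{O}_{k})\otimes\mathbb{Q}=K_{2r-1}(k)\otimes\mathbb{Q},$
and the conjecture amounts to Borel’s theorem: the regulator is an isomorphism, and $\mathscr{M}=\zeta_{k}^{*}(1-r)\mathscr{R}$.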
###### Remark A.3.
The motives we will be working with come from automorphic representations in
the sense of [Cl, §4.3.3], and for that purpose, we would rather like to work
with _Grothendieck motives_ , where the equivalence relation used is numerical
equivalence, which is equivalent to homological equivalence under the Standard
Conjecture D, which we have to assume. Fortunately, [PV, §2.1.9] works with
arbitrary base field, so a similar set of assumptions would naturally lead to
Beilinson’s conjecture for Grothendieck motives.
## Appendix B Deligne cohomology and Lie algebra cohomology
In this appendix, we develop a representation theoretic background parallel to
[PV, §2-§4]. This allows us to correctly formulate the motivic action conjectures,
namely Conjecture 2.13 and Conjecture 5.7.
### B.1. Nondegenerate limits of discrete series as constituents of reducible
principal series
We need to understand what kinds of infinity types can appear in the coherent
cohomology of Shimura varieties, given the finite part and the coefficient.
Fortunately, the $(\mathfrak{p},K)$-cohomology of unitary
$G(\mathbb{R})$-representations is computed by Vogan–Zuckerman. As far as
$G(\mathbb{R})$-representations are concerned, we will only be interested in
discrete series, or more generally nondegenerate limits of discrete series.
For a (L)DS, its $L$-packet consists of all (L)DS with the same infinitesimal
character. Thus each such $L$-packet consists of $|W_{G}|/|W_{K}|$
elements, and in particular, upon choosing a system of positive roots
$\Delta_{K}^{+}$ for $K$, can be indexed by the Weyl chambers which make all
roots in $\Delta_{K}^{+}$ positive. Given an infinitesimal character $\lambda$
and a Weyl chamber $C$, let $A_{C}(\lambda)$ be the corresponding (L)DS. This
is in accordance with the notation of [VZ]. In particular,
$A_{C}(\lambda)=A_{\mathfrak{q}}(\lambda_{C})$ for
$\mathfrak{q}=\mathfrak{k}\oplus\mathfrak{p}_{C}$, where $\mathfrak{p}_{C}$ is
the subalgebra of noncompact roots in $\Delta_{C}^{+}$, the system of positive
roots determined by $C$, and $\lambda_{C}$ is the Weyl conjugate of $\lambda$
that is contained in $C$. By the relatively straightforward nature of
$K$-multiplicities of such representations, we have the following formulae.
###### Proposition B.1.
Given $\lambda$ and $C$ as above, we have the following,
$i_{A_{C}(\lambda)}=\dim_{\mathbb{C}}(\mathfrak{p}_{-}\cap\mathfrak{p}_{C}),$
$V_{A_{C}(\lambda)}=V(\lambda_{C}+2\rho(\mathfrak{p}_{+})),$
the highest weight representation of $K$ with highest weight
$\lambda_{C}+2\rho(\mathfrak{p}_{+})$, where $2\rho(\mathfrak{p}_{+})$ is the
sum of all roots in $\mathfrak{p}_{+}$.
Therefore, if $\lambda$ lies on some wall of the Weyl chambers, then it can
happen that $V_{A_{C}(\lambda)}=V_{A_{C^{\prime}}(\lambda)}$ for different
Weyl chambers $C,C^{\prime}$. This is the setting we will be interested in. In
such cases, we drop the subscript $C$ if there is no issue of confusion.
###### Theorem B.2.
Let $\lambda$ be a nondegenerate analytically integral character. Let
$\mathfrak{C}_{\lambda}=\\{C\text{ Weyl chamber}\mid\lambda\in C\\}$, and
$\mathfrak{P}_{\lambda}=\\{A_{C}(\lambda)\mid C\in\mathfrak{C}_{\lambda}\\}$.
Then, there is a parabolic subgroup $Q\subset G(\mathbb{R})$ and a discrete
series representation $\rho$ of the Levi $M_{Q}$ such that
$I(\lambda):=\operatorname{Ind}_{Q}^{G}\rho=\bigoplus_{\pi\in\mathfrak{P}_{\lambda}}\pi,$
where $\operatorname{Ind}_{Q}^{G}\rho$ is the normalized induction. These
satisfy the following properties.
1. (1)
$Q$ is a parabolic subgroup which is minimal with respect to the property that
the Langlands parameter $\varphi:W_{\mathbb{C}/\mathbb{R}}\rightarrow{}^{L}G$
corresponding to the representations in $\mathfrak{P}_{\lambda}$ can be
arranged so that $\varphi(W_{\mathbb{C}/\mathbb{R}})\subset{}^{L}Q$.
Furthermore, if we denote by $T_{M_{Q}}$ a Cartan subgroup of $M_{Q}$, then
one can further assume that
$\varphi(\mathbb{C}^{\times})\subset{}^{L}T_{M_{Q}}$ and that
$\varphi(W_{\mathbb{C}/\mathbb{R}})$ normalizes ${}^{L}T_{M_{Q}}$.
2. (2)
$|\mathfrak{C}_{\lambda}|$ is a power of $2$, and
$\lambda^{\perp}\subset\mathfrak{g}$ is spanned by a superorthogonal set of
real roots.
3. (3)
The infinitesimal character of $\rho$ is the restriction of $\lambda$.
###### Proof.
We freely use the terminology of [Kn]. The property (3) is clear. The content
of [Kn, §14.15] implies that any NLDS appears as a direct summand of such
principal series with multiplicity one; this is via repeated application of
generalized Schmid identities [Kn, Theorem 14.68]. The relation between $Q$
and the Langlands parameter follows from the discussion before [La, Lemma 1].
Since the parabolic subgroup appears as a Cayley transform of the minimal
parabolic in the sense of [KZ], and one chooses noncompact simple roots for
the Cayley transform, which in fact form a superorthogonal set of roots by
[Kn, Theorem 14.64], based only on the infinitesimal character and not the
chamber, it follows that
$\\{\operatorname{Ind}_{Q}^{G}\widetilde{\rho}\mid\widetilde{\rho}\text{ DS,
infinitesimal character $=$ the restriction of $\lambda$}\\},$
sees every NLDS with infinitesimal character $\lambda$ at least once as its
constituent. The number of constituents is exactly $|W_{G}|/|W_{K}|$, so each
such NLDS appears exactly once. Each $\operatorname{Ind}_{Q}^{G}\rho$ has
constituents $\pi=A_{C}(\lambda)$ such that the representative of $\lambda$
(thought of as a Weyl orbit) in $C$ is a fixed character (namely, the $C$’s
that appear are all adjacent to a single representative of the Weyl orbit of
$\lambda$). This is an equivalence relation, so this implies that $I(\lambda)$
consists precisely of the $A_{C}(\lambda)$ where $C$ contains $\lambda$
(regarded as an actual character). As the $R$-group is a direct sum of copies
of $\mathbb{Z}/2\mathbb{Z}$ by [Kn, §14.15], we get (2). ∎
###### Example B.3.
We explain how Theorem B.2 is realized in some examples. In the figures, red
arrows are the compact roots, so NLDS’s are those lying on a wall not
orthogonal to red arrows.
1. (1)
$\operatorname{SL}_{2}(\mathbb{R})$. Let
$Q\subset\operatorname{SL}_{2}(\mathbb{R})$ be the upper triangular Borel, and
let $\rho=\det/|\det|:Q\rightarrow\\{\pm 1\\}$. Then
$I_{Q}^{\operatorname{SL}_{2}(\mathbb{R})}\rho=D_{0}^{+}\oplus D_{0}^{-}$, the
sum of the two NLDS, holomorphic (“weight $1$”) and anti-holomorphic (“weight
$-1$”).
2. (2)
$\operatorname{Sp}_{4}(\mathbb{R})$ (e.g. [Mu]). There are four types of LDS,
two of them being holomorphic and anti-holomorphic, respectively, and the
other two being large (i.e., maximal Gelfand–Kirillov dimension). We call the
one adjacent to the holomorphic chamber _generic_ and the other one adjacent
to the anti-holomorphic chamber _anti-generic_. Using the notation of [Mu], a
singular infinitesimal character is of one of the forms $(p,0)$, $(0,-p)$ or
$(p,-p)$ with $p\in\mathbb{N}$. The Langlands parameter $\varphi_{\lambda}$
for infinitesimal character $\lambda=(a,b)$ is given by
$\varphi_{\lambda}(re^{i\theta})=\operatorname{diag}(e^{i(a+b)\theta},e^{i(a-b)\theta},e^{-i(a+b)\theta},e^{-i(a-b)\theta}),\quad\varphi_{\lambda}(j)=\begin{pmatrix}0&(-1)^{a+b}I_{2}\\\
I_{2}&0\end{pmatrix}.$
* •
$\lambda=(p,0)$ lies on the wall between the holomorphic chamber and the
generic chamber. Then, $Q$ is the so-called _Klingen parabolic_ , whose Levi
is $\operatorname{GL}_{1}(\mathbb{R})\times\operatorname{SL}_{2}(\mathbb{R})$,
and $\rho$ is the (trivial extension of the) holomorphic DS $D_{p}^{+}$ of
$\operatorname{SL}_{2}(\mathbb{R})$ of infinitesimal character $p$ (or,
equivalently, weight $p+1$).
* •
$\lambda=(p,-p)$ lies on the wall between the generic chamber and the anti-
generic chamber. Then, $Q$ is the so-called _Siegel parabolic_ , whose Levi is
$\operatorname{GL}_{2}(\mathbb{R})$, and $\rho$ is the DS $D_{2p}$ of
$\operatorname{GL}_{2}(\mathbb{R})$ with central character $2p$ (which, as an
$\operatorname{SL}_{2}(\mathbb{R})$-representation, is the same as
$D_{2p}^{+}\oplus D_{2p}^{-}$).
* •
$\lambda=(0,-p)$ lies on the wall between the anti-generic chamber and the
anti-holomorphic chamber. This situation is complex-conjugate to the situation
of $\lambda=(p,0)$. Thus, $Q$ is again the Klingen parabolic, but $\rho$ is
the anti-holomorphic DS of the same infinitesimal character.
3. (3)
$\operatorname{U}(2,1)$ (e.g. [Wa], [Ro, §12]). There are three types of LDS,
_holomorphic_ , _generic_ and _anti-holomorphic_. There are two types of
NLDS: those lying on the wall between the holomorphic chamber and the
generic chamber, and those lying on the wall between the generic chamber and
the anti-holomorphic chamber. The Langlands parameters for (L)DS are of the
form
$\varphi(z)=\operatorname{diag}((z/\overline{z})^{a},(z/\overline{z})^{b},(z/\overline{z})^{c}),\quad\varphi(j)=\begin{pmatrix}&&1\\\
&-1\\\ 1\end{pmatrix},$
for $a,b,c\in\mathbb{Z}$, and the parameter only depends on the unordered set
$\\{a,b,c\\}$. Ordering $a\geq b\geq c$, the two types of NLDS can occur for
$a=b>c$ and $a>b=c$.
* •
$a=b>c$ lies on the wall between the holomorphic chamber and the generic
chamber. Then, $Q$ is the upper-triangular Borel (when $\operatorname{U}(2,1)$
is seen as the unitary group for the diagonal Hermitian matrix such as
$\operatorname{diag}(1,1,-1)$), whose Levi is $\mathbb{C}^{\times}\times
S^{1}$, and $\rho$ is the character
$\operatorname{diag}(\alpha,\beta,\overline{\alpha}^{-1})\mapsto\alpha^{a}\beta^{b}\overline{\alpha}^{-c}$
for $\alpha\in\mathbb{C}^{\times}$, $\beta\in S^{1}$.
* •
$a>b=c$ lies on the wall between the generic chamber and the anti-holomorphic
chamber. The situation is completely symmetric to the previous case.
### B.2. Action of the $\operatorname{Ext}$-space
We can now build an archimedean realization of motivic action from abstract
nonsense. Recall that for a $(\mathfrak{p},K)$-module $M$,
$H^{i}(\mathfrak{p},K;M)$ can be regarded as
$\operatorname{Ext}_{(\mathfrak{p},K)}^{i}(\mathbf{1},M)$, the $i$-th
$\operatorname{Ext}$ group in the category of $(\mathfrak{p},K)$-modules,
where $\mathbf{1}$ is the trivial module. Thus, there is a natural action
$\operatorname{Ext}_{(\mathfrak{p},K)}^{m}(M,N)\times\operatorname{Ext}_{(\mathfrak{p},K)}^{n}(\mathbf{1},M)\rightarrow\operatorname{Ext}^{m+n}_{(\mathfrak{p},K)}(\mathbf{1},N).$
In particular, if $\lambda$ is a nondegenerate singular character as above,
with $\pi_{1},\pi_{2}\in\mathfrak{P}_{\lambda}$ and
$i_{\pi_{1}}<i_{\pi_{2}}$, there is a natural action
$\operatorname{Ext}_{(\mathfrak{p},K)}^{i_{\pi_{2}}-i_{\pi_{1}}}(\pi_{1},\pi_{2})\times H^{i_{\pi_{1}}}(\mathfrak{p},K;\pi_{1})\rightarrow H^{i_{\pi_{2}}}(\mathfrak{p},K;\pi_{2}).$
# A Survey on Recent Advances in LLM-Based Multi-turn Dialogue Systems
Zihao Yi, Jiarui Ouyang, Yuwen Liu, Tianhao Liao, Zhe Xu, and Ying Shen
Sun Yat-Sen University, Shenzhen, China
###### Abstract.
This survey provides a comprehensive review of research on multi-turn dialogue
systems, with a particular focus on multi-turn dialogue systems based on large
language models (LLMs). This paper aims to (a) give a summary of existing LLMs
and approaches for adapting LLMs to downstream tasks; (b) elaborate recent
advances in multi-turn dialogue systems, covering both LLM-based open-domain
dialogue (ODD) and task-oriented dialogue (TOD) systems, along with datasets
and evaluation metrics; (c) discuss some future emphasis and recent research
problems arising from the development of LLMs and the increasing demands on
multi-turn dialogue systems.
large language models, fine-tuning, prompt engineering, task-oriented dialogue
systems, open-domain dialogue systems
## 1\. INTRODUCTION
### 1.1. What is Multi-turn Dialogue System?
Multi-turn dialogue systems that can generate natural and meaningful responses
to communicate with humans are a long-term goal of artificial intelligence
(AI). Such human-computer interaction tasks have attracted increasing attention
from academia and industry due to their potential impact and attractive
commercial value. The multi-turn dialogue task can be regarded as a sequence-
to-sequence task, which generates the system response
$\mathcal{S}=(s_{1},s_{2},...s_{t})$ from the user message
$\mathcal{U}=(u_{1},u_{2},...u_{t})$, where $u_{t}$ and $s_{t}$ are the user
message and system response in the $t$-th round, respectively.
Multi-turn dialogue systems can be divided into TOD systems and ODD systems.
TOD systems assist users in addressing tasks within a specific domain such as
hotel booking, restaurant recommendation, etc., while ODD systems chat with
users without domain restrictions. TOD tasks and ODD tasks are not entirely
independent; an ODD task can be converted into a TOD task once the dialogue
system detects a specific user requirement.
Conventional dialogue systems primarily rely on rule-based approaches and
retrieval-based methods. Rule-based dialogue systems (weizenbaum1966eliza, ;
colby1971artificial, ; goddeau1996form, ) generate responses by predefining
conversation flows for specific scenarios. Retrieval-based dialogue systems
(wu2016sequential, ; zhao2016towards, ; ma2019triplenet, ) rely on predefined
templates, making them more flexible than rule-based systems. However, the
application scope of retrieval-based dialogue systems remains constrained, as
the generated responses are based on predefined templates. With the development
of deep-learning methods, many multi-turn dialogue systems
(serban2016building, ; he2020amalgamating, ; qiu2019training, ) based on deep
neural networks have been proposed. Recently, the performance of multi-turn
dialogue systems has been significantly enhanced by the emergence of
pre-trained LLMs.
### 1.2. Why a Survey on LLM-based Multi-turn Dialogue System?
Arora et al. (arora2013dialogue, ) provided an overview of dialogue systems
and introduced various dialogue system frameworks. However, this survey
treated the dialogue system as a generic system rather than categorizing it
into TOD and ODD systems and did not encompass deep learning models. Chen et
al. (chen2017survey, ) categorized dialogue systems into TOD and ODD systems,
discussing the application of deep learning techniques in both types of
dialogue systems. Nevertheless, this survey did not delve into multi-turn
dialogue systems based on pre-trained LLMs. The review written by Ni et al.
(ni2023recent, ) covered multi-turn dialogue systems based on pre-trained
LLMs, but this study did not provide detailed insights into LLMs and methods
for adapting them to downstream sub-tasks. In contrast, Qin et al.
(qin2023end, ) provided a more comprehensive exploration of the application of
pre-trained LLMs in specific-target dialogue scenarios. However, the focus of
this paper primarily centered on end-to-end task-oriented multi-turn dialogue
systems.
Our paper aims to provide a state-of-the-art overview of LLM-based multi-turn
dialogue systems, before which we will provide a comprehensive exposition of
existing pre-trained LLMs and methodologies employed to adapt these models for
downstream tasks. This investigation is anticipated to appeal to broad
audiences in both academia and industry, encompassing researchers and
practitioners alike.
### 1.3. Contribution of this Survey
In this paper, we provide a comprehensive review of methods, evaluation
metrics and datasets for LLM-based multi-turn dialogue. The contribution of
our paper can be summarized as follows:
(1) To give a thorough review of LLMs and methods adapting LLMs to different
subtasks, as well as up-to-date LLM-based multi-turn dialogue systems;
(2) To provide a detailed exposition on state-of-the-art multi-turn dialogue
datasets and evaluation metrics;
(3) To discuss some future emphasis and recent research problems arising from
the increasing demands on dialogue systems and the development of LLMs.
The rest of this survey is organized as follows. In Sec.2, we provide a
detailed exposition of prevalent LLMs. In Sec.3 and Sec.4, we give a
comprehensive introduction to methods for adapting LLMs to downstream tasks.
In Sec.5, we present important methods of TOD, including pipeline-based
methods and end-to-end methods. State-of-the-art methods of ODD are presented
in Sec.6. In Sec.7 and Sec.8, we introduce some relevant datasets and
evaluation metrics. Besides, some problems and challenges of LLM-based multi-
turn dialogue are discussed in Sec.9. Finally, we conclude our survey in
Sec.10.
## 2\. GENERAL METHODS
LLMs are a class of extensive artificial intelligence models characterized by
their massive scale with billions of parameters (kaplan2020scaling, ). Scaling
up LLMs allows them to learn more intricate and accurate language
representations, resulting in improved performance across diverse downstream
Natural Language Processing (NLP) tasks, particularly excelling in Natural
Language Generation (NLG) challenges (wei2022emergent, ; qiu2020pre, ). The
brief comparison of different structures of the LLMs mentioned can be seen in
Table 1.
The vanilla Transformer architecture (vaswani2017attention, ), a sequence-to-
sequence model, has emerged as a foundational framework for diverse LLMs,
utilizing encoders and decoders with self-attention mechanisms as its core
components, thanks to its exceptional parallelism and capacity. Based on the
masking methods utilized by various attention mechanisms in the model, the
current LLMs can be divided into three categories, i.e., Encoder-Decoder,
Decoder-only, and Encoder-only. The decoder-only category further includes
distinctions such as causal decoders and prefix decoders, illustrated in
Figure 1.
In the following subsection, we shall introduce different types of LLMs based
on various Transformer architectures.
Table 1. The Comparison of Different Model Structures
Model | Model Name | Causal Decoder | Prefix Decoder | Encoder | Attention Mechanisms
---|---|---|---|---|---
GPT series | GPT-1 | ✓ | - | - | Masked unidirectional multi-head self-attention
 | GPT-2 | ✓ | - | - | Masked unidirectional multi-head self-attention
 | GPT-3 | ✓ | - | - | Sparse unidirectional attention (Factorized attention)
 | GPT-3.5 | ✓ | - | - | Sparse unidirectional attention (Factorized attention)
 | GPT-4 | ✓ | - | - | Multi-query unidirectional attention
LLaMA series | LLaMA | ✓ | - | - | Causal unidirectional multi-head attention
 | LLaMA2 | ✓ | - | - | Grouped-query unidirectional attention
GLM series | GLM | - | ✓ | - | Bidirectional self-attention
BERT series | BERT | - | - | ✓ | Bidirectional self-attention
UNILM series | UNILM | - | - | ✓ | Bidirectional self-attention
BART series | BART | ✓ | - | ✓ | Masked multi-head self-attention & Cross-attention between encoder and decoder
T5 series | T5 | ✓ | - | ✓ | Masked multi-head self-attention & Cross-attention between encoder and decoder
Figure 1. The matrix comparison of attention mask patterns between decoder-
only and encoder-decoder architectures. The matrix uses dark cells to allow
for self-attention of input elements $j$ at the output time step $i$, while
light cells restrict this attention. The left panel represents the full input
attention, the middle panel refers to preventing future input reliance, and
the right panel combines causal masking with a prefix for partial input
sequence fully-visible masking. (raffel2020exploring, )
### 2.1. Decoder-only Transformer Architecture
The decoder-only model (raffel2020exploring, ), functioning independently
without an encoder, can act as a language model designed primarily for next-
step prediction (liu2018generating, ; radford2018improving, ; al2019character,
). During the training of language models, the decoder-only model takes on the
task of generating the target sequence.
#### 2.1.1. Causal Decoder
The causal decoder architecture employs unidirectional attention masking to
ensure that each input token can only attend to past tokens and itself. Both
input and output tokens are processed in a similar manner within the decoder.
The schematic for this architecture is depicted in the middle panel of Figure
1.
##### GPT Series
The Generative Pre-trained Transformer (GPT) model has garnered significant
attention and interest in the field of NLP (ye2023comprehensive, ), and the
technical evolution of the GPT series models is illustrated in Figure 2.
Positioned as a cutting-edge technology built upon the Transformer
architecture, the versatility and robust performance of the GPT model
establish it as a universal solution for various NLP tasks.
Figure 2. A brief illustration for the technical evolution of GPT-series
models. We created this flowchart primarily relying on information from
research papers, blog articles, and official APIs provided by OpenAI. The
solid line represents explicit evidence of the evolutionary path between the
two models (e.g., the official statement that a new model is developed based
on a base model), while the dashed line denotes a relatively weaker
evolutionary relationship.
###### GPT-1.
Proposed by the OpenAI team, GPT-1 (radford2018improving, ), short for
Generative Pre-trained Transformer 1 (vaswani2017attention, ), serves as the
foundational model for the GPT-series, establishing the key architecture and
the fundamental principle of modeling natural language text, specifically
predicting the next word. GPT-1 adopts a decoder-only Transformer
architecture, implementing a semi-supervised approach for language
understanding through a combination of unsupervised pre-training and
supervised fine-tuning.
###### GPT-2.
GPT-2 (radford2019language, ), is an extension of the GPT-1 architecture
(radford2018improving, ), where the parameter scale is increased to 1.5
billion. Trained on the extensive WebText dataset, GPT-2 aims to perform
diverse tasks through unsupervised language modeling, eliminating the need for
explicit fine-tuning with labeled data. GPT-2 utilizes a probabilistic form
for multitask solving, denoted as $p(output|input,task)$, predicting the
output based on input and task information. This approach is reminiscent of
similar methods found in (mccann2018natural, ). Leveraging a multi-layer self-
attentive mechanism, GPT-2 achieves fully-connected cross-attention to the
entire context in a computationally efficient manner.
###### GPT-3.
GPT-3 (brown2020language, ) employs attention mechanisms, allowing the model
to selectively focus on segments of input text it deems most relevant and
adopts an autoregressive approach, leveraging the transformative power of the
Transformer architecture.
###### GPT-3.5.
The GPT-3.5 models, also known as InstructGPT (ouyang2022training, ), are
developed based on code-davinci-002, underscoring the efficacy of training
on code data to improve the reasoning capacity of GPT models (zhao2023survey,
). InstructGPT is a language model fine-tuned on GPT-3 (brown2020language, )
using a combination of supervised learning and reinforcement learning based on
human feedback (RLHF). GPT-3.5 is utilized to develop the chatbot product
ChatGPT (openai_chatgpt, ; lock2022ai, ). The gpt-3.5-turbo is the most
capable and cost-effective GPT-3.5 model. GPT-3.5 shows improvements in
maintaining truthfulness and reducing toxic outputs, demonstrating that fine-
tuning with human feedback is an effective approach to enhance the alignment
of language models with human intent.
###### GPT-4.
GPT-4 (openai2023gpt4, ) is a multimodal LLM, accepting both image and text
inputs and producing text outputs. Despite being less capable than humans in
various real-world scenarios, GPT-4 showcases human-level performance across a
range of professional and academic benchmarks. Notably, GPT-4 has improved
calibration compared to GPT-3.5 (ouyang2022training, ), exhibiting enhanced
accuracy in predicting the correctness of its answers. GPT-4’s versatility
extends to ChatGPT (openai_chatgpt, ), where it can process images as inputs.
###### ChatGPT.
ChatGPT (openai_chatgpt, ), short for Chat Generative Pre-trained Transformer,
represents a chatbot innovation. Leveraging a robust LLM, it empowers users to
shape and guide conversations according to specific criteria such as length,
format, style, level of detail, and language (lock2022chatgpt, ). ChatGPT is
built upon either GPT-3.5 (ouyang2022training, ) or GPT-4 (openai2023gpt4, ),
and is fine-tuned for conversational applications using a combination of
supervised learning and reinforcement learning (gertner2023wikipedia, ).
ChatGPT Plus (chatgpt-openai, ) is a version of ChatGPT backed by GPT-4. Users
of ChatGPT Plus have the ability to upload images, and mobile app users can
engage in conversations with the chatbot (roose2023chatgpt, ).
###### GPTs.
GPTs (openaigpts, ) represent an innovative feature enabling users to
customize versions of ChatGPT (openai_chatgpt, ) for specific purposes. These
personalized GPTs assist users in enhancing daily efficiency, optimizing
performance in specific tasks, and simplifying work or home activities. Users
can share their creations with others to promote the utility of these purpose-
built GPTs. GPTs prioritize privacy and safety by ensuring user control over
data within the ChatGPT environment. Interactions with GPTs remain
confidential, with no sharing of user conversations with the creators.
##### LLAMA Series
LLaMA (touvron2023llama, ), short for Large Language Model Meta AI, is a
family of LLMs, released by Meta AI (llama2023, ). Due to its openness and
effectiveness, LLaMA has attracted significant attention from the research
community, and many efforts have been devoted to fine-tuning or continually
pre-training its different model versions for implementing new models or tools.
The technical evolution of the LLaMA series models is illustrated in Figure 3.
Figure 3. A brief illustration for LLaMA series.
###### LLAMA.
LLaMA (touvron2023llama, ), a collection of foundation language models ranging
from 7B to 65B parameters, is trained on trillions of tokens and shows that it
is possible to train state-of-the-art models using publicly available datasets
exclusively, without resorting to proprietary and inaccessible datasets. In
particular, LLaMA-13B outperforms GPT-3 (brown2020language, ) (175B) on most
benchmarks, and LLaMA-65B is competitive with the best models at that time,
Chinchilla-70B (hoffmann2022training, ) and PaLM-540B (rae2021scaling, ).
###### LLAMA2.
LLaMA2 (llama2announcement, ), the next iteration in the LLaMA series, has
been unveiled through a collaborative effort between Meta and Microsoft. Llama
2 is a collection of pre-trained and fine-tuned LLMs ranging in scale from 7
billion to 70 billion parameters. The Llama 2 pre-trained models, trained on 2
trillion tokens, feature double the context length compared to Llama. The
release of LLaMA2 includes model weights and starting code for pre-trained and
fine-tuned LLaMA language models (Llama Chat, Code Llama).
###### LLAMA2 CHAT.
LLaMA2 includes both foundational models and models fine-tuned for dialog,
called LLaMA 2-Chat (touvron2023llama2, ). The Llama 2-Chat models are
optimized for dialogue use cases and outperform open-source chat models on most
benchmarks tested, and, based on human evaluations for helpfulness and safety,
may be a suitable substitute for closed-source models.
###### CODE LLAMA.
Code Llama (rozière2023code, ) comes in three versions with different
parameter counts, i.e., 7 billion, 13 billion, and 34 billion parameters. To
train the base model, the weights are first initialized from the Llama 2 model
(llama2announcement, ) with the same number of parameters, and the model is
then trained on a code dataset of 500 billion tokens. Meta has also fine-tuned
the trained base model in two different styles: a Python specialist version
(trained on 100 billion additional tokens) and an instruction fine-tuned
version which can understand natural language instructions.
#### 2.1.2. Prefix Decoder
The prefix decoder structure (raffel2020exploring, ) modifies the causal
decoder’s masking mechanism to enable bidirectional attention on prefix tokens
while maintaining unidirectional attention on generated tokens. Similar to the
encoder-decoder paradigm, this allows bidirectional encoding of prefix
sequences and auto-regressive generation of output tokens, sharing the same
parameters during both encoding and decoding phases. Prominent large-scale
models utilizing the prefix decoder architecture include U-PaLM and GLM-130B.
The schematic for this architecture is
depicted in the right panel of Figure 1.
##### GLM
GLM (du2021glm, ), short for General Language Model, is a comprehensive pre-
training framework. The technical evolution of the GLM series models is
illustrated in Figure 4. The architecture modifications in GLM include
rearranging the order of layer normalization and the residual connection,
using a single linear layer for output token prediction, and replacing ReLU
activation functions with GeLUs. GLM unifies pre-training objectives for
various tasks under the common framework of autoregressive blank infilling,
employing mixed attention masks and novel 2D position encodings.
Figure 4. A brief illustration for GLM series.
##### ChatGLM series
ChatGLM (zeng2022glm, ) is a bilingual model incorporating question-answering,
multi-turn dialogue, and code generation. Built upon GLM-130B (zeng2022glm, ),
ChatGLM follows the design principles of ChatGPT (openai_chatgpt, ).
###### ChatGLM-6B.
ChatGLM-6B (zeng2022glm, ), as the inaugural ChatGLM conversational model,
builds upon the training insights from GLM-130B. It resolves issues in the
implementation of 2D RoPE positional encoding and adopts a conventional
feed-forward network (FFN) structure.
###### ChatGLM2-6B.
ChatGLM2-6B (thudm2022chatglm2-6b, ) incorporates a blend of target functions
within the GLM framework, undergoing pre-training with 1.4 trillion Chinese
and English tokens and aligning with human preferences. Leveraging
FlashAttention technology, ChatGLM2-6B extends the base model’s context length
from 2K (ChatGLM-6B) to 32K, employing an 8K context length during dialogue
training. With the integration of Multi-Query Attention technology,
ChatGLM2-6B achieves more efficient inference speed and lower GPU memory
usage, boasting a 42% improvement in inference speed compared to its
predecessor in the official model implementation.
###### ChatGLM3-6B.
ChatGLM3-6B (THUDM2023chatglm3, ) introduces a newly designed Prompt format,
accommodating normal multi-turn conversations as well as native support for
complex scenarios such as tool invocation (Function Call), code execution
(Code Interpreter), and Agent tasks.
### 2.2. Encoder-only Transformer Architecture
Unlike decoder-only and encoder-decoder LLMs that utilize autoregressive
generation, encoder-only LLMs emphasize comprehension of input content and
generation of task-specific outputs.
#### 2.2.1. BERT Series
BERT (devlin2018bert, ), short for Bidirectional Encoder Representations from
Transformers, is a language model based on the transformer architecture, known
for its significant improvement over previous state-of-the-art models. The
brief illustration for BERT series is shown in Figure 5.
Figure 5. A brief illustration for BERT series.
##### BERT
BERT (devlin2018bert, ), namely a bidirectional Transformer-based encoder, is
an LLM introduced by the Google AI Language team (bertopensourcing, ). It
incorporates masked language modeling, allowing pre-training to capture
interactions between left and right context words. Recent advancements, such
as extended training duration, parameter tying across layers, and span masking
instead of individual words, have demonstrated improved performance. Notably,
because BERT’s predictions are not auto-regressive, its effectiveness for
generation tasks is limited.
##### RoBERTa
RoBERTa (liu2019roberta, ), namely the Robustly optimized BERT approach,
improves upon BERT (devlin2018bert, ) through straightforward modifications,
such as longer training with larger batches, dynamic changes to the masking
pattern, and a more extensive byte-level Byte Pair Encoding (BPE) vocabulary.
In particular, RoBERTa streamlines its training process by omitting the Next
Sentence Prediction (NSP) task and focusing solely on optimizing the Masked
Language Model (MLM) task. This approach enhances the model’s ability to learn
bidirectional contextual information in language.
#### 2.2.2. UNILM
UNILM (dong2019unified, ) is a unified pre-training model jointly optimized
for multiple language modeling objectives with shared parameters, covering
bidirectional, unidirectional, and sequence-to-sequence language models. The
model undergoes pre-training using three types of language modeling tasks:
unidirectional, bidirectional, and sequence-to-sequence prediction. Unified
modeling is achieved through a shared Transformer network, incorporating
specific self-attention masks to control the contextual conditions for
predictions.
### 2.3. Encoder-decoder Transformer Architecture
The traditional Transformer model is built on an encoder-decoder architecture
(raffel2020exploring, ), consisting of two Transformer blocks serving as the
encoder and decoder, respectively. The encoder utilizes stacked multi-head
self-attention layers to encode the input sequence and generate its latent
representation. Meanwhile, the decoder performs cross-attention on these
representations and autoregressively generates the target sequence. The
schematic for this architecture is depicted in the left panel of Figure 1.
#### 2.3.1. BART
BART (lewis2019bart, ) is a denoising autoencoder designed for pre-training
sequence-to-sequence models, combining Bidirectional and Auto-Regressive
Transformers. It employs a sequence-to-sequence model applicable to diverse
end tasks, utilizing a two-stage pre-training process involving text
corruption and reconstruction. BART’s architecture, based on the standard
Transformer for neural machine translation, can be seen as a generalization of
BERT, GPT, and other recent pre-training schemes.
#### 2.3.2. Text-to-Text Transfer Transformer
T5 (raffel2020exploring, ), known as the “Text-to-Text Transfer Transformer,”
adopts the Transformer encoder-decoder structure, treating each text
processing challenge as a “text-to-text” task: generating new text from given
input. This framework ensures consistent application of the same model,
objective, training procedure, and decoding process across all tasks. The
model employs an unsupervised span-corruption objective and a multi-task pre-
training strategy. T5 establishes a unified model framework for various NLP
tasks, simplifying the evaluation of different structures, pre-training
objectives, and datasets across reading comprehension, summarization, and text
classification tasks.
## 3\. Fine-Tuning
### 3.1. Full Fine-Tuning
Full Fine-Tuning (FFT) is a cornerstone technique in the domain of neural
network adaptation. It involves optimizing all model parameters to integrate
task-specific knowledge into the foundational architecture acquired during
pre-training. FFT is instrumental in tailoring these models for specialized
applications, ranging from nuanced language understanding to domain-specific
tasks.
FFT is based on optimizing a pre-trained model’s parameters. For a model $M$
characterized by parameters $\theta$, and a dataset $D$ designed for a
specific task, the objective of FFT is to find the optimal parameter
configuration $\theta^{*}$ that minimizes a defined loss function
$\mathcal{L}$. This can be formally expressed as:
(1) $\theta^{*}=\underset{\theta}{\mathrm{argmin}}\ \mathcal{L}(M(\theta),D),$
where $\mathcal{L}(M(\theta),D)$ represents the loss function that quantifies
the disparity between the model’s predictions and the true outcomes in $D$.
Selecting a pre-trained model that has learned from a wide variety of data
begins the process. The next step involves preparing a dataset tailored to the
specific task at hand. The core of FFT lies in thoroughly optimizing all model
parameters to reduce the loss associated with the task. Additionally, the
process includes enhancing the model’s learning through techniques such as
data augmentation, advanced regularization, and optimizing the learning rate.
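To make this concrete, the following is a minimal PyTorch-style sketch of the FFT loop implied by Eq. 1; `model`, `train_loader`, `loss_fn`, and the hyperparameters are illustrative placeholders rather than a prescribed recipe.

```python
import torch

def full_fine_tune(model, train_loader, loss_fn, epochs=3, lr=2e-5):
    # In FFT, every parameter of the pre-trained model is trainable.
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for inputs, targets in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)  # L(M(theta), D) on one batch
            loss.backward()                         # gradients for all parameters
            optimizer.step()
    return model
```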
FFT is valued for its capacity to incorporate detailed, task-specific features
into a model, thereby boosting its accuracy and effectiveness. This approach
has contributed to significant advancements in fields like natural language
processing, computer vision, and predictive analytics.
Despite FFT’s effectiveness in deeply adapting models, the development of
neural network technologies has led to more focused fine-tuning methods that
conserve resources. Parameter-Efficient Fine-Tuning (PEFT) methods, for
example, adjust only a portion of the parameters, achieving a balance between
thorough adaptation and computational demand, making model adaptation feasible
even with limited resources.
### 3.2. Parameter-efficient Fine-Tuning
Parameter-Efficient Fine-Tuning (PEFT) methods have gained popularity due to
their ability to fine-tune pre-trained models without altering all the model
parameters (houlsby2019parameter, ; pfeiffer2020Adapter, ; liu2021ptuning, ;
hu2021lora, ). This section provides an overview of several PEFT techniques
that have been developed, highlighting their key concepts and contributions to
the field.
#### 3.2.1. Adapters
Adapters have emerged as an innovative approach within the domain of
Parameter-Efficient Fine-Tuning, particularly for adapting large pre-trained
models to specific tasks. Initially conceptualized by Houlsby et al.
(houlsby2019parameter, ), Adapters are strategically inserted between the
layers of a pre-trained model, allowing for the original model parameters to
remain unaltered while the adapters learn the nuances of the task-specific
features.
The architecture of an Adapter is characterized by a down-projection layer, a
non-linear activation function, and an up-projection layer. The down-
projection layer compresses the input to a lower dimension, the activation
function introduces non-linearity to enable complex mappings, and the up-
projection layer expands the transformed representation back to the original
dimensionality. It can be mathematically depicted as:
(2)
$\text{Adapter}(\mathbf{x})=\mathbf{U}(\text{Activation}(\mathbf{D}\mathbf{x}+\mathbf{b}_{d}))+\mathbf{b}_{u},$
where $\mathbf{x}$ is the input, $\mathbf{D}$ and $\mathbf{U}$ denote the
down-projection and up-projection matrices, and $\mathbf{b}_{d}$ and
$\mathbf{b}_{u}$ are their corresponding bias vectors.
Adapters are integrated into pre-trained models by inserting them after the
feedforward networks of each layer. The output of a layer, after incorporating
an Adapter, is a summation of the original layer output and the Adapter’s
processed output. This is represented as:
(3)
$\text{Layer}_{\text{output}}^{\text{mod}}=\text{Layer}_{\text{output}}+\text{Adapter}(\text{Layer}_{\text{output}}),$
where $\text{Layer}_{\text{output}}$ is the initial output of a layer in the
model.
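As a minimal PyTorch sketch of Eqs. 2 and 3, the bottleneck module below inserts a trainable residual transformation; the hidden and bottleneck dimensions, and the choice of GELU, are illustrative assumptions rather than values from the original paper.

```python
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_dim=768, bottleneck_dim=64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # D x + b_d
        self.activation = nn.GELU()                        # non-linearity
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # U (.) + b_u

    def forward(self, x):
        # Residual insertion of Eq. 3: layer output plus adapter output.
        return x + self.up(self.activation(self.down(x)))
```

During training, only the adapter parameters would be passed to the optimizer, keeping the backbone frozen.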
In the Adapter framework, the training phase is exclusively focused on the
parameters of the Adapters, with the remaining model parameters being kept
static. This selective training targets the minimization of a task-specific
loss function $\mathcal{L}_{\text{task}}$, formulated as:
(4)
$\theta_{\text{adapter}}^{*}=\underset{\theta_{\text{adapter}}}{\mathrm{argmin}}\
\mathcal{L}_{\text{task}}(M_{\text{adapter}}(\theta_{\text{adapter}}),D_{\text{task}}),$
with $\theta_{\text{adapter}}$ representing the Adapter parameters,
$M_{\text{adapter}}$ the model including Adapters, and $D_{\text{task}}$ the
specific dataset for the task.
Adapters offer distinct advantages in fine-tuning scenarios. They require
training significantly fewer parameters compared to full model fine-tuning,
leading to a more resource-efficient training process. Additionally, adapters’
modular nature allows for easy insertion and removal from models, enabling
swift adaptation to various tasks. Importantly, by keeping the original model
parameters frozen, Adapters preserve the foundational knowledge and
representations learned during pre-training, ensuring that the integrity and
robustness of the pre-trained model are maintained.
#### 3.2.2. LoRA
LoRA (Low-Rank Adaptation) (hu2021lora, ) is a parameter-efficient fine-tuning
method that modifies a pre-trained model by introducing low-rank updates to
specific weight matrices. It allows for significant changes in the model’s
behavior while only training a small number of additional parameters. The idea
behind LoRA is to update the weights of the model using low-rank matrices,
which significantly reduces the number of parameters to be fine-tuned. For a
weight matrix $\mathbf{W}\in\mathbb{R}^{m\times n}$, the low-rank update is
given by:
(5) $\Delta\mathbf{W}=\mathbf{B}\mathbf{A},$
where $\mathbf{B}\in\mathbb{R}^{m\times r}$ and
$\mathbf{A}\in\mathbb{R}^{r\times n}$ are the low-rank matrices, and $r$ is
the rank which is much smaller than $m$ and $n$.
In practice, LoRA is applied to specific layers of a neural network, such as
the attention and feedforward layers in transformer models
(vaswani2017attention, ). The updated weight matrix is:
(6) $\mathbf{W}^{\prime}=\mathbf{W}+\Delta\mathbf{W},$
where $\mathbf{W}^{\prime}$ is the new weight matrix used during fine-tuning
and inference.
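A minimal PyTorch sketch of a LoRA-augmented linear layer (Eqs. 5 and 6) follows; the rank, scaling factor, and initialization are illustrative choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pre-trained weight W
        # Low-rank factors: B (m x r) zero-initialized, A (r x n) small random.
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # W' x = W x + (B A) x; only A and B receive gradients.
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```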
LoRA offers several advantages: By updating only a small number of parameters,
LoRA reduces computational and memory requirements. It can be applied to
various layers of a network, allowing for targeted modifications. Since the
original weights are not discarded, LoRA maintains the rich representations
learned during pre-training.
Several extensions of LoRA have been proposed to further enhance its
efficiency. For instance, Quantized LoRA (QLoRA) (dettmers2023qlora, ) is an
advancement in the efficient fine-tuning. It combines the principles of LoRA
with 4-bit quantization to reduce the memory footprint significantly. QLoRA
enables the fine-tuning of extremely large models on limited hardware
resources while maintaining task performance.
#### 3.2.3. Instruction Fine-Tuning
Instruction Fine-Tuning (IFT) (wei2021finetuned, ) is an approach to enhance
the capabilities of PLMs by leveraging task-specific instructions. This
technique adapts PLMs to better understand and execute instructions, thus
improving their performance on a wide range of tasks.
The core idea of IFT involves fine-tuning a pre-trained LM on a dataset where
each data point includes a specific instruction and its associated input-
output pair. The goal is to enable the LM to comprehend and follow the
instructions for generating the desired output.
IFT is particularly beneficial for tasks requiring nuanced understanding and
execution of complex instructions. By preserving the original model
architecture, it offers an efficient way to extend the applicability of PLMs
to new tasks and domains without extensive architectural modifications.
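For illustration, one instruction-tuning data point and a simple prompt template might look as follows; the field names and template are a common convention, not a fixed standard.

```python
# One instruction-tuning example: an instruction, an optional input,
# and the desired output.
example = {
    "instruction": "Classify the sentiment of the sentence as positive or negative.",
    "input": "The food was wonderful and the staff were friendly.",
    "output": "positive",
}

# A simple template that turns the example into a training prompt.
prompt = (
    f"Instruction: {example['instruction']}\n"
    f"Input: {example['input']}\n"
    f"Response: {example['output']}"
)
```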
### 3.3. Mitigating Fine-Tuning Instabilities
Fine-tuning pre-trained models, particularly in deep learning applications,
often encounters instabilities leading to suboptimal performance
(zhang2020revisiting, ; mosbach2021on, ). Instabilities manifest as
erratic loss landscapes, difficulty in convergence, and sensitivity to
hyperparameter settings. A common issue is representational collapse, where
the model’s representations become less expressive over the course of fine-
tuning (aghajanyan2020better, ). Researchers have developed various strategies
to stabilize the fine-tuning process and enhance the robustness of fine-tuned
models (dodge2020fine, ; mosbach2021on, ).
Aghajanyan et al. (aghajanyan2020better, ) present methods to mitigate
representational collapse during fine-tuning of pre-trained models. They
introduce Robust Representations through Regularized Finetuning (R3F) and its
reparameterized variant (R4F), with the former adding regularization terms to
the loss function and the latter extending R3F by introducing
reparameterization. The approaches are formulated as:
(7) R3F Loss:
$\displaystyle\mathcal{L}_{\text{R3F}}=\mathcal{L}_{\text{original}}+\lambda\cdot\mathcal{R}(\theta),$
(8) R4F Loss:
$\displaystyle\mathcal{L}_{\text{R4F}}=\mathcal{L}_{\text{original}}+\lambda\cdot(\mathcal{R}(\theta)+\mathcal{R}(\text{Reparam}(\theta))),$
where $\mathcal{L}_{\text{original}}$ is the original loss function, $\lambda$
is the regularization strength, $\mathcal{R}$ is the regularization term, and
Reparam is the reparameterization function.
Jiang et al. (jiang2019smart, ) propose SMART (Smoothness-inducing
Adversarial Regularization for Multitask Training) to enhance fine-tuning of
pre-trained models through regularized optimization. It controls model
complexity using Smoothness-inducing Adversarial Regularization and stabilizes
updates with Bregman Proximal Point Optimization. The method is designed to
improve robustness and efficiency in fine-tuning PLMs, addressing issues like
overfitting and improving generalization in NLP tasks.
(9)
$\min_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{D}}\left[\max_{\delta}\mathcal{L}(f(x+\delta;\theta),y)-\rho\cdot\|\delta\|_{2}^{2}\right],$
where $\delta$ represents adversarial perturbations, $\rho$ balances the
adversarial objective with regularization, and $f$ is the predictive function
of the model.
Zhu et al. (zhu2019freelb, ) propose Free Large-Batch Adversarial Training
(FreeLB), an adversarial training algorithm for improving the robustness and
generalization of PLMs. It enhances training by adding adversarial
perturbations to word embeddings and minimizing the resulting adversarial
risk.
(10)
$\min_{\theta}\frac{1}{N}\sum_{i=1}^{N}\max_{\delta_{i}}\mathcal{L}(f(x_{i}+\delta_{i};\theta),y_{i})-\rho\cdot\|\delta_{i}\|_{2}^{2},$
where $N$ is the batch size, $\delta_{i}$ are the adversarial perturbations,
and $\rho$ balances the original loss with the adversarial term.
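In the spirit of these methods, the simplified sketch below adds an embedding-space adversarial term to the training loss; it assumes `model` maps input embeddings directly to logits and is a sketch of the shared idea, not the exact SMART or FreeLB algorithm.

```python
import torch

def adversarially_regularized_loss(model, embeddings, targets, loss_fn, epsilon=1e-3):
    embeddings = embeddings.detach().requires_grad_(True)
    clean_loss = loss_fn(model(embeddings), targets)
    # Gradient of the loss w.r.t. the embeddings gives the perturbation direction.
    grad, = torch.autograd.grad(clean_loss, embeddings, retain_graph=True)
    delta = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
    # Loss at the perturbed embeddings acts as the adversarial penalty.
    adv_loss = loss_fn(model(embeddings + delta.detach()), targets)
    return clean_loss + adv_loss
```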
## 4\. Prompt Engineering
Prompt engineering has gained significant attention as a means of helping
PLMs understand a given task. Prompt engineering can be divided into two
families of methods, Prompt Tuning and Tuning-free Prompting, which will be
discussed in this section.
### 4.1. Prompt Tuning
Prompt tuning involves modifying the parameters of the pre-trained model or
adjusting additional prompt-related parameters to enhance the adaptation of
the pre-trained model to downstream tasks. A prominent approach in this regard
is the Pattern-Verbalizer-Pair (PVP) structure, initially proposed by
(schick2020exploiting, ). Subsequent Prompt Tuning methods largely build upon
the PVP framework. As shown in Figure 6, Gao et al. (gao2020making, ) have
shown that the choice of templates and labels can lead to substantial
differences in final accuracy. Therefore, current research primarily focuses
on how to select or construct appropriate Patterns and Verbalizers. The design
of prompts can be categorized into discrete prompts and continuous prompts.
Figure 6. The impact of templates and label words on prompt tuning.
#### 4.1.1. Discrete Prompts
Previous efforts have been devoted to the exploration of discrete prompts,
also referred to as hard prompts, which usually correspond to readable
language phrases. The word embeddings of a discrete template remain unchanged
during training and do not introduce any new parameters to the LLMs. In the
following sections, we provide a detailed overview of several methods proposed
for this purpose:
##### D1:Pattern-Exploiting Training
Pattern-Exploiting Training (PET) (schick-schutze-2021-exploiting, )
constructs a prompt set by manually generating PVP content $p=(P,v)$,
where $P$ is a function that takes $x$ as input and outputs a phrase or
sentence $P(x)\in V^{*}$ that contains exactly one mask token, and $v$ is an
injective function $\mathcal{L}\rightarrow V$ that maps each label to a word
in the vocabulary $V$ of the masked language model $M$. PET can be formulated
by:
(11) $\displaystyle s_{p}(l\ |\ x)=M(v(l)\ |\ P(x)),$ (12) $\displaystyle
q_{p}(l\ |\
x)=\frac{e^{s_{p}(l|x)}}{{\textstyle\sum_{l^{\prime}\in\mathcal{L}}e^{s_{p}(l^{\prime}|x)}}},$
where $s_{p}$ is the score for label $l\in\mathcal{L}$, $q_{p}$ is the
probability distribution over labels using softmax. Then, the cross-entropy
between $q_{p}(l\ |\ x)$ and the true (one-hot) label distribution is used as
the loss function for fine-tuning $M$ in $p$.
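A minimal sketch of PVP scoring (Eqs. 11 and 12) with a Hugging Face masked LM is shown below; the pattern and verbalizer are illustrative inventions, not the ones used in the PET paper.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

x = "The movie was a complete waste of time."
pattern = f"{x} It was {tokenizer.mask_token}."             # P(x): exactly one mask
verbalizer = {"positive": "great", "negative": "terrible"}  # v: label -> word

inputs = tokenizer(pattern, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]            # scores over vocabulary

# s_p(l | x) = M(v(l) | P(x)); a softmax over labels would give q_p(l | x).
scores = {label: logits[tokenizer.convert_tokens_to_ids(word)].item()
          for label, word in verbalizer.items()}
```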
##### D2:LM-BFF
Finding the right prompts is cumbersome and error-prone, requiring both
domain expertise and an understanding of NLP. Even if significant effort is
invested, manual prompts are likely to be suboptimal. Therefore, LM-BFF
(gao2020making, ) automatically generates prompts, including pruned brute-
force searches to identify the best working label words, and utilizes the T5
model to automatically generate templates (raffel2020exploring, ). Given a
fixed template $T(x)$, constructing a pruned set $V_{c}\subset V$ of the top
$k$ vocabulary words based on their conditional likelihood using the initial
language model $\mathcal{L}$ can be formulated as:
(13) $\displaystyle\underset{v\in V}{\operatorname{Top}\text{-}k}\left\\{\sum_{x_{in}\in
D_{train}^{c}}\log{P_{\mathcal{L}}([MASK]=v\ |\ T(x_{in}))}\right\\},$
where $P_{\mathcal{L}}$ denotes the output probability distribution of
$\mathcal{L}$.
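A sketch of this pruned search might look as follows, again assuming a Hugging Face-style masked LM; `template_fn` and the loop structure are illustrative assumptions.

```python
import torch

def top_k_label_words(model, tokenizer, template_fn, class_examples, k=10):
    # Accumulate, over training examples of one class c, the masked-LM
    # log-probability of every vocabulary word at the [MASK] position (Eq. 13).
    total_log_probs = None
    for x in class_examples:
        inputs = tokenizer(template_fn(x), return_tensors="pt")
        mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
        with torch.no_grad():
            log_probs = model(**inputs).logits[0, mask_pos].log_softmax(-1)
        total_log_probs = log_probs if total_log_probs is None else total_log_probs + log_probs
    return torch.topk(total_log_probs, k).indices  # ids of candidate label words
```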
##### D3:Prompt Tuning with Rules
Prompt Tuning with Rules (PTR) (han2022ptr, ) can apply logical rules to
automatically compose a small number of manually created sub-prompts into
final task-specific prompts. Take relation classification as an example, given
a sentence $x=\\{...e_{s}...e_{o}...\\}$, where $e_{s}$ and $e_{o}$ are the
subject entity and the object entity respectively, the sub-prompt template and
label word set of $e_{s}$ and $e_{o}$ can be formalized as:
(14) $\displaystyle T_{e_{s}/e_{o}}=``x\ the\ [MASK]\ e_{s}/e_{o}",$
$\displaystyle V_{e_{s}/e_{o}}=\\{``person",``organization",...\\},$
and the sub-prompt of relationship between $e_{s}$ and $e_{o}$ can be
formalized as:
(15) $\displaystyle T_{e_{s},e_{o}}=``x\ e_{s}\ [MASK]\ e_{o}",$
$\displaystyle V_{e_{s},e_{o}}=\\{``^{\prime}s\ parent\ was",``was\ born\
in",...\\},$
by aggregating the sub-prompts, the complete prompt is as follows,
(16) $\displaystyle T=``x\ the\ [MASK]_{1}\ e_{s}\ [MASK]_{2}\ the\
[MASK]_{3}\ e_{o}",$ $\displaystyle
V_{[MASK]_{1}}=\\{``person",``organization",...\\},$ $\displaystyle
V_{[MASK]_{2}}=\\{``^{\prime}s\ parent\ was,``was\ born\ in",...\\},$
$\displaystyle V_{[MASK]_{3}}=\\{``person",``organization",...\\},$
the final learning objective of PTR is to maximize
(17) $\displaystyle\frac{1}{|X|}\sum_{x\in
X}\log{\prod_{j=1}^{n}}p\left([MASK]_{j}=\phi_{j}(y)|T(x)\right),$
where $\phi_{j}(y)$ is to map the class $y$ to the set of label words
$V_{[MASK]_{j}}$ for the $j$-th masked position $[MASK]_{j}$.
##### D4:Knowledgeable Prompt-tuning
Knowledgeable prompt-tuning (KPT) (hu2021knowledgeable, ) generates a set of
label words for each label using external knowledge bases (KBs). The expanded
label words are not simply synonyms of each other, but cover different
granularities and perspectives, and are thus more comprehensive and unbiased
than the class name alone. Refinement methods are then proposed to cope with
the noise in the generated label words. This preserves high-quality words that
can be used to fine-tune PLMs and demonstrates the effectiveness of in-context
learning (ICL).
#### 4.1.2. Continuous Prompts
Instead of creating readable language phrases, continuous prompts convert the
prompts into continuous vectors. Continuous prompts have their own parameters
that can be tuned based on training data from downstream tasks.
##### C1:P-tuning
P-tuning (liu2023gpt, ) is an early attempt to achieve prompt tuning by
employing continuous prompts. Instead of readable templates in discrete
prompts, P-tuning uses continuous prompt embedding $p_{i}$ to build the prompt
template:
(18) $\displaystyle T=\\{[P_{0:i}],x,[P_{i+1:j}],y,[P_{j+1:k}]\\},$
and then leverages an extra embedding function $f:[p_{i}]\to h_{i}$ to map the
template to:
(19)
$\displaystyle\\{h_{0},...,h_{i},e(x),h_{i+1},...,h_{j},e(y),h_{j+1},...,h_{k}\\},$
with the function $f$, the task loss function can be optimized by updating
the embeddings $\\{P_{i}\\}_{i=1}^{k}$.
##### C2:Prefix Tuning
Prefix tuning (li2021prefix, ) freezes the parameters of PLMs and only
optimizes the prefix parameters. Consequently, we only need to store the
prefix for each task, making prefix-tuning modular and space-efficient. Given
a trainable prefix matrix $M_{\phi}$ and a fixed pre-trained LM parameterized
by $\theta$, the training objective is the same as that of full fine-tuning:
(20) $\displaystyle\max_{\phi}{\log{P_{\phi}(y|x)}=\max_{\phi}\sum_{i\in
Y_{idx}}\log{P_{\phi}(z_{i}|h_{<i})}},$
where $h_{i}$ is the concatenation of all neural network layers’ activations
at time step $i$. If $i\in P_{idx}$, then $h_{i}=P_{\theta}[i,:]$; otherwise,
$h_{i}=LM_{\phi}(z_{i},h_{<i})$.
Lester et al. propose prompt tuning (lester2021power, ), which can be regarded
as a simplification of prefix tuning. It greatly reduces the number of
parameters and for the first time demonstrates that prompt tuning alone is
competitive with full fine-tuning.
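A minimal sketch of this family of methods, prepending trainable continuous prompt vectors to frozen token embeddings, is given below; the prompt length and initialization are illustrative.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, embed_layer, num_prompt_tokens=20):
        super().__init__()
        self.embed_layer = embed_layer  # the PLM's (frozen) token embedding
        dim = embed_layer.embedding_dim
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, dim) * 0.02)

    def forward(self, input_ids):
        tokens = self.embed_layer(input_ids)               # (batch, seq, dim)
        prompt = self.prompt.unsqueeze(0).expand(tokens.size(0), -1, -1)
        # Only self.prompt is trained; the PLM parameters stay fixed.
        return torch.cat([prompt, tokens], dim=1)
```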
Figure 7. From P-tuning to P-tuning v2. (liu2021p, )
##### C3: P-tuning V2
P-tuning v2 (liu2021p, ) is an implementation of prefix-tuning (li2021prefix,
) and a mixture of soft prompts (qin2021learning, ). As shown in Figure 7, when
compared to P-tuning, it not only has more tunable task-specific parameters
(from 0.01% to 0.1%-3%), allowing more per-task capacity while remaining
parameter-efficient, but also adds prompts to deeper layers, which have a more
direct impact on model predictions. P-tuning v2 is comparable to fine-tuning
at all scales while needing only 0.1% task-specific parameters compared to
fine-tuning, proving that prompt tuning can effectively assist PLMs in
adapting to downstream tasks.
### 4.2. Tuning-free Prompting
Tuning-free Prompting directly generates answers without modifying the
parameters of the PLMs. These methods can optionally leverage prompts to
enhance inputs; previous studies have explored the impact of prompts on the
generation quality of PLMs and have provided numerous tricks for creating
prompts. The mainstream tuning-free prompting methods include ICL and
Chain-of-Thought (CoT).
#### 4.2.1. In-Context Learning
ICL provides multiple input-output demonstration pairs to PLMs to generate the
desired response, and was first proposed along with GPT-3. ICL is an
effective method as no parameter is tuned; an example prompt of ICL is shown
in Figure 8.
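For concreteness, a few-shot ICL prompt can be built purely as a string, with no parameter updates; the sentiment task and demonstrations below are illustrative.

```python
demonstrations = [
    ("I loved this film!", "positive"),
    ("The plot made no sense at all.", "negative"),
]
test_input = "The acting was superb."

# Demonstration pairs followed by the test input steer the PLM's completion.
prompt = "".join(f"Review: {x}\nSentiment: {y}\n\n" for x, y in demonstrations)
prompt += f"Review: {test_input}\nSentiment:"
```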
Previous research has demonstrated that ICL can acquire knowledge of the
target task’s label space, the distribution of the input text, and the input-
label correspondence from in-context examples. The similarity between in-
context prompts and the target task also significantly influences the
performance of ICL. Generally, the performance tends to improve when in-
context prompts are closer to the test samples in the embedding space
(liu2021makes, ). The arrangement of in-context prompts itself also exerts a
substantial impact on the performance of ICL, which becomes particularly
pronounced in smaller-scale models (lu2021fantastically, ). Therefore, many
researchers are devoted to exploring methodologies for constructing well-
performing in-context prompts and many methods (chen2022improving, ;
chen2021meta, ; min2021metaicl, ) based on ICL are proposed to better design
in-context prompts.
Figure 8. Examples of ICL and CoT.
#### 4.2.2. Chain-of-Thought
CoT (wei2022chain, ) improves performance on a range of arithmetic,
commonsense, and symbolic reasoning tasks by mimicking a step-by-step thought
process for arriving at the answer. An example prompt of CoT is shown in
Figure 8. Zero-shot CoT (kojima2022large, ) is a classical CoT method that
guides the model in reasoning and generating results by employing the same
prompt ”Let’s think step by step,” across different tasks. Another classical
approach is Least-to-Most (zhou2022least, ), which decomposes the target
problem into a series of simpler sub-problems, solving them gradually.
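The sketch below contrasts a few-shot CoT prompt, with a worked demonstration, against the zero-shot CoT trigger; both prompts are illustrative.

```python
# Few-shot CoT: a demonstration that spells out intermediate reasoning steps.
cot_demo = (
    "Q: Roger has 5 balls. He buys 2 cans of 3 balls each. How many balls now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. "
    "The answer is 11.\n\n"
)
question = "Q: A shelf holds 4 rows of 6 books. 7 books are removed. How many remain?\nA:"

few_shot_cot = cot_demo + question
# Zero-shot CoT: the same question with the step-by-step trigger phrase.
zero_shot_cot = question + " Let's think step by step."
```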
A more widespread approach involves feeding the model a set of CoT prompts
designed for step-by-step reasoning to guide its thinking. Similar to ICL, the
selection of prompts significantly influences the generated results, prompting
extensive efforts to identify optimal prompts through methods such as voting
or automatically generating prompts (zhang2022automatic, ; shum2023automatic,
). Additionally, recent research has focused on exploring the application
of CoT in multimodal (zhang2023multimodal, ) and multilingual
(huang2023not, ) scenarios.
## 5\. LLM Based Task-oriented Dialogue Systems
The TOD system assists users in achieving specific domain-related goals, such
as hotel reservations or restaurant queries, through interactive
conversations. Due to its pronounced utility, this technology has garnered
increasing attention from researchers in recent years. Generally speaking, TOD
can be divided into pipeline-based TOD and end-to-end TOD. In this section, we
will give a comprehensive introduction to LLM-based TOD.
As shown in Figure 9, the pipeline-based TOD system comprises four connected
modules: (1) NLU, employed for extracting the user intent, and filling slots;
(2) Dialogue State Tracking (DST), a pivotal module in the Pipeline-based TOD,
utilized to track the dialogue state for the current turn based on the output
of the NLU module and historical inputs of the dialogue; (3) Policy Learning
(PL), determining the subsequent action based on the dialogue state generated
from the DST module; (4) NLG, the final module in the Pipeline-based TOD
system, transforming dialogue actions generated by the PL module into
comprehensible natural language. Dialogue Manager (DM) is the central
controller of a pipeline-based TOD system, which is comprised of DST module
and PL module.
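As a rough illustration of how the four modules chain together, the sketch below uses stub functions in place of learned components; all names and outputs are hypothetical.

```python
def nlu(utterance):                 # intent detection + slot filling
    return {"intent": "find-restaurant", "slots": {"restaurant-area": "east"}}

def dst(state, nlu_output):         # update the dialogue state with the new turn
    state["slots"].update(nlu_output["slots"])
    return state

def policy(state):                  # choose the next system action
    return {"action": "request", "slot": "restaurant-pricerange"}

def nlg(action):                    # realize the action as natural language
    return "What price range are you looking for?"

state = {"slots": {}}
state = dst(state, nlu("I want to eat in the east."))
response = nlg(policy(state))
```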
### 5.1. Pipeline-based Methods
Figure 9. Pipeline-based task-oriented dialogue framework.
As each module in pipeline-based TOD systems is trained independently, any
module’s failure to adapt to its sub-task can degrade the performance of the
entire system. Moreover, as the pipeline-based TOD system solves all the
sub-tasks sequentially, errors accumulate between modules, leading to the
error propagation problem. However, since each module in the
Pipeline-based TOD system operates separately, ensuring consistent input and
output, it becomes convenient to interchange individual modules within the
pipeline-based TOD system. With the development of PLMs, large-scale language
models fine-tuned through different approaches can be easily accessed and
seamlessly integrated to replace modules within the TOD system, which enables
users to easily adapt the system to sub-tasks in the target domain.
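The module boundaries described above can be summarized with a schematic sketch of one dialogue turn. Everything below is illustrative toy logic (rule-based stand-ins for what would in practice be fine-tuned LLMs); only the four-stage NLU, DST, PL, NLG flow itself is taken from the text.

```python
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    slots: dict = field(default_factory=dict)  # accumulated slot-value pairs

def nlu(utterance: str) -> tuple[str, dict]:
    # Toy stand-in: a real NLU module would be a fine-tuned LLM.
    slots = {}
    if "east" in utterance:
        slots["restaurant-area"] = "east"
    if "expensive" in utterance:
        slots["restaurant-pricerange"] = "expensive"
    return "find-restaurant", slots

def dst(state: DialogueState, slots: dict) -> DialogueState:
    state.slots.update(slots)  # fold this turn's NLU output into the state
    return state

def policy(state: DialogueState) -> str:
    # DA-level policy: request any slot still missing, else inform.
    return "request-food" if "restaurant-food" not in state.slots else "inform"

def nlg(action: str) -> str:
    return {"request-food": "What type of food would you like?"}.get(action, "Okay.")

def pipeline_turn(state: DialogueState, utterance: str) -> str:
    intent, slots = nlu(utterance)
    return nlg(policy(dst(state, slots)))

print(pipeline_turn(DialogueState(),
                    "I am looking for a place to eat in the east that is expensive"))
```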
#### 5.1.1. Natural Language Understanding
The NLU module identifies and extracts information such as user intent and
slot values from the natural language input provided by the user. Generally,
the output of the NLU module is $U_{n}=(I_{n},Z_{n})$, where $I_{n}$ is the
detected user intent and $Z_{n}$ is the set of slot-value pairs. For example,
in restaurant recommendation tasks, the user intent is “find-restaurant” and
the domain is “restaurant”. Slot filling can be regarded as a sequence
classification task. For example, when the user inputs the message “I am
looking for a place to eat in the east that is expensive,” the NLU module
reads this input and generates the following slot-value pairs: {restaurant-
area: east, restaurant-pricerange: expensive}.
##### Intent detection
Deep-learning based methods (deng2012use, ; tur2012towards, ) are widely used
for intent detection, and many neural-network-based methods achieve promising
performance. With the development of LLMs, numerous researchers have applied
LLMs to TOD tasks with great success. Comi et al. propose a pipeline-based
intent detection method (comi2023zero, ) built on a pre-trained BERT model:
they first extract a set of potential intents as candidate classes for the
utterance intent classification problem using a zero-shot approach based on a
fine-tuned BERT model. Parikh et al. (parikh2023exploring, ) utilize GPT-3 and
Flan-T5-XXL models with prompts for intent classification tasks; they also use
PEFT methods to fine-tune LLMs and demonstrate outstanding performance on
intent classification. To address the problem that augmentation via in-context
prompting of LLMs alone does not improve performance, Lin et al.
(lin2023selective, ) introduce a novel approach based on pointwise
V-information and successfully improve the performance of LLM-based intent
detection.
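As a hedged illustration of prompt-based intent detection in the spirit of the methods above (not any specific paper's prompt design), the task can be framed as choosing among candidate intents; `call_llm` is again a hypothetical LLM client.

```python
def call_llm(prompt: str) -> str:
    raise NotImplementedError("hypothetical LLM client")

def detect_intent(utterance: str, candidates: list[str]) -> str:
    # Present the candidate classes in the prompt and let the LLM pick one.
    prompt = ("Classify the user utterance into one of the intents: "
              f"{', '.join(candidates)}.\n"
              f"Utterance: {utterance}\nIntent:")
    return call_llm(prompt).strip()  # e.g. "find-restaurant"
```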
##### Slot filling
The slot filling task tags each word subsequence with a label, so slot filling
can also be regarded as a sequence classification task. Coope et al. propose
Span-ConveRT (coope2020span, ), a model for dialogue slot filling that
demonstrates strong performance in few-shot scenarios by integrating
conversational knowledge encoded in large pre-trained conversational models.
Siddique et al. (siddique2021linguistically, ) propose a zero-shot slot
filling model, LEONA, which employs pre-trained LLMs to provide contextualized
word representations that capture complex syntactic and semantic features of
words based on the context of their usage, and uses these representations to
produce slot-specific predictions for each word.
##### Joint intent detection and slot filling
Some studies combine intent detection and slot filling into a joint module,
which promotes two-way information sharing between the two tasks. Chen et al.
(chen2019bert, ) fine-tune an LLM on NLU datasets, and their experimental
results indicate that a joint NLU module based on fine-tuned LLMs outperforms
both a separated NLU module and an NLU module based on untuned LLMs. Nguyen et
al. (nguyen2023cof, ) propose the CoF-CoT approach, which breaks NLU tasks
down into multiple reasoning steps; LLMs can enhance their capability in
solving NLU tasks at different granularities by learning to acquire and
leverage essential concepts.
#### 5.1.2. Dialogue State Tracking
As shown in Figure 9, DST and PL constitute the dialogue manager (DM) module,
the central controller of a pipeline-based TOD system. As the first module of
the DM, DST tracks the current dialogue state by predicting the slot-value
pairs at the current turn $t$. In TOD tasks, a dialogue state
$\mathcal{B}_{t}$ records the entire dialogue history up to turn $t$. DST
modules record the user’s objectives in the form of slot-value pairs; for
example, in hotel reservation tasks, the dialogue state at turn $t$ may be
$\mathcal{B}_{t}=\\{(\text{hotel-bookstay},5),(\text{hotel-bookday},\text{Friday}),(\text{hotel-bookname},\text{Hilton})\\}$.
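Concretely, the accumulation of $\mathcal{B}_{t}$ across turns can be sketched as a dictionary of slot-value pairs; the per-turn updates below reuse the hotel example above and are purely illustrative.

```python
belief_state = {}  # B_0: the empty initial state

# Illustrative per-turn slot-value updates extracted by NLU/DST.
turn_updates = [
    {"hotel-bookstay": "5"},
    {"hotel-bookday": "Friday"},
    {"hotel-bookname": "Hilton"},
]
for t, update in enumerate(turn_updates, start=1):
    belief_state.update(update)   # B_t extends B_{t-1}
    print(f"B_{t} = {belief_state}")
```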
DST methods can be divided into static ontology DST models and dynamic
ontology DST models: the former predict the dialogue state from predefined
slot-value pairs, while the latter predict it from an unfixed set of slot
values. Many static ontology DST models (balaraman2019scalable, ;
zhong2018global, ; lee2019sumbt, ) have been proposed. However, most LLM-based
DST methods follow the dynamic ontology setting, tracking the dialogue state
over unfixed slot-value pairs. For example, SAVN and MinTL (wang2020slot, ;
lin2020mintl, ) focus on creating methods or frameworks in which LLMs can be
effectively applied; these methods achieve competitive results and allow users
to plug-and-play pre-trained sequence-to-sequence models for solving DST
tasks. Hu et al. (hu2022context, ) propose IC-DST, a zero-shot and few-shot
DST framework based on ICL. IC-DST retrieves the few most similar turns from
the labeled dialogues as prompts, which are then fed into the LLM to produce
the dialogue state changes of the current turn. Feng et al. propose LDST
(feng2023towards, ), a DST framework that leverages the LLaMA model: LDST
first creates an instruction-tuning dataset and fine-tunes LLaMA on it, then
guides the model to generate accurate responses by constructing and inputting
an output prompt.
#### 5.1.3. Policy Learning
As the second part of the DM module, the PL module is responsible for
generating the appropriate next system action based on the dialogue state
$\mathcal{B}_{t}$ at the current turn $t$ produced by the DST module. The task
of the PL module can therefore be formulated as learning a mapping function:
$f:\mathcal{B}_{t}\to a_{i}\in\mathcal{A},$
where $\mathcal{A}=\left\\{a_{1},\ \dots,\ a_{n}\right\\}$ is the set of
system actions.
The PL module in TOD systems can be approached at two levels: the dialogue-act
(DA) level and the word level. The goal of DA-level dialogue policy is to
generate dialogue acts such as ‘Inform’: (‘people’, ‘area’), which are then
transformed into readable output by the NLG module. Reinforcement learning
methods (takanobu2020multi, ; wang2020task, ; gordon2020learning, ) are widely
used for DA-level PL tasks. A word-level dialogue PL module combines the PL
and NLG modules, since it conducts a sequence of actions by selecting a string
of words as a readable sentence; in this way, the word-level dialogue PL task
can be regarded as a sequence-to-sequence generation task. Since LLMs show
outstanding performance on sequence-to-sequence tasks, numerous LLM-based
word-level dialogue PL methods have been proposed. Chen et al.
(chen2019semantically, ) use the BERT model as a decoder. Li et al.
(li2021retrieve, ) utilize the BERT model as a context-aware retrieval module.
Numerous researchers (budzianowski2019hello, ; hosseini2020simple, ;
jang2022gpt, ) fine-tune the GPT-2 model and apply the fine-tuned model to
word-level dialogue PL tasks. Ramachandran et al. (ramachandran2021causal, )
fine-tune BART and He et al. (he2022galaxy, ) fine-tune UniLM. Yu et al.
(yu2023prompt, ) propose a prompt-based method that solves PL tasks by
prompting LLMs to act as a policy prior.
#### 5.1.4. Natural Language Generation
NLG, which comprises data-to-text generation and text-to-text generation, is
the process of generating natural language text for specific purposes. In
pipeline-based TOD systems, NLG is the last module and is responsible for
transforming the dialogue actions generated by the PL module into readable
natural language. For example, given the dialogue act “Inform: (‘people’)”,
the NLG module converts it into the readable sentence “How many people are
planning to check in?” Conventional NLG modules are based on a pipeline
structure that can be divided into a Text Planner, a Sentence Planner, and a
Linguistic Realizer (REITER_DALE_1997, ). With the development of deep
learning, researchers have introduced end-to-end neural NLG methods
(wen2015stochastic, ; wen2015semantically, ; zhou2016context, ) in recent
works. Many recent works solve NLG tasks with LLMs, since NLG in
pipeline-based TOD systems is a sequence-to-sequence task that LLMs address
efficiently. For example, Peng et al. propose the SC-GPT (peng2020few, )
model, which is pre-trained on a large set of annotated NLG corpora and
fine-tuned on datasets with limited domain labels to adapt to new domains.
Chen et al. (chen2019few, ) fine-tune the other parameters of the PLM while
keeping the pre-trained word embeddings fixed to enhance the model’s
generalization ability. Baheti et al. (baheti2020fluent, ) incorporate a
BERT-based classifier into an end-to-end NLG system to identify the best
answer among candidate responses. Qian et al. (qian2022controllable, ) enhance
the performance of GPT-2 on NLG tasks by utilizing the prefix-tuning method.
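As a toy sketch of the act-to-text step, a template table can map the “Inform: (‘people’)” example above to its surface form; LLM-based NLG such as SC-GPT would instead generate the sentence conditioned on the act, and the template here is illustrative only.

```python
# Illustrative template-based NLG; real systems generate this text with an LM.
TEMPLATES = {
    ("Inform", "people"): "How many people are planning to check in?",
}

def nlg(act: str, slot: str) -> str:
    # Fall back to a generic rendering for acts without a template.
    return TEMPLATES.get((act, slot), f"[{act}] please provide {slot}.")

print(nlg("Inform", "people"))
```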
### 5.2. End-to-End Methods
Figure 10. Modularly end-to-end task-oriented dialogue system (a) and fully
end-to-end task-oriented dialogue system (b).
As shown in Figure 10, end-to-end TOD systems can be divided into modularly
end-to-end TOD systems and fully end-to-end TOD systems. Although a modularly
end-to-end TOD system generates responses through separate modules, similar to
a pipeline-based TOD system, it trains all the modules simultaneously and
optimizes their parameters jointly. An end-to-end TOD system generates the
dialogue system response $\mathcal{S}$ based on the corresponding knowledge
base $\mathcal{KB}$ and the dialogue history
$\mathcal{H}=\left(u_{1},s_{1}\right),\left(u_{2},s_{2}\right),\ldots,\left(u_{n-1},s_{n-1}\right)$,
where $u$ is the user input and $s$ is the system answer:
(21) $\mathcal{S}=\text{End-to-end TOD}(\mathcal{H},\mathcal{KB}).$
LLMs have achieved significant success in ODD tasks. However, research on
training fully LLM-based end-to-end TOD models remains relatively limited due
to the lack of large-scale training datasets for TOD tasks. Therefore, most
existing LLM-based end-to-end TOD methods are based on modularly end-to-end
TOD systems.
Simple Task-Oriented Dialogue (SimpleTOD) (hosseini2020simple, ) is an end-to-
end TOD method based on a single causal language model trained on all sub-
tasks. The methodology SimpleTOD employs to train LLMs serves as a successful
example of leveraging such models for TOD tasks. Consider the concatenation
$x^{t}=[\mathcal{H}_{t},\mathcal{B}_{t},\mathcal{D}_{t},\mathcal{A}_{t},\mathcal{S}_{t}]$,
where $\mathcal{H}_{t}$, $\mathcal{B}_{t}$, $\mathcal{D}_{t}$,
$\mathcal{A}_{t}$ and $\mathcal{S}_{t}$ are the values of the dialogue history
$\mathcal{H}$, belief state $\mathcal{B}$, database query result
$\mathcal{D}$, dialogue action $\mathcal{A}$ and system answer $\mathcal{S}$
at turn $t$. The joint probability $p(x)$ and the negative log-likelihood
$\mathcal{L}(D)$ over a dataset $D=\left\\{x^{1},\ldots,x^{|D|}\right\\}$ can
be formulated as:
(22) $p(x)=\prod_{i=1}^{n}p\left(x_{i}\mid x_{<i}\right),$ (23)
$\mathcal{L}(D)=-\sum_{t=1}^{|D|}\sum_{i=1}^{n_{t}}\log
p_{\theta}\left(x_{i}^{t}\mid x_{<i}^{t}\right),$
where $n_{t}$ is the length of $x^{t}$ and $\theta$ denotes the parameters of
the neural network, which is trained to minimize $\mathcal{L}(D)$.
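A hedged sketch of one SimpleTOD-style training step follows: the turn's components are concatenated into a single sequence and a causal LM is trained with the NLL of Eq. (23). The `<...>` segment markers are illustrative, not SimpleTOD's exact special tokens, and GPT-2 stands in for the backbone.

```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# One training example x^t = [H_t, B_t, D_t, A_t, S_t].
x_t = ("<history> i need a cheap hotel for friday "
       "<belief> hotel-pricerange = cheap ; hotel-bookday = friday "
       "<db> 3 matches "
       "<action> inform ( choice = 3 ) request ( area ) "
       "<response> i found 3 cheap hotels . which area do you prefer ?")

inputs = tokenizer(x_t, return_tensors="pt")
# With labels = input_ids, the model returns the mean token-level NLL,
# i.e. one term of Eq. (23); an optimizer step would follow.
loss = model(**inputs, labels=inputs["input_ids"]).loss
loss.backward()
```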
Peng et al. propose Soloist (peng2021soloist, ), an approach that uses
transfer learning and machine teaching to construct an end-to-end TOD system.
The training process of Soloist is similar to that of SimpleTOD, but Soloist
refines the data format of each dialogue turn: the dialogue action
$\mathcal{A}$ is no longer required, and each dialogue turn in the training
data is represented as $x=[\mathcal{H},\mathcal{B},\mathcal{D},\mathcal{S}]$.
The full pre-training objective of Soloist is divided into three sub-tasks:
belief prediction, grounded response generation, and a contrastive objective.
Given the length $T_{\mathcal{B}}$ of the belief state sequence and the tokens
$\mathcal{B}_{<t}$ before position $t$, the objective of predicting the belief
state is defined as:
(24) $\mathcal{L}_{\mathrm{B}}=\log
p(\mathcal{B}\mid\mathcal{H})=\sum_{t=1}^{T_{\mathcal{B}}}\log
p_{\boldsymbol{\theta}}\left(\mathcal{B}_{t}\mid\mathcal{B}_{<t},\mathcal{H}\right),$
where $p(x)$ is the joint probability and $\theta$ denotes the parameters to
be learned.
Similarly, given the length $T_{\mathcal{S}}$ of delexicalized response
$\mathcal{S}=\left[\mathcal{S}_{1},\cdots,\mathcal{S}_{T_{\mathcal{S}}}\right]$,
the corresponding training objective can be formulated as:
(25) $\displaystyle\mathcal{L}_{\mathrm{R}}$ $\displaystyle=\log
p(\mathcal{S}\mid\mathcal{D},\mathcal{B},\mathcal{H})$
$\displaystyle=\sum_{t=1}^{T_{\mathcal{S}}}\log
p_{\boldsymbol{\theta}}\left(\mathcal{S}_{t}\mid\mathcal{S}_{<t},\mathcal{D},\mathcal{B},\mathcal{H}\right).$
Let $x$ denote the positive samples and $x^{\prime}$ the negative samples.
Soloist applies a binary classifier to the features to forecast whether the
items within the sequence correspond ($y=1$) or do not correspond ($y=0$). The
contrastive objective is the cross-entropy defined as:
(26)
$\mathcal{L}_{\mathrm{C}}=y\log\left(p_{\boldsymbol{\theta}}(\boldsymbol{x})\right)+(1-y)\log\left(1-p_{\boldsymbol{\theta}}\left(\boldsymbol{x}^{\prime}\right)\right).$
The full training objective can then be formulated as:
(27)
$\mathcal{L}_{\boldsymbol{\theta}}(D)=\sum_{t=1}^{|D|}\left(\mathcal{L}_{\mathrm{B}}\left(\boldsymbol{x}_{t}\right)+\mathcal{L}_{\mathrm{R}}\left(\boldsymbol{x}_{t}\right)+\mathcal{L}_{\mathrm{C}}\left(\boldsymbol{x}_{t}\right)\right).$
Unlike previous modularly end-to-end TOD methods, which are trained and
evaluated on turn-level sequences based on the dialogue history
$\mathcal{H}_{t}=\left(u_{1},s_{1}\right),\left(u_{2},s_{2}\right),\ldots,\left(u_{t-1},s_{t-1}\right),\left(u_{t}\right)$
to generate the response at turn $t$, UBAR (yang2021ubar, ) integrates the
intermediate information $\mathcal{B}$, $\mathcal{D}$, and $\mathcal{A}$ into
the context. The training sequence of UBAR at turn $t$ is therefore defined as
$[\mathcal{H}_{0},\mathcal{B}_{0},\mathcal{D}_{0},\mathcal{A}_{0},\mathcal{S}_{0},\dots,\mathcal{H}_{t},\mathcal{B}_{t},\mathcal{D}_{t},\mathcal{A}_{t},\mathcal{S}_{t}]$,
which is then used to fine-tune the large pre-trained model GPT-2.
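The session-level sequence construction that distinguishes UBAR can be sketched as simple concatenation over turns; the turn fields below are illustrative placeholders, not UBAR's actual serialization format.

```python
def ubar_sequence(turns: list[dict]) -> str:
    # Keep every turn's intermediate B, D, A in the context, not just (u, s).
    parts = []
    for turn in turns:
        parts += [turn["user"], turn["belief"], turn["db"],
                  turn["action"], turn["response"]]
    return " ".join(parts)

turns = [
    {"user": "i need a hotel", "belief": "hotel", "db": "10 matches",
     "action": "request ( area )", "response": "which area ?"},
]
print(ubar_sequence(turns))
```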
Su et al. propose a plug-and-play model for task-oriented dialogue (PPTOD)
(su2021multi, ), a modularly end-to-end TOD model. PPTOD is pre-trained on
four TOD-related tasks, and prompts are used to enhance the performance of the
language model. Notably, the learning framework of PPTOD allows it to be
trained with partially annotated data, which significantly reduces the cost of
manually creating datasets.
Semi-supervised Pre-trAined Conversation ModEl (SPACE) comprises a series of
PLMs (he2022galaxy, ; he2022space, ; he2022unified, ) proposed by the
Conversational AI Team of Alibaba DAMO Academy. GALAXY (SPACE-1) (he2022galaxy,
) is a modularly end-to-end TOD model that explicitly acquires dialogue policy
from a combination of limited labeled dialogues and extensive unlabeled
dialogue corpora through semi-supervised learning. Whereas previous works
predominantly focused on enhancing the NLU and NLG modules, GALAXY optimizes
the PL module by introducing a new dialogue-action prediction task during
pre-training. These methodologies improve GALAXY's performance on TOD tasks
and give it superior few-shot capabilities compared to other models.
SPACE-2 (he2022space, ) is a tree-structured conversation model pre-trained on
limited labeled dialogues and large-scale unlabeled dialogue corpora. In
conventional methods, positive samples are exclusively defined as examples
with identical annotations, while all other instances are categorized as
negative samples; this classification overlooks the possibility that diverse
examples may share semantic similarities to some extent. The SPACE-2 framework
therefore establishes a tree structure, called the semantic tree structure
(STS), for each dataset based on its data structure, measures the similarity
among different labeled dialogues, and aggregates the resulting scores. In
this approach, all annotated data are treated as positive instances with soft
scores, as opposed to the binary scores (0 or 1) commonly employed in previous
methods.
SPACE-3 (he2022unified, ) is one of the state-of-the-art pre-trained modularly
end-to-end TOD models. The SPACE-3 framework consolidates the efforts of
SPACE-1 and SPACE-2, incorporating STS to unify the inconsistent annotation
schemata across different datasets and devising a dedicated pre-training
objective for each component. SPACE-3 uses
$p^{u}=\left\\{p_{1}^{u},p_{2}^{u},\ldots,p_{A}^{u}\right\\}$ and
$p^{o}=\left\\{p_{1}^{o},p_{2}^{o},\ldots,p_{B}^{o}\right\\}$ to represent the
dialogue understanding prompt sequence and the policy prompt sequence, where
$A$ and $B$ are the lengths of the prompt sequences. $p^{u}$ and $p^{o}$ are
then leveraged to extract semantics and help pass the task flow in a TOD
system.
## 6\. LLM Based Open-Domain Dialogue Systems
ODD systems are designed to converse on a wide range of topics without a
specific task or goal. While TOD systems focus on accomplishing specific
tasks, ODD systems aim to provide coherent and contextually relevant responses
to any topic raised by the user. ODD methods fall into three main categories:
retrieval-based methods, which select responses from a predefined set;
generation-based methods, which generate responses dynamically; and hybrid
methods, which combine retrieval and generation to optimize dialogue outcomes.
Table 2 summarizes the latest advances in these three approaches within the
domain of ODD systems.
Table 2. Recent Advances in Open-Domain Dialogue Systems.
Task | Methods | Description
---|---|---
Retrieval-based Methods | Dense Retriever (karpukhin2020dense, ) | Dense vector representations for improved accuracy
MSN (yuan2019multi, ) | Context management via multi-hop mechanism
IoI Network (tao2019one, ) | Multi-turn response selection enhancement
Generation-based Methods | PLATO-LTM (xu2022long, ) | Persona coherence with long-term memory
PAML (madotto2019personalizing, ) | Personalization via meta-learning
Persona-Consistent Generation (chen2023learning, ) | Coherence with latent variables for consistency
PHMN (li2021dialogue, ) | Personalized matching with user history
DHAP (ma2021one, ) | Dynamic user profile learning for personalization
MSP Model (zhong2022less, ) | Dialogue history refinement for personalization
GDR Framework (song2020generate, ) | Persona-consistent dialogue generation
CLV Model (tang2023enhancing, ) | Dual persona data utilization for personalized responses
Hybrid Methods | Retro (borgeaud2022improving, ) | Retrieval-augmented auto-regressive LM
FiD (izacard2020leveraging, ) | Passage retrieval and decoding fusion
K2R (adolphs2021reason, ) | Knowledge-first approach for factual accuracy
EMDR2 (singh2021end, ) | T5 integration with Top-k MIPS retrieval
Latent Retrieval (lee2019latent, ) | MIPS for efficient evidence retrieval
IAG (komeili2021internet, ) | Real-time Internet search integration
### 6.1. Retrieval-based Methods
In ODD systems, early foundational works set the stage for subsequent
advancements. Bordes et al. (bordes2016learning, ) shifted the focus towards
end-to-end learning in goal-oriented dialogue systems, challenging the
traditional reliance on domain-specific handcrafting. Furthering this
paradigm, Tao et al. (tao2019one, ) introduced a nuanced approach to
interaction depth in multi-turn response selection, demonstrating that deeper
interaction can significantly enhance context-response matching. Concurrently,
Henderson et al. (henderson2017efficient, ) showcased the practical
application of these concepts in a large-scale commercial environment. These
seminal contributions laid the groundwork for a new wave of innovations in
retrieval-based ODD systems.
Karpukhin et al. (karpukhin2020dense, ) introduce the dense retriever model
for information retrieval. This model departs from traditional retrieval
methods like Lucene-BM25 primarily through its use of dense vector
representations, as opposed to conventional sparse vector models such as
TF-IDF and BM25. These dense representations are obtained from embeddings
learned from a carefully curated set of questions and passages, leading to
improvements in top-20 passage retrieval accuracy. Additionally, the model
utilizes a dual-encoder framework based on BERT, with one encoder processing
questions and the other processing passages, each mapping its input into a
low-dimensional vector space for efficient retrieval. Training focuses on
optimizing the embeddings so that question and passage vectors align
effectively. The objective function for this optimization is defined as
follows:
(28)
$L(q_{i},p_{i}^{+},p_{i,1}^{-},\ldots,p_{i,n}^{-})=-\log\frac{e^{\text{sim}(q_{i},p_{i}^{+})}}{e^{\text{sim}(q_{i},p_{i}^{+})}+\sum_{j=1}^{n}e^{\text{sim}(q_{i},p_{i,j}^{-})}},$
where $q_{i}$ is the question vector, $p_{i}^{+}$ the positively aligned
passage vector, and $p_{i,j}^{-}$ the negative passage vectors. The similarity
function $\text{sim}(q,p)$ computes the dot product of the question and
passage embeddings.
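Since Eq. (28) is exactly a softmax cross-entropy with the positive passage at a known index, it can be sketched in a few lines of PyTorch; the random vectors below stand in for the BERT question and passage encoders of the actual model.

```python
import torch
import torch.nn.functional as F

d = 768
q = torch.randn(d)        # question embedding (stand-in for the BERT encoder)
p_pos = torch.randn(d)    # positive passage embedding
p_negs = torch.randn(7, d)  # n = 7 negative passage embeddings

# Dot-product similarities, positive placed at index 0.
sims = torch.cat([(q * p_pos).sum().unsqueeze(0), p_negs @ q])
# Eq. (28) is cross-entropy with the positive as the target class.
loss = F.cross_entropy(sims.unsqueeze(0), torch.tensor([0]))
```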
With the groundwork laid by dense retrieval models, subsequent efforts focused
on refining the interaction depth within dialogue systems. Tao et al.
(tao2019one, ) introduce the Interaction-over-Interaction (IoI) network, which
enhances multi-turn response selection in retrieval-based chatbots. The model
deepens the interaction between context and response by leveraging multiple
rounds of interaction: it stacks multiple layers of interaction between the
utterance and the response, allowing the network to capture complex semantic
relationships more effectively. The IoI model’s interaction blocks process the
utterance-response pairs sequentially, with self-attention mechanisms and
iterative refinement enhancing the depth of interaction. Additionally, the
model incorporates a mechanism for aggregating matching signals across the
interaction layers.
### 6.2. Generation-based Methods
Generation-based methods in ODD systems have evolved significantly, offering
flexibility and creativity in response synthesis. Early works like Vinyals and
Le (vinyals2015neural, ) and Sutskever et al. (sutskever2014sequence, )
pioneered the use of sequence-to-sequence models for coherent dialogue
generation. Further advancements by Shang et al. (shang2015neural, )
introduced attention mechanisms, improving response relevance and quality,
while Serban et al. (serban2016building, ) developed the Hierarchical
Recurrent Encoder-Decoder for more complex dialogues. Recent progress, marked
by Radford et al. (radford2019language, ), has seen the integration of large-
scale transformer models like GPT-2, pushing the boundaries of fluency and
contextual awareness in generated responses.
#### 6.2.1. Knowledge-Enhanced Generation
Building on these foundational advancements in sequence-to-sequence models and
attention mechanisms, Zhao et al. (zhao2020knowledge, ) propose an enhancement
for dialogue generation through the integration of external knowledge with
PLMs such as GPT-2. Their approach involves a BERT-based Knowledge Selection
Module for selecting relevant documents $D^{\prime}$ from a set $D$ based on
the dialogue context $U$. The selected knowledge is utilized in a GPT-2-based
Response Generation Model for generating responses:
(29)
$P(r|U,D^{\prime};\theta)=\prod_{t=1}^{l_{r}}P(r_{t}|g(U,D^{\prime}),r_{1:t-1};\theta),$
where $l_{r}$ is the length of the response, $g(U,D^{\prime})$ represents the
integrated user dialogue context and selected knowledge, and $\theta$ denotes
the GPT-2 parameters. This method allows the generation of responses that are
contextually relevant and informed by external knowledge.
Moreover, while Zhao et al. focused on enhancing dialogue generation with
external knowledge, Xu et al. (xu2022long, ) introduce the PLATO-LTM model,
featuring a Long-Term Memory (LTM) mechanism to consistently maintain persona
information. The model utilizes a Persona Extractor (PE) to classify persona
labels based on user input $U_{i}$ using an ERNIE-CNN architecture. The LTM
module in PLATO-LTM is responsible for matching the context with the relevant
persona, enabling the system to retrieve appropriate persona-specific
information. The Generation Module is represented as:
(30) $L_{\text{NLL}}=-\mathbb{E}\left[\sum_{t=1}^{T}\log
p(r_{t}|c,\rho_{u},\rho_{s},r_{<t})\right].$
This equation represents the negative log-likelihood loss function, where
$r_{t}$ is the generated response at time $t$, $c$ denotes the current
context, $\rho_{u}$ and $\rho_{s}$ are user and system persona embeddings,
respectively, and $r_{<t}$ symbolizes the responses generated up to the
previous time step. PLATO-LTM’s design focuses on enhancing the coherence and
engagement of conversations by dynamically managing persona information, thus
facilitating more natural and contextually relevant dialogue generation.
Further expanding on the theme of personalization, Song et al.
(song2020generate, ) develop the Generate-Delete-Rewrite (GDR) framework to
create persona-consistent dialogues. The GDR framework operates in three
stages: initially generating a response prototype, then identifying and
masking elements that are inconsistent with the established persona, and
finally refining the output. The process starts with the creation of an
initial response vector, followed by calculating attention weights to identify
inconsistencies, and concludes with a refinement stage where the initial
output is adjusted to ensure persona consistency. Building on the idea of
tailored dialogue generation, Tang et al. (tang2023enhancing, ) introduce the
Contrastive Latent Variable (CLV)-based model. This model enhances dialogue
personalization by using both sparse and dense persona resources. It begins by
encoding persona information and user queries, and then generates personalized
responses by combining these encoded inputs. The CLV model distinguishes
itself by clustering dense persona descriptions into sparse categories,
effectively utilizing varied persona data to inform and personalize dialogue
responses. The CLV model is not only consistent with user personas but also
enriched by a deeper understanding of individual user characteristics.
#### 6.2.2. Personalization and Consistency
Transitioning from the focus on dialogue personalization and consistency,
Madotto et al. (madotto2019personalizing, ) use Model-Agnostic Meta-Learning
(MAML) for personalizing dialogue learning. Their Persona-Agnostic Meta-
Learning (PAML) framework treats different personas as separate tasks in meta-
learning. PAML adapts dialogue models to new personas through dialogue
samples, contrasting traditional persona-specific descriptions. This method is
effective in generating fluent and consistent dialogues.
As the field advances towards more nuanced methods of personalization, Li et
al. (li2021dialogue, ) have developed a Personalized Hybrid Matching Network
(PHMN) that incorporates user-specific dialogue history into response
selection. The PHMN model operates on two principal fronts: firstly, it
extracts personalized wording behaviors from the user’s dialogue history.
Secondly, it employs hybrid representation learning on context-response
utterances, integrating a customized attention mechanism to extract essential
information from context-response interactions. It improves the accuracy of
matching responses to the user’s conversational style and preferences.
Building upon the concept of utilizing dialogue history for personalized
response generation, Zhong et al. (zhong2022less, ) introduce the MSP model to
refine user dialogue history. The MSP model includes User Refiner, Topic
Refiner, Token Refiner, and Response Generator. The dialogue generation
process integrates these components:
(31) $\hat{y}=\text{TRMdec}(x,u_{\text{sim}},t,A),$
where $\hat{y}$ is the response, $x$ the dialogue input, $u_{\text{sim}}$ the
user similarity output, $t$ the topic information, and $A$ the token-level
attention. This model enhances personalization in response generation by
refining user dialogue history.
### 6.3. Hybrid Methods
Hybrid methods in ODD systems represent the integration of retrieval-based and
generation-based approaches, combining the strengths of both. Early research
in this area includes the work of Sordoni et al. (sordoni2015neural, ), who
explored the use of context-sensitive generation models conditioned on
traditional retrieval methods. This approach laid the groundwork for
subsequent developments in the field. Another contribution, by Yan et al.
(yan2016docchat, ), introduced a model that dynamically selects between
generating a response and retrieving one, depending on the conversation’s
context. This blend allows for more flexible and contextually appropriate
responses.
#### 6.3.1. Integrating Retrieval and Generation
Building upon these initial explorations into hybrid methodologies, Borgeaud
et al. (borgeaud2022improving, ) introduce the Retrieval-Enhanced Transformer
(Retro) model, which combines a large-scale retrieval mechanism with an auto-
regressive language model. This integration allows the language model to
access contextually relevant information from a retrieval database. The
model’s retrieval-enhanced sequence log-likelihood is defined as:
(32)
$\mathcal{L}(\mathbf{X}\mid\theta,\mathcal{D})=\sum_{u=1}^{l}\sum_{i=1}^{m}\log
p_{\theta}\left(x_{(u-1)m+i}\mid\mathbf{x}_{<(u-1)m+i},\ \text{Ret}_{\mathcal{D}}(C_{u^{\prime}})_{u^{\prime}<u}\right),$
where $\mathbf{X}$ represents the input sequence split into $l$ chunks of
length $m$, $\theta$ denotes the model parameters, $\mathcal{D}$ is the
retrieval database, and $\text{Ret}_{\mathcal{D}}$ refers to the retrieval
operation applied to the preceding chunks $C_{u^{\prime}}$.
#### 6.3.2. Enhancing Dialogue with External Knowledge
Adolphs et al. (adolphs2021reason, ) develop the Knowledge to Response (K2R)
model, focusing on factual accuracy: K2R first generates knowledge sequences
relevant to the dialogue context and then integrates this knowledge to
synthesize the final response. Izacard and Grave (izacard2020leveraging, )
develop the Fusion-in-Decoder (FiD) model, integrating generative models with
passage retrieval: FiD encodes a question and multiple retrieved passages
independently, concatenating each passage with the question for decoding,
which facilitates the synthesis of information from various passages.
Expanding on the concept of retrieval augmentation, Xu et al. (xu2021beyond, )
focus on the dynamics of long-term conversation. They explore retrieval-
augmented generative models, characterized by how they integrate the
conversation context into the model:
(33) $p(y|x)=\sum_{z\in\text{Retrieve}(x)}p(z|x)\cdot p(y|x,z).$
Additionally, they explore memory-based models with summarization, in which
the model summarizes past dialogues for response generation:
(34) $P(y|x,S)=\text{SummaryGeneration}(x,\text{Past Dialogues}).$
Singh et al. (singh2021end, ) develop $\mathrm{EMDR}^{2}$, which integrates T5
encoding and decoding with a Top-k MIPS retrieval mechanism, enhanced by an
evidence document encoder. The system aims to optimize multi-document reading
and retrieval. $\mathrm{EMDR}^{2}$ implements an Expectation-Maximization (EM)
algorithm for latent-variable model training. The training process involves
computing the posterior of the latent variable $Z$, and the training objective
is given by:
(35) $\displaystyle L=\log
p(a|q,Z_{\text{top-K}};\Theta)+\log\sum_{k=1}^{K}\text{SG}(p(a|q,z_{k};\Theta))\,p(z_{k}|q,Z_{\text{top-K}};\lambda),$
where SG is the stop-gradient operator, $\Theta$ are the reader’s parameters,
and $\lambda$ are the retriever’s parameters.
Komeili et al. (komeili2021internet, ) introduce an ”Internet-Augmented
Dialogue Generation” system that integrates real-time Internet search. It
consists of a search query generator and a response generation module. The
search query generator, a transformer-based encoder-decoder model, generates
Internet search queries from the dialogue context; the response module uses
the retrieved information to construct responses. The system’s function is
encapsulated in:
(36) $R=\text{FiD}\left(\text{Encoder-Decoder}(C),\,\text{InternetSearch}(\text{Encoder-Decoder}(C))\right),$
where $C$ is the dialogue context and $R$ the generated response. The Encoder-
Decoder function processes $C$, InternetSearch retrieves information based on
the query, and the FiD method integrates this information to generate $R$.
## 7\. EVALUATION APPROACHES
Effective evaluation of models across different tasks has long been a focus of
research. In this section we introduce the automatic and human evaluation
approaches that have been widely used.
### 7.1. Automatic Evaluation
#### 7.1.1. Automatic Evaluation Approaches for Task-oriented Dialogue
Systems
The evaluation of TOD systems is mainly performed using automatic approaches,
namely joint goal accuracy, slot accuracy, average goal accuracy, requested
slot F1, BLEU, and entity F1. In the following, a brief description of each
approach is provided.
##### Joint Goal Accuracy
Joint goal accuracy (JGA), developed from Henderson et al. (henderson2014word,
) and Zhong et al. (zhong2018global, ), is the most widely used evaluation
approach for DST. The joint goal is the set of accumulated turn-level goals up
to a given turn in the dialogue. JGA compares the predicted dialogue states to
the ground truth, which includes slot values for all possible pairs; the
output is considered a correct prediction if and only if all the predicted
values match the ground-truth values at each turn. JGA can be expressed as:
(37) $JGA=\begin{cases}1&\text{if predicted state = gold state},\\\
0&\text{otherwise}.\end{cases}$
##### Slot Accuracy
Slot accuracy (SA) (wu2019transferable, ) is also a widely used automatic
evaluation approach. Unlike joint goal accuracy, it compares each predicted
value to the corresponding ground truth individually rather than jointly. SA
can be expressed as:
(38) $SA=\frac{T-M-W}{T},$
where $T$ is the total number of predefined slots over all domains, $M$
denotes the number of missed slots that the model fails to predict correctly
among the slots included in the gold state, and $W$ represents the number of
wrongly predicted slots among the slots that do not exist in the gold state.
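Both metrics are straightforward to implement; the following sketch assumes dialogue states are represented as slot-to-value dictionaries, with `all_slots` the set of $T$ predefined slots over all domains.

```python
def joint_goal_accuracy(pred: dict, gold: dict) -> int:
    # Eq. (37): 1 only if every slot-value pair matches the gold state.
    return int(pred == gold)

def slot_accuracy(pred: dict, gold: dict, all_slots: set) -> float:
    # Eq. (38): (T - M - W) / T.
    missed = sum(1 for s in gold if pred.get(s) != gold[s])   # M
    wrong = sum(1 for s in pred if s not in gold)             # W
    return (len(all_slots) - missed - wrong) / len(all_slots)
```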
##### Average Goal Accuracy
Average goal accuracy (AGA) (rastogi2020towards, ) is the average accuracy of
predicting the correct value for an active slot in each turn. A slot becomes
active if its value is mentioned in the current turn and is not inherited from
previous turns. AGA can be expressed as:
(39) $AGA=\frac{|N_{t}\cap B_{t}^{\prime}|}{|N_{t}|},$
where $B_{t}$ and $B_{t}^{\prime}$ are the sets of ground-truth and predicted
belief states for turn $t$, respectively, and $N_{t}\subseteq B_{t}$ is the
set of ground-truth triplets with non-empty slot values.
##### Requested Slot F1
Requested slot F1 measures the model performance in correctly predicting
whether a slot is requested by the user, estimated as the macro-averaged F1
score over all requested slots. The macro-average is computed over the
individual slot types and slot values for every turn. To define the
macro-averaged F1 score ($ma\,F_{1}$), first consider the precision ($P_{i}$)
and recall ($R_{i}$) within each class $i=1,\ \dots,\ r$:
(40) $\displaystyle P_{i}$
$\displaystyle=\frac{TP_{i}}{(TP_{i}+FP_{i})}=\frac{p_{ii}}{p_{i-}},$ (41)
$\displaystyle R_{i}$
$\displaystyle=\frac{TP_{i}}{(TP_{i}+FN_{i})}=\frac{p_{ii}}{p_{-i}},$
and the F1 score within each class ($F_{1i}$) is defined as the harmonic mean
of $P_{i}$ and $R_{i}$, that is:
(42) $F_{1i}=2\frac{P_{i}\times
R_{i}}{P_{i}+R_{i}}=2\frac{p_{ii}}{p_{i-}+p_{-i}}.$
The macro-averaged F1 score is defined as the simple arithmetic mean of the
$F_{1i}$:
(43)
$ma\,F_{1}=\frac{1}{r}\sum_{i=1}^{r}F_{1i}=\frac{2}{r}\sum_{i=1}^{r}\frac{p_{ii}}{p_{i-}+p_{-i}}.$
##### BLEU
BLEU (papineni2002bleu, ) calculates the co-occurrence frequency of two
sentences based on the weighted average of matched n-gram phrases. Originally
designed to evaluate machine translation, it has since been used to evaluate
both TOD and ODD systems.
##### Entity F1
Entity F1 is used to evaluate the model’s ability to generate relevant
entities from the underlying knowledge base and to capture the semantics of
the user-initiated dialogue flow. To compute entity F1, one micro-averages
over the entire set of system dialogue responses, using the entities in their
canonicalized forms.
#### 7.1.2. Automatic Evaluation Approaches for Open-domain Dialogue Systems
The evaluation of ODD systems is mainly performed using automatic approaches,
namely perplexity, BLEU, DIST-n, and Recall@K. In the following, a brief
description of each approach is provided.
##### Perplexity
Perplexity (vinyals2015neural, ) was originally conceived as an information-
theoretic measure of how well a given language model predicts a text sequence
or, equivalently, how well a word sequence fits a specific language model. In
ODD evaluation, lower perplexity indicates that the model assigns higher
probability to the reference responses. Perplexity can be expressed as:
(44) $PP(W)=P(w_{1}w_{2}\dots w_{N})^{-1/N}=\sqrt[N]{\frac{1}{P(w_{1}w_{2}\dots
w_{N})}},$
where $W$ is a word sequence of length $N$ and $P(w_{1}w_{2}\dots w_{N})$ is
the probability of that word sequence.
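Eq. (44) can be computed directly from per-token probabilities, in log space for numerical stability; the probabilities below are illustrative.

```python
import math

def perplexity(token_probs: list[float]) -> float:
    # PP(W) = (prod_i p_i)^(-1/N), evaluated as exp of the mean negative log.
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

print(perplexity([0.25, 0.5, 0.1]))  # higher probabilities -> lower perplexity
```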
##### DIST-n
DIST-n (li2015diversity, ) measures the diversity of generated responses in
dialogue generation by counting the number of distinct unigrams and bigrams.
The count is scaled by the total number of generated tokens to avoid favoring
long sentences: DIST-1 and DIST-2 are, respectively, the numbers of distinct
unigrams and bigrams divided by the total number of generated words.
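A direct implementation of this definition:

```python
def dist_n(tokens: list[str], n: int) -> float:
    # Distinct n-grams, scaled by the total number of generated tokens.
    ngrams = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    return len(ngrams) / len(tokens)

response = "i am fine i am fine thanks".split()
print(dist_n(response, 1), dist_n(response, 2))  # DIST-1, DIST-2
```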
##### Recall@K
Recall@K is a standard approach for evaluating retrieval. For a query $q$, it
is defined as the ratio of the number of relevant (positive) examples within
the top-$k$ ranked examples to the total number of relevant examples for $q$,
given by $|P_{q}|$. It is denoted by $R^{k}_{\Omega}(q)$ when computed for
query $q$ over database $\Omega$; with $H(\cdot)$ the Heaviside step function,
it can be expressed as:
(45) $R^{k}_{\Omega}(q)=\frac{H(k-1-\sum_{z\in\Omega,z\neq
x}H(S_{qz}-S_{qx}))}{|P_{q}|},$
where $S_{qz}$ is the similarity score between query $q$ and candidate $z$,
and $x\in P_{q}$ is a positive example.
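A sketch that averages this quantity over all positives of a query, assuming candidate scores $S_{qz}$ are given as a dictionary:

```python
def recall_at_k(scores: dict, positives: set, k: int) -> float:
    # Rank candidates by score; count positives that land in the top-k.
    top_k = sorted(scores, key=scores.get, reverse=True)[:k]
    return len(positives.intersection(top_k)) / len(positives)

scores = {"r1": 0.9, "r2": 0.4, "r3": 0.8}   # candidate -> S_qz
print(recall_at_k(scores, {"r1", "r3"}, k=2))
```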
### 7.2. Human Evaluation
Human evaluation is also used as an evaluation approach across different
tasks. It focuses on two matters: diversity and creativity, i.e., the capacity
to vary texts in form and emphasis to fit an enormous range of speaking
situations, and the ability to express any object or relation as a natural
language text. Furthermore, human evaluation scrutinizes three key aspects:
grammar (whether a generated sentence is grammatically correct and fluent),
faithfulness (whether the output accurately reflects the input), and coherence
(whether a sentence is logically consistent and follows the natural flow of
human writing).
## 8\. DATASETS
In this section we introduce the datasets that have been widely used in TOD
and ODD systems in recent years. Tables 3 and 4 show basic information for the
TOD and ODD datasets, respectively.
### 8.1. Datasets for Task-oriented Dialogue Systems
##### MultiWOZ
MultiWOZ (budzianowski2018multiwoz, ) is a fully-labeled collection of human-
human written conversations containing 10,438 dialogues. The dataset was
collected using the Wizard-of-Oz method and covers 7 domains, with each
dialogue spanning between 1 and 5 domains, thus varying greatly in length and
complexity. MultiWOZ has undergone several revisions with error corrections.
MultiWOZ 2.1 (eric2019multiwoz, ) provides 2-3 descriptions for each slot in
the dataset. MultiWOZ 2.2 (zang2020multiwoz, ) further provides descriptions
of domains and slots, as well as possible values for categorical slots.
MultiWOZ 2.3 (han2021multiwoz, ) differentiates incorrect annotations in
dialogue acts from those in dialogue states and identifies a lack of
co-reference annotation. MultiWOZ 2.4 (ye2021multiwoz, ) is the latest version
and fixes incorrect and inconsistent annotations.
##### RiSAWOZ
RiSAWOZ (quan2020risawoz, ) is a large-scale multi-domain Chinese Wizard-of-Oz
dataset with rich semantic annotations. It contains 11.2 thousand human-to-
human multi-turn semantically annotated dialogues, with more than 150 thousand
utterances spanning 12 domains. Each dialogue is labeled with comprehensive
annotations, including the dialogue goal, domain, and dialogue states and acts
on both the user and system sides.
##### CrossWOZ
CrossWOZ (zhu2020crosswoz, ) is a large-scale Chinese cross-domain Wizard-of-
Oz task-oriented dataset. It contains 6 thousand dialogue sessions and 102
thousand utterances across 5 domains, with rich annotations of dialogue states
and acts on both the user and system sides; about 60% of the dialogues have
cross-domain user goals.
##### PersuasionForGood
PersuasionForGood (P4G) (wang2019persuasion, ) is a dataset with 1,017
dialogues and annotated emerging persuasion strategies from a subset. The
dataset was collected using an online persuasion task where one participant
was asked to persuade the other to donate to a specific charity. It is a rich
human-human persuasion dialogue dataset with comprehensive user psychological
study and persuasion strategy annotation.
##### WOZ 2.0
WOZ 2.0 (mrkvsic2016neural, ) is a dataset updated from the CamRest
(wen2016network, ) dataset which has 676 dialogues. The dataset is collected
using Wizard-of-Oz and contains 1,200 dialogues. Each turn in a dialogue was
contributed by different users, who had to review all previous turns in that
dialogue.
##### Stanford Multi-Domain
Stanford Multi-Domain (SMD) (eric2017key, ) is a Wizard-of-Oz dataset. It
contains 3,031 dialogues in 3 distinct domains. The dialogues are grounded
through underlying knowledge bases and a knowledge snippet is attached with
each dialogue as a piece of simplified database information.
Table 3. Overview of datasets for task-oriented dialogue. *SMD only provides
statistics of avg of utterances per dialogue and avg of tokens per utterance.
Single domain and multi domains show whether the dataset has dialogues that
contain single domain or multi domains respectively.
Dataset | Dialogues | Avg. turns / dial. | Avg. tokens / turn | Domains | Single Domain | Multi Domains
---|---|---|---|---|---|---
MultiWOZ | 10,438 | 13.70 | 13.18 | 7 | ✓ | ✓
RiSAWOZ | 11,200 | 13.57 | 10.91 | 12 | ✓ | ✓
CrossWOZ | 6,012 | 16.90 | 16.25 | 5 | ✓ | ✓
P4G | 1,017 | 10.43 | - | 1 | ✓ | ✗
WOZ 2.0 | 1,200 | 7.35 | 11.27 | 1 | ✓ | ✗
SMD | 3,031 | 5.29* | 9* | 3 | ✓ | ✗
### 8.2. Datasets for Open-domain Dialogue Systems
##### PersonaChat
PersonaChat (zhang2018personalizing, ) consists of chats and personas, where a
persona is a collection of five or more sentences describing a personality.
The dataset consists of crowd-sourced dialogues in which each participant
plays the part of an assigned persona, and each persona has a word-distinct
paraphrase. Its paired human-generated profiles and conversations aid the
construction of agents that have consistent personalities and viewpoints.
##### MMDialog
MMDialog (feng2022mmdialog, ) is a large-scale multi-turn dialogue dataset for
multi-modal open-domain conversations. It is composed of a curated set of 1.08
million real-world dialogues with 1.53 million unique images across 4,148
topics. Its massive topic coverage generalizes the open domain, and it is the
largest multi-modal conversation dataset by number of dialogues, by a factor
of 88.
##### Dailydialog
Dailydialog (li2017dailydialog, ) is a multi-turn dialogue dataset with 13,118
dialogues. It was created by scraping text from conversations held on an
English-learning website. The dialogues cover 10 topics and conform to common
dialogue flows; the dataset also contains unique multi-turn dialogue flow
patterns that reflect realistic communication. Each utterance is labeled with
a dialogue act and an emotion.
##### Pchatbot
Pchatbot (qian2021pchatbot, ) is a large-scale dialogue dataset with two
subsets collected from Weibo and Judicial forums, respectively. It is composed
of almost 200 million dialogue pairs, elaborately normalized via processes
such as anonymization, deduplication, segmentation, and filtering. It provides
anonymized user IDs and timestamps for both posts and responses, enabling the
development of models that directly learn implicit user personality from a
user’s dialogue history.
##### PersonalDialogue
PersonalDialogue (zheng2019personalized, ) is a large-scale multi-turn
dialogue dataset containing various traits from a large number of people. The
dataset consists of 20.83 million sessions and 56.25 million utterances from
8.47 million speakers. Each utterance is linked to a speaker annotated with
traits such as age, gender, location, and interest tags. The dataset
facilitates the study of personalized dialogue generation.
##### Douban
Douban (wu2016sequential, ) is the first human-labeled dataset for multi-turn
response selection. The authors crawled 1.1 million dyadic dialogues longer
than 2 turns from Douban groups and randomly sampled 0.5 million dialogues for
the training set, 25 thousand for the validation set, and 1,000 for the test
set. Conversations in this dataset come from the open domain, and response
candidates are collected from a retrieval engine.
Table 4. Overview of datasets for open-domain dialogue. Human–Human denotes
datasets where two people converse with each other. Scraped marks datasets
which are gathered from an existing online resource.
Dataset | Dialogues | Method | Source | Language
---|---|---|---|---
PersonaChat | 164,356 | Human-Human | Crowdsourcing | en
MMDialog | 1,079,117 | Scraped | Social Media | en
Dailydialog | 13,118 | Scraped | - | en
Pchatbot | 198,875,796 | Scraped | Weibo, Judicial | zh
PersonalDialogue | $\approx$20.83 million | Scraped | Weibo | zh
Douban | 526,000 | Scraped | Douban | zh
## 9\. DISCUSSION
More and more researchers are investigating the application of LLMs to
different components of multi-turn TOD and ODD systems. One factor driving the
popularity of multi-turn dialogue tasks is the growing demand for chatbots in
both industrial settings and everyday activities: industry representatives
such as OpenAI’s ChatGPT, Anthropic’s Claude 2, Meta’s Meta AI, and Google’s
Gemini Ultra have greatly improved the convenience of people’s lives. Another
reason is that a significant portion of natural language data exists in the
form of dialogues, and multi-turn dialogue systems are more aligned with
real-world scenarios, thus fostering the development of LLM-based multi-turn
dialogue systems. Meanwhile, LLMs provide humans with a powerful toolkit.
This section discusses challenges faced by LLM-based multi-turn dialogue
systems that deserve to be tackled and investigated in the future.
Deep understanding and Large-scale open-retrieval. Employing LLMs in multi-
turn conversations requires more proficient understanding and retention of
long contextual information to generate responses that are coherent and
relevant. In contrast to open-domain QA tasks, open-retrieval dialogues pose
new challenges that arise primarily from the characteristics of human-machine
interaction and affect both efficiency and effectiveness.
Emotionalization and Personalization. Emotionalizing dialogue systems and
enhancing logical consistency can make responses better aligned with the needs
behind queries and the emotions behind expressions, thereby better capturing
semantic and contextual connections and providing more appropriate answers.
Multi-task dialogue systems. Recent research in end-to-end TOD systems and
knowledge-grounded open-domain systems has opened up the prospect of merging
these two distinct paradigms within a cohesive framework, or perhaps even a
unified model. These hybrid dialogue systems are designed to operate
concurrently as both assistants, efficiently executing specific tasks, and as
chatbots, engaging in conversational interactions.
Multi-modal dialogue systems. The world we inhabit is inherently multi-modal,
with humans utilizing a multitude of senses such as sight, hearing, smell,
taste, and touch to perceive it. Hence, it is imperative that chatbots possess
the proficiency to blend information from disparate modalities. While
contemporary large chat models are admirably effective at processing text,
audio, and images, they still encounter significant limitations and challenges
in video processing.
Bias identification and Privacy protection. LLMs have the potential to
generate content that is harmful, offensive, or biased due to their training
on publicly available online datasets. While researchers have endeavored to
address this through fine-tuning, challenges may persist, particularly in
languages other than English where publicly available datasets are limited. It
is important to prioritize user privacy alongside the advancement of more
sophisticated dialogue systems.
## 10\. CONCLUSION
In recent years, the rapid advancement of LLMs has propelled multi-turn
dialogue tasks to the forefront of natural language processing research. This
paper delves into the study of LLM-based multi-turn dialogue systems. It
begins by categorizing common LLMs based on their model structures and
introduces methods for adapting LLMs to various subtasks, including fine-
tuning and prompt engineering. Subsequently, it discusses two main categories
of LLM-based multi-turn dialogue systems: LLM-Based TOD systems and LLM-Based
ODD systems. Following this, the paper outlines evaluation metrics derived
from the outputs of multi-turn dialogue systems, which aid in assessing and
understanding the conversational abilities of LLMs. Additionally, it
highlights datasets that have been widely used in TOD and ODD systems in
recent years. Finally, the paper suggests some open problems to indicate the
major challenges and future research directions for LLM-based multi-turn
dialogue systems.
## References
* (1) Joseph Weizenbaum. Eliza—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1):36–45, 1966.
* (2) Kenneth Mark Colby, Sylvia Weber, and Franklin Dennis Hilf. Artificial paranoia. Artificial intelligence, 2(1):1–25, 1971.
* (3) David Goddeau et al. A form-based dialogue manager for spoken language applications. In Proceeding of Fourth International Conference on Spoken Language Processing. ICSLP’96, volume 2, pages 701–704. IEEE, 1996.
* (4) Yu Wu et al. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 496–505, Vancouver, Canada, July 2017. Association for Computational Linguistics.
* (5) Tiancheng Zhao and Maxine Eskenazi. Towards end-to-end learning for dialog state tracking and management using deep reinforcement learning. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 1–10, Los Angeles, September 2016. Association for Computational Linguistics.
* (6) Wentao Ma et al. TripleNet: Triple attention network for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 737–746, Hong Kong, China, November 2019. Association for Computational Linguistics.
* (7) Iulian Serban et al. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the AAAI conference on artificial intelligence, volume 30, 2016.
* (8) Wanwei He et al. Amalgamating knowledge from two teachers for task-oriented dialogue system with adversarial training. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3498–3507, 2020.
* (9) Lisong Qiu et al. Are training samples correlated? learning to generate dialogue responses with multiple references. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3826–3835, 2019.
* (10) Suket Arora, Kamaljeet Batra, and Sarabjit Singh. Dialogue system: A brief review. arXiv preprint arXiv:1306.4134, 2013.
* (11) Hongshen Chen et al. A survey on dialogue systems: Recent advances and new frontiers. Acm Sigkdd Explorations Newsletter, 19(2):25–35, 2017.
* (12) Jinjie Ni et al. Recent advances in deep learning based dialogue systems: A systematic survey. Artificial intelligence review, 56(4):3055–3155, 2023.
* (13) Libo Qin et al. End-to-end task-oriented dialogue: A survey of tasks, methods, and future directions. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5925–5941, Singapore, December 2023. Association for Computational Linguistics.
* (14) Jared Kaplan et al. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
* (15) Jason Wei et al. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682, 2022.
* (16) Xipeng Qiu et al. Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63(10):1872–1897, 2020.
* (17) Ashish Vaswani et al. Attention is all you need. Advances in neural information processing systems, 30, 2017.
* (18) Colin Raffel et al. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551, 2020.
* (19) Peter J. Liu et al. Generating wikipedia by summarizing long sequences. arXiv preprint arXiv:1801.10198, 2018.
* (20) Alec Radford et al. Improving language understanding by generative pre-training. 2018.
* (21) Rami Al-Rfou et al. Character-level language modeling with deeper self-attention. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 3159–3166, 2019.
* (22) Junjie Ye et al. A comprehensive capability analysis of gpt-3 and gpt-3.5 series models. arXiv preprint arXiv:2303.10420, 2023.
* (23) Alec Radford et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
* (24) Bryan McCann et al. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730, 2018.
* (25) Tom Brown et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
* (26) Long Ouyang et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
* (27) Wayne Zhao Xin et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023.
* (28) OpenAI. Introducing chatgpt, November 2022.
* (29) Samantha Lock. What is ai chatbot phenomenon chatgpt and could it replace humans. The Guardian, 5, 2022.
* (30) OpenAI. Gpt-4 technical report, 2023.
* (31) Samantha Lock. What is ai chatbot phenomenon chatgpt and could it replace humans? The Guardian.
* (32) Jon Gertner. Wikipedia’s moment of truth. The New York Times Magazine.
* (33) Chatgpt can now see, hear, and speak. https://www.openai.com, 2023. Retrieved October 16, 2023.
* (34) Kevin Roose. The new chatgpt can ’see’ and ’talk.’ here’s what it’s like. The New York Times, September 27 2023. Retrieved October 16, 2023.
* (35) OpenAI. Introducing gpts, November 2023.
* (36) Hugo Touvron et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
* (37) Meta AI. Introducing llama: A foundational, 65-billion-parameter large language model. February 2023.
* (38) Jordan Hoffmann et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.
* (39) Jack W. Rae et al. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446, 2021.
* (40) Meta. Meta and microsoft introduce the next generation of llama. July 2023. Retrieved 21 July 2023, from https://about.fb.com/news/2023/07/llama-2/.
* (41) Hugo Touvron et al. Llama 2: Open foundation and fine-tuned chat models, 2023.
* (42) Baptiste Rozière et al. Code llama: Open foundation models for code, 2023.
* (43) Zhengxiao Du et al. Glm: General language model pretraining with autoregressive blank infilling. arXiv preprint arXiv:2103.10360, 2021.
* (44) Aohan Zeng et al. Glm-130b: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414, 2022.
* (45) THUDM. Chatglm2-6b. https://github.com/thudm/chatglm2-6b, 2022.
* (46) THUDM. Chatglm3. https://github.com/THUDM/ChatGLM3, 2023.
* (47) Jacob Devlin et al. BERT: Pre-training of deep bidirectional transformers for language understanding. pages 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics.
* (48) Google AI Blog. Open sourcing bert: State-of-the-art pre-training for natural language processing. November 2018. Retrieved November 27, 2019.
* (49) Yinhan Liu et al. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
* (50) Li Dong et al. Unified language model pre-training for natural language understanding and generation. Advances in neural information processing systems, 32, 2019.
* (51) Mike Lewis et al. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
* (52) Neil Houlsby et al. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790–2799. PMLR, 2019.
* (53) Jonas Pfeiffer et al. AdapterHub: A framework for adapting transformers. pages 46–54, Online, October 2020. Association for Computational Linguistics.
* (54) Xiao Liu et al. Gpt understands, too. AI Open, 2023.
* (55) Edward Hu et al. Lora: Low-rank adaptation of large language models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4395–4409, 2021.
* (56) Tim Dettmers et al. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314, 2023.
* (57) Jason Wei et al. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021.
* (58) Tianyi Zhang et al. Revisiting few-sample bert fine-tuning. arXiv preprint arXiv:2006.05987, 2020.
* (59) Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. On the stability of fine-tuning bert: Misconceptions, explanations, and strong baselines. arXiv preprint arXiv:2006.04884, 2021.
* (60) Armen Aghajanyan et al. Better fine-tuning by reducing representational collapse. arXiv preprint arXiv:2008.03156, 2020.
* (61) Haoming Jiang et al. SMART: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2177–2190, Online, July 2020. Association for Computational Linguistics.
* (62) Chen Zhu et al. Freelb: Enhanced adversarial training for natural language understanding. arXiv preprint arXiv:1909.11764, 2019.
* (63) Timo Schick and Hinrich Schütze. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269, Online, April 2021. Association for Computational Linguistics.
* (64) Tianyu Gao, Adam Fisch, and Danqi Chen. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816–3830, Online, August 2021. Association for Computational Linguistics.
* (65) Timo Schick and Hinrich Schütze. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269, Online, April 2021. Association for Computational Linguistics.
* (66) Xu Han et al. Ptr: Prompt tuning with rules for text classification. AI Open, 3:182–192, 2022.
* (67) Shengding Hu et al. Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2225–2240, Dublin, Ireland, May 2022. Association for Computational Linguistics.
* (68) Xiao Liu et al. Gpt understands, too. AI Open, 2023.
* (69) Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582–4597, Online, August 2021. Association for Computational Linguistics.
* (70) Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics.
* (71) Xiao Liu et al. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. arXiv preprint arXiv:2110.07602, 2021.
* (72) Guanghui Qin and Jason Eisner. Learning how to ask: Querying LMs with mixtures of soft prompts. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5203–5212, Online, June 2021. Association for Computational Linguistics.
* (73) Jiachang Liu et al. What makes good in-context examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 100–114, Dublin, Ireland and Online, May 2022. Association for Computational Linguistics.
* (74) Yao Lu et al. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8086–8098, Dublin, Ireland, May 2022. Association for Computational Linguistics.
* (75) Mingda Chen et al. Improving in-context few-shot learning via self-supervised training. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3558–3573, Seattle, United States, July 2022. Association for Computational Linguistics.
* (76) Yanda Chen et al. Meta-learning via language model in-context tuning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 719–730, Dublin, Ireland, May 2022. Association for Computational Linguistics.
* (77) Sewon Min et al. MetaICL: Learning to learn in context. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2791–2809, Seattle, United States, July 2022. Association for Computational Linguistics.
* (78) Jason Wei et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
* (79) Takeshi Kojima et al. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199–22213, 2022.
* (80) Denny Zhou et al. Least-to-most prompting enables complex reasoning in large language models. arXiv preprint arXiv:2205.10625, 2022.
* (81) Zhuosheng Zhang et al. Automatic chain of thought prompting in large language models. arXiv preprint arXiv:2210.03493, 2022.
* (82) Kashun Shum, Shizhe Diao, and Tong Zhang. Automatic prompt augmentation and selection with chain-of-thought from labeled data. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 12113–12139, Singapore, December 2023. Association for Computational Linguistics.
* (83) Zhuosheng Zhang et al. Multimodal chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923, 2023.
* (84) Haoyang Huang et al. Not all languages are created equal in LLMs: Improving multilingual capability by cross-lingual-thought prompting. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 12365–12394, Singapore, December 2023. Association for Computational Linguistics.
* (85) Li Deng et al. Use of kernel deep convex networks and end-to-end learning for spoken language understanding. In 2012 IEEE Spoken Language Technology Workshop (SLT), pages 210–215. IEEE, 2012.
* (86) Gokhan Tur et al. Towards deeper understanding: Deep convex networks for semantic utterance classification. In 2012 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 5045–5048. IEEE, 2012.
* (87) Daniele Comi et al. Zero-shot-bert-adapters: a zero-shot pipeline for unknown intent detection. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 650–663, 2023.
* (88) Soham Parikh et al. Exploring zero and few-shot techniques for intent classification. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track), pages 744–751, Toronto, Canada, July 2023. Association for Computational Linguistics.
* (89) Yen-Ting Lin et al. Selective in-context data augmentation for intent detection using pointwise V-information. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 1463–1476, Dubrovnik, Croatia, May 2023. Association for Computational Linguistics.
* (90) Samuel Coope et al. Span-ConveRT: Few-shot span extraction for dialog with pretrained conversational representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 107–121, Online, July 2020. Association for Computational Linguistics.
* (91) AB Siddique, Fuad Jamour, and Vagelis Hristidis. Linguistically-enriched and context-awarezero-shot slot filling. In Proceedings of the Web Conference 2021, pages 3279–3290, 2021.
* (92) Qian Chen, Zhu Zhuo, and Wen Wang. Bert for joint intent classification and slot filling. arxiv. arXiv preprint arXiv:1902.10909, 2019.
* (93) Hoang Nguyen et al. CoF-CoT: Enhancing large language models with coarse-to-fine chain-of-thought prompting for multi-domain NLU tasks. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12109–12119, Singapore, December 2023. Association for Computational Linguistics.
* (94) Vevake Balaraman and Bernardo Magnini. Scalable neural dialogue state tracking. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 830–837. IEEE, 2019.
* (95) Victor Zhong, Caiming Xiong, and Richard Socher. Global-locally self-attentive encoder for dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1458–1467, 2018.
* (96) Hwaran Lee, Jinsik Lee, and Tae-Yoon Kim. SUMBT: Slot-utterance matching for universal and scalable belief tracking. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5478–5483, Florence, Italy, July 2019. Association for Computational Linguistics.
* (97) Yexiang Wang, Yi Guo, and Siqi Zhu. Slot attention with value normalization for multi-domain dialogue state tracking. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3019–3028, 2020.
* (98) Zhaojiang Lin et al. MinTL: Minimalist transfer learning for task-oriented dialogue systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3391–3405, Online, November 2020. Association for Computational Linguistics.
* (99) Yushi Hu et al. In-context learning for few-shot dialogue state tracking. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2627–2643, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics.
* (100) Yujie Feng et al. Towards LLM-driven dialogue state tracking. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 739–755, Singapore, December 2023. Association for Computational Linguistics.
* (101) Ryuichi Takanobu, Runze Liang, and Minlie Huang. Multi-agent task-oriented dialog policy learning with role-aware reward decomposition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 625–638, Online, July 2020. Association for Computational Linguistics.
* (102) Sihan Wang et al. Task-completion dialogue policy learning via monte carlo tree search with dueling network. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3461–3471, 2020.
* (103) Gabriel Gordon-Hall, Philip John Gorinski, and Shay B. Cohen. Learning Dialog Policies from Weak Demonstrations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1394–1405, Online, July 2020. Association for Computational Linguistics.
* (104) Wenhu Chen et al. Semantically conditioned dialog response generation via hierarchical disentangled self-attention. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3696–3709, Florence, Italy, July 2019. Association for Computational Linguistics.
* (105) YunHao Li et al. Retrieve & memorize: Dialog policy learning with multi-action memory. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 447–459, Online, August 2021. Association for Computational Linguistics.
* (106) Paweł Budzianowski and Ivan Vulić. Hello, it’s GPT-2 - how can I help you? towards the use of pretrained language models for task-oriented dialogue systems. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 15–22, Hong Kong, November 2019. Association for Computational Linguistics.
* (107) Ehsan Hosseini-Asl et al. A simple language model for task-oriented dialogue. Advances in Neural Information Processing Systems, 33:20179–20191, 2020.
* (108) Youngsoo Jang, Jongmin Lee, and Kee-Eung Kim. Gpt-critic: Offline reinforcement learning for end-to-end task-oriented dialogue systems. In 10th International Conference on Learning Representations, ICLR 2022. International Conference on Learning Representations, ICLR, 2022.
* (109) Govardana Sachithanandam Ramachandran, Kazuma Hashimoto, and Caiming Xiong. [CASPI] causal-aware safe policy improvement for task-oriented dialogue. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 92–102, Dublin, Ireland, May 2022. Association for Computational Linguistics.
* (110) Wanwei He et al. Galaxy: A generative pre-trained model for task-oriented dialog with semi-supervised learning and explicit policy injection. In Proceedings of the AAAI conference on artificial intelligence, volume 36, pages 10749–10757, 2022.
* (111) Xiao Yu, Maximillian Chen, and Zhou Yu. Prompt-based Monte-Carlo tree search for goal-oriented dialogue policy planning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7101–7125, Singapore, December 2023. Association for Computational Linguistics.
* (112) EHUD REITER and ROBERT DALE. Building applied natural language generation systems. Natural Language Engineering, 3(1):57–87, 1997.
* (113) Tsung-Hsien Wen et al. Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 275–284, Prague, Czech Republic, September 2015. Association for Computational Linguistics.
* (114) Tsung-Hsien Wen et al. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711–1721, Lisbon, Portugal, September 2015. Association for Computational Linguistics.
* (115) Hao Zhou, Minlie Huang, and Xiaoyan Zhu. Context-aware natural language generation for spoken dialogue systems. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2032–2041, 2016.
* (116) Baolin Peng et al. Few-shot natural language generation for task-oriented dialog. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 172–182, Online, November 2020. Association for Computational Linguistics.
* (117) Zhiyu Chen et al. Few-shot NLG with pre-trained language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 183–190, Online, July 2020. Association for Computational Linguistics.
* (118) Ashutosh Baheti, Alan Ritter, and Kevin Small. Fluent response generation for conversational question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 191–207, Online, July 2020. Association for Computational Linguistics.
* (119) Jing Qian et al. Controllable natural language generation with contrastive prefixes. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2912–2924, Dublin, Ireland, May 2022. Association for Computational Linguistics.
* (120) Baolin Peng et al. Soloist: Building task bots at scale with transfer learning and machine teaching. Transactions of the Association for Computational Linguistics, 9:807–824, 2021.
* (121) Yunyi Yang, Yunhao Li, and Xiaojun Quan. Ubar: Towards fully end-to-end task-oriented dialog system with gpt-2. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14230–14238, 2021.
* (122) Yixuan Su et al. Multi-task pre-training for plug-and-play task-oriented dialogue system. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4661–4676, Dublin, Ireland, May 2022. Association for Computational Linguistics.
* (123) Wanwei He et al. SPACE-2: Tree-structured semi-supervised contrastive pre-training for task-oriented dialog understanding. In Proceedings of the 29th International Conference on Computational Linguistics, pages 553–569, Gyeongju, Republic of Korea, October 2022. International Committee on Computational Linguistics.
* (124) Wanwei He et al. Unified dialog model pre-training for task-oriented dialog understanding and generation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 187–200, 2022.
* (125) Vladimir Karpukhin et al. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online, November 2020. Association for Computational Linguistics.
* (126) Chunyuan Yuan et al. Multi-hop selector network for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP), pages 111–120, 2019.
* (127) Chongyang Tao et al. One time of interaction may not be enough: Go deep with an interaction-over-interaction network for response selection in dialogues. In Proceedings of the 57th annual meeting of the association for computational linguistics, pages 1–11, 2019.
* (128) Xinchao Xu et al. Long time no see! open-domain conversation with long-term persona memory. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2639–2650, Dublin, Ireland, May 2022. Association for Computational Linguistics.
* (129) Andrea Madotto et al. Personalizing dialogue agents via meta-learning. In Proceedings of the 57th annual meeting of the association for computational linguistics, pages 5454–5459, 2019.
* (130) Ruijun Chen et al. Learning to memorize entailment and discourse relations for persona-consistent dialogues. arXiv preprint arXiv:2301.04871, 2023.
* (131) Juntao Li et al. Dialogue history matters! personalized response selection in multi-turn retrieval-based chatbots. ACM Transactions on Information Systems (TOIS), 39(4):1–25, 2021.
|
# Some Grammatical Errors are Frequent, Others are Important
Leshem Choshen
Department of Computer Science
Hebrew University of Jerusalem
<EMAIL_ADDRESS>
Ofir Shifman
Department of Computer Science
Hebrew University of Jerusalem
<EMAIL_ADDRESS>
Omri Abend
Department of Computer Science
Hebrew University of Jerusalem
<EMAIL_ADDRESS>
###### Abstract
In Grammatical Error Correction, systems are evaluated by the number of errors
they correct. However, no one has assessed whether all error types are equally
important. We provide and apply a method to quantify the importance of
different grammatical error types to humans. We show that some rare errors are
considered disturbing while other common ones are not. This affects possible
directions to improve both systems and their evaluation. (All code and
annotations are found at https://github.com/borgr/GEC_BOTHER.)
## 1 Introduction
Grammatical Error Correction (GEC) is the task of correcting erroneous human
(mostly written; Siddharth et al., 2015) sentences. Predominantly, the
sentences are writings of non-natives (Wang et al., 2020). The uses of such
correction are quite diverse: it can help communication, educate (O’Brien,
2015; Tsai et al., 2020), evaluate (Gamon et al., 2013), reduce language
obstacles for learners (Wolfe et al., 2016), and more.
In this work, we focus on the recipients of the grammatically erroneous text,
rather than the writers. Doing so, we assess which types of errors are most
important to correct. We follow a simplifying assumption that some errors
inherently disrupt communication more than others, regardless of the sentence
context. Under this assumption, we ask native speakers to express their
preferences over partially erroneous sentences.
We manually annotate NUCLE (Dahlmeier et al., 2013) erroneous sentences to
find which ones are more crucial to correct (§3). We then extrapolate the
contribution of each type of error to the assessment of sentence correctness.
Specifically, we train a linear predictor of the sentence score as a function
of the number of errors of each type (§4). From this we can not only learn
which error types contribute most, without explicitly asking annotators about
it, but also assess the contribution of each type under any typology of
errors without further annotation.
Finally, computing the results on both the manual type system of NUCLE and
automatic taxonomies, we find that some of the most frequent errors are of
low importance and some infrequent ones are important; i.e., the errors that
humans find most important to correct differ from those emphasized by current
evaluation. Training loss is likewise implicitly weighted by frequency, and
since frequency and importance differ, the emphasis in training falls on the
wrong types of errors.
## 2 Background
Typologies of GEC error types date back to the early days of the field (Dale
and Kilgarriff, 2011). Assuming each error stands by itself and is
independent of other errors, each error can be given a class. Following this
assumption, manual typology annotations arrived with every dataset
(Dahlmeier et al., 2013; Shatz, 2020), differing between datasets and between
languages (Rozovskaya and Roth, 2019; Lee et al., 2021).
Later, ERRANT proposed a method for automatically extracting errors from text
and automatically annotating them with a set of rules (Bryant et al., 2017a).
This allowed using the same annotation scheme for any dataset in English.
Lately, SErCl (Choshen et al., 2020) proposed another typology, more
fine-grained and based on syntax. It comes with an automatic extraction
procedure for most languages (depending on a part-of-speech tagger). SERRANT
(Choshen et al., 2021) combined the annotations of ERRANT and SErCl for
broader coverage, taking coverage from SErCl while keeping ERRANT’s
meaningful rule-based categories. We do not give preference to any of these
methods and report results on each.
In most evaluation and literature, edit types are considered of equal
importance; for example, the $M^{2}$ scorer (Dahlmeier et al., 2013) is based
on the errors corrected, regardless of their types. There are, however, works
showing that models (Choshen and Abend, 2018b) and metrics (Choshen and
Abend, 2018a) do not perform equally well on all error types. Specifically,
they are better on closed-class types where, given that a valid correction
was made, the reference is likely to correct in the same way rather than
perform another valid correction. Frequent types are also better addressed by
learnt models, understandably. An exception to the above is Gotou et al.
(2020), which focuses on the most difficult types to correct. This is close
in spirit to our work and valuable in itself.
Knowing what is difficult to correct, as they suggest, has merits. This
knowledge may allow building a curriculum and highlight model failures.
Still, we see our question as a more central one to the field, one that may
shape the focus of future contributions for both models and evaluation.
Difficulty to learn may change with technology, but what is perceived as
important to pursue will not. We propose an ideal for GEC to pursue and a way
to measure it.
Another work that is similar to ours in spirit is Tetreault et al. (2017),
proposing to pursue fluency rather than correcting errors. In a sense, the
most important errors to correct are those that most improve the fluency of a
text.
## 3 Annotation
To get a reliable ranking of error importance we follow the methodology of
previous work. First, we do not ask annotators about grammaticality, as
non-professionals’ knowledge of grammar is implicit and grammar is often even
judged unimportant (Loewen et al., 2009). Instead, we ask annotators the
extent to which a text is bothersome, following Wolfe et al. (2016) and
Graham et al. (2015b). They found that impolite messages bothered job
interviewers and, to a lesser extent, so did ungrammatical writing. However,
impolite texts were undeservedly judged ungrammatical, showing that judges
mix the two.
We ask crowd annotators to directly assess the extent to which sentences need
correction. We adapt the methodology of Graham et al. (2016) for assessing
the fluency of a text to instead assess how bothersome a text is.
Specifically, annotators were asked to move a slider to indicate how much
they agree with the
following: ”The English mistakes in the following text bother me (1 = it
doesn’t bother me at all, 100 = it really bothers me)”. All other details
follow the original work.
We note that while we choose to follow common wording, other wordings may be
acceptable and might even have slightly different results. For example,
framing the question in terms of the context in which the sentence is written
may produce different results. A sentence may be judged harshly in academic
writing but not in an email.
Every batch of sentences sent to the crowd contained 100 sentences, ensuring
that each annotator would produce at least 100 annotations. Only annotators
from the United States with a high (>95%) acceptance rate who reported being
English natives were accepted. This is to reduce noise due to faulty
judgments and disagreements due to different countries of origin (e.g.,
native Australian citizens). Annotators were given $0.50 per batch. (The
payment is not high, but by personal communication with the authors of Direct
Assessment, high payment lures fraudulent annotators. Moreover, annotating
the whole of NUCLE took less than two days, indicating that the payment was
not deemed too low by the crowd annotators.) Their answers were normalized to
follow a standard normal distribution (henceforth Z-score).
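For concreteness, the per-annotator standardization can be sketched as
follows. This is a minimal illustration assuming raw 1-100 slider scores are
standardized per annotator; the exact procedure of the Direct Assessment
methodology may differ in detail.

```python
import numpy as np

def z_normalize(raw_scores):
    """Standardize one annotator's raw slider scores (1-100) so that
    their annotations follow a standard normal distribution."""
    raw_scores = np.asarray(raw_scores, dtype=float)
    return (raw_scores - raw_scores.mean()) / raw_scores.std()

# An annotator who only uses the top of the scale becomes comparable
# to other annotators after standardization.
print(z_normalize([90, 95, 100, 85, 92]))
```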
To allow filtering the data, each batch contains three types of sentences:
15 unique sentences which contain no mistakes, 70 unique sentences with at
least one error, and 15 sentences sampled from a pre-sampled set of 400
sentences. The latter were repeatedly shown in different batches. The choice
of 400 sentences was made to ensure that a single annotator would not often
see the same sentences and that we would have enough repetitions for each of
the 400 to find outlying annotators.
### 3.1 Dataset
We chose to annotate NUCLE (Dahlmeier et al., 2013), containing about 59K
sentences, out of which we separated sentences with and without errors into
two groups. Additionally, we filtered out sentences with fewer than 7 words,
or ones that contained one of the strings http, &, [, ], *, ”, ; to reduce
non-English sentences. We also normalized spaces, deleting spaces after ) or
before (, !, %, ., $, / and a comma (,).
We sent 58K sentences for annotation, which roughly corresponds to annotating
each sentence with errors twice, plus multiple annotations of the 400
repeated sentences and about 8.7K annotations of grammatical sentences.
### 3.2 Filtering
An important aspect when asking crowdworkers for direct assessments is
filtering out low-quality annotations. We proceed to discuss this procedure.
Annotators that took less than 350 seconds for 100 sentences were removed,
filtering about 5% of annotators (see Figure 1). This is expected to remove
annotators who did not pay attention or mistakenly skipped a large number of
sentences.
Among the remaining annotators, we made sure each judged the grammatical
sentences to be better than the erroneous ones. Under the hypothesis that
ungrammatical sentences receive a lower score, we performed a t-test for each
annotator. If an annotator’s grammatical sentences did not have a
significantly higher average score than the ungrammatical ones ($p<0.05$), we
filtered out all the annotations made by that annotator. Overall about 2% of
annotators were filtered out by this method.
Last, we compared the Pearson correlation between each annotator’s Z-scores
and the rest’s on the repeated sentences. Following Graham et al. (2015a),
the correlation only took into account sentences with at least 15 responses,
as the average is otherwise noisy. Annotators with strongly negative
correlations (below $-0.4$) were filtered out. Overall, these procedures
filtered about 10% of the annotators. Furthermore, we found that most
annotators filtered in the previous stages had negative correlations, which
validates this methodology, as the different filtering methods agree. Raising
the thresholds of either the p-value or the minimum time had diminishing
gains in terms of finding negatively correlated annotators.
While the annotations still contain noise, trying to filter out more with
harsher thresholds produced similar results (see §5) with more variance (due
to less data). This suggests that the results are robust to this filtering
and are reliable in that sense.
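For concreteness, the three filters can be sketched as the following
pipeline. The thresholds are those reported above, but the function signature
and data layout are hypothetical simplifications.

```python
from scipy import stats

MIN_SECONDS = 350    # minimum working time per 100-sentence batch
P_THRESHOLD = 0.05   # significance level for the one-sided t-test
CORR_CUTOFF = -0.4   # strongly negative correlation on repeated items

def keep_annotator(batch_seconds, gram_scores, ungram_scores,
                   own_repeat_scores, others_repeat_means):
    """Return True if an annotator passes all three filters of Section 3.2."""
    # 1. Speed filter: implausibly fast annotators are dropped.
    if batch_seconds < MIN_SECONDS:
        return False
    # 2. Grammatical sentences must score significantly higher than
    #    ungrammatical ones (one-sided t-test per annotator).
    _, p = stats.ttest_ind(gram_scores, ungram_scores, alternative='greater')
    if p >= P_THRESHOLD:
        return False
    # 3. Pearson correlation with the other annotators' mean Z-scores on
    #    the repeated sentences (only items with at least 15 responses).
    r, _ = stats.pearsonr(own_repeat_scores, others_repeat_means)
    return r > CORR_CUTOFF
```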
Figure 1: Right: working time per batch for one pass over NUCLE. Left: the
tail of the distribution, with the threshold below which annotators were
filtered shown in red.
## 4 Score per Type
As mentioned above, annotations are done at the sentence level. While this
means we need to extrapolate which type of error is more important, it also
allows us to do so for different error annotation schemes.
We experiment with both the manually annotated error types in the NUCLE
corpus and automatic error types. Specifically, we analyze the automatic
error types of both ERRANT (Bryant et al., 2017b) and SErCl (Choshen et al.,
2020). We do not analyze SERRANT (Choshen et al., 2021) as it is based on the
two latter and is hence quite similar.
Given the sentence scores, we train a linear regressor with the error-type
counts as features. For each sentence, we extract the number of times each
type of error appears in it. We then train the linear regression to predict
the annotation score based on these features.
The output weights can be understood as the contribution of each type to the
sentence annoyance level. Note that in doing so, we assume a linear
contribution of types; namely, when multiple types appear or a single type
appears more than once, their contributions are additive. Future work may
consider more complex extrapolations with softer assumptions.
Because the actual weights are hard to interpret, we focus on the rank of
each phenomenon. In other words, we look at which type received the largest
weight, the second largest, and so on, rather than at the actual distribution
of weights that were assigned (we report those for completeness in App. A).
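A minimal sketch of this step, from per-type counts to ranks, follows. The
counts and scores are toy numbers for illustration only; in practice there is
one feature column per error type of the chosen typology and one row per
annotated sentence.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# X[i, j]: count of errors of type j in sentence i (toy values);
# y[i]:    the sentence's Z-scored "bother" annotation.
X = np.array([[2, 0, 1],
              [0, 1, 0],
              [1, 1, 3],
              [0, 0, 2]])
y = np.array([-0.8, 0.1, -1.4, -0.9])

reg = LinearRegression().fit(X, y)

# A more negative coefficient means the type pulls the sentence score
# down more, i.e., it is more bothersome; we report the resulting ranks
# rather than the raw weights.
ranks = np.argsort(reg.coef_)
print(reg.coef_, ranks)
```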
We extrapolate for each NUCLE type, for SErCl’s most frequent types, for
ERRANT’s types, and for ERRANT’s types without sub-categorization into
replacements, additions and deletions.
## 5 Results
Figure 2: Importance ranks of each SErCl type. Std in error bars.
Figure 3: Importance ranks of each ERRANT type. Std in error bars. (Fine-grained ERRANT types in Appendix A.)
Figure 4: Importance ranks of each NUCLE type. Std in error bars.
We present the ranking for SErCl in Fig. 2, for ERRANT in Fig. 3
(fine-grained, with insertions, deletions and modifications, in App. A) and
for NUCLE in Fig. 4. We also report the actual weights in Appendix A and note
that those are more variable and harder to reason about.
We see that despite the large sample there is still variance. Thus, some error
types are not significantly harder than others. Still, which errors are easy,
medium or hard is clear.
We find that, across the typologies, verb inflection and verb errors in
general are among the most bothersome errors. So are orthography errors,
unnecessarily added tokens, wrong determiners and other errors.
On the other side of the spectrum we find missing tokens, inflection,
morphology and others. Several errors related to determiners are also
low-ranking.
## 6 Discussion and Conclusion
Most metrics disregard the error type, at least in principle (Choshen and
Abend, 2018a; in practice errors are unintentionally weighted differently,
but not by design). This has been criticized, and correction difficulty was
suggested to address it (Gotou et al., 2020). Our results show that not only
are some errors more important to correct than others, but their importance
is determined neither by frequency in the data nor by the difficulty of
correction. Determiners are extremely common and a closed class (Choshen and
Abend, 2018b), making them more important to correct in order to gain high
metric scores, but those errors are not considered very important by humans.
Similarly, orthographic errors are very easy to correct, but they are
considered very annoying and important to correct.
We also performed initial studies weighting training spans, giving each token
a weight according to the importance of the error it corrects (non-error
tokens receive a constant weight). Unsurprisingly, the network improves more
on the relevant errors than on others or than the baseline, although not by a
large margin.
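A minimal sketch of such span weighting, assuming a PyTorch-style setup in
which per-token importance weights are derived from the regression above (the
exact training configuration is not specified here):

```python
import torch
import torch.nn.functional as F

def weighted_token_loss(logits, targets, token_weights):
    """Token-level cross-entropy in which each target token is weighted
    by the importance of the error it corrects; non-error tokens get a
    constant weight.

    logits: (seq_len, vocab_size); targets, token_weights: (seq_len,).
    """
    per_token = F.cross_entropy(logits, targets, reduction='none')
    return (per_token * token_weights).mean()

# Toy usage: the third token corrects a highly bothersome error type.
logits = torch.randn(4, 10)
targets = torch.tensor([1, 4, 7, 2])
weights = torch.tensor([1.0, 1.0, 2.5, 1.0])
loss = weighted_token_loss(logits, targets, weights)
```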
## 7 Acknowledgments
We thank Dan Malkin for the experiments with weighted gradients.
## References
* Bryant et al. (2017a) Christopher Bryant, Mariano Felice, and Edward Briscoe. 2017a. Automatic annotation and evaluation of error types for grammatical error correction. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , Vancouver, Canada. Association for Computational Linguistics.
* Bryant et al. (2017b) Christopher Bryant, Mariano Felice, and Ted Briscoe. 2017b. Automatic annotation and evaluation of error types for grammatical error correction. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 793–805, Vancouver, Canada. Association for Computational Linguistics.
* Choshen and Abend (2018a) Leshem Choshen and Omri Abend. 2018a. Automatic metric validation for grammatical error correction. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1372–1382, Melbourne, Australia. Association for Computational Linguistics.
* Choshen and Abend (2018b) Leshem Choshen and Omri Abend. 2018b. Inherent biases in reference-based evaluation for grammatical error correction. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 632–642.
* Choshen et al. (2020) Leshem Choshen, D. Nikolaev, Yevgeni Berzak, and Omri Abend. 2020. Classifying syntactic errors in learner language. _ArXiv_ , abs/2010.11032.
* Choshen et al. (2021) Leshem Choshen, Matanel Oren, Dmitry Nikolaev, and Omri Abend. 2021. Serrant: a syntactic classifier for english grammatical error types. _arXiv preprint arXiv:2104.02310_.
* Dahlmeier et al. (2013) Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner English: The NUS corpus of learner English. In _Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications_ , pages 22–31, Atlanta, Georgia. Association for Computational Linguistics.
* Dale and Kilgarriff (2011) R. Dale and Adam Kilgarriff. 2011. Helping our own: The hoo 2011 pilot shared task. In _ENLG_.
* Gamon et al. (2013) Michael Gamon, Martin Chodorow, Claudia Leacock, and Joel Tetreault. 2013. Grammatical error detection in automatic essay scoring and feedback. In _Handbook of automated essay evaluation_ , pages 273–288. Routledge.
* Gotou et al. (2020) Takumi Gotou, Ryo Nagata, Masato Mita, and Kazuaki Hanawa. 2020. Taking the correction difficulty into account in grammatical error correction evaluation. In _Proceedings of the 28th International Conference on Computational Linguistics_ , pages 2085–2095.
* Graham et al. (2016) Yvette Graham, Timothy Baldwin, Meghan Dowling, Maria Eskevich, Teresa Lynn, and L. Tounsi. 2016. Is all that glitters in MT quality estimation really gold standard? In _COLING 2016_.
* Graham et al. (2015a) Yvette Graham, Timothy Baldwin, and Nitika Mathur. 2015a. Accurate evaluation of segment-level machine translation metrics. In _Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 1183–1191.
* Graham et al. (2015b) Yvette Graham, Timothy Baldwin, A. Moffat, and J. Zobel. 2015b. Can machine translation systems be evaluated by the crowd alone. _Natural Language Engineering_ , 23:3 – 30.
* Lee et al. (2021) Myunghoon Lee, Hyeonho Shin, Dabin Lee, and Sung-Pil Choi. 2021. Korean grammatical error correction based on transformer with copying mechanisms and grammatical noise implantation methods. _Sensors_ , 21(8).
* Loewen et al. (2009) Shawn Loewen, Shaofeng Li, Fei Fei, Amy Thompson, Kimi Nakatsukasa, Seongmee Ahn, and Xiaoqing Chen. 2009. Second language learners’ beliefs about grammar instruction and error correction. _The Modern Language Journal_ , 93(1):91–104.
* O’Brien (2015) J. O’Brien. 2015. Consciousness-raising, error correction and proofreading. _Journal of the Scholarship of Teaching and Learning_ , 15:85–103.
* Rozovskaya and Roth (2019) Alla Rozovskaya and Dan Roth. 2019. Grammar error correction in morphologically rich languages: The case of Russian. _Transactions of the Association for Computational Linguistics_ , 7:1–17.
* Shatz (2020) Itamar Shatz. 2020. Refining and modifying the efcamdat: Lessons from creating a new corpus from an existing large-scale english learner language database. _International Journal of Learner Corpus Research_ , 6(2):220–236.
* Siddharth et al. (2015) Siddharth, Sandeep Swarnakar, and Sandeep Sharma. 2015. Grammatical error correction in oral conversation. _International Journal for Scientific Research and Development_ , pages 50–52.
* Tetreault et al. (2017) Joel R. Tetreault, Keisuke Sakaguchi, and Courtney Napoles. 2017. Jfleg: A fluency corpus and benchmark for grammatical error correction. In _EACL_.
* Tsai et al. (2020) C. Tsai, Jhih-Jie Chen, Chingyu Yang, and Jason J. S. Chang. 2020. Lingglewrite: a coaching system for essay writing. In _ACL_.
* Wang et al. (2020) Yu Wang, Yuelin Wang, J. Liu, and Zhuo Liu. 2020. A comprehensive survey of grammar error correction. _ArXiv_ , abs/2005.06600.
* Wolfe et al. (2016) Joanna Wolfe, Nisha Shanmugaraj, and Jaclyn Sipe. 2016. Grammatical versus pragmatic error: Employer perceptions of nonnative and native english speakers. _Business and Professional Communication Quarterly_ , 79(4):397–415.
## Appendix A Additional Graphs
We present here the fine-grained ERRANT labels and the linear regression
weights with their std. Note that a negative score does not necessarily mean
that this type is considered positive by annotators, as there is a baseline
too (so it might only be less severe than other errors).
Figure 5: Importance ranks of each coarse-grained ERRANT type. Std in error bars.
Figure 6: Importance weights of each fine-grained ERRANT type. Std in error bars.
Figure 7: Importance weights of each coarse-grained ERRANT type. Std in error bars.
Figure 8: Importance weights of each SErCl type. Std in error bars.
Figure 9: Importance weights of each NUCLE type. Std in error bars.
|
determined to be complete by the binding of a _base completion gadget_ (Figure
54), returning a signal to $t_{0}$ that causes another set of signals to be
propagated that enable the placement of a base tile in the $+z$ direction.
Figure 51: Cooperative growth along blue base tiles allows for counting tiles (purple) to reach the furthestmost tile. A duple (green) allows for the counting row to sense when it must extend the base by an additional tile by cooperatively binding to both the furthest counting tile and a ‘0’ on the encoding supertile. Messages are sent to extend both the base tiles counting the current base width and to extend the width of the base by 1.
Figure 52: Once the counting row reaches the ‘1’ tiles, this indicates the base is of the correct width. This is sensed by a counting duple (black) which cooperatively binds to both the counting row and the ‘1’ glue.
Figure 53: After binding of the black counting duple, the counting tiles dissolve and a signal is sent to begin cooperative growth of the remainder of the base adjacent to the encoding.
Figure 54: Base completion duple (white) allows for the base to detect when tiles have extended the base along the entire edge of the initial encoding supertile. A message returns to the initial tiles placed once all tiles of the row adjacent to the encoding have been placed in the base.
#### 3.3.3 Row 1 Tile Placement
Once the base is complete, a signal is sent to begin the decoding process of
the first row. Figure 55 demonstrates how this signal allows for a strength 2
glue to be exposed in the $+y$ axis, allowing for a base tile to generate
cooperative binding on top of the first directional tile. Unlike other
directional tiles, the directional tile of the first tile of the first row
encodes the information that a row change tile is to be utilized, without the
need for sensing the prior directional tile (as there is no prior directional
tile). Once the directional tile binds, it then activates a glue allowing for
the cooperative binding of a decoder tile that determines whether the origin
tile is a shape or fill tile. Additionally, this binding causes a signal to
be passed backwards through the base tile most recently placed such that it
initiates the growth of a _backing tile_. Backing tiles serve two main
purposes: first, to indicate to tiles of the first slice that they are
adjacent to an exterior edge, so that any shape tile encodes exterior glues
on its $-z$ face; second, backing tiles allow for the tiles in the topmost
row of a slice to bind along their top edge with strength 2 connections. The
process by which this second item proceeds is outlined in Section 3.3.6.
Once the decoder tile determines which type is to be placed, a glue is exposed
in the $+x$ direction to enable growth of the decoding tiles. Due to the
current decoding tile being the first tile of the row, we can guarantee that
at this point a neighbor detection gadget must bind to the recently placed
backing tile and the decoding tile (Figure 56). This binding of the neighbor
detection gadget with the backing tile additionally causes the backing tile to
activate a glue allowing for cooperative binding of another backing tile with
the base in the $+x$ direction. The decoding tile now contains all the
information regarding the tile type to place after binding with the neighbor
detection gadget. A strength 2 glue allows for the growth of an additional
decoder tile (mapping to the tile type indicated in the encoding assembly);
this enables cooperative binding of the tile type mapped between itself and
the base tiles (Figure 57). After the base tile and decoded tile of the shape
are connected with strength 2, signals are sent back through the decoder tiles
towards the directional tile which initiated growth. Upon passing this signal
to the decoder tile’s predecessor, all decoder tiles not bound to the
directional tile dissolve into size 1 junk (Figure 58). The decoder tile
adjacent to the directional tile activates a glue indicating for the next
directional tile to be placed, thus allowing for the placement of an
additional decoding tile.
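To make the signal mechanics concrete, the toy model below sketches how a
signal-passing tile’s glues can be activated or deactivated, or the tile
dissolved into junk, in response to a binding event. This is an illustrative
abstraction rather than the formal STAM definition, and all names are
hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SignalTile:
    """Toy signal-passing tile: each face carries a glue described by a
    mutable [label, strength, state] triple, where state is 'ON', 'OFF',
    or 'LATENT' (present but not yet activated)."""
    glues: dict = field(default_factory=dict)  # face -> [label, strength, state]
    dissolved: bool = False                    # True once it becomes size-1 junk

    def on_bind(self, face, signal_table):
        """Fire the signals this tile associates with a binding on `face`."""
        for action, target_face in signal_table.get(face, []):
            if action == 'activate':
                self.glues[target_face][2] = 'ON'
            elif action == 'deactivate':
                self.glues[target_face][2] = 'OFF'
            elif action == 'dissolve':
                self.dissolved = True

# Example: a decoder tile that, once bound on its -x face, exposes a
# strength-2 glue on +x, and dissolves when later signaled from +x.
decoder = SignalTile(glues={'+x': ['d1', 2, 'LATENT'], '-x': ['d0', 1, 'ON']})
decoder.on_bind('-x', {'-x': [('activate', '+x')]})
decoder.on_bind('+x', {'+x': [('dissolve', None)]})
```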
Figure 55: The initial tile, once messages have been received that the base is complete, initiates a signal which causes a base tile to allow cooperative binding of the first directional tile. Additionally, a glue initiating growth of the first backing tile is exposed in the $+x$ direction.
Figure 56: Once cooperative binding has occurred to dictate the decoding tile, glues are activated on the $+x$ face of the decoding tile, allowing for cooperative growth and binding of neighbor detection tiles. A strength 2 glue is exposed upon binding with the neighbor decoding tile, allowing for a decoding tile to be added which cooperatively places the tile encoded.
Figure 57: Once the row-1 tile binds to the base, it exposes a glue in the $-z$ axis that allows for the cooperative binding of the fill/shape tile encoded by the first location. Once cooperative binding occurs, a second glue is activated which allows for a strength 2 connection between the shape/fill tile most recently placed and its predecessor (in this case the base tile, the first row of which contains glues and signals which allow for binding in this manner). Additionally, when the detection duple binds to the backing tile, a signal is sent to activate glues in both the $+x$ and $+y$ directions which allows for a second tile to bind.
Figure 58: A message is passed backwards along the binding edges such that the direction tile activates a glue which allows for the next directional tile to bind. Additionally, the decoding tiles placed in support of the prior encoded location of the shape deactivate all glues and become junk to allow for the next tile of the encoding to be placed utilizing the same path of voxels.
Figure 59: Placement of encoded tiles continues, with decoding tiles re-utilizing the same set of voxels to grow voxels further away from the origin.
Tile additions continue, also utilizing the direction change decoding
demonstrated in Section 3.3.1, until the final tile of the row is reached. At
this point growth continues by the standard process of directional tiles
allowing cooperative binding with the encoding structure, however now
switched to direction ‘1’ growth. In order to enable the placement of
encoding tiles via direction ‘1’ growth, the backing tiles must be present in
the new row to allow for binding of neighbor detection gadgets. A _backing
growth detector_ (see Figure 60) binds to the most recently placed backing
tile and the base (or backing) tile in the row prior. Binding of the backing
growth detector allows for a strength 2 glue to be turned on to enable the
growth of a backing tile in the $+y$ direction (Figure 61).
Figure 60: Backing growth detector (purple) binds to the outermost backing tile and the base to signal to the backing tile to activate a strength 2 glue in the $+y$ direction. Note that for following rows, the backing growth detector will bind with two backing tiles.
Figure 61: Binding of the next backing tile readies growth, allowing for binding of neighbor detection tiles.
#### 3.3.4 Row 2n Tile Placement
For each even-numbered row, tiles grow in the ‘-’ direction; that is, the
first tile in some row 2n is placed above the last tile of the prior row
(2n - 1). For direction ‘0’ growth, each additional tile placed took us
further away from the origin point (e.g., incrementing the $x$ value in the
($x,\>y,\>z$) position tuple). In the case of ‘-’ direction growth, tiles of
the slice are placed starting at the furthest-most $x$ value of the slice and
decrement to 0. While the decoding tiles of the first row bind initially to
decoding tiles, the most recently placed tile and directional tiles, the
decoding tiles of ‘-’ direction growth cooperatively bind with the prior
decoding tile and a base tile. Growth occurs in two cases. In the case of the
first tile of a row of direction ‘1’ growth, tiles bind until they reach the
furthest-most base tile. When reaching the outermost base tile, a _direction
‘1’ detection gadget_ binds with the outermost base tile and the furthest
placed decoding tile (Figure 62). At this point, a glue is activated on the
decoding tile’s $+y$ face, allowing for cooperative growth to continue. This
allows for cooperative growth along the previously placed tiles until no
longer possible, at which point a neighbor detection gadget is able to bind
to the decoding tile and the neighbor tile (in this case a backing tile, see
Figure 63).
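The net effect of the alternating growth directions is a serpentine fill
order within each slice. The sketch below illustrates the order in which
voxel positions are filled; it is illustrative only, with 0-indexed $x$
coordinates and the 1-indexed row numbering used in the text.

```python
def slice_fill_order(width, height):
    """Yield (x, row_index) positions in serpentine order: odd rows
    (direction '0') with increasing x, even rows (direction '1') with
    decreasing x, each row starting above where the last one ended."""
    for row in range(1, height + 1):
        xs = range(width) if row % 2 == 1 else reversed(range(width))
        for x in xs:
            yield (x, row - 1)

# For a 3-wide, 2-row slice: (0,0), (1,0), (2,0), (2,1), (1,1), (0,1)
print(list(slice_fill_order(3, 2)))
```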
Figure 62: Cooperative binding for direction ‘1’ tile growth of the first tile in row 2 extends to the edge of the base. A direction ‘1’ detection gadget (green) attaches to the base and the growing row, indicating the edge has been reached. Once the direction ‘1’ detection gadget is bound, a glue activates on the $+y$ face of the tile, allowing for cooperative growth in the $+y$ direction on the currently grown structure.
Figure 63: The binding of the neighbor detection gadget allows for a strength 2 glue to activate in the $+y$ direction, allowing for a tile with glues mapping to the decoding tile type (in this case, a shape tile which has a $g_{x}$ glue encoded on its back side) to cooperatively bind to the prior tile placed.
Similarly, this allows for both the placement of the encoded tile and the
extension of the backing tiles; upon the placement of the encoded tiles, a
signal is sent to dissolve all decoding tiles not involved in growth in the
$+y$ direction into size 1 junk. The next directional tile is added, allowing
for the binding of the next decoding tile and the growth to place the tile
dictated by the encoding structure. To sense when the growth of the decoding
tiles in the $+x$ direction has reached its furthest-most point, the remaining
decoding tile which originally redirected growth in the $+y$ direction enables
a glue similar to that present on the direction ‘1’ detector gadget. We note
this does not cause interactions between multiple encoding processes going on
in parallel, as the presence of the base tiles and the directional row offset
any possible growing decoding tile (Figure 64). Once the neighbor detector
gadget binds, it grows in the $+y$ direction and places its encoded tile
(Figure 65). This repeats until all tiles of the row have been added.
Figure 64: After binding of the neighbor detection gadget, shape tiles are placed.
Figure 65: Mid-growth of the second tile in the encoding of row 2. Note that all but one horizontal tile are deactivated in direction ‘1’ growth; this is in order for collision to occur and correctly place remaining tiles.
Figure 66: Neighbor detector gadget binds to the furthest-most placed decoding tile of the second decoding tile after colliding with the prior decoding tile growth. This leads to placement of the encoded tile and growth of backing. This process repeats for all remaining direction ‘1’ tiles in the row.
At the end of this row, the backing tiles must grow in the $+y$ direction
again. For row 2, the current backing gadget will not work as there exists a
base tile hindering growth (which is necessary for future signals to be
sent). A modified, one-tile gadget is utilized for this specific case.
Additionally, once the row is complete after the placement of a direction
change tile, all remaining decoding tiles are dissolved into size 1 junk to
allow for growth of direction ‘0’ tiles of the following layer.
#### 3.3.5 Row 2n + 1 Tile Placement
While growth of row 1 was in direction ‘0’, it is a special case due to the
fact that it placed tiles in voxels with the same $y$ coordinate as the
decoding tiles, using a set of tiles unique to the first row. For the
remaining odd-numbered rows, we must carry out a similar growth in the $+y$
direction before placing the encoded tile, as demonstrated by the row 2
growth example, but incrementing $x$ values. We note that the example figures
in this section do not directly correspond to the encoding provided in Figure
45; however, these are presented to provide the reader with examples of how
this process would occur in an encoding which does contain at least 3 rows.
Decoding tiles of some odd valued row grow by cooperatively binding with the
decoding tile and previously placed directional tiles, as with the row 1
tiles. However, upon binding with a shape or a fill tile they activate a glue
in the $+y$ direction. This glue attempts to allow for growth of decoding
tiles in the $+y$ direction, leading to the binding of a neighbor detection
gadget and the placement of the encoded tile (Figures 67, 68). Similarly to
even numbered direction ‘1’ row growth, decoding tiles are dissolved into size
1 junk to allow for reuse of voxels. In contrast, all but the bottom-most
decoding tile are removed, and glues are activated allowing remaining decoding
tiles to sense that a tile has already been placed in the current location
(Figure 69). In the case when the decoding tile activates its glue in the $+y$
direction and binds to a tile, it continues growth in the $+x$ direction until
finding an open location to grow (Figure 70).
Figure 67: As the direction ‘0’ tile (first tile of row 3) initiates growth, when a tile is cooperatively placed on a base tile it immediately activates a glue in the $+y$ direction. Since a path exists for tiles to grow in that direction, they grow until no cooperative location is available.
Figure 68: At this point, a detector gadget (teal) binds and indicates that growth has reached the point for the placement of the voxel encoded.
Figure 69: As signals are passed backwards through the tile growth, all horizontal tiles are deactivated. This allows for the direction ‘0’ voxels to sense prior placed tile locations from the same row. Note that tiles growing along the $+y$ axis are retained initially.
Figure 70: As the tiles which encode the second tile of row 3 grow to their placement location, upon first cooperative binding with the base they attempt to grow in the $+y$ direction. The signal ‘bounces’, and the growth continues along the base. Since the second location has not been placed, the $+y$ direction of growth is free to take place.
#### 3.3.6 Slice Completion
Once the directional tiles reach the end of the encoding of the final row
within the structure, a _slice completion gadget_ binds to the end of the
encoding and the directional tile. At this point, a message is returned
through the current row of directional tiles which enabled growth of the
slice (Figure 71). Once the message is received by the first directional
tile, it carries out two operations, the first being unique to the first
slice. In order for the next slice to grow, we must be able to guarantee that
the shape tiles in the slice are connected either to the shape which has
grown or to the newly growing slice. To guarantee that the connections of all
tiles of the first slice persist even after filler tile removal, we must
create strength 2 connections between the encoding structure and all tiles of
the first slice. This is accomplished by extending the growth of the backing
tiles, which allows for all tiles to be connected via strength 2 to the
encoding structure. The message is sent through the base tile which initiated
growth of the first slice, into the adjacent backing tiles. After the backing
tiles receive the message, strength 2 glues are activated on all the $+y$
direction faces of the currently placed backing tiles. Only the topmost layer
of backing tiles will allow for cooperative placement of the new backing
tiles on top of the newly created slice. The newly placed backing tiles open
up cooperative binding locations for the backing tiles to then bind with the
top row of the slice (Figure 72). This allows for the tiles in the topmost
row of the slice to activate glues for binding to their neighbor in the $-y$
direction. Once bound to the neighbor in the $-y$ direction, the tiles are
then able to activate glues which allow for neighbor detection gadgets to
bind, allowing for the growth of a new slice.
Figure 71: The slice completion gadget (green) binds to the outermost
directional tile and decoding tiles, signaling for dissolution of decoding
tiles and extension of backing tiles.

Figure 72: Backing tiles activate strength 2 glues, allowing for cooperative
growth along the top of the first slice.
In addition to the growth of the backing tiles, a signal is sent to place a
new directional tile. This directional tile takes the information of the first
row of directional tiles and cooperatively binds with both 0 and 1 tiles on
the encoding structure; its purpose is to simply pass forward the directional
information and allow for the tile placement process to continue in the next
slice. In addition to the directional tile exposing a directional glue, we
also expose a terminating glue ($g_{t}$) which is used in the detection of the
completion of the final slice. Once the growth of the new directional tile
occurs alongside the creation of the top row of backing tiles, growth of the
new slice can begin with starting conditions shown in Figure 73.
Figure 73: First directional tile of the second slice is ready to begin
growth.
#### 3.3.7 Detaching From Base
Slice growth proceeds via the previously described process until reaching the
final slice. Once the final slice is placed, a slice completion gadget binds
allowing for the placement of a directional tile, as per any other row.
However, the exposed terminating glue allows for the attachment of the
_decoder completion detector_ with the outermost edge of the encoding
structure (Figure 74). Upon binding of the decoder completion detector, a glue
is activated to allow for the growth of _decoder completion tiles_ which
cooperatively bind to the outermost slice layer. Binding of the decoder
completion tiles occurs such that only attachments between shape tiles
activate glues for cooperative growth, and filler tiles must form a strength 2
duple with the decoder completion tiles. Once bound as a duple, the filler
tiles send glue deactivation signals to their remaining active glues.
Figure 74: At the completion of the final row, the decoder completion detector
(black) is able to bind with the outermost directional tile and cause growth
of decoder completion tiles which remove remaining fill tiles.
Once a decoder completion tile binds with the outermost backing tile above the
top row of a slice, it sends a dissolve message to all the base and backing
tiles in the same $yz$ plane (Figure 75) to turn them into size 1 junk. The
base tiles, upon receiving this dissolve message, also initiate a message to
dissolve the remaining tiles placed as part of the assembly sequence into size
1 junk, including the initial binding tile $t_{0}$. The initial binding tile
then signals to the encoding structure to dissolve into size 1 junk, and the
only terminal assembly remaining is the shape assembly produced by the
decoding process.
Figure 75: After the decoder completion tiles (green) bind to the final slice,
sending deactivation signals to the fill tiles and binding to the backing
tiles, a dissolve message is sent to the remaining tiles involved in the
decoding process.
#### 3.3.8 Proof of Universal Shape Decoding Correctness
Here we briefly summarize the decoding process and show that during this
process, the shapes which were encoded in the set of input encoding assemblies
$\Phi$ are correctly assembled. We first consider the decoding process of a
single encoding assembly $\phi\in\Phi$ and note that a similar process happens
for all encoding assemblies simultaneously without interfering with one
another.
Our decoding process begins by building a base of tiles connected to $\phi$.
This base holds the shape as it is being constructed and helps ensure its
connectivity throughout construction. The decoding process
is performed in iterations, where during each iteration a row of $\phi$ is
scanned tile-by-tile and a corresponding 2D slice of the shape is constructed.
Each slice is constructed starting from the bottom (smallest $y$ coordinate)
to the top (largest $y$ coordinate), with tiles attaching in a zig-zag manner,
as illustrated in Figure 21. Each slice of the assembled shape corresponds to
a unique $z$ coordinate so for convenience we call the slice whose $z$
coordinate is $i$, $\sigma_{i}$. As each slice is assembled, tiles are placed
in each location of the slice, even those locations that will not be part of
the final shape, though these will be removed during the assembly of the next
slice.
The first slice $\sigma_{1}$ can be assembled naively, but during the assembly
of each following slice, tiles on the previous slice which will not be part of
the final shape must be removed. This is done as follows. Suppose that
slice $\sigma_{i}$ ($i>1$) is currently being assembled. Before a tile $t_{i}$
is placed in a location $(x,y,i)$, a gadget is used to determine the type of
the tile $t_{i-1}$ at location $(x,y,i-1)$ (i.e. the tile with the same $x$
and $y$ coordinates on the previous slice). If this $t_{i-1}$ is part of the
final shape, then $t_{i}$ is placed and signals are used to activate strength
2 glues between $t_{i}$ and $t_{i-1}$; otherwise, if $t_{i-1}$ is not part of
the final shape, it is removed before $t_{i}$ is placed. Regardless of the
type of tile $t_{i-1}$, when $t_{i}$ is placed, glues are activated which
connect $t_{i}$ to all adjacent tiles on the same slice. Once the final slice
is assembled, a final zig-zag pass is made in the next $z$ coordinate which
removes all tiles from the last slice which are not part of the final shape.
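To summarize the bookkeeping, the following is a minimal Python sketch of the
slice-by-slice invariant just described. It is an illustrative abstraction of
our own, not a tile-level simulation of the construction: `shape` is a set of
$(x,y,z)$ voxels inside an $X\times Y$ grid over slices $z=1,\dots,Z$.

```python
# Minimal bookkeeping sketch (not a tile-level simulation) of the
# slice-by-slice decoding invariant described above.

def decode_slices(shape, X, Y, Z):
    placed = set()                        # tiles currently in the assembly
    for z in range(1, Z + 1):             # slices sigma_1 .. sigma_Z
        for y in range(Y):                # bottom to top ...
            xs = range(X) if y % 2 == 0 else reversed(range(X))
            for x in xs:                  # ... in zig-zag order
                below = (x, y, z - 1)
                if z > 1 and below in placed and below not in shape:
                    placed.remove(below)  # remove non-shape tile t_{i-1}
                placed.add((x, y, z))     # place t_i (glues to same-slice
                                          # neighbors activate here)
    # final zig-zag pass: strip non-shape tiles from the last slice
    placed -= {(x, y, Z) for x in range(X) for y in range(Y)
               if (x, y, Z) not in shape}
    return placed

# e.g. a 2 x 2 x 2 block of voxels with one corner voxel missing:
shape = {(x, y, z) for x in (0, 1) for y in (0, 1)
         for z in (1, 2)} - {(1, 1, 2)}
assert decode_slices(shape, 2, 2, 2) == shape
```

After the loop, every slice except the last contains only shape tiles, which
is exactly the invariant used in Lemma 3.4 below.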
It is also important to note that the base, on which the shape is being
assembled, also forms a ceiling above the slices being assembled. This ceiling
helps ensure that tiles on the top row of each slice are able to remain
attached to the assembly during construction. It should be clear that during
this decoding process (1) each tile that belongs to the final shape is placed
in its correct location, and (2) that those tiles of a slice which are not
part of the final shape will be removed from the assembly during the assembly
of the next slice. However, because tiles are removed during the process, we
must show that none of these removals can cause parts of the assembly to
unintentionally detach. We state this as Lemma 3.4.
###### Lemma 3.4.
Let $\phi$ be an encoding assembly which encodes the shape $s$. During the
decoding process above, as slice $\sigma_{i}$ ($i>1$) is being assembled, no
tile in slices $\sigma_{1},\ldots,\sigma_{i-1}$ which are part of the final
shape assembly can detach.
###### Proof 3.5.
To prove this, we first note that all tiles in the slice $\sigma_{1}$ which
will be part of the final shape assembly are bound to each neighboring tile in
the slice, meaning that there is no risk of detachment until tiles are removed
in later slices; this serves as the base case. We proceed by induction on the
$z$ coordinate of the slices: assume the hypothesis holds for slices
$\sigma_{1},\ldots,\sigma_{k-1}$ and consider what happens as the slice
$\sigma_{k}$ assembles. Before the assembly of $\sigma_{k}$, the only slice
containing tiles that may need removal is $\sigma_{k-1}$, since during the
assembly of a slice, all tiles which are not part of the final shape assembly
are removed from the previous slice.
As slice $\sigma_{k}$ is being assembled, if all of the tiles in
$\sigma_{k-1}$ are part of the final shape assembly, then nothing will be
detached and the proof is complete. Assume then that there is some tile in
slice $\sigma_{k-1}$ which is not part of the final shape assembly and thus
needs to be removed. Assembly of $\sigma_{k}$ will continue until we reach
such a tile, say $t$ at coordinates $(x_{t},y_{t},z_{t}=k-1)$. Gadgets will
detect that $t$ needs to be removed before a tile, say $t^{\prime}$, is placed
in coordinates $(x_{t},y_{t},z_{t}+1=k)$. When $t$ is detected, $\sigma_{k}$
will be assembled up to the location of $t^{\prime}$ meaning that there will
be a tile in every location of $\sigma_{k}$ below $y$ coordinate $y_{t}$ as
well as all locations at $y$ coordinate $y_{t}$ to either the left or right of
$t^{\prime}$ depending on the parity of the $y$ coordinate in the zig-zag
growth procedure for $\sigma_{k}$.
To ensure that the detachment of $t$ does not cause any other tiles to detach,
we must look at all neighbors of $t$ in the assembly. One of these neighbors
will be $t^{\prime}$ itself, and this tile will be attached to all of its
neighbors in $\sigma_{k}$, so we do not have to consider it. If $t$ has a
neighboring tile in slice $\sigma_{k-2}$, then notice that this tile must (1)
be a tile belonging to the final shape assembly since it was not removed
during the assembly of slice $\sigma_{k-1}$, and (2) have at least 1 other
neighboring tile in $\sigma_{k-2}$ or $\sigma_{k-3}$ to which it is attached
since otherwise the shape being encoded would have disconnected parts which we
don’t allow. Therefore, the removal of $t$ would not cause this tile to
detach.
We now consider the 4 potential neighbors of $t$ in the slice $\sigma_{k-1}$.
For the neighbor below $t$, say $t_{-y}$, we again note that, because shape
$s$ cannot have any disconnected components, $t_{-y}$ must have at least one
neighbor other than $t$ which is part of the final shape assembly. Because the
current slice $\sigma_{k}$ has grown up to the $y$ coordinate of $t$, any such
neighbor of $t_{-y}$ must already exist in the assembly and be attached to
$t_{-y}$ with strength 2. Therefore, the removal of $t$ will not cause
$t_{-y}$ to detach.
Now consider the neighbors of $t$ with the same $y$ and $z$ coordinates, call
these $t_{-x}$ and $t_{+x}$. Notice that because slices are grown in a zig-zag
manner, the growth of the current slice $\sigma_{k}$ will be such that one of
these already has a neighboring tile in $\sigma_{k}$ and one does not. Without
loss of generality, suppose that at the current row of slice $\sigma_{k}$
attachments are happening from the $-x$ direction to the $+x$ direction so
that $t_{-x}$ already has a neighbor in $\sigma_{k}$ and $t_{+x}$ does not.
Because any neighbor of $t_{-x}$ that exists must have been placed by now, the
detachment of $t$ will not cause $t_{-x}$ to detach for the same reason as
$t_{-y}$. Now, for $t_{+x}$, it may be the case that its only neighbor that is part
of the final shape assembly is in slice $\sigma_{k}$ and has not yet attached.
Still notice that because $\sigma_{k}$ has not yet finished growth, no tiles
have yet been removed from $\sigma_{k-1}$ with a $y$ coordinate greater than
$y_{t}$. This means that $t_{+x}$ still has neighboring tiles to which it is
attached. This is even true if $y_{t}$ is at the top of the slice since the
base contains a ceiling above the assembly to which the tiles are attached.
Therefore, even if $t$ is removed, $t_{+x}$ will remain attached to the
assembly. The same argument applies to $t_{+y}$, the neighbor above $t$.
By the assembly procedure up to this point, it is therefore safe to remove
tile $t$, place $t^{\prime}$ and continue with the assembly of slice
$\sigma_{k}$. Since this holds for any tile which needs to be removed from
slice $\sigma_{k-1}$, the assembly of $\sigma_{k}$ will complete without any
tiles that are part of the final shape assembly detaching.
From here, it’s clear that the assembly of the slices of the shape can
complete without erroneous detachment. Since all tiles that are part of the
final shape assembly have been added during the slice construction and since
all tiles which are not part of the final shape assembly have been removed
from their respective slices, it’s clear that the decoding process
successfully assembles our final shape assembly.
Given the set of input encoding structures
$\Phi=\\{\phi_{1},\ldots,\phi_{n}\\}$, the STAMR system
$\mathcal{D}_{\Phi}=\\{D,\Sigma_{\Phi},\tau=2\\}$ produces a set of terminal
supertiles $S=\\{s_{1},\ldots,s_{n}\\}$ in parallel with a maximum junk size
of 3. $\mathcal{D}_{\Phi}$ finitely completes, as for the production of the
set of shapes $s\in S$ from input encoding structures $\Phi$, a finite number
of tiles is required for each encoding structure to produce a terminal
assembly. We can guarantee this as each encoding produces a single terminal
shape, since the encoding of the shape dissolves into size 1 junk after the
terminal shape has decoded. By our construction, there are never exposed glues
on the surfaces of any pair of assemblies that each contain an input encoding
that would allow them to bind to each other. Since junk assemblies produced by
any assembly sequence are also unable to negatively interact with other
assemblies, a system whose input assemblies have multiple shapes will behave
simply as the union of individual systems which each have one input assembly
shape, creating terminal assemblies of all of (and only) the correct shapes.
This proves Lemma 3.3.
Now that we have shown the existence of universal encoding and universal
decoding tilesets, we have the basis to demonstrate a universal shape
replicator. We generate a new STAMR tileset $R=E\cup D$ and STAMR system
$\mathcal{R}_{S}=\\{R,\Sigma_{S},\tau=2\\}$, where $\Sigma_{S}$ consists of an
infinite number of copies of each tile type from $R$ and an infinite number of
copies of each uniformly covered assembly from the set
$S=\\{s_{1},\ldots,s_{n}\\}$, whose shapes are any arbitrary set of shapes.
Recall that during the encoding process, the encoding corner gadget is bound
to the encoding structure while it is being built. Once the entire encoding
process finishes and the corner gadget receives a ’dissolve’ signal, it first
activates a glue to signal to the first tile placed in the encoding structure
that it should turn on the _initiator glue_ which is the glue initially bound
to by the tiles of $D$. Thus, exactly when an encoding of some $s_{i}$,
$\phi_{i}$, is completed by the tiles of $E$, decoding that $\phi_{i}$ will
begin by the tiles of $D$, resulting in a terminal assembly with the same
shape as $s_{i}$. We make a slight modification to the tile of the encoding
structure that exposes the initiator glue, and to the initial decoding tile
which attaches to it, the _initiator tile_. We make two copies of the initiator tile,
which we will call $t_{1}$ and $t_{2}$. The first, $t_{1}$, will bind to the
initiator glue and cause the decoding process to proceed exactly as before.
However, when the original initiator tile would have detected completion of
the decoding process and sent a ‘dissolve’ signal to the first tile of the
encoding structure, $t_{1}$ instead sends a signal that tells that tile to
activate a glue that will allow $t_{2}$ to attach, and then $t_{1}$ will
detach. This will effectively cause the encoding to produce a decoded
structure and then have all of the ‘helper’ tiles dissolve, leaving the
encoding structure able to bind to $t_{2}$ which then initiates the regular
decoding process, and when it receives the signal telling it that decoding has
completed, $t_{2}$ does pass the ‘dissolve’ signal to the first tile of the
encoding structure. In this way, each encoding structure causes two copies of
the decoded assembly to be produced, and then dissolves.
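As a toy illustration of this two-copy scheme (a bookkeeping sketch only:
`encode` and `decode` are hypothetical stand-ins for the tilesets $E$ and $D$,
modeled here as plain set operations so the example runs):

```python
# Toy bookkeeping of the t_1/t_2 replication scheme; not tile-level.

def encode(shape):           # stand-in for the universal encoder E
    return frozenset(shape)  # "phi": an immutable encoding of the shape

def decode(phi):             # stand-in for the universal decoder D
    return set(phi)          # rebuild the shape from its encoding

def replicate(shapes):
    out = []
    for s in shapes:
        phi = encode(s)
        out.append(decode(phi))  # first copy: t_1-initiated pass
        out.append(decode(phi))  # second copy: t_2-initiated pass,
                                 # after which phi dissolves
    return out

cube = {(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)}
assert replicate([cube]) == [cube, cube]
```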
By our construction, the only glues required to be shared between the two
tilesets are the glues encoding 1 and 0 on the encoding structure, and the
previously mentioned glues on the encoded assembly which initiate the decoding
process. The glues for 0/1 are shared by multiple tiles in both $E$ and $D$.
All tiles in $D$ which have the 0/1 glue (or its complement) are required
to be placed by cooperation with a non-0/1 glue. Additionally, each tile in
$D$ has at most one face which contains a strength 1 0/1 glue. Since no other
glues are shared between $E$ and $D$ it is not possible for strength 2 binding
to occur between (super)tiles in $E$ and $D$ aside from the binding of $\phi$
with the initiator tiles of $D$. Since junk assemblies produced by any
assembly sequence are also unable to negatively interact with other
assemblies, a system whose input assemblies have multiple shapes will behave
simply as the union of individual systems which each have one input assembly
shape, creating terminal assemblies of all of (and only) the correct shapes.
The maximal junk size of $R$ is 4, driven by the junk size of $E$. We can say
that $\mathcal{R}_{S}$ finitely completes with respect to the set of
assemblies created from the shape tiles of $D$ in the shape of each assembly
in $S$, as the tileset $R$ operates such that any input shape $s_{i}$ is
encoded into an intermediate structure $\phi_{i}$, $\phi_{i}$ is then decoded
into two copies of $s^{\prime}_{i}$, an assembly which contains tiles in the
exact same locations as $s_{i}$ (up to rotation and translation). As
deconstruction leads to the production of a single structure $\phi_{i}$, and
$\phi_{i}$ is only able to be decoded to $s^{\prime}_{i}$ two times, we can
place a finite bound on the number of each tile type required to produce each
terminal assembly $s^{\prime}$. (This largely follows from the fact that
encoding systems using $E$ finitely complete with respect to the set of
encoding assemblies, and that decoding systems using $D$ finitely complete
with respect to the set of assemblies whose shapes are encoded.) Therefore,
$R$ also finitely completes, with respect to the set of assemblies with the
same shape as the input assemblies, and Theorem 3.1 is proven.
Note that the condition that a single encoding structure $\phi_{i}$ leads to
the production of exactly two target assemblies $s^{\prime}_{i}$ is imposed to
allow for the universal shape replicator to technically be able to replicate
shapes from an arbitrarily large set of input assembly shapes without the
potential to ‘starve’ the encodings of one shape so that they never produce
decoded copies (and thus the replicator would not finitely complete with
respect to the full set of terminal assembly shapes). If only one input
assembly shape was provided as input, it would instead be possible to just
remove the dissolve signals from the encoding structure and allow each to
initiate the production of an unbounded number of decoded copies. It would
also be trivial to add tiles that make copies of the encoded structures that
can each initiate the decoding process, leading to exponential replication.
## 4 Universal Shape Encoding, Decoding, and Replication in the STAM
As previously mentioned, our use of the STAMR instead of the standard STAM for
the previous results was intended to allow for the input assemblies to be more
generic. That is, a single uniform glue can cover their entire surfaces rather
than having glues that are direction specific, which is implicitly the case
with glues in the STAM (as well as the aTAM and 2HAM, as commonly defined)
since tiles are not allowed to rotate in those models and therefore glues with
complementary labels but in non-opposite directions can’t bind. Giving tiles
the ability to rotate, meaning that glues are not specific to directions, made
aspects of the shape encoding problem more difficult to solve, especially the
“leader election” process to select a corner of the bounding box to be the
location of the origin. Nonetheless, the constructions can be easily modified
to work in the STAM. To do this we can simply define rotated versions of each
of our tiles, one for each of the 24 possible rotations. The behavior of these
tiles will be identical to the behavior of the tiles in the STAMR which can
easily be seen by forming the trivial bijection between individual tiles in
the STAM tileset and rotated instances of those tiles in the STAMR tileset.
This induces a bijection between assemblies formed by the tiles in both, and
this bijection clearly preserves the dynamics of the system as any binding of
assemblies possible in one corresponds to a binding of the corresponding
assemblies in the other. Thus we have an isomorphism between our systems
defined on these tilesets with the same input shape assemblies. Additionally,
the leader election process is essentially unnecessary in the STAM version
with rotated tiles since we could just choose say the top, northeastern most
tile of the bounding box assembly as leader once the filler verification has
finished. In principle, despite the STAM tileset requiring many rotated copies
of the tiles necessary for the bounding box construction, we wouldn’t need
rotated copies of any other tiles if the same corner was always elected
leader.
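For concreteness, the 24 rotated copies can be enumerated as signed
permutation matrices of determinant $+1$. The following short Python sketch
(helper names are ours) generates them and checks the count:

```python
# Enumerate the 24 orientation-preserving rotations of a cube as signed
# permutations, i.e. one rotated copy per tile when porting to the STAM.

from itertools import permutations, product

def perm_sign(p):
    # parity of a permutation of (0, 1, 2): +1 for cyclic, -1 otherwise
    return 1 if p in ((0, 1, 2), (1, 2, 0), (2, 0, 1)) else -1

def cube_rotations():
    # a signed permutation matrix is a rotation iff its determinant is +1,
    # and det = sign(permutation) * product(signs)
    return [(p, s)
            for p in permutations(range(3))
            for s in product((1, -1), repeat=3)
            if perm_sign(p) * s[0] * s[1] * s[2] == 1]

def apply_rotation(rot, v):
    # image of a vector v (e.g. a face direction) under the rotation
    p, s = rot
    return tuple(s[i] * v[p[i]] for i in range(3))

rots = cube_rotations()
assert len(rots) == 24   # the rotation group of the cube has order 24
assert {apply_rotation(r, (1, 0, 0)) for r in rots} == {
    (1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)}
```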
Also, it can be argued that the STAMR is in a sense more physically realizable
than the STAM if only for the fact that the STAM requires glues to implicitly
encode their orientations. When implementing tiles physically using DNA, where
glues are often made of single stranded DNA exposed on the sides of some more
rigid DNA structure, several copies of each glue (often one for each of the 6
directions) are needed. Because there are only so many fixed length sequences
of nucleotides, requiring that several sequences correspond to the same glue
is expensive. This is not only because those sequences can no longer be used
for different glues, but also because several similar sequences become
unusable as glue sequences must be sufficiently orthogonal to mitigate
erroneous binding. Consequently, our choice of a non-standard model of tile
assembly does not weaken our results, but rather strengthens them both
theoretically and, to some extent, practically.
## 5 Beyond Shape Replication
The constructions used to prove Theorem 3.1 were intentionally broken into
separate, modular constructions proving Lemmas 3.2 and 3.3 and thus providing
a universal shape encoder and a universal shape decoder. This is not only
useful for proving their correctness, but also for allowing for computational
transformations to be performed on the encodings of input shapes in order to
instead produce output shapes based on those transformations. Like even the
much simpler aTAM, the STAM (and STAMR) are Turing universal, meaning any
arbitrary computer program can be executed by systems in these models. Thus,
given any program that can perform a computational transformation of the
points of a shape and output points of another shape, tiles that execute that
program (for instance, by simulating an arbitrary Turing machine in standard
ways, e.g. [25, 18]) can receive as input the binary encodings of arbitrary
shapes (after their creation by the universal encoder), transform them in any
algorithmic manner, and then assemblies of the shapes output by those
transformations can be produced (using the universal shape decoder).
Figure 76: (a) An example shape, (b) The same shape at scale factor $2$, (c) A
shape which is complementary to the top surface of the shape in (a).
Due to space constraints, we don’t go into great detail about the
opportunities that such constructions provide. Instead, we mention just a few
of the possibilities (depicting some in Figure 76, with a small sketch of the
first two following the list) while noting that the possibilities are
technically infinite:
1. 1.
Scaled shapes: a system could be designed to produce assemblies that have the
shapes of input assemblies scaled by either a built-in constant factor
(including negative, to shrink the shapes), or instead with another type of
input assembly that specifies the scaling factor, allowing for a “universal
scaler”.
2. 2.
Inverse shapes: a system could be designed to produce assemblies that have the
inverse, i.e. complementary, shapes of the input assemblies (assuming the
complements are connected, and restricting to some bounding box size since the
complement of any finite shape is infinite).
3. 3.
Pattern matching: a system could be designed to inspect input assembly shapes
for specific patterns and to either produce assemblies that signal the
presence of a target pattern, or instead assemblies that are complementary to,
and can bind to, the surfaces of assemblies containing those patterns.
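As promised above, here is a small sketch of point-set analogues of
transformations 1 and 2 (the helper names are ours, and the tile-level
machinery is not modeled). Note that the complement of a shape inside a
bounding box need not be connected, which is not checked here:

```python
# Point-set versions of "scaled shapes" and "inverse shapes": what a
# Turing-universal transformation of an encoded shape could compute before
# the universal decoder rebuilds the result.

def scale(shape, k):
    # replace each voxel by a k x k x k block (scale factor k >= 1)
    return {(k * x + i, k * y + j, k * z + l)
            for (x, y, z) in shape
            for i in range(k) for j in range(k) for l in range(k)}

def complement(shape, bounds):
    # inverse shape within a fixed bounding box (X, Y, Z); connectivity
    # of the result, required by the decoder, is not verified here
    X, Y, Z = bounds
    box = {(x, y, z) for x in range(X) for y in range(Y) for z in range(Z)}
    return box - set(shape)

unit = {(0, 0, 0)}
assert len(scale(unit, 2)) == 8
assert len(complement(unit, (2, 2, 2))) == 7
```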
Although such constructions are highly theoretical and quite complex, and thus
unlikely in their current forms to be practically implementable, they provide
a mathematical foundation for the construction of complex, dynamic systems
that mimic biological systems. One possible example is an “artificial immune
system” capable of inspecting surfaces, detecting those which match (or fail
to match) specific patterns, and creating assemblies capable of binding to
those deemed to be foreign, harmful, or otherwise targeted. As mentioned,
there are infinite possibilities.
## 6 Impossibility of Shape Replication Without Deconstruction
In this section, we prove that in order for a system in the STAMR to encode
and/or replicate shapes which have enclosed or bent cavities (see Definitions
2.4 and 2.5), the input assemblies must have the potential for tiles to be
removed. To do so, we first utilize a theorem from [2].
###### Theorem 6.4 (from [2]).
Let $U$ be an STAM* tileset such that, for an arbitrary 3D shape $S$, the STAM*
system $\mathcal{T}=(U,\sigma_{S},\tau)$ with ${\rm dom}\;\sigma_{S}=S$ is a
shape self-replicator for $S$, where $\sigma_{S}$ is non-porous. Then, for any
$r\in\mathbb{N}$, there exists a shape $S$ such that $\mathcal{T}$ must remove
at least $r$ tiles from the seed assembly $\sigma_{S}$.
Theorem 4 from [2] applies to the STAM*. However, the STAMR is simply a
restricted version of the STAM* which only allows tiles to be a single shape,
that of a unit cube, and which does not allow flexible glues. Since all
assemblies in the STAMR are non-porous (i.e. free tiles cannot pass through
the tiles of an assembly or the gaps between bound tiles) and the STAMR has
more restrictive dynamics than the STAM*, the proof of this impossibility
result, which shows the impossibility of self-replicating assemblies with
enclosed cavities without removing tiles, suffices to prove the following
corollary (stated using the terminology of this paper) as well. (The proof can
be found in [2], and we omit duplicating it here due to space constraints;
note that it holds even if the input assemblies are not uniformly covered.)
###### Corollary 6.1.
There exists neither a universal shape encoder nor a universal shape replicator
in the STAMR for the class of shapes with enclosed cavities whose assemblies
are not deconstructable.
Figure 77: (a) and (b) Partial depictions of a pair of shapes which cannot be
correctly encoded/replicated without a deconstructable input assembly. Each
consists of a $5\times 5\times 4$ cube with a 4-cube-long bent cavity. For
each, the green, purple, blue, and yellow locations indicate the empty
locations that make the bent cavity. The rest of the $5\times 5\times 4$ cube
locations would be filled in with red cubes (some have been omitted to make
the cavity locations visible). (c) and (d) The shapes of assemblies that could
grow into the bent cavities.
Our next theorem deals with shapes having bent cavities.
###### Theorem 6.2.
There exists neither a universal shape encoder nor a universal shape replicator
in the STAMR for the class of shapes with bent cavities whose input assemblies
are uniformly covered but are not deconstructable.
We prove Theorem 6.2 by contradiction. Therefore, let $f_{e}$ be a shape
encoding function and assume $E$ is a universal shape encoder with respect to
$f_{e}$, and let $c$ be the constant value which bounds the size of the junk
assemblies. (Nearly identical arguments will hold for a universal shape
replicator.) Define the shapes $s_{1}$ and $s_{2}$ as shown in Figures 77(a)
and 77(b), i.e. each is a $5\times 5\times 4$ cube with a bent cavity that
goes into the cube to a depth of 3, then turns one of two directions for each.
Note importantly that the well is offset from the center of the cube such that
$s_{1}$ and $s_{2}$ are not rotationally equivalent. Since $E$ is assumed to
be a universal shape encoder, there must exist two STAMR systems
$\mathcal{E}_{1}=(E,\sigma_{1},\tau)$ and
$\mathcal{E}_{2}=(E,\sigma_{2},\tau)$, where $\sigma_{1}$ consists of infinite
copies of tiles from $E$ and infinite copies of uniformly covered assemblies
in the shape of $s_{1}$, and $\sigma_{2}$ consists of infinite copies of tiles
from $E$ and infinite copies of uniformly covered assemblies in the shape of
$s_{2}$.
$\mathcal{E}_{1}$ must produce terminal assemblies which encode shape $s_{1}$
but must not produce terminal assemblies which encode shape $s_{2}$, since no
assembly of shape $s_{2}$ is included in its input assemblies. Similarly,
$\mathcal{E}_{2}$ must produce terminal assemblies which encode shape $s_{2}$
but not $s_{1}$. Let $\vec{\alpha}$ be an assembly sequence in
$\mathcal{E}_{1}$ which results in a terminal assembly encoding shape $s_{1}$.
We now show that every action of $\vec{\alpha}$ must be valid, in the same
ordering, in $\mathcal{E}_{2}$ but using an input assembly of shape $s_{2}$.
This is because the exact same glues will be exposed by the input assemblies
of shapes $s_{1}$ and $s_{2}$ in the same relative locations with the slight
difference of relative rotations of the innermost locations of the bent
cavities of each from the adjacent cavity locations. Assuming that, in
$\vec{\alpha}$, tiles attach into all locations of the bent cavity (if only
the location shown in yellow remains empty the same argument will hold, and if
both the locations shown in yellow and blue remain empty then there is
absolutely no difference in any aspect of the assembly sequence in
$\mathcal{E}_{2}$ and the argument immediately holds), this results only in
the relative orientations of at most the bottom two tiles being turned 90
degrees relative to the tile immediately above them (i.e. the tile in the
purple location in Figure 77). Since tiles in the STAMR are rotatable, with no
distinction for directions, there is no mechanism for tiles in the purple
locations of the assemblies shown in Figures 77(c) and 77(d) to distinguish
themselves from each other (via tile types, glues, or signals). Tiles of the same types
which bind into those locations in $\vec{\alpha}$ must also be able to do so
in the assembly sequence of $\mathcal{E}_{2}$ using the exact same glues and
firing the exact same signals (if any). Thus $\vec{\alpha}$ must be a valid
assembly sequence in $\mathcal{E}_{2}$ as well. This means that an assembly
encoding the shape of $s_{1}$ is also created as a terminal assembly in
$\mathcal{E}_{2}$. Note that if the constant $c$ is greater than the size of
the shapes $s_{1}$ and $s_{2}$ (i.e. $5\times 5\times 4-4=96$), then we can
simply increase their dimensions until they are larger than $c$ (but still
contain the same bent cavities); the argument still holds, and the incorrectly
produced assemblies cannot be considered “junk” assemblies. This contradicts
the assumption that $E$ is a universal shape encoder with respect to $f_{e}$
and constant $c$. Since no assumptions were made about $E$ other than it being
a universal shape encoder, no such $E$ can exist. By slightly altering the
argument for a universal shape replicator $R$ (instead of universal encoder
$E$) and generating terminal assemblies of shapes $s_{1}$ and $s_{2}$ (rather
than assemblies encoding those shapes), the same argument holds to show that
no universal shape replicator exists, and thus Theorem 6.2 is proven.
## References
* [1] Zachary Abel, Nadia Benbernou, Mirela Damian, Erik D. Demaine, Martin L. Demaine, Robin Flatland, Scott D. Kominers, and Robert T. Schweller. Shape replication through self-assembly and RNAse enzymes. In SODA 2010: Proceedings of the Twenty-first Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1045–1064, Austin, Texas, 2010. Society for Industrial and Applied Mathematics.
* [2] Andrew Alseth, Daniel Hader, and Matthew J. Patitz. Self-replication via tile self-assembly. Technical Report 2105.02914, Computing Research Repository, 2021. URL: https://arxiv.org/abs/2105.02914.
* [3] Andrew Alseth, Daniel Hader, and Matthew J. Patitz. Self-Replication via Tile Self-Assembly (Extended Abstract). In Matthew R. Lakin and Petr Šulc, editors, 27th International Conference on DNA Computing and Molecular Programming (DNA 27), volume 205 of Leibniz International Proceedings in Informatics (LIPIcs), pages 3:1–3:22, Dagstuhl, Germany, 2021. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. URL: https://drops.dagstuhl.de/opus/volltexte/2021/14670, doi:10.4230/LIPIcs.DNA.27.3.
* [4] Sarah Cannon, Erik D. Demaine, Martin L. Demaine, Sarah Eisenstat, Matthew J. Patitz, Robert T. Schweller, Scott M. Summers, and Andrew Winslow. Two hands are better than one (up to constant factors): Self-assembly in the 2HAM vs. aTAM. In Natacha Portier and Thomas Wilke, editors, STACS, volume 20 of LIPIcs, pages 172–184. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2013.
* [5] Cameron Chalk, Erik D. Demaine, Martin L. Demaine, Eric Martinez, Robert Schweller, Luis Vega, and Tim Wylie. Universal shape replicators via Self-Assembly with Attractive and Repulsive Forces. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 225–238. Society for Industrial and Applied Mathematics, January 2017. doi:10.1137/1.9781611974782.15.
* [6] Qi Cheng, Gagan Aggarwal, Michael H. Goldwasser, Ming-Yang Kao, Robert T. Schweller, and Pablo Moisset de Espanés. Complexities for generalized models of self-assembly. SIAM Journal on Computing, 34:1493–1515, 2005.
* [7] Erik D. Demaine, Martin L. Demaine, Sándor P. Fekete, Mashhood Ishaque, Eynat Rafalin, Robert T. Schweller, and Diane L. Souvaine. Staged self-assembly: nanomanufacture of arbitrary shapes with ${O}(1)$ glues. Natural Computing, 7(3):347–370, 2008.
* [8] Erik D. Demaine, Matthew J. Patitz, Trent A. Rogers, Robert T. Schweller, Scott M. Summers, and Damien Woods. The two-handed assembly model is not intrinsically universal. In 40th International Colloquium on Automata, Languages and Programming, ICALP 2013, Riga, Latvia, July 8-12, 2013, Lecture Notes in Computer Science. Springer, 2013.
* [9] Erik D. Demaine, Matthew J. Patitz, Robert T. Schweller, and Scott M. Summers. Self-Assembly of Arbitrary Shapes Using RNAse Enzymes: Meeting the Kolmogorov Bound with Small Scale Factor (extended abstract). In Thomas Schwentick and Christoph Dürr, editors, 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011), volume 9 of Leibniz International Proceedings in Informatics (LIPIcs), pages 201–212, Dagstuhl, Germany, 2011. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. URL: http://drops.dagstuhl.de/opus/volltexte/2011/3011, doi:http://dx.doi.org/10.4230/LIPIcs.STACS.2011.201.
* [10] David Doty, Jack H. Lutz, Matthew J. Patitz, Robert T. Schweller, Scott M. Summers, and Damien Woods. The tile assembly model is intrinsically universal. In Proceedings of the 53rd Annual IEEE Symposium on Foundations of Computer Science, FOCS 2012, pages 302–310, 2012.
* [11] Constantine Glen Evans. Crystals that count! Physical principles and experimental investigations of DNA tile self-assembly. PhD thesis, California Institute of Technology, 2014.
* [12] Tyler Fochtman, Jacob Hendricks, Jennifer E. Padilla, Matthew J. Patitz, and Trent A. Rogers. Signal transmission across tile assemblies: 3D static tiles simulate active self-assembly by 2D signal-passing tiles. Natural Computing, 14(2):251–264, 2015.
* [13] Jacob Hendricks, Matthew J. Patitz, and Trent A. Rogers. Replication of arbitrary hole-free shapes via self-assembly with signal-passing tiles. In Cristian S. Calude and Michael J. Dinneen, editors, Unconventional Computation and Natural Computation - 14th International Conference, UCNC 2015, Auckland, New Zealand, August 30 - September 3, 2015, Proceedings, volume 9252 of Lecture Notes in Computer Science, pages 202–214. Springer, 2015. URL: http://dx.doi.org/10.1007/978-3-319-21819-9_15, doi:10.1007/978-3-319-21819-9_15.
* [14] Jacob Hendricks, Matthew J. Patitz, and Trent A. Rogers. Universal simulation of directed systems in the abstract tile assembly model requires undirectedness. In Proceedings of the 57th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2016), New Brunswick, New Jersey, USA October 9-11, 2016, pages 800–809, 2016.
* [15] Nataša Jonoska and Daria Karpenko. Active tile self-assembly, Part 1: Universality at temperature 1. International Journal of Foundations of Computer Science, 25(02):141–163, 2014. doi:10.1142/S0129054114500087.
* [16] Yonggang Ke, Luvena L Ong, William M Shih, and Peng Yin. Three-dimensional structures self-assembled from DNA bricks. Science, 338(6111):1177–1183, 2012.
* [17] Alexandra Keenan, Robert T. Schweller, and Xingsi Zhong. Exponential replication of patterns in the signal tile assembly model. In David Soloveichik and Bernard Yurke, editors, DNA, volume 8141 of Lecture Notes in Computer Science, pages 118–132. Springer, 2013.
* [18] James I. Lathrop, Jack H. Lutz, Matthew J. Patitz, and Scott M. Summers. Computability and complexity in self-assembly. Theory Comput. Syst., 48(3):617–647, 2011.
* [19] James I. Lathrop, Jack H. Lutz, and Scott M. Summers. Strict self-assembly of discrete Sierpinski triangles. Theoretical Computer Science, 410:384–405, 2009.
* [20] Austin Luchsinger, Robert Schweller, and Tim Wylie. Self-assembly of shapes at constant scale using repulsive forces. Natural Computing, Aug 2018. doi:10.1007/s11047-018-9707-9.
* [21] Austin Luchsinger, Robert T. Schweller, and Tim Wylie. Self-assembly of shapes at constant scale using repulsive forces. In UCNC, volume 10240 of Lecture Notes in Computer Science, pages 82–97. Springer, 2017.
* [22] Pierre-Étienne Meunier, Damien Regnault, and Damien Woods. The program-size complexity of self-assembled paths. In Konstantin Makarychev, Yury Makarychev, Madhur Tulsiani, Gautam Kamath, and Julia Chuzhoy, editors, Proccedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020, Chicago, IL, USA, June 22-26, 2020, pages 727–737. ACM, 2020. doi:10.1145/3357713.3384263.
* [23] Jennifer E. Padilla, Matthew J. Patitz, Robert T. Schweller, Nadrian C. Seeman, Scott M. Summers, and Xingsi Zhong. Asynchronous signal passing for tile self-assembly: Fuel efficient computation and efficient assembly of shapes. International Journal of Foundations of Computer Science, 25(4):459–488, 2014.
* [24] Matthew J. Patitz, Robert T. Schweller, and Scott M. Summers. Exact shapes and Turing universality at temperature 1 with a single negative glue. In Luca Cardelli and William M. Shih, editors, DNA Computing and Molecular Programming - 17th International Conference, DNA 17, Pasadena, CA, USA, September 19-23, 2011. Proceedings, volume 6937 of Lecture Notes in Computer Science, pages 175–189. Springer, 2011.
* [25] Matthew J. Patitz and Scott M. Summers. Self-assembly of decidable sets. Natural Computing, 10(2):853–877, 2011.
* [26] Matthew J. Patitz and Scott M. Summers. Identifying shapes using self-assembly. Algorithmica, 64(3):481–510, 2012.
* [27] Paul W. K. Rothemund and Erik Winfree. The program-size complexity of self-assembled squares (extended abstract). In STOC ’00: Proceedings of the thirty-second annual ACM Symposium on Theory of Computing, pages 459–468, Portland, Oregon, United States, 2000. ACM.
* [28] Rebecca Schulman, Bernard Yurke, and Erik Winfree. Robust self-replication of combinatorial information via crystal growth and scission. Proceedings of the National Academy of Sciences, 109(17):6405–10, 2012. URL: http://www.biomedsearch.com/nih/Robust-self-replication-combinatorial-information/22493232.html.
* [29] David Soloveichik and Erik Winfree. Complexity of self-assembled shapes. SIAM Journal on Computing, 36(6):1544–1569, 2007.
* [30] Scott M. Summers. Reducing tile complexity for the self-assembly of scaled shapes through temperature programming. Algorithmica, 63(1-2):117–136, June 2012. URL: http://dx.doi.org/10.1007/s00453-011-9522-5, doi:10.1007/s00453-011-9522-5.
* [31] Erik Winfree. Algorithmic Self-Assembly of DNA. PhD thesis, California Institute of Technology, June 1998.
* [32] Damien Woods, David Doty, Cameron Myhrvold, Joy Hui, Felix Zhou, Peng Yin, and Erik Winfree. Diverse and robust molecular algorithms using reprogrammable DNA self-assembly. Nature, 567:366–372, 2019.
# The monodromy of families of subvarieties on abelian varieties
Ariyan Javanpeykar, IMAPP, Radboud University Nijmegen, PO Box 9010, 6500GL Nijmegen, The Netherlands<EMAIL_ADDRESS>Thomas Krämer, Institut für Mathematik, Humboldt Universität zu Berlin, Rudower Chaussee 25, 12489 Berlin, Germany<EMAIL_ADDRESS>Christian Lehn, Fakultät für Mathematik, Technische Universität Chemnitz, Reichenhainer Strasse 39, 09126 Chemnitz, Germany<EMAIL_ADDRESS>Marco Maculan, Institut de Mathématiques de Jussieu, Sorbonne Université, 4, place Jussieu, 75005 Paris, France<EMAIL_ADDRESS>
###### Abstract.
Motivated by recent work of Lawrence-Venkatesh and Lawrence-Sawin, we show
that non-isotrivial families of subvarieties in abelian varieties have big
monodromy when twisted by generic rank one local systems. While Lawrence-Sawin
discuss the case of subvarieties of codimension one, our results hold for
subvarieties of codimension at least half the dimension of the ambient abelian
variety. For the proof, we use a combination of geometric arguments and
representation theory to show that the Tannaka groups of intersection
complexes on such subvarieties are big.
###### Key words and phrases:
Subvarieties of abelian varieties, characteristic cycles, convolution,
monodromy, perverse sheaves, Tannaka categories.
###### 2020 Mathematics Subject Classification:
14K12, 14D05 (primary), 18M25, 20G05, 32S60 (secondary).
## 1\. Introduction
Recently, Lawrence and Venkatesh [LV20] have developed a technique to prove
nondensity of integral points on varieties that are defined over a number
field and support a geometric variation of Hodge structures with big
monodromy. They used this method to give an alternative proof of the Mordell
conjecture and to show nondensity for hypersurfaces in projective space of a
given (high) degree with good reduction outside a fixed finite set of primes.
Later, Lawrence and Sawin [LS20] applied this strategy to show that up to
translation any abelian variety over a number field contains only finitely
many smooth ample hypersurfaces with given Néron-Severi class and good
reduction outside a fixed finite set of primes. The main novelty of their work
lies in their way to control monodromy. The arguments of Lawrence and
Venkatesh have a topological flavor. For the Mordell conjecture they rely on a
judicious choice of Dehn twists; for hypersurfaces in projective space they
use the computation of the integral monodromy of the universal family by
Beauville [Bea86] (based on the work of Ebeling [Ebe84] and Janssen [Jan83]),
see also the discussion by Katz in [Kat04]. Instead, the approach by Lawrence
and Sawin involves Tannaka groups of perverse sheaves on abelian varieties
introduced by Krämer and Weissauer [KW15c]; the relation of these groups to
monodromy is reminiscent of the one between the monodromy group of a variation
of Hodge structures and its generic Mumford-Tate group [And92].
With a view towards new arithmetic applications along these lines, we prove a
big monodromy theorem for families of subvarieties of higher codimension in
abelian varieties. Our results hold for all subvarieties of codimension at
least half the dimension of the abelian variety. The geometry in this
codimension range is very different from the codimension one case in [LS20],
and the results about Tannaka groups that we obtain on the way may be of
independent interest.
### 1.1. Big monodromy
Let $S$ be a smooth irreducible variety over an algebraically closed field $k$
of characteristic zero. Let $A$ be an abelian variety of dimension $g$ over
$k$. Inside the constant abelian scheme $A_{S}:=A\times S$, let
$\mathcal{X}\subset A_{S}$ be a closed subvariety which is smooth over $S$
with connected fibers of dimension $d$. The goal of this paper is to
understand the monodromy of rank one local systems on the smooth proper family
$f\colon\mathcal{X}\to S$, where $f$ and $\pi\colon\mathcal{X}\to A$ denote the
restrictions to $\mathcal{X}$ of the projections
$\operatorname{pr}_{S}\colon A_{S}\to S$ and $\operatorname{pr}_{A}\colon A_{S}\to A$.
Our results apply both in the analytic and in the algebraic setup, using
topological local systems with coefficients in $\mathbb{F}=\mathbb{C}$ for
$k=\mathbb{C}$ resp. étale $\ell$-adic local systems with coefficients in
$\mathbb{F}=\overline{\mathbb{Q}}_{\ell}$ for a prime $\ell$ over an arbitrary
algebraically closed field $k$ of characteristic zero. Let $\pi_{1}(A,0)$ be
the topological resp. étale fundamental group with the discrete resp.
profinite topology, and denote the group of its continuous characters by
$\Pi(A,\mathbb{F})=\operatorname{Hom}(\pi_{1}(A,0),\mathbb{F}^{\times}).$
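For orientation, recall the standard description of these character groups in
the simplest setting: over $k=\mathbb{C}$ the topological fundamental group of
a $g$-dimensional abelian variety is free abelian of rank $2g$, so
$\pi_{1}(A,0)\simeq\mathbb{Z}^{2g}\qquad\textup{and}\qquad\Pi(A,\mathbb{C})\simeq(\mathbb{C}^{\times})^{2g}.$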
In what follows, by a _linear subvariety_ we mean a subset
$\Pi(B,\mathbb{F})\subset\Pi(A,\mathbb{F})$ for an abelian quotient variety
$A\twoheadrightarrow B$ with $\dim B<\dim A$. We say that a statement holds
for most $\chi\in\Pi(A,\mathbb{F})$ if it holds for all $\chi$ outside a
finite union of torsion translates of linear subvarieties. For
$\chi\in\Pi(A,\mathbb{F})$, let $L_{\chi}$ denote the associated rank one
local system on $A$. It follows from generic vanishing [BSS18, KW15c, Sch15]
that for most $\chi$ the higher direct images $R^{i}f_{*}\pi^{*}L_{\chi}$
vanish in all degrees $i\neq d$; we consider the local system
$V_{\chi}\;:=\;R^{d}f_{*}\pi^{*}L_{\chi}$
of rank $|e|$ where $e$ is the topological Euler characteristic of the fibers
of $\mathcal{X}\to S$. More generally, the study of finite étale covers of the
subvariety $\mathcal{X}\subset A_{S}$ induced by finite étale covers of $A$
leads to direct sums
$V_{\underline{\chi}}\;:=\;V_{\chi_{1}}\oplus\cdots\oplus V_{\chi_{n}}$
where $\underline{\chi}=(\chi_{1},\dots,\chi_{n})\in\Pi(A,\mathbb{F})^{n}$ is
an $n$-tuple of characters of the fundamental group. Using the natural
identification $\Pi(A,\mathbb{F})^{n}=\Pi(A^{n},\mathbb{F})$ we will also
apply the terminology most for such $n$-tuples of characters. Consider for
$s\in S(k)$ the monodromy representation
$\rho\colon\quad\pi_{1}(S,s)\;\longrightarrow\;\operatorname{GL}(V_{\underline{\chi},s})\quad\textnormal{on
the fiber}\quad
V_{\underline{\chi},s}\;=\;\bigoplus_{i=1}^{n}\textup{H}^{d}(\mathcal{X}_{s},L_{\chi_{i}}).$
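For concreteness, here is the elementary count behind the rank statement above
(our sanity check, not an argument from the text): the Euler characteristic of
a fiber is unchanged by twisting with a rank one local system, so when the
cohomology of $L_{\chi_{i}}$ is concentrated in degree $d$ we get
$\operatorname{rk}V_{\chi_{i}}\;=\;\dim\textup{H}^{d}(\mathcal{X}_{s},L_{\chi_{i}})\;=\;(-1)^{d}\,\chi_{\textup{top}}(\mathcal{X}_{s})\;=\;|e|.$
For a family of curves ($d=1$) of genus $h\geqslant 2$ this gives
$\operatorname{rk}V_{\chi_{i}}=2h-2$.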
The _algebraic monodromy group_ of the local system $V_{\underline{\chi}}$ is
the Zariski closure of the image of $\rho$. By construction it is an algebraic
subgroup of
$\operatorname{GL}(V_{\chi_{1},s})\times\cdots\times\operatorname{GL}(V_{\chi_{n},s})\;\subset\;\operatorname{GL}(V_{\underline{\chi},s}).$
This upper bound can sometimes be refined: We say that the subvariety
$\mathcal{X}\subset A_{S}$ is _symmetric up to translation_ if there exists
$a\colon S\to A$ such that $\mathcal{X}_{t}=-\mathcal{X}_{t}+a(t)$ for all
$t\in S(k)$. In this case, Poincaré duality furnishes a nondegenerate bilinear
pairing
$\theta_{\chi,s}\colon\quad V_{\chi,s}\otimes V_{\chi,s}\longrightarrow
L_{\chi,a(s)}$
for each $\chi\in\Pi(A,\mathbb{F})$, because for the dual of a rank one local
system and for its inverse image under the translation $\tau_{a(t)}\colon A\to
A,x\mapsto x+a(t)$ we have natural isomorphisms
$L_{\chi}^{\vee}\;\simeq\;[-1]^{\ast}L_{\chi},\qquad\tau_{a(t)}^{\ast}L_{\chi}\;\simeq\;L_{\chi}\otimes_{\mathbb{F}}L_{\chi,a(t)}.$
The pairing $\theta_{\chi,s}$ is symmetric if $d$ is even, and alternating
otherwise. Since the pairing is compatible with the monodromy operation on the
fiber, it follows that the algebraic monodromy group of $V_{\underline{\chi}}$
is contained in an orthogonal resp. symplectic group in the two cases. This
leads to the following definition:
###### Definition.
We say that $V_{\underline{\chi}}$ has big monodromy if its algebraic
monodromy group contains $G_{1}\times\cdots\times G_{n}$ as a normal subgroup
where $G_{i}\subset\operatorname{GL}(V_{\chi_{i},s})$ is defined by
$G_{i}\;:=\;\begin{cases}\operatorname{SL}(V_{\chi_{i},s})&\textup{if
$\mathcal{X}$ is not symmetric up to translation},\\\
\operatorname{SO}(V_{\chi_{i},s},\theta_{\chi_{i},s})&\textup{if $\mathcal{X}$
is symmetric up to translation and $d$ is even},\\\
\operatorname{Sp}(V_{\chi_{i},s},\theta_{\chi_{i},s})&\textup{if $\mathcal{X}$
is symmetric up to translation and $d$ is odd}.\end{cases}$
Note that the connected component of the algebraic monodromy group of
$V_{\underline{\chi}}$ is unaffected by base change along étale morphisms
$S^{\prime}\to S$. To take this into account we consider the fiber
$\mathcal{X}_{\bar{\eta}}$ of $\mathcal{X}\to S$ at a geometric generic point
$\bar{\eta}$ of $S$. There are four obvious cases where the local system
$V_{\underline{\chi}}$ does not have big monodromy: We say that
$\mathcal{X}_{\bar{\eta}}\subset A_{S,\bar{\eta}}$ is
1. (1)
_constant up to a translation_ if it is the translate of a subvariety
$Y\subset A$ along a point in $A(\bar{\eta})$. In this case the algebraic
monodromy is finite.
2. (2)
_divisible_ if it is stable under translation by a torsion point $0\neq x\in
A(\bar{\eta})$. In this case the algebraic monodromy of each $V_{\chi_{i}}$ is
itself a group of block matrices which is normalized by the group generated by
the point $x$.
3. (3)
a _symmetric power of a curve_ if there is a smooth curve $C\subset
A_{S,\bar{\eta}}$ such that the sum morphism $\operatorname{Sym}^{d}C\to
A_{S,\bar{\eta}}$ is a closed embedding with image $\mathcal{X}_{\bar{\eta}}$
and $d\geqslant 2$. After an étale base change over $S$, we may assume that
$C$ spreads out to a relative curve $\mathcal{C}\subset A_{S}$ which is smooth
and proper over $S$ such that the relative sum morphism
$\operatorname{Sym}_{S}^{d}\mathcal{C}\to A_{S}$ is a closed embedding with
image $\mathcal{X}$. Then we have an isomorphism compatible with monodromy:
$\textup{H}^{d}(\mathcal{X}_{s},L_{\chi})\simeq\operatorname{Alt}^{d}\textup{H}^{1}(\mathcal{C}_{s},L_{\chi}).$
(A dimension count for this case is sketched after the list.)
4. (4)
a _product_ if there are smooth subvarieties $X_{1},X_{2}\subset
A_{S,\bar{\eta}}$ with $\dim X_{i}>0$ such that the sum morphism $X_{1}\times
X_{2}\to A_{S,\bar{\eta}}$ is a closed embedding with image
$\mathcal{X}_{\bar{\eta}}$. Again, after an étale base change over $S$ we may
assume that $X_{i}$ spreads out to a subvariety $\mathcal{X}_{i}\subset A_{S}$
which is smooth and proper over $S$ such that the relative sum morphism
$\mathcal{X}_{1}\times\mathcal{X}_{2}\to A_{S}$ is a closed embedding with
image $\mathcal{X}$. Then we have the Künneth isomorphism which is compatible
with monodromy:
$\textup{H}^{d}(\mathcal{X}_{s},L_{\chi})\simeq\bigoplus_{i_{1}+i_{2}=d}\textup{H}^{i_{1}}(\mathcal{X}_{1,s},L_{\chi})\otimes\textup{H}^{i_{2}}(\mathcal{X}_{2,s},L_{\chi}).$
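To see concretely why cases (3) and (4) obstruct big monodromy, here is an
elementary dimension count for case (3) (ours, for illustration): writing
$r=\dim\textup{H}^{1}(\mathcal{C}_{s},L_{\chi})$, the displayed isomorphism gives
$\dim V_{\chi,s}\;=\;\binom{r}{d},\qquad\dim\operatorname{im}\bigl(\operatorname{GL}_{r}\to\operatorname{GL}(\operatorname{Alt}^{d}\mathbb{F}^{r})\bigr)\;\leqslant\;r^{2},$
and $r^{2}<\binom{r}{d}^{2}-1=\dim\operatorname{SL}(V_{\chi,s})$ as soon as
$\binom{r}{d}>r$, so the algebraic monodromy, which factors through
$\operatorname{GL}_{r}$, is too small to contain $\operatorname{SL}(V_{\chi,s})$.
Case (4) is obstructed similarly by the Künneth decomposition.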
If $\mathcal{X}_{\bar{\eta}}$ is nondivisible, then condition (1) holds if and
only if the family $\mathcal{X}\to S$ is isotrivial; see corollary 4.8. In
order to avoid the appearance of the exceptional groups $E_{6}$ and $E_{7}$
and some low-dimensional half-spin groups, we assume that the topological
Euler characteristic $e$ of $\mathcal{X}_{\bar{\eta}}$ satisfies
$\begin{aligned}|e|\;&\neq\;27&&\textup{if $d\geqslant 2$ and $\mathcal{X}$ is not symmetric up to translation},\\\ |e|\;&\neq\;56&&\textup{if $d\geqslant 3$ is odd and $\mathcal{X}$ is symmetric up to translation},\\\ |e|\;&\neq\;2^{2m-1}&&\textup{if $d\geqslant(g-1)/4$, $m\in\\{3,\dots,d\\}$ has the same parity as $d$,}\\\ &&&\textup{and $\mathcal{X}$ is symmetric up to translation}.\end{aligned}$ (1.1)
Note that $|e|\geqslant g$ if $\mathcal{X}_{\bar{\eta}}\subset
A_{S,\bar{\eta}}$ has ample normal bundle, see lemma 2.12. We do not know any
example of a smooth subvariety of $A_{S,\bar{\eta}}$ with ample normal bundle
and dimension $d<(g-1)/2$ whose Euler characteristic $e$ does not satisfy
(1.1).
###### Main theorem (monodromy version).
Suppose $\mathcal{X}_{\bar{\eta}}\subset A_{S,\bar{\eta}}$ has ample normal
bundle, dimension $d<(g-1)/2$, and (1.1) holds. Then the following are
equivalent:
1. (1)
$\mathcal{X}_{\bar{\eta}}$ is nondivisible, not constant up to translation,
not a symmetric power of a curve and not a product;
2. (2)
$V_{\underline{\chi}}$ has big monodromy for most torsion $n$-tuples
$\underline{\chi}\in\Pi(A,\mathbb{F})^{n}$.
Smooth proper subvarieties of a simple abelian variety have ample normal
bundle. Therefore when $A$ is simple the preceding theorem is as general as it
gets for smooth subvarieties of dimension $d<(g-1)/2$, save the finite list of
exceptions in (1.1). When $A$ is arbitrary, the theorem can be applied in the
following concrete cases:
###### Corollary.
Suppose $\mathcal{X}_{\bar{\eta}}\subset A_{S,\bar{\eta}}$ is nondivisible,
not constant up to translation, and one of the following holds:
1. (1)
$\mathcal{X}_{\bar{\eta}}$ is a curve generating $A_{S,\bar{\eta}}$ and
$g\geqslant 4$;
2. (2)
$\mathcal{X}_{\bar{\eta}}$ is a surface with ample normal bundle which is
neither a symmetric square of a curve nor a product, and $e\neq 27$,
$g\geqslant 6$;
3. (3)
$\mathcal{X}_{\bar{\eta}}$ is a complete intersection of ample divisors and
$d<(g-1)/2$.
Then $V_{\underline{\chi}}$ has big monodromy for most $n$-tuples
$\underline{\chi}\in\Pi(A,\mathbb{F})^{n}$ of torsion characters.
Indeed a smooth complete intersection of ample divisors is neither a symmetric
power of a curve (corollary 2.10) nor a product (remark 6.3), and its
topological Euler characteristic satisfies $|e|\geqslant 2^{g}$ and $|e|\neq
27,56$ (corollaries 2.16 and 2.17).
Over $k=\mathbb{C}$, the main theorem in the analytic setup is deduced from
the algebraic one by the comparison between classical and étale topology; the
hypothesis that the characters are torsion is only used here. For the proof in
the algebraic setting, we start as in [LS20] by relating the algebraic
monodromy to the Tannaka group of the rank one local systems in question, seen
as perverse sheaves on $\mathcal{X}_{\bar{\eta}}$. The idea is similar to the
study of monodromy groups via Mumford-Tate groups in the complex case [And92]:
An analog of the theorem of the fixed part due to Lawrence and Sawin says that
the monodromy will be big if we can show that the Tannaka group of the
geometric generic fiber is big (see theorem 4.10); note that the property of
the family being symmetric up to translation can be read off from its
geometric generic fiber (corollary 4.8). Thus, we are left with a question
about the Tannaka group of the geometric generic fiber of our family. In this
setting, we will reset our notation and replace $k$ by an algebraic closure of
the function field of $S$.
### 1.2. Big Tannaka groups
As before, let $A$ be an abelian variety of dimension $g$ over an
algebraically closed field $k$ of characteristic zero. Let $i\colon
X\hookrightarrow A$ be the inclusion of a smooth connected closed subvariety
of dimension $d$. We define the perverse intersection complex
$\delta_{X}\;:=\;i_{\ast}\mathbb{F}_{X}[d]$
as the pushforward of the constant sheaf, shifted in cohomological degree $-d$
so that it becomes an object of the abelian category
$\textup{Perv}(A,\mathbb{F})$ of perverse sheaves on $A$ as in [BBDG18]. As we
will recall in section 3.1, the group law on the abelian variety induces a
convolution product on perverse sheaves, and the perverse intersection complex
$\delta_{X}$ generates a neutral Tannaka category $\langle\delta_{X}\rangle$
with respect to this convolution. For the rest of this introduction, we fix a
character $\chi\in\Pi(A,\mathbb{F})$ with $\textup{H}^{i}(X,L_{\chi})=0$ for
all $i\neq d$. Such a character exists by generic vanishing. We then have a
fiber functor
$\omega\colon\quad\langle\delta_{X}\rangle\;\longrightarrow\;\operatorname{Vect}(\mathbb{F}),\quad
P\;\longmapsto\;\textup{H}^{0}(A,P\otimes L_{\chi}),$
see [KW15c, th. 13.2]. Applying this fiber functor to $P=\delta_{X}$ we
recover the vector space
$V:=\omega(\delta_{X})=\textup{H}^{0}(A,\delta_{X}\otimes
L_{\chi})=\textup{H}^{d}(X,L_{\chi}).$
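For instance, if $X=C\subset A$ is a smooth curve of genus $h\geqslant 2$, then
$V=\textup{H}^{1}(C,L_{\chi})$ has dimension
$-\chi(C,L_{\chi})=-e_{C}=2h-2$, since twisting by a rank one local system does
not change the topological Euler characteristic.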
The automorphisms of the fiber functor are represented by a reductive
algebraic group
$G_{X,\omega}:=G_{\omega}(\delta_{X})\subset\operatorname{GL}(V)$ which we
call the _Tannaka group of $X$_, see also definition 3.2. The definitions in
section 1.1 with $S=\operatorname{Spec}(k)$ show that if $X\subset A$ is
symmetric up to translation, then $V$ comes with a natural symmetric bilinear
form $\theta$ which is induced by Poincaré duality. This bilinear form is
symmetric or alternating depending on the parity of $d$, and it is preserved
by the action of the group $G_{X,\omega}$ as in [KW15a, lemma 2.1]. Let
$G_{X,\omega}^{\circ}\subset G_{X,\omega}$ be the connected component of the
identity and
$G_{X,\omega}^{\ast}:=[G_{X,\omega}^{\circ},G_{X,\omega}^{\circ}]$
its derived group, which is a connected semisimple algebraic group.
###### Definition.
We say that the Tannaka group $G_{X,\omega}$ of $X$ is big if the derived
group of its connected component of the identity is
$G_{X,\omega}^{*}\;=\;\begin{cases}\operatorname{SL}(V)&\textup{if $X$ is not
symmetric up to translation},\\\ \operatorname{SO}(V,\theta)&\textup{if $X$ is
symmetric up to translation and $d$ is even},\\\
\operatorname{Sp}(V,\theta)&\textup{if $X$ is symmetric up to translation and
$d$ is odd}.\end{cases}$
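For instance, since $\dim_{\mathbb{F}}V$ equals the absolute value of the
topological Euler characteristic $e$ of $X$ (see section 1.3), for a curve
($d=1$) bigness means $G_{X,\omega}^{\ast}=\operatorname{SL}_{|e|}(\mathbb{F})$
if $X$ is not symmetric up to translation, and
$G_{X,\omega}^{\ast}=\operatorname{Sp}_{|e|}(\mathbb{F})$ otherwise.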
The main theorem from the previous section is obtained by combining the analog
of the theorem of the fixed part by Lawrence and Sawin (theorem 4.10) with the
following result, whose proof will be the main task of this paper. Again we
need to exclude a finite list of values of the topological Euler
characteristic $e$ of $X$, for which we refer to (1.1) with
$S=\mathrm{Spec}(k)$ and $\mathcal{X}=X$.
###### Main theorem (Tannaka version).
Suppose $X\subset A$ has ample normal bundle, dimension $d<(g-1)/2$, and (1.1)
holds. Then the following are equivalent:
1. (1)
$X$ is nondivisible, not a symmetric power of a curve and not a product;
2. (2)
The Tannaka group $G_{X,\omega}$ is big.
Similarly to the monodromy version, the preceding statement is essentially
sharp in the simple case and can be applied in the following special cases:
###### Corollary.
Suppose $X\subset A$ is nondivisible and one of the following holds:
1. (1)
$X$ is a curve generating $A$ and $g\geqslant 4$;
2. (2)
$X$ is a surface with ample normal bundle which is neither a product nor the
symmetric square of a curve, and $e\neq 27$, $g\geqslant 6$;
3. (3)
$X$ is a complete intersection of ample divisors and $d<(g-1)/2$.
Then the Tannaka group $G_{X,\omega}$ is big.
The Tannaka version of the main theorem also applies when $X$ does not arise
from a family as in section 1.1, so it is stronger than the monodromy version.
We also note that over the complex numbers both versions apply in many cases
where we have no control on Mumford-Tate groups of the subvarieties. Again,
when $X$ is a complete intersection of ample divisors, automatically
$|e|\neq 27,56$ and $X$ is neither a symmetric power of a curve nor a product. Before
we describe the proof of the main theorem, let us mention the following
observation to illustrate the information captured by Tannaka groups (see
corollary 3.8):
###### Fact.
Suppose that $X\subset A$ has ample normal bundle and $d<g/2$. If
$G_{X,\omega}$ is big, then the sum morphism $\operatorname{Sym}^{2}(X)\to
X+X$ is birational.
This follows from the observation that the direct image of the constant sheaf
under the sum morphism is related to the decomposition of $V\otimes
V\in\textup{Rep}_{\mathbb{F}}(G_{X,\omega})$. In fact, Larsen’s alternative
yields a necessary and sufficient criterion for the Tannaka group to be big,
using only the decomposition of the direct image of the constant sheaf under
the sum morphism. However, it seems hard to control this direct image in the
generality needed for our main theorem, so for the proof of the main theorem
we follow a different route that will be described in sections 1.3, 1.4, and 1.5.
### 1.3. Simplicity of Tannaka groups
The first step in our proof of the main theorem from the previous section will
be to show that under the given assumptions, the algebraic group
$G_{X,\omega}^{*}$ is simple. We refine the methods in [Krä21, section 6] to
obtain the following simplicity criterion (see theorem 6.1):
###### Theorem A.
Suppose $X\subset A$ has ample normal bundle and is nondivisible. Then for
$g\geqslant 3$ the following are equivalent:
1. (1)
The algebraic group $G_{X,\omega}^{\ast}$ is not simple;
2. (2)
There are smooth subvarieties $X_{1},X_{2}\subset A$ such that the sum
morphism induces an isomorphism
$X_{1}\times X_{2}\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}X.$
A smooth projective curve $C\subset A$ generating $A$ has ample normal bundle,
thus the algebraic group $G_{C,\omega}^{\ast}$ is simple for $g\geqslant 3$.
When $g=2$ the simplicity of $G_{C,\omega}^{\ast}$ remains open. More
generally, theorem A implies that $G_{X,\omega}^{\ast}$ is simple when $X\subset A$ is
nondivisible with ample normal bundle and
1. (1)
the image of the Albanese morphism $X\to\operatorname{Alb}(X)$ is
nondegenerate in the sense of section 2.3;
2. (2)
the natural morphism $\varphi\colon\operatorname{Alb}(X)\to A$ is an isogeny.
By Debarre’s Barth-Lefschetz theorem for abelian varieties (see [Deb95, th.
4.5] or remark 6.3) this is the case as soon as $d>g/2$ or when $X$ is a
complete intersection of ample divisors and $d\geqslant 2$.
Note that situation (2) above is a particular case of (1).
Our proof of A uses characteristic cycles on the cotangent bundle of $A$ and
their link with representation theory [Krä22, Krä21]. For convenience, we
recall some background in section 5, together with computations for the Dynkin
types $A$, $B$, $D$ to be used later. A look at characteristic cycles will
also show that the representation
$\omega(\delta_{X})\in\textup{Rep}_{\mathbb{F}}(G_{X,\omega}^{\ast})$ is
minuscule in the sense that its weights form a single Weyl group orbit, see
corollary 5.15. There are only a few nontrivial minuscule representations $V$ of
a simply connected simple algebraic group $G$, all of which are listed below:
Dynkin type | $G$ | $V$ | $\dim V$
---|---|---|---
$A_{n}$ | $\operatorname{SL}_{n+1}$ | $r$-th wedge power | $\tbinom{n+1}{r}$
$B_{n}$ | $\operatorname{Spin}_{2n+1}$ | spin | $2^{n}$
$C_{n}$ | $\operatorname{Sp}_{2n}$ | standard | $2n$
$D_{n}$ | $\operatorname{Spin}_{2n}$ | standard of $\operatorname{SO}_{2n}$ | $2n$
$D_{n}$ | $\operatorname{Spin}_{2n}$ | half-spins | $2^{n-1}$
$E_{6}$ | $E_{6}$ | smallest nontrivial or its dual | $27$
$E_{7}$ | $E_{7}$ | smallest nontrivial | $56$
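For example, in type $A_{5}$ the third wedge power of the standard
representation of $\operatorname{SL}_{6}$ has dimension $\tbinom{6}{3}=20$, and
in type $B_{5}$ the spin representation of $\operatorname{Spin}_{11}$ has
dimension $2^{5}=32$.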
The dimension of $\omega(\delta_{X})$ is the absolute value of the topological
Euler characteristic of $X$. Recall that the subvariety $X\subset A$ is
symmetric up to a translation if and only if the vector space
$\omega(\delta_{X})$ carries a nondegenerate bilinear form preserved by the
action of $G_{X,\omega}^{\ast}$, and this pairing is symmetric if $d$ is even
and alternating if $d$ is odd; see [KW15a, lemma 2.1]. This rules out the
occurrence of $E_{6}$ for symmetric subvarieties; note that the group $E_{6}$
appears as the Tannaka group of the Fano surface in the intermediate Jacobian
of a smooth cubic threefold, but $d=(g-1)/2$ here because $d=2$ and $g=5$.
Similarly, the group $E_{7}$ preserves a nondegenerate alternating bilinear
form on its $56$-dimensional irreducible representation, so subvarieties $X$
with $G_{X,\omega}^{\ast}\simeq E_{7}$ must be odd-dimensional. However, for
$d=1$ this does not happen, as we show by a direct geometric argument (see
corollary 3.11), and in higher dimension we do not know of any such example.
Altogether, to prove that the Tannaka group is big and conclude the proof of
the main theorem from section 1.2, we are left with wedge powers and spin
representations. The next two sections will characterize the occurrence of the
former and rule out the latter.
### 1.4. Wedge powers
In contrast to the situation for hypersurfaces studied by Lawrence and Sawin
in [LS20], one cannot rule out the occurrence of nontrivial wedge powers for
subvarieties of higher codimension by numerical arguments. In fact, wedge
powers do appear, but we will use geometric arguments to obtain the following
complete classification (see theorem 7.3):
###### Theorem B.
Suppose $X\subset A$ has ample normal bundle and is nondivisible. Then for
$d<(g-1)/2$ the following are equivalent:
1. (1)
There are integers $r$ and $n$ with $1<r\leqslant n/2$ such that
$G_{X,\omega}^{\ast}\simeq\operatorname{Alt}^{r}(\operatorname{SL}_{n})$ and
$\omega(\delta_{X})$ is the $r$-th wedge power of the standard representation.
2. (2)
There is a nondegenerate irreducible smooth projective curve $C\subset A$ such
that
* •
$X=C+\cdots+C\subset A$ is the sum of $r$ copies of $C$, and
* •
the sum morphism $\operatorname{Sym}^{r}C\to X$ is an isomorphism.
### 1.5. Spin representations
Recall that for $N\geqslant 3$ the group $\operatorname{SO}_{N}(\mathbb{F})$
admits a double cover
$\operatorname{Spin}_{N}(\mathbb{F})\;\longrightarrow\;\operatorname{SO}_{N}(\mathbb{F})$
by the _spin group_ $\operatorname{Spin}_{N}(\mathbb{F})$, a simply connected
algebraic group with a faithful representation $\mathbb{S}_{N}$, the _spin
representation_. We have $\dim\mathbb{S}_{N}=2^{n}$ for $n=\lfloor
N/2\rfloor$, and if $N$ is odd, then the spin representation is irreducible.
If $N=2n$ is even, then the spin representation
$\mathbb{S}_{N}\simeq\mathbb{S}_{N}^{+}\oplus\mathbb{S}_{N}^{-}$ splits as the
direct sum of two irreducible representations called the half-spin
representations. They both have dimension
$\dim\mathbb{S}_{N}^{+}=\dim\mathbb{S}_{N}^{-}=2^{n-1}$. For odd $n=2m+1$, the
half-spin representations are both faithful and dual to each other; for even
$n=2m$, they are both self-dual and their images
$\operatorname{Spin}_{4m}^{\pm}(\mathbb{F})\;\subset\;\operatorname{GL}(\mathbb{S}_{4m}^{\pm})$
are called the half-spin groups. We show that spin or half-spin groups do not
occur for smooth nondivisible subvarieties of high enough codimension (see
theorem 8.3):
###### Theorem C.
Suppose that $X\subset A$ has ample normal bundle, is nondivisible and has
dimension $d<(g-1)/2$. Then the pair $(G_{X,\omega},\omega(\delta_{X}))$ is
not isomorphic to any of the above spin or half-spin groups with their spin or
half-spin representations unless
$(G_{X,\omega}^{\ast},\omega(\delta_{X}))\simeq(\operatorname{Spin}_{4m}^{\pm}(\mathbb{F}),\mathbb{S}_{4m}^{\pm})\quad\text{for
some $m\in\\{3,\dots,d\\}$},$
in which case $X$ has topological Euler characteristic of absolute value
$|e|=2^{2m-1}$ and is symmetric up to a translation, $d-m$ is even and
$d\geqslant(g-1)/4$.
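For instance, in the smallest case $m=3$ the exceptional pair is
$(\operatorname{Spin}_{12}^{\pm}(\mathbb{F}),\mathbb{S}_{12}^{\pm})$ with
$\dim\mathbb{S}_{12}^{\pm}=2^{5}=32$, so such a subvariety would have
$|e|=32$, dimension $d\geqslant 3$ with $d-3$ even, and $d\geqslant(g-1)/4$.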
The main theorem in section 1.2 now follows by combining theorems A, B, C, and
from this we also obtain the main theorem in section 1.1 by the analog of the
theorem of the fixed part given by theorem 4.10.
### 1.6. Conventions and notation
We always work over a field $k$ of characteristic zero. A _variety_ over $k$
is a separated finite type $k$-scheme, and a _subvariety_ is a closed
subvariety unless said otherwise. An _algebraic group_ is a finite type group
scheme over a field. For a locally free sheaf $\mathcal{E}$ (of finite rank)
on a variety $X$, we denote by
$\mathbb{P}(\mathcal{E}):=\operatorname{Proj}\operatorname{Sym}^{\bullet}\mathcal{E}^{\vee}$
the associated projective bundle. If $A$ is an abelian variety over $k$, we
denote by $\operatorname{Lie}A$ its tangent space at the identity and define
$\mathbb{P}_{A}:=\mathbb{P}((\operatorname{Lie}A)^{\vee})$. For a smooth
projective connected variety $X$, we denote by $\operatorname{Pic}^{0}(X)$ the
connected component of the identity in its Picard scheme. This is an abelian
variety, and we denote by $\operatorname{Alb}(X)$ its dual abelian variety.
Given a locally closed subvariety $Y$ of a variety $X$ over $k$, let
$\mathcal{C}_{Y/X}$ denote the _conormal sheaf_ of $Y$ in $X$, i.e., the
$\mathcal{O}_{Y}$-module $I/I^{2}$, where $I$ is the ideal sheaf of the closed
immersion $i\colon Y\to U$ for a suitable open subset $U\subset X$.
### Acknowledgments
We would like to thank Daniele Agostini, Benjamin Bakker, Yohan Brunebarbe,
Marco D’Addezio, and Olivier Debarre for helpful comments. A.J. gratefully
acknowledges support by the IHES where part of this work was completed. T.K.
was supported by the DFG research grant Kr 4663/2-1. C.L. was supported by the
DFG research grant Le 3093/3-2 and by the SMWK research grant SAXAG. M.M. was
supported by ANR grant ANR-18-CE40-0017.
## 2\. Gauss maps, positivity and nondegeneracy
In this section, we recall from the view of conormal geometry various notions
of positivity and nondegeneracy for subvarieties in abelian varieties. We
denote by $A$ an abelian variety over an algebraically closed field $k$ of
characteristic zero.
### 2.1. The stabilizer and the abelian variety generated
The _stabilizer_ of a subvariety $X\subset A$ is the algebraic subgroup
$\operatorname{Stab}_{A}(X)\subset A$ whose $k$-points are
$\operatorname{Stab}_{A}(X)(k)\;=\;\\{a\in A(k)\mid X+a=X\\}.$
Write $\operatorname{Stab}(X)=\operatorname{Stab}_{A}(X)$ if the ambient
abelian variety is clear from the context.
###### Definition 2.1.
We say $X\subset A$ is _nondivisible_ if it is integral and
$\operatorname{Stab}(X)=\\{0\\}$.
If $X\subset A$ is a connected subvariety, the _abelian subvariety generated
by $X$_ is defined to be the smallest abelian subvariety $\langle
X\rangle\subset A$ containing the image of the difference morphism $X\times
X\to A$, $(x,x^{\prime})\mapsto x-x^{\prime}$. Note that this image
$X-X\subset A$ is connected because $X\times X$ is so.
### 2.2. Conormal varieties and Gauss maps
Let us briefly recall the notion of conormal varieties and Gauss maps, which
will be crucial later. For abelian varieties, the cotangent bundle
$\Omega^{1}_{A}$ is a trivial bundle with fiber
$\textup{H}^{0}(A,\Omega^{1}_{A})={(\operatorname{Lie}{A})^{\vee}}$ of rank
$g=\dim A$. Consider the projection
$p\colon\quad\mathbb{P}(\Omega^{1}_{A})\;\longrightarrow\;\mathbb{P}_{A}=\mathbb{P}({(\operatorname{Lie}{A})^{\vee}}).$
If $\Lambda\subset\mathbb{P}(\Omega^{1}_{A})$ is a $(g-1)$-dimensional integral
subvariety, then the morphism
$\gamma_{\Lambda}\;:=\;p_{|\Lambda}\colon\quad\Lambda\;\longrightarrow\;\mathbb{P}_{A}$
is either dominant (and then generically finite, for dimension reasons) or not
dominant. We say that $\Lambda$ is _clean_ in the first case and _negligible_
in the second case. In the clean case we denote by $\deg\Lambda$ the generic
degree of the generically finite dominant morphism $\gamma_{\Lambda}$; in the
negligible case we formally put $\deg\Lambda=0$.
We want to apply these definitions to conormal varieties, for which we need
some more notation. For any subvariety $X\subset A$, its conormal sheaf
$\mathcal{C}_{X/A}$ fits in the exact sequence of coherent sheaves
$\mathcal{C}_{X/A}\stackrel{{\scriptstyle
i}}{{\longrightarrow}}\Omega^{1}_{A\rvert
X}\longrightarrow\Omega^{1}_{X}\longrightarrow 0.$
If the inclusion $X\subset A$ is a regular immersion, then $\mathcal{C}_{X/A}$ is locally free,
and if $X$ is moreover integral, then $i$ is injective. If $X$ is smooth, then
all three terms are locally free and the sequence is short exact.
###### Definition 2.2.
For a reduced subvariety $X\subset A$ we define its (projective) _conormal
variety_ $\Lambda_{X}\subset\mathbb{P}(\Omega^{1}_{A})$ to be the closure of
$\mathbb{P}(\mathcal{C}_{X^{\textup{reg}}/A})$ in $\mathbb{P}(\Omega^{1}_{A})$.
The _Gauss map_ of $X$ is the morphism
$\gamma_{X}\;:=\;\gamma_{\Lambda_{X}}\colon\quad\Lambda_{X}\;\longrightarrow\;\mathbb{P}_{A}.$
We denote by $\operatorname{pr}_{X}\colon\Lambda_{X}\to X$ the projection and
$\Lambda_{X,x}:=\operatorname{pr}_{X}^{-1}(x)$ for $x\in X(k)$.
###### Remark 2.3.
As we almost exclusively work with the projective conormal varieties and not
with affine ones, we will usually drop the adjective _projective_. We clearly
have:
1. (1)
The morphism
$\gamma_{X|\Lambda_{X,x}}\colon\Lambda_{X,x}\to\mathbb{P}_{A}$
is injective.
2. (2)
If $X$ is smooth at a point $x$, then
$\Lambda_{X,x}=\mathbb{P}(\mathcal{C}_{X/A,x})$.
3. (3)
If $X$ is smooth, then $\Lambda_{X}=\mathbb{P}(\mathcal{C}_{X/A})$.
The effect of isogenies on conormal varieties is easy to control. For an
integer $e\geqslant 1$ and an integral subvariety $X\subset A$ we denote by
$[e](X)\subset A$ its image under the isogeny $[e]\colon A\to A$. We will
always endow this image with the reduced subscheme structure, and we denote by
$e_{X}:=[e]_{\rvert X}\colon X\to[e](X)$ the finite morphism obtained by
restriction of the isogeny to the given subvariety. By abuse of notation, we
also denote by $[e]:A\times\mathbb{P}_{A}\to A\times\mathbb{P}_{A}$ the
induced morphism. Then we have:
###### Lemma 2.4.
Let $X\subset A$ be an integral subvariety, and let $Y=[e](X)\subset A$ for an
integer $e\geqslant 1$. Then we have an identity
$[e]_{\ast}\Lambda_{X}\;=\;\deg(e_{X})\cdot\Lambda_{Y}$
of cycles. In particular, if the subvariety $X\subset A$ is nondivisible, then
$[e]_{\ast}\Lambda_{X}=\Lambda_{Y}$.
###### Proof.
The first claim follows easily from the fact that by construction the conormal
variety to any integral subvariety is integral. The second claim is then clear
because the morphism $e_{X}\colon X\to Y$ is birational if $X$ is
nondivisible. ∎
###### Corollary 2.5.
Let $X\subset A$ be a smooth integral subvariety and $Y=[e](X)$ for an integer
$e\geqslant 1$. Then the fibers of
$\operatorname{pr}_{Y}\colon\Lambda_{Y}\to Y$ are pure of dimension
$\operatorname{codim}_{A}Y-1$.
###### Proof.
Lemma 2.4 gives a commutative diagram
$\begin{array}{ccc}\Lambda_{X}&\xrightarrow{\;[e]\;}&\Lambda_{Y}\\ {\scriptstyle\operatorname{pr}_{X}}\big\downarrow&&\big\downarrow{\scriptstyle\operatorname{pr}_{Y}}\\ X&\xrightarrow{\;e_{X}\;}&Y\end{array}$
where the horizontal arrows are finite morphisms, and if $X$ is smooth, then
the fibers of the morphism $\operatorname{pr}_{X}\colon\Lambda_{X}\to X$ are
pure of dimension $\operatorname{codim}_{A}X-1$. ∎
### 2.3. Positivity and nondegeneracy of subvarieties
We now discuss various notions of positivity and nondegeneracy for
subvarieties of an abelian variety. We say that an integral subvariety
$X\subset A$ is _degenerate_ if there exists a surjective morphism $\pi\colon
A\to B$ of abelian varieties with
$\dim\pi(X)<\min\\{\dim B,\dim X\\}.$
Otherwise, we say that $X$ is _nondegenerate_. Any closed point on the abelian
variety is a nondegenerate subvariety, and so is the abelian variety itself.
Also note that if the abelian variety $A$ is simple, then any integral
subvariety is nondegenerate. We say that a proper integral variety $X$ is _of
general type_ if there is a proper birational morphism $\nu\colon Y\to X$ from
a smooth proper connected variety $Y$ with big canonical bundle. For instance,
we have:
1. (1)
An integral effective divisor $X\subset A$ is nondegenerate if and only if it
is ample. A curve $X\subset A$ is nondegenerate if and only if it generates
$A$. See [Deb95, §1, examples].
2. (2)
For any elliptic curve $E$ and any simple abelian variety $B$ of dimension
$\geqslant 3$, Debarre has constructed in [Deb95, p. 189] a smooth subvariety
$X\subset A=E\times B$
of codimension $2$ which is nondegenerate but whose normal bundle is not
ample. The smooth subvariety is obtained by choosing a general ample divisor
$D\subset B$ and intersecting $E\times D$ with a general ample divisor in $A$.
3. (3)
For $i=1,2$, let $A_{i}$ be an abelian variety and $X_{i}\subset A_{i}$ a
nondegenerate integral subvariety. By considering the projections onto the
factors, one sees that $X_{1}\times X_{2}\subset A_{1}\times A_{2}$ is of
general type but degenerate.
###### Remark 2.6.
Nondegeneracy is invariant under isogenies: Let $f\colon A\to A^{\prime}$ be
an isogeny of abelian varieties over $k$. Then an integral subvariety
$X\subset A$ is nondegenerate if and only if $f(X)\subset A^{\prime}$ is.
In what follows we often consider the sum morphism $\sigma\colon X\times Y\to
A$ for reduced subvarieties $X,Y\subset A$, and we denote by $X+Y\subset A$
its image. For nondegenerate subvarieties we have the following result by
Debarre:
###### Lemma 2.7.
Let $X,Y\subset A$ be integral subvarieties.
1. (1)
If $X$ is nondegenerate, then $\dim(X+Y)=\min\\{\dim(X)+\dim(Y),\dim(A)\\}$.
2. (2)
If $X$ and $Y$ are both nondegenerate, then so is $X+Y\subset A$.
###### Proof.
See [Deb05, corollary 8.11]. ∎
The relations between the various notions of nondegeneracy and positivity that
will play a role in this paper can be summarized as follows, where for a smooth
subvariety $X\subset A$ we denote by $\mathcal{N}_{X/A}$ its normal bundle:

$X$ smooth and $\mathcal{N}_{X/A}$ ample $\;\Longleftrightarrow\;$ the Gauss
map $\gamma_{X}$ is a finite morphism $\;\Longrightarrow\;$ $X$ nondegenerate
and $X\neq A$ $\;\Longrightarrow\;$ $\operatorname{Stab}(X)$ finite
$\;\Longleftrightarrow\;$ $\Lambda_{X}$ clean $\;\Longleftrightarrow\;$ $X$ of
general type.

When $A$ is simple, "$X$ of general type" conversely implies that $X$ is
nondegenerate, and, for smooth $X$, that $\mathcal{N}_{X/A}$ is ample.
More precisely, we have:
###### Theorem 2.8.
Let $X\subset A$ be an integral subvariety with $0<\dim X<\dim A$.
1. (1)
The following are equivalent:
1. (a)
the conormal variety $\Lambda_{X}$ is clean;
2. (b)
the algebraic group $\operatorname{Stab}(X)$ is finite;
3. (c)
the variety $X$ is of general type.
2. (2)
If $X$ is nondegenerate, then $\operatorname{Stab}(X)$ is finite and $\langle
X\rangle=A$.
3. (3)
If $\gamma_{X}\colon\Lambda_{X}\to\mathbb{P}_{A}$ is a finite morphism, then
$X$ is nondegenerate.
4. (4)
Suppose $X$ is smooth. Then the normal bundle $\mathcal{N}_{X/A}$ is ample if
and only if the Gauss map
$\gamma_{X}\colon\Lambda_{X}\to\mathbb{P}_{A}$
is a finite morphism.
5. (5)
If $A$ is a simple abelian variety and $X$ is of general type, then $X$ is
nondegenerate. If $X$ is moreover smooth, then $\mathcal{N}_{X/A}$ is ample.
###### Proof.
(1) The equivalence (a) $\Leftrightarrow$ (b) is shown in [Wei15a, th. 1],
while (b) $\Leftrightarrow$ (c) follows from Ueno’s fibration theorem [Uen73,
th. 3.10], [Abr94, th. 3].
(2) For the finiteness of the stabilizer, denote by $p\colon A\to
B:=A/\operatorname{Stab}(X)$ the quotient morphism. The image $p(X)$ is a
proper subvariety of $B$, since by construction we have $p^{-1}(p(X))=X\neq A$.
The nondegeneracy of $X$ then forces $p_{\rvert X}\colon X\to p(X)$ to be
generically finite, and it follows that $\operatorname{Stab}(X)$ is finite as
desired. To
show that $\langle X\rangle=A$, consider the quotient morphism $q\colon A\to
A/\langle X\rangle$. The image $q(X)$ is a point, hence the nondegeneracy of
$X$ and the assumption $\dim X>0$ imply that $\dim A/\langle X\rangle=0$,
which shows that we have $\langle X\rangle=A$.
(3) We prove the contrapositive. If $X\subset A$ is degenerate, then there is
a surjective morphism $\pi\colon A\to B$ of abelian varieties such that $\dim
Y<\min\\{\dim B,\dim X\\}$, where $Y:=\pi(X)$. We have the following
commutative diagram of $\mathcal{O}_{X}$-modules with exact rows:
$\begin{array}{ccccccc}(\pi^{\ast}\mathcal{C}_{Y/B})_{\rvert X}&\xrightarrow{\;j\;}&(\pi^{\ast}\Omega^{1}_{B})_{\rvert X}&\longrightarrow&(\pi^{\ast}\Omega^{1}_{Y})_{\rvert X}&\longrightarrow&0\\ {\scriptstyle\varepsilon}\big\downarrow&&\big\downarrow{\scriptstyle\textup{d}\pi}&&\big\downarrow&&\\ \mathcal{C}_{X/A}&\xrightarrow{\;i\;}&\Omega^{1}_{A\rvert X}&\longrightarrow&\Omega^{1}_{X}&\longrightarrow&0\end{array}$
where $\textup{d}\pi$ is the pull-back of differential forms along $\pi$. Here
$i$ is injective over the smooth locus $X^{\textup{reg}}\subset X$, and
likewise $j$ is injective over $\pi^{-1}(Y^{\textup{reg}})$: Indeed, the short
exact sequence
$0\longrightarrow(\mathcal{C}_{Y/B})_{\rvert
Y^{\textup{reg}}}\longrightarrow\Omega^{1}_{B\rvert
Y^{\textup{reg}}}\longrightarrow\Omega^{1}_{Y^{\textup{reg}}}\longrightarrow
0$
of $\mathcal{O}_{Y^{\textup{reg}}}$-modules is locally split because the
$\mathcal{O}_{Y^{\textup{reg}}}$-module $\Omega^{1}_{Y^{\textup{reg}}}$ is
locally free; the pull-back along $\pi$ of the above short exact sequence
hence stays exact. It follows that $\varepsilon$ is also injective over the
nonempty open subset
$U:=X^{\textup{reg}}\cap\pi^{-1}(Y^{\textup{reg}}).$
The hypothesis $\dim Y<\dim X$ implies that the induced morphism $\pi_{\rvert
U}\colon U\to Y^{\textup{reg}}$ is not generically finite. Thus, for $y$
ranging over a dense open subset of $Y^{\textup{reg}}$, the fiber
$Z:=\pi^{-1}(y)\cap U$ is positive-dimensional. Pick a nonzero vector
$v\in\mathcal{C}_{Y/B,y}$, which exists because $\dim Y<\dim B$. Then
$0\;\neq\;j(v)\;\in\;\bigcap_{x\in Z}\mathcal{C}_{X/A,x}.$
Thus, if we denote by $F:=\operatorname{pr}_{X}(\gamma_{X}^{-1}([j(v)]))\subset
X$ the image of $\gamma_{X}^{-1}([j(v)])$ under the projection
$\operatorname{pr}_{X}\colon\Lambda_{X}\to X$, then the subset $Z$ is contained
in $F$. This shows that the dimension of $\gamma_{X}^{-1}([j(v)])$ is positive.
(4) Since $X$ is smooth we have $\Lambda_{X}=\mathbb{P}(\mathcal{C}_{X/A})$.
The normal bundle $\mathcal{N}_{X/A}$ is globally generated, thus the
equivalence follows from [Laz04b, Example 6.1.5].
(5) When $A$ is a simple abelian variety, any integral subvariety is
nondegenerate, and the ampleness of the normal bundle of a smooth subvariety
$X$ in $A$ follows from [Har71, prop. 4.1]. ∎
### 2.4. Symmetric powers of curves in abelian varieties
We show here that symmetric powers of a (smooth projective) curve $C$ cannot
be embedded as a complete intersection of ample divisors as claimed in section
1.4. Recall that the curve $C$ has gonality $\geqslant n+1$ if and only if the
sum map $\operatorname{Sym}^{n}C\to
X:=C+\cdots+C\subset\operatorname{Pic}^{0}(C)$ is an isomorphism. If so the
normal bundle of $X$ is ample [Deb95, §1, Examples (2)]. Imposing further
positivity properties on the normal bundle is far more restrictive:
###### Proposition 2.9.
Let $C\subset A$ be a smooth irreducible projective curve such that the sum
morphism $\operatorname{Sym}^{n}C\to X:=C+\cdots+C\subset A$ is an isomorphism
for some $n\geqslant 2$. Then $C$ is nonhyperelliptic of genus $g\geqslant 3$
and the following hold:
1. (1)
If the normal bundle
$\mathcal{N}_{X/A}=\mathcal{V}_{1}\oplus\dots\oplus\mathcal{V}_{r}$ is a
direct sum of ample vector bundles, then
$n\leqslant\max_{i=1,\dots,r}\operatorname{rk}\mathcal{V}_{i}+1.$
2. (2)
The normal bundle $\mathcal{N}_{X/A}$ is a direct sum of ample line bundles if
and only if $g=3$, $n=2$, and $A$ is isomorphic to
$\operatorname{Pic}^{0}(C)$.
###### Proof.
By Lefschetz’s principle, we may assume $k=\mathbb{C}$. First of all, the
curve $C$ is nonhyperelliptic of genus $g\geqslant 3$. Otherwise, $C$ would be
symmetric when suitably embedded in its Jacobian. In particular, the sum
morphism would contract the antidiagonal and thus would not induce an
isomorphism $\operatorname{Sym}^{n}C\simeq C+\cdots+C$.
(1) Arguing by contradiction, suppose the inequality in the statement does not
hold. Then we can apply the Barth-Lefschetz theorem [Deb95, th. 4.5] to obtain
isomorphisms
$\textup{H}^{i}(A)\simeq\textup{H}^{i}(X),\qquad i=1,2,$
of rational cohomology groups. On the other hand, the computation of
cohomology of symmetric powers of curves [Mac62, 1.2] yields the following
expressions:
$\displaystyle\textup{H}^{1}(X)=\textup{H}^{1}(\operatorname{Sym}^{n}C)\simeq\textup{H}^{1}(C^{n})^{\mathfrak{S}_{n}}=\textup{H}^{1}(C),$
$\displaystyle\textup{H}^{2}(X)=\textup{H}^{2}(\operatorname{Sym}^{n}C)\simeq\textup{H}^{2}(C^{n})^{\mathfrak{S}_{n}}=\operatorname{Alt}^{2}\textup{H}^{1}(C)\oplus\textup{H}^{2}(C).$
Recalling the equality
$\textup{H}^{2}(A)=\operatorname{Alt}^{2}\textup{H}^{1}(A)$ and that
$\textup{H}^{1}(A)\simeq\textup{H}^{1}(X)=\textup{H}^{1}(C)$, the extra
summand $\textup{H}^{2}(C)$ in $\textup{H}^{2}(X)$ yields a contradiction.
(2) If $C$ has genus $g=3$, the subvariety
$C+C\subset\operatorname{Pic}^{0}(C)$ is a theta divisor and hence ample.
Conversely, suppose that the normal bundle $\mathcal{N}_{X/A}$ is a direct sum
of ample line bundles. We first claim that then $A$ is isogenous to
$\operatorname{Pic}^{0}(C)$. Indeed, as above we have isomorphisms
$\textup{H}^{1}(A)\simeq\textup{H}^{1}(X)\simeq\textup{H}^{1}(C).$
Now we cannot conclude as in (1) because the Barth-Lefschetz theorem here only
says that $\textup{H}^{2}(A)\to\textup{H}^{2}(X)$ is injective. Instead, write
$\mathcal{N}_{X/A}=\mathcal{L}_{1}\oplus\cdots\oplus\mathcal{L}_{g-2}$ for
ample line bundles $\mathcal{L}_{i}$ on $X$. By looking at the short exact
sequence
$0\longrightarrow\textup{T}_{X}\longrightarrow\operatorname{Lie}A\otimes\mathcal{O}_{X}\longrightarrow\mathcal{N}_{X/A}=\mathcal{L}_{1}\oplus\cdots\oplus\mathcal{L}_{g-2}\longrightarrow
0,$
we see that the line bundles $\mathcal{L}_{i}$ are globally generated and
(2.1)
$\mathcal{L}_{1}\otimes\cdots\otimes\mathcal{L}_{g-2}\simeq\mathcal{K}_{X}$
where $\mathcal{K}_{X}=\operatorname{Alt}^{2}\Omega^{1}_{X}$ is the canonical
bundle on $X$. We identify $X$ with $\operatorname{Sym}^{2}C$ and write
$\pi\colon C\times C\to X$ for the quotient morphism. Since $\pi$ ramifies
exactly on the diagonal $\Delta$ of $C\times C$, we have
$\pi^{\ast}\mathcal{K}_{X}=\mathcal{K}_{C\times C}(-\Delta)$. Let us fix a
point $p\in C(k)$ and consider the embedding $f\colon C\to C\times C$,
$x\mapsto(x,p)$. Then
(2.2) $f^{\ast}\pi^{\ast}\mathcal{K}_{X}=\mathcal{K}_{C}(-p).$
On the other hand, for $i=1,\dots,g-2$, the line bundle
$\mathcal{M}_{i}:=f^{\ast}\pi^{\ast}\mathcal{L}_{i}$ on $C$ is ample and
globally generated. Moreover, the curve $C$ being nonhyperelliptic, we
necessarily have $\deg\mathcal{M}_{i}\geqslant 3$. By combining (2.1) and
(2.2) and then by taking degrees, we obtain the inequality
$2g-3=\deg\mathcal{K}_{C}(-p)=\sum_{i=1}^{g-2}\deg\mathcal{M}_{i}\geqslant
3(g-2).$
This forces $g=3$. For a suitable Abel-Jacobi embedding
$C\hookrightarrow\operatorname{Pic}^{0}(C)$, there exists an isogeny
$\varphi\colon\operatorname{Pic}^{0}(C)\to A$ such that the following diagram
commutes:
$\begin{array}{ccccc}\operatorname{Sym}^{2}C&\xrightarrow{\;\sim\;}&\Theta&\subset&\operatorname{Pic}^{0}(C)\\ \big\|&&\big\downarrow{\scriptstyle\sim}&&\big\downarrow{\scriptstyle\varphi}\\ \operatorname{Sym}^{2}C&\xrightarrow{\;\sim\;}&X&\subset&A\end{array}$
Here the leftmost horizontal arrows are induced by the sum and
$\Theta\subset\operatorname{Pic}^{0}(C)$ is a theta divisor. The preimage
$\varphi^{-1}(X)$ is smooth, thus its connected components are irreducible. As
$\Theta$ is one of them, the others are $\Theta+a$ for
$a\in\operatorname{Ker}\varphi$. Since any two translates of an ample divisor
meet, we have $\Theta=\varphi^{-1}(X)$. But the isogeny $\varphi$ induces an
isomorphism $\Theta\simeq X$, thus $\varphi$ must be injective. ∎
###### Corollary 2.10.
Let $C\subset A$ be a smooth irreducible projective curve such that the sum
morphism $\operatorname{Sym}^{n}C\to X:=C+\cdots+C\subset A$ is an isomorphism
for some $n\geqslant 2$. Then $C$ is nonhyperelliptic of genus $g\geqslant 3$
and the following are equivalent:
1. (1)
The subvariety $X\subset A$ is a complete intersection of ample divisors.
2. (2)
We have $g=3$, $n=2$, and $A$ is isomorphic to $\operatorname{Pic}^{0}(C)$.
###### Proof.
For complete intersections of ample divisors, the normal bundle is a direct
sum of ample line bundles. Hence, proposition 2.9 (2) applies. ∎
As an amusing aside, of no use in what follows, note that proposition 2.9
implies the classical bound for the gonality of a smooth projective curve:
###### Corollary 2.11.
A smooth projective curve of genus $g$ has gonality $\leqslant(g+3)/2$.
###### Proof.
As already mentioned, the curve $C$ has gonality $\geqslant n+1$ if and only
if the sum morphism $\operatorname{Sym}^{n}C\to
X:=C+\cdots+C\subset\operatorname{Pic}^{0}(C)$ is an isomorphism, and if this
is the case, then $X$ has ample normal bundle in $\operatorname{Pic}^{0}(C)$.
Since the normal bundle has rank $g-n$, proposition 2.9 (1) implies
$n\leqslant g-n+1$, that is, $n+1\leqslant(g+3)/2$. ∎
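For instance, for $g=4$ the bound gives gonality $\leqslant 3$: every smooth
projective curve of genus $4$ is hyperelliptic or trigonal.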
In particular, proposition 2.9 (1) is sharp in the two extremal cases: that of
an indecomposable ample normal bundle and that of a sum of ample line bundles.
### 2.5. Bounds for the topological Euler characteristic
We now pass to some numerics concerning the topological Euler characteristic
of complete intersections. To ease notation below, we define $g:=\dim A$. For
a smooth subvariety $X\subset A$, let $e_{X}$ denote its topological Euler
characteristic. By definition it is the degree of the top Chern class of the
tangent bundle $T_{X}$ of $X$. Consider the short exact sequence of vector
bundles on $X$,
$0\longrightarrow T_{X}\longrightarrow T_{A\rvert
X}\longrightarrow\mathcal{N}_{X/A}\longrightarrow 0.$
Since the total Chern class is multiplicative in short exact sequences and the
tangent bundle of $A$ is trivial, we have
$c(T_{X})=c(T_{A\rvert
X})c(\mathcal{N}_{X/A})^{-1}=c(\mathcal{N}_{X/A})^{-1}.$
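As a sanity check, if $X=D$ is a smooth ample divisor, then
$\mathcal{N}_{X/A}=\mathcal{O}(D)_{\rvert X}$ and
$c(\mathcal{N}_{X/A})^{-1}=1-D+D^{2}-\cdots$, so that
$e_{X}=(-1)^{g-1}D^{g-1}\cdot X=(-1)^{g-1}D^{g}$; this matches lemma 2.14
below with $r=1$.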
First of all, note that we have the following lower bound whenever the normal
bundle is ample.
###### Lemma 2.12.
Let $X\subset A$ be a $d$-dimensional smooth subvariety with ample normal
bundle. Then
$|e_{X}|\geqslant\max\\{g,2^{\min\\{d,\lfloor\sqrt{g-1}\rfloor\\}}\\}.$
###### Proof.
We may suppose $k=\mathbb{C}$. By definition the inverse of the total Chern
class is the total Segre class. We have
$|e_{X}|=(-1)^{d}s_{d}(\mathcal{N}_{X/A})=s_{d}(\mathcal{N}_{X/A}^{\vee})$
where $d=\dim X$ and $s_{d}$ is the $d$-th Segre class. Now the normal bundle
$\mathcal{N}_{X/A}$ is ample and globally generated. Since
$\textup{H}^{1}(X,\mathbb{C})\neq 0$ we have $|e_{X}|\geqslant g$ by [BSS93,
Theorem 4]. According to [EIL00, Prop. 2.4] we also have $|e_{X}|\geqslant
2^{\min\\{d,\lfloor\sqrt{g-1}\rfloor\\}}$ because the cotangent bundle of $X$
is nef. (Beware that in both references the authors adopt the convention dual
to the one in [Ful98] for the definition of Segre classes.) ∎
The previous lower bound is doubtlessly not sharp. Indeed for a smooth
projective curve $X$ generating $A$ we have $|e_{X}|\geqslant 2g-2$. For
surfaces we have:
###### Lemma 2.13.
Let $X\subset A$ be a smooth projective surface generating $A$ and with finite
stabilizer. Then
$e_{X}\geqslant 3g-3.$
###### Proof.
Write $c_{1}=c_{1}(T_{X})$ and $c_{2}=c_{2}(T_{X})=e_{X}$ and
$\chi=\chi(X,\mathcal{O}_{X})$ as usual. By Theorem 2.8 the surface $X$ is of
general type. Thus the Bogomolov-Miyaoka-Yau inequality gives
$c_{1}^{2}\leqslant 3c_{2}$ which is equivalent to $3\chi\leqslant c_{2}$ by
Noether’s formula. On the other hand, let us write
$q=h^{1}(X,\mathcal{O}_{X})$ and $p=h^{2}(X,\mathcal{O}_{X})$ so that
$\chi=1-q+p$. The surface $X$ is minimal, since an abelian variety contains no
rational curves; thus we can apply the inequality
$p\geqslant 2q-4$ (see Beauville’s appendix to [Deb82] for a proof), which is
equivalent to $\chi\geqslant q-3$. Combining these inequalities yields
$c_{2}\geqslant 3(q-1)$. Since $X$ generates $A$ by hypothesis, we have
$q\geqslant g$ which concludes the proof. ∎
When the subvariety is a complete intersection of ample divisors the previous
lower bounds can be drastically improved. In order to show this, for integers
$n\geqslant 2$ and $r\in\\{1,\dots,n-1\\}$, consider the following subset of
partitions of $n$,
$P(n,r)\;:=\;\\{a=(a_{1},\dots,a_{r})\in\mathbb{Z}^{r}\mid
a_{1},\dots,a_{r}\geqslant 1,a_{1}+\cdots+a_{r}=n\\}.$
Note that $P(n,r)$ has cardinality $\binom{n-1}{n-r}$.
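For example, $P(4,2)=\\{(1,3),(2,2),(3,1)\\}$ has cardinality $\binom{3}{2}=3$.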
###### Lemma 2.14.
Let $X$ be a smooth complete intersection of ample divisors
$D_{1},\dots,D_{r}$ in $A$. Then
$e_{X}\;=\;(-1)^{\dim X}\sum_{a\in P(g,r)}D_{1}^{a_{1}}\cdots D_{r}^{a_{r}}.$
###### Proof.
The hypothesis of $X$ being a complete intersection of the divisors
$D_{1},\dots,D_{r}$ implies that the normal bundle $\mathcal{N}_{X/A}$ is the
direct sum of (the restriction to $X$ of) the line bundles
$\mathcal{O}(D_{1}),\dots,\mathcal{O}(D_{r})$. In particular,
$c(\mathcal{N}_{X/A})=c(\mathcal{O}(D_{1}))\cdots
c(\mathcal{O}(D_{r}))=(1+D_{1})\cdots(1+D_{r})\in\operatorname{CH}(X).$
By inverting formally $1+D_{i}$ we find the following expression
$c(T_{X})=\sum_{n=0}^{g-r}(-1)^{n}\sum_{\begin{subarray}{c}a_{1},\dots,a_{r}\geqslant
0\\\ a_{1}+\cdots+a_{r}=n\end{subarray}}D_{1}^{a_{1}}\cdots
D_{r}^{a_{r}}\in\operatorname{CH}(X).$
Viewing this expression in the Chow ring of $A$ amounts to multiplying it by
$D_{1}\cdots D_{r}$; we then conclude by taking the piece of degree $g$. ∎
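For instance, a curve cut out by two ample divisors in an abelian threefold
($g=3$, $r=2$) has
$e_{X}\;=\;-\left(D_{1}^{2}D_{2}+D_{1}D_{2}^{2}\right),$
summing over $P(3,2)=\\{(2,1),(1,2)\\}$.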
Recall that, for an ample divisor $D\subset A$, the self-intersection $D^{g}$
is positive and divisible by $g!$, as the ratio $D^{g}/g!$ is given by
$h^{0}(A,\mathcal{O}(D))$.
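For instance, for a principal polarization $\Theta$ we have
$h^{0}(A,\mathcal{O}(\Theta))=1$ and hence $\Theta^{g}=g!$, so the bound in the
following lemma is attained.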
###### Lemma 2.15.
For ample divisors $D_{1},\dots,D_{g}\subset A$, we have $D_{1}\cdots
D_{g}\geqslant g!$.
###### Proof.
The Khovanskii-Teissier inequality [Laz04a, Theorem 1.6.1] states that the
lower bound $(D_{1}\cdots D_{g})^{g}\geqslant D_{1}^{g}\cdots D_{g}^{g}$
holds. Since each factor on the right-hand side is a positive multiple of
$g!$, this concludes the proof. ∎
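For instance, for $g=2$ the Khovanskii-Teissier inequality gives
$(D_{1}D_{2})^{2}\geqslant D_{1}^{2}\cdot D_{2}^{2}\geqslant 2!\cdot 2!=4$ and
hence $D_{1}D_{2}\geqslant 2$.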
###### Proposition 2.16.
Let $X\subset A$ be a smooth complete intersection of ample divisors of
dimension $d\geqslant 1$. Then $e_{X}$ is even and
$|e_{X}|\geqslant g!\tbinom{g-1}{d}.$
###### Proof.
By assumption $X$ is a complete intersection of ample divisors, say
$D_{1},\dots,D_{r}$ where $r=g-d$ is the codimension of $X$. Lemma 2.14 shows
$e_{X}\;=\;(-1)^{d}\sum_{a\in P(g,r)}D_{1}^{a_{1}}\cdots D_{r}^{a_{r}}.$
Since the divisors $D_{1},\dots,D_{r}$ are ample, by lemma 2.15 we have
$D_{1}^{a_{1}}\cdots D_{r}^{a_{r}}\geqslant g!$ for each $a\in P(g,r)$. Since
the cardinality of $P(g,g-d)$ is $\binom{g-1}{d}$, the inequality in the
statement follows. For the parity of $e_{X}$, by the Lefschetz principle, we
may assume $k=\mathbb{C}$. Then each
$[D_{i}]^{a_{i}}\in\textup{H}^{2a_{i}}(A,\mathbb{Z})$ is divisible by
$a_{i}!$. Since $d\geqslant 1$, for each $a\in P(g,r)$ we have $a_{i}\geqslant
2$ for some $i$, thus we conclude that $e_{X}$ is even. ∎
By Proposition 2.16, the absolute value of the Euler characteristic of a
smooth connected complete intersection of ample divisors in $A$ is never equal
to $27$. We now prove that $|e_{X}|\neq 56$, except in the case of curves in
abelian surfaces and abelian threefolds (in which case there are examples).
###### Corollary 2.17.
If $X\subset A$ is a smooth complete intersection of ample divisors of
dimension $d\geq 1$ and $(d,g)\neq(1,2),(1,3)$, then $|e_{X}|\neq 56$.
###### Proof.
Proposition 2.16 implies $|e_{X}|\geqslant g!\binom{g-1}{d}$ which settles the
matter for $g\geqslant 5$. On the other hand, if $X$ is itself a divisor, that
is $d=g-1$, then $|e_{X}|=X^{g}$ is divisible by $g!$. The only two cases left
are $(d,g)=(1,4),(2,4)$ for which $g!\binom{g-1}{d}=72$. ∎
Note that proposition 2.16 furnishes another proof of corollary 2.10. Indeed
the $n$-th symmetric power of a smooth projective curve of genus $g\geqslant
2$ has topological Euler characteristic of absolute value $\binom{2g-2}{n}$;
see [Mac62, 4.4]. Using that the gonality is $\leqslant(g+3)/2$ we conclude
because, for $g\geqslant 4$ and $n\leqslant(g+1)/2$, we have
$\tbinom{2g-2}{n}<g!\tbinom{g-1}{n}.$
## 3\. Perverse sheaves on abelian varieties
In this section, we collect some general results about perverse sheaves on
abelian varieties. We work over a field $k$ with $\operatorname{char}(k)=0$,
but as in [LS20, section 3] we do not require this field to be algebraically
closed; for the relation with monodromy groups we will later need to work over
function fields. For any variety $X$ over $k$ we denote by
$\textup{Perv}(X,\mathbb{F})\;\subset\;\textup{D}^{b}_{c}(X,\mathbb{F})$
the abelian category of perverse sheaves with coefficients in
$\mathbb{F}=\overline{\mathbb{Q}}_{\ell}$ for a fixed prime number $\ell$. For
$k=\mathbb{C}$, we will later also consider perverse sheaves in the analytic
sense with coefficients in $\mathbb{F}=\mathbb{C}$, and we will use the above
notation also in this case. The results below work both in the $\ell$-adic
setting over any field $k$ and in the analytic setting with
$k=\mathbb{F}=\mathbb{C}$. We let $\pi_{1}(A,0)$ be the étale resp.
topological fundamental group in the two settings, with the profinite resp.
discrete topology, and write
$\Pi(A,\mathbb{F})=\operatorname{Hom}(\pi_{1}(A,0),\mathbb{F}^{\times})$ for
the group of its continuous characters.
### 3.1. Convolution on abelian varieties
For convenience, let us briefly recall the Tannakian description of perverse
sheaves on the abelian variety $A$ given in [KW15c]. The sum morphism
$\sigma\colon A\times A\to A$ induces a convolution product
$*:{\mathrm{D}^{b}_{c}}(A,\mathbb{F})\times{\mathrm{D}^{b}_{c}}(A,\mathbb{F})\longrightarrow{\mathrm{D}^{b}_{c}}(A,\mathbb{F}),\quad
K_{1}*K_{2}:=R\sigma_{*}\left(K_{1}\boxtimes K_{2}\right)$
which endows the derived category with the structure of a rigid symmetric
monoidal category [Wei11] (in loc. cit. this is stated only over algebraically
closed fields $k$, but the proof works in the general case without changes).
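For orientation, note that on skyscraper sheaves the convolution is simply
addition of the supporting points: $\delta_{a}*\delta_{b}=\delta_{a+b}$ for
$a,b\in A(k)$, and $\delta_{0}$ is the unit object for $*$.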
The subcategory of perverse sheaves is not stable under the convolution
product, but it becomes so after passing to a certain quotient category. To
explain this, recall that for any $P\in\textup{Perv}(A,\mathbb{F})$ we have
$\chi(A,P)\;:=\;\sum_{i\in\mathbb{Z}}\,(-1)^{i}\dim_{\mathbb{F}}H^{i}(A,P)\;\geqslant\;0.$
Indeed, over $k=\mathbb{C}$ this was observed by Franecki and Kapranov [FK00,
cor. 1.4]; the general case can be reduced to the complex case by choosing a
model over some algebraically closed subfield of $k$ which embeds into the
complex numbers, see lemma A.1. The additivity of the Euler characteristic in
short exact sequences then implies that perverse sheaves of Euler
characteristic zero form a Serre subcategory
$\textup{S}(A,\mathbb{F})\;:=\;\\{P\in\textup{Perv}(A,\mathbb{F})\mid\chi(A,P)=0\\}\;\subset\;\textup{Perv}(A,\mathbb{F})$
inside the abelian category of perverse sheaves. Let
$\textup{T}(A,\mathbb{F})\subset{\mathrm{D}^{b}_{c}}(A,\mathbb{F})$ be the
full subcategory of sheaf complexes whose perverse cohomology sheaves are in
$\textup{S}(A,\mathbb{F})$; its objects will be called negligible sheaf
complexes.
###### Proposition 3.1.
The triangulated quotient category
$\overline{{\mathrm{D}^{b}_{c}}}(A,\mathbb{F}):={\mathrm{D}^{b}_{c}}(A,\mathbb{F})/\textup{T}(A,\mathbb{F})$
inherits from the perverse $t$-structure on the derived category a
$t$-structure whose heart
$\overline{\textup{Perv}}(A,\mathbb{F})\;\subset\;\overline{{\mathrm{D}^{b}_{c}}}(A,\mathbb{F})$
is equivalent to the abelian quotient category
$\textup{Perv}(A,\mathbb{F})/\textup{S}(A,\mathbb{F})$. It also inherits the
structure of a rigid symmetric monoidal category with respect to a convolution
product
$*:\overline{{\mathrm{D}^{b}_{c}}}(A,\mathbb{F})\times\overline{{\mathrm{D}^{b}_{c}}}(A,\mathbb{F})\longrightarrow\overline{{\mathrm{D}^{b}_{c}}}(A,\mathbb{F})$
induced by the convolution product on the derived category. On the
triangulated quotient category, this product is $t$-exact in both of its
arguments. Thus, it restricts to a product
$*:\overline{\textup{Perv}}(A,\mathbb{F})\times\overline{\textup{Perv}}(A,\mathbb{F})\longrightarrow\overline{\textup{Perv}}(A,\mathbb{F}),$
and $\overline{\textup{Perv}}(A,\mathbb{F})$ is a neutral Tannaka category
with respect to this product.
###### Proof.
Fix an algebraic closure $K\supset k$. Then the functor
${\mathrm{D}^{b}_{c}}(A,\mathbb{F})\to{\mathrm{D}^{b}_{c}}(A_{K},\mathbb{F})$
is exact for the perverse $t$-structure, compatible with the convolution
product, and preserves the subcategories of negligible objects. Hence, the
result follows from the statement over algebraically closed fields in [KW15c,
Krä14]; note that by Deligne’s internal characterization of neutral Tannaka
categories [Cou20, §6.4], it suffices to construct a fiber functor on every
finitely generated tensor subcategory. ∎
In what follows, by an abelian tensor category we mean a rigid symmetric
monoidal abelian $\mathbb{F}$-linear category.
### 3.2. Tannaka groups of perverse sheaves
Let $\mathcal{C}\subset\overline{\textup{Perv}}(A,\mathbb{F})$ be a full
abelian tensor subcategory and
$\omega\colon\quad\mathcal{C}\;\longrightarrow\;\operatorname{Vect}(\mathbb{F})$
a given fiber functor on this subcategory. The existence of such fiber
functors is guaranteed by proposition 3.1; there is no canonical choice of
such a fiber functor, but any two fiber functors on a neutral Tannaka category
over an algebraically closed field $\mathbb{F}$ are noncanonically isomorphic
[DMOS82, th. 3.2.(b)]. Once we have chosen a fiber functor, we get an
equivalence of abelian tensor categories between $\mathcal{C}$ and the
category $\textup{Rep}_{\mathbb{F}}(G_{\omega}(\mathcal{C}))$ of finite-
dimensional algebraic representations of the affine group scheme
$G_{\omega}(\mathcal{C})\;:=\;\operatorname{Aut}^{\otimes}(\omega)$
over $\mathbb{F}$ called the _Tannaka group_ of $\mathcal{C}$. We are
interested in algebraic quotients of this proalgebraic group scheme:
###### Definition 3.2.
For any $P\in\mathcal{C}$, we obtain from the above construction an affine
algebraic group
$G_{\omega}(P)\;:=\;\operatorname{Im}\bigl{(}G_{\omega}(\mathcal{C})\to\operatorname{GL}(\omega(P))\bigr{)}$
over $\mathbb{F}$ with a faithful representation on the vector space
$\omega(P)\in\operatorname{Vect}(\mathbb{F})$ whose dimension is the Euler
characteristic
$\dim_{\mathbb{F}}(\omega(P))\;=\;\chi(A,P),$
see [KW15c, proof of cor. 4.2]. Let us denote by $\iota\colon\langle
P\rangle\hookrightarrow\mathcal{C}$ the smallest abelian tensor subcategory
which contains the object $P$ and is stable under subobjects and quotients.
Then $G_{\omega}(P)=G_{\omega\circ\iota}(\langle P\rangle)$ for the fiber
functor $\omega\circ\iota\colon\langle
P\rangle\to\operatorname{Vect}(\mathbb{F})$ and we have a commutative diagram
of abelian tensor categories:
$\begin{array}{ccc}\langle P\rangle&\xrightarrow{\;\sim\;}&\textup{Rep}_{\mathbb{F}}(G_{\omega}(P))\\ {\scriptstyle\iota}\big\downarrow&&\big\downarrow\\ \mathcal{C}&\xrightarrow{\;\sim\;}&\textup{Rep}_{\mathbb{F}}(G_{\omega}(\mathcal{C}))\end{array}$
If $P\in\mathcal{C}$ is a simple object, then the faithful representation
$\omega(P)\in\textup{Rep}_{\mathbb{F}}(G_{\omega}(P))$ is irreducible and then
$G_{\omega}(P)$ is reductive by [Hum78, 19.1 prop. (b)]. This is in particular
the case when $P=\delta_{X}$ is the intersection complex of an integral
subvariety $X\subset A$, in which case we write
$G_{X,\omega}:=G_{\omega}(\delta_{X}).$
For the rest of this section, we fix a full abelian tensor subcategory
$\mathcal{C}\subset\overline{\textup{Perv}}(A,\mathbb{F})$ and a fiber functor
$\omega\colon\mathcal{C}\to\operatorname{Vect}(\mathbb{F})$. When there is no
risk of confusion, we also write $\omega$ for the restriction of the given
fiber functor to any subcategory of $\mathcal{C}$.
### 3.3. The derived group of the connected component
It is often convenient to pass from arbitrary reductive groups to connected
semisimple groups: For a reductive group $G$, let $G^{\circ}\subset G$ be its
connected component of the identity, and note that the derived group
$G^{\ast}\;:=\;[G^{\circ},G^{\circ}]$
is a connected semisimple group. For the reductive Tannaka groups from section
3.2 we will understand the connected components and the center in terms of
direct images of perverse sheaves under the morphisms $[d]\colon A\to
A,x\mapsto dx$ for $d\in\mathbb{N}$ and $t_{a}\colon A\to A,x\mapsto x+a$ for
$a\in A(k)$. For a perverse sheaf $Q\in\textup{Perv}(A,\mathbb{F})$ and a
point $a\in A(k)$, we define
$Q_{a}\;:=\;t_{a*}Q$
and we say that $Q$ is nondivisible if it is simple and satisfies
$Q_{a}\not\simeq Q$ for all $a\in A(k)$ with $a\neq 0$. We denote by
$\Gamma_{P}:=\\{a\in A(k)_{\textup{tors}}\mid\delta_{a}\in\langle P\rangle\\}$
the abelian group of torsion points whose associated skyscraper sheaf appears
in the Tannaka category $\langle P\rangle$ generated by a perverse sheaf
$P\in\mathcal{C}$. Note that $\Gamma_{P}$ is finite: Indeed, every skyscraper
sheaf $\delta_{a}\in\langle P\rangle$ defines a character of the Tannaka group
$G_{\omega}(P)$ and algebraic groups have only finitely many torsion
characters. In fact the first part of the following result shows that all
torsion characters of the Tannaka group are given by skyscraper sheaves in
torsion points:
###### Proposition 3.3.
Let $k$ be algebraically closed and $P\in\mathcal{C}$ a simple perverse sheaf.
1. (1)
The group of connected components of the Tannaka group $G:=G_{\omega}(P)$ is
given by
$G/G^{\circ}\;\simeq\;\operatorname{Hom}(\Gamma_{P},\mathbb{G}_{m}).$
2. (2)
Fix an integer $d\geqslant 1$ with $d\cdot\Gamma_{P}=\\{0\\}$. Then for all
$Q,Q^{\prime}\in\langle P\rangle$ we have:
$\omega(Q)_{\rvert G^{\circ}}\;\simeq\;\omega(Q^{\prime})_{\rvert G^{\circ}}\quad\Longleftrightarrow\quad[d]_{*}Q\;\simeq\;[d]_{*}Q^{\prime},$
$\omega(Q)_{\rvert G^{\circ}}\ \textnormal{is irreducible}\quad\Longleftrightarrow\quad Q\ \textnormal{is nondivisible}.$
3. (3)
Let $\det(P)\in\langle P\rangle$ be the unique simple perverse sheaf which
corresponds to the top wedge power of $V:=\omega(P)$. Then $\det(P)$ is a
skyscraper sheaf. If $V_{\rvert G^{\circ}}$ is irreducible, we have:
$\textnormal{\em$G^{\circ}$
semisimple}\quad\Longleftrightarrow\quad\textnormal{\em$\operatorname{Supp}(\det(P))$
is a torsion point}.$
###### Proof.
For $k=\mathbb{C}$, parts (1) and (2) are due to Weissauer [Wei15b] who also
shows that every invertible object in the Tannaka category of perverse sheaves
is a skyscraper sheaf (this in particular applies to $\det(P)$); alternatively
one could use the Riemann-Hilbert correspondence and the results for holonomic
$\mathcal{D}$-modules in [Krä22, section 3.c]. From $k=\mathbb{C}$ one can
pass to an arbitrary algebraically closed field of characteristic zero because
the Tannaka group is invariant under extensions of algebraically closed fields
and any perverse sheaf is defined over the algebraic closure of a finitely
generated field, see corollary 4.4 resp. lemma A.1. The claim about
semisimplicity in (3) follows since by Schur’s lemma the center
$Z=Z(G^{\circ})$ acts on $V$ by scalars and hence $\det(V)$ has finite order
if and only if $Z$ is finite. ∎
###### Definition 3.4.
For perverse sheaves $P\in\mathcal{C}$ we denote the derived group of the
connected component of the Tannaka group $G=G_{\omega}(P)$ by
$G_{\omega}^{\ast}(P)\;:=\;[G^{\circ},G^{\circ}].$
If $P=\delta_{X}$ is the intersection complex of a subvariety $X\subset A$, we
put $G_{X,\omega}^{\ast}:=G_{\omega}^{\ast}(P)$.
Proposition 3.3 allows us to realize this group as the Tannaka group of
another perverse sheaf:
###### Corollary 3.5.
Suppose $k$ is algebraically closed. Let $P\in\mathcal{C}$ be a simple
perverse sheaf. Then for any integer $d\geqslant 1$ with
$[d]_{*}P\in\mathcal{C}$ and any $a\in A(k)$ with $P_{a}\in\mathcal{C}$ the
following properties hold:
1. (1)
$G_{\omega}^{\ast}(P_{a})\simeq G_{\omega}^{\ast}(P)$.
2. (2)
$G_{\omega}^{\circ}([d]_{*}P)\simeq G_{\omega}^{\circ}(P)$.
3. (3)
$G_{\omega}([d]_{*}P)$ is connected if and only if $d\cdot\Gamma_{P}=0$.
4. (4)
Suppose $P$ is nondivisible with $[d]_{*}\det(P_{a})=\delta_{0}$ and
$d\cdot\Gamma_{P_{a}}=0$. If $[d]_{*}P_{a}$ belongs to $\mathcal{C}$, then
$G_{\omega}([d]_{*}P_{a})\;\simeq\;G_{\omega}^{\ast}(P).$
###### Proof.
(1) By [Krä21, lemma 4.3.2] the inclusions $\langle P\rangle\subset\langle
P\oplus\delta_{a}\rangle\supset\langle P_{a}\rangle$ induce isomorphisms
$G_{\omega}^{\ast}(P)\simeq G_{\omega}^{\ast}(P\oplus\delta_{a})\simeq
G_{\omega}^{\ast}(P_{a})$.
(2) By [Wei15b] or [Krä22, cor. 1.6], the pushforward $[d]_{*}\colon\langle
P\rangle\to\langle[d]_{*}P\rangle$ is a tensor functor which induces an
isomorphism between the connected components of the identity of the respective
Tannaka groups.
(3) This follows from proposition 3.3 (1) applied to $[d]_{*}P$ since
$d\cdot\Gamma_{P}=\Gamma_{[d]_{*}P}$.
(4) By the previous two steps, the group $G_{\omega}([d]_{*}P_{a})$ is
connected. One easily sees that the perverse sheaf $[d]_{*}P_{a}$ is
nondivisible with $\det([d]_{*}P_{a})=[d]_{*}\det(P_{a})=\delta_{0}$ so that
$G_{\omega}([d]_{*}P_{a})$ is a semisimple group by the last part of
proposition 3.3. It is therefore equal to the derived group of its connected
component of the identity, which by (1) and (2) coincides with
$G_{\omega}^{\ast}(P)$. ∎
###### Remark 3.6.
The isomorphism $G_{\omega}^{\circ}([d]_{*}P)\simeq G_{\omega}^{\circ}(P)$ in
corollary 3.5 (2) is not canonical, it involves the choice of an isomorphism
between the two fiber functors $\omega$ and $\omega\circ[d]_{*}$ on the tensor
category $\langle P\rangle$. But we can choose the isomorphism in a
contravariant functorial way with respect to monomorphisms in the full tensor
subcategory
$\mathcal{C}\cap[d]_{*}^{-1}(\mathcal{C})\;:=\;\\{Q\in\mathcal{C}\mid[d]_{*}Q\in\mathcal{C}\\}\;\subset\;\mathcal{C}$
by fixing an isomorphism between the fiber functors $\omega$ and
$\omega\circ[d]_{*}$ on this category.
### 3.4. Larsen’s alternative
Let $X\subset A$ be a subvariety such that $\delta_{X}\in\mathcal{C}$. We are
interested in criteria under which the Tannaka group $G_{X,\omega}$ is big.
Suppose that $X\subset A$ is nondegenerate and $2\dim X<\dim A$, so that by
lemma 2.7 the sum morphism
$\sigma\colon\quad X\times X\;\longrightarrow\;W\;:=\;X+X\;\subset\;A$
is generically finite onto its image, and this image is nondegenerate. Let
$U\subset W$ be a smooth open dense subset over which $\sigma$ is a finite
étale cover. By adjunction, we have an inclusion
$\delta_{U}\subset\sigma_{*}(\delta_{X\times X})_{|U}$ as a direct summand.
The decomposition theorem [BBDG18] extends this to an inclusion
$\delta_{W}\subset\delta_{X}*\delta_{X}=\sigma_{*}(\delta_{X\times X})$ as a
direct summand in the derived category of constructible sheaf complexes. More
precisely, there exists a unique semisimple perverse sheaf
$\varepsilon_{X}\in\textup{Perv}(A,\mathbb{F})$ without negligible direct
summands, and a unique negligible complex
$\nu_{X}\in{\mathrm{D}^{b}_{c}}(A,\mathbb{F})$, such that
$\delta_{X}*\delta_{X}\;=\;\delta_{W}\oplus\varepsilon_{X}\oplus\nu_{X}.$
With this notation, we obtain the following criterion for big Tannaka groups:
###### Lemma 3.7.
For any nondegenerate subvariety $X\subset A$ with $2\dim X<\dim A$, the
following are equivalent:
1. (1)
$G_{X,\omega}$ is big in the sense of section 1.2.
2. (2)
$\varepsilon_{X}$ is either a simple perverse sheaf, or a direct sum of a
simple perverse sheaf and a skyscraper sheaf of rank one.
###### Proof.
Recall that $W\subset A$ is a proper nondegenerate subvariety, so it cannot be
the support of a negligible sheaf complex. On the other hand,
$\operatorname{Supp}(\varepsilon_{X}\oplus\nu_{X})=W$ since the morphism
$\sigma\colon X\times X\to W$ has generic degree at least two. It follows that
$\operatorname{Supp}(\varepsilon_{X})=W$. In particular, the representation
$V=\omega(\delta_{X})\in\textup{Rep}_{\mathbb{F}}(G_{X,\omega})$ must have
dimension $\dim V>2$, since otherwise $\varepsilon_{X}$ would be the
skyscraper sheaf corresponding to $\det(V)$ by proposition 3.3 (3).
By applying the fiber functor $\omega$, one sees that the condition (2) is
equivalent to saying that in the decomposition of the tensor square $V\otimes
V$ there are only two irreducible direct summands of dimension $>1$. Since
$\dim(V)>2$, this is equivalent to (1) by Larsen’s alternative [Kat01, p. 113]
for the subgroup $G_{X,\omega}\subset\operatorname{GL}(V)$. ∎
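To illustrate condition (2), recall a standard decomposition from representation theory (stated here for orientation only, independently of the geometry): for the symplectic group $\operatorname{Sp}(V)$ with $\dim V=2n\geqslant 4$ one has
$V\otimes V\;\simeq\;\mathbf{1}\;\oplus\;\operatorname{Alt}_{0}^{2}(V)\;\oplus\;\operatorname{Sym}^{2}(V),$
where $\operatorname{Alt}_{0}^{2}(V)$ denotes the kernel of the contraction against the symplectic form. Both $\operatorname{Alt}_{0}^{2}(V)$ and $\operatorname{Sym}^{2}(V)$ are irreducible, so the tensor square contains exactly two irreducible summands of dimension $>1$, matching the shape required in (2).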
###### Corollary 3.8.
Let $X\subset A$ be nondegenerate with $2\dim X<\dim A$. If $G_{X,\omega}$ is
big, then the sum morphism $Y=\operatorname{Sym}^{2}(X)\to W=X+X$ is
birational.
###### Proof.
Consider the factorization of the sum morphism through the quotient morphism $q$:
$\sigma\colon\quad Z=X\times X\;\xrightarrow{\;q\;}\;Y=\operatorname{Sym}^{2}X\;\xrightarrow{\;\tau\;}\;W=X+X,\qquad\sigma\;=\;\tau\circ q.$
Since $q\colon Z\to Y$ is a finite branched cover of degree $\deg(q)=2$, the
decomposition theorem shows that
$q_{*}(\delta_{Z})\;\simeq\;\delta_{Y}\oplus\delta_{Y}^{-}$
where $\delta_{Y}^{-}\in\textup{Perv}(Y,\mathbb{F})$ is a semisimple perverse
sheaf of generic rank one. It then follows that
$\delta_{X}*\delta_{X}\;\simeq\;R\sigma_{*}(\delta_{Z})\;\simeq\;R\tau_{*}(\delta_{Y})\oplus R\tau_{*}(\delta_{Y}^{-}).$
Since $\tau$ is generically finite, the decomposition theorem shows
$R\tau_{*}(\delta_{Y})\simeq\delta_{W}\oplus K$ where the complex
$K\in{\mathrm{D}^{b}_{c}}(W,\mathbb{F})$ is a direct sum of shifts of
semisimple perverse sheaves. Over any smooth open dense subset $U\subset W$
over which $\sigma$ is a finite étale cover, we have
$R\tau_{*}(\delta_{Y}^{-})_{|U}\;\simeq\;M[d]\quad\textnormal{and}\quad K_{|U}\;\simeq\;N[d]\quad\textnormal{for}\quad d\;=\;\dim W,$
where $M$, $N$ are local systems of rank $\operatorname{rk}(M)=\deg(\tau)$ and
$\operatorname{rk}(N)=\deg(\tau)-1$. So in the decomposition
$\delta_{X}*\delta_{X}\;\simeq\;\delta_{W}\oplus
R\tau_{*}(\delta_{Y}^{-})\oplus K$
all three summands have support $W$ unless $\deg(\tau)=1$ (in which case
$N=0$). Our assumptions imply that $W$ is of general type, hence a sheaf
complex with support $W$ is not negligible. But if the Tannaka group
$G_{X,\omega}$ is big, then by lemma 3.7 we can have only two summands in
$\delta_{X}*\delta_{X}$ that are not negligible and not skyscraper sheaves.
Hence, in this case it follows that $\deg(\tau)=1$ as required. ∎
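As a consistency check on generic ranks: over $U$ the three summands $\delta_{W}$, $R\tau_{*}(\delta_{Y}^{-})$ and $K$ in the display above restrict to shifted local systems of ranks $1$, $\deg(\tau)$ and $\deg(\tau)-1$, which add up to
$1+\deg(\tau)+(\deg(\tau)-1)\;=\;2\deg(\tau)\;=\;\deg(\sigma),$
the generic rank of $\sigma_{*}(\delta_{X\times X})_{|U}$.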
In the above we have used the full decomposition of the tensor square of the
defining representation. In fact, it suffices to consider only the
alternating or the symmetric square:
###### Corollary 3.9.
Let $X\subset A$ be nondegenerate with $2\dim X<\dim A$, and consider the
representation
$V\;:=\;\begin{cases}\operatorname{Alt}^{2}(\omega(\delta_{X}))&\text{if
$2\nmid\dim X$},\\\ \operatorname{Sym}^{2}(\omega(\delta_{X}))&\text{if
$2\mid\dim X$}.\end{cases}$
If $V\in\textup{Rep}_{\mathbb{F}}(G_{X,\omega}^{*})$ has at most one
irreducible direct summand of dimension $>1$, then the sum morphism
$\tau\colon\operatorname{Sym}^{2}X\to X+X$ is birational.
###### Proof.
Let $Y=\operatorname{Sym}^{2}X$ and $W=X+X$. We know
$V=\omega(R\tau_{*}(\delta_{Y}))$: the sign in the symmetry constraint of the Künneth isomorphism identifies the $q$-invariant part of $\omega(\delta_{X})^{\otimes 2}$ with the alternating resp. symmetric square according to the parity of $\dim X$. Recall that by [Wei15b] or [Krä22, section
3.c] all one-dimensional representations of the Tannaka group arise from
skyscraper sheaves. Hence our assumption on the representation $V$ means that
$R\tau_{*}(\delta_{Y})\in{\mathrm{D}^{b}_{c}}(W,\mathbb{F})$ has at most one
non-negligible direct summand $P$ which is not a skyscraper sheaf. But from
the proof of the previous corollary we know that
$R\tau_{*}(\delta_{Y})=\delta_{W}\oplus K$ where $K$ is a direct sum of a
semisimple perverse sheaf and a negligible complex. As in that proof it
follows that $K_{|U}\simeq 0$ over some open dense subset $U\subset W$ and
hence that $\deg(\tau)=1$. ∎
###### Corollary 3.10.
Let $X\subset A$ be a smooth irreducible curve generating $A$, and assume
$\dim A\geqslant 3$. If the representation
$V=\operatorname{Alt}^{2}(\omega(\delta_{X}))\in\textup{Rep}_{\mathbb{F}}(G_{X,\omega}^{*})$
is a sum of an irreducible representation and a one-dimensional trivial
representation, then
1. (1)
$X=p-X$ for some point $p\in X$,
2. (2)
$\tau\colon Y=\operatorname{Sym}^{2}X\to W=X+X$ is finite birational over
$U=W\setminus\\{p\\}$,
3. (3)
$G_{X,\omega}^{*}=\operatorname{Sp}(\omega(\delta_{X}),\theta)$ for the
natural symplectic form $\theta$ on $\omega(\delta_{X})$.
###### Proof.
By assumption $\operatorname{Alt}^{2}(\omega(\delta_{X}))$ contains a one-
dimensional trivial representation, so the representation $\omega(\delta_{X})$
is isomorphic to its dual. Therefore $X=p-X$ for some point $p\in X$. Now for
dimension reasons $\tau\colon Y\to W$ restricts to a finite morphism over the
complement $U=W\setminus\Sigma$ of a finite set $\Sigma\subset W$ of points.
Note that $Y=\operatorname{Sym}^{2}X$ is smooth for a smooth curve $X$, so we
have $\delta_{Y}=\mathbb{F}_{Y}[2]$. Base change then shows that for any point
$q$ we have
$\mathcal{H}^{0}(R\tau_{*}(\delta_{Y}))_{q}\;\simeq\;H^{2}(\tau^{-1}(q),\mathbb{F})\;\begin{cases}\;=\;0&\text{if
$q\notin\Sigma$},\\\ \;\neq\;0&\text{if $q\in\Sigma$}.\end{cases}$
Since $R\tau_{*}(\delta_{Y})$ is a direct sum of a semisimple perverse sheaf
$P$ and a negligible sheaf complex and since negligible sheaf complexes cannot
have cohomology sheaves which are skyscraper sheaves, it follows that $P$
contains the skyscraper sheaves $\delta_{q}$ in all points $q\in\Sigma$. But
by assumption $R\tau_{*}(\delta_{Y})$ contains a unique skyscraper summand,
hence it follows that $\Sigma=\\{p\\}$ and thus $\tau$ is finite over
$U=W\setminus\\{p\\}$.
In particular $R\tau_{*}(\delta_{Y}^{-})_{|U}$ is a perverse sheaf, and we
have $\mathcal{H}^{i}(R\tau_{*}(\delta_{Y}^{-}))_{|U}=0$ in all degrees
$i\neq-2$ because $\delta_{Y}^{-}$ is a constructible sheaf placed in degree
$-2$. But any semisimple perverse sheaf on a surface with cohomology sheaves
only in degree $-2$ is the minimal extension of a local system on any open
dense subset of the surface. In our case that local system has rank one
because $\delta_{Y}^{-}$ has generic rank one and $\deg(\tau)=1$. Local
systems of rank one are simple, hence it follows that the minimal extension
$R\tau_{*}(\delta_{Y}^{-})$ is a simple perverse sheaf.
In conclusion, this shows that
$\delta_{X}*\delta_{X}=R\tau_{*}(\delta_{Y})\oplus R\tau_{*}(\delta_{Y}^{-})$
is a sum of two simple perverse sheaves and a skyscraper sheaf. It then
follows by the same argument as in [KW15b, th. 6.1] that
$G_{X,\omega}^{*}=\operatorname{Sp}(\omega(\delta_{X}),\theta)$; note that
$\dim(\omega(\delta_{X}))=\chi(\delta_{X})\geqslant g>2$ since the curve $X$
generates $A$. ∎
###### Corollary 3.11.
Let $X\subset A$ be a smooth irreducible curve generating $A$, and assume
$\dim A\geqslant 3$. Then the group $G_{X,\omega}^{\ast}$ is not isomorphic to
$E_{7}$ acting on $\omega(\delta_{X})$ via its irreducible representation of
dimension $56$.
###### Proof.
For the $56$-dimensional irreducible representation $W$ of the group $E_{7}$
the alternating square $\operatorname{Alt}^{2}(W)$ is a sum of an irreducible
and a one-dimensional trivial representation. However, corollary 3.10 says
that $\operatorname{Alt}^{2}(\omega(\delta_{X}))$ can be a sum of an
irreducible and a one-dimensional trivial representation only if
$G_{X,\omega}^{\ast}\simeq\operatorname{Sp}_{56}(\mathbb{F})$. ∎
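For the reader's convenience we recall the standard dimension count behind the first sentence of the proof: the $56$-dimensional representation $W$ of $E_{7}$ preserves a symplectic form, so that $E_{7}\subset\operatorname{Sp}_{56}$, and
$\dim\operatorname{Alt}^{2}(W)\;=\;\binom{56}{2}\;=\;1540\;=\;1+1539,$
where the trivial summand is spanned by the symplectic form and the complement is the irreducible representation of dimension $1539$.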
### 3.5. Character twists
Recall that
$\Pi(A,\mathbb{F})=\operatorname{Hom}(\pi_{1}(A,0),\mathbb{F}^{\times})$
denotes the group of continuous characters of the étale resp. topological
fundamental group of the abelian variety. For $\chi\in\Pi(A,\mathbb{F})$, let
$L_{\chi}$ be the local system of rank one with monodromy representation given
by the character $\chi$. For $P\in\textup{Perv}(A,\mathbb{F})$ we call
$P_{\chi}:=P\otimes_{\mathbb{F}}L_{\chi}\in\textup{Perv}(A,\mathbb{F})$ the
twist of the given perverse sheaf by the character. Such twists of perverse
sheaves appear in the generic vanishing theorem of [KW15c, Sch15, BSS18]: Let
us say that a subset of $\Pi(A,\mathbb{F})$ is a proper subtorus if it has the
form
$\Pi(A/B,\mathbb{F})\;\subset\;\Pi(A,\mathbb{F})$
where $B\subset A$ is a nonzero abelian subvariety. Then the generic vanishing
theorem says that there is a finite union
$\mathcal{S}(P)\subset\Pi(A,\mathbb{F})$ of translates of proper subtori such
that
$H^{i}(A,P_{\chi})\;=\;0\quad\textnormal{for all $i\neq 0$ and all
$\chi\in\Pi(A,\mathbb{F})\smallsetminus\mathcal{S}(P)$}.$
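As a basic sanity check (an illustration, not needed later): for the constant perverse sheaf $\delta_{A}=\mathbb{F}_{A}[\dim A]$ one has $H^{\bullet}(A,(\delta_{A})_{\chi})=0$ for every nontrivial character $\chi$, while for $\chi=\mathbf{1}$ the cohomology is nonzero in degrees $\neq 0$; so one may take
$\mathcal{S}(\delta_{A})\;=\;\Pi(A/A,\mathbb{F})\;=\;\\{\mathbf{1}\\},$
the proper subtorus corresponding to $B=A$. Note that $\delta_{A}$ has Euler characteristic zero, so it becomes negligible in the quotient category.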
We will use this in section 4.3 to write down explicit fiber functors with a
natural Galois action. Up to noncanonical isomorphism, the Tannaka group of a
perverse sheaf does not change under twists:
###### Lemma 3.12.
Let $P\in\mathcal{C}$. Then for every character $\chi\in\Pi(A,\mathbb{F})$
with $P_{\chi}\in\mathcal{C}$ we have
$G_{\omega}(P_{\chi})\;\simeq\;G_{\omega}(P).$
###### Proof.
By [KW15c, prop. 4.1], twisting by $\chi$ gives rise to an equivalence of
tensor categories
$\langle P\rangle\;\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\;\langle
P_{\chi}\rangle,\quad Q\;\longmapsto\;Q_{\chi}$
in $\overline{\textup{Perv}}(A,\mathbb{F})$. This equivalence need not be
compatible with the fiber functor $\omega$ on the source and target, but since
$\mathbb{F}$ is algebraically closed, any two fiber functors are
noncanonically isomorphic; hence the same holds for the Tannaka groups. ∎
## 4\. Galois theory for perverse sheaves
In this section we discuss the behavior of Tannaka groups of perverse sheaves
under extension of the base field and recall the connection between such
Tannaka groups and classical monodromy groups in [LS20, section 5]. We mostly
follow the arguments in loc. cit. but remove the assumption of geometric
semisimplicity in the Galois exact sequence by using a result of D’Addezio and
Esnault [DE21].
### 4.1. Extension of the base field
Let $K/k$ be a field extension, and consider the base change functor
$(-)_{K}\colon\quad\textup{Perv}(A,\mathbb{F})\;\longrightarrow\;\textup{Perv}(A_{K},\mathbb{F}),\quad
P\;\longmapsto\;P_{K}.$
Passing to the abelian quotient categories by the subcategories of perverse
sheaves of Euler characteristic zero, we have:
###### Lemma 4.1.
The base change functor descends to a faithful exact $\mathbb{F}$-linear
tensor functor
$(-)_{K}\colon\quad\overline{\textup{Perv}}(A,\mathbb{F})\;\longrightarrow\;\overline{\textup{Perv}}(A_{K},\mathbb{F}).$
###### Proof.
The functor
$(-)_{K}\colon\textup{Perv}(A,\mathbb{F})\to\textup{Perv}(A_{K},\mathbb{F})$
is a faithful $\mathbb{F}$-linear exact functor. Let $q_{K}=q\circ(-)_{K}$
denote its composite with the quotient functor $q$:
$q_{K}\colon\quad\textup{Perv}(A,\mathbb{F})\;\xrightarrow{\;(-)_{K}\;}\;\textup{Perv}(A_{K},\mathbb{F})\;\xrightarrow{\;q\;}\;\overline{\textup{Perv}}(A_{K},\mathbb{F}).$
Since $q_{K}$ is an exact functor between abelian categories which sends all
objects of the Serre subcategory
$\textup{S}(A,\mathbb{F})\subset\textup{Perv}(A,\mathbb{F})$ to zero, it
factors by the universal property of abelian quotient categories [Gab62, cor.
2, p. 368] through a unique exact functor
$(-)_{K}\colon\quad\overline{\textup{Perv}}(A,\mathbb{F})\longrightarrow\overline{\textup{Perv}}(A_{K},\mathbb{F}).$
This functor is clearly $\mathbb{F}$-linear, and it admits the structure of a
tensor functor with respect to the natural isomorphisms $(P*Q)_{K}\simeq
P_{K}*Q_{K}$ inherited from the derived category. Any exact
$\mathbb{F}$-linear tensor functor of rigid abelian tensor categories with
$\operatorname{End}(\mathbf{1})=\mathbb{F}$ is automatically faithful [DMOS82,
prop. 1.19], so the claim follows. ∎
Starting from a given full abelian tensor subcategory
$\mathcal{C}\subset\overline{\textup{Perv}}(A,\mathbb{F})$, let us now denote
by
$\mathcal{C}_{K}\;=\;\\{Q\mid\textnormal{$\exists P\in\mathcal{C}$ such that
$Q$ is a subquotient of
$P_{K}$}\\}\;\subset\;\overline{\textup{Perv}}(A_{K},\mathbb{F})$
the full abelian tensor subcategory generated by the essential image of
$\mathcal{C}$ under the functor $(-)_{K}$ from lemma 4.1. The category
$\mathcal{C}_{K}$ is again neutral Tannaka as it is a full abelian tensor
subcategory of the neutral Tannaka category
$\overline{\textup{Perv}}(A_{K},\mathbb{F})$. In what follows, we fix a fiber
functor
$\omega\colon\quad\mathcal{C}_{K}\;\longrightarrow\;\operatorname{Vect}(\mathbb{F}).$
Precomposing with the base extension functor $(-)_{K}$ we get a fiber functor
on $\mathcal{C}$ and we denote by
$G_{\omega}(\mathcal{C}_{K})\;:=\;\operatorname{Aut}^{\otimes}(\omega\mid\mathcal{C}_{K})\quad\textnormal{and}\quad G_{\omega}(\mathcal{C})\;:=\;\operatorname{Aut}^{\otimes}(\omega\mid\mathcal{C}),$
the corresponding Tannaka groups.
###### Corollary 4.2.
We have a closed immersion $G_{\omega}(\mathcal{C}_{K})\hookrightarrow
G_{\omega}(\mathcal{C})$.
###### Proof.
By construction the faithful exact $\mathbb{F}$-linear tensor functor
$(-)_{K}\colon\mathcal{C}\to\mathcal{C}_{K}$ is compatible with our chosen
fiber functors, hence it defines a homomorphism of Tannaka groups. The latter
is a closed immersion by [DMOS82, prop. 2.21(b)], since every object of
$\mathcal{C}_{K}$ is isomorphic to a subquotient of $P_{K}$ for some
$P\in\mathcal{C}$. ∎
### 4.2. The Galois sequence
Let $k^{\prime}\subset K$ be the algebraic closure of $k$ in $K$. The category
$\textup{Rep}_{\mathbb{F}}(\operatorname{Aut}(k^{\prime}/k))$
of continuous finite-dimensional representations of the profinite group
$\operatorname{Aut}(k^{\prime}/k)$ over $\mathbb{F}$ is a neutral Tannaka
category. If $k^{\prime}/k$ is Galois, then
$\operatorname{Aut}(k^{\prime}/k)=\operatorname{Gal}(k^{\prime}/k)$ is a
quotient of the absolute Galois group of $k$. In this case we can identify
objects of the above category with sheaves on $\operatorname{Spec}(k)$ and
hence the pushforward under the neutral element
$e\colon\operatorname{Spec}(k)\to A$ gives a fully faithful embedding
$e_{*}\colon\quad\textup{Rep}_{\mathbb{F}}(\operatorname{Gal}(k^{\prime}/k))\;\hookrightarrow\;\overline{\textup{Perv}}(A,\mathbb{F}).$
We will view Galois representations as a full subcategory of skyscraper
sheaves and drop the $e_{*}$ from the notation. Our chosen fiber functor on
$\mathcal{C}$ restricts to a fiber functor
$\omega\colon\quad\mathcal{C}\cap\textup{Rep}_{\mathbb{F}}(\operatorname{Gal}(k^{\prime}/k))\;\longrightarrow\;\operatorname{Vect}(\mathbb{F}).$
Let
$G_{\omega,\mathcal{C}}(k^{\prime}/k)\;:=\;\operatorname{Aut}^{\otimes}(\omega\,|\,\mathcal{C}\cap\textup{Rep}_{\mathbb{F}}(\operatorname{Gal}(k^{\prime}/k)))$
denote its Tannaka group. Representations of this group correspond to
skyscraper sheaves $P\in\mathcal{C}$ in the origin, and we have a homomorphism
$\operatorname{Aut}(k^{\prime}/k)\to G_{\omega,\mathcal{C}}(k^{\prime}/k)$.
###### Theorem 4.3.
Assume as above that $k^{\prime}/k$ is Galois. Then we have a short exact
sequence of proalgebraic groups
(4.1) $1\longrightarrow G_{\omega}(\mathcal{C}_{K})\longrightarrow
G_{\omega}(\mathcal{C})\longrightarrow
G_{\omega,\mathcal{C}}(k^{\prime}/k)\longrightarrow 1.$
###### Proof.
Corollary 4.2 gives a closed immersion $i\colon G_{\omega}(\mathcal{C}_{K})\to
G_{\omega}(\mathcal{C})$. Moreover, since $k^{\prime}/k$ is a Galois
extension, we have by the above an embedding as a full tensor subcategory
$\mathcal{C}\cap\textup{Rep}_{\mathbb{F}}(\operatorname{Gal}(k^{\prime}/k))\;\hookrightarrow\;\mathcal{C}$
which is stable under subobjects, and this embedding is compatible with the
chosen fiber functors on the source and target. By [DMOS82, prop. 2.21(a)] we
then have an epimorphism
$p\colon\quad G_{\omega}(\mathcal{C})\;\twoheadrightarrow\;G_{\omega,\mathcal{C}}(k^{\prime}/k).$
By construction, $p\circ i$ is trivial. Thus, to complete the proof, by [DE21,
prop. A.13], it suffices to check that
1. (1)
the functor $(-)_{K}\colon\mathcal{C}\to\mathcal{C}_{K}$ is observable [DE21,
Appendix A], and
2. (2)
for every $P\in\mathcal{C}$ the maximal trivial subobject of $P_{K}$ lies in
the essential image of the functor
$e_{*}\colon\mathcal{C}\cap\textup{Rep}_{\mathbb{F}}(\operatorname{Gal}(k^{\prime}/k))\to\mathcal{C}$.
For part (1) it suffices by lemma A.4(1) in loc. cit. to show that, for
$P\in\mathcal{C}$, any rank one subobject
$S\;\subset\;P_{K}$
is a direct summand in a semisimple object $Q_{K}$ with $Q\in\mathcal{C}$. To
check this, note that the rank one objects in the Tannaka category of perverse
sheaves are rank one skyscraper sheaves, and that the sum of all perverse rank
one skyscraper subsheaves of $P_{K}$ is semisimple, being a sum of simple
objects. To conclude the proof of (1), it suffices to show that this direct
sum descends to a perverse subsheaf $Q\subset P$, as it then follows that $S$
is a direct summand of $Q_{K}$ as desired. To prove that the sum of all rank
one skyscraper subsheaves descends to $k$, we first show that the maximal
skyscraper subsheaf of $P_{K}$ descends to a subsheaf of $P$. Indeed, the
Verdier dual of the sum of all perverse skyscraper subsheaves is the maximal
perverse skyscraper quotient of the Verdier dual $D(P_{K})$, which is
$\mathcal{H}^{0}(D(P_{K}))=\mathcal{H}^{0}(D(P))_{K}$. Hence, the maximal
skyscraper subsheaf descends. Replacing the given perverse sheaf $P$ by the
maximal skyscraper subsheaf supported at the origin, we are reduced to the
case $A=\operatorname{Spec}k$. Then $P$ is given by a Galois representation
$V\in\textup{Rep}_{\mathbb{F}}(\operatorname{Gal}(\bar{k}/k))$ and the claim
reduces to the following two facts:
* •
A subspace of $V$ is stable under $\operatorname{Gal}(\bar{K}/K)$ if and only
if it is so under $\operatorname{Gal}(\bar{k}/k^{\prime})$ (since
$\operatorname{Gal}(\bar{K}/K)\to\operatorname{Gal}(\bar{k}/k^{\prime})$ is
surjective for $k^{\prime}$ algebraically closed in $K$).
* •
The sum of all one-dimensional subrepresentations of
$V_{|\operatorname{Gal}(\bar{k}/k^{\prime})}$ is stable under
$\operatorname{Gal}(\bar{k}/k)$ (since
$\operatorname{Gal}(\bar{k}/k^{\prime})$ is a normal subgroup of
$\operatorname{Gal}(\bar{k}/k)$).
For (2) we argue similarly: The unit object of the tensor category
$\mathcal{C}_{K}$ is the skyscraper sheaf $\delta_{0}$ of rank one supported
in the origin. So the maximal trivial subobject of $P_{K}$ is the maximal
subobject of the form $\delta_{0}^{\oplus n}$ for some integer $n\geqslant 0$,
and this subobject descends to a subobject $Q\subset P$ as before. ∎
###### Corollary 4.4.
If $k$ is algebraically closed, then for every extension $K/k$ we have a
natural isomorphism
$G_{\omega}(\mathcal{C}_{K})\;\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\;G_{\omega}(\mathcal{C}).$
In particular, for every perverse sheaf $P\in\mathcal{C}$, we have
$G_{\omega}(P_{K})\simeq G_{\omega}(P)$.
###### Proof.
If $k$ is algebraically closed, then $k^{\prime}=k$ and hence
$G_{\omega,\mathcal{C}}(k^{\prime}/k)\simeq\\{1\\}$. ∎
### 4.3. A splitting of the sequence
We now apply the above when $K=\bar{k}$ is an algebraic closure of $k$. In the
Galois sequence in theorem 4.3 we have used the fully faithful functor
$e_{*}\colon\quad\textup{Rep}_{\mathbb{F}}(\operatorname{Gal}(\bar{k}/k))\;\hookrightarrow\;\overline{\textup{Perv}}(A,\mathbb{F})$
that identifies a Galois representation with the corresponding skyscraper
sheaf at the origin. We now describe a splitting of the sequence in theorem
4.3 for a special category $\mathcal{C}$ such that the functor
$e_{*}\colon\mathcal{C}\cap\textup{Rep}_{\mathbb{F}}(\operatorname{Gal}(\bar{k}/k))\hookrightarrow\mathcal{C}$
has a left inverse. More precisely, let
$\mathcal{C}\;=\;\textup{Perv}_{0}(A,\mathbb{F})$
be the full subcategory of all $P\in\textup{Perv}(A,\mathbb{F})$ for which all
simple subquotients $Q$ of $P_{\bar{k}}$ satisfy
$H^{i}(A_{\bar{k}},Q)\;=\;0\quad\textnormal{for all}\quad i\;\neq\;0.$
Its image
$\overline{\textup{Perv}}_{0}(A,\mathbb{F})\;\subset\;\overline{\textup{Perv}}(A,\mathbb{F})$
is a full abelian tensor subcategory which is equivalent to
$\textup{Perv}_{0}(A,\mathbb{F})/S_{0}(A,\mathbb{F})$, where
$S_{0}(A,\mathbb{F}):=S(A,\mathbb{F})\cap\textup{Perv}_{0}(A,\mathbb{F})$ is
the full subcategory of perverse sheaves $P$ with the property that all the
subquotients $Q$ of $P_{\bar{k}}$ satisfy $H^{\bullet}(A_{\bar{k}},Q)=0$. We
then get a functor
$\omega\colon\quad\overline{\textup{Perv}}_{0}(A,\mathbb{F})\;=\;\textup{Perv}_{0}(A,\mathbb{F})/S_{0}(A,\mathbb{F})\;\longrightarrow\;\operatorname{Vect}(\mathbb{F}),\quad
Q\;\longmapsto\;H^{0}(A_{\bar{k}},Q)$
which is exact by definition of the source category. Moreover, $\omega$ is a
tensor functor by the Künneth isomorphism
$H^{\bullet}(A_{\bar{k}},P*Q)\;\simeq\;H^{\bullet}(A_{\bar{k}},P)\otimes
H^{\bullet}(A_{\bar{k}},Q),$
since for $P,Q\in\textup{Perv}_{0}(A,\mathbb{F})$ only the cohomology in
degree zero contributes. For the fiber functor obtained in this way, we
obtain:
###### Lemma 4.5.
For $\mathcal{C}=\overline{\textup{Perv}}_{0}(A,\mathbb{F})$ with the fiber
functor $\omega:=H^{0}(A_{\bar{k}},-)$ we have a splitting
$G_{\omega}(\mathcal{C})\;\simeq\;G_{\omega}(\mathcal{C}_{\bar{k}})\rtimes
G_{\omega,\mathcal{C}}(\bar{k}/k).$
So for any $P\in\overline{\textup{Perv}}_{0}(A,\mathbb{F})$, the action of
$\operatorname{Gal}(\bar{k}/k)$ on $V=\omega(P)$ factors through the
normalizer
$N(G_{\omega}(P_{\bar{k}}))\;\subset\;\operatorname{GL}(V).$
###### Proof.
While the fiber functor $\omega=H^{0}(A_{\bar{k}},-)$ is only defined on
$\mathcal{C}:=\overline{\textup{Perv}}_{0}(A,\mathbb{F})$, it comes with a
natural Galois action in the sense that it factors as
$\omega\colon\quad\mathcal{C}\;\longrightarrow\;\textup{Rep}_{\mathbb{F}}(\operatorname{Gal}(\bar{k}/k))\;\longrightarrow\;\operatorname{Vect}(\mathbb{F}),$
where the first functor is a left inverse of the functor
$e_{*}\colon\textup{Rep}_{\mathbb{F}}(\operatorname{Gal}(\bar{k}/k))\to\mathcal{C}$.
∎
### 4.4. Big monodromy from big Tannaka groups
Now again assume that $k$ is an algebraically closed field of characteristic
zero. Consider the constant abelian scheme $A_{S}:=A\times_{k}S$, where $S$ is
an integral scheme over $k$. We denote by $\bar{\eta}$ a geometric point over
the generic point $\eta$ of $S$. Let $\mathcal{X}\subset A_{S}$ be an
irreducible closed subscheme which is smooth over $S$. We want to control the
monodromy of the family $\mathcal{X}\to S$ twisted by a generic rank one local
system as in [LS20]. In this context, the following terminology will be
useful.
###### Definition 4.6.
We say that $\mathcal{X}\subset A_{S}$ is _constant up to translation in
$A(S)$_ if there is a subvariety $Y\subset A$ and a point $a\in A(S)$ such
that $\mathcal{X}=Y_{S}+a$.
In favorable situations, this condition can be read off from the geometric
generic fiber of $\mathcal{X}\to S$ via the following descent result:
###### Lemma 4.7.
Suppose $S$ is a smooth and irreducible variety. Let
$\mathcal{Y},\mathcal{Z}\subset A_{S}$ be subvarieties which are flat over
$S$. If the subvariety $\mathcal{Y}_{\bar{\eta}}\subset A_{S,\bar{\eta}}$ has
trivial stabilizer, then the following are equivalent:
1. (1)
$\mathcal{Z}=\mathcal{Y}+a$ for some $a\in A(S)$.
2. (2)
$\mathcal{Z}_{\bar{\eta}}=\mathcal{Y}_{\bar{\eta}}+a$ for some $a\in
A(\bar{\eta})$.
###### Proof.
Clearly, the first property implies the second. Conversely, suppose that we
have $\mathcal{Z}_{\bar{\eta}}=\mathcal{Y}_{\bar{\eta}}+a$ for some point
$a\in A(\bar{\eta})$. First, we claim that the point $a$ comes from a point
$a\in A(\eta)$. Indeed, let $F$ be the function field of $S$ and let
$y=[\mathcal{Y}_{\eta}]$ and $z=[\mathcal{Z}_{\eta}]$ be the $F$-points of the
Hilbert scheme $\operatorname{Hilb}(A)$ defined by the generic fibers of
$\mathcal{Y}\to S$ and $\mathcal{Z}\to S$, seen as subvarieties of
$A_{S,\eta}$. Now, the abelian variety $A$ acts on the Hilbert scheme by
translation. The transporter
$T=\\{t\in A_{S,\eta}\mid z=y+t\\}$
is a subvariety of $A_{S,\eta}$. Note that $T(\bar{\eta})$ is nonempty, as it
contains the point $a$. In fact, $a$ is the only point of
$T(\bar{\eta})$: note that the stabilizer of the subvariety
$\mathcal{Y}_{\bar{\eta}}\subset A_{S,\bar{\eta}}$ acts freely and
transitively on the base-change of $T$ to $\bar{\eta}$. On the other hand, the
stabilizer of $\mathcal{Y}_{\bar{\eta}}$ is trivial by assumption, so the
transporter $T(\bar{\eta})$ must be a singleton. The variety $T$ is defined
over $F$ and has only one point over an algebraically closed field, thus
$T=\operatorname{Spec}F$ which proves the claim.
The point $a\in A(\eta)$ can be seen as a rational map $a\colon
S\dashrightarrow A$, which is moreover everywhere defined by smoothness of $S$
[Mil86, th. 3.1]. To conclude the proof, note that the generic fibers of
$\mathcal{Y}+a$ and $\mathcal{Z}$ coincide, hence $\mathcal{Z}=\mathcal{Y}+a$
by flatness. ∎
###### Corollary 4.8.
If $S$ is a smooth irreducible variety and if the subvariety
$\mathcal{X}_{\bar{\eta}}\subset A_{S,\bar{\eta}}$ is nondivisible, then the
following are equivalent:
1. (1)
$\mathcal{X}\subset A_{S}$ is constant (resp. symmetric) up to translation in
$A(S)$.
2. (2)
$\mathcal{X}_{\bar{\eta}}\subset A_{S,\bar{\eta}}$ is constant (resp.
symmetric) up to translation in $A(\bar{\eta})$.
Moreover, the subvariety $\mathcal{X}\subset A_{S}$ is constant up to
translation in $A(S)$ if and only if the family $\mathcal{X}\to S$ is
isotrivial.
###### Proof.
The equivalence of (1) and (2) follows directly from lemma 4.7. Now suppose
that the family $\mathcal{X}\to S$ is isotrivial. In order to prove that the
subvariety $\mathcal{X}\subset A_{S}$ is constant up to translation, we may by
the equivalence of (1) and (2) replace $S$ by an étale cover and hence assume
$\mathcal{X}\simeq Y_{S}$ for some $Y\subset A$. Fixing $y\in Y(k)$, we get a
section $x\colon S\to\mathcal{X}$ that gives rise to a commutative diagram:
$\mathcal{X}\;\xrightarrow{\;\operatorname{alb}_{x}\;}\;\operatorname{Alb}(\mathcal{X}/S)\;\longrightarrow\;A_{S}\;\xrightarrow{\;z\mapsto z+x\;}\;A_{S},$
$Y_{S}\;\xrightarrow{\;\operatorname{alb}_{y}\;}\;\operatorname{Alb}(Y_{S}/S)\;\longrightarrow\;A_{S}\;\xrightarrow{\;z\mapsto z+y\;}\;A_{S}.$
Here $\operatorname{alb}_{x}$ and $\operatorname{alb}_{y}$ are the relative
Albanese morphisms, the isomorphisms $\mathcal{X}\simeq Y_{S}$ and
$\operatorname{Alb}(\mathcal{X}/S)\simeq\operatorname{Alb}(Y_{S}/S)$ identify
the two rows, and in each row the composite of the horizontal arrows is the
inclusion $\mathcal{X}\subset A_{S}$ resp. $Y_{S}\subset A_{S}$. Hence,
$\mathcal{X}\subset A_{S}$ is constant up to translation. ∎
###### Example 4.9.
The nondivisibility assumption is needed in the above: Let $Y\subset A$ be a subvariety with
finite stabilizer $\operatorname{Stab}(Y)\neq\\{0\\}$. Viewing
$S:=A/\operatorname{Stab}(Y)$ as the orbit of the point $[Y]$ in
$\operatorname{Hilb}(A)$ under the translation action of $A$, we get by
restriction of the universal subvariety of $A\times_{k}\operatorname{Hilb}(A)$
a subvariety $\mathcal{X}\subset A_{S}$ with fiber $Y+a$ over a point $[a]\in
S(k)$. Then the family $\mathcal{X}\to S$ is not constant up to translation in
$A(S)$, but it is so up to translation by a section in $A(\bar{\eta})$.
We now assume that $\mathcal{X}\subset A_{S}$ is not constant up to
translation in $A(S)$. Then the monodromy of the smooth family $\mathcal{X}\to
S$ twisted by a generic rank one local system is related to the Tannaka group
of the perverse sheaf
$\delta_{X}\in\textup{Perv}(A_{S,\bar{\eta}},\mathbb{F})$ on the geometric
generic fiber
$X\;:=\;\mathcal{X}_{\bar{\eta}}$
as follows. For $\chi\in\Pi(A,\mathbb{F})$, let $L_{\chi}$ denote the
corresponding rank one local system on $A$. The generic vanishing theorem for
perverse sheaves [BSS18, KW15c, Sch15] shows that
$\delta_{X,\chi}\;:=\;\delta_{X}\otimes
L_{\chi}\;\in\;\overline{\textup{Perv}}_{0}(A_{S,\bar{\eta}},\mathbb{F})$
for most $\chi\in\Pi(A,\mathbb{F})$, where most means all characters $\chi$
outside a finite union of torsion translates of linear subvarieties of
$\Pi(A,\mathbb{F})$. From section 4.3 we get a fiber functor
$\omega\;:=\;H^{0}(A_{S,\bar{\eta}},-)\colon\quad\langle\delta_{X,\chi}\rangle\;\longrightarrow\;\operatorname{Vect}(\mathbb{F}),$
and we denote by
$G^{*}_{X,\chi}\;:=\;[G_{\omega}^{\circ}(\delta_{X,\chi}),G_{\omega}^{\circ}(\delta_{X,\chi})]$
the derived group of the connected component of the Tannaka group. Note that
by lemma 3.12 the isomorphism type of this group does not depend on the chosen
character; we say that $X$ has a simple derived connected Tannaka group if
$G_{X,\chi}^{\ast}$ is simple for some (hence every) character $\chi$ with the
above vanishing properties.
To define the monodromy of the family $f\colon\mathcal{X}\to S$ twisted by a
rank one local system, let $\pi\colon\mathcal{X}\to A$ be the projection to
the abelian variety. Using generic vanishing on the geometric generic fiber of
$X\subset A_{S,\bar{\eta}}$ and proceeding by Noetherian induction, one sees
that for most $\chi$ the higher direct images $R^{i}f_{*}\pi^{*}L_{\chi}$
vanish in all degrees $i\neq d$, where $d$ denotes the relative dimension of
the family $f\colon\mathcal{X}\to S$. For such $\chi$ the remaining direct
image
$V_{\chi}\;:=\;R^{d}f_{*}\pi^{*}L_{\chi}$
is a local system. More generally we consider for
$\underline{\chi}=(\chi_{1},\dots,\chi_{n})\in\Pi(A,\mathbb{F})^{n}$ the
direct sum
$V_{\underline{\chi}}\;:=\;V_{\chi_{1}}\oplus\cdots\oplus V_{\chi_{n}}.$
Let
$\rho\colon\pi_{1}(S,\bar{\eta})\rightarrow\operatorname{GL}(V_{\underline{\chi},\bar{\eta}})$
be the corresponding monodromy representation on the geometric generic fiber.
We define the _algebraic monodromy group_ of $V_{\underline{\chi}}$ as the
Zariski closure
$M(V_{\underline{\chi}})\;:=\;\overline{\operatorname{Im}(\rho)}\;\subset\;\operatorname{GL}(V_{\underline{\chi},\bar{\eta}}).$
The link between our main theorem from the introduction and the Tannaka groups
introduced above is the following result by Lawrence and Sawin, an analog of
the theorem of the fixed part:
###### Theorem 4.10.
Let $S$ be a smooth integral variety over $k$, and let $\mathcal{X}\subset
A_{S}$ be an integral subvariety such that
1. (1)
the family $f\colon\mathcal{X}\to S$ is smooth of relative dimension $d$, it
is not constant up to translation in $A(S)$, and
2. (2)
the geometric generic fiber $X=\mathcal{X}_{\bar{\eta}}\subset
A_{S,\bar{\eta}}$ is nondivisible and has a simple derived connected Tannaka
group.
Then for most $\underline{\chi}\in\Pi(A,\mathbb{F})^{n}$ we have
$G^{\ast}_{X,\chi_{1}}\times\cdots\times
G^{\ast}_{X,\chi_{n}}\;\unlhd\;M(V_{\underline{\chi}}).$
###### Proof.
In [LS20, th. 5.6] this is stated for hypersurfaces, but the proof works for
smooth subvarieties of any codimension. For convenience, we recall the main
ideas in our setup: The fiber
$V_{\underline{\chi},\bar{\eta}}\;=\;\bigoplus_{i=1}^{n}\textup{H}^{d}(X,L_{\chi_{i}})$
comes with a monodromy action of $\pi_{1}(S,{\bar{\eta}})$ preserving the
summands on the right-hand side; the algebraic monodromy is the Zariski
closure of the image of $\pi_{1}(S,{\bar{\eta}})$ inside
$\operatorname{GL}(V_{\chi_{1},\bar{\eta}})\times\cdots\times\operatorname{GL}(V_{\chi_{n},\bar{\eta}})\quad\textnormal{where}\quad
V_{\chi_{i},\bar{\eta}}\;=\;\textup{H}^{d}(X,L_{\chi_{i}}).$
Since $S$ is smooth, this algebraic monodromy is the Zariski closure of the
image of the absolute Galois group of the function field of $S$. By lemma 4.5
the Galois action normalizes the subgroups
$G_{X,\chi_{i}}^{\ast}\subset\operatorname{GL}(V_{\chi_{i}})$, in fact the
algebraic monodromy is a subgroup
$M(V_{\underline{\chi}})\;\subset\;G_{X_{0},\chi_{1}}\times\cdots\times
G_{X_{0},\chi_{n}}$
where $X_{0}:=\mathcal{X}_{\eta}$ denotes the generic fiber and
$G_{X_{0},\chi_{i}}:=G_{\omega}(\delta_{X_{0},\chi_{i}})$. We must show that
this upper bound on the algebraic monodromy is almost sharp in the sense that
for most $\underline{\chi}=(\chi_{1},\dots,\chi_{n})$, the algebraic monodromy
contains the normal subgroup
$G^{\ast}_{X,\chi_{1}}\times\cdots\times
G^{\ast}_{X,\chi_{n}}\;\unlhd\;G_{X_{0},\chi_{1}}\times\cdots\times
G_{X_{0},\chi_{n}}.$
In what follows, it will be convenient to identify all factors on the left-
hand side with a fixed simple algebraic group. For this, we fix a fiber
functor $\xi\colon\langle\delta_{X}\rangle\to\operatorname{Vect}(\mathbb{F})$
and pick an isomorphism between $\textup{H}^{0}(A_{\bar{\eta}},-)$ and the
fiber functor obtained as the composite
$\langle\delta_{X_{0},\chi_{i}}\rangle\;\xrightarrow{\;\sim\;}\;\langle\delta_{X_{0}}\rangle\;\xrightarrow{\;(-)_{\bar{\eta}}\;}\;\langle\delta_{X}\rangle\;\xrightarrow{\;\xi\;}\;\operatorname{Vect}(\mathbb{F}),$
where the isomorphism on the left is the inverse of $P\mapsto P_{\chi_{i}}$.
We get a commutative diagram of inclusions
$\begin{array}{ccc}G_{X,\chi_{1}}^{*}\times\cdots\times G_{X,\chi_{n}}^{*}&\overset{\sim}{\longrightarrow}&(G^{*}_{X})^{n}\\\ \cap&&\cap\\\ G_{X_{0},\chi_{1}}\times\cdots\times G_{X_{0},\chi_{n}}&\overset{\sim}{\longrightarrow}&(G_{X_{0}})^{n}\end{array}$
with the algebraic monodromy $M(V_{\underline{\chi}})$ contained in the bottom left product,
where
$G_{X}^{*}:=[G^{\circ}_{\xi}(\delta_{X}),G^{\circ}_{\xi}(\delta_{X})]\subset
G_{X_{0}}:=G_{\xi}(\delta_{X_{0}})$. Note that $G_{X_{0}}$ is contained in the
normalizer
$N(G_{X}^{*})\subset\operatorname{GL}(\xi(\delta_{X_{0}}))$
by lemma 4.5. Now we use the following general observation [LS20, lemma 5.4]:
###### Fact 4.11.
Let $G\subset\operatorname{GL}(V)$ be a simple algebraic group, and
$N(G)\subset\operatorname{GL}(V)$ its normalizer. Then for every integer
$n\geqslant 1$ there exists a finite list of irreducible representations
$W_{\alpha}\;=\;W_{\alpha,1}\boxtimes\cdots\boxtimes
W_{\alpha,n}\;\in\;\textup{Rep}_{\mathbb{F}}(N(G)^{n})\quad(\alpha\in\\{1,\dots,N\\})$
such that for any reductive subgroup $H\subset N(G)^{n}$ the following two
properties are equivalent:
1. (1)
$G^{n}\subset H$.
2. (2)
$H$ has no invariants on any of the representations $W_{\alpha}$.
In particular, the group $G^{n}$ has no invariants on any of the
representations $W_{\alpha}$.
We apply this to $V=\xi(\delta_{X_{0}})$, $G=G_{X}^{*}$ and
$H=M(V_{\underline{\chi}})$. Since $G_{X_{0}}\subset N(G_{X}^{*})$, each
$W_{\alpha,i}\in\textup{Rep}_{\mathbb{F}}(N(G))$ defines a representation of
the Tannaka group $G_{X_{0}}$ and hence a perverse sheaf
$P_{\alpha,i}\;\in\;\langle\delta_{X_{0}}\rangle.$
The representation obtained by pullback under the isomorphism
$G_{X_{0},\chi_{i}}\to G_{X_{0}}$ then corresponds to the perverse sheaf
$(P_{\alpha,i})_{\chi_{i}}\in\langle\delta_{X_{0},\chi_{i}}\rangle$. By
construction, we have an isomorphism
(4.2) $W_{\alpha}\;=\;W_{\alpha,1}\boxtimes\cdots\boxtimes W_{\alpha,n}\;\simeq\;\textup{H}^{0}(A_{S,\bar{\eta}},(P_{\alpha,1})_{\chi_{1}})\boxtimes\cdots\boxtimes\textup{H}^{0}(A_{S,\bar{\eta}},(P_{\alpha,n})_{\chi_{n}})$
of representations of $N(G_{X,\chi_{1}}^{*})\times\cdots\times
N(G_{X,\chi_{n}}^{*})$. To keep track of how the Galois action on the right-
hand side depends on the chosen characters, it will be convenient to pass to
$A_{S,\bar{\eta}}^{n}=A_{S,\bar{\eta}}\times\cdots\times A_{S,\bar{\eta}}$ via
the Künneth isomorphism. Consider the perverse sheaf
$K_{0}\;:=\;e_{1*}\delta_{X_{0}}\oplus\cdots\oplus
e_{n*}\delta_{X_{0}}\;\in\;\overline{\textup{Perv}}_{0}(A_{S,\eta}^{n},\mathbb{F})$
where $e_{i}\colon A_{S,\eta}\hookrightarrow A_{S,\eta}^{n}$ denotes the
$i$-th coordinate inclusion. Let
$K\in\overline{\textup{Perv}}_{0}(A_{S,\bar{\eta}}^{n},\mathbb{F})$ be the
base change of the perverse sheaf $K_{0}$ to the geometric generic fiber. Note
that $G^{*}_{\zeta}(K)=(G_{X}^{*})^{n}$ for the fiber functor
$\zeta:=\xi\boxtimes\cdots\boxtimes\xi\colon\langle
K\rangle\to\operatorname{Vect}(\mathbb{F})$ and that we have
$Q_{\alpha}\;:=\;e_{1*}P_{\alpha,1}*\cdots*e_{n*}P_{\alpha,n}\;=\;P_{\alpha,1}\boxtimes\cdots\boxtimes
P_{\alpha,n}\;\in\;\langle K_{0}\rangle.$
Returning to character twists again, consider the local system
$L_{\underline{\chi}}:=L_{\chi_{1}}\boxtimes\cdots\boxtimes L_{\chi_{n}}$ and
put
$K_{0,\underline{\chi}}\;:=\;K_{0}\otimes L_{\underline{\chi}},\quad
K_{\underline{\chi}}\;:=\;K\otimes
L_{\underline{\chi}}\quad\textnormal{and}\quad
Q_{\alpha,\underline{\chi}}\;:=\;Q_{\alpha}\otimes L_{\underline{\chi}}.$
Then we have
$G_{\omega}^{*}(K_{\underline{\chi}})\;=\;G_{X,\chi_{1}}^{*}\times\cdots\times
G_{X,\chi_{n}}^{*}\quad\textnormal{and}\quad
Q_{\alpha,\underline{\chi}}\;\in\;\langle K_{0,\underline{\chi}}\rangle.$
Combining (4.2) with the Künneth isomorphism we obtain a Galois equivariant
isomorphism
$W_{\alpha}\;\simeq\;\textup{H}^{0}(A^{n}_{S,\bar{\eta}},Q_{\alpha,\underline{\chi}}),$
where the Galois group acts on the left-hand side via
$\operatorname{Gal}(\bar{\eta}/\eta)\to M(V_{\underline{\chi}})$ and on the
right-hand side by the natural Galois action.
Now recall that by the last claim in 4.11, the group $(G_{X}^{*})^{n}$ has no
invariants on $W_{\alpha}$. But $(G_{X}^{*})^{n}=G_{\zeta}^{*}(K)$ is the
derived group of the connected component of the Tannaka group of the perverse
sheaf $K$, and $W_{\alpha}=\zeta(Q_{\alpha})$ is the representation defined by
$Q_{\alpha}\in\langle K_{0}\rangle$. Hence, we can apply [LS20, lemma 5.2]:
The vanishing of invariants of the derived connected Tannaka group on the
geometric generic fiber implies that $Q_{\alpha}$ has no perverse subquotient
coming by pullback from $A$ via $A_{S,\eta}\to A$. By a spreading out argument
[LS20, lemma 5.3] the last property implies that for most $\underline{\chi}$
the Galois invariants of
$\textup{H}^{0}(A_{S,\bar{\eta}},Q_{\alpha,\underline{\chi}})$ vanish. Thus,
the algebraic monodromy has no invariants on any of the representations
$W_{\alpha}$ and hence by the equivalence of (1) and (2) above it contains all
of $(G_{X}^{*})^{n}$ as required. ∎
###### Remark 4.12.
For $n\geqslant 2$, the above proof gives more precise information on how the
locus of characters on which the conclusion of theorem 4.10 holds depends on
$n$: There exists a finite union
$\Sigma\subset\Pi(A,\mathbb{F})^{2}$ of torsion translates of proper linear
subvarieties such that the conclusion of the theorem holds for all $n\geqslant
2$ and all
$\underline{\chi}=(\chi_{1},\dots,\chi_{n})\in\Pi(A,\mathbb{F})^{n}$ with
$(\chi_{i},\chi_{j})\notin\Sigma\quad\textnormal{for all}\quad i\neq j.$
This follows from the fact that the list of representations constructed in the
proof of 4.11 arises from a finite list of representations of $N(G)^{2}$ by
pullback under the various projections $N(G)^{n}\to N(G)^{2}$.
## 5\. From representations to geometry
In this section, we explain the link between representations and
characteristic cycles, which will be our main tool to show that under certain
assumptions the Tannaka group of a smooth subvariety will be big. We work over
an algebraically closed field $k$ with $\operatorname{char}(k)=0$, and
starting from section 5.3 we assume $k=\mathbb{C}$.
### 5.1. The ring of clean cycles
Over the complex numbers an important invariant of a perverse sheaf is its
characteristic cycle, which is a formal sum of conormal varieties adapted to a
suitable Whitney stratification. As we recall in section 5.3, the convolution
product of perverse sheaves is mirrored by a ‘convolution product’ on their
characteristic cycles. To define the latter, we need to introduce a
convolution product of conormal varieties, which can be done over any
algebraically closed field $k$ of characteristic zero as follows.
Recall that, for an integral subvariety $Z\subset A$, its conormal variety
$\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z}$ is
said to be _clean_ if its Gauss map
$\gamma_{Z}\colon\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z}\to\mathbb{P}_{A}$
is dominant—by theorem 2.8 this is the case if and only if the variety $Z$ is
of general type—and _negligible_ otherwise.
###### Definition 5.1.
The group of clean cycles $\mathcal{L}(A)$ is the free abelian group generated
by the projective conormal cones
$\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z}$ of
integral subvarieties $Z\subset A$, modulo the subgroup generated by the
negligible ones. The projection onto the quotient induces an isomorphism
$\bigoplus_{Z\subset
A}\mathbb{Z}\cdot\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z}\;\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\;\mathcal{L}(A),$
where the direct sum ranges over the integral subvarieties of general type
$Z\subset A$.
A _clean cycle_ is an element of $\mathcal{L}(A)$ and, by means of the
preceding isomorphism, will always be seen as a finite formal sum
$\sum_{Z}m_{Z}\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z}$,
$m_{Z}\in\mathbb{Z}$, indexed by the integral subvarieties $Z\subset A$ of
general type.
Recall that in definition 2.2 we defined the conormal variety
$\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z}$
for a reduced but not necessarily irreducible subvariety $Z\subset A$. For
simplicity, we still write
$\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z}$
for the conormal variety seen as a cycle on $A\times\mathbb{P}_{A}$, or merely
as a clean cycle. In particular, in the latter case, we have
$\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z}=\sum_{Z^{\prime}\subset
Z}\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z^{\prime}},$
the sum ranging over the irreducible components $Z^{\prime}\subset Z$ of
general type. We will consistently perpetrate this abuse of notation by
writing
$\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z}=m_{1}\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z_{1}}+\cdots+m_{n}\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z_{n}}$
for a cycle $Z=m_{1}Z_{1}+\cdots+m_{n}Z_{n}$ on $A$, with $m_{i}\in\mathbb{Z}$
and $Z_{i}\subset A$ integral.
###### Definition 5.2.
Let
$\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{X_{1}},\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{X_{2}}$
be clean conormal varieties. Let $U\subset\mathbb{P}_{A}$ be an open dense
subset of the projective cotangent space to the abelian variety such that over
this open subset the Gauss maps
$\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{X_{i}\mid
U}:=\gamma_{X_{i}}^{-1}(U)\to U$ are finite étale covers. The fiber product of
these two finite étale covers embeds into $A\times A\times U\subset A\times
A\times\mathbb{P}_{A}$, and we denote by
$\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}\;:=\;\overline{\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{X_{1}\rvert
U}\times_{U}\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{X_{2}\rvert
U}}\;\subset\;A\times A\times\mathbb{P}_{A}$
its Zariski closure. We define the convolution of the conormal varieties to be
the clean cycle
$\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{X_{1}}\circ\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{X_{2}}\;:=\;\sigma_{*}(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}})\;\in\;\mathcal{L}(A)$
arising by pushforward under the sum morphism $\sigma\colon A\times
A\times\mathbb{P}_{A}\to A\times\mathbb{P}_{A}$. We extend this product
$\circ$ on conormal varieties bilinearly to a product on the group of clean
cycles
$\circ\colon\quad\mathcal{L}(A)\times\mathcal{L}(A)\;\longrightarrow\;\mathcal{L}(A).$
This endows the group $\mathcal{L}(A)$ with a natural ring structure. The
product $\circ$ should not be confused with an intersection of cycles, indeed
the intersection product of any two cycles in $\mathcal{L}(A)$ is zero for
dimension reasons. For any integer $n\neq 0$ the pushforward
$[n]_{*}\colon\quad\mathcal{L}(A)\;\longrightarrow\;\mathcal{L}(A)$
is a ring homomorphism. For
$\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}\in\mathcal{L}(A)$
we denote by
$\langle\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}\rangle\subset\mathcal{L}(A)$
the smallest subring of $\mathcal{L}(A)$ which contains
$\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}$ and is
stable under passing from a clean cycle to its irreducible components.
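As a minimal illustration of the product, consider the conormal variety of a point: unwinding definition 5.2, one checks that for any clean conormal variety $\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{X}$ and any point $a\in A$ we have
$\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{\\{a\\}}\circ\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{X}\;=\;\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{X+a},$
mirroring the identity $\delta_{a}*\delta_{X}\simeq\delta_{X+a}$ for perverse sheaves; in particular $\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{\\{0\\}}$ is the unit of the ring $\mathcal{L}(A)$.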
### 5.2. A reminder on Segre classes
In the discussion of wedge powers and spin representations to be carried out
in sections 7 to 8, we will need to control the effect that certain tensor
constructions on clean cycles have on the dimension of their base. For this we
recall in this section some basic facts about Segre classes, or Chern-Mather
classes in the terminology of [Krä21, section 3] (in the case of abelian
varieties Segre classes and Chern-Mather classes are the same, since the
cotangent bundle of an abelian variety is trivial):
###### Definition 5.3.
The Segre classes of a cycle
$\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}$ on
$A\times\mathbb{P}_{A}$ of pure dimension $g-1$ are defined as the cycle
classes
$s_{d}(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}})\;:=\;(\operatorname{pr}_{A})_{*}([\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}]\cdot[A\times
H_{d}])\;\in\;\operatorname{CH}_{d}(A)$
where $H_{d}\subset\mathbb{P}_{A}$ is a general linear subspace of dimension
$d<g=\dim A$ and $\operatorname{CH}_{d}(A)$ denotes the Chow group of
dimension $d$ algebraic cycles with $\mathbb{Z}$-coefficients, modulo rational
equivalence.
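For example, an immediate check from the definition: for a point $Z=\\{a\\}$ we have $\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z}=\\{a\\}\times\mathbb{P}_{A}$, so
$s_{0}(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z})\;=\;[a]\quad\textnormal{and}\quad s_{d}(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z})\;=\;0\quad\textnormal{for all}\quad d\;>\;0,$
since the projection $\operatorname{pr}_{A}$ contracts $\\{a\\}\times H_{d}$ to a point; this is the simplest instance of remark 5.4 below.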
The following observation allows us to control the dimension of the base of a
clean cycle in terms of its Segre classes:
###### Remark 5.4.
For any subvariety $Z\subset A$ we have
$s_{d}(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z})=0$
for all $d>\dim Z$. On the other hand, if $Z$ has a top-dimensional
irreducible component of general type, then the Segre classes $s_{d}(Z)$ are
represented by nonzero effective cycles for all $d\in\\{0,1,\dots,\dim Z\\}$;
this follows from the dominance of the Gauss map $\gamma_{Z}$ and Kleiman’s
generic transversality theorem [Krä21, lemma 3.1.2 (3)]. The top degree Segre
class is the fundamental class
$s_{\dim Z}(Z)\;=\;[Z].$
Since clean cycles live on the projective cotangent bundle, there is no Segre
class in degree $g=\dim A$. We view the total Segre class
$s(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}):=s_{0}(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}})+\cdots+s_{g-1}(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}})$
as an element of the quotient
$\operatorname{CH}_{<g}(A)\;:=\;\operatorname{CH}_{\bullet}(A)/\operatorname{CH}_{g}(A).$
To define a ring structure on this quotient, recall that the additive group
$\operatorname{CH}_{\bullet}(A)$ comes with a natural ring structure where the
product is given by the Pontryagin product
$[X]*[Y]\;:=\;\sigma_{*}[X\times Y]\quad\textnormal{for the sum
morphism}\quad\sigma\colon X\times Y\to X+Y\subset A,$
and that $\operatorname{CH}_{g}(A)\subset\operatorname{CH}_{\bullet}(A)$ is an
ideal for the Pontryagin product. Working with the truncated Chow ring has the
advantage that the total Segre class is compatible with the convolution
product of clean cycles in the following sense:
###### Lemma 5.5.
Let
$\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{1},\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{2}\in\mathcal{L}(A)$.
If both Gauss maps
$\gamma_{\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{i}}\colon\operatorname{Supp}(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{i})\to\mathbb{P}_{A}$
are finite morphisms, then
$s(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{1}\circ\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{2})=s(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{1})*s(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{2})\quad\textnormal{\em
in}\quad\operatorname{CH}_{<g}(A).$
###### Proof.
See [Krä21, lemma 3.3.1]. ∎
Thus, the convolution product of clean cycles can be controlled via Pontryagin
products of Segre classes. For the latter, one can use the following
observation:
###### Lemma 5.6.
Let $X,Y\subset A$ be proper reduced subvarieties. Suppose that every
irreducible component of maximal dimension in $Y$ is of general type and that
at least one irreducible component of maximal dimension in $X$ is
nondegenerate. Then the cycle
$s(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{X})\ast
s(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Y})$
is nonzero and effective in all degrees $\leqslant\min\\{\dim X+\dim Y,\dim
A-1\\}$.
###### Proof.
Since the Pontryagin product is bilinear and the Pontryagin product of two
effective cycles is effective or zero, it suffices to show the statement when
$X$ and $Y$ are both integral. Let $d=\dim X$, $e=\dim Y$ and $g=\dim A$, and
consider the Segre class
$s_{m-d}(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Y})\;\in\;\operatorname{CH}_{m-d}(A)\quad\textnormal{for}\quad
d\;\leqslant\;m\;\leqslant\;\min\\{d+e,g-1\\}.$
This class is represented by an effective cycle since
$m-d\in\\{0,1,\dots,e\\}$. For any irreducible component $Z_{m-d}\subset A$ of
an effective cycle representing this class, we have
$s_{d}(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{X})*s_{m-d}(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Y})\;=\;[X]*s_{m-d}(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Y})\;=\;[X]*[Z_{m-d}]+\cdots\;\in\;\operatorname{CH}_{m}(A)$
where $\cdots$ stands for a cycle which is effective or zero. Since by
assumption $X$ is nondegenerate, we furthermore know from lemma 2.7 that the
sum morphism
$\sigma\colon\quad X\times Z_{m-d}\;\longrightarrow\;X+Z_{m-d}\;\subset\;A$
is generically finite onto its image. So $[X]*[Z_{m-d}]=\sigma_{*}([X\times
Z_{m-d}])$ is a nonzero effective class in $\operatorname{CH}_{m}(A)$. In conclusion,
we see that the Pontryagin product
$s(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{X})*s(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Y})$
is nonzero and effective in all degrees $m$ with $d\leqslant
m\leqslant\min\\{d+e,g-1\\}$. In the remaining range $0\leqslant m<d$ the
effectivity of the Pontryagin product is trivial because in that range we can
look at
$s_{m}(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{X})*s_{0}(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Y})=\deg(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Y})\cdot
s_{m}(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{X})$;
note that
$\deg(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Y})>0$
because $Y$ is of general type. ∎
###### Corollary 5.7.
Let $X,Y\subset A$ be reduced subvarieties, possibly reducible. If the Gauss
maps $\gamma_{X}$, $\gamma_{Y}$ are both finite morphisms, then
$\dim\pi(\operatorname{Supp}(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{X}\circ\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Y}))\;=\;\min\\{\dim
X+\dim Y,\dim A-1\\}.$ Here $\pi\colon A\times\mathbb{P}_{A}\to A$ denotes the projection onto the first factor.
###### Proof.
Combine lemma 5.5 and lemma 5.6. ∎
### 5.3. Clean characteristic cycles
From now on and until the end of this section, we work over $k=\mathbb{C}$.
Recall that to any perverse sheaf $P\in\textup{Perv}(A,\mathbb{C})$ one may
attach a characteristic cycle [Dim04, definition 4.3.19], a finite formal sum
of conormal varieties
$\operatorname{CC}(P)\;=\;\sum_{Z\subset
A}m_{Z}(P)\cdot\Lambda_{Z}\quad\textnormal{with}\quad
m_{Z}(P)\;\in\;\mathbb{N}.$
Here the sum runs over all integral subvarieties $Z\subset A$, and only
finitely many $m_{Z}(P)$ are nonzero. These cycles contain a lot of
information, e.g., the Dubson-Kashiwara index formula shows that we can read
off the topological Euler characteristic as
$\chi(A,P)\;=\;\sum_{Z\subset
A}m_{Z}(P)\cdot\deg(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z})$
where
$\deg(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z})$
is the degree of the Gauss map from section 2.2, see [FK00]. Passing from
$\operatorname{CC}(P)$ to its projectivization and discarding all components which are not
clean, we define the clean characteristic cycle by
$\operatorname{cc}(P)\;:=\;\sum_{\begin{subarray}{c}\textup{$Z\subset A$
of}\\\ \textup{general
type}\end{subarray}}m_{Z}(P)\cdot\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{Z}.$
It contains all information needed for the Dubson-Kashiwara index formula.
This index formula implies that for $P\in\textup{Perv}(A,\mathbb{C})$ we have
$\operatorname{cc}(P)=0$ if and only if $P\in\textup{S}(A,\mathbb{C})$. So the
clean characteristic cycle of perverse sheaves is defined on the abelian
quotient category
$\overline{\textup{Perv}}(A,\mathbb{C})=\textup{Perv}(A,\mathbb{C})/\textup{S}(A,\mathbb{C})$.
By additivity in short exact sequences we then obtain a group homomorphism
from the Grothendieck group of this abelian quotient category to the group of
clean cycles:
$\operatorname{cc}\colon\quad
K(\overline{\textup{Perv}}(A,\mathbb{C}))\;\longrightarrow\;\mathcal{L}(A).$
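For instance, assuming the standard computation of characteristic cycles for smooth closed subvarieties, a smooth closed subvariety $X\subset A$ of general type has $\operatorname{CC}(\delta_{X})=\Lambda_{X}$, hence
$\operatorname{cc}(\delta_{X})\;=\;\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{X}\quad\textnormal{and}\quad\chi(A,\delta_{X})\;=\;\deg(\scalerel*{{\rotatebox[origin={c}]{180.0}{$\mathbb{V}$}}}{\mathbb{V}}_{X})$
by the Dubson-Kashiwara index formula.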
The Grothendieck group of an abelian tensor category is not just an abelian
group, but also a ring with the product given by the tensor product, which in
our case is the convolution product $*$ of perverse sheaves. By [Krä21, th.
2.1.1 and ex. 1.3.2] we have $\operatorname{cc}(P*Q)\;=\;\operatorname{cc}(P)\circ\operatorname{cc}(Q)$, so that $\operatorname{cc}$ is a ring homomorphism with respect to the convolution product on the source and the product $\circ$ on the target.
|
# A motion planning algorithm in a figure eight track
C. Jardon, B. Sheppard, V. Zaveri
###### Abstract
We design a motion planning algorithm to coordinate the movements of two
robots along a figure eight track, in such a way that no collisions occur. We
use a topological approach to robot motion planning that relates instabilities
in motion planning algorithms to topological features of configuration spaces.
The topological complexity of a configuration space is an invariant that
measures the complexity of motion planning algorithms. We show that the
topological complexity of our problem is $3$ and construct an explicit
algorithm with three continuous instructions.
Keywords: topological robotics; motion planning; topological complexity
Mathematics Subject Classification (2020): 55P99; 05C85; 90C35; 05C90
## 1 Introduction
The motion planning problem in robotics relates to the movement of robots from
one location to another in their physical space. When there are several robots
moving in the same physical space without collisions we need to study the
configuration space.
The configuration space gives a complete specification of the position of
every robot in the physical space and a path in the configuration space is
interpreted as a particular collision-free movement from the initial position
of the robots to their final one.
When $X$ is the configuration space, the motion planning problem consists in
constructing an algorithm that takes as input a pair of initial and final
configurations $(x_{i},x_{f})\in X\times X$, and produces a continuous path
$\alpha:I\to X$ from the initial configuration $x_{i}=\alpha(0)$ to the final
one $x_{f}=\alpha(1)$. Farber initiated a topological approach to this problem
in [1]. Let $PX$ be the space of all paths in $X$. A motion planning
algorithm (MPA) is a map $s:X\times X\to PX$ that sends a pair of
points to a path between them. This approach favors maps that depend
continuously on the input since they will translate into more robust
algorithms in real-life problems. However, Farber proved that there exists a
continuous MPA on $X$ if and only if $X$ is contractible. Therefore, MPAs on
most spaces arising in real life necessarily have essential discontinuities,
or instabilities, forced by the topology of $X$.
Motivated by this, Farber introduced the topological complexity of a space.
This is a numerical homotopy invariant which measures the minimal number of
instabilities of any MPA on a space. Essentially, the topological complexity
of a space $X$, $TC(X)$, is the smallest $k$ such that $X\times X$ can be
covered by $k$ sets over each of which there is a continuous local section. We
will call each of these local sections an instruction. Then, the topological
complexity of a space $X$ is the minimal number of instructions in any MPA in
$X$.
We present our approach to calculating the topological complexity of the
motion planning problem for two robots moving in a figure eight graph, and
construct an explicit algorithm with the minimal number of instructions given
by the topological complexity.
Our constructive strategy is based on finding a spine $Z\subset X$, a
deformation retract of $X$ of lower dimension, and designing the algorithm in
$Z$. We then extend the algorithm to the whole configuration space $X$ by
using the traces of the homotopy deformation. We also give a translation of the
algorithm for the physical space that shows explicitly how to move each of the
two robots from any initial position to any final position.
This paper will be organized as follows. Section 2 introduces basic
terminology used throughout the paper along with the basic definitions of MPA
and configuration spaces. Section 3 explains the construction of the
configuration space and its flat representation. In Section 4 we discuss the
topological complexity and its basic properties. Section 5 gives a detailed
description of the homotopy deformation into the spine. We also show in this
section that the spine is homotopy equivalent to a wedge of seven circles,
which allows us to calculate the topological complexity of the configuration
space. In Section 6 we present our main algorithm in the spine, and extend it
to the whole configuration space. We translate the previous algorithm in the
configuration space to the physical space in Section 7.
## 2 Definitions and terminology
### 2.1 Motion planning problem
One of the goals of robotics is to create autonomous robots. A set of robots
is a mechanical system that accepts descriptions of tasks and executes them
without further human intervention. The input description specifies what
should be done and the robots decide how to do it and perform the task. The
motion planning problem consists of designing a general algorithm for the
robots to move from an initial position to a final one. We will use a
topological approach to the robot motion planning problem initiated by Farber
in [1].
### 2.2 Basic notions
Since continuity is a key feature of our algorithm, we will start by recalling
its definition for topological spaces. Let $X$ and $Y$ be topological spaces.
We say that $f:X\to Y$ is continuous at $a\in X$ if whenever $V$ is an open
neighborhood of $f(a)$ in $Y$, there is an open neighborhood $U$ of $a$ in $X$
such that $f(U)\subset V$. If $f$ is continuous at every point $a\in X$ then
we say $f$ is continuous on $X$.
A path from a point $A$ to a point $B$ in a topological space $X$ is a
continuous function $\alpha$ from the unit interval $I=\left[0,1\right]$ to
$X$ with $\alpha(0)=A$ and $\alpha(1)=B$. The path space, $PX$, is the space
of all paths $\alpha$ in $X$, i.e. $PX=\left\\{\alpha\ \lvert\
\alpha:\left[0,1\right]\to X\right\\}$. A space $X$ is path-connected if for
all points $x,y\in X$ there exists a path from $x$ to $y$.
The product space of $X$ and $Y$ will be the Cartesian product $X\times
Y=\\{(x,y)\ \lvert\ x\in X,y\in Y\\}$ with the product topology.
The map $e:PX\to X\times X$ which takes a path $\alpha$ as its input and
returns the pair $(\alpha(0),\alpha(1))$ is called the evaluation map. Here
$\alpha(0)$ is the initial point of the path and $\alpha(1)$ is the final one.
If $X$ is a path-connected space, let the function $s:X\times X\to PX$ be a
section of the evaluation such that $e\circ s=id_{X\times X}$. This section
takes as input two points and produces a path $\alpha$ between them as output:
$s:(A,B)\mapsto\alpha\text{ and }\alpha(0)=A,\alpha(1)=B$
where $\alpha(0)=A$ is the initial point and $\alpha(1)=B$ is the final point
of the path.
Let $f:X\to Y$ be a bijection. If both the function $f$ and the inverse
function $f^{-1}:Y\to X$ are continuous, then $f$ is called a homeomorphism.
If such a function exists, we say $X$ is homeomorphic to $Y$ using the
notation $X\approx Y.$
Let $h$ and $h^{\prime}$ be continuous maps from $X$ to $Y.$ We say that $h$
is homotopic to $h^{\prime}$, denoted by $h\simeq h^{\prime}$, if there exists
a continuous map
$F:X\times I\to Y$
such that
$F(x,0)=h(x)\text{ and }F(x,1)=h^{\prime}(x)$
for each $x\in X$.
A map $f:X\to Y$ is a homotopy equivalence if there exists a map $g:Y\to X$
where $f\circ g\simeq id_{Y}$ and $g\circ f\simeq id_{X}.$ If such maps exist,
we say $X$ and $Y$ are homotopy equivalent and write $X\simeq Y$. Note that
homeomorphic spaces are always homotopy equivalent.
A topological space $X$ is contractible if it is homotopy equivalent to a
point.
### 2.3 Configuration space
The variety of all the possible states that any mechanical system can take is
called the configuration space. Each point in the configuration space
represents a state of the system. The space $\Gamma$ where the robots are able
to move will be called the physical space. The configuration space of $n$
robots moving in a space $\Gamma$ without collisions, denoted $C^{n}(\Gamma)$,
is defined as:
$C^{n}(\Gamma)=(\Gamma\times\Gamma\times\cdots\times\Gamma)-D.$
Here $D$ represents the pairwise diagonal:
$D=\left\\{(x_{1},x_{2},\ldots,x_{n})\in\Gamma^{n}\ \lvert\ x_{i}=x_{j}\text{
for some }i\neq j\right\\}$
given by the states in which at least two robots occupy the same place.
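To make the definition concrete, here is a minimal Python sketch that tests membership in $C^{n}(\Gamma)$ for the figure eight. The encoding of points of $\Gamma$ as (circle, angle) pairs, with angle $0$ on either circle glued to the wedge point, is our own illustrative choice, as is the numerical tolerance.

```python
import math
from itertools import combinations

TWO_PI = 2.0 * math.pi

def at_center(angle, tol=1e-9):
    """True if the angle coordinate sits at the wedge point (angle 0 mod 2*pi)."""
    r = angle % TWO_PI
    return min(r, TWO_PI - r) < tol

def same_point(p, q, tol=1e-9):
    """Equality in Gamma = S^1 v S^1, points encoded as (circle, angle) pairs."""
    (c1, a1), (c2, a2) = p, q
    if c1 == c2:
        r = (a1 - a2) % TWO_PI
        return min(r, TWO_PI - r) < tol
    # Points on different circles coincide only at the shared center.
    return at_center(a1, tol) and at_center(a2, tol)

def in_configuration_space(positions):
    """A tuple of robot positions lies in C^n(Gamma) iff it avoids the diagonal D."""
    return not any(same_point(p, q) for p, q in combinations(positions, 2))

# Two robots on different circles: a valid configuration.
print(in_configuration_space([(0, 1.0), (1, 2.0)]))   # True
# Both robots at the center: a point of the diagonal D, hence excluded.
print(in_configuration_space([(0, 0.0), (1, 0.0)]))   # False
```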
## 3 Configuration space of two robots on a figure eight track
Let $\Gamma$ denote the figure eight space, which is a wedge sum of two
circles $S^{1}\vee S^{1}$. We will consider the motion planning problem of two
robots moving in the track $\Gamma=S^{1}\vee S^{1}$. See figure 1.
Figure 1: Physical space $\Gamma$.
In this case, the configuration space is
$C^{2}(\Gamma)=(\Gamma\times\Gamma)-D.$ We will visualize this space
$X=C^{2}(\Gamma)$ first when embedded in $\mathbb{R}^{3}$ and then we will
explain its flat representation that will be useful for calculations.
### 3.1 Visualization of the configuration space $X$
The robots in $\Gamma$ are represented by triangles and squares. The triangle
robot will be also referred to as the first robot and the square robot as the
second one. We shade each circle with a different color to distinguish the two
separate circles in the configuration space. Each concrete position in the
physical space, see for instance figure 2, will have a corresponding state in
the configuration space.
Figure 2: First and second robot in $\Gamma$
The Cartesian product $\Gamma\times\Gamma$ represents all positions of the two
robots in $\Gamma$. The product of a figure eight by itself is given by four
connected tori as in figure 3. For robots in the same circle, their states are
represented by the red and blue tori whereas the purple tori represent states
where the robots are in different circles.
Figure 3: Cartesian product $(S^{1}\vee S^{1})\times(S^{1}\vee S^{1})$.
To obtain the configuration space, we need to remove the points in this
Cartesian product that correspond to positions in which both robots are at the
same place. We observe that the two tori representing positions of robots in
the same circle have a diagonal removed from them, whereas the other two tori
have just one point removed.
The diagonal $D=\\{(x,y)\in(S^{1}\vee S^{1})\times(S^{1}\vee S^{1})|\;x=y\\}$
determines a curve on the red and blue tori. The configuration space $X$ is
the Cartesian product $\Gamma\times\Gamma$ with this curve removed. See figure
4.
Figure 4: $X=(\Gamma\times\Gamma)-D$.
We will represent now the configuration space in the two-dimensional space by
first considering the circle $S^{1}$ as a quotient of the unit interval where
the points $0$ and $1$ are identified, i.e. $S^{1}\approx I/\\{0\sim 1\\}$.
See figure 5.
Figure 5: Flat representation of the circle.
The flat representation of the Cartesian product $\Gamma\times\Gamma$ is given
by four connected squares with their sides identified as in figure 6.
Figure 6: The product $\Gamma\times\Gamma$
The diagonal $D=\\{(x,y)\in(S^{1}\vee S^{1})\times(S^{1}\vee S^{1})|\;x=y\\}$
is given by the diagonals of the red and blue squares in the flat
representation. The configuration space is the Cartesian product
$\Gamma\times\Gamma$ with the diagonal $D$ removed. See figure 7.
Figure 7: Flat representation of $X=(\Gamma\times\Gamma)-D$
Once the diagonal is removed, we can visualize the flat representation of
the configuration space as the following parallelogram with sides identified
as shown in figures 8 and 9.
Figure 8: Transition from square to parallelogram representation. Figure 9:
Flat representation of $X=(\Gamma\times\Gamma)-D$ as a parallelogram.
## 4 Topological Complexity
In order to program robots to move autonomously from any initial state to any
desired state we will need a motion planning algorithm that takes as input the
pair $(A,B)$ where $A$ is the initial state and $B$ is the final state and
outputs a continuous motion of the system starting at $A$ and ending at $B$.
A Motion Planning Algorithm (MPA) is a section $s:X\times X\rightarrow PX$ of
the evaluation map. It is a function that takes as input a pair of initial and
final states, and outputs a path between them.
A motion planning algorithm $s:X\times X\rightarrow PX$ is a map that is not
necessarily continuous. When it is continuous, if the inputs are slightly
modified then the path between the inputs is only slightly modified too. The
discontinuities of the MPA emerge as instabilities of the robot motion. We
want to minimize such instabilities to produce instructions that are robust
to noise: if there are small errors in measuring the initial and final
positions, we want the path associated with the exact positions and the path
given by the measured ones to stay close to each other.
Farber proved that the only case where a continuous motion planning algorithm
exists is when the space is contractible.
###### Theorem 4.1
[1] A continuous motion planning $s$: $X\times X\rightarrow PX$ exists if and
only if the configuration space $X$ is contractible.
For spaces that are not contractible, all MPAs will be discontinuous. In real-
life cases, we would like to minimize these discontinuities to produce
optimally stable MPAs.
Farber introduced a numerical invariant known as the topological complexity,
$TC(X)$, which can roughly be described as the minimal number of continuous
instructions needed to describe any motion planning algorithm in $X$.
This number measures how complex it is for the robots to navigate the space.
For example, if the topological complexity of a space is two, then any MPA on
it needs at least two continuous instructions. The higher the $TC$ is, the
harder it is for the robots to navigate the space.
###### Definition 4.2
[1][4] Let $X$ be a path-connected topological space. The topological
complexity of $X$, $TC(X)$, is the smallest number $k$, such that $X\times X$
can be covered by $k$ sets $U_{1},U_{2},U_{3}$, $\ldots$, $U_{k}$ where on
each of them there is a continuous local section $s_{i}:U_{i}\to PX$ for each
$i=1,2,\ldots,k$.
Each of these local sections will be our continuous instructions in the motion
planning algorithm. For instance, if the space $X$ is contractible we need
only one section and $TC(X)=1$. In the case of a circle, we need two sets to
cover $S^{1}\times S^{1}$. The first set is $U_{1}=\\{(x,y)|\;x\text{ is
antipodal to }y\\}$ and the second set is $U_{2}=\\{(x,y)|\;x\text{ is not
antipodal to }y\\}$. The first instruction over the set $U_{1}$ is “move
counterclockwise” while the second instruction over $U_{2}$ is “move following
the shortest path”. Both of these instructions are continuous on their
domains, and $TC(S^{1})=2$. Note that neither instruction extends to a
continuous section over all of $S^{1}\times S^{1}$.
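As an illustration, the two instructions for the circle can be written down directly. The following Python sketch encodes points of $S^{1}$ by angles; the numerical tolerance in the antipodality test is an implementation choice, since exact antipodality is a measure-zero condition.

```python
import math

TWO_PI = 2.0 * math.pi

def angle_diff(a, b):
    """Signed displacement from a to b along the shortest arc, in (-pi, pi]."""
    d = (b - a) % TWO_PI
    return d - TWO_PI if d > math.pi else d

def mpa_circle(a, b, tol=1e-12):
    """Two-instruction MPA on S^1: returns a path [0,1] -> S^1 from a to b."""
    d = angle_diff(a, b)
    if abs(abs(d) - math.pi) < tol:
        # Instruction over U1 (antipodal pairs): move counterclockwise.
        return lambda s: (a + s * math.pi) % TWO_PI
    # Instruction over U2 (non-antipodal pairs): follow the shortest path.
    return lambda s: (a + s * d) % TWO_PI

path = mpa_circle(0.0, math.pi)   # antipodal input pair, handled by instruction 1
print([round(path(s), 3) for s in (0.0, 0.5, 1.0)])   # [0.0, 1.571, 3.142]
```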
Farber also proved that this number is a homotopy invariant: if a space is
homotopy equivalent to another, then their topological complexities coincide.
###### Theorem 4.3
[1] If $X$ is homotopy equivalent to $Y$, then $TC(X)=TC(Y)$.
## 5 Homotopy type of $X$
Our aim will be to find a simpler space that is homotopy equivalent to the
space $X$ and then construct our algorithm in the simpler space. Finding an
algorithm and controlling the discontinuities directly in $X$ is not trivial.
Theorem 4.3 is crucial to reduce the complexity of the task by using the
homotopy equivalence not only to calculate the topological complexity of a
simpler space, but also to construct the actual algorithm in the simpler space
and extend it to $X$ afterward using the homotopy traces.
Ghrist proved that the configuration space of robots moving in a graph is
homotopy equivalent to a space of lesser dimension.
###### Theorem 5.1
[2] Given a graph $\Gamma$, the configuration space of $N$ distinct points on
$\Gamma$ can be deformation retracted to a spine whose dimension is bounded
above by the number of vertices of $\Gamma$ of valency greater than two.
Our figure-eight graph has only one vertex with valency greater than two and
therefore the configuration space retracts to a spine of dimension $1$. We
know then that it is possible to shrink the space $X$ to a one-dimensional
space. We will show next the explicit construction of this spine for our
graph.
### 5.1 Construction of the Spine
The spine $Z$ of the configuration space is a one-dimensional subspace that
carries all of the topology of the configuration space: the full space
deformation retracts onto it. The center of the physical space $\Gamma$ is the
point at which the circles intersect, and the poles are the points antipodal
to the center in each circle.
Figure 10: The center and the poles, respectively.
The spine $Z$ is a set of segments in the flat representation, with their
endpoints identified, that represents the positions of the robots in $\Gamma$
in which at least one robot is at a pole and the other is on a different
circle, or both robots are at antipodal positions in the same circle.
We construct a retraction $r$ of the whole space $X$ onto the spine $Z$. The
homotopy deformation $H$ will follow the traces $H(\,\cdot\,,t)$ shown in
figure 11.
Figure 11: Homotopy traces of the retraction of $X$ into $Z$.
The map $r$ is a deformation retraction since it is a retraction and the
composition with the inclusion is homotopic to the identity map on $X$. We
observe the following identifications of the segment extremes in $Z$ shown in
figure 12.
Figure 12: Spine $Z$.
These identifications make the spine $Z$ homeomorphic to a chain of circles
$C$. We color each segment to describe this homeomorphism.
Figure 13: The spine $Z$ is homeomorphic to the chain $C$.
We can also visualize the chain of circles $C$ directly in the representation
of $X$ embedded in $\mathbb{R}^{3}$. By following the traces of the homotopy,
the two cylinders deform into circles and each torus minus a point deforms
into a figure-eight. See figure 14. Note that in figures 14 and 15, the dashed
lines are the diagonal $D$.
Figure 14: Traces of the homotopy in $X$.
We can visualize in the following figure the chain of circles $C$ embedded in
the three-dimensional space.
Figure 15: The chain $C$.
### 5.2 The wedge of seven circles
We will show in this section that the chain $C$ is homotopy equivalent to a
wedge of seven circles $Y$. We perform a series of contractions in each circle
as shown in figure 16.
Figure 16: Contraction of a segment in a circle.
We repeat this deformation in each of the circles. See figures 17 and 18.
Figure 17: First stages of the deformation. Figure 18: Last stages of the
deformation.
The final space is a wedge of seven circles as shown in figure 19.
Figure 19: Wedge of seven circles $Y$.
### 5.3 Calculation of the topological complexity of the configuration space
The configuration space $X$ is homotopy equivalent to the spine $Z$ and $Z$ is
homeomorphic to the chain of circles $C$, which is homotopy equivalent to the
wedge $Y$, i.e.
$X\simeq Z\approx C\simeq Y.$
Then, by Farber’s theorem 4.3 we have that
$TC(X)=TC(Z)=TC(C)=TC(Y).$ (1)
Farber also studied and calculated the topological complexity of wedges of
spheres:
###### Lemma 5.2
[3] Let $W$ denote the wedge of $n$ spheres $S^{m}$, $W=S^{m}\vee
S^{m}\vee\cdots\vee S^{m}$. Then $TC(W)=\left\\{\begin{array}[]{ll}2&\mbox{if
}n=1\mbox{ and }m\mbox{ is odd,}\\\ 3&\mbox{if either }n>1\mbox{, or m is
even.}\end{array}\right.$
For our space $Y$ we have $m=1$ and $n=7$. By Lemma 5.2, the topological
complexity of a wedge of seven circles is $3$, so $TC(X)=3$ by equation (1).
We now know that our algorithm must have at least $3$ continuous
instructions.
## 6 Algorithm in the configuration space
We will first construct our algorithm with three instructions in the spine
$Z$, which is homotopy equivalent to $X$. Then we will extend the algorithm to
the whole configuration space $X$ following the traces of the homotopy.
Recall that the spine $Z$ in the flat representation is a network consisting
of two crosses and four sub-diagonals.
Figure 20: Space Z
When the state is in a diagonal segment in $Z$, we will say that the state is
a sub-diagonal state. In the physical space $\Gamma$, this state will
correspond to antipodal positions in the same circle.
Figure 21: A sub-diagonal state.
When a state is located in a cross-segment in $Z$, we will say that it is a
cross-state. In the physical space $\Gamma$, cross-states translate as the
position of at least one robot being at a pole and the other anywhere on the
opposite circle.
Figure 22: A cross-state.
A cross center is a cross-state at the intersection of horizontal and vertical
cross-segments. A cross center corresponds to the positions in which both
robots are at poles in different circles in $\Gamma$.
Figure 23: Cross centers in the physical space $\Gamma$. Figure 24: Cross
centers in $Z$ and $C$.
An intersection point is a state in $Z$ at the intersection between a cross-
segment and a sub-diagonal segment. In $\Gamma$, this state corresponds to
positions in which one robot is at the center and the other at a pole. There
are two types of intersection points: $H$-points located between a sub-
diagonal segment and a horizontal cross-segment, and $V$-points located
between a sub-diagonal segment and a vertical cross-segment. An $H$-point
corresponds in the physical space to a position in which the first robot is at
the center and the second robot is at a pole; for $V$-points the roles are
reversed.
Figure 25: Intersection V-points(left) and H-points(right) in the physical
space $\Gamma$. Figure 26: Intersection points in $Z$ and $C$.
The cross and sub-diagonal segments in $Z$ will constitute the circles in the
chain $C$ whereas the intersection points and cross centers will be the
vertices in $C$.
### 6.1 Algorithm in the chain $C$
The initial, final, and current states are the states corresponding to the
initial, final, and current positions respectively.
If the current state is at a vertex in $C$, we will say that the current
circle is the next immediate circle counterclockwise. Otherwise, the current
circle is the circle where the current state is.
Let $(I,F)=((x_{i},y_{i}),(x_{f},y_{f}))$ be a pair of initial and final
states in $C\times C$. We define the following subsets of $C\times C$:
* •
$A=\left\\{(I,F)\in C\times C\ \lvert\ I\text{ is antipodal to }F\text{ in the
same circle}\right\\}$
* •
$V=\left\\{(I,F)\in C\times C\ \lvert\ I\text{ or }F\text{ is a
vertex}\right\\}$
* •
$V^{\prime}=\left\\{(I,F)\in C\times C\ \lvert\ I\text{ and }F\text{ are
vertices}\right\\}$
###### Algorithm 6.1
Set current state to initial state $I$
Instruction 1
For $(I,F)\in C\times C-(A\cup V)$:
1. 1.
Check if final state is in current circle.
* •
If true, take shortest path to final state.
* •
If false, move counterclockwise in current circle to next vertex and set
current state to that vertex. Repeat 1.
Instruction 2
For $(I,F)\in(A\cup V)-V^{\prime}$:
1. 2.
Check if final state is in current circle.
* •
If true, check if final state is antipodal to current state.
* –
If true, go counterclockwise to final state.
* –
If false, take shortest path to final state.
* •
If false, check if current state is a vertex.
* –
If true, move counterclockwise in current circle to next vertex and set
current state to that vertex. Repeat 2.
* –
If false, take shortest path to next counterclockwise vertex and set current
state to that vertex. Repeat 2.
Instruction 3
For $(I,F)\in V^{\prime}$:
1. 3.
Check if final state is next counterclockwise vertex.
* •
If true, go counterclockwise in the current circle to final state.
* •
If false, go counterclockwise in the current circle to next vertex and set
current state to that vertex. Repeat 3.
Each of these instructions is continuous in its domain.
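A small Python sketch of the dispatch underlying algorithm 6.1 may help: given predicates for the sets $A$, $V$ and $V^{\prime}$, it selects which of the three continuous instructions handles an input pair. The predicates is_vertex and antipodal_same_circle are assumed helpers describing the chain $C$; they are not part of the original text.

```python
def instruction_index(I, F, is_vertex, antipodal_same_circle):
    """Select the instruction of algorithm 6.1 that covers the pair (I, F).

    The three domains partition C x C:
      instruction 3 on V', instruction 2 on (A u V) - V',
      instruction 1 on the rest.
    """
    if is_vertex(I) and is_vertex(F):                # (I, F) in V'
        return 3
    if antipodal_same_circle(I, F) or is_vertex(I) or is_vertex(F):
        return 2                                     # (I, F) in (A u V) - V'
    return 1                                         # (I, F) in C x C - (A u V)
```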
### 6.2 Algorithm in $Z$
By following the homeomorphism between $C$ and $Z$ we will translate now each
of these instructions to the spine $Z$.
We define a distinguished direction of movement in $Z$ that will correspond to
moving from vertices counterclockwise in the current circle in $C$. The
positive direction in $Z$ is given by:
* •
from a cross center, move to the right in the horizontal segment;
* •
from an $H$-point, move up in the diagonal segment;
* •
from a $V$-point, move up in the vertical segment;
* •
from any other point, move to the right in horizontal or diagonal segments and
up in vertical segments.
Figure 27: Positive direction in $C$ and $Z$.
If the current state is at an intersection point or cross center in $Z$, we
will say that the current segment is the next immediate segment in the
positive direction. Otherwise, the current segment is where the current state
is.
We consider $Z$ to be a metric graph where each edge has length one and the
distance between two points is given by the infimum of the lengths of paths
joining them.
Let $(I,F)=((x_{i},y_{i}),(x_{f},y_{f}))$ be a pair of initial and final
states in $Z\times Z$. We define the following subsets of $Z\times Z$:
* •
$H=\\{(I,F)\in Z\times Z\ \lvert\ I\text{ and }F\text{ are on the same segment
and }d(I,F)=\frac{1}{2}\\}$
* •
$J=\\{(I,F)\in Z\times Z\ \lvert\ I\text{ or }F\text{ are at a cross center or
intersection point}\\}$
* •
$J^{\prime}=\\{(I,F)\in Z\times Z\ \lvert\ I\text{ and }F\text{ are both at
cross centers or intersection points}\\}$
###### Algorithm 6.2
Set current state to initial state $I$
Instruction 1
For $(I,F)\in Z\times Z-(H\cup J)$:
1. 1.
Check if final state is in current segment.
* •
If true, take shortest path to final state.
* •
If false, move in the positive direction in current segment to next point of
intersection and set current state to that point of intersection. Repeat 1.
Instruction 2
For $(I,F)\in(H\cup J)-J^{\prime}$:
1. 2.
Check if final state is in current segment.
* •
If true, check if the distance from current state to final state is
$\frac{1}{2}$.
* –
If true, move in the positive direction to final state.
* –
If false, take shortest path to final state.
* •
If false, check if current state is a point of intersection.
* –
If true, move in the positive direction along current segment to next point of
intersection and set current state to that point of intersection. Repeat 2.
* –
If false, take shortest path, in the positive direction, to next point of
intersection and set current state to that point of intersection. Repeat 2.
Instruction 3
For $(I,F)\in J^{\prime}$:
1. 3.
Check if final state is the next point of intersection in the positive
direction.
* •
If true, move in the positive direction along the current segment to final
state.
* •
If false, move in the positive direction along the current segment to next
point of intersection and set current state to that point of intersection.
Repeat 3.
### 6.3 Algorithm in $X$
We will now extend algorithm 6.2 to the whole configuration space.
Given an initial state $I$, let $I_{s}$ be the state corresponding to the
point of intersection between the trace of the homotopy passing through $I$
and the spine $Z$. Analogously, $F_{s}$ is the state in the spine
corresponding to the final state $F$.
###### Algorithm 6.3
Let $I$ and $F$ be the initial and final states.
1. 1.
Find $(I_{s},F_{s})$.
2. 2.
Move state $I$ to $I_{s}$.
3. 3.
Execute algorithm 6.2 for $(I_{s},F_{s})$.
4. 4.
Move state $F_{s}$ to $F$.
Given the initial and final states shown in figure 28, the following figures
illustrate algorithm 6.3 for this case.
Figure 28: Step 2. Figure 29: Step 3. Figure 30: Step 4.
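The structure of algorithm 6.3, concatenating the homotopy trace, the spine algorithm, and the reversed trace, can be sketched as follows in Python. Here project, trace and spine_mpa are assumed callables encoding the retraction data of section 5, and the even three-way split of the time parameter is an illustrative choice.

```python
def extend_mpa(I, F, project, trace, spine_mpa):
    """Extend an MPA on the spine Z to all of X along the deformation retraction.

    project(p): the point I_s (or F_s) where the trace through p meets Z.
    trace(p, s): the homotopy H(p, s), with trace(p, 0) = p and
                 trace(p, 1) = project(p).
    spine_mpa(a, b): a path [0, 1] -> Z from a to b given by algorithm 6.2.
    """
    I_s, F_s = project(I), project(F)
    middle = spine_mpa(I_s, F_s)

    def path(s):
        if s <= 1.0 / 3.0:                  # step 2: slide I onto the spine
            return trace(I, 3.0 * s)
        if s <= 2.0 / 3.0:                  # step 3: run the spine algorithm
            return middle(3.0 * s - 1.0)
        return trace(F, 3.0 * (1.0 - s))    # step 4: reverse the trace back to F
    return path
```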
## 7 Algorithm in the physical space
We will now translate our algorithm 6.3 to the physical space $\Gamma$.
Observe that states in the spine $Z$ correspond to positions in which at least
one robot is at a pole if the robots are in different circles and antipodal
positions if the robots are in the same circle.
### 7.1 States $(I_{s},F_{s})$ in the physical space $\Gamma$
To translate the first step of the algorithm, we need to find the positions of
the robots in $\Gamma$ corresponding to the intersection of the traces with
the spine.
If the robots are in the same circle, move the robots away from each other
until they are in antipodal positions. See figure 31. If they are in different
circles, move them away from the center until at least one of them reaches a
pole position. See figure 32. If one robot is at the center, move the other
robot until it reaches a pole position. See figure 33.
Figure 31: Moving to antipodal positions Figure 32: Moving until one robot
reaches the pole. Figure 33: Robot in center scenario.
The ratio between the speeds at which the robots are moving is determined by
the slope of the traces of the homotopy discussed in section 5.1. If a trace
of the homotopy is given by the curve $(x(t),y(t))$, then the slope of the
trace is given by the ratio between the speeds $y^{\prime}(t)$ and
$x^{\prime}(t)$.
### 7.2 Algorithm in the physical space for the positions in $Z$
We will translate here the movement in the positive direction in $Z$ to the
physical space. Recall that a cross-center state corresponds to both robots
being at poles, and intersection points in $Z$ correspond to one robot being
at a pole and the other at the center. In the physical space, we will call
these positions vertex positions.
The positive direction in $\Gamma$ is given by:
* •
if both robots are at poles, move robot $1$ counterclockwise;
* •
if the first robot is at the center and the other at a pole, move both
counterclockwise;
* •
if the first robot is at a pole and the other at the center, move the second
robot counterclockwise.
Figure 34: Positive direction in $\Gamma$.
If the current position is a vertex position, we will say that its
neighborhood is the set of positions determined by moving the current position
in the positive direction until it circles back to itself. Otherwise, the
neighborhood is given by moving counterclockwise the robot that is not at a
pole all around its circle if the robots are at different circles or moving
counterclockwise both robots together if they are antipodal in the same
circle.
Figure 35: A neighborhood of a non-vertex position. Figure 36: A neighborhood
of a vertex position.
Let $(I,F)=((x_{i},y_{i}),(x_{f},y_{f}))$ be a pair of initial and final
positions in $\Gamma\times\Gamma$. We define the following subsets of
$\Gamma\times\Gamma$:
* •
$K=\\{((x_{i},y_{i}),(x_{f},y_{f}))\ \lvert\ x_{i}\text{ and }x_{f}\text{ are
antipodal}\text{ or }y_{i}\text{ and }y_{f}\text{ are antipodal}\\}$
* •
$L=\\{((x_{i},y_{i}),(x_{f},y_{f}))\ \lvert\ (x_{i},y_{i})\text{ or
}(x_{f},y_{f})\text{ is a vertex position}\\}$
* •
$L^{\prime}=\\{((x_{i},y_{i}),(x_{f},y_{f}))\ \lvert\ (x_{i},y_{i})\text{ and
}(x_{f},y_{f})\text{ are vertex positions}\\}$
###### Algorithm 7.1
Set current position to initial position $I$.
Instruction 1
For $(I,F)\in\Gamma\times\Gamma-(K\cup L)$:
1. 1.
Check if final position is in the neighborhood of the current position.
* •
If true, move each robot simultaneously to their final position taking the
shortest path.
* •
If false, move in the positive direction in current neighborhood until next
vertex position. Set current position to this one. Repeat 1.
Instruction 2
For $(I,F)\in(K\cup L)-L^{\prime}$:
1. 2.
Check if final position is in the current neighborhood.
* •
If true, check if at least one of the robots has its initial and final
positions antipodal.
* –
If true, move in the positive direction to final position.
* –
If false, move each robot simultaneously to their final position taking the
shortest path.
* •
If false, check if current position is a vertex position.
* –
If true, move in the positive direction in current neighborhood to the next
vertex. Set current position to this one. Repeat 2.
* –
If false, take shortest path to next vertex position in the positive direction
and set current position to this one. Repeat 2.
Instruction 3
For $(I,F)\in L^{\prime}$:
1. 3.
Check if final position is the next vertex position in the positive direction.
* •
If true, move in the positive direction in the current neighborhood to final
position.
* •
If false, move in the positive direction in the current neighborhood to the
next vertex position and set current position to that one. Repeat 3.
### 7.3 Complete algorithm in the physical space $\Gamma$
We now write the complete algorithm for all positions in $\Gamma$.
###### Algorithm 7.2
Let $I$ and $F$ be the initial and final positions.
1. 1.
If the robots are in the same circle, move the robots away from each other
until they are in antipodal positions.
2. 2.
If they are in different circles, move them away from the center until at
least one of them reaches a pole position.
3. 3.
Repeat steps $1$ and $2$ with the final position. Call the resulting outputs
the initial $Z$-position and the final $Z$-position.
4. 4.
Execute algorithm 7.1 for the initial $Z$-position and final $Z$-position.
5. 5.
Move back from the final $Z$-position to the final position reversing the
movement done in the first step.
In the following case scenarios, we describe the movements in the physical
space as well as in the configuration space.
#### 7.3.1 Case scenario 1
Let us consider the initial and final positions shown in figure 37.
Figure 37: Initial and final positions and their corresponding spine
positions.
The path to move from initial state to final state in the configuration space
is shown in figure 38.
Figure 38: Path in $X$ for case 1.
The steps of the algorithm to move from initial to final positions in the
physical space are shown in figure 39.
Figure 39: Steps of the algorithm in $\Gamma$ for case 1.
#### 7.3.2 Case scenario 2
Let us consider the initial and final positions shown in figure 40.
Figure 40: Initial and final positions and their corresponding spine
positions.
The path to move from initial state to final state in the configuration space
is shown in figure 41.
Figure 41: Path in $X$ for case 2.
The steps of the algorithm to move from initial to final positions in the
physical space are shown in figure 42.
Figure 42: Steps of the algorithm in $\Gamma$ for case 2.
#### 7.3.3 Case scenario 3
Let us consider the initial and final positions shown in figure 43.
Figure 43: Initial and final positions and their corresponding spine
positions.
The path to move from initial state to final state in the configuration space
is shown in figure 44.
Figure 44: Path in $X$ for case 3.
The steps of the algorithm to move from initial to final positions in the
physical space are shown in figure 45.
Figure 45: Steps of the algorithm in $\Gamma$ for case 3.
## References
* [1] M. Farber, Topological complexity of motion planning, Discrete Comput. Geom., 29 (2003), 211–221.
* [2] R. Ghrist, D. Koditschek, Safe Cooperative Robot Dynamics on Graphs, SIAM J. Control and Optimization, 40 (2002), 1556–1575.
* [3] M. Farber, Instabilities of robot motion, Topology and its Applications., 140 (2004), 245–266.
* [4] J.M. García-Calcines, A note on covers defining relative and sectional categories, Topology and its Applications, 265 (2019), 106810.
Cristian Jardon
Wilbur Wright College
4300 N Narragansett Ave
Chicago, IL 60634
E-mail<EMAIL_ADDRESS>
Brian Sheppard
Harold Washington College
30 E. Lake Street
Chicago, IL 60601
E-mail<EMAIL_ADDRESS>
Veet Zaveri
Wilbur Wright College
4300 N Narragansett Ave
Chicago, IL 60634
E-mail<EMAIL_ADDRESS>
|
# Collaborative Teacher-Student Learning via Multiple Knowledge Transfer
Liyuan Sun, Jianping Gou<EMAIL_ADDRESS>
School of Computer Science and Telecommunication Engineering, Jiangsu
University, Zhenjiang, 212013, China
Baosheng Yu, Lan Du, Dacheng Tao
Faculty of Information Technology, Monash University, Australia
UBTECH Sydney AI Centre, School of Computer Science, Faculty of Engineering,
The University of Sydney, Darlington, NSW 2008, Australia
###### Abstract
Knowledge distillation (KD), as an efficient and effective model compression
technique, has been receiving considerable attention in deep learning. The key
to its success is to transfer knowledge from a large teacher network to a
small student one. However, most of the existing knowledge distillation
methods consider only one type of knowledge learned from either instance
features or instance relations via a specific distillation strategy in
teacher-student learning. There are few works that explore the idea of
transferring different types of knowledge with different distillation
strategies in a unified framework. Moreover, the frequently used offline
distillation suffers from a limited learning capacity due to the fixed
teacher-student architecture. In this paper we propose a collaborative
teacher-student learning framework via multiple knowledge transfer (CTSL-MKT)
that promotes both self-learning and collaborative learning. It allows
multiple students to learn knowledge from both individual instances and
instance relations in a collaborative way. While learning from themselves with
self-distillation,
they can also guide each other via online distillation. The experiments and
ablation studies on four image datasets demonstrate that the proposed CTSL-MKT
significantly outperforms the state-of-the-art KD methods.
## 1 Introduction
Deep neural networks have achieved state-of-the-art performance on many
applications such as computer vision, natural language processing, and speech
recognition in recent years. The remarkable performance of deep learning
relies on designing deeper or wider network architectures with many layers and
millions of parameters to enhance the learning capacity. However, it is almost
impossible to deploy the large-scale networks on platforms with limited
computation and storage resources, e.g., mobile devices and embedded systems.
Thus, the model compression and acceleration techniques mainly including
network pruning [2, 3], model quantization [34, 35] and knowledge distillation
[20, 21, 39, 40, 36] are proposed for training lightweight deep models. Among
compressing methods, knowledge distillation, which carries out knowledge
transfer from a high-capacity teacher network to a low-capacity student one,
has received increasing interest recently since it was first introduced in
[20].
In knowledge distillation, the type of knowledge, the distillation strategy
and the teacher-student architecture are three crucial factors that determine
the KD performance [1]. As pointed out in [1], there are three kinds of
knowledge, i.e., the response-based, the feature-based and the relation-based
knowledge. Generally, most of KD methods distill the response-based knowledge
(e.g., soft logits of the output layer) from a large teacher network and
transfer it to a small student [20, 32, 22]. To overcome the limitation of
knowledge from the output layer of teacher, the feature-based knowledge from
the middle layers of teacher is also used to train the student [21, 18, 19].
Unlike both the response-based and the feature-based knowledge from individual
instances, the relation-based knowledge from instance relations is modelled
for improving student learning [37, 38, 15, 16, 17]. Each kind of knowledge
can provide student training with an informative teacher guidance, and they
can also compensate each other to enrich learning. However, most existing KD
methods only consider either knowledge from individual instance features to
maintain instance consistency between teacher and student or knowledge from
instance relations to preserve the instance correlation consistency. Only a
few works consider more than one kind of knowledge in knowledge distillation
at the same time [37, 14] and explore the efficacy of each kind of knowledge.
Transferring different types of knowledge can be implemented with different
distillation methods, e.g., offline distillation, online distillation and
self-distillation [1]. Most of the KD methods employ offline distillation,
which is one-way knowledge transfer from a pre-trained large teacher to a
small student [20, 22, 13]. In offline distillation, the capacity gap caused
by a fixed teacher-student architecture and the requirement of a large dataset
for pre-training the teacher often result in a degraded performance [22].
Thus, finding a proper teacher-student architecture in offline distillation is
challenging. In contrast, online distillation provides a one-phase end-to-end
training scheme via teacher-student collaborative learning on a peer-network
architecture instead of a fixed one [25, 33, 36, 32, 28, 12]. Self-
distillation performs online distillation within the same network to reduce
model over-fitting [23, 24]. Online distillation and self-distillation are
promising methods for knowledge distillation as they bridge the capacity gap
via avoiding the need of a large teacher network, leading to an improved
performance. However, both KD methods used individually are limited to
knowledge distillation from a single source, i.e., individual instances;
moreover, online distillation can suffer from poor instance consistency
between peer networks caused by the discrepancy in their network outputs.
Figure 1: The overview diagram of CTSL-MKT.
Consequently, it is desirable to have a unified framework that can integrate
the advantages of different KD methods and make efficient use of different
types of knowledge. Inspired by the idea of knowledge distillation via
multiple distillation strategies to transfer more than one types of knowledge,
we propose a collaborative teacher-student learning via multiple knowledge
transfer (CTSL-MKT), which fuses self-distillation and online distillation in
such a way that the former transfers the response-based knowledge within each
peer network and the latter bidirectionally transfers both the response-based
knowledge and the relation-based knowledge between peer networks. CTSL-MKT can
overcome the aforementioned issues faced by existing KD methods that often use
only one distillation strategy to transfer a single type of knowledge. The
overview framework of CTSL-MKT is illustrated in Figure 1. To our knowledge,
this is the first framework that integrates different distillation strategies
together to transfer more than one type of knowledge simultaneously.
In CTSL-MKT, each peer network conducts self-learning via self-distillation.
Meanwhile, they carry out teacher-student collaborative learning to mutually
teach each other. CTSL-MKT can also adopt a variety of peer network
architectures, where the two peer networks can either share the same network
architecture or have different ones. We believe that multiple knowledge
transfer can provide much more informative knowledge to guide each peer
network so that they can obtain better performance with a better
generalization ability. We conduct a set of image classification experiments
on four commonly-used datasets i.e., CIFAR-10, CIFAR-100, Tiny-ImageNet, and
Market-1501. Experimental results demonstrate the superior performance of the
proposed CTSL-MKT over the state-of-the-art KD methods. The main contributions
of our work can be summarized as follows:
* •
A new teacher-student mutual learning framework effectively fuses the
knowledge from individual instances and the knowledge from instance
relationships.
* •
A self-learning enhanced collaborative learning integrates the advantages of
both self-learning and online learning.
* •
Extensive experiments on a variety of peer teacher-student networks compare
CTSL-MKT with the state-of-the-art methods and validate its effectiveness in
image classification tasks.
* •
A set of ablation studies over different combinations of knowledge and
distillation methods provides insights into how multiple knowledge transfer
contributes to knowledge distillation.
## 2 Related Work
### 2.1 Self-Distillation
Self-distillation is a novel training scheme for knowledge transfer [23, 24,
27, 11, 10]. In self-distillation, the teacher and student networks are
identical and knowledge transfer is carried out within the same network. Yuan
et al. empirically analyzed the performance of normal, reversed and defective
KD methods, and showed that a weak teacher can strengthen the student and
vice-versa [23]. Their teacher-free knowledge distillation method (Tf-KD)
instead makes the student model conduct self-learning. To enhance the
generalization and
overcome over-fitting, class-wise self-knowledge distillation makes use of
soft logits of different intra-class samples within a model [27]. Phuong and
Lampert [11] proposed a distillation-based training method to reduce time
complexity, where the output of later exit layer supervises the early exit
layer via knowledge transfer. Rather than at the layer level, snapshot
distillation [10] transfers knowledge from earlier to later epochs while
training a deep model.
Overall, self-distillation can overcome the issue of over-fitting and the
capacity gap on the teacher-student architectures, improve the generalization
ability and reduce the inference time of a deep model. However, the self-
distillation performance could be limited by the one-sided response-based
knowledge from model itself. To further improve knowledge distillation, we
integrate both online and self-distillation into CTSL-MKT with more
informative relation-based knowledge.
### 2.2 Collaborative Learning
Recently, many new online distillation methods have emerged that train a teacher
and a student simultaneously during knowledge transfer. Collaborative learning
is the one used most often [25, 29, 32, 8, 9, 7], where the teacher and the
student as peer networks collaboratively teach and learn from each other, and
the peer network architectures can be different. In particular, Zhang et al.
[25] proposed a deep mutual learning method (DML) for online distillation
using the response-based knowledge. DML uses an ensemble of soft logits as
knowledge and transfers it among arbitrary peer networks via collaborative
learning [9]. Yao and Sun [8] further extended DML with dense cross-layer
mutual-distillation, which learns both the teacher and the student
collaboratively from scratch.
Unlike the ensemble of peer networks, the advantage of a mutual knowledge
distillation method is that it can fuse features of peer networks to
collaboratively learn a powerful classifier [32]. However, the knowledge
distilled by those online mutual distillation methods is limited to the
response-based knowledge from individual instance features. In contrast, our
work can further make use of the relation-based knowledge from instance
relationships to further enrich the transferred knowledge.
### 2.3 Structural Knowledge
Most knowledge distillation approaches adopt the output logits of a deep
model from individual samples as knowledge and make the logits of the teacher
and student match each other. However, such response-based knowledge ignores
the structural knowledge from the mutual relations of data examples, known as
relation-based knowledge. In recent years, several knowledge distillation
methods based on structural relations of data samples have been proposed
[26, 30, 31, 6, 5, 4]. Park et al. [26] proposed a relational knowledge
distillation method (RKD), which transfers the instance relation knowledge
from a teacher to a student. Chen et al. [4] borrowed the idea of manifold
learning to design a novel knowledge distillation method, in which the student
preserves the feature embedding similarities of samples from the teacher. Peng
et al. [31] designed a knowledge transfer method that makes sure the student
matches the instance correlation consistently with the teacher.
However, those structural knowledge distillation methods often ignore the
knowledge directly from individual samples. Our proposed CTSL-MKT instead
considers the knowledge from both individual instances and instance
relationships, and the bidirectional knowledge transfer is carried out between
peer networks via collaborative learning.
## 3 The Proposed CTSL-MKT Method
Figure 2: The framework of CTSL-MKT with two peer networks. Note that the
losses $L_{KL}(\emph{{p}}_{1},\emph{{p}}_{2})$ and
$L_{KL}(\emph{{p}}_{2},\emph{{p}}_{1})$ are for mutual learning via the
response-based knowledge transfer, $L_{RD}$ for mutual learning via the
relation-based knowledge transfer,
$L_{SD}^{k}(\emph{{p}}_{k}^{t},\bar{\emph{{p}}}_{k}^{t})$ for self-learning
via the response-based knowledge transfer.
CTSL-MKT unifies student self-learning and teacher-student mutual learning
under one framework in such a way that it can utilise multiple types of
knowledge and more than one distillation method during the teacher-student
learning. The teacher-student architectures used in CTSL-MKT are peer
networks, such as ResNet [41] and MobilenetV2 [42]. Different from previous
works, CTSL-MKT distills both the response-based knowledge from individual
instance features and the relation-based knowledge from instance
relationships. During teacher-student mutual learning, peer networks trained
collaboratively can teach each other via online distillation with the two
kinds of knowledge. Meanwhile, each peer network can also self-learn via self-
distillation with the response-based knowledge. The two learning processes
working together complement each other to explore different knowledge spaces
and enhance the learning. Our CTSL-MKT can be seen as a new model compression
technique that generalises the existing self-distillation and online
distillation methods, enables fast computation, and improves the
generalization ability. The overall framework of CTSL-MKT with two peer
networks is shown in Figure 2 as an example, and notations are summarized in
Table 1.
Notations | Descriptions
---|---
$X=\\{x_{1},x_{2},\cdots,x_{n}\\}$ | $n$ input samples from $m$ classes
$\emph{{y}}=\\{y^{1},y^{2},\cdots,y^{m}\\}$ | the one-hot label vector for $x\in X$
$\emph{{z}}_{k}(x)=\\{z_{k}^{1},z_{k}^{2},\ldots,z_{k}^{m}\\}$ | logits of a network $N_{k}$ for $x\in X$, where $z_{k}^{i}$ is the logit for class $i$
$\sigma_{i}(\emph{{z}}_{k}(x),t)$ | softmax function with temperature $t$
$p_{k}^{it}=\sigma_{i}(\emph{{z}}_{k}(x),t)$ | output of softmax for $z_{k}^{i}$
$\emph{{p}}_{k}^{t}=\\{p_{k}^{1t},p_{k}^{2t},\ldots,p_{k}^{mt}\\}$ | predictions of $N_{k}$ with temperature $t$
$\emph{{p}}_{k}=\\{p_{k}^{1},p_{k}^{2},\ldots,p_{k}^{m}\\}$ | predictions of $N_{k}$ when $t=1$
$f(s_{1}^{k},s_{2}^{k},\cdots,s_{n}^{k})$ | similarity loss of $n$ samples in $N_{k}$
Table 1: Notations used in CTSL-MKT.
### 3.1 Teacher-Student Mutual Learning
Teacher-student mutual learning contains the response-based knowledge transfer
and the relation-based knowledge transfer among peer network architectures.
Response-Based Knowledge Transfer: The response-based knowledge (i.e., the
output of a peer network) is learned from individual instances. Given a peer
network $N_{k}$ and its output $\emph{{p}}_{k}$ with temperature parameter
$t=1$, the collaborative response-based knowledge transfer makes the student
network $N_{k}$ imitate the teacher network $N_{k^{\prime}}$ ($k\neq
k^{\prime}$) with the following Kullback-Leibler (KL) divergence loss,
$\small L_{KL}(\emph{{p}}_{k},\emph{{p}}_{k^{\prime}})=\sum_{x\in
X}\sum_{i=1}^{m}\sigma_{i}(\emph{{z}}_{k^{\prime}}(x),1)log\frac{\sigma_{i}(\emph{{z}}_{k^{\prime}}(x),1)}{\sigma_{i}(\emph{{z}}_{k}(x),1)}.$
(1)
Similarly, the loss that the student network $N_{k^{\prime}}$ uses to learn
from the teacher network $N_{k}$ is
$L_{KL}(\emph{{p}}_{k^{\prime}},\emph{{p}}_{k})$.
During the collaborative learning for a classification task, each peer network
$N_{k}$ will then be trained with both the KL divergence loss (Eq (1)) and the
cross-entropy (CE) loss (Eq (2)).
$L_{CE}(\emph{{y}},\emph{{p}}_{k})=-\sum_{x\in
X}\sum_{i=1}^{m}y^{i}log(\sigma_{i}(\emph{{z}}_{k}(x),1))~{}.$ (2)
Taking the two peer networks in Figure 2 as an example, the losses used to train
$N_{1}$ and $N_{2}$ will be
$L_{CE}(\emph{{y}},\emph{{p}}_{1})+L_{KL}(\emph{{p}}_{1},\emph{{p}}_{2})$ and
$L_{CE}(\emph{{y}},\emph{{p}}_{2})+L_{KL}(\emph{{p}}_{2},\emph{{p}}_{1})$,
respectively.
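For concreteness, the per-network loss above can be sketched in a few lines of PyTorch. Treating the peer's prediction as a fixed target within each update (via detach) mirrors how mutual learning alternates updates between the two networks; this is an implementation choice rather than something Eq. (1) prescribes.

```python
import torch
import torch.nn.functional as F

def mutual_kl_loss(logits_k, logits_kp):
    """L_KL(p_k, p_k') of Eq. (1) at temperature t = 1: N_k imitates peer N_k'."""
    log_p_k = F.log_softmax(logits_k, dim=1)
    p_kp = F.softmax(logits_kp, dim=1).detach()   # peer output as a constant target
    return F.kl_div(log_p_k, p_kp, reduction="batchmean")

def peer_loss(logits_k, logits_kp, labels):
    """Training loss for N_k in mutual learning: Eq. (2) plus Eq. (1)."""
    return F.cross_entropy(logits_k, labels) + mutual_kl_loss(logits_k, logits_kp)
```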
Relation-Based Knowledge Transfer: CTSL-MKT further integrates the relation-
based knowledge learned from the instance relationships via the teacher-
student mutual learning in order to enrich the transferred knowledge and
enhance the teacher guidance. Let $s_{j}^{k}=\phi_{k}(x_{j})$ (where
$\phi_{k}(.)$ is a feature mapping function of $N_{k}$) be the output of any
layer of the network $N_{k}$ for $x_{j}$, and $\chi^{\tau}$ denote a set of
$\tau$-tuples of different samples. A set of $2$-tuples and a set of
$3$-tuples thus correspond to $\chi^{2}=\left\\{(x_{u},x_{v})|u\neq
v\right\\}$ and $\chi^{3}=\left\\{(x_{u},x_{v},x_{w})|u\neq v\neq w\right\\}$,
respectively. As in [26], the relation-based knowledge learned by the network
$N_{k}$ can be modelled jointly by a distance-wise function and an angle-wise
function.
Given $N_{k}$, the distance-wise function captures the similarities between
two samples in a $2$-tuple, which is defined as
$f(s_{u}^{k},s_{v}^{k})=\frac{1}{\pi}||s_{u}^{k}-s_{v}^{k}||_{2}~{},$ (3)
where
$\pi=\frac{1}{|\chi^{2}|}\sum_{(x_{u},x_{v})\in\chi^{2}}||s_{u}^{k}-s_{v}^{k}||_{2}$
is a normalization constant. Accordingly, the instance relationships between
any two peer networks $N_{k}$ and $N_{k^{\prime}}$ are transferred by the
following distance-wise distillation loss
$L_{DD}(x_{u},x_{v})=\sum_{(x_{u},x_{v})\in\chi^{2}}R\big{(}f(s_{u}^{k},s_{v}^{k}),f(s_{u}^{k^{\prime}},s_{v}^{k^{\prime}})\big{)}~{},$
(4)
where $R(.)$ is Huber loss that reflects instance relationships and is defined
as
$R(a,b)=\left\\{\begin{array}[]{lr}\frac{1}{2}(a-b)^{2},\quad if~{}|a-b|\leq
1&\\\ |a-b|-\frac{1}{2},\quad otherwise&\end{array}\right.~{}.$ (5)
Furthermore, the similarities between samples in a $3$-tuple are measured by
an angle-wise function
$f(s_{u}^{k},s_{v}^{k},s_{w}^{k})=\cos\angle
s_{u}^{k}s_{v}^{k}s_{w}^{k}=<e^{uv},e^{wv}>~{},$ (6)
where $e^{uv}=\frac{s_{u}^{k}-s_{v}^{k}}{||s_{u}^{k}-s_{v}^{k}||_{2}}$ and
$e^{wv}=\frac{s_{w}^{k}-s_{v}^{k}}{||s_{w}^{k}-s_{v}^{k}||_{2}}$. The instance
relationships are transferred between any two peer networks $N_{k}$ and
$N_{k^{\prime}}$ with the angle-wise distillation loss, defined as
$\displaystyle L_{AD}(x_{u},x_{v},x_{w})$ (7) $\displaystyle=$
$\displaystyle\sum_{(x_{u},x_{v},x_{w})\in\chi^{3}}R\big{(}f(s_{u}^{k},s_{v}^{k},s_{w}^{k}),f(s_{u}^{k^{\prime}},s_{v}^{k^{\prime}},s_{w}^{k^{\prime}})\big{)}~{}.$
It has been shown that the relation-based knowledge transfer can be more
effective if the distance-wise function is used jointly with the angle-wise
function [26], as they capture different degrees of similarities between
samples. We formulate the instance relation distillation loss used in the
collaborative learning between peer networks as
$L_{RD}=L_{DD}(x_{u},x_{v})+\beta_{1}L_{AD}(x_{u},x_{v},x_{w})~{},$ (8)
where $\beta_{1}$ is a tuning parameter that controls the balance between loss
terms.
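The distance-wise and angle-wise potentials of Eqs. (3)-(8) translate into a short PyTorch sketch over a batch of embeddings. For simplicity it evaluates the potentials over all tuples, including degenerate ones with repeated indices, whereas the definitions range over distinct samples; smooth_l1_loss is PyTorch's Huber loss with threshold 1, matching Eq. (5).

```python
import torch
import torch.nn.functional as F

def pairwise_dist(e):
    """Normalized distance-wise potentials f(s_u, s_v) of Eq. (3); e is [n, d]."""
    d = torch.cdist(e, e, p=2)
    return d / d[d > 0].mean()            # divide by the mean over 2-tuples

def angle_potentials(e):
    """Angle-wise potentials cos angle(s_u, s_v, s_w) of Eq. (6), v the vertex."""
    diff = e.unsqueeze(0) - e.unsqueeze(1)        # diff[v, u] = s_u - s_v
    unit = F.normalize(diff, p=2, dim=2)          # unit[v, u] = e^{uv}
    return torch.bmm(unit, unit.transpose(1, 2))  # [v, u, w] = <e^{uv}, e^{wv}>

def relation_loss(e_k, e_kp, beta1=2.0):
    """L_RD of Eq. (8) between the embeddings of two peer networks."""
    l_dd = F.smooth_l1_loss(pairwise_dist(e_k), pairwise_dist(e_kp))
    l_ad = F.smooth_l1_loss(angle_potentials(e_k), angle_potentials(e_kp))
    return l_dd + beta1 * l_ad
```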
Consequently, the mutual distillation loss with both the response-based and
the relation-based knowledge between two peer networks ($N_{k}$ and
$N_{k^{\prime}}$) is defined as: for network $N_{k}$, we have
$L_{MD}^{k}=L_{RD}+\beta_{2}L_{KL}(\emph{{p}}_{k},\emph{{p}}_{k^{\prime}})~{},$
(9)
where $\beta_{2}$ is a tuning parameter; for network $N_{k^{\prime}}$, we have
$L_{MD}^{k^{\prime}}=L_{RD}+\beta_{2}L_{KL}(\emph{{p}}_{k^{\prime}},\emph{{p}}_{k})~{}.$
(10)
Algorithm 1 The proposed CTSL-MKT
0: Input samples $X$ with labels, learning rate $\eta$, hyperparameters
$\alpha$, $\beta$, $\gamma$, $\beta_{1}$ and $\beta_{2}$.
1: Initialize: Initialize peer networks $N_{1}$ and $N_{2}$ to different
conditions.
2: Stage 1: Pre-train $N_{1}$ and $N_{2}$ for use in the self-learning process.
3: for k=1 to 2 do
4: Repeat:
5: Compute stochastic gradient of $L_{CE}$ in Eq. (2) and update $N_{k}$:
6: $N_{k}\leftarrow N_{k}-\eta\frac{\partial L_{CE}}{\partial N_{k}}$.
7: Until: $L_{CE}$ converges.
8: end for
9: Stage 2: Train $N_{1}$ and $N_{2}$ collaboratively.
10: Repeat:
11: for k=1 to 2 do
12: Compute stochastic gradient of $L_{KD}^{k}$ in Eq. (12) and update
$N_{k}$:
13: $N_{k}\leftarrow N_{k}-\eta\frac{\partial L_{KD}^{k}}{\partial N_{k}}$.
14: end for
15: Until: $L_{KD}^{k}$ converges.
16: return _$N_{k}$_.
### 3.2 Student Self-learning
During the collaborative learning between peer networks, if the outputs of
peer networks are very diverse, the mutual knowledge transfer could become
poor. Since the self-learning via self-distillation can improve the power of
knowledge transfer [23], CTSL-MKT further introduces the self-learning of each
peer network into collaborative learning via the response-based knowledge
self-distillation. To conduct self-learning for each peer network $N_{k}$, we
use the outputs $\bar{\emph{{p}}}_{k}^{t}$ of the pre-trained network $N_{k}$
to supervise itself with the following self-distillation loss:
$\small L_{SD}^{k}(\emph{{p}}_{k}^{t},\bar{\emph{{p}}}_{k}^{t})=\sum_{x\in
X}\sum_{i=1}^{m}\sigma_{i}^{t}(\bar{\emph{{z}}}_{k}(x),t)log\frac{\sigma_{i}^{t}(\bar{\emph{{z}}}_{k}(x),t)}{\sigma_{i}^{t}(\emph{{z}}_{k}(x),t)}~{}.$
(11)
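A PyTorch sketch of Eq. (11) follows; pre_logits are the outputs of the frozen pre-trained copy of $N_{k}$ from Stage 1. The $t^{2}$ rescaling is the usual convention for temperature-scaled distillation and is an assumption here, since Eq. (11) does not state it.

```python
import torch.nn.functional as F

def self_distill_loss(logits, pre_logits, t):
    """L_SD^k of Eq. (11): the pre-trained network's softened outputs supervise
    the current network at temperature t."""
    log_p = F.log_softmax(logits / t, dim=1)
    p_bar = F.softmax(pre_logits.detach() / t, dim=1)  # frozen teacher-of-itself
    return F.kl_div(log_p, p_bar, reduction="batchmean") * (t * t)
```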
### 3.3 The CTSL-MKT Algorithm
Finally, CTSL-MKT conducts mutual learning and self-learning simultaneously in
a unified framework, as shown in Algorithm 1. Its objective function for each
peer network is defined as
$L_{KD}^{k}=\alpha L_{CE}^{k}+\beta L_{MD}^{k}+\gamma L_{SD}^{k}~{},$ (12)
where $\alpha$, $\beta$ and $\gamma$ are the tuning parameters, which balance
the contribution of each loss in the collaborative learning, $L_{CE}^{k}$ is
defined in Eq. (2), $L_{MD}^{k}$ for two peer networks in Eqs. (9) or (10),
and $L_{SD}^{k}$ in Eq. (11).
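Putting the pieces together, one training step for peer $N_{k}$ can be sketched by combining the losses above (reusing mutual_kl_loss, relation_loss and self_distill_loss from the earlier sketches); the helper below only assembles Eq. (12).

```python
import torch.nn.functional as F

def total_loss_k(logits_k, logits_kp, emb_k, emb_kp, pre_logits_k, labels,
                 alpha, beta, gamma, beta1=2.0, beta2=2.0, t=3.0):
    """L_KD^k of Eq. (12): weighted sum of Eqs. (2), (9)/(10) and (11) for N_k."""
    l_ce = F.cross_entropy(logits_k, labels)                      # Eq. (2)
    l_md = relation_loss(emb_k, emb_kp, beta1) \
           + beta2 * mutual_kl_loss(logits_k, logits_kp)          # Eq. (9)/(10)
    l_sd = self_distill_loss(logits_k, pre_logits_k, t)           # Eq. (11)
    return alpha * l_ce + beta * l_md + gamma * l_sd
```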
## 4 Experiments
Network | Parameter size (CIFAR-100) | B_10 | B_100 | B_Tiny
---|---|---|---|---
ResNet14 | 6.50M | 94.94 | 76.12 | -
ResNet18 | 11.22M | 95.13 | 75.77 | 62.90
ResNet34 | 21.33M | 95.39 | 77.66 | -
VGG19 | 139.99M | 92.83 | 69.42 | -
MobileNetV2 | 2.37M | 90.97 | 68.23 | -
ShuffleNetV2 | 1.36M | 91.03 | 70.10 | -
AlexNet | 57.41M | - | - | 50.25
SqueezeNet | 0.77M | - | - | 43.68
Table 2: The parameter size of each peer network on CIFAR-100 and its
classification performance on three datasets. Note that B_10, B_100, and
B_Tiny denote Top-1 accuracy (%) achieved by each peer network on CIFAR-10,
CIFAR-100 and Tiny-ImageNet, respectively.
We conducted extensive experiments to verify the effectiveness of CTSL-MKT on
image classification tasks using datasets including CIFAR-10 [47], CIFAR-100
[47], Tiny-ImageNet [48] and Market-1501 [49]. The peer network architectures
were chosen from ResNet [41], MobileNet [42], ShuffleNet [43], VGG [45],
AlexNet [44] and SqueezeNet [46]. CTSL-MKT was compared to the state-of-the-
art KD methods, which are DML [25], Tf-KD [23] and RKD [26]. For a fair
comparison, RKD uses online distillation with the peer networks. In all the
experiments, the relation-based knowledge was modelled by the final feature
embedding outputs by the peer networks.
### 4.1 Datasets and Settings
CIFAR-10 and CIFAR-100. Both datasets have 60,000 $32\times 32$ images, where
50,000 images are for training and the other 10,000 images are for testing.
The number of classes is 10 for CIFAR-10 and 100 for CIFAR-100, and each class
has the same numbers of samples in both the training and the testing sets. On
each dataset, data augmentation with random crops and horizontal flips was
used to change the zero-padded $40\times 40$ images to $32\times 32$ ones. The
peer networks were trained for 200 epochs with batch size 128 and initial
learning rate 0.1 which is then multiplied by 0.2 at 60, 120, and 160 epochs.
Temperature parameter was set to 10 for CIFAR-10 and 3 for CIFAR-100.
Tiny-ImageNet. Tiny-ImageNet contains 100,000 training and 10,000 testing
$64\times 64$ images from 200 classes, each of which has the same number of
samples. Each image was randomly resized to $224\times 224$. The peer networks
were trained for 90 epochs with batch size 64 and initial learning rate 0.1
which is then multiplied by 0.1 at 30, 60, 80 epochs. Temperature parameter
was set to 2.
Market-1501. Market-1501 includes 32,688 images taken from 1,501 identities
under condition of six camera views. It has 751 identities for training and
750 ones for testing. For data augmentation, each image was zero-padded by 10
pixels on each side and then randomly cropped back to $256\times 128$,
together with random horizontal flips. The peer networks were
trained for 60 epochs with batch size 32 and initial learning rate 0.05 which
is then multiplied by 0.2 at 40 epochs. Temperature parameter was set to 6.
We used the SGD optimizer for training the peer networks with momentum 0.9 and
weight decay 5e-4, and all input images were normalized by Mean-Std
normalization. All the hyper-parameters were greedily searched and set as
follows. The hyper-parameters used on CIFAR and Tiny-ImageNet were set as
$\alpha=0.1$, $\beta=0.05$ and $\gamma=0.9$ for MobileNet and ShuffleNet, and
$\alpha=0.4$, $\beta=0.4$ and $\gamma=0.6$ for the other networks. On
Market-1501, the hyper-parameters were set as $\alpha=1$, $\beta=0.9$ and
$\gamma=1$ for all the networks. Besides, both $\beta_{1}$ and $\beta_{2}$
were set to 2. In all the experiments, we considered a pair of peer networks
that have the same architecture or different architectures. Table 2 shows the
parameter size of each network on CIFAR-100 and its top-1 accuracy on the two
CIFAR datasets and the Tiny-ImageNet dataset, which serves as a baseline.
### 4.2 Results on CIFAR-10
Table 3 reports the average Top-1 accuracy of CTSL-MKT and the state-of-the-
art competitors on CIFAR-10. It is not surprising that those knowledge
distillation methods perform better than the corresponding single network due
to the knowledge transfer, except for DML and RKD with ResNet14 and ResNet18.
The possible reason for the slightly poor performance of DML and RKD with
ResNet14 or ResNet18 could be that the discrepancies between outputs of small
peer networks for individual instances hinder mutual learning. Among those
knowledge distillation methods, CTSL-MKT performs the best with a significant
improvement, which indicates that our idea of collaborative learning with
multiple knowledge transfer is effective. Meanwhile, CTSL-MKT outperforms all
the corresponding baselines shown in Table 2 with a noticeable margin. For
example, CTSL-MKT with ShuffleNetV2-MobileNetV2 has increased the Top-1
accuracy by 1.68% and 1.18%, compared with the corresponding single network
baselines, i.e., ShuffleNetV2 and MobileNetV2 respectively. Moreover, although
CTSL-MKT, DML, and RKD collaboratively learn the two peer networks, which can
have the same network structures or different ones, each peer network in CTSL-
MKT (i.e., CTSL-MKT_$N_{1}$ or CTSL-MKT_$N_{2}$) performs much better than its
counterpart in DML and RKD, due to the multiple knowledge transfer.
Network $N_{1}$ | Network $N_{2}$ | Tf-KD | DML_$N_{1}$ | DML_$N_{2}$ | RKD_$N_{1}$ | RKD_$N_{2}$ | CTSL-MKT_$N_{1}$ | CTSL-MKT_$N_{2}$
---|---|---|---|---|---|---|---|---
ResNet14 | ResNet14 | 95.08$\pm$0.01 | 94.78$\pm$0.02 | 94.92$\pm$0.01 | 94.95$\pm$0.01 | 94.83$\pm$0.02 | 95.28$\pm$0.01 | 95.22$\pm$0.01
ResNet18 | ResNet18 | 95.20$\pm$0.01 | 94.88$\pm$0.01 | 94.99$\pm$0.01 | 94.98$\pm$0.04 | 94.92$\pm$0.01 | 95.29$\pm$0.04 | 95.33$\pm$0.03
ResNet34 | ResNet34 | 95.41$\pm$0.01 | 95.42$\pm$0.01 | 95.32$\pm$0.01 | 95.45$\pm$0.01 | 95.45$\pm$0.01 | 95.69$\pm$0.03 | 95.59$\pm$0.01
MobileNetV2 | MobileNetV2 | 91.72$\pm$0.01 | 91.19$\pm$0.07 | 91.32$\pm$0.04 | 91.12$\pm$0.03 | 90.71$\pm$0.06 | 92.12$\pm$0.02 | 92.12$\pm$0.02
ShuffleNetV2 | ShuffleNetV2 | 92.47$\pm$0.01 | 91.97$\pm$0.03 | 91.92$\pm$0.01 | 92.08$\pm$0.01 | 91.59$\pm$0.01 | 92.64$\pm$0.01 | 92.49$\pm$0.01
ResNet18 | ResNet34 | - | 95.09$\pm$0.01 | 95.41$\pm$0.03 | 95.12$\pm$0.01 | 95.31$\pm$0.01 | 95.24$\pm$0.01 | 95.60$\pm$0.01
ResNet18 | VGG19 | - | 95.11$\pm$0.03 | 93.49$\pm$0.02 | 95.03$\pm$0.01 | 93.50$\pm$0.01 | 95.16$\pm$0.01 | 93.91$\pm$0.01
ShuffleNetV2 | MobileNetV2 | - | 91.78$\pm$0.03 | 91.25$\pm$0.08 | 91.73$\pm$0.01 | 90.72$\pm$0.01 | 92.71$\pm$0.01 | 92.15$\pm$0.01
Table 3: The average Top-1 accuracy (%) over three individual runs on
CIFAR-10.
### 4.3 Results on CIFAR-100
Table 4 reports the average Top-1 accuracy of all the competing knowledge
distillation methods with various network architectures on CIFAR-100. We have
similar observations as those on CIFAR-10. Overall, each competing method
improves on the performance of the corresponding baseline, and CTSL-MKT gains
the largest improvement. Compared to Tf-KD, DML and RKD, the Top-1 accuracy of
each peer network in CTSL-MKT has been improved by about 1% on average. For
example, the accuracy of the two MobileNetV2 networks in CTSL-MKT has been
increased by 2.46% and 2.68% respectively, compared to those in DML, and
by 3.08% and 2.96% compared to those in RKD.
Network $N_{1}$ | Network $N_{2}$ | Tf-KD | DML_$N_{1}$ | DML_$N_{2}$ | RKD_$N_{1}$ | RKD_$N_{2}$ | CTSL-MKT_$N_{1}$ | CTSL-MKT_$N_{2}$
---|---|---|---|---|---|---|---|---
ResNet14 | ResNet14 | 76.67$\pm$0.02 | 75.97$\pm$0.01 | 76.16$\pm$0.11 | 76.36$\pm$0.03 | 76.30$\pm$0.01 | 77.00$\pm$0.05 | 76.85$\pm$0.04
ResNet18 | ResNet18 | 77.04$\pm$0.12 | 76.10$\pm$0.10 | 76.27$\pm$0.07 | 76.43$\pm$0.01 | 76.09$\pm$0.01 | 77.43$\pm$0.10 | 77.46$\pm$0.01
ResNet34 | ResNet34 | 77.93$\pm$0.01 | 77.88$\pm$0.12 | 77.61$\pm$0.03 | 77.63$\pm$0.03 | 77.65$\pm$0.05 | 78.58$\pm$0.01 | 78.24$\pm$0.02
MobileNetV2 | MobileNetV2 | 70.82$\pm$0.02 | 68.98$\pm$0.01 | 68.58$\pm$0.18 | 68.36$\pm$0.01 | 68.30$\pm$0.01 | 71.44$\pm$0.06 | 71.26$\pm$0.08
ShuffleNetV2 | ShuffleNetV2 | 71.79$\pm$0.02 | 70.47$\pm$0.15 | 70.29$\pm$0.04 | 70.24$\pm$0.01 | 69.98$\pm$0.03 | 72.13$\pm$0.02 | 71.69$\pm$0.05
ResNet18 | ResNet34 | - | 76.15$\pm$0.10 | 77.71$\pm$0.01 | 76.41$\pm$0.05 | 77.83$\pm$0.01 | 77.61$\pm$0.08 | 78.15$\pm$0.12
ResNet18 | VGG19 | - | 76.51$\pm$0.02 | 68.80$\pm$3.74 | 76.29$\pm$0.02 | 68.28$\pm$0.87 | 77.23$\pm$0.02 | 72.72$\pm$0.06
ShuffleNetV2 | MobileNetV2 | - | 70.47$\pm$0.13 | 68.83$\pm$0.14 | 70.50$\pm$0.28 | 67.87$\pm$0.01 | 72.46$\pm$0.15 | 71.34$\pm$0.09
Table 4: The average Top-1 accuracy (%) over three individual runs on
CIFAR-100.
(a) ShuffleNetV2 (b) MobileNetV2
Figure 3: The values of training loss over epochs on the CIFAR-100 training
dataset.
(a) ShuffleNetV2 (b) MobileNetV2
Figure 4: The values of Top-1 accuracy over epochs on the CIFAR-100 testing
dataset.
To further illustrate the learning process of the peer networks in CTSL-MKT,
Figure 3 plots the training loss of ShuffleNetV2 and MobileNetV2 as a function
of epochs, compared to Tf-KD, DML and RKD. It shows that CTSL-MKT and Tf-KD
converge better than DML and RKD. The possible reason is that each network can
self-learn in CTSL-MKT and Tf-KD to overcome the discrepancy in the outputs of
peer networks in DML and RKD during learning. Although CTSL-MKT with multiple
knowledge transfer introduces extra hyper-parameters, it can still converge
faster, achieving comparable training loss with Tf-KD. The loss becomes stable
around 120 epochs in general. Furthermore, Figure 4 displays the corresponding
Top-1 accuracy of each peer network after each epoch on the testing dataset.
It shows that the proposed CTSL-MKT outperforms the others after
convergence, and its performance improves as the training loss decreases.
Overall, the patterns show that the two peer networks in CTSL-MKT can work
collaboratively, via teaching and learning from each other at each epoch, and
each network gradually improves itself to achieve a better performance.
### 4.4 Results on Tiny-ImageNet
Table 5 shows the average Top-1 accuracy of the competing methods with five
various peer network architectures on Tiny-ImageNet. From the comparative
results, it can be seen that CTSL-MKT significantly outperforms the baselines,
DML, RKD and Tf-KD. However, on these five peer network architectures, some
peer networks in DML, RKD and Tf-KD achieve poor performance, compared to
their baselines. The possible reason is that the peer networks used here are
smaller and carry less informative knowledge for knowledge transfer, and the
outputs of peer networks for the same individual instances might be different,
which degrades the effectiveness of mutual learning. With multiple kinds of
knowledge and distillation strategies, our CTSL-MKT can well improve the
performance via mutual learning.
Network $N_{1}$ | Network $N_{2}$ | Tf-KD | DML_$N_{1}$ | DML_$N_{2}$ | RKD_$N_{1}$ | RKD_$N_{2}$ | CTSL-MKT_$N_{1}$ | CTSL-MKT_$N_{2}$
---|---|---|---|---|---|---|---|---
ResNet18 | ResNet18 | 63.29$\pm$0.02 | 62.30$\pm$0.01 | 62.39$\pm$0.03 | 62.80$\pm$0.01 | 62.42$\pm$0.08 | 63.63$\pm$0.08 | 63.64$\pm$0.02
AlexNet | AlexNet | 49.78$\pm$0.01 | 44.47$\pm$0.01 | 44.80$\pm$0.01 | 43.54$\pm$0.01 | 42.97$\pm$0.01 | 51.39$\pm$0.01 | 51.28$\pm$0.01
SqueezeNet | SqueezeNet | 41.66$\pm$0.01 | 47.16$\pm$0.03 | 46.95$\pm$0.19 | 48.22$\pm$0.01 | 48.55$\pm$0.09 | 48.60$\pm$0.30 | 48.86$\pm$0.03
AlexNet | SqueezeNet | - | 44.35$\pm$0.51 | 46.15$\pm$0.30 | 44.66$\pm$1.87 | 46.86$\pm$0.41 | 50.98$\pm$0.08 | 47.99$\pm$0.03
ResNet18 | AlexNet | - | 62.62$\pm$0.11 | 43.53$\pm$0.62 | 62.37$\pm$0.01 | 46.64$\pm$0.03 | 63.37$\pm$0.01 | 51.56$\pm$0.02
Table 5: The average Top-1 accuracy (%) over three individual runs on Tiny-
ImageNet.
### 4.5 Results on Market-1501
We further compared those methods on Market-1501, which is used for a re-
identification (re-id) task. In this set of experiments, we adopted ResNet50,
which is commonly used on this dataset, to form the peer network architecture.
Figure 5 shows the performance of Tf-KD, DML, RKD and CTSL-MKT, measured by
Rank-1, Rank-5, Rank-10 and mAP. Note that these results were computed for
only one peer network. DML and RKD with collaborative learning consistently
perform better than Tf-KD via self-learning. Our CTSL-MKT outperforms both DML
and RKD across all the metrics. Specifically, in terms of mAP, the improvement
of CTSL-MKT over DML, RKD and Tf-KD is 1.22%, 0.8% and 4.69%, respectively.
Figure 5: Comparative results (%) on Market-1501.
### 4.6 Ablation Study
CTSL-MKT contains three knowledge distillation strategies, i.e., mutual
learning via response-based knowledge transfer from individual instances
(MLI), mutual learning via relation-based knowledge transfer from instance
relationships (MLR) and self-learning via response-based knowledge transfer
from individual instances (SLI). To study how each strategy contributes to the
model performance, we consider the following four variants of CTSL-MKT:
1. _A_)
the full model using the three strategies altogether, where we used both
online distillation and self-distillation with the two kinds of knowledge;
2. _B_)
the model using online distillation only with both the response-based
knowledge (MLI) and the relation-based knowledge (MLR);
3. _C_)
the model using online distillation with the relation-based knowledge (MLR)
and self-distillation with the response-based knowledge (SLI);
4. _D_)
the model using both online distillation and self-distillation with only the
response-based knowledge, corresponding to MLI + SLI.
Table 6 reports the average Top-1 accuracy of these four variations with
different pairs of peer network architectures. We have the following
observations: 1) Variant A (i.e., the full model) outperforms the other
variants where one knowledge distillation strategy has been removed. It
implies that the use of multiple types of knowledge with both online
distillation and self-distillation plays a crucial role in the performance
gain. 2) Variant B without self-distillation has the largest performance drop,
compared with variants C and D. It indicates that self-distillation
contributes substantially to the overall performance as it could offset the
diversity issue caused by mutual learning and further enhance the knowledge
distillation efficiency. 3) DML, Tf-KD and RKD can be seen as special cases
of CTSL-MKT using only one knowledge distillation strategy. Jointly looking at
Tables 6 and 4 reveals that knowledge distillation methods with two or more
strategies almost always outperform those using only one strategy. Therefore,
it is clear that knowledge distillation via proper multiple knowledge transfer
is very beneficial for improving the performance of model compression.
Case | MLI | MLR | SLI | ResNet14 + ResNet14 ($N_{1}$ / $N_{2}$) | ResNet18 + ResNet18 ($N_{1}$ / $N_{2}$) | MobileNetV2 + MobileNetV2 ($N_{1}$ / $N_{2}$)
---|---|---|---|---|---|---
A | ✓ | ✓ | ✓ | 77.00$\pm$0.05 / 76.85$\pm$0.04 | 77.43$\pm$0.10 / 77.46$\pm$0.01 | 71.44$\pm$0.06 / 71.26$\pm$0.08
B | ✓ | ✓ | ✗ | 76.57$\pm$0.04 / 76.37$\pm$0.02 | 76.67$\pm$0.04 / 76.66$\pm$0.09 | 69.10$\pm$0.01 / 69.23$\pm$0.04
C | ✗ | ✓ | ✓ | 76.69$\pm$0.02 / 76.70$\pm$0.04 | 77.35$\pm$0.04 / 77.29$\pm$0.08 | 71.18$\pm$0.05 / 71.10$\pm$0.08
D | ✓ | ✗ | ✓ | 76.73$\pm$0.03 / 76.70$\pm$0.08 | 77.26$\pm$0.07 / 77.12$\pm$0.04 | 71.14$\pm$0.02 / 71.04$\pm$0.10
(a) Ablation experiments on the same peer network architectures
Case | MLI | MLR | SLI | ResNet14 + ResNet18 ($N_{1}$ / $N_{2}$) | ResNet18 + ResNet34 ($N_{1}$ / $N_{2}$) | ShuffleNetV2 + MobileNetV2 ($N_{1}$ / $N_{2}$)
---|---|---|---|---|---|---
A | ✓ | ✓ | ✓ | 77.07$\pm$0.03 / 77.28$\pm$0.04 | 77.61$\pm$0.08 / 78.15$\pm$0.12 | 72.46$\pm$0.15 / 71.34$\pm$0.09
B | ✓ | ✓ | ✗ | 76.35$\pm$0.01 / 76.53$\pm$0.07 | 76.53$\pm$0.02 / 77.83$\pm$0.01 | 70.57$\pm$0.03 / 68.69$\pm$0.01
C | ✗ | ✓ | ✓ | 76.69$\pm$0.23 / 77.06$\pm$0.02 | 77.12$\pm$0.01 / 77.99$\pm$0.03 | 72.06$\pm$0.04 / 71.10$\pm$0.11
D | ✓ | ✗ | ✓ | 76.68$\pm$0.03 / 77.13$\pm$0.02 | 77.39$\pm$0.01 / 77.68$\pm$0.01 | 72.20$\pm$0.09 / 71.05$\pm$0.15
(b) Ablation experiments on the different peer network architectures
Table 6: Ablation study of CTSL-MKT in terms of the average Top-1 accuracy
over three individual runs on CIFAR-100.
### 4.7 Experiment Discussion
The experimental results reported above have demonstrated the effectiveness of
the proposed CTSL-MKT, while being compared with several state-of-the-art
knowledge distillation methods. We have the following remarks:
* •
Collaborative learning can make peer networks teach and learn from each other,
and iteratively improve themselves.
* •
Self-learning of each peer network can further enhance the ability of mutual
learning among peer networks by compensating the loss caused by the diversity
issue.
* •
Multiple knowledge transfer with more than one type of knowledge and
distillation strategies can significantly improve the KD performance.
* •
Various peer network architectures (i.e., teacher-student architectures) can
be easily adopted for knowledge transfer via collaborative learning.
## 5 Conclusions
In this paper, we propose a novel knowledge distillation method called
collaborative teacher-student learning via multiple knowledge transfer (CTSL-
MKT). It naturally integrates both self-learning via self-distillation and
collaborative learning via online distillation in a unified framework, so that
multiple kinds of knowledge can be transferred effectively between
different teacher-student architectures, and CTSL-MKT can achieve individual
instance consistency and instance correlation consistency among the peer
networks. Experimental results on four image classification datasets have
demonstrated that CTSL-MKT outperforms the competitors by a noticeable margin,
which underlines the benefit of using different distillation schemes to transfer
multiple types of knowledge simultaneously. We believe that our proposed
framework opens a door to design multiple knowledge transfer for knowledge
distillation.
## References
* [1] J. Gou, B. Yu, S. J. Maybank, and D. Tao, “Knowledge Distillation: A Survey,” _arXiv:2006.05525_ , 2020.
* [2] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf, “Pruning Filters for Efficient ConvNets,” _Int. Conf. Learn. Represent._ , 2017.
* [3] S. Han, J. Pool, J. Tran, and W. J. Dally, “Learning both Weights and Connections for Efficient Neural Networks,” _Adv. Neural Inform. Process. Syst._ , 2015.
* [4] H. Chen, Y. Wang, C. Xu, C. Xu, and D. Tao, “Learning student networks via feature embedding,” _IEEE Trans. Neur. Net. Lear._ , 2020.
* [5] F. Tung, and G. Mori, “Similarity-preserving knowledge distillation,” _Int. Conf. Comput. Vis._ , 2019.
* [6] Y. Liu, C. Shu, J. Wang, and C. Shen, “Structured knowledge distillation for dense prediction,” _IEEE Trans. Pattern Anal. Mach. Intell._ , 2020.
* [7] K. Li, L. Yu, S. Wang, and P. A. Heng, “Towards Cross-Modality Medical Image Segmentation with Online Mutual Knowledge Distillation,” _AAAI_ , 2020.
* [8] A. Yao, and D. Sun, “Knowledge Transfer via Dense Cross-Layer Mutual-Distillation,” _Eur. Conf. Comput. Vis._ , 2020.
* [9] Q. Guo, X. Wang, Y. Wu, Z. Yu, D. Liang, X. Hu, and P. Luo, “Online Knowledge Distillation via Collaborative Learning,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2020.
* [10] C. Yang, L. Xie, C. Su, and A. L. Yuille, “Snapshot distillation: Teacher-student optimization in one generation,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2019.
* [11] M. Phuong, and C. H. Lampert, “Distillation-based training for multi-exit architectures,” _Int. Conf. Comput. Vis._ , 2019.
* [12] D. Walawalkar, Z. Shen, and M. Savvides, “Online Ensemble Model Compression using Knowledge Distillation,” _Eur. Conf. Comput. Vis._ , 2020.
* [13] T. Li, J. Li, Z. Liu, and C. Zhang, “Few sample knowledge distillation for efficient network compression,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2020.
* [14] C. Shen, M. Xue, X. Wang, J. Song, L. Sun, and M. Song, “Customizing student networks from heterogeneous teachers via adaptive knowledge amalgamation,” _Int. Conf. Comput. Vis._ , 2019.
* [15] J. Yim, D. Joo, J. Bae, and J. Kim, “A gift from knowledge distillation: Fast optimization, network minimization and transfer learning,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2017.
* [16] N. Passalis, M. Tzelepi, and A. Tefas, “Heterogeneous Knowledge Distillation using Information Flow Modeling,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2020.
* [17] L. Yu, V. O. Yazici, X. Liu, J. Weijer, Y. Cheng, and A. Ramisa, “Learning metrics from teachers: Compact networks for image embedding,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2019.
* [18] K. Xu, L. Rui, Y. Li, and L. Gu, “Feature Normalized Knowledge Distillation for Image Classification,” _Eur. Conf. Comput. Vis._ , 2020.
* [19] Y. Guan, P. Zhao, B. Wang, Y. Zhang, C. Yao, K. Bian, and J. Tang, “Differentiable Feature Aggregation Search for Knowledge Distillation,” _Eur. Conf. Comput. Vis._ , 2020.
* [20] G. Hinton, O. Vinyals, and J. Dean, “Distilling the Knowledge in a Neural Network,” _arXiv: 1503.02531_ , 2015.
* [21] A. Romero, N. Ballas, and S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio, “FitNets: Hints for Thin Deep Nets,” _Int. Conf. Learn. Represent._ , 2015.
* [22] S.-I. Mirzadeh, M. Farajtabar, A. Li, and H. Ghasemzadeh, “Improved Knowledge Distillation via Teacher Assistant,” _AAAI_ , 2020.
* [23] L. Yuan, F. E. Tay, G. Li, T. Wang, and J. Feng, “Revisiting Knowledge Distillation via Label Smoothing Regularization,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2020.
* [24] L. Zhang, J. Song, A. Gao, J. Chen, C. Bao, and K. Ma, “Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation,” _Int. Conf. Comput. Vis._ , 2019.
* [25] Y. Zhang, T. Xiang, T. M. Hospedales, and H. Lu, “Deep Mutual Learning,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2018.
* [26] W. Park, D. Kim, Y. Lu, and M. Cho, “Relational Knowledge Distillation,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2019.
* [27] S. Yun, J. Park, K. Lee, and J. Shin, “Regularizing Class-wise Predictions via Self-knowledge Distillation,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2020.
* [28] X. Lan, X. Zhu, and S. Gong, “Knowledge Distillation by On-the-Fly Native Ensemble,” _Adv. Neural Inform. Process. Syst._ , 2018.
* [29] G. Wu and S. Gong, “Peer Collaborative Learning for Online Knowledge Distillation,” _arXiv:2006.04147_ , 2020.
* [30] Y. Liu, J. Cao, B. Li, C. Yuan, W. Hu, Y. Li, and Y. Duan, “Knowledge Distillation via Instance Relationship Graph,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2019.
* [31] B. Peng, X. Jin, J. Liu, D. Li, Y. Wu, Y. Liu, S. Zhou, and Z. Zhang, “Correlation Congruence for Knowledge Distillation,” _Int. Conf. Comput. Vis._ , 2019.
* [32] J. Kim, M. Hyun, I. Chung, and N. Kwak, “Feature Fusion for Online Mutual Knowledge Distillation,” _Int. Conf. Pattern Recog._ , 2019.
* [33] S. Hou, X. Liu, and Z. Wang, “DualNet: Learn Complementary Features for Image Recognition,” _Int. Conf. Comput. Vis._ , 2017.
* [34] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, “XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks,” _Eur. Conf. Comput. Vis._ , 2016.
* [35] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio, “Binarized Neural Networks,” _Adv. Neural Inform. Process. Syst._ , 2016.
* [36] D. Chen, J.-P. Mei, C. Wang, Y. Feng, and C. Chen, “Online Knowledge Distillation with Diverse Peers,” _AAAI_ , 2020.
* [37] S. You, C. Xu, C. Xu, and D. Tao, “Learning from Multiple Teacher Networks,” _ACM SIGKDD_ , 2017.
* [38] X. Wu, R. He, Y. Hu, and Z. Sun, “Learning an Evolutionary Embedding via Massive Knowledge Distillation,” _Int. J. Comput. Vis._ , 2020, pp. 2089–2106.
* [39] T. Xu, and C. Liu, “Data-Distortion Guided Self-Distillation for Deep Neural Networks,” _AAAI_ , 2019.
* [40] J. H. Cho, and B. Hariharan, “On the Efficacy of Knowledge Distillation,” _Int. Conf. Comput. Vis._ , 2019.
* [41] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2016.
* [42] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2018.
* [43] N. Ma, X. Zhang, H.-T. Zheng, and J. Sun, “ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design,” _Eur. Conf. Comput. Vis._ , 2018.
* [44] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” _Adv. Neural Inform. Process. Syst._ , 2012.
* [45] K. Simonyan, and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” _Int. Conf. Pattern Recog._ , 2015.
* [46] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size,” _Int. Conf. Pattern Recog._ , 2017.
* [47] A. Krizhevsky, and G. Hinton, “Learning multiple layers of features from tiny images,” _Technical Report._ , 2009.
* [48] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and F.-F. Li, “ImageNet: A large-scale hierarchical image database,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2009.
* [49] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, Q. Tian, “Scalable Person Re-Identification: A Benchmark,” _Int. Conf. Comput. Vis._ , 2015.
# Spinor-Helicity Varieties
Yassine El Maazouz, Anaëlle Pfister and Bernd Sturmfels
###### Abstract
The spinor-helicity formalism in particle physics gives rise to natural
subvarieties in the product of two Grassmannians. These include two-step flag
varieties for subspaces of complementary dimension. Taking Hadamard products
leads to Mandelstam varieties. We study these varieties through the lens of
combinatorics and commutative algebra, and we explore their tropicalization,
positive geometry, and scattering correspondence.
## 1 Introduction
Given two matrices $\lambda$ and $\widetilde{\lambda}$ of format $k\times n$,
consider the property that the $k\times k$ matrix
$\lambda\cdot\widetilde{\lambda}^{T}$ has rank at most $r$ where $0\leq r\leq
k\leq n$. We wish to express this property in terms of the $k\times k$ minors
of the matrices $\lambda$ and $\widetilde{\lambda}$. This situation arises in
the study of scattering amplitudes in quantum field theory [3]. The special
case when $k=2$ and $r=0$ is known as spinor-helicity formalism; for textbook
basics see [3, Section 1.8] and [16, Section 2.2]. In physics, it is customary
to write $\langle ij\rangle$ for the $2\times 2$ minors of $\lambda$ and
$[ij]$ for the $2\times 2$ minors of $\widetilde{\lambda}$, where $1\leq
i<j\leq n$, and these minors satisfy the momentum conservation relations.
###### Example 1.1 ($k=2,n=5,r=0$).
We consider the two skew-symmetric $5\times 5$ matrices
$\\!\\!P\,=\,\small\begin{pmatrix}\,\,0&\langle 12\rangle&\langle
13\rangle&\langle 14\rangle&\langle 15\rangle\\\ \\!-\langle
12\rangle&0&\langle 23\rangle&\langle 24\rangle&\langle 25\rangle\\\
\\!-\langle 13\rangle&\\!\\!-\langle 23\rangle&0&\langle 34\rangle&\langle
35\rangle\\\ \\!-\langle 14\rangle&\\!\\!-\langle 24\rangle&\\!\\!-\langle
34\rangle&0&\langle 45\rangle\\\ \\!-\langle 15\rangle&\\!\\!-\langle
25\rangle&\\!\\!-\langle 35\rangle&\\!\\!-\langle 45\rangle&0\\\
\end{pmatrix}\quad{\rm and}\quad
Q\,=\,\small\begin{pmatrix}\,\,0&[12]&[13]&[14]&[15]\\\
\\!-[12]&0&[23]&[24]&[25]\\\ \\!-[13]&\\!\\!-[23]&0&[34]&[35]\\\
\\!-[14]&\\!\\!-[24]&\\!\\!-[34]&0&[45]\\\
\\!-[15]&\\!\\!-[25]&\\!\\!-[35]&\\!\\!-[45]&0\\\ \end{pmatrix}\\!.$
These matrices have rank two, meaning that the $4\times 4$ Pfaffians vanish
for both matrices:
$\\!\\!\\!\langle ij\rangle\langle kl\rangle-\langle ik\rangle\langle
jl\rangle+\langle il\rangle\langle
jk\rangle\,\,=\,\,[ij][kl]-[ik][jl]+[il][jk]\,=\,0\,\,\,\,\hbox{for}\,\,\,1\\!\leq\\!i\\!<\\!j\\!<\\!k\\!<\\!l\\!\leq\\!5.$
(1)
These quadratic Plücker relations are known as Schouten identities in physics
[3, eqn (1.116)]. Momentum conservation [3, eqn (1.117)] stipulates that the
product $P\cdot Q^{T}$ is the zero matrix:
$\langle i1\rangle[1j]+\langle i2\rangle[2j]+\langle i3\rangle[3j]+\langle
i4\rangle[4j]+\langle i5\rangle[5j]\,=\,0\quad\hbox{for}\,\,\,1\leq i,j\leq
5.$ (2)
In total, we have a system of $5+5+25=35$ quadratic equations in
$\binom{5}{2}+\binom{5}{2}=20$ unknowns. The equations (1) define a product of
two Grassmannians ${\rm Gr}(2,5)\times{\rm
Gr}(2,5)\subset\mathbb{P}^{9}\times\mathbb{P}^{9}$. Inside this product, the
bilinear equations (2) cut out a variety of dimension $8$. This is our spinor-
helicity variety, denoted ${\rm SH}(2,5,0)$. Its bidegree in
$\mathbb{P}^{9}\times\mathbb{P}^{9}$ is the cohomology class
$5s^{3}t^{7}+10s^{4}t^{6}+12s^{5}t^{5}+10s^{6}t^{4}+5s^{7}t^{3}\,\,\in\,\,H^{*}(\mathbb{P}^{9}\times\mathbb{P}^{9},\mathbb{Z}).$
(3)
The $35$ quadrics (1) and (2) generate the prime ideal of ${\rm SH}(2,5,0)$.
This ideal coincides with the ideal of the Grassmannian ${\rm Gr}(3,6)$, which
has codimension $10$ and degree $42$ in $\mathbb{P}^{19}$. This identification
was observed by Bossinger, Drummond and Glew [7], who used the term massless
scattering ideal and the notation $I_{\rm 5pt}$ for the ideal of ${\rm
SH}(2,5,0)$. In [7, Section 6.2] they derive the tropicalization of ${\rm
SH}(2,5,0)$ from that of ${\rm Gr}(3,6)$; see [22, Example 4.4.10].
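The relations (1) and (2) are easy to test numerically. The following Python sketch (ours, not part of the paper) samples a random point of ${\rm SH}(2,5,0)$ by choosing the rows of $\widetilde{\lambda}$ inside the orthogonal complement of the row space of $\lambda$, and then checks all $35$ quadrics.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 5
lam = rng.standard_normal((2, n))              # lambda, rank 2
perp = np.linalg.svd(lam)[2][2:]               # orthonormal basis of rowspace(lam)^perp
lamt = rng.standard_normal((2, n - 2)) @ perp  # lambda-tilde with lam @ lamt.T = 0

ang = {(i, j): np.linalg.det(lam[:, [i, j]]) for i, j in combinations(range(n), 2)}
sqr = {(i, j): np.linalg.det(lamt[:, [i, j]]) for i, j in combinations(range(n), 2)}

# Schouten identities (1), i.e. the Plucker relations for both brackets
for br in (ang, sqr):
    for i, j, k, l in combinations(range(n), 4):
        assert abs(br[i, j] * br[k, l] - br[i, k] * br[j, l]
                   + br[i, l] * br[j, k]) < 1e-9

# Momentum conservation (2): all 25 entries of P @ Q.T vanish
P, Q = np.zeros((n, n)), np.zeros((n, n))
for (i, j), v in ang.items():
    P[i, j], P[j, i] = v, -v
for (i, j), v in sqr.items():
    Q[i, j], Q[j, i] = v, -v
assert np.abs(P @ Q.T).max() < 1e-9
```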
The Grassmannian ${\rm Gr}(k,n)$ is the subvariety of
$\mathbb{P}^{\binom{n}{k}-1}$ defined by the Plücker equations. Points in
${\rm Gr}(k,n)$ represent $k$-dimensional linear subspaces in
$\mathbb{C}^{n}$. See e.g. [23, Chapter 5]. Every point in ${\rm Gr}(k,n)$ is
the row space of a matrix $\lambda\in\mathbb{C}^{k\times n}$ of rank $k$. We
consider the set
$\bigl{\\{}\,(\lambda,\widetilde{\lambda})\,\,:\,\,\lambda,\widetilde{\lambda}\in\mathbb{C}^{k\times
n}\,,\,\,{\rm rank}(\lambda)={\rm rank}(\widetilde{\lambda})=k,\,\,{\rm
and}\,\,\,{\rm rank}\bigl{(}\lambda\cdot\widetilde{\lambda}^{T}\bigr{)}\leq
r\,\bigr{\\}}.$ (4)
We define the spinor-helicity variety to be the image of (4) in the product of
Grassmannians:
${\rm SH}(k,n,r)\,\,\subset\,\,{\rm Gr}(k,n)\times{\rm
Gr}(k,n)\,\,\subset\,\,\mathbb{P}^{\binom{n}{k}-1}\times\mathbb{P}^{\binom{n}{k}-1}.$
(5)
In what follows, we study the algebra, combinatorics and geometry of the
inclusion (5). Our presentation aims to be accessible to a wide range of
readers, not just from mathematics, but also from physics. The prerequisites
are at the level of the algebra textbooks [23, 24, 27].
The motivation for this project arose from our desire to understand the
spinor-helicity formalism in physics. The variety ${\rm SH}(2,n,0)$ is widely
used for scattering amplitudes [3, 16]. Cachazo, Early, Guevara and Mizera
[11, Section 5.1] proposed the variety ${\rm SH}(k,n,k-2)$ as a model to
encode kinematic data for particle scattering. Scattering amplitudes in the
CEGM model are computed by integrating over the moduli space $X(k,n)$ of $n$
points in $\mathbb{P}^{k-1}$. The articles [1, 12, 28] studied the scattering
potential on $X(k,n)$. The nonlinear structure of the kinematic data was
highlighted in Lam’s lectures [21]. We here examine this in detail.
The kinematic data are summarized in the Mandelstam invariants (27). In the
$k=2$ case from Example 1.1, the moduli space is $X(2,n)=\mathcal{M}_{0,n}$,
and the Mandelstam invariants are
$s_{ij}\,\,=\,\,\langle ij\rangle[ij].$ (6)
These quantities play the role of the data in the log-likelihood
interpretation of [28]. Thus, from the algebraic statistics perspective, our
topic here is the geometry of data space. The fundamental object which
underlies this geometry is the spinor-helicity variety ${\rm SH}(k,n,r)$.
The article is organized as follows. In Section 2 we present quadratic
polynomials that form a Gröbner basis for the prime ideal of ${\rm
SH}(k,n,r)$. The underlying toric degeneration is represented by a poset
constructed from two copies of Young’s lattice for ${\rm Gr}(k,n)$. The
special case $r=0$ is understood by identifying ${\rm SH}(k,n,0)$ with the
two-step flag variety ${\rm Fl}(k,n-k;\mathbb{C}^{n})$. In Section 3 we
express the momentum conservation equations by a matrix product $PQ^{T}$ which
generalizes (2), and we show that these equations generate the prime ideal of
(5). Theorem 3.5 features a Khovanskii basis for the coordinate ring of ${\rm
SH}(k,n,r)$.
Section 4 investigates the polynomial relations among the Mandelstam
invariants. These relations define the Mandelstam variety ${\rm M}(k,n,r)$ in
the kinematic subspace of $\mathbb{P}^{\binom{n}{k}-1}$. We study both
parametric and implicit representations of this variety. The generators of its
prime ideal for the case $k=2$ are presented in Theorem 4.5. We note in
Proposition 4.13 that ${\rm M}(k,n,k)$ is the Hadamard product (see [5]) of
the Grassmannian ${\rm Gr}(k,n)$ with itself.
In Section 5 we turn to positive geometry and tropical geometry. We introduce
the positive parts of ${\rm SH}(k,n,r)$ and ${\rm M}(k,n,r)$, we discuss their
boundaries, and we compute some associated tropical varieties. For $r=0$,
these structures arise from the flag variety.
In Section 6 we study the scattering correspondence. This is a variety in the
product space
${\rm M}(k,n,r)\,\times\,X(k,n),$
where $X(k,n)$ is the moduli space for $n$ points in $\mathbb{P}^{k-1}$. It
parametrizes pairs of Mandelstam invariants and solutions to their scattering
equations; see [21, eqn (0.2)]. This mirrors the likelihood correspondence in
statistics [19, Definition 1.5]. Building on [21, Section 4.3], we offer a
mathematical perspective on results from the physics literature, mostly for
$k=2$.
This article is accompanied by software and data. These materials are made
available in the MathRepo collection at MPI-MiS via
https://mathrepo.mis.mpg.de/SpinorHelicity.
## 2 Two Grassmannians and their Posets
We fix two copies of the Grassmannian ${\rm
Gr}(k,n)\subset\mathbb{P}^{\binom{n}{k}-1}$. The first Grassmannian has
Plücker coordinates $\langle i_{1}i_{2}\ldots i_{k}\rangle$, representing
maximal minors of $\lambda$. The second one has Plücker coordinates
$[i_{1}i_{2}\ldots i_{k}]$, representing maximal minors of
$\widetilde{\lambda}$. These expressions are antisymmetric, so we usually
assume $1\leq i_{1}<i_{2}<\cdots<i_{k}\leq n$. For instance, for $k=3$,
$\begin{matrix}&\langle 123\rangle=-\langle 132\rangle=-\langle
213\rangle=\,\langle 231\rangle=\,\langle 312\rangle\,=-\langle 321\rangle\\\
{\rm
and}\quad&\,[123]\,=-[132]\,=\,-[213]\,\,=\,[231]\,=\,\,[312]\,=-[321].\end{matrix}$
(7)
Their relations are given by two copies of the Plücker ideal, denoted
$J_{k,n}$ and ${\widetilde{J}}_{k,n}$. To describe this ideal, we introduce
Young’s lattice $Y_{k,n}$. This is the partially ordered set (poset) whose
elements are the $\binom{n}{k}$ Plücker coordinates. The order relation in
$Y_{k,n}$ is defined by
$\langle i_{1}i_{2}\cdots i_{k}\rangle\,\leq\,\langle j_{1}j_{2}\cdots
j_{k}\rangle\quad:\Longleftrightarrow\quad i_{1}\leq j_{1}\,\,{\rm
and}\,\,i_{2}\leq j_{2}\,\,{\rm and}\,\,\cdots\,\,{\rm and}\,\,i_{k}\leq
j_{k}.$
Let $\widetilde{Y}_{k,n}$ be a second copy of Young’s poset, but now with the
order relation reversed:
$[i_{1}i_{2}\cdots i_{k}]\,\leq\,[j_{1}j_{2}\cdots
j_{k}]\quad:\Longleftrightarrow\quad i_{1}\geq j_{1}\,\,{\rm and}\,\,i_{2}\geq
j_{2}\,\,{\rm and}\,\,\cdots\,\,{\rm and}\,\,i_{k}\geq j_{k}.$
The following result on the ideal $J_{k,n}$ of the Grassmannian is well-known
(see [27, §3.1]).
###### Proposition 2.1.
The prime ideal $J_{k,n}$ is generated by the quadratic Plücker relations
$\sum_{s=0}^{k}\,(-1)^{s}\cdot\langle\,i_{1}\,i_{2}\,\cdots\,i_{k-1}\,j_{s}\,\rangle\cdot\langle
j_{0}j_{1}\cdots j_{s-1}j_{s+1}\cdots j_{k}\rangle.$ (8)
These quadrics are a Gröbner basis for the reverse lexicographic term order
given by any linear extension of $Y_{k,n}$. The initial ideal of $J_{k,n}$ is
generated by the incomparable pairs in $Y_{k,n}$.
The key point of this result is that every incomparable pair lifts to a
quadric in $J_{k,n}$.
###### Example 2.2 ($k=4,n=8$).
The elements $\langle 1278\rangle$ and $\langle 3456\rangle$ are incomparable
in $Y_{4,8}$. The Plücker relation (8) for the triple $i_{1}i_{2}i_{3}=127$
and the quintuple $j_{0}j_{1}j_{2}j_{3}j_{4}=34568$ is
${\bf\langle 1278\rangle\langle 3456\rangle}\,+\,\langle 1267\rangle\langle
3458\rangle-\langle 1257\rangle\langle 3468\rangle+\langle 1247\rangle\langle
3568\rangle-\langle 1237\rangle\langle 4568\rangle.$
The monomials are listed in the term order we have chosen. The initial
monomial is the prescribed incomparable pair. Our quadric is not in the
reduced Gröbner basis since it has incomparable trailing terms. The
corresponding element in the reduced Gröbner basis equals
$\begin{matrix}{\bf\langle 1278\rangle\langle 3456\rangle-\langle
1256\rangle\langle 3478\rangle}+\langle 1246\rangle\langle 3578\rangle-\langle
1245\rangle\langle 3678\rangle\quad\qquad\\\ \qquad\qquad\qquad-\,\langle
1236\rangle\langle 4578\rangle\,+\,\langle 1235\rangle\langle
4678\rangle\,-\,\langle 1234\rangle\langle 5678\rangle.\end{matrix}$
That quadric has the virtue that its initial binomial $\langle
1278\rangle\langle 3456\rangle-\langle 1256\rangle\langle 3478\rangle$ is
consistent with the toric degeneration of the Grassmannian ${\rm Gr}(4,8)$
given by Young’s lattice $Y_{4,8}$. Indeed, for the incomparable pair $\langle
1278\rangle\langle 3456\rangle$, the meet is $\langle 3478\rangle$ and the
join is $\langle 1256\rangle$. Algebraically, this is the Khovanskii basis (or
SAGBI basis) structure in [27, Theorem 3.2.9].
###### Corollary 2.3.
The number of generators for $J_{k,n}$, or of incomparable pairs in $Y_{k,n}$,
equals
$\frac{1}{2}\left[\binom{n}{k}+1\right]\binom{n}{k}\,\,-\,\,\frac{(n+1)(n-k+1)}{k+1}\prod_{i=0}^{k-2}\frac{(n-i)^{2}}{(k-i)^{2}}.$
(9)
###### Proof.
The first term is the number of all quadratic monomials in the $\binom{n}{k}$
variables $\langle i_{1}i_{2}\cdots i_{k}\rangle$. From this we subtract the
number of standard monomials, which is the number of semi-standard Young
tableaux of shape $k\times 2$ with fillings in $\\{1,2,\ldots,n\\}$. That
number is given by the hook-content formula from combinatorics, which we made
explicit in (9). ∎
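Formula (9) is easy to confirm by brute force; the following Python sketch (our illustration) counts incomparable pairs in $Y_{k,n}$ directly and compares with the exact value of (9).

```python
from itertools import combinations
from math import comb
from fractions import Fraction

def incomparable_pairs(k, n):
    """Count incomparable pairs in Young's lattice Y_{k,n} by brute force."""
    elems = list(combinations(range(1, n + 1), k))
    leq = lambda a, b: all(x <= y for x, y in zip(a, b))
    return sum(1 for a, b in combinations(elems, 2)
               if not leq(a, b) and not leq(b, a))

def formula(k, n):
    """Right-hand side of (9), evaluated in exact arithmetic."""
    prod = Fraction(1)
    for i in range(k - 1):
        prod *= Fraction((n - i) ** 2, (k - i) ** 2)
    return (comb(n, k) + 1) * comb(n, k) // 2 \
        - Fraction((n + 1) * (n - k + 1), k + 1) * prod

for k, n in [(2, 5), (2, 6), (3, 6), (4, 8)]:
    assert incomparable_pairs(k, n) == formula(k, n)
```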
We now turn to the spinor-helicity variety. Rephrasing the definition in
(4)-(5), this is
$\mathrm{SH}(k,n,r)\,\,=\,\,\bigl{\\{}\,(V,W)\in\mathrm{Gr}(k,n)\times\mathrm{Gr}(k,n)\,\,\colon\dim(V\cap
W^{\perp})\geq k-r\,\bigr{\\}},$ (10)
where $W^{\perp}$ is the space orthogonal to $W$ with respect to the standard
inner product on $\mathbb{C}^{n}$. In the setting of the Introduction, $V$ and
$W$ are the row spaces of $\lambda$ and $\widetilde{\lambda}$ respectively.
###### Remark 2.4 (Involution).
There is a canonical involution on $\mathrm{SH}(k,n,r)$, defined by swapping
the subspaces $V$ and $W$. This interchanges the coordinates $\langle
i_{1}i_{2}\cdots i_{k}\rangle$ and $[i_{1}i_{2}\cdots i_{k}]$.
###### Proposition 2.5.
Fix integers $k,n,r$ such that $0\leq r\leq k$ and $2k\leq r+n$. The spinor-
helicity variety $\,\mathrm{SH}(k,n,r)\,$ is non-empty and irreducible in
$\mathbb{P}^{\binom{n}{k}-1}\times\mathbb{P}^{\binom{n}{k}-1}$. Its dimension
equals
$\dim(\mathrm{SH}(k,n,r))\,\,=\,\,2k(n-k)-(k-r)^{2}.$ (11)
If $r=0$ then it is linearly isomorphic to the two-step flag variety $\,{\rm
Fl}(k,n-k;\mathbb{C}^{n})$.
###### Proof.
Our hypothesis that $k={\rm dim}(V)$ and $n-k={\rm dim}(W^{\perp})$ are at
least $k-r$ is necessary for $\mathrm{SH}(k,n,r)$ to be non-empty. We assume
$k-r\geq 0$ to rule out trivial cases. The projection of $\mathrm{SH}(k,n,r)$
onto the first factor equals ${\rm Gr}(k,n)$, which is irreducible of
dimension $k(n-k)$. The fibers are subvarieties in the second factor ${\rm
Gr}(k,n)$. In local affine coordinates, the fiber over $V$ consists of all
$k\times(n-k)$ matrices of rank at most $r$. This is an irreducible variety of
codimension $(k-r)^{2}$. Hence, each fiber is irreducible of the same
dimension $k(n-k)-(k-r)^{2}$. From this we conclude that $\mathrm{SH}(k,n,r)$
is irreducible of dimension $2k(n-k)-(k-r)^{2}$.
Fix $r=0$ and recall that $k\leq n-k$. By passing from $W$ to $W^{\perp}$, we
can view ${\rm SH}(k,n,0)$ as a subvariety in ${\rm Gr}(k,n)\times{\rm
Gr}(n-k,n)$. Its points are pairs $(V,W^{\perp})$ of linear subspaces in
$\mathbb{C}^{n}$ such that $V\subseteq W^{\perp}$. In other words, its points
are two-step flags in $\mathbb{C}^{n}$. Hence ${\rm SH}(k,n,0)$ coincides with
the flag variety ${\rm Fl}(k,n-k;\mathbb{C}^{n})$ in its Plücker embedding in
$\,\mathbb{P}^{\binom{n}{k}-1}\times\,\mathbb{P}^{\binom{n}{k}-1}$. ∎
Let $R=\mathbb{C}\bigl{[}\langle i_{1}\dots i_{k}\rangle,[j_{1}\dots
j_{k}]\bigr{]}$ be the polynomial ring in the $2\binom{n}{k}$ bracket
variables. Let $S=\mathbb{C}[\bf{x}]$ be the polynomial ring in the entries of
an $(n-k+r)\times n$ matrix ${\bf x}=(x_{ij})$. We write
$\phi_{k,n,r}:R\rightarrow S$ for the homomorphism which maps $\langle
I\rangle=\langle i_{1}i_{2}\cdots i_{k}\rangle$ to the $k\times k$ minor of
${\bf x}$ in the rows $1,2,\ldots,k$ and columns $I$, and which maps
$[J]=[j_{1}j_{2}\cdots j_{k}]$ to $(-1)^{j_{1}+j_{2}+\cdots+j_{k}}$ times the
$(n\\!-\\!k)\times(n\\!-\\!k)$ minor of ${\bf x}$ in the rows
$r+1,\ldots,n-k+r$ and columns $[n]\backslash J$. Note that all such minors
involve the $k-r$ middle rows, which are indexed by $r+1,\ldots,k$.
###### Remark 2.6 (Parametrization).
Let $I_{k,n,r}$ denote the kernel of the ring map $\phi_{k,n,r}$. This kernel
is a homogeneous prime ideal in $R$, and its zero set is the spinor-helicity
variety ${\rm SH}(k,n,r)$. Hence, the subalgebra $\phi_{k,n,r}(R)$ of
$S=\mathbb{C}[{\bf x}]$ is isomorphic to the coordinate ring of ${\rm
SH}(k,n,r)$. Indeed, following (10), the minors $\phi_{k,n,r}(\langle
I\rangle)$ are the Plücker coordinates for the space $V$, while the signed
minors $\phi_{k,n,r}([J])$ are the Plücker coordinates for $W^{\perp}$. The
condition ${\rm dim}(V\cap W^{\perp})\geq k-r$ is encoded by the overlap in
the $k-r$ middle rows of ${\bf x}$.
We will describe a Gröbner basis of quadrics for $I_{k,n,r}$. The initial
monomials admit a combinatorial description which extends that for
Grassmannians given in Proposition 2.1. We start out with our two copies of
Young’s lattice, $Y_{k,n}$ and $\widetilde{Y}_{k,n}$. We define a new poset
$\mathcal{P}_{k,n,r}$ as follows. As a set, $\mathcal{P}_{k,n,r}$ is the
disjoint union of $Y_{k,n}$ and $\widetilde{Y}_{k,n}$. All order relations in
$Y_{k,n}$ and $\widetilde{Y}_{k,n}$ remain order relations in
$\mathcal{P}_{k,n,r}$. In addition, there are $\binom{2k-2r}{k-r}$ covering
relations
$[12\cdots r\,i_{r+1}\cdots i_{k}]\,\leq\,\langle 12\cdots r\,j_{r+1}\cdots
j_{k}\rangle,$ (12)
one for each ordered set partition
$\,\\{r+1,r+2,\ldots,2k-r\\}=\\{i_{r+1},\ldots,i_{k}\\}\sqcup\\{j_{r+1},\ldots,j_{k}\\}$.
The poset $\mathcal{P}_{k,n,r}$ is the transitive closure of these relations.
Note that $\mathcal{P}_{k,n,r}$ is a graded poset, with unique minimal element
$[n{-}k{+}1\,\cdots\,n{-}1\,n]$ and unique maximal element $\langle
n{-}k{+}1\,\cdots\,n{-}1\,n\rangle$. The Hasse diagram of
$\mathcal{P}_{k,n,r}$ is shown in Figure 1 for $k=2,n=6,r=0$.
[Hasse diagram: the fifteen Plücker coordinates $\langle ij\rangle$ of $Y_{2,6}$ drawn above the fifteen coordinates $[ij]$ of $\widetilde{Y}_{2,6}$.]
Figure 1: The poset $\mathcal{P}_{2,6,0}$ is created from $Y_{2,6}$ and
$\widetilde{Y}_{2,6}$ by adding six covering relations.
We are now prepared to state our first theorem on the spinor-helicity variety
${\rm SH}(k,n,r)$.
###### Theorem 2.7.
The prime ideal $I_{k,n,r}$ is minimally generated by quadratic forms. These
quadrics are a Gröbner basis for the reverse lexicographic term order given by
any linear extension of $\mathcal{P}_{k,n,r}$. The initial ideal of
$I_{k,n,r}$ is generated by the incomparable pairs in $\mathcal{P}_{k,n,r}$.
###### Proof.
We first assume $r=0$. By Proposition 2.5, our ideal $I_{k,n,0}$ is the ideal
of the two-step flag variety ${\rm Fl}(k,n-k;\mathbb{C}^{n})$. The quadratic
Gröbner basis for that ideal is derived from the well-known straightening law
for flag varieties. We refer to [24, Chapter 14] for a textbook exposition.
That exposition emphasizes the case of the complete flag variety. This case
applies to our situation as follows. Let $\mathcal{P}$ be the poset on all
$2^{n}$ subsets of $\\{1,2,\ldots,n\\}$ that was introduced in [24, Section
14.2]. The restriction of that poset to subsets that have size $k$ or $n-k$ is
isomorphic to our poset $\mathcal{P}_{k,n,0}$. The poset isomorphism maps
$(n-k)$-sets to their complements. With this, our assertion for $r=0$ follows
from [24, Theorem 14.6].
We next present the proof for $r\geq 1$. This will generalize the known
construction we used for $r=0$. We consider the skew Young diagram
$\lambda/\mu$ where $\lambda=(n-k+r,r)$ and $\mu=(r)$. A filling of
$\lambda/\mu$ with entries in $[n]=\\{1,2,\ldots,n\\}$ is assumed to have its
rows strictly increasing. Hence there are $\binom{n}{k}^{2}$ such fillings. A
filling is semi-standard if the $k-r$ non-trivial columns are weakly
increasing. If this is not the case then the filling of $\lambda/\mu$ is
called non-standard.
With these definitions in place, our poset admits the following alternative
description:
$\begin{matrix}&\quad\langle i_{1}i_{2}\,\cdots
i_{k}\,\rangle\,\geq\,[j_{1}j_{2}\,\cdots\,j_{k}]\qquad\qquad\hbox{holds in
$\mathcal{P}_{k,n,r}$,}\\\ \iff&\langle
i_{r+1}\\!-\\!r\,\cdots\,i_{k}\\!-\\!r\rangle\,\geq\,[j_{r+1}\\!-\\!r\,\cdots\,j_{k}\\!-\\!r]\quad\hbox{holds
in $\mathcal{P}_{k-r,n-r,0}$,}\\\ \iff&\hbox{the filling of $\lambda/\mu$ with
$[n]\backslash\\{j_{1},\ldots,j_{k}\\}$ and $\\{i_{1},\ldots,i_{k}\\}$ is
semi-standard.}\end{matrix}$ (13)
The proof of our theorem is now analogous to that of [24, Theorem 14.6]. Fix
an incomparable pair $\langle I\rangle[J]$ in the poset $\mathcal{P}_{k,n,r}$,
and consider the corresponding non-standard skew tableaux
$\lambda/\mu\quad=\qquad\setcounter{MaxMatrixCols}{11}\begin{bmatrix}&&&j_{1}^{\prime}&\cdots&j^{\prime}_{l}&\cdots&j^{\prime}_{k-r}&j^{\prime}_{k-r+1}&\cdots&j^{\prime}_{n-k}\\\
i_{1}&\cdots&i_{r}&i_{r+1}&\cdots&i_{r+l}&\cdots&i_{k}&&&\\\ \end{bmatrix}.$
(14)
The rows are increasing, and $i_{r+l}<j^{\prime}_{l}$ is the leftmost
violation, and $\\{j^{\prime}_{1},\ldots,j^{\prime}_{n-k}\\}=[n]\backslash J$.
By summing over all permutations $\pi$ of
$i_{1}<\cdots<i_{r+l}<j^{\prime}_{l}<\cdots<j^{\prime}_{n-k}$, we obtain
$\sum_{\pi}{\rm sign}(\pi)\cdot\langle\pi(I)\rangle\cdot[\pi([n]\backslash
J)]\quad\in\,\,R.$ (15)
This is the analogue to [24, eqn (14.2)]. The image of (15) under
$\phi_{k,n,r}$ is an alternating multilinear form in $n-k+r+1$ column vectors
of the matrix ${\bf x}$. But, the matrix has only $n-k+r$ rows, so this is the
zero polynomial. Therefore (15) lies in $I_{k,n,r}={\rm
kernel}(\phi_{k,n,r})$. Finally, we note that the initial monomial of (15) is
the monomial $\langle I\rangle[J]$ we started out with.
To complete the proof, we need to show that the semi-standard monomials in $R$
are linearly independent modulo $I_{k,n,r}$. The argument for this follows
that in the proof of [24, Theorem 14.6]. Namely, we consider any monomial in
$S$ and we write it as in [24, eqn (14.4)]. There exists a unique semi-
standard skew tableau whose image under $\phi_{k,n,r}$ has that initial
monomial. Thus, no cancellation is possible, and this finishes the proof of
Theorem 2.7. ∎
###### Example 2.8 ($k=2,n=6,r=0$).
The ideal $I_{2,6,0}$ is generated by $15+15+36=66$ quadrics which form a
Gröbner basis. Their initial monomials are the incomparable pairs in the poset
$\mathcal{P}_{2,6,0}$ which is shown in Figure 1. The $15$ initial monomials
from $J_{2,6}$ are the pairs $\langle ij\rangle\langle kl\rangle$ in
$Y_{2,6}$. The $15$ initial monomials from $\widetilde{J}_{2,6}$ are the pairs
$[ij][kl]$ in $\widetilde{Y}_{2,6}$. Finally, there are $36$ mixed initial
monomials $\langle ij\rangle[kl]$, corresponding to bilinear generators of
$I_{2,6,0}$.
The poset $\mathcal{P}_{2,5,0}$ arises in Figure 1 from deleting the upper rim
$\langle 16\rangle,\langle 26\rangle,\langle 36\rangle,\langle
46\rangle,\langle 56\rangle$ and the lower rim $[16],[26],[36],[46],[56]$.
Thus $\mathcal{P}_{2,5,0}$ has $\binom{5}{2}+\binom{5}{2}=20$ elements. It has
five incomparable pairs $\langle ij\rangle\langle kl\rangle$, five
incomparable pairs $[ij][kl]$, and $25$ mixed incomparable pairs $\langle
ij\rangle[kl]$. These are the initial monomials of the $35$ ideal generators
in Example 1.1.
The following result concerns the number of incomparable pairs in the poset
$\mathcal{P}_{k,n,r}$.
###### Lemma 2.9.
The number of mixed incomparable pairs $\langle i_{1}i_{2}\cdots i_{k}\rangle\,[j_{1}j_{2}\cdots j_{k}]$ equals $\binom{n}{k-r-1}^{2}$.
###### Proof.
Let $\lambda=(n-k+r,\ k)$ and $\mu=(r)$. The generating function for semi-
standard skew tableaux of shape $\lambda/\mu$ is the skew Schur polynomial
$s_{\lambda/\mu}$, which can be written as
$s_{\lambda/\mu}\,\,=\,\,\sum_{\nu}c_{\mu,\nu}^{\lambda}s_{\nu}.$
Here $c^{\lambda}_{\mu,\nu}$ are the Littlewood-Richardson coefficients. In
our special case $\mu=(r)$, Pieri’s rule tells us that
$c^{\lambda}_{\mu,\nu}=1$ if $\nu$ is obtained from $\lambda$ by removing $r$
boxes, no two of them in the same column, and $c^{\lambda}_{\mu,\nu}=0$
otherwise. Using the hook-content formula to evaluate each
standard skew tableaux of shape $\lambda/\mu$ with fillings in $[n]$. That
number equals
$\begin{matrix}s_{\lambda/\mu}(1,1,\ldots,1)\quad&=&\sum_{\begin{subarray}{c}\nu\text{ obtained from $\lambda$ }\\\ \text{by removing $r$ boxes}\end{subarray}}s_{\nu}(1,1,\dots,1)\\\ &=&\quad\sum_{\ell=0}^{r}s_{(n-k+\ell,\,k-\ell)}(1,1,\dots,1)\\\ &=&\quad\sum_{\ell=0}^{r}\left[\binom{n}{k-\ell}^{2}-\binom{n}{k-\ell-1}^{2}\right]\\\ &=&\quad\binom{n}{k}^{\\!2}-\,\binom{n}{k-r-1}^{\\!2}.\end{matrix}$ (16)
Since $\binom{n}{k}^{2}$ is the number of all tableaux, we see that the number
of non-standard skew tableaux of shape $\lambda/\mu$ equals
$\binom{n}{k-r-1}^{2}$. The equivalence in (13) now completes the proof. ∎
Theorem 2.7 and Lemma 2.9 imply the following result about the spinor-helicity
variety.
###### Corollary 2.10.
The number of minimal generators of $I_{k,n,r}$ equals twice (9) plus
$\binom{n}{k-r-1}^{2}$. These generators are quadratic and we can arrange them
to form a reduced Gröbner basis.
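These counts can be verified computationally. By (13), $\langle I\rangle\geq[J]$ holds in $\mathcal{P}_{k,n,r}$ if and only if $j^{\prime}_{l}\leq i_{r+l}$ for $l=1,\ldots,k-r$, where $j^{\prime}_{1}<\cdots<j^{\prime}_{n-k}$ lists $[n]\backslash J$; and no relation $\langle I\rangle\leq[J]$ ever holds, since all covering relations (12) point from $\widetilde{Y}_{k,n}$ up to $Y_{k,n}$. A brute-force sketch (ours):

```python
from itertools import combinations
from math import comb

def mixed_incomparable(k, n, r):
    """Count incomparable pairs <I>, [J] in P_{k,n,r}, using criterion (13)."""
    ksets = list(combinations(range(1, n + 1), k))
    count = 0
    for I in ksets:
        for J in ksets:
            Jc = sorted(set(range(1, n + 1)) - set(J))   # j'_1 < ... < j'_{n-k}
            # <I> >= [J] iff the skew filling is semi-standard, i.e.
            # j'_l <= i_{r+l} for the k-r overlapping columns
            if not all(Jc[l] <= I[r + l] for l in range(k - r)):
                count += 1
    return count

assert mixed_incomparable(2, 5, 0) == comb(5, 1) ** 2   # 25, cf. Example 2.8
assert mixed_incomparable(2, 6, 0) == comb(6, 1) ** 2   # 36, cf. Example 2.8
assert mixed_incomparable(3, 7, 1) == comb(7, 1) ** 2   # 49, cf. Example 2.13
```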
After its dimension, the second-most important invariant of a variety in a
projective space is its degree. For a variety in a product of two projective
spaces, one considers the bidegree, which is a homogeneous polynomial in two
variables $s$ and $t$. The degree of that polynomial is the codimension of the
variety. We saw an example in (3). By definition, the bidegree of ${\rm
SH}(k,n,r)$ is its class in the cohomology ring of ambient product of
projective spaces:
$H^{*}\left(\mathbb{P}^{\binom{n}{k}-1}\times\mathbb{P}^{\binom{n}{k}-1},\mathbb{Z}\right)\,\,=\,\,\mathbb{Z}[s,t]\,/\bigl{\langle}\,s^{\binom{n}{k}},\,t^{\binom{n}{k}}\,\bigr{\rangle}.$
We now present a general formula for the cohomology class of the spinor-
helicity variety.
###### Corollary 2.11.
The bidegree of $\,{\rm SH}(k,n,r)$ is equal to
$(st)^{\binom{n}{k}-k(n-k)-1}\cdot\sum c(i_{1}i_{2}\cdots i_{k})\cdot
c(j_{1}j_{2}\cdots j_{k})\cdot
s^{i_{1}+i_{2}+\cdots+i_{k}-\binom{k+1}{2}}\cdot
t^{j_{1}+j_{2}+\cdots+j_{k}-\binom{k+1}{2}},$ (17)
where we sum over all covering relations in (12), and $c(i_{1}i_{2}\cdots
i_{k})$ denotes the number of maximal chains from $\langle i_{1}i_{2}\cdots
i_{k}\rangle$ to the top element $\langle n{-}k{+}1\,\cdots\,n{-}1\,n\rangle$
in Young’s lattice $Y_{k,n}$.
###### Proof.
The bidegree of $I_{k,n,r}$ equals the bidegree of the initial monomial ideal
${\rm in}(I_{k,n,r})$. The latter is generated by the incomparable pairs in
$\mathcal{P}_{k,n,r}$. The bidegree is the multidegree of [24, §8.5] for the
$\mathbb{Z}^{2}$-grading at hand. It is additive over top-dimensional primary
components, and we can use the formula in [24, Theorem 8.44] for its
evaluation. The associated primes of ${\rm in}(I_{k,n,r})$ correspond to the
maximal chains of $\mathcal{P}_{k,n,r}$. Each maximal chain starts out at the
bottom of $\widetilde{Y}_{k,n}$, it uses precisely one of the covering
relations in (12) to transition from $\widetilde{Y}_{k,n}$ to $Y_{k,n}$, and
it then proceeds to the top of $Y_{k,n}$. Thus there are precisely
$c(i_{1}i_{2}\cdots i_{k})\cdot c(j_{1}j_{2}\cdots j_{k})$ maximal chains
which use the specific covering relation in (12). The associated monomial in
$s$ and $t$ records the height at which the transition from
$\widetilde{Y}_{k,n}$ to $Y_{k,n}$ in $\mathcal{P}_{k,n,r}$ takes place. ∎
###### Example 2.12 ($k=2,n=6,r=0$).
The bidegree of $I_{2,6,0}$ is a sum of monomials $s^{i}t^{j}$ over the
maximal chains of the poset $\mathcal{P}_{2,6,0}$ in Figure 1. The degree
$i+j=16$ of each monomial is the codimension of ${\rm SH}(2,6,0)$ in
$\mathbb{P}^{14}\times\mathbb{P}^{14}$. Counting paths in Young’s lattice
$Y_{2,6}$, we see
$c(12)=14,\,c(13)=14,\,c(14)=9,\,c(23)=5,\,c(24)=5,\,c(34)=2.$
The bidegree of $I_{2,6,0}$ is the polynomial
$\,28s^{6}t^{10}+70s^{7}t^{9}+90s^{8}t^{8}+70s^{9}t^{7}+28s^{10}t^{6}$
$=\,c(12)c(34)s^{6}t^{10}+c(13)c(24)s^{7}t^{9}+2c(14)c(23)s^{8}t^{8}+c(24)c(13)s^{9}t^{7}+c(34)c(12)s^{10}t^{6}.$
Hence the total number of maximal chains in $\mathcal{P}_{2,6,0}$ equals
$28+70+90+70+28=286$.
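The chain counts $c(\,\cdot\,)$ are computed by a simple recursion over Young's lattice: the top element has count $1$, and every other element's count is the sum over its covers. A short Python sketch (ours):

```python
from itertools import combinations

def chain_counts(k, n):
    """c(a) = number of maximal chains from a to the top of Y_{k,n}."""
    elems = sorted(combinations(range(1, n + 1), k), key=sum, reverse=True)
    def covers(a):   # covering relations raise one entry by 1
        return [tuple(sorted(a[:i] + (a[i] + 1,) + a[i + 1:]))
                for i in range(k) if a[i] + 1 <= n and a[i] + 1 not in a]
    c = {elems[0]: 1}                  # top element (n-k+1, ..., n)
    for a in elems[1:]:
        c[a] = sum(c[b] for b in covers(a))
    return c

c = chain_counts(2, 6)
print([c[t] for t in [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]])
# prints [14, 14, 9, 5, 5, 2], matching the values above
```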
###### Example 2.13 ($k=3,n=7,r=1$).
The poset $\mathcal{P}_{3,7,1}$ has $70$ elements and $312816$ maximal chains.
It arises from $Y_{3,7}$ and $\widetilde{Y}_{3,7}$ by adding six covering
relations, as shown in Figure 2. The ideal $I_{3,7,1}$ has $140+140+49=329$
minimal generators, one for each incomparable pair; by Corollary 2.10. The
blue numbers $c(ijk)$ count maximal chains from $\langle ijk\rangle$ to
$\langle 567\rangle$ in $Y_{3,7}$ or maximal chains from $[567]$ to $[ijk]$ in
$\widetilde{Y}_{3,7}$. By Corollary 2.11, the bidegree of $I_{3,7,1}$ equals
$(st)^{22}\bigl{(}462\cdot 56\,s^{4}\,+\,462\cdot 168\,s^{3}t\,+\,(252\cdot
210+210\cdot 252)\,s^{2}t^{2}\,+\,168\cdot 462\,st^{3}\,+\,56\cdot
462\,t^{4}\bigr{)}$.
[Hasse diagram of $\mathcal{P}_{3,7,1}$: each element $\langle ijk\rangle$ of $Y_{3,7}$ and $[ijk]$ of $\widetilde{Y}_{3,7}$ carries its chain count in blue; for instance $c(567)=1$, $c(145)=56$, $c(134)=210$, $c(125)=252$, and $c(124)=c(123)=462$.]
Figure 2: The poset $\mathcal{P}_{3,7,1}$ governs the combinatorics of the
variety ${\rm SH}(3,7,1)\subset\mathbb{P}^{34}\times\mathbb{P}^{34}$.
## 3 Bilinear Relations and Khovanskii Basis
In Section 2 we took a route into the combinatorial commutative algebra of the
spinor-helicity variety. This journey continues in this section. We begin by
taking a closer look at the bilinear equations in $I_{k,n,r}$. Thereafter, we
introduce a toric degeneration of ${\rm SH}(k,n,r)$, based on the poset
$\mathcal{P}_{k,n,r}$. The resulting Khovanskii basis matches our earlier
Gröbner basis.
We now introduce two matrices $P$ and $Q$ whose rows are indexed by
$\binom{[n]}{k-r-1}$ and whose columns are indexed by $\binom{[n]}{r+1}$. Here
$[n]=\\{1,2,\ldots,n\\}$ and $\binom{[n]}{s}$ is the set of subsets of size
$s$ in $[n]$. The entries of our matrices are given by concatenating row
labels and column labels
$P_{I,J}\,=\,\langle\,I\,J\,\rangle\quad\text{and }\quad
Q_{I,J}\,=\,[\,I\,J\,]\qquad\hbox{for $\,I\in\binom{[n]}{k-r-1}\,$ and
$\,J\in\binom{[n]}{r+1}$.}$
Here $I$ and $J$ are increasing sequences which we concatenate. We pass to the
sorted Plücker coordinates $\,\langle I\,\cup\,J\rangle$ and $\,[I\,\cup\,J]$
from Section 2 by multiplying with $-1,+1$ or $0$, as in (7). In particular,
this means that $\,\langle I\,J\rangle\,=\,0\,$ and $\,[I\,J]\,=\,0\,$
whenever $I\cap J\not=\emptyset$.
###### Example 3.1 ($k=3,n=5,r=1$).
The matrix $P$ is square of format $10\times 10$. The rows and columns of $P$
are labeled by $12,13,14,15,23,24,25,34,35,45$, in this order. We find
$P\,=\,\small\begin{pmatrix}0&0&0&0&0&0&0&\langle 1234\rangle&\langle
1235\rangle&\langle 1245\rangle\\\ 0&0&0&0&0&\langle 1324\rangle&\langle
1325\rangle&0&0&\langle 1345\rangle\\\ 0&0&0&0&\langle 1423\rangle&0&\langle
1425\rangle&0&\langle 1435\rangle&0\\\ 0&0&0&0&\langle 1523\rangle&\langle
1524\rangle&0&\langle 1534\rangle&0&0\\\ 0&0&\langle 2314\rangle&\langle
2315\rangle&0&0&0&0&0&\langle 2345\rangle\\\ 0&\langle 2413\rangle&0&\langle
2415\rangle&0&0&0&0&\langle 2435\rangle&0\\\ 0&\langle 2513\rangle&\langle
2514\rangle&0&0&0&0&\langle 2534\rangle&0&0\\\ \langle 3412\rangle&0&0&\langle
3415\rangle&0&0&\langle 3425\rangle&0&0&0\\\ \langle 3512\rangle&0&\langle
3514\rangle&0&0&\langle 3524\rangle&0&0&0&0\\\ \langle 4512\rangle&\langle
4513\rangle&0&0&\langle 4523\rangle&0&0&0&0&0\\\ \end{pmatrix}\\!.$
Each entry is now replaced by Plücker coordinates with increasing indices. For
instance, we replace $\langle 1324\rangle$ by $-\langle 1234\rangle$. The
matrix $Q$ is identical to $P$ but with square brackets $[ijkl]$.
###### Example 3.2 ($k=2,r=0$).
This is the case of most interest in physics. Here $P$ and $Q$ are the skew-
symmetric $n\times n$ matrices of Plücker coordinates, shown for $n=5$ in
Example 1.1.
###### Example 3.3 ($r=k-1$).
The variety ${\rm SH}(k,n,k-1)$ is a hypersurface in ${\rm Gr}(k,n)\times{\rm
Gr}(k,n)$. Here $P$ and $Q$ are the row vectors of length $\binom{n}{k}$ whose
entries are the Plücker coordinates. The defining equation of this
hypersurface is the inner product of the two Plücker vectors:
$PQ^{T}\,=\,\sum\langle i_{1}i_{2}\cdots i_{k}\rangle[i_{1}i_{2}\cdots
i_{k}]\,\,=\,\,0.$
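The case $r=k-1$ is also a direct consequence of the Cauchy-Binet formula, which states $\sum_{I}\langle I\rangle\,[I]=\det(\lambda\widetilde{\lambda}^{T})$; the hypersurface records the vanishing of this determinant. A quick numerical check (our sketch, not from the paper):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
k, n = 3, 6
lam = rng.standard_normal((k, n))
lamt = rng.standard_normal((k, n))

# Cauchy-Binet: the sum of <I>[I] over all k-subsets I equals det(lam @ lamt.T)
s = sum(np.linalg.det(lam[:, I]) * np.linalg.det(lamt[:, I])
        for I in map(list, combinations(range(n), k)))
assert np.isclose(s, np.linalg.det(lam @ lamt.T))
```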
These examples guide us towards our next theorem, which is the main result in
Section 3.
###### Theorem 3.4.
The entries of the matrix $\,PQ^{T}$ generate the prime ideal of the spinor-
helicity variety ${\rm SH}(k,n,r)$ in the coordinate ring of $\,{\rm
Gr}(k,n)\times{\rm Gr}(k,n)$. In symbols, we have
$I_{k,n,r}\,\,=\,\,J_{k,n}+\widetilde{J}_{k,n}+\langle\text{entries of
}PQ^{T}\rangle.$ (18)
###### Proof.
We first show that the entries of $PQ^{T}$ vanish on the spinor-helicity
variety. Fix any point in ${\rm SH}(k,n,r)$, represented by a pair of $k\times
n$ matrices $\lambda$ and $\widetilde{\lambda}$ such that
$\lambda\cdot\widetilde{\lambda}^{T}$ has rank $\leq r$. By passing to the
$(r+1)$-st exterior power, we find that
$\,\wedge_{r+1}\lambda\cdot(\wedge_{r+1}\widetilde{\lambda})^{T}\,$ is the
zero matrix of format $\binom{k}{r+1}\times\binom{k}{r+1}$. In other words,
the row spaces of the $\binom{k}{r+1}\times\binom{n}{r+1}$ matrices
$\wedge_{r+1}\lambda$ and $\wedge_{r+1}\widetilde{\lambda}$ are orthogonal to
each other. The row vectors of the matrices $P$ and $Q$ are elements in these
row spaces. Therefore $P\cdot Q^{T}=0$ holds for our point
$(\lambda,\widetilde{\lambda})$.
The previous paragraph shows that the right hand side of (18) is contained in
the left hand side. Both ideals are generated by quadrics, and they contain
the Plücker ideals $J_{k,n}$ and $\widetilde{J}_{k,n}$. It therefore suffices
to show that the entries of $PQ^{T}$ span the space of all bilinear quadrics
in the ideal $I_{k,n,r}$. We know from Lemma 2.9 and Corollary 2.10 that this
space has dimension $\binom{n}{k-r-1}^{2}$. This number coincides with the
number of entries in the square matrix $PQ^{T}$. It therefore suffices to show
that the entries of $PQ^{T}$ are linearly independent over $\mathbb{Q}$.
We shall prove this by contradiction. The entry of $PQ^{T}$ in row $I$ and
column $J$ equals
$f_{IJ}\,\,\,=\sum_{L\in\binom{[n]}{r+1}}\epsilon_{I,L}\,\epsilon_{J,L}\
\langle IL\rangle[JL],$
where $\epsilon_{I,L}=\pm 1$ is the sign of the permutation that sorts the
string $IL$. Suppose that
$\sum_{I,J\in\binom{[n]}{k-r-1}}\\!\\!\\!\\!\alpha_{I,J}\cdot
f_{IJ}\,\,=\,\,0\qquad\hbox{for some scalars $\alpha_{IJ}\in\mathbb{Q}$.}$
We must show that each $\alpha_{IJ}$ is zero. The previous equation can be
rewritten as follows:
$\sum_{\begin{subarray}{c}I^{\prime},J^{\prime}\in\binom{[n]}{k}\\\
|I^{\prime}\cap J^{\prime}|\geq
r+1\end{subarray}}\biggl{[}\,\,\sum_{\begin{subarray}{c}L\subset
I^{\prime}\cap J^{\prime}\\\
|L|=r+1\end{subarray}}\\!\\!\epsilon_{I^{\prime}\setminus
L,L}\,\epsilon_{J^{\prime}\setminus L,L}\ \alpha_{I^{\prime}\setminus
L,J^{\prime}\setminus
L}\,\biggr{]}\,\langle\,I^{\prime}\,\rangle\,[\,J^{\prime}\,]\,\,\,=\,\,\,0.$
From this we conclude that, for any two $k$-subsets $I^{\prime},J^{\prime}$
with $|I^{\prime}\cap J^{\prime}|\geq r+1$, we have
$\begin{matrix}\qquad\sum_{\begin{subarray}{c}L\in\binom{I^{\prime}\cap
J^{\prime}}{r+1}\end{subarray}}\epsilon_{I^{\prime}\setminus
L,L}\,\epsilon_{J^{\prime}\setminus L,L}\ \alpha_{I^{\prime}\setminus
L,J^{\prime}\setminus L}\,\,=\,\,0\qquad\hbox{for
all}\,\,\,I^{\prime},J^{\prime}\in\binom{[n]}{k}.\end{matrix}$ (19)
Our goal is to show that all $\alpha$’s are zero. First consider the case
$I^{\prime}=J^{\prime}$. Here (19) reads
$\sum_{L\subset I^{\prime},|L|=r+1}\\!\\!\\!\\!\alpha_{I^{\prime}\setminus
L,I^{\prime}\setminus L}\,\,=\,\,0.$
We write these equations in the form $Ba=0$ where $a_{I}=\alpha_{I,I}$ and $B$
is an $\binom{n}{k}\times\binom{n}{k-r-1}$ matrix with entries in $\\{0,1\\}$.
The row indices are subsets $J\in\binom{[n]}{k}$ and the column indices are
subsets $I\in\binom{[n]}{k-r-1}$. The matrix entry $B_{J,I}$ equals $1$
when $I\subset J$ and it is $0$ otherwise.
It is a known result in combinatorics that the columns of the matrix $B$ are
linearly independent. The context is the spectral theory of the Johnson graph,
which is developed in [17, Chapter 6]. To be precise, the desired identity
$\,{\rm rank}(B)=\binom{n}{k-r-1}\,$ can be found in [17, Theorem 6.3.3]. From
this statement we deduce that $\alpha_{I,I}=0$ for all
$I\in\binom{[n]}{k-r-1}$.
For the general case, we fix $I^{\prime}=I_{0}\sqcup K$ and
$J^{\prime}=J_{0}\sqcup K$ where $I_{0},J_{0}\in\binom{[n]}{s}$ are disjoint,
with $s\geq 1$, and $K\subset[n]\backslash(I_{0}\sqcup J_{0})$ has size $k-s$.
From (19) we obtain the equations
$\\!\\!\\!\sum_{\begin{subarray}{c}L\subset K\\\
|L|=r+1\end{subarray}}\\!\\!\epsilon_{(I_{0}\sqcup K)\setminus
L,L}\cdot\epsilon_{(J_{0}\sqcup K)\setminus L,L}\cdot\alpha_{(I_{0}\sqcup
K)\setminus L,(J_{0}\sqcup K)\setminus L}\,=\,0\quad\text{for any
}K\in\binom{[n]\backslash(I_{0}\sqcup J_{0})}{k-s}.$ (20)
Suppressing $I_{0}$ and $J_{0}$ from the indices of $\alpha$, we rewrite (20)
in matrix form $UBVa=0$, where
* •
$a_{I}=\alpha_{I_{0}\sqcup I,J_{0}\sqcup I}$ for any subset
$I\in\binom{[n]\setminus(I_{0}\sqcup J_{0})}{k-s-r-1}$. To get to (20), we
would set $I=K\backslash L$.
* •
$B$ is a $\binom{n-2s}{k-s}\times\binom{n-2s}{k-s-r-1}$ matrix with entries in
$\\{0,1\\}$. The columns of $B$ are indexed by subsets $I$ of size
$k{-}s{-}r{-}1$ of $[n]\backslash(I_{0}\sqcup J_{0})$ and the rows are indexed
by subsets $K$ of size $k{-}s$ of $[n]\backslash(I_{0}\sqcup J_{0})$.
The entries are $B_{K,I}=1$ if $I\subset K$ and $0$ otherwise.
* •
$U$ and $V$ are diagonal matrices of size $\binom{n-2s}{k-s}$ and
$\binom{n-2s}{k-s-r-1}$ respectively, with entries
$U_{K,K}\,=\,(-1)^{N}\quad\text{ and }\quad V_{I,I}\,=\,(-1)^{M},$
where $N$ counts the pairs $(i,\ell)\in(I_{0}\sqcup J_{0})\times K$ with
$i>\ell$, and $M$ counts the pairs $(i,\ell)\in(I_{0}\sqcup J_{0})\times I$
with $i>\ell$. In symbols,
$N\,\,=\sum_{i\in I_{0}\sqcup J_{0}}\sum_{\ell\in
K}1_{i>\ell}\qquad\text{and}\qquad M\,\,=\sum_{i\in I_{0}\sqcup
J_{0}}\sum_{\ell\in I}1_{i>\ell}.\vspace{-0.2cm}$
Writing $I\subset K$ and $L=K\backslash I$, we find $\epsilon_{(I_{0}\sqcup
K)\setminus L,L}\cdot\epsilon_{(J_{0}\sqcup K)\setminus L,L}=(-1)^{N+M}$ for
the sign in (20). Again, by virtue of [17, Theorem 6.3.3], we have ${\rm
rank}(B)=\binom{n-2s}{k-s-r-1}$ and hence $a_{I}=\alpha_{I_{0}\sqcup
I,J_{0}\sqcup I}=0$ for any set $I\subset[n]\backslash(I_{0}\sqcup J_{0})$ of
size $k{-}s{-}r{-}1$. Since this holds for any pair of disjoint index sets
$I_{0},J_{0}\in\binom{[n]}{s}$ where $0\leq s\leq k{-}r{-}1$, we deduce that
$\alpha=0$. ∎
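Theorem 3.4 is also easy to test numerically. The sketch below (Python with sympy; an illustration of the statement under the conventions of the proof, not the computation used in the paper) samples a pair $(\lambda,\widetilde{\lambda})$ with ${\rm rank}(\lambda\widetilde{\lambda}^{T})\leq r$ for $(k,n,r)=(3,6,1)$, fills $P$ and $Q$ with the corresponding maximal minors, and confirms that all entries of $PQ^{T}$ vanish. The bracket $\langle IJ\rangle$ is computed as the determinant of the columns of $\lambda$ in the concatenated order $I,J$, which builds in the sign $\epsilon_{I,J}$ and makes $\langle IJ\rangle=0$ automatic when $I\cap J\neq\emptyset$.

```python
from itertools import combinations
import random
import sympy as sp

k, n, r = 3, 6, 1
random.seed(0)
lam = sp.Matrix(k, n, lambda i, j: random.randint(-5, 5))
assert lam.rank() == k

# lamt^T = (kernel basis of lam) * B + r rank-one corrections, so that
# lam * lamt^T is a sum of r rank-one matrices.
N = sp.Matrix.hstack(*lam.nullspace())
B = sp.Matrix(n - k, k, lambda i, j: random.randint(-5, 5))
lamt_T = N * B
for _ in range(r):
    v = sp.Matrix(n, 1, lambda i, j: random.randint(-5, 5))
    w = sp.Matrix(1, k, lambda i, j: random.randint(-5, 5))
    lamt_T += v * w
lamt = lamt_T.T
assert (lam * lamt.T).rank() <= r

def bracket(M, idx):
    # determinant of the listed columns, in the listed order; a repeated
    # column makes this vanish
    return M[:, list(idx)].det()

rows = list(combinations(range(n), k - r - 1))
cols = list(combinations(range(n), r + 1))
P = sp.Matrix(len(rows), len(cols), lambda a, b: bracket(lam,  rows[a] + cols[b]))
Q = sp.Matrix(len(rows), len(cols), lambda a, b: bracket(lamt, rows[a] + cols[b]))
print((P * Q.T).is_zero_matrix)                    # True
```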
We next present a Khovanskii basis [4] for the coordinate ring of the variety
${\rm SH}(k,n,r)$. Khovanskii bases used to be called SAGBI bases in earlier
works, and our argument follows those given for Grassmannians in [27, Section
3.1] and for flag varieties in [24, Chapter 14].
We fix the reverse lexicographic term order $>$ on the polynomial ring
$S=\mathbb{C}[{\bf x}]$, where $x_{11}>x_{12}>\cdots>x_{n-k+r,n}$. This is a
diagonal term order, i.e. for each minor of ${\bf x}$, the initial monomial is
the product of the entries on its diagonal. Our coordinate ring is the image
of the polynomial ring $R=\mathbb{C}\bigl{[}\langle I\rangle,[J]\,\bigr{]}$
under the ring homomorphism $\phi=\phi_{k,n,r}$ into $S$. See Remark 2.6. For
each of the $2\binom{n}{k}$ generators of $R$, we consider the initial
monomial of its image in $S$. This gives a list of $\binom{n}{k}$ monomials
${\rm in}_{>}\phi(\langle I\rangle)$ of degree $k$ and $\binom{n}{k}$
monomials ${\rm in}_{>}\phi([J])$ of degree $n-k$. These monomials lie in the
initial algebra of our coordinate ring
${\rm in}_{>}(\phi(R))\,\,=\,\,\mathbb{C}\bigl{[}\,{\rm
in}_{>}(f)\,:\,f\in\phi(R)\,\bigr{]}.$ (21)
###### Theorem 3.5.
The $2\binom{n}{k}$ minors $\phi(\langle I\rangle)$ and $\phi([J])$ form a
Khovanskii basis for the coordinate ring $\phi(R)$ of the spinor-helicity
variety ${\rm SH}(k,n,r)$, i.e. their initial monomials generate (21).
###### Proof.
Our argument mirrors that of [24, Theorem 14.11]. We use the set-up in the
proof of Theorem 2.7. Monomials of bidegree $(d_{1},d_{2})$ in $S$ are
represented by skew tableaux. These are formed by placing $d_{1}$ increasing
rows of length $n-k$, shifted by $r$ steps to the right, above $d_{2}$
increasing rows of length $k$. This is seen in (14) for $d_{1}=d_{2}=1$.
Following [24, Lemma 14.13], a monomial in $S$ is the initial monomial of an
element in $\phi(R)$ if and only if its representation as a skew tableau, as in
[24, eqn (14.4)], is a semi-standard skew tableau. Hence the initial algebra
(21) is spanned as a $\mathbb{C}$-vector space by ${\bf x}$-monomials that
correspond to semi-standard skew tableaux. Every such monomial is a product of
diagonal monomials ${\rm in}_{>}\phi(\langle I\rangle)$ of degree $k$ and
diagonal monomials ${\rm in}_{>}\phi([J])$ of degree $n-k$. ∎
We now illustrate Theorem 3.5 for the non-trivial instance shown in Figure 2.
###### Example 3.6 ($k=3,n=7,r=1$).
The polynomial ring $S$ is generated by the $35$ entries of
${\bf
x}\,\,=\,\,\small\begin{pmatrix}x_{11}&x_{12}&x_{13}&x_{14}&x_{15}&x_{16}&x_{17}\\\
x_{21}&x_{22}&x_{23}&x_{24}&x_{25}&x_{26}&x_{27}\\\
x_{31}&x_{32}&x_{33}&x_{34}&x_{35}&x_{36}&x_{37}\\\
x_{41}&x_{42}&x_{43}&x_{44}&x_{45}&x_{46}&x_{47}\\\
x_{51}&x_{52}&x_{53}&x_{54}&x_{55}&x_{56}&x_{57}\end{pmatrix}.$ (22)
The polynomial ring $R$ is generated by the $70$ brackets in Figure 2. The map
$\phi:R\rightarrow S$ takes $\langle ijk\rangle$ to the $3\times 3$-minor of
${\bf x}$ with row indices $\\{1,2,3\\}$ and column indices $\\{i,j,k\\}$. It
takes $[ijk]$ to the signed $4\times 4$-minor with row indices $\\{2,3,4,5\\}$
and column indices $[7]\backslash\\{i,j,k\\}$. We consider the image (21) of
the map that takes each bracket to the diagonal initial monomial:
$\begin{matrix}{\rm in}_{>}\phi\,:\,R\rightarrow S:&\\!\\!\\!\\!\langle
123\rangle\,\mapsto\,x_{11}x_{22}x_{33},&\\!\\!\\!\\!\langle
124\rangle\,\mapsto\,x_{11}x_{22}x_{34},&\\!\\!\ldots\,,&\\!\\!\\!\\!\langle
567\rangle\,\mapsto\,x_{15}x_{26}x_{37},\\\ &\,[123]\mapsto
x_{24}x_{35}x_{46}x_{57},&\,[124]\mapsto
x_{23}x_{35}x_{46}x_{57},&\\!\\!\ldots\,,&\,[567]\mapsto
x_{21}x_{32}x_{43}x_{54}.\end{matrix}$
The kernel of the monomial map $\,{\rm in}_{>}\phi\,$ is a toric ideal in $R$.
This is minimally generated by $329$ binomial quadrics. First, there are $140$
quadratic binomials from Young’s poset $Y_{3,7}$:
$\langle 125\rangle\langle 134\rangle-\langle 124\rangle\langle
135\rangle\,,\,\,\langle 126\rangle\langle 134\rangle-\langle
124\rangle\langle 136\rangle\,,\,\,\ldots\,,\,\langle 367\rangle\langle
457\rangle-\langle 357\rangle\langle 467\rangle$ (23)
Likewise, among the generators of the toric ideal $\,{\rm kernel}\bigl{(}{\rm
in}_{>}(\phi)\bigr{)}\,$ are $140$ binomials from $\widetilde{Y}_{3,7}$:
$[125][134]-[124][135]\,,\,\,[126][134]-[124][136]\,,\,\ldots\,,\,[367][457]-[357][467].$
(24)
Third, and most important, there are $49$ mixed binomial quadrics in our toric
ideal:
$\begin{matrix}\langle 123\rangle[123]-\langle 145\rangle[145],&\langle
124\rangle[123]+\langle 145\rangle[135],&\langle 125\rangle[123]-\langle
145\rangle[134],\\\ \qquad\ldots\quad\ldots\quad\ldots\qquad,&\langle
123\rangle[236]+\langle 124\rangle[246],&\langle 123\rangle[237]+\langle
124\rangle[247].\\\ \end{matrix}$ (25)
The initial monomials in (23), (24) and (25) are the incomparable pairs in the
poset $\mathcal{P}_{3,7,1}$. The toric variety defined by our binomials is a
toric degeneration of the spinor-helicity variety ${\rm SH}(3,7,1)$. The
maximal chains of $\mathcal{P}_{3,7,1}$ form a triangulation of its Newton-
Okounkov body.
We close this section with the remark that the poset $\mathcal{P}_{k,n,r}$ is
a distributive lattice, just like $Y_{k,n}$ and $\widetilde{Y}_{k,n}$. The
join and meet operations $\wedge,\vee$ are defined as follows: if $\langle I\rangle[J]$ is
an incomparable pair, then $\langle I\vee J\rangle$ and $[I\wedge J]$ are
obtained by sorting the columns of the skew Young tableaux (14). With these
lattice operations, the binomials (25) can be written succinctly as follows:
$\langle I\rangle\cdot[J]\,-\,\langle I\vee J\rangle\cdot[I\wedge J].$
In other words, the description in [24, Theorem 14.16] extends to the spinor-
helicity varieties.
## 4 Mandelstam Variety
The componentwise multiplication of two vectors is known as the Hadamard
product. We consider the Hadamard product of two Plücker vectors. This gives
rise to a rational map
$s\,:\,\mathbb{P}^{\binom{n}{k}-1}\times\mathbb{P}^{\binom{n}{k}-1}\,\dashrightarrow\,\mathbb{P}^{\binom{n}{k}-1}.$
(26)
Generalizing the case $k=2$ in (6), the coordinates of $s$ are called
Mandelstam invariants:
$s_{i_{1}i_{2}\cdots i_{k}}\,\,=\,\,\langle i_{1}i_{2}\cdots
i_{k}\rangle[i_{1}i_{2}\cdots i_{k}].$ (27)
We define the Mandelstam variety ${\rm M}(k,n,r)$ to be the closure of the
image of the spinor-helicity variety ${\rm SH}(k,n,r)$ under the Hadamard
product map $s$. Thus, ${\rm M}(k,n,r)$ is an irreducible variety in
$\mathbb{P}^{\binom{n}{k}-1}$. We write $\mathcal{I}({\rm M}(k,n,r))$ for the
homogeneous prime ideal of this variety. This comprises all polynomial
relations among the Mandelstam invariants $\,s_{i_{1}i_{2}\cdots i_{k}}$.
###### Proposition 4.1.
The linear span of the Mandelstam variety ${\rm M}(k,n,r)$ in
$\mathbb{P}^{\binom{n}{k}-1}$ is the subspace $\,\mathbb{P}^{N}$ which is
defined by the momentum conservation relations. Its dimension equals
$N\,\,=\,\,\binom{n}{k}-1-\binom{n}{k-r-1}.$
This refers to the momentum conservation relations in the CEGM model [11, eqn
(5.6)].
###### Proof.
In our notation, the momentum conservation relations are written as follows:
$\sum_{J\in\binom{[n]}{r+1}}\\!\\!s_{IJ}\,\,=\,\,0\qquad\hbox{for
all}\,\,\,I\in\binom{[n]}{k-r-1}.$ (28)
We claim that these linear forms lie in $\mathcal{I}({\rm M}(k,n,r))$ and that
they are linearly independent. To see this, recall the matrix $PQ^{T}$ from
Theorem 3.4. The $\binom{n}{k-r-1}$ diagonal entries of $PQ^{T}$ are
$\,\sum_{J\in\binom{[n]}{r+1}}\langle IJ\rangle[IJ]$, where the index $I$ runs
over $\binom{[n]}{k-r-1}$. This sum agrees with (28), which therefore lies in
$\mathcal{I}({\rm M}(k,n,r))$. The argument with the Johnson matrix in the
proof of Theorem 3.4 shows that our $\binom{n}{k-r-1}$ linear forms are
linearly independent. The dimension count in Corollary 2.10 implies that they
span the space of all linear forms in $\,\mathcal{I}({\rm M}(k,n,r))$. ∎
###### Example 4.2 ($k=3,n=6,r=1$).
There are six momentum conservation relations:
$\begin{matrix}s_{123}+s_{124}+s_{125}+s_{126}+s_{134}+s_{135}+s_{136}+s_{145}+s_{146}+s_{156}&=&0,\\\
s_{123}+s_{124}+s_{125}+s_{126}+s_{234}+s_{235}+s_{236}+s_{245}+s_{246}+s_{256}&=&0,\\\
s_{123}+s_{134}+s_{135}+s_{136}+s_{234}+s_{235}+s_{236}+s_{345}+s_{346}+s_{356}&=&0,\\\
s_{124}+s_{134}+s_{145}+s_{146}+s_{234}+s_{245}+s_{246}+s_{345}+s_{346}+s_{456}&=&0,\\\
s_{125}+s_{135}+s_{145}+s_{156}+s_{235}+s_{245}+s_{256}+s_{345}+s_{356}+s_{456}&=&0,\\\
s_{126}+s_{136}+s_{146}+s_{156}+s_{236}+s_{246}+s_{256}+s_{346}+s_{356}+s_{456}&=&0.\end{matrix}$
(29)
These define a subspace $\mathbb{P}^{13}$ of $\mathbb{P}^{19}$. The variety
${\rm M}(3,6,1)$ has codimension four in this $\mathbb{P}^{13}$. A general
formula for the dimension of any Mandelstam variety is given in the next
result.
###### Proposition 4.3.
The dimension of the Mandelstam variety equals
${\rm dim}({\rm M}(k,n,r))\,\,=\,\,{\rm dim}({\rm
SH}(k,n,r))\,-n+1\,\,=\,\,2k(n-k)-(k-r)^{2}-n+1.$
###### Proof.
The $n$-dimensional torus $(\mathbb{C}^{*})^{n}$ acts on the spinor-helicity
variety as follows:
$\begin{matrix}\langle i_{1}i_{2}\cdots
i_{k}\rangle&\mapsto&t_{i_{1}}\,t_{i_{2}}\,\cdots\,t_{i_{k}}\,\langle
i_{1}i_{2}\cdots i_{k}\rangle,\vskip 3.0pt plus 1.0pt minus 1.0pt\\\
[i_{1}i_{2}\cdots i_{k}]&\mapsto&t_{i_{1}}^{-1}t_{i_{2}}^{-1}\cdots
t_{i_{k}}^{-1}[i_{1}i_{2}\cdots i_{k}].\end{matrix}$ (30)
The stabilizer is a one-dimensional torus $\mathbb{C}^{*}$. The ring of
polynomial invariants of the torus action is generated by the Mandelstam
invariants (27). Therefore, ${\rm M}(k,n,r)$ is the image of the quotient map,
and its dimension is $n-1$ less than the dimension of ${\rm SH}(k,n,r)$. ∎
Propositions 4.1 and 4.3 are illustrated in Table 1. For the given values of
$k,n$ and $r$, we display the dimension of ${\rm M}(k,n,r)$ and the dimension
$N$ of its linear span. Note that $N$ can be quite a bit smaller than the
dimension $\binom{n}{k}-1$ of the ambient Plücker space. For example, ${\rm
M}(3,8,0)$ has dimension $14$ inside a linear subspace $\mathbb{P}^{27}$ of
the Plücker space $\mathbb{P}^{55}$.
$k,n$ | 2,4 | 2,5 | 2,6 | 2,7 | 2,8 | 3,6 | 3,7 | 3,8 | 3,9 | 4,8 | 4,9
---|---|---|---|---|---|---|---|---|---|---|---
$r=0$ | 1,1 | 4,4 | 7,8 | 10,13 | 13,19 | 4,4 | 9,13 | 14,27 | 19,47 | 9,13 | 16,41
$r=1$ | 4,4 | 7,8 | 10,13 | 13,19 | 16,26 | 9,13 | 14,27 | 19,47 | 24,74 | 16,41 | 23,89
$r=2$ | 5,5 | 8,9 | 11,14 | 14,20 | 17,27 | 12,18 | 17,33 | 22,54 | 27,82 | 21,61 | 28,116
Table 1: The dimension of the Mandelstam variety ${\rm M}(k,n,r)$ and its
ambient space $\mathbb{P}^{N}$.
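The entries of Table 1 are pure arithmetic, so they can be regenerated in a few lines of Python (a throwaway script, included only as a cross-check of the two propositions):

```python
# Regenerating Table 1 from Propositions 4.1 and 4.3.
from math import comb

def ambient(n, k, r):
    m = k - r - 1
    return comb(n, k) - 1 - (comb(n, m) if m >= 0 else 0)

for k, n in [(2,4),(2,5),(2,6),(2,7),(2,8),(3,6),(3,7),(3,8),(3,9),(4,8),(4,9)]:
    for r in (0, 1, 2):
        dim = 2*k*(n - k) - (k - r)**2 - n + 1
        print(f"k={k}, n={n}, r={r}:  dim = {dim},  N = {ambient(n, k, r)}")
```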
We next discuss the Mandelstam variety in the case of primary interest in
physics, namely $k=2$. This lives in $\mathbb{P}^{\binom{n}{2}-1}$. Here
$(s_{ij})$ is a symmetric $n\times n$ matrix with $s_{11}=\cdots=s_{nn}=0$.
###### Proposition 4.4.
The $5\times 5$ minors of $(s_{ij})$ vanish on the varieties ${\rm M}(2,n,r)$.
For $r=1$, the sum of all matrix entries is zero. For $r=0$, each row and each
column in $(s_{ij})$ sums to zero, so we only need the
$\frac{1}{2}\bigl{(}\binom{n-1}{5}^{2}+\binom{n-1}{5}\bigr{)}$ minors
that involve neither the last row nor the last column.
###### Proof.
The sum constraints are the momentum conservation relations in (28). It
suffices to prove the first sentence for $r=2$, when there are no such
relations. Note that ${\rm M}(2,n,2)$ is the Hadamard product [5] of ${\rm
Gr}(2,n)$ with itself. For each matrix $s$ in ${\rm M}(2,n,2)$, we have
$\begin{matrix}s_{ij}\,\,=\,\langle
i\,j\rangle[i\,j]\,=\,(\lambda_{1i}\lambda_{2j}-\lambda_{2i}\lambda_{1j})(\widetilde{\lambda}_{1i}\widetilde{\lambda}_{2j}-\widetilde{\lambda}_{2i}\widetilde{\lambda}_{1j})\qquad\qquad\qquad\qquad\\\
\qquad\qquad\quad=\,\lambda_{1i}\widetilde{\lambda}_{1i}\cdot\lambda_{2j}\widetilde{\lambda}_{2j}\,-\,\lambda_{1i}\widetilde{\lambda}_{2i}\cdot\lambda_{2j}\widetilde{\lambda}_{1j}\,-\,\lambda_{2i}\widetilde{\lambda}_{1i}\cdot\lambda_{1j}\widetilde{\lambda}_{2j}\,+\,\lambda_{2i}\widetilde{\lambda}_{2i}\cdot\lambda_{1j}\widetilde{\lambda}_{1j}.\end{matrix}$
(31)
This shows that $s=(s_{ij})$ is a sum of four matrices of rank one, and hence
${\rm rank}(s)\leq 4$. ∎
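The rank bound is easy to observe experimentally. A minimal sympy sketch, assuming only that $\langle ij\rangle$ and $[ij]$ are the $2\times 2$ minors of two random integer matrices:

```python
# Proposition 4.4 observed numerically: s_ij = <ij>[ij] has rank <= 4.
import random
import sympy as sp

random.seed(1)
n = 7
lam  = sp.Matrix(2, n, lambda i, j: random.randint(-9, 9))
lamt = sp.Matrix(2, n, lambda i, j: random.randint(-9, 9))

def pl(M, i, j):                       # 2x2 Plücker coordinate of M
    return M[0, i] * M[1, j] - M[1, i] * M[0, j]

s = sp.Matrix(n, n, lambda i, j: pl(lam, i, j) * pl(lamt, i, j))
print(s.rank())                        # 4 (generically), never more
```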
Our next theorem states that the equations in Proposition 4.4 generate the
prime ideals.
###### Theorem 4.5.
For $r=0,1,2$, the prime ideal of the Mandelstam variety ${\rm M}(2,n,r)$ is
generated by the $5\times 5$ minors of $(s_{ij})$ together with the respective
linear forms in (28).
###### Proof.
We first record the dimensions of our three Mandelstam varieties from
Proposition 4.3:
${\rm dim}({\rm M}(2,n,2))=3n-7,\,\,{\rm dim}({\rm M}(2,n,1))=3n-8\quad{\rm
and}\quad{\rm dim}({\rm M}(2,n,0))=3n-11.$ (32)
Let $J$ denote the ideal generated by the $5\times 5$ minors. This ideal is
prime. This was shown for $4\times 4$ minors in [14, Theorem 3.4]. The proof
is the same for $5\times 5$ minors. A dimension count shows that $V(J)$ has
dimension $3n-7$, and from this we obtain the first assertion.
Every matrix in ${\rm M}(2,n,2)$ is a product $XX^{T}$ where $X$ is an
$n\times 4$ matrix whose rows lie on the Fermat quadric
$\mathcal{V}(x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2})$. Hence the coordinate
ring of ${\rm M}(2,n,2)$ is the $n$-fold tensor product of the coordinate ring
of the Fermat quadric. The latter ring is a normal domain and hence so is the
former. All associated primes of the principal ideal generated by $\sum
s_{ij}$ have height one in this domain. Using Lemma 4.6 below, we can now
conclude that this principal ideal is a prime ideal. This argument shows that
$J+\langle\,\sum s_{ij}\,\rangle$ is a prime ideal in the polynomial ring
$\mathbb{C}[s]$. The variety of this ideal has codimension $1$ in ${\rm
M}(2,n,2)$. Since this matches the dimension of ${\rm M}(2,n,1)$ in (32), we
conclude that $\mathcal{I}({\rm M}(2,n,1))=J+\langle\,\sum s_{ij}\,\rangle$.
We now turn to $r=0$. Let $K$ be the ideal generated by $J$ and
$\sum_{j=1}^{n}s_{ij}$ for $i=1,2,\ldots,n$. From Proposition 4.4 we know that
${\rm M}(2,n,0)\subseteq\mathcal{V}(K)$. We solve the $n$ linear equations for
$s_{1n},s_{2n},\ldots,s_{n-1,n}$. This leaves us with $\sum_{1\leq i<j\leq
n-1}s_{ij}=0$. Moreover, all $5\times 5$ minors of the $n\times n$ matrix $s$
that involve the index $n$ are sums of $5\times 5$ minors that do not involve
$n$. Using our result for $r=1$, we see that $\mathbb{C}[s]/K$ is isomorphic
to the coordinate ring of ${\rm M}(2,n-1,1)$. This shows that $K$ is prime and
${\rm dim}({\rm V}(K))=3(n-1)-8=3n-11$. This matches the dimension for $r=0$
in (32). We thus conclude that $K$ is the prime ideal of ${\rm M}(2,n,0)$. ∎
To complete the proof of Theorem 4.5, we still need to establish the following
lemma.
###### Lemma 4.6.
The equation $\sum_{1\leq i<j\leq n}s_{ij}=0$ defines a hypersurface that is
reduced and irreducible in the variety of symmetric $n\times n$ matrices
$(s_{ij})$ with zero diagonal and rank $\leq 4$.
###### Proof.
As in [14, Theorem 3.4], we work in the polynomial ring $\mathbb{C}[X]$. Our
hypersurface is the variety cut out by the ideal $I$, which is generated by
the quadric $\sum_{i,j=1}^{n}\sum_{k=1}^{4}x_{ik}x_{jk}$ together with the $n$
Fermat quadrics $\sum_{k=1}^{4}x_{ik}^{2}$. These quadrics form a regular
sequence in $\mathbb{C}[X]$. Hence $\mathbb{C}[X]/I$ is a complete
intersection ring. By examining the Jacobian matrix of these $n+1$ quadrics,
we can show that the singular locus of this complete intersection has
codimension $\geq 2$. Serre’s criterion for normality implies that
the coordinate ring $\mathbb{C}[X]/I$ is normal, so it is a product of normal
domains. This ring being graded, it has no non-trivial idempotents. Hence it
is a normal domain, so $I$ is a prime ideal. ∎
We now turn to the general case $k\geq 3$, with $r$ between $0$ and $k-2$. The
Mandelstam invariants $s_{i_{1}i_{2}\cdots i_{k}}$ form a symmetric
$n\times\cdots\times n$ tensor $s$, where an entry is zero whenever
$\\#\\{i_{1},i_{2},\ldots,i_{k}\\}\leq k-1$. Its two-way marginal is the
symmetric $n\times n$ matrix with entries
$s_{i\,j\,+\cdots+}\,\,\,:=\,\,\,\sum_{l_{3}=1}^{n}\sum_{l_{4}=1}^{n}\cdots\\!\sum_{l_{k}=1}^{n}\\!s_{i\,j\,l_{3}l_{4}\cdots
l_{k}}\quad\hbox{for $\,1\leq i,j\leq n$.}$ (33)
###### Proposition 4.7.
For every tensor $s$ in ${\rm M}(k,n,r)$, the two-way marginal has rank $\leq
4$.
###### Example 4.8 ($k=3,n=6$).
The symmetric $6\times 6\times 6$ tensor $(s_{ijk})$ has only $20$ distinct
nonzero entries $s_{ijk}$ for $1\leq i<j<k\leq 6$. Its two-way marginal is the
$6\times 6$ matrix
$\\!\\!\\!\\!\tiny\begin{bmatrix}0\\!\\!&\\!\\!s_{123}{+}s_{124}{+}s_{125}{+}s_{126}\\!\\!&\\!\\!s_{123}{+}s_{134}{+}s_{135}{+}s_{136}\\!\\!&\\!\\!s_{124}{+}s_{134}{+}s_{145}{+}s_{146}\\!\\!&\\!\\!s_{125}{+}s_{135}{+}s_{145}{+}s_{156}\\!\\!&\\!\\!s_{126}{+}s_{136}{+}s_{146}{+}s_{156}\\\
s_{123}{+}s_{124}{+}s_{125}{+}s_{126}\\!\\!&\\!\\!0\\!\\!&\\!\\!s_{123}{+}s_{234}{+}s_{235}{+}s_{236}\\!\\!&\\!\\!s_{124}{+}s_{234}{+}s_{245}{+}s_{246}\\!\\!&\\!\\!s_{125}{+}s_{235}{+}s_{245}{+}s_{256}\\!\\!&\\!\\!s_{126}{+}s_{236}{+}s_{246}{+}s_{256}\\\
s_{123}{+}s_{134}{+}s_{135}{+}s_{136}\\!\\!&\\!\\!s_{123}{+}s_{234}{+}s_{235}{+}s_{236}\\!\\!&\\!\\!0\\!\\!&\\!\\!s_{134}{+}s_{234}{+}s_{345}{+}s_{346}\\!\\!&\\!\\!s_{135}{+}s_{235}{+}s_{345}{+}s_{356}\\!\\!&\\!\\!s_{136}{+}s_{236}{+}s_{346}{+}s_{356}\\\
s_{124}{+}s_{134}{+}s_{145}{+}s_{146}\\!\\!&\\!\\!s_{124}{+}s_{234}{+}s_{245}{+}s_{246}\\!\\!&\\!\\!s_{134}{+}s_{234}{+}s_{345}{+}s_{346}\\!\\!&\\!\\!0\\!\\!&\\!\\!s_{145}{+}s_{245}{+}s_{345}{+}s_{456}\\!\\!&\\!\\!s_{146}{+}s_{246}{+}s_{346}{+}s_{456}\\\
s_{125}{+}s_{135}{+}s_{145}{+}s_{156}\\!\\!&\\!\\!s_{125}{+}s_{235}{+}s_{245}{+}s_{256}\\!\\!&\\!\\!s_{135}{+}s_{235}{+}s_{345}{+}s_{356}\\!\\!&\\!\\!s_{145}{+}s_{245}{+}s_{345}{+}s_{456}\\!\\!&\\!\\!0\\!\\!&\\!\\!s_{156}{+}s_{256}{+}s_{356}{+}s_{456}\\\
s_{126}{+}s_{136}{+}s_{146}{+}s_{156}\\!\\!&\\!\\!s_{126}{+}s_{236}{+}s_{246}{+}s_{256}\\!\\!&\\!\\!s_{136}{+}s_{236}{+}s_{346}{+}s_{356}\\!\\!&\\!\\!s_{146}{+}s_{246}{+}s_{346}{+}s_{456}\\!\\!&\\!\\!s_{156}{+}s_{256}{+}s_{356}{+}s_{456}\\!\\!&\\!\\!0\end{bmatrix}\\!\\!.$
This matrix has rank four on ${\rm M}(3,6,1)$ and hence also on ${\rm
M}(3,6,0)$, but not on ${\rm M}(3,6,2)$. Geometrically, this matrix encodes
the map ${\rm M}(3,6,1)\dashrightarrow{\rm M}(2,6,0)$ of Mandelstam varieties.
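Before turning to the proof, here is a hedged numerical illustration of Proposition 4.7 for $(k,n,r)=(3,6,1)$. As in the proof of Theorem 3.4, we represent a point of ${\rm SH}(3,6,1)$ by a pair $(\lambda,\widetilde{\lambda})$ with ${\rm rank}(\lambda\widetilde{\lambda}^{T})\leq 1$, and we take $\langle I\rangle$ and $[I]$ to be the maximal minors of $\lambda$ and $\widetilde{\lambda}$, so that $s_{I}=\langle I\rangle[I]$ lies on ${\rm M}(3,6,1)$:

```python
from itertools import combinations
import random
import sympy as sp

k, n, r = 3, 6, 1
random.seed(2)
lam = sp.Matrix(k, n, lambda i, j: random.randint(-5, 5))
assert lam.rank() == k
N = sp.Matrix.hstack(*lam.nullspace())             # lam * N = 0
B = sp.Matrix(n - k, k, lambda i, j: random.randint(-5, 5))
v = sp.Matrix(n, 1, lambda i, j: random.randint(-5, 5))
w = sp.Matrix(1, k, lambda i, j: random.randint(-5, 5))
lamt = (N * B + v * w).T                           # rank(lam*lamt.T) <= 1
assert (lam * lamt.T).rank() <= r

def s(I):                                          # Mandelstam invariant s_I
    return lam[:, list(I)].det() * lamt[:, list(I)].det()

sval = {I: s(I) for I in combinations(range(n), 3)}
marg = sp.Matrix(n, n, lambda i, j: 0 if i == j else
                 sum(sval[tuple(sorted((i, j, l)))]
                     for l in range(n) if l not in (i, j)))
print(marg.rank())                                 # 4 = the bound in Prop 4.7
```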
###### Proof of Proposition 4.7.
Since ${\rm M}(k,n,r)\subset{\rm M}(k,n,k-2)$ for all $r<k-2$, it suffices to
prove the statement in the case $r=k-2$. For this case, we consider the
rational map
$\mathrm{SH}(k,n,k-2)\,\dashrightarrow\,\mathrm{SH}(2,n,0)\,,\,\,(V,W)\,\mapsto\,(V\cap
W^{\perp},V^{\perp}\cap W).$ (34)
In terms of the parametrization in Remark 2.6, the space $V^{\perp}\cap W$ is
the kernel of the matrix $X$, while $V\cap W^{\perp}$ is the span of the $k-r$
middle rows, which are indexed by $r+1,\ldots,k$. The Plücker coordinates of
the two spaces can be written as bilinear forms in the given Plücker
coordinates $\langle i_{1}i_{2}\cdots i_{k}\rangle$ and $[i_{1}i_{2}\cdots
i_{k}]$ for $V$ and $W$. The map (34) is equivariant with respect to the
involution in Remark 2.4, and it induces a rational map of Mandelstam
varieties
${\rm M}(k,n,k-2)\,\dashrightarrow\,{\rm M}(2,n,0).$ (35)
In terms of coordinates, the map (35) takes the $n\times\cdots\times n$ tensor
$s$ to its two-way marginal, which is the $n\times n$ matrix with entries
(33). That matrix has rank $\leq 4$, by Proposition 4.4. ∎
###### Remark 4.9.
The rational map in (34) is a well-defined morphism on the open subset
$\mathrm{SH}(k,n,k-2)\backslash\mathrm{SH}(k,n,k-3)$. This open subset is the
smooth locus of $\mathrm{SH}(k,n,k-2)$. In general, the singular locus of the
spinor-helicity variety $\mathrm{SH}(k,n,r)$ is precisely
$\mathrm{SH}(k,n,r-1)$.
The Mandelstam variety ${\rm M}(k,n,r)$ has a natural parametrization, namely
the composition of the Hadamard map with the parametrization given by the map
$\phi_{k,n,r}$ in Remark 2.6. The parameters are the entries in the
$(n-k+r)\times n$ matrix ${\bf x}=(x_{ij})$. The Mandelstam invariant $s_{I}$
is a polynomial in the entries of ${\bf x}$. Namely, $s_{I}$ is the product of
the $k\times k$ minor indexed by $I$ in the first $k$ rows of ${\bf x}$ with
the signed $(n-k)\times(n-k)$ minor indexed by $[n]\backslash I$ in the last
$n-k$ rows of ${\bf x}$. The corresponding ring map $\phi_{k,n,r}\circ s^{*}$
has kernel $\mathcal{I}({\rm M}(k,n,r))$.
We now replace $\phi_{k,n,r}$ with a birational parametrization for (10),
namely by specializing ${\bf x}$ as follows. Start rows $r+1,\ldots,k$ with a
unit matrix. This leaves $(k-r)(n-k+r)$ parameters for $V\cap W^{\perp}$.
Start rows $1,\ldots,r$ with $k-r$ zero columns, followed by a unit matrix.
This leaves $r(n-k)$ parameters for $V/(V\cap W^{\perp})$. Start rows
$k+1,\ldots,n-k+r$ with $k-r$ zero columns, followed by a unit matrix. This
leaves $(n-2k+r)k$ parameters for $W^{\perp}/(V\cap W^{\perp})$.
###### Corollary 4.10.
The rules above give a birational map
$\psi_{k,n,r}:\mathbb{C}^{2k(n-k)-(k-r)^{2}}\rightarrow{\rm SH}(k,n,r)$.
###### Proof.
We first note that the parameter count above matches the dimension formula in
(11):
$(k-r)(n-k+r)\,+\,r(n-k)\,+\,(n-2k+r)k\quad=\quad 2k(n-k)-(k-r)^{2}.$
To parametrize (10), one first chooses $V\cap W^{\perp}$, and thereafter
$V/(V\cap W^{\perp})$ and $W^{\perp}/(V\cap W^{\perp})$. Each block of rows
gives a birational parametrization of the respective Grassmannian. ∎
By composing $\psi_{k,n,r}$ with the Hadamard map $s$, we obtain a
parametrization of the Mandelstam variety ${\rm M}(k,n,r)$. Each fiber has
dimension $n-1$, reflecting the torus action in (30). To obtain a finite-to-
one parametrization of ${\rm M}(k,n,r)$, we can now replace $n-1$ of the
matrix entries $x_{ij}$ by $1$. The resulting parametric representation of
${\rm M}(k,n,r)$ can then be used in numerical algebraic geometry. The
following proposition serves as an illustration.
###### Proposition 4.11.
The Mandelstam variety ${\rm M}(3,6,1)$ has dimension $9$ and degree $56$ in
$\mathbb{P}^{19}$. Its prime ideal is minimally generated by $14$ quartics,
plus the six linear forms in (29). Inside their subspace $\mathbb{P}^{13}$,
the variety is arithmetically Cohen-Macaulay, and its Betti diagram equals
$\,\small\begin{bmatrix}\,14&\cdot&\cdot&\cdot\,\\\
\,\cdot&56&64&21\,\end{bmatrix}.$ (36)
###### Computational proof.
The map $\psi_{3,6,1}$ is given by the following specialization of our matrix:
${\bf x}\,\,=\,\,\begin{bmatrix}\,0&0&1&b_{1}&b_{2}&b_{3}\\\
\,1&0&a_{1}&a_{2}&a_{3}&a_{4}\\\ \,0&1&a_{5}&a_{6}&a_{7}&a_{8}\\\
\,0&0&1&c_{1}&c_{2}&c_{3}\\\ \end{bmatrix}.$
We now remove five parameters by setting $a_{1}=a_{5}=c_{1}=c_{2}=c_{3}=1$.
The Mandelstam invariants are polynomials in the nine remaining unknowns
$a_{i}$ and $b_{j}$. This specifies a two-to-one map
$\,\mathbb{C}^{9}\rightarrow\mathbb{P}^{19}$. A computation checks that the
Jacobian of the map has full rank. Hence the closure of its image is the
$9$-dimensional Mandelstam variety ${\rm M}(3,6,1)$. This lies in the
$\mathbb{P}^{13}$ defined by (29). We use these relations to eliminate the six
variables $\,s_{123},s_{124},s_{134},s_{234},s_{235},s_{236}$. Thereafter, we
can view ${\mathcal{I}}({\rm M}(3,6,1))$ as an ideal in the remaining $14$
variables. Our computations with this ideal were carried out in Macaulay2
[18].
The ideal contains no quadrics or cubics, but we find $14$ linearly
independent quartics. Let $I$ be the subideal generated by the $14$ quartics.
This has codimension $4$ and degree $56$, and it contains the $21$ quintics
given by the $5\times 5$ minors of the $6\times 6$ matrix in Example 4.8. We
compute the minimal free resolution of $I$, and find that its Betti diagram
equals (36). Thus, $I$ is Cohen-Macaulay, and so it is an intersection of
primary ideals of codimension $4$.
We now apply HomotopyContinuation.jl [10] to the map
$\,\mathbb{C}^{9}\rightarrow\mathbb{P}^{19}$, and we compute the degree of its
image. This yields an independent proof that ${\rm M}(3,6,1)$ has degree $56$.
Since $I$ has degree $56$, and since the degree is additive over the primary
components, this shows that the ideal $I$ is prime. We conclude that
$I={\mathcal{I}}({\rm M}(3,6,1))$, and the proof is complete. ∎
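For readers who wish to replay the Jacobian step, here is a sketch in Python/sympy rather than Macaulay2. One assumption is made for concreteness: by analogy with Example 3.6, the bracket $[J]$ is taken as the minor of ${\bf x}$ on rows $2,3,4$ and the complementary columns, with signs omitted, since a per-coordinate sign only rescales the map by $\pm 1$ and cannot change any rank. The expected output, $9$, confirms that the image in $\mathbb{P}^{19}$ is $9$-dimensional.

```python
from itertools import combinations
import sympy as sp

a2, a3, a4, a6, a7, a8, b1, b2, b3 = sp.symbols('a2 a3 a4 a6 a7 a8 b1 b2 b3')
params = [a2, a3, a4, a6, a7, a8, b1, b2, b3]
x = sp.Matrix([[0, 0, 1, b1, b2, b3],     # the matrix of Proposition 4.11,
               [1, 0, 1, a2, a3, a4],     # after a1 = a5 = c1 = c2 = c3 = 1
               [0, 1, 1, a6, a7, a8],
               [0, 0, 1, 1,  1,  1]])

def mandelstam(I):
    J = [c for c in range(6) if c not in I]
    return x[0:3, list(I)].det() * x[1:4, J].det()   # <I> times [I], unsigned

svals = [mandelstam(I) for I in combinations(range(6), 3)]
chart = [f / svals[0] for f in svals[1:]]            # affine chart of P^19
jac = sp.Matrix(chart).jacobian(params)
point = dict(zip(params, [2, 3, 5, 7, 11, 13, 17, 19, 23]))
print(jac.subs(point).rank())                        # expect 9
```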
###### Remark 4.12.
At present, we do not know the meaning of our $14$ quartic generators. The
shortest quartic we found has $140$ monomials. In reverse lexicographic order,
it looks like
$\small\begin{matrix}s_{136}s_{156}s_{235}s_{345}-s_{135}s_{156}s_{236}s_{345}-s_{145}s_{156}s_{236}s_{345}-s_{156}^{2}s_{236}s_{345}-s_{146}s_{156}s_{245}s_{345}\phantom{moo}\\\
+\,s_{135}s_{156}s_{246}s_{345}+s_{145}s_{156}s_{246}s_{345}+s_{156}^{2}s_{246}s_{345}+s_{136}s_{145}s_{256}s_{345}-s_{135}s_{146}s_{256}s_{345}+\,\cdots\\\
\cdots\,\,\cdots\,\,+\,s_{136}s_{345}s_{456}^{2}+s_{156}s_{345}s_{456}^{2}-s_{135}s_{346}s_{456}^{2}-2s_{135}s_{356}s_{456}^{2}+s_{145}s_{356}s_{456}^{2}-s_{135}s_{456}^{3}.\end{matrix}$
We close this section by recording a few more general facts about Mandelstam
varieties.
###### Proposition 4.13.
The Mandelstam variety ${\rm M}(k,n,k)$ is the Hadamard square of the
Grassmannian ${\rm Gr}(k,n)$. This contains all other Mandelstam varieties by
the chain of inclusions
${\rm M}(k,n,0)\,\subset\,{\rm M}(k,n,1)\,\subset\,{\rm
M}(k,n,2)\,\subset\,\cdots\,\subset\,{\rm M}(k,n,k).$ (37)
There is a natural chain of dominant deletion maps, induced by removing columns
in $\lambda$ and $\widetilde{\lambda}$:
${\rm M}(k,n,0)\,\dashrightarrow\,{\rm M}(k,n-1,1)\,\dashrightarrow\,{\rm
M}(k,n-2,2)\,\dashrightarrow\,\,\cdots\,\,\dashrightarrow\,{\rm M}(k,n-k,k).$
(38)
###### Proof and discussion.
The term Hadamard square refers to the Hadamard product of a variety with
itself. For an introduction to Hadamard products of varieties see the book
[5]. We obtain inclusions ${\rm SH}(k,n,r)\subset{\rm SH}(k,n,r+1)$ by
relaxing the rank constraints in (10), and we obtain surjections ${\rm
SH}(k,n,r)\rightarrow{\rm SH}(k,n-1,r+1)$ by deleting the last columns in
$\lambda$ and $\widetilde{\lambda}$ respectively. These maps are compatible
with Hadamard products, so they descend to inclusions ${\rm
M}(k,n,r)\subset{\rm M}(k,n,r+1)$ and surjections ${\rm
M}(k,n,r)\rightarrow{\rm M}(k,n-1,r+1)$. It would be interesting to study the
fibers of the maps in (38). Their dimensions are $0,2,4,\ldots,2k-2$. ∎
## 5 Positivity and Tropicalization
In our last two sections, we set the stage for future research on spinor-
helicity varieties, with a view towards tropical geometry, positive geometry,
and applications to scattering amplitudes.
Bossinger, Drummond and Glew [7] studied the Gröbner fan and positive geometry
of the variety ${\rm SH}(2,5,0)$ in Example 1.1 which they identified with the
Grassmannian ${\rm Gr}(3,6)$. We shall examine this in a broader context. The
following result explains their identification.
###### Proposition 5.1.
For any $k\geq 1$, the varieties $\mathrm{SH}(k,2k{+}1,0)$ and
$\mathrm{SH}(k{+}1,2k{+}1,1)$ are isomorphic and their coordinate ring is
isomorphic to that of the Grassmannian $\mathrm{Gr}(k{+}1,2k{+}2)$.
###### Proof.
The isomorphism between $\mathrm{SH}(k,2k\\!+\\!1,0)$ and
$\mathrm{SH}(k\\!+\\!1,2k\\!+\\!1,1)$ arises because $(V,W)$ is in
$\mathrm{SH}(k\\!+\\!1,2k\\!+\\!1,1)$ if and only if $(V^{\perp},W^{\perp})$
is in $\mathrm{SH}(k,2k\\!+\\!1,0)$. The identification with the Grassmannian
${\rm Gr}(k\\!+\\!1,2k\\!+\\!2)$ uses the specialized parametrization
$\psi_{k,2k+1,0}$ in Corollary 4.10. We introduce a new parameter $z$, to
account for ${\rm dim}({\rm Gr}(k\\!+\\!1,2k\\!+\\!2))=1+{\rm dim}({\rm
SH}(k,2k\\!+\\!1,0))$, and we augment ${\bf x}$ with one extra column
$(0,0,\ldots,0,z)^{T}$. The new matrix ${\bf x}$ has $k+1$ rows and $2k+2$
columns, and it contains $(k+1)^{2}$ parameters. Its maximal minors give a
birational parametrization of ${\rm Gr}(k\\!+\\!1,2k\\!+\\!2)$ and also of
$\mathrm{SH}(k,2k\\!+\\!1,0)$. The minors involving the extra column are the
Plücker coordinates for $V$. The others are Plücker coordinates for
$W^{\perp}$. ∎
The positive Grassmannian ${\rm Gr}_{+}(k,n)$ is defined by requiring all
Plücker coordinates $\langle i_{1}i_{2}\cdots i_{k}\rangle$ of the subspace
$V$ to be real and positive. We define the dually positive Grassmannian ${\rm
Gr}^{+}(k,n)$ to be ${\rm Gr}_{+}(n-k,n)$ under the identification between $W$
and $W^{\perp}$. Thus ${\rm Gr}^{+}(k,n)$ is an open semialgebraic set
isomorphic to ${\rm Gr}_{+}(n-k,n)$. It is defined by
${\rm sign}\bigl{(}[j_{1}j_{2}\cdots
j_{k}]\bigr{)}\,=\,(-1)^{j_{1}+j_{2}+\cdots+j_{k}}\quad\hbox{for}\quad 1\leq
j_{1}<j_{2}<\cdots<j_{k}\leq n.$ (39)
The positive spinor-helicity variety consists of all positive points in the
spinor-helicity variety:
${\rm SH}_{+}(k,n,r)\,\,\,:=\,\,\,{\rm SH}(k,n,r)\,\,\cap\,\,\bigl{(}\,{\rm
Gr}_{+}(k,n)\,\times\,{\rm
Gr}^{+}(k,n)\bigr{)}\quad\subset\,\,\,\mathbb{R}\mathbb{P}^{\binom{n}{k}-1}\times\mathbb{R}\mathbb{P}^{\binom{n}{k}-1}.$
(40)
We finally define the positive Mandelstam variety $\,{\rm M}_{+}(k,n,r)$ by
the inequalities seen in (39):
${\rm sign}\bigl{(}s_{j_{1}j_{2}\cdots
j_{k}}\bigr{)}\,=\,(-1)^{j_{1}+j_{2}+\cdots+j_{k}}\quad\hbox{for}\quad 1\leq
j_{1}<j_{2}<\cdots<j_{k}\leq n.$ (41)
Thus ${\rm M}_{+}(k,n,r)$ is a semialgebraic subset of
$\mathbb{P}^{\binom{n}{k}-1}$. It contains the image of ${\rm SH}_{+}(k,n,r)$
under the Hadamard product map $s$ in (26). In general, this inclusion is
strict. For example, ${\rm M}_{+}(2,4,2)$ strictly contains the Hadamard
product of $\mathrm{Gr}_{+}(2,4)$ and $\mathrm{Gr}^{+}(2,4)$. To see this, we
note that the following expression is positive on the latter set but not on
the former set:
$s_{13}s_{24}+s_{14}s_{23}-s_{12}s_{34}\,\,=\,\,\langle 13\rangle\langle
24\rangle[14][23]\,+\,\langle 14\rangle\langle 23\rangle[13][24].$ (42)
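The identity (42) can be confirmed by a fully symbolic computation, with $\langle ij\rangle$ and $[ij]$ the $2\times 2$ minors of generic matrices (a sympy sketch, included for the skeptical reader):

```python
import sympy as sp

lam  = sp.Matrix(2, 4, sp.symbols('l11 l12 l13 l14 l21 l22 l23 l24'))
lamt = sp.Matrix(2, 4, sp.symbols('m11 m12 m13 m14 m21 m22 m23 m24'))

def A(i, j): return lam[:, [i - 1, j - 1]].det()    # <ij>
def B(i, j): return lamt[:, [i - 1, j - 1]].det()   # [ij]
def s(i, j): return A(i, j) * B(i, j)               # Mandelstam invariant

lhs = s(1, 3)*s(2, 4) + s(1, 4)*s(2, 3) - s(1, 2)*s(3, 4)
rhs = A(1, 3)*A(2, 4)*B(1, 4)*B(2, 3) + A(1, 4)*A(2, 3)*B(1, 3)*B(2, 4)
print(sp.expand(lhs - rhs) == 0)                    # True
```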
We now recycle Example 1.1 and Proposition 4.4 for our running example in this
section.
###### Example 5.2 ($k=2,n=5$).
For points in ${\rm SH}_{+}(2,5,r)$, the rank $2$ matrices $P$ and $Q$ satisfy
${\rm sign}(P)\,\,=\,\,\footnotesize\begin{bmatrix}0&+&+&+&+\\\ -&0&+&+&+\\\
-&-&0&+&+\\\ -&-&-&0&+\\\ -&-&-&-&0\end{bmatrix}\qquad{\rm and}\qquad{\rm
sign}(Q)\,\,=\,\,\footnotesize\begin{bmatrix}0&-&+&-&+\\\ +&0&-&+&-\\\
-&+&0&-&+\\\ +&-&+&0&-\\\ -&+&-&+&0\end{bmatrix}.$
The positive spinor-helicity variety ${\rm SH}_{+}(2,5,0)$ is the subset
defined by the equations in (2).
The positive Mandelstam variety ${\rm M}_{+}(2,5,2)$ is a $9$-dimensional
simplex $\mathbb{R}\mathbb{P}_{+}^{9}$. It consists of symmetric $5\times 5$
matrices $s=(s_{ij})$ with alternating sign pattern, i.e. ${\rm sign}(s)={\rm
sign}(Q)$. Its subset ${\rm M}_{+}(2,5,0)$ is defined in this simplex by the
equations $\sum_{j=1}^{5}s_{ij}=0$ for $i=1,2,3,4,5$. This is a cyclic
$4$-polytope with $6$ vertices. It has the f-vector $(6,15,18,9)$. The
vertices are:
$\tiny\begin{pmatrix}0&-&+&0&0\\\ -&0&0&+&0\\\ +&0&0&-&0\\\ 0&+&-&0&0\\\
0&0&0&0&0\end{pmatrix},\begin{pmatrix}0&0&+&-&0\\\ 0&0&-&+&0\\\ +&-&0&0&0\\\
-&+&0&0&0\\\ 0&0&0&0&0\end{pmatrix},\begin{pmatrix}0&-&0&0&+\\\ -&0&0&+&0\\\
0&0&0&0&0\\\ 0&+&0&0&-\\\ +&0&0&-&0\end{pmatrix},\begin{pmatrix}0&0&0&-&+\\\
0&0&0&+&-\\\ 0&0&0&0&0\\\ -&+&0&0&0\\\
+&-&0&0&0\end{pmatrix},\begin{pmatrix}0&0&0&0&0\\\ 0&0&-&+&0\\\ 0&-&0&0&+\\\
0&+&0&0&-\\\ 0&0&+&-&0\end{pmatrix},\begin{pmatrix}0&0&0&0&0\\\ 0&0&0&+&-\\\
0&0&0&-&+\\\ 0&+&-&0&0\\\ 0&-&+&0&0\end{pmatrix}.$
Nine of the inequalities ${\rm sign}(s_{ij})=(-1)^{i+j}$ define facets. Only
$s_{24}\geq 0$ is not facet-defining.
The second thread in this section is tropical geometry. Using notation from
the textbook [22], the tropicalizations of our varieties ${\rm SH}(k,n,r)$ and
${\rm M}(k,n,r)$ are the tropical varieties
${\rm trop}({\rm
SH}(k,n,r))\,\,\subset\,\,\mathbb{R}^{\binom{n}{k}}\\!/\mathbb{R}{\bf
1}\,\times\,\mathbb{R}^{\binom{n}{k}}\\!/\mathbb{R}{\bf 1}\,\,\quad{\rm
and}\quad\,\,{\rm trop}({\rm
M}(k,n,r))\,\,\subset\,\,\mathbb{R}^{\binom{n}{k}}\\!/\mathbb{R}{\bf 1}.$ (43)
These are balanced polyhedral fans whose dimensions are given by Propositions
2.5 and 4.3. Each such fan is a finite intersection of the tropical
hypersurfaces given by a tropical basis. We illustrate these concepts for an
example where the underlying variety is a linear space.
###### Example 5.3 ($k=2,n=5$).
The tropical linear space ${\rm trop}({\rm M}(2,5,0))$ is a pointed fan of
dimension $4$ in $\mathbb{R}^{10}/\mathbb{R}{\bf 1}$. A minimal tropical basis
consists of $15$ linear forms with four terms:
$\begin{matrix}s_{12}{+}s_{13}{+}s_{14}{+}s_{15},s_{12}{+}s_{23}{+}s_{24}{+}s_{25},s_{13}{+}s_{23}{+}s_{34}{+}s_{35},s_{14}{+}s_{24}{+}s_{34}{+}s_{45},s_{15}{+}s_{25}{+}s_{35}{+}s_{45},\\\
s_{12}{+}s_{13}{+}s_{23}{-}s_{45},s_{12}{+}s_{14}{+}s_{24}{-}s_{35},s_{12}{+}s_{15}{+}s_{25}{-}s_{34},s_{34}{+}s_{35}{+}s_{45}{-}s_{12},s_{13}{+}s_{14}{+}s_{34}{-}s_{25},\\\
s_{13}{+}s_{15}{+}s_{35}{-}s_{24},s_{24}{+}s_{25}{+}s_{45}{-}s_{13},s_{14}{+}s_{15}{+}s_{45}{-}s_{23},s_{23}{+}s_{25}{+}s_{35}{-}s_{14},s_{23}{+}s_{24}{+}s_{34}{-}s_{15}.\end{matrix}$
The underlying rank $5$ matroid on $10$ elements is the exceptional unimodular
matroid $R_{10}$. This matroid is seen in [29, Section 3.3]. Hence ${\rm
trop}({\rm M}(2,5,0))$ is the cone over the Bergman complex of $R_{10}$, which
consists of $315$ tetrahedra and $45$ bipyramids. Its f-vector is
$(40,240,510,360)$. The $40$ vertices are the $10$ coordinate points $e_{ij}$
and the $30$ circuits of $R_{10}$.
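A quick linear-algebra check (a sympy sketch; the encoding of the coordinates and the sample four-term form are our own choices) confirms that the four-term forms lie in the rank $5$ span of the five momentum conservation relations:

```python
from itertools import combinations
import sympy as sp

pairs = list(combinations(range(1, 6), 2))       # the 10 coordinates s_ij

def vec(coeffs):                                  # coeffs: {(i,j): value}
    return [coeffs.get(p, 0) for p in pairs]

mom = [vec({tuple(sorted((i, j))): 1 for j in range(1, 6) if j != i})
       for i in range(1, 6)]                      # sum_j s_ij for i = 1..5
f = vec({(1, 2): 1, (1, 3): 1, (2, 3): 1, (4, 5): -1})   # s12+s13+s23-s45
M = sp.Matrix(mom)
print(M.rank(), M.col_join(sp.Matrix([f])).rank())        # 5 5
```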
Combining our two threads, and using the notion of positivity defined above,
leads us to
${\rm trop}_{+}({\rm
SH}(k,n,r))\,\,\subset\,\,\mathbb{R}^{\binom{n}{k}}\\!/\mathbb{R}{\bf
1}\,\times\,\mathbb{R}^{\binom{n}{k}}\\!/\mathbb{R}{\bf 1}\,\,\quad{\rm
and}\quad\,\,{\rm trop}_{+}({\rm
M}(k,n,r))\,\,\subset\,\,\mathbb{R}^{\binom{n}{k}}\\!/\mathbb{R}{\bf 1}.$ (44)
These positive tropical varieties are subfans of the respective tropical
varieties. They are defined by requiring sign compatibility in the tropical
equations. For details see [2, 8, 26].
###### Example 5.4 ($k=2,n=5$).
Following Examples 5.2 and 5.3, the positive tropical Mandelstam variety ${\rm
trop}_{+}({\rm M}(2,5,0))$ is defined by the following system of $15$ tropical
equations:
$\begin{matrix}s_{13}\oplus s_{15}=s_{12}\oplus
s_{14}\,,\,\,s_{24}=s_{12}\oplus s_{23}\oplus s_{25}\,,\,\,s_{13}\oplus
s_{35}=s_{23}\oplus s_{34}\,,\,\,s_{24}=s_{14}\oplus s_{34}\oplus s_{45},\\\
s_{15}\oplus s_{35}=s_{25}\oplus s_{45}\,,\,\,s_{13}\oplus s_{45}=s_{12}\oplus
s_{23}\,,\,\,s_{24}=s_{12}\oplus s_{14}\oplus s_{35}\,,\,\,s_{15}\oplus
s_{34}=s_{12}\oplus s_{25},\\\ s_{12}\oplus s_{35}=s_{34}\oplus
s_{45}\,,\,\,s_{13}\oplus s_{25}=s_{14}\oplus s_{34}\,,\,\,s_{13}\oplus
s_{15}\oplus s_{35}=s_{24}\,,\,\,s_{24}=s_{13}\oplus s_{25}\oplus s_{45},\\\
s_{15}\oplus s_{23}\,=\,s_{14}\oplus s_{45}\,,\,\,\,s_{14}\oplus
s_{35}\,=\,s_{23}\oplus s_{25}\,,\,\,\,s_{24}\,=\,s_{15}\oplus s_{23}\oplus
s_{34},\end{matrix}$
Here $\,x\oplus y:={\rm min}(x,y)$. Note the special role of the non-facet
variable $s_{24}$. These $15$ equations define the positive Bergman complex,
in the sense of Ardila, Klivans and Williams [2].
We find that $\,{\rm trop}_{+}({\rm M}(2,5,0))$ is the cone over a $3$-sphere.
That sphere is glued from $48$ tetrahedra and $18$ bipyramids, and its
f-vector is $(24,108,150,66)$. Geometrically, this is a subdivision of the
boundary of the $4$-polytope $\Delta_{2}\times\Delta_{2}$, which is the
product of two triangles. This is dual to the cyclic polytope in Example 5.2,
and its f-vector is $(9,18,15,6)$. The “fine subdivision” discussed in [2,
Corollary 3.5] is the barycentric subdivision of ${\rm
Bdr}(\Delta_{2}\times\Delta_{2})$.
The positive and tropical geometry of Grassmannians and flag varieties has
been studied intensely in recent years. See [6, 9, 25, 26] for some references
and [11, 15, 21] for physics perspectives. Results from this literature apply
directly to some spinor-helicity varieties. We know from Proposition 2.5 that
$\,{\rm SH}(k,n,0)={\rm Fl}(k,n-k;\mathbb{C}^{n})$ and ${\rm SH}(k,n,k)={\rm
Gr}(k,n)\times{\rm Gr}(k,n)$. The positive geometry structure on these
varieties is established in [20, Section 3.4].
###### Corollary 5.5.
The positive spinor-helicity variety ${\rm SH}_{+}(k,n,r)$ is a positive
geometry for $r=0$ and $r=k$, with boundary structure and canonical form
induced by known results on flag varieties and Grassmannians. The analogous
statement holds for (positive) tropicalizations.
###### Remark 5.6.
A detailed study of ${\rm trop}_{+}({\rm SH}(1,n,0))$ was carried out by
Olarte in [25]. It would be desirable to extend this to $k\geq 2$ using the
techniques that were introduced in [6, 9].
Another situation where results can be transferred directly is that seen in
Proposition 5.1.
###### Corollary 5.7.
Modulo a scaling action by $\mathbb{R}^{+}$, the positive Grassmannian
$\mathrm{Gr}(k{+}1,2k{+}2)$ coincides with the positive spinor-helicity
varieties $\mathrm{SH}_{+}(k,2k{+}1,0)$ and $\mathrm{SH}_{+}(k{+}1,2k{+}1,1)$,
for all $k\geq 1$. The analogous statement holds for their (positive) tropical
varieties.
###### Remark 5.8.
We obtain detailed textbook descriptions of ${\rm trop}(\mathrm{SH}(2,5,0))$
from those for ${\rm trop}({\rm Gr}(3,6))$ in [22, Sections 4.4 and 5.4]. This
was pointed out in [7, Section 6]. Similarly, ${\rm
trop}_{+}(\mathrm{SH}(3,7,0))$ arises from ${\rm trop}_{+}({\rm Gr}(4,8))$.
The latter fan was studied in [15, Section 2].
We finally turn to the tropical Mandelstam variety. Recall that ${\rm
M}(k,n,r)$ is the image of ${\rm SH}(k,n,r)$ under the Hadamard product map
$s$ in (26). The tropicalization of this map,
${\rm trop}(s)\,\,:\,\,\mathbb{R}^{\binom{n}{k}}/\mathbb{R}{\bf
1}\,\times\,\mathbb{R}^{\binom{n}{k}}/\mathbb{R}{\bf
1}\,\,\rightarrow\,\,\mathbb{R}^{\binom{n}{k}}/\mathbb{R}{\bf 1},$ (45)
computes the sum of two tropical Plücker vectors, modulo global tropical
scaling. It follows from [22, Theorem 5.5.1] that the Hadamard product map $s$
commutes with tropicalization. The following tropical constructions are thus
obtained directly from their classical analogues.
###### Corollary 5.9.
The tropical Mandelstam variety is the image of the tropical spinor-helicity
variety under the sum map (45). In symbols, for all values of the parameters
$k,n,r$, we have
${\rm trop}({\rm M}(k,n,r))\,\,=\,\,{\rm trop}(s)\bigl{(}\,{\rm trop}({\rm
SH}(k,n,r))\,\bigr{)}.$ (46)
For the parameter values in Corollaries 5.5 and 5.7, the tropical Mandelstam
variety is the image of a tropical Grassmannian or a tropical flag variety
under the sum map. The largest tropical Mandelstam variety ${\rm trop}({\rm M}(k,n,k))$ is
the Minkowski sum of the tropical Grassmannian with itself:
${\rm trop}({\rm M}(k,n,k))\,\,=\,\,{\rm trop}({\rm Gr}(k,n))\,+\,{\rm
trop}({\rm Gr}(k,n)).$ (47)
###### Example 5.10 ($k=2,n=5$).
Equation (46) describes a $2$-to-$1$ map from ${\rm trop}({\rm Gr}(3,6))$ onto
${\rm trop}({\rm M}(2,5,0))$. Starting from the census in [22, Example
4.4.10], we can examine this map on every cone of ${\rm trop}({\rm Gr}(3,6))$.
This fan has $65=20+15+30$ rays, grouped into types E, F and G. These rays map
to the $40=10+15+15$ rays of ${\rm trop}({\rm M}(2,5,0))={\rm trop}(R_{10})$.
## 6 The Scattering Correspondence
Scattering amplitudes in the CEGM model [11] are derived from the scattering
potential
$L_{s}\,\,=\,\,\sum_{I\in\binom{[n]}{k}}s_{I}\cdot{\rm log}(p_{I}).$
We assume that $s=(s_{I})$ is a fixed point in the Mandelstam variety ${\rm
M}(k,n,r)$ where $r\leq k-2$. The unknowns $p=(p_{I})$ are the Plücker
coordinates of the open Grassmannian ${\rm Gr}(k,n)^{o}$, which is defined by
$p_{I}\not=0$ for all $I\in\binom{[n]}{k}$. The momentum conservation
relations on ${\rm M}(k,n,k-2)$ ensure that the scattering potential is
well-defined on the quotient space
$X(k,n)\,\,=\,\,{\rm Gr}(k,n)/(\mathbb{C}^{*})^{n}.$ (48)
This is a very affine variety of dimension $(n-k-1)(k-1)$; see [1, 21]. It is
the moduli space of configurations of $n$ labeled points in linearly general
position in the projective space $\mathbb{P}^{k-1}$.
The scattering potential $L_{s}$ serves as log-likelihood function in
algebraic statistics [28]. In both statistics and physics, one cares about the
critical points of $L_{s}$. These are defined by
$\nabla_{p}\,L_{s}\,\,=\,\,0.$ (49)
We now let both $s$ and $p$ vary, and we consider all solution pairs $(s,p)$
to the system of equations in (49). The pairs $(s,p)$ satisfying (49) are the
points of the scattering correspondence
${\rm C}(k,n,r)\,\,\subset\,\,{\rm M}(k,n,r)\,\times\,{\rm X}(k,n).$
The aim of this section is to initiate the mathematical study of this variety.
We also consider the lifted scattering correspondence, where ${\rm M}(k,n,r)$
is replaced by the spinor-helicity variety:
$\widetilde{\rm C}(k,n,r)\,\,\subset\,\,{\rm SH}(k,n,r)\,\times\,{\rm
X}(k,n).$
###### Example 6.1 ($k=3,n=6$).
We represent six points in $\mathbb{P}^{2}$ by the columns of a matrix
$P\quad=\quad\begin{bmatrix}1&0&0&1&1&1\\\ 0&1&0&1&x&y\\\
0&0&1&1&z&w\end{bmatrix}$ (50)
Hence $x,y,z,w$ are coordinates on the moduli space ${\rm X}(3,6)$. The
scattering potential equals
$\footnotesize\begin{matrix}L_{s}&\\!\\!=\\!\\!&\,\,\,s_{125}\cdot{\rm
log}(z)+s_{126}\cdot{\rm log}(w)+s_{135}\cdot{\rm log}(-x)+s_{136}\cdot{\rm
log}(-y)+s_{145}\cdot{\rm log}(z-x)+s_{146}\cdot{\rm log}(w-y)\vskip 3.0pt
plus 1.0pt minus 1.0pt\\\ &&\\!\\!\\!\\!\\!+\,\,s_{156}\cdot{\rm log}(wx-
yz)+s_{245}\cdot{\rm log}(1-z)+s_{246}\cdot{\rm log}(1-w)+s_{256}\cdot{\rm
log}(z-w)+s_{345}\cdot{\rm log}(x-1)\vskip 3.0pt plus 1.0pt minus 1.0pt\\\
&&+\,\,s_{346}\cdot{\rm log}(y-1)\,+\,s_{356}\cdot{\rm
log}(y-x)\,+\,s_{456}\cdot{\rm log}(wx-yz-w-x+y+z).\end{matrix}$
The scattering equations are given by the partial derivatives of the
scattering potential $L_{s}$:
$\begin{matrix}s_{126}\,\frac{1}{w}+s_{146}\,\frac{1}{w-y}+s_{156}\,\frac{x}{wx-
yz}-s_{246}\,\frac{1}{1-w}-s_{256}\,\frac{1}{z-w}+s_{456}\,\frac{x-1}{wx-yz-
w-x+y+z}&=&0,\vskip 3.0pt plus 1.0pt minus 1.0pt\\\
s_{135}\,\frac{1}{x}-s_{145}\,\frac{1}{z-x}+s_{156}\,\frac{w}{wx-
yz}+s_{345}\,\frac{1}{x-1}-s_{356}\,\frac{1}{y-x}+s_{456}\,\frac{w-1}{wx-yz-
w-x+y+z}&=&0,\vskip 3.0pt plus 1.0pt minus 1.0pt\\\
s_{136}\,\frac{1}{y}-s_{146}\,\frac{1}{w-y}-s_{156}\,\frac{z}{wx-
yz}+s_{346}\,\frac{1}{y-1}+s_{356}\frac{1}{y-x}+s_{456}\,\frac{1-z}{wx-yz-
w-x+y+z}&=&0,\vskip 3.0pt plus 1.0pt minus 1.0pt\\\
\,s_{125}\,\frac{1}{z}+s_{145}\,\frac{1}{z-x}\,-\,s_{156}\,\frac{y}{wx-
yz}-s_{245}\,\frac{1}{1-z}+s_{256}\,\frac{1}{z-w}+s_{456}\,\frac{1-y}{wx-yz-
w-x+y+z}&=&0.\end{matrix}$ (51)
If the $s_{ijk}$ are general solutions to the linear equations in (29) then
(51) has $26$ complex solutions in ${\rm X}(3,6)$. We are interested in the
case when $s$ lies in ${\rm M}(3,6,1)$, or when we lift to ${\rm SH}(3,6,1)$
by substituting $s_{ijk}=\langle ijk\rangle[ijk]$. We obtain $26$-to-$1$ maps
from the two scattering correspondences ${\rm C}(3,6,1)$ or $\widetilde{\rm
C}(3,6,1)$ onto their kinematic spaces ${\rm M}(3,6,1)$ or ${\rm SH}(3,6,1)$.
We next examine the case of primary interest in physics, namely $k=2$. The
Mandelstam variety ${\rm M}(2,n,r)$ was characterized in Section 4. Here,
$X(2,n)$ is the moduli space $\mathcal{M}_{0,n}$ of $n$ distinct labeled
points $x_{1},x_{2},\ldots,x_{n}$ in
$\mathbb{P}^{1}=\mathbb{C}\cup\\{\infty\\}$. The scattering potential equals
$L_{s}\,\,\,=\,\sum_{1\leq i<j\leq n}\\!\\!s_{ij}\cdot{\rm log}(x_{i}-x_{j}).$
The system of scattering equations $\nabla_{x}L_{s}=0$ can be written
explicitly as follows:
$\sum_{j\neq i}\frac{s_{ij}}{x_{i}-x_{j}}\,\,=\,\,0\qquad{\rm
for}\,\,i=1,2,\ldots,n.$ (52)
Let $z$ be a new unknown and consider the following rational function in $z$
of degree $-2$:
$T(z)\,\,\,=\,\sum_{1\leq i<j\leq n}\frac{s_{ij}}{(z-x_{i})(z-x_{j})}.$ (53)
###### Proposition 6.2.
The rational function $T(z)$ is identically zero if and only if (52) holds.
###### Proof.
The residue of $T(z)$ at $z=x_{i}$ is precisely the left hand side of the
equation in (52). Since the $x_{i}$ are distinct, $T(z)$ has only simple poles,
and it vanishes to order two at $z=\infty$. Hence the residues at the $n$ poles
are all zero if and only if $T(z)$ is the zero function. ∎
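The residue computation can be replayed symbolically for small $n$; the following sympy sketch does this for $n=4$ (an illustration only):

```python
from itertools import combinations
import sympy as sp

z = sp.symbols('z')
x = sp.symbols('x1:5')
s = {(i, j): sp.symbols(f's{i+1}{j+1}') for i, j in combinations(range(4), 2)}

T = sum(sij / ((z - x[i]) * (z - x[j])) for (i, j), sij in s.items())
res = sp.residue(T, z, x[0])                      # residue at z = x_1
expected = sum(s[(0, j)] / (x[0] - x[j]) for j in range(1, 4))
print(sp.simplify(res - expected))                # 0
```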
The following result is known in the particle physics literature due to work
of Witten, Roiban-Spradlin-Volovich and Cachazo-He-Yuan [13]. We learned it
from the recent lectures by Thomas Lam [21, Section 4.4]. See the discussion
of “sectors” in [21, Introduction].
###### Theorem 6.3.
The lifted scattering correspondence $\widetilde{\rm C}(2,n,0)$ has $n-3$
irreducible components $\widetilde{\rm C}_{2},\widetilde{\rm
C}_{3},\ldots,\widetilde{\rm C}_{n-2}$. Each of them has the same dimension as
${\rm SH}(2,n,0)$. The irreducible component $\widetilde{\rm C}_{\ell}$
parametrizes all $3$-step flags $V\subseteq U\subseteq W^{\perp}$ where
$(V,W)$ is a point in ${\rm SH}(2,n,0)$ and $U$ is the row span of an
$\ell\times n$ Vandermonde matrix
$(x_{j}^{i})_{i=0,\ldots,\ell-1;j=1,\ldots,n}$. The map from $\widetilde{\rm
C}_{\ell}$ to ${\rm SH}(2,n,0)$ is finite-to-one: its degree is the Eulerian
number $A(n-3,\ell-2)$.
###### Remark 6.4.
The maximum likelihood degree of the moduli space $\mathcal{M}_{0,n}$ equals
$(n-3)!$. In other words, the equations (52) have $(n-3)!$ solutions, provided
$\sum_{j=1}^{n}s_{ij}=0$ for all $i$. See e.g. [28, Proposition 1]. Theorem
6.3 is a geometric realization of the combinatorial identity
$(n-3)!\,\,=\,\,A(n-3,0)+A(n-3,1)+\,\cdots\,+A(n-3,n-4).$ (54)
Note that the Eulerian numbers can be defined by $A(2,0)=A(2,1)=1$ and the
recursions
$A(n-3,\ell-2)\,\,=\,\,(\ell-1)\cdot A(n-4,\ell-2)\,+\,(n-\ell-1)\cdot
A(n-4,\ell-3)\quad{\rm for}\,\,n\geq 3.$ (55)
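The identity (54) and the recursion (55) are easy to check by machine; a tiny Python script (standard library only):

```python
# Eulerian numbers via the recursion (55), checked against (54).
from math import factorial
from functools import lru_cache

@lru_cache(maxsize=None)
def A(m, k):
    if m == 0:
        return 1 if k == 0 else 0
    if k < 0 or k >= m:
        return 0
    return (k + 1) * A(m - 1, k) + (m - k) * A(m - 1, k - 1)

for n in range(5, 11):
    assert sum(A(n - 3, l - 2) for l in range(2, n - 1)) == factorial(n - 3)
print("(54) holds for n = 5, ..., 10")
```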
###### Proof of Theorem 6.3.
A proof for the first part of the statement concerning the irreducible
components of $\widetilde{\rm C}(2,n,0)$ was given in [21, Proposition 4.6 and
Section 4.5]. The argument for the second part was outlined in [13, Section
1.1]. It is based on the degeneration technique known in physics as soft
limits. We now present details from the algebraic perspective of [1].
The variety $\widetilde{\rm C}_{\ell}$ is the image of the following map into
the lifted scattering correspondence:
$(\mathbb{C}[z]_{\leq\ell-1})^{2}\times(\mathbb{C}[z]_{\leq
n-\ell-1})^{2}\times\mathcal{M}_{0,n}\,\rightarrow\,\widetilde{\rm
C}(2,n,0)\,,\,\,(\tau,\widetilde{\tau},x)\,\mapsto\,(\lambda,\widetilde{\lambda},x).$
(56)
Here $x=(x_{1},\ldots,x_{n})\in\mathbb{C}^{n}$ represents a point in
$\mathcal{M}_{0,n}$, the column vector $\tau$ resp. $\widetilde{\tau}$
consists of two polynomials in one variable $z$ of degrees $\ell-1$ resp.
$n-\ell-1$, the $i$th column of $\lambda$ equals $\,\tau(x_{i})\,$, and the
$i$th column of $\,\widetilde{\lambda}\,$ is
$\,\widetilde{\tau}(x_{i})\prod_{j\not=i}(x_{i}-x_{j})^{-1}$. Note that the
row spaces $V$ and $W$ of these $2\times n$ matrices satisfy $V\subseteq
U\subseteq W^{\perp}$, where $U$ is the row space of $(x^{i}_{j})$.
Write $D(n,\ell)$ for the degree of the finite-to-one map $\,\widetilde{\rm
C}_{\ell}\rightarrow{\rm
SH}(2,n,0),\,(\lambda,\widetilde{\lambda},x)\mapsto(\lambda,\widetilde{\lambda})$.
In words, $D(n,\ell)$ is the number of solutions to the scattering equations
in the $\ell$th component of $\widetilde{\rm C}(2,n,0)$. Direct computations
reveal $D(4,2)=D(5,2)=D(5,3)=1$. We claim that
$D(n,\ell)\,\,=\,\,(\ell-1)\cdot D(n-1,\ell)\,+\,(n-\ell-1)\cdot
D(n-1,\ell-1)\quad{\rm for}\,\,n\geq 3.$ (57)
This claim implies Theorem 6.3, because the recursion in (57) matches the
recursion in (55).
It remains to prove (57). This is done using the technique of soft limits. We
drive the momentum of the $n$th particle to zero, by replacing the Mandelstam
coordinates $s_{in}$ with $\epsilon\cdot s_{in}=\epsilon\cdot\langle in\rangle[in]$ for $i<n$. This
degeneration is compatible with the parametrization (56). As $\epsilon$ tends
to zero, either $\tau(x_{n})=0$ or $\widetilde{\tau}(x_{n})=0$ holds in the
limit. This yields one equation in the single unknown $x_{n}$, of degree $\ell-1$
resp. $n-\ell-1$. The remaining unknowns $x_{1},\ldots,x_{n-1}$ satisfy the
scattering equations on the components of $\widetilde{\rm C}(2,n-1,0)$ that
are indexed by $\ell$ and $\ell-1$ respectively. So, for $\epsilon$ near $0$,
the size of the fibers of $\,\widetilde{\rm C}_{\ell}\rightarrow{\rm
SH}(2,n,0)\,$ is the right hand side of (57). ∎
The irreducible components $\widetilde{\rm C}_{i}$ are referred to as
“sectors” in the physics literature. Section 5.1 in [11] starts with the
sentence “In the k = 2 case it is well known that solutions of the scattering
equations split into $n-3$ sectors”. Our proof was written for mathematicians.
Theorem 6.3 was stated for the lifted scattering correspondence
$\widetilde{\rm C}(2,n,0)$. From our perspective, it is more natural to focus
on the scattering correspondence ${\rm C}(2,n,0)$ because the parameters in
$L_{s}$ are the Mandelstam invariants. The scattering correspondence ${\rm
C}(2,n,0)$ lives over the Mandelstam variety ${\rm M}(2,n,0)$, whose prime
ideal we presented in Theorem 4.5.
###### Corollary 6.5.
The scattering correspondence ${\rm C}(2,n,0)$ has $\lceil\frac{n-3}{2}\rceil$
irreducible components. The varieties ${\rm C}_{\ell}$ and ${\rm C}_{n-\ell}$
in Theorem 6.3 are identified by the map ${\rm SH}(2,n,0)\rightarrow{\rm
M}(2,n,0)$.
###### Proof.
The Hadamard product map $s$ gives rise to a map from the lifted scattering
correspondence in ${\rm SH}(2,n,0)\times\mathcal{M}_{0,n}$ onto the scattering
correspondence in ${\rm M}(2,n,0)\times\mathcal{M}_{0,n}$. This map is a
covering of degree two. The fibers consist of the solutions for $\ell$ and
for $n-\ell$. These are distinct, unless $\ell=n/2$, where the solution is
fixed under this involution. ∎
We now turn to the case $k=3,r=1$, which was discussed by Cachazo et al. in
[11, Section 5.1]. We revisit their results from an algebraic perspective, and
we report on the identification of irreducible components with the help of
HomotopyContinuation.jl [10].
The moduli space $X(3,n)$ is a quotient of the Grassmannian ${\rm Gr}(3,n)$,
by (48). Hence there are two tautological maps from ${\rm SH}(3,n,1)$ to
$X(3,n)$. These rational maps take the pair $(V,W)$ in (10) to the images of
$V$ and $W$ in $X(3,n)$ respectively. Furthermore, we consider the Veronese
map $v$ from $\mathcal{M}_{0,n}$ into $X(3,n)$ which takes $n$ points in
$\mathbb{P}^{1}$ to $n$ points on a conic in $\mathbb{P}^{2}$. Algebraically,
the map $v$ takes a $2\times n$ matrix with $i$th column $(u_{i},v_{i})$ to
the $3\times n$ matrix with $i$th column $(u_{i}^{2},u_{i}v_{i},v_{i}^{2})$.
Its Plücker coordinates satisfy $\langle ijk\rangle\,=\,\langle
ij\rangle\langle ik\rangle\langle jk\rangle$.
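This identity is a Vandermonde-type factorization, and it can be verified symbolically in a few lines (a sympy sketch with our own variable names $u_{i},v_{i}$ for the columns of the $2\times n$ matrix):

```python
import sympy as sp

u1, u2, u3, v1, v2, v3 = sp.symbols('u1 u2 u3 v1 v2 v3')
cols = [(u1, v1), (u2, v2), (u3, v3)]

M = sp.Matrix([[u**2 for u, v in cols],
               [u * v for u, v in cols],
               [v**2 for u, v in cols]])          # three points on the conic

def p(a, b):                                      # <ab> on the 2xn matrix
    (ua, va), (ub, vb) = cols[a], cols[b]
    return ua * vb - ub * va

print(sp.expand(M.det() - p(0, 1) * p(0, 2) * p(1, 2)) == 0)   # True
```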
We now compose the rational map in (34) with the two tautological maps for
$k=2$, followed by the Veronese map $v$. In this manner, we obtain two
tautological Veronese maps
${\rm SH}(3,n,1)\,\,\dashrightarrow\,\,{\rm
SH}(2,n,0)\,\,\dashrightarrow\,\,X(2,n)\,=\,\mathcal{M}_{0,n}\,\,\stackrel{{\scriptstyle
v}}{{\hookrightarrow}}\,\,X(3,n).$
Explicitly, these two rational maps are $\,(V,W)\mapsto v(V\cap W^{\perp})\,$
and $\,(V,W)\mapsto v(V^{\perp}\cap W)$. We paraphrase the construction in
[11, Section 5.1] in terms of the scattering correspondence.
###### Theorem 6.6.
The lifted scattering correspondence $\widetilde{\rm C}(3,n,1)$ contains four
irreducible components which map birationally onto the moduli space $X(3,n)$.
These are given by the two tautological maps and the two tautological Veronese
maps. In other words, for general kinematic data $(V,W)\in\mathrm{SH}(3,n,1)$,
with Mandelstam invariants $s\in{\rm M}(3,n,1)$, the configurations
$V,\,W,\,v(V\cap W^{\perp}),\,v(V^{\perp}\cap W)$ in $X(3,n)$ are solutions to
the scattering equations (49).
We illustrate this result by solving the equations (51) for a random numerical
instance.
###### Example 6.7 ($k=3,n=6$).
Let $(V,W)\in{\rm SH}(3,6,1)$ be the point with parameter matrix
${\bf x}\,\,\,=\,\,\,\small\begin{bmatrix}4&0&7&4&9&1\\\ 1&3&7&2&8&9\\\
1&7&9&8&0&5\\\ 6&6&2&2&4&2\end{bmatrix}.$
Thus $V$ is the span of the first three rows and $W$ is the kernel of the last
three rows. The Mandelstam invariants are computed by composing the map
$\phi_{3,6,1}$ with the Hadamard map:
$\small\setcounter{MaxMatrixCols}{11}\begin{matrix}s_{123}=12000&&s_{124}=6720&&s_{125}=8272&&s_{126}=-31584&&s_{134}=-37760,\\\
s_{135}=54784&&s_{136}=-35728&&s_{145}=-38080&&s_{146}=92208&&s_{156}=-30832&&\\\
s_{234}=-37920&&s_{235}=68288&&s_{236}=-108016&&s_{245}=-82896&&s_{246}=82416&&\\\
s_{256}=82720&&s_{345}=46592&&s_{346}=57664&&s_{356}=-19904&&s_{456}=-88944.\end{matrix}$
The scattering equations (51) have $26$ solutions
$(x,y,z,w)\in\mathbb{C}^{4}$. Among these, we find:
$V$ | $W$ | $v(V\cap W^{\perp})$ | $v(V^{\perp}\cap W)$
---|---|---|---
$\bigl{(}\frac{8453}{5723},\frac{6083}{9263},\frac{3713}{1358},\frac{3713}{2198}\bigr{)}$ | $\bigl{(}\frac{6}{11},\frac{87}{172},-\frac{1}{4},-\frac{42}{43}\bigr{)}$ | $\bigl{(}\frac{5}{21},\frac{5}{36},\frac{19}{27},\frac{38}{69}\bigr{)}$ | $\bigl{(}\frac{6588}{14911},\frac{20988}{9139},-\frac{8601}{125060},\frac{17437}{138528}\bigr{)}$
Substituting these $(x,y,z,w)$ into (50), we obtain four $3\times 6$ matrices
$P$ with rational entries. These matrices represent the configurations
$V,\,W,\,v(V\cap W^{\perp}),\,v(V^{\perp}\cap W)$, where $V$ is the linear
span of the first three rows of ${\bf x}$ and $W^{\perp}$ is the span of the last
three rows of ${\bf x}$.
###### Conjecture 6.8.
The lifted scattering correspondence $\widetilde{\rm C}(3,n,1)$ decomposes
into five irreducible components, i.e. there is only one component whose map
onto $X(3,n)$ is not birational. Hence, the scattering correspondence ${\rm
C}(3,n,1)$ has three irreducible components. The four birational components
become two components modulo the involution in Remark 2.4.
The passage from the first sentence to the second sentence in Conjecture 6.8
mirrors the passage from Theorem 6.3 to Corollary 6.5. We verified our
conjecture in some small cases:
###### Proposition 6.9.
Conjecture 6.8 is true for $n=6,7,8$.
###### Proof.
The proof is a computation with HomotopyContinuation.jl [10]. Recall from [1]
that the degree of the map $\widetilde{\rm
C}(3,n,1)\rightarrow\mathrm{SH}(3,n,1)$ equals $26$, $1272$, $188112$ for
$n=6,7,8$. We ran a numerical irreducible decomposition, based on monodromy
loops, on the defining equations of the lifted scattering correspondence. We
found five irreducible components over $X(3,n)$. For instance, for $n=7$, the
five components have degrees $1,1,1,1$ and $1268$. ∎
Acknowledgements: YEM is supported by Deutsche Forschungsgemeinschaft (DFG,
German Research Foundation) SFB-TRR 195 “Symbolic Tools in Mathematics and
their Application”. AP and BS are supported by the European Union (ERC,
UNIVERSE+, 101118787). We are grateful to Simon Telen for helping us with
numerical computations for Section 6.
## References
* [1] D. Agostini, T. Brysiewicz, C. Fevola, L. Kühne, B. Sturmfels and S. Telen: Likelihood degenerations, Adv. Math. 414 (2023) 108863.
* [2] F. Ardila, C. Klivans and L. Williams: The positive Bergman complex of an oriented matroid, Eur. J. Comb. 27 (2006) 577–591.
* [3] S. Badger, J. Henn, J. Plefka and S. Zoia: Scattering Amplitudes in Quantum Field Theory, Lecture Notes in Physics 1021, Springer, 2024.
* [4] B. Betti, M. Panizzut and S. Telen: Solving equations using Khovanskii bases, Journal of Symbolic Computation (to appear), arXiv:2306.14871.
* [5] C. Bocci and E. Carlini: Hadamard Products of Projective Varieties, Birkhäuser, Cham, 2024.
* [6] J. Boretsky: Positive tropical flags and the positive tropical Dressian, Sémin. Lothar. Comb. 86B (2022) Article 86.
* [7] L. Bossinger, J. Drummond and R. Glew: Adjacency for scattering amplitudes from the Gröbner fan, J. High Energy Phys. 2023, no. 11, 2.
* [8] M. Brandenburg, G. Loho and R. Sinn: Tropical positivity and determinantal varieties, Algebr. Comb. 6 (2023) 999–1040.
* [9] M. Brandt, C. Eur and L. Zhang: Tropical flag varieties, Adv. Math. 384 (2021) 107695.
* [10] P. Breiding and S. Timme: HomotopyContinuation.jl: A package for homotopy continuation in Julia, Mathematical Software – ICMS 2018, 458–465, Springer, 2018.
* [11] F. Cachazo, N. Early, A. Guevara and S. Mizera: Scattering equations: from projective spaces to tropical Grassmannians, J. High Energy Phys. (2019), no. 6, 39.
* [12] F. Cachazo and N. Early: Biadjoint scalars and associahedra from residues of generalized amplitudes, J. High Energy Phys. (2023), no. 10, 15.
* [13] F. Cachazo, S. He and E. Yuan: Scattering in three dimensions from rational maps, J. High Energy Phys. (2013) no. 141.
* [14] K. Devriendt, H. Friedman, B. Reinke and B. Sturmfels: The two lives of the Grassmannian, arXiv:2401.03684.
* [15] J. Drummond, J. Foster, Ö. Gürdoğan and C. Kalousios: Algebraic singularities of scattering amplitudes from tropical geometry, J. High Energy Phys. (2021), no. 4, 2.
* [16] H. Elvang and Y. Huang: Scattering Amplitudes in Gauge Theory and Gravity, Cambridge University Press, 2015.
* [17] C. Godsil and K. Meagher: Erdős-Ko-Rado Theorems: Algebraic Approaches, Cambridge University Press, 2016.
* [18] D. Grayson and M. Stillman: Macaulay2, a software system for research in algebraic geometry, available at https://macaulay2.com.
* [19] J. Huh and B. Sturmfels: Likelihood geometry, in Combinatorial Algebraic Geometry (eds. Aldo Conca et al.), Lecture Notes in Mathematics 2108, Springer Verlag, (2014) 63–117.
* [20] T. Lam: An invitation to positive geometries, arXiv:2208.05407.
* [21] T. Lam: Moduli spaces in positive geometry, arXiv:2405.17332.
* [22] D. Maclagan and B. Sturmfels: Introduction to Tropical Geometry, Graduate Studies in Mathematics, Vol 161, American Mathematical Society, 2015.
* [23] M. Michałek and B. Sturmfels: Invitation to Nonlinear Algebra, Graduate Studies in Mathematics, Vol 211, American Mathematical Society, 2021.
* [24] E. Miller and B. Sturmfels: Combinatorial Commutative Algebra, Graduate Texts in Mathematics, Vol 227, Springer-Verlag, New York, 2005.
* [25] J. Olarte: Positivity for partial tropical flag varieties, arXiv:2302.10171.
* [26] D. Speyer and L. Williams: The tropical totally positive Grassmannians, J. Algebr. Comb. 22 (2005) 189–210.
* [27] B. Sturmfels: Algorithms in Invariant Theory, Springer-Verlag, Vienna, 1993.
* [28] B. Sturmfels and S. Telen: Likelihood equations and scattering amplitudes, Algebraic Statistics 12 (2021) 167–186.
* [29] J. Yu and D. Yuster: Representing tropical linear spaces by circuits, Formal Power Series and Algebraic Combinatorics (FPSAC 2007), arXiv:math/0611579.
Authors’ addresses:
Yassine El Maazouz, RWTH Aachen and MPI MiS Leipzig yassine.el-<EMAIL_ADDRESS>
Anaëlle Pfister, MPI MiS Leipzig<EMAIL_ADDRESS>
Bernd Sturmfels, MPI MiS Leipzig<EMAIL_ADDRESS>
# On continuum modeling of cell aggregation phenomena
Soheil Firooz <EMAIL_ADDRESS>, Stefan Kaessmair, Vasily Zaburdaev, Ali Javili, Paul Steinmann

Institute of Applied Mechanics, University of Erlangen-Nuremberg, Egerland Str. 5, 91058 Erlangen, Germany; Siemens Industry Software GmbH, Nordostpark 3, 90411 Nuremberg, Germany; Department of Biology, University of Erlangen-Nuremberg, 91058 Erlangen, Germany; Max Planck Zentrum für Physik und Medizin, 91058 Erlangen, Germany; Department of Mechanical Engineering, Bilkent University, 06800 Ankara, Turkey; Glasgow Computational Engineering Center, James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, United Kingdom
###### Abstract
Cellular aggregates play a significant role in the evolution of biological
systems such as tumor growth, tissue spreading, wound healing, and biofilm
formation. Analysis of such biological systems, in principle, includes
examining the interplay of cell-cell interactions together with the cell-
matrix interaction. These two interaction types mainly drive the dynamics of
cellular aggregates which is intrinsically out of equilibrium. Here we propose
a non-linear continuum mechanics formulation and the corresponding finite
element simulation framework to model the physics of cellular aggregate
formation. As an example, we focus in particular on the process of bacterial
colony formation as recently studied by Kuan et al. [1]. Thereby we describe
the aggregation process as an active phase separation phenomenon. We develop a
Lagrangian continuum description of the problem which yields a substantial
simplification to the formulations of the governing equations. Due to the
presence of spatial Hessian and Laplacian operators, a gradient-enhanced
approach is required to incorporate $\mathcal{C}^{1}$ continuity. In addition,
a robust and efficient finite element formulation of the problem is provided.
Taylor–Hood finite elements are utilized for the implementation to avoid
instabilities related to the LBB condition. Finally, through a set of
numerical examples, the influence of various parameters on the dynamics of the
cellular aggregate formation is investigated. Our proposed methodology
furnishes a general framework for the investigation of the rheology and non-
equilibrium dynamics of cellular aggregates.
###### keywords:
Cellular aggregates, Active phase separation, Continuum model, Eulerian
approach, Lagrangian approach
## 1 Introduction
Within the human body, most cells interact with their neighboring cells and
with their extracellular matrix to establish a unique organization. These
cell-cell and cell-matrix interactions form a complex network of mechanical,
biological and chemical signals which play a significant role in cell
physiology [2, 3, 4, 5]. The study of cellular interactions provides a
significant insight towards a better understanding of many biological
processes such as tumor growth [6, 7], tissue spreading [8, 9, 10, 11],
biofilm formation [12, 13, 14] and wound healing [15, 16]. Although there
exist numerous contributions on experimental methods to examine cellular
interactions and their final outcomes [17, 18, 19, 20, 21], there are still
certain details and information that remain hard to assess. Examples of such
details include the exact relation between the interactions and individual
cell’s adhesion properties, or the speed at which sorting between the cells
occurs. Clearly, determining the relationships between all the problem
variables via experiments is an arduous and time-consuming task.
Theoretical approaches, however, provide a valuable alternative as they do not
suffer from such limitations and facilitate parametric studies. Mathematical
modeling allows us to quantify parameters such as local speeds, fluxes and
their rates of change in a natural way. They also enable us to investigate the
effects of interactions between cells of different types or to allow the cell
properties to vary independently of one another. Given these benefits,
mathematical modeling proves to be an inclusive and efficient alternative for
analyses of cellular interactions [22].
There exist two major approaches for mathematical description of cellular
interactions: _agent-based_ and _continuum models_. In agent-based approaches,
the cells are modeled and treated individually using a set of biophysical
rules [23, 24, 25, 26]. Each cell is approximated, for example, as a
homogeneous, isotropic, elastic, spherical object parameterized by measurable
biophysical and biological quantities. The agent-based approach is
particularly useful when one wants to study the interaction of individual
cells with each other and with their environment [27, 28, 29]. Since this
method is based on a series of rules for each cell, translating biological
processes into a model is straightforward [30]. For small-scale studies or
cases in which the properties of the cells vary over distances comparable to
the size of a cell, a higher degree of spatial resolution is obtained via
agent-based models in comparison to continuum models. Despite all the
precision that the agent-based method offers, this approach is difficult to
study analytically and its computational cost greatly increases as the number
of cells increases. For instance, simulation of a tumor growth process
requires systems which evolve from a single progenitor cell to $10^{6}$ cells
_in vitro_ and $10^{11}$ cells _in vivo_. Carrying out a computational
analysis on such large cell population sizes is a cumbersome task, if possible
at all [31]. Additionally, it is often neither desirable nor necessary to
track each individual cell within a very large population. For larger-scale
applications, continuum modeling proves to be a more viable alternative.
This approach is well suited to describe large scale phenomena where the cell
properties vary smoothly over a length scale of several cell diameters and
therefore the cell properties can be approximated by a local average.
Continuum models frequently involve ordinary and partial differential
equations which are usually in the reaction-diffusion form [32, 33, 34, 35,
36]. Many aspects of tumor and tissue growth have been studied using continuum
models [37, 38, 31]. A continuum description of cell motility due to cell-cell
and cell-matrix interaction was presented in [22, 39, 40]. These works introduced a
non-local interaction term to account for adhesion between the cells and
between the cells and matrix. Coarse grained continuum approaches such as
hydrodynamic theories, have also provided a powerful tool to capture large
scale emergent behaviors in active cellular systems [41, 42, 43, 44, 45].
Further studies on continuum modeling of cellular aggregates are available in
[46, 47, 48]. We refer to [49] for a thorough comparison between agent-based
and continuum modeling of cellular aggregates.
A prototypical biological example of cellular aggregation is the formation of
bacterial microcolonies and biofilms. One of the first steps in the process of
bacterial colonization of biotic and abiotic surfaces is the formation of
aggregates or colonies consisting of several thousands of cells. Usually,
these microcolonies later evolve into much more complex bacterial communities,
known as biofilms [50, 51]. Bacterial infections involving biofilms are far
more resistant to anti-microbial treatments in many cases [52]. Thus,
investigation of the mechanism of bacterial microcolony formation is of
immense significance in the fields of medicine and engineering. A few well-
known examples of bacterial microcolonies causing dangerous microbial
infections are _Pseudomonas aeruginosa_ [53], _Neisseria meningitidis_ [54],
_Vibrio cholerae_ [55] and _Neisseria gonorrhoeae_ [56].
In this manuscript, we focus on microcolonies of _Neisseria gonorrhoeae (NG)_
bacteria. These bacterial microcolonies are the infectious units which form on
human epithelial tissue and cause gonorrhoea, the second most common sexually
transmitted disease [57]. Multi-scale computational simulations have been
conducted recently to study biophysical aspects of NG microcolonies [58, 56,
59]. The NG bacteria, as well as many other bacteria species, use multiple
long and thin retractable filaments, called type IV pili, in order to interact
with the environment and with each other [60]. A series of studies have been
carried out to investigate the twitching motility of bacteria mediated by type
IV pili [61, 62, 63, 64, 65]. Pili can extend from the cell body, attach to
the substrate and retract. Pilus retraction generates forces which are then
translated into movement of cells. The magnitude of the forces generated by
the pilus retraction is in the range of $100-180\,\text{pN}$, which is
considered one of the strongest active molecular forces known in nature
[66, 56]. Additionally, pili of one cell could also extend and attach to pili
of other cells. Retraction of the attached pili network attracts the cells
towards each other and leads to formation of an aggregate. These cycles of
growth, attachment, detachment and retraction drive the cell motility on
substrates and the aggregate formation process [67]. Pili mediated cell-cell
and cell-matrix interactions are crucial for the formation and maintenance of
microcolonies [68, 19, 21, 69].
The main objective of this contribution is to formulate and simulate the
process of cell aggregation phenomena within a nonlinear continuum mechanics
framework. We develop our framework in a Lagrangian setting which yields
considerable simplification of the equations and enables implicit time
integration which considerably increases the computational robustness. In
doing so, we take a prototypical example of NG bacteria and we describe the
process of colony formation as an active phase separation phenomenon. Our work
is mainly based on the coarse grained approach previously developed by Kuan et
al. [1, 70]. While we focus on intercellular interactions, our aim is to
provide a robust and efficient computational setting for generic cell-matrix
interaction problems, and to develop its fully nonlinear finite element
implementation. Our proposed framework provides a versatile and reliable
simulation technique that allows studying the processes of aggregate formation
under high forces and strong phase-separated regimes nearing much closer to
the physiologically relevant conditions.
Table 1: Summary of key definitions and notations.

$\\{\bullet\\}$ | an arbitrary quantity | $\dot{\\{\bullet\\}}$ | material time derivative of $\\{\bullet\\}$
---|---|---|---
$\nabla_{\boldsymbol{X}}\\{\bullet\\}$ | material gradient of $\\{\bullet\\}$ | $\nabla_{\boldsymbol{x}}\\{\bullet\\}$ | spatial gradient of $\\{\bullet\\}$
$\nabla_{\boldsymbol{X}}\cdot\\{\bullet\\}$ | material divergence of $\\{\bullet\\}$ | $\nabla_{\boldsymbol{x}}\cdot\\{\bullet\\}$ | spatial divergence of $\\{\bullet\\}$
$\Delta_{\boldsymbol{x}}\\{\bullet\\}$ | spatial Laplacian of $\\{\bullet\\}$ | $\nabla^{2}_{\boldsymbol{x}}\\{\bullet\\}$ | spatial Hessian of $\\{\bullet\\}$
$\mathcal{L}_{t}\\{\bullet\\}$ | Lie time derivative of $\\{\bullet\\}$ | $\delta\\{\bullet\\}$ | variation of $\\{\bullet\\}$
$R$ | cell radius | $E$ | cell elastic modulus
$f^{\text{p}}$ | pili-pili attractive force | $f^{\text{s}}$ | steric repulsive force
$n^{\text{p}}$ | number of bound pili pairs | $\xi$ | cell-substrate friction coefficient
$k_{\text{on}}$ | pili binding rate | $k_{\text{off}}$ | pili unbinding rate
$l$ | pili length | $\ell_{0}$ | pili average length
$\boldsymbol{l}_{ij}$ | distance vector between cells $i$ and $j$ | $\boldsymbol{l}$ | spatial velocity gradient
$\boldsymbol{X}$ | material position vector | $\boldsymbol{x}$ | spatial position vector
$c_{0}$ | material cell number density | $c_{t}$ | spatial cell number density
$p_{0}$ | material bound pili number density | $p_{t}$ | spatial bound pili number density
$\boldsymbol{g}$ | cell number density gradient | $\boldsymbol{v}$ | cell velocity
$\boldsymbol{y}$ | non-linear deformation map | $\boldsymbol{F}$ | deformation gradient
$J$ | Jacobian of the deformation gradient | $\boldsymbol{K}$ | cofactor of the deformation gradient
$\delta$ | Kronecker delta | $\Psi_{\text{tot}}$ | total internal energy
$\boldsymbol{I}$ | material second-order identity tensor | $\boldsymbol{i}$ | spatial second-order identity tensor
$\mathcal{B}_{0}$ | material configuration | $\partial\mathcal{B}_{0}$ | boundary of the material configuration
$\boldsymbol{t}^{\text{a}}$ | active traction on material configuration | $\boldsymbol{t}^{\text{p}}$ | passive traction on material configuration
$\delta\boldsymbol{y}$ | linear momentum balance test function | $\delta c$ | cell number density conservation test function
$\delta\boldsymbol{g}$ | cell density gradient continuity test function | $\delta p$ | bound pili number density evolution test function
$\boldsymbol{E}$ | Green–Lagrange strain tensor | $\boldsymbol{B}$ | Piola deformation tensor
$\boldsymbol{S}^{\text{a}}$ | active Piola–Kirchhoff stress | $\boldsymbol{\tau}^{\text{a}}$ | active Kirchhoff stress
$\boldsymbol{P}^{\text{a}}$ | active Piola stress | $\boldsymbol{\sigma}^{\text{a}}$ | active Cauchy stress
$\boldsymbol{P}^{\text{p}}$ | passive Piola stress | $\boldsymbol{\sigma}^{\text{p}}$ | passive Cauchy stress
$\boldsymbol{S}^{\text{f}}$ | pili formation Piola–Kirchhoff stress | $\boldsymbol{\sigma}^{\text{f}}$ | pili formation Cauchy stress
$\overline{\boldsymbol{P}}$ | cell density gradient continuity Piola stresses | $\boldsymbol{N}$ | material unit normal to the boundary
R | assembled residual vector | $\boldsymbol{U}$ | global vector of unknowns
K | assembled tangent stiffness | $\\#e$ | number of elements
$N$ | shape function of the finite elements | $\text{I}^{\text{sym}}$ | symmetric fourth-order identity tensor
$\lambda$ | penalty parameter | $\Delta t$ | time step
The remainder of this manuscript is organized as follows. Table 1 gathers the
key definitions and notations of the paper. Section 2 introduces the problem
definition and presents the governing equations. Finite element implementation
of the problem is elaborated in Section 3. Our proposed theory is illustrated
through a set of numerical examples in Section 4. Finally, Section 5 concludes
the work and provides further outlooks.
## 2 Governing equations
This section elaborates on the governing equations. First the problem of cell
aggregation is defined and all the parameters and their roles are introduced.
Afterwards, the continuum approach within the Lagrangian settings is detailed.
Note that for simplicity and readability, our formulations here are developed
for a two-dimensional case, but there are no conceptual limitations to
generalize it to three dimensions.
### 2.1 Problem definition
Figure 1: Left: An image of an N. gonorrhoeae bacterium and its pili obtained by transmission electron microscopy in [1]. Right: Simplification of the cell geometry to a circular shape for our analysis.
Figure 2: A sketch of
N. gonorrhoeae bacteria interacting by their pili. The forces acting on each
bacterium are illustrated. Each bacterium $i$ is distinguished by its position
vector $\boldsymbol{r}_{i}$. The inter-bacterium forces are steric repulsion
forces between two attached bacteria which are shown in red, and the pili-pili
mediated attractive forces between two bacteria that have formed a pili
network are shown in blue. These two forces act in the same direction of the
line connecting the centers of two adjacent bacteria. Bound pili are depicted
by blue dotted lines whereas free pili are depicted by solid green lines. Note
that there could be multiple pili pairs pulling two neighboring cells
together, as counted by $n_{is}^{\text{p}}$.
Figure 1 (left) shows a transmission electron microscopy image of a single NG
bacterium together with its pili. In this contribution, we approximate the
cell geometry with a circular disk with the radius $R$ for the sake of
simplicity, as shown in Fig. 1 (right). Each NG bacterium is surrounded by
approximately $10-20$ pili which are isotropically distributed around the cell
[67, 64]. The length of each individual pilus $l$ was shown to be
exponentially distributed with the average value of $\langle
l\rangle=\ell_{0}$. Measurements report the average length $\ell_{0}$ to be
around $1-2\,\mu\text{m}$ with cell radius $R$ being around $1\,\mu\text{m}$
[62].
Figure 2 depicts a group of cells interacting via their pili. In order to move
on a substrate, cells use their pili. Pili can grow, attach to the substrate
and retract, which produces the force required for the displacement. Pili-
substrate interactions play a significant role in determining the cell
motility which is essential for understanding the kinetics of aggregation.
Here, since we are mainly interested in examining the behavior of aggregates,
only a substrate friction is considered to represent the pili-substrate
interactions. Further details on pili-substrate interactions are available in
[71]. The growth and retraction of the pili occurs through the process of
polymerization and depolymerization which is powered by specific motor protein
complexes [72, 73, 74]. Apart from attaching to the substrate, pili can also
attach to pili of other cells. Retraction of an attached pili pair generates
an attractive force $f^{\text{p}}$ which pulls the cells towards each other.
In Fig. 2, bound pili are depicted by blue lines with a dot on them, whereas
free pili are shown by solid green lines. The number of the pili pairs that
have formed between two cells is denoted as $n^{\text{p}}$. Accordingly, as
illustrated for the cells $i$ and $s$, the overall attractive force between
two cells which have formed a network of bound pili is
$f^{\text{p}}n^{\text{p}}_{is}$. Assuming no bound pairs at the initial time,
the number of bound pili between the cells $i$ and $j$ (that do not move
relative to each other) can be obtained via the relation
$\displaystyle\frac{\mbox{d}n_{ij}^{\text{p}}}{\mbox{d}t}=\displaystyle\frac{k_{\text{on}}\,e^{-l_{ij}/\ell_{0}}}{2\pi\ell_{0}^{2}}-k_{\text{off}}n_{ij}^{\text{p}}\qquad\Longrightarrow\qquad
n_{ij}^{\text{p}}=\displaystyle\frac{k_{\text{on}}\,e^{-l_{ij}/\ell_{0}}}{2\pi\ell_{0}^{2}k_{\text{off}}}\left[1-e^{-k_{\text{off}}\,t}\right]\,,$
(1)
with $k_{\text{on}}$ being the pili binding rate and $k_{\text{off}}$ being
the pili unbinding/detachment rate. The distance between the cells $i$ and $j$
is $l_{ij}$ which is the magnitude of the vector pointing from cell $j$ to
cell $i$ as
$l_{ij}=||\boldsymbol{l}_{ij}||=||\boldsymbol{r}_{i}-\boldsymbol{r}_{j}||$.
This relation, obtained by Kuan et al. [70, 1], is a mean-field
approximation that ignores the discreteness of the pili number. The factor
$\displaystyle\frac{k_{\text{on}}\,e^{-l_{ij}/\ell_{0}}}{2\pi\ell_{0}^{2}}$
stems from the assumption of an exponential distribution of the pili length
and integration over all possible binding points along the line connecting two
bacteria. Figure 3 renders the variation of bound pili pairs between the cells
$i$ and $j$ with respect to their distance $l_{ij}$ and time for three
different values of binding rate $k_{\text{on}}$ (two different views are
given for better illustration). The average pili length is set to
$\ell_{0}=1.0\,\mu\text{m}$. It is confirmed that as time evolves, the number
of bound pili pairs reaches a steady state. Larger distance between the cells
yields a decreased number of bound pili pairs. This is justifiable since the
further the cells are from each other, the less chance their pili have to
attach to each other due to their limited length. Evidently, larger binding
rates also result in more bound pili pairs.
Figure 3: Variation of the bound pili pairs $n_{ij}^{\text{p}}$ between the
cells $i$ and $j$ with respect to their distance $l_{ij}$ and time $t$
according to relation (1). Three different values $0.1$, $0.2$ and $0.3$ for
the pili binding rate are considered. Two different views are provided for
better illustration. Pili average length is set to $\ell_{0}=1.0\,\mu\text{m}$
and the unbinding rate is set to $k_{\text{off}}=0.005\,\text{s}^{-1}$.
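For concreteness, relation (1) is easy to evaluate numerically. A minimal Python sketch (the function name is ours; parameter values are chosen to match the setting of Fig. 3):

```python
import numpy as np

def bound_pili_pairs(l_ij, t, k_on, k_off=0.005, ell_0=1.0):
    """Number of bound pili pairs between two static cells, Eq. (1)."""
    steady = k_on * np.exp(-l_ij / ell_0) / (2.0 * np.pi * ell_0**2 * k_off)
    return steady * (1.0 - np.exp(-k_off * t))

for k_on in (0.1, 0.2, 0.3):
    for l_ij in (1.0, 2.0, 4.0):
        n_p = bound_pili_pairs(l_ij, t=1000.0, k_on=k_on)
        print(f"k_on={k_on:.1f}, l_ij={l_ij:.1f} um -> n_p ~ {n_p:.3f}")
# Larger k_on and smaller l_ij give more bound pairs; n_p saturates as t grows.
```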
Another force that plays a role in the cell aggregate formation dynamics is
the steric repulsive force $f^{\text{s}}_{ij}$ which occurs between the cells
$i$ and $j$ that are in direct contact. In the model, having this repulsive
force is necessary to prevent interpenetration of the cells. In summary, three
major forces determine the dynamics of the cells in the network: cell-
substrate friction, pili-pili mediated attractive force and steric repulsive
force. The forces arising from the contraction of the constantly remodeling
pili network attract the cells together and in the presence of the excluded
volume interactions, a densely packed colony is formed which behaves as an
active visco-elastic material [1].
Since the pili-pili interactions are driven by active retractions and growth
of pili, the system is inherently out of equilibrium. The force balance
equation for an individual cell in the aggregate can be obtained as
$f^{\text{p}}\sum_{\begin{subarray}{c}j=1\\\ j\neq
i\end{subarray}}n_{ij}^{\text{p}}\hat{\boldsymbol{l}}_{ji}+\sum_{\begin{subarray}{c}j=1\\\
j\neq
i\end{subarray}}f^{\text{s}}_{ij}\hat{\boldsymbol{l}}_{ij}-\xi\boldsymbol{v}_{i}=0\,,$
(2)
with $\boldsymbol{v}_{i}$ being the velocity of cell $i$ and $\xi$ being the
friction coefficient between the cell and the substrate. Note the first
summation is over the neighboring cells with which the cell has formed pili
pairs and the second summation is over the neighboring cells with which the
cell is in direct contact. The unit vector
$\hat{\boldsymbol{l}}_{ij}=\boldsymbol{l}_{ij}/l_{ij}$ shows the direction of
the pili-pili mediated force acting on cell $i$ by cell $j$ which is in the
same direction of the steric repulsive force between these two cells. The
three terms in Eq. (2) represent pili-pili mediated forces, excluded volume
interactions and cell-substrate friction, respectively. The summations in Eq.
(2) imply that the equation is solved for each cell individually. While this
approach offers more accuracy together with a higher degree of spatial
resolution, it suffers from computational costs as the number of cells
increases. Moreover, if one is interested in examining the collective/overall
behavior of the aggregate, tracking each individual cell is neither desirable
nor necessary. In this section, we propose a continuum model to describe the
dynamics of dense bacterial colonies. In our methodology, the system is
characterized by two key features which distinguish it from previous models of
active systems. First, at time scales smaller than the pili detachment time,
the bound pili network endows aggregates with elastic-like material properties
for which a Lagrangian approach might be more suitable. At larger time scales,
pili can rearrange which allows for stress relaxation resulting in a fluid-
like behavior for which an Eulerian approach is the common choice. Second, the
attractive pili-pili mediated force dipoles are balanced by steric repulsion
forces which allow for considering dense cellular aggregates. These two key
features are captured by the proposed continuum equations.
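As a reference point for the continuum model, the following minimal sketch evaluates the overdamped velocity of a single cell from the force balance (2) (all names and numerical values are ours, for illustration only):

```python
import numpy as np

def cell_velocity(r_i, neighbors, f_p, xi):
    """Overdamped velocity of cell i from Eq. (2): v_i = (sum of forces) / xi.
    neighbors: list of (r_j, n_p_ij, f_s_ij) tuples (position, bound pairs, steric force)."""
    force = np.zeros(2)
    for r_j, n_p, f_s in neighbors:
        l_ij = r_i - r_j                     # vector from cell j to cell i
        l_hat = l_ij / np.linalg.norm(l_ij)
        force += -f_p * n_p * l_hat          # pili attraction pulls i toward j (along l_hat_ji)
        force += f_s * l_hat                 # steric repulsion pushes i away from j
    return force / xi

v = cell_velocity(np.array([0.0, 0.0]),
                  neighbors=[(np.array([1.5, 0.0]), 2.0, 0.0),
                             (np.array([-0.9, 0.0]), 0.0, 0.3)],
                  f_p=1.0, xi=1.0)
print(v)  # net drift: pili attraction toward the right neighbor plus steric push from the left one
```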
### 2.2 Eulerian approach
In the Eulerian approach, the cell aggregate is treated as a compressible
fluid-like material. Therefore, our domain would be a fixed control window and
we observe the changes that the aggregate goes through over this window. This
section briefly details on the governing equations of the problem within an
Eulerian approach that were previously derived by coarse-graining microscopic
equations in [1].
#### 2.2.1 Balance equations
The first balance equation is the overall cell number conservation equation
which reads
$\displaystyle\frac{\partial c_{t}}{\partial
t}+\nabla_{\\!\boldsymbol{x}}\\!\cdot(c_{t}\boldsymbol{v})=0\,,$ (3)
where $c_{t}$ is the spatial cell _number_ density and $\boldsymbol{v}$ is the
cell velocity. Note, the subscript $\boldsymbol{x}$ indicates that
spatial derivatives are taken with respect to the coordinates in the current
configuration. The two variables $c_{t}$ and $\boldsymbol{v}$ are the unknowns
of our problem. The second balance equation is the linear momentum balance
which arises from the microscopic balance of forces in Eq. (2) and reads
$\nabla_{\\!\boldsymbol{x}}\\!\cdot\boldsymbol{\sigma}^{\text{a}}+\nabla_{\\!\boldsymbol{x}}\\!\cdot\boldsymbol{\sigma}^{\text{p}}-\xi
c_{t}\boldsymbol{v}=\boldsymbol{0}\,,$ (4)
with $\boldsymbol{\sigma}^{\text{a}}$ and $\boldsymbol{\sigma}^{\text{p}}$
being the active and passive Cauchy stresses, respectively. The third balance
equation is the bound pili number density evolution equation which reads
$\displaystyle\frac{\partial p_{t}}{\partial
t}+\nabla_{\\!\boldsymbol{x}}\\!\cdot\left(p_{t}\boldsymbol{v}\right)-k_{\text{on}}\left[c_{t}^{2}+\displaystyle\frac{3\ell_{0}^{2}}{4}c_{t}\,\left[\Delta_{\boldsymbol{x}}c_{t}\right]-\displaystyle\frac{3\ell_{0}^{2}}{4}\big{|}\nabla_{\\!\boldsymbol{x}}c_{t}\big{|}^{2}\right]+k_{\text{off}}p_{t}=0\,,$
(5)
with $p_{t}$ the spatial bound pili number density. This equation incorporates
the pili turnover dynamics from Eq. (1). Note, in our continuum formulation we
denote the spatial pili number density as $p_{t}$ which is a coarse-grained
version of $n_{ij}^{\text{p}}$. The spatial Laplacian operator
$\Delta_{\boldsymbol{x}}$ is defined by the inner product
$\Delta_{\boldsymbol{x}}\\{\bullet\\}\\!:=\nabla_{\boldsymbol{x}}\cdot\left(\nabla_{\boldsymbol{x}}\\{\bullet\\}\right)$.
Equation (5) clearly states that the overall bound pili number is not
conserved since the pili network constantly changes due to the binding and
unbinding pairs.
#### 2.2.2 Stress definitions
The next step is to state the definitions for the active and passive stresses.
For the passive stress, we have a straightforward explicit expression which
reads
$\boldsymbol{\sigma}^{\text{p}}=-\displaystyle\frac{E\pi R^{2}c_{t}}{1-\pi
R^{2}c_{t}}\boldsymbol{i}\,,$ (6)
with $\boldsymbol{i}$ being the spatial second-order identity tensor, $E$
being the bulk modulus of the cell and $R$ being the cell radius. The
expression for the passive Cauchy stress is similar to a _hydrostatic
pressure-like stress_. This expression is one of the simplest forms to
describe the excluded volume interactions. It resembles the van der Waals gas
law for pressure in the absence of attractive interactions. As a result, in
the absence of the active stress, the only forces acting on cells would be
steric repulsive forces and cell-substrate friction, thus while moving on the
substrate, the cells tend to repel each other until they reach a state of
equilibrium. As it will be elucidated in the numerical examples, this gives
rise to formation of a uniform distribution of cells throughout the domain.
Figure 4 depicts the behavior of the passive stress with respect to the cell
number density. The $x$-axis represents the dimensionless parameter
$c_{t}R^{2}$ with the left $y$-axis representing $1-\pi R^{2}c_{t}$ and the
right $y$-axis representing the pressure term in the passive stress. At
$c_{t}R^{2}=1/\pi$, the stress renders an asymptotic behavior since the
denominator tends to zero. As a result, this value represents the upper bound
for the cell number density.
Figure 4: Illustration of the behavior of passive stress with respect to cell
number density. The denominator of the stress becomes zero at the value
$\text{max}(c_{t}R^{2})=1/\pi$ hence, the asymptotic behavior. This value
represents the upper bound for the cell number density.
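The divergence at $c_{t}R^{2}=1/\pi$ can be probed directly. A minimal sketch (the values of $E$ and $R$ are purely illustrative):

```python
import numpy as np

E, R = 1.0, 1.0  # illustrative elastic modulus and cell radius

def passive_pressure(c_t):
    """Magnitude of the passive stress in Eq. (6): E*pi*R^2*c_t / (1 - pi*R^2*c_t)."""
    x = np.pi * R**2 * c_t
    return E * x / (1.0 - x)

for frac in (0.5, 0.9, 0.99, 0.999):
    c_t = frac / (np.pi * R**2)   # approach the upper bound c_t R^2 = 1/pi
    print(f"c_t R^2 = {frac / np.pi:.4f} -> pressure = {passive_pressure(c_t):.1f}")
# The pressure diverges as c_t R^2 -> 1/pi, i.e. dense packing is increasingly penalized.
```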
The definition of the active Cauchy stress is given as coarse-grained average
of the microscopic expression (for details see [1]). For us, its time
evolution equation is of interest which reads
$\displaystyle\frac{\partial\boldsymbol{\sigma}^{\text{a}}}{\partial
t}+\nabla_{\\!\boldsymbol{x}}\\!\cdot(\boldsymbol{\sigma}^{\text{a}}\otimes\boldsymbol{v})-2\left[\boldsymbol{l}\cdot\boldsymbol{\sigma}^{\text{a}}\right]^{\text{sym}}=-\frac{1}{\ell_{0}p_{t}f^{\text{p}}}\left[\boldsymbol{\sigma}^{\text{a}}\otimes\boldsymbol{\sigma}^{\text{a}}\right]:\boldsymbol{l}^{\text{sym}}+\boldsymbol{\sigma}^{\text{f}}-k_{\text{off}}\,\boldsymbol{\sigma}^{\text{a}}\,,$
(7)
with $\boldsymbol{l}$ being the spatial velocity gradient
$\boldsymbol{l}=\mbox{grad}\boldsymbol{v}$ and
$\boldsymbol{\sigma}^{\text{f}}$ being the pili formation Cauchy stress tensor
which will be defined below. The active Cauchy stress results from the pili-
pili mediated attractive forces between the cells which pull the cells
together and lead to the formation of an aggregate. For the pili formation
Cauchy stress an explicit formulation is achievable. Due to the intrinsic
nature of pili-pili interactions as force dipoles, this stress tensor has a
nematic symmetry and reads
$\boldsymbol{\sigma}^{\text{f}}=\displaystyle\frac{f^{\text{p}}k_{\text{on}}\ell_{0}}{2}\left[c_{t}^{2}\boldsymbol{i}-\displaystyle\frac{3\ell_{0}^{2}}{4}\left[2\left[\nabla_{\boldsymbol{x}}c_{t}\otimes\nabla_{\boldsymbol{x}}c_{t}\right]+\big{|}\nabla_{\boldsymbol{x}}c_{t}\big{|}^{2}\boldsymbol{i}\right]+\displaystyle\frac{3\ell_{0}^{2}}{4}\left[c_{t}\left[\Delta_{\boldsymbol{x}}c_{t}\right]\boldsymbol{i}+2c_{t}\nabla^{2}_{\boldsymbol{x}}c_{t}\right]\right]\,,$
(8)
where $\nabla^{2}_{\boldsymbol{x}}$ denotes the spatial Hessian operator
defined by
$\nabla^{2}_{\boldsymbol{x}}\\{\bullet\\}:=\nabla_{\boldsymbol{x}}\left(\nabla_{\boldsymbol{x}}\\{\bullet\\}\right)$.
### 2.3 Lagrangian approach
This section elaborates the pertinent equations within a Lagrangian framework,
hence, all the equations are written in the material configuration. In the
Lagrangian approach, we follow the trajectory of the cells as time elapses.
Key geometric notions from nonlinear continuum kinematics are thereby the
integration of the cell velocity into the cell nonlinear deformation map
$\boldsymbol{x}=\boldsymbol{y}(\boldsymbol{X},t)$ with $\boldsymbol{x}$ and
$\boldsymbol{X}$ denoting the Eulerian and the Lagrangian coordinates. Based
on the notion of the cell deformation map $\boldsymbol{y}$, we introduce the
corresponding material deformation gradient
$\boldsymbol{F}:=\nabla_{\boldsymbol{X}}\boldsymbol{y}$, its Jacobian
(determinant) $J:=\det\boldsymbol{F}$ and its cofactor
$\boldsymbol{K}:=J\boldsymbol{F}^{-T}$. As will be elucidated, the main
advantage of the Lagrangian approach is that it yields a significant
simplification for the equations which facilitates finite element
implementation of the problem. In addition, employing the Lagrangian approach
enables utilization of implicit time integration schemes which is
computationally more robust compared to explicit time integration and less
prone to instability issues. A significant step towards the Lagrangian
formulation of the problem is parametrization of all fields in Lagrangian
coordinates $\boldsymbol{X}$. Especially the spatial cell number density is
parameterized as $c_{t}=c_{t}(\boldsymbol{X},t)$. This critical step allows us
to adopt $\mathcal{C}^{0}$ continuous or discontinuous schemes and to avoid
complications regarding the implementation of $\mathcal{C}^{1}$ continuous
elements.
#### 2.3.1 Balance equations
Similar to the previous section, we start with the overall cell number
conservation equation. Using the material time derivative
$D\\{\bullet\\}/Dt=\dot{\\{\bullet\\}}=\partial\\{\bullet\\}/\partial
t+\nabla_{\boldsymbol{x}}\\{\bullet\\}\cdot\boldsymbol{v}$ and Eq. (3), the
Lagrangian form of the overall cell number conservation equation can be
written as
$\displaystyle\frac{D\left(Jc_{t}\right)}{Dt}=\dot{\overline{Jc_{t}}}=\dot{J}c_{t}+\dot{c_{t}}J=0\,.$
(9)
The second balance equation is the linear momentum balance. To define the
Lagrangian version of the linear momentum balance equation, the velocity field
is replaced by the time derivative of the deformation map as
$\boldsymbol{v}=\dot{\boldsymbol{y}}$. Using the relation
$\nabla_{\boldsymbol{X}}\cdot\boldsymbol{P}=J\nabla_{\boldsymbol{x}}\cdot\boldsymbol{\sigma}$
the material linear momentum balance reads
$\nabla_{\boldsymbol{X}}\cdot\boldsymbol{P}^{\text{a}}+\nabla_{\boldsymbol{X}}\cdot\boldsymbol{P}^{\text{p}}-\xi
Jc_{t}\dot{\boldsymbol{y}}=\boldsymbol{0}\,,$ (10)
with $\boldsymbol{P}^{\text{a}}$ and $\boldsymbol{P}^{\text{p}}$ being the
active and passive Piola stresses, respectively. The next step is to define
the Lagrangian version of the bound pili evolution equation. Using the
relations $p_{0}=Jp_{t}$ and
$\dot{\\{\bullet\\}}=\partial\\{\bullet\\}/\partial
t+\nabla_{\boldsymbol{x}}\\{\bullet\\}\cdot\boldsymbol{v}$, the bound pili
evolution equation can be written in terms of the material pili density as
$\dot{p_{0}}-Jk_{\text{on}}\left[c_{t}^{2}+\displaystyle\frac{3\ell_{0}^{2}}{4}c_{t}\Delta_{\boldsymbol{x}}c_{t}-\displaystyle\frac{3\ell_{0}^{2}}{4}\big{|}\nabla_{\boldsymbol{x}}c_{t}\big{|}^{2}\right]+k_{\text{off}}p_{0}=0\,.$
(11)
Note, in Eq. (11), the derivatives of the cell number density are still with
respect to the spatial coordinates. In Section 2.3.4, these derivatives will
be transformed completely to the material configuration, yielding our fully
Lagrangian formulation.
#### 2.3.2 Stress definitions
Similar to the passive Cauchy stress, derivation of the passive Piola stress
is straightforward. Using the relation
$\boldsymbol{P}=J\boldsymbol{\sigma}\cdot\boldsymbol{F}^{-T}$, the passive
Piola stress reads
$\boldsymbol{P}^{\text{p}}=-\displaystyle\frac{E\pi R^{2}c_{t}}{1-\pi
R^{2}c_{t}}\boldsymbol{K}\,.$ (12)
Derivation of the active Piola stress from the relation (7) is an intricate
task and requires further attention. The active Piola stress tensor, a two-
point tensor, must be converted to a fully material stress tensor. In doing
so, firstly the active Cauchy stress $\boldsymbol{\sigma}^{\text{a}}$ is
transformed to the active Kirchhoff stress $\boldsymbol{\tau}^{\text{a}}$ as
$\boldsymbol{\tau}^{\text{a}}=J\boldsymbol{\sigma}^{\text{a}}\,,$ (13)
with its material time derivative
$\dot{\boldsymbol{\tau}^{\text{a}}}=J\left[\displaystyle\frac{\partial\boldsymbol{\sigma}^{\text{a}}}{\partial
t}+\nabla_{\boldsymbol{x}}\\!\cdot(\boldsymbol{\sigma}^{\text{a}}\otimes\boldsymbol{v})\right]\,.$
(14)
Via multiplying Eq. (7) by the Jacobian $J$, we arrive at a corresponding
relation for Eq. (7) in terms of the Kirchhoff stress
$\dot{\boldsymbol{\tau}^{\text{a}}}-2\left[\boldsymbol{l}\cdot\boldsymbol{\tau}^{\text{a}}\right]^{\text{sym}}=-\displaystyle\frac{1}{\ell_{0}p_{0}f^{\text{p}}}\left[\boldsymbol{\tau}^{\text{a}}\otimes\boldsymbol{\tau}^{\text{a}}\right]:\boldsymbol{l}^{\text{sym}}+\boldsymbol{\tau}^{\text{f}}-k_{\text{off}}\,\boldsymbol{\tau}^{\text{a}}\,.$
(15)
To obtain a fully material form of this equation, the Lie time derivative of
the Kirchhoff stress needs to be introduced. To express the Lie time
derivative of the Kirchhoff stress, we need to calculate the pull-back of the
Kirchhoff stress, calculate its material time derivative and then push it
forward to the spatial configuration again which renders
$\displaystyle\mathcal{L}_{t}\boldsymbol{\tau}^{\text{a}}=\dot{\boldsymbol{\tau}^{\text{a}}}-2\left[\boldsymbol{l}\cdot\boldsymbol{\tau}^{\text{a}}\right]^{\text{sym}}\,.$
(16)
Note, the symmetry of the Kirchhoff stress is utilized in the above
derivation. Further details regarding the formulation of the Lie time
derivative are available in B. The pull-back of the active Kirchhoff stress
yields the active Piola–Kirchhoff stress as
$\boldsymbol{S}^{\text{a}}=\boldsymbol{F}^{-1}\cdot\boldsymbol{\tau}^{\text{a}}\cdot\boldsymbol{F}^{-T}\,.$
(17)
Using Eqs. (15) and (16), one can write
$\mathcal{L}_{t}\boldsymbol{\tau}^{\text{a}}=\boldsymbol{F}\cdot\dot{\boldsymbol{S}^{\text{a}}}\cdot\boldsymbol{F}^{T}=-\displaystyle\frac{1}{\ell_{0}p_{0}f^{\text{p}}}\left[\boldsymbol{\tau}^{\text{a}}\otimes\boldsymbol{\tau}^{\text{a}}\right]:\boldsymbol{l}^{\text{sym}}+\boldsymbol{\tau}^{\text{f}}-k_{\text{off}}\,\boldsymbol{\tau}^{\text{a}}\,.$
(18)
Accordingly, the material time derivative of the active Piola–Kirchhoff stress
in terms of the active Kirchhoff stress is obtained as
$\dot{\boldsymbol{S}^{\text{a}}}=\boldsymbol{F}^{-1}\cdot\left[\mathcal{L}_{t}\boldsymbol{\tau}^{\text{a}}\right]\cdot\boldsymbol{F}^{-T}=\boldsymbol{F}^{-1}\cdot\left[-\displaystyle\frac{1}{\ell_{0}p_{0}f^{\text{p}}}\left[\boldsymbol{\tau}^{\text{a}}:\boldsymbol{l}^{\text{sym}}\right]\boldsymbol{\tau}^{\text{a}}+\boldsymbol{\tau}^{\text{f}}-k_{\text{off}}\,\boldsymbol{\tau}^{\text{a}}\right]\cdot\boldsymbol{F}^{-T}\,.$
(19)
Finally, via replacing $\boldsymbol{\tau}^{\text{a}}$ with
$\boldsymbol{F}\cdot\boldsymbol{S}^{\text{a}}\cdot\boldsymbol{F}^{T}$, we
arrive at the final form of the material time derivative of the active
Piola–Kirchhoff stress
$\dot{\boldsymbol{S}^{\text{a}}}=-\displaystyle\frac{1}{\ell_{0}p_{0}f^{\text{p}}}\left[\boldsymbol{S}^{\text{a}}:\dot{\boldsymbol{E}}\right]\boldsymbol{S}^{\text{a}}+\boldsymbol{S}^{\text{f}}-k_{\text{off}}\,\boldsymbol{S}^{\text{a}}\,,$
(20)
with $\boldsymbol{E}$ being the Green–Lagrange strain tensor
$\boldsymbol{E}=\tfrac{1}{2}[\boldsymbol{F}^{T}\cdot\boldsymbol{F}-\boldsymbol{I}]$.
For further details regarding the derivation of Eq. (20), see B. The most
significant advantage of using the Lagrangian approach is that it yields
considerable simplification to the governing equations and the active stress
time evolution equation. The active stress can be numerically determined via
time-discretizing Eq. (20), see C for further details. Subsequently, after
solving for the active Piola–Kirchhoff stress, the active Piola stress can be
immediately obtained via
$\boldsymbol{P}^{\text{a}}=\boldsymbol{F}\cdot\boldsymbol{S}^{\text{a}}$.
Stating the evolution law in terms of the material time derivative of the
active Piola–Kirchhoff stress (20) greatly simplifies an incrementally
objective time integration of the active stress evolution. The next step is to define the pili formation second
Piola–Kirchhoff stress $\boldsymbol{S}^{\text{f}}$. Inserting the relation
$\boldsymbol{S}=J\boldsymbol{F}^{-1}\cdot\boldsymbol{\sigma}\cdot\boldsymbol{F}^{-T}$
into Eq. (8), yields the pili formation Piola–Kirchhoff stress as
$\displaystyle\boldsymbol{S}^{\text{f}}=\frac{1}{2}f^{\text{p}}k_{\text{on}}\ell_{0}J\left[c_{t}^{2}\boldsymbol{B}+\frac{3\ell_{0}^{2}}{4}\left[-2\boldsymbol{F}^{-1}\\!\\!\cdot\left[\nabla_{\boldsymbol{x}}c_{t}\otimes\nabla_{\boldsymbol{x}}c_{t}\right]\cdot\boldsymbol{F}^{-T}\\!\\!\\!-\big{|}\nabla_{\boldsymbol{x}}c_{t}\big{|}^{2}\boldsymbol{B}+c_{t}\left[\Delta_{\boldsymbol{x}}c_{t}\right]\boldsymbol{B}+c_{t}\left[\boldsymbol{F}^{-1}\\!\\!\cdot\nabla^{2}_{\boldsymbol{x}}c_{t}\cdot\boldsymbol{B}+\boldsymbol{B}\cdot\nabla^{2}_{\boldsymbol{x}}c_{t}\cdot\boldsymbol{F}^{-T}\right]\right]\right]\,,$
(21)
with $\boldsymbol{B}=\boldsymbol{F}^{-T}\cdot\boldsymbol{F}^{-1}$ the Piola
deformation tensor. Note, in Eq. (21), the gradients and divergences are with
respect to the spatial coordinates which will be transformed completely to the
material configuration in Section 2.3.4.
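To make the structure of the stress update concrete, the following minimal sketch performs one forward-Euler step of Eq. (20); the scheme actually used in this work is detailed in C, so this is only an illustration of the update structure, with made-up inputs:

```python
import numpy as np

def update_active_stress(S_a, S_f, E_dot, dt, p0, f_p, ell_0, k_off):
    """One forward-Euler step of Eq. (20):
    dS_a/dt = -(S_a : E_dot) S_a / (ell_0 * p0 * f_p) + S_f - k_off * S_a."""
    contraction = np.tensordot(S_a, E_dot)  # double contraction S^a : E_dot
    rate = -contraction / (ell_0 * p0 * f_p) * S_a + S_f - k_off * S_a
    return S_a + dt * rate

# Made-up symmetric 2x2 inputs, for illustration only
S_a   = np.array([[0.5, 0.1], [0.1, 0.4]])
S_f   = np.array([[0.2, 0.0], [0.0, 0.2]])
E_dot = np.array([[0.01, 0.0], [0.0, -0.01]])
S_a = update_active_stress(S_a, S_f, E_dot, dt=0.1,
                           p0=1.0, f_p=1.0, ell_0=1.0, k_off=0.005)
print(S_a)
```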
#### 2.3.3 Weak form
To obtain the weak form of the overall cell number conservation equation, its
strong form is multiplied by a scalar-valued test function $\delta c_{t}$ and
then integrated over the referential domain as follows
$\int_{\mathcal{B}_{0}}\left[\dot{J}c_{t}+\dot{c_{t}}J\right]\delta
c_{t}\,\mbox{d}V=0\,\qquad\forall\delta c_{t}\,.$ (22)
To obtain the weak form of the linear momentum balance, its strong form too is
multiplied by a vector-valued test function $\delta\boldsymbol{y}$ and then
integrated over the referential domain which yields
$\int_{\mathcal{B}_{0}}\boldsymbol{P}^{\text{a}}:\nabla_{\boldsymbol{X}}\delta\boldsymbol{y}\,\mbox{d}V+\int_{\mathcal{B}_{0}}\boldsymbol{P}^{\text{p}}:\nabla_{\boldsymbol{X}}\delta\boldsymbol{y}\,\mbox{d}V+\int_{\mathcal{B}_{0}}\xi
Jc_{t}\dot{\boldsymbol{y}}\cdot\delta\boldsymbol{y}\,\mbox{d}V=\int_{\partial\mathcal{B}_{0}}\delta\boldsymbol{y}\cdot\boldsymbol{t}^{\text{a}}\,\mbox{d}A+\int_{\partial\mathcal{B}_{0}}\delta\boldsymbol{y}\cdot\boldsymbol{t}^{\text{p}}\,\mbox{d}A\,\qquad\forall\delta\boldsymbol{y},$
(23)
where $\boldsymbol{t}^{\text{a}}$ and $\boldsymbol{t}^{\text{p}}$ are active
and passive tractions acting on the boundary of the domain, respectively.
Similarly, the weak form of the bound pili evolution equation is obtained via
multiplying Eq. (11) by a scalar-valued test function $\delta p_{0}$ and then
integrating over the referential domain as
$\int_{\mathcal{B}_{0}}\left[\dot{p_{0}}-Jk_{\text{on}}\left[c_{t}^{2}+\displaystyle\frac{3\ell_{0}^{2}}{4}c_{t}\Delta_{\boldsymbol{x}}c_{t}-\displaystyle\frac{3\ell_{0}^{2}}{4}\big{|}\nabla_{\boldsymbol{x}}c_{t}\big{|}^{2}\right]+k_{\text{off}}p_{0}\right]\delta
p_{0}\,\mbox{d}V=0\,\qquad\forall\delta p_{0}.$ (24)
#### 2.3.4 Gradient enhanced framework
So far, the unknowns of our problem in Lagrangian approach have been the cell
number density $c_{t}$, the cell deformation map $\boldsymbol{y}$ and bound
pili number density $p_{0}$. A further challenge that arises in our problem is
the presence of the Hessian of the spatial cell number density
$\nabla^{2}_{\boldsymbol{x}}c_{t}$ and the Laplacian of the spatial cell
number density $\Delta_{\boldsymbol{x}}c_{t}$ in the bound pili number density
evolution equation (11) and the pili density formation Piola stress equation
(21). Thus, second derivatives of the spatial cell number density $c_{t}$ are
required to be calculated. In doing so, different strategies have been
employed in the literature, among which the well-established ones are
$\mathcal{C}^{1}$ continuous elements [75, 76], isogeometric analysis [77,
78], micromorphic continuum approach and the gradient enhanced framework [79,
80, 81]. In order to stay within the realm of the classical finite element
method (FEM) associated with a $\mathcal{C}^{0}$ continuous interpolation
approach, we adopt the gradient enhanced framework. In doing so, we introduce
an additional independent spatial vector field $\boldsymbol{g}$ to represent
the spatial gradient of the cell number density $\nabla_{\boldsymbol{x}}c_{t}$
and _weakly enforce_
$\boldsymbol{g}\overset{!}{=}\nabla_{\boldsymbol{x}}c_{t}$. Note, similar to
the spatial cell number density, its gradient is also parametrized as a
function of the Lagrangian coordinates
$\boldsymbol{g}=\boldsymbol{g}(\boldsymbol{X},t)$. The first step is to
transform the Laplacian and Hessian to substitute the terms including spatial
derivatives of $c_{t}$ with $\boldsymbol{g}$ as
$\displaystyle\Delta_{\boldsymbol{x}}c_{t}=\nabla_{\boldsymbol{x}}\cdot(\nabla_{\boldsymbol{x}}c_{t})=\nabla_{\boldsymbol{x}}\cdot\boldsymbol{g}\,,$
(25)
$\displaystyle\nabla^{2}_{\boldsymbol{x}}c_{t}=\nabla_{\boldsymbol{x}}(\nabla_{\boldsymbol{x}}c_{t})=\nabla_{\boldsymbol{x}}^{\text{sym}}\boldsymbol{g}=\frac{1}{2}\left[\nabla_{\boldsymbol{x}}\boldsymbol{g}+\nabla_{\boldsymbol{x}}^{T}\boldsymbol{g}\right]\,,$
where a symmetric gradient of $\boldsymbol{g}$ follows from the symmetry of
the Hessian of the cell number density. Afterwards the resulting terms must be
pulled back to the material configuration so as to unify the Lagrangian
formalism. For an arbitrary vector field $\boldsymbol{a}$, the following
relations hold between the spatial and material divergence and gradient
$\nabla_{\boldsymbol{x}}\cdot\boldsymbol{a}=\nabla_{\boldsymbol{X}}\boldsymbol{a}:\boldsymbol{F}^{-T}$
and
$\nabla_{\boldsymbol{x}}\boldsymbol{a}=\nabla_{\boldsymbol{X}}\boldsymbol{a}\cdot\boldsymbol{F}^{-1}$.
Accordingly, the Laplacian and Hessian of the cell number density can be
written in terms of $\boldsymbol{g}$ in the material configuration as
$\displaystyle\Delta_{\boldsymbol{x}}c_{t}=\nabla_{\boldsymbol{x}}\cdot\boldsymbol{g}=\nabla_{\boldsymbol{X}}\boldsymbol{g}:\boldsymbol{F}^{-T}\,,$
(26)
$\displaystyle\nabla^{2}_{\boldsymbol{x}}c_{t}=\nabla_{\boldsymbol{x}}^{\text{sym}}\boldsymbol{g}=\frac{1}{2}\left[\nabla_{\boldsymbol{X}}\boldsymbol{g}\cdot\boldsymbol{F}^{-1}+\boldsymbol{F}^{-T}\cdot\nabla_{\boldsymbol{X}}^{T}\boldsymbol{g}\right]\,.$
Using Eq. (26), the strong (11) and weak (24) forms of the bound pili number
density evolution equation can be stated as fully Lagrangian equations as
$\dot{p_{0}}-Jk_{\text{on}}\left[c_{t}^{2}+\displaystyle\frac{3\ell_{0}^{2}}{4}c_{t}\left[\nabla_{\boldsymbol{X}}\boldsymbol{g}:\boldsymbol{F}^{-T}\right]-\displaystyle\frac{3\ell_{0}^{2}}{4}\big{|}\boldsymbol{g}\big{|}^{2}\right]+k_{\text{off}}p_{0}=0\,,$
(27)
and
$\int_{\mathcal{B}_{0}}\left[\dot{p_{0}}-Jk_{\text{on}}\left[c_{t}^{2}+\displaystyle\frac{3\ell_{0}^{2}}{4}c_{t}\left[\nabla_{\boldsymbol{X}}\boldsymbol{g}:\boldsymbol{F}^{-T}\right]-\displaystyle\frac{3\ell_{0}^{2}}{4}\big{|}\boldsymbol{g}\big{|}^{2}\right]+k_{\text{off}}p_{0}\right]\delta
p_{0}\,\mbox{d}V=0\,,$ (28)
respectively. Accordingly the pili density formation Piola–Kirchhoff stress
equation (21) can be rewritten as
$\displaystyle\boldsymbol{S}^{\text{f}}=\frac{1}{2}f^{\text{p}}k_{\text{on}}\ell_{0}J\left[c_{t}^{2}\boldsymbol{B}+\frac{3\ell_{0}^{2}}{4}\left[-2\boldsymbol{F}^{-1}\\!\\!\cdot\left[\boldsymbol{g}\otimes\boldsymbol{g}\right]\cdot\boldsymbol{F}^{-T}\\!\\!\\!-\big{|}\boldsymbol{g}\big{|}^{2}\boldsymbol{B}+c_{t}\left[\nabla_{\boldsymbol{X}}\boldsymbol{g}:\boldsymbol{F}^{-T}\right]\boldsymbol{B}+c_{t}\left[\boldsymbol{F}^{-1}\\!\\!\cdot\nabla_{\boldsymbol{X}}\boldsymbol{g}\cdot\boldsymbol{B}+\boldsymbol{B}\cdot\nabla_{\boldsymbol{X}}^{T}\boldsymbol{g}\cdot\boldsymbol{F}^{-T}\right]\right]\right]\,.$
(29)
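The pull-back relations in Eq. (26) can be sanity-checked in the special case of constant (affine) gradients, where they reduce to matrix identities. A minimal sketch (random data, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
F = np.eye(2) + 0.1 * rng.standard_normal((2, 2))  # constant (affine) deformation gradient
G = rng.standard_normal((2, 2))                    # constant material gradient Grad_X g

# Spatial gradient via pull-back: grad_x g = Grad_X g . F^{-1}
grad_x_g = G @ np.linalg.inv(F)

# Spatial divergence two ways: trace of grad_x g  vs  Grad_X g : F^{-T}
div_trace  = np.trace(grad_x_g)
div_pullbk = np.tensordot(G, np.linalg.inv(F).T)   # double contraction
assert np.isclose(div_trace, div_pullbk)
print("div_x g =", div_trace)
```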
#### 2.3.5 Fictitious variational approach
The final step to complete our formulation is to update the governing
equations given the new continuity requirement. The advantage of using a
(fictitious) variational approach is that it provides the admissible forms of
tractions and external forces without any prior knowledge or assumptions. In
addition, with no extra effort or derivations, the variational approach
furnishes the weak form which is central to the finite element implementation
of the problem. In order to obtain the updated governing equations, a total
energy functional is minimized via setting its first variation to zero. To
begin, we assume that there exists a fictitious energy function $\Psi_{1}$
whose variation with respect to the degrees of freedom yields the already
existing weak forms (22), (23) and (28). The term “fictitious” for this energy
function implies that the original weak forms were directly derived from the
strong forms and not from any specific energy function and, as mentioned
before, the strong forms themselves are obtained using the coarse-grained
continuum approach developed in [1, 70]. To enforce the continuity of the
spatial cell number density gradient, we introduce a new energy function
$\Psi_{2}$ whose definition will be provided shortly. The total energy
$\Psi_{\text{tot}}$ consists of the initial fictitious energy and the newly
defined energy
$\Psi_{\text{tot}}=\Psi_{1}+\Psi_{2}\,.$ (30)
The energies are the integrals of their corresponding internal energy
densities over their associated domains as
$\Psi_{\text{tot}}=\int_{\mathcal{B}_{0}}\psi_{\text{tot}}\,\mbox{d}V\,,\qquad\Psi_{1}=\int_{\mathcal{B}_{0}}\psi_{1}\,\mbox{d}V\,,\qquad\Psi_{2}=\int_{\mathcal{B}_{0}}\psi_{2}\,\mbox{d}V\,.$
(31)
To minimize $\Psi_{\text{tot}}$, its first variation is set to zero. That is
$\delta\Psi_{\text{tot}}\stackrel{{\scriptstyle\boldsymbol{.}}}{{=}}0\qquad\Longrightarrow\qquad\delta\Psi_{1}+\delta\Psi_{2}\stackrel{{\scriptstyle\boldsymbol{.}}}{{=}}0\,.$
(32)
The field variables in our problem are the cell number density $c_{t}$, cell
deformation map $\boldsymbol{y}$, bound pili number density $p_{0}$ and cell
number density gradient $\boldsymbol{g}$. The energy density function to
impose the continuity of the spatial cell density gradient reads
$\psi_{2}=\psi_{2}\left(c_{t},\boldsymbol{y},\boldsymbol{g}\right)=\displaystyle\frac{1}{2}J\lambda\left[\boldsymbol{F}^{-T}\cdot\nabla_{\boldsymbol{X}}c_{t}-\boldsymbol{g}\right]^{2}\,,$
(33)
with $\lambda$ being the penalty parameter. The penalty parameter determines
how strongly the condition
$\boldsymbol{g}\overset{!}{=}\nabla_{\boldsymbol{x}}c_{t}$ is satisfied. Note,
the dependence of $\psi_{2}$ on $\boldsymbol{y}$ is through the deformation
gradient $\boldsymbol{F}$ and its determinant $J$. Accordingly, one could
write
$\int_{\mathcal{B}_{0}}\\!\\!\delta\psi_{\text{tot}}\,\mbox{d}V=\int_{\mathcal{B}_{0}}\delta\psi_{1}\,\mbox{d}V+\int_{\mathcal{B}_{0}}\delta\left(\displaystyle\frac{1}{2}J\lambda\left[\boldsymbol{F}^{-T}\cdot\nabla_{\boldsymbol{X}}c_{t}-\boldsymbol{g}\right]^{2}\right)\,\mbox{d}V\,.$
(34)
Calculating the variation of the overall energy density with respect to all
degrees of freedom and their gradients yields the weak forms as
$\int_{\mathcal{B}_{0}}\delta\psi_{\text{tot}}\,\mbox{d}V=\int_{\mathcal{B}_{0}}\\!\\!\\!\delta\psi_{1}\,\mbox{d}V+\int_{\mathcal{B}_{0}}\\!\\!\\!\delta\psi_{2}\,\mbox{d}V\,,$
(35)
with
$\displaystyle\int_{\mathcal{B}_{0}}\delta\psi_{1}\,\mbox{d}V=\int_{\mathcal{B}_{0}}\\!\\!\\!\delta_{c_{t}}\psi_{1}\,\mbox{d}V+\int_{\mathcal{B}_{0}}\\!\\!\\!\delta_{\boldsymbol{y}}\psi_{1}\,\mbox{d}V+\int_{\mathcal{B}_{0}}\\!\\!\\!\delta_{p_{0}}\psi_{1}\,\mbox{d}V\,,$
(36)
$\displaystyle\int_{\mathcal{B}_{0}}\delta\psi_{2}\,\mbox{d}V=\int_{\mathcal{B}_{0}}\\!\\!\\!\delta_{c_{t}}\psi_{2}\,\mbox{d}V+\int_{\mathcal{B}_{0}}\\!\\!\\!\delta_{\boldsymbol{y}}\psi_{2}\,\mbox{d}V+\int_{\mathcal{B}_{0}}\\!\\!\\!\delta_{\boldsymbol{g}}\psi_{2}\,\mbox{d}V\,.$
The variations of the newly defined energy function read
$\displaystyle\int_{\mathcal{B}_{0}}\\!\delta_{c_{t}}\psi_{2}\,\mbox{d}V=\int_{\mathcal{B}_{0}}\lambda
J\left[\boldsymbol{B}\cdot\nabla_{\boldsymbol{X}}c_{t}-\boldsymbol{F}^{-1}\cdot\boldsymbol{g}\right]\cdot\nabla_{\boldsymbol{X}}\delta
c_{t}\,\mbox{d}V\,,$ (37)
$\displaystyle\int_{\mathcal{B}_{0}}\\!\delta_{\boldsymbol{y}}\psi_{2}\,\mbox{d}V=\int_{\mathcal{B}_{0}}\\!\underbrace{\left[\displaystyle\frac{1}{2}\lambda
J\left[\boldsymbol{F}^{-T}\cdot\nabla_{\boldsymbol{X}}c_{t}-\boldsymbol{g}\right]^{2}\boldsymbol{F}^{-T}-\lambda
J\left[\left[\boldsymbol{F}^{-T}\cdot\nabla_{\boldsymbol{X}}c_{t}\right]\otimes\left[\boldsymbol{B}\cdot\nabla_{\boldsymbol{X}}c_{t}-\boldsymbol{F}^{-1}\cdot\boldsymbol{g}\right]\right]\right]}_{\overline{\boldsymbol{P}}}:\nabla_{\boldsymbol{X}}\delta\boldsymbol{y}\,\mbox{d}V\,,$
$\displaystyle\int_{\mathcal{B}_{0}}\\!\delta_{\boldsymbol{g}}\psi_{2}\,\mbox{d}V=-\int_{\mathcal{B}_{0}}\lambda
J\left[\boldsymbol{F}^{-T}\cdot\nabla_{\boldsymbol{X}}c_{t}-\boldsymbol{g}\right]\cdot\delta\boldsymbol{g}\,\mbox{d}V\,,$
with $\overline{\boldsymbol{P}}$ being the cell-density-gradient-continuity-
induced Piola stress which arises due to the dependence of $\psi_{2}$ on the
deformation. Finally, we arrive at the weak form of the governing
equations, which forms our residual system as
overall cell number conservation:
$\displaystyle\int_{\mathcal{B}_{0}}\left[\dot{J}c_{t}+\dot{c_{t}}J\right]\delta c_{t}\,\mbox{d}V+\int_{\mathcal{B}_{0}}\lambda J\left[\boldsymbol{B}\cdot\nabla_{\boldsymbol{X}}c_{t}-\boldsymbol{F}^{-1}\cdot\boldsymbol{g}\right]\cdot\nabla_{\boldsymbol{X}}\delta c_{t}\,\mbox{d}V=0\,,$ (38)
linear momentum balance:
$\displaystyle\int_{\mathcal{B}_{0}}\xi Jc_{t}\dot{\boldsymbol{y}}\cdot\delta\boldsymbol{y}\,\mbox{d}V+\int_{\mathcal{B}_{0}}\left[\boldsymbol{P}^{\text{a}}+\boldsymbol{P}^{\text{p}}+\overline{\boldsymbol{P}}\right]:\nabla_{\boldsymbol{X}}\delta\boldsymbol{y}\,\mbox{d}V-\int_{\partial\mathcal{B}_{0}}\delta\boldsymbol{y}\cdot\left[\boldsymbol{t}^{\text{a}}+\boldsymbol{t}^{\text{p}}\right]\,\mbox{d}A=0\,,$
bound pili number density evolution:
$\displaystyle\int_{\mathcal{B}_{0}}\left[\dot{p_{0}}-Jk_{\text{on}}\left[c_{t}^{2}+\displaystyle\frac{3\ell_{0}^{2}}{4}c_{t}\left[\nabla_{\boldsymbol{X}}\boldsymbol{g}:\boldsymbol{F}^{-T}\right]-\displaystyle\frac{3\ell_{0}^{2}}{4}\big{|}\boldsymbol{g}\big{|}^{2}\right]+k_{\text{off}}p_{0}\right]\delta p_{0}\,\mbox{d}V=0\,,$
cell number density gradient continuity:
$\displaystyle\int_{\mathcal{B}_{0}}-\lambda J\left[\boldsymbol{F}^{-T}\cdot\nabla_{\boldsymbol{X}}c_{t}-\boldsymbol{g}\right]\cdot\delta\boldsymbol{g}\,\mbox{d}V=0\,.$
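As a side illustration of the penalty term, note that for fixed $c_{t}$ the minimization of $\int\frac{\lambda}{2}\left[c_{t}^{\prime}-g\right]^{2}\,\mbox{d}x$ over a nodal $g$-field is an $L^{2}$ projection of the density gradient, with $\lambda$ canceling in this sub-problem. The following minimal one-dimensional sketch (with an assumed mesh, lumped quadrature and density field, not an excerpt of our solver) demonstrates this:

```python
import numpy as np

# 1D sketch: with c fixed, minimizing int (lambda/2) (dc/dx - g)^2 dx over a
# piecewise-linear nodal field g is an L2 projection of dc/dx; lambda cancels.
# Mesh, row-sum mass lumping and the density field are illustrative choices.
n, L = 20, 1.0
x = np.linspace(0.0, L, n + 1)
h = L / n
c = np.sin(np.pi * x)                 # example nodal density field

M = np.zeros(n + 1)                   # lumped "mass" for the g-projection
b = np.zeros(n + 1)                   # projection right-hand side
for e in range(n):
    dc = (c[e + 1] - c[e]) / h        # element-wise gradient of c
    for a in (e, e + 1):
        M[a] += 0.5 * h
        b[a] += 0.5 * h * dc
g = b / M                             # nodal g approximating dc/dx
err = np.abs(g[1:-1] - np.pi * np.cos(np.pi * x[1:-1])).max()
print(err)                            # small: second-order accurate interior
```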
## 3 Finite element implementation
In this section, we present a general finite element formulation for the
implementation of our developed methodology. The first step towards the finite
element implementation of our problem is the derivation of the weak forms,
which was concluded in Section 2.3.5. For time integration,
the time interval $\mathds{T}$ is subdivided into a set of intervals $\Delta
t$ with
$\mathds{T}=\bigcup_{n=0}^{\\#\text{ts}-1}\left[t_{n},\,t_{n+1}\right]\,,$
(39)
where $\\#\text{ts}$ denotes the number of time steps and the time increment
is defined by $\Delta t=t_{n+1}-t_{n}$. As mentioned earlier, an implicit time
integration associated with the backward Euler method is employed here in
which an unknown $y_{n+1}$ is determined via the relation
$y_{n+1}-y_{n}=f(y_{n+1})\Delta t$ based on the continuous evolution equation
$\dot{y}=f(y)$.
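To make the implicit update concrete, the following minimal sketch (with a hypothetical scalar test problem, not an excerpt of our in-house code) shows how $y_{n+1}-y_{n}=f(y_{n+1})\Delta t$ is solved for $y_{n+1}$ by a local Newton iteration:

```python
import numpy as np

def backward_euler_step(f, dfdy, y_n, dt, tol=1e-10, max_iter=25):
    """One backward Euler step: solve y_{n+1} - y_n = f(y_{n+1}) * dt
    for y_{n+1} with a local Newton iteration."""
    y = y_n                               # initial guess: previous value
    for _ in range(max_iter):
        r = y - y_n - dt * f(y)           # residual in y_{n+1}
        if abs(r) < tol:
            break
        y -= r / (1.0 - dt * dfdy(y))     # Newton update with dr/dy
    return y

# Illustrative linear decay problem y' = -k*y
k, y, dt = 0.5, 1.0, 0.1
for _ in range(10):
    y = backward_euler_step(lambda y: -k * y, lambda y: -k, y, dt)
print(y)  # close to exp(-0.5) = 0.6065...
```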
Here, the spatial discretization of the problem domain is carried out using
the Bubnov–Galerkin finite element method. The geometry together with the
variables are approximated using isoparametric coordinates
$\boldsymbol{\xi}\in[-1,1]^{2}$. Using standard interpolations together with
the isoparametric concept, the geometry is discretized as
$\displaystyle\boldsymbol{X}\Big{|}_{\mathcal{B}_{0}}\\!\\!\\!\approx\boldsymbol{X}(\boldsymbol{\xi})=\sum_{i=1}^{\\#\text{e}}N^{i}(\boldsymbol{\xi})\boldsymbol{X}^{i}\,,$
(40)
with $\\#\text{e}$ denoting the number of nodes per element and $\boldsymbol{\xi}$
denoting the natural space coordinates. Accordingly, the discretized fields
read
$\displaystyle c_{t}\Big{|}_{\mathcal{B}_{0}}\\!\\!\\!\approx
c_{t}(\boldsymbol{\xi})=\sum_{i=1}^{\\#\text{e}}N^{i}(\boldsymbol{\xi})c_{t}^{i}\,,$
$\displaystyle\delta c_{t}\Big{|}_{\mathcal{B}_{0}}\\!\\!\\!\approx\delta
c_{t}(\boldsymbol{\xi})=\\!\\!\\!\sum_{i=1}^{\\#\text{e}}N^{i}(\boldsymbol{\xi})\delta
c_{t}^{i}\,,$ (41)
$\displaystyle\boldsymbol{y}\Big{|}_{\mathcal{B}_{0}}\\!\\!\\!\approx\boldsymbol{y}(\boldsymbol{\xi})=\sum_{i=1}^{\\#\text{e}}M^{i}(\boldsymbol{\xi})\boldsymbol{y}^{i}\,,$
$\displaystyle\delta\boldsymbol{y}\Big{|}_{\mathcal{B}_{0}}\\!\\!\\!\approx\delta\boldsymbol{y}(\boldsymbol{\xi})=\\!\\!\\!\sum_{i=1}^{\\#\text{e}}M^{i}(\boldsymbol{\xi})\delta\boldsymbol{y}^{i}\,,$
$\displaystyle p_{0}\Big{|}_{\mathcal{B}_{0}}\\!\\!\\!\approx
p_{0}(\boldsymbol{\xi})=\sum_{i=1}^{\\#\text{e}}N^{i}(\boldsymbol{\xi})p_{0}^{i}\,,$
$\displaystyle\delta p_{0}\Big{|}_{\mathcal{B}_{0}}\\!\\!\\!\approx\delta
p_{0}(\boldsymbol{\xi})=\\!\\!\\!\sum_{i=1}^{\\#\text{e}}N^{i}(\boldsymbol{\xi})\delta
p_{0}^{i}\,,$
$\displaystyle\boldsymbol{g}\Big{|}_{\mathcal{B}_{0}}\\!\\!\\!\approx\boldsymbol{g}(\boldsymbol{\xi})=\sum_{i=1}^{\\#\text{e}}N^{i}(\boldsymbol{\xi})\boldsymbol{g}^{i}\,,$
$\displaystyle\delta\boldsymbol{g}\Big{|}_{\mathcal{B}_{0}}\\!\\!\\!\approx\delta\boldsymbol{g}(\boldsymbol{\xi})=\\!\\!\\!\sum_{i=1}^{\\#\text{e}}N^{i}(\boldsymbol{\xi})\delta\boldsymbol{g}^{i}\,,$
with $N^{i}$ and $M^{i}$ being different shape functions, which, however, can
also be chosen identical in particular cases. For the cases where a mixed
finite element formulation is required, these shape functions are polynomials
of different orders.
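As an aside, the difference between $N^{i}$ and $M^{i}$ can be illustrated in one dimension; the snippet below (an illustrative stand-in, not our element routines) evaluates linear and quadratic Lagrange shape functions on the isoparametric interval $\xi\in[-1,1]$:

```python
import numpy as np

def N_linear(xi):
    """Linear shape functions on [-1, 1] with nodes at -1, +1."""
    return np.array([0.5 * (1.0 - xi), 0.5 * (1.0 + xi)])

def M_quadratic(xi):
    """Quadratic shape functions on [-1, 1] with nodes at -1, 0, +1."""
    return np.array([0.5 * xi * (xi - 1.0),
                     (1.0 - xi) * (1.0 + xi),
                     0.5 * xi * (xi + 1.0)])

xi = 0.3
print(N_linear(xi).sum())      # 1.0: partition of unity
print(M_quadratic(xi).sum())   # 1.0: partition of unity
```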
The discretized weak form of the overall cell number conservation equation
$\eqref{eq:weak_Lagrangian}_{1}$ reads
$\int_{\mathcal{B}_{0}}\left[c_{t_{n}}\displaystyle\frac{J_{n+1}-J_{n}}{\Delta
t}+J_{n+1}\displaystyle\frac{c_{t_{n+1}}-c_{t_{n}}}{\Delta
t}\right]N^{i}\,\mbox{d}V+\int_{\mathcal{B}_{0}}\lambda
J_{n+1}\left[\boldsymbol{B}_{n+1}\cdot\nabla_{\boldsymbol{X}}c_{t_{n+1}}-\boldsymbol{F}_{n+1}^{-1}\cdot\boldsymbol{g}_{n+1}\right]\cdot\nabla_{\boldsymbol{X}}N^{i}\,\mbox{d}V=0\,.$
(42)
The discretized weak form of the linear momentum balance equation
$\eqref{eq:weak_Lagrangian}_{2}$ reads
$\int_{\mathcal{B}_{0}}\xi
J_{n+1}c_{t_{n+1}}\displaystyle\frac{\boldsymbol{y}_{n+1}-\boldsymbol{y}_{n}}{\Delta
t}M^{i}\,\mbox{d}V+\int_{\mathcal{B}_{0}}\left[\boldsymbol{P}^{\text{a}}_{n+1}+\boldsymbol{P}^{\text{p}}_{n+1}+\overline{\boldsymbol{P}}_{n+1}\right]\cdot\nabla_{\boldsymbol{X}}M^{i}\,\mbox{d}V=\int_{\partial\mathcal{B}_{0}}\\!\\!\\!\boldsymbol{t}^{\text{a}}M^{i}\,\mbox{d}A+\\!\\!\\!\int_{\partial\mathcal{B}_{0}}\\!\\!\\!\boldsymbol{t}^{\text{p}}M^{i}\,\mbox{d}A\,.$
(43)
The discretized weak form of the bound pili number density evolution equation
$\eqref{eq:weak_Lagrangian}_{3}$ reads
$\int_{\mathcal{B}_{0}}\left[\displaystyle\frac{p_{0_{n+1}}-p_{0_{n}}}{\Delta
t}-J_{n+1}k_{\text{on}}\left[c_{t_{n+1}}^{2}+\displaystyle\frac{3\ell_{0}^{2}}{4}c_{t_{n+1}}\left[\nabla_{\boldsymbol{X}}\boldsymbol{g}_{n+1}:\boldsymbol{F}_{n+1}^{-T}\right]-\displaystyle\frac{3\ell_{0}^{2}}{4}\big{|}\boldsymbol{g}_{n+1}\big{|}^{2}\right]+k_{\text{off}}p_{0_{n+1}}\right]N^{i}\,\mbox{d}V=0\,,$
(44)
and finally, the discretized weak form of the cell density gradient continuity
equation $\eqref{eq:weak_Lagrangian}_{4}$ reads
$\int_{\mathcal{B}_{0}}-\lambda
J_{n+1}\left[\boldsymbol{F}_{n+1}^{-T}\cdot\nabla_{\boldsymbol{X}}c_{t_{n+1}}-\boldsymbol{g}_{n+1}\right]N^{i}\,\mbox{d}V=\boldsymbol{0}\,.$
(45)
The fully discrete form of the balance equations can be obtained by applying the
spatial approximation, which forms a residual system associated with the global
node $I$ as
$\displaystyle\text{R}^{I}_{c}=\mathop{\mbox{\Large{{{A}}}}}_{\alpha=1}^{\\#e}\int_{\mathcal{B}_{0}}\left[c_{t_{n}}\frac{J_{n+1}-J_{n}}{\Delta
t}+J_{n+1}\frac{c_{t_{n+1}}-c_{t_{n}}}{\Delta
t}\right]N^{i}\,\mbox{d}V+\mathop{\mbox{\Large{{{A}}}}}_{\alpha=1}^{\\#e}\int_{\mathcal{B}_{0}}\lambda
J_{n+1}\left[\boldsymbol{B}_{n+1}\cdot\nabla_{\boldsymbol{X}}c_{t_{n+1}}-\boldsymbol{F}_{n+1}^{-1}\cdot\boldsymbol{g}_{n+1}\right]\cdot\nabla_{\boldsymbol{X}}N^{i}\,\mbox{d}V=0\,,$
(46)
$\displaystyle\text{R}^{I}_{\boldsymbol{y}}=\mathop{\mbox{\Large{{{A}}}}}_{\alpha=1}^{\\#e}\int_{\mathcal{B}_{0}}\xi
J_{n+1}c_{t_{n+1}}\frac{\boldsymbol{y}_{n+1}-\boldsymbol{y}_{n}}{\Delta
t}M^{i}\,\mbox{d}V+\mathop{\mbox{\Large{{{A}}}}}_{\alpha=1}^{\\#e}\int_{\mathcal{B}_{0}}\left[\boldsymbol{P}^{\text{a}}_{n+1}+\boldsymbol{P}^{\text{p}}_{n+1}+\overline{\boldsymbol{P}}_{n+1}\right]\cdot\nabla_{\boldsymbol{X}}M^{i}\,\mbox{d}V-\mathop{\mbox{\Large{{{A}}}}}_{\alpha=1}^{\\#se}\int_{\partial\mathcal{B}_{0}}\\!\\!\\!\left[\boldsymbol{t}^{\text{a}}+\boldsymbol{t}^{\text{p}}\right]M^{i}\,\mbox{d}A=\boldsymbol{0}\,,$
$\displaystyle\text{R}^{I}_{p}=\mathop{\mbox{\Large{{{A}}}}}_{\alpha=1}^{\\#e}\int_{\mathcal{B}_{0}}\left[\frac{p_{0_{n+1}}-p_{0_{n}}}{\Delta
t}-J_{n+1}k_{\text{on}}\left[c_{t_{n+1}}^{2}+\displaystyle\frac{3\ell_{0}^{2}}{4}c_{t_{n+1}}\left[\nabla_{\boldsymbol{X}}\boldsymbol{g}_{n+1}:\boldsymbol{F}_{n+1}^{-T}\right]-\displaystyle\frac{3\ell_{0}^{2}}{4}\big{|}\boldsymbol{g}_{n+1}\big{|}^{2}\right]+k_{\text{off}}p_{0_{n+1}}\right]N^{i}\,\mbox{d}V=0\,,$
$\displaystyle\text{R}^{I}_{\boldsymbol{g}}=\mathop{\mbox{\Large{{{A}}}}}_{\alpha=1}^{\\#e}\int_{\mathcal{B}_{0}}-\lambda
J_{n+1}\left[\boldsymbol{F}_{n+1}^{-T}\cdot\nabla_{\boldsymbol{X}}c_{t_{n+1}}-\boldsymbol{g}_{n+1}\right]N^{i}\,\mbox{d}V=\boldsymbol{0}\,,$
with $\\#se$ denoting the number of surface elements and
$\mathop{\mbox{\Large{{{A}}}}}$ being the assembly operator. The global
residual vector consists of the above four residual vectors as
$\text{R}_{\text{tot}}=\left[\begin{matrix}\text{R}_{c_{t}}\\\
\text{R}_{\boldsymbol{y}}\\\ \text{R}_{p_{0}}\\\
\text{R}_{\boldsymbol{g}}\end{matrix}\right]\,,\qquad\text{with}\qquad\text{R}_{c_{t}}=\left[\begin{matrix}\text{R}^{1}_{c_{t}}\\\
\text{R}^{2}_{c_{t}}\\\ \vdots\\\ \text{R}^{\text{nn}}_{c_{t}}\\\
\end{matrix}\right]\,,\qquad\text{R}_{\boldsymbol{y}}=\left[\begin{matrix}\text{R}^{1}_{\boldsymbol{y}}\\\
\text{R}^{2}_{\boldsymbol{y}}\\\ \vdots\\\
\text{R}^{\text{nn}}_{\boldsymbol{y}}\\\
\end{matrix}\right]\,,\qquad\text{R}_{p_{0}}=\left[\begin{matrix}\text{R}^{1}_{p_{0}}\\\
\text{R}^{2}_{p_{0}}\\\ \vdots\\\ \text{R}^{\text{nn}}_{p_{0}}\\\
\end{matrix}\right]\,,\qquad\text{R}_{\boldsymbol{g}}=\left[\begin{matrix}\text{R}^{1}_{\boldsymbol{g}}\\\
\text{R}^{2}_{\boldsymbol{g}}\\\ \vdots\\\
\text{R}^{\text{nn}}_{\boldsymbol{g}}\\\ \end{matrix}\right]\,,$ (47)
where nn denotes the total number of nodes. Finally, the fully discrete
nonlinear system of governing equations becomes
$\text{R}_{\text{tot}}=\text{R}_{\text{tot}}(\boldsymbol{U})\stackrel{{\scriptstyle\boldsymbol{!}}}{{=}}\boldsymbol{0}\,,$
(48)
with $\boldsymbol{U}$ being the global vector of unknowns. To find the
solution of the system (48), the Newton–Raphson scheme is employed. The
consistent linearization of the resulting system of equations yields
$\text{R}_{\text{tot}}(\boldsymbol{U}_{n+1})=\text{R}_{\text{tot}}(\boldsymbol{U}_{n})+\text{K}_{\text{tot}}\cdot\Delta\boldsymbol{U}_{n}\stackrel{{\scriptstyle\boldsymbol{!}}}{{=}}\boldsymbol{0}\qquad\text{with}\qquad\text{K}_{\text{tot}}=\displaystyle\frac{\partial\text{R}_{\text{tot}}}{\partial\boldsymbol{U}}\big{\lvert}_{n}\qquad\text{and}\qquad\boldsymbol{U}_{n+1}=\boldsymbol{U}_{n}+\Delta\boldsymbol{U}_{n}\,,$
(49)
where the subscript $n$ indicates the iteration number of the Newton–Raphson scheme.
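A minimal sketch of this solution loop is given below; it uses a finite-difference tangent and a hypothetical two-equation test residual, whereas our code assembles the consistent tangent $\text{K}_{\text{tot}}$ analytically:

```python
import numpy as np

def newton_raphson(residual, U0, tol=1e-10, max_iter=30):
    """Newton-Raphson for R(U) = 0 with a finite-difference tangent K = dR/dU."""
    U = U0.astype(float)
    for _ in range(max_iter):
        R = residual(U)
        if np.linalg.norm(R) < tol:
            break
        eps = 1e-7                              # finite-difference tangent
        K = np.empty((len(U), len(U)))
        for j in range(len(U)):
            dU = np.zeros_like(U); dU[j] = eps
            K[:, j] = (residual(U + dU) - R) / eps
        U = U + np.linalg.solve(K, -R)          # U <- U + dU with K dU = -R
    return U

# Hypothetical test residual with root near U = [1.3028, 1.3028]
R = lambda U: np.array([U[0]**2 + U[1] - 3.0, U[0] - U[1]])
print(newton_raphson(R, np.array([1.0, 1.0])))
```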
## 4 Numerical results
This section aims to illustrate our proposed theory through a set of numerical
examples. In doing so, the evolution of the cell number density under various
conditions is investigated. Moreover, parametric studies are carried out in
order to highlight the influence of different parameters on the evolution of
the cell number density. Throughout all the examples, the domain is a
$80\times 80$ square subject to periodic boundary conditions. Implicit time
integration is adopted, which is not restricted by a stability condition; this
renders the solution of our problem robust and allows for larger time steps. All the numerical
results are obtained from our in-house finite element code. The solution
procedure is robust and for all examples, we obtain convergence with a
quadratic rate associated with the Newton–Raphson scheme.
### 4.1 Cell number density evolution
Figure 5: Snapshots of the cell number density evolution in the presence of
the active stress. The first and second rows represent the cell number density
distribution in undeformed and deformed configurations, respectively. The
third and the fourth rows illustrate how an Eulerian observer would see the
cell number density evolution through a fixed window associated with the
Eulerian framework. Figure 6: Another illustration of the cell aggregate
evolution in Fig. 5 with variable color-bar range.
This section investigates the evolution of the cells under two different
scenarios. In the first scenario, the cellular aggregate behavior is
investigated in the presence of all the driving forces. That is, the pili-pili
mediated attractive forces, steric repulsion forces and the cell-substrate
friction. In the second scenario, the problem is simplified to the case where
the pili-pili mediated attractive forces are eliminated. In this case, we
investigate the cell number density evolution when only the steric repulsion
forces and the cell-substrate friction are the forces acting on the cells.
Figure 5 renders six different snapshots of the cellular aggregate evolution
at different times associated with the first scenario. The color represents
the magnitude of cell number density throughout the domain. The first and
second rows represent the cell number density distribution in the undeformed
and deformed configurations, respectively. The third and the fourth rows
illustrate how an Eulerian observer would see the cellular aggregate evolution
through a fixed window associated with the Eulerian framework. At the initial
time $t=0$, the problem starts with a uniform distribution of cells with
initial $c_{t}=0.079$ perturbed with $\pm 0.001$ relative random fluctuation.
The cell radius is set to $R=1\,\mu\text{m}$, the cell bulk modulus is set to
$E=1\,\text{N}/\text{m}^{2}$ and the friction coefficient is set to
$\xi=10\,\text{Ns}/\text{m}$. In addition, the pili pair binding rate is
$k_{\text{on}}=0.0178\,\text{s}^{-1}$, the pili pair unbinding rate is
$k_{\text{off}}=0.01\,\text{s}^{-1}$, the average pili length is
$\ell_{0}=2.0\,\mathrm{\mu}\text{m}$ and the pili-pili mediated attractive
force is $f^{\text{p}}=18.0\,\text{pN}$. Since the backward Euler time
integration is adopted, an adaptive time stepping is employed with initial
time step of $\Delta t=0.5\,\text{s}$. If convergence is obtained at very few
iterations, the time step is enlarged by a factor of $1.2$ and vice versa.
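The following snippet sketches such a controller; the growth factor $1.2$ and the initial $\Delta t=0.5\,\text{s}$ are taken from the text, whereas the iteration thresholds, the shrink factor and the step-size bounds are illustrative assumptions:

```python
def adapt_dt(dt, newton_iters, fast=4, slow=10,
             grow=1.2, shrink=1.0 / 1.2, dt_min=1e-3, dt_max=10.0):
    """Heuristic step-size controller: grow dt after quick Newton convergence,
    shrink it after slow convergence, and keep dt within assumed bounds."""
    if newton_iters <= fast:
        dt *= grow
    elif newton_iters >= slow:
        dt *= shrink
    return min(max(dt, dt_min), dt_max)

dt = 0.5                             # initial time step in seconds
dt = adapt_dt(dt, newton_iters=3)    # quick convergence: dt becomes 0.6
```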
During the aggregate evolution, an initially homogeneous, smooth cell
distribution is first observed due to the action of the passive stress. This is
justifiable since the formation of bound pili pairs (and thus the emergence of
active stress) requires time; hence the passive stress dominates during the
starting steps. As time elapses, more bound pili pairs form, as depicted in
Fig. 3. Thus the cells tend to attract each other, which is reflected in the
dominance of the active stress and triggers the onset of the phase
separation. Therefore, pili-mediated attractive forces act as the major
driving force leading to the formation of an aggregate. Figure 6 provides
another illustration of the cell aggregate evolution in Fig. 5 with variable
color range in each step. The purpose of the figure is to elucidate the
initial random distribution of cells and highlight the early smooth
distribution of cells due to the passive stress which is reflected in the
tighter color bar range.
Figure 7: A parametric study on the cell density evolution. The $y$-axis
shows the difference between the maximum and minimum cell density throughout
the domain while the $x$-axis shows the elapsed time. The left figure studies
the effects of the pili average length $\ell_{0}$ where
$k_{\text{on}}=0.05\,\text{s}^{-1}$ and $f^{p}=12\,\text{pN}$. The middle
figure studies the effects of the pili pair binding rate $k_{\text{on}}$ where
$\ell_{0}=2.0\,\mu\text{m}$ and $f^{p}=12\,\text{pN}$. The right figure
studies the effects of the pili pair attractive force $f^{p}$ where
$k_{\text{on}}=0.05\,\text{s}^{-1}$ and $\ell_{0}=2.0\,\mu\text{m}$.
In Fig. 7, a parametric study is carried out to investigate the influence of
the average pili length $\ell_{0}$, pili-pili binding rate $k_{\text{on}}$ and
pili-pili mediated attractive force $f^{\text{p}}$ on the formation of an
aggregate. Each figure renders the difference between the maximum and the
minimum cell number density versus time. The left figure studies the effects
of the pili average length $\ell_{0}$ where
$k_{\text{on}}=0.05\,\text{s}^{-1}$ and $f^{p}=12\,\text{pN}$, the middle
figure studies the effects of the pili pair binding rate $k_{\text{on}}$ where
$\ell_{0}=2.0\,\mu\text{m}$ and $f^{p}=12\,\text{pN}$ and the right figure
studies the effects of the pili pair attractive force $f^{p}$ where
$k_{\text{on}}=0.05\,\text{s}^{-1}$ and $\ell_{0}=2.0\,\mu\text{m}$. For all
figures, an initial decrease in the cell number density difference is observed,
which is associated with the action of the passive stress. This behavior is
vividly observed in the second step in Fig. 6 as the color bar limits tend to
tighten. It is observed that, if the values of $f^{p}$, $\ell_{0}$ and
$k_{\text{on}}$ are small, the pili-pili mediated forces cannot overcome the
steric repulsive forces and the difference continues to decrease until a
uniform homogeneous distribution is obtained. For these cases the phase
separation does not occur and the dynamics of the cell network is mainly
driven by the steric repulsive forces and cell-substrate friction. The lower
the values of $f^{p}$, $\ell_{0}$ and $k_{\text{on}}$, the faster the
equilibrium state is reached. However, if the values of $f^{p}$, $\ell_{0}$
and $k_{\text{on}}$ are large enough, the pili-pili mediated forces tend to
dominate the other forces as time elapses which is reflected in the larger
cell number density difference. In these cases, the active stress plays a more
considerable role compared to the passive stress which yields the onset of the
phase separation. Increasing the values of $f^{p}$, $\ell_{0}$ and
$k_{\text{on}}$ results in higher rates of phase separation as indicated by
larger slopes of the graphs in Fig. 7. This behavior is natural to expect, as
larger values of $f^p$ mean that the attractive force between the pili pairs is
stronger, which causes the cells to form a colony at a higher rate. Larger
values of $\ell_{0}$ indicate longer pili, implying that pili are more capable
of reaching out and attaching to the pili of other cells, which facilitates the
formation of a colony. Finally, larger values of $k_{\text{on}}$ signify a
faster binding rate between the pili, leading to a faster formation of bound
pairs, which accelerates the colony formation. Note that the parameters in the
graphs in Fig. 7 follow the condition obtained by Kuan et al. [1, 70]
using linear stability analysis, which reads
$-\displaystyle\frac{1}{\xi}\displaystyle\frac{E\pi
R^{2}}{\left[1-c_{t}^{\text{initial}}\pi
R^{2}\right]^{2}}+\displaystyle\frac{c_{t}^{\text{initial}}\ell_{0}f^{\text{p}}k_{\text{on}}}{k_{\text{off}}\xi}>0\,.$
Satisfying the above condition indicates the onset of the phase separation.
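As a hedged numerical check (assuming the parameters are expressed in $\mu\text{m}$, pN and s, with $1\,\text{N}/\text{m}^{2}=1\,\text{pN}/\mu\text{m}^{2}$, and noting that $\xi>0$ cancels from the inequality), the baseline of Fig. 7 can be tested against this condition:

```python
import numpy as np

# Spot-check of the linear-stability condition for the Fig. 7 baseline.
# Units assumed: um, pN, s; 1 N/m^2 = 1 pN/um^2. xi > 0 cancels out.
E, R = 1.0, 1.0                      # bulk modulus (pN/um^2), cell radius (um)
c0 = 0.079                           # initial cell number density (1/um^2)
l0, fp = 2.0, 12.0                   # average pili length (um), force (pN)
k_on, k_off = 0.05, 0.01             # binding/unbinding rates (1/s)

repulsive = E * np.pi * R**2 / (1.0 - c0 * np.pi * R**2) ** 2  # ~5.56
attractive = c0 * l0 * fp * k_on / k_off                       # ~9.48
print(attractive > repulsive)        # True: phase separation expected
```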
Figure 8: Illustration of the Taylor–Hood element utilized for the simulation.
For this element type we use a quadratic approximation for the deformation map
field and a linear approximation for the cell number density in order to
satisfy the LBB condition.
Now we consider the second scenario where the cell number density evolution is
examined while its dynamics is driven only by the steric repulsive forces and
the cell-substrate friction. In the absence of the pili-pili mediated
attractive forces, the complexity of the problem reduces considerably since
the bound pili number density evolution equation is eliminated from the system
of equations. In addition, there is no need to adopt a gradient enhanced
framework to deal with the Laplacian and Hessian of the cell number density.
As a result, the field variables simply become the original cell number
density $c_{t}$ and the deformation map $\boldsymbol{y}$ and, under periodic
boundary conditions, the fully discrete residual system (46) simplifies to
$\displaystyle\text{R}^{I}_{c}=\mathop{\mbox{\Large{{{A}}}}}_{\alpha=1}^{\\#e}\int_{\mathcal{B}_{0}}\left[c_{t_{n}}\frac{J_{n+1}-J_{n}}{\Delta
t}+J_{n+1}\frac{c_{t_{n+1}}-c_{t_{n}}}{\Delta t}\right]N^{i}\,\mbox{d}V=0\,,$
(50)
$\displaystyle\text{R}^{I}_{\boldsymbol{y}}=\mathop{\mbox{\Large{{{A}}}}}_{\alpha=1}^{\\#e}\int_{\mathcal{B}_{0}}\xi
J_{n+1}c_{t_{n+1}}\frac{\boldsymbol{y}_{n+1}-\boldsymbol{y}_{n}}{\Delta
t}M^{i}\,\mbox{d}V+\mathop{\mbox{\Large{{{A}}}}}_{\alpha=1}^{\\#e}\int_{\mathcal{B}_{0}}\boldsymbol{P}^{\text{p}}_{n+1}\cdot\nabla_{\boldsymbol{X}}M^{i}\,\mbox{d}V=\boldsymbol{0}\,.$
As mentioned earlier, pili-pili attractive forces are absent and the repulsive
forces between the cells are reflected in the passive stress. To cope with the
mixed nature of the problem, i.e. to respect the LBB inf-sup condition, we
allow interpolation of the cell number density $c_{t}$ and the deformation map
$\boldsymbol{y}$ with different polynomial orders. We exploit the Taylor–Hood
element for our finite element implementation which is linear in $c_{t}$ and
quadratic in $\boldsymbol{y}$. Subsequently, the corresponding shape functions
$N^{i}$ and $M^{i}$ are polynomials of different orders. Figure 8 provides an
illustrative schematic of the Taylor–Hood elements adopted in our simulations.
Figure 9: Different snapshots of the cell network evolution in the absence of
the active stress. The first and second rows represent the cell number density
distribution in undeformed and deformed configurations, respectively. The
third and the fourth rows illustrate how an Eulerian observer would see the
cell number density evolution through a fixed window associated with the
Eulerian framework. The dynamics of the cells is driven only by the steric
repulsion forces and the cell-substrate friction. Uniform color throughout the
domain indicates uniform homogeneous distribution of the cells.
Figure 9 renders six different snapshots of the cell density evolution at
different times. The boundary and the initial conditions are similar to the
previous case study. Similarly, the undeformed and deformed bodies are provided
in the first and second rows, while the third and fourth rows are associated
with the Eulerian window. Since there exists no pili-pili attractive force, as
time elapses, the cells tend to repel each other until they reach the state
of equilibrium. This equilibrium state is obtained when the cells are
uniformly distributed throughout the domain. It is observed that throughout the
time evolution, the difference between the cell densities tends to vanish,
which signifies the uniformity of the cell distribution. Moreover, due to the
absence of the active stress, the domain does not undergo any considerable
deformation and the deformation gradient remains close to the identity.
In Fig. 10, a parametric study is carried out to investigate the influence of
the friction coefficient $\xi$ and cell bulk modulus $E$ on the cell number
density evolution. Each figure renders the difference between the maximum and
the minimum cell number density versus time. The left figure studies the
effects of the friction coefficient with $E=1\,\text{N}/\text{m}^{2}$ and
$R=1\,\mu\text{m}$ and the right figure studies the effects of the cell bulk
modulus with $\xi=10\,\text{Ns}/\text{m}$ and $R=1\,\mu\text{m}$. Smaller
density difference in each figure indicates more uniform distribution. It is
observed that increasing the friction coefficient delays the uniform
distribution, which is understandable since the friction impedes the movement
of the cells. Thus, for larger friction coefficients, the cells require
more time to disperse due to the repulsive forces. On the other hand, we
observe that a uniform distribution is reached at a faster rate for larger
cell bulk modulus. This is also justifiable since a larger bulk modulus
indicates more rigid cells, which gives rise to their quicker dispersion after
collisions with other cells.
Figure 10: A parametric study on the cell density evolution in the absence of
the active stress. The $y$-axis shows the difference between the maximum and
minimum cell density throughout the domain as a measure of homogeneity of the
cell distribution while the $x$-axis shows the elapsed time. The left figure
renders the effects of friction coefficient on the cell density evolution when
$E=1\,\text{N}/\text{m}^{2}$ and $R=1\,\mu\text{m}$. The right figure renders
the effects of cell bulk modulus on the cell density evolution when
$R=1\,\mu\text{m}$ and $\xi=10\,\text{Ns}/\text{m}$.
### 4.2 Overall cell number conservation
It is noteworthy that our mixed four-field
$\left(c_{t},\boldsymbol{y},p_{0},\boldsymbol{g}\right)$ implementation
conserves the total cell number in a Lagrangian solution domain exactly. To
ensure the discrete conservation of the total cell number throughout the
domain, we rewrite the strong form of the continuity equation (9), integrate
it over the referential domain and expand it as
$\int_{\mathcal{B}_{0}}\dot{\overline{Jc_{t}}}\,\mbox{d}V=\int_{\mathcal{B}_{0}}\left[\dot{J}c_{t}+\dot{c_{t}}J\right]\,\mbox{d}V\approx\int_{\mathcal{B}_{0}}\left[\displaystyle\frac{J_{n+1}-J_{n}}{\Delta
t}c_{t_{n}}+\displaystyle\frac{c_{t_{n+1}}-c_{t_{n}}}{\Delta
t}J_{n+1}\right]\,\mbox{d}V=0\,.$ (51)
Note that, to satisfy the overall cell number conservation in the last
integral, the cell density in the first term must be chosen from step $n$
whereas the Jacobian in the second term must be chosen from step $n+1$. With
these choices the integrand telescopes,
$\left[J_{n+1}-J_{n}\right]c_{t_{n}}+\left[c_{t_{n+1}}-c_{t_{n}}\right]J_{n+1}=J_{n+1}c_{t_{n+1}}-J_{n}c_{t_{n}}$,
so that Eq. (51) reads
$\int_{\mathcal{B}_{0}}J_{n+1}c_{t_{n+1}}\mbox{d}V-\int_{\mathcal{B}_{0}}J_{n}c_{t_{n}}\,\mbox{d}V=0\,,$
(52)
which can in turn be written as the integrals over the spatial domain as
$\int_{\mathcal{B}_{t_{n+1}}}c_{t_{n+1}}\mbox{d}v_{n+1}=\int_{\mathcal{B}_{t_{n}}}c_{t_{n}}\mbox{d}v_{n}\,,$
(53)
which is equivalent to the conservation of the total cell number. Figure 11
illustrates the percentage of the overall cell number change versus time for
the two case studies associated with Figs. 5 and 9. Notably, no cell number
loss occurs within our system: the variation of the total cell number remains
zero without any fluctuation; see [82, 83] for further discussions of mass
loss/production in Lagrangian formulations.
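The telescoping structure behind the chosen evaluation in (51) can be spot-checked numerically, here with random, purely illustrative nodal data:

```python
import numpy as np

# Check: [J_{n+1}-J_n] c_{t_n} + [c_{t_{n+1}}-c_{t_n}] J_{n+1}
#        = J_{n+1} c_{t_{n+1}} - J_n c_{t_n}   (exactly, for any data)
rng = np.random.default_rng(3)
Jn, Jn1 = 1.0 + rng.random(5), 1.0 + rng.random(5)
cn, cn1 = rng.random(5), rng.random(5)
lhs = (Jn1 - Jn) * cn + (cn1 - cn) * Jn1
print(np.allclose(lhs, Jn1 * cn1 - Jn * cn))  # True
```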
Figure 11: Illustration of the discrete conservation of the overall cell
number for both case studies associated with Figs. 5 and 9. The $x$ axis
represents the elapsed time whereas the $y$ axis represents the percentage of
change in the overall cell number. The left figure corresponds to the case
where all forces are present in the dynamics of the system whereas the right
figure corresponds to the case where pili-pili mediated forces are absent.
### 4.3 Colony coalescence
Figure 12: Illustration of the colony coalescence process. Five snapshots of
the coalescence process in both undeformed and deformed configurations are
depicted. The parameter $h$ measures the bridge length between the two merging
colonies. The center plots render the effects of $k_{\text{on}}$ and
$f^{\text{p}}$ on the evolution of the bridge between the colonies.
In this section, two colonies are put next to each other and the process of
their coalescence is examined. In doing so, we define a new parameter $h$
which measures the bridge length between the two merging colonies. From a
computational point of view, we define the boundary of the bridge where the
cell number density gradient in the $y$ direction is $96\%$ of its maximum
value throughout the domain. Figure 12 investigates the influence of the pili-
pili binding rate $k_{\text{on}}$ and the pili-pili mediated attractive force
$f^{\text{p}}$ on the size of the bridge. Five snapshots of the coalescence
process in both undeformed and deformed configurations are shown on top and
bottom, respectively. The snapshots correspond to the green line in the left
plot with $k_{\text{on}}=0.021\,\text{s}^{-1}$ and
$f^{\text{p}}=18\,\text{pN}$. The left plot renders the bridge length versus
time for three different values of the pili-pili binding rate with
$f^{\text{p}}=18\,\text{pN}$. The blue line corresponds to
$k_{\text{on}}=0.019\,\text{s}^{-1}$, the green line corresponds to
$k_{\text{on}}=0.021\,\text{s}^{-1}$ and the red line corresponds to
$k_{\text{on}}=0.023\,\text{s}^{-1}$. It is observed that the bridge grows
with a faster rate as we increase the pili-pili binding rate. The right plot
renders the bridge length versus time for three different values of the pili-
pili mediated attractive forces with $k_{\text{on}}=0.023\,\text{s}^{-1}$. The
blue line corresponds to $f^{\text{p}}=18\,\text{pN}$, the green line
corresponds to $f^{\text{p}}=31\,\text{pN}$ and the red line corresponds to
$f^{\text{p}}=43\,\text{pN}$. Similarly, larger values of $f^{\text{p}}$
result in a faster growth of the bridge. These two observations are
justifiable since larger binding rates and larger pili-pili attractive forces
imply quicker bound pili formation and stronger attractive forces between the
cells, respectively, and thus they yield quicker aggregate formation. In both
cases, the growth starts at a smaller rate due to the action of the passive
stress. As time evolves further, the pili-pili mediated forces become more
dominant, which yields higher growth rates.
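A sketch of how $h$ could be extracted from a discrete density field is given below; the $96\%$ threshold is taken from the text, while the grid layout, the synthetic two-colony field and the convention of measuring the extent of the thresholded points are illustrative assumptions:

```python
import numpy as np

def bridge_length(c, x, y, frac=0.96):
    """Illustrative bridge measure: mark grid points where |dc/dy| reaches
    96% of its maximum and return the x-extent of those boundary points."""
    gy = np.abs(np.gradient(c, y, axis=0))       # rows assumed to index y
    _, cols = np.nonzero(gy >= frac * gy.max())
    return x[cols].max() - x[cols].min()

# Synthetic field: two Gaussian "colonies" placed next to each other
x = np.linspace(-10.0, 10.0, 201)
y = np.linspace(-10.0, 10.0, 201)
X, Y = np.meshgrid(x, y)
c = np.exp(-((X - 4.0)**2 + Y**2) / 8.0) + np.exp(-((X + 4.0)**2 + Y**2) / 8.0)
print(bridge_length(c, x, y))                    # measured extent h
```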
### 4.4 Aggregate position and periodicity
Figure 13: Formation of aggregates under different initial random
distributions. In each block, the top figures render the undeformed
configuration whereas the bottom figures render the deformed bodies. Although
the initial distributions are different, aggregates of similar size and the
same degree of phase separation are obtained. Figure 14: Illustration of
the periodicity of the domain. Although the aggregates are formed in different
positions in the domain, due to periodicity, a single centered aggregate can
always be extracted if enough samples are put next to each other.
This section shows the formation of aggregates under different initial random
distributions of cells. Figure 13 renders three different cases with various
initial random distribution of cells. Each block contains the undeformed and
deformed configurations with five different snapshots from the domain. It is
observed that, despite different initial conditions, similar aggregates with
the same size and degree of the phase separation are obtained at the end. It
is noteworthy that due to the different initial distribution, the bodies have
deformed differently in order to yield the same aggregate. Although the
aggregates are formed in different locations in the three cases shown in Fig.
13, due to periodicity, a single centered aggregate can always be extracted if
enough samples are put next to each other. Figure 14 sheds light on this issue
more vividly. The final snapshots of the three cases in Fig. 13 are shown on
the top row in Fig. 14. In the middle row, these samples are put together in
order to form a periodic structure. In the bottom row, it is shown that
similar centered aggregates can always be extracted.
## 5 Summary and outlook
A continuum framework to model and simulate the behavior of biological
cellular aggregates has been established. The process of micro-colony
formation has been described as an active phase separation phenomenon. It
turns out that employing the Lagrangian approach yields a considerable
simplification of the equations, in particular for the active stress time
evolution, as compared to the previously introduced Eulerian approach [1]. In
addition to satisfying the conservation of the total cell number, our proposed
Lagrangian formulation enables implicit time integration, which considerably
increases the computational robustness. We demonstrate that three major forces
determine the dynamics of the cells in an aggregate network: the pili-pili
mediated attractive forces, the steric repulsion forces and the cell-substrate
friction. In the absence of pili-pili mediated forces, the repulsive forces
simply distribute the cells uniformly throughout the domain whereas in the
presence of the pili-pili mediated forces, we observe a phase separation
leading to the formation of micro-colonies. A parametric study has been
carried out to study the influence of various parameters on the cell density
behavior. Our proposed methodology furnishes a general framework for the
continuum modeling of the non-equilibrium dynamics of dense cellular
aggregates. We believe that this contribution provides significant insights
towards the dynamics of cell aggregates which in turn can be exploited to
better understand the behavior of infectious diseases. Further extensions of
this work include the analysis of different cell species, as well as the
examination of other biological systems with pronounced cell-matrix interactions.
## Acknowledgment
Soheil Firooz and Paul Steinmann gratefully acknowledge the support provided
by the EAM cluster. Soheil Firooz would also like to thank Hui–Shun Kuan for
fruitful discussions regarding the parametric study. Vasily Zaburdaev would
like to acknowledge the support of the Volkswagen Foundation “Life?” initiative.
## Appendix A Detailed derivations of the weak forms
This section provides further details regarding the intermediate steps in the
derivation of the linear momentum balance equation (23) as follows
$\displaystyle\int_{\mathcal{B}_{0}}\nabla_{\boldsymbol{X}}\cdot$
$\displaystyle\boldsymbol{P}^{\text{a}}\cdot\delta\boldsymbol{y}\,\mbox{d}V+\int_{\mathcal{B}_{0}}\nabla_{\boldsymbol{X}}\cdot\boldsymbol{P}^{\text{p}}\cdot\delta\boldsymbol{y}\,\mbox{d}V-\int_{\mathcal{B}_{0}}\xi
Jc_{t}\dot{\boldsymbol{y}}\cdot\delta\boldsymbol{y}\,\mbox{d}V$ (54)
$\displaystyle=$
$\displaystyle\int_{\mathcal{B}_{0}}\nabla_{\boldsymbol{X}}\cdot\left(\boldsymbol{P}^{a^{T}}\cdot\delta\boldsymbol{y}\right)\,\mbox{d}V-\int_{\mathcal{B}_{0}}\boldsymbol{P}^{\text{a}}:\nabla_{\boldsymbol{X}}\delta\boldsymbol{y}\,\mbox{d}V+\int_{\mathcal{B}_{0}}\nabla_{\boldsymbol{X}}\cdot\left(\boldsymbol{P}^{p^{T}}\cdot\delta\boldsymbol{y}\right)\,\mbox{d}V-\int_{\mathcal{B}_{0}}\boldsymbol{P}^{\text{p}}:\nabla_{\boldsymbol{X}}\delta\boldsymbol{y}\,\mbox{d}V-\int_{\mathcal{B}_{0}}\xi
Jc_{t}\dot{\boldsymbol{y}}\cdot\delta\boldsymbol{y}\,\mbox{d}V$
$\displaystyle=$
$\displaystyle\int_{\partial\mathcal{B}_{0}}\left[\boldsymbol{P}^{a^{T}}\cdot\delta\boldsymbol{y}\right]\cdot\boldsymbol{N}\,\mbox{d}A-\int_{\mathcal{B}_{0}}\boldsymbol{P}^{\text{a}}:\nabla_{\boldsymbol{X}}\delta\boldsymbol{y}\,\mbox{d}V+\int_{\partial\mathcal{B}_{0}}\left[\boldsymbol{P}^{p^{T}}\cdot\delta\boldsymbol{y}\right]\cdot\boldsymbol{N}\,\mbox{d}A-\int_{\mathcal{B}_{0}}\boldsymbol{P}^{\text{p}}:\nabla_{\boldsymbol{X}}\delta\boldsymbol{y}\,\mbox{d}V-\int_{\mathcal{B}_{0}}\xi Jc_{t}\dot{\boldsymbol{y}}\cdot\delta\boldsymbol{y}\,\mbox{d}V$
$\displaystyle=$
$\displaystyle\int_{\partial\mathcal{B}_{0}}\delta\boldsymbol{y}\cdot\boldsymbol{P}^{\text{a}}\cdot\boldsymbol{N}\,\mbox{d}A-\int_{\mathcal{B}_{0}}\boldsymbol{P}^{\text{a}}:\nabla_{\boldsymbol{X}}\delta\boldsymbol{y}\,\mbox{d}V+\int_{\partial\mathcal{B}_{0}}\delta\boldsymbol{y}\cdot\boldsymbol{P}^{\text{p}}\cdot\boldsymbol{N}\,\mbox{d}A-\int_{\mathcal{B}_{0}}\boldsymbol{P}^{\text{p}}:\nabla_{\boldsymbol{X}}\delta\boldsymbol{y}\,\mbox{d}V-\int_{\mathcal{B}_{0}}\xi Jc_{t}\dot{\boldsymbol{y}}\cdot\delta\boldsymbol{y}\,\mbox{d}V$
$\displaystyle=$
$\displaystyle\int_{\partial\mathcal{B}_{0}}\delta\boldsymbol{y}\cdot\boldsymbol{t}^{\text{a}}\,\mbox{d}A-\int_{\mathcal{B}_{0}}\boldsymbol{P}^{\text{a}}:\nabla_{\boldsymbol{X}}\delta\boldsymbol{y}\,\mbox{d}V+\int_{\partial\mathcal{B}_{0}}\delta\boldsymbol{y}\cdot\boldsymbol{t}^{\text{p}}\,\mbox{d}A-\int_{\mathcal{B}_{0}}\boldsymbol{P}^{\text{p}}:\nabla_{\boldsymbol{X}}\delta\boldsymbol{y}\,\mbox{d}V-\int_{\mathcal{B}_{0}}\xi Jc_{t}\dot{\boldsymbol{y}}\cdot\delta\boldsymbol{y}\,\mbox{d}V=0\,.$
## Appendix B Detailed derivations of the active stress
In this section, the derivation of the material time derivative of the active
Piola–Kirchhoff stress is provided in detail. To begin, the Lie time
derivative of the Kirchhoff stress is obtained as follows
$\displaystyle\mathcal{L}_{t}\boldsymbol{\tau}^{\text{a}}$
$\displaystyle=\boldsymbol{F}\cdot\left[\dot{\overline{\boldsymbol{F}^{-1}\cdot\boldsymbol{\tau}^{\text{a}}\cdot\boldsymbol{F}^{-T}}}\right]\cdot\boldsymbol{F}^{T}=\boldsymbol{F}\cdot\left[\dot{\boldsymbol{F}^{-1}}\cdot\boldsymbol{\tau}^{\text{a}}\cdot\boldsymbol{F}^{-T}+\boldsymbol{F}^{-1}\cdot\dot{\boldsymbol{\tau}^{\text{a}}}\cdot\boldsymbol{F}^{-T}+\boldsymbol{F}^{-1}\cdot\boldsymbol{\tau}^{\text{a}}\cdot\dot{\boldsymbol{F}^{-T}}\right]\cdot\boldsymbol{F}^{T}$
(55)
$\displaystyle=\boldsymbol{F}\cdot\left[-\boldsymbol{F}^{-1}\cdot\boldsymbol{l}\cdot\boldsymbol{\tau}^{\text{a}}\cdot\boldsymbol{F}^{-T}+\boldsymbol{F}^{-1}\cdot\dot{\boldsymbol{\tau}^{\text{a}}}\cdot\boldsymbol{F}^{-T}-\boldsymbol{F}^{-1}\cdot\boldsymbol{\tau}^{\text{a}}\cdot\boldsymbol{l}^{T}\cdot\boldsymbol{F}^{-T}\right]\cdot\boldsymbol{F}^{T}$
$\displaystyle=\dot{\boldsymbol{\tau}^{\text{a}}}-\boldsymbol{l}\cdot\boldsymbol{\tau}^{\text{a}}-\boldsymbol{\tau}^{\text{a}}\cdot\boldsymbol{l}^{T}=\dot{\boldsymbol{\tau}^{\text{a}}}-\boldsymbol{l}\cdot{\boldsymbol{\tau}^{\text{a}}}^{T}-\boldsymbol{\tau}^{\text{a}}\cdot\boldsymbol{l}^{T}=\dot{\boldsymbol{\tau}^{\text{a}}}-2\left[\boldsymbol{l}\cdot\boldsymbol{\tau}^{\text{a}}\right]^{\text{sym}}\,.$
Afterwards, using the relation
$\boldsymbol{S}^{\text{a}}=\boldsymbol{F}^{-1}\cdot\boldsymbol{\tau}^{\text{a}}\cdot\boldsymbol{F}^{-T}$
one could write
$\displaystyle\dot{\boldsymbol{S}^{\text{a}}}=-\frac{1}{\ell_{0}p_{0}f^{\text{p}}}\left[\left[\boldsymbol{F}\cdot\boldsymbol{S}^{\text{a}}\cdot\boldsymbol{F}^{T}\right]:\boldsymbol{l}^{T}\right]\boldsymbol{S}^{\text{a}}+\boldsymbol{S}^{\text{f}}-k_{\text{off}}\,\boldsymbol{S}^{\text{a}}=-\frac{1}{\ell_{0}p_{0}f^{\text{p}}}\left[\boldsymbol{S}^{\text{a}}:\left[\boldsymbol{F}^{T}\cdot\boldsymbol{l}^{T}\cdot\boldsymbol{F}\right]\right]\boldsymbol{S}^{\text{a}}+\boldsymbol{S}^{\text{f}}-k_{\text{off}}\,\boldsymbol{S}^{\text{a}}\,.$
(56)
Since the second Piola–Kirchhoff stress is a symmetric tensor, for an arbitrary
second-order tensor $\boldsymbol{A}$ we can write
$\boldsymbol{S}:\boldsymbol{A}=\boldsymbol{S}:\left[\displaystyle\frac{1}{2}\left[\boldsymbol{A}+\boldsymbol{A}^{T}\right]\right]\,.$
(57)
Thus, Eq. (56) can be rewritten as
$\displaystyle\dot{\boldsymbol{S}^{\text{a}}}$
$\displaystyle=-\frac{1}{\ell_{0}p_{0}f^{\text{p}}}\left[\boldsymbol{S}^{\text{a}}:\left[\boldsymbol{F}^{T}\cdot\boldsymbol{l}^{T}\cdot\boldsymbol{F}\right]\right]\boldsymbol{S}^{\text{a}}+\boldsymbol{S}^{\text{f}}-k_{\text{off}}\,\boldsymbol{S}^{\text{a}}$
(58)
$\displaystyle=-\frac{1}{\ell_{0}p_{0}f^{\text{p}}}\left[\boldsymbol{S}^{\text{a}}:\frac{1}{2}\left[\left[\boldsymbol{F}^{T}\cdot\boldsymbol{l}^{T}\cdot\boldsymbol{F}\right]+\left[\boldsymbol{F}^{T}\cdot\boldsymbol{l}^{T}\cdot\boldsymbol{F}\right]^{T}\right]\right]\boldsymbol{S}^{\text{a}}+\boldsymbol{S}^{\text{f}}-k_{\text{off}}\,\boldsymbol{S}^{\text{a}}$
$\displaystyle=-\frac{1}{\ell_{0}p_{0}f^{\text{p}}}\left[\boldsymbol{S}^{\text{a}}:\left[\boldsymbol{F}^{T}\cdot\boldsymbol{l}^{\text{sym}}\cdot\boldsymbol{F}\right]\right]\boldsymbol{S}^{\text{a}}+\boldsymbol{S}^{\text{f}}-k_{\text{off}}\,\boldsymbol{S}^{\text{a}}\,.$
Finally, utilizing the relation
$\boldsymbol{F}^{T}\cdot[\boldsymbol{l}]^{\text{sym}}\cdot\boldsymbol{F}=\boldsymbol{F}^{T}\cdot\left[\displaystyle\frac{1}{2}\left[\boldsymbol{l}^{T}+\boldsymbol{l}\right]\right]\cdot\boldsymbol{F}=\displaystyle\frac{1}{2}\left[\boldsymbol{F}^{T}\cdot\boldsymbol{l}^{T}\cdot\boldsymbol{F}+\boldsymbol{F}^{T}\cdot\boldsymbol{l}\cdot\boldsymbol{F}\right]=\displaystyle\frac{1}{2}\left[\dot{\boldsymbol{F}^{T}}\cdot\boldsymbol{F}+\boldsymbol{F}^{T}\cdot\dot{\boldsymbol{F}}\right]=\dot{\boldsymbol{E}}\,,$
(59)
we can derive the fully Lagrangian form of the active Piola–Kirchhoff stress
as
$\dot{\boldsymbol{S}^{\text{a}}}=-\displaystyle\frac{1}{\ell_{0}p_{0}f^{\text{p}}}\left[\boldsymbol{S}^{\text{a}}:\dot{\boldsymbol{E}}\right]\boldsymbol{S}^{\text{a}}+\boldsymbol{S}^{\text{f}}-k_{\text{off}}\,\boldsymbol{S}^{\text{a}}\,.$
(60)
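The symmetrization step used above, namely $\boldsymbol{l}\cdot\boldsymbol{\tau}^{\text{a}}+\boldsymbol{\tau}^{\text{a}}\cdot\boldsymbol{l}^{T}=2\left[\boldsymbol{l}\cdot\boldsymbol{\tau}^{\text{a}}\right]^{\text{sym}}$ for symmetric $\boldsymbol{\tau}^{\text{a}}$, can be verified numerically with random illustrative data:

```python
import numpy as np

# Check l.tau + tau.l^T = (l.tau) + (l.tau)^T for a symmetric tau
rng = np.random.default_rng(2)
l = rng.standard_normal((2, 2))
tau = rng.standard_normal((2, 2))
tau = 0.5 * (tau + tau.T)                # enforce symmetry of tau
lhs = l @ tau + tau @ l.T
rhs = (l @ tau) + (l @ tau).T            # equals 2 [l.tau]^sym
print(np.allclose(lhs, rhs))             # True, since tau = tau^T
```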
## Appendix C Time integration of the active stress
In this section we detail the time integration technique used to
calculate the active second Piola–Kirchhoff stress. The non-linear relation
for the active second Piola–Kirchhoff stress reads
$\dot{\boldsymbol{S}^{\text{a}}}=-\displaystyle\frac{1}{\ell_{0}p_{0}f^{\text{p}}}\left[\boldsymbol{S}^{\text{a}}:\dot{\boldsymbol{E}}\right]\boldsymbol{S}^{\text{a}}+\boldsymbol{S}^{\text{f}}-k_{\text{off}}\,\boldsymbol{S}^{\text{a}}\,,$
(61)
which could be written in the form
$\displaystyle\frac{\boldsymbol{S}^{\text{a}}_{n+1}-\boldsymbol{S}^{\text{a}}_{n}}{\Delta
t}=-\displaystyle\frac{1}{\ell_{0}p_{0}f^{\text{p}}\Delta
t}\boldsymbol{S}^{\text{a}}_{n+1}\left[\boldsymbol{S}^{\text{a}}_{n+1}:\left[\boldsymbol{E}_{n+1}-\boldsymbol{E}_{n}\right]\right]+\boldsymbol{S}_{n+1}^{\text{f}}-k_{\text{off}}\boldsymbol{S}^{\text{a}}_{n+1}\,.$
(62)
To linearize this equation, we put all the terms on one side and treat them as
a residuum $\boldsymbol{R}$ that must vanish
$\boldsymbol{R}=\displaystyle\frac{\boldsymbol{S}^{\text{a}}_{n+1}-\boldsymbol{S}^{\text{a}}_{n}}{\Delta
t}+\displaystyle\frac{1}{\ell_{0}p_{0}f^{\text{p}}\Delta
t}\boldsymbol{S}^{\text{a}}_{n+1}\left[\boldsymbol{S}^{\text{a}}_{n+1}:\left[\boldsymbol{E}_{n+1}-\boldsymbol{E}_{n}\right]\right]-\boldsymbol{S}_{n+1}^{\text{f}}+k_{\text{off}}\boldsymbol{S}^{\text{a}}_{n+1}\,.$
(63)
The linearization of $\boldsymbol{R}$ reads
$\boldsymbol{R}_{n+1}\approx\text{Lin}\,\boldsymbol{R}_{n+1}=\boldsymbol{R}_{n}+\displaystyle\frac{\partial\boldsymbol{R}}{\partial\boldsymbol{S}^{\text{a}}}\big{\lvert}_{n}:\Delta\boldsymbol{S}^{\text{a}}_{n}\stackrel{{\scriptstyle!}}{{=}}\boldsymbol{0}\,.$
(64)
The tangent reads
$\displaystyle\text{K}=\frac{\partial\boldsymbol{R}}{\partial\boldsymbol{S}^{\text{a}}}$
$\displaystyle=\left[\frac{1}{\Delta
t}+k_{\text{off}}\right]\text{I}^{\text{sym}}+\frac{1}{\ell_{0}p_{0}f^{\text{p}}\Delta
t}\bigg{[}\left[\boldsymbol{S}^{\text{a}}_{n+1}:\left[\boldsymbol{E}_{n+1}-\boldsymbol{E}_{n}\right]\right]\text{I}^{\text{sym}}+\boldsymbol{S}^{\text{a}}_{n+1}\otimes\left[\boldsymbol{E}_{n+1}-\boldsymbol{E}_{n}\right]\bigg{]}$
(65) $\displaystyle=\left[\frac{1}{\Delta
t}+k_{\text{off}}+\frac{1}{\ell_{0}p_{0}f^{\text{p}}\Delta
t}\left[\boldsymbol{S}^{\text{a}}_{n+1}:\left[\boldsymbol{E}_{n+1}-\boldsymbol{E}_{n}\right]\right]\right]\text{I}^{\text{sym}}+\frac{1}{\ell_{0}p_{0}f^{\text{p}}\Delta
t}\bigg{[}\boldsymbol{S}^{\text{a}}_{n+1}\otimes\left[\boldsymbol{E}_{n+1}-\boldsymbol{E}_{n}\right]\bigg{]}\,,$
with $\text{I}^{\text{sym}}$ being the symmetric fourth-order identity tensor which
reads
$\left[\text{I}^{\text{sym}}\right]_{ijkl}=1/2\left[\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right]$.
Finally, an iterative Newton–Raphson scheme is employed to drive
$\boldsymbol{R}_{n+1}$ to zero and thus to solve for $\boldsymbol{S}^{\text{a}}_{n+1}$ as
$\left[\boldsymbol{S}^{\text{a}}\right]_{n+1}=\left[\boldsymbol{S}^{\text{a}}\right]_{n}+\left[\Delta\boldsymbol{S}^{\text{a}}\right]\quad\text{with}\quad\left[\Delta\boldsymbol{S}^{\text{a}}\right]_{ij}=-\left[\text{K}^{-1}\right]_{ijkl}\left[\boldsymbol{R}\right]_{kl}\,.$
(66)
To calculate $\text{K}^{-1}$ we use the Sherman–Morrison formula which states
that for an arbitrary fourth-order tensor A that can be written in the form
$\text{A}=\beta\text{B}+\alpha\boldsymbol{C}\otimes\boldsymbol{D}\,,$ (67)
with $\alpha$ and $\beta$ being scalars, $\boldsymbol{C}$ and $\boldsymbol{D}$
being second-order tensors and B being a fourth-order tensor, the inverse of A
reads
$\text{A}^{-1}=\displaystyle\frac{1}{\beta}\text{B}^{-1}-\displaystyle\frac{\alpha}{\beta^{2}+\alpha\beta\boldsymbol{D}:\text{B}^{-1}:\boldsymbol{C}}\left[\text{B}^{-1}:\boldsymbol{C}\otimes\boldsymbol{D}:\text{B}^{-1}\right]\,.$
(68)
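The formula can be spot-checked numerically by representing the fourth-order tensors as matrices acting on flattened second-order tensors; the following snippet is such an illustrative finite-dimensional check, not part of the formulation itself:

```python
import numpy as np

# Verify A^{-1} = B^{-1}/beta - alpha/(beta^2 + alpha*beta D:B^{-1}:C)
#                 * [B^{-1}:C (x) D:B^{-1}]   against a direct inverse.
rng = np.random.default_rng(0)
n = 4                                    # e.g. 2x2 tensors flattened to R^4
B = np.eye(n) + 0.1 * rng.standard_normal((n, n))
C, D = rng.standard_normal(n), rng.standard_normal(n)
alpha, beta = 0.7, 1.3

A = beta * B + alpha * np.outer(C, D)
Binv = np.linalg.inv(B)
Ainv = (Binv / beta
        - alpha / (beta**2 + alpha * beta * (D @ Binv @ C))
        * np.outer(Binv @ C, D @ Binv))
print(np.allclose(Ainv, np.linalg.inv(A)))  # True
```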
To proceed, we write our tangent in the form of Eq. (67) as
$\text{K}=\beta\text{I}^{\text{sym}}+\alpha\bigg{[}\boldsymbol{S}^{\text{a}}_{n+1}\otimes\left[\boldsymbol{E}_{n+1}-\boldsymbol{E}_{n}\right]\bigg{]}\,,$
(69)
with
$\alpha=\displaystyle\frac{1}{\ell_{0}p_{0}f^{\text{p}}\Delta
t}\qquad\text{and}\qquad\beta=\left[\displaystyle\frac{1}{\Delta
t}+k_{\text{off}}+\displaystyle\frac{1}{\ell_{0}p_{0}f^{\text{p}}\Delta
t}\left[\boldsymbol{S}^{\text{a}}_{n+1}:\left[\boldsymbol{E}_{n+1}-\boldsymbol{E}_{n}\right]\right]\right]\,.$
(70)
Therefore, using the identity
$\text{I}^{\text{sym}^{-1}}=\text{I}^{\text{sym}}$, $\text{K}^{-1}$ reads
$\text{K}^{-1}=\displaystyle\frac{1}{\beta}\text{I}^{\text{sym}}-\displaystyle\frac{\alpha}{\beta^{2}+\alpha\beta\left[\boldsymbol{E}_{n+1}-\boldsymbol{E}_{n}\right]:\text{I}^{\text{sym}}:\boldsymbol{S}^{\text{a}}_{n+1}}\left[\text{I}^{\text{sym}}:\boldsymbol{S}^{\text{a}}_{n+1}\otimes\left[\boldsymbol{E}_{n+1}-\boldsymbol{E}_{n}\right]:\text{I}^{\text{sym}}\right]\,,$
(71)
or in index notation
$\left[\text{K}^{-1}\right]_{ijkl}=\displaystyle\frac{1}{\beta}\left[\text{I}^{\text{sym}}\right]_{ijkl}-\displaystyle\frac{\alpha}{\beta^{2}+\alpha\beta\left[\boldsymbol{E}_{n+1}-\boldsymbol{E}_{n}\right]_{mn}\left[\text{I}^{\text{sym}}\right]_{mnrs}\left[\boldsymbol{S}^{\text{a}}_{n+1}\right]_{rs}}\left[\left[\text{I}^{\text{sym}}\right]_{ijmn}\left[\boldsymbol{S}^{\text{a}}_{n+1}\otimes\left[\boldsymbol{E}_{n+1}-\boldsymbol{E}_{n}\right]\right]_{mnrs}\left[\text{I}^{\text{sym}}\right]_{rskl}\right]\,.$
(72)
## Appendix D Some useful relations
This section provides some useful derivatives in index notation which are
helpful for the derivation of the tangents in the finite element
implementation.
$\left[\displaystyle\frac{\partial
J}{\partial\boldsymbol{F}}\right]_{ij}=J\left[\boldsymbol{F}^{-T}\right]_{ij}\,,\qquad\left[\displaystyle\frac{\partial\boldsymbol{F}}{\partial\boldsymbol{F}}\right]_{ijkl}=\delta_{ik}\delta_{jl}\,,\qquad\left[\displaystyle\frac{\partial\boldsymbol{F}^{T}}{\partial\boldsymbol{F}}\right]_{ijkl}=\delta_{il}\delta_{jk}\,,\qquad\left[\displaystyle\frac{\partial\boldsymbol{F}^{-1}}{\partial\boldsymbol{F}}\right]_{ijkl}=-\left[\boldsymbol{F}^{-1}\right]_{ik}\left[\boldsymbol{F}^{-T}\right]_{jl}\,,$
$\left[\displaystyle\frac{\partial\boldsymbol{F}^{-T}}{\partial\boldsymbol{F}}\right]_{ijkl}=-\left[\boldsymbol{F}^{-1}\right]_{il}\left[\boldsymbol{F}^{-T}\right]_{jk}\,,\qquad\left[\displaystyle\frac{\partial\boldsymbol{B}}{\partial\boldsymbol{F}}\right]_{ijkl}=\left[\displaystyle\frac{\partial\boldsymbol{F}^{-1}}{\partial\boldsymbol{F}}\right]_{imkl}\left[\boldsymbol{F}^{-T}\right]_{mj}+\left[\boldsymbol{F}^{-1}\right]_{im}\left[\displaystyle\frac{\partial\boldsymbol{F}^{-T}}{\partial\boldsymbol{F}}\right]_{mjkl}\,,$
$\left[\displaystyle\frac{\partial\boldsymbol{E}}{\partial\boldsymbol{F}}\right]_{ijkl}=\displaystyle\frac{1}{2}\left[\left[\displaystyle\frac{\partial\boldsymbol{F}^{T}}{\partial\boldsymbol{F}}\right]_{imkl}\left[\boldsymbol{F}\right]_{mj}+\left[\boldsymbol{F}^{T}\right]_{im}\left[\displaystyle\frac{\partial\boldsymbol{F}}{\partial\boldsymbol{F}}\right]_{mjkl}\right]\,,$
$\left[\displaystyle\frac{\partial\boldsymbol{P}}{\partial\boldsymbol{F}}\right]_{ijkl}=\left[\displaystyle\frac{\partial\boldsymbol{F}}{\partial\boldsymbol{F}}\right]_{imkl}\left[\boldsymbol{S}\right]_{mj}+\left[\boldsymbol{F}\right]_{im}\left[\displaystyle\frac{\partial\boldsymbol{S}}{\partial\boldsymbol{F}}\right]_{mjkl}\,,\qquad\left[\displaystyle\frac{\partial[\boldsymbol{g}\otimes\boldsymbol{g}]}{\partial\boldsymbol{g}}\right]_{ijk}\\!\\!\\!\\!=\left[\left[\boldsymbol{I}\right]_{ik}\left[\boldsymbol{g}\right]_{j}+\left[\boldsymbol{g}\right]_{i}\left[\boldsymbol{I}\right]_{jk}\right]\,,$
$\left[\displaystyle\frac{\partial[\nabla_{\boldsymbol{X}}\,\boldsymbol{g}:\boldsymbol{F}^{-T}]}{\partial\nabla_{\boldsymbol{X}}\,\boldsymbol{g}}\right]_{ij}=\left[\boldsymbol{F}^{-T}\right]_{ij}\,,\qquad\left[\displaystyle\frac{\partial[\nabla_{\boldsymbol{X}}\,\boldsymbol{g}:\boldsymbol{F}^{-T}]}{\partial\,\boldsymbol{F}}\right]_{ij}=\left[\displaystyle\frac{\partial\boldsymbol{F}^{-T}}{\partial\boldsymbol{F}}\right]_{klij}\left[\nabla_{\boldsymbol{X}}\,\boldsymbol{g}\right]_{kl}\,.$
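As an illustrative consistency check of the relations above, the derivative $\left[\partial\boldsymbol{F}^{-1}/\partial\boldsymbol{F}\right]_{ijkl}=-\left[\boldsymbol{F}^{-1}\right]_{ik}\left[\boldsymbol{F}^{-T}\right]_{jl}$ can be verified against finite differences with random illustrative data:

```python
import numpy as np

# Finite-difference check of [dF^{-1}/dF]_{ijkl} = -[F^{-1}]_{ik} [F^{-T}]_{jl}
rng = np.random.default_rng(1)
F = np.eye(2) + 0.3 * rng.standard_normal((2, 2))
Fi = np.linalg.inv(F)
eps, ok = 1e-6, True
for k in range(2):
    for l in range(2):
        dF = np.zeros((2, 2)); dF[k, l] = eps
        num = (np.linalg.inv(F + dF) - Fi) / eps   # numerical derivative
        ana = -np.outer(Fi[:, k], Fi[l, :])        # -[F^{-1}]_{ik}[F^{-1}]_{lj}
        ok = ok and np.allclose(num, ana, atol=1e-4)
print(ok)  # True
```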
## Appendix E Calculation of the derivatives at the element level
In this section, a brief example of calculating the derivatives of a scalar
field $\alpha$, a vector field $\boldsymbol{a}$ and a second-order tensor field
$\boldsymbol{A}$ with respect to the nodal values at the element level is
elaborated.
$\left[\displaystyle\frac{\partial\alpha}{\partial\alpha^{J}}\right]=\displaystyle\frac{\partial\left(\alpha^{S}N^{S}\right)}{\partial\alpha^{J}}=\displaystyle\frac{\partial\alpha^{S}}{\partial\alpha^{J}}N^{S}=\delta^{SJ}N^{S}=N^{J}\,,$
$\left[\displaystyle\frac{\partial\nabla_{\boldsymbol{x}}\alpha}{\partial\alpha^{J}}\right]_{i}=\displaystyle\frac{\partial\left[\nabla_{\boldsymbol{x}}\left(\alpha^{S}N^{S}\right)\right]_{i}}{\partial\alpha^{J}}=\displaystyle\frac{\partial\left[\alpha^{S}\nabla_{\boldsymbol{x}}N^{S}_{i}\right]}{\partial\alpha^{J}}=\displaystyle\frac{\partial\alpha^{S}}{\partial\alpha^{J}}\,\nabla_{\boldsymbol{x}}N^{S}_{i}=\delta^{SJ}\,\nabla_{\boldsymbol{x}}N^{S}_{i}=\left[\nabla_{\boldsymbol{x}}N^{J}\right]_{i}\,,$
$\left[\displaystyle\frac{\partial\boldsymbol{a}}{\partial\boldsymbol{a}^{J}}\right]_{ij}=\displaystyle\frac{\partial\left(\boldsymbol{a}^{S}_{i}N^{S}\right)}{\partial\boldsymbol{a}^{J}_{j}}=\displaystyle\frac{\partial\boldsymbol{a}^{S}_{i}}{\partial\boldsymbol{a}^{J}_{j}}N^{S}=\delta_{ij}\delta^{SJ}N^{S}=\delta_{ij}N^{J}\,,$
$\left[\displaystyle\frac{\partial\nabla_{\boldsymbol{x}}\boldsymbol{a}}{\partial\boldsymbol{a}^{J}}\right]_{ijk}=\displaystyle\frac{\partial\left[\nabla_{\boldsymbol{x}}\left(\boldsymbol{a}^{S}_{i}N^{S}\right)\right]_{j}}{\partial\boldsymbol{a}^{J}_{k}}=\displaystyle\frac{\partial\left[\boldsymbol{a}^{S}_{i}\otimes\nabla_{\boldsymbol{x}}N^{S}_{j}\right]}{\partial\boldsymbol{a}^{J}_{k}}=\displaystyle\frac{\partial\boldsymbol{a}^{S}_{i}}{\partial\boldsymbol{a}^{J}_{k}}\,\nabla_{\boldsymbol{x}}N^{S}_{j}=\delta_{ik}\delta^{SJ}\,\nabla_{\boldsymbol{x}}N^{S}_{j}=\delta_{ik}\,\left[\nabla_{\boldsymbol{x}}N^{J}\right]_{j}\,,$
$\left[\displaystyle\frac{\partial\boldsymbol{A}}{\partial\boldsymbol{A}^{J}}\right]_{ijkl}\\!\\!\\!\\!=\displaystyle\frac{\partial\left(\boldsymbol{A}^{S}_{ij}N^{S}\right)}{\partial\boldsymbol{A}^{J}_{kl}}=\displaystyle\frac{\partial\boldsymbol{A}^{S}_{ij}}{\partial\boldsymbol{A}^{J}_{kl}}N^{S}=\delta_{ik}\delta_{jl}\delta^{SJ}N^{S}=\delta_{ik}\delta_{jl}N^{J}\,,$
$\left[\displaystyle\frac{\partial\nabla_{\boldsymbol{x}}\boldsymbol{A}}{\partial\boldsymbol{A}^{J}}\right]_{ijklm}=\displaystyle\frac{\partial\left[\nabla_{\boldsymbol{x}}\left(\boldsymbol{A}^{S}_{ij}N^{S}\right)\right]_{k}}{\partial\boldsymbol{A}^{J}_{lm}}=\displaystyle\frac{\partial\left[\boldsymbol{A}^{S}_{ij}\otimes\nabla_{\boldsymbol{x}}N^{S}_{k}\right]}{\partial\boldsymbol{A}^{J}_{lm}}=\displaystyle\frac{\partial\boldsymbol{A}^{S}_{ij}}{\partial\boldsymbol{A}^{J}_{lm}}\,\nabla_{\boldsymbol{x}}N^{S}_{k}=\delta_{il}\delta_{jm}\delta^{SJ}\,\nabla_{\boldsymbol{x}}N^{S}_{k}=\delta_{il}\delta_{jm}\,\left[\nabla_{\boldsymbol{x}}N^{J}\right]_{k}\,.$
## References
* [1] H. S. Kuan, W. Pönisch, F. Jülicher, V. Zaburdaev, Continuum Theory of Active Phase Separation in Cellular Aggregates, Phys. Rev. Lett. 126 (2021) 18102.
* [2] Y. Futaki, I. Amimoto, M. Tanaka, T. Ito, Y. Hirano, Discovery of cell aggregate-inducing peptides, Processes 9 (2021) 538.
* [3] L. G. Griffith, M. A. Swartz, Capturing complex 3D tissue physiology in vitro, Nat. Rev. Mol. Cell Biol. 7 (2006) 211–224.
* [4] S. j. Kim, E. M. Kim, M. Yamamoto, H. Park, H. Shin, Engineering Multi-Cellular Spheroids for Tissue Engineering and Regenerative Medicine, Adv. Healthc. Mater. 9 (2020) 1–18.
* [5] F. Pampaloni, E. G. Reynaud, E. H. K. Stelzer, The third dimension bridges the gap between cell culture and live tissue, Nat. Rev. Mol. Cell Biol. 8 (2007) 839–845.
* [6] W. Mueller-Kleiser, Multicellular spheroids, J. Cancer Res. Clin. Oncol. 113 (1987) 101–122.
# Gradient-based Bayesian Experimental Design for Implicit Models using Mutual Information Lower Bounds
Steven Kleinegesse and Michael U. Gutmann, University of Edinburgh
(2021)
###### Abstract
We introduce a framework for Bayesian experimental design (BED) with implicit
models, where the data-generating distribution is intractable but sampling
from it is still possible. In order to find optimal experimental designs for
such models, our approach maximises mutual information lower bounds that are
parametrised by neural networks. By training a neural network on sampled data,
we simultaneously update network parameters and designs using stochastic
gradient-ascent. The framework enables experimental design with a variety of
prominent lower bounds and can be applied to a wide range of scientific tasks,
such as parameter estimation, model discrimination and improving future
predictions. Using a set of intractable toy models, we provide a comprehensive
empirical comparison of prominent lower bounds applied to the aforementioned
tasks. We further validate our framework on a challenging system of stochastic
differential equations from epidemiology.
###### keywords:
[class=MSC] 62K05, 62L05
###### keywords:
Bayesian Experimental Design, Likelihood-Free Inference, Mutual Information, Implicit Models, Parameter Estimation, Model Discrimination
## 1 Introduction
Parametric statistical models allow us to describe and study the behaviour of
natural phenomena and processes. These models attempt to closely match the
underlying process in order to simulate realistic data and, therefore, usually
have a complex form and involve a variety of variables. Consequently, they
tend to be characterised by intractable data-generating distributions
(likelihood functions). Such models are referred to as implicit models and
are increasingly widely used in the natural and medical sciences.
Notable examples include high-energy physics (Agostinelli et al., 2003),
cosmology (Schafer and Freeman, 2012), epidemiology (Corander et al., 2017),
cell biology (Ross et al., 2017) and cognitive science (Palestro et al.,
2018).
Statistical inference is a key component of many scientific goals such as
estimating model parameters, comparing plausible models and predicting future
events. Unfortunately, because the likelihood function for implicit models is
intractable, we have to resort to so-called likelihood-free, or simulation-
based, inference to solve these downstream tasks (see Cranmer et al., 2020 for
a recent overview). Likelihood-free inference has gained much traction
recently, with many methods leveraging advances in machine-learning (e.g.
Gutmann and Corander, 2016; Lueckmann et al., 2017; Järvenpää et al., 2018;
Chen and Gutmann, 2019; Papamakarios et al., 2019; Thomas et al., 2020;
Järvenpää et al., 2021). Ultimately, however, the quality of the statistical
inference within a scientific downstream task depends on the data that are
available in the first place. Because gathering data is often time-consuming
and expensive, we should therefore aim to collect data that allow us to solve
our specific task in the most efficient and appropriate manner possible.
Bayesian experimental design (BED) attempts to solve this problem by
appropriately allocating resources in an experiment (see Ryan et al., 2016 for
a review). Scientific experiments generally involve controllable variables,
called experimental designs $\mathbf{d}$, that affect the data gathering
process. These might, for instance, be measurement times and locations,
initial conditions of a dynamical system, or interventions and stimuli that
are used to perturb a natural process. BED aims to find optimal experimental
designs $\mathbf{d}^{\ast}$ that allow us to solve our scientific goals in the
most efficient way. As part of this, we have to construct and optimise a
utility function $U(\mathbf{d})$ that indicates the value of an experimental
design $\mathbf{d}$ according to the specific task at hand. In order to be
fully-Bayesian, the utility function generally has to be a functional of the
posterior distribution (Ryan et al., 2016). This exacerbates the difficulty of
BED for implicit models, as the involved posterior distributions are
intractable. Proposing appropriate utility functions and devising methods to
compute them efficiently for implicit models, with the help of likelihood-free
inference, has been the focus of much present-day research in the field of BED
(e.g. Drovandi and Pettitt, 2013; Price et al., 2018; Overstall and McGree,
2020).
A popular choice of utility function with deep roots in information theory is
the mutual information (MI; Lindley, 1972) between two random variables, which
describes the uncertainty reduction in one variable when observing the other.
The MI is a standard utility function in BED for models with tractable
likelihoods (e.g. Ryan, 2003; Paninski, 2005; Overstall and Woods, 2017), but
has only recently started to gain traction in BED for implicit models (e.g.
Price et al., 2018; Kleinegesse and Gutmann, 2019; Foster et al., 2019;
Kleinegesse et al., 2020a). In the context of implicit models, the main
difficulties that arise are 1) that estimating the posterior and obtaining
samples from it via likelihood-free inference is expensive and 2) that
gradients of the MI are generally not readily available, complicating its
optimisation.
Kleinegesse and Gutmann (2020b) recently addressed the aforementioned problems
by considering gradient-ascent on mutual information lower bounds.
Importantly, they only considered one type of lower bound in their work, while
there exist other bounds with different potentially helpful properties (see
Poole et al., 2019 for an overview). Moreover, their method only considers the
specific task of parameter estimation, even though other research goals are
often of equal practical importance.
In this paper, we extend the method of Kleinegesse and Gutmann (2020b) to
other lower bounds and scientific aims, thereby devising a general framework
for Bayesian experimental design based on mutual information lower bounds.
Importantly, our framework does not require the use of variational
distributions (as in Foster et al., 2019), but fully relies on parametrisation
by neural networks. We believe that this increases practicality and
generality, allowing scientists to use this method easily in a variety of
settings. In short, our contributions are:
1. We devise a BED framework that allows for the use of general lower bounds on mutual information in the context of implicit models, and
2. use it to perform BED for parameter estimation, model comparison and improving future predictions.
Figure 1 provides a visualisation of our general framework, showcasing how
training a neural network allows us to move towards optimal experimental
designs. Our method optimises a parametrised lower bound with gradient-ascent,
while simultaneously tightening it. This avoids spending resources on
estimating the MI accurately for non-optimal designs, which can be seen in the
right plot of Figure 1, where the lower bound is only tight near the optimal
design. Furthermore, once the optimal design has been found, our approach
provides an amortised approximation of the posterior distribution, avoiding an
additional costly likelihood-free inference step. An animation corresponding
to Figure 1 can be found in the public code repository that also includes the
code to reproduce all results in this paper:
https://github.com/stevenkleinegesse/GradBED.
Figure 1: Visualisation of our proposed framework. During early times of
training (left plot), the lower bound on mutual information is loose and the
current design is far from the optimal design. We move closer to the optimal
design while tightening the lower bound during the training process (middle
plot). Finally, we find the optimal design and obtain a tight lower bound at
the end of training (right plot). Note how we only spend resources on
tightening the lower bound near the optimal design.
We provide background knowledge of Bayesian experimental design with different
scientific aims in Section 2. In Section 3 we present our BED framework for
implicit models with different scientific aims and general lower bounds. This
includes an introduction of parametrised mutual information lower bounds for
BED, an explanation of how to optimise them and some practical guidance. In
Section 4 we test our framework on a number of implicit models, followed by a
discussion in Section 5.
## 2 Bayesian Experimental Design for different Aims
In this section we provide background on Bayesian experimental design (BED)
for different scientific aims using mutual information. Generally speaking,
the aim of BED is to identify experimental designs $\mathbf{d}$ that allow us
to achieve our scientific goals most rapidly. In order to do so, we first have
to construct a utility function $U(\mathbf{d})$ that tells us the worth of
doing an experiment with design $\mathbf{d}\in\mathcal{D}$, where
$\mathcal{D}$ specifies the domain of feasible experimental designs. Optimal
experimental designs $\mathbf{d}^{\ast}$ are then found by maximising the
utility function $U(\mathbf{d})$, i.e.
$\mathbf{d}^{\ast}=\operatorname*{arg\,max}_{\mathbf{d}\in\mathcal{D}}U(\mathbf{d}).$
(2.1)
Naturally, the choice of utility function $U(\mathbf{d})$ is crucial, as it
determines the optimal designs that are found. A popular utility function in
BED is the mutual information (MI) between two random variables, which
describes the uncertainty reduction, or expected information gain, in one
variable when observing the other. It is a desirable quantity because of its
sensitivity to non-linearity and multi-modality, which other utilities cannot
effectively deal with (Ryan et al., 2016; Kleinegesse et al., 2020a).
We here introduce the notion of a variable of interest $\bm{v}$ that we wish
to learn about by means of a scientific experiment. As such, our aim is to
compute the MI $I(\bm{v};\mathbf{y}|\mathbf{d})$ between the variable of
interest $\bm{v}$ and the observed data $\mathbf{y}$ at a particular design
$\mathbf{d}$ (henceforth abbreviated by $\mathbf{y}|\mathbf{d}$). The MI
utility function can then conveniently be adapted to different scientific aims
by changing the variable of interest $\bm{v}$ that is used in its computation
(Ryan et al., 2016). Its general form is given by
$U(\mathbf{d})=I(\bm{v};\mathbf{y}|\mathbf{d})$ (2.2)
$=\int\int p(\bm{v},\mathbf{y}|\mathbf{d})\log\left(\frac{p(\bm{v},\mathbf{y}|\mathbf{d})}{p(\mathbf{y}|\mathbf{d})p(\bm{v})}\right)\mathrm{d}\mathbf{y}\mathrm{d}\bm{v}$ (2.3)
$=\int\int p(\mathbf{y}|\bm{v},\mathbf{d})p(\bm{v})\log\left(\frac{p(\bm{v}|\mathbf{y},\mathbf{d})}{p(\bm{v})}\right)\mathrm{d}\mathbf{y}\mathrm{d}\bm{v},$ (2.4)
where we make the common assumption that the prior over the variable $\bm{v}$
is unaffected by the designs $\mathbf{d}$, i.e.
$p(\bm{v}|\mathbf{d})=p(\bm{v})$. In this form, the MI can also be
reformulated as the expected Kullback-Leibler (KL) divergence (Kullback and
Leibler, 1951) between the posterior $p(\bm{v}|\mathbf{y},\mathbf{d})$ and
prior $p(\bm{v})$. This interpretation means that we are looking for
experimental designs resulting in data that maximally increase the difference
between posterior and prior (Ryan et al., 2016). In the case of implicit
models, the data-generating distribution $p(\mathbf{y}|\bm{v},\mathbf{d})$
and, hence, the posterior distribution $p(\bm{v}|\mathbf{y},\mathbf{d})$ are
intractable, severely complicating the estimation and optimisation of Equation
2.4.
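For intuition, the following sketch estimates Equation 2.4 by nested Monte Carlo for a hypothetical *tractable* model $y\sim\mathcal{N}(\theta d,1)$ with $\theta\sim\mathcal{N}(0,3^{2})$; it is exactly this computation that becomes unavailable for implicit models, which motivates the lower bounds of Section 3.

```python
import math
import torch

def mi_nested_mc(d: float, n_outer: int = 1000, n_inner: int = 1000) -> float:
    """Nested MC estimate of I(theta; y | d) = E[log p(y|theta,d) - log p(y|d)]
    for a hypothetical tractable model y ~ N(theta * d, 1), theta ~ N(0, 3^2)."""
    theta = 3.0 * torch.randn(n_outer)                    # theta ~ p(theta)
    y = theta * d + torch.randn(n_outer)                  # y ~ p(y | theta, d)
    log_lik = torch.distributions.Normal(theta * d, 1.0).log_prob(y)
    theta_inner = 3.0 * torch.randn(n_inner)              # fresh prior draws
    # log p(y | d) approximated by log of the inner-sample likelihood average
    log_marg = torch.logsumexp(
        torch.distributions.Normal(theta_inner[None, :] * d, 1.0).log_prob(y[:, None]),
        dim=1,
    ) - math.log(n_inner)
    return (log_lik - log_marg).mean().item()
```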
##### Parameter Estimation
The aim here is to find optimal designs that allow us to estimate the model
parameters $\bm{\theta}$ most effectively. In this scenario,
$\bm{v}\rightarrow\bm{\theta}$ and the utility function is the MI
$I(\bm{\theta};\mathbf{y}|\mathbf{d})$ between the model parameters
$\bm{\theta}$ and data $\mathbf{y}|\mathbf{d}$,
$U_{\text{PE}}(\mathbf{d})=\int\int p(\mathbf{y}|\bm{\theta},\mathbf{d})p(\bm{\theta})\log\left(\frac{p(\bm{\theta}|\mathbf{y},\mathbf{d})}{p(\bm{\theta})}\right)\mathrm{d}\mathbf{y}\mathrm{d}\bm{\theta}.$ (2.5)
Parameter estimation is perhaps the most commonly-used scientific aim in BED
for implicit models (e.g. Drovandi and Pettitt, 2013; Hainy et al., 2016;
Price et al., 2018; Kleinegesse et al., 2020a; Kleinegesse and Gutmann,
2020b).
##### Model Discrimination
In this setting we wish to find designs that allow us to optimally
discriminate between competing models. Each competing model is assigned a
discrete model indicator $m$ that determines from which competing model the
data is sampled, i.e. $\mathbf{y}\sim p(\mathbf{y}|m,\mathbf{d})$. As such, we
let $\bm{v}\rightarrow m$ and define the utility function as the MI
$I(m;\mathbf{y}|\mathbf{d})$ between the model indicator $m$ and data
$\mathbf{y}|\mathbf{d}$,
$U_{\text{MD}}(\mathbf{d})=\sum_{m}\int p(\mathbf{y}|m,\mathbf{d})p(m)\log\left(\frac{p(m|\mathbf{y},\mathbf{d})}{p(m)}\right)\mathrm{d}\mathbf{y}.$ (2.6)
Generally, each competing model $m$ also has its own model parameters
$\bm{\theta}_{m}$ that are needed for simulating data. In this particular
case, where we are only concerned with discriminating between models, these
model parameters are marginalised out. BED for model discrimination has
recently received increased attention in the context of implicit models (e.g.
Dehideniya et al., 2018a; Hainy et al., 2018).
##### Joint MD/PE
Combining the previous two tasks, we here wish to find designs that allow us
to optimally discriminate between models $m$ and estimate a particular model’s
parameters $\bm{\theta}_{m}$. For this joint task we have
$\bm{v}\rightarrow(\bm{\theta}_{m},m)$, with the utility function being the MI
$I(\bm{\theta}_{m},m;\mathbf{y}|\mathbf{d})$ between $(\bm{\theta}_{m},m)$ and
simulated data $\mathbf{y}|\mathbf{d}$,
$U_{\text{MDPE}}(\mathbf{d})=\sum_{m}\int\int p(\mathbf{y}|\bm{\theta}_{m},m,\mathbf{d})p(\bm{\theta}_{m},m)\log\left(\frac{p(\bm{\theta}_{m},m|\mathbf{y},\mathbf{d})}{p(\bm{\theta}_{m},m)}\right)\mathrm{d}\mathbf{y}\mathrm{d}\bm{\theta}_{m}.$ (2.7)
There have only been a few studies on joint parameter estimation and model
discrimination in the context of BED for implicit models (e.g. Dehideniya et
al., 2018b).
##### Improving Future Predictions
The premise of this setting is slightly different to the previous ones. We
assume that we already know that we can perform an experiment with designs
$\mathbf{d}_{T}$ at some time in the future, yielding (yet unknown) data
$\mathbf{y}_{T}|\mathbf{d}_{T}$. However, we do have some budget to do a few
initial experiments with designs $\mathbf{d}$ and observe initial data
$\mathbf{y}$. Our aim is then to find the experimental designs that allow us
to optimally predict the future observations $\mathbf{y}_{T}$ given the
initial observations $\mathbf{y}$, i.e.
$\bm{v}\rightarrow\mathbf{y}_{T}|\mathbf{d}_{T}$. The corresponding utility is
the MI $I(\mathbf{y}_{T}|\mathbf{d}_{T};\mathbf{y}|\mathbf{d})$ between future
observations $\mathbf{y}_{T}|\mathbf{d}_{T}$ and initial observations
$\mathbf{y}$, conditioned on initial designs $\mathbf{d}$ (Chaloner and
Verdinelli, 1995):
$U_{\text{FP}}(\mathbf{d})=\int\int p(\mathbf{y}_{T}|\mathbf{y},\mathbf{d},\mathbf{d}_{T})p(\mathbf{y}|\mathbf{d})\log\left(\frac{p(\mathbf{y}_{T}|\mathbf{y},\mathbf{d},\mathbf{d}_{T})}{p(\mathbf{y}_{T}|\mathbf{d}_{T})}\right)\mathrm{d}\mathbf{y}\mathrm{d}\mathbf{y}_{T},$ (2.8)
where we have made the assumptions that
$p(\mathbf{y}_{T}|\mathbf{d},\mathbf{d}_{T})=p(\mathbf{y}_{T}|\mathbf{d}_{T})$
and $p(\mathbf{y}|\mathbf{d},\mathbf{d}_{T})=p(\mathbf{y}|\mathbf{d})$.
Importantly, the future design $\mathbf{d}_{T}$ stays constant during
optimisation because we assume knowledge of this variable. Since we are only
concerned with improving future predictions, we here marginalise out the model
parameters $\bm{\theta}$. This utility function can also be interpreted as the
expected KL divergence between the posterior predictive and prior predictive
of future observations $\mathbf{y}_{T}$. As far as we are aware, there has
only been one work on improving future predictions in the context of BED for
implicit models (Liepe et al., 2013), but there has been some older work for
explicit models where the likelihood is tractable (e.g. Martini and
Spezzaferri, 1984; Verdinelli et al., 1993).
Table 1: Variables of interest and corresponding mutual information utilities for different scientific goals.

Scientific Goal | Variable of Interest $\bm{v}$ | MI Utility
---|---|---
Parameter Estimation | $\bm{\theta}$ | $I(\bm{\theta};\mathbf{y}|\mathbf{d})$
Model Discrimination | $m$ | $I(m;\mathbf{y}|\mathbf{d})$
Joint MD/PE | $\bm{\theta}_{m},m$ | $I(\bm{\theta}_{m},m;\mathbf{y}|\mathbf{d})$
Improving Future Predictions | $\mathbf{y}_{T}|\mathbf{d}_{T}$ | $I(\mathbf{y}_{T}|\mathbf{d}_{T};\mathbf{y}|\mathbf{d})$
We summarise the above formulations in Table 1; however, this list is by no
means exhaustive. In fact, because we only have to change the variable of
interest $\bm{v}$ when computing the MI $I(\bm{v};\mathbf{y}|\mathbf{d})$, we
can easily incorporate a multitude of scientific goals. Furthermore, the
formalism in Equation 2.4 allows us to devise a general framework for BED with
implicit models that is agnostic to a particular scientific goal.
## 3 General BED Framework with Lower Bounds on MI
In this section we explain our general BED framework for implicit models based
on mutual information lower bounds. We first provide motivation for using MI
lower bounds and rephrase the BED problem accordingly. This is followed by
examples of prominent lower bounds from literature. We identify common
structures between them and explain how to use these to derive gradients with
respect to designs, allowing us to solve the rephrased BED problem with
gradient-based optimisation. Finally, we address a few technical difficulties
and how to overcome them.
### 3.1 Rephrasing the BED Problem
Even though it is an effective and desirable metric of dependency, mutual
information is notoriously difficult to estimate. This difficulty is
exacerbated for implicit models, where we do not have access to the data-
generating distribution in Equation 2.4. Recent work devised tractable mutual
information estimators by combining variational bounds with machine-learning
(see Poole et al., 2019 for an overview). These methods generally work by
using a neural network to parametrise a lower bound on mutual information,
which is then tightened by training the neural network with gradient-ascent.
There also exist some methods that use variational approximations to
intractable distributions within lower bounds (e.g. Donsker and Varadhan,
1983; Barber and Agakov, 2003; Foster et al., 2019). However, specifying a
family of variational distributions carries the risk of mis-specification due
to choosing an overly simple variational family. Parametrising non-linear
functions such as neural networks is simpler than parametrising probability
distributions and less prone to mis-specification (provided the neural
network has enough capacity to capture the true distribution). This difficulty
is amplified in the context of implicit models, where we might have little
knowledge about the underlying (intractable) distributions. Upper bounds on
the mutual information generally require tractable distributions or
variational approximations as well (e.g. Poole et al., 2019; Cheng et al.,
2020); we note that this might be because upper bounds on MI have been
consistently understudied. Thus, in this work we shall only be considering
lower bounds on mutual information that are solely parametrised by neural
networks.
Our overall goal is to maximise the MI $I(\bm{v};\mathbf{y}|\mathbf{d})$
between a variable of interest $\bm{v}$ and observed data $\mathbf{y}$ with
respect to the design $\mathbf{d}$. Following the notation of Belghazi et al.
(2018), we represent the scalar output of a neural network by
$T_{\bm{\psi}}(\bm{v},\mathbf{y})$, where $\bm{\psi}$ are the network
parameters. We denote lower bounds on MI by
$\mathcal{L}(\bm{\psi},\mathbf{d})$; these depend on $\bm{\psi}$ via the
neural network $T_{\bm{\psi}}(\bm{v},\mathbf{y})$ and on $\mathbf{d}$ via the
distributions over which expectations are taken. Our aim is then to maximise
$\mathcal{L}(\bm{\psi},\mathbf{d})$ with respect to $\bm{\psi}$ and
$\mathbf{d}$. The BED optimisation problem in Equation 2.1 becomes
$\mathbf{d}^{\ast}=\operatorname*{arg\,max}_{\mathbf{d}\in\mathcal{D}}\max_{\bm{\psi}}\mathcal{L}(\bm{\psi},\mathbf{d}).$
(3.1)
As we maximise $\mathcal{L}(\bm{\psi},\mathbf{d})$ with respect to $\bm{\psi}$
we tighten the lower bound, while maximisation with respect to $\mathbf{d}$
allows us to find the optimal design. This optimisation problem is especially
difficult for implicit models, as $\mathcal{L}(\bm{\psi},\mathbf{d})$
generally involves expectations over distributions which depend on
$\mathbf{d}$ in some complex manner. We will first discuss some prominent
lower bounds from literature in Section 3.2 and then discuss how to optimise
them in Section 3.3.
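To make this setup concrete, a minimal PyTorch sketch of such a critic follows; the architecture is an illustrative assumption (the networks actually used in the experiments are specified in the supplementary material).

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Scalar-output critic T_psi(v, y): concatenate the variable of
    interest and the data, then map them to a single real number."""

    def __init__(self, dim_v: int, dim_y: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_v + dim_y, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, v: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([v, y], dim=-1))
```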
### 3.2 Prominent Lower Bounds
##### NWJ
The lower bound $\mathcal{L}_{\text{NWJ}}$ was first developed by Nguyen,
Wainwright and Jordan (NWJ; Nguyen et al., 2010) but is also known as $f$-GAN
KL (Nowozin et al., 2016) and MINE-$f$ (Belghazi et al., 2018):
$\mathcal{L}_{\text{NWJ}}(\bm{\psi},\mathbf{d})\equiv\mathbb{E}_{p(\bm{v},\mathbf{y}\mid\mathbf{d})}\left[T_{\bm{\psi}}(\bm{v},\mathbf{y})\right]-e^{-1}\mathbb{E}_{p(\bm{v})p(\mathbf{y}\mid\mathbf{d})}\left[e^{T_{\bm{\psi}}(\bm{v},\mathbf{y})}\right].$
(3.2)
The optimal critic for this lower bound, i.e. when it is tight, is
$T^{\ast}_{\bm{\psi}}(\bm{v},\mathbf{y})=1+\log{\frac{p(\bm{v}|\mathbf{y},\mathbf{d})}{p(\bm{v})}}$.
In other words, training a neural network $T_{\bm{\psi}}(\bm{v},\mathbf{y})$
with Equation 3.2 as the objective function amounts to learning an amortised
density ratio of the posterior to prior distribution. The NWJ bound has been
shown to have a low bias but high variance (Poole et al., 2019; Song and
Ermon, 2020).
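As a minimal sketch, the sample-average version of Equation 3.2 and the associated density-ratio read-out can be written as follows, assuming `t_joint` and `t_marginal` hold critic outputs evaluated on joint and marginal sample pairs respectively:

```python
import torch

def nwj_lower_bound(t_joint: torch.Tensor, t_marginal: torch.Tensor) -> torch.Tensor:
    """Sample-average estimate of the NWJ bound (Eq. 3.2).

    t_joint:    T_psi(v, y) on pairs (v, y) ~ p(v, y | d)
    t_marginal: T_psi(v, y) on pairs (v, y) ~ p(v) p(y | d)
    """
    # E_joint[T] - e^{-1} E_marginal[exp(T)], written as exp(T - 1)
    return t_joint.mean() - torch.exp(t_marginal - 1.0).mean()

def nwj_log_ratio(t_value: torch.Tensor) -> torch.Tensor:
    """At the optimum T* = 1 + log p(v|y,d)/p(v), so T - 1 is the learned
    amortised log posterior-to-prior ratio (used for the amortised
    posteriors of Section 4)."""
    return t_value - 1.0
```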
##### InfoNCE
The InfoNCE lower bound $\mathcal{L}_{\text{NCE}}$ was introduced by van den
Oord et al. (2018) in the context of contrastive representation learning and
takes the form
$\mathcal{L}_{\text{NCE}}(\bm{\psi},\mathbf{d})\equiv\mathbb{E}\left[\frac{1}{K}\sum_{i=1}^{K}\log{\frac{e^{T_{\bm{\psi}}(\bm{v}_{i},\mathbf{y}_{i})}}{\frac{1}{K}\sum_{j=1}^{K}\left[e^{T_{\bm{\psi}}(\bm{v}_{j},\mathbf{y}_{i})}\right]}}\right],$ (3.3)
where the expectation is over $K$ independent samples from the joint
distribution $p(\bm{v},\mathbf{y}|\mathbf{d})$. The optimal critic for this
lower bound is
$T^{\ast}_{\bm{\psi}}(\bm{v},\mathbf{y})=\log{p(\mathbf{y}|\bm{v},\mathbf{d})}+c(\mathbf{y}|\mathbf{d})$
for the form shown above, where $c(\mathbf{y}|\mathbf{d})$ is an indeterminate
function. However, if we swap the indices of $\bm{v}_{j}$ and $\mathbf{y}_{i}$
in the denominator, i.e. sum out $\mathbf{y}$, the optimal critic becomes
$T^{\ast}_{\bm{\psi}}(\bm{v},\mathbf{y})=\log{p(\bm{v}|\mathbf{y},\mathbf{d})}+c(\bm{v})$,
where $c(\bm{v})$ is again indeterminate. We use the version in Equation 3.3,
as this allows us to obtain an unnormalised posterior that we can sample from,
or normalise numerically in low dimensions (since $c(\mathbf{y}|\mathbf{d})$
is fixed for a given real-world observation). The $\mathcal{L}_{\text{NCE}}$
lower bound has a low variance but, as noted by van den Oord et al. (2018) and
verified by Poole et al. (2019), it is upper-bounded by $\log{K}$, where $K$
is the batch-size in Equation 3.3, leading to a potentially large bias.
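A corresponding sketch of Equation 3.3, assuming a $K\times K$ critic matrix with entries $T_{\bm{\psi}}(\bm{v}_{j},\mathbf{y}_{i})$ computed from $K$ joint samples:

```python
import math
import torch

def infonce_lower_bound(t_matrix: torch.Tensor) -> torch.Tensor:
    """Estimate of the InfoNCE bound (Eq. 3.3).

    t_matrix[i, j] = T_psi(v_j, y_i): diagonal entries are joint pairs,
    off-diagonal entries act as contrastive (marginal) pairs.
    """
    K = t_matrix.shape[0]
    # log[(1/K) sum_j exp(T_ij)] for each i
    log_denom = torch.logsumexp(t_matrix, dim=1) - math.log(K)
    # averaged log ratio; note the resulting estimate is capped at log K
    return (t_matrix.diagonal() - log_denom).mean()
```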
##### JSD
We are generally concerned with maximising the mutual information and not
estimating it to a high accuracy. Hjelm et al. (2019) therefore argued to use
the Jensen-Shannon divergence (JSD) as a proxy to mutual information. They
proposed to use a lower bound on the JSD between the joint distribution
$p(\bm{v},\mathbf{y}|\mathbf{d})$ and product of marginal distributions
$p(\bm{v})p(\mathbf{y}|\mathbf{d})$, which takes the form
$\mathcal{L}_{\text{JSD}}(\bm{\psi},\mathbf{d})\equiv\mathbb{E}_{p(\bm{v},\mathbf{y}\mid\mathbf{d})}\left[-\text{sp}(-T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]-\mathbb{E}_{p(\bm{v})p(\mathbf{y}\mid\mathbf{d})}\left[\text{sp}(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right],$
(3.4)
where $\text{sp}(z)=\log(1+e^{z})$ is the softplus function and the optimal
critic of this lower bound is
$T^{\ast}_{\bm{\psi}}(\bm{v},\mathbf{y})=\log\frac{p(\bm{v}|\mathbf{y},\mathbf{d})}{p(\bm{v})}$.
The authors generally found the JSD lower bound to be more stable than other
MI lower bounds. The JSD and KL divergence are both $f$-divergences and have a
close relationship that can be derived analytically (see e.g. Hjelm et al.,
2019). The authors also showed experimentally that distributions that lead to
the highest JSD also lead to the highest MI, suggesting that maximising JSD is
an appropriate proxy for maximising MI. If one is in fact concerned with
accurate MI estimation as well, one could use the learned log density ratio in
a MC estimation of the MI, or substitute it in any of the other MI lower
bounds (as done in Poole et al., 2019; Song and Ermon, 2020). We note that the
JSD lower bound can be viewed as an expectation of the objective used in
likelihood-free inference by ratio estimation (LFIRE; Thomas et al.,
2020; see the supplementary material for more information on this
relationship), which has been used before in the context of Bayesian
experimental design (Kleinegesse and Gutmann, 2019; Kleinegesse et al.,
2020a), and is, like InfoNCE, related to noise-contrastive estimation (NCE;
Gutmann and Hyvärinen, 2012) as discussed by Hjelm et al. (2019).
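A matching sketch of Equation 3.4, under the same conventions as the NWJ sketch above:

```python
import torch.nn.functional as F

def jsd_lower_bound(t_joint, t_marginal):
    """Sample-average estimate of the JSD bound (Eq. 3.4), where
    sp(z) = log(1 + exp(z)) is the softplus function."""
    # E_joint[-sp(-T)] - E_marginal[sp(T)]
    return -F.softplus(-t_joint).mean() - F.softplus(t_marginal).mean()
```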
We want to emphasise again that this work aims to showcase the use of general
mutual information lower bounds, parametrised by neural networks, in Bayesian
experimental design for different scientific aims. We do not claim that the
aforementioned list of lower bounds is exhaustive and do not suggest that some
bounds are particularly superior to others. We leave this investigation to
more comprehensive studies focused exclusively on mutual information lower
bounds (such as in Poole et al., 2019).
### 3.3 Optimisation of Lower Bounds
In order to solve the rephrased BED problem in Equation 3.1, we need to be
able to optimise the lower bound $\mathcal{L}(\bm{\psi},\mathbf{d})$ with
respect to the neural network parameters $\bm{\psi}$ and the experimental
designs $\mathbf{d}$, regardless of the choice of lower bound. In the interest
of scalability (Spall, 2003), we here optimise both of these with gradient-
based methods.
Looking at MI lower bounds in literature, including the prominent ones shown
in Section 3.2, we can identify a few similar structures that allow us to
generalise gradient-based optimisation. First, lower bounds generally involve
expectations over the joint distribution and/or the product of marginal
distributions. Second, these expectations tend to contain non-linear,
differentiable functions of the neural network output. In other words, lower
bounds usually involve expectations
$\mathbb{E}_{p(\bm{v},\mathbf{y}\mid\mathbf{d})}\left[f(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]$
and/or
$\mathbb{E}_{p(\bm{v})p(\mathbf{y}\mid\mathbf{d})}\left[g(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]$,
where $f:\mathbb{R}\rightarrow\mathbb{R}$ and
$g:\mathbb{R}\rightarrow\mathbb{R}$ are non-linear, differentiable functions.
For instance, for the NWJ lower bound in Equation 3.2 we have $f(z)=z$ and
$g(z)=e^{z-1}$. By deriving gradients of these expectations with respect to
$\bm{\psi}$ and $\mathbf{d}$ we can then easily derive gradients for all
prominent lower bounds listed in the Section 3.2, as well as other ones that
share the same structure.
Fortunately, gradients of the aforementioned expectations with respect to the
network parameters $\bm{\psi}$ are straightforward, as the involved
distributions do not depend on them. This means that we can simply pull the
gradient operator $\nabla_{\bm{\psi}}$ inside the expectations and apply the
chain rule, yielding
$\nabla_{\bm{\psi}}\mathbb{E}_{p(\bm{v},\mathbf{y}\mid\mathbf{d})}\left[f(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]=\mathbb{E}_{p(\bm{v},\mathbf{y}\mid\mathbf{d})}\left[f^{\prime}(T)\Bigr{|}_{T=T_{\bm{\psi}}(\bm{v},\mathbf{y})}\nabla_{\bm{\psi}}T_{\bm{\psi}}(\bm{v},\mathbf{y})\right]\quad\text{and}$ (3.5)
$\nabla_{\bm{\psi}}\mathbb{E}_{p(\bm{v})p(\mathbf{y}\mid\mathbf{d})}\left[g(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]=\mathbb{E}_{p(\bm{v})p(\mathbf{y}\mid\mathbf{d})}\left[g^{\prime}(T)\Bigr{|}_{T=T_{\bm{\psi}}(\bm{v},\mathbf{y})}\nabla_{\bm{\psi}}T_{\bm{\psi}}(\bm{v},\mathbf{y})\right],$ (3.6)
where $f^{\prime}(T)=\partial f(T)/\partial T$ and $g^{\prime}(T)=\partial
g(T)/\partial T$. The first factor inside the expectations only depends on the
form of the non-linear functions, while the second factor, the gradients of
the network output with respect to its parameters, is generally computed with
automatic differentiation, i.e. back-propagation. The expectations can then be
approximated as sample averages with $N$ samples from the corresponding
distributions.
Gradients with respect to the designs $\mathbf{d}$ are more complicated, as
the intractable distributions over which expectations are taken depend on
$\mathbf{d}$. For instance, we cannot compute
$\nabla_{\mathbf{d}}\mathbb{E}_{p(\bm{v},\mathbf{y}\mid\mathbf{d})}\left[f(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]$
in the same way that we computed the previous gradients with respect to
$\bm{\psi}$, because the joint distribution $p(\bm{v},\mathbf{y}|\mathbf{d})$
(and therefore its gradient) is intractable. Similarly, we cannot use score-
function estimators (Kleijnen and Rubinstein, 1996) to approximate these
gradients, as they require an analytic derivative of the log densities.
However, the pathwise gradient estimator lends itself to our setting with
implicit models (see Mohamed et al., 2020 for a review). Indeed, like implicit
models, this estimator assumes that sampling from the data-generation
distribution $p(\mathbf{y}|\bm{v},\mathbf{d})$ is exactly the same as sampling
from a base distribution $p(\bm{\epsilon})$ and then using that noise sample
as an input to a non-linear, deterministic function
$\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d})$ (Mohamed et al., 2020), called
the sampling path, i.e.
$\mathbf{y}\sim p(\mathbf{y}|\bm{v},\mathbf{d})\iff\mathbf{y}=\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d}),\quad\bm{\epsilon}\sim p(\bm{\epsilon}).$ (3.7)
(For instance, sampling a Gaussian random variable from $\mathcal{N}(y;\mu,\sigma^{2})$ is exactly the same as first sampling noise from a standard normal $\mathcal{N}(\epsilon;0,1)$ and then computing $y=\mu+\epsilon\sigma$.)
This then allows us to invoke the law of the unconscious statistician (LOTUS;
e.g. Grimmett and Stirzaker, 2001) and, for instance, rephrase the expectation
over the joint $p(\bm{v},\mathbf{y}|\mathbf{d})$ in terms of the base
distribution $p(\bm{\epsilon})$ and the prior over $p(\bm{v})$,
$\mathbb{E}_{p(\bm{v},\mathbf{y}\mid\mathbf{d})}\left[f(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]=\mathbb{E}_{p(\bm{v})p(\bm{\epsilon})}\left[f(T_{\bm{\psi}}(\bm{v},\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d})))\right],$
(3.8)
where we have factorised the joint distribution as
$p(\bm{v},\mathbf{y}|\mathbf{d})=p(\mathbf{y}|\bm{v},\mathbf{d})p(\bm{v})$,
assuming that $p(\bm{v})$ is unaffected by $\mathbf{d}$.
For the expectation over the product of marginals we need to perform an
additional step. Equation 3.7 specifies the sampling path
$\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d})$ of the data-generation
distribution, but we require the sampling path of the marginal data
$\mathbf{y}\sim p(\mathbf{y}|\mathbf{d})$. This means that we need to express
the marginal as an expectation of the data-generating distribution over the
prior, i.e.
$p(\mathbf{y}|\mathbf{d})=\mathbb{E}_{p(\widetilde{\bm{v}})}[p(\mathbf{y}|\widetilde{\bm{v}},\mathbf{d})]$,
where $p(\widetilde{\bm{v}})$ is exactly the same distribution as $p(\bm{v})$.
We can then specify
$\mathbf{y}=\mathbf{h}(\bm{\epsilon};\widetilde{\bm{v}},\mathbf{d})$ and
invoke LOTUS again, yielding the second rephrased expectation:
$\mathbb{E}_{p(\bm{v})p(\mathbf{y}\mid\mathbf{d})}\left[g(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]=\mathbb{E}_{p(\bm{v})p(\widetilde{\bm{v}})p(\bm{\epsilon})}\left[g(T_{\bm{\psi}}(\bm{v},\mathbf{h}(\bm{\epsilon};\widetilde{\bm{v}},\mathbf{d})))\right].$
(3.9)
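In code, the two constructions in Equations 3.7 – 3.9 amount to pushing base noise through the same deterministic path with either the paired or an independent prior draw; a minimal sketch with a hypothetical linear path and standard-normal base noise:

```python
import torch

def h(eps: torch.Tensor, v: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    # Hypothetical sampling path y = v0 + v1 * d + eps; gradients with
    # respect to d flow through h via autograd.
    return v[:, 0:1] + v[:, 1:2] * d + eps

N, D = 512, 10
d = torch.zeros(D, requires_grad=True)
v = 3.0 * torch.randn(N, 2)                    # v ~ p(v)
v_tilde = 3.0 * torch.randn(N, 2)              # independent v_tilde ~ p(v)
y_joint = h(torch.randn(N, D), v, d)           # (v, y_joint) ~ p(v, y | d), Eq. 3.8
y_marg = h(torch.randn(N, D), v_tilde, d)      # (v, y_marg) ~ p(v) p(y | d), Eq. 3.9
```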
The rephrased expectations in Equations 3.8 – 3.9 are now taken over
distributions that do not directly depend on the designs $\mathbf{d}$,
allowing us to easily take gradients with respect to $\mathbf{d}$. In the
interest of space, let us define
$T=T_{\bm{\psi}}(\bm{v},\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d}))$ and
$\widetilde{T}=T_{\bm{\psi}}(\bm{v},\mathbf{h}(\bm{\epsilon};\widetilde{\bm{v}},\mathbf{d}))$.
The required gradients are then
$\nabla_{\mathbf{d}}\mathbb{E}_{p(\bm{v},\mathbf{y}\mid\mathbf{d})}\left[f(T)\right]=\mathbb{E}_{p(\bm{v})p(\bm{\epsilon})}\left[f^{\prime}(T)\nabla_{\mathbf{d}}T\right]\quad\text{and}$ (3.10)
$\nabla_{\mathbf{d}}\mathbb{E}_{p(\bm{v})p(\mathbf{y}\mid\mathbf{d})}\left[g(T)\right]=\mathbb{E}_{p(\bm{v})p(\widetilde{\bm{v}})p(\bm{\epsilon})}\left[g^{\prime}(\widetilde{T})\nabla_{\mathbf{d}}\widetilde{T}\right],$ (3.11)
where the derivatives $f^{\prime}(T)=\partial f(T)/\partial T$ and
$g^{\prime}(\widetilde{T})=\partial g(\widetilde{T})/\partial\widetilde{T}$
depend on the lower bound in question. The $\nabla_{\mathbf{d}}T$ and
$\nabla_{\mathbf{d}}\widetilde{T}$ factors in the above equations are the
gradients of the network output with respect to designs, which is the most
difficult technical part of our method. We discuss different approaches and
caveats to computing these gradients in detail in Section 3.4.
We next briefly consider a special setting where the implicit model is defined
such that we can separate the data generation into a ‘known’ observational
process (with ‘known’ we mean that the model of the observational process is
known analytically in closed form) $p(\mathbf{y}|\bm{v},\mathbf{d},\bm{z})$
and a differentiable, stochastic latent process $p(\bm{z}|\mathbf{d},\bm{v})$,
where $\bm{z}$ is a latent variable. In this case we can apply a score-
function estimator to the observational process, because we assume knowledge
of the likelihood $p(\mathbf{y}|\bm{v},\mathbf{d},\bm{z})$, and a pathwise
gradient estimator to the latent process. In a similar manner to Equation 3.8,
we can, for instance, rephrase the expectation of
$f(T_{\bm{\psi}}(\bm{v},\mathbf{y}))$ over the joint distribution as
$\mathbb{E}_{p(\bm{v},\mathbf{y}\mid\mathbf{d})}\left[f(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]=\mathbb{E}_{p(\mathbf{y}|\bm{v},\mathbf{d},\bm{z})p(\bm{z}|\mathbf{d},\bm{v})p(\bm{v})}\left[f(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]$ (3.12)
$=\mathbb{E}_{p(\mathbf{y}|\bm{v},\mathbf{d},\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d}))p(\bm{\epsilon})p(\bm{v})}\left[f(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right],$ (3.13)
where in this case $\bm{\epsilon}$ is the noise random variable that defines
the sampling path of the latent stochastic variable
$\bm{z}=\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d})$. This allows us to pull
the gradient operatior $\nabla_{\mathbf{d}}$ into the required expectations;
see the supplementary material for more details on the resulting estimator.
This rephrasing is useful because 1) we can inject extra knowledge into the
method as we know parts of the data generation process and 2) this also allows
us to deal with some implicit models that generate discrete data (in which
case the full gradient with respect to $\mathbf{d}$ is undefined). For
instance, in experiments later in the paper, we present an implicit model
where the latent process is the solution of a stochastic differential
equation, which is differentiable by nature, and the observational process is
a known discrete Poisson process.
Lastly, we apply the aforementioned methodology of computing gradients to all
of the prominent lower bounds listed in Section 3.2, as they only effectively
differ in what kind of non-linear functions $f$ and $g$ they use, yielding the
gradients shown in Table 2. A similar table for the special setting of knowing
the observational process is shown in the supplementary materials.
Furthermore, we provide an overview of the discussed framework of Bayesian
experimental design for implicit models using lower bounds on mutual
information in Algorithm 1.
Table 2: Gradients of several lower bounds with respect to experimental designs. We define $T=T_{\bm{\psi}}(\bm{v},\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d}))$, $\widetilde{T}=T_{\bm{\psi}}(\bm{v},\mathbf{h}(\bm{\epsilon};\widetilde{\bm{v}},\mathbf{d}))$ and $T_{ij}=T_{\bm{\psi}}(\bm{v}_{j},\mathbf{h}(\bm{\epsilon}_{i};\bm{v}_{i},\mathbf{d}))$. We also use the shorthand $P=p(\bm{v})p(\bm{\epsilon})$ and $Q=p(\bm{v})p(\widetilde{\bm{v}})p(\bm{\epsilon})$. Here, $\sigma(T)$ is the logistic sigmoid function. See the supplementary material for detailed derivations and a corresponding table for the case where the observation model is analytically tractable.

Lower Bound | Gradients with respect to designs
---|---
NWJ | $\mathbb{E}_{P}\left[\nabla_{\mathbf{d}}T\right]-e^{-1}\mathbb{E}_{Q}\left[e^{\widetilde{T}}\nabla_{\mathbf{d}}\widetilde{T}\right]$
InfoNCE | $\mathbb{E}_{P^{K}}\left[\frac{1}{K}\sum_{i=1}^{K}\frac{\sum_{j=1}^{K}e^{T_{ij}}(\nabla_{\mathbf{d}}T_{ii}-\nabla_{\mathbf{d}}T_{ij})}{\sum_{j=1}^{K}e^{T_{ij}}}\right]$
JSD | $\mathbb{E}_{P}\left[\sigma(-T)\nabla_{\mathbf{d}}T\right]-\mathbb{E}_{Q}\left[\sigma(\widetilde{T})\nabla_{\mathbf{d}}\widetilde{T}\right]$
Algorithm 1 Gradient-Based BED for Implicit Models using MI Lower Bounds
Input: implicit model sampling path $\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d})$, neural network $T_{\bm{\psi}}(\bm{v},\mathbf{y})$, MI lower bound $\mathcal{L}(\bm{\psi},\mathbf{d})$, optimisers for both $\bm{\psi}$ and $\mathbf{d}$, number of samples $N$
Output: optimal design $\mathbf{d}^{\ast}$, trained neural network $T_{\bm{\psi}^{\ast}}(\bm{v},\mathbf{y})$, MI estimate $\mathcal{L}(\bm{\psi}^{\ast},\mathbf{d}^{\ast})$
1: Sample from the prior over the variable of interest: $\bm{v}^{(i)}\sim p(\bm{v})$ for $i=1,\dots,N$
2: Randomly initialise the neural network parameters $\bm{\psi}$
3: Randomly initialise the experimental designs $\mathbf{d}$
4: while $\mathcal{L}(\bm{\psi},\mathbf{d})$ not converged do
5:  Sample noise from the base distribution, i.e. $\bm{\epsilon}^{(i)}\sim p(\bm{\epsilon})$ for $i=1,\dots,N$
6:  Obtain joint data via the sampling path: $\mathbf{y}^{(i)}=\mathbf{h}(\bm{\epsilon}^{(i)};\bm{v}^{(i)},\mathbf{d})$ for $i=1,\dots,N$
7:  Randomly shuffle data to obtain marginal samples $\{\widetilde{\mathbf{y}}^{(i)}\}_{i=1}^{N}$
8:  Compute network output with joint samples: $T_{\bm{\psi}}(\bm{v}^{(i)},\mathbf{y}^{(i)})$ for $i=1,\dots,N$
9:  Compute network output with marginal samples: $T_{\bm{\psi}}(\bm{v}^{(i)},\widetilde{\mathbf{y}}^{(i)})$ for $i=1,\dots,N$
10:  Compute a sample average of the MI lower bound $\mathcal{L}(\bm{\psi},\mathbf{d})$
11:  Estimate gradients with respect to $\bm{\psi}$ and $\mathbf{d}$
12:  Update $\bm{\psi}$ and $\mathbf{d}$ using two separate optimisers
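A condensed PyTorch transcription of Algorithm 1 follows, instantiated with the JSD bound of Equation 3.4 (any bound from Section 3.2 slots in); the simulator, critic, prior sampler, learning rates and standard-normal base noise are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def optimise_design(sampling_path, critic, prior_sampler, d_init,
                    n_samples=512, n_epochs=5000):
    d = d_init.detach().clone().requires_grad_(True)       # step 3
    opt_psi = torch.optim.Adam(critic.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam([d], lr=1e-2)
    v = prior_sampler(n_samples)                           # step 1: v ~ p(v)
    for _ in range(n_epochs):                              # step 4
        eps = torch.randn(n_samples, d.shape[-1])          # step 5: base noise
        y = sampling_path(eps, v, d)                       # step 6: joint data
        y_tilde = y[torch.randperm(n_samples)]             # step 7: marginals
        t_joint = critic(v, y)                             # step 8
        t_marg = critic(v, y_tilde)                        # step 9
        # steps 10-12: negative JSD bound (Eq. 3.4); autograd supplies the
        # Table 2 gradients with respect to both psi and d
        loss = F.softplus(-t_joint).mean() + F.softplus(t_marg).mean()
        opt_psi.zero_grad(); opt_d.zero_grad()
        loss.backward()
        opt_psi.step(); opt_d.step()
    return d.detach(), critic
```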
### 3.4 Tackling the Sampling Path Gradients
Recall the gradients of the expectations in Equations 3.10 – 3.11, which
involve the gradient of the network output with respect to designs,
i.e. $\nabla_{\mathbf{d}}T_{\bm{\psi}}(\bm{v},\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d}))$.
Below we discuss several approaches to computing, or approximating, this
gradient.
##### Automatic Differentiation
As mentioned previously, the gradients of lower bounds with respect to
$\bm{\psi}$ are already computed with automatic differentiation in a standard
neural network training fashion. We can similarly use automatic
differentiation to compute gradients with respect to $\mathbf{d}$ if the
implicit model is written in a differentiable programming framework, such as JAX
(Bradbury et al., 2018), PyTorch (Paszke et al., 2019) or TensorFlow (Abadi et
al., 2015). This allows us to attach the simulator model to the computation
graph of the framework and back-propagate from evaluations of the lower bound
to the experimental designs $\mathbf{d}$. This means that we do not have to
explicitly provide gradients, which is helpful if the implicit model is
extremely complex, and allows us to fully-utilise parallelisation with GPUs.
Furthermore, we expect the usefulness of this aspect, and the capability of
our method, to grow with the development of automatic differentiation
frameworks, which are already essential in machine learning.
##### Manual Gradient Computation
In some situations we may need to manually compute the required gradients, for
example when the implicit model is not written in a differential programming
framework. To do so, we first apply the chain rule to
$\nabla_{\mathbf{d}}T_{\bm{\psi}}(\bm{v},\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d}))$,
yielding
$\nabla_{\mathbf{d}}T_{\bm{\psi}}(\bm{v},\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d}))=\mathbf{J_{y}}^{\top}\nabla_{\mathbf{y}}T_{\bm{\psi}}(\bm{v},\mathbf{y})\bigr{|}_{\mathbf{y}=\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d})}.$
(3.14)
The factor $\nabla_{\mathbf{y}}T_{\bm{\psi}}(\bm{v},\mathbf{y})$ is the
derivative of the neural network output with respect to its inputs, which can
be readily obtained via automatic differentiation in most machine-learning
frameworks. $\mathbf{J_{y}}$ is the Jacobian matrix, defined by
$(\mathbf{J_{y}})_{ij}=\partial y_{i}/\partial d_{j}$, and contains the
gradients of the sampling path with respect to the designs. We either need to
assume knowledge of the sampling path gradients or find ways to approximate
them. This includes common settings where the data generation can be written
down in one line, e.g. a linear model with a combination of noises from
different non-Gaussian sources, or we have access to the ordinary, or
stochastic, differential equations that govern data-generation.
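As a self-contained sketch of Equation 3.14, consider a hypothetical linear path $\mathbf{y}=\theta_{0}\mathbf{1}+\theta_{1}\mathbf{d}+\bm{\epsilon}$, whose Jacobian $(\mathbf{J_{y}})_{ij}=\partial y_{i}/\partial d_{j}=\theta_{1}\delta_{ij}$ is known analytically:

```python
import torch

N, D = 512, 10
theta = 3.0 * torch.randn(N, 2)                 # per-sample (offset, slope)
d = torch.zeros(D)
y = theta[:, 0:1] + theta[:, 1:2] * d + torch.randn(N, D)

critic = torch.nn.Sequential(torch.nn.Linear(2 + D, 64), torch.nn.ReLU(),
                             torch.nn.Linear(64, 1))
y.requires_grad_(True)                          # treat y as the network input
t = critic(torch.cat([theta, y], dim=-1)).sum()
grad_y = torch.autograd.grad(t, y)[0]                   # nabla_y T_psi via autodiff
J_y = theta[:, 1].view(-1, 1, 1) * torch.eye(D)         # analytic Jacobian, (N, D, D)
grad_d = torch.einsum('nij,ni->nj', J_y, grad_y)        # J_y^T nabla_y T (Eq. 3.14)
```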
##### Gradient-Free Optimisation
While we can facilitate gradient-based optimisation for a variety of implicit
models, there might be situations in which we can neither exactly compute nor
approximate the required Jacobian-vector product through any of the
aforementioned ways. For instance, sampling path gradients might be undefined
if the data generation process involves any discrete variables that we need to
differentiate and we cannot separate it into a known observational process and
a latent process. In such situations we can only resort to gradient-free
optimisation methods for the updates of $\mathbf{d}$. Examples of gradient-
free optimisation techniques for BED with implicit models are grid search,
random search, evolutionary algorithms (as done in e.g. Zhang et al., 2021)
and Bayesian optimisation (as done in e.g. Kleinegesse and Gutmann, 2020b). We
note that in some situations we might be able to use continuous relaxations of
discrete variables instead, which we shall leave for future work. While the
gradient-free methods allow us to deal with a larger class of implicit models,
they do not scale well with the dimensionality of experimental designs
$\mathbf{d}$ (Spall, 2003). As such, we do not consider the gradient-free
setting in this work, but refer the reader to Kleinegesse and Gutmann (2020b)
for more detailed explanations and experiments for such a setting.
## 4 Experiments
In this section we demonstrate our approach to BED for implicit models using
MI lower bounds. First, we consider a toy example that consists of a set of
models with linear, logarithmic and square-root responses. In a comprehensive
comparison, we apply all prominent lower bounds presented in Section 3.2 (NWJ,
InfoNCE and JSD) to all scientific goals presented in Section 2 (PE, MD, MD/PE
and FP). Second, we consider a setting from epidemiology where the latent
process is the solution to a stochastic differential equation and the discrete
observational process is analytically tractable. We here only apply the JSD
lower bound on the PE and MD tasks. Code to reproduce results can be found
here: https://github.com/stevenkleinegesse/GradBED.
### 4.1 Toy Models
We here assume that an observable variable $y\in\mathbb{R}$ depends on an
experimental design $d\in[-2,2]$ in either a linear ($m=1$), logarithmic
($m=2$) or square-root ($m=3$) manner, where the variable $m\in\{1,2,3\}$ is
the model indicator. Each model $m$ is also governed by two model parameters
$\bm{\theta}_{m}=(\theta_{m,0},\theta_{m,1})$, e.g. the offset and slope in
the case of the linear model. We assume a limited budget of $10$ measurements,
which means that we have to construct a 10-dimensional design vector
$\mathbf{d}=(d_{1},\dots,d_{10})$ with a corresponding data vector
$\mathbf{y}=(y_{1},\dots,y_{10})$. We include two separate sources of noise,
Gaussian noise $\mathcal{N}(\epsilon;0,1)$ and Gamma noise $\Gamma(\nu;2,2)$,
which leads to all toy models having likelihood functions that do not admit a
closed-form expression. The sampling paths are given by
$\mathbf{y}=\begin{cases}\theta_{1,0}\mathbf{1}+\theta_{1,1}\mathbf{d}+\bm{\epsilon}+\bm{\nu}&\text{if }m=1,\\ \theta_{2,0}\mathbf{1}+\theta_{2,1}\log(|\mathbf{d}|)+\bm{\epsilon}+\bm{\nu}&\text{if }m=2,\\ \theta_{3,0}\mathbf{1}+\theta_{3,1}\sqrt{|\mathbf{d}|}+\bm{\epsilon}+\bm{\nu}&\text{if }m=3,\end{cases}$ (4.1)
where $\bm{\epsilon}=(\epsilon_{1},\dots,\epsilon_{10})$ and
$\bm{\nu}=(\nu_{1},\dots,\nu_{10})$ are iid samples from a Gaussian and Gamma
noise source, respectively, $\mathbf{1}$ denotes a 10-dimensional vector of
ones and $|\cdot|$ implies taking the element-wise absolute value. To ensure
numerical stability in the logarithmic model, we clipped $|\mathbf{d}|$ by
$10^{-4}$ from below, i.e. $|\mathbf{d}|\equiv\max(|\mathbf{d}|,10^{-4})$. We
generally use a Gaussian prior
$\mathcal{N}(\bm{\theta}_{m};\mathbf{0},3^{2}\mathds{1})$ over the model
parameters and a discrete uniform prior $\mathcal{U}(m)$ over the model
indicators.
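A direct transcription of these sampling paths in code (batching conventions are an assumption; `theta` holds prior draws and `d` the current design vector):

```python
import torch

def simulate_toy(m: int, theta: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    """Sampling path of Eq. 4.1 for model indicator m in {1, 2, 3};
    theta has shape (N, 2) and d has shape (10,)."""
    d_abs = d.abs().clamp(min=1e-4)              # clipping for the logarithm
    shape = (theta.shape[0], d.shape[-1])
    eps = torch.randn(shape)                                  # N(0, 1) noise
    nu = torch.distributions.Gamma(2.0, 2.0).sample(shape)    # Gamma(2, 2) noise
    if m == 1:
        response = d
    elif m == 2:
        response = torch.log(d_abs)
    else:
        response = torch.sqrt(d_abs)
    return theta[:, 0:1] + theta[:, 1:2] * response + eps + nu
```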
Comprehensive information about neural network architectures, hyper-parameters
and implementation details can be found in the supplementary material.
Throughout this section we compare final MI estimates and posterior
distributions to reference values and distributions. Information about how
these are computed for all scientific tasks can also be found in the
supplementary materials.
#### 4.1.1 Parameter Estimation for the Linear Model
Figure 2: PE results for the linear toy model. Shown are training curves for
the NWJ, InfoNCE and JSD (via NWJ) lower bounds. The top row shows lower bound
estimates as a function of training epochs, with the dotted line being the
reference MI value, and the bottom row shows the elements of the design vector
as it is being updated.
We first consider the scientific task of parameter estimation (PE) for the
linear toy model ($m=1$ in Equation 4.1), where the aim is to estimate the
model parameters $\bm{\theta}_{1}=(\theta_{1,0},\theta_{1,1})$, i.e. the slope
and offset of a straight line, respectively. Figure 2 shows the training
results for the PE task, using the NWJ (left column), InfoNCE (middle column)
and JSD (right column) lower bounds. As explained in Section 3.2, we use the
density ratio learned by maximising the JSD lower bound as an input to the NWJ
lower bound, in order to get an actual lower bound on the mutual information,
which can be seen in the right column. All lower bounds converge smoothly to a final value that is close to the reference MI value, and all find the same optimal design $\mathbf{d}^{\ast}$, whose elements are equally clustered at $d=-2$ and $d=2$. Experimental designs at the boundaries
of the design domain are useful because that is where the signal-to-noise
ratio is highest. Intuitively, this means that our data contains more signal
than noise, allowing us to better measure the effect of model parameters on
data. Interestingly, there is no noticeable difference in the convergence
behaviour of each lower bound (they all use the same neural network and design
initialisations).
Table 3: Toy model average MI estimates ($\pm$ standard error) for all scientific tasks, using the NWJ, InfoNCE and JSD (via NWJ) lower bounds on MI (higher values are better). Also shown are average reference MI estimates.

MI LB | PE | MD | MD/PE | FP
---|---|---|---|---
NWJ | 3.43 $\pm$ 0.06 | 0.726 $\pm$ 0.007 | 3.72 $\pm$ 0.06 | 1.33 $\pm$ 0.01
InfoNCE | 3.36 $\pm$ 0.06 | 0.725 $\pm$ 0.027 | 3.80 $\pm$ 0.07 | 1.33 $\pm$ 0.04
JSD | 3.48 $\pm$ 0.05 | 0.730 $\pm$ 0.007 | 3.95 $\pm$ 0.08 | 1.34 $\pm$ 0.01
Ref. | 3.55 $\pm$ 0.04 | 0.737 $\pm$ 0.007 | 3.97 $\pm$ 0.02 | 1.34 $\pm$ 0.02
The PE column in Table 3 shows the final MI estimates for each lower bound,
evaluated on several synthetic validation sets, with the JSD lower bound
having the highest estimate. This is intriguing because, while the gradients are computed with the JSD lower bound, we evaluate it with the NWJ lower bound and, yet, its estimate is higher than that of the actual NWJ lower bound.
Figure 3: PE posterior results for the linear toy model. Shown are average
marginal posteriors for all lower bounds, including the prior, the ground
truth and the average reference posterior. Note how all lower bounds produce
similar posteriors.
As explained in Section 3.2, we can use the trained neural network to directly
compute a posterior distribution. We first generate synthetic ‘real-world’
data $\mathbf{y}^{\ast}$ that was sampled with some ground-truth
$\bm{\theta}_{1,\text{true}}=(2,3)$ at the optimal design $\mathbf{d}^{\ast}$,
i.e. $\mathbf{y}^{\ast}\sim
p(\mathbf{y}|\mathbf{d}^{\ast},\bm{\theta}_{1,\text{true}})$. The prior
distribution, the trained neural network and the observation can then be used
to compute the posterior $p(\bm{\theta}|\mathbf{y}^{\ast},\mathbf{d}^{\ast})$
according to Section 3.2. The top row in Figure 3 shows the marginal
posteriors of the offset $\theta_{1,0}$ and slope $\theta_{1,1}$ for each
lower bound, averaged over $5{,}000$ ‘real-world’ observations. Also shown are
average reference posteriors computed with the same observations (see the supplementary material for how we compute these). All posterior distributions
cover the ground truth well, and their MAP estimate is close to the ground
truth. However, there is no noticeable difference between any of the shown
posterior distributions. The relative performance of each lower bound is more
apparent in Table 4, where we show the average KL-Divergence between estimated
and reference posteriors. The JSD lower bound results in posterior
distributions that are closest to the reference posteriors.
Table 4: Toy model average KL-Divergences ($\pm$ standard error) between estimated and reference posteriors for all scientific tasks and all lower bounds (lower values are better).

MI LB | PE | MD | MD/PE | FP
---|---|---|---|---
NWJ | 0.079 $\pm$ 0.003 | 0.016 $\pm$ 0.001 | 0.728 $\pm$ 0.010 | 0.009 $\pm$ 0.001
InfoNCE | 0.107 $\pm$ 0.003 | 0.018 $\pm$ 0.001 | 0.053 $\pm$ 0.001 | 0.012 $\pm$ 0.001
JSD | 0.028 $\pm$ 0.001 | 0.009 $\pm$ 0.001 | 0.026 $\pm$ 0.001 | 0.005 $\pm$ 0.001
#### 4.1.2 Model Discrimination
Next, we consider the task of model discrimination (MD), where the aim is to
distinguish between competing models and the variable of interest is the model
indicator $m\in\\{1,2,3\\}$. To sample data $\mathbf{y}\sim
p(\mathbf{y}|\mathbf{d},m)$, as required in Algorithm 1, we need to
marginalise over the model parameters $\bm{\theta}_{m}$. To do so, we first
obtain prior samples, use these to sample data $\mathbf{y}$ from the sampling
path in Equation 4.1 and then simply discard the $\bm{\theta}_{m}$ samples.
Figure 4 shows the training results for the MD task, using the NWJ (left
column), InfoNCE (middle column) and JSD (right column) lower bounds. Similar
to the PE task, we evaluate the JSD bound by using the learned density ratio
as an input to the NWJ lower bound. All lower bounds have a similar
convergence behaviour and lead to final MI estimates that are close to the
reference MI value. Similar to the PE task, all lower bounds find the same
optimal design $\mathbf{d}^{\ast}$, which consists of $3$ elements at $d=-2$,
$4$ elements at $d=0$ and $4$ elements at $d=2$. The additional design cluster
around $d=0$, as compared to the PE task, is useful for model discrimination
because of the large response of the logarithmic model ($m=2$). Although we
clipped $|\mathbf{d}|$ from below by $10^{-4}$ when $m=2$, this still means
that the response of the logarithmic model is significantly larger than that
of the linear model ($m=1$) and the square-root model ($m=3$) as $d\rightarrow
0$. This allows us to determine with relative ease whether or not observed
data was generated from the logarithmic model. Similarly, the other two clusters at the boundaries of the design domain are helpful because that is where the data distributions of all models differ most, which aids in distinguishing between them (see the supplementary material for a figure showing this).
Figure 4: MD results for the set of toy models. Shown are training curves for
the NWJ, InfoNCE and JSD (via NWJ) lower bounds. The top row shows lower bound
estimates as a function of training epochs, with the dotted line being the
reference MI value, and the bottom row shows the elements of the design vector
as it is being updated.
Evaluations of the lower bounds on validation sets are presented in Table 3,
showing that while all lower bounds perform well, the JSD lower bound results
in the highest MI lower bound estimate. We note that, even though all lower
bounds had the same neural network and design initialisations, the InfoNCE
lower bound in particular would sometimes find slightly different (locally)
optimal designs; we show an example of this in the supplementary materials.
Figure 5: MD posterior results for the set of toy models. Shown are average
posterior probabilities for different ground truth models (one per row) for
the NWJ, InfoNCE and JSD lower bounds.
Similar to the PE task, we can use the trained neural networks to find
posterior distributions over the model indicator $m$, which express our
posterior belief that a given model has generated the observed ‘real-world’
data $\mathbf{y}^{\ast}$. We generate three different sets of ‘real-world’
data, one for each of the three models being the underlying true model. We
also use the same model parameter ground truth
$\bm{\theta}_{m,\text{true}}=(2,3)$ for each model. Figure 5 shows average
posterior distributions for all lower bounds, including an average reference
posterior distribution. All methods result in extremely good model recovery,
meaning that they can, on average, confidently identify from which model the
data was generated. The difference in performance between the different lower
bounds becomes more pronounced when considering the average KL-Divergence
between estimated and reference posteriors, as shown in Table 4. The average
KL-Divergences show that the JSD lower bound results in posterior
distributions that are slightly closer to the reference than NWJ and InfoNCE.
#### 4.1.3 Joint MD/PE
Here we consider the joint task of model discrimination and parameter
estimation (MD/PE), i.e. a combination of the previous two tasks. This means
that our variable of interest is now the set of model indicator and
corresponding model parameters, i.e. $\bm{v}=(\bm{\theta}_{m},m)$. The MD/PE
results are similar to previous results seen for the separate parameter
estimation and model discrimination tasks. As such, we only show the relevant
training curves and posterior distributions in the supplementary materials.
The training curves look similar for all lower bounds and the optimal designs
$\mathbf{d}^{\ast}$ were also approximately the same. The elements of
$\mathbf{d}^{\ast}$ were clustered around $d=-2$, $d=0$ and $d=2$, similar to
those found in the MD task. In the MD/PE column in Table 3 we present final MI
estimates for each lower bound, evaluated on several validation data sets. The
JSD lower bound performs significantly better than the NWJ and InfoNCE lower
bounds, i.e. it is closer to the reference value. Similarly, Table 4 shows
that the posteriors estimated via JSD have, on average, a lower KL-Divergence
to reference posteriors, compared to the other lower bounds. Furthermore, the
NWJ lower bound results in an extremely large average KL-Divergence, which is
in part due to a poor parameter estimation for the logarithmic model (see the
supplementary materials for a plot).
#### 4.1.4 Improving Future Predictions for the Linear Model
Finally, we consider the task of improving future predictions (FP), as
discussed in Section 2, for the linear toy model. The aim here is to find
current optimal designs $\mathbf{d}^{\ast}$, and gather corresponding data
$\mathbf{y}^{\ast}$, that allow us to maximally improve our predictions of
future data $y_{T}$ gathered at a (fixed and known) future design $d_{T}$. Our
variable of interest is thus the future data $y_{T}$. As before, we construct
a 10-dimensional design vector $\mathbf{d}$ that has elements restricted to
the domain $d\in[-2,2]$, with a corresponding 10-dimensional data vector
$\mathbf{y}$. For simplicity, we assume that our future design is one-
dimensional and fixed at $d_{T}=4$, which is outside the domain of our current
design vector. This emulates a setting where, for instance, we know that we
will be able to make measurements in a specific geographical location of
interest but currently only have access to a different, limited region; gathering data in that region should then improve our predictions at the future measurement location.
This setting naturally assumes that the implicit model extrapolates well to
future designs $d_{T}$ when they are outside the current design domain.
The training curves for the NWJ, InfoNCE and JSD lower bound again show a very
similar convergence behaviour and yield the same optimal designs. As such, we
only show the training curves in the supplementary materials. The optimal
designs $\mathbf{d}^{\ast}$ have elements that are clustered equally at the boundaries, exactly as for the parameter estimation task. These designs are useful for improving future predictions, as they yield the best parameter estimates (see the PE section) and their absolute values are closest to that of the future design. Table 3 shows that the final MI estimates for each
lower bound are similar and nearly match the reference value, with the JSD
lower bound yielding a slightly closer estimate.
Figure 6: FP results for the linear toy model. Shown are average posterior predictive distributions for the NWJ, InfoNCE and JSD lower bounds. Also shown are the prior predictive and a histogram of ‘real-world’ samples at the future design.
For the FP task, we can use the trained neural networks to obtain a posterior
predictive distribution $p(y_{T}|d_{T},\mathbf{y}^{\ast},\mathbf{d}^{\ast})$
of our variable of interest $y_{T}|d_{T}$. To generate ‘real-world’ data
$\mathbf{y}^{\ast}$ at $\mathbf{d}^{\ast}$ we use ground-truth values of
$\bm{\theta}_{\text{true}}=(2,3)$ for the offset and slope, respectively. In
Figure 6 we show the average posterior predictive distributions, averaged over
$5{,}000$ different $\mathbf{y}^{\ast}$, for all lower bounds, including the
prior predictive distribution and a histogram showing samples of the data-
generating distribution of $y_{T}$ at $d_{T}$ with $\bm{\theta}_{\text{true}}$
(which is not a predictive distribution). The posterior predictive
distributions for all lower bounds are similar to each other and all have a
mode that matches that of the histogram of ‘real-world’ data at $d_{T}$. The
differences between the lower bounds become more noticeable when looking at
the average KL-Divergence values shown in Table 4. As for the other scientific
tasks, the JSD lower bound results in posteriors that have the lowest average
KL-Divergence to reference posteriors.
### 4.2 SDE Epidemiology Model
Figure 7: Example simulations of the SDE-based SIR (left) and SEIR (right)
models.
We here consider the spread of a disease within a population of $N$
individuals, modelled by stochastic versions of the well-known SIR (Allen,
2008) and SEIR (Lekone and Finkenstädt, 2006) models. In the SIR model,
indicated by $m=1$, individuals start in a susceptible state $S(t)$ and can
then move to an infectious state $I(t)$ with an infection rate of $\beta$.
These infectious individuals then move to a recovered state $R(t)$ with a
recovery rate of $\gamma$, after which they can no longer be infected. The SIR
model, governed by the state changes $S(t)\rightarrow I(t)\rightarrow R(t)$,
thus has two model parameters $\bm{\theta}_{1}=(\beta,\gamma)$. In the SEIR
model, indicated by $m=2$, susceptibles first move to an additional exposed
state $E(t)$, where individuals are infected but not yet infectious.
Afterwards, they move to the infectious state $I(t)$ with a rate of $\sigma$.
The SEIR model ($m=2$), governed by $S(t)\rightarrow E(t)\rightarrow
I(t)\rightarrow R(t)$, thus has three model parameters
$\bm{\theta}_{2}=(\beta,\sigma,\gamma)$. We further make the common assumption
that the total population size $N$ stays constant.
The stochastic versions of these epidemiological processes are usually defined
by a continuous-time Markov chain (CTMC), from which we can sample via the
Gillespie algorithm (see Allen, 2017). However, this generally yields discrete
population states that have undefined gradients. In order to test our
gradient-based algorithm, we thus resort to an alternative simulation
algorithm that uses stochastic differential equations (SDEs), where gradients
can be approximated. Figure 7 shows example simulations of the SDE-based SIR
and SEIR models, generated according to the method below.
We first define population vectors $\mathbf{X}_{1}=(S_{1}(t),I_{1}(t))^{\top}$
for the SIR model and $\mathbf{X}_{2}=(S_{2}(t),E_{2}(t),I_{2}(t))^{\top}$ for
the SEIR model. We can effectively ignore the population of recovered individuals because the total population is fixed, allowing us to, e.g., compute $R_{2}(t)=N-S_{2}(t)-E_{2}(t)-I_{2}(t)$ for the SEIR model. The system of Itô
SDEs for the above epidemiological processes is
$\mathrm{d}\mathbf{X}_{m}(t)=\mathbf{f}_{m}(\mathbf{X}_{m}(t))\mathrm{d}t+\mathbf{G}_{m}(\mathbf{X}_{m}(t))\mathrm{d}\mathbf{W}(t),$
(4.2)
where $m$ is a model indicator, $\mathbf{f}_{m}$ is called the drift vector,
$\mathbf{G}_{m}$ is called the diffusion matrix, and $\mathbf{W}(t)$ is a
vector of independent Wiener processes (also called Brownian motion). The
drift vector and diffusion matrix for both models are derived in detail in the
supplementary materials. Sample paths of the above SDEs can be simulated by
means of the Euler-Maruyama algorithm (see Allen, 2017), which is based on
finite differences and discretised Wiener processes. In our experiments we use
the torchsde package (Li et al., 2020) to do this efficiently.
Moreover, we assume that we only have access to discrete, noisy estimates of
the number of infectious individuals $I(t)$, as other population states might
be difficult to measure in reality. We thus sample a single noisy measurement
of $I_{m}(t)$ by
$y|t,m,\bm{\theta}_{m}\sim\text{Poisson}(y;\phi I_{m}(t)),$ (4.3)
where $\phi$ is assumed to be known and set to $0.95$ throughout our experiments (while we have not done so, our method would allow us to treat $\phi$ as a model parameter as well). This corresponds to the setting of a
known observational model described by Equation 3.12.
The design variable in our experiments is the measurement time $t$ in Equation
4.3. We generally wish to take several measurements at once and hence define a
design vector $\mathbf{d}=(t_{1},\dots,t_{D})$ with corresponding observations
$\mathbf{y}=(y_{1},\dots,y_{D})$, where $D$ is the number of measurements. In
our experiments, we use $N=500$ with initial conditions of
$\mathbf{X}_{1}(t=0)=(498,2,0)^{\top}$ for the SIR model and
$\mathbf{X}_{2}(t=0)=(498,0,2,0)^{\top}$ for the SEIR model. We also use a
discretised design space of $t\in[0,100]$ with resolution of $\Delta
t=10^{-2}$. For the model parameter priors we use
$p(\beta)=\text{Lognorm}(0.50,0.50^{2})$,
$p(\sigma)=\text{Lognorm}(0.20,0.50^{2})$ and
$p(\gamma)=\text{Lognorm}(0.10,0.50^{2})$. Further information about neural
network architectures, hyper-parameters and experimental details are given in
the supplementary materials.
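As an illustration of this simulation pipeline, the sketch below defines the SDE-based SIR model with the torchsde package and draws Poisson observations of $I(t)$ at a few measurement times (Equations 4.2 and 4.3). The drift and diffusion follow the standard diffusion approximation of the CTMC (the exact forms are derived in the supplementary materials), and the class and variable names are illustrative.

```python
import torch
import torchsde

class SIRSDE(torch.nn.Module):
    noise_type = "general"  # state-dependent, non-diagonal diffusion matrix
    sde_type = "ito"

    def __init__(self, beta, gamma, N):
        super().__init__()
        self.beta, self.gamma, self.N = beta, gamma, N

    def f(self, t, x):  # drift vector f(X) for X = (S, I)
        S, I = x[:, 0], x[:, 1]
        inf = self.beta * S * I / self.N
        return torch.stack([-inf, inf - self.gamma * I], dim=-1)

    def g(self, t, x):  # diffusion matrix G(X), shape (batch, 2 states, 2 noises)
        S, I = x[:, 0], x[:, 1]
        s_inf = torch.sqrt(torch.clamp(self.beta * S * I / self.N, min=0.0))
        s_rec = torch.sqrt(torch.clamp(self.gamma * I, min=0.0))
        zero = torch.zeros_like(S)
        return torch.stack([torch.stack([-s_inf, zero], dim=-1),
                            torch.stack([s_inf, -s_rec], dim=-1)], dim=-2)

sde = SIRSDE(beta=torch.tensor(0.5), gamma=torch.tensor(0.1), N=500.0)
x0 = torch.tensor([[498.0, 2.0]])                 # (S(0), I(0))
ts = torch.arange(0.0, 100.0 + 1e-2, 1e-2)        # design grid with resolution 1e-2
xs = torchsde.sdeint(sde, x0, ts, method="euler", dt=1e-2)  # Euler-Maruyama

d = torch.tensor([15.0, 25.0, 35.0])              # example measurement times
I_at_d = xs[(d / 1e-2).long(), 0, 1]              # I(t) at the design times
y = torch.poisson(0.95 * torch.clamp(I_at_d, min=0.0))  # Eq. 4.3 with phi = 0.95
```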
#### 4.2.1 Parameter Estimation for the SIR Model
Figure 8: PE results for the SDE-based SIR model with different experimental
budgets. The left plot shows validation MI estimates for different numbers of measurements, averaged over several validation data sets. The right plot shows
the corresponding optimal measurement times.
First, we consider the task of parameter estimation (PE) for the SIR model,
where the aim is to estimate its model parameters
$\bm{\theta}_{1}=(\beta,\gamma)$. We here use the JSD lower bound as an
objective function and test different experimental budgets of up to $10$
measurements. Figure 8 summarises the results for settings with different
budgets. The left plot shows validation estimates of the maximum MI at the
respective optimal designs (shown in the right plot), obtained through our
proposed method, for varying numbers of measurements $D$. The optimal designs
$\mathbf{d}^{\ast}$ were all in the region between $t=10$ and $t=40$, with a
slightly different spreading for different $D$. Intuitively, this region might
be useful because that is where the signal-to-noise ratio of the SIR observations is highest (see the supplementary materials for a plot showing this). As before in the PE task for the linear toy model, this means
that we are able to better measure the effect of model parameters on
observations. The maximum MI in the left plot of Figure 8 starts to saturate at
$D=10$ measurements, suggesting that more measurements may not improve
parameter estimation by much.
Figure 9: PE posterior distributions for the SDE-based SIR model with
different experimental budgets. The top left shows the prior distribution,
while the other plots show posteriors for an increasing number of
measurements. The posteriors are averaged over $5{,}000$ ‘real-world’
observations $\mathbf{y}^{\ast}$ generated with ground-truth $(0.60,0.15)$.
We can estimate posterior distributions of the model parameters
$(\beta,\gamma)$ by means of the trained neural network (see Section 3). To do
so, we generate ‘real-world’ observations $\mathbf{y}^{\ast}$ at
$\mathbf{d}^{\ast}$ using the ground truth
$\bm{\theta}_{\text{true}}=(0.60,0.15)$. Figure 9 shows the prior distribution
(top left) and average posterior distributions for different numbers of
measurements, where the average is taken over $5{,}000$ observations
$\mathbf{y}^{\ast}$. As we increase the number of measurements, the average
posterior naturally becomes more peaked and moves closer to the ground truth
parameters (indicated by a red cross). At $D=10$, the ground truth is well-
captured by the average posterior, with an average MAP estimate, taken over
all ‘real-world’ observations, of
$\widehat{\bm{\theta}}_{\text{MAP}}=(0.59,0.07)\pm(0.15,0.01)$, where the
errors indicate one standard deviation. All training curves, posterior
distributions and optimal designs are shown in the supplementary materials.
#### 4.2.2 Model Discrimination
Next, we consider the task of model discrimination (MD) between the SIR and
SEIR model. We again use the JSD lower bound as the utility function and test different experimental budgets of up to $10$ measurements. The results for this setting are summarised in Figure 10. The left plot shows optimal
designs for varying numbers of measurements $D$, obtained through our proposed
method. The optimal designs have elements that are clustered around smaller
measurement times between $t=15$ and $t=40$, similar to the optimal designs
for the PE task in Figure 8. These measurement times might be useful because
that is where the average signal-to-noise ratio is highest for both models and
the corresponding data distributions are significantly different (see the supplementary material for plots showing this). This means that we can gather
(relatively) low-noise data that can be related to the corresponding models
more effectively. We show corresponding validation estimates of the MI as a
function of $D$ in the supplementary materials. Similar to the PE task, the MI
tends to increase as the number of measurements is increased, although the
change is much more subtle for the MD task.
Figure 10: MD results for the SDE-based epidemiology models. The left plot
shows optimal experimental designs for varying number of measurements. The
other plots show distributions of posterior densities for $D=10$ measurements,
given that the SIR model (middle plot) or the SEIR model (right plot) is the
ground truth.
For the case of $D=10$ measurements, the middle and right plots in Figure 10
show distributions of posterior densities
$p(m|\mathbf{d}^{\ast},\mathbf{y}^{\ast})$ estimated using several
$\mathbf{y}^{\ast}$, with the middle plot assuming that SIR is the ground-
truth model and the right plot assuming that SEIR is the ground-truth model.
We can recover the ground-truth model effectively in both cases, with
corresponding F1-scores of $0.996$ and $0.983$. However, the model recovery is
slightly worse when the SEIR model is the ground truth, which we discuss
further in the supplementary materials.
## 5 Conclusions
In this work we have introduced a framework for Bayesian experimental design
with implicit models. Our approach finds optimal experimental designs by
maximising general lower bounds on mutual information that are parametrised by
neural networks. We have derived gradients of prominent lower bounds with
respect to the experimental designs, allowing us to perform stochastic
gradient-ascent. In doing so, we have provided a methodology to derive
gradients of general lower bounds on mutual information, which may be applied
to other lower bounds that are introduced in the future. Furthermore, our framework yields an amortised posterior distribution as a by-product, as long as the utilised lower bound facilitates this; posterior estimation is generally a challenging problem for implicit models, with an entire research field dedicated to it (see Lintusaari et al., 2017; Sisson et al., 2018; Cranmer et al., 2020 for recent overviews).
By means of a set of intractable toy models, we have provided a thorough
experimental comparison of several prominent lower bounds for a variety of
scientific tasks, showcasing the versatility of our proposed framework. We
have also applied our method to a challenging set of implicit models from
epidemiology, where the responses are discrete observations of the solutions
to stochastic differential equations. Our approach allowed us to efficiently
obtain optimal designs, leading to suitable posterior distributions that
capture ground-truths well. Throughout our experiments we have considered
popular scientific tasks such as parameter estimation, model discrimination
and improving future predictions. Importantly, we believe that our framework
would also be able to deal with other scientific tasks that are formulated
using mutual information.
Our approach requires us to have access to gradients of the sampling path with
respect to experimental designs, or that we can compute them by means of
automatic differentiation. There may be cases where these gradients are
unknown or undefined, e.g. when data is discrete or categorical. We note that
our framework can handle those cases when the corresponding observational
process has an analytically tractable model, but currently not otherwise.
Kleinegesse and Gutmann (2020b) have shown that it is still possible to use
gradient-free optimisation techniques to find optimal designs, following the
method explained in Section 3. However, these approaches do not scale well
with design dimensions, unlike gradient-based approaches (see Spall, 2003). It
would be interesting to explore how our method can be scalably adapted to
these situations.
In related work, Foster et al. (2019) perform gradient-based Bayesian
experimental design for implicit models by using variational approximations to
likelihoods or posteriors. This allows them to use score-function estimators,
as opposed to pathwise gradient estimators. Recently, Zhang et al. (2021) have
used evolutionary strategies to approximate gradients of the SMILE lower bound
in the context of Bayesian experimental design. There are several other
interesting methods that allow us to approximate gradients during the
optimisation procedure, e.g. finite-difference or simultaneous perturbation
stochastic approximations (Spall, 2003), that have not been investigated yet.
As such, investigating avenues that mitigate or relax the need to have access
to gradients of the sampling path may increase the applicability of our method
further.
We have applied our framework to the setting of static Bayesian experimental
design, where we wish to determine a set of optimal designs prior to actually
performing the experiment. Sequential Bayesian experimental design, however,
advocates that we should update our belief of the variable of interest as we
perform the experiment (see e.g. Kleinegesse et al., 2020a). It would be
interesting to apply our framework to this setting as well. In particular,
Foster et al. (2021) have recently introduced an approach that yields a design
network which directly outputs optimal designs and, as such, allows for
amortised sequential Bayesian experimental design, greatly increasing
practicability. While their method is designed for explicit models that have
known likelihoods, we believe that it would be interesting to apply our method
of maximising parametrised lower bounds to their method as well.
Furthermore, in this work we have mostly glossed over neural network architecture selection. However, we believe that there is a lot still to be
explored that may help during the training procedure, e.g. residual
connections, separable critics, pooling, convolutions, and other techniques
that leverage known structures. In particular, it may be useful to directly
incorporate a dependency on the designs into the neural network, e.g. by using
the current design as an input as well. This may strengthen the extrapolation
of the neural network, improving training speed and robustness. We also
believe that the interplay between the number of dimensions of the variable of
interest $\bm{v}$ and the data variable $\mathbf{y}$ is highly important. If
one is much larger than the other, the neural network
$T_{\bm{\psi}}(\bm{v},\mathbf{y})$ may have more difficulties learning the
appropriate density ratio. It may be useful to use a separable critic, or
automatically learn summary statistics that reduce the dimensionality while
retaining all the necessary information as e.g. recently done by Chen et al.
(2021) in the field of likelihood-free inference.
## Appendix A MI Lower bounds
### A.1 Relationship between the JSD lower bound and LFIRE
Likelihood-free inference by ratio estimation (LFIRE, Thomas et al., 2020)
aims to approximate the density ratio $r(x,\theta)$ of the data-generating
distribution $p(x|\theta)$ and the marginal distribution $p(x)$ for implicit
models, where $p(x|\theta)$ is intractable but sampling from it is still
possible. Using a known prior distribution $p(\theta)$, this learned density
ratio can then be used to compute the posterior distribution $p(\theta|x)$.
Importantly, LFIRE only requires samples $x^{\theta}_{i}\sim p(x|\theta)$ from
the data-generating distribution and samples $x^{m}_{i}\sim p(x)$ from the
marginal distribution.
This likelihood-free inference method works by formulating a classification
problem between data sampled from $p(x|\theta)$ and $p(x)$, and then solving
this via (non-linear) logistic regression. Using their notation, the loss
function that they minimise is
$\mathcal{J}(h,\theta)=\frac{1}{n_{\theta}+n_{m}}\left\\{\sum_{i=1}^{n_{\theta}}\log\left[1+\nu
e^{-h(x^{\theta}_{i})}\right]+\sum_{i=1}^{n_{m}}\log\left[1+\frac{1}{\nu}e^{h(x^{m}_{i})}\right]\right\\},$
(A.1)
where $n_{\theta}$ and $n_{m}$ are the number of samples from $p(x|\theta)$
and $p(x)$, respectively, $\nu=n_{\theta}/n_{m}$ is used to correct unequal
class sizes, and $h$ is a non-linear, parametrised function. The authors prove
that for large $n_{\theta}$ and $n_{m}$, the function $h^{\ast}$ that
minimises $\mathcal{J}(h,\theta)$ is given by the desired log-ratio, i.e.
$h^{\ast}(x,\theta)=\log r(x,\theta)$. Importantly, this result holds for any fixed $\theta$, and it therefore also holds if we consider the objective $\mathcal{J}(h,\theta)$ averaged over $\theta$.
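For concreteness, the LFIRE objective in Equation A.1 for equal class sizes ($\nu=1$) can be written in a few lines of PyTorch; `h` stands for any parametrised network evaluated at a fixed $\theta$, and the names are illustrative.

```python
import torch
import torch.nn.functional as F

# A sketch of Eq. A.1 with nu = n_theta / n_m = 1.
def lfire_loss(h, x_theta, x_marg):
    # log(1 + e^{-h(x)}) and log(1 + e^{h(x)}) via the numerically stable softplus
    term_joint = F.softplus(-h(x_theta)).sum()
    term_marg = F.softplus(h(x_marg)).sum()
    return (term_joint + term_marg) / (x_theta.shape[0] + x_marg.shape[0])
```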
Recall that the JSD lower bound presented in the main text is given by
$\mathcal{L}_{\text{JSD}}(\bm{\psi},\mathbf{d})\equiv\mathbb{E}_{p(\bm{v},\mathbf{y}\mid\mathbf{d})}\left[-\text{sp}(-T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]-\mathbb{E}_{p(\bm{v})p(\mathbf{y}\mid\mathbf{d})}\left[\text{sp}(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right],$
(A.2)
where $\text{sp}(z)=\log(1+e^{z})$ is the softplus function and
$T_{\bm{\psi}}(\bm{v},\mathbf{y})$ is a neural network with parameters
$\bm{\psi}$. The variables $\mathbf{y},\bm{v}$ and $\mathbf{d}$ are the data,
variable of interest and experimental designs, respectively. Our BED framework
works by maximising Equation A.2 with respect to $\bm{\psi}$ and $\mathbf{d}$
to find optimal designs $\mathbf{d}$. Let us here focus on the specific case
of fixed $\mathbf{d}$, i.e. only $\bm{\psi}$ is updated. We here aim to show
the strong relationship between the JSD lower bound and the objective used in
LFIRE.
First, let us make the notational changes $\bm{v}\rightarrow\theta$,
$\mathbf{y}|\mathbf{d}\rightarrow x$ and $T_{\bm{\psi}}\rightarrow h$, in
order to match the notation of Thomas et al. (2020). Equation A.2 then becomes
$\displaystyle\mathcal{L}_{\text{JSD}}(h)$
$\displaystyle=\mathbb{E}_{p(x,\theta)}\left[-\text{sp}(-h(x,\theta))\right]-\mathbb{E}_{p(x)p(\theta)}\left[\text{sp}(h(x,\theta))\right]$
(A.3)
$\displaystyle=\mathbb{E}_{p(x,\theta)}\left[-\log\left(1+e^{-h(x,\theta)}\right)\right]-\mathbb{E}_{p(x)p(\theta)}\left[\log\left(1+e^{h(x,\theta)}\right)\right].$
(A.4)
Writing $p(x,\theta)=p(x|\theta)p(\theta)$ we can move the expectation
$\mathbb{E}_{p(\theta)}$ outside, as it is present in both terms in the above
equation. The resulting JSD lower bound can then be seen as an expectation of
the LFIRE objective in Equation A.1, with equal class sizes $n_{\theta}=n_{m}$
and large number of samples $n_{m}$, $n_{\theta}\rightarrow\infty$, i.e.
$\displaystyle\mathcal{L}_{\text{JSD}}(h)$
$\displaystyle=-\mathbb{E}_{p(\theta)}\left\\{\mathbb{E}_{p(x\mid\theta)}\left[\log\left(1+e^{-h(x,\theta)}\right)\right]+\mathbb{E}_{p(x)}\left[\log\left(1+e^{h(x,\theta)}\right)\right]\right\\}$
(A.5)
$\displaystyle=-\mathbb{E}_{p(\theta)}\left\\{\mathcal{J}(h,\theta)\right\\}.$
(A.6)
Recall that LFIRE is used to learn the density ratio $r(x,\theta)$ for a
particular value of $\theta$ (with an amortisation in $x$). As explained in
the main text, a neural network trained with the JSD lower bound can be used
to directly compute the log density ratio $\log r(x,\theta)$, but with
amortisation in $\theta$ as well as $x$. This fact and Equation A.6 imply that
maximising the JSD lower bound can be seen as minimising the LFIRE objective
in an amortised fashion. While LFIRE and the JSD lower bound have existed in the literature for quite some time, we believe that this strong relationship between them has not previously been noted. Moreover, LFIRE has been used
before in the context of Bayesian experimental design for implicit models
(Kleinegesse and Gutmann, 2019; Kleinegesse et al., 2020a).
### A.2 Derivation of lower bound gradients
Using the methodology presented in Section 3.3 of the main text, we here
provide detailed derivations of the lower bound gradients shown in Table 2 of
the same section. Recall that mutual information lower bounds in literature
generally involve expectations
$\mathbb{E}_{p(\bm{v},\mathbf{y}\mid\mathbf{d})}\left[f(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]$
over the joint and/or expectations
$\mathbb{E}_{p(\bm{v})p(\mathbf{y}\mid\mathbf{d})}\left[g(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]$
over the marginal distribution, where $f:\mathbb{R}\rightarrow\mathbb{R}$ and
$g:\mathbb{R}\rightarrow\mathbb{R}$ are non-linear, differentiable functions.
As shown in Equations 3.10 and 3.11 in the main text, we can compute gradients
of these expectations by means of pathwise gradient estimators (i.e. the
reparametrisation trick).
In the interest of space, let us define
$T=T_{\bm{\psi}}(\bm{v},\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d}))$,
$\widetilde{T}=T_{\bm{\psi}}(\bm{v},\mathbf{h}(\bm{\epsilon};\widetilde{\bm{v}},\mathbf{d}))$,
as well as $P=p(\bm{v})p(\bm{\epsilon})$ and
$Q=p(\bm{v})p(\widetilde{\bm{v}})p(\bm{\epsilon})$, where
$p(\widetilde{\bm{v}})$ is the same as $p(\bm{v})$. Importantly, $P$ and $Q$
do not depend on $\mathbf{d}$. Our methodology in the main text requires us to
first compute $f^{\prime}(T)=\partial f(T)/\partial T$ and
$g^{\prime}(\widetilde{T})=\partial g(\widetilde{T})/\partial\widetilde{T}$ in
order to compute gradients of lower bounds with respect to designs. Below we
show step-by-step derivations of the gradients for all lower bounds shown in
Table 2 of the main text.
##### NWJ
Defined in Equation 3.2 in the main text, the NWJ lower bound involves the
functions $f(T)=T$ and $g(\widetilde{T})=e^{\widetilde{T}-1}$, which yield the
corresponding gradients $\nabla_{T}f(T)=1$ and
$\nabla_{\widetilde{T}}g(\widetilde{T})=e^{\widetilde{T}-1}$. The desired
gradients of the NWJ lower bound with respect to designs $\mathbf{d}$ are then
computed as follows,
$\displaystyle\nabla_{\mathbf{d}}\mathcal{L}_{\text{NWJ}}(\bm{\psi},\mathbf{d})$
$\displaystyle=\nabla_{\mathbf{d}}\left\\{\mathbb{E}_{P}\left[f(T)\right]-\mathbb{E}_{Q}\left[g(\widetilde{T})\right]\right\\}$
(A.7)
$\displaystyle=\mathbb{E}_{P}\left[\nabla_{\mathbf{d}}f(T)\right]-\mathbb{E}_{Q}\left[\nabla_{\mathbf{d}}g(\widetilde{T})\right]$
(A.8)
$\displaystyle=\mathbb{E}_{P}\left[\nabla_{T}f(T)\nabla_{\mathbf{d}}T\right]-\mathbb{E}_{Q}\left[\nabla_{\widetilde{T}}g(\widetilde{T})\nabla_{\mathbf{d}}\widetilde{T}\right]$
(A.9)
$\displaystyle=\mathbb{E}_{P}\left[\nabla_{\mathbf{d}}T\right]-\mathbb{E}_{Q}\left[e^{\widetilde{T}-1}\nabla_{\mathbf{d}}\widetilde{T}\right],$
(A.10)
where $\nabla_{\mathbf{d}}T$ and $\nabla_{\mathbf{d}}\widetilde{T}$ are given
by Equation 3.12 in the main text.
##### InfoNCE
This lower bound, defined in Equation 3.3 in the main text, only involves an
expectation over the joint distribution and not the marginal distribution,
i.e. $g(T)=0$. In addition to the previous shorthands, let us also define
$T_{ij}=T_{\bm{\psi}}(\bm{v}_{j},h(\bm{\epsilon}_{i};\bm{v}_{i},\mathbf{d}))$.
In this particular case, it is simpler to directly start with the gradient
with respect to $\mathbf{d}$, yielding
$\displaystyle\nabla_{\mathbf{d}}\mathcal{L}_{\text{NCE}}(\bm{\psi},\mathbf{d})$
$\displaystyle=\nabla_{\mathbf{d}}\mathbb{E}_{P^{K}}\left[\frac{1}{K}\sum_{i=1}^{K}\log{\frac{e^{T_{ii}}}{\frac{1}{K}\sum_{j=1}^{K}e^{T_{ij}}}}\right]$
(A.11)
$\displaystyle=\mathbb{E}_{P^{K}}\left[\frac{1}{K}\sum_{i=1}^{K}\nabla_{\mathbf{d}}\log{\frac{e^{T_{ii}}}{\frac{1}{K}\sum_{j=1}^{K}e^{T_{ij}}}}\right]$
(A.12)
$\displaystyle=\mathbb{E}_{P^{K}}\left[\frac{1}{K}\sum_{i=1}^{K}\nabla_{\mathbf{d}}T_{ii}-\nabla_{\mathbf{d}}\log{\sum_{j=1}^{K}e^{T_{ij}}}+\nabla_{\mathbf{d}}\log{K}\right]$
(A.13)
$\displaystyle=\mathbb{E}_{P^{K}}\left[\frac{1}{K}\sum_{i=1}^{K}\nabla_{\mathbf{d}}T_{ii}-\nabla_{\mathbf{d}}\log{\sum_{j=1}^{K}e^{T_{ij}}}\right]$
(A.14)
$\displaystyle=\mathbb{E}_{P^{K}}\left[\frac{1}{K}\sum_{i=1}^{K}\nabla_{\mathbf{d}}T_{ii}-\frac{\sum_{j=1}^{K}e^{T_{ij}}\nabla_{\mathbf{d}}T_{ij}}{\sum_{j=1}^{K}e^{T_{ij}}}\right]$
(A.15)
$\displaystyle=\mathbb{E}_{P^{K}}\left[\frac{1}{K}\sum_{i=1}^{K}\frac{\sum_{j=1}^{K}e^{T_{ij}}(\nabla_{\mathbf{d}}T_{ii}-\nabla_{\mathbf{d}}T_{ij})}{\sum_{j=1}^{K}e^{T_{ij}}}\right],$
(A.16)
where $\nabla_{\mathbf{d}}T_{ii}$ and $\nabla_{\mathbf{d}}T_{ij}$ are again
given by Equation 3.12 in the main text. As explained in the main text, the above formulation yields an optimal critic
$T^{\ast}_{\bm{\psi}}(\bm{v},\mathbf{y})=\log{p(\mathbf{y}|\bm{v},\mathbf{d})}+c(\mathbf{y}|\mathbf{d})$,
where $c(\mathbf{y}|\mathbf{d})$ is an indeterminate function.
##### JSD
The JSD lower bound is defined in Equation 3.4 in the main text and involves
expectations of the non-linear functions $f(T)=-\text{sp}(-T)$ and
$g(\widetilde{T})=\text{sp}(\widetilde{T})$, where
$\text{sp}(z)=\log\left(1+e^{z}\right)$ is the softplus function. The
derivative of the softplus function is simply the logistic sigmoid function,
i.e. $\nabla_{z}\text{sp}(z)=1/(1+e^{-z})\equiv\sigma(z)$, resulting in the
gradients $\nabla_{T}f(T)=\sigma(-T)$ and
$\nabla_{\widetilde{T}}g(\widetilde{T})=\sigma(\widetilde{T})$. This allows us
to compute the gradients of the JSD lower bound with respect to experimental
designs,
$\displaystyle\nabla_{\mathbf{d}}\mathcal{L}_{\text{JSD}}(\bm{\psi},\mathbf{d})$
$\displaystyle=\nabla_{\mathbf{d}}\left\\{\mathbb{E}_{P}\left[f(T)\right]-\mathbb{E}_{Q}\left[g(\widetilde{T})\right]\right\\}$
(A.17)
$\displaystyle=\mathbb{E}_{P}\left[\nabla_{\mathbf{d}}f(T)\right]-\mathbb{E}_{Q}\left[\nabla_{\mathbf{d}}g(\widetilde{T})\right]$
(A.18)
$\displaystyle=\mathbb{E}_{P}\left[\nabla_{T}f(T)\nabla_{\mathbf{d}}T\right]-\mathbb{E}_{Q}\left[\nabla_{\widetilde{T}}g(\widetilde{T})\nabla_{\mathbf{d}}\widetilde{T}\right]$
(A.19)
$\displaystyle=\mathbb{E}_{P}\left[\sigma(-T)\nabla_{\mathbf{d}}T\right]-\mathbb{E}_{Q}\left[\sigma(\widetilde{T})\nabla_{\mathbf{d}}\widetilde{T}\right],$
(A.20)
where $\nabla_{\mathbf{d}}T$ and $\nabla_{\mathbf{d}}\widetilde{T}$ are again
computed using Equation 3.12 in the main text.
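In practice, these gradients need not be implemented by hand: writing a Monte-Carlo estimate of the JSD bound with the reparametrised sampling path and differentiating it with automatic differentiation reproduces the estimator in Equation A.20. Below is a minimal PyTorch sketch for the linear toy path; the critic architecture, batch size and learning rates are illustrative rather than our exact experimental settings.

```python
import torch
import torch.nn.functional as F

critic = torch.nn.Sequential(              # illustrative T_psi(v, y), v in R^2, y in R^10
    torch.nn.Linear(12, 50), torch.nn.ReLU(),
    torch.nn.Linear(50, 50), torch.nn.ReLU(),
    torch.nn.Linear(50, 1))

def T(v, y):
    return critic(torch.cat([v, y], dim=-1)).squeeze(-1)

d = torch.zeros(10, requires_grad=True)    # design vector to be optimised
opt = torch.optim.Adam([{"params": critic.parameters(), "lr": 1e-4},
                        {"params": [d], "lr": 1e-3}])

for epoch in range(1000):
    v = 3.0 * torch.randn(512, 2)          # v ~ p(v), prior N(0, 3^2 I)
    v_marg = 3.0 * torch.randn(512, 2)     # independent prior draws for the marginal term
    eps = torch.randn(512, 10) + torch.distributions.Gamma(2.0, 2.0).sample((512, 10))
    y = v[:, :1] + v[:, 1:] * d + eps      # reparametrised path: y depends on d
    # L_JSD = E_joint[-sp(-T(v, y))] - E_marginal[sp(T(v~, y))]
    jsd = (-F.softplus(-T(v, y))).mean() - F.softplus(T(v_marg, y)).mean()
    opt.zero_grad()
    (-jsd).backward()                      # autograd realises the Eq. A.20 estimator
    opt.step()
    with torch.no_grad():
        d.clamp_(-2.0, 2.0)                # keep designs inside the domain [-2, 2]
```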
### A.3 Gradient estimator for analytically tractable observation models
We here provide the gradient estimator of mutual information lower bounds in
situations where it is possible to leverage an analytically tractable
observation model, as discussed in Section 3.4 of the main text. Recall that
we assume that we can separate the data generation into an observation process
with a model $p(\mathbf{y}|\bm{v},\mathbf{d},\bm{z})$ that is known
analytically in closed form and a differentiable, latent process
$p(\bm{z}|\bm{v},\mathbf{d})$. We can reparametrise the latent variable using
its sampling path, i.e. $\bm{z}=\bm{h}(\bm{\epsilon};\bm{v},\mathbf{d})$,
where the noise random variable $\bm{\epsilon}$ defines the sampling path.
This allows us to use score-function estimators to compute the required
gradients.
The score function estimator works by re-writing the derivative of
$p(\mathbf{y}|\bm{v},\mathbf{d},\bm{z})$ using the definition of the
logarithmic derivative, i.e.
$\nabla_{\mathbf{d}}p(\mathbf{y}|\bm{v},\mathbf{d},\bm{z})=p(\mathbf{y}|\bm{v},\mathbf{d},\bm{z})\nabla_{\mathbf{d}}\log
p(\mathbf{y}|\bm{v},\mathbf{d},\bm{z}).$ (A.21)
Following the explanations in the main text, we can then compute the gradient
of the expectation of $f(T_{\bm{\psi}}(\bm{v},\mathbf{y}))$ as follows,
$\displaystyle\nabla_{\mathbf{d}}$
$\displaystyle\mathbb{E}_{p(\bm{v},\mathbf{y}\mid\mathbf{d})}\left[f(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]$
(A.22)
$\displaystyle=\nabla_{\mathbf{d}}\mathbb{E}_{p(\mathbf{y}|\bm{v},\mathbf{d},\bm{z})p(\bm{z}|\bm{v},\mathbf{d})p(\bm{v})}\left[f(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]$
(A.23)
$\displaystyle=\nabla_{\mathbf{d}}\mathbb{E}_{p(\mathbf{y}|\bm{v},\mathbf{d},\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d}))p(\bm{\epsilon})p(\bm{v})}\left[f(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]$
(A.24)
$\displaystyle=\int\mathbb{E}_{p(\bm{\epsilon})p(\bm{v})}\left[\nabla_{\mathbf{d}}\\{p(\mathbf{y}|\bm{v},\mathbf{d},\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d}))\\}f(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]\mathrm{d}\mathbf{y}$
(A.25)
$\displaystyle=\mathbb{E}_{p(\mathbf{y}|\bm{v},\mathbf{d},\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d}))p(\bm{\epsilon})p(\bm{v})}\left[\nabla_{\mathbf{d}}\\{\log
p(\mathbf{y}|\bm{v},\mathbf{d},\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d}))\\}f(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]$
(A.26)
The last equation above can then simply be approximated using a Monte-Carlo
sample average. Note that when computing the log derivative
$\nabla_{\mathbf{d}}\\{\log
p(\mathbf{y}|\bm{v},\mathbf{d},\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d}))\\}$
we generally require the gradients of the sampling path
$\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d})$ as well. Using Equation 3.9 in
the main text, we can derive a similar equation for the derivative of the
expectation of $g(T_{\bm{\psi}}(\bm{v},\mathbf{y}))$ over the marginal
distribution,
$\displaystyle\nabla_{\mathbf{d}}$
$\displaystyle\mathbb{E}_{p(\bm{v})p(\mathbf{y}\mid\mathbf{d})}\left[g(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]$
(A.27)
$\displaystyle=\nabla_{\mathbf{d}}\mathbb{E}_{p(\mathbf{y}|\widetilde{\bm{v}},\mathbf{d})p(\widetilde{\bm{v}})p(\bm{v})}\left[g(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]$
(A.28)
$\displaystyle=\nabla_{\mathbf{d}}\mathbb{E}_{p(\mathbf{y}|\widetilde{\bm{v}},\mathbf{d},\bm{z})p(\bm{z}|\widetilde{\bm{v}},\mathbf{d})p(\widetilde{\bm{v}})p(\bm{v})}\left[g(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]$
(A.29)
$\displaystyle=\nabla_{\mathbf{d}}\mathbb{E}_{p(\mathbf{y}|\widetilde{\bm{v}},\mathbf{d},\mathbf{h}(\bm{\epsilon};\widetilde{\bm{v}},\mathbf{d}))p(\bm{\epsilon})p(\widetilde{\bm{v}})p(\bm{v})}\left[g(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]$
(A.30)
$\displaystyle=\int\mathbb{E}_{p(\bm{\epsilon})p(\widetilde{\bm{v}})p(\bm{v})}\left[\nabla_{\mathbf{d}}\\{p(\mathbf{y}|\widetilde{\bm{v}},\mathbf{d},\mathbf{h}(\bm{\epsilon};\widetilde{\bm{v}},\mathbf{d}))\\}g(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right]\mathrm{d}\mathbf{y}$
(A.31)
$\displaystyle=\mathbb{E}_{p(\mathbf{y}|\widetilde{\bm{v}},\mathbf{d},\mathbf{h}(\bm{\epsilon};\widetilde{\bm{v}},\mathbf{d}))p(\bm{\epsilon})p(\widetilde{\bm{v}})p(\bm{v})}\left[\nabla_{\mathbf{d}}\\{\log
p(\mathbf{y}|\widetilde{\bm{v}},\mathbf{d},\mathbf{h}(\bm{\epsilon};\widetilde{\bm{v}},\mathbf{d}))\\}g(T_{\bm{\psi}}(\bm{v},\mathbf{y}))\right],$
(A.32)
where the last equation above can conveniently be approximated using a Monte-
Carlo sample average. Equations A.26 and A.32 do not require differentiating
$f$ or $g$ with respect to the neural network output. This allows us to
quickly write down gradients of the prominent lower bounds presented in the
main text (NWJ, InfoNCE and JSD) when the observation model is known
analytically in closed form. This yields Table 5 in a similar manner to Table
2 in the main text.
Table 5: Gradients of several lower bounds with respect to experimental designs, when the observation model is analytically tractable. We define $T=T_{\bm{\psi}}(\bm{v},\mathbf{y})$ and $T_{ij}=T_{\bm{\psi}}(\bm{v}_{j},\mathbf{y}_{i})$, as well as the shorthands $q_{\mathbf{y}}=p(\mathbf{y}|\bm{v},\mathbf{d},\mathbf{h}(\bm{\epsilon};\bm{v},\mathbf{d}))$, $\widetilde{q}_{\mathbf{y}}=p(\mathbf{y}|\widetilde{\bm{v}},\mathbf{d},\mathbf{h}(\bm{\epsilon};\widetilde{\bm{v}},\mathbf{d}))$, $P=q_{\mathbf{y}}p(\bm{\epsilon})p(\bm{v})$ and $Q=\widetilde{q}_{\mathbf{y}}p(\bm{\epsilon})p(\widetilde{\bm{v}})p(\bm{v})$. Here, $\text{sp}(T)$ is the softplus function.

Lower Bound | Gradients with respect to designs
---|---
NWJ | $\mathbb{E}_{P}\left[\nabla_{\mathbf{d}}\\{\log q_{\mathbf{y}}\\}T\right]-e^{-1}\mathbb{E}_{Q}\left[\nabla_{\mathbf{d}}\\{\log\widetilde{q}_{\mathbf{y}}\\}e^{T}\right]$
InfoNCE | $\mathbb{E}_{P^{K}}\left[\nabla_{\mathbf{d}}\\{\log q_{\mathbf{y}}\\}\frac{1}{K}\sum_{i=1}^{K}\log{\frac{\exp(T_{ii})}{\frac{1}{K}\sum_{j=1}^{K}\exp(T_{ij})}}\right]$
JSD | $\mathbb{E}_{P}\left[-\nabla_{\mathbf{d}}\\{\log q_{\mathbf{y}}\\}\text{sp}(-T)\right]-\mathbb{E}_{Q}\left[\nabla_{\mathbf{d}}\\{\log\widetilde{q}_{\mathbf{y}}\\}\text{sp}(T)\right]$
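As a concrete instance of this estimator, the sketch below builds the surrogate loss whose automatic-differentiation gradient is a Monte-Carlo estimate of the NWJ joint term in Table 5, using the Poisson observation model of Equation 4.3. The names `critic` and `latent_I` (a hypothetical differentiable latent process evaluated at the measurement times) are illustrative assumptions.

```python
import torch

def nwj_joint_surrogate(critic, latent_I, v, d, phi=0.95):
    eps = torch.randn(v.shape[0], d.shape[0])     # reparametrised latent noise
    I = latent_I(eps, v, d)                       # differentiable in d
    rate = phi * torch.clamp(I, min=1e-6)
    y = torch.poisson(rate).detach()              # discrete sample; no gradient path
    log_q = torch.distributions.Poisson(rate).log_prob(y).sum(-1)
    f_T = critic(v, y).detach()                   # f(T) = T for the NWJ joint term
    # Differentiating this mean w.r.t. d yields E_P[grad_d{log q_y} T].
    return (log_q * f_T).mean()
```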
## Appendix B Toy experiments
### B.1 Reference computations
In order to compute reference values of the mutual information (MI) and
posteriors, we rely on an approximation of the likelihood
$p(\mathbf{y}|\bm{\theta}_{m},m,\mathbf{d})$ that is based on kernel density
estimation (KDE), where $m$ is a model indicator and $\bm{\theta}_{m}$ are the
corresponding model parameters.
As described in Equation 4.1 of the main text, the sampling paths of all toy
models have additive Gaussian noise $\epsilon\sim\mathcal{N}(\epsilon;0,1)$
and additive Gamma noise $\nu\sim\Gamma(\nu;2,2)$. The overall noise
distribution $p_{\text{noise}}$ of $\epsilon+\nu$ is given by a convolution of
the individual densities and could be computed via numerical integration. We
here opt for an alternative approach, namely to compute the overall noise
distribution by a KDE of $50{,}000$ samples of $\epsilon$ and $\nu$. Once we
have fitted a KDE to noise samples and obtained an estimate
$\widehat{p}_{\text{noise}}$, we can re-arrange the sampling paths, which then
allows us to compute estimates of the likelihood. For the set of toy models we
here make the assumption that performing experiments with certain designs does
not change the data-generating distribution, thereby allowing us to write
$\widehat{p}(\mathbf{y}|\bm{\theta}_{m},m,\mathbf{d})=\prod_{j=1}^{D}\widehat{p}(y_{j}|\bm{\theta}_{m},m,d_{j}),$
(B.1)
where $y_{j}$ and $d_{j}$ are the elements of $\mathbf{y}$ and $\mathbf{d}$,
respectively. We can then estimate the likelihood for each dimension of $\mathbf{d}$ separately and multiply the results. Using the
estimated noise distribution, the individual likelihood estimates
$\widehat{p}(y_{j}|\bm{\theta}_{m},m,d_{j})$ for the linear ($m=1$),
logarithmic ($m=2$) and square-root ($m=3$) toy models are,
$\widehat{p}(y_{j}|\bm{\theta}_{m},m,d_{j})=\begin{cases}\widehat{p}_{\text{noise}}(y_{j}-(\theta_{1,0}+\theta_{1,1}d_{j}))&\text{if }m=1,\\ \widehat{p}_{\text{noise}}(y_{j}-(\theta_{2,0}+\theta_{2,1}\log(|d_{j}|)))&\text{if }m=2,\\ \widehat{p}_{\text{noise}}(y_{j}-(\theta_{3,0}+\theta_{3,1}\sqrt{|d_{j}|}))&\text{if }m=3.\end{cases}$ (B.2)
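As an illustration, a minimal sketch of this KDE-based likelihood approximation (Equations B.1 and B.2); reading $\Gamma(\nu;2,2)$ as shape-rate (scale $1/2$) and the helper names are our assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 50_000) + rng.gamma(2.0, 0.5, 50_000)
p_noise = gaussian_kde(noise)  # \hat{p}_noise for eps + nu

def loglik(y, theta, d, m=1):
    """log p-hat(y | theta, m, d) for a design vector d, via Eqs. B.1/B.2."""
    mean = {1: theta[0] + theta[1] * d,
            2: theta[0] + theta[1] * np.log(np.maximum(np.abs(d), 1e-4)),
            3: theta[0] + theta[1] * np.sqrt(np.abs(d))}[m]
    return float(np.sum(np.log(p_noise(y - mean))))
```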
Below we explain how to use this likelihood approximation to compute reference
MI and posteriors for each of the scientific tasks considered in our paper.
Note that, in the main text, the equations of mutual information are generally
given in terms of the posterior to prior ratio. We here compute reference MI
values by estimating the ratio of likelihood to marginal, which is equivalent
to the posterior to prior ratio by Bayes’ rule.
##### Parameter Estimation
Here we only consider the linear model, where $m=1$, with the aim of solving
the task of estimating the model parameters $\bm{\theta}_{1}$, i.e. the slope
and offset of a straight line. The analytic MI is given by Equation 2.5 in the
main text and we here approximate it with a nested Monte-Carlo (MC) sample
average and the approximate likelihoods in Equation B.2, i.e.
$\widehat{I}(\bm{\theta};\mathbf{y}|\mathbf{d})\approx\frac{1}{N}\sum_{i=1}^{N}\log\frac{\widehat{p}(\mathbf{y}^{(i)}|\mathbf{d},\bm{\theta}_{1}^{(i)},m=1)}{\frac{1}{K}\sum_{k=1}^{K}\widehat{p}(\mathbf{y}^{(i)}|\mathbf{d},\bm{\theta}_{1}^{(k)},m=1)},$
(B.3)
where $\mathbf{y}^{(i)}\sim
p(\mathbf{y}|\mathbf{d},\bm{\theta}_{1}^{(i)},m=1)$,
$\bm{\theta}_{1}^{(i)}\sim p(\bm{\theta}_{1})$ and $\bm{\theta}_{1}^{(k)}\sim
p(\bm{\theta}_{1})$. The Gaussian prior distributions for each model are
provided in the main text. To compute the above estimate, we use $N=2{,}000$
and $K=500$. We can also use the approximate likelihood and Bayes’ rule to
compute a reference posterior,
$\widehat{p}(\bm{\theta}_{1}|\mathbf{d},\mathbf{y})=p(\bm{\theta}_{1})\frac{\widehat{p}(\mathbf{y}|\mathbf{d},\bm{\theta}_{1},m=1)}{\widehat{p}(\mathbf{y}|\mathbf{d})},$
(B.4)
where the marginal
$\widehat{p}(\mathbf{y}|\mathbf{d})=\int\widehat{p}(\mathbf{y}|\mathbf{d},\bm{\theta}_{1},m=1)p(\bm{\theta}_{1})\mathrm{d}\bm{\theta}_{1}$
can be computed with a MC sample average or via numerical integration. We here
use numerical integration by means of Simpson’s rule because of the low
dimensions of $\bm{\theta}_{1}$.
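The estimator in Equation B.3 can then be sketched as a nested Monte-Carlo loop, reusing `loglik` from the sketch above (the sampler details here are illustrative).

```python
import numpy as np
from scipy.special import logsumexp

def reference_mi_pe(d, N=2000, K=500, rng=np.random.default_rng(1)):
    outer = rng.normal(0.0, 3.0, size=(N, 2))       # theta^(i) ~ N(0, 3^2 I)
    inner = rng.normal(0.0, 3.0, size=(K, 2))       # theta^(k) ~ N(0, 3^2 I)
    total = 0.0
    for th in outer:
        noise = rng.normal(0, 1, d.shape) + rng.gamma(2.0, 0.5, d.shape)
        y = th[0] + th[1] * d + noise               # y^(i) for the linear model
        log_num = loglik(y, th, d)
        log_den = logsumexp([loglik(y, tk, d) for tk in inner]) - np.log(K)
        total += log_num - log_den
    return total / N

mi_hat = reference_mi_pe(np.linspace(-2.0, 2.0, 10))  # example call
```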
##### Model Discrimination
We can approximate the analytic MI in Equation 2.6 of the main text in a
similar manner. We use a nested MC sample average, sum over the model
indicators $m$ and approximate the relevant likelihoods using Equation B.2,
yielding
$\widehat{I}(m;\mathbf{y}|\mathbf{d})\approx\sum_{m=1}^{3}p(m)\frac{1}{N}\sum_{i=1}^{N}\log\frac{\widehat{p}(\mathbf{y}^{(i)}|\mathbf{d},m)}{\widehat{p}(\mathbf{y}^{(i)}|\mathbf{d})},$
(B.5)
where $\mathbf{y}^{(i)}\sim p(\mathbf{y}|m,\mathbf{d})$ and we use
$N=3{,}000$. We approximate the density
$\widehat{p}(\mathbf{y}^{(i)}|m,\mathbf{d})$ with a MC sample average as well,
i.e.
$\widehat{p}(\mathbf{y}^{(i)}|\mathbf{d},m)=\frac{1}{K}\sum_{k=1}^{K}\widehat{p}(\mathbf{y}^{(i)}|\bm{\theta}_{m}^{(k)},m,\mathbf{d}),\quad\bm{\theta}_{m}^{(k)}\sim
p(\bm{\theta}_{m}),$ (B.6)
where we use $K=1000$. The density $\widehat{p}(\mathbf{y}^{(i)}|\mathbf{d})$
is computed by summing over $m$,
$\widehat{p}(\mathbf{y}^{(i)}|\mathbf{d})=\sum_{m=1}^{3}p(m)\,\widehat{p}(\mathbf{y}^{(i)}|\mathbf{d},m).$
(B.7)
Using the above approximate densities and Bayes’ rule we can then compute a
reference posterior over the model indicator as well, i.e.
$\widehat{p}(m|\mathbf{d},\mathbf{y})=p(m)\frac{\widehat{p}(\mathbf{y}|\mathbf{d},m)}{\widehat{p}(\mathbf{y}|\mathbf{d})}.$
(B.8)
##### Joint MD/PE
The analytic MI for the joint MD/PE task is shown in Equation 2.7 of the main
text. A nested MC sample average of that utility is
$\widehat{I}(\bm{\theta}_{m},m;\mathbf{y}|\mathbf{d})\approx\sum_{m=1}^{3}p(m)\frac{1}{N}\sum_{i=1}^{N}\log\frac{\widehat{p}(\mathbf{y}^{(i)}|\bm{\theta}_{m}^{(i)},m,\mathbf{d})}{\widehat{p}(\mathbf{y}^{(i)}|\mathbf{d})},$
(B.9)
where $\mathbf{y}^{(i)}\sim p(\mathbf{y}|\bm{\theta}_{m}^{(i)},m,\mathbf{d})$
and $\bm{\theta}_{m}^{(i)}\sim p(\bm{\theta}_{m})$. The marginal density
$\widehat{p}(\mathbf{y}^{(i)}|\mathbf{d})$ is computed by summing over $m$ and
using a MC sample average over $\bm{\theta}_{m}$, i.e.
$\widehat{p}(\mathbf{y}^{(i)}|\mathbf{d})=\sum_{m=1}^{3}p(m)\frac{1}{K}\sum_{k=1}^{K}\widehat{p}(\mathbf{y}^{(i)}|\bm{\theta}_{m}^{(k)},m,\mathbf{d}),\quad\bm{\theta}_{m}^{(k)}\sim p(\bm{\theta}_{m}).$ (B.10)
In our computations we use $N=2{,}000$ and $K=500$. As before, we can compute
a reference joint posterior by using Bayes’ rule:
$\widehat{p}(\bm{\theta}_{m},m|\mathbf{d},\mathbf{y})=p(m)\frac{\widehat{p}(\mathbf{y}|\bm{\theta}_{m},m,\mathbf{d})}{\widehat{p}(\mathbf{y}|\mathbf{d})}.$
(B.11)
##### Improving Future Predictions
We here consider the task of improving future predictions for the linear toy
model ($m=1$). In order to approximate the analytic MI shown in Equation 2.8
of the main text, we need to compute approximations of the posterior
predictive distribution $\widehat{p}(y_{T}|\mathbf{y},\mathbf{d},d_{T})$,
which is given by
$\widehat{p}(y_{T}|\mathbf{y},\mathbf{d},d_{T})=\int\widehat{p}(y_{T}|\bm{\theta}_{1},d_{T})\widehat{p}(\bm{\theta}_{1}|\mathbf{y},\mathbf{d})\mathrm{d}\bm{\theta}_{1},$
(B.12)
where $\widehat{p}(y_{T}|\bm{\theta}_{1},d_{T})$ is computed using Equation
B.2 and $\widehat{p}(\bm{\theta}_{1}|\mathbf{y},\mathbf{d})$ is computed using
Equation B.4. We then approximate
$\widehat{p}(y_{T}|\mathbf{y},\mathbf{d},d_{T})$ by numerically integrating
over a grid of $\bm{\theta}_{1}$. We found that a $50\times 50$ grid of
$\bm{\theta}_{1}\in[-10,10]^{2}$ was sufficient. In the same manner, we
approximate the prior predictive
$\widehat{p}(y_{T}|d_{T})=\int\widehat{p}(y_{T}|\bm{\theta}_{1},d_{T})p(\bm{\theta}_{1})\mathrm{d}\bm{\theta}_{1}$
by numerical integration. Being able to compute the posterior predictive for a
certain $\\{\mathbf{d},\mathbf{y}\\}$ then allows us to approximate the
analytic MI using a combination of numerical integration and MC sample
averages, i.e.
$\widehat{I}(y_{T}|d_{T};\mathbf{y}|\mathbf{d})\approx\frac{1}{N}\sum_{i=1}^{N}\int\widehat{p}(y_{T}|\mathbf{y}^{(i)},\mathbf{d},d_{T})\log\frac{\widehat{p}(y_{T}|\mathbf{y}^{(i)},\mathbf{d},d_{T})}{\widehat{p}(y_{T}|d_{T})}\mathrm{d}y_{T},$
where $\mathbf{y}^{(i)}\sim p(\mathbf{y}|\mathbf{d})$. We used $N=800$ and
numerically integrated over $y_{T}\in[-50,50]$ using Simpson’s rule with a
grid of size $100$.
### B.2 Data distributions
Figure 11: Toy model data distributions (top), average signal-to-noise ratio
(SNR, middle) and Jensen-Shannon (JS) divergences between different data
distributions (bottom).
In Figure 11 we summarise general information about data simulated from each
toy model presented in the main text, i.e. the linear model, the log model and
the square-root model. The top row shows the prior predictive distribution for
each model, where the solid lines represent the means and the shaded areas
indicate one standard deviation from the means. While the linear and square-
root model have a similar data distribution, the log model prior predictive is
significantly different, rapidly increasing as $d$ approaches $0$. This
signifies in which regions the data distributions are most different, under
our prior belief about the model parameters $\bm{\theta}_{m}$, i.e. at the
boundaries $d=-2,2$ and at $d=0$. The bottom plot, which shows the Jensen-
Shannon (JS) divergence between the data distributions of different toy
models, further emphasises this difference. This is in accordance with the
optimal designs found for the toy model MD and MD/PE tasks in the main text.
The middle row of Figure 11 shows the average signal-to-noise ratio (SNR) for
each of the toy models. The SNR, for a single $\bm{\theta}_{m}$, is given by
$\text{SNR}(d|\bm{\theta}_{m})=\left(\frac{\mu(d|\bm{\theta}_{m})}{\sigma(d|\bm{\theta}_{m})}\right)^{2},$
(B.13)
where $\mu(d|\bm{\theta}_{m})$ and $\sigma(d|\bm{\theta}_{m})$ are the mean and standard deviation of the model response $y|\bm{\theta}_{m},d$, respectively, computed over the noise distribution. The average SNR is then
$\overline{\text{SNR}}(d)=\mathbb{E}_{p(\bm{\theta})}[\text{SNR}(d|\bm{\theta}_{m})]$.
While the linear and square-root model have a relatively high average SNR at
the boundaries, the log model has an extremely high average SNR at $d=0$. A
high SNR is useful for parameter estimation, because the observed data
essentially has less _relative_ noise, allowing us to better identify which
values of $\bm{\theta}_{m}$ might have generated observations. This is again
in accordance with results found in the main text.
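For reference, a small Monte-Carlo sketch of the average SNR in Equation B.13 for the linear model; the sample sizes and the shape-rate reading of the Gamma noise are our assumptions.

```python
import numpy as np

def average_snr_linear(d, n_theta=1000, n_noise=2000, rng=np.random.default_rng(2)):
    snr = []
    for th in rng.normal(0.0, 3.0, size=(n_theta, 2)):
        noise = rng.normal(0, 1, n_noise) + rng.gamma(2.0, 0.5, n_noise)
        y = th[0] + th[1] * d + noise               # response at a scalar design d
        snr.append((y.mean() / y.std()) ** 2)       # SNR(d | theta), Eq. B.13
    return float(np.mean(snr))                      # average over the prior
```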
### B.3 Architectures and hyper-parameters
In Table 6 we show neural network (NN) architectures, including the number of
hidden layers and number of hidden units, as well as the learning rates (L.R.)
for the NN parameters $\bm{\psi}$ and experimental designs $\mathbf{d}$. We
optimise $\bm{\psi}$ and $\mathbf{d}$ with two separate Adam optimisers, with default parameters from the PyTorch package in Python. Each epoch, we simulate
$10{,}000$ new data samples.
Table 6: Toy model hyper-parameters for each scientific goal.

Scientific Goal | NN Layers | NN Units | L.R. for $\bm{\psi}$ | L.R. for $\mathbf{d}$
---|---|---|---|---
PE | 2 | 50 | $10^{-4}$ | $10^{-3}$
MD | 2 | 50 | $10^{-3}$ | $10^{-4}$
MD/PE | 2 | 70 | $10^{-4}$ | $5\times 10^{-4}$
FP | 2 | 100 | $10^{-3}$ | $10^{-2}$
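As referenced above, a minimal sketch of the two-optimiser setup could look as follows; the critic network and the simulator are placeholders, not our exact modules.

```python
# Sketch of the joint optimisation of network parameters psi and designs d
# with two separate Adam optimisers, using the PE learning rates of Table 6.
# The critic, simulator and loss are placeholders for the actual MI bound.
import torch

D = 4                                            # experimental budget (assumed)
critic = torch.nn.Sequential(                    # 2 hidden layers, 50 units
    torch.nn.Linear(2 * D, 50), torch.nn.ReLU(),
    torch.nn.Linear(50, 50), torch.nn.ReLU(),
    torch.nn.Linear(50, 1),
)
designs = torch.nn.Parameter(torch.zeros(D))     # the design vector d

opt_psi = torch.optim.Adam(critic.parameters(), lr=1e-4)   # L.R. for psi
opt_d = torch.optim.Adam([designs], lr=1e-3)                # L.R. for d

for epoch in range(100):
    theta = torch.randn(10_000, D)               # placeholder simulator:
    y = theta * designs + torch.randn(10_000, D) # y depends on d differentiably
    loss = -critic(torch.cat([y, theta], dim=1)).mean()  # stand-in for -MI bound
    opt_psi.zero_grad(); opt_d.zero_grad()
    loss.backward()
    opt_psi.step(); opt_d.step()
```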
### B.4 Additional results
Here we show additional results for the toy model experiments that were
mentioned in the main text. Figure 12 shows a setting where the goal is MD and
the InfoNCE lower bound found different optimal experimental designs than the NWJ and JSD lower bounds. This is slightly surprising, as the neural networks
for each lower bound used the same architecture and the same initialisation.
Because different optimal designs were found, the final MI estimate for the
InfoNCE lower bound is lower than that of the NWJ and JSD lower bounds.
Importantly, this implies that the choice of lower bound does matter, as
different optimal designs may be found. In our work we do not, however, aim to
explore which lower bound is best, but rather show how different lower bounds
may be used in our proposed framework.
Figure 12: MD training curves when InfoNCE yields a different locally optimal design. The top row shows lower bounds as a function of training epochs, with the dotted line being the reference MI at the final optimal design of the respective lower bound. The bottom row shows the corresponding design elements as the design vector is being updated.

Figure 13: MD/PE results for the set of toy models. Shown are training curves for the NWJ, InfoNCE and JSD (via NWJ) lower bounds. The top row shows lower bound estimates as a function of training epochs, with the dotted line being the reference MI value, and the bottom row shows the elements of the design vector as it is being updated.

Figure 14: MD/PE posterior results of the model indicator for the set of toy models. Shown are average posterior probabilities for different ground truths (one per row) for the NWJ, InfoNCE and JSD lower bounds.
In Figure 13 we show the training curves of each lower bound (top row) for the
joint MD/PE task, as well as the experimental designs as they are being
updated (bottom row). All lower bounds yield the same optimal design and
converge to a final MI close to a reference MI (with the NWJ bound being
further away). Using the trained neural networks for each lower bound we can
compute posterior distributions, as described in the main text. Figure 14
shows marginal posterior distributions over the model indicator $m$ and Figure
15 shows marginal posterior distributions over model parameters
$\bm{\theta}_{m}$. Note that we show average posterior distributions over
$5{,}000$ ‘real-world’ observations
$\mathbf{y}^{\ast}|\mathbf{d}^{\ast},\bm{\theta}_{m,\text{true}}$, where
$\bm{\theta}_{m,\text{true}}=(2,3)$ for all models. Finally, we show the training curves of each lower bound for the FP task (top row), as well as the corresponding design curves (bottom row), in Figure 16.
Figure 15: MD/PE posterior results of the model parameters for all toy models. Shown are average marginal posteriors for all lower bounds, including the prior and an average reference posterior.

Figure 16: FP results for the linear toy model. Shown are training curves for the NWJ, InfoNCE and JSD (via NWJ) lower bounds. The top row shows lower bound estimates as a function of training epochs, with the dotted line being the reference MI value, and the bottom row shows the elements of the design vector as it is being updated.
## Appendix C Epidemiology experiments
### C.1 Discrete state continuous time Markov chain formulation
We start by defining the SIR and SEIR models in terms of continuous time
Markov chains (CTMC), where time is continuous and the state populations are
discrete.
Recall that the SIR model (e.g. Allen, 2008) is governed by the state changes
$S_{1}(t)\rightarrow I_{1}(t)\rightarrow R_{1}(t)$, where $S_{1}(t)$ are the
number of susceptible individuals, $I_{1}(t)$ are the number of infectious
individuals and $R_{1}(t)$ are the individuals that have recovered and can no
longer be infected. The dynamics of these state changes are determined by the
model parameters $\bm{\theta}_{1}=(\beta,\gamma)$, where $\beta$ is the
infection rate and $\gamma$ is the recovery rate. As done in the main text,
let us define a population vector
$\mathbf{X}_{1}(t)=(S_{1}(t),I_{1}(t))^{\top}$ (we can ignore the population of recovered individuals here because we assume that the total population $N$ stays constant). The state changes $\Delta\mathbf{X}_{1}(t)$ of
the SIR model within a small time frame $\Delta t$ are then affected by two
events: _infection_, where $\Delta\mathbf{X}_{1}(\Delta t)=(-1,+1)^{\top}$, and _recovery_, where $\Delta\mathbf{X}_{1}(\Delta t)=(0,-1)^{\top}$. Denoting the $i$-th event by $(\Delta\mathbf{X}_{1}(t))^{i}$, Table 7 summarises these state changes and their corresponding transition probabilities, which define the behaviour of the CTMC SIR model. Here $o(x)$ in Tables 7 and 8 denotes a function that goes to zero faster than $x$, i.e. $\lim_{x\to 0}o(x)/x=0$.
Table 7: CTMC SIR model state changes and their probabilities. Event | State Change | Probability
---|---|---
Infection | $(\Delta\mathbf{X}_{1}(t))^{1}=(-1,+1)^{\top}$ | $p_{1}=\beta\frac{S_{1}(t)I_{1}(t)}{N}\Delta t+o(\Delta t)$
Recovery | $(\Delta\mathbf{X}_{1}(t))^{2}=(0,-1)^{\top}$ | $p_{2}=\gamma I_{1}(t)\Delta t+o(\Delta t)$
The SEIR model (Lekone and Finkenstädt, 2006) introduces an exposed state
$E_{2}(t)$, where individuals are infected but not yet infectious, and is
governed by the state changes $S_{2}(t)\rightarrow E_{2}(t)\rightarrow
I_{2}(t)\rightarrow R_{2}(t)$. The dynamics of these changes are determined by
the model parameters $\bm{\theta}_{2}=(\beta,\gamma,\sigma)$, where $\beta$
and $\gamma$ are as before and $\sigma^{-1}$ is the average incubation period.
We again define a population vector
$\mathbf{X}_{2}(t)=(S_{2}(t),E_{2}(t),I_{2}(t))^{\top}$. The state changes
$\Delta\mathbf{X}_{2}(t)$ of the SEIR model are affected by three events:
_infection_, _becoming infectious_ and _recovery_. These state changes,
denoted by $(\Delta\mathbf{X}_{2}(t))^{i}$, and their corresponding
probabilities, which together define the CTMC SEIR model, are summarised in
Table 8.
Table 8: CTMC SEIR model state changes and their probabilities. Event | State Change | Probability
---|---|---
Infection | $(\Delta\mathbf{X}_{2}(t))^{1}=(-1,+1,0)^{\top}$ | $p_{1}=\beta\frac{S_{2}(t)I_{2}(t)}{N}\Delta t+o(\Delta t)$
Becoming infectious | $(\Delta\mathbf{X}_{2}(t))^{2}=(0,-1,+1)^{\top}$ | $p_{2}=\sigma E_{2}(t)\Delta t+o(\Delta t)$
Recovery | $(\Delta\mathbf{X}_{2}(t))^{3}=(0,0,-1)^{\top}$ | $p_{3}=\gamma I_{2}(t)\Delta t+o(\Delta t)$
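The transition probabilities in Tables 7 and 8 translate directly into an exact event-driven simulation. Below is a minimal Gillespie-style sketch for the CTMC SIR model; the parameter values are illustrative and not the priors used in our experiments.

```python
# Minimal Gillespie simulation of the CTMC SIR model of Table 7.
# Event rates: infection beta*S*I/N, recovery gamma*I. Parameter values
# are illustrative only.
import numpy as np

def gillespie_sir(beta=0.5, gamma=0.1, S0=490, I0=10, N=500, t_max=100.0, seed=0):
    rng = np.random.default_rng(seed)
    t, S, I = 0.0, S0, I0
    path = [(t, S, I)]
    while t < t_max and I > 0:
        r_inf = beta * S * I / N                 # infection rate (Table 7)
        r_rec = gamma * I                        # recovery rate (Table 7)
        r_tot = r_inf + r_rec
        t += rng.exponential(1.0 / r_tot)        # waiting time to next event
        if rng.random() < r_inf / r_tot:
            S, I = S - 1, I + 1                  # state change (-1, +1)
        else:
            I -= 1                               # state change (0, -1)
        path.append((t, S, I))
    return np.array(path)

path = gillespie_sir()
print("final state (t, S, I):", path[-1])
```

The SEIR model of Table 8 is simulated analogously, with a third event (becoming infectious, at rate $\sigma E_{2}(t)$) added to the rate vector.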
### C.2 Diffusion Approximations
Based on the CTMC formulation of the SIR and SEIR models in Table 7 and Table
8, we can derive continuous-state diffusion approximations of these
epidemiological processes using the method of Allen et al. (2008), which
results in systems of stochastic differential equations (SDEs).
In the main text we wrote down the system of Itô SDEs as
$\mathrm{d}\mathbf{X}_{m}(t)=\mathbf{f}_{m}(\mathbf{X}_{m}(t))\mathrm{d}t+\mathbf{G}_{m}(\mathbf{X}_{m}(t))\mathrm{d}\mathbf{W}(t),$
(C.1)
where $m$ is a model indicator, $\mathbf{f}_{m}$ the drift vector,
$\mathbf{G}_{m}$ the diffusion matrix, and $\mathbf{W}(t)$ is a vector of
independent Wiener processes. The drift vector equals the infinitesimal mean
$\lim_{\mathrm{d}t\to 0}\mathbb{E}(\mathrm{d}\mathbf{X}_{m}(t))/\mathrm{d}t$
of the diffusion, and $\mathbf{G}_{m}$ defines the infinitesimal variance
$\lim_{\mathrm{d}t\to
0}\mathbb{V}\text{ar}(\mathrm{d}\mathbf{X}_{m}(t))/\mathrm{d}t=\mathbf{G}_{m}\mathbf{G}_{m}^{\top}$.
They can be chosen such that the diffusion approximation matches the infinitesimal mean and variance of the discrete-state continuous-time Markov chain models. Allen et al. (2008) provide a systematic way of doing this, which we
shall follow below.
Let us first consider the CTMC SIR model summarised in Table 7. The drift
vector is chosen to match the expected change in $\mathbf{X}_{1}(t)$ of the
CTMC formulation, i.e.
$\displaystyle\mathbf{f_{1}}(\mathbf{X}_{1}(t))$ $\displaystyle=\lim_{\Delta
t\to 0}\frac{1}{\Delta t}\mathbb{E}\left[\Delta\mathbf{X}_{1}(t)\right]$ (C.2)
$\displaystyle=\lim_{\Delta t\to 0}\frac{1}{\Delta
t}\sum_{i=1}^{2}p_{i}(\Delta\mathbf{X}_{1}(t))^{i}$ (C.3)
$\displaystyle=\beta\frac{S_{1}(t)I_{1}(t)}{N}\begin{pmatrix}-1\\\
1\end{pmatrix}+\gamma I_{1}(t)\begin{pmatrix}0\\\ -1\end{pmatrix}$ (C.4)
$\displaystyle=\begin{pmatrix}-\beta\frac{S_{1}(t)I_{1}(t)}{N}\\\\[10.00002pt]
\beta\frac{S_{1}(t)I_{1}(t)}{N}-\gamma I_{1}(t)\\\\[10.00002pt]
\end{pmatrix}.$ (C.5)
In order to derive the diffusion matrix $\mathbf{G}_{1}$, let us first define
a matrix $\bm{\Lambda}_{1}$ whose rows correspond to the state changes
$(\Delta\mathbf{X}_{1}(t))^{i}$ in Table 7, i.e.
$\bm{\Lambda}_{1}=\begin{pmatrix}-1&+1&\\\ 0&-1&\\\ \end{pmatrix}.$ (C.6)
Allen et al. (2008) show that the elements $G_{1,ij}$ of the diffusion matrix
$\mathbf{G}_{1}$ are then given by
$G_{1,ij}=\Lambda_{ji}\,p_{j}^{1/2},$ (C.7)
allowing us to derive the full diffusion matrix using Table 7 and Equation
C.7, i.e.
$\displaystyle\mathbf{G}_{1}(\mathbf{X}_{1}(t))$
$\displaystyle=\begin{pmatrix}\Lambda_{11}\,p_{1}^{1/2}&\Lambda_{21}\,p_{2}^{1/2}\\\
\Lambda_{12}\,p_{1}^{1/2}&\Lambda_{22}\,p_{2}^{1/2}\\\ \end{pmatrix}$ (C.8)
$\displaystyle=\begin{pmatrix}-\sqrt{\beta\frac{S_{1}(t)I_{1}(t)}{N}}&0\\\\[10.00002pt]
\sqrt{\beta\frac{S_{1}(t)I_{1}(t)}{N}}&-\sqrt{\gamma I_{1}(t)}\\\\[10.00002pt]
\end{pmatrix}.$ (C.9)
We can check that this diffusion matrix matches the infinitesimal variance of
the CTMC SIR model, i.e.
$\displaystyle\lim_{\Delta t\to
0}\frac{\mathbb{V}\text{ar}(\Delta\mathbf{X}_{1}(t))}{\Delta t}$
$\displaystyle=\lim_{\Delta t\to 0}\frac{1}{\Delta
t}\sum_{i=1}^{2}p_{i}(\Delta\mathbf{X}_{1}(t))^{i}(\Delta\mathbf{X}_{1}(t))^{i,\top}-\lim_{\Delta
t\to 0}\frac{1}{\Delta
t}\mathbb{E}\left[\Delta\mathbf{X}_{1}(t)\right]\mathbb{E}\left[\Delta\mathbf{X}_{1}(t)\right]^{\top}$
$\displaystyle=\lim_{\Delta t\to 0}\frac{1}{\Delta t}\sum_{i=1}^{2}p_{i}(\Delta\mathbf{X}_{1}(t))^{i}(\Delta\mathbf{X}_{1}(t))^{i,\top}-\lim_{\Delta t\to 0}\frac{o(\Delta t)}{\Delta t}$ (C.10)
$\displaystyle=\beta\frac{S_{1}(t)I_{1}(t)}{N}\begin{pmatrix}-1\\\
1\end{pmatrix}\begin{pmatrix}-1&1\end{pmatrix}+\gamma
I_{1}(t)\begin{pmatrix}0\\\ -1\end{pmatrix}\begin{pmatrix}0&-1\end{pmatrix}$
(C.11)
$\displaystyle=\begin{pmatrix}\beta\frac{S_{1}(t)I_{1}(t)}{N}&-\beta\frac{S_{1}(t)I_{1}(t)}{N}\\\\[10.00002pt]
-\beta\frac{S_{1}(t)I_{1}(t)}{N}&\beta\frac{S_{1}(t)I_{1}(t)}{N}+\gamma
I_{1}(t)\\\\[10.00002pt] \end{pmatrix},$ (C.12)
which exactly corresponds to $\mathbf{G}_{1}\mathbf{G}_{1}^{\top}$, as
required. We note that the diffusion matrix $\mathbf{G}_{1}$ is not uniquely
defined, as we only wish to match the variance. For instance, we could
multiply $\mathbf{G}_{1}$ by any orthogonal matrix and the product
$\mathbf{G}_{1}\mathbf{G}_{1}^{\top}$ remains unchanged.
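To illustrate how the drift (C.5) and the diffusion matrix (C.9) are used in practice, the following is a minimal Euler–Maruyama sketch of the SIR diffusion; the step size and parameters are illustrative, and we clip the state to keep the square roots well defined.

```python
# Euler-Maruyama simulation of the SIR diffusion approximation,
# dX = f1(X) dt + G1(X) dW, with f1 from (C.5) and G1 from (C.9).
# Step size and parameter values are illustrative only.
import numpy as np

def drift(S, I, beta, gamma, N):
    inf = beta * S * I / N
    return np.array([-inf, inf - gamma * I])     # f1 from (C.5)

def diffusion(S, I, beta, gamma, N):
    a = np.sqrt(max(beta * S * I / N, 0.0))
    b = np.sqrt(max(gamma * I, 0.0))
    return np.array([[-a, 0.0], [a, -b]])        # G1 from (C.9)

def euler_maruyama(beta=0.5, gamma=0.1, N=500, X0=(490.0, 10.0),
                   dt=0.01, t_max=100.0, seed=0):
    rng = np.random.default_rng(seed)
    X = np.array(X0)
    traj = [X.copy()]
    for _ in range(int(t_max / dt)):
        dW = rng.normal(scale=np.sqrt(dt), size=2)   # Wiener increments
        X = X + drift(*X, beta, gamma, N) * dt + diffusion(*X, beta, gamma, N) @ dW
        X = np.clip(X, 0.0, N)                   # keep populations in [0, N]
        traj.append(X.copy())
    return np.array(traj)

traj = euler_maruyama()
print("final (S, I):", traj[-1])
```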
We now follow the same procedure for the SEIR model. To do so, let us consider
the state changes and corresponding probabilities summarised in Table 8. As
done for the SIR model, given these state changes and probabilities, we can
derive the drift vector by computing the expected change in
$\mathbf{X}_{2}(t)$,
$\displaystyle\mathbf{f_{2}}(\mathbf{X}_{2}(t))$ $\displaystyle=\lim_{\Delta
t\to 0}\frac{1}{\Delta t}\mathbb{E}\left[\Delta\mathbf{X}_{2}(t)\right]$
(C.13) $\displaystyle=\lim_{\Delta t\to 0}\frac{1}{\Delta
t}\sum_{i=1}^{3}p_{i}(\Delta\mathbf{X}_{2}(t))^{i}$ (C.14)
$\displaystyle=\beta\frac{S_{2}(t)I_{2}(t)}{N}\begin{pmatrix}-1\\\ 1\\\
0\end{pmatrix}+\sigma E_{2}(t)\begin{pmatrix}0\\\ -1\\\ 1\end{pmatrix}+\gamma
I_{2}(t)\begin{pmatrix}0\\\ 0\\\ -1\end{pmatrix}$ (C.15)
$\displaystyle=\begin{pmatrix}-\beta\frac{S_{2}(t)I_{2}(t)}{N}\\\\[10.00002pt]
\beta\frac{S_{2}(t)I_{2}(t)}{N}-\sigma E_{2}(t)\\\\[10.00002pt] \sigma
E_{2}(t)-\gamma I_{2}(t)\\\\[10.00002pt] \end{pmatrix}.$ (C.16)
In order to derive the diffusion matrix $\mathbf{G}_{2}$, we define a matrix
$\bm{\Lambda}_{2}$ whose rows correspond to the state changes in Table 8, i.e.
$\bm{\Lambda}_{2}=\begin{pmatrix}-1&+1&0\\\ 0&-1&+1\\\ 0&0&-1\\\
\end{pmatrix}.$ (C.17)
Substituting $\bm{\Lambda}_{2}$ in Equation C.7 and using Table 8 then allows
us to derive $\mathbf{G}_{2}$,
$\displaystyle\mathbf{G}_{2}(\mathbf{X}_{2}(t))$
$\displaystyle=\begin{pmatrix}\Lambda_{11}\,p_{1}^{1/2}&\Lambda_{21}\,p_{2}^{1/2}&\Lambda_{31}\,p_{3}^{1/2}\\\
\Lambda_{12}\,p_{1}^{1/2}&\Lambda_{22}\,p_{2}^{1/2}&\Lambda_{32}\,p_{3}^{1/2}\\\
\Lambda_{13}\,p_{1}^{1/2}&\Lambda_{23}\,p_{2}^{1/2}&\Lambda_{33}\,p_{3}^{1/2}\\\
\end{pmatrix}$ (C.18)
$\displaystyle=\begin{pmatrix}-\sqrt{\beta\frac{S_{2}(t)I_{2}(t)}{N}}&0&0\\\
\sqrt{\beta\frac{S_{2}(t)I_{2}(t)}{N}}&-\sqrt{\sigma E_{2}(t)}&0\\\
0&\sqrt{\sigma E_{2}(t)}&-\sqrt{\gamma I_{2}(t)}\\\ \end{pmatrix}.$ (C.19)
We can again check that $\mathbf{G}_{2}$ matches the infinitesimal variance of the CTMC SEIR model,
$\displaystyle\lim_{\Delta t\to
0}\frac{\mathbb{V}\text{ar}(\Delta\mathbf{X}_{2}(t))}{\Delta t}$
$\displaystyle=\lim_{\Delta t\to 0}\frac{1}{\Delta
t}\sum_{i=1}^{3}p_{i}(\Delta\mathbf{X}_{2}(t))^{i}(\Delta\mathbf{X}_{2}(t))^{i,\top}-\lim_{\Delta
t\to 0}\frac{1}{\Delta
t}\mathbb{E}\left[\Delta\mathbf{X}_{2}(t)\right]\mathbb{E}\left[\Delta\mathbf{X}_{2}(t)\right]^{\top}$
$\displaystyle=\lim_{\Delta t\to 0}\frac{1}{\Delta t}\sum_{i=1}^{3}p_{i}(\Delta\mathbf{X}_{2}(t))^{i}(\Delta\mathbf{X}_{2}(t))^{i,\top}-\lim_{\Delta t\to 0}\frac{o(\Delta t)}{\Delta t}$ (C.20) $\displaystyle\begin{aligned}
=\beta\frac{S_{2}(t)I_{2}(t)}{N}&\begin{pmatrix}-1\\\ 1\\\
0\end{pmatrix}\begin{pmatrix}-1&1&0\end{pmatrix}\\\ &+\sigma
E_{2}(t)\begin{pmatrix}0\\\ -1\\\
1\end{pmatrix}\begin{pmatrix}0&-1&1\end{pmatrix}\\\ &+\gamma
I_{2}(t)\begin{pmatrix}0\\\ 0\\\
-1\end{pmatrix}\begin{pmatrix}0&0&-1\end{pmatrix}\end{aligned}$ (C.21)
$\displaystyle=\begin{pmatrix}\beta\frac{S_{2}(t)I_{2}(t)}{N}&-\beta\frac{S_{2}(t)I_{2}(t)}{N}&0\\\\[10.00002pt]
-\beta\frac{S_{2}(t)I_{2}(t)}{N}&\beta\frac{S_{2}(t)I_{2}(t)}{N}+\sigma
E_{2}(t)&-\sigma E_{2}(t)\\\\[10.00002pt] 0&-\sigma E_{2}(t)&\sigma
E_{2}(t)+\gamma I_{2}(t)\\\ \end{pmatrix},$ (C.22)
which exactly corresponds to $\mathbf{G}_{2}\mathbf{G}_{2}^{\top}$, as
required. However, as previously said, the diffusion matrix $\mathbf{G}_{2}$
is not uniquely defined.
### C.3 Architectures and hyper-parameters
We list the hyper-parameters for the parameter estimation (PE) and model
discrimination (MD) task of the SDE-based epidemiology experiments in Table 9
and Table 10, respectively, for all experimental budgets $D$ that were
presented in the main text. Shown are neural network (NN) architectures,
including the number of hidden layers and number of hidden units, as well as
the learning rates (L.R.) for the NN parameters $\bm{\psi}$ and experimental
designs $\mathbf{d}$. We optimise $\bm{\psi}$ and $\mathbf{d}$ with two
separate Adam optimisers with default parameters from the PyTorch package in
Python. We pre-simulate $20{,}000$ SDE solutions on a fine time grid. During
training time we can then access solutions, and their gradients, at a specific
time point by simply looking up the solutions corresponding to the nearest
point in the time grid (a minimal sketch of this lookup is given after Table 10). We found that this was generally much faster than simulating/solving the SDEs on the fly, at the cost of higher memory usage.
Table 9: PE hyper-parameters for the SDE-based SIR model, for varying number of measurements $D$ (i.e. experimental budget). Budget $D$ | NN Layers | NN Units | L.R. for $\bm{\psi}$ | L.R. for $\mathbf{d}$
---|---|---|---|---
1 | 2 | 20 | $10^{-4}$ | $3\times 10^{-2}$
2 | 3 | 20 | $10^{-4}$ | $3\times 10^{-2}$
3 | 4 | 20 | $10^{-4}$ | $3\times 10^{-2}$
5 | 5 | 20 | $3\times 10^{-4}$ | $3\times 10^{-2}$
10 | 3 | 30 | $10^{-4}$ | $10^{-2}$
Table 10: MD hyper-parameters for the SDE-based SIR and SEIR models, for varying number of measurements $D$ (i.e. experimental budget). Budget $D$ | NN Layers | NN Units | L.R. for $\bm{\psi}$ | L.R. for $\mathbf{d}$
---|---|---|---|---
1 | 2 | 20 | $10^{-4}$ | $3\times 10^{-2}$
2 | 3 | 20 | $10^{-4}$ | $3\times 10^{-2}$
3 | 4 | 20 | $10^{-4}$ | $3\times 10^{-2}$
5 | 5 | 20 | $3\times 10^{-4}$ | $3\times 10^{-2}$
10 | 3 | 30 | $10^{-4}$ | $10^{-2}$
20 | 3 | 30 | $10^{-4}$ | $10^{-2}$
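As referenced above, a minimal sketch of the nearest-grid-point lookup could look as follows. The shapes are placeholders (and smaller than the $20{,}000$ pre-simulated solutions used in practice, to keep the sketch light); the gradient access we also use in practice would additionally require stored derivatives or interpolation, which we omit here.

```python
# Sketch of the pre-simulation lookup: solve the SDEs once on a fine time
# grid, then index the nearest grid point at query time. The fake
# "solutions" tensor stands in for the output of the SDE solver.
import torch

T_MAX, GRID, N_PATHS = 100.0, 1_000, 2_000
time_grid = torch.linspace(0.0, T_MAX, GRID)
# Placeholder for pre-simulated I(t) paths on the fine grid; in practice
# this tensor is filled once by the SDE solver before training starts.
solutions = torch.cumsum(torch.randn(N_PATHS, GRID), dim=1)

def lookup(designs):
    """Return solutions at the measurement times in `designs` (shape [D])."""
    idx = torch.clamp((designs / T_MAX * (GRID - 1)).round().long(), 0, GRID - 1)
    return solutions[:, idx]                     # shape [N_PATHS, D]

d = torch.tensor([10.0, 35.0, 60.0])
print(lookup(d).shape)                           # torch.Size([2000, 3])
```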
### C.4 Data distributions
In Figure 17 we provide general information about data simulated from the SIR
and SEIR models. The top left plot shows the SIR model prior predictive
distributions of the number of susceptible individuals $S(t)$ and the number
of infectious individuals $I(t)$ as a function of time $t$. Similarly, the top
right plot shows the SEIR model prior predictive distributions of $S(t)$,
$I(t)$ and the number of exposed individuals $E(t)$. Recall, however, that we
only use the number of infectious individuals as data in our experiments. As
such, we compare $I(t)$ from the SIR and from the SEIR model more closely in
the bottom left plot. As can be seen, the number of infectious individuals for the SEIR model peaks at later times $t$. The bottom centre plot then shows the
average signal-to-noise ratio (SNR), computed by means of Equation B.13, for
the $I(t)$ response of the SIR and SEIR model. The average SNR for the SEIR
model is generally much higher than that of the SIR model, also peaking at
later measurement times. Lastly, the bottom right plot shows the Jensen-
Shannon divergence between the prior predictives of the SIR and SEIR model,
i.e. $\text{JS}(p(I_{1}(t)|t)\mid\mid p(I_{2}(t)|t))$, where $I_{1}(t)$ and
$I_{2}(t)$ are the number of infectious individuals for the SIR and SEIR
model, respectively. Interestingly, there are two peaks in the Jensen-Shannon
divergence, towards earlier measurement times and around $t=60$. The prior
predictive distributions are most similar where the means of the data
distributions cross, i.e. near $t=30$ (see the bottom left plot).
Figure 17: General summary of data simulated from the SDE-based epidemiology
models. The top row shows prior predictive distributions for the SIR model
(top left) and the SEIR model (top right). The bottom left figure shows the
number of infectious individuals for the SIR and SEIR models, while the bottom
centre plot shows the corresponding average signal-to-noise ratios (SNR). The
bottom right shows the Jensen-Shannon (JS) divergence between the prior
predictive distributions of both models.
### C.5 Additional results
We here present and discuss several additional results for the epidemiology
experiments in the main text.
Figure 18 shows the training curves of the parameter estimation (PE) task for
the SIR model with different experimental budgets $D$. For these experiments
we maximised the JSD lower bound, which is shown in the top row as a function
of training epochs. The elements of the experimental design vector are shown
in the middle row and show a convergence towards early measurement times. The
bottom row of Figure 18 shows evaluations of the NWJ lower bound using the
density ratio learned by maximising the JSD lower bound (as discussed in the
main text). These NWJ lower bound evaluations show large fluctuations, in
stark contrast to the smooth JSD lower bound curves. These fluctuations
presumably arise because of the exponential term in the NWJ lower bound (see
Equation 3.2 in the main text). Importantly, however, we only use the NWJ
bound as a means to get an estimate of the mutual information, and we do not
use it to update the neural network parameters or experimental designs. Thus, the training behaviour is not affected by the large fluctuations seen in the bottom row. Putting a cap on the exponential term in the NWJ lower bound may help reduce variance (as done in Song and Ermon, 2020) if that is required.
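For illustration, a capped NWJ evaluation could look as follows; the critic outputs and the cap value are placeholders.

```python
# Sketch: NWJ lower-bound evaluation, I >= E_p[T] - e^{-1} E_{pxp}[exp(T)],
# with an optional cap on the exponential term to reduce variance
# (cf. Song and Ermon, 2020). Critic outputs and cap value are placeholders.
import torch

def nwj_estimate(T_joint, T_marginal, cap=None):
    """T_joint: critic outputs on joint samples; T_marginal: on shuffled pairs."""
    if cap is not None:
        T_marginal = T_marginal.clamp(max=cap)   # caps exp(T) at exp(cap)
    return T_joint.mean() - torch.exp(T_marginal - 1.0).mean()

# Toy usage with random critic outputs:
t_j, t_m = torch.randn(1000) + 1.0, torch.randn(1000)
print(nwj_estimate(t_j, t_m), nwj_estimate(t_j, t_m, cap=5.0))
```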
Figure 18: PE training curves for the SDE-based SIR model, with different experimental budgets $D$. Shown are JSD lower bound values (top row), the elements of the experimental design vector (middle row) and NWJ lower bound evaluations (bottom row). Importantly, the JSD lower bound was used to update neural network parameters and designs, while the NWJ lower bound was purely used to obtain MI estimates.

Figure 19: MD training curves for the SDE-based SIR and SEIR models, with different experimental budgets $D$. Shown are JSD lower bound values (top row), the elements of the experimental design vector (middle row) and NWJ lower bound evaluations (bottom row). Importantly, the JSD lower bound was used to update neural network parameters and designs, while the NWJ lower bound was purely used to obtain MI estimates.
Similarly, Figure 19 shows the training curves of the model discrimination
(MD) task for the SDE-based SIR and SEIR models, with different experimental
budgets $D$, i.e. number of measurements. Unlike for the PE task, the NWJ
lower bound evaluations for the MD task do not have large fluctuations and are
quite smooth. Validation estimates of the mutual information for different
number of measurements are shown in Figure 20, as well as corresponding
optimal designs (that are also shown in the main text). We show corresponding
average posterior distributions for different ground truth models in Figure
21. Model recovery tends to improve as the number of measurements increases.
However, the model recovery of the SEIR model, i.e. the average posterior
probability
$\mathbb{E}[\widehat{p}(m=2|\mathbf{y}^{\ast},\mathbf{d}^{\ast},m_{\text{truth}}=2)]$,
is always worse than that of the SIR model, i.e. the average posterior
probability
$\mathbb{E}[\widehat{p}(m=1|\mathbf{y}^{\ast},\mathbf{d}^{\ast},m_{\text{truth}}=1)]$,
which are shown by the diagonal entries in Figure 21. This may be because the
optimal designs (shown in Figure 20) are all clustered towards earlier
measurement times, i.e. below $t=40$. Looking at the bottom left of Figure 17,
this is the region where the SIR model responses dominate over the SEIR model
responses.
Figure 20: MD results for the SDE-based SIR and SEIR models with different experimental budgets. The left plot shows validation MI estimates for the different numbers of measurements, alongside the corresponding optimal designs.
# Depth lower bounds in Stabbing Planes for combinatorial principles
Stefan Dantchev Nicola Galesi Abdul Ghani Barnaby Martin
###### Abstract
Stabbing Planes (also known as Branch and Cut) is a proof system introduced
very recently which, informally speaking, extends the
$\operatorname{\mathsf{DPLL}}$ method by branching on integer linear
inequalities instead of single variables. The techniques known so far to prove
size and depth lower bounds for Stabbing Planes are generalizations of those
used for the Cutting Planes proof system established via communication
complexity arguments. As such they work for the lifted version of
combinatorial statements. Rank lower bounds for Cutting Planes are also
obtained by geometric arguments called protection lemmas.
In this work we introduce two new geometric approaches to prove size/depth
lower bounds in Stabbing Planes working for any formula: (1) the antichain
method, relying on Sperner’s Theorem and (2) the covering method which uses
results on essential coverings of the boolean cube by linear polynomials,
which in turn relies on Alon’s Combinatorial Nullstellensatz.
We demonstrate their use on classes of combinatorial principles such as the
Pigeonhole principle, the Tseitin contradictions and the Linear Ordering
Principle. By the first method we prove almost linear size lower bounds and
optimal logarithmic depth lower bounds for the Pigeonhole principle and
analogous lower bounds for the Tseitin contradictions over the complete graph
and for the Linear Ordering Principle. By the covering method we obtain a
superlinear size lower bound and a logarithmic depth lower bound for Stabbing Planes proofs of Tseitin contradictions over a grid graph.
###### keywords:
proof complexity, computational complexity, lower bounds, cutting planes,
stabbing planes
## 1 Introduction
Finding a satisfying assignment for a propositional formula
($\operatorname{\mathsf{SAT}}$) is a central component for many
computationally hard problems. Despite being more than 50 years old and taking exponential time in the worst case, the $\operatorname{\mathsf{DPLL}}$ algorithm [7, 8, 20] is the core of essentially all high-performance modern $\operatorname{\mathsf{SAT}}$-solvers. $\operatorname{\mathsf{DPLL}}$ is a
recursive boolean method: at each call one variable $x$ of the formula
$\mathcal{F}$ is chosen and the search recursively branches into the two cases
obtained by setting $x$ respectively to $1$ and $0$ in $\mathcal{F}$. On
$\operatorname{\mathsf{UNSAT}}$ formulas $\operatorname{\mathsf{DPLL}}$ performs worst, and it is well known that the execution trace of the
$\operatorname{\mathsf{DPLL}}$ algorithm running on an unsatisfiable formula
$\mathcal{F}$ is nothing more than a treelike refutation of $\mathcal{F}$ in
the proof system of Resolution [20] ($\operatorname{\mathsf{Res}}$).
Since $\operatorname{\mathsf{SAT}}$ can be viewed as an optimization problem, the question of whether Integer Linear Programming ($\operatorname{\mathsf{ILP}}$) can be made feasible for satisfiability testing has received a lot of attention and is considered among the most challenging problems in local search [21, 12]. One proof system capturing $\operatorname{\mathsf{ILP}}$ approaches to $\operatorname{\mathsf{SAT}}$ is Cutting Planes, a system whose main rule implements the rounding (or Chvátal cut) approach to $\operatorname{\mathsf{ILP}}$. Cutting Planes works with
integer linear inequalities of the form $\mathbf{a}\mathbf{x}\leq b$, with
$\mathbf{a},b$ integers, and, like resolution, is a sound and complete
refutational proof system for $\operatorname{\mathsf{CNF}}$ formulas: indeed a
clause $C=(x_{1}\vee\ldots\vee x_{r}\vee\neg y_{1}\vee\ldots\vee\neg y_{s})$
can be written as the integer inequality $\mathbf{y}-\mathbf{x}\leq s-1$.
Beame et al. [2] extended the idea of $\operatorname{\mathsf{DPLL}}$ to a
more general proof strategy based on $\operatorname{\mathsf{ILP}}$. Instead of
branching only on a variable as in resolution, in this method one considers a
pair $(\mathbf{a},b)$, with $\mathbf{a}\in\mathbb{Z}^{n}$ and
$b\in\mathbb{Z}$, and branches limiting the search to the two half-planes:
$\mathbf{a}\mathbf{x}\leq b-1$ and $\mathbf{a}\mathbf{x}\geq b$. A path
terminates when the $\operatorname{\mathsf{LP}}$ defined by the inequalities
in $\mathcal{F}$ and those forming the path is infeasible. This method can be
made into a refutational treelike proof system for $\operatorname{\mathsf{UNSAT}}$ CNFs called Stabbing Planes ($\operatorname{\mathsf{SP}}$) [2], which turns out to be polynomially equivalent to the treelike version of
$\operatorname{\mathsf{Res}}(\operatorname{\mathsf{CP}})$, a proof system
introduced by Krajíček [14] where clauses are disjunction of linear
inequalities.
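To make the branching step concrete, the following sketch runs an $\operatorname{\mathsf{SP}}$-style search on a small system whose LP relaxation is feasible but which has no $0/1$ solution; the single-variable query-selection rule is a naive placeholder and not part of the proof system.

```python
# Sketch of SP-style branching on the 0/1-unsatisfiable system given by the
# clauses (x or y), (x or -y), (-x or y), (-x or -y), written as A x >= b.
# Its LP relaxation is feasible (x = y = 1/2), so at least one query is
# needed before every branch closes with an infeasible LP.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
b = np.array([1, 0, 0, -1], dtype=float)
n = A.shape[1]

def feasible(extra_A, extra_b):
    """Is {A x >= b} plus the extra inequalities feasible over [0, 1]^n?"""
    A_ub = np.vstack([-A, -np.array(extra_A, dtype=float).reshape(-1, n)])
    b_ub = np.concatenate([-b, -np.array(extra_b, dtype=float)])
    res = linprog(np.zeros(n), A_ub=A_ub, b_ub=b_ub, bounds=(0, 1))
    return res.status == 0                     # status 0: feasible LP

def refute(extra_A=(), extra_b=(), depth=0, var=0):
    if not feasible(list(extra_A), list(extra_b)):
        print("  " * depth + "leaf: infeasible LP")
        return
    a = np.eye(n)[var]                         # query (a, 1) on variable x_var
    print("  " * depth + f"query: x_{var} >= 1 or x_{var} <= 0 ?")
    refute(list(extra_A) + [a], list(extra_b) + [1.0], depth + 1, var + 1)
    refute(list(extra_A) + [-a], list(extra_b) + [0.0], depth + 1, var + 1)

refute()
```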
In this work we consider the complexity of proofs in
$\operatorname{\mathsf{SP}}$ focusing on the length, i.e. the number of
queries in the proof; the depth (also called rank in [2]), i.e. the length of
the longest path in the proof tree; and the size, i.e. the bit size of all the
coefficients appearing in the proof.
### 1.1 Previous works and motivations
After its introduction as a proof system in the work [2] by Beame, Fleming,
Impagliazzo, Kolokolova, Pankratov, Pitassi and Robere, Stabbing Planes
received great attention. The quasipolynomial upper bound for the size of
refuting Tseitin contradictions in $\operatorname{\mathsf{SP}}$ given in [2]
was surprisingly extended to $\operatorname{\mathsf{CP}}$ in the work of [6]
of Dadush and Tiwari refuting a long-standing conjecture. Recently in [9],
Fleming, Göös, Impagliazzo, Pitassi, Robere, Tan and Wigderson were further
developing the initial results proved in [2] making important progress on the
question whether all Stabbing Planes proofs can be somehow efficiently
simulated by Cutting Planes.
Significant lower bounds for size can be obtained in
$\operatorname{\mathsf{SP}}$, but in a limited way, using modern developments
of a technique for $\operatorname{\mathsf{CP}}$ based on communication
complexity of search problems introduced by Impagliazzo, Pitassi, Urquhart in
[11]: in [2] it is proven that size $S$ and depth $D$
$\operatorname{\mathsf{SP}}$ refutations imply treelike
$\operatorname{\mathsf{Res}}(\operatorname{\mathsf{CP}})$ proofs of size
$O(S)$ and width $O(D)$; Kojevnikov [13], improving the interpolation method
introduced for $\operatorname{\mathsf{Res}}(\operatorname{\mathsf{CP}})$ by
Krajíček [14], gave exponential lower bounds for treelike
$\operatorname{\mathsf{Res}}(\operatorname{\mathsf{CP}})$ when the width of
the clauses (i.e. the number of linear inequalities in a clause) is bounded by
$o(n/\log n)$. Hence these lower bounds are applicable only to very specific
classes of formulas (whose hardness comes from boolean circuit hardness) and
only to $\operatorname{\mathsf{SP}}$ refutations of low depth.
Nevertheless $\operatorname{\mathsf{SP}}$ appears to be a strong proof system.
Firstly notice that the condition terminating a path in a proof is not a
trivial contradiction like in resolution, but is the infeasibility of an
$\operatorname{\mathsf{LP}}$, which is only a polynomial time verifiable
condition. Hence even linear size $\operatorname{\mathsf{SP}}$ proofs might already form a strong class of $\operatorname{\mathsf{SP}}$ proofs, since they can hide polynomial growth in a single final node at which the terminating condition is verified.
#### Rank and depth in $\operatorname{\mathsf{CP}}$ and
$\operatorname{\mathsf{SP}}$
It is known that, contrary to the case of other proof systems like Frege, neither $\operatorname{\mathsf{CP}}$ nor $\operatorname{\mathsf{SP}}$ proofs can in general be balanced (see [2]): while a depth-$d$ proof can always be transformed into a proof of size $2^{O(d)}$, the converse transformation from size $S$ to depth $O(\log S)$ does not always hold. The depth of $\operatorname{\mathsf{CP}}$-proofs of a set of linear inequalities ${L}$ is measured by the Chvátal rank of the associated polytope $P$ (the minimal $d$ such that $P^{(d)}$ is empty, where $P^{(0)}$ is the polytope associated to $L$ and $P^{(i+1)}$ is the polytope defined by all inequalities which can be inferred from those in $P^{(i)}$ using one Chvátal cut). It is
known that rank in $\operatorname{\mathsf{CP}}$ and depth in
$\operatorname{\mathsf{SP}}$ are separated, in the sense that Tseitin principles can be proved in depth $O(\log^{2}n)$ in $\operatorname{\mathsf{SP}}$ [2], but are known to require rank $\Theta(n)$ to be refuted in $\operatorname{\mathsf{CP}}$ [3]. In this paper we further
develop the study of proof depth for $\operatorname{\mathsf{SP}}$.
Rank lower bound techniques for Cutting Planes are essentially of two types.
The main method is by reducing to the real communication complexity of a certain search problem [11]. As such this method only works for classes of formulas lifted by certain gadgets capturing specific boolean functions. A second class of methods, which lower bound the rank measure of a polytope, has been developed for Cutting Planes. In this setting, lower bounds are typically proven
using a geometric method called protection lemmas [3]. These methods were
recently extended in [9] also to the case of Semantic Cutting Planes. In
principle this geometric method can be applied to any formula, not only to the lifted ones; furthermore, for many formulas (such as the Tseitin formulas)
it is known how to achieve $\Omega(n)$ rank lower bounds in
$\operatorname{\mathsf{CP}}$ via protection lemmas, while proving even
$\omega(\log n)$ lower bounds via real communication complexity is impossible,
due to a known folklore upper bound.
Lower bounds for depth in Stabbing Planes, proved in [2], are instead obtained
only as a consequence of the real communication approach extended to Stabbing
Planes. In this paper we introduce two geometric approaches to prove depth
lower bounds in $\operatorname{\mathsf{SP}}$.
Specifically the results we know at present relating
$\operatorname{\mathsf{SP}}$ and $\operatorname{\mathsf{CP}}$ are:
1. 1.
$\operatorname{\mathsf{SP}}$ polynomially simulates
$\operatorname{\mathsf{CP}}$ (Theorem 4.5 in [2]). Hence in particular the
$\operatorname{\mathsf{PHP}}^{m}_{n}$ can be refuted in
$\operatorname{\mathsf{SP}}$ by a proof of size $O(n^{2})$ ([5]). Furthermore it can be refuted by an $O(\log n)$ depth proof, since polynomial size $\operatorname{\mathsf{CP}}$ proofs, by Theorem 4.4 in [2], can be balanced in $\operatorname{\mathsf{SP}}$. (Another way of proving this result is to use Theorem 4.8 in [2], stating that if there are length $L$ and space $S$ $\operatorname{\mathsf{CP}}$ refutations of a set of linear integral inequalities, then there are depth $O(S\log L)$ $\operatorname{\mathsf{SP}}$ refutations of the same set; and then use the result in [10] (Theorem 5.1) that $\operatorname{\mathsf{PHP}}^{m}_{n}$ has polynomial length and constant space $\operatorname{\mathsf{CP}}$ refutations.)
2. 2.
Beame et al. in [2] proved the surprising result that the class of Tseitin
contradictions $\operatorname{\mathsf{Ts}}(G,\omega)$ over any graph $G$ of
maximum degree $D$, with an odd charging $\omega$, can be refuted in
$\operatorname{\mathsf{SP}}$ in size quasipolynomial in $|G|$ and depth
$O(\log^{2}|G|+D)$.
Depth lower bounds for $\operatorname{\mathsf{SP}}$ are proved in [2]:
1. 1.
a $\Omega(n/\log^{2}n)$ lower bound for the formula
$\operatorname{\mathsf{Ts}}(G,w)\circ\operatorname{\mathsf{VER}}^{n}$,
composing $\operatorname{\mathsf{Ts}}(G,\omega)$ (over an expander graph $G$)
with the gadget function $\operatorname{\mathsf{VER}}^{n}$ (see Theorem 5.7 in
[2] for details); and
2. 2.
a $\Omega(\sqrt{n\log n})$ lower bound for the formula
$\operatorname{\mathsf{Peb}}(G)\circ\operatorname{\mathsf{IND}}^{n}_{l}$ over
$n^{5}+n\log n$ variables obtained by lifting a pebbling formula
$\operatorname{\mathsf{Peb}}(G)$ over a graph with high pebbling number, with
a pointer function gadget $\operatorname{\mathsf{IND}}^{n}_{l}$ (see Theorem 5.5 in [2] for details).
Similar to size, these depth lower bounds are applicable only to very specific
classes of formulas. In fact they are obtained by extending to
$\operatorname{\mathsf{SP}}$ the technique introduced in [11, 15] for
$\operatorname{\mathsf{CP}}$ of reducing shallow proofs of a formula
$\mathcal{F}$ to efficient real communication protocols computing a related
search problem and then proving that such efficient protocols cannot exist.
Despite the fact that $\operatorname{\mathsf{SP}}$ is at least as strong as
$\operatorname{\mathsf{CP}}$, in $\operatorname{\mathsf{SP}}$ the known lower
bound techniques are derived from those of treelike
$\operatorname{\mathsf{CP}}$. Hence finding other techniques to prove depth
and size lower bounds for $\operatorname{\mathsf{SP}}$ is important to
understand its proof strength. For instance, unlike
$\operatorname{\mathsf{CP}}$ where we know tight $\Theta(\log n)$ rank bounds for the $\operatorname{\mathsf{PHP}}^{m}_{n}$ [3, 19] and $\Omega(n)$ rank
bounds for Tseitin contradictions [3], for $\operatorname{\mathsf{SP}}$ no
depth lower bound is at present known for purely combinatorial statements.
In this work we address such problems.
### 1.2 Contributions and techniques
The main motivation of this work was to study size and depth lower bounds in
$\operatorname{\mathsf{SP}}$ through new methods, possibly geometric.
Differently from weaker systems like Resolution, apart from the technique highlighted above, based on reducing to the communication complexity of search problems, we do not know of methods to prove size and depth lower bounds in $\operatorname{\mathsf{SP}}$. In $\operatorname{\mathsf{CP}}$ and
Semantic $\operatorname{\mathsf{CP}}$ instead geometrical methods based on
protection lemmas were used to prove rank lower bounds in [3, 9].
Our first steps in this direction were to set up methods working for truly
combinatorial statements, like $\operatorname{\mathsf{Ts}}(G,w)$ or
$\operatorname{\mathsf{PHP}}^{m}_{n}$, which we know to be efficiently
provable in $\operatorname{\mathsf{SP}}$, but on which we cannot use methods
reducing to the complexity of boolean functions, like the ones based on
communication complexity.
We present two new methods for proving depth lower bounds in
$\operatorname{\mathsf{SP}}$ which in fact are the consequence of proving
length lower bounds that do not depend on the bit-size of the coefficients.
As applications of our two methods we respectively prove:
1. 1.
An exponential separation between the rank in $\operatorname{\mathsf{CP}}$ and
the depth in $\operatorname{\mathsf{SP}}$, using a new counting principle
which we introduce and that we call the Simple Pigeonhole Principle
$\operatorname{\mathsf{SPHP}}$. We prove that $\operatorname{\mathsf{SPHP}}$
has $O(1)$ rank in $\operatorname{\mathsf{CP}}$ and requires $\Omega(\log n)$
depth in $\operatorname{\mathsf{SP}}$. Together with the results proving that
Tseitin formulas requires $\Omega(n)$ rank lower bounds in
$\operatorname{\mathsf{CP}}$ ([3]) and $O(\log^{2}n)$ upper bounds for the
depth in $\operatorname{\mathsf{SP}}$ ([2]), this proves an incomparability
between the two measures.
2. 2.
An almost linear lower bound on the size of $\operatorname{\mathsf{SP}}$ proofs of the $\operatorname{\mathsf{PHP}}^{m}_{n}$ and of the Tseitin contradictions $\operatorname{\mathsf{Ts}}(G,\omega)$ over the complete graph.
These lower bounds immediately give optimal $\Omega(\log n)$ lower bounds for the depth of $\operatorname{\mathsf{SP}}$ proofs of the corresponding principles.
3. 3.
A superlinear lower bound for the size of $\operatorname{\mathsf{SP}}$ proofs
of $\operatorname{\mathsf{Ts}}(G,\omega)$, when $G$ is a $n\times n$ grid
graph $H_{n}$. In turn this implies an $\Omega(\log n)$ lower bound for the
depth of $\operatorname{\mathsf{SP}}$ proofs of
$\operatorname{\mathsf{Ts}}(H_{n},\omega)$. Proofs of depth $O(\log^{2}n)$ for
$\operatorname{\mathsf{Ts}}(H_{n},\omega)$ are given in [2].
4. 4.
Finally we prove a linear lower bound on the size and an $\Omega(\log n)$ lower bound on the depth of $\operatorname{\mathsf{SP}}$ proofs of the Linear Ordering Principle $\operatorname{\mathsf{LOP}}$.
Our results are derived from the following initial geometrical observation:
let $\mathbb{S}$ be a space of admissible points in $\\{0,1,1/2\\}^{n}$
satisfying a given unsatisfiable system of integer linear inequalities
$\mathcal{F}(x_{1},\ldots,x_{n})$. In a $\operatorname{\mathsf{SP}}$ proof for
$\mathcal{F}$, at each branch $Q=(\mathbf{a},b)$ the set of points in $\operatorname{\mathsf{slab}}(Q)=\\{\mathbf{s}\in\mathbb{S}:b-1<\mathbf{a}\mathbf{s}<b\\}$ does not survive in $\mathbb{S}$. At the end of the proof on the leaves, where
we have infeasible $\operatorname{\mathsf{LP}}$’s, no point in $\mathbb{S}$
can survive the proof. So it is sufficient to find conditions such that, under
the assumption that a proof of $\mathcal{F}$ is “small”, even one point of
$\mathbb{S}$ survives the proof. In pursuing this approach we use two methods.
The antichain method. Here we use a well-known bound based on Sperner’s
Theorem [4, 23] to upper bound the number of points in the slabs where the set
of non-zero coefficients is sufficiently large. Trading between the number of
such slabs and the number of points ruled out from the space $\mathbb{S}$ of
admissible points, we obtain the lower bound.
We initially present the method and the $\Omega(\log n)$ lower bound on a set of unsatisfiable integer linear inequalities, the Simple Pigeonhole Principle ($\operatorname{\mathsf{SPHP}}$), capturing the core of the counting argument used to prove the $\operatorname{\mathsf{PHP}}$ efficiently in $\operatorname{\mathsf{CP}}$. Since
$\operatorname{\mathsf{SPHP}}_{n}$ has rank $1$ $\operatorname{\mathsf{CP}}$
proofs, it entails a strong separation between $\operatorname{\mathsf{CP}}$
rank and $\operatorname{\mathsf{SP}}$ depth. We then apply the method to
$\operatorname{\mathsf{PHP}}^{m}_{n}$ and to
$\operatorname{\mathsf{Ts}}(K_{n},\omega)$.
The covering method. The antichain method appears too weak to prove size and
depth lower bounds on $\operatorname{\mathsf{Ts}}(G,w)$, when $G$ is for
example a grid or a pyramid. To solve this case, we consider another approach
that we call the covering method: we reduce the problem of proving that one
point in $\mathbb{S}$ survives from all the $\operatorname{\mathsf{slab}}(Q)$
in a small proof of $\mathcal{F}$, to the problem that a set of polynomials
which essentially covers the boolean cube $\\{0,1\\}^{n}$ requires at least
$\sqrt{n}$ polynomials, which is a well-known problem faced by Alon and Füredi
in [1] and by Linial and Radhakrishnan in [16]. For this reduction to work we
have to find a high dimensional projection of $\mathbb{S}$ covering the
boolean cube and defined on variables effectively appearing in the proof. We
prove that cycles of distance at least 2 in $G$ work properly to this aim on
$\operatorname{\mathsf{Ts}}(G,\omega)$. Since the grid $H_{n}$ has many such
cycles, we can obtain the lower bound on
$\operatorname{\mathsf{Ts}}(H_{n},\omega)$. The use of Linial and
Radhakrishnan’s result is not new in proof complexity. Part and Tzameret in [18], independently of us, used this result in a completely different way, in the proof system $\operatorname{\mathsf{Res}}(\oplus)$ handling clauses over parity equations, without relying on integer linear inequalities and geometrical reasoning.
We remark that while we were writing this version of the paper, Yehuda and Yehudayoff in [24] slightly improved the results of [16], with the consequence, also noticed in their paper, that our size lower bound for $\operatorname{\mathsf{Ts}}(G,\omega)$ over a grid graph is in fact superlinear.
The paper is organized as follows: we give the preliminary definitions in the next section and then move to two further sections, one on the lower bounds by the antichain method and the other on the lower bounds by the covering method. The antichain method is presented on the formulas $\operatorname{\mathsf{SPHP}}$; the lower bounds for $\operatorname{\mathsf{PHP}}^{m}_{n}$, for the Tseitin formulas and for the Linear Ordering Principle are moved to the Appendix.
## 2 Preliminaries
We use $[n]$ for the set $\\{1,2,\ldots,n\\}$, $\mathbb{Z}/2$ for
$\mathbb{Z}\cup(\mathbb{Z}+\frac{1}{2})$ and $\mathbb{Z}^{+}$ for
$\\{1,2,\ldots\\}$.
### 2.1 Proof systems
Here we recall the definition of the Stabbing Planes proof system from [2].
###### Definition 2.1.
A _linear integer inequality_ in the variables $x_{1},\ldots,x_{n}$ is an
expression of the form $\sum_{i=1}^{n}a_{i}x_{i}\geq b$, where each $a_{i}$
and $b$ are integral. A set of such inequalities is said to be unsatisfiable
if there are no $0/1$ assignments to the $x$ variables satisfying each
inequality simultaneously.
Note that we reserve the term infeasible, in contrast to unsatisfiable, for
(real or rational) linear programs.
###### Definition 2.2.
Fix some variables $x_{1},\ldots,x_{n}$. A _Stabbing Planes (
$\operatorname{\mathsf{SP}}$)_ proof of a set of integer linear inequalities
$\mathcal{F}$ is a binary tree ${\cal T}$, with each node labeled with a
_query_ $(\mathbf{a},b)$ with $\mathbf{a}\in\mathbb{Z}^{n},b\in\mathbb{Z}$.
Out of each node one edge is labeled with $\mathbf{a}\mathbf{x}\geq b$ and the other with its integer negation $\mathbf{a}\mathbf{x}\leq b-1$. Each leaf
$\ell$ is labeled with a $\operatorname{\mathsf{LP}}$ system $P_{\ell}$ made
by a nonnegative linear combination of inequalities from $\mathcal{F}$ and the
inequalities labelling the edges on the path from the root of ${\cal T}$ to
the leaf $\ell$.
If $\mathcal{F}$ is an unsatisfiable set of integer linear inequalities,
${\cal T}$ is a _Stabbing Planes ( $\operatorname{\mathsf{SP}}$)_ refutation
of $\mathcal{F}$ if all the $\operatorname{\mathsf{LP}}$’s $P_{\ell}$ on the
leaves of ${\cal T}$ are infeasible.
###### Definition 2.3.
The _slab_ corresponding to a query $Q=(\mathbf{a},b)$ is the set
$\operatorname{\mathsf{slab}}(Q)=\\{\mathbf{x}\in\mathbb{R}^{n}:b-1<\mathbf{a}\mathbf{x}<b\\}$
satisfying neither of the associated inequalities.
Since each leaf in a $\operatorname{\mathsf{SP}}$ refutation is labelled by an
infeasible $\operatorname{\mathsf{LP}}$, throughout this paper we will
actually use the following geometric observation on $\operatorname{\mathsf{SP}}$ proofs ${\cal T}$: every point in $\mathbb{R}^{n}$ which satisfies a set of integer linear inequalities $\mathcal{F}$, and which we call a feasible point for $\mathcal{F}$, must be ruled out by a query somewhere in ${\cal T}$.
###### Fact 1.
The slabs associated with a $\operatorname{\mathsf{SP}}$ refutation must cover
the feasible points of $\mathcal{F}$. That is, for a refutation ${\cal T}$ of $\mathcal{F}$,
$\\{\mathbf{y}\in\mathbb{R}^{n}:\mathbf{a}\mathbf{y}\geq b\text{ for all }(\mathbf{a},b)\in\mathcal{F}\\}\subseteq\bigcup_{(\mathbf{a},b)\in{\cal T}}\\{\mathbf{x}\in\mathbb{R}^{n}:b-1<\mathbf{a}\mathbf{x}<b\\}$
The length of a $\operatorname{\mathsf{SP}}$ refutation is the number of
queries in the proof tree. The depth of a $\operatorname{\mathsf{SP}}$
refutation ${\cal T}$ is the longest root-to-leaf path in ${\cal T}$. The size
(respectively depth) of refuting $\mathcal{F}$ in $\operatorname{\mathsf{SP}}$
is the minimum size (respectively depth) over all $\operatorname{\mathsf{SP}}$
refutations of $\mathcal{F}$. We call bit-size of a
$\operatorname{\mathsf{SP}}$ refutation ${\cal T}$ the total number of bits
needed to represent every inequality in the refutation.
###### Definition 2.4 ([5]).
The _Cutting Planes (CP)_ proof system is equipped with boolean axioms and two
inference rules:
$\begin{array}[]{c|c|c}\mbox{Boolean Axioms}&\mbox{Linear
Combination}&\mbox{Rounding}\\\ \frac{\quad\quad}{x\geq
0}\quad\frac{\quad\quad}{-x\geq-1}&\frac{\mathbf{a}\mathbf{x}\geq
c\quad\quad\mathbf{b}\mathbf{x}\geq
d}{\alpha\mathbf{a}\mathbf{x}+\beta\mathbf{b}\mathbf{x}\geq\alpha c+\beta
d}&\frac{\alpha\mathbf{a}\mathbf{x}\geq b}{\mathbf{a}\mathbf{x}\geq\lceil
b/\alpha\rceil}\end{array}$
where $\alpha,\beta,b\in\mathbb{Z}^{+}$ and
$\mathbf{a},\mathbf{b}\in\mathbb{Z}^{n}$. A CP refutation of some
unsatisfiable set of integer linear inequalities is a derivation of $0\geq 1$
by the aforementioned inference rules from the inequalities in $\cal F$.
A $\operatorname{\mathsf{CP}}$ refutation is treelike if the directed acyclic
graph underlying the proof is a tree. The length of a
$\operatorname{\mathsf{CP}}$ refutation is the number of inequalities in the
sequence. The depth is the length of the longest path from the root to a leaf
(sink) in the graph. The rank of a $\operatorname{\mathsf{CP}}$ proof is the
maximal number of rounding rules used in a path of the proof graph. The size
of a $\operatorname{\mathsf{CP}}$ refutation is the bit-size to represent all
the inequalities in the proof.
### 2.2 Restrictions
Let $V=\\{x_{1},\ldots,x_{n}\\}$ be a set of $n$ variables and let
$\mathbf{a}\mathbf{x}\leq b$ be a linear integer inequality. We say that a
variable $x_{i}$ appears in, or is mentioned by a query $Q=(\mathbf{a},b)$ if
$a_{i}\not=0$ and does not appear otherwise.
A restriction $\rho$ is a function $\rho:D\rightarrow\\{0,1\\}$, $D\subseteq
V$. A restriction acts on a half-plane $\mathbf{a}\mathbf{x}\leq b$ setting
the $x_{i}$’s according to $\rho$. Notice that the variables $x_{i}\in D$ do
not appear in the restricted half-plane.
By ${\cal T}\\!\\!\restriction_{\rho}$ we mean to apply the restriction $\rho$
to all the queries in a $\operatorname{\mathsf{SP}}$ proof ${\cal T}$. The
tree ${\cal T}\\!\\!\restriction_{\rho}$ defines a new
$\operatorname{\mathsf{SP}}$ proof: if some $Q\\!\\!\restriction_{\rho}$
reduces to $0\leq-b$, for some $b\geq 1$, then that node becomes a leaf in
${\cal T}\\!\\!\restriction_{\rho}$. Otherwise in ${\cal
T}\\!\\!\restriction_{\rho}$ we simply branch on $Q\\!\\!\restriction_{\rho}$.
Of course the solution space defined by the linear inequalities labelling a
path in ${\cal T}\\!\\!\restriction_{\rho}$ is a subset of the solution space
defined by the corresponding path in ${\cal T}$. Hence the leaves of ${\cal
T}\\!\\!\restriction_{\rho}$ define an infeasible
$\operatorname{\mathsf{LP}}$.
We work with linear integer inequalities which are a translation of families
of CNFs ${\cal F}$. Hence when we write ${\cal F}\\!\\!\restriction_{\rho}$ we
mean the applications of the restriction $\rho$ to the set of linear integer
inequalities defining ${\cal F}$.
## 3 The antichain method
This method is based on Sperner’s theorem. Using it we can prove depth lower
bounds in $\operatorname{\mathsf{SP}}$ for
$\operatorname{\mathsf{PHP}}^{m}_{n}$ and for Tseitin contradictions
$\operatorname{\mathsf{Ts}}(K_{n},\omega)$ over the complete graph. To
motivate and explain the main definitions, we use as an example a
simplification of the $\operatorname{\mathsf{PHP}}^{m}_{n}$, the Simplified
Pigeonhole principle $\operatorname{\mathsf{SPHP}}_{n}$, which has some
interest since (as we will show) it exponentially separates
$\operatorname{\mathsf{CP}}$ rank from $\operatorname{\mathsf{SP}}$ depth.
### 3.1 Simplified Pigeonhole Principle
As mentioned in the Introduction, the $\operatorname{\mathsf{SPHP}}_{n}$
intends to capture the core of the counting argument used to efficiently
refute the $\operatorname{\mathsf{PHP}}$ in $\operatorname{\mathsf{CP}}$.
###### Definition 3.1.
The $\operatorname{\mathsf{SPHP}}_{n}$ is the following unsatisfiable family
of inequalities:
$\begin{array}[]{l}\sum_{i=1}^{n}x_{i}\geq 2\\\ x_{i}+x_{j}\leq 1\qquad\text{
(for all $i\neq j\in[n]$)}\end{array}$
###### Lemma 3.2.
$\operatorname{\mathsf{SPHP}}_{n}$ has a rank $1$ $\operatorname{\mathsf{CP}}$
refutation, for $n\geq 3$.
###### Proof 3.3.
Let $S:=\sum_{i=1}^{n}x_{i}$ (so we have $S\geq 2$). We fix some $i\in[n]$ and
sum $x_{i}+x_{j}\leq 1$ over all $j\in[n]\setminus\\{i\\}$ to find
$S+(n-2)x_{i}\leq n-1$. We add this to $-S\leq-2$ to get
$x_{i}\leq\frac{n-3}{n-2}$
which becomes $x_{i}\leq 0$ after a single cut. We do this for every $i$ and find $S\leq 0$, a contradiction when combined with the axiom $S\geq 2$.
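As a mechanical sanity check (not part of the proof), one can brute-force the $0/1$-unsatisfiability of $\operatorname{\mathsf{SPHP}}_{n}$ for small $n$:

```python
# Sanity check: brute-force that SPHP_n, i.e. sum_i x_i >= 2 together with
# x_i + x_j <= 1 for all i != j, has no 0/1 solution, by enumerating all
# 2^n assignments for small n.
from itertools import combinations, product

def sphp_satisfied(x):
    return (sum(x) >= 2 and
            all(x[i] + x[j] <= 1 for i, j in combinations(range(len(x)), 2)))

for n in range(3, 13):
    assert not any(sphp_satisfied(x) for x in product([0, 1], repeat=n))
print("SPHP_n has no 0/1 solution for n = 3..12")
```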
It is easy to see that $\operatorname{\mathsf{SPHP}}_{n}$ has depth $O(\log
n)$ proofs in $\operatorname{\mathsf{SP}}$, either by a direct proof or
appealing to the polynomial size proofs in $\operatorname{\mathsf{CP}}$ of the
$\operatorname{\mathsf{PHP}}^{m}_{n}$ ([5]) and then using Theorem 4.4 in [2], informally stating that “$\operatorname{\mathsf{CP}}$ proofs can be balanced in $\operatorname{\mathsf{SP}}$”.
###### Corollary 3.4.
The $\operatorname{\mathsf{SPHP}}_{n}$ has $\operatorname{\mathsf{SP}}$
refutations of depth $O(\log n)$.
We will prove that this bound is tight.
### 3.2 Sperner’s Theorem
Let $\mathbf{a}\in\mathbb{R}^{n}$. The width $w(\mathbf{a})$ of $\mathbf{a}$
is the number of non-zero coordinates in $\mathbf{a}$. The width of a query
$(\mathbf{a},b)$ is $w(\mathbf{a})$, and the width of a
$\operatorname{\mathsf{SP}}$ refutation is the minimum width of its queries.
Let $n\in\mathbb{N}$. Fix $W\subseteq[0,1]\cap\mathbb{Q}^{+}$ of finite size
$k\geq 2$ and insist that $0\in W$. The $W$’s we work with in this paper are
$\\{0,1/2\\}$ and $\\{0,1/2,1\\}$.
###### Definition 3.5.
A $(n,W)$-word is an element in $W^{n}$.
We consider the following extension of Sperner’s theorem.
###### Theorem 3.6 ([17, 4]).
Fix any $t\geq 2,t\in\mathbb{N}$. For all $f\in\mathbb{N}$, with the pointwise
ordering of $[t]^{f}$, any antichain has size at most
$t^{f}\sqrt{\frac{6}{\pi(t^{2}-1)f}}(1+o(1)).$
We will use the simplified bound that any antichain ${\cal A}$ has size
$|{\cal A}|\leq\frac{t^{f}}{\sqrt{f}}$.
###### Lemma 3.7.
Let $\mathbf{a}\in\mathbb{Z}^{n}$ and $|W|=k\geq 2$. The number of
$(n,W)$-words $\mathbf{s}$ such that $\mathbf{a}\mathbf{s}=b$, where
$b\in\mathbb{Q}$, is at most $\frac{k^{n}}{\sqrt{w(\mathbf{a})}}$.
###### Proof 3.8.
Define $I_{\mathbf{a}}=\\{i\in[n]:a_{i}\not=0\\}$. Let $\preceq$ be the
partial order over $W^{I_{\mathbf{a}}}$ where $\mathbf{x}\preceq\mathbf{y}$ if
$x_{i}\leq y_{i}$ for all $i$ with $a_{i}>0$ and $x_{i}\geq y_{i}$ for the
remaining $i$ with $a_{i}<0$. Clearly the set of solutions to
$\mathbf{a}\mathbf{s}=b$ forms an antichain under $\preceq$. Noting that
$\preceq$ is isomorphic to the typical pointwise ordering on
$W^{I_{\mathbf{a}}}$, we appeal to Theorem 3.6 to upper bound the number of
solutions in $W^{I_{\mathbf{a}}}$ by
$\frac{k^{w(\mathbf{a})}}{\sqrt{w(\mathbf{a})}}$, each of which corresponds to
at most $k^{n-w(\mathbf{a})}$ vectors in $W^{n}$.
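As an illustration of Lemma 3.7, the simplified bound can be checked exhaustively on small instances; the choices of $n$, $\mathbf{a}$ and $b$ below are arbitrary.

```python
# Empirical check of Lemma 3.7 for W = {0, 1/2} (so k = 2): count the
# (n, W)-words s with a.s = b for random integral a, and compare the count
# against k^n / sqrt(w(a)). Exhaustive over 2^n words, so n is kept small.
from fractions import Fraction
from itertools import product
import math, random

random.seed(0)
n, W = 12, (Fraction(0), Fraction(1, 2))
k = len(W)

for trial in range(5):
    a = [random.randint(-3, 3) for _ in range(n)]
    w = sum(1 for ai in a if ai != 0)          # width w(a)
    if w == 0:
        continue
    b = Fraction(1, 2)                         # an arbitrary right-hand side
    count = sum(1 for s in product(W, repeat=n)
                if sum(ai * si for ai, si in zip(a, s)) == b)
    bound = k ** n / math.sqrt(w)
    assert count <= bound
    print(f"w(a) = {w:2d}: {count:5d} solutions <= bound {bound:8.1f}")
```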
### 3.3 Large admissibility
A $(n,W)$-word $s$ is admissible for an unsatisfiable set of integer linear
inequalities ${\cal F}$ over $n$ variables if $s$ satisfies all constraints of
${\cal F}$. A set of $(n,W)$-words is admissible for ${\cal F}$ if all its
elements are admissible. ${\cal A}({\cal F},W)$ is the set of all admissible
$(n,W)$-words for ${\cal F}$.
The interesting sets $W$ for an unsatisfiable set of integer linear
inequalities ${\cal F}$ are those such that almost all $(n,W)$-words are
admissible for ${\cal F}$. We will apply our method on sets of integer linear
inequalities which are a translation of unsatisfiable CNF’s generated over a
given domain. Typically these formulas on a size $n$ domain have a number of
variables which is not exactly $n$ but a function of $n$, $\nu(n)\geq n$.
Hence for the rest of this section we consider ${\mathscr{F}}:=\\{{\cal
F}_{n}\\}_{n\in\mathbb{N}}$ as a family of sets of unsatisfiable integer
linear inequalities, where ${\cal F}_{n}$ has $\nu(n)\geq n$ variables. We
call ${\mathscr{F}}$ an unsatisfiable family.
Consider then the following definition (recalling that we denote $k=|W|$):
###### Definition 3.9.
${\mathscr{F}}$ is almost full if $|{\cal A}({\cal F}_{n},W)|\geq
k^{\nu(n)}-o(k^{\nu(n)})$.
Notice that, because of the $o$ notation, Definition 3.9 might be not
necessarily true for all $n\in\mathbb{N}$, but only starting from some
$n_{{\mathscr{F}}}$.
###### Definition 3.10.
Given some almost full family ${\mathscr{F}}$ (over $\nu(n)$ variables) we let
$n_{{\mathscr{F}}}$ be the natural number with
$\frac{k^{\nu(n)}}{|{\cal A}({\cal F}_{n},W)|}\leq 2\text{\quad for all
\quad}n\geq n_{{\mathscr{F}}}.$
As an example we prove $\operatorname{\mathsf{SPHP}}$ is almost full (notice
that in the case of $\operatorname{\mathsf{SPHP}}_{n}$, $\nu(n)=n$).
###### Lemma 3.11.
$\operatorname{\mathsf{SPHP}}_{n}$ is almost full.
###### Proof 3.12.
Fix $W=\\{0,1/2\\}$ so that $k=|W|=2$. Let $U$ be the set of all $(n,W)$-words
with at least four coordinates set to $1/2$. $U$ is admissible for
$\operatorname{\mathsf{SPHP}}_{n}$: the inequalities $x_{i}+x_{j}\leq 1$ are
always satisfied by any values in $W$, and the inequality
$x_{1}+\ldots+x_{n}\geq 2$ is satisfied by all points of $U$, since they
contain at least four $1/2$s. By a simple counting argument $U$ contains at
least $2^{n}-4n^{3}=2^{n}-o(2^{n})$ admissible $(n,W)$-words. Hence the claim.
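Spelling out the counting step (our arithmetic): the words excluded from $U$
have at most three coordinates equal to $1/2$, and their number is
$\sum_{i=0}^{3}\binom{n}{i}\leq 1+n+\frac{n^{2}}{2}+\frac{n^{3}}{6}\leq 4n^{3}\quad\text{for }n\geq 1,$
whence $|U|\geq 2^{n}-4n^{3}$.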
###### Lemma 3.13.
Let ${\mathscr{F}}=\\{{\cal F}_{n}\\}_{n\in\mathbb{N}}$ be an almost full
unsatisfiable family, where ${\cal F}_{n}$ has $\nu(n)$ variables. Further let
${\cal T}$ be a $\operatorname{\mathsf{SP}}$ refutation of ${\cal F}_{n}$ of
minimal width $w$. If $n\geq n_{{\mathscr{F}}}$ then $|{\cal
T}|=\Omega(\sqrt{w})$.
###### Proof 3.14.
We estimate at what rate the slabs of the queries in ${\cal T}$ rule out
admissible points. Let $\ell$ be the least common multiple of the
denominators in $W$. Every $(\nu(n),W)$-word $x$ falling in the slab of some
query $(\mathbf{a},b)$ satisfies one of the $\ell-1$ equations
$\mathbf{a}x=b+i/\ell$, $1\leq i<\ell$ (as $\mathbf{a}$ is integral). Note
that as $|W|$ is a constant independent of $n$, so is $\ell$.
Since all the queries in ${\cal T}$ have width at least $w$, according to
Lemma 3.7, each query in ${\cal T}$ rules out at most
$\ell\cdot\frac{k^{\nu(n)}}{\sqrt{w}}$ admissible points. By Fact 1 no point
survives at the leaves, in particular the admissible points. Then it must be
that
$|{\cal T}|\ell\cdot\frac{k^{\nu(n)}}{\sqrt{w}}\geq|{\cal A}({\cal
F}_{n},W)|\text{ \quad which means \quad}|{\cal
T}|\ell\cdot\frac{k^{\nu(n)}}{|{\cal A}({\cal F}_{n},W)|}\geq\sqrt{w}$
We finish by noting that, by the assumption $n\geq n_{{\mathscr{F}}}$ and
Definition 3.10, we have $2\geq\frac{k^{\nu(n)}}{|{\cal A}({\cal
F}_{n},W)|}$, so $|{\cal T}|\geq\sqrt{w}/(2\ell)\in\Omega(\sqrt{w})$.
### 3.4 Main theorem
We focus on restrictions $\rho$ which, when applied to a member of an
unsatisfiable family ${\mathscr{F}}=\\{{\cal F}_{n}\\}_{n\in\mathbb{N}}$,
reduce it to another set in the same family.
###### Definition 3.15.
Let ${\mathscr{F}}=\\{{\cal F}_{n}\\}_{n\in\mathbb{N}}$ be an unsatisfiable
family and $c$ a positive constant. ${\mathscr{F}}$ is $c$-self-reducible if
for any set $V$ of variables, with $|V|=v<n/c$, there is a restriction $\rho$
with domain $V^{\prime}\supseteq V$, such that ${\cal
F}_{n}\\!\\!\restriction_{\rho}={\cal F}_{n-cv}$ (up to renaming of
variables).
Let us motivate the definition with an example.
###### Lemma 3.16.
$\operatorname{\mathsf{SPHP}}_{n}$ is $1$-self-reducible.
###### Proof 3.17.
Whatever set of variables $x_{i}$, $i\in I\subset[n]$, we consider, it is
sufficient to set each $x_{i}$ to $0$ to fulfil Definition 3.15.
###### Theorem 3.18.
Let ${\mathscr{F}}:=\\{{\cal F}_{n}\\}_{n\in\mathbb{N}}$ be an unsatisfiable
family of sets of integer linear inequalities which is almost full and
$c$-self-reducible. If ${\cal F}_{n}$ defines a feasible
$\operatorname{\mathsf{LP}}$ whenever $n>n_{{\mathscr{F}}}$, then for $n$
large enough, the shortest $\operatorname{\mathsf{SP}}$ proof of ${\cal
F}_{n}$ has length $\Omega(\sqrt[4]{n})$.
###### Proof 3.19.
Take any $\operatorname{\mathsf{SP}}$ proof $\mathcal{T}$ refuting ${\cal
F}_{n}$ and fix $t=\sqrt[4]{n}$.
The proof proceeds by stages $i\geq 0$, where ${\cal T}_{0}={\cal T}$. The
stages go on while the invariant property
$n-ict^{3}>\max\\{n_{{\mathscr{F}}},n(1-1/c)\\}$
holds (at stage $0$ it is true since $n>n_{{\mathscr{F}}}$ and $c$ is a
positive constant).
At the stage $i$ we let $\Sigma_{i}=\\{(\mathbf{a},b)\in{\cal
T}_{i}:w(\mathbf{a})\leq t^{2}\\}$ and $s_{i}=|\Sigma_{i}|$. If $s_{i}\geq t$
the claim is trivially proven. If $s_{i}=0$, then all queries in ${\cal
T}_{i}$ have width at least $t^{2}$ and by Lemma 3.13 (which can be applied
since $n-ict^{3}>n_{{\mathscr{F}}}$) the claim is proven (for $n$ large
enough).
So assume that $0<s_{i}<t$. Each of the queries in $\Sigma_{i}$ involves at
most $t^{2}$ nonzero coefficients, hence in total they mention at most
$s_{i}t^{2}\leq t^{3}$ variables. Extend this set of variables to some
$V^{\prime}$ in accordance with Definition 3.15 (which can be done since, by
the invariant, $ict^{3}<n/c$). Set all these variables according to self-
reducibility of ${\cal F}$ in a restriction $\rho_{i}$ and define ${\cal
T}_{i+1}={\cal T}_{i}\\!\\!\restriction\\!\\!_{\rho_{i}}$. Note that by
Definition 3.15 and by that of restriction, ${\cal T}_{i+1}$ is a
$\operatorname{\mathsf{SP}}$ refutation of ${\cal F}_{n-ict^{3}}$ and we can
go on with the next stage. (Also note that we do not hit an empty refutation
this way, due to the assumption that ${\cal F}_{n}$ defines a feasible LP.)
Now assume that the invariant fails. If this is because
$n-ict^{3}<n_{{\mathscr{F}}}$ then, as each iteration destroys at least one
node,
$|{\cal T}|\geq i>\frac{n-n_{{\mathscr{F}}}}{ct^{3}}\in\Omega(n^{1/4}).$
If instead it is because $n-ict^{3}<n-n/c$, then again for the same reason it
holds that
$|{\cal T}|\geq i>\frac{n}{c^{2}n^{3/4}}\in\Omega(n^{1/4}).$
Using Lemmas 3.11 and 3.16 and the previous Theorem we get:
###### Corollary 3.20.
The length of any $\operatorname{\mathsf{SP}}$ refutation of
$\operatorname{\mathsf{SPHP}}_{n}$ is $\Omega(\sqrt[4]{n})$. Hence the minimal
depth is $\Omega(\log n)$.
### 3.5 Lower bounds for the Pigeonhole principle
###### Definition 3.21.
The _Pigeonhole Principle_ $\operatorname{\mathsf{PHP}}^{m}_{n}$, $m(n)>n$, is
the family of unsatisfiable integer linear inequalities defined over the
variables $\\{P_{i,j}:i\in[m],j\in[n]\\}$ consisting of the following
inequalities:
$\begin{array}[]{ll}\sum_{j=1}^{n}P_{i,j}\geq 1\quad\forall
i\in[m]&\text{(every pigeon goes into some hole)}\\\ P_{i,k}+P_{j,k}\leq
1\quad\forall k\in[n],i\neq j\in[m]&\text{(at most one pigeon enters any given
hole)}\end{array}$
We present a lower bound for $\operatorname{\mathsf{PHP}}^{m}_{n}$ closely
following that for $\operatorname{\mathsf{SPHP}}_{n}$; we largely ignore the
distinction between different pigeons (which makes the principle behave much
like $\operatorname{\mathsf{SPHP}}_{n}$).
In this subsection we fix $W=\\{0,1/2\\}$ and, for the sake of brevity, refer
to $(n,W)$-words as _biwords_. We also fix $m=n+d$, for an arbitrary fixed
integer $d\geq 1$.
###### Lemma 3.22.
$\operatorname{\mathsf{PHP}}^{n+d}_{n}$ is almost full.
###### Proof 3.23.
We show that there are at least $2^{mn-1}$ admissible biwords (for
sufficiently large $n$); in fact the count below is $2^{mn}(1-o(1))$, as
almost fullness requires. For each pigeon $i$, any valuation of its hole
variables with at least two coordinates set to $1/2$ is admissible (the hole
axioms hold for any values in $\\{0,1/2\\}$), and there are at least
$2^{n}-(n+1)$ such valuations. Since the pigeons are independent, we obtain at
least $(2^{n}-(n+1))^{m}$ biwords. Now this is
$2^{mn}\left(1-\frac{n+1}{2^{n}}\right)^{m}$ where
$\left(1-\frac{n+1}{2^{n}}\right)^{m}\sim e^{\frac{-(n+1)m}{2^{n}}}$, whence
$\left(1-\frac{n+1}{2^{n}}\right)^{m}\geq e^{\frac{-(n+2)m}{2^{n}}}$ for
sufficiently large $n$. It follows that there is a constant $c$ such that
$2^{mn}\left(1-\frac{n+1}{2^{n}}\right)^{m}\geq
2^{mn-\frac{c(n+2)m}{2^{n}}}\geq 2^{mn-1}$
for sufficiently large $n$; since the exponent correction
$\frac{c(n+2)m}{2^{n}}$ tends to $0$, the number of admissible biwords is
indeed $2^{mn}-o(2^{mn})$.
###### Lemma 3.24.
$\operatorname{\mathsf{PHP}}^{n+d}_{n}$ is $1$-self-reducible.
###### Proof 3.25.
We are given some set $I$ of variables from
$\operatorname{\mathsf{PHP}}^{n+d}_{n}$. These variables mention some set of
holes $H:=\\{j:P_{i,j}\in I\text{ for some }i\\}$ and similarly a set of
pigeons $P$. Each of $P$, $H$ has size at most $|I|$, and we extend both
arbitrarily to have size exactly $|I|$. Our restriction matches the pigeons in
$P$ with the holes in $H$ in any way (setting the matched variables to $1$)
and then sets any other variable mentioning a pigeon in $P$ or a hole in $H$
to $0$.
###### Theorem 3.26.
The length of any $\operatorname{\mathsf{SP}}$ refutation of
$\operatorname{\mathsf{PHP}}^{n+d}_{n}$ is $\Omega(n^{1/4})$.
###### Proof 3.27.
Note that the all-$1/2$ point is feasible for
$\operatorname{\mathsf{PHP}}^{n+d}_{n}$. Then, with Lemma 3.22 and Lemma 3.24
in hand, we meet all the prerequisites of Theorem 3.18.
By simply noting that a $\operatorname{\mathsf{SP}}$ refutation is a binary
tree, we get the following corollary.
###### Corollary 3.28.
The $\operatorname{\mathsf{SP}}$ depth of the
$\operatorname{\mathsf{PHP}}^{n+d}_{n}$ is $\Omega(\log n)$.
### 3.6 Lower bounds for Tseitin contradictions over the complete graph
###### Definition 3.29.
Let $G=(V,E)$ be a graph along with a charging function $\omega:V\to\\{0,1\\}$
satisfying $\sum_{v\in V}\omega(v)=1\mod 2$. The _Tseitin contradiction_
$\operatorname{\mathsf{Ts}}(G,\omega)$ is the set of linear inequalities which
translate the CNF encoding of
$\sum_{\begin{subarray}{c}e\in E\\\ e\ni v\end{subarray}}x_{e}=\omega(v)\mod
2$
for every $v\in V$, where the variables $x_{e}$ range over the edges $e\in E$.
In this subsection we consider $\operatorname{\mathsf{Ts}}(K_{n},\omega)$,
where $\omega$ will always be an odd charging of $K_{n}$. We let
$N:=\binom{n}{2}$, fix $W=\\{0,1/2,1\\}$, $k=3$, and for the sake of brevity
refer to $(N,W)$-words as _triwords_. We will slightly abuse the notation of
Section 3.3 and consider the family
$\\{\operatorname{\mathsf{Ts}}(K_{n},\omega)\\}_{n\in\mathbb{N},\;\omega\text{
odd}}$ as a single-parameter family in $n$. The reason we can do this is that
the following proofs of almost fullness and self-reducibility do not depend on
$\omega$ at all (so long as it is odd, which we will always ensure).
###### Lemma 3.30.
$\operatorname{\mathsf{Ts}}(K_{n},\omega)$ is almost full.
###### Proof 3.31.
We show that $\operatorname{\mathsf{Ts}}(K_{n},\omega)$ has at least $c3^{N}$
admissible triwords, for any constant $0<c<1$ and $n$ large enough. We define
a random assignment $\rho$ by setting every edge variable $x_{e}$
independently and uniformly at random to a value in $W=\\{0,1/2,1\\}$, and we
inspect the probability that some fixed constraint for a node $v$ is violated
by $\rho$. Clearly, if at least $2$ edges incident to $v$ are set to $1/2$ its
constraint is satisfied. If none of its incident edges is set to $1/2$ then it
is satisfied with probability $1/2$. Let $A(v)$ be the event that no edge
incident to $v$ is set to $1/2$ by $\rho$, and let $B(v)$ be the event that
exactly one edge incident to $v$ is set to $1/2$ by $\rho$. Then:
$\displaystyle\Pr[\text{$v$ is
violated}]\leq\frac{1}{2}\Pr[A(v)]+\Pr[B(v)]=\frac{1}{2}\frac{2^{n-1}}{3^{n-1}}+\frac{(n-1)2^{n-2}}{3^{n-1}}=n\frac{2^{n-2}}{3^{n-1}}.$
Therefore, by a union bound, the probability that there exists a node with
violated parity is bounded above by $n^{2}\frac{2^{n-2}}{3^{n-1}}$, which
approaches $0$ as $n$ goes to infinity.
###### Lemma 3.32.
$\operatorname{\mathsf{Ts}}(K_{n},\omega)$ is $2$-self-reducible.
###### Proof 3.33.
We are given some set of variables $I$. Each variable mentions $2$ nodes, so
extend the set of mentioned nodes arbitrarily to a set $S$ of size exactly
$2|I|$, which we then hit with the following restriction: if $S$ is evenly
charged, pick any matching on the set $\\{s\in S:\omega(s)=1\\}$, set those
edges to $1$, and set any other edge involving some vertex of $S$ to $0$.
Otherwise (if $S$ is oddly charged) pick any $l\in\\{s\in S:\omega(s)=1\\}$
and $r\in[n]\setminus S$ and set $x_{lr}$ to $1$. The set $\\{s\in
S:\omega(s)=1\\}\setminus\\{l\\}$ now has even size, so we can pick a matching
as before, and as before we set all other edges involving some vertex of $S$
to $0$. In the first case the graph induced by $[n]\setminus S$ must be oddly
charged (as the original graph was). In the second case this induced graph was
originally evenly charged, but we changed this when we set $x_{lr}$ to $1$.
###### Lemma 3.34.
For any oddly charged $\omega$ and $n$ large enough, all
$\operatorname{\mathsf{SP}}$ refutations of
$\operatorname{\mathsf{Ts}}(K_{n},\omega)$ have length $\Omega(\sqrt[4]{n})$.
###### Proof 3.35.
The all-$1/2$ point is feasible for
$\operatorname{\mathsf{Ts}}(K_{n},\omega)$. Then we can simply apply Theorem
3.18.
###### Corollary 3.36.
The depth of any SP refutation of $\operatorname{\mathsf{Ts}}(K_{n},\omega)$
is $\Omega(\log{n})$.
### 3.7 Lower bound for the Least Ordering Principle
###### Definition 3.37.
Let $n\in\mathbb{N}$. The _Least Ordering Principle_,
$\operatorname{\mathsf{LOP}}_{n}$, is the following set of unsatisfiable
linear inequalities over the variables $P_{i,j}$ ($i\neq j\in[n]$):
$\displaystyle P_{i,j}+P_{j,i}=1\quad\text{ for all $i\neq j\in[n]$}$
$\displaystyle P_{i,k}-P_{i,j}-P_{j,k}\geq-1\text{ for all distinct
$i,j,k\in[n]$}$ $\displaystyle\sum_{i=1,i\neq j}^{n}P_{i,j}\geq 1\text{ for
all $j\in[n]$}$
###### Lemma 3.38.
For any $X\subseteq[n]$ of size at most $n-3$, there is an admissible point
for $\operatorname{\mathsf{LOP}}_{n}$ which is integral on every variable
mentioning an element of $X$.
###### Proof 3.39.
Let $\preceq$ be any total order on the elements in $X$. Our admissible point
$x$ will be
$x(P_{i,j})=\begin{cases}1&\text{ if $i,j\in X$ and $i\preceq j$, or if
$i\not\in X,j\in X$}\\\ 0&\text{ if $i,j\in X$ and $j\preceq i$, or if $i\in
X,j\not\in X$}\\\ 1/2&\text{otherwise (if $i,j\not\in X$)}.\end{cases}$
The existential axioms $\sum_{i=1,i\neq j}^{n}P_{i,j}\geq 1$ are always
satisfied: if $j\in X$ then there is some $i\not\in X$ with $P_{i,j}=1$, and
otherwise there are at least two distinct $i,k\not\in X$ different from $j$
with $P_{i,j}=P_{k,j}=1/2$. For the transitivity axioms
$P_{i,k}-P_{i,j}-P_{j,k}\geq-1$, note that if $2$ or more of $i,j,k$ are not
in $X$ then at least $2$ of the variables are set to $1/2$, and otherwise all
three are set in a binary fashion consistent with a total order.
We will assume that a $\operatorname{\mathsf{SP}}$ refutation ${\cal T}$ of
$\operatorname{\mathsf{LOP}}_{n}$ only involves variables $P_{i,j}$ where
$i<j$; this is without loss of generality, as we can safely set $P_{j,i}$ to
$1-P_{i,j}$ whenever $i>j$, and we will often write $P_{\\{i,j\\}}$ for such a
variable. We consider the underlying graph of the support of a query, i.e. an
undirected graph with an edge $\\{i,j\\}$ for every variable $P_{\\{i,j\\}}$
that appears with non-zero coefficient in the query.
For some function $f(n)$, we say the query is _$f(n)$-wide_ if the smallest
edge cover of its graph has at least $f(n)$ nodes. A query that is not
$f(n)$-wide is _$f(n)$-narrow_. The next lemma works much the same as Theorem
3.18.
###### Lemma 3.40.
Fix $\epsilon>0$ and suppose we have some $\operatorname{\mathsf{SP}}$
refutation ${\cal T}$ of $\operatorname{\mathsf{LOP}}_{n}$ with $|{\cal
T}|\leq n^{\frac{1-\epsilon}{4}}$. Then, if $n$ is large enough, we can find
some $\operatorname{\mathsf{SP}}$ refutation ${\cal T}^{\prime}$ of
$\operatorname{\mathsf{LOP}}_{c\cdot n}$, where $c$ is a positive universal
constant that may be taken arbitrarily close to $1$, such that ${\cal
T}^{\prime}$ contains only $n^{3/4}$-wide queries and $|{\cal
T}^{\prime}|\leq|{\cal T}|$.
###### Proof 3.41.
We iteratively build up an initially empty restriction $\rho$. At every stage
$\rho$ imposes a total order on some subset $X\subseteq[n]$ and places the
elements in $X$ above the elements not in $X$. So $\rho$ sets every edge not
contained entirely in $[n]\setminus X$ to something binary, and
$\operatorname{\mathsf{LOP}}_{n}\\!\\!\restriction_{\rho}=\operatorname{\mathsf{LOP}}_{n-|X|}$
(up to a renaming of variables).
While there exists an $n^{3/4}$-narrow query $q\in{\cal
T}\\!\\!\restriction_{\rho}$ we simply take its smallest edge cover, which has
size at most $n^{3/4}$ by definition, and add its nodes in any fashion to the
total order in $\rho$. Now all of the variables mentioned by $q\in{\cal
T}\\!\\!\restriction_{\rho}$ are fully evaluated and $q$ is redundant. We
repeat this at most $n^{\frac{1-\epsilon}{4}}$ times (as $|{\cal T}|\leq
n^{\frac{1-\epsilon}{4}}$ and each iteration renders at least one query in
${\cal T}$ redundant). At each stage we grow the domain of the restriction by
at most $n^{3/4}$, so the domain of $\rho$ is always bounded by
$n^{1-\epsilon/4}$. We also cannot exhaust the tree ${\cal T}$ in this way, as
otherwise ${\cal T}$ would mention at most $n^{1-\epsilon/4}<n-3$ elements,
and by Lemma 3.38 there would be an admissible point not falling in any slab
of ${\cal T}$, violating Fact 1.
When this process finishes we are left with an $n^{3/4}$-wide refutation
${\cal T}^{\prime}$ of $\operatorname{\mathsf{LOP}}_{n-n^{1-\epsilon/4}}$. As
$\epsilon$ is fixed, the ratio $(n-n^{1-\epsilon/4})/n$ tends to $1$ as $n$
goes to infinity, which gives the constant $c$ of the statement.
###### Lemma 3.42.
Let $d\leq(n-3)/2$. Given any disjoint set of pairs
$D=\\{\\{l_{1},r_{1}\\},\ldots,\\{l_{d},r_{d}\\}\\}$ (where WLOG $l_{i}<r_{i}$
in $[n]$ as natural numbers) and any binary assignment $b\in\\{0,1\\}^{D}$,
the assignment $x_{b}$ with
$x_{b}(P_{\\{i,j\\}})=\begin{cases}b(\\{l_{k},r_{k}\\})&\text{ if
$\\{i,j\\}=\\{l_{k},r_{k}\\}\in D$ for some $k$}\\\
1/2&\text{otherwise}\end{cases}$
is admissible.
###### Proof 3.43.
The existential axioms $\sum_{i=1,i\neq j}^{n}P_{i,j}\geq 1$ are always
satisfied, as for any $j$ there are at least $n-2$ indices $i\in[n]$ different
from $j$ with $P_{i,j}=1/2$. For the transitivity axioms
$P_{i,k}-P_{i,j}-P_{j,k}\geq-1$, note that due to the disjointness of $D$ at
least two variables on the left-hand side are set to $1/2$.
###### Theorem 3.44.
Fix some $\epsilon>0$ and let ${\cal T}$ be any $\operatorname{\mathsf{SP}}$
refutation of $\operatorname{\mathsf{LOP}}_{n}$. Then, for $n$ large enough,
$|{\cal T}|\in\Omega(n^{\frac{1-\epsilon}{4}})$.
###### Proof 3.45.
Suppose otherwise. Then, by Lemma 3.40, we can find some ${\cal T}^{\prime}$
refuting $\operatorname{\mathsf{LOP}}_{cn}$, with $|{\cal
T}^{\prime}|\leq|{\cal T}|$, every query $n^{3/4}$-wide, and $c$ independent
of $n$. We greedily create a set of pairs $D$ by processing the queries in
${\cal T}^{\prime}$ one by one and choosing in each a matching of size
$n^{1/2}$ disjoint from the elements appearing in $D$; this always succeeds,
as at every stage $|D|\in O(n^{\frac{1-\epsilon}{4}}\cdot n^{1/2})$ and
involves at most $O(2n^{\frac{3-\epsilon}{4}})<n^{3/4}-n^{1/2}$ elements.
So by Lemma 3.42, after setting every edge not in $D$ to $1/2$, we have some
set of linear polynomials ${\cal
R}=\\{a(x)=\mathbf{a}x-b-1/2:(\mathbf{a},b)\in{\cal T}^{\prime}\\}$ covering
the hypercube $\\{0,1\\}^{D}$, where every polynomial $p\in{\cal R}$ mentions
at least $n^{1/2}$ edges. By Lemma 3.7 each such polynomial rules out at most
$\nicefrac{{2^{|D|}}}{{n^{1/4}}}$ points, and so we must have $|{\cal
T}|\geq|{\cal T}^{\prime}|\geq|{\cal R}|\geq n^{1/4}$, contradicting the
assumption $|{\cal T}|\leq n^{\frac{1-\epsilon}{4}}$.
## 4 The covering method
###### Definition 4.1.
A set $L$ of linear polynomials with real coefficients is said to be a _cover_
of the cube $\\{0,1\\}^{n}$ if for each $v\in\\{0,1\\}^{n}$, there is a $p\in
L$ such that $p(v)=0$.
In [16] Linial and Radhakrishnan considered the problem of the minimal number
of hyperplanes needed to cover the cube $\\{0,1\\}^{n}$. Clearly every such
cube can be covered by the zero polynomial, so to make the problem more
meaningful they defined the notion of an essential covering of
$\\{0,1\\}^{n}$.
###### Definition 4.2 ([16]).
A set $L$ of linear polynomials with real coefficients is said to be an
_essential cover_ of the cube $\\{0,1\\}^{n}$ if
1. (E1)
$L$ is a cover of $\\{0,1\\}^{n}$,
2. (E2)
no proper subset of $L$ satisfies (E1), that is, for every $p\in L$, there is
a $v\in\\{0,1\\}^{n}$ such that $p$ alone takes the value $0$ on $v$, and
3. (E3)
every variable appears (in some monomial with non-zero coefficient) in some
polynomial of $L$.
They then proved that any essential cover $E$ of the hypercube $\\{0,1\\}^{n}$
must satisfy $|E|\geq\sqrt{n}$. We will use the slightly strengthened lower
bound given in [25]:
###### Theorem 4.3.
Any essential cover $L$ of the cube with $n$ coordinates satisfies
$|L|\in\Omega(n^{0.52})$.
We will need an auxiliary definition and lemma.
###### Definition 4.4.
Let $L$ be a cover of $\\{0,1\\}^{I}$ for some index set $I$. Some subset
$L^{\prime}$ of $L$ is an _essentialisation_ of $L$ if $L^{\prime}$ also
covers $\\{0,1\\}^{I}$ but no proper subset of it does.
###### Lemma 4.5.
Let $L$ be a cover of the cube $\\{0,1\\}^{n}$ and $L^{\prime}$ be any
essentialisation of $L$. Let $M^{\prime}$ be the set of variables appearing
with nonzero coefficient in $L^{\prime}$. Then $L^{\prime}$ is an essential
cover of $\\{0,1\\}^{M^{\prime}}$.
###### Proof 4.6.
* (E1)
Given any point $x\in\\{0,1\\}^{M^{\prime}}$, we can extend it arbitrarily to
a point $x^{\prime}\in\\{0,1\\}^{n}$. Then there is some $p\in L^{\prime}$
with $p(x^{\prime})=0$; but $p(x^{\prime})=p(x)$, as $p$ does not mention any
variable outside of $M^{\prime}$.
* (E2)
Similarly to the previous point, this follows from the fact that if some set
${\cal T}$ of polynomials mentioning only variables in $I$ covers a hypercube
$\\{0,1\\}^{I}$, it also covers $\\{0,1\\}^{I^{\prime}}$ for any
$I^{\prime}\supseteq I$.
Suppose some proper subset $L^{\prime\prime}\subset L^{\prime}$ covers
$\\{0,1\\}^{M^{\prime}}$; then it covers $\\{0,1\\}^{n}$, but we picked
$L^{\prime}$ to be a minimal set with this property.
* (E3)
We defined $M^{\prime}$ to be the set of variables appearing with nonzero
coefficient in $L^{\prime}$.
### 4.1 The covering method and Tseitin
Let $H_{n}$ denote the $n\times n$ grid graph. Fix some $\omega$ with odd
charge and a $\operatorname{\mathsf{SP}}$ refutation ${\cal T}$ of
$\operatorname{\mathsf{Ts}}(H_{n},\omega)$. Fact 1 tells us that for every
point $x$ admissible for $\operatorname{\mathsf{Ts}}(H_{n},\omega)$, there
exists a query $(\mathbf{a},b)\in{\cal T}$ such that $b<\mathbf{a}x<b+1$. In
this section we will only consider admissible points with entries in
$\\{0,1/2,1\\}$, turning the slab of a query $(\mathbf{a},b)$ into the
solution set of the single linear equation $\mathbf{a}\cdot x=b+1/2$. So we
consider ${\cal T}$ as a set of such equations.
We say that an edge of $H_{n}$ is mentioned in ${\cal T}$ if the variable
$x_{e}$ appears with non-zero coefficient in some query in ${\cal T}$. We can
see $H_{n}$ as a set of $(n-1)^{2}$ squares ($4$-cycles), and we can index
them as a Cartesian grid, starting from $1$. Let $S$ be the set of
$\lfloor(n/3)^{2}\rfloor$ squares in $H_{n}$ obtained by picking the squares
both of whose indices are congruent to $2\pmod 3$. This ensures that every two
squares in $S$ in the same row or column have at least two other squares
between them, and that no selected square is on the perimeter.
We will assume WLOG that $n$ is a multiple of 3, so $|S|=(n/3)^{2}$. Let
$K=\bigcup_{t\in S}t$ be the set of edges mentioned by $S$, and for some $s\in
S$, let $K_{s}:=\bigcup_{t\in S,t\neq s}t$ be the set of edges mentioned in
$S$ by squares other than $s$.
###### Lemma 4.7.
For every $s\in S$ we can find an admissible point
$b_{s}\in\\{0,1/2,1\\}^{E(H_{n})}$ such that
1. 1.
$b_{s}(x_{e})=0$ for all $e\in K_{s}$, and
2. 2.
$b_{s}$ is fractional only on the edges in $s$.
###### Proof 4.8.
We use the following fact, due to A. Urquhart [22].
###### Fact 2.
For each vertex $v$ in $H_{n}$ there is a totally binary assignment, called
$v$-critical in [22], satisfying all parity axioms in
$\operatorname{\mathsf{Ts}}(H_{n},\omega)$ except the parity axiom of node
$v$.
Pick any corner $c$ of $s$. Let $b_{s}$ be the result of taking any
$c$-critical assignment of the variables of
$\operatorname{\mathsf{Ts}}(H_{n},\omega)$ and setting the edges in $s$ to
$1/2$. $b_{s}$ is admissible, as $c$ is now adjacent to two variables set to
$1/2$ (so its originally falsified parity axiom becomes satisfied) and every
other vertex is either unaffected or also adjacent to two $1/2$s. While
$b_{s}$ sets some edge $e\in K_{s}$ to 1, flip all of the edges in the unique
other square containing $e$. This other square always exists (as no square
touches the perimeter) and also contains no other edge in $K_{s}$ (as there
are at least two squares between any two squares in $S$). Flipping the edges
in a cycle preserves admissibility, as every vertex is adjacent to $0$ or $2$
flipped edges.
###### Definition 4.9.
Let $V_{S}:=\\{v_{s}:s\in S\\}$ be a set of new variables. For $s\in S$ define
the substitution $h_{s}$, taking the variables of
$\operatorname{\mathsf{Ts}}(H_{n},\omega)$ to $V_{S}\cup\\{0,1/2,1\\}$, as
$h_{s}(x_{e}):=\begin{cases}b_{s}(e)&\text{ if $e$ is not mentioned in $S$, or
if $e$ is mentioned by $s$,}\\\ v_{t}&\text{if $e$ is mentioned by some square
$t\neq s\in S$.}\\\ \end{cases}$
(where $b_{s}$ is from Lemma 4.7).
###### Definition 4.10.
Say that a linear polynomial $p=c+\sum_{e\in E(H_{n})}\mu_{e}x_{e}$ with
coefficients $\mu_{e}\in\mathbb{Z}$ and some constant part $c\in\mathbb{R}$
has _odd coefficient in $X\subseteq E(H_{n})$_ if $\sum_{e\in X}\mu_{e}$ is an
odd integer. Given some polynomial $p$ in the variables $x_{e}$ of Tseitin,
and some square $s\in S$, let $p_{s}$ be the polynomial in variables $V_{S}$
gotten by applying the substitution $x_{e}\to h_{s}(x_{e})$. Also, for any set
of polynomials ${\cal T}$ in the variables $x_{e}$ let ${\cal
T}_{s}:=\\{p_{s}:p\in{\cal T},p\text{ has odd coefficient in }s\\}.$
Given some assignment $\alpha\in\\{0,1\\}^{V_{S}\setminus\\{v_{s}\\}}$, and
some $h_{s}$ as in Definition 4.9, we let $\alpha(h_{s})$ be the assignment to
the variables of $\operatorname{\mathsf{Ts}}(H_{n},\omega)$ obtained by
replacing each $v_{t}$ in the definition of $h_{s}$ by $\alpha(v_{t})$.
###### Lemma 4.11.
Let $s\in S$. For all $2^{|S|-1}$ settings $\alpha$ of the variables in
$V_{S}\setminus\\{v_{s}\\}$, $\alpha(h_{s})$ is admissible.
###### Proof 4.12.
When $\alpha$ sets all the $v_{t}$ to $0$, $\alpha(h_{s})=b_{s}$ is admissible
(by Lemma 4.7). Toggling some $v_{t}$ only has the effect of flipping every
edge in a cycle, which preserves admissibility.
###### Lemma 4.13.
${\cal T}_{s}$ covers $\\{0,1\\}^{V_{S}\setminus\\{v_{s}\\}}$.
###### Proof 4.14.
For every setting $\alpha\in\\{0,1\\}^{V_{S}\setminus\\{v_{s}\\}}$, the
assignment $\alpha(h_{s})$ defined above is admissible and therefore covered
by some $p\in{\cal T}$, which has constant part $1/2+b$ for some
$b\in\mathbb{Z}$. Furthermore, as $\alpha(h_{s})$ sets every edge in $s$ to
$1/2$, every such $p$ must have odd coefficient in $s$; otherwise
$p(\alpha(h_{s}))=1/2+b+(1/2)\left(\sum_{e\in s}\mu_{e}\right)+\sum_{e\not\in
s}\mu_{e}\alpha(h_{s})(x_{e})$
can never be zero, as the $1/2$ is the only non-integral term in the
summation.
###### Theorem 4.15.
Any $\operatorname{\mathsf{SP}}$ refutation ${\cal T}$ of
$\operatorname{\mathsf{Ts}}(H_{n},\omega)$ must have $|{\cal
T}|\in\Omega(n^{1.04})$.
###### Proof 4.16.
We are going to find a set of pairs
$(L_{1},M_{1}),(L_{2},M_{2}),\ldots,(L_{q},M_{q})$, where the $L_{i}$ are
pairwise disjoint nonempty subsets of ${\cal T}$, the $M_{i}$ are subsets of
$V_{S}$, and for every $i$ there is some $s_{i}\in
S\setminus\bigcup_{j=1}^{q}M_{j}$ such that
$|(L_{i})_{s_{i}}|\geq|M_{i}|^{0.52}$. These pairs will also satisfy the
property that
$\\{s_{i}:1\leq i\leq q\\}\cup\bigcup_{i=1}^{q}M_{i}=S.$ (1)
As $|S|=(n/3)^{2}$ this would imply that
$\sum_{i=1}^{q}|M_{i}|\geq(n/3)^{2}-q$. If $q\geq(n/3)^{2}/2$, then (as the
$L_{i}$ are nonempty and pairwise disjoint) we have $|{\cal
T}|\geq(n/3)^{2}/2\in\Omega(n^{1.04})$. Otherwise
$\sum_{i=1}^{q}|M_{i}|\geq(n/3)^{2}/2$, and as (by Theorem 4.3) each
$|L_{i}|\geq|M_{i}|^{0.52}$,
$|{\cal
T}|\geq\sum_{i=1}^{q}|L_{i}|\geq\sum_{i=1}^{q}|M_{i}|^{0.52}\geq\left(\sum_{i=1}^{q}|M_{i}|\right)^{0.52}\geq\left((n/3)^{2}/2\right)^{0.52}\in\Omega(n^{1.04}).$
(2)
We create the pairs in stages. Let $S_{1}=S$ and start by picking any
$s_{1}\in S_{1}$. By Lemma 4.13, ${\cal T}_{s_{1}}$ covers
$\\{0,1\\}^{V_{S_{1}}\setminus\\{v_{s_{1}}\\}}$ and has an essentialisation
$E$, which will be an essential cover of $\\{0,1\\}^{V^{\prime}}$ for some
$V^{\prime}\subseteq V_{S_{1}}\setminus\\{v_{s_{1}}\\}$. We create the pair
$(L_{1},M_{1})=(\\{p:p_{s_{1}}\in E\\},V^{\prime})$ and update
$S_{2}=S_{1}\setminus\left(V^{\prime}\cup\\{s_{1}\\}\right)$. (Note that
$V^{\prime}$ could possibly be empty; for example, if the polynomial
$x_{e}=1/2$ appears in ${\cal T}$, where $e\in s_{1}$. In this case we still
have $|L_{1}|\geq|M_{1}|^{0.52}$. If $V^{\prime}$ is not empty we have the
same bound by Theorem 4.3.) If $S_{2}$ is nonempty we repeat with any
$s_{2}\in S_{2}$, and so on.
We now show that, as promised, the $L_{i}$ are pairwise disjoint subsets of
${\cal T}$, which gives us the first inequality in Equation 2. Every
polynomial $p\in L_{i}$ has every $v_{t}$ mentioned by $p_{s_{i}}$ removed
from $S_{j}$ for all $j>i$, so the only way $p$ could reappear in some later
$L_{j}$ is if $p_{s_{j}}\in{\cal T}_{s_{j}}$, where $v_{s_{j}}$ does _not_
appear in $p_{s_{i}}$. Let $\mu_{e},e\in s_{j}$ be the coefficients of $p$ in
front of the four edges of $s_{j}$. The coefficient in front of $v_{s_{j}}$ in
$p_{s_{i}}$ is just $\sum_{e\in s_{j}}\mu_{e}$. As $v_{s_{j}}$ failed to
appear, this sum is $0$ and $p$ does not have the odd coefficient sum it would
need in order to appear in ${\cal T}_{s_{j}}$.
## 5 Conclusions and acknowledgements
The $\Omega(\log n)$ depth lower bound for
$\operatorname{\mathsf{Ts}}(H_{n},\omega)$ is not optimal, since [2] proved an
$O(\log^{2}n)$ upper bound for $\operatorname{\mathsf{Ts}}(G,\omega)$ for any
bounded-degree $G$. Even to apply the covering method to prove a depth
$\Omega(\log^{2}n)$ lower bound on $\operatorname{\mathsf{Ts}}(K_{n},\omega)$
(notice that it would imply a superpolynomial length lower bound), the
polynomial covering of the Boolean cube would have to be improved to work on
general cubes. To this end the algebraic method used in [16] would have to be
extended to generalizations of multilinear polynomials.
While finishing the writing of this manuscript we learned about [9] from Noah
Fleming. We would like to thank him for answering some questions on his paper
[2], for sending us the manuscript [9], and for comments on a preliminary
version of this work.
## References
* [1] Noga Alon and Zoltán Füredi. Covering the cube by affine hyperplanes. Eur. J. Comb., 14(2):79–83, 1993. doi:10.1006/eujc.1993.1011.
* [2] Paul Beame, Noah Fleming, Russell Impagliazzo, Antonina Kolokolova, Denis Pankratov, Toniann Pitassi, and Robert Robere. Stabbing planes. In Anna R. Karlin, editor, 9th Innovations in Theoretical Computer Science Conference, ITCS 2018, January 11-14, 2018, Cambridge, MA, USA, volume 94 of LIPIcs, pages 10:1–10:20. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2018. doi:10.4230/LIPIcs.ITCS.2018.10.
* [3] Joshua Buresh-Oppenheim, Nicola Galesi, Shlomo Hoory, Avner Magen, and Toniann Pitassi. Rank bounds and integrality gaps for cutting planes procedures. Theory of Computing, 2(4):65–90, 2006. arXiv:toc:v002/a004.
* [4] Teena Carroll, Joshua Cooper, and Prasad Tetali. Counting antichains and linear extensions in generalizations of the boolean lattice, 2009.
* [5] W. Cook, C. R. Coullard, and G. Turán. On the complexity of cutting-plane proofs. Discrete Appl. Math., 18(1):25–38, 1987. doi:10.1016/0166-218X(87)90039-4.
* [6] Daniel Dadush and Samarth Tiwari. On the complexity of branching proofs. In Shubhangi Saraf, editor, 35th Computational Complexity Conference, CCC 2020, July 28-31, 2020, Saarbrücken, Germany (Virtual Conference), volume 169 of LIPIcs, pages 34:1–34:35. Schloss Dagstuhl \- Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPIcs.CCC.2020.34.
* [7] Martin Davis, George Logemann, and Donald W. Loveland. A machine program for theorem-proving. Commun. ACM, 5(7):394–397, 1962. doi:10.1145/368273.368557.
* [8] Martin Davis and Hilary Putnam. A computing procedure for quantification theory. J. ACM, 7(3):201–215, 1960. URL: http://doi.acm.org/10.1145/321033.321034, doi:10.1145/321033.321034.
* [9] Noah Fleming, Mika Göös, Russell Impagliazzo, Toniann Pitassi, Robert Robere, Li-Yang Tan, and Avi Wigderson. On the power and limitations of branch and cut. In Valentine Kabanets, editor, 36th Computational Complexity Conference, CCC 2021, July 20-23, 2021, Toronto, Ontario, Canada (Virtual Conference), volume 200 of LIPIcs, pages 6:1–6:30. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021. doi:10.4230/LIPIcs.CCC.2021.6.
* [10] Nicola Galesi, Pavel Pudlák, and Neil Thapen. The space complexity of cutting planes refutations. In David Zuckerman, editor, 30th Conference on Computational Complexity, CCC 2015, June 17-19, 2015, Portland, Oregon, USA, volume 33 of LIPIcs, pages 433–447. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2015. doi:10.4230/LIPIcs.CCC.2015.433.
* [11] Russell Impagliazzo, Toniann Pitassi, and Alasdair Urquhart. Upper and lower bounds for tree-like cutting planes proofs. In Proceedings Ninth Annual IEEE Symposium on Logic in Computer Science, pages 220–228. IEEE, 1994.
* [12] Henry A. Kautz and Bart Selman. Ten challenges redux: Recent progress in propositional reasoning and search. In Francesca Rossi, editor, Principles and Practice of Constraint Programming - CP 2003, 9th International Conference, CP 2003, Kinsale, Ireland, September 29 - October 3, 2003, Proceedings, volume 2833 of Lecture Notes in Computer Science, pages 1–18. Springer, 2003. doi:10.1007/978-3-540-45193-8_1.
* [13] Arist Kojevnikov. Improved lower bounds for tree-like resolution over linear inequalities. In João Marques-Silva and Karem A. Sakallah, editors, Theory and Applications of Satisfiability Testing - SAT 2007, 10th International Conference, Lisbon, Portugal, May 28-31, 2007, Proceedings, volume 4501 of Lecture Notes in Computer Science, pages 70–79. Springer, 2007. doi:10.1007/978-3-540-72788-0_10.
* [14] Jan Krajícek. Discretely ordered modules as a first-order extension of the cutting planes proof system. J. Symb. Log., 63(4):1582–1596, 1998. doi:10.2307/2586668.
* [15] Jan Krajícek. Interpolation by a game. Math. Log. Q., 44:450–458, 1998. doi:10.1002/malq.19980440403.
* [16] Nathan Linial and Jaikumar Radhakrishnan. Essential covers of the cube by hyperplanes. Journal of Combinatorial Theory, Series A, 109(2):331–338, 2005.
* [17] Lutz Mattner and Bero Roos. Maximal probabilities of convolution powers of discrete uniform distributions. Statistics & probability letters, 78(17):2992–2996, 2008.
* [18] Fedor Part and Iddo Tzameret. Resolution with counting: Dag-like lower bounds and different moduli. Comput. Complex., 30(1):2, 2021. doi:10.1007/s00037-020-00202-x.
* [19] Mark Nicholas Charles Rhodes. On the chvátal rank of the pigeonhole principle. Theor. Comput. Sci., 410(27-29):2774–2778, 2009. doi:10.1016/j.tcs.2009.03.035.
* [20] John Alan Robinson. A machine-oriented logic based on the resolution principle. J. ACM, 12(1):23–41, 1965. URL: http://doi.acm.org/10.1145/321250.321253, doi:10.1145/321250.321253.
* [21] Bart Selman, Henry A. Kautz, and David A. McAllester. Ten challenges in propositional reasoning and search. In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, IJCAI 97, Nagoya, Japan, August 23-29, 1997, 2 Volumes, pages 50–54. Morgan Kaufmann, 1997. URL: http://ijcai.org/Proceedings/97-1/Papers/008.pdf.
* [22] Alasdair Urquhart. Hard examples for resolution. J. ACM, 34(1):209–219, 1987. doi:10.1145/7531.8928.
* [23] Jacobus Hendricus van Lint and Richard Michael Wilson. A Course in Combinatorics. Cambridge University Press, Cambridge, U.K.; New York, 2001.
* [24] Gal Yehuda and Amir Yehudayoff. A lower bound for essential covers of the cube. CoRR, abs/2105.13615, 2021. URL: https://arxiv.org/abs/2102.05536, arXiv:2102.05536.
* [25] Gal Yehuda and Amir Yehudayoff. A lower bound for essential covers of the cube. arXiv preprint arXiv:2105.13615, 2021.
# Low velocity streams inside the planetary nebula H 2-18

A 3D photoionization and kinematical reconstruction (based on observations
collected at the European Southern Observatory under ESO programme
099.D-0386(A))

K. Gesicki$^{1}$, A. Zijlstra$^{2,3}$, M. Hajduk$^{4}$, A. Iwanowska$^{1}$,
K. Grzesiak$^{1}$, K. Lisiecki$^{1}$, and J. Lipinski$^{1}$

$^{1}$ Institute of Astronomy, Faculty of Physics, Astronomy and Informatics,
Nicolaus Copernicus University, ul. Grudziadzka 5, 87-100 Torun, Poland,
e-mail: <EMAIL_ADDRESS>
$^{2}$ Jodrell Bank Centre for Astrophysics, School of Physics & Astronomy,
University of Manchester, Oxford Road, Manchester M13 9PL, UK
$^{3}$ School of Mathematical and Physical Sciences, Macquarie University,
Sydney, NSW 2109, Australia
$^{4}$ Department of Geodesy, Faculty of Geoengineering, University of Warmia
and Mazury, ul. Oczapowskiego 2, 10-719 Olsztyn, Poland

(Received ; accepted )
###### Abstract
Aims. Numerous planetary nebulae show complicated inner structures that are
not obviously explained. For one such object we undertake a detailed 3D
photoionization and kinematical model analysis for a better understanding of
the underlying shaping processes.
Methods. We obtained 2D ARGUS/IFU spectroscopy covering the whole nebula in
selected, representative emission lines. 3D photoionization modelling was used
to compute images and line profiles, and comparison of the observations with
the models was used to fine-tune the model details. This yields the
approximate nebular 3D structure and kinematics.
Results. We found that within a cylindrical outer nebula there is a hidden,
very dense, bar-like or cylindrical inner structure. Both features are
co-axial and are inclined to the plane of the sky by 40 deg. A wide asymmetric
one-sided plume attached to one end of the bar is proposed to be a flat
structure. All nebular components share the same kinematics, with an isotropic
velocity field which increases monotonically with distance from the star
before reaching a plateau. The relatively low velocities indicate that the
observed shapes do not require particularly energetic processes, and there is
no indication of the current presence of a jet. The 3D model reproduces the
observed line ratios and the detailed structure of the object significantly
better than previous models.
###### Key Words.:
planetary nebulae: general – planetary nebulae: individual: H 2-18 (PN G
006.3+04.4)
## 1 Introduction
The theory of mass loss during the late asymptotic giant branch evolutionary
phase is still in development, and this remains one of the most important
missing ingredients of stellar evolution theory. The ejected matter often
takes on interesting axisymmetric or asymmetric shapes, and the understanding
of the physical processes causing these is far from complete. Theoretical
approaches need suggestions and hints derived from observations. Such hints
can be found in studies of planetary nebulae (PNe).
Figure 1: The HST image of the PN H 2-18 obtained in an H$\alpha$ filter. The
plot uses a linear intensity scale. The angular dimensions are converted to
$10^{17}$ cm, assuming that the PN is at the Galactic Bulge distance of 8 kpc,
and are centred at the star position. The image is rotated for easier
comparison with the computed structures.
A classical concept of a PN is a shell of gas ejected by a star at the end of
its life. Initially the shell is opaque and hidden from view. Only after the
shell expands sufficiently for the hot central core to shine through and
ionize the nearby material does the object enter the PN phase. This phase ends
when the ejecta expand to a large volume and very low density, or when the
remaining stellar core enters the White Dwarf cooling track and fades
significantly. Both timescales are comparable.
We study details of PNe by applying the so-called kinematical reconstruction
(sometimes called 'reverse engineering'). This derives the nebular structure
and kinematics from line profiles, based on assumed velocity fields which are
verified through photoionization modelling and the computation of
emission-line profiles (Gesicki & Zijlstra, 2000). For this method to work,
and to reduce the inherent ambiguities, both high-quality images and
high-resolution spectroscopy are needed in order to constrain the spatial
distributions of density and velocity.
Figure 2: The ARGUS/IFU monochromatic images with the HST contours defined in
Fig. 4. The high-resolution HST image has been positioned to obtain the best
agreement with the IFU images, which are of much lower resolution. Shown are
examples of the H$\alpha$, [N ii] 658.3 nm, and He ii 468.6 nm lines.
Here we present new 2D spectroscopic observations and new 3D photoionization
modelling of a previously analyzed PN, allowing us to compare the new results
with the older ones, to verify the new approach, and to demonstrate its
advantages.
The planetary nebula H 2-18 (see Fig. 1) was previously analyzed using the
kinematical reconstruction approach. Gesicki et al. (2014) applied the Torun
codes and derived a spherically symmetric model of this PN. Later, Gesicki et
al. (2016) used the same observational data but applied the pseudo-3D code
pyCloudy. Both papers were based on the HST imaging and ESO/VLT long-slit
spectroscopic observations.
In 2017 we obtained ARGUS IFU 2D data of high spatial and spectral resolution
for this object. The observations covered selected spectral lines of high and
low excitation which probe different nebular regions and allow for a more
precise kinematic and photoionization analysis. The collected data are
presented in Sect. 2.
For modelling this nebula we applied the publicly available code MOCASSIN,
which computes the photoionization of a 3D gaseous structure. To initiate the
3D modelling and to compare the model outputs with the 2D spectra we wrote a
number of Python scripts. The modelling procedure and the derived density
structure and velocity field are described in Sect. 3.
The nebula H 2-18 appears to be complex, and there are few published studies
of similar objects to which it can be compared; generalizing our results is
therefore not easy. Some aspects of this are discussed in Sect. 4.
## 2 Observations
The ESO FLAMES/ARGUS instrument provided high-resolution spectroscopy of the
spatially resolved Galactic Bulge PN H 2-18 (PN G 006.3+04.4; 17:43:28.7,
$-$21:09:51.30 (J2000)). The nebula was observed in 2017, on Sep. 18
(wavelengths around 465 and 504 nm) and on Sep. 21 (627 and 651 nm). The wide
field of view is a mosaic of $22\times 14$ microlenses, each of $0.52\times
0.52$ arcsec. The instrument setup was selected to cover, at high spectral
resolution ($R\gtrsim 30\,000$), the regions around four lines: [O iii] 500.7
nm, H$\alpha$ (including [N ii] 658.3 nm), He ii 468.6 nm, and [O i] 630.2 nm.
For this object, at the time of the observations the [O i] 630.2 nm line
happened to be strongly affected by the overlying terrestrial atmospheric
emission, which unfavourably overlapped in radial velocity, and so it was
excluded from the analysis.
The standard ESO reduction pipeline (EsoReflex) was applied to the raw data.
Further data processing and plotting were performed in Python with the modules
astropy, scipy, and matplotlib. These steps comprise the removal of bad pixels
by median filtering the data cube in the XY axes (sky plane) and smoothing the
spectral noise with a Savitzky-Golay filter along the wavelength axis. From
the original IFU field of $22\times 14$ pixels we extracted for the analysis a
square of $10\times 10$ pixels which covers the whole nebula and is shown in
the figures. Examples of the monochromatic images (i.e. integrated over the
full width of a single emission line) are shown in Fig. 2. They show
background emission increasing towards the right edges, which was not
flat-fielded completely by the pipeline. We did not correct for this because
it does not affect the morphological and kinematical modelling. Absolute flux
calibration was not attempted; instead, the line flux values needed for the
model fitting were taken from the literature. In all figures we present the
observational data with the sky coordinates converted to $10^{17}$ cm
(assuming a PN distance of 8 kpc) and measured with respect to the position of
the central star, to ease the comparison with models where the real physical
size is used. We do not present the [O iii] 500.7 nm monochromatic image
because it is very similar to H$\alpha$.
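A minimal sketch of the cleaning step just described, assuming a data cube
indexed as (wavelength, y, x); the filter window sizes here are illustrative
guesses, not the values actually used:

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import savgol_filter

def clean_cube(cube: np.ndarray) -> np.ndarray:
    """Remove bad pixels and smooth spectral noise in an IFU cube.

    cube: array of shape (n_wave, n_y, n_x).
    """
    # Median-filter each monochromatic slice in the XY (sky) plane;
    # size=(1, 3, 3) leaves the wavelength axis untouched.
    despiked = median_filter(cube, size=(1, 3, 3))
    # Savitzky-Golay smoothing along the wavelength axis (axis 0);
    # window length and polynomial order are illustrative assumptions.
    return savgol_filter(despiked, window_length=11, polyorder=3, axis=0)
```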
Previous HST observations in the H$\alpha$ filter show an overall axially
symmetric, elongated structure, which in Fig. 1 is rotated so that it is
positioned vertically. The most intense, inner broad area is asymmetric, with
the strongest emission on the upper left of the inner region. Outside the main
cylinder, near its middle, low-intensity ear-like extensions can be seen
(oriented horizontally in the image) which could be the trace of a faint
equatorial ring.
Our new data confirm these features (Fig. 2, left panel) and reveal more. The
IFU data show an intriguing pair of small bright blobs in the [N ii] 658.3 nm
image (Fig. 2, middle panel); their dominant emission is axially symmetric,
with a weaker asymmetric component. This feature is completely absent in He ii
468.6 nm (Fig. 2, right panel) and is barely visible in H$\alpha$.
We see now that neither a spherical nor an elliptical model is sufficient to
describe the PN H 2-18, and we need to consider the inner component. Although
the elongation in the [N ii] image appears jet-like, because of the high
density we estimate and the low velocity we shall rather call it bar-like or
cylindrical, leaving the case open.
## 3 3D model analysis
The photoionization modelling was performed with the publicly available
MOCASSIN code (Ercolano et al., 2003). Supplementary codes were written in
Python; these concern the construction of a 3D density distribution cube, the
calculation of emission-line shapes (derived from the total emissivity with an
assumed velocity field, and emerging from a defined pixel area on the sky),
the graphical presentation of the observed and modelled emissions, etc. The
models presented here were computed on a spatial grid of $121\times 121\times
121$ points, which ensured sufficient spatial accuracy within a reasonable
time for simulations and image processing.
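As an illustration of the line-shape computation mentioned above, a minimal
sketch (our reconstruction, not the authors' script): given an emissivity cube
and a line-of-sight velocity cube from the assumed velocity field, the profile
is an emissivity-weighted histogram of radial velocities.

```python
import numpy as np

def line_profile(emissivity: np.ndarray, v_los: np.ndarray,
                 v_bins: np.ndarray) -> np.ndarray:
    """Emission-line profile from an emissivity cube and a line-of-sight
    velocity cube (km/s) defined on the same spatial grid.

    v_bins: radial-velocity bin edges, e.g. np.arange(-75, 76, 5).
    Thermal and instrumental broadening are omitted for simplicity.
    """
    profile, _ = np.histogram(v_los.ravel(), bins=v_bins,
                              weights=emissivity.ravel())
    return profile
```

Selecting only the grid cells that project onto a given sky pixel before
calling such a function yields per-pixel profiles of the kind shown later in
Figs. 5-7.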
We aimed at a model which, while being as simple as possible, fits the images
and simultaneously reproduces the line ratios and velocities.
### 3.1 Density structure
Figure 3: The density cross-section of the assumed model. The symmetry axis is
positioned vertically, all structures (except for the plume) are cylindrically
symmetrical. This graph shows the true spatial dimensions of the substructures
for a distance of 8 kpc. The plume is in the plane of the image and extends to
$\pm 2.4\times 10^{16}$ cm above and below it.
Figure 4: The observed image in the H$\alpha$ line (left panel) compared with
the one obtained from the assumed model (right panel). The axes are the same
as in Fig. 1. The contours (at 0.04, 0.09, 0.19, 0.4, and 0.6 of the maximum)
were selected to bring out the substructures; the same contours are shown in
both panels and in all other plots. For the model image the main nebular axis
is inclined to the sky plane at 40 deg, and because the model image lacks
background noise its lowest contour is set at the 0.01 level.
The components of the model discussed below are presented in Fig. 3 as a
cross-section of the density distribution taken along the presumed symmetry
axis. The values of the hydrogen number density are shown in the side bar,
while the size of the nebula in cm is based on the assumed Galactic Bulge
distance of 8 kpc.
The central region within a radius of $2.4\times 10^{16}$ cm was not
considered in the modelling and was assumed empty. The ionizing source
(central star) was assumed to be a black body with a temperature
$T_{\mathrm{bb}}$ of 62 kK (previously estimated at 52 kK).
The main nebular body shows nearly straight edges, seen clearly in the
vertical direction in the HST image in Fig. 1 (especially as the contours on
the right side in the left panel of Fig. 4). These cannot be the ionization
boundary because we do not see the low-excitation [N ii] 658 nm emission
there. To reproduce this feature we assumed a cylindrical outer structure
similar to that derived previously in Gesicki et al. (2016). The difference
concerns the edges, which previously spread outwards and had a higher density;
now we found a reasonable fit assuming a plain cylinder, fully filled, with
the gas density decreasing along the axis.
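A sketch of how such a density cube can be built on the $121^{3}$ grid; the
numerical values below (cylinder radius and height, density, fall-off scale)
are placeholders chosen for illustration and would have to be read off Fig. 3
for the actual model:

```python
import numpy as np

def cylinder_density(n: int = 121, box: float = 3.2e17,
                     r_cyl: float = 6.0e16, h_cyl: float = 1.5e17,
                     n0: float = 2.0e3, z_scale: float = 8.0e16) -> np.ndarray:
    """Hydrogen number density cube (cm^-3) for a filled cylinder whose
    density decreases along the symmetry axis z. All lengths in cm."""
    ax = np.linspace(-box / 2, box / 2, n)
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    rho = np.hypot(x, y)                        # cylindrical radius
    inside = (rho <= r_cyl) & (np.abs(z) <= h_cyl)
    # density falls off exponentially along the axis, zero outside
    return np.where(inside, n0 * np.exp(-np.abs(z) / z_scale), 0.0)
```

The inner bar, plume, and torus can be added as further components on the same
grid before the cube is passed to MOCASSIN.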
The nebular model structure and inclination were fine-tuned together with the
velocity field: both factors determine the emission-line profiles (Sect. 3.2)
which verify the model. The adopted, enhanced, constant density of the middle
part of the main cylinder is responsible for the high intensity of the
symmetric component of the H$\alpha$ emission profiles. The asymmetric parts
of the emission profiles of H$\alpha$ and [N ii] (see Figs. 5, 6) originate in
the proposed inner bar and indicate that it is inclined to the plane of the
sky at an angle of 40 deg (previously estimated at 35 deg).
The two emission blobs seen in the low-excitation line [N ii] 658 nm (Fig. 2,
middle panel) indicate that the ionization front is positioned just at that
location. To obtain an ionization front at a specified position there should
be enough material between this front and the ionizing source. The axial
symmetry of the observed feature guided our modelling towards a narrow and
dense bar-like structure. Sufficient for a rather good fit was the assumption
of a constant density, with a value adopted to produce the ionization front at
both far ends. Within this setup the middle region of this bar reproduces the
He ii 468 nm unresolved central emission, which requires an intense ionizing
flux. Were the [N ii] 658 nm lines formed in two separate distant blobs, the
explanation of the unresolved He ii 468 nm emission would require a more
elaborate nebular structure.
To reproduce the asymmetric H$\alpha$ image we attached a plume to the
bar-like structure, defined as a quarter of a circle of fixed thickness with
the gas density decreasing away from the symmetry axis. This structure has the
same kinematics as the bar-like outflows; therefore it is natural to place it
near the axis and perpendicular to the equatorial plane. The asymmetric
plume-like structure extends to $\pm 2.4\times 10^{16}$ cm above and below the
plane of the image shown in Fig. 3.
The total nebular mass of the proposed model is 0.09 M☉; the central dense bar
has a mass of 0.006 M☉ while the plume-like feature has a mass of 0.003 M☉.
The chemical composition adopted for the photoionization modelling was taken
from Chiappini et al. (2009) and supplemented by the latest data from Ventura
et al. (2017). The line ratios used for verification of the modelling were
taken from Górny et al. (2009).
In Table 1 we compare the new model parameters with our previous analyses from
2014 and 2016, and with selected, representative observational data adopted
from the literature. We did not perform any automated search for a best-fit
model, such as the genetic algorithm used previously with the spherically
symmetric computations (Gesicki et al., 2006), as the model is now too
complicated for that. We started from the previously found basic parameters
and searched for an improvement (by trial and error, staying not far from the
starting values), all the time verifying the agreement with the observed data.
The nebular H$\beta$ flux was similarly well fitted by the older, simpler
codes; however, the details of the line ratios are reproduced significantly
better by the new 3D model. This in particular concerns the strong [O iii]
500 nm emission, which is now excited by the higher temperature of the central
star. Previously, the lower value of $T_{\mathrm{bb}}$ was a compromise needed
to fit the high-excitation [O iii] 500 nm simultaneously with the
low-excitation [N ii] 658 nm under spherical symmetry. This conflict is now
resolved by the spatial distribution of matter within the PN.
Table 1: Parameters of the models of PN H 2-18 obtained with different codes,
compared with observed values. The line intensities are on the scale
H$\beta$=1.

| | 2014$^{b}$ | 2016$^{c}$ | 2024 | observed$^{a}$ |
|---|---|---|---|---|
| model version | sphere | ellipsoid | 3D | |
| ion. mass [M☉] | 0.27 | 0.051 | 0.096 | |
| kin. age [kyr] | 1.67 | 1.28 | 2.4 | |
| $T_{\mathrm{bb}}$ [kK] | 51 | 52 | 62 | |
| $L/L_{\sun}$ | 67 000 | 1 700 | 5 000 | |
| $\log F(\mathrm{H}\beta)$ | $-11.5$ | $-11.6$ | $-11.5$ | $-11.5$ |
| He ii 468 nm | 0.078 | | 0.058 | 0.05 |
| [N ii] 658 nm | 0.0026 | 0.07 | 0.097 | 0.078 |
| [O i] 630 nm | 5$\times 10^{-8}$ | 8$\times 10^{-5}$ | 0.002 | 0.01 |
| [O ii] 372 nm | 0.005 | 0.07 | 0.13 | 0.16 |
| [O iii] 500 nm | 7.8 | 5.24 | 10.6 | 13.1 |
| [Ne iii] 386 nm | 0.70 | | 0.57 | 0.999 |
| [S ii] 673 nm | 5$\times 10^{-6}$ | | 0.013 | 0.018 |
| [S iii] 631 nm | 0.0002 | | 0.009 | 0.01 |
| [Cl iii] 551 nm | 0.007 | | 0.081 | 0.0036 |
| [Ar iv] 471 nm | 0.047 | | 0.031 | 0.027 |

$^{a}$ dereddened line intensities from Górny et al. (2009); $^{b}$ from
Gesicki et al. (2014); $^{c}$ from Gesicki et al. (2016).
With the adopted parameters, the resulting best-fitting model produces the
image in the H$\alpha$ line in the plane of the sky presented in the right
panel of Fig. 4, compared with the HST image in the left panel. Both images
are rendered as an inverted colour map to better expose the faint structures,
with contours added to guide the eye. The fit is not perfect and the model is
not unambiguous; however, it explains all the different observed
characteristics and is as simple as possible.
### 3.2 Velocity field
The ARGUS/IFU spectra allow for a much better velocity reconstruction than
previously possible. However, the abundance of data required some ingenuity to
present the 2D spectra clearly and to compare observations with models. Fig. 5
shows the presentation we designed.
The left panel shows the observations, and the right panel the model results.
The panels cover a fragment of the sky plane; the corresponding HST image
(Fig. 1) is overlaid as thin gray contours, and the coordinates are expressed
in units of $10^{17}$ cm.
Both panels are divided into $10\times 10$ small squares representing the IFU
pixels. Each square pixel shows the detailed emission profile of the H$\alpha$
line, with the horizontal axis expressed in km s-1 over the range $\pm$75 km
s-1 and the vertical axis extending from 0 to 1, normalized to the maximum
value of the whole image. To emphasize the value and direction of the velocity
we applied colour coding: as usual, red means moving away from the observer
(red-shifted) and blue moving towards the observer; the higher the velocity,
the more intense the colour, and gray stands for zero. We plotted the emission
in the form of bar plots to show the colours better. The width of each bar is
10 km s-1, and the gray bar is centred on zero velocity.
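A schematic reimplementation of this figure design (our sketch, not the
authors' plotting script); `profiles` is assumed to hold the $10\times 10$
grid of spectra on a velocity grid sampled more finely than the 10 km s-1
bars, so that every bar contains at least one sample:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_profile_grid(profiles, v_grid, vmax=75.0, width=10.0):
    """Grid of per-pixel profiles drawn as colour-coded velocity bars.

    profiles: array (10, 10, n_v); v_grid: velocities in km/s.
    """
    norm = profiles.max()
    edges = np.arange(-vmax, vmax + width, width)  # gray bar centred on 0
    centers = 0.5 * (edges[:-1] + edges[1:])
    fig, axes = plt.subplots(10, 10, figsize=(10, 10),
                             sharex=True, sharey=True)
    for iy in range(10):
        for ix in range(10):
            ax = axes[iy, ix]
            for lo, hi, v in zip(edges[:-1], edges[1:], centers):
                sel = (v_grid >= lo) & (v_grid < hi)
                h = profiles[iy, ix, sel].mean() / norm
                s = abs(v) / vmax              # colour intensity ~ |v|
                if v > 0:                       # red-shifted
                    color = (1.0, 1.0 - s, 1.0 - s)
                elif v < 0:                     # blue-shifted
                    color = (1.0 - s, 1.0 - s, 1.0)
                else:                           # zero-velocity bar
                    color = (0.5, 0.5, 0.5)
                ax.bar(v, h, width=width, color=color)
            ax.set_xlim(-vmax, vmax)
            ax.set_ylim(0, 1)
            ax.set_xticks([])
            ax.set_yticks([])
    return fig
```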
In the observational panel on the left we cut off the noise at the 5% level.
In the model panel on the right we added a correction for the seeing. The
seeing was below the requested 0.9 arcsec for most of the observations, except
for the He ii 468 nm line. Inspecting the modelled emissions integrated over
the exact pixel size, it became obvious that we should allow for additional
emission from some area around the given pixel, to mimic the actual seeing. We
found a satisfactory fit (comparable to the observations) when the integration
for a given pixel was extended by one pixel width (0.52 arcsec) around it.
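In code, this pixel extension is equivalent to a $3\times 3$ box average over
the model's sky plane before rebinning to IFU pixels (a sketch under that
interpretation):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mimic_seeing(model_cube: np.ndarray) -> np.ndarray:
    """Approximate the seeing by letting each 0.52 arcsec pixel also
    collect emission from a one-pixel-wide border: a 3x3 box average in
    the sky plane (axes 1, 2) of a cube indexed (wavelength, y, x)."""
    return uniform_filter(model_cube, size=(1, 3, 3))
```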
In Fig. 6 the emission line [N ii] 658 nm is presented in the same setup as in
Fig. 5. Already at first glance we see that red dominates the upper half of
the images and blue the lower half, while the middle row is approximately
symmetric. We evidently see an elongated structure: if we assume only
expansion away from the central source, then the upper part is directed away
from us while the lower part is expanding towards us. This also clarifies that
the polar axis corresponds to the vertical direction in the image, something
that is not obvious from the images alone.
In Fig. 7 the emission line He ii 468 nm is presented in the same setup as in
Fig. 5. Although of lower quality, this line is unsplit and limited to the
central pixels, in contrast to the [N ii] emission.
Figure 5: The detailed emission profiles of the H$\alpha$ line shown for each
of the IFU pixels. The left panel shows the observed data, the right panel the
corresponding calculated model emissions. To emphasize the value and direction
of the velocity we applied colour coding, where red means gas moving away from
the observer, blue means gas moving towards the observer, the higher the
velocity the more intense the colour, and gray is for zero. We plotted the
emission in the form of bar plots; the width of each bar is 10 km s-1, and the
height (intensity) is normalized to the maximum value over the whole image.
The horizontal axis of each box extends over $\pm$75 km s-1, the vertical axis
from zero to unity.
Figure 6: The detailed emission profiles of the [N ii] 658 nm line shown for
each of the IFU pixels. The presentation is the same as in Fig. 5.
Figure 7: The detailed emission profiles of the He ii 468 nm line shown for
each of the IFU pixels. The presentation is the same as in Fig. 5.
The left panels of Figs. 5–7 show clearly that no extraordinary kinematics is
detected within H 2-18. The velocities are within normal ranges for PNe, and
there is no evidence for a fast-moving and/or collimated jet.
The data allow us to improve on the velocity model of Gesicki et al. (2016),
in which the velocity increased linearly with radial distance. Now we have
spectral coverage over the whole nebula. Interestingly, the velocities of the
different substructures identified previously, as seen in the H$\alpha$ line,
are similar to each other. This suggests an isotropic velocity field. Indeed,
attempts to apply different velocities to the different substructures failed
to improve the fit, in agreement with this. (There may be differences at
smaller scales: we are limited by the spatial resolution of the IFU data
cube.)
A radial gradient of the expansion is clearly present, because the high-
excitation He ii 468 nm line shows the lowest velocity while the low-
excitation [N ii] 658 nm line has the highest. A very simple relation of
velocity increasing linearly with distance (homologous expansion) was not
satisfactory, because it makes the [N ii] 658 nm line too broad in comparison
with observations. The region of [N ii] line formation should instead have a
roughly constant velocity. We adopted a velocity monotonically increasing with
radial distance, starting from 10 km s$^{-1}$ at the inner edge and reaching an
asymptotic value of 60 km s$^{-1}$ near the ends of the bar-like structure. In
this way we obtained a remarkable fit to the broad, split, and asymmetrical
H$\alpha$ emission and simultaneously to the narrow, fast, and oppositely
directed [N ii] emissions and the spatially unresolved, narrow, unsplit
He ii 468 nm line, as indicated by the high spectral resolution of the IFU data.
## 4 Discussion
### 4.1 New details and improved parameters
The collected IFU spectra provided new information about the Galactic Bulge
nebula H 2-18. Although of lower spatial resolution than HST, they revealed a
feature proposed to be a bar-like structure embedded inside the main object
and co-axial with it. We call it a ‘bar’ rather than a ‘jet’ because its
relatively low velocity is similar to that of the surrounding broader
cylinder. The adjoining single-sided plume of ejected gas, inclined to the
main axis, was seen earlier in the HST images. Its kinematics is also not
different from that of the rest of the nebula. The ear-like extensions seen in
the HST image are now simply explained as part of the outer equatorial torus.
The photoionization modelling in 3D allowed for improved estimation of the
stellar and nebular parameters. The higher $T_{\mathrm{bb}}$ now better
reproduces the line ratios; the advantage of a 3D density structure is obvious
in this context. The $T_{\mathrm{bb}}$ value indicates a not very old object,
likely midway on its path towards the white dwarf cooling track. The size of
this PN agrees with that: the kinematic age (derived from the mass-averaged
velocity of 43 km s$^{-1}$, the lobe length of 0.15 pc, and the corrected
formula of Gesicki et al. (2014)) is 2400 yr.
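As a back-of-envelope check (ours, not from the paper), the naive crossing time implied by the quoted numbers can be computed directly; the corrected formula of Gesicki et al. (2014) then corresponds to a factor of roughly 0.7 on this estimate:

```python
# Sanity check of the quoted kinematic age; constants are standard values
# and the comparison is ours, not taken from the paper.
PC_KM = 3.0857e13      # kilometres per parsec
YR_S = 3.156e7         # seconds per year

v = 43.0               # mass-averaged expansion velocity [km/s] (from text)
L = 0.15 * PC_KM       # lobe length [km] (from text)

t_cross = L / v / YR_S
print(f"naive crossing time: {t_cross:.0f} yr")        # ~3400 yr
print(f"implied correction:  {2400.0 / t_cross:.2f}")  # ~0.7 -> 2400 yr
```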
### 4.2 Co-axial structures
Montoro-Molina et al. (2024) analysed integral field spectroscopy of the PN
A 58 and derived a structure of wide bipolar outflows with an embedded narrow
co-axial collimated stream. Their kinematics (bipolar outflows at 280 km
s$^{-1}$) is different from ours, but the morphology looks nearly the same
(certainly excluding the plume). A common-envelope influence is suggested.
Their analyses were performed with the well-known tool SHAPE
(https://wsteffen75.wixsite.com/website), which is focused on morphology
and does not consider the photoionization. A thin bar-like structure in the
polar direction is also seen in the JWST image of the planetary nebula NGC
3132 (De Marco et al., 2022), shown in Fig. 8. The equatorial ring is seen in
absorption against the bar, which indicates that the bar is located inside the
nebula. Both cases show that the co-axial inner and outer nebula in H 2-18 is
not a unique phenomenon. (A thin bar-like feature in JWST images of NGC 6720
(Wesson et al., 2024) is a different structure, as it is mainly seen as a gap
and is located in the equatorial direction: it is interpreted as the inner
edges of a wide bipolar flow.)
Figure 8: JWST image of NGC 3132 in the filter NIRcam F187N dominated by
Pa$\alpha$ emission, showing a bar-like, polar structure with some similarity
to that in H 2-18. Image from De Marco et al. (2022).
Comparison with published hydrodynamical calculations is hampered by the fact
that very few of them follow the common-envelope evolution outcome out to
planetary-nebula scales and render it as images (most are limited to the
solar-radius or a.u. scale and usually present intricately shaped density
cross-sections). García-Segura et al. (2018) present a common-envelope model
that is extended into the PN phase. It assumes a pre-existing equatorial
density enhancement into which the common-envelope ejecta expand. It is
restricted to axial symmetry with numerous simplifications, and is focused on
explaining bipolar shapes while skipping kinematics. Their Fig. 8 shows two
examples of emission measures (a proxy for images) for complicated elongated
nebulae tilted by 40 deg which show structures similar to those in H 2-18: the
outer barrel shape, central ellipsoidal enhancements, and symmetric extensions
(ears). One of their models shows blobs of gas piled up at the symmetry axis
which look similar to the symmetric [N ii] emissions found by us.
Interestingly, two separate mass-ejection events are not required for the
formation of both inner and outer lobes. A follow-up study (García-Segura et
al., 2022) concluded that lower-mass stars will produce more elongated PNe and
that the observed jets in PNe must be remnants of early phases. Both
conclusions might be applicable to our case.
### 4.3 Low velocity outflow
The spherical models of Gesicki et al. (2014) commonly revealed a velocity
increase towards the outer nebular edge. This is in agreement with spherically
symmetric hydrodynamical modelling of Schönberner et al. (2005). Here we
obtained a much more complex 3D structure, still with a similar velocity
field. We can speculate that the same acceleration mechanism known from
hydrodynamical models is acting here: the slowly spreading gas is accelerated
and heated by the increasing ionizing radiation of the gradually heating-up
central star.
One of the conclusions of García-Segura et al. (2018) was that the inner
nebulae can be ablated and photoevaporated from the excretion disk. One can
expect that such a process can result in a mild acceleration of the PN. A disk
wind, caused by the evaporation of the photoionized gas, was our
interpretation of the low-velocity outflows in PN M 2-29 (Gesicki et al.,
2010). Recently Icke (2022) analysed a circumstellar ring irradiated by the
central star and found that star-driven evaporation would produce a
cylindrically collimated outflow. The simple model was intended to reproduce
the very elongated PN Hen 3-401. Interestingly, the cylindrical outflow
appears to be relatively slow (below 10 km s$^{-1}$ near the irradiated disk)
and presumably weakly accelerated along the axis. In our Fig. 1 we do not see
any obvious disk comparable to the one seen in Hen 3-401; nevertheless, some
similarities exist. Our model of H 2-18 (shown in Fig. 3) has a length
comparable to Hen 3-401 and approximately twice the width (Icke, 2022). The
structure requires a higher density in the equatorial region, which can be
treated as an approximation to a true equatorial disk or an equatorial density
enhancement. Therefore, photo-evaporation from this region with further
photo-acceleration is a plausible mechanism for shaping H 2-18.
These models are not perfect matches for H 2-18: the lobes in Icke (2022) have
low mass, while for the models of García-Segura et al. (2018) most of the
common envelope mass does not reach escape velocities and remains bound. The
latter model also does not present the kinematics.
Circumbinary disks in PNe have low mass (e.g., De Marco et al., 2022) and may
have difficulty explaining the dense cylindrical structure in H 2-18. A
larger-sized equatorial density enhancement may however play a similar role.
### 4.4 Jets or not
Akashi & Soker (2021) propose that structures such as the ‘ears’ in H 2-18
form through polar jets, while in García-Segura et al. (2018) they form by
equatorial gravitational focusing. In the case of H 2-18, the ‘ears’ are
presumably located in the equatorial plane, so the second mechanism is more
likely.
The cylindrical polar structure is more ambiguous. In García-Segura et al.
(2018), it forms through expansion of the common envelope ejecta into a pre-
existing equator-to-pole density gradient. Icke (2022) shows that the
cylindrical nebula of Hen 3-401 can originate from an evaporating disk (or
torus?), without requiring a jet. In contrast, Soker (2002) shows that
cylindrical structures in PNe can form through refocused jets. The paper uses
a jet velocity of 500 km s-1, as could come from a main sequence companion.
Such velocities are not seen in H 2-18, nor is a jet bow shock seen, but the
structure could have formed from a previous jet which is now extinct.
### 4.5 Binary central star
There is no direct evidence for a close binary star at the centre of H 2-18;
nevertheless, the axial symmetry of the dominant structures suggests the
possibility of a binary system shaping this PN. All the discussed
hydrodynamical models require binarity. However, none appear to depend on
common-envelope evolution. A wider binary interaction which avoids a common
envelope can still produce an equatorial density enhancement, which may
suffice. A binary (multiple?) system with a disk can be well hidden in the
central unresolved dense region.
### 4.6 Asymmetric plume
The one-sided plume-like structure can be compared with the one observed in
the PN M 2-29 (Gesicki et al., 2010); however, in H 2-18 it expands a little
faster, is seen at a different angle, and is located inside the main nebula,
although protruding a little out of it. Its flat structure with a nearly
constant density was proposed on the basis of the rather uniform intensity
seen in the H$\alpha$ image; no plowing action of the ionization front is
seen. The mass and brightness of the plume are comparable to those of the bar,
which makes it difficult to disentangle the two components. This simple model
does not reproduce all details, so the true shape may be more complicated. The
models of García-Segura et al. (2018) show structures with a rough similarity
to the plume, although with a point-symmetric multiplicity that is lacking in
H 2-18.
The mass loss shaping of H 2-18 has in any case been a complex process. The
high density of the inner bar or cylinder, the plume and the lack of a
detected jet put significant constraints on its evolution.
###### Acknowledgements.
We thank the referee, Vincent Icke, for friendly and helpful comments. Help
from Roger Wesson with installing and running the MOCASSIN code is gratefully
acknowledged. We acknowledge financial support from the Nicolaus Copernicus
University through the University Centre of Excellence ‘Astrophysics and
astrochemistry’. Part of this work was supported by STFC through grant
ST/X001229/1. AAZ also acknowledges support from the Royal Society through
grant IES/R3/233287 and the University of Macquarie. This work made use of
Astropy (http://www.astropy.org), a community-developed core Python package
and an ecosystem of tools and resources for astronomy (Astropy Collaboration
et al., 2022).
## References
* Akashi & Soker (2021) Akashi, M. & Soker, N. 2021, ApJ, 913, 91
* Astropy Collaboration et al. (2022) Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022, ApJ, 935, 167
* Chiappini et al. (2009) Chiappini, C., Górny, S. K., Stasińska, G., & Barbuy, B. 2009, A&A, 494, 591
* De Marco et al. (2022) De Marco, O., Akashi, M., Akras, S., et al. 2022, Nature Astronomy, 6, 1421
* Ercolano et al. (2003) Ercolano, B., Barlow, M. J., Storey, P. J., & Liu, X. W. 2003, MNRAS, 340, 1136
* García-Segura et al. (2018) García-Segura, G., Ricker, P. M., & Taam, R. E. 2018, ApJ, 860, 19
* García-Segura et al. (2022) García-Segura, G., Taam, R. E., & Ricker, P. M. 2022, MNRAS, 517, 3822
* Gesicki & Zijlstra (2000) Gesicki, K. & Zijlstra, A. A. 2000, A&A, 358, 1058
* Gesicki et al. (2006) Gesicki, K., Zijlstra, A. A., Acker, A., et al. 2006, A&A, 451, 925
* Gesicki et al. (2014) Gesicki, K., Zijlstra, A. A., Hajduk, M., & Szyszka, C. 2014, A&A, 566, A48
* Gesicki et al. (2016) Gesicki, K., Zijlstra, A. A., & Morisset, C. 2016, A&A, 585, A69
* Gesicki et al. (2010) Gesicki, K., Zijlstra, A. A., Szyszka, C., et al. 2010, A&A, 514, A54
* Górny et al. (2009) Górny, S. K., Chiappini, C., Stasińska, G., & Cuisinier, F. 2009, A&A, 500, 1089
* Icke (2022) Icke, V. 2022, Galaxies, 10, 53
* Montoro-Molina et al. (2024) Montoro-Molina, B., Tafoya, D., Guerrero, M. A., Toalá, J. A., & Santamaría, E. 2024, A&A, 684, A107
* Schönberner et al. (2005) Schönberner, D., Jacob, R., Steffen, M., et al. 2005, A&A, 431, 963
* Soker (2002) Soker, N. 2002, ApJ, 568, 726
* Ventura et al. (2017) Ventura, P., Stanghellini, L., Dell’Agli, F., & García-Hernández, D. A. 2017, MNRAS, 471, 4648
* Wesson et al. (2024) Wesson, R., Matsuura, M., Zijlstra, A. A., et al. 2024, MNRAS, 528, 3392
# Layered Decoding of Quantum LDPC Codes
Julien Du Crest (Univ. Grenoble Alpes, Grenoble INP, LIG, F-38000 Grenoble,
France; [email protected]), Francisco Garcia-Herrero (Department of
Computer Architecture and Automatics (DACYA), Complutense University of
Madrid, Spain;<EMAIL_ADDRESS>Mehdi Mhalla (Univ. Grenoble Alpes, CNRS,
Grenoble INP, LIG, F-38000 Grenoble, France; [email protected]),
Valentin Savin (Univ. Grenoble Alpes, CEA-Léti, F-38054 Grenoble, France;
<EMAIL_ADDRESS>and Javier Valls (Instituto de Telecomunicaciones y
Aplicaciones Multimedia, Universitat Politecnica de Valencia, 46022 Valencia,
Spain;<EMAIL_ADDRESS>
###### Abstract
We address the problem of message-passing-based decoding of quantum LDPC codes
under hardware latency limitations. We propose a novel way to perform layered
decoding that suits quantum constraints and outperforms flooded scheduling,
the usual scheduling on parallel architectures. A generic construction is
given for layers of hypergraph product codes. In the process, we introduce two
new notions: $t$-covering layers, which generalize the usual layer
decomposition, and a modified scheduling called random-order scheduling.
Numerical simulations show that the random ordering is of independent
interest, as it helps relieve the high error floor typical of message-passing
decoders on quantum codes, for both layered and serial decoding, without the
need for post-processing.
## 1 Introduction
A lot of work has been done in order to improve the decoding of quantum low-
density parity-check (qLDPC) codes using message-passing (MP) decoders. Most
of these works rely on the use of post-processing techniques [1, 2, 3], whose
feasibility is still to be demonstrated on actual hardware, due to the
stringent latency, power and scalability requirements of the quantum system. A
key attribute of MP decoding is the underlying scheduling, indicating the
order in which variable and check node messages are updated. This has been
subject to extensive research in the classical LDPC decoding literature, and
it has been shown that the MP scheduling may significantly impact the
convergence speed [4], the decoding performance (_e.g._ , in case of adaptive
scheduling strategies [5, 6, 7]), or the performance (_e.g._ , latency, area,
power-consumption) of the hardware design [8]. The vast majority of hardware
designs are based on partly-parallel architectures, implementing a layered
decoding scheduling, which can be considered as a de facto standard solution,
able to provide relevant complexity and performance advantages in most
applications [8].
For qLDPC codes, the MP decoding performance may depend even more on the
underlying scheduling, which can be most likely attributed to the code
degeneracy [2]. Moreover, some post-processing techniques may be highly
dependent on the MP decoding scheduling. For instance, the order statistics
decoding post-processing has been shown to provide very good performance when
a layered scheduling is used, but its performance may be drastically degraded
using a flooded (_i.e._ , fully parallel) scheduling [2].
To design an efficient partly parallel architecture implementing a layered
scheduling, one needs a layer decomposition of the parity check matrix. For
qLDPC codes this may be tricky, as they do not have an innate decomposition
into horizontal layers (as for instance in the case of classical quasi-cyclic
LDPC codes). To ensure a high degree of parallelism, it is also desirable to
have a decomposition into a minimal number of layers. In this paper, we first
give a generic construction of a minimal layer decomposition for hypergraph
product codes. Moreover, in an attempt to start bridging the gap between
hardware limitations and state of the art MP decoders, we propose two new
tools to implement layered decoding of qLDPC codes. The first is a
generalization of the notion of layer decomposition, consisting of a family of
$t$-covering layers, which can be seen as a layer decomposition of $t$
decoding iterations, and is aimed at increasing the parallelism degree of the
layered architecture. The second is a new scheduling called random order
scheduling, and is shown to significantly improve the decoding performance.
Our numerical simulations provide evidence that both could be used in the
future to meet hardware needs as they offer a good compromise of speed and
performance.
## 2 Preliminaries
### 2.1 Quantum Codes
Calderbank-Shor-Steane (CSS) codes are defined by two classical
$(m_{X}/m_{Z},n)$-parity check matrices $H_{X},H_{Z}$ satisfying
$H_{X}H_{Z}^{t}=0$. The dimension of the quantum code is
$n-\text{rank}(H_{X})-\text{rank}(H_{Z})$ and its minimum distance is
$d=\min\{|v|:v\in(\ker H_{X}\setminus\operatorname{im}H_{Z}^{t})\cup(\ker
H_{Z}\setminus\operatorname{im}H_{X}^{t})\}$. One such class of CSS codes are
the hypergraph product codes (HPC), which, given two classical parity check
matrices $A$ and $B$, yield the quantum code with $H_{X}=[A\otimes I,I\otimes
B^{t}]$ and $H_{Z}=[I\otimes B,A^{t}\otimes I]$ (see [9] for the construction
and parameters of the code).
In the following, we will only focus on decoding $Z$ errors using $H_{X}$. All
proofs are easily adapted to correcting $X$ errors.
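As a minimal illustration (our sketch, not code from the paper), the hypergraph product matrices can be assembled with Kronecker products over GF(2) and the CSS condition verified numerically; the repetition-code matrix in the example is an arbitrary toy choice:

```python
import numpy as np

def hypergraph_product(A, B):
    """H_X = [A (x) I, I (x) B^t], H_Z = [I (x) B, A^t (x) I] over GF(2)."""
    mA, nA = A.shape
    mB, nB = B.shape
    HX = np.hstack([np.kron(A, np.eye(nB, dtype=int)),
                    np.kron(np.eye(mA, dtype=int), B.T)]) % 2
    HZ = np.hstack([np.kron(np.eye(nA, dtype=int), B),
                    np.kron(A.T, np.eye(mB, dtype=int))]) % 2
    assert not ((HX @ HZ.T) % 2).any()        # CSS condition H_X H_Z^t = 0
    return HX, HZ

# toy example: 3-bit repetition-code parity checks for both A and B
A = np.array([[1, 1, 0],
              [0, 1, 1]])
HX, HZ = hypergraph_product(A, A)
print(HX.shape, HZ.shape)                     # (6, 13) (6, 13)
```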
### 2.2 MP decoding
MP decoders work by exchanging soft information between check and variable
nodes on the Tanner graph representation of the code, trying to converge to a
hard decision on the variable nodes that satisfy the syndrome. One crucial
factor is to decide in which order messages are exchanged and soft information
updated. There are three main decoding scheduling used classically: flooded,
in which messages are exchanged simultaneously and soft information updated in
parallel, serial, where the graph is updated sequentially going through all
the checks one by one, and layered, which lies in between, taking advantage of
checks that have a disjoint support to update them in parallel, essentially
doing a speed up of serial scheduling at no cost. For more details on
classical message passing, refer to [10].
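Schematically, the three schedulings differ only in the order in which the same check-node update is applied; in the sketch below (ours), `update_check` is a hypothetical placeholder for one check-node update followed by the update of its neighbouring variable nodes, not a real API:

```python
# Schematic only: the schedules differ in when refreshed messages become
# visible to subsequent check updates within one iteration.

def flooded_iteration(checks, update_check):
    # every check reads the messages from the previous iteration;
    # all updates can run in parallel
    for c in checks:
        update_check(c, use_fresh_messages=False)

def serial_iteration(checks, update_check):
    # each check immediately sees the messages refreshed by earlier checks
    for c in checks:
        update_check(c, use_fresh_messages=True)

def layered_iteration(layers, update_check):
    # within a layer, checks have disjoint supports, so they can run in
    # parallel; across layers, updates propagate as in serial scheduling
    for layer in layers:
        for c in layer:
            update_check(c, use_fresh_messages=True)
```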
### 2.3 Layered Scheduling
To avoid memory conflicts in a partly parallel architecture, implementing a
layered scheduling, the same memory slot should not be read/written to by two
different processing units at the same time. This motivates the following
definitions.
###### Definition 1.
A layer is a collection of check-nodes such that any two check-nodes have no
neighbouring variable-node in common.
###### Definition 2.
A layer decomposition $L_{0}\sqcup\dots\sqcup L_{k-1}$ is a partition of the
set of check-nodes into $k$ layers.
###### Definition 3.
A decomposition is said to be minimal if it is impossible to find a
decomposition in less layers.
A simple density argument is enough to state the following fundamental
inequality:
###### Lemma 1.
Any $k$-layer decomposition of a $(\cdot,\delta)$-regular (a matrix is said to
be $(c,d)$-regular if every column is of weight $c$ and every row of weight
$d$) $(m,n)$-parity-check matrix satisfies $k\geq\frac{\delta m}{n}$; indeed,
each layer contains at most $n/\delta$ checks, since its checks have pairwise
disjoint supports of size $\delta$.
###### Definition 4.
A decomposition is $\gamma$-balanced if
$\frac{|L_{i}|}{|L_{j}|}\leq\gamma,\quad\forall L_{i},L_{j}$
A decomposition will be said to be balanced if it is 1-balanced. Balanced
decompositions ensure an efficient use of hardware resources (check-node
processing units).
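A valid (though not necessarily minimal or balanced) layer decomposition can be obtained greedily by first-fit colouring of the check conflict graph, as in this sketch of ours:

```python
import numpy as np

def greedy_layers(H):
    """First-fit layer decomposition of the checks (rows) of H."""
    supports = [set(np.flatnonzero(row)) for row in H]
    layers, layer_vars = [], []       # layer_vars[i]: variables used by layer i
    for c, sup in enumerate(supports):
        for i, used in enumerate(layer_vars):
            if used.isdisjoint(sup):  # Definition 1: no shared variable node
                layers[i].append(c)
                used |= sup
                break
        else:
            layers.append([c])
            layer_vars.append(set(sup))
    return layers
```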
## 3 Hardware Requirements
In contrast to classical LDPC decoders, which prioritize optimizing
throughput, qLDPC ones must satisfy highly constrained values of latency to
avoid the backlog problem [11], which would lead to an exponential slowdown of
the quantum processor making the QEC implementation impractical.
Table 1: Latency approximation for the different architectures.
Parallel | Serial | Layered
---|---|---
$T_{min}^{(P)}\times 2\times it_{max}$ | $T_{min}^{(S)}\times(it_{max}/2)\times m$ | $T_{min}^{(L)}\times 2\times(it_{max}/2)\times k$
To illustrate the behavior of the decoder, the B1 code from [1] is taken as an
example (defined in Section 4). An MP decoder for the B1 code can achieve,
with the fully parallel architecture, a clock period between 8 and 10 ns (all
figures reported here come from our implementation of either a fully parallel
[12] or a serial min-sum decoder architecture, with exchanged messages
quantized on 6 bits, on a Xilinx FPGA xcv095 board), which gives a latency
between 480 ns and 600 ns at 30 iterations, close to the budget of the most
constrained technology. With the serial architecture, the clock period can
usually be reduced to 70% to 80% of the clock period obtained with the
parallel version. The issue is that with this schedule and the derived
architecture only one check node is updated per clock cycle, because of the
sequential update of the messages. Due to this, at least $m$ clock cycles are
required to complete just one iteration of the MP algorithm (assuming that,
due to the reduced complexity of the units, both the check node and the
connected variable nodes can be updated in parallel). Following the example of
code B1, the clock period would be between 5.6 and 7 ns, but the total latency
at 30 iterations would be between 74.26 $\mu$s and 92.82 $\mu$s. Even assuming
a reduction of the number of iterations to obtain performance similar to the
flooded schedule, this range of latencies is out of the time budget of
superconducting qubits and transmons. To meet the timing requirements, the
clock period would have to be $\frac{5.6}{m/2}$ = 0.025 ns, i.e., a clock
frequency of 40 GHz. This frequency cannot be achieved by any FPGA or ASIC
and, moreover, would require a large power consumption, causing further
problems with the power budget and the refrigeration system [13]. From these
examples it is easy to conclude that serial scheduling, even though it
performs better than the flooded one, is not a realistic option for
implementation. A trade-off between serial and flooded may be the layered
schedule. If the number of layers is small enough, the number of clock cycles
per iteration does not grow too much, and the number of iterations can usually
be halved. Going back to the B1 code example, assuming that in the worst case
the clock period is similar to that of the parallel architecture, and with a
fractional layer number of 3.5 (see Section 4 for the formal definition), the
total latency for 30 iterations is between $8\times 3.5\times 2\times
30=1.68~\mu$s and $10\times 3.5\times 2\times 30=2.10~\mu$s, and between
$8\times 3.5\times 2\times 15=840$ ns and $10\times 3.5\times 2\times
15=1.05~\mu$s with 15 iterations, which is fairly close to the constraints of
superconducting qubits and meets the requirements of other technologies.
As we will see in the following sections, the layered schedule will also
benefit from some non-negligible performance improvements, apart from the
reduction in the number of iterations, compared to the flooded schedule.
In Table 1, we summarize the total latency for the different architectures,
where $T_{min}^{(P)}$, $T_{min}^{(S)}$ and $T_{min}^{(L)}$ are the minimum
clock periods achievable by the parallel, serial and layered architectures,
respectively, and $it_{max}$ is the maximum number of iterations configured in
the parallel decoder. Note that we assume that the number of iterations of the
serial and layered schedules is usually half the number of iterations of the
parallel architecture [4].
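The formulas of Table 1 are straightforward to evaluate; the sketch below (ours) reproduces the B1 latency figures quoted above for the parallel and layered cases (clock periods in ns):

```python
# Latency estimates following Table 1 (times in ns).
def latency_parallel(T, it_max):
    return T * 2 * it_max

def latency_serial(T, it_max, m):
    return T * (it_max / 2) * m

def latency_layered(T, iters, k_over_t):
    # `iters` counts full layered iterations; each costs 2*(k/t) layer passes
    return T * 2 * iters * k_over_t

# B1, flooded/parallel, 30 iterations: 480--600 ns
print(latency_parallel(8, 30), latency_parallel(10, 30))
# B1, layered with k/t = 3.5: 1680--2100 ns at 30 iterations,
# 840--1050 ns at 15 iterations
print(latency_layered(8, 30, 3.5), latency_layered(10, 30, 3.5))
print(latency_layered(8, 15, 3.5), latency_layered(10, 15, 3.5))
```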
## 4 Generic Constructions
### 4.1 Layered Construction for Hypergraph Product Codes
Consider a hypergraph product code defined by two matrices $A$ and $B$, such
that $H_{X}=[A\otimes I,I\otimes B^{t}]$ and $H_{Z}=[I\otimes B,A^{t}\otimes
I]$ [9]. There exists a layer construction from layer decompositions of
$A,B,A^{t},B^{t}$.
###### Theorem 1.
Given minimal decompositions $A=A_{0}\sqcup\dots\sqcup A_{k_{A}-1}$ and
$B^{t}=B^{t}_{0}\sqcup\dots\sqcup B^{t}_{k_{B^{t}}-1}$, one can construct a
minimal decomposition of $H_{X}$ in $k=\max(k_{A},k_{B^{t}})$ layers.
A similar theorem can be stated for $A^{t},B$ and $H_{Z}$ with the same proof
techniques.
#### Construction
If $k_{A}\neq k_{B^{t}}$, without loss of generality suppose
$k_{A}<k_{B^{t}}$. The first step is to add empty layers to $A$ so that
$k^{\prime}_{A}=k^{\prime}_{B^{t}}=k$; that is, let $A=A_{0}\sqcup\dots\sqcup
A_{k_{A}-1}\sqcup A_{k_{A}}\sqcup\dots\sqcup A_{k-1}$ where
$A_{k_{A}}=\dots=A_{k-1}=\emptyset$. Let us label each row of $H_{X}$ as
$a\varoast b:=[a\otimes e_{b},e_{a}\otimes b],\quad
a\in\text{rows}(A),b\in\text{rows}(B^{t})$
In the following we will denote by left the sub-matrix $[A\otimes I]$ and
right the sub-matrix $[I\otimes B^{t}]$. Create layers $L_{0}\dots L_{k-1}$
such that
$(a\varoast b)\in L_{i}\Leftrightarrow\exists j\quad a\in A_{j},b\in
B^{t}_{j+i\mod k}$ (1)
By definition, all checks belong to some layer; we now have to check that any
two checks in a given layer have disjoint variable-node support. Suppose that
two checks $a\varoast b,a^{\prime}\varoast b^{\prime}$ belong to $L_{i}$.
Case A: $a=a^{\prime}$. They do not touch on the left thanks to the tensor
product with the identity. Furthermore, it means that $b\neq b^{\prime}$, but
then both belong to $B^{t}_{l}$ for some $l$, so they have disjoint support on
the right. Cases B, C: $a\neq a^{\prime}$. They have disjoint support on the
right because of the tensor product with the identity. To show that they do
not intersect on the left, there are two cases. If $a$ and $a^{\prime}$ belong
to the same $A_{l}$ (case B), then by definition they have disjoint support on
the left. If $a$ and $a^{\prime}$ belong respectively to
$A_{l},A_{l^{\prime}}$ with $l\neq l^{\prime}$ (case C), then it means that
$b\in B^{t}_{l+i\mod k},b^{\prime}\in B^{t}_{l^{\prime}+i\mod k}$, two
distinct classes. Hence even though $a$ and $a^{\prime}$ might share variable
nodes in $A$, they do not intersect in the tensored version $A\otimes I$.
Fig. 1 summarizes the three cases.
Case A: $a=a^{\prime}\implies b\neq b^{\prime}$ and $\exists l$ such that
$b,b^{\prime}\in B^{t}_{l}$.
Case B: $a\neq a^{\prime}$ and $a,a^{\prime}\in A_{l}$.
Case C: $a\neq a^{\prime}$, $a\in A_{l}$, $a^{\prime}\in A_{l^{\prime}}$.
Figure 1: Small visualization example of the proof cases where $A=B^{t}$
(matrix illustration omitted; the circled entries mark the checks compared in
each case).
#### Minimality
The proof is by contradiction. Assume that there is a decomposition in fewer
than $k_{B^{t}}$ layers; then one could recover a decomposition of $B^{t}$ in
fewer than $k_{B^{t}}$ layers from the restriction to the positions
$\{a\varoast b,\ \forall b\}$ for any given $a$. Any decomposition in fewer
than $k_{A}$ layers would similarly give a decomposition of $A$ from the
restriction of $H_{X}$ to any $\{a\varoast b,\ \forall a\}$ for a given $b$.
Hence the decomposition in $\max(k_{A},k_{B^{t}})$ layers is minimal for
$H_{X}$.
Note that the construction is not unique; for example, Equation 1 can be
replaced by the following equation, where $\sigma$ is any permutation of
$\{0,\dots,k-1\}$, although this is still not the most generic formula:
$(a\varoast b)\in L_{i}\Leftrightarrow\exists j,\quad a\in A_{\sigma(j)},b\in
B^{t}_{j+i\mod k}$ (2)
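Concretely, Equation 1 assigns check $a\varoast b$ to layer $(l_{B}(b)-l_{A}(a))\bmod k$, where $l_{A}$ and $l_{B}$ are the layer indices of rows $a$ of $A$ and $b$ of $B^{t}$; a minimal sketch (ours) follows, with layers given as lists of row indices and empty layers padding the shorter decomposition implicitly:

```python
# Layer assignment of Eq. (1): check (a, b) goes to layer (l_B(b) - l_A(a)) mod k.
def hx_layers(layers_A, layers_Bt):
    k = max(len(layers_A), len(layers_Bt))   # empty layers pad implicitly
    lA = {a: j for j, layer in enumerate(layers_A) for a in layer}
    lB = {b: j for j, layer in enumerate(layers_Bt) for b in layer}
    L = [[] for _ in range(k)]
    for a, ja in lA.items():
        for b, jb in lB.items():
            L[(jb - ja) % k].append((a, b))
    return L
```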
###### Theorem 2.
Given $k$-layerings for $A$ and $B^{t}$ that are respectively $\alpha$- and
$\beta$-balanced, $H_{X}$ is $\gamma$-balanced with: (i)
$\gamma<\min(\alpha,\beta)$ if $\alpha,\beta>1$; (ii) $\gamma=1$ otherwise.
###### Proof.
(i) Let $a_{0},\dots,a_{k-1}$ be the sizes of the layers
$A_{0},\dots,A_{k-1}$, and $b_{0},\dots,b_{k-1}$ the sizes of the layers
$B^{t}_{0},\dots,B^{t}_{k-1}$. Then each layer $L_{l}$ of $H_{X}$ has size
$\sum a_{i}b_{i+l}$. We also suppose the layers of $A$ and $B^{t}$ are ordered
from biggest to smallest, hence $a_{0}=\alpha a_{k-1}$ and $b_{0}=\beta
b_{k-1}$.
The layer $L_{0}$, of size $\sum a_{i}b_{i}$, is the biggest layer; a
classical proof of this is by contradiction, using the fact that $\forall
a\geq c,b\geq d,\quad ab+cd\geq ad+bc$. The ratio between any other layer
$L_{j}$ and $L_{0}$ is smaller than $\beta$, since $\beta\sum
a_{i}b_{i+j}>\sum a_{i}b_{0}>\sum a_{i}b_{i}$, using the fact that $\forall
b_{j},\ \beta b_{j}\geq b_{0}$ (and similarly for $\alpha$).
(ii) Suppose $A$ is perfectly balanced. In that case, for any $b\in B^{t}$, it
will appear the same number of times in each layer $L_{i}$, since the checks
$a\varoast b,\ \forall a\in A$, are equally balanced among the layers. Hence
the code will be balanced. The same argument holds if $B^{t}$ is perfectly
balanced.
∎
###### Corollary 2.1.
Given a $k_{A}$-layering of $A$ and a $\beta$-balanced $k_{B}$-layering of
$B^{t}$ with $k_{B}>k_{A}$, $H_{X}$ is $\gamma$-balanced with
$\gamma\leq\beta$.
###### Proof.
Same as above, considering a $k_{B}$ layering for $A$ by adding empty layers.
This new layering is $\infty$-balanced. ∎
### 4.2 Random Ordering
We introduce a decoding technique called random ordering. This technique
consists of applying a random order to the layers at each decoding step. It is
also generalized to serial decoding by considering that each check belongs to
its own layer (i.e., $k=m$). This seemingly anodyne step helps to alleviate
the error floor quite dramatically. In addition, further simulations showed
that one does not even have to use a “good” pseudo-random generator for the
permutation: this can be done at virtually no cost using a simple linear
congruential generator, a solution that is hardware friendly.
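To illustrate how cheap this can be, the sketch below (ours) draws the layer order with a Fisher-Yates shuffle fed by a small linear congruential generator; the LCG parameters are illustrative assumptions, not values from the text:

```python
def lcg(state, a=1103515245, c=12345, m=2**31):
    """A simple linear congruential generator (parameters illustrative)."""
    while True:
        state = (a * state + c) % m
        yield state

def shuffled_layers(layers, rng):
    """Fisher-Yates shuffle of the layer order, driven by the LCG."""
    order = list(range(len(layers)))
    for i in range(len(order) - 1, 0, -1):
        j = next(rng) % (i + 1)
        order[i], order[j] = order[j], order[i]
    return [layers[i] for i in order]

# usage: draw a fresh layer order at each decoding iteration
rng = lcg(12345)
# for it in range(n_iterations):
#     for layer in shuffled_layers(layers, rng): ...process layer...
```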
### 4.3 $t$-Covering of Layers
For many codes, the theoretical bound on $k$ given by the density argument is
not tight. However, since for quantum codes the number of layers is
constrained by latency, it is important to stay as close as possible to the
theoretical bound. We introduce a generalization of the layer decomposition
called a $t$-covering of ($k$) layers. We drop the requirement that the layers
should be disjoint, and only require that their union, taken with
multiplicities, covers each check exactly $t$ times. In the following, the
parameters of a $t$-cover will be specified as $(t,k,\gamma)$, giving the
cover parameters and the balance of the layers. Note that when using
$t$-covers, the usual term “iteration” becomes ambiguous, because the decoder
may stop before all the checks have been seen the same number of times. Since,
by pipelining the process, syndrome satisfaction can be checked after each
layer application at very low added latency, in the following we will often
refer to the number of iterated layers (but always specify it when we do so).
To quickly compare a $t$-cover with another or with a layer decomposition, it
is useful to introduce the fractional layer number $\frac{k}{t}$; intuitively,
it captures the “average” number of layers the decoder has to process to see
each check once. Finally, by concatenating $t$ copies of the matrix $H$, it is
clear that the density bound of Lemma 1 applies to the fractional layer
number. As a simple application, for the code B1 given below, we found a
$(2,7,1)$-cover, $\frac{k}{t}=3.5$. We could also find a $(1,4,2)$ layer
decomposition, $\frac{k}{t}=4$, while the density bound gives
$\frac{k}{t}\geq 3$; since no decomposition in $3$ layers is known for the B1
code, our $2$-cover sets a new standard in decoding efficiency.
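A claimed $(t,k,\gamma)$-cover can be validated mechanically; the sketch below (ours) checks the multiplicity-$t$ covering and Definition 1 within each layer, and returns the fractional layer number and balance:

```python
from collections import Counter
import numpy as np

def check_t_cover(H, layers, t):
    """Validate a t-cover of the checks of H; return (k/t, gamma)."""
    counts = Counter(c for layer in layers for c in layer)
    assert all(counts[c] == t for c in range(H.shape[0])), "not a t-cover"
    for layer in layers:
        # within a layer, no variable may be touched by two checks
        assert (H[layer].sum(axis=0) <= 1).all(), "layer violates Def. 1"
    sizes = sorted(len(layer) for layer in layers)
    return len(layers) / t, sizes[-1] / sizes[0]
```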
## 5 Applications on Particular Quantum Codes
### 5.1 C2 Code
The C2 code is a hypergraph product code generated from a single cyclic matrix
($A=B$) with generator polynomial $p(x)=1+x^{2}+x^{5}$ and length $l=31$.
Since this cyclic matrix (and its transpose) admits a decomposition in $5$
layers, using the technique from Theorem 1 we can construct a 5-layer
decomposition for the C2 code. The decomposition used for $A,B,A^{t},B^{t}$ is
a $(1,5,2)$-cover, and it yields a $(1,5,1.1)$-cover for C2; this illustrates
the balancing effect of the procedure, as we go from $\gamma=2$ to
$\gamma=1.1$. The layers used for the quasi-cyclic matrix are:
$A_{0}$: 0 1 7 8 14 15 21 22
$A_{1}$: 2 3 9 10 16 17 23 24
$A_{2}$: 4 11 18 25 29
$A_{3}$: 5 12 19 26 30
$A_{4}$: 6 13 20 27 28
It should be noted that in order to improve the latency (at the cost of a more
complex construction), we were also able to create a $(224,961,1)$-cover of
C2, achieving a fractional layer number of 4.29 and giving close numerical
results.
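The listed layers can be checked directly against the circulant; the sketch below (ours) assumes that row $i$ of the circulant generated by $p(x)=1+x^{2}+x^{5}$ has ones in columns $i$, $i+2$, $i+5$ (mod 31), an assumed convention (the layers remain valid under the opposite orientation of the shifts):

```python
import numpy as np

l, taps = 31, (0, 2, 5)                   # p(x) = 1 + x^2 + x^5, length 31
A = np.zeros((l, l), dtype=int)
for i in range(l):
    for s in taps:
        A[i, (i + s) % l] = 1             # assumed circulant convention

layers = [[0, 1, 7, 8, 14, 15, 21, 22],
          [2, 3, 9, 10, 16, 17, 23, 24],
          [4, 11, 18, 25, 29],
          [5, 12, 19, 26, 30],
          [6, 13, 20, 27, 28]]
for layer in layers:
    assert (A[layer].sum(axis=0) <= 1).all()            # disjoint supports
assert sorted(c for layer in layers for c in layer) == list(range(l))  # partition
```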
### 5.2 B1 Code
Figure 2: Comparison of different decoders and schedulings on the B1 and C2
codes under Z-noise; panels: (a) $B1[[882,24]]$ SP, (b) $C2[[1922,50,16]]$ SP,
(c) $B1[[882,24]]$ NMS, (d) $C2[[1922,50,16]]$ NMS. In the simulations we use
a perturbed NMS, where each check-node message is multiplied by a
normalization factor chosen uniformly at random in {0.875, 0.9275} at each
iteration. This perturbation is important to avoid an error-floor degradation.
The B1 code is a generalized hypergraph product code (construction given in
[1], Appendix); as such, it shares similarities with the hypergraph product
codes. Although we do not have a generic decomposition for this family of
codes, some ideas from the hypergraph product theorem apply when creating a
cover for the B1 code. The B1 code admits a 2-cover in 7 layers, hence giving
a fractional layer number of 3.5. The layers are as follows:
$L_{i}=\{i+7j\quad\forall j\}\cup\{3+i+7j\quad\forall j\}$
This 2-cover comes from a decomposition of the quasi-cyclic matrix defined by
the polynomial $p(x)=1+x+x^{6}$ into 7 layers $S_{0},\dots,S_{6}$ such that
$S_{i}=\{i+7j\quad\forall j\}$, which is an obvious decomposition (albeit not
minimal) given the generating polynomial. These layers have the property that
the union of any two layers $S_{i},S_{i+3\mod 7}$ is still a valid layer,
which gives the basis for the layers of the code B1. In fact, these layers are
extended to the matrices $H_{X},H_{Z}$ much in the fashion of what is done
with the hypergraph product, but with a “twist”: the blocks are quasi-cyclic
shifts of identities instead of all identities, so one has to be more careful
and cannot use the generic formula.
## 6 Numerical Results
Fig. 2 compares the different decoding techniques proposed, on an HGP code and
a hyperbicycle code under Z-noise. We consider Sum-Product (SP) and Normalized
Min-Sum (NMS) decoders, with serial, layered and flooded scheduling [10]. When
decoding classical codes, serial scheduling yields a factor-two improvement in
convergence speed over flooded scheduling. This is not the case in our
numerics on quantum codes, as the serial scheduling suffers from a high error
floor. This error floor can be virtually eliminated by using random-order
scheduling. For the flooded scheduling, the number of iterations used is
$i=128$; for serial, $i=64$. For the layered scheduling of a
$(t,k,\cdot)$-cover, the layer iteration number is $i_{lay}=\lfloor
64\times\frac{k}{t}\rfloor$. Although we argued before that checking the
syndrome after each layer is essentially costless, to make a fairer comparison
with serial scheduling under random-order scheduling we also tried checking
the syndrome only after every $\lceil\frac{k}{t}\rceil$ layer iterations. We
did not include these numerics, as the two curves match almost perfectly,
making this a non-issue.
On the B1 code, because we use a $t$-cover and not a layer decomposition, we
alter the random-order scheduling slightly to boost performance: the
permutation is not chosen uniformly at random, but must satisfy the additional
constraint that two successive layers do not share any check. This constraint
helps the decoder converge faster, as processing the same check twice in a row
in different layers would not change its soft information.
## 7 Conclusion
We showed how to implement a layered scheduling for qLDPC codes that meets
hardware latency limitations. In our numerics, this decoder was more efficient
than what could be achieved using similar resources with flooded scheduling,
which might make it the go-to hardware option in the future. We also showed
that random-order scheduling is of independent interest, as it can be applied
to both serial and layered scheduling to alleviate the high error floor of
some codes without the need for post-processing. It should be noted that,
presently, the best decoders for these codes use some kind of post-processing
after message passing, something that was not studied in this paper, as none
of the known post-processing techniques can meet the hardware latency
constraints. Notably, our serial scheduling with random ordering already
achieves the performance of the Ordered Statistics Decoding (OSD)
post-processing on these codes (see [2, Sec. 4, Fig. 2]; the error probability
should be multiplied by 2/3 to compare the two, since the error model there is
depolarizing noise). Finding such a hardware-friendly post-processing to use
with our layered scheduling would be another step in the direction we are
aiming for.
## Acknowledgement
This work was supported by the QuantERA grant EQUIP (French
ANR-22-QUA2-0005-01, and Spain MCIN/AEI/10.13039/501100011033, grant
PCI2022-132922), and the Plan France 2030 (ANR-22-PETQ-0006) and by the
European Union “NextGenerationEU/PRTR”.
## References
* [1] P. Panteleev and G. Kalachev, “Degenerate quantum LDPC codes with good finite length performance,” _Quantum_ , vol. 5, p. 585, 2021, arXiv:1904.02703.
* [2] J. Du Crest, M. Mhalla, and V. Savin, “Stabilizer inactivation for message-passing decoding of quantum LDPC codes,” in _IEEE Information Theory Workshop (ITW)_. IEEE, 2022, pp. 488–493.
* [3] N. Raveendran and B. Vasić, “Trapping sets of quantum LDPC codes,” _Quantum_ , vol. 5, p. 562, Oct. 2021.
* [4] J. Zhang, Y. Wang, M. P. C. Fossorier, and J. S. Yedidia, “Iterative decoding with replicas,” _IEEE Transactions on Information Theory_ , vol. 53, no. 5, pp. 1644–1663, 2007.
* [5] Y. Mao and A. H. Banihashemi, “Decoding low-density parity-check codes with probabilistic scheduling,” _IEEE Communications Letters_ , vol. 5, no. 10, pp. 414–416, 2001.
* [6] V. Savin, “Iterative LDPC decoding using neighborhood reliabilities,” in _IEEE Int. Symp. on Inf. Theory (ISIT)_ , 2007, pp. 221–225.
* [7] A. I. Vila Casado, M. Griot, and R. D. Wesel, “LDPC decoders with informed dynamic scheduling,” _IEEE Trans. on Communications_ , vol. 58, no. 12, pp. 3470–3479, 2010.
* [8] E. Boutillon and G. Masera, “Hardware design and realization for iteratively decodable codes,” in _Channel coding: Theory, algorithms, and applications_ , D. Declercq, M. Fossorier, and E. Biglieri, Eds. Elsevier, 2014, pp. 583–642.
* [9] J.-P. Tillich and G. Zémor, “Quantum ldpc codes with positive rate and minimum distance proportional to the square root of the blocklength,” _IEEE Transactions on Information Theory_ , vol. 60, no. 2, pp. 1193–1202, 2013, arXiv:0903.0566.
* [10] V. Savin, “LDPC decoders,” in _Channel coding: Theory, algorithms, and applications_ , D. Declercq, M. Fossorier, and E. Biglieri, Eds. Elsevier, 2014, pp. 211–260.
* [11] A. Holmes, M. R. Jokar, G. Pasandi, Y. Ding, M. Pedram, and F. T. Chong, “NISQ+: Boosting Quantum Computing Power by Approximating Quantum Error Correction,” in _Proceedings of the ACM/IEEE 47th Annual International Symposium on Computer Architecture_ , ser. ISCA ’20. IEEE Press, 2020, p. 556–569.
* [12] J. Valls, F. Garcia-Herrero, N. Raveendran, and B. Vasić, “Syndrome-based min-sum vs OSD-0 decoders: FPGA implementation and analysis for quantum LDPC codes,” _IEEE Access_ , vol. 9, pp. 138 734–138 743, 2021.
* [13] Y. Ueno, M. Kondo, M. Tanaka, Y. Suzuki, and Y. Tabuchi, “QECOOL: On-line quantum error correction with a superconducting decoder for surface code,” in _2021 58th ACM/IEEE Design Automation Conference (DAC)_. IEEE, dec 2021.
# Assessment of diffuse-interface methods for compressible
multiphase fluid flows and elastic-plastic deformation in solids
Suhas S. Jain (equal contribution;<EMAIL_ADDRESS>Michael C. Adler
(equal contribution;<EMAIL_ADDRESS>Jacob R. West (equal contribution;
<EMAIL_ADDRESS>Ali Mani, Parviz Moin, Sanjiva K. Lele. Center for
Turbulence Research, Stanford University, California, USA 94305
###### Abstract
This work describes three diffuse-interface methods for the simulation of
immiscible, compressible multiphase fluid flows and elastic-plastic
deformation in solids. The first method is the localized-artificial-
diffusivity approach of Cook (2007), Subramaniam et al. (2018), and Adler and
Lele (2019), in which artificial diffusion terms are added to the individual
phase mass fraction transport equations and are coupled with the other
conservation equations. The second method is the gradient-form approach that
is based on the quasi-conservative method of Shukla et al. (2010), in which
the diffusion and sharpening terms (together called regularization terms) are
added to the individual phase volume fraction transport equations and are
coupled with the other conservation equations (Tiwari et al., 2013). The third
approach is the divergence-form approach that is based on the fully
conservative method of Jain et al. (2020), in which the regularization terms
are added to the individual phase volume fraction transport equations and are
coupled with the other conservation equations. In the present study, all three
diffuse-interface methods are used in conjunction with a four-equation,
multicomponent mixture model, in which pressure and temperature equilibria are
assumed among the various phases.
The primary objective of this work is to compare these three methods in terms
of their ability to: maintain constant interface thickness throughout the
simulation; conserve mass, momentum, and energy; and maintain accurate
interface shape for long-time integration. The second objective of this work
is to consistently extend these methods to model interfaces between solid
materials with strength. To assess and compare the methods, they are used to
simulate a wide variety of problems, including (1) advection of an air bubble
in water, (2) shock interaction with a helium bubble in air, (3) shock
interaction and the collapse of an air bubble in water, and (4)
Richtmyer–Meshkov instability of a copper–aluminum interface. The current work
focuses on comparing these methods in the limit of relatively coarse grid
resolution, which illustrates the true performance of these methods. This is
because it is rarely practical to use hundreds of grid points to resolve a
single bubble or drop in large-scale simulations of engineering interest.
###### keywords:
diffuse-interface method, compressible flows, two-fluid flows, solids,
shock-interface interaction
Journal: Journal of Computational Physics
## 1 Introduction
Compressible multiphase fluid flow and multiphase elastic-plastic deformation
of solid materials with strength are important phenomena in many engineering
applications, including shock compression of condensed matter, detonations and
shock-material-interface interactions, impact welding, high-speed fuel
atomization and combustion, and cavitation and bubble collapse motivated by
both mechanical and biomedical systems. In this work, we are concerned with
the numerical modeling of multiphase systems, i.e., those systems that involve
two or more phases of gas, liquid, or solid in the domain. The numerical
simulation of these multiphase systems presents several new challenges in
addition to those associated with analogous single-phase simulations. These
modeling complications include but are not limited to (1) representing the
phase interface on an Eulerian grid; (2) resolving discontinuities in
quantities at the interface, especially for high-density ratios; (3)
maintaining conservation of (a) the mass of each phase, (b) the mixture
momentum, and (c) the total energy of the system; and (4) achieving an
accurate mixture representation of the interface for maintaining thermodynamic
equilibria. Hence, the numerical modeling of multiphase compressible fluid
flows and deformation of solid materials with strength are still an active
area of research.
With these numerical challenges in mind, we choose to pursue the single-fluid
approach (Kataoka, 1986), in which a single set of equations is solved to
describe all of the phases in the domain, as opposed to a multi-fluid
approach, which requires solving a separate set of equations for each of the
phases. We are presented with various choices in terms of the system of
equations that can be used to represent a compressible multiphase system. In
this work, we employ a multicomponent system of equations (a four-equation
model) that assumes spatially local pressure and temperature equilibria,
including at locations within the diffuse material interface (Shyue, 1998,
Venkateswaran et al., 2002, Marquina and Mulet, 2003, Cook, 2009). Relaxing
the assumption of temperature equilibrium, Allaire et al. (2002) and Kapila et
al. (2001) developed the five-equation model that has proven successful for a
variety of applications with high density ratios, strong compressibility
effects, and phases with disparate equations of state (EOS), and has been
widely adopted for the simulation of compressible two-phase flows (Shukla et
al., 2010, So et al., 2012, Ansari and Daramizadeh, 2013, Shukla, 2014,
Coralic and Colonius, 2014, Tiwari et al., 2013, Perigaud and Saurel, 2005,
Wong and Lele, 2017, Chiapolino et al., 2017, Garrick et al., 2017a, b, Jain
et al., 2018, 2020). Finally, there are six- and seven-equation models that
are more general and include more non-equilibrium effects but are not as
widely used for the simulation of two-phase flows (Yeom and Chang, 2013, Baer
and Nunziato, 1986, Sainsaulieu, 1995, Saurel and Abgrall, 1999).
For representing the interface on an Eulerian grid, we use an interface-
capturing method, as opposed to an interface-tracking method, due to the
natural ability of the former method to simulate dynamic creation of
interfaces and topological changes (Mirjalili et al., 2017). Interface-
capturing methods can be classified into sharp-interface and diffuse-interface
methods. In this work, we choose to use diffuse-interface methods for modeling
the interface between compressible materials (Saurel and Pantano, 2018). This
choice is due to the natural advantages that the diffuse-interface methods
offer over the sharp-interface methods, such as ease of representation of the
interface, low cost, good conservation properties, and parallel scalability.
Historically, diffuse-interface methods for compressible flows involved
modeling the interface implicitly, i.e., with no explicit interface capturing
through regularization/reconstruction. These methods can be classified as
implicit diffuse-interface methods. They assume that the underlying
numerical methods are capable of handling the material interfaces, a concept
similar to implicit large-eddy simulation. One challenge with the implicit
diffuse-interface capturing of material interfaces is that the interface tends
to diffuse over time. Unlike shock waves, in which the convective
characteristics sharpen the shock over time, material interfaces (like contact
discontinuities) do not sharpen naturally; therefore, modeling material
interfaces requires an active balance between interface sharpening and
diffusion to maintain an appropriate interface thickness over time. Therefore,
in the present work, the focus is on explicit diffuse-interface methods. These
methods explicitly model the interface using the interface
regularization/reconstruction techniques.
This paper explores three explicit diffuse-interface methods that are
representative of the different approaches to this problem, each possessing
unique characteristics. The first approach (referred to as the LAD approach)
is based on the localized-artificial-diffusivity (LAD) method (Cook, 2007,
Subramaniam et al., 2018, Adler and Lele, 2019), in which localized, nonlinear
diffusion terms are added to the individual phase mass transport equations and
coupled with the other conservation equations. This method conserves the mass
of individual phases, mixture momentum, and total energy of the system due to
the conservative nature of the diffusion terms added to the system of
equations and results in no net mixture-mass transport. This method is
primarily motivated by applications involving miscible, multicomponent,
single-phase flows, but it has been successfully adapted for multiphase flow
applications. The idea behind this approach is to effectively add species
diffusion in the selected regions of the domain to properly resolve the
interface on the grid and to prevent oscillations due to discontinuities in
the phase mass equations. High-order compact derivative schemes can be used to
discretize the added diffusion terms without resulting in distortion of the
shape of the interface over long-duration time advancement. However, one
drawback of this approach is that the interface thickness increases with time
due to the lack of sharpening fluxes that act against the diffusion. This
method is therefore most effective for problems in which the interface is in
compression (such as shock/material-interface interactions with normal
alignment). However, the deficiency of this method due to the lack of a
sharpening term is evident for applications in which the interface between
immiscible materials undergoes shear or expansion/tension. LAD formulations
have also been examined in the context of five-equation models, in which
localized diffusion is also added to the volume fraction transport equation
(Aslani and Regele, 2018).
The second approach (referred to as the gradient-form approach) is based on
the quasi-conservative method proposed by Shukla et al. (2010), in which
diffusion and sharpening terms (together called regularization terms) are
added for the individual phase volume fraction transport equations and coupled
with the other conservation equations (Tiwari et al., 2013). This method only
approximately conserves the mass of individual phases, mixture momentum, and
total energy of the system due to the non-conservative nature of the
regularization terms added to the system of equations. In contrast to the LAD
approach, this method can result in net mixture-mass transport, which can
sharpen or diffuse the mixture density; depending on the application, this may
be an advantageous or disadvantageous property. The primary advantage of this
method is that the regularization terms are insensitive to the method of
discretization; they can be discretized using high-order compact derivative
schemes without distorting the shape of the interface over long-duration time
advancement. However, the non-conservative nature of this approach results in
poor performance of the method for certain applications. For example,
premature topological changes and unphysical interface behavior can be
observed when the interfaces are poorly resolved (exacerbating the
conservation error) and subjected to shocks that are not aligned with the
interface.
The third approach (referred to as the divergence-form approach) is based on
the fully conservative method proposed by Jain et al. (2020), in which
diffusion and sharpening terms are added to the individual phase volume
fraction transport equations and coupled with the other conservation
equations. This method conserves the mass of individual phases, mixture
momentum, and total energy of the system due to the conservative nature of the
regularization terms added to the system of equations. Similar to the
gradient-form approach and in contrast to the LAD approach, this method can
result in net mixture-mass transport, which can sharpen or diffuse the mixture
density. The primary challenge of this method is that one needs to be careful
with the choice of discretization used for the regularization terms. Using a
second-order finite-volume scheme (in which the nonlinear fluxes are formed on
the faces), Jain et al. (2020) showed that a discrete balance between the
diffusion and sharpening terms is achieved, thereby eliminating the spurious
behavior that was discussed by Shukla et al. (2010). The idea behind this is
similar to the use of the balanced-force algorithm (Francois et al., 2006,
Mencinger and Žun, 2007) for the implementation of the surface-tension forces,
in which a discrete balance between the pressure and surface-tension forces is
necessary to eliminate the spurious velocity around the interface. The current
study also demonstrates that appropriately crafted higher-order schemes may be
used to effectively discretize the regularization terms. This method is free
of premature topological changes and unphysical interface behavior present
with the previous approach. However, due to the method of discretization, the
anisotropy of the derivative scheme can more significantly distort the shape
of the material interface over long-duration time advancement in comparison to
the gradient-form approach; the severity of this problem is significantly
reduced when using higher-order schemes.
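To make the discrete diffusion-sharpening balance concrete, the following 1D sketch (ours, in the spirit of Jain et al. (2020); the explicit flux form and all parameter values are illustrative assumptions, not reproduced from that paper) forms a regularization flux of the type $a=\Gamma(\epsilon\,\partial\phi/\partial x-\phi(1-\phi)\hat{n})$ on cell faces and takes its divergence, so that diffusion and sharpening balance discretely at equilibrium:

```python
import numpy as np

def regularization_rhs(phi, dx, gamma, eps):
    """Divergence of a = gamma*(eps*dphi/dx - phi*(1-phi)*n) with
    face-centred fluxes (n is the 1D interface normal, sign of dphi/dx)."""
    phi_f = 0.5 * (phi[1:] + phi[:-1])        # phi interpolated to faces
    dphi_f = (phi[1:] - phi[:-1]) / dx        # gradient at faces
    n_f = np.sign(dphi_f)                     # unit normal in 1D
    a_f = gamma * (eps * dphi_f - phi_f * (1.0 - phi_f) * n_f)
    rhs = np.zeros_like(phi)
    rhs[1:-1] = (a_f[1:] - a_f[:-1]) / dx     # conservative divergence
    return rhs

# relax a smeared profile toward a tanh-like equilibrium of thickness ~eps
x = np.linspace(-1.0, 1.0, 201)
dx = x[1] - x[0]
phi = 0.5 * (1.0 + np.tanh(x / 0.3))
for _ in range(500):
    phi = phi + 0.2 * dx * regularization_rhs(phi, dx, gamma=1.0, eps=2.0 * dx)
```

Driving a smeared profile with this right-hand side relaxes it toward a tanh-like profile whose thickness is set by $\epsilon$, which is the mechanism that maintains a constant interface thickness.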
For all three diffuse-interface methods considered in this work, it is
important to include physically consistent corrections, associated with the
interface regularization process, in each of the governing equations. For
example, Cook (2009), Tiwari et al. (2013), and Jain et al. (2020) discuss
physically consistent regularization terms for the LAD, gradient-form, and
divergence-form approaches, respectively. The physically consistent
regularization terms of Cook (2009), Tiwari et al. (2013), and Jain et al.
(2020) are derived in such a way that the regularization terms do not
spuriously contribute to the kinetic energy and entropy of the system. This
significantly improves the stability of the simulation, especially for flows
with high density ratios. However, discrete conservation of kinetic energy and
entropy is needed to show the stability of the methods for high-Reynolds-
number turbulent flows (Jain and Moin, 2020).
We employ a fully Eulerian method for modeling the deformation of solid
materials, as opposed to a fully Lagrangian approach (Benson, 1992) or a mixed
approach such as arbitrary-Lagrangian-Eulerian methods (Donea et al., 2004),
because of its cost-effectiveness and accuracy to handle large deformations.
There are various Eulerian approaches in the literature that differ in the way
the deformation of the material is tracked. The popular methods employ the
inverse deformation gradient tensor (Miller and Colella, 2001, Ortega et al.,
2014, Ghaisas et al., 2018), the left Cauchy-Green tensor (Sugiyama et al.,
2010, 2011), the co-basis vectors (Favrie and Gavrilyuk, 2011), the initial
material location (Valkov et al., 2015, Jain et al., 2019), or other variants
of these methods to track the deformation of the material in the simulation.
In this work we use the inverse deformation gradient tensor approach because
of its applicability to model plasticity. We propose consistent corrections,
associated with the interface regularization process, in the kinematic
equations that describe the deformation of the solid.
In summary, the two main objectives of this paper are as follows. The first
objective is to assess several diffuse-interface-capturing methods for
compressible two-phase flows. The interface-capturing methods in this work
will be used with a four-equation multicomponent model; however, they are
readily compatible with a variety of other models, including the common five-,
six-, or seven-equation models. The second objective is to extend these
interface-capturing methods for the simulation of elastic-plastic deformation
in solid materials with strength, including comparison of these methods in the
context of modeling interfaces between solid materials.
The remainder of this paper is outlined as follows: Section 2 describes the
three diffuse-interface methods considered in this study, along with the
details of their implementation. Section 3 discusses the application of these
methods to a variety of problems including a shock/helium-bubble interaction
in air, an advecting air bubble in water, a shock/air-bubble interaction in
water, and a Richtmyer–Meshkov instability of an interface between copper and
aluminum. Concluding remarks are made in Section 4 along with a summary. A
table highlighting the strengths and limitations of the different methods
considered in this work is also presented in this section.
## 2 Theoretical and numerical model
### 2.1 Governing equations
The governing equations for the evolution of the multiphase flow or
multimaterial continuum in conservative Eulerian form are described in Eqs.
(1)-(3). This consists of the conservation of species mass (Eq. 1), total
momentum (Eq. 2), and total energy (Eq. 3). These are followed by the
kinematic equations that track the material deformation, which include
transport equations for the elastic component of the inverse deformation
gradient tensor (Eq. 4), and the plastic Finger tensor (Eq. 5).
$\underbrace{\frac{\partial\rho Y_{m}}{\partial t}}_{\begin{subarray}{c}\textrm{local}\\ \textrm{derivative}\end{subarray}}+\underbrace{\frac{\partial u_{k}\rho Y_{m}}{\partial x_{k}}}_{\textrm{advection}}=-\underbrace{\frac{\partial\left(J^{*}_{m}\right)_{i}}{\partial x_{i}}}_{\begin{subarray}{c}\textrm{artificial}\\ \textrm{diffusion}\end{subarray}}+\underbrace{J_{m}}_{\begin{subarray}{c}\textrm{interface}\\ \textrm{regularization}\end{subarray}},$ (1)
$\underbrace{\frac{\partial\rho u_{i}}{\partial t}}_{\begin{subarray}{c}\textrm{local}\\ \textrm{derivative}\end{subarray}}+\frac{\partial}{\partial x_{k}}\left(\underbrace{u_{k}\rho u_{i}}_{\textrm{advection}}-\underbrace{\sigma_{ik}}_{\begin{subarray}{c}\textrm{stress}\\ \textrm{source}\end{subarray}}\right)=\underbrace{\frac{\partial\tau^{*}_{ik}}{\partial x_{k}}}_{\begin{subarray}{c}\textrm{artificial}\\ \textrm{diffusion}\end{subarray}}+\underbrace{F_{i}}_{\begin{subarray}{c}\textrm{interface}\\ \textrm{regularization}\end{subarray}},$ (2)
$\underbrace{\frac{\partial}{\partial t}\left[\rho\left(e+\frac{1}{2}u_{j}u_{j}\right)\right]}_{\begin{subarray}{c}\textrm{local}\\ \textrm{derivative}\end{subarray}}+\frac{\partial}{\partial x_{k}}\left[\underbrace{u_{k}\rho\left(e+\frac{1}{2}u_{j}u_{j}\right)}_{\textrm{advection}}-\underbrace{u_{i}\sigma_{ik}}_{\begin{subarray}{c}\textrm{stress}\\ \textrm{source}\end{subarray}}\right]=\underbrace{\frac{\partial}{\partial x_{k}}\left(u_{i}\tau^{*}_{ik}-q^{*}_{k}\right)}_{\textrm{artificial diffusion}}+\underbrace{H}_{\begin{subarray}{c}\textrm{interface}\\ \textrm{regularization}\end{subarray}},$ (3)
$\underbrace{\frac{\partial g^{e}_{ij}}{\partial t}}_{\begin{subarray}{c}\textrm{local}\\ \textrm{derivative}\end{subarray}}+\underbrace{\frac{\partial g^{e}_{ik}u_{k}}{\partial x_{j}}}_{\begin{subarray}{c}\textrm{curl-free}\\ \textrm{advection/strain}\end{subarray}}+\underbrace{u_{k}\left(\frac{\partial g^{e}_{ij}}{\partial x_{k}}-\frac{\partial g^{e}_{ik}}{\partial x_{j}}\right)}_{\begin{subarray}{c}\textrm{non-zero curl}\\ \textrm{advection/strain}\end{subarray}}-\underbrace{\frac{1}{2\mu\tau_{rel}}g^{e}_{ik}\sigma^{\prime}_{kj}}_{\textrm{elastic-plastic source}}=\underbrace{\frac{\zeta^{e}}{\Delta t}\left(\frac{\rho}{\rho_{0}\det{\left(\bm{g^{e}}\right)}}-1\right)g^{e}_{ij}}_{\textrm{density compatibility}}+\underbrace{\frac{\partial}{\partial x_{k}}\left({g^{e}}^{*}\frac{\partial g^{e}_{ij}}{\partial x_{k}}\right)}_{\textrm{artificial diffusion}}+\underbrace{K^{e}_{ij}}_{\begin{subarray}{c}\textrm{interface}\\ \textrm{regularization}\end{subarray}},$ (4)
$\underbrace{\frac{\partial G^{p}_{ij}}{\partial t}}_{\begin{subarray}{c}\textrm{local}\\ \textrm{derivative}\end{subarray}}+\underbrace{u_{k}\frac{\partial G^{p}_{ij}}{\partial x_{k}}}_{\textrm{advection}}+\underbrace{\frac{1}{2\mu\tau_{rel}}\left(G^{p}_{ik}g^{e}_{kl}\sigma^{\prime}_{lm}\left(g^{e}\right)^{-1}_{mj}+G^{p}_{jk}g^{e}_{kl}\sigma^{\prime}_{lm}\left(g^{e}\right)^{-1}_{mi}\right)}_{\textrm{elasto-plastic source}}=\underbrace{\frac{\zeta^{p}}{\Delta t}\left(\frac{1}{\det{\left(\bm{G^{p}}\right)}^{1/2}}-1\right)G^{p}_{ij}}_{\textrm{density compatibility}}+\underbrace{\frac{\partial}{\partial x_{k}}\left({g^{p}}^{*}\frac{\partial G^{p}_{ij}}{\partial x_{k}}\right)}_{\textrm{artificial diffusion}}.$ (5)
Here, $t$ and $\bm{x}$ represent time and the Eulerian position vector,
respectively. $Y_{m}$ describes the mass fraction of each constituent
material, $m$. The variables $\bm{u}$, $\rho$, $e$, and
$\bm{\underline{\sigma}}$ describe the mixture velocity, density, internal
energy, and Cauchy stress, respectively, which are related to the species-
specific components by the relations $\rho=\sum_{m=1}^{M}\phi_{m}\rho_{m}$,
$e=\sum_{m=1}^{M}Y_{m}e_{m}$, and
$\bm{\underline{\sigma}}=\sum_{m=1}^{M}\phi_{m}\bm{\underline{\sigma}}_{m}$,
in which $\phi_{m}$ is the volume fraction of material $m$, and $M$ is the
total number of material constituents. The variables $g^{e}_{ij}$ and
$G^{p}_{ij}$ are tensors that track elastic and plastic material deformation
in problems with solids. These equations are described in greater detail in
the next section.
The right-hand-side terms describe the localized artificial diffusion (see
also Section 2.5), including the artificial viscous stress,
$\tau^{*}_{ik}=2\mu^{*}S_{ik}+\left(\beta^{*}-2\mu^{*}/3\right)\left(\partial
u_{j}/\partial x_{j}\right)\delta_{ik}$, and the artificial enthalpy flux,
$q^{*}_{i}=-\kappa^{*}\partial T/\partial
x_{i}+\sum_{m=1}^{M}h_{m}\left(J^{*}_{m}\right)_{i}$, with strain rate tensor,
$S_{ik}=\left(\partial u_{i}/\partial x_{k}+\partial u_{k}/\partial
x_{i}\right)/2$, and temperature, $T$. The second term in the artificial
enthalpy flux expression is the enthalpy diffusion term (Cook, 2009), in which
$h_{m}=e_{m}+p_{m}/\rho_{m}$ is the enthalpy of species $m$. The artificial
Fickian diffusion of species $m$ is described by
$\left(J^{*}_{m}\right)_{i}=-\rho\left[D^{*}_{m}\left(\partial Y_{m}/\partial
x_{i}\right)-Y_{m}\sum_{k}D^{*}_{k}\left(\partial Y_{k}/\partial
x_{i}\right)\right]$.
### 2.2 Material deformation and plasticity model
The kinematic equations that describe the deformation of the solid in the
Eulerian framework employ the inverse deformation gradient tensor,
$g_{ij}=\partial X_{i}/\partial x_{j}$, in which $\bm{X}$ and $\bm{x}$
describe the position of a continuum parcel in the material (Lagrangian) and
spatial (Eulerian) perspectives, respectively. In this work, a single inverse
deformation gradient is used to describe the kinematics of the mixture
(Ghaisas et al., 2017, 2018). Following Miller and Colella (2001), a
multiplicative decomposition of the total inverse deformation gradient tensor,
$\bm{\underline{g}}$, into elastic, $\bm{\underline{g^{e}}}$, and plastic,
$\bm{\underline{g^{p}}}$, components is assumed,
$g_{ij}=g^{p}_{ik}g^{e}_{kj}$, reflecting the assumption that the plastic
deformation is recovered when the elastic deformation is reversed,
$g^{p}_{ij}=g_{ik}\left(g^{e}\right)^{-1}_{kj}$. It is additionally assumed
that the plastic deformation is volume preserving (Plohr and Sharp, 1992),
providing compatibility conditions for the inverse deformation gradient tensor
determinants, $\det{\left(\bm{\underline{g^{p}}}\right)}=1$ and
$\det{\left(\bm{\underline{g}}\right)}=\det{\left(\bm{\underline{g^{e}}}\right)}=\rho/\rho_{0}$,
in which $\rho_{0}$ represents the undeformed density and $\det{(\cdot)}$
represents the determinant operator. In this work, the plastic Finger tensor
$G^{p}_{ij}=g^{p}_{ik}g^{p}_{jk}$ is solved for because it tends to be more
stable than the equation for $\textbf{g}^{p}$, and because models for strain
hardening are often parametrized in terms of norms of the plastic Finger
tensor. This choice and its alternatives are discussed in detail in Adler et
al. (2021).
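To make this kinematic bookkeeping concrete, the following minimal NumPy sketch (not part of the solver; all tensor values are hypothetical) recovers $\bm{\underline{g^{p}}}$ and the plastic Finger tensor from given total and elastic inverse deformation gradients, and checks the determinant compatibility conditions stated above.

```python
import numpy as np

# Hypothetical elastic inverse deformation gradient (small shear + compression).
g_e = np.array([[1.1, 0.05, 0.0],
                [0.0, 1.1,  0.0],
                [0.0, 0.0,  1.0]])

# Hypothetical volume-preserving plastic part, used to synthesize a total g.
g_p_true = np.array([[1.0, 0.2, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])   # det = 1 by construction
g = g_p_true @ g_e                        # multiplicative decomposition g = g^p g^e

# Recover the plastic part and form the plastic Finger tensor G^p = g^p (g^p)^T.
g_p = g @ np.linalg.inv(g_e)
G_p = g_p @ g_p.T

# Compatibility checks: det(g^p) = 1 and det(g) = det(g^e) = rho/rho_0.
assert np.isclose(np.linalg.det(g_p), 1.0)
assert np.isclose(np.linalg.det(g), np.linalg.det(g_e))
print("rho/rho_0 =", np.linalg.det(g_e))
```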
We also assume that the materials with strength are elastic perfectly plastic,
i.e., the material yield stress is independent of strain and strain rate;
thus, only the elastic component of the inverse deformation gradient tensor is
necessary to close the governing equations. As a result, we solve only the
equation for elastic deformation in the present work. The plastic component of
the inverse deformation gradient tensor, or the full tensor, can be employed
to supply the plastic strain and strain rate necessary for more general
plasticity models (Adler and Lele, 2019).
Plastic deformation is incorporated into the numerical framework by means of a
visco-elastic Maxwell relaxation model, which has been employed recently in
several Eulerian approaches (Ndanou et al., 2015, Ortega et al., 2015, Ghaisas
et al., 2018). The plastic relaxation timescale is described by
$\frac{1}{\tau_{\textrm{rel}}}=\frac{1}{\left(\rho/\rho_{0}\right)\tau_{0}}\left[\frac{\textrm{R}\left(\left\lVert\bm{\underline{\sigma}}^{\prime}\right\rVert^{2}-\frac{2}{3}\sigma_{Y}^{2}\right)}{\mu^{2}}\right],$
(6)
in which
$\bm{\underline{\sigma}}^{\prime}=\textrm{dev}\left(\bm{\underline{\sigma}}\right)$
and $\mu$ is the material shear modulus. The ramp function
$R\left(x\right)=\textrm{max}\left(x,0\right)$ turns on plasticity effects
only when the yield criterion is satisfied. In many cases, the elastic-plastic
source term is stiff due to the small value of $\tau_{\textrm{rel}}$ relative
to the convective deformation scales. To overcome this time step restriction,
implicit plastic relaxation is performed at each timestep, based on the method
of Favrie and Gavrilyuk (2011) and described by Ghaisas et al. (2018).
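As an illustration of Eq. (6), the following sketch evaluates the relaxation rate $1/\tau_{\textrm{rel}}$ for a hypothetical stress state; the actual solver applies this relaxation implicitly, following Favrie and Gavrilyuk (2011) and Ghaisas et al. (2018), which is not reproduced here.

```python
import numpy as np

def inv_tau_rel(sigma, sigma_Y, mu, rho, rho0, tau0):
    """Plastic relaxation rate 1/tau_rel of Eq. (6); plasticity activates
    only when the yield criterion is exceeded."""
    sigma_dev = sigma - np.trace(sigma) / 3.0 * np.eye(3)   # dev(sigma)
    norm2 = np.sum(sigma_dev * sigma_dev)                   # ||sigma'||^2
    ramp = max(norm2 - (2.0 / 3.0) * sigma_Y**2, 0.0)       # R(x) = max(x, 0)
    return ramp / ((rho / rho0) * tau0 * mu**2)

# Hypothetical state: deviatoric stress slightly above yield.
sigma = np.diag([1.2e-2, -0.6e-2, -0.6e-2])
print(inv_tau_rel(sigma, sigma_Y=1.0e-2, mu=1.0, rho=1.05, rho0=1.0, tau0=1.0))
```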
### 2.3 Equations of state and constitutive equations
A hyperelastic constitutive model, in which the elastic stress–strain
relationship is compatible with a strain energy-density functional, is assumed
to close the thermodynamic relationships in the governing equations. The
internal energy, $e$, is additively decomposed into a hydrodynamic component,
$e_{h}$, and an elastic component, $e_{e}$, as in Ndanou et al. (2015). The
hydrodynamic component is analogous to a stiffened gas, with
$e=e_{h}\left(p,\rho\right)+e_{e}\left(\bm{\underline{\hat{g}}}\right),\qquad
e_{h}=\frac{p+\gamma p_{\infty}}{\left(\gamma-1\right)\rho},\qquad
e_{e}=\frac{\mu}{4\rho_{0}}\textrm{tr}\left[\left(\bm{\underline{\hat{g}}}-\bm{\underline{I}}\right)^{2}\right],$
(7)
in which
$\bm{\underline{\hat{g}}}=\det{\left(\bm{\underline{G}^{e}}\right)}^{-1/3}\bm{\underline{G}^{e}}$,
$\bm{\underline{G}^{e}}=\bm{\underline{g}^{e}{}}^{T}\bm{\underline{g}^{e}}$,
$p$ is the pressure, and $p_{\infty}$ (with units of pressure) and $\gamma$
(nondimensional) are material constants of the stiffened-gas model for the
hydrodynamic component of the internal energy. With this EOS, the Cauchy stress,
$\bm{\underline{\sigma}}$, satisfying the Clausius-Duhem inequality is
described by
$\bm{\underline{\sigma}}=-p\bm{\underline{I}}-\mu\frac{\rho}{\rho_{0}}\left\{\det{\left(\bm{\underline{G}^{e}}\right)}^{-2/3}\textrm{dev}\left[\left(\bm{\underline{G}^{e}}\right)^{2}\right]-\det{\left(\bm{\underline{G}^{e}}\right)}^{-1/3}\textrm{dev}\left(\bm{\underline{G}^{e}}\right)\right\},$
(8)
in which $\textrm{dev}\left(\bm{\underline{G}^{e}}\right)$ signifies the
deviatoric component of the tensor:
$\textrm{dev}\left(\bm{\underline{G}^{e}}\right)=\bm{\underline{G}^{e}}-\frac{1}{3}\textrm{tr}\left(\bm{\underline{G}^{e}}\right)\bm{\underline{1}}$,
with $\textrm{tr}\left(\cdot\right)$ signifying the trace of the tensor and
$\bm{\underline{1}}$ signifying the identity tensor. The elastic component of
the internal energy, $e_{e}$, is assumed to be isentropic. Therefore,
the temperature, $T$, and entropy, $\eta$, are defined by the hydrodynamic
stiffened gas component of the EOS, as follows.
$\displaystyle e_{h}=C_{v}T\left(\frac{p+\gamma
p_{\infty}}{p+p_{\infty}}\right),\qquad
R=C_{p}-C_{v},\qquad\gamma=\frac{C_{p}}{C_{v}},$ (9)
$\displaystyle\eta-\eta_{0}=C_{p}\ln{\left(\frac{T}{T_{0}}\right)}+R\ln{\left(\frac{p_{0}+p_{\infty}}{p+p_{\infty}}\right)}.$
Here, $\eta_{0}$ is the reference entropy at pressure, $p_{0}$, and
temperature, $T_{0}$. In the case of compressible flow with no material
strength, the material model reduces to the stiffened gas EOS commonly
employed for liquid/gas-interface interactions (Shukla et al., 2010, Jain et
al., 2020).
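The stress and energy evaluations of Eqs. (7) and (8) reduce to a few tensor operations; the following sketch (hypothetical parameter values) implements them and recovers $\bm{\underline{\sigma}}=-p\bm{\underline{I}}$ when the elastic deformation vanishes.

```python
import numpy as np

def dev(A):
    """Deviatoric part of a 3x3 tensor."""
    return A - np.trace(A) / 3.0 * np.eye(3)

def cauchy_stress(p, g_e, mu, rho, rho0):
    """Cauchy stress of Eq. (8) for the hyperelastic model."""
    G_e = g_e.T @ g_e                        # G^e = (g^e)^T g^e
    J = np.linalg.det(G_e)
    elastic = J**(-2.0 / 3.0) * dev(G_e @ G_e) - J**(-1.0 / 3.0) * dev(G_e)
    return -p * np.eye(3) - mu * (rho / rho0) * elastic

def elastic_energy(g_e, mu, rho0):
    """Elastic internal energy e_e of Eq. (7)."""
    G_e = g_e.T @ g_e
    g_hat = np.linalg.det(G_e)**(-1.0 / 3.0) * G_e
    return mu / (4.0 * rho0) * np.trace((g_hat - np.eye(3)) @ (g_hat - np.eye(3)))

# Hypothetical sheared state; the stress reduces to -p*I when g_e is the identity.
g_e = np.array([[1.0, 0.1, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(cauchy_stress(p=1.0, g_e=g_e, mu=0.5, rho=1.0, rho0=1.0))
print(elastic_energy(g_e, mu=0.5, rho0=1.0))
```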
### 2.4 Pressure and temperature equilibration method
Many models for multiphase simulation assume that the thermodynamic variables
are not in equilibrium, necessitating the solution of an additional equation
for volume fraction transport (Shukla et al., 2010, Jain et al., 2020). Our
model begins with the assumption that both pressure and temperature remain in
equilibrium between the phases. The equilibration method follows from Cook
(2009) and Subramaniam et al. (2018). For a mixture of $M$ species, we solve
for $2M+2$ unknowns, including the equilibrium pressure $\left(p\right)$, the
equilibrium temperature $\left(T\right)$, the component volume fractions
$\left(\phi_{m}\right)$, and the component internal energies
$\left(e_{m}\right)$, from the following equations.
$p=p_{m},\qquad
T=T_{m},\qquad\sum_{m=1}^{M}\phi_{m}=1,\qquad\sum_{m=1}^{M}Y_{m}e_{m}=e.$ (10)
Achieving a stable equilibrium requires that all phases be present with
non-negative volume fractions throughout the entire simulation domain. This is
achieved by initializing the problem with a minimum volume fraction (typically
$\phi_{\text{min}}\lesssim 10^{-6}$) and including additional criteria for
volume fraction diffusion (Sections 2.6 and 2.7) or mass fraction diffusion
(Section 2.5) based on out-of-bounds values of volume fraction and/or mass
fraction. This equilibration method is stable in the well-mixed interface
region, but can result in stability issues outside of the interface region,
where the volume fraction of a material tends to become very small—a
phenomenon exacerbated by high-order discretization methods.
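For a two-phase stiffened-gas mixture, Eqs. (9) and (10) imply the closed-form component density $\rho_{m}=(p+{p_{\infty}}_{m})/\left[(\gamma_{m}-1)C_{v,m}T\right]$, so the $2M+2$ system reduces to a single scalar equation for the equilibrium pressure. The following sketch (hypothetical mixture state and $C_{v,m}$ values; the elastic energy contribution is neglected) solves it by bisection.

```python
import numpy as np

# Hypothetical two-phase stiffened-gas mixture (phase 1 ~ liquid, phase 2 ~ gas).
gamma = np.array([4.4, 1.4]); p_inf = np.array([6.0e3, 0.0]); Cv = np.array([1.0, 1.0])
Y = np.array([0.99, 0.01])          # mass fractions
rho, e = 1.0, 1.5e3                 # mixture density and internal energy (hypothetical)

def T_of_p(p):
    # Volume-fraction constraint sum(phi_m) = 1 with rho_m = (p+p_inf)/((gamma-1) Cv T).
    return 1.0 / np.sum(rho * Y * (gamma - 1.0) * Cv / (p + p_inf))

def energy_residual(p):
    T = T_of_p(p)
    e_m = Cv * T * (p + gamma * p_inf) / (p + p_inf)   # component energies, Eq. (9)
    return np.sum(Y * e_m) - e                          # mixture energy, Eq. (10)

# Bisection on the equilibrium pressure (sign change bracketed by [lo, hi]).
lo, hi = 1e-8, 1e6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if energy_residual(lo) * energy_residual(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
p_eq = 0.5 * (lo + hi); T_eq = T_of_p(p_eq)
phi = rho * Y * (gamma - 1.0) * Cv * T_eq / (p_eq + p_inf)   # recovered volume fractions
print(p_eq, T_eq, phi)
```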
### 2.5 Localized artificial diffusivity
LAD methods have long proven useful in conjunction with high-order compact
derivative schemes to provide necessary solution-adaptive and localized
diffusion to capture discontinuities and introduce a mechanism for subgrid
dissipation. Regardless of the choice of interface-capturing method, LAD is
required in the momentum, energy, and kinematic equations, in all
calculations, to provide necessary regularization. For instance, the
artificial shear viscosity, $\mu^{*}$, primarily serves as a subgrid
dissipation model, whereas the artificial bulk viscosity, $\beta^{*}$, enables
shock capturing, and the artificial thermal conductivity, $\kappa^{*}$,
captures contact discontinuities. The artificial kinematic diffusivities
$\left(g^{e*}~{}\text{and}~{}g^{p*}\right)$ facilitate capturing of strain
discontinuities, particularly in regions of sustained shearing.
When LAD is also used for interface regularization (to capture material
interfaces), the artificial diffusivity of species $m$, $D^{*}_{m}$, is
activated, in which the coefficient $C_{D}$ controls the interface diffusivity
and the coefficient $C_{Y}$ controls the diffusivity when the mass fraction
goes out of bounds. When using the volume-fraction-based approaches for
interface regularization (Sections 2.6 and 2.7), it is often unnecessary to
also include the species LAD $\left(D^{*}_{m}=0\right)$; however, the species
LAD seems to be necessary for some problems in conjunction with these other
interface regularization approaches.
The artificial diffusivities are described below, where the overbar denotes a
truncated Gaussian filter applied along each grid direction; $\Delta_{i}$ is
the grid spacing in the $i$ direction; $\Delta_{i,\mu}$, $\Delta_{i,\beta}$,
$\Delta_{i,\kappa}$, $\Delta_{i,Y_{m}}$, and $\Delta_{i,g}$ are weighted grid
length scales in direction $i$; $c_{s}$ is the linear longitudinal wave
(sound) speed; $H$ is the Heaviside function; and $\varepsilon=10^{-32}$.
$\mu^{*}=C_{\mu}\overline{\rho\left\lvert\sum_{k=1}^{3}\frac{\partial^{r}S}{\partial
x_{k}^{r}}\Delta^{r}_{k}\Delta^{2}_{k,\mu}\right\rvert};\qquad\Delta_{i,\mu}=\Delta_{i}.$
(11) $\beta^{*}=C_{\beta}\overline{\rho
f_{sw}\left\lvert\sum_{k=1}^{3}\frac{\partial^{r}\left(\bm{\nabla}\cdot\bm{u}\right)}{\partial
x_{k}^{r}}\Delta^{r}_{k}\Delta^{2}_{k,\beta}\right\rvert};\qquad\Delta_{i,\beta}=\Delta_{i}\frac{\displaystyle\left(\frac{\partial\rho}{\partial
x_{i}}\right)^{2}}{\displaystyle\sum_{k=1}^{3}\left(\frac{\partial\rho}{\partial
x_{k}}\right)^{2}+\varepsilon}.$ (12)
$\kappa^{*}=C_{\kappa}\overline{\frac{\rho
c_{s}}{T}\left\lvert\sum_{k=1}^{3}\frac{\partial^{r}e_{h}}{\partial
x_{k}^{r}}\Delta^{r}_{k}\Delta_{k,\kappa}\right\rvert};\qquad\Delta_{i,\kappa}=\Delta_{i}\frac{\displaystyle\left(\frac{\partial
e_{h}}{\partial
x_{i}}\right)^{2}}{\displaystyle\sum_{k=1}^{3}\left(\frac{\partial
e_{h}}{\partial x_{k}}\right)^{2}+\varepsilon}.$ (13) $\displaystyle
D^{*}_{m}=\textrm{max}\left\{C_{D}\overline{c_{s}\left\lvert\sum_{k=1}^{3}\frac{\partial^{r}Y_{m}}{\partial
x_{k}^{r}}\Delta^{r}_{k}\Delta_{k,D}\right\rvert},\,C_{Y}\overline{\frac{c_{s}}{2}\left(\left\lvert
Y_{m}\right\rvert-1+\left\lvert
1-Y_{m}\right\rvert\right)\sum_{k=1}^{3}\Delta_{k,Y}}\right\};$ (14)
$\displaystyle\Delta_{i,D}=\Delta_{i}\frac{\displaystyle\left(\frac{\partial
Y_{m}}{\partial
x_{i}}\right)^{2}}{\displaystyle\sum_{k=1}^{3}\left(\frac{\partial
Y_{m}}{\partial
x_{k}}\right)^{2}+\varepsilon};\qquad\Delta_{i,Y}=\Delta_{i}\frac{\displaystyle\left\lvert\frac{\partial
Y_{m}}{\partial
x_{i}}\right\rvert}{\sqrt{\displaystyle\sum_{k=1}^{3}\left(\frac{\partial
Y_{m}}{\partial x_{k}}\right)^{2}}+\varepsilon}.$
$g^{*}=C_{g}\overline{c_{s}\left\lvert\sum_{k=1}^{3}\frac{\partial^{r}E^{g}}{\partial
x_{k}^{r}}\Delta^{r}_{k}\Delta_{k,g}\right\rvert};\qquad\Delta_{i,g}=\Delta_{i}\frac{\displaystyle\left(\frac{\partial
E^{g}}{\partial
x_{i}}\right)^{2}}{\displaystyle\sum_{k=1}^{3}\left(\frac{\partial
E^{g}}{\partial x_{k}}\right)^{2}+\varepsilon}.$ (15)
$f_{sw}=H\left(-\bm{\nabla}\cdot\bm{u}\right)\frac{\left(\bm{\nabla}\cdot\bm{u}\right)^{2}}{\left(\bm{\nabla}\cdot\bm{u}\right)^{2}+\left|\bm{\nabla}\times\bm{u}\right|^{2}+\varepsilon}.$
(16)
Here, $S=\sqrt{S_{ij}S_{ij}}$ is a norm of the strain rate tensor, $S_{ij}$, and
$E^{g}=\sqrt{\frac{2}{3}E^{g}_{ij}E^{g}_{ji}}$ is a norm of the Almansi
finite-strain tensor associated with the $g$ equations,
$E^{g}_{ij}=\frac{1}{2}\left(\delta_{ij}-g_{ki}g_{kj}\right)$. The artificial
kinematic diffusivities, ${g^{e}}^{*}$ and ${g^{p}}^{*}$, are obtained by
using the equation for ${g}^{*}$, but with the Almansi strain based on only
the elastic or plastic component of the inverse deformation gradient tensor,
respectively. We observe that LAD is not strictly necessary to ensure
stability for the $\textbf{g}^{e}$ equations; indeed, it has not been included
in previous simulations (Ghaisas et al., 2018, Subramaniam et al., 2018),
because the elastic deformation is often small relative to the plastic
deformation. LAD is, however, necessary to provide stability for the
$\textbf{G}^{p}$ equations, especially when the interface is re-shocked, which
produces sharper gradients in the plastic deformation than in the elastic
deformation.
Typical values for the model coefficients are $\zeta^{e}=\zeta^{p}=0.5$,
$C_{\mu}=2\times 10^{-3}$, $C_{\beta}=1$, $C_{\kappa}=1\times 10^{-2}$,
$C_{D}=3\times 10^{-3}$, $C_{Y}=1\times 10^{2}$, and $C_{g}=1$; these values
are used in the subsequent simulations unless stated otherwise. However, these
coefficients often need to be specifically tailored to the problem; for
example, the bulk viscosity coefficient can be increased to more effectively
capture strong shocks in materials with large stiffening pressures.
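As a one-dimensional illustration of Eqs. (12) and (16), the sketch below evaluates the shock switch and the artificial bulk viscosity. Two details are assumptions here: the derivative order $r$ is not fixed by the text, so the common LAD choice $r=4$ is used, and a three-point average stands in for the truncated Gaussian filter of the actual solver.

```python
import numpy as np

def lad_bulk_viscosity(rho, u, dx, C_beta=1.0, r=4, eps=1e-32):
    """1D sketch of the artificial bulk viscosity of Eq. (12) with the
    shock switch of Eq. (16); in 1D the vorticity term vanishes, so
    f_sw = H(-du/dx) * (du/dx)^2 / ((du/dx)^2 + eps)."""
    div_u = np.gradient(u, dx)                       # velocity divergence (1D)
    f_sw = (div_u < 0.0) * div_u**2 / (div_u**2 + eps)
    d = div_u
    for _ in range(r):                               # r-th derivative of div(u)
        d = np.gradient(d, dx)
    beta_star = C_beta * rho * f_sw * np.abs(d * dx**r * dx**2)
    # Stand-in smoothing filter (the paper uses a truncated Gaussian filter).
    return np.convolve(beta_star, np.ones(3) / 3.0, mode="same")

# Usage: a smeared compressive profile activates beta* only where du/dx < 0.
x = np.linspace(0.0, 1.0, 201); dx = x[1] - x[0]
u = 0.5 * (1.0 - np.tanh((x - 0.5) / (3 * dx)))
print(lad_bulk_viscosity(np.ones_like(x), u, dx).max())
```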
### 2.6 Fully conservative divergence-form approach to interface
regularization
In this method, interface regularization is achieved with the use of diffusion
and sharpening terms that balance each other. This results in constant
interface thickness during the simulation, unlike the LAD method, in which the
interface thickness increases over time due to the absence of interface
sharpening fluxes. All regularization terms are constructed in divergence
form, resulting in a method that conserves the mass of individual species as
well as the mixture momentum and total energy.
Following Jain et al. (2020), we consider the implied volume fraction
transport equation for phase $m$, with the interface regularization volume
fraction flux $\left(a_{m}\right)_{k}$,
$\frac{\partial\phi_{m}}{\partial t}+u_{k}\frac{\partial\phi_{m}}{\partial
x_{k}}=\frac{\partial\left(a_{m}\right)_{k}}{\partial x_{k}}.$ (17)
In this work, this equation is not directly solved, because the volume
fraction is closed during the pressure and temperature equilibration process
(Section 2.4), but the action of this volume fraction flux is consistently
incorporated into the system through the coupling terms with the other
governing equations. We employ the coupling terms proposed by Jain et al.
(2020) for the mass, momentum, and energy equations, and propose new
consistent coupling terms for the kinematic equations.
Using the relationship of density $\left(\rho\right)$ and mass fraction
$\left(Y_{m}\right)$ to component density $\left(\rho_{m}\right)$ and volume
fraction $\left(\phi_{m}\right)$ for material $m$ ($\rho
Y_{m}=\rho_{m}\phi_{m}$, with no sum on repeated $m$), we can describe the
interface regularization term for each material mass transport equation,
$J_{m}=\frac{\partial\left(a_{m}\right)_{k}\rho_{m}}{\partial
x_{k}},\qquad\text{with no sum on repeated }m.$ (18)
Consistent regularization terms for the momentum and energy equations follow,
$F_{i}=\frac{\partial\left(a_{m}\right)_{k}\rho_{m}u_{i}}{\partial
x_{k}}\qquad\text{and}\qquad H=\frac{\partial}{\partial
x_{k}}\left\{\left(a_{m}\right)_{k}\left[\frac{1}{2}\rho_{m}u_{j}u_{j}+\left(\rho
h\right)_{m}\right]\right\},$ (19)
in which the enthalpy of species $m$ is described,
$\left(\rho h\right)_{m}=e_{m}\rho_{m}+p_{m},\qquad\textrm{with no sum on
repeated }m.$ (20)
Consistent regularization terms for the kinematic equations take the form
$K^{e}_{ij}=\frac{1}{3}\frac{1}{\rho}g^{e}_{ij}\sum_{m}J_{m}.$ (21)
This is derived by considering a transport equation for $\det\textbf{g}^{e}$:
$\frac{D}{Dt}(\det{\textbf{g}^{e}})=\frac{\partial\det{\textbf{g}^{e}}}{\partial\textbf{g}^{e}}:\frac{D}{Dt}{\textbf{g}^{e}}$
(22)
where $\frac{D}{Dt}$ denotes the material derivative, and “$:$” denotes the
tensor inner product. Using identities from tensor calculus and the properties
of the multiplicative decomposition, this can be simplified to
$\frac{D\rho}{Dt}=\rho(\textbf{g}^{e})^{-T}:\frac{D}{Dt}{\textbf{g}^{e}}$ (23)
Plugging in the transport equations for $\textbf{g}^{e}$ and $\rho$,
converting to index notation, and ignoring the other artificial terms, this
becomes
$-\rho\frac{\partial u_{k}}{\partial
x_{k}}+\sum_{m}J_{m}=\rho\left(g^{e}\right)^{-1}_{ji}\left(-g^{e}_{ik}\frac{\partial
u_{k}}{\partial x_{j}}+K^{e}_{ij}\right)$ (24)
The terms involving velocity cancel, leaving
$\sum_{m}J_{m}=\rho\left(g^{e}\right)^{-1}_{ji}K^{e}_{ij}$ (25)
This relationship can be satisfied in many ways, but the form employed here is
chosen for simplicity. No sharpening term is required in the equations for
plastic deformation because in the multiplicative decomposition, volume change
is entirely described by $\textbf{g}^{e}$.
The volume fraction interface regularization flux for phase $m$ is described
by
$\left(a_{m}\right)_{k}=\Gamma\left[\underbrace{\epsilon\frac{\partial\phi_{m}}{\partial x_{k}}}_{\begin{subarray}{c}\textrm{interface}\\ \textrm{diffusion}\end{subarray}}-\underbrace{s_{m}\left(\hat{n}_{m}\right)_{k}}_{\begin{subarray}{c}\textrm{interface}\\ \textrm{sharpening}\end{subarray}}\right]L_{m}+\underbrace{\Gamma^{*}\epsilon D_{b}\frac{\partial\phi_{m}}{\partial x_{k}}}_{\begin{subarray}{c}\textrm{out-of-bounds}\\ \textrm{diffusion}\end{subarray}},\qquad\text{with no sum on repeated }m,$ (26)
with the interface sharpening term
$s_{m}=\begin{cases}\left(\phi_{m}-\phi_{m}^{\epsilon}\right)\left(1-\sum_{\begin{subarray}{c}n=1\\ n\neq m\end{subarray}}^{M}\phi_{n}^{\epsilon}-\phi_{m}\right),&\text{for }\phi_{m}^{\epsilon}\leq\phi_{m}\leq 1-\sum_{\begin{subarray}{c}n=1\\ n\neq m\end{subarray}}^{M}\phi_{n}^{\epsilon}\\ 0,&\text{else}\end{cases},$ (27)
in which $\phi_{m}^{\epsilon}$ denotes the minimum allowable volume fraction
for phase $m$; this floor promotes physically realizable solutions to the
pressure and temperature equilibria, which would otherwise not be well behaved
if the mass or volume fraction exceeded the physically realizable bounds
between zero and one. We assume $\phi_{m}^{\epsilon}=1\times 10^{-6}$ unless
stated otherwise. The optional mask term,
$L_{m}=\begin{cases}1,&\text{for }\phi_{m}^{\epsilon}\leq\phi_{m}\leq 1-\sum_{\begin{subarray}{c}n=1\\ n\neq m\end{subarray}}^{M}\phi_{n}^{\epsilon}\\ 0,&\text{else}\end{cases},$ (28)
localizes the interface diffusion and interface sharpening terms to the
interface region, restricting the application of the non-compactly discretized
terms to the interface region. Unlike the gradient-form approach, this mask in
the divergence-form approach is not necessary for stability, as demonstrated
by Jain et al. (2020). The interface normal vector for phase $m$ is given by
$\left(\hat{n}_{m}\right)_{k}=\frac{\partial\phi_{m}}{\partial
x_{k}}/\left\lvert\frac{\partial\phi_{m}}{\partial
x_{i}}\right\rvert,\qquad\left\lvert\frac{\partial\phi_{m}}{\partial
x_{i}}\right\rvert=\sqrt{\frac{\partial\phi_{m}}{\partial
x_{i}}\frac{\partial\phi_{m}}{\partial x_{i}}},\qquad\textrm{with no sum on
repeated }m.$ (29)
The out-of-bounds diffusivity, described by
$D_{b}=\max_{m}\overline{\left[1-\phi_{m}/\left(\phi_{m}^{\epsilon}\right)^{b}\right]}_{\textrm{no
sum in }m}(1-L_{m}),\qquad b=\frac{1}{2},$ (30)
maintains $\phi_{m}\gtrsim\phi_{m}^{\epsilon}$. The overbar denotes the same
filtering operation as applied to the LAD diffusivities. A user-specified
length scale, $\epsilon\approx\Delta x$, typically on the order of the grid
spacing, controls the equilibrium thickness of the diffuse interface. The
velocity scale, $\Gamma\approx u_{max}$, controls the timescale over which the
interface diffusion and interface sharpening terms drive the interface
thickness to equilibrium. The velocity scale for the out-of-bounds volume
fraction diffusivity is also specified by the user, with
$\Gamma^{*}\gtrsim\Gamma$. Volume fraction compatibility is enforced by
requiring that $\sum_{m=1}^{M}\left(a_{m}\right)_{k}=0$.
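In one dimension and for two phases, the flux of Eq. (26) takes a particularly simple form; the following sketch (out-of-bounds diffusion omitted for brevity) evaluates it and illustrates that the diffusion and sharpening contributions largely cancel for an equilibrium tanh profile, up to discretization error in the numerical gradient.

```python
import numpy as np

def divergence_form_flux(phi, dx, Gamma, eps_len, phi_eps=1e-6):
    """1D, two-phase sketch of the regularization flux a_1 of Eq. (26)
    (out-of-bounds diffusion omitted); a_2 = -a_1 enforces compatibility."""
    dphi = np.gradient(phi, dx)
    n_hat = dphi / (np.abs(dphi) + 1e-32)                 # interface normal, Eq. (29)
    in_band = (phi >= phi_eps) & (phi <= 1.0 - phi_eps)
    s = np.where(in_band, (phi - phi_eps) * (1.0 - phi_eps - phi), 0.0)  # Eq. (27)
    L = in_band.astype(float)                             # mask term, Eq. (28)
    a1 = Gamma * (eps_len * dphi - s * n_hat) * L
    return a1, -a1

# Usage: an equilibrium tanh profile of thickness-scale eps_len = dx
# yields a small residual flux.
x = np.linspace(0.0, 1.0, 101); dx = x[1] - x[0]
phi = 0.5 * (1.0 + np.tanh((x - 0.5) / (2.0 * dx)))
a1, a2 = divergence_form_flux(phi, dx, Gamma=1.0, eps_len=dx)
print(np.abs(a1).max())
```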
### 2.7 Quasi-conservative gradient-form approach to interface regularization
As with the divergence-form approach, the interface regularization in this
approach is achieved with the use of diffusion and sharpening terms that
balance each other. Therefore, this method also results in constant interface
thickness during the simulation. Shukla et al. (2010) discuss disadvantages
associated with the divergence-form approach due to the numerical
differentiation of the interface normal vector. The numerical error of these
terms can lead to interface distortion and grid imprinting due to the
anisotropy of the derivative scheme. Ideally, we would like to have a
regularization method that is conservative and that does not require any
numerical differentiation of the interface normal vector. However, starting
with the assumption of conservation, for nonzero regularization flux, we see
that numerical differentiation of the interface normal vector can only be
avoided in the limit that the divergence of the interface normal vector goes
to zero. This limit corresponds to the limit of zero interface curvature,
which cannot be avoided in multidimensional problems. Therefore, this
illustrates that a conservative method cannot be constructed for
multidimensional applications without requiring differentiation of the
interface normal vector; the non-conservative property (undesirable) of the
gradient-form approach is a necessary consequence of the circumvention of
interface-normal differentiation (desirable). This is demonstrated below, in
which the phase subscript has been dropped.
$\begin{gathered}\frac{\partial}{\partial x_{k}}\left\{\Gamma\left[\epsilon\frac{\partial\phi}{\partial x_{k}}+\phi\left(1-\phi\right)\hat{n}_{k}\right]\right\}\\ =\hat{n}_{k}\frac{\partial}{\partial x_{k}}\left\{\Gamma\epsilon\left\lvert\frac{\partial\phi}{\partial x_{j}}\right\rvert\right\}+\left\{\Gamma\epsilon\left\lvert\frac{\partial\phi}{\partial x_{j}}\right\rvert\right\}\frac{\partial\hat{n}_{k}}{\partial x_{k}}+\frac{\partial\Gamma\phi\left(1-\phi\right)}{\partial x_{k}}\hat{n}_{k}+\Gamma\phi\left(1-\phi\right)\frac{\partial\hat{n}_{k}}{\partial x_{k}}\\ \xrightarrow[\nabla\cdot\vec{\bm{n}}\,\rightarrow\,0]{}\;\hat{n}_{k}\frac{\partial}{\partial x_{k}}\left\{\Gamma\left[\epsilon\left\lvert\frac{\partial\phi}{\partial x_{j}}\right\rvert+\phi\left(1-\phi\right)\right]\right\},\end{gathered}$ (31)
in which the final expression is obtained in the limit of
$\nabla\cdot\vec{\bm{n}}\rightarrow 0$.
Following Shukla et al. (2010), we arrive at an implied volume fraction
transport equation for phase $m$, with the interface regularization volume
fraction term $\alpha_{m}$,
$\frac{\partial\phi_{m}}{\partial t}+u_{k}\frac{\partial\phi_{m}}{\partial
x_{k}}=\left(n_{m}\right)_{k}\frac{\partial\alpha_{m}}{\partial x_{k}}.$ (32)
Unlike the divergence-form approach, the gradient-form approach requires no
numerical differentiation of interface normal vectors, but it consequently
results in conservation error. Like the divergence-form approach, this volume
fraction transport equation is not directly solved, because the volume
fraction is closed during the pressure and temperature equilibration process
(Section 2.4), but the action of the volume fraction regularization term is
consistently incorporated into the system of equations for mass, momentum,
energy, and kinematic quantities through quasi-conservative coupling terms.
We employ an interface regularization term for each component mass transport
equation consistent with the interface regularization volume fraction term,
$J_{m}=\left(n_{m}\right)_{k}\frac{\partial\alpha_{m}\rho_{m}}{\partial
x_{k}},\qquad\textrm{with no sum on repeated }m.$ (33)
Because of the assumption of pressure and temperature equilibrium (volume
fraction is a derived variable—not an independent state variable), it is
important to form mass transport regularization terms consistently with the
desired volume fraction regularization terms. In the method of Tiwari et al.
(2013), the terms do not need to be fully consistent (e.g., the component
density is assumed to be slowly varying); the terms only need to produce
similar interface profiles in the limit of $\Gamma\rightarrow\infty$ (Shukla
et al., 2010), because the volume fraction is an independent state variable.
Following the assumption of Tiwari et al. (2013) that the velocity, specific
energy, and kinematic variables (but not the mixture density) vary slowly
across the interface, the stability of the method is improved by further
relaxing conservation of the coupled equations. For example, the consistent
regularization term for the momentum equation reduces to
$F_{i}=\sum_{m}\left(n_{m}\right)_{k}\frac{\partial\alpha_{m}\rho_{m}u_{i}}{\partial
x_{k}}\approx\sum_{m}\left(n_{m}\right)_{k}\frac{\partial\alpha_{m}\rho_{m}}{\partial
x_{k}}u_{i}.$ (34)
Similarly, the consistent regularization term for the energy equation reduces
to
$H=\sum_{m}\left(n_{m}\right)_{k}\frac{\partial}{\partial
x_{k}}\left[\alpha_{m}\rho_{m}\left(\frac{1}{2}u_{j}u_{j}+h_{m}\right)\right]\approx\sum_{m}\left(n_{m}\right)_{k}\frac{\partial\alpha_{m}\rho_{m}}{\partial
x_{k}}\left(\frac{1}{2}u_{j}u_{j}+h_{m}\right).$ (35)
As with the divergence method, consistent regularization terms for the
kinematic equations take the form
$K^{e}_{ij}=\frac{1}{3}\frac{1}{\rho}g^{e}_{ij}\sum_{m}J_{m}.$ (36)
No sharpening terms are required for the plastic deformation.
The volume fraction interface regularization term for phase $m$ is defined by
$\alpha_{m}=\Gamma\left(\underbrace{\epsilon\left\lvert\frac{\partial\phi_{m}}{\partial
x_{i}}\right\rvert}_{\begin{subarray}{c}\textrm{interface}\\\
\textrm{diffusion}\end{subarray}}-\underbrace{s_{m}}_{\begin{subarray}{c}\textrm{interface}\\\
\textrm{sharpening}\end{subarray}}\right)\mathscr{L}_{m},\qquad\textrm{with no
sum on repeated }m.$ (37)
The volume fraction out-of-bounds diffusion term employed in the divergence-
form approach (Eq. 26) is also active in the gradient-form approach. The
gradient-form discretization of this term (including an equivalent volume
fraction out-of-bounds term in Eq. 37) exhibits poor stability away from the
interface, whereas the divergence-form approach does not. Following Shukla et
al. (2010) and Tiwari et al. (2013), a necessary mask term blends the
interface regularization terms to zero as the volume fraction approaches the
specified minimum or maximum, thereby avoiding instability of the method away
from the interface, where the calculation of the surface normal vector may
behave spuriously and lead to compounding conservation error,
$\mathscr{L}_{m}=\begin{cases}\tanh\left[\left(\displaystyle\frac{s_{m}}{\phi_{m}^{\mathscr{L}}}\right)^{2}\right],&\text{for }\phi_{m}^{\epsilon}\leq\phi_{m}\leq 1-\sum_{\begin{subarray}{c}n=1\\ n\neq m\end{subarray}}^{M}\phi_{n}^{\epsilon}\\ 0,&\text{else}\end{cases},$ (38)
in which $\phi_{m}^{\mathscr{L}}\approx 1\times 10^{-2}$ is a user-specified
value controlling the mask blending function. Other variables are the same as
defined in the context of the divergence-form approach.
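A corresponding one-dimensional, two-phase sketch of the gradient-form term of Eq. (32), with $\alpha_{m}$ of Eq. (37) and the blending mask of Eq. (38), is given below.

```python
import numpy as np

def gradient_form_term(phi, dx, Gamma, eps_len, phi_eps=1e-6, phi_L=1e-2):
    """1D, two-phase sketch of the gradient-form regularization term
    n_k * d(alpha)/dx_k of Eq. (32)."""
    dphi = np.gradient(phi, dx)
    n_hat = dphi / (np.abs(dphi) + 1e-32)
    in_band = (phi >= phi_eps) & (phi <= 1.0 - phi_eps)
    s = np.where(in_band, (phi - phi_eps) * (1.0 - phi_eps - phi), 0.0)
    mask = np.where(in_band, np.tanh((s / phi_L) ** 2), 0.0)    # Eq. (38)
    alpha = Gamma * (eps_len * np.abs(dphi) - s) * mask         # Eq. (37)
    return n_hat * np.gradient(alpha, dx)                       # RHS of Eq. (32)
```

Applied to the equilibrium tanh profile of the previous sketch, the returned term is likewise small, and it vanishes identically outside the masked interface band.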
### 2.8 Numerical method
The equations are discretized on an Eulerian Cartesian grid. Time advancement
is achieved using a five-stage, fourth-order, Runge-Kutta method, with an
adaptive time step based on a Courant–Friedrichs–Lewy (CFL) condition. Other
than the interface regularization terms for the divergence-form approach, all
spatial derivatives are computed using a high-resolution, penta-diagonal,
tenth-order, compact finite-difference scheme described by Lele (1992). This
scheme is applied in the domain interior and near the boundaries in the cases
of symmetry, anti-symmetry, or periodic boundary conditions. Otherwise,
boundary derivatives are reduced to a fourth-order, one-sided, compact
difference scheme.
The interface sharpening and interface diffusion regularization terms in the
divergence-form approach are discretized using node-centered derivatives, for
which the fluxes to be differentiated are formed at the faces (staggered
locations); linear terms (e.g., $\phi_{i}$) are interpolated from the nodes to
the faces, where the nonlinear terms are formed [e.g.,
$\phi_{i+1/2}\left(1-\phi_{i+1/2}\right)$]. Here, we refer to the finite-
difference grid points as nodes. All variables are stored at the nodes
(collocated). If the nonlinear fluxes are not formed at the faces, poor
stability is observed for node-centered finite-difference schemes of both
compact and non-compact varieties due to the nonlinear interface sharpening
term (see Appendix A). A second-order scheme is used for discretization of the
interface regularization terms throughout this work, with an exception in
Section 3.1 where both second-order and sixth-order (non-compact)
discretization schemes are examined for these terms. The second-order scheme
recovers the finite-volume approach employed by Jain et al. (2020), whereas
the higher-order scheme provides increased resolution and formal accuracy;
however, discrete conservation is not guaranteed. The sixth-order explicit
scheme used to compute first derivatives from nodes to faces (or vice versa) is
$f^{\prime}_{i}=\frac{9}{384}\frac{f_{i+5/2}-f_{i-5/2}}{5h}-\frac{25}{128}\frac{f_{i+3/2}-f_{i-3/2}}{3h}+\frac{225}{192}\frac{f_{i+1/2}-f_{i-1/2}}{h},$
(39)
and the sixth-order scheme used to interpolate from nodes to faces (or vice
versa) is
$\hat{f}_{i}=\frac{3}{256}(f_{i+5/2}+f_{i-5/2})-\frac{25}{256}(f_{i+3/2}+f_{i-3/2})+\frac{75}{128}(f_{i+1/2}+f_{i-1/2}).$
(40)
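For reference, a minimal periodic NumPy implementation of Eqs. (39) and (40) follows (the face-to-node direction is shown, and the array layout is an assumption); differentiating $\sin x$ on $N=64$ points recovers $\cos x$ to roughly $10^{-9}$.

```python
import numpy as np

def ddx_face_to_node(F, h):
    """Sixth-order staggered first derivative, Eq. (39): face values F[j],
    located at x_{j+1/2}, differenced to the nodes (periodic in x)."""
    d1 = F - np.roll(F, 1)                    # f_{i+1/2} - f_{i-1/2}
    d3 = np.roll(F, -1) - np.roll(F, 2)       # f_{i+3/2} - f_{i-3/2}
    d5 = np.roll(F, -2) - np.roll(F, 3)       # f_{i+5/2} - f_{i-5/2}
    return (225.0 / 192.0 * d1 - 25.0 / 128.0 * d3 / 3.0
            + 9.0 / 384.0 * d5 / 5.0) / h

def interp_face_to_node(F):
    """Sixth-order staggered interpolation, Eq. (40), faces to nodes (periodic)."""
    s1 = F + np.roll(F, 1)
    s3 = np.roll(F, -1) + np.roll(F, 2)
    s5 = np.roll(F, -2) + np.roll(F, 3)
    return 75.0 / 128.0 * s1 - 25.0 / 256.0 * s3 + 3.0 / 256.0 * s5

# Usage: differentiate sin(x) on a periodic grid; the result approximates cos(x).
N = 64; h = 2.0 * np.pi / N
x_nodes = h * np.arange(N); x_faces = x_nodes + 0.5 * h
F = np.sin(x_faces)
print(np.abs(ddx_face_to_node(F, h) - np.cos(x_nodes)).max())   # ~1e-9
```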
The out-of-bounds diffusion is discretized using the tenth-order pentadiagonal
scheme for all interface regularization approaches.
A spatial dealiasing filter is applied after each stage of the Runge-Kutta
algorithm to each of the conservative and kinematic variables to remove the
top $10\%$ of the grid-resolvable wavenumber content, thereby mitigating
against aliasing errors and numerical instability in the high-wavenumber
range, which is not accurately resolved by the spatial derivative scheme. The
filter is computed using a high-resolution, penta-diagonal, eighth-order,
compact Padé filter, with cutoff parameters described by Ghaisas et al.
(2018).
## 3 Results
In this section, we present the simulation results and evaluate the
performance of the methods using classical test cases: (a) advection of an air
bubble in water, (b) shock interaction with a helium bubble in air, (c) shock
interaction and the collapse of an air bubble in water, and (d) the
Richtmyer–Meshkov instability (RMI) of a copper–aluminum interface. The
simulation test cases in the present study were carefully selected to assess:
(1) the conservation property of the method; (2) the accuracy of the method in
maintaining the interface shape; and (3) the ability of the method in
maintaining constant interface thickness throughout the simulation.
Some of these test cases have been extensively studied in the past and have
been used to evaluate the performance of various interface-capturing and
interface-tracking methods. Many studies look at these test cases to evaluate
the performance of the methods in the limit of very fine grid resolutions. For
example, a typical value of the grid size is on the order of hundreds of mesh
points across the diameter of a single bubble/droplet. However, for practical
application of these methods in the large-scale simulations of engineering
interest—where there are thousands of droplets, e.g., in an atomization
process—it is rarely affordable to use such fine grids to resolve a single
droplet/bubble. Therefore, in this study, we examine these methods in the
opposite limit of relatively coarse grid resolution. This limit is more
informative of the true performance of these methods for practical
applications. All three diffuse-interface capturing methods are implemented in
the PadéOps solver (Ghate et al., 2021) to facilitate fair comparison of the
methods with the same underlying numerical methods, thereby eliminating any
solver/implementation-related bias in the comparison.
The first test case (Section 3.1) is the advection of an air bubble in water.
This test case is chosen to evaluate the ability of the interface-capturing
method to maintain the interface shape for long-time numerical integration and
to examine the robustness of the method for high-density-ratio interfaces. It
is known that the error in evaluating the interface normal accumulates over
time and results in artificial alignment of the interface along the grid
(Chiodi and Desjardins, 2017, Tiwari et al., 2013). This behavior is examined
for each of the three methods. The second test case (Section 3.2) is the shock
interaction with a helium bubble in air. This test case is chosen to evaluate
the ability of the methods to conserve mass, to maintain constant interface
thickness throughout the simulation, and to examine the behavior of the under-
resolved features captured by the methods. The third test case (Section 3.3)
is the shock interaction with an air bubble in water. This test case is chosen
to evaluate the robustness of the method to handle strong-shock/high-density-
ratio interface interactions. The fourth test case (Section 3.4) is the RMI of
a copper–aluminum interface. This test case is chosen to illustrate the
applicability of the methods to simulate interfaces between solid materials
with strength, to examine the conservation properties of the methods in the
limit of high interfacial curvature, to examine the ability of the methods to
maintain constant interface thickness, and to assess the behavior of the
under-resolved features captured by the methods.
For all the problems in this work, the mass fractions are initialized using
the relations $Y_{1}=\phi_{1}\rho_{1}/\rho$ and $Y_{2}=1-Y_{1}$. To evaluate
the mass-conservation property of a method, the total mass, $m_{k}$, of the
phase $k$ is calculated as
$m_{k}=\int_{\Omega}\rho Y_{k}dv,$ (41)
where the integral is computed over the computational domain $\Omega$. To
evaluate the ability of a method to maintain constant interface thickness, we
define a new parameter—the interface-thickness indicator ($l$)—as
$l=\left(\frac{1}{\hat{n}\cdot\vec{\nabla}\phi}\right),$ (42)
and compute the maximum and average interface thicknesses in the domain, using
$l_{max}=\max_{0.45\leq\phi\leq 0.55}\left(l\right),\qquad l_{avg}=\left\langle l\right\rangle_{0.45\leq\phi\leq 0.55},$ (43)
respectively, where $\langle\cdot\rangle$ denotes an averaging operation. The
range for $\phi$ is used to ensure that the interface thickness is evaluated
around the $\phi=0.5$ isocontour because the quantity $l$ is most accurate in
this region and goes to $\infty$ as $\phi\rightarrow 0,1$. Note that,
occasionally, $l$ can become very large, within the region $0.45\leq\phi\leq
0.55$, when there is breakup due to the presence of a saddle point in the
$\phi$ field at the location of rupture. These unphysical values of $l$ show
up in the computed $l_{max}$ values and are removed during the post-processing
step by plotting a moving average of 5 local minima of $l_{max}$. The
unphysical values of $l$, on the other hand, have only a small effect on the
computed $l_{avg}$ values due to the averaging procedure.
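The following sketch evaluates the indicator of Eqs. (42) and (43) on a two-dimensional $\phi$ field (the moving-average post-processing of $l_{max}$ is omitted); for a tanh interface profile, $l\approx 4\epsilon$ at the $\phi=0.5$ isocontour, up to discretization error.

```python
import numpy as np

def interface_thickness(phi, dx):
    """Interface-thickness indicator l = 1/(n_hat . grad phi) of Eq. (42),
    which reduces to 1/|grad phi|, sampled near the phi = 0.5 isocontour."""
    gx, gy = np.gradient(phi, dx)
    l = 1.0 / (np.sqrt(gx**2 + gy**2) + 1e-32)
    band = (phi >= 0.45) & (phi <= 0.55)
    return l[band].mean(), l[band].max()      # l_avg and l_max of Eq. (43)

# Usage: a planar tanh interface of length scale eps.
x = np.linspace(0.0, 1.0, 201)
X, Y2 = np.meshgrid(x, x, indexing="ij")
eps = 0.01
phi = 0.5 * (1.0 + np.tanh((X - 0.5) / (2.0 * eps)))
print(interface_thickness(phi, x[1] - x[0]))
```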
### 3.1 Advection of an air bubble in water
This section examines advection of a circular air bubble in water. A one-
dimensional version of this test case has been extensively studied and used as
a robustness test for various diffuse-interface methods in Saurel and Abgrall
(1999), Allaire et al. (2002), Murrone and Guillard (2005), Johnsen and Ham
(2012), Saurel et al. (2009), Johnsen and Colonius (2006), Coralic and
Colonius (2014), Beig and Johnsen (2015), Capuano et al. (2018), and for a
THINC method in Shyue and Xiao (2014). Two-dimensional advection of a
bubble/drop has also been studied using weighted essentially non-oscillatory
(WENO) and targeted essentially non-oscillatory (TENO) schemes in Haimovich
and Frankel (2017).
In the current study, this test case is used to evaluate the ability of the
methods to maintain the interface shape over long-time integration and to test
their robustness for high-density-ratio interfaces. Both
phases are initialized with a uniform advection velocity. The problem domain
spans $\left(0\leq x\leq 1;\,0\leq y\leq 1\right)$, with periodic boundary
conditions in both dimensions. The domain is discretized on a uniform
Cartesian grid of size $N_{x}=100$ and $N_{y}=100$. The bubble has a radius of
$25/89$ and is initially placed at the center of the domain. The material
properties for the water medium used in this test case are $\gamma_{1}=4.4$,
$\rho_{1}=1.0$, ${p_{\infty}}_{1}=6\times 10^{3}$, $\mu_{1}=0$, and
${\sigma_{Y}}_{1}=0$. The material properties for the air medium used in this
test case are $\gamma_{2}=1.4$, $\rho_{2}=1\times 10^{-3}$,
${p_{\infty}}_{2}=0$, $\mu_{2}=0$, and ${\sigma_{Y}}_{2}=0$, where
$\gamma_{k},\rho_{k},{p_{\infty}}_{k},\mu_{k}$, and ${\sigma_{Y}}_{k}$ are the
ratio of specific heats, density, stiffening pressure, shear modulus, and
yield stress of phase $k$, respectively.
The initial conditions for the velocity, pressure, volume fraction, and
density are
$u=5,\quad v=0,\quad
p=1,\quad\phi_{1}=\phi_{1}^{\epsilon}+\left(1-2\phi_{1}^{\epsilon}\right)f_{\phi},\quad\phi_{2}=1-\phi_{1},\quad\rho=\phi_{1}\rho_{1}+\phi_{2}\rho_{2},$
(44)
respectively, in which the volume fraction function, $f_{\phi}$, is given by
$f_{\phi}=\frac{1}{2}\left\{1-\textrm{erf}\left[\frac{625/7921-\left(x-1/2\right)^{2}-\left(y-1/2\right)^{2}}{3\Delta x}\right]\right\}.$ (45)
For this problem, the interface regularization length scale and the out-of-
bounds velocity scale are defined by $\epsilon=\Delta x=1.0\times 10^{-2}$ and
$\Gamma^{*}=5.0$, respectively.
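For concreteness, a minimal NumPy sketch of this initialization, assuming a cell-centered uniform grid, follows Eqs. (44) and (45):

```python
import numpy as np
from scipy.special import erf

# Grid and interface parameters of Section 3.1.
Nx = Ny = 100
dx = 1.0 / Nx
x = (np.arange(Nx) + 0.5) * dx            # assumed cell-centered layout
X, Y = np.meshgrid(x, x, indexing="ij")
phi_eps = 1e-6

# Volume-fraction profile of Eq. (45) and initial fields of Eq. (44);
# the bubble radius is 25/89, so R^2 = 625/7921.
f_phi = 0.5 * (1.0 - erf((625.0 / 7921.0
                          - (X - 0.5)**2 - (Y - 0.5)**2) / (3.0 * dx)))
phi1 = phi_eps + (1.0 - 2.0 * phi_eps) * f_phi     # water
phi2 = 1.0 - phi1                                   # air
rho = phi1 * 1.0 + phi2 * 1.0e-3
u = 5.0 * np.ones_like(rho); v = np.zeros_like(rho); p = np.ones_like(rho)
Y1 = phi1 * 1.0 / rho; Y2 = 1.0 - Y1                # mass fractions (Section 3)
```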
The simulation is integrated for a total physical time of $t=1$ units, and the
bubble at this final time is shown in Figure 1, facilitating comparison among
the LAD, divergence-form, and gradient-form methods. All three methods perform
well and are stable for this high-density-ratio case. The consistent
regularization terms included in the momentum and energy equations are
necessary to maintain stability. The divergence-form approach results in
relatively faster shape distortion compared to the LAD and gradient-form
approaches. This shape distortion is due to the accumulation of error
resulting from numerical differentiation of the interface normal vector, which
is required in the divergence-form approach but not the other approaches. A
similar behavior of interface distortion was seen when the velocity was halved
and the total time of integration was doubled, thereby confirming that this
behavior is reproducible for a given flow-through time (results not shown).
FIGURE 1: Comparison of the final state of the bubble after five flow-through
times using (a) LAD approach, (b) divergence-form approach, and (c) gradient-
form approach. The three solid black lines denote the isocontours of the
volume fraction values of 0.1, 0.5, and 0.9, representing the interface
region. FIGURE 2: Comparison of the state of the bubble after five flow-
through times using the divergence-form approach with (a) second-order scheme
and (b) sixth-order scheme. The three solid black lines denote the isocontours
of the volume fraction values of 0.1, 0.5, and 0.9, representing the interface
region.
Two possible ways to mitigate the interface distortion are by refining the
grid or by using a higher-order scheme for the interface-regularization terms.
Because we are interested in the limit of coarse grid resolution, we study the
effect of using an explicit sixth-order finite-difference scheme to discretize
the interface regularization terms. As described in Section 2.8, finite-
difference schemes may be used to discretize the interface regularization
terms—without resulting in spurious behavior—if the nonlinear interface
sharpening and the counteracting diffusion terms are formed at the grid faces
(staggered locations), from which the derivatives at the grid points (nodes)
may be calculated. Comparing the second-order and sixth-order schemes for the
interface regularization terms of the divergence-form approach, the final
state of the advecting bubble is shown in Figure 2. The interface distortion
is significantly reduced using the sixth-order scheme.
Since the focus of the current work is on the evaluation of methods in a
relatively coarser grid, we repeat the simulation of advection of an air
bubble in water by scaling down the problem (in length and time) by a factor
of $10$ without changing the number of grid points. The domain length is kept
the same, but the new bubble radius is $2.5/89$, and the simulation is
integrated for a total physical time of $t=0.1$ units. At this resolution, the
bubble has $\approx 5$ grid points across its diameter, which represents a
more realistic scenario that is encountered in large-scale engineering
simulations.
The bubble at the final time is shown in Figure 3 for all three methods.
Similar to the more refined case above with the second-order finite-volume
scheme, the divergence-form approach results in relatively faster shape
distortion compared to the LAD method. In contrast, the gradient-form approach
results in an apparent complete loss of the bubble. Comparing this result with
the refined simulation in Figure 1, the observed mass loss on coarse grids
agrees with our hypothesis that the conservation error is proportional to the
local interface curvature and that under-resolved features are more prone to
being lost due to the non-conservative nature of the method.
This makes the gradient-form approach unsuitable for large-scale engineering
simulations, where it is only possible to afford a few grid points across a
bubble/drop. To quantify the amount of mass loss with the gradient-form
approach, the bubble mass is plotted against time for all three methods in
Figure 4. Interestingly, around $40\%$ of the bubble mass is lost early in the
simulation, after which the bubble mass saturates. This saturation occurs
because the portion of the bubble that remains after the fine features are
lost to conservation error is no longer under-resolved, and it therefore
persists in the domain.
FIGURE 3: Comparison of the final state of the bubble on a coarse grid after
five flow-through times using (a) LAD approach, (b) divergence-form approach,
and (c) gradient-form approach. The two solid black lines denote the
isocontours of the volume fraction values of 0.5 and 0.9, representing the
interface region. FIGURE 4: Plot of total mass, $m$, of the bubble by various
methods, where $m_{0}$ is the mass at time $t=0$.
### 3.2 Shock interaction with a helium bubble in air
This section examines the classic test case of a shock wave traveling through
air followed by an interaction with a stationary helium bubble. This case has
been extensively studied using various numerical methods and models, such as a
front-tracking method in Terashima and Tryggvason (2009); an arbitrary-
Lagrangian-Eulerian (ALE) method in Daude et al. (2014); an anti-diffusion
interface-capturing method in So et al. (2012); a ghost-fluid method (GFM) in
Fedkiw et al. (1999), Bai and Deng (2017); a LAD diffuse-interface approach in
Cook (2009); a gradient-form diffuse-interface approach in Shukla et al.
(2010), and other diffuse-interface methods that implicitly capture the
interface (no explicit interface-capturing method) using a WENO scheme in
Johnsen and Colonius (2006), Coralic and Colonius (2014), using a TENO scheme
in Haimovich and Frankel (2017), and using a WCNS scheme in Wong and Lele
(2017). This test case has also been simulated with an adaptive-mesh-
refinement technique in Quirk and Karni (1996) where a refined grid is used
around the interface to improve the accuracy. More recently, this test case
has also been studied in a three-dimensional setting in Deng et al. (2018).
To examine the interface regularization methods, we model this problem without
physical species diffusion; therefore, the interface regularization methods
for immiscible phases are applicable, because no physical molecular mixing
should be exhibited by the underlying numerical model. The use of immiscible
interface-capturing methods to model the interface between the gases in this
problem is also motivated by the experiments of Haas and Sturtevant (1987). In
these experiments, the authors use a thin plastic membrane to prevent
molecular mixing of helium and air.
The problem domain spans $\left(-2\leq x\leq 4;\,0\leq y\leq 1\right)$, with
periodic boundary conditions in the $y$ direction. A symmetry boundary is
applied at $x=4$, representing a perfectly reflecting wall, and a sponge
boundary condition is applied over $\left(-2\leq x\leq-1.5\right)$, modeling a
non-reflecting free boundary. The problem is discretized on a uniform
Cartesian grid of size $N_{x}=600$ and $N_{y}=100$. The bubble has a radius of
$25/89$ and is initially placed at the location $(x=0,y=1/2)$. The material
properties for the air medium are described by $\gamma_{1}=1.4$,
$\rho_{1}=1.0$, ${p_{\infty}}_{1}=0$, $\mu_{1}=0$, and ${\sigma_{Y}}_{1}=0$.
The material properties for the helium medium are described by
$\gamma_{2}=1.67$, $\rho_{2}=0.138$, ${p_{\infty}}_{2}=0$, $\mu_{2}=0$, and
${\sigma_{Y}}_{2}=0$.
The initial conditions for the velocity, pressure, volume fraction, and
density are
$\begin{gathered}u=u^{(2)}f_{s}+u^{(1)}\left(1-f_{s}\right),\quad v=0,\quad p=p^{(2)}f_{s}+p^{(1)}\left(1-f_{s}\right),\\ \phi_{1}=\phi_{1}^{\epsilon}+\left(1-2\phi_{1}^{\epsilon}\right)f_{\phi},\quad\phi_{2}=1-\phi_{1},\quad\rho=\left(\phi_{1}\rho_{1}+\phi_{2}\rho_{2}\right)\left[\rho^{(2)}/\rho^{(1)}f_{s}+\left(1-f_{s}\right)\right],\end{gathered}$ (46)
respectively, in which the volume fraction function, $f_{\phi}$, and the shock
function, $f_{s}$, are given by
$f_{\phi}=\frac{1}{2}\left\{1-\textrm{erf}\left[\frac{625/7921-x^{2}-\left(y-1/2\right)^{2}}{\Delta x}\right]\right\}\quad\textrm{and}\quad f_{s}=\frac{1}{2}\left[1-\textrm{erf}\left(\frac{x+1}{2\Delta x}\right)\right],$ (47)
respectively, with jump conditions across the shock for velocity
$\left(u^{(1)}=0;\,u^{(2)}=0.39473\right)$, density
$\left(\rho^{(1)}=1\right.$, $\left.\rho^{(2)}=1.3764\right)$, and pressure
$\left(p^{(1)}=1;\,p^{(2)}=1.5698\right)$. For this problem, the interface
regularization length scale and the out-of-bounds velocity scale are defined
by $\epsilon=\Delta x=0.01$ and $\Gamma^{*}=2.5$, respectively.
FIGURE 5: Comparison of the bubble shapes at different times for the case of
the shock/helium-bubble-in-air interaction using various interface-capturing
methods. The three solid black lines denote the isocontours of the volume
fraction values of 0.1, 0.5, and 0.9, representing the interface region.
FIGURE 6: Comparison of the interface-thickness indicator, $l$, by various
methods, where $l_{0}$ is the maximum interface thickness at time $t=0$. (a)
Average interface thickness $l_{avg}$. (b) Maximum interface thickness
$l_{max}$. To exclude unphysical spikes in $l_{max}$ during breakup events, a
moving average of 5 local minima of $l_{max}$ is plotted.
The interaction of the shock with the helium bubble and the eventual breakup
of the bubble are shown in Figure 5, with depictions of the evolution at
various times, for LAD, divergence-form, and gradient-form approaches. The
bubble can be seen to undergo breakup at an approximate (non-dimensional) time
of $t=2.5$. After this time, the simulation cannot be considered physical
because of the under-resolved processes associated with the breakup and the
lack of explicit subgrid models for these processes; each interface
regularization approach treats the under-resolved processes differently.
Therefore, there is no consensus on the final shape of the bubble among the
three methods. Yet, a qualitative comparison between the three methods can
still be made using the results presented in Figure 5.
Using the LAD approach, the interface diffuses excessively in the regions of
high shear, unlike the divergence-form and gradient-form approaches, where the
interface thickness is constant throughout the simulation. However, using the
LAD approach, the interface remains sharp in the regions where there is no
shearing. To quantify the amount of interface diffusion, the interface-
thickness indicator [$l$ of Eq. (43)] is plotted in Figure 6 for the three
methods. The average thickness, $l_{avg}$, increases slightly for the LAD
method around $t/\tau\approx 2$, but the maximum interface thickness,
$l_{max}$, increases almost $15$ times for the LAD method, whereas it remains
on the order of one for the other two methods. This demonstrates a deficiency
of the LAD approach for problems that involve significant shearing at an
interface that is not subjected to compression.
Furthermore, the behavior of bubble breakup is significantly different among
the various methods. Depending on the application, any one of these methods
may or may not result in an appropriate representation of the under-resolved
processes. However, for the current study that involves modeling interfaces
between immiscible fluids, the grid-induced breakup of the divergence-form
approach may be more suitable than the diffusion of the fine structures in the
LAD approach or the premature loss of fine structures and associated
conservation error of the gradient-form approach. For the LAD approach, the
thin film formed at around time $t=2.1$ does not break; rather, it evolves
into a thin region of well-mixed fluid. This behavior may be considered
unphysical for two immiscible fluids, for which the physical interface is
infinitely sharp in a continuum sense; this behavior would be more appropriate
for miscible fluids. For the divergence-form approach, the thin film forms
satellite bubbles, which is expected when there is a breakage of a thin
ligament between droplets or bubbles due to surface-tension effects. However,
this breakup may not be considered completely physical without any surface-
tension forces, because the breakup is triggered by the lack of grid support.
For the gradient-form approach, the thin film formed at around time $t=2.1$
breaks prematurely and disappears with no formation of satellite bubbles, and
the mass of the film is lost to conservation error.
In Figure 2 of Shukla et al. (2010), without the use of interface
regularization terms, the interface thickness is seen to increase
significantly. Their approach without interface regularization terms is most
similar to our LAD approach, because the LAD approach does not include any
sharpening terms. Therefore, comparing these results suggests that the
thickening of the interface in their case was due to the use of the more
dissipative Riemann-solver/reconstruction scheme. The results from the
gradient-form approach also match well with the results of the similar method
shown in Figure 2 of Shukla et al. (2010), which further verifies our
implementation. Finally, there is no consensus on the final shape of the
bubble among the three methods, which is to be expected, because there are no
surface-tension forces and the breakup is triggered by the lack of grid
resolution.
FIGURE 7: Plot of total mass, $m$, of the helium bubble by various methods,
where $m_{0}$ is the mass at time $t=0$.
To further quantify the amount of mass lost or gained, the total mass of the
bubble is computed using Eq. (41) and is plotted over time in Figure 7. The
mass of the bubble is conserved for the LAD and divergence-form approaches,
but is not conserved for the gradient-form approach, as expected. The loss of
mass is observed to be largest when the bubble is about to break, for the
gradient form approach. This is because the mass-conservation error in the
gradient-form approach is proportional to the local curvature, as described in
Section 2.7. Therefore, at the onset of breakup, the rupture of the thin film differs from that in the other two methods, and the satellite bubbles are absent.
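For reference, the mass diagnostic amounts to a quadrature of the phase partial density over the domain. The following one-line sketch assumes Eq. (41) corresponds to integrating $\phi\,\rho_{\mathrm{phase}}$ over the domain with a midpoint rule; the precise integrand is given by Eq. (41).

```python
import numpy as np

def phase_mass(phi, rho_phase, dx, dy):
    """Total mass of one phase on a uniform 2D grid: a midpoint quadrature of
    the phase partial density (assumes Eq. (41) integrates phi * rho_phase)."""
    return float(np.sum(phi * rho_phase) * dx * dy)
```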
### 3.3 Shock interaction with an air bubble in water
This section examines a shock wave traveling through water followed by an
interaction with a stationary air bubble. The material properties are the same
as those described in Section 3.1. This test case is based on the experiments
in Bourne and Field (1992) and has been widely used as a validation case for
various numerical methods and models such as a front-tracking method in
Terashima and Tryggvason (2009); a level-set method in Hu and Khoo (2004),
Nourgaliev et al. (2006); a ghost-fluid method in Bai and Deng (2017); a
volume-of-fluid method in Bo and Grove (2014); an implicit diffuse-interface
method with a Godunov scheme in Ansari and Daramizadeh (2013) and with a TENO scheme in Haimovich and Frankel (2017); and a gradient-form diffuse-interface approach in Shukla et al. (2010) and Shukla (2014).
The initial conditions for the velocity, pressure, volume fraction, and
density are
$\begin{gathered}u=u^{(2)}f_{s}+u^{(1)}\left(1-f_{s}\right),\quad v=0,\quad p=p^{(2)}f_{s}+p^{(1)}\left(1-f_{s}\right),\\ \phi_{1}=\phi_{1}^{\epsilon}+\left(1-2\phi_{1}^{\epsilon}\right)f_{\phi},\quad\phi_{2}=1-\phi_{1},\quad\rho=\left(\phi_{1}\rho_{1}+\phi_{2}\rho_{2}\right)\left[\rho^{(2)}/\rho^{(1)}f_{s}+\left(1-f_{s}\right)\right]\end{gathered}$ (48)
respectively, in which the volume fraction function, $f_{\phi}$, and the shock
function, $f_{s}$, are given by
$f_{\phi}=\frac{1}{2}\left\{1-\textrm{erf}\left[\frac{1-\left(x-2.375\right)^{2}-\left(y-2.5\right)^{2}}{\Delta x}\right]\right\}\quad\textrm{and}\quad f_{s}=\frac{1}{2}\left[1-\textrm{erf}\left(\frac{x+1}{10\Delta x}\right)\right],$ (49)
respectively, with jump conditions across the shock for velocity
$\left(u^{(1)}=0;\,u^{(2)}=68.5176\right)$, density
$\left(\rho^{(1)}=1\right.$, $\left.\rho^{(2)}=1.32479\right)$, and pressure
$\left(p^{(1)}=1;\,p^{(2)}=19150\right)$. The problem domain spans
$\left(-2\leq x\leq 8;\,0\leq y\leq 5\right)$, with periodic boundary
conditions in the $y$ direction. A symmetry boundary is applied at $x=8$,
representing a perfectly reflecting wall, and a sponge boundary condition is
applied over $\left(-2\leq x\leq-1.5\right)$, modeling a non-reflecting free
boundary. The problem is discretized on a uniform Cartesian grid of size
$N_{x}=400$ and $N_{y}=200$.
For this problem, the artificial bulk viscosity, artificial thermal
conductivity, artificial diffusivity, interface regularization length scale,
interface regularization velocity scale, and out-of-bounds velocity scale are
defined by $C_{\beta}=20$, $C_{\kappa}=0.1$, $C_{D}=20$, $\epsilon=\Delta
x=2.5\times 10^{-2}$, $\Gamma=2.0$, and $\Gamma^{*}=0.0$, respectively. A
fourth-order, penta-diagonal, Padé filter is employed for dealiasing in this
problem to improve the stability of the shock/bubble interaction. The linear
system defining this filter is given by
$\hat{f}_{i}+\alpha(\hat{f}_{i+1}+\hat{f}_{i-1})+\frac{1-2\alpha}{14}(\hat{f}_{i+2}+\hat{f}_{i-2})=\frac{4+6\alpha}{7}f_{i}+\frac{2+3\alpha}{7}(f_{i+1}+f_{i-1})$
(50)
where $\alpha=0.499$.
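Because Eq. (50) couples each filtered value to its neighbors, applying the filter requires a banded (pentadiagonal) solve. Under periodic boundaries the system is circulant and can be inverted exactly in Fourier space; a minimal NumPy sketch follows (the periodic treatment is an assumption for illustration, and the actual boundary closures may differ).

```python
import numpy as np

def pade_filter(f, alpha=0.499):
    """Apply the compact filter of Eq. (50), assuming periodic boundaries so
    that the pentadiagonal system is circulant and diagonal in Fourier space."""
    theta = 2.0 * np.pi * np.fft.fftfreq(f.size)   # phase angle per mode
    lhs = 1.0 + 2.0 * alpha * np.cos(theta) + (1.0 - 2.0 * alpha) / 7.0 * np.cos(2.0 * theta)
    rhs = (4.0 + 6.0 * alpha) / 7.0 + 2.0 * (2.0 + 3.0 * alpha) / 7.0 * np.cos(theta)
    return np.fft.ifft(rhs / lhs * np.fft.fft(f)).real
```

At $\alpha=0.499$, the resulting transfer function equals unity at zero wavenumber and vanishes at the Nyquist wavenumber, so the filter removes only the highest-frequency content.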
Notably, for this problem, the LAD terms in the mass equations must also be included in the divergence-form and gradient-form approaches to maintain stability. The latter approaches become unstable for this problem for large
$\Gamma$ (the velocity scale for interface regularization). Figure 8 describes
the evolution in time of the shock/bubble interaction and the subsequent
bubble collapse. There is no significant difference between the various
regularization methods for this problem. The similarity is due to the short
convective timescale of the flow relative to the maximum stable timescale of
the volume fraction regularization methods; effectively, all methods remain
qualitatively similar to the LAD approach.
FIGURE 8: Comparison of the bubble shapes at different times for the case of
shock/air-bubble-in-water interaction using various interface-capturing
methods. The three solid black lines denote the isocontours of the volume
fraction values of 0.1, 0.5, and 0.9, representing the interface region.
### 3.4 Richtmyer–Meshkov instability of a copper–aluminum interface
FIGURE 9: Comparison of the copper–aluminum interface shapes at different
times for the Cu-Al RMI case using various interface-capturing methods. The
three solid black lines denote the isocontours of the volume fraction values
of 0.1, 0.5, and 0.9, representing the interface region.
FIGURE 10: Comparison of the interface-thickness indicator, $l$, by various
methods, where $l_{0}$ is the maximum interface thickness at time $t=0$. (a)
Average interface thickness $l_{avg}$. (b) Maximum interface thickness
$l_{max}$. To exclude unphysical spikes in $l_{max}$ during breakup events, a
moving average of 5 local minima of $l_{max}$ is plotted.
This section examines a shock wave traveling through copper followed by an
interaction with a sinusoidally distorted copper–aluminum material interface.
Though this problem has not been as widely studied as the previous examples,
it is included to demonstrate how interface regularization methods perform
when extended to problems involving elastic-plastic deformation at material
interfaces. Such deformation may arise in impact welding, where interfacial
instabilities are known to develop as metal plates impact and shear (Nassiri
et al., 2016), as well as in material characterization at high strain rates, which typically employs a metal–gas configuration of the Richtmyer–Meshkov instability (Dimonte et al., 2011). The copper–aluminum variant of this
problem was previously studied by Lopez Ortega (2013), who used a level-set
method combined with the modified ghost-fluid method to set boundary
conditions at material interfaces. This problem was also studied by
Subramaniam et al. (2018) and Adler and Lele (2019), and the results presented
here are an extension of that work.
The problem domain spans $\left(-2\leq x\leq 4;\,0\leq y\leq 1\right)$, with
periodic boundary conditions in the $y$ direction. A symmetry boundary is
applied at $x=4$, representing a perfectly reflecting wall, and a sponge
boundary condition is applied over $\left(-2\leq x\leq-1.5\right)$, modeling a
non-reflecting free boundary. The problem is discretized on a uniform
Cartesian grid of size $N_{x}=768$ and $N_{y}=128$. The material properties
for the copper medium are described by $\gamma_{1}=2.0$, $\rho_{1}=1.0$,
${p_{\infty}}_{1}=1.0$, $\mu_{1}=0.2886$, and ${\sigma_{Y}}_{1}=8.79\times
10^{-4}$. The material properties for the aluminum medium are described by
$\gamma_{2}=2.088$, $\rho_{2}=0.3037$, ${p_{\infty}}_{2}=0.5047$,
$\mu_{2}=0.1985$, and ${\sigma_{Y}}_{2}=2.176\times 10^{-3}$.
The initial conditions for the velocity, pressure, volume fraction, and
density are
$\begin{gathered}u=u^{(2)}f_{s}+u^{(1)}\left(1-f_{s}\right),\quad v=0,\quad p=p^{(2)}f_{s}+p^{(1)}\left(1-f_{s}\right),\\ \phi_{1}=\phi_{1}^{\epsilon}+\left(1-2\phi_{1}^{\epsilon}\right)f_{\phi},\quad\phi_{2}=1-\phi_{1},\quad\rho=\left(\phi_{1}\rho_{1}+\phi_{2}\rho_{2}\right)\left[\rho^{(2)}/\rho^{(1)}f_{s}+\left(1-f_{s}\right)\right]\end{gathered}$ (51)
respectively, in which the volume fraction function, $f_{\phi}$, and the shock
function, $f_{s}$, are given by
$f_{\phi}=\frac{1}{2}\left(1-\textrm{erf}\left\{\frac{x-\left[2+0.4/\left(4\pi y\right)\sin\left(4\pi y\right)\right]}{3\Delta x}\right\}\right)\quad\textrm{and}\quad f_{s}=\frac{1}{2}\left[1-\textrm{erf}\left(\frac{x-1}{2\Delta x}\right)\right],$ (52)
respectively, with jump conditions across the shock for velocity
$\left(u^{(1)}=0;\,u^{(2)}=0.68068\right)$, density
$\left(\rho^{(1)}=1\right.$, $\left.\rho^{(2)}=1.4365\right)$, and pressure
$\left(p^{(1)}=5\times 10^{-2};\,p^{(2)}=1.25\right)$. The kinematic tensors
are initialized in a pre-strained state consistent with the material
compression associated with shock initialization, assuming no plastic
deformation has yet occurred, with
$g_{ij}=g^{e}_{ij}=\begin{cases}\left[\rho^{(2)}f_{s}+\rho^{(1)}\left(1-f_{s}\right)\right]/\rho_{1},&\text{for }i=j=1\\ \delta_{ij},&\text{else}\end{cases}\qquad\textrm{and}\qquad g^{p}_{ij}=\delta_{ij}.$ (53)
For this problem, the interface regularization length scale and the out-of-
bounds velocity scale for the divergence-form method are defined by $\epsilon=\Delta x/2=3.90625\times 10^{-3}$ and $\Gamma^{*}=1.0$, respectively. For the gradient-form method, it is necessary for stability to
use $\epsilon=3\Delta x/4=5.859375\times 10^{-3}$, $\Gamma^{*}=1.0$, and
$\phi_{min}=1\times 10^{-5}$.
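The pre-strained initialization of Eq. (53) sets only the $g_{11}$ component away from identity, following the shock profile of Eq. (52). A minimal NumPy sketch follows; the grid construction is an assumption for illustration, while the density states and $\rho_{1}=1$ are those quoted in the text.

```python
import numpy as np
from scipy.special import erf

def init_kinematic_tensors(nx=768, ny=128, rho1s=1.0, rho2s=1.4365):
    """Pre-strained kinematic tensors of Eq. (53) on the RMI grid (a sketch)."""
    dx = 6.0 / nx                                    # domain spans -2 <= x <= 4
    x = -2.0 + (np.arange(nx) + 0.5) * dx
    f_s = 0.5 * (1.0 - erf((x - 1.0) / (2.0 * dx)))  # shock function of Eq. (52)
    g = np.tile(np.eye(2), (nx, ny, 1, 1))           # g_ij = delta_ij everywhere...
    g[..., 0, 0] = (rho2s * f_s + rho1s * (1.0 - f_s))[:, None] / 1.0  # ...except g_11 (rho_1 = 1)
    gp = np.tile(np.eye(2), (nx, ny, 1, 1))          # plastic part: identity
    return g, g.copy(), gp                           # g, g^e (equal to g initially), g^p
```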
The time evolution of the growth of the interface instability is shown in
Figure 9. The simulation is integrated well into the nonlinear regime where
the bubble (lighter medium) and the spike (heavier medium) have
interpenetrated, forming mushroom-shaped structures with fine ligaments. The
qualitative comparison between the methods in this test case is similar to
that of the shock-helium-bubble interaction in air. With the LAD approach, the
interface thickness increases with time, especially in the regions of high
shear at the later stages. However, with the divergence-form and gradient-form
approaches, the interface thickness is constant throughout the simulation.
This is quantified by plotting the interface-thickness indicator [$l$ of Eq.
(43)] for each of the three methods in Figure 10. The average thickness, shown in Figure 10(a), drops sharply at $t/\tau\approx 0.5$ when
the shock passes through the interface. After this, the thickness remains
small for both the gradient and divergence form methods, whereas with LAD the
interface thickness grows gradually after $t/\tau\approx 2$, when the
interface begins to roll up. Figure 10(b) shows the maximum interface
thickness, which increases almost $60$ times for the LAD method, whereas it
stays on the order of one for the other two methods. This illustrates that the
LAD method incurs significant artificial diffusion when the interface
deformation cannot be resolved by the grid.
FIGURE 11: Plot of total mass, $m$, of aluminum by various methods, where
$m_{0}$ is the mass at time $t=0$.
It is also evident from Figure 9 that the gradient-form approach results in
significant copper mass loss, and the dominant mushroom structure formed in
the nonlinear regime is completely lost. To quantify the amount of mass lost
or gained, the total mass of the aluminum material [Eq. (41)] is plotted
against time in Figure 11. The gradient-form approach results in significant
gain in the mass of the aluminum material, up to $20\%$, as the grid can no
longer resolve the increased interface curvature during roll-up. This makes it
practically unsuitable for accurate interface representation for long-time
numerical simulations. With the divergence-form approach, the breakage of the
ligaments to form metallic droplets can be seen in Figure 9.
## 4 Summary and concluding remarks
This work examines three diffuse-interface-capturing methods and evaluates
their performance for the simulation of immiscible compressible multiphase
fluid flows and elastic-plastic deformation in solids. The first approach is
the localized-artificial-diffusivity method of Cook (2007), Subramaniam et al.
(2018), and Adler and Lele (2019), in which artificial diffusion terms are
added to the individual phase mass fraction transport equations and are
coupled with the other conservation equations. The second approach is the
gradient-form approach that is based on the quasi-conservative method of
Shukla et al. (2010). In this method, the diffusion and sharpening terms
(together called regularization terms) are added to the individual phase
volume fraction transport equations and are coupled with the other
conservation equations (Tiwari et al., 2013). The third approach is the
divergence-form approach that is based on the fully conservative method of
Jain et al. (2020). In this method, the diffusion and sharpening terms are
added to the individual phase volume fraction transport equations and are
coupled with the other conservation equations. In the present study, all of
these interface regularization methods are used in conjunction with a four-
equation, multicomponent mixture model, in which pressure and temperature
equilibria are assumed among the various phases. The latter two interface
regularization methods are commonly used in the context of a five-equation
model, in which temperature equilibrium is not assumed.
The primary objective of this work is to compare these three methods in terms
of their ability to: maintain constant interface thickness throughout the
simulation; conserve mass of each of the phases, mixture momentum, and total
energy; and maintain accurate interface shape for long-time integration. The
secondary objective of this work is to extend these methods for modeling the
interface between deforming solid materials with strength. The LAD method has
previously been used for simulating material interfaces between solids with
strength (Subramaniam et al., 2018, Adler and Lele, 2019). Here, we introduce
consistent corrections in the kinematic equations for the divergence-form and
the gradient-form approaches to extend these methods for the simulation of
interfaces between solids with strength.
Method | Conservation | Sharp interface | Shape preservation | Behavior of under-resolved ligaments and breakup
LAD | Yes | No (interface diffuses in the regions of high shear) | Yes | Artificial diffusion (fine-scale features artificially diffuse as they approach unresolved scales)
Divergence form | Yes | Yes | No (interface aligns with the grid) | Artificial breakup (fine-scale features artificially break up as they approach unresolved scales)
Gradient form | No (under-resolved features will be lost) | Yes | Yes | Artificial loss of mass (fine-scale features are lost, due to conservation error, as they approach unresolved scales)
TABLE 1: Summary of the advantages and disadvantages of the three diffuse-
interface capturing methods considered in this study: LAD method based on Cook
(2007), Subramaniam et al. (2018), and Adler and Lele (2019); divergence-form
approach based on Jain et al. (2020); and the gradient-form approach based on
Shukla et al. (2010) and Tiwari et al. (2013). The relative disadvantages of
each approach and the different behaviors of under-resolved processes are
underlined.
We employ several test cases to evaluate the performance of the methods,
including (1) advection of an air bubble in water, (2) shock interaction with
a helium bubble in air, (3) shock interaction and the collapse of an air
bubble in water, and (4) Richtmyer–Meshkov instability of a copper–aluminum
interface. For the application of these methods to large-scale simulations of
engineering interest, it is rarely practical to use hundreds of grid points to
resolve the diameter of a bubble/drop. Therefore, we choose to study the limit
of relatively coarse grid resolution, which is more representative of the true
performance of these methods.
The performance of the three methods is summarized in Table 1. The LAD and the
divergence-form approaches conserve mass, momentum, and energy, whereas the
gradient-form approach does not. The mass-conservation error increases
proportionately with the local interface curvature; therefore, fine
interfacial structures will be lost during the simulation. The divergence-form
and the gradient-form approaches maintain a constant interface thickness
throughout the simulation, whereas the interface thickness of the LAD method
increases in the regions of high shear due to the lack of interface sharpening
terms to counter the artificial diffusion. The LAD and the gradient-form
approaches maintain the interface shape for a long time compared to the
divergence-form approach; however, the interface distortion of the divergence-
form approach can be mitigated with the use of appropriately crafted higher-
order schemes for the interface regularization terms.
For each method, the behavior of under-resolved ligaments and breakup features
is unique. For the LAD approach, thin ligaments that form at the onset of
bubble breakup (or in late-stage RMI) diffuse instead of rupturing. For the
gradient-form approach, the ligament formation is not captured because of
mass-conservation issues, which result in premature loss of these fine-scale
features. For the divergence-form approach, the ligaments rupture due to the
lack of grid support, acting like an artificial surface tension force that
becomes significant at the grid scale.
For broader applications, the optimal method depends on the objectives of the
study. These applications include (1) well-resolved problems, in which
differences in the behavior of under-resolved features are not of concern, (2)
applications involving interfaces between miscible phases, and (3)
applications involving more complex physics, including regimes in which
surface tension or molecular diffusion must be explicitly modeled and problems
in which phase changes occur. We intend this demonstration of the advantages, disadvantages, and (albeit unphysical) behavior of under-resolved phenomena exhibited by the various methods to be helpful in the selection of an
interface-regularization method. These results also provide motivation for the
development of subgrid models for multiphase flows.
## Acknowledgments
S. S. J. was supported by a Franklin P. and Caroline M. Johnson Stanford
Graduate Fellowship. M. C. A. and J. R. W. appreciate the sponsorship of the
U.S. Department of Energy Lawrence Livermore National Laboratory under
contract DE-AC52-07NA27344 (monitor: Dr. A. W. Cook). The authors also acknowledge
the Predictive Science Academic Alliance Program III at Stanford University. A
preliminary version of this work has been published (Adler et al., 2020, Jain
et al., 2020) as the Center for Turbulence Research Annual Research Briefs
(CTR-ARB) and is available online (http://web.stanford.edu/group/ctr/ResBriefs/2020/32_Adler.pdf and http://web.stanford.edu/group/ctr/ResBriefs/2020/33_Jain.pdf).
S. S. J., M. C. A., and J. R. W. are thankful to Dr. Kazuki Maeda for reviewing the CTR-ARBs and for his useful comments, which helped improve the final version of the article.
## Appendix A: Finite-difference operators for the divergence-form approach
The test case of shock interaction with a helium bubble in air is repeated for
the divergence-form approach with the same parameters listed in Section 3.2.
Here, the difference is in the numerical representation of the nonlinear
interface-regularization terms. In Section 3.2, the interface-regularization
fluxes are formed at the faces, as described in Section 2.8, which is
consistent with the finite-volume implementation in Jain et al. (2020).
Here, by contrast, a standard second-order central finite-difference scheme is used.
The shock interaction with the helium bubble in air and the subsequent
evolution of the bubble shape are shown in Figure 12. An unphysical wrinkling
of the interface can be seen at the later stages of the bubble deformation.
This behavior is consistent with the observations made by Shukla et al.
(2010), which motivated them to develop the gradient-form approach. However, Jain et al. (2020) showed that discretizing the fluxes at the faces results in a discrete balance between the diffusion and sharpening fluxes, thereby eliminating the spurious wrinkling of the interface, as can be seen in Figure 5. In this work, this face-evaluated flux formulation has been successfully extended to higher-order schemes and is presented in Section 2.8.
FIGURE 12: The bubble shapes at different times for the case of the
shock/helium-bubble-in-air interaction using the divergence-form approach,
where the interface-regularization terms are discretized using a second-order
standard central finite-difference scheme. The three solid black lines denote
the isocontours of the volume fraction values of 0.1, 0.5, and 0.9,
representing the interface region.
## References
* Cook (2007) A. W. Cook, Artificial fluid properties for large-eddy simulation of compressible turbulent mixing, Physics of Fluids 19 (2007) 055103.
* Subramaniam et al. (2018) A. Subramaniam, N. S. Ghaisas, S. K. Lele, High-order Eulerian simulations of multimaterial elastic–plastic flow, Journal of Fluids Engineering 140 (2018) 050904.
* Adler and Lele (2019) M. C. Adler, S. K. Lele, Strain-hardening framework for Eulerian simulations of multi-material elasto-plastic deformation, in: Center for Turbulence Research Annual Research Briefs 2019, Stanford University, 2019.
* Shukla et al. (2010) R. K. Shukla, C. Pantano, J. B. Freund, An interface capturing method for the simulation of multi-phase compressible flows, Journal of Computational Physics 229 (2010) 7411–7439.
* Tiwari et al. (2013) A. Tiwari, J. B. Freund, C. Pantano, A diffuse interface model with immiscibility preservation, Journal of Computational Physics 252 (2013) 290–309.
* Jain et al. (2020) S. S. Jain, A. Mani, P. Moin, A conservative diffuse-interface method for compressible two-phase flows, Journal of Computational Physics (2020) 109606.
* Kataoka (1986) I. Kataoka, Local instant formulation of two-phase flow, International Journal of Multiphase Flow 12 (1986) 745–758.
* Shyue (1998) K.-M. Shyue, An efficient shock-capturing algorithm for compressible multicomponent problems, Journal of Computational Physics 142 (1998) 208–242.
* Venkateswaran et al. (2002) S. Venkateswaran, J. W. Lindau, R. F. Kunz, C. L. Merkle, Computation of multiphase mixture flows with compressibility effects, Journal of Computational Physics 180 (2002) 54–77.
* Marquina and Mulet (2003) A. Marquina, P. Mulet, A flux-split algorithm applied to conservative models for multicomponent compressible flows, Journal of Computational Physics 185 (2003) 120–138.
* Cook (2009) A. W. Cook, Enthalpy diffusion in multicomponent flows, Physics of Fluids 21 (2009) 055109.
* Allaire et al. (2002) G. Allaire, S. Clerc, S. Kokh, A five-equation model for the simulation of interfaces between compressible fluids, Journal of Computational Physics 181 (2002) 577–616.
* Kapila et al. (2001) A. K. Kapila, R. Menikoff, J. B. Bdzil, S. F. Son, D. S. Stewart, Two-phase modeling of deflagration-to-detonation transition in granular materials: Reduced equations, Physics of Fluids 13 (2001) 3002–3024.
* So et al. (2012) K. So, X. Hu, N. Adams, Anti-diffusion interface sharpening technique for two-phase compressible flow simulations, Journal of Computational Physics 231 (2012) 4304–4323.
* Ansari and Daramizadeh (2013) M. Ansari, A. Daramizadeh, Numerical simulation of compressible two-phase flow using a diffuse interface method, International Journal of Heat and Fluid Flow 42 (2013) 209–223.
* Shukla (2014) R. K. Shukla, Nonlinear preconditioning for efficient and accurate interface capturing in simulation of multicomponent compressible flows, Journal of Computational Physics 276 (2014) 508–540.
* Coralic and Colonius (2014) V. Coralic, T. Colonius, Finite-volume WENO scheme for viscous compressible multicomponent flows, Journal of Computational Physics 274 (2014) 95–121.
* Perigaud and Saurel (2005) G. Perigaud, R. Saurel, A compressible flow model with capillary effects, Journal of Computational Physics 209 (2005) 139–178.
* Wong and Lele (2017) M. L. Wong, S. K. Lele, High-order localized dissipation weighted compact nonlinear scheme for shock-and interface-capturing in compressible flows, Journal of Computational Physics 339 (2017) 179–209.
* Chiapolino et al. (2017) A. Chiapolino, R. Saurel, B. Nkonga, Sharpening diffuse interfaces with compressible fluids on unstructured meshes, Journal of Computational Physics 340 (2017) 389–417.
* Garrick et al. (2017a) D. P. Garrick, W. A. Hagen, J. D. Regele, An interface capturing scheme for modeling atomization in compressible flows, Journal of Computational Physics 344 (2017a) 260–280.
* Garrick et al. (2017b) D. P. Garrick, M. Owkes, J. D. Regele, A finite-volume HLLC-based scheme for compressible interfacial flows with surface tension, Journal of Computational Physics 339 (2017b) 46–67.
* Jain et al. (2018) S. S. Jain, A. Mani, P. Moin, A conservative diffuse-interface method for the simulation of compressible two-phase flows with turbulence and acoustics, Center for Turbulence Research Annual Research Briefs (2018) 47–64.
* Yeom and Chang (2013) G.-S. Yeom, K.-S. Chang, A modified HLLC-type Riemann solver for the compressible six-equation two-fluid model, Computers & Fluids 76 (2013) 86–104.
* Baer and Nunziato (1986) M. Baer, J. Nunziato, A two-phase mixture theory for the deflagration-to-detonation transition (DDT) in reactive granular materials, International Journal of Multiphase Flow 12 (1986) 861–889.
* Sainsaulieu (1995) L. Sainsaulieu, Finite volume approximation of two phase-fluid flows based on an approximate Roe-type Riemann solver, Journal of Computational Physics 121 (1995) 1–28.
* Saurel and Abgrall (1999) R. Saurel, R. Abgrall, A simple method for compressible multifluid flows, SIAM Journal on Scientific Computing 21 (1999) 1115–1145.
* Mirjalili et al. (2017) S. Mirjalili, S. S. Jain, M. Dodd, Interface-capturing methods for two-phase flows: An overview and recent developments, Center for Turbulence Research Annual Research Briefs (2017) 117–135.
* Saurel and Pantano (2018) R. Saurel, C. Pantano, Diffuse-interface capturing methods for compressible two-phase flows, Annual Review of Fluid Mechanics 50 (2018).
* Aslani and Regele (2018) M. Aslani, J. D. Regele, A localized artificial diffusivity method to simulate compressible multiphase flows using the stiffened gas equation of state, International Journal for Numerical Methods in Fluids 88 (2018) 413–433.
* Francois et al. (2006) M. M. Francois, S. J. Cummins, E. D. Dendy, D. B. Kothe, J. M. Sicilian, M. W. Williams, A balanced-force algorithm for continuous and sharp interfacial surface tension models within a volume tracking framework, Journal of Computational Physics 213 (2006) 141–173.
* Mencinger and Žun (2007) J. Mencinger, I. Žun, On the finite volume discretization of discontinuous body force field on collocated grid: Application to VOF method, Journal of Computational Physics 221 (2007) 524–538.
* Jain and Moin (2020) S. S. Jain, P. Moin, A kinetic energy and entropy preserving scheme for the simulation of compressible two-phase turbulent flows, Center for Turbulence Research Annual Research Briefs (2020).
* Benson (1992) D. J. Benson, Computational methods in Lagrangian and Eulerian hydrocodes, Computer Methods in Applied Mechanics and Engineering 99 (1992) 235–394.
* Donea et al. (2004) J. Donea, A. Huerta, J.-P. Ponthot, A. Rodríguez-Ferran, Arbitrary Lagrangian–Eulerian methods, Encyclopedia of Computational Mechanics 1 (2004) 413–437. Chapter 14.
* Miller and Colella (2001) G. H. Miller, P. Colella, A high-order Eulerian Godunov method for elastic–plastic flow in solids, Journal of Computational Physics 167 (2001) 131–176.
* Ortega et al. (2014) A. L. Ortega, M. Lombardini, D. Pullin, D. I. Meiron, Numerical simulation of elastic–plastic solid mechanics using an Eulerian stretch tensor approach and HLLD Riemann solver, Journal of Computational Physics 257 (2014) 414–441.
* Ghaisas et al. (2018) N. S. Ghaisas, A. Subramaniam, S. K. Lele, A unified high-order Eulerian method for continuum simulations of fluid flow and of elastic–plastic deformations in solids, Journal of Computational Physics 371 (2018) 452–482.
* Sugiyama et al. (2010) K. Sugiyama, S. Ii, S. Takeuchi, S. Takagi, Y. Matsumoto, Full Eulerian simulations of biconcave neo-Hookean particles in a Poiseuille flow, Computational Mechanics 46 (2010) 147–157.
* Sugiyama et al. (2011) K. Sugiyama, S. Ii, S. Takeuchi, S. Takagi, Y. Matsumoto, A full Eulerian finite difference approach for solving fluid–structure coupling problems, Journal of Computational Physics 230 (2011) 596–627.
* Favrie and Gavrilyuk (2011) N. Favrie, S. Gavrilyuk, Mathematical and numerical model for nonlinear viscoplasticity, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 369 (2011) 2864–2880.
* Valkov et al. (2015) B. Valkov, C. H. Rycroft, K. Kamrin, Eulerian method for multiphase interactions of soft solid bodies in fluids, Journal of Applied Mechanics 82 (2015).
* Jain et al. (2019) S. S. Jain, K. Kamrin, A. Mani, A conservative and non-dissipative Eulerian formulation for the simulation of soft solids in fluids, Journal of Computational Physics 399 (2019) 108922.
* Ghaisas et al. (2017) N. S. Ghaisas, A. Subramaniam, S. K. Lele, A. W. Cook, Evaluation of an Eulerian multi-material mixture formulation based on a single inverse deformation gradient tensor field, Center for Turbulence Research Annual Briefs 2017 (2017).
* Plohr and Sharp (1992) B. J. Plohr, D. H. Sharp, A conservative formulation for plasticity, Advances in Applied Mathematics 13 (1992) 462–493.
* Adler et al. (2021) M. C. Adler, J. R. West, S. K. Lele, A high-order method for Eulerian simulation of multi-material elasto-plastic deformation with strain hardening (in preparation), 2021.
* Ndanou et al. (2015) S. Ndanou, N. Favrie, S. Gavrilyuk, Multi-solid and multi-fluid diffuse interface model: Applications to dynamic fracture and fragmentation, Journal of Computational Physics 295 (2015) 523–555.
* Ortega et al. (2015) A. L. Ortega, M. Lombardini, P. T. Barton, D. I. Pullin, D. I. Meiron, Richtmyer–Meshkov instability for elastic–plastic solids in converging geometries, Journal of the Mechanics and Physics of Solids 76 (2015) 291–324.
* Lele (1992) S. K. Lele, Compact finite difference schemes with spectral-like resolution, Journal of Computational Physics 103 (1992) 16–42.
* Ghate et al. (2021) A. S. Ghate, A. Subramaniam, N. Ghaisas, J. R. West, M. C. Adler, S. S. Jain, PadéOps source code, 2021. DOI: 10.5281/zenodo.5277989.
* Chiodi and Desjardins (2017) R. Chiodi, O. Desjardins, A reformulation of the conservative level set reinitialization equation for accurate and robust simulation of complex multiphase flows, Journal of Computational Physics 343 (2017) 186–200.
* Murrone and Guillard (2005) A. Murrone, H. Guillard, A five equation reduced model for compressible two phase flow problems, Journal of Computational Physics 202 (2005) 664–698.
* Johnsen and Ham (2012) E. Johnsen, F. Ham, Preventing numerical errors generated by interface-capturing schemes in compressible multi-material flows, Journal of Computational Physics 231 (2012) 5705–5717.
* Saurel et al. (2009) R. Saurel, F. Petitpas, R. A. Berry, Simple and efficient relaxation methods for interfaces separating compressible fluids, cavitating flows and shocks in multiphase mixtures, Journal of Computational Physics 228 (2009) 1678–1712.
* Johnsen and Colonius (2006) E. Johnsen, T. Colonius, Implementation of WENO schemes in compressible multicomponent flow problems, Journal of Computational Physics 219 (2006) 715–732.
* Beig and Johnsen (2015) S. A. Beig, E. Johnsen, Maintaining interface equilibrium conditions in compressible multiphase flows using interface capturing, Journal of Computational Physics 302 (2015) 548–566.
* Capuano et al. (2018) M. Capuano, C. Bogey, P. D. Spelt, Simulations of viscous and compressible gas–gas flows using high-order finite difference schemes, Journal of Computational Physics 361 (2018) 56–81.
* Shyue and Xiao (2014) K.-M. Shyue, F. Xiao, An Eulerian interface sharpening algorithm for compressible two-phase flow: The algebraic THINC approach, Journal of Computational Physics 268 (2014) 326–354.
* Haimovich and Frankel (2017) O. Haimovich, S. H. Frankel, Numerical simulations of compressible multicomponent and multiphase flow using a high-order targeted ENO (TENO) finite-volume method, Computers & Fluids 146 (2017) 105–116.
* Terashima and Tryggvason (2009) H. Terashima, G. Tryggvason, A front-tracking/ghost-fluid method for fluid interfaces in compressible flows, Journal of Computational Physics 228 (2009) 4012–4037.
* Daude et al. (2014) F. Daude, P. Galon, Z. Gao, E. Blaud, Numerical experiments using a HLLC-type scheme with ALE formulation for compressible two-phase flows five-equation models with phase transition, Computers & Fluids 94 (2014) 112–138.
* Fedkiw et al. (1999) R. P. Fedkiw, T. Aslam, B. Merriman, S. Osher, et al., A non-oscillatory Eulerian approach to interfaces in multimaterial flows (the ghost fluid method), Journal of Computational Physics 152 (1999) 457–492.
* Bai and Deng (2017) X. Bai, X. Deng, A sharp interface method for compressible multi-phase flows based on the cut cell and ghost fluid methods, Advances in Applied Mathematics and Mechanics 9 (2017) 1052–1075.
* Quirk and Karni (1996) J. J. Quirk, S. Karni, On the dynamics of a shock–bubble interaction, Journal of Fluid Mechanics 318 (1996) 129–163.
* Deng et al. (2018) X. Deng, S. Inaba, B. Xie, K.-M. Shyue, F. Xiao, High fidelity discontinuity-resolving reconstruction for compressible multiphase flows with moving interfaces, Journal of Computational Physics 371 (2018) 945–966.
* Haas and Sturtevant (1987) J. F. Haas, B. Sturtevant, Interaction of weak shock waves with cylindrical and spherical gas inhomogeneities, Journal of Fluid Mechanics 181 (1987) 41–76.
* Bourne and Field (1992) N. Bourne, J. Field, Shock-induced collapse of single cavities in liquids, Journal of Fluid Mechanics 244 (1992) 225–240.
* Hu and Khoo (2004) X. Y. Hu, B. C. Khoo, An interface interaction method for compressible multifluids, Journal of Computational Physics 198 (2004) 35–64.
* Nourgaliev et al. (2006) R. R. Nourgaliev, T.-N. Dinh, T. G. Theofanous, Adaptive characteristics-based matching for compressible multifluid dynamics, Journal of Computational Physics 213 (2006) 500–529.
* Bo and Grove (2014) W. Bo, J. W. Grove, A volume of fluid method based ghost fluid method for compressible multi-fluid flows, Computers & Fluids 90 (2014) 113–122.
* Nassiri et al. (2016) A. Nassiri, B. Kinsey, G. Chini, Shear instability of plastically-deforming metals in high-velocity impact welding, Journal of the Mechanics and Physics of Solids 95 (2016) 351–373.
* Dimonte et al. (2011) G. Dimonte, G. Terrones, F. J. Cherne, T. C. Germann, V. Dupont, K. Kadau, W. T. Buttler, D. M. Oro, C. Morris, D. L. Preston, Use of the Richtmyer–Meshkov instability to infer yield stress at high-energy densities, Phys. Rev. Lett. 107 (2011) 264502.
* Lopez Ortega (2013) A. Lopez Ortega, Simulation of Richtmyer-Meshkov flows for elastic-plastic solids in planar and converging geometries using an Eulerian framework, Ph.D. thesis, California Institute of Technology, 2013.
* Adler et al. (2020) M. C. Adler, S. S. Jain, J. R. West, A. Mani, P. Moin, S. K. Lele, Diffuse-interface capturing methods for compressible multiphase fluid flows and elastic-plastic deformation in solids. Part I: Methods (2020).
* Jain et al. (2020) S. S. Jain, M. C. Adler, J. R. West, A. Mani, P. Moin, S. K. Lele, Diffuse-interface capturing methods for compressible multiphase fluid flows and elastic-plastic deformation in solids. Part II: Results (2020).
# Specification testing with grouped fixed effects

Claudia Pigini (Marche Polytechnic University, Italy; e-mail: <EMAIL_ADDRESS>), Alessandro Pionati (Marche Polytechnic University, Italy; corresponding author; address: Department of Economics and Social Sciences, P.le Martelli 8, 60121 Ancona, Italy; e-mail: <EMAIL_ADDRESS>), and Francesco Valentini (Marche Polytechnic University, Italy; e-mail: <EMAIL_ADDRESS>)

Acknowledgments: We are grateful to Elena Manresa, Chris Muris, Laura Serlenga, Francesco Bartolucci, Andreas Dzemski, Arturas Juodis, Amrei Stammann, and Riccardo Lucchetti, to the audience at the 28th International Panel Data Conference, and to the participants of the 11th Workshop of Econometrics and Empirical Economics for their helpful comments and suggestions. We also thank Carolina Castagnetti and Federico Belotti for generously sharing their codes. Francesco Valentini acknowledges financial support by the European Union - Next Generation EU (Project Code: ECS00000041; Project CUP: C43C22000380007; Project Title: Innovation, digitalization and sustainability for the diffused economy in Central Italy - VITALITY).
###### Abstract
We propose a bootstrap generalized Hausman test for the correct specification
of unobserved heterogeneity in both linear and nonlinear fixed-effects panel
data models. We consider as null hypotheses two scenarios in which the
unobserved heterogeneity is either time-invariant or specified as additive
individual and time effects. We contrast the standard fixed-effects estimators
with the recently developed two-way grouped fixed-effects estimator, which is
consistent in the presence of time-varying heterogeneity under minimal
specification and distributional assumptions for the unobserved effects. The
Hausman test exploits the general formulation for the variance of the vector
of contrasts and critical values are computed via parametric percentile
bootstrap, so as to account for the non-centrality of the asymptotic
$\chi^{2}$ distribution arising from the incidental parameters and
approximation biases. Monte Carlo evidence shows that the test has correct
size and good power properties. We provide two empirical applications to
illustrate the proposed test: the first one is based on a linear model for the
determinants of the wage of working women and the second analyzes the trade
extensive margin.
Keywords: Additive effects, Group fixed-effects Hausman test, Parametric
bootstrap, Time-varying heterogeneity
JEL Classification: C12, C23, C25
## 1 Introduction
Correct specification of unobserved heterogeneity is crucial in panel data
modeling. For a long time, empirical applications have only considered time-constant individual fixed effects, but the assumption of time-invariant unobserved
heterogeneity is often hardly tenable in practice, especially over a long time
dimension. Therefore the current mainstream approach includes both subject and
time fixed effects, in order to achieve credible identification of the effects
of interest. The simplest and most widely employed setup is the specification
of additive individual and time heterogeneity, namely the two-way fixed-
effects model, that in the linear model is equivalent to the two-way
correlated random effects approach (Wooldridge, , 2021). For nonlinear models
with additive fixed effects, Fernández-Val and Weidner, (2016) provide
analytical and jackknife bias corrections for the maximum likelihood (ML)
estimator, which is plagued by the incidental parameters problem.
While of simple implementation, the two-way fixed-effects specification fails
to capture the specific impact common factors may have on each subject. There
is now an important stream of literature focused on developing identification
results and estimation strategies for models with interactive time and
individual fixed effects. Contributions have been spurred by the seminal paper
of Bai (2009), who provides identification results along with the asymptotics for the interactive fixed-effects estimator in linear models. More recently, interactive fixed effects have been introduced in nonlinear panel data and network models by Chen et al. (2021).
Testing the assumptions on the unobserved heterogeneity specification has also
received considerable attention in the recent econometric literature.
Bartolucci et al. (2015) propose a Hausman-type test for the null hypothesis
of time-constant unobserved heterogeneity in generalized linear models, where
conditional ML estimators are compared with first-differences or pairwise
conditional ML estimators. In the context of large stationary panel models,
the factor specification could be tested by comparing additive to interactive
fixed-effects models, on the basis of the Hausman test suggested by Bai (2009) and its fixed-$T$ version, derived by Westerlund (2019). However, it has been shown that the Hausman-type test fails to reject the null hypothesis when individual factor loadings are independent across equations (Castagnetti et al., 2015b). On this basis, Kapetanios et al. (2023) use a Hausman-type test contrasting additive and interactive fixed effects to detect such correlation, whereas Castagnetti et al. (2015a) overcome the issue by proposing
an alternative max-type test for the null hypothesis of time-invariant
unobserved heterogeneity.
Despite its increasing popularity, the interactive effects approach based on
Bai's (2009) procedure comes with some non-trivial issues. First, estimation relies on solving a non-convex objective function with possibly multiple minima (Moon and Weidner, 2023). Secondly, the reliability of the iterative procedure crucially depends on the consistency of the parameter estimates chosen as the starting point for the algorithm (Hsiao, 2018). Finally, the number of latent factors should be known _a priori_ to the econometrician, and even in this case factors are not uniquely determined (Moon and Weidner, 2015). These drawbacks might be even more hampering when nonlinear models with interacted fixed effects are involved (see Chen et al., 2021). In light of
these considerations, a simpler specification might therefore be preferable,
provided it gives a good enough representation of the structure of the
unobserved heterogeneity.
In this paper we propose a generalized Hausman test for the fixed-effects
specification, in both linear and nonlinear models and where the unobserved
heterogeneity, under the null hypotheses, is either only individual or
additive. The test contrasts fixed-effects ML estimators with the Two-Way
Grouped Fixed Effects (TW-GFE henceforth) approach, recently put forward by
Bonhomme et al. (2022a). Their proposal is based on a first-step data-driven
approximation of the unobserved heterogeneity, which is clustered by the
kmeans algorithm that uses individual and time-series moments to assign
individual and time group memberships. Cluster dummies are then interacted and
enter the model specification as group effects, and the associated parameters
are estimated along with the regression coefficients in the second step. The
resulting second-step estimator is consistent in the presence of unspecified
forms of the time-varying unobserved heterogeneity with minimal assumptions on
the unobserved components, which makes it a perfect candidate to contrast with
the fixed-effects estimators that are consistent only with time-constant or
time-varying additive heterogeneity. Note that, in order to perform the
proposed test, there is no need to estimate the interactive fixed effects
models, as the TW-GFE encompasses this as well as more sophisticated
specifications for the unobserved heterogeneity.555In principle, our strategy
could be used to test for the null hypothesis of a factor structure against
more complex formulations, provided certain regularity conditions on the
functions governing the unobserved heterogeneity are satisfied.
Under specific choices for the number of clusters outlined by Bonhomme et al. (2022a) for the first step, it can be shown that the TW-GFE estimator is asymptotically normal, so that the Hausman statistic (Hausman, 1978) has an
asymptotic $\chi^{2}$ distribution. However, as it might be difficult to
verify which estimator is more efficient than the other under the null
hypothesis, we rely on the generalized estimator for the variance of the
vector of contrasts proposed by Bartolucci et al. (2015). In addition, the
asymptotic $\chi^{2}$ distribution is non-central because of two sources of
asymptotic bias: the incidental parameters problem, which in nonlinear models plagues both estimators, and the approximation bias, which affects the TW-GFE.
We therefore compute critical values of the test statistic distribution by
means of parametric percentile bootstrap (MacKinnon, 2006; Horowitz, 2019). The main advantage of this procedure lies in the bootstrap distributions correctly capturing the non-centrality without the need for any bias correction of either estimator. A related strategy is adopted by Kim and Sun (2016) and Higgins and Jochmans (2023) for inferential procedures in
nonlinear fixed-effects models.
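To fix ideas, the test statistic and the percentile-bootstrap rule can be written in a few lines. The following Python sketch abstracts from the model-specific estimation: the callables `simulate_null` and `fit_contrast` are placeholders for the fitted null model and the pair of contrasted estimators, and are assumptions for illustration rather than the paper's own code.

```python
import numpy as np

def hausman_stat(t1, t2, V):
    """Generalized Hausman statistic for the vector of contrasts."""
    d = np.asarray(t1) - np.asarray(t2)
    return float(d @ np.linalg.pinv(V) @ d)   # pinv allows a singular V

def bootstrap_pvalue(stat_obs, simulate_null, fit_contrast, B=199):
    """Parametric percentile bootstrap for the Hausman statistic (a sketch).

    simulate_null() draws a dataset from the model fitted under the null;
    fit_contrast(data) returns (theta1, theta2, V), the two contrasted
    estimates and the variance of their contrast."""
    stats = np.empty(B)
    for b in range(B):
        t1, t2, V = fit_contrast(simulate_null())
        stats[b] = hausman_stat(t1, t2, V)
    # The bootstrap distribution absorbs the non-centrality induced by the
    # incidental-parameter and approximation biases, so no explicit bias
    # correction of either estimator is needed.
    return float(np.mean(stats >= stat_obs))
```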
We report the results of an extensive Monte Carlo study showing evidence that
the test has correct size and good power in both linear and nonlinear specifications. Size properties, however, crucially depend on how effective the clustering procedure is in approximating the unobserved heterogeneity for the TW-GFE, that is, on choosing a sufficiently large number of groups and ensuring that the moments used for the kmeans clustering are informative about the latent traits and common factors. Power properties are studied under the alternative hypothesis of a factor structure. The test exhibits high rejection rates with the linear model, while it has lower power for the probit model when the ML estimator for the additive specification is contrasted with the TW-GFE. Correct size is also attained with dynamic factors, which represent a violation of one of the assumptions needed for consistency of the TW-GFE, while, as expected, in scenarios where moments are not informative the test is not viable. While computationally more intensive than the testing procedures put forward by Castagnetti et al. (2015a) and Bartolucci et al. (2015), the
proposed test represents an improvement as the former can only be applied to
linear models in a large-$T$ framework and the latter, while viable for
generalized linear models, lacks power when time effects are independent.
We also provide two empirical applications for the proposed test. The first
concerns a linear model for the determinants of the wage of working women. The
test detects the presence of time-varying unobserved heterogeneity, when the
TW-GFE is compared with the ML estimator with only individual heterogeneity,
and it also shows that the TW-GFE overfits as the additive fixed-effects
specification is enough to capture unobserved common time trends. In the
second application we analyze the trade extensive margin following Helpman et
al. (2008). The test does not provide evidence of a more complex structure for the unobserved heterogeneity, as it fails to reject the null hypothesis of additive importer and exporter fixed effects.
Literature review This paper relates to the stream of literature that has studied fixed-effects panel data models with grouped structures for the unobserved heterogeneity. Discrete heterogeneity has long been considered within the random-effects approach (Heckman and Singer, 1984), especially by a large body of statistical literature; see, for instance, MacLahlan and Peel (2000) on finite-mixture models and Bartolucci et al. (2012) on latent
Markov models. On the contrary, the investigation of grouped patterns of
heterogeneity in fixed-effects models is relatively recent in the econometric
literature.
Hahn and Moon (2010) study the asymptotic bias arising from the incidental parameters problem in nonlinear panel data models where unobserved heterogeneity is assumed to be discrete with a finite number of support points. Bester and Hansen (2016) investigate the asymptotic behavior of the ML estimator for nonlinear models with grouped effects, under the assumption that subjects are clustered according to some external known classification. Models with unknown group membership are studied by Su et al. (2016), who propose penalized techniques for the estimation of models where regularization by classifier-Lasso shrinks individual effects to group coefficients, by Ando and Bai (2016), who consider unobserved group factor structures in linear models with interactive fixed effects, and finally by Wang et al. (2023),
studying group structures combined with structural breaks.
Discrete unobserved heterogeneity can serve as a regularization device that
allows to identify the parameters of interest in panel data models with time-
varying individual effects but not necessarily characterized by a factor
structure. In this vein, Bonhomme and Manresa, (2015) introduce a GFE
estimator for linear models where the discrete heterogeneity is assumed to
follow time-varying grouped patterns and cluster membership is left
unrestricted. By contrast, the TW-GFE estimator by Bonhomme et al., 2022a is
consistent even with unspecified forms of time-varying unobserved
heterogeneity. While using discretization as an approximation device
introduces an asymptotic bias, the function of the unobserved heterogeneity
they consider encompasses a variety of specifications, such as additive and
interactive effects, under minimal distributional assumptions. This makes the
TW-GFE estimator a simple and potentially very attractive tool for
practitioners.
Outline The rest of the paper is organized as follows: Section 2 briefly
describes the models and estimators; Section 3 reviews the assumptions
required to characterize the asymptotic distribution of the TW-GFE,
illustrates the proposed approach and the asymptotic behavior of the resulting
test statistic, and finally briefly illustrates the alternative testing
procedures; Section 4 presents the results of the simulation study in both
linear and probit cases and discusses violations of relevant assumptions; Section 5 illustrates the two empirical applications; finally, Section 6
concludes.
## 2 Models and estimators
Consider a panel data setup where subjects are indexed by $i=1,\ldots,N$ and
time occasions are indexed by $t=1,\ldots,T$. Throughout the paper, we assume
that observations are independent, conditional on the observed covariates and
unobserved heterogeneity, and that the models are static. The traditional
specification of fixed-effects models depicts unobserved heterogeneity as
individual-specific intercepts, so that the conditional distribution of the
response variable $y_{it}$ given an $r$-vector of exogenous covariates
$x_{it}$ is of the type
$y_{it}|x_{it},\theta_{0},\alpha_{i0}\sim
f(y_{it}|x_{it}^{\prime}\theta_{0}+\alpha_{i0}),$ (1)
where $\theta_{0}$ is the vector of parameters of interest, $\alpha_{i0}$
denotes the permanent individual effect, and $f(\cdot)$ is a generic known
density function, as in Chen et al. (2021). When (1) is a linear regression
model, consistent OLS estimators of $\theta$ can be trivially obtained on the
basis of standard de-meaning or first-differences transformations, whereas ML
estimators in non-linear models are consistent but exhibit a bias in their
limiting distribution under rectangular array asymptotics (Li et al., 2003), unless probability formulations admit sufficient statistics for the individual intercepts (Andersen, 1970; Chamberlain, 1980). Therefore, bias reduction techniques, such as analytical or jackknife corrections, are required (Hahn and Newey, 2004). These estimators are usually referred to as the one-way
fixed-effects (OW-FE) estimators.
In order to account for time-varying heterogeneity, the widespread approach is
to include common time effects, that enter the specification in an additive
manner. The model is then of the type
$y_{it}|x_{it},\theta_{0},\alpha_{i0},\zeta_{t0}\sim
f(y_{it}|x_{it}^{\prime}\theta_{0}+\alpha_{i0}+\zeta_{t0}),$ (2)
where $\zeta_{t0}$ represents such time-varying heterogeneity. Similarly to
the case with only individual effects, a consistent estimator of $\theta$ can
be obtained under suitable transformations when a linear regression model is
specified, while bias corrections have to be implemented for ML estimators
(Fernández-Val and Weidner, 2016). We denote them as two-way fixed-effects
(TW-FE) estimators.
In this paper, we use the TW-GFE estimator to contrast with the OW-FE and TW-
FE estimators so as to perform specification tests and possibly detect more
sophisticated structures for the unobserved heterogeneity. Consider the
following model formulation
$y_{it}|x_{it},\theta_{0},\alpha_{it0}\sim
f(y_{it}|x_{it}^{\prime}\theta_{0}+\alpha_{it0}).$ (3)
According to Bonhomme et al. (2022a), the time-varying unobserved heterogeneity $\alpha_{it0}$ is characterized by two vectors, $\xi_{i0}$ and $\lambda_{t0}$, and a function $\alpha(\cdot)$, satisfying requirements that
will be discussed later in more detail, such that
$\alpha_{it0}=\alpha(\xi_{i0},\lambda_{t0})$. This characterization of
$\alpha_{it0}$ can be easily reconciled with the structures for the unobserved
heterogeneity in models (1) and (2) as follows:
$\alpha_{it0}:\left\{\begin{array}{ll}\alpha_{i0}\equiv\alpha(\xi_{i0})&\text{in (1)}\\ \alpha_{i0}+\zeta_{t0}\equiv\alpha(\xi_{i0},\lambda_{t0})&\text{in (2)}\\ \alpha_{it0}\equiv\alpha(\xi_{i0},\lambda_{t0})&\text{in (3)}\end{array}\right.$
It is also important to the GFE strategy that covariates are affected by the
same source of heterogeneity, so that $x_{it}$ depends on $\mu_{it0}$, where
$\mu_{it0}=\mu(\xi_{i0},\lambda_{t0})$, with $\mu(\cdot)$ satisfying the same
requirements as $\alpha(\cdot)$.
The first-step estimation of the TW-GFE approach deals with the classification
of subjects and time occasions into two different sets of groups. It is worth stressing that clustering here serves as an approximation tool for the unobserved heterogeneity, so that no number of clusters needs to be known _a priori_. As a consequence, groups should not be interpreted as aggregation levels coming from external information (e.g., sectors for firms; see also Papke and Wooldridge, 2023). Classification relies on performing kmeans
clustering twice, using the vectors of moments
$h_{i}=\frac{1}{T}\sum_{t=1}^{T}h(y_{it},x_{it})$ and
$w_{t}=\frac{1}{N}\sum_{i=1}^{N}w(y_{it},x_{it})$ of fixed dimensions. Both
vectors have to be _informative_ about $\xi_{i0}$ and $\lambda_{t0}$,
respectively, meaning that $\xi_{i0}$ can be uniquely recovered from $h_{i}$
for large $T$ and $\lambda_{t0}$ can be uniquely recovered from $w_{t}$ for
large $N$. The two kmeans clustering procedures return a number of $K$ groups
for the subjects and a different number of $L$ groups for the time occasions,
from which two sets of dummies identifying the related group memberships are
created. In the second step, cluster dummies for the cross-sectional and time
dimensions are then interacted and enter the linear index of the model
specified for the response variable as $KL$ group fixed effects. Estimation is
then carried out by ML.
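As a rough illustration of the two steps, the following Python sketch implements the procedure for a linear specification. The choice of moments (unit and period means of the outcome and covariates), the array shapes, and all names are our own illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the two-step TW-GFE procedure for a linear model.
import numpy as np
from sklearn.cluster import KMeans
import statsmodels.api as sm

def tw_gfe_linear(y, x, K, L, seed=0):
    """y: (N, T) outcomes; x: (N, T, p) covariates; K, L: numbers of groups."""
    N, T, p = x.shape
    # Step 1: moment vectors h_i and w_t, then kmeans on each dimension.
    h = np.column_stack([y.mean(axis=1), x.mean(axis=1).reshape(N, -1)])
    w = np.column_stack([y.mean(axis=0), x.mean(axis=0).reshape(T, -1)])
    gi = KMeans(n_clusters=K, n_init=10, random_state=seed).fit_predict(h)
    gt = KMeans(n_clusters=L, n_init=10, random_state=seed).fit_predict(w)
    # Step 2: interact the two sets of group dummies (K*L cells) and
    # estimate by ML; in the linear-Gaussian case ML reduces to OLS.
    cell = (gi[:, None] * L + gt[None, :]).ravel()   # group id per (i, t)
    D = np.eye(K * L)[cell]                          # K*L interaction dummies
    X = np.hstack([x.reshape(N * T, p), D])
    fit = sm.OLS(y.ravel(), X).fit()
    return fit.params[:p], fit                       # theta estimate first
```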
## 3 Specification tests
We propose a generalized Hausman test for the specification of the unobserved
heterogeneity considering, as null hypotheses, the models portrayed by
Equations (1) and (2). The OW-FE and TW-FE estimators are consistent, with an
asymptotic bias in case of nonlinear models. The TW-GFE estimator is also
consistent but always asymptotically biased. In the presence of more
sophisticated forms of unobserved heterogeneity – different from those in (1)
and (2) – such as a factor structure, only the TW-GFE estimator is consistent.
In the following, we first outline the assumptions for the asymptotic results
needed to derive the test statistic. We then describe the proposed approach
and, finally, briefly recall the existing procedures to test for time-varying
unobserved heterogeneity.
### 3.1 Assumptions and supporting results
To derive our main results we make the following assumptions, which recall
those in Bonhomme et al. (2022a,b).
###### Assumption 1.
Unobserved Heterogeneity: (i) There exist $\xi_{i0}$ of fixed dimension
$d_{\xi}$ and $\lambda_{t0}$ of fixed dimension $d_{\lambda}$ and two
functions $\alpha(\cdot)$ and $\mu(\cdot)$ that are Lipschitz-continuous in
both arguments, such that $\alpha_{it0}=\alpha(\xi_{i0},\lambda_{t0})$ and
$\mu_{it0}=\mu(\xi_{i0},\lambda_{t0})$; (ii) the supports of $\xi_{i0}$ and
$\lambda_{t0}$ are compact.
###### Assumption 2.
Moment informativeness: There exist moment vectors
$h_{i}=(1/T)\textstyle{\sum_{t}}h(y_{it},x_{it}^{\prime})\quad\text{and}\quad
w_{t}=(1/N)\textstyle{\sum_{i}}w(y_{it},x_{it}^{\prime})$
of fixed dimension, and two unknown Lipschitz-continuous functions $\phi$ and
$\psi$, such that
$\underset{T\to\infty}{\mathrm{plim}}\,h_{i}=\phi(\xi_{i0})\quad\text{and}\quad\underset{N\to\infty}{\mathrm{plim}}\,w_{t}=\psi(\lambda_{t0}),$
and $\frac{1}{N}\sum_{i=1}^{N}\|h_{i}-\phi(\xi_{i0})\|^{2}=O_{p}(1/T)$,
$\frac{1}{T}\sum_{t=1}^{T}\|w_{t}-\psi(\lambda_{t0})\|^{2}=O_{p}(1/N)$ as
$N,T\to\infty$.
###### Assumption 3.
Asymptotics: as $N,T\rightarrow\infty$, $N/T\rightarrow\rho^{2}$, with
$0<\rho<\infty$.
###### Assumption 4.
Sampling: (i) $(y_{it},x^{\prime}_{it})^{\prime}$, for $i=1,\dots N$ and
$t=1,\dots T$, are i.i.d. given $\xi_{i0}$ and $\lambda_{t0}$; (ii) $\xi_{i0}$
and $\lambda_{t0}$ are also i.i.d.
###### Assumption 5.
Regularity: Let $\ell_{it}(\alpha_{it},\theta)=\ln
f(y_{it}|x_{it},\alpha_{it},\theta)$, let
$\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}\ell_{it}(\bar{\alpha}(\theta,\xi_{i0},\lambda_{t0}),\theta)$
be the target log-likelihood (Arellano and Hahn, 2007), and let
$\bar{\alpha}(\theta,\xi,\lambda)=\underset{\alpha}{\operatorname{argmax}}\;\mathbb{E}_{\xi_{i0}=\xi,\lambda_{t0}=\lambda}(\ell_{it}(\alpha,\theta))$:
(i) $\ell_{it}(\alpha,\theta)$ is three times differentiable in
$(\alpha,\theta)$; $\theta_{0}$ is an interior point of the parameter space
$\Theta$; $\Theta$ is compact;
(ii) $\ell_{it}$ is strictly concave as a function of $\alpha$, with
$\inf_{\xi,\lambda,\theta}\mathbb{E}_{\xi_{i0}=\xi,\lambda_{t0}=\lambda}\left(-\frac{\partial^{2}\ell_{it}(\bar{\alpha}(\theta,\xi,\lambda),\theta)}{\partial\alpha\partial\alpha^{\prime}}\right)>0$;
moreover, $\mathbb{E}[\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}\ell_{it}(\bar{\alpha}(\theta,\xi_{i0},\lambda_{t0}),\theta)]$
has a unique maximum at $\theta_{0}$ on $\Theta$, and its second derivative is
negative definite;
(iii) the regularity conditions on boundedness of moments and asymptotic
covariances in Assumption S2 (iv, v) of Bonhomme et al. (2022b) apply.
Assumption 1 gives the minimal properties of the unobserved heterogeneity in
the setting of Bonhomme et al. (2022a). Assumption 2 formalizes moment
informativeness, which is needed for an effective individual and time
clustering. Assumption 3 depicts rectangular-array asymptotics, which is
required for the characterization of the asymptotic normal distribution of the
considered estimators. Assumption 4 outlines sampling requirements that are
more restrictive than those usually required to characterize the asymptotic
distribution of ML estimators under rectangular-array asymptotics for fixed-
effects models with time heterogeneity: for example, Fernández-Val and
Weidner (2016) assume independence over $i$ while relaxing time independence
by allowing for $\alpha$-mixing (see their Assumption 4.1 (ii)). Assumption 4
is instead required for consistency of the TW-GFE, and it effectively rules
out the possibility of applying the proposed test to models with (i) feedback
effects and (ii) unobserved heterogeneity that depends on dynamic factors. The
conditions stated in Assumption 5 are standard requirements for a well-posed
maximization problem.
Under Assumptions 1, 4, and 5, the OW-FE and TW-FE estimators of $\theta$,
$\hat{\theta}$, for models (1) and (2), respectively, are consistent as
$N,T\to\infty$. Additionally, under Assumption 3, $\hat{\theta}$ has the
following asymptotic distribution
$\sqrt{NT}(\hat{\theta}-\theta_{0})\stackrel{{\scriptstyle
d}}{{\rightarrow}}N(B;\,\,I(\theta_{0})^{-1}),$
where $B$ is constant and equal to $\rho C$ for the OW-FE estimator, while it
is equal to $\rho C_{1}+\rho^{-1}C_{2}$ for the TW-FE estimator. Notice that,
in the case of informational orthogonality between the structural and nuisance
parameters, such as in the linear model, $B=0$, whereas characterizations of
these asymptotic biases are given in Hahn and Newey (2004) and Fernández-Val
and Weidner (2016) for nonlinear models with OW-FE and TW-FE, respectively.
Finally, $I(\theta_{0})$ is the information matrix of the profile log-
likelihood.
Consistency of the TW-GFE estimator $\tilde{\theta}$ relies on Lemma S1 in
Bonhomme et al. (2022b), based on Assumptions 1, 2, 4, and 5. The two-step TW-
GFE estimator is then shown to admit the asymptotic expansion (Corollary S2
_ibidem_)
$\tilde{\theta}=\theta_{0}+J(\theta_{0})^{-1}\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}s_{it}(\theta_{0})+O_{p}\left(\frac{1}{T}+\frac{1}{N}+\frac{KL}{NT}\right)+O_{p}(K^{-\frac{2}{d_{\xi}}}+L^{-\frac{2}{d_{\lambda}}})+o_{p}\left(\frac{1}{\sqrt{NT}}\right),$
(4)
as $N,T,K,L\rightarrow\infty$, such that $KL/(NT)$ tends to zero. In the above
expression, $J(\cdot)$ and $s_{it}(\cdot)$ are the negative expected Hessian
and the score associated with the likelihood function. Three main sources of
bias can be identified: the $1/T$ and $1/N$ terms depend on the
number of time occasions and subjects used for $h_{i}$ and $w_{t}$ in the
classification step; the $KL/NT$ term reflects the estimation of $KL$ group-
specific parameters using $NT$ observations; the
$K^{-\frac{2}{d_{\xi}}}+L^{-\frac{2}{d_{\lambda}}}$ terms refer to the
approximation bias arising from the discretization of $\xi_{i0}$ and
$\lambda_{t0}$ via kmeans.
The $O_{p}(\cdot)$ terms in the above expansion can be shown to become
$O_{p}(1/T+1/N)$ under suitable choices for the number of groups, $K$ and $L$,
and for $d_{\xi}=d_{\lambda}=1$. The rule suggested by Bonhomme et al. (2022a),
and the consequent simplification of the $O_{p}(\cdot)$ terms, are summarized
in the following proposition.
###### Proposition 1.
Number of groups and approximation bias:
i) For $d_{\xi}=d_{\lambda}=1$, the number of groups $K$ and $L$ are chosen
according to the following rules
$\hat{K}=\min_{K\geq 1}\{K:\hat{Q}(K)\leq\gamma\hat{V}_{h_{i}}\},\qquad\hat{L}=\min_{L\geq 1}\{L:\hat{Q}(L)\leq\gamma\hat{V}_{w_{t}}\},$
where $\hat{Q}(\cdot)$ is the minimized objective function of the kmeans
problem, $\hat{V}_{h_{i}}$ and $\hat{V}_{w_{t}}$ measure the variability of the
moments $h_{i}$ and $w_{t}$, respectively, and $\gamma\in(0,1]$ is a
user-specified parameter;
ii) Setting $K$ and $L$ proportional to, or greater than, $\sqrt{T}$ and
$\sqrt{N}$, respectively, ensures that the approximation errors are
$O_{p}(1/T)$ and $O_{p}(1/N)$, so that the $O_{p}(\cdot)$ terms in (4) become
$O_{p}(1/T+1/N)$.
We refer the reader to Bonhomme et al. (2022a) for the proofs and derivations
of these results. Smaller values of $\gamma$ yield a larger number of groups:
lowering this value is suggested if moments are weakly informative about
unobserved heterogeneity. In our simulation study we experiment with different
values of $\gamma$.
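As a rough illustration, the rule in part i) of Proposition 1 can be implemented as a simple search over the number of clusters. In the sketch below, the variability estimate $\hat{V}$ entering the threshold is supplied by the caller, and all names are illustrative assumptions.

```python
# Sketch of the rule in Proposition 1(i): increase the number of groups
# until the (average) kmeans objective Q-hat falls below gamma * V-hat.
import numpy as np
from sklearn.cluster import KMeans

def choose_num_groups(moments, v_hat, gamma=0.25, k_max=None, seed=0):
    """moments: (n, d) array of h_i (or w_t); returns K-hat (or L-hat)."""
    n = moments.shape[0]
    k_max = k_max or n
    for k in range(1, k_max + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(moments)
        if km.inertia_ / n <= gamma * v_hat:   # Q-hat(k) <= gamma * V-hat
            return k
    return k_max
```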
In order to derive the asymptotic distribution of the TW-GFE estimator, we
need to provide a minimal characterization of the $O_{p}(1/T+1/N)$ term in
the asymptotic expansion (4).
###### Assumption 6.
Bias of the TW-GFE estimator: The $O_{p}(1/T+1/N)$ term takes the form
$\frac{D_{1}}{T}+\frac{D_{2}}{N}+o_{p}\left(\frac{1}{T}\vee\frac{1}{N}\right),$
where $D_{1}$ and $D_{2}$ are constants.
This assumption extends Corollary 2 in Bonhomme et al. (2022a), according to
which the $O_{p}(1/T)$ term in the asymptotic expansion for the one-way GFE
estimator with only time-constant unobserved heterogeneity is
$E/T+o_{p}(1/T)$, where $E$ is constant. The asymptotic distribution of
$\tilde{\theta}$ can now be characterized by the following theorem, whose
proof follows from standard arguments of ML estimation.
###### Theorem 1.
Suppose that Assumptions 1-6 and Lemma S1 and Corollary S2 of Bonhomme et al.
(2022b) hold, and let $d_{\xi}=d_{\lambda}=1$. Then
$\sqrt{NT}(\tilde{\theta}-\theta_{0})\stackrel{{\scriptstyle
d}}{{\rightarrow}}N\left(D;\,\,J(\theta_{0})^{-1}\right),$
where $D=D_{1}\rho+D_{2}\rho^{-1}$.
### 3.2 Proposed test
In order to test the null hypothesis of correctly specified unobserved
heterogeneity, we rely on a Hausman test based on the difference
$\hat{\delta}=\hat{\theta}-\tilde{\theta}$, namely, by contrasting FE
estimators with the TW-GFE estimator. We examine two cases. In the first one,
the null hypothesis $H_{0}$ is of time-constant unobserved heterogeneity,
under which both the OW-FE and TW-GFE estimators are consistent. In the second
case, under $H_{0}$ the unobserved heterogeneity has an additive structure,
for which both the TW-FE and TW-GFE estimators are consistent. In both
situations, the null hypothesis can therefore be expressed as
$H_{0}:\underset{N,T\rightarrow\infty}{\mathrm{plim}}\,\hat{\delta}=0.$
Instead, only the TW-GFE estimator is consistent for the true parameters under
the alternative hypothesis of a more complex structure of the unobserved
heterogeneity such as, for instance, the factor one. This justifies the use of
a Hausman-type test.
Our test statistic is therefore
$\hat{H}=NT\hat{\delta}^{\prime}\widehat{W}^{-1}\hat{\delta}$ (5)
where $\widehat{W}$ is a consistent estimator of $W$, the variance of the
contrast $\hat{\delta}$. The asymptotic distribution of $\hat{H}$ is determined by
the distribution of $\hat{\delta}$. Under the assumption of joint normality of
$\hat{\theta}$ and $\tilde{\theta}$, together with Assumption 3, when $H_{0}$ is true we have that
$\sqrt{NT}\,\,\hat{\delta}\stackrel{{\scriptstyle d}}{{\rightarrow}}N(B-D,W).$
As a result, the limiting distribution of $\hat{H}$ is a $\chi^{2}$
with $r$ degrees of freedom and non-centrality parameter
$\omega=\delta^{\prime}\delta$, with $\delta=B-D$, entailed by the asymptotic
biases in $\hat{\theta}$ and $\tilde{\theta}$, that is
$\hat{H}\stackrel{{\scriptstyle d}}{{\rightarrow}}\chi^{2}_{r,\omega}.$
A test for $H_{0}$ based on $\hat{H}$ poses two problems. First, there is no
guarantee that the traditional formulation of the Hausman test (Hausman, 1978)
will provide a positive definite $\widehat{W}$, as the test always involves at
least one biased estimator. Second, the quantiles of the limiting distribution
of $\hat{H}$ are unknown.
In order to tackle the first issue, we rely on the generalized formulation for
the variance of $\hat{\delta}$ put forward by Bartolucci et al. (2015). Let
us define $\hat{\phi}$ as the complete vector of estimated parameters for the
FE models, including $\hat{\theta}$, the OW or TW fixed effects, and, in
linear models, an estimator of the error term variance. Similarly, we denote
by $\tilde{\phi}$ the complete vector of estimated parameters for the TW-GFE.
We denote the estimator of the variance of $\hat{\delta}$ as
$\widehat{W}=NT\left[M\widehat{V}(\hat{\phi},\tilde{\phi})M^{\prime}\right],$
where $M$ is a selector matrix of suitable dimension such that it returns the
difference between the blocks in $\widehat{V}(\hat{\phi},\tilde{\phi})$
corresponding to the variance-covariance matrices of $\hat{\theta}$ and
$\tilde{\theta}$. Moreover
$\displaystyle\hat{V}(\hat{\phi},\tilde{\phi})=\begin{pmatrix}H(\hat{\phi})&0\\0&H(\tilde{\phi})\end{pmatrix}^{-1}S\left(\hat{\phi},\tilde{\phi}\right)\begin{pmatrix}H(\hat{\phi})&0\\0&H(\tilde{\phi})\end{pmatrix}^{-1},$
with
$\displaystyle S\left(\hat{\phi},\tilde{\phi}\right)=\sum_{i=1}^{N}\sum_{t=1}^{T}\begin{pmatrix}g_{it}(\hat{\phi})\\s_{it}(\tilde{\phi})\end{pmatrix}\begin{pmatrix}g_{it}(\hat{\phi})^{\prime}&s_{it}(\tilde{\phi})^{\prime}\end{pmatrix},$
where $H(\hat{\phi})$ and $H(\tilde{\phi})$ are the Hessian matrices
associated with the complete parameter vectors of the FE and TW-GFE
approaches, respectively, while $g_{it}(\hat{\phi})$ and
$s_{it}(\tilde{\phi})$ are the scores of the log-likelihoods associated with
the OW-FE (or TW-FE) and TW-GFE approaches, respectively.
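For concreteness, a minimal sketch of how the statistic (5) can be assembled from these ingredients is given below; the per-observation scores and the two Hessians are assumed to be available from the fitted models, and all names are illustrative.

```python
# Sketch of the generalized Hausman statistic (5) with the sandwich
# variance above.
import numpy as np
from scipy.linalg import block_diag

def hausman_statistic(delta, H_fe, H_gfe, g, s, M, NT):
    """delta: contrast of the two theta estimates, shape (r,).
    H_fe, H_gfe: Hessians of the two complete log-likelihoods.
    g, s: per-observation scores, shapes (NT, d1) and (NT, d2).
    M: selector matrix returning the difference of the theta blocks."""
    Hinv = np.linalg.inv(block_diag(H_fe, H_gfe))
    scores = np.hstack([g, s])           # (NT, d1 + d2)
    S = scores.T @ scores                # sum over i, t of outer products
    V = Hinv @ S @ Hinv                  # sandwich variance of (phi-hat, phi-tilde)
    W = NT * (M @ V @ M.T)               # W-hat = NT * M V M'
    return NT * delta @ np.linalg.solve(W, delta)   # NT * d' W^{-1} d
```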
The second issue concerns the limiting distribution of $\hat{H}$. In order to
recover the quantiles of the chi-square distribution with unknown non-
centrality parameter, we rely on the parametric percentile bootstrap, which is
used in the presence of non-pivotal statistics (Horowitz, 2019). In practice,
after obtaining the ML estimates $\hat{\phi}$ and $\tilde{\phi}$ and computing
$\hat{H}$ on the real data, we generate the bootstrap samples using
$\hat{\phi}$ as the true parameter values. The bootstrap statistic
$\hat{H}^{\ast}$ is then
$\hat{H}^{\ast}=NT\hat{\delta}^{\ast\prime}\left(\widehat{W}^{\ast}\right)^{-1}\hat{\delta}^{\ast}\stackrel{{\scriptstyle
d^{\ast}}}{{\rightarrow}}\chi^{2}_{r,\omega},$
where $\hat{\delta}^{\ast}=\hat{\theta}^{\ast}-\tilde{\theta}^{\ast}$ uses the ML
estimates obtained from the bootstrap sample and $\stackrel{{\scriptstyle
d^{\ast}}}{{\rightarrow}}$ denotes convergence in distribution under the
bootstrap measure. Percentiles can then be obtained as
$q^{\ast}_{1-\alpha}=\inf\left\{q^{\ast}:\mathrm{Pr}^{\ast}\left(\hat{H}^{\ast}\leq q^{\ast}\right)\geq 1-\alpha\right\},$
where $\mathrm{Pr}^{\ast}$ denotes the probability measure induced by the
bootstrap, conditional on the original sample. We then reject $H_{0}$ whenever
$\hat{H}>q^{\ast}_{1-\alpha}$. Our approach is related to the contributions by
Higgins and Jochmans (2023) and Kim and Sun (2016), who exploit the parametric
bootstrap for inference in (dynamic) fixed-effects models. These authors
provide results on the asymptotic validity of the bootstrap bias correction in
OW-FE models; the same result for TW-FE and TW-GFE does not follow from our
assumptions, and a formal derivation is beyond the scope of this paper, whose
intent is to provide neither a bias correction nor inference on the model
parameters. Nevertheless, applications of the parametric bootstrap for testing
purposes in two-way fixed-effects models are available (Dzemski, 2019). An
alternative approach based on pre-pivoting and the pairs bootstrap is proposed
by Cavaliere et al. (2022); in our setting, though, it seems unnecessary to
perform a bias correction only to carry out a pre-test.
### 3.3 Alternative procedures
The performance of the proposed test can be compared with that of two
alternative tools: the max-type test put forward by Castagnetti et al. (2015a)
(henceforth, CRT test) to detect factor structures in a linear framework, and
the test for time-invariant unobserved heterogeneity developed by Bartolucci
et al. (2015) (BBP test), which applies to both linear and non-linear
frameworks.
#### 3.3.1 CRT test
Consider a model such as (3) in which $\alpha_{it0}$ has a factor structure,
that is $\alpha(\xi_{i0},\lambda_{t0})=\alpha_{i0}^{\prime}\zeta_{t0}$.
The procedure tests the null hypothesis of no factor structure, defined as
$H_{0}:\zeta_{t}=\zeta$, that is, a model with only individual effects. The
max-type test statistic is formulated as
$S=\text{max}_{1\leq t\leq
T}\left[N(\hat{\zeta}_{t}-\hat{\bar{\zeta}})^{\prime}\hat{\Sigma}_{t}^{-1}(\hat{\zeta}_{t}-\hat{\bar{\zeta}})\right],$
where the factors are estimated using the common correlated effects approach by
Pesaran (2006) (it is worth recalling that the approach by Castagnetti et al.
(2015a) can in general be implemented in models with heterogeneous slopes),
$\hat{\bar{\zeta}}$ is the sample mean of $\hat{\zeta}_{t}$, and
$\hat{\Sigma}_{t}$ is an estimate of the asymptotic factor covariance matrix
(cf. Equation 10 in Castagnetti et al., 2015a). The test statistic $S$ has
an asymptotic Gumbel distribution. It is worth noting that the CRT test
requires large-$T$ settings in order to attain the correct size in finite
samples.
#### 3.3.2 BBP test
Differently from the CRT test, the BBP test can be employed with generalized
linear models. Like the test proposed here, the BBP is a generalized Hausman
test contrasting estimators that are consistent only under time-constant
unobserved heterogeneity with estimators that are consistent even in the
presence of a time-varying latent variable.
For linear models, the BBP test contrasts the OW-FE estimator with the first-
difference estimator. It is worth stressing that the BBP test has power only
when certain conditions are met, namely $T$ must be greater than 3, otherwise
the estimators coincide, and the common factors must have a dynamic structure.
For binary choice and Poisson models, the BBP test compares estimators based
on two different formulations of the conditional likelihood: the standard
conditional ML (CML) estimator and the pairwise CML (PCML) estimator. The
former is consistent under time-constant heterogeneity, since the
probabilities for the models considered admit sufficient statistics for the
incidental parameters. The PCML approach considers pairs of consecutive time
observations for every individual, and the corresponding log-likelihood is
conditioned on the sufficient statistic, thereby allowing for different
individual effects in every pair of periods. The PCML estimator is therefore
consistent in the presence of time-varying unobserved heterogeneity. This test
also lacks power in the same cases described above.
## 4 Simulation study
In the following, we describe the design and report the results of an
extensive simulation study in which we investigate the empirical size and
power properties of the test for the linear and probit models.
### 4.1 Linear model
We design a Monte Carlo experiment where observations are generated by a
linear regression model with two exogenous covariates. We consider two
scenarios for the null hypotheses, where the unobserved heterogeneity is
specified as in models (1) and (2).
In particular, in the case of only individual time-constant effects, for
$i=1,\ldots,N$ and $t=1,\ldots,T$, we generate samples according to the
following equations, which we denote as DGP-FE-L:
$\displaystyle y_{it}=x_{it1}\theta_{1}+x_{it2}\theta_{2}+\alpha_{i}+\varepsilon_{it},\qquad x_{itj}=\Gamma_{i}+N(0,1),\quad\mathrm{for}\quad j=1,2,$
where $\alpha_{i}=\varrho\Gamma_{i}+\sqrt{1-\varrho^{2}}\,A_{i}$, with
$A_{i},\Gamma_{i}\sim N(1,3)$, and $\varrho=0.5$. Finally, $\varepsilon_{it}$
is an idiosyncratic standard normal error term. We let the coefficients
$\theta=(\theta_{1},\theta_{2})^{\prime}$ be equal to $(1,1)^{\prime}$. With
this design we explore the size properties of the proposed test comparing the
OW-FE with the TW-GFE estimators. Similarly, when we allow for additive
individual and time effects, samples are generated according to:
$\displaystyle y_{it}=x_{it1}\theta_{1}+x_{it2}\theta_{2}+\alpha_{i}+\zeta_{t}+\varepsilon_{it},\qquad x_{itj}=\Gamma_{i}+\zeta_{t}+N(0,1),\quad\mathrm{for}\quad j=1,2,$
where $\zeta_{t}\sim N(1,1)$. This design is denoted by DGP-AFE-L and it will
be used to contrast the TW-FE with the TW-GFE estimators under the null
hypothesis of additive effects.
In order to investigate the power of the proposed test, the scenario generated
under the alternative hypothesis is a linear panel data model with interactive
fixed effects. Specifically, samples are generated according to a simplified
version of the design outlined by Bai (2009), with one latent factor:
$\displaystyle y_{it}=x_{it1}\theta_{1}+x_{it2}\theta_{2}+\alpha_{i}\zeta_{t}+\varepsilon_{it},\qquad x_{it}=\Gamma_{i}\zeta_{t}+N(0,1).$
We denote the above design as DGP-IFE-L.
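As an illustration, DGP-FE-L can be simulated as follows, interpreting $N(1,3)$ as mean $1$ and variance $3$; this convention and all names are our own assumptions, and the other designs follow by adding $\zeta_{t}$ additively or interactively.

```python
# Illustrative generation of DGP-FE-L.
import numpy as np

def dgp_fe_l(N, T, theta=(1.0, 1.0), varrho=0.5, seed=0):
    rng = np.random.default_rng(seed)
    Gamma = rng.normal(1.0, np.sqrt(3.0), size=N)            # Gamma_i
    A = rng.normal(1.0, np.sqrt(3.0), size=N)                # A_i
    alpha = varrho * Gamma + np.sqrt(1 - varrho ** 2) * A    # individual effect
    x = Gamma[:, None, None] + rng.standard_normal((N, T, 2))
    eps = rng.standard_normal((N, T))                        # idiosyncratic error
    y = x @ np.asarray(theta) + alpha[:, None] + eps
    return y, x
```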
For each scenario, we consider $N=50,100$, $T=10,20$, and $399$ bootstrap
draws for each of the $1000$ Monte Carlo replications. It is worth recalling
that the performance of the TW-GFE estimator is closely linked to the number
of groups chosen for the first-step kmeans clusterings. Even following the
rule outlined in Proposition 1, this number depends on the variability in the
data, which affects how informative $h_{i},w_{t}$ are about the unobserved
heterogeneity, and the user-defined parameter $\gamma$. We account for the
former by allowing for large variances in the composite error term, and for
the latter by running scenarios where $\gamma=1,0.5,0.25$, resulting in an
increasing number of clusters.
Tables 1, 2, 3 and 4 report the results of the Monte Carlo experiments in four
cases. First we compare the OW-FE and the TW-GFE estimators using DGP-FE-L to
study the empirical size, while the power of the proposed test is investigated
under the alternative process described by DGP-IFE-L. We then turn to the
comparison between the TW-FE and the TW-GFE estimators in the setting where
heterogeneity is specified as additive effects under the null hypothesis in
DGP-AFE-L and that of DGP-IFE-L under the alternative. The tables report the
average of the Hausman test $H$ in (5) across simulations, along with the
empirical size based on the quantile of the central $\chi^{2}$ with two
degrees of freedom ($\chi^{2}_{2}$ p.05) and the bootstrap rejection rate
(Boot p.05). We also present the average bias (Bias), standard deviation (SD)
and ratio between standard error and SD for both elements of $\hat{\theta}$
and $\tilde{\theta}$. For the TW-GFE estimator, we also report the average
selected number of groups according to rule outlined in Proposition 1 (Avg
$\hat{K}$ and Avg $\hat{L}$).
As expected, the empirical rejection rate based on the percentile of the
central chi-square distribution does not attain the nominal size (5%), as it
fails to account for the noncentrality parameter arising from the
approximation bias in the TW-GFE. It is in fact worth noticing that, even
with the large number of groups obtained with $\gamma=0.25$, the bias of the
TW-GFE is noticeably larger than that of the OW-FE and TW-FE estimators.
Indeed, the bias of the TW-GFE worsens with fewer clusters, as testified by
the results with $\gamma=0.5,1$ and the resulting values of the Hausman test.
We highlight, however, that $\gamma$ seems to have little effect on the
bootstrap rejection rate. The bootstrap distribution is able to mimic the non-
centrality of the chi-square, giving rise to a rejection rate close to the
nominal one, with an improving performance as $T$ increases. The bootstrap
test also presents good rejection rates when the true data generating process
has a factor structure for the unobserved heterogeneity. Increasing the number
of individuals or enriching the information embodied in the factors may
further increase the power of our test.
Table 1: Size analysis: DGP-FE-L, OW-FE vs TW-GFE
| | $\gamma=0.25$ | | | | $\gamma=0.5$ | | | | $\gamma=1$ | | |
---|---|---|---|---|---|---|---|---|---|---|---|---
| | T=10 | | T=20 | | T=10 | | T=20 | | T=10 | | T=20 |
| $N$ | 50 | 100 | 50 | 100 | 50 | 100 | 50 | 100 | 50 | 100 | 50 | 100
Hausman | | | | | | | | | | | | |
H | 4.251 | 5.932 | 5.478 | 7.814 | 6.385 | 10.055 | 8.558 | 12.390 | 9.742 | 16.719 | 13.061 | 21.244
$\chi^{2}_{2}$ p.05 | 0.248 | 0.379 | 0.301 | 0.481 | 0.401 | 0.643 | 0.478 | 0.673 | 0.569 | 0.818 | 0.638 | 0.832
| Boot p.05 | 0.051 | 0.044 | 0.052 | 0.048 | 0.053 | 0.045 | 0.062 | 0.031 | 0.053 | 0.042 | 0.049 | 0.051
$\hat{\theta}_{1}$ | | | | | | | | | | | | |
Bias | 0.000 | -0.001 | -0.000 | 0.000 | 0.000 | -0.001 | -0.000 | 0.000 | 0.000 | -0.001 | -0.000 | 0.000
SD | 0.048 | 0.033 | 0.032 | 0.023 | 0.048 | 0.033 | 0.032 | 0.023 | 0.048 | 0.033 | 0.032 | 0.023
| SE/SD | 0.935 | 0.967 | 0.985 | 0.966 | 0.935 | 0.967 | 0.985 | 0.966 | 0.935 | 0.967 | 0.985 | 0.966
$\hat{\theta}_{2}$ | | | | | | | | | | | | |
Bias | -0.000 | -0.000 | -0.000 | -0.000 | -0.000 | -0.000 | -0.000 | -0.000 | -0.000 | -0.000 | -0.000 | -0.000
SD | 0.048 | 0.034 | 0.033 | 0.023 | 0.048 | 0.034 | 0.033 | 0.023 | 0.048 | 0.034 | 0.033 | 0.023
| SE/SD | 0.933 | 0.921 | 0.960 | 0.958 | 0.933 | 0.921 | 0.960 | 0.958 | 0.933 | 0.921 | 0.960 | 0.958
$\tilde{\theta_{1}}$ | | | | | | | | | | | | |
Bias | -0.025 | -0.024 | -0.015 | -0.014 | -0.044 | -0.046 | -0.028 | -0.026 | -0.077 | -0.080 | -0.049 | -0.050
SD | 0.057 | 0.038 | 0.039 | 0.027 | 0.067 | 0.044 | 0.047 | 0.032 | 0.082 | 0.056 | 0.057 | 0.039
| SE/SD | 0.860 | 0.907 | 0.863 | 0.893 | 0.784 | 0.844 | 0.757 | 0.776 | 0.715 | 0.734 | 0.664 | 0.689
$\tilde{\theta_{2}}$ | | | | | | | | | | | | |
Bias | -0.025 | -0.024 | -0.015 | -0.015 | -0.045 | -0.045 | -0.028 | -0.026 | -0.078 | -0.080 | -0.049 | -0.050
SD | 0.058 | 0.040 | 0.041 | 0.028 | 0.068 | 0.045 | 0.048 | 0.033 | 0.081 | 0.055 | 0.058 | 0.039
SE/SD | 0.847 | 0.870 | 0.818 | 0.852 | 0.768 | 0.828 | 0.732 | 0.751 | 0.719 | 0.754 | 0.657 | 0.696
Avg $\hat{K}$ | 41.222 | 74.980 | 43.708 | 81.696 | 35.725 | 60.883 | 39.421 | 69.948 | 28.476 | 44.061 | 33.373 | 54.968
Avg $\hat{L}$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000
1000 Monte Carlo (MC) replications. “$H$” is the average of the Hausman test
statistic, across MC replications. “p.05” denotes the rejection rate for a
nominal size of 5%. “$\chi^{2}_{2}$ p.05” is based on the $95^{th}$ percentile
of the central $\chi^{2}_{2}$ distribution. “Boot p.05” is based on the
$95^{th}$ percentile of the empirical distribution determined via 399
bootstrap replications. “Bias” is the mean bias; “SD” and “SE” denote the
standard deviation over the MC replications and the average estimated standard
error, respectively; these are reported for both regressors. “Avg $\hat{K}$”
and “Avg $\hat{L}$” report the average number of groups for individuals and
time occasions obtained in the first step.
Table 2: Power analysis: DGP-IFE-L, OW-FE vs TW-GFE
| | $\gamma=0.25$ | | |
---|---|---|---|---
| | T=10 | | T=20 |
| $N$ | 50 | 100 | 50 | 100
Hausman | | | | |
H | 32.330 | 74.524 | 53.350 | 125.246
$\chi^{2}_{2}$ p.05 | 0.908 | 0.994 | 0.961 | 0.998
| Boot p.05 | 0.682 | 0.889 | 0.838 | 0.960
$\hat{\theta}_{1}$ | | | | |
Bias | 0.260 | 0.250 | 0.261 | 0.255
SD | 0.107 | 0.079 | 0.085 | 0.060
| SE/SD | 0.826 | 0.792 | 0.738 | 0.741
$\hat{\theta}_{2}$ | | | | |
Bias | 0.252 | 0.257 | 0.257 | 0.256
SD | 0.108 | 0.078 | 0.083 | 0.058
| SE/SD | 0.816 | 0.804 | 0.761 | 0.770
$\tilde{\theta}_{1}$ | | | | |
Bias | -0.047 | -0.132 | -0.008 | -0.085
SD | 0.184 | 0.157 | 0.142 | 0.119
| SE/SD | 0.498 | 0.420 | 0.404 | 0.347
$\tilde{\theta}_{2}$ | | | | |
Bias | -0.049 | -0.132 | -0.008 | -0.085
SD | 0.187 | 0.162 | 0.141 | 0.118
| SE/SD | 0.490 | 0.409 | 0.407 | 0.350
Avg $\hat{K}$ | 15.122 | 19.981 | 19.552 | 26.773
Avg $\hat{L}$ | 4.531 | 5.523 | 5.753 | 7.376
1000 Monte Carlo (MC) replications. “$H$” is the average of the Hausman test
statistic, across MC replications. “p.05” denotes the rejection rate for a
nominal size of 5%. “$\chi^{2}_{2}$ p.05” is based on the $95^{th}$ percentile
of the central $\chi^{2}_{2}$ distribution. “Boot p.05” is based on the
$95^{th}$ percentile of the empirical distribution determined via 399
bootstrap replications. “Bias” is the mean bias; “SD” and “SE” denote the
standard deviation over the MC replications and the average estimated standard
error, respectively; these are reported for both regressors. “Avg $\hat{K}$”
and “Avg $\hat{L}$” report the average number of groups for individuals and
time occasions obtained in the first step.
Table 3: Size analysis: DGP-AFE-L, TW-FE vs TW-GFE
| | $\gamma=0.25$ | | | | $\gamma=0.5$ | | | | $\gamma=1$ | | |
---|---|---|---|---|---|---|---|---|---|---|---|---
| | T=10 | | T=20 | | T=10 | | T=20 | | T=10 | | T=20 |
| $N$ | 50 | 100 | 50 | 100 | 50 | 100 | 50 | 100 | 50 | 100 | 50 | 100
Hausman | | | | | | | | | | | | |
H | 12.929 | 26.985 | 15.908 | 31.683 | 13.151 | 29.608 | 18.419 | 43.237 | 9.851 | 20.165 | 17.108 | 42.372
$\chi^{2}_{2}$ p.05 | 0.561 | 0.833 | 0.597 | 0.799 | 0.572 | 0.801 | 0.615 | 0.842 | 0.470 | 0.672 | 0.593 | 0.819
| Boot p.05 | 0.041 | 0.048 | 0.045 | 0.047 | 0.045 | 0.040 | 0.046 | 0.041 | 0.039 | 0.041 | 0.049 | 0.039
$\hat{\theta}_{1}$ | | | | | | | | | | | | |
Bias | 0.000 | -0.001 | -0.000 | 0.000 | 0.000 | -0.001 | -0.000 | 0.000 | 0.000 | -0.001 | -0.000 | 0.000
SD | 0.048 | 0.033 | 0.033 | 0.024 | 0.048 | 0.033 | 0.033 | 0.024 | 0.048 | 0.033 | 0.033 | 0.024
| SE/SD | 0.931 | 0.968 | 0.966 | 0.948 | 0.931 | 0.968 | 0.966 | 0.948 | 0.931 | 0.968 | 0.966 | 0.948
$\hat{\theta}_{2}$ | | | | | | | | | | | | |
Bias | -0.000 | -0.000 | 0.000 | -0.000 | -0.000 | -0.000 | 0.000 | -0.000 | -0.000 | -0.000 | 0.000 | -0.000
SD | 0.050 | 0.034 | 0.033 | 0.023 | 0.050 | 0.034 | 0.033 | 0.023 | 0.050 | 0.034 | 0.033 | 0.023
| SE/SD | 0.899 | 0.917 | 0.945 | 0.960 | 0.899 | 0.917 | 0.945 | 0.960 | 0.899 | 0.917 | 0.945 | 0.960
$\tilde{\theta_{1}}$ | | | | | | | | | | | | |
Bias | -0.117 | -0.157 | -0.070 | -0.098 | -0.109 | -0.174 | -0.077 | -0.132 | -0.048 | -0.115 | -0.064 | -0.138
SD | 0.164 | 0.119 | 0.122 | 0.093 | 0.169 | 0.130 | 0.136 | 0.108 | 0.168 | 0.138 | 0.141 | 0.114
| SE/SD | 0.479 | 0.466 | 0.406 | 0.375 | 0.515 | 0.481 | 0.405 | 0.368 | 0.551 | 0.488 | 0.437 | 0.397
$\tilde{\theta_{2}}$ | | | | | | | | | | | | |
Bias | -0.118 | -0.158 | -0.073 | -0.100 | -0.109 | -0.176 | -0.078 | -0.135 | -0.050 | -0.116 | -0.064 | -0.141
SD | 0.157 | 0.122 | 0.124 | 0.095 | 0.175 | 0.132 | 0.137 | 0.110 | 0.169 | 0.144 | 0.144 | 0.115
| SE/SD | 0.503 | 0.452 | 0.398 | 0.368 | 0.494 | 0.472 | 0.403 | 0.362 | 0.548 | 0.467 | 0.428 | 0.394
Avg $\hat{K}$ | 16.112 | 21.192 | 20.809 | 28.940 | 10.102 | 11.862 | 13.815 | 17.245 | 6.067 | 6.605 | 8.482 | 9.512
Avg $\hat{L}$ | 7.482 | 8.122 | 11.874 | 13.602 | 6.630 | 7.415 | 9.810 | 11.665 | 5.730 | 6.575 | 7.817 | 9.618
1000 Monte Carlo (MC) replications. “$H$” is the average of the Hausman test
statistic, across MC replications. “p.05” denotes the rejection rate for a
nominal size of 5%. “$\chi^{2}_{2}$ p.05” is based on the $95^{th}$ percentile
of a central $\chi^{2}_{2}$ distribution. “Boot p.05” is based on the
$95^{th}$ percentile of the empirical distribution determined via 399
bootstrap replications. “Bias” is the mean bias; “SD” and “SE” denote the
standard deviation over the MC replications and the average estimated standard
error, respectively; these are reported for both regressors. “Avg $\hat{K}$”
and “Avg $\hat{L}$” report the average number of groups for individuals and
time occasions obtained in the first step.
Table 4: Power analysis: DGP-IFE-L, TW-FE vs TW-GFE
| | $\gamma=0.25$ | | |
---|---|---|---|---
| | T=10 | | T=20 |
| $N$ | 50 | 100 | 50 | 100
Hausman | | | | |
H | 27.748 | 66.009 | 44.446 | 108.099
$\chi^{2}_{2}$ p.05 | 0.873 | 0.987 | 0.938 | 0.996
| Boot p.05 | 0.614 | 0.842 | 0.779 | 0.944
$\hat{\theta}_{1}$ | | | | |
Bias | 0.237 | 0.228 | 0.239 | 0.233
SD | 0.108 | 0.080 | 0.089 | 0.062
| SE/SD | 0.812 | 0.786 | 0.709 | 0.718
$\hat{\theta}_{2}$ | | | | |
Bias | 0.230 | 0.236 | 0.235 | 0.234
SD | 0.111 | 0.081 | 0.088 | 0.060
| SE/SD | 0.796 | 0.776 | 0.717 | 0.745
$\tilde{\theta_{1}}$ | | | | |
Bias | -0.047 | -0.132 | -0.008 | -0.085
SD | 0.184 | 0.157 | 0.142 | 0.119
| SE/SD | 0.498 | 0.420 | 0.404 | 0.347
$\tilde{\theta_{2}}$ | | | | |
Bias | -0.049 | -0.132 | -0.008 | -0.085
SD | 0.187 | 0.162 | 0.141 | 0.118
| SE/SD | 0.490 | 0.409 | 0.407 | 0.350
Avg $\hat{K}$ | 15.122 | 19.981 | 19.552 | 26.773
Avg $\hat{L}$ | 4.531 | 5.523 | 5.753 | 7.376
1000 Monte Carlo (MC) replications. “$H$” is the average of the Hausman test
statistic, across MC replications. “p.05” denotes the rejection rate for a
nominal size of 5%. “$\chi^{2}_{2}$ p.05” is based on the $95^{th}$ percentile
of a central $\chi^{2}_{2}$ distribution. “Boot p.05” is based on the
$95^{th}$ percentile of the empirical distribution determined via 399
bootstrap replications. “Bias” is the mean bias; “SD” and “SE” denote the
standard deviation over the MC replications and the average estimated standard
error, respectively; these are reported for both regressors. “Avg $\hat{K}$”
and “Avg $\hat{L}$” report the average number of groups for individuals and
time occasions obtained in the first step.
### 4.2 Probit model
We also investigate the small sample properties of our test in a nonlinear
setting, specifically by considering the probit model. For $i=1,\ldots,N$ and
$t=1,\ldots,T$, we generate samples according to the following equations,
which we denote as DGP-FE-P:
$\displaystyle y_{it}=\mathbf{I}\left(x_{it}\theta+\alpha_{i}+\varepsilon_{it}\geq 0\right),\qquad x_{it}=\kappa\left[\Gamma_{i}+N(0,1)\right],$
where $\mathbf{I}(\cdot)$ is an indicator function,
$\alpha_{i}=\varrho\Gamma_{i}+\sqrt{(1-\varrho^{2})}A_{i}$,
$A_{i},\Gamma_{i}\sim N(1,1)$, $\varrho=0.5$, and $\varepsilon_{it}$ is an
idiosyncratic standard normal error term. The slope parameter $\theta$ is set
equal to 1. Following Hahn and Newey (2004), the variance contributions of
the three elements in the linear index are roughly 0.2, 1, and 1,
respectively, so that we rescale the variance of $x_{it}$ by letting
$\kappa=\sqrt{1/10}$. When we allow for additive individual and time
effects, samples are generated according to
$\displaystyle y_{it}=\mathbf{I}\left(x_{it}\theta+\alpha_{i}+\zeta_{t}+\varepsilon_{it}\geq 0\right),\qquad x_{it}=\kappa\left[\Gamma_{i}+\zeta_{t}+N(0,1)\right],$
where $\zeta_{t}\sim N(1,1)$ and the sum $\alpha_{i}+\zeta_{t}$ is rescaled to
have unit variance. This design is denoted by DGP-AFE-P. We evaluate the power
of the proposed test in scenarios generated under the alternative hypothesis
of interactive fixed effects with one latent factor, that is
$\displaystyle y_{it}=\mathbf{I}\left(x_{it}\theta+\alpha_{i}\zeta_{t}+\varepsilon_{it}\geq 0\right),\qquad x_{it}=\kappa\left[\Gamma_{i}\zeta_{t}+N(0,1)\right],$
where again the product $\alpha_{i}\zeta_{t}$ is rescaled to have unit
variance. We refer to this design as DGP-IFE-P. For each experiment, we
consider $N=50,100$, $T=10,20$, and $399$ bootstrap draws for each of the
$1000$ Monte Carlo replications. When dealing with DGP-FE-P, we try different
values of $\gamma=0.25,0.5,1$ in order to evaluate sensitivity of the test to
the number of groups.
Tables 5 and 6 show the simulation results for the experiments comparing the
OW-FE with the TW-GFE estimator under DGP-FE-P and DGP-IFE-P, and the TW-FE
with the TW-GFE estimator under DGP-AFE-P and DGP-IFE-P, respectively. As
expected, the bootstrap test approaches the correct size under smaller values
of $\gamma$, i.e. when we impose a larger number of groups, while the test
based on the asymptotic critical value fails to account for the noncentrality
parameter of the chi-square distribution. The power analysis shows that our
test behaves well when the OW-FE is contrasted with the TW-GFE under
DGP-IFE-P, while the power decreases drastically when the TW-FE is contrasted
with the TW-GFE under the same scenario. This is due to the ability of the
two-way fixed-effects specification to approximate a model with interacted
effects, as the relatively small bias of the TW-FE estimator with respect to
the TW-GFE clarifies. Similar evidence for the linear model is reported by
Freeman and Weidner (2023). It is useful to recall that the population moments
used to cluster the unobserved heterogeneity include the average of the binary
dependent variable, which may not provide enough information to detect more
complex forms of unobserved heterogeneity. Also, the TW-GFE is less biased
than the OW-FE under both DGP-FE-P and DGP-IFE-P and under all values of
$\gamma$, while, under DGP-AFE-P, the TW-GFE exhibits a larger bias than its
counterpart due to its approximation error.
Table 5: Size and power analyses: Probit model, OW-FE vs TW-GFE
| | DGP-FE-P | | DGP-IFE-P
---|---|---|---|---
| | $\gamma=0.25$ | | $\gamma=0.5$ | | $\gamma=1$ | | $\gamma=0.25$
| | T=10 | | T=20 | | | T=10 | | T=20 | | | T=10 | | T=20 | | | T=10 | | T=20 |
| $N$ | 50 | 100 | 50 | 100 | | 50 | 100 | 50 | 100 | | 50 | 100 | 50 | 100 | | 50 | 100 | 50 | 100
Hausman | | | | | | | | | | | | | | | | | | | |
$H$ | 0.926 | 1.406 | 0.855 | 1.039 | | 2.327 | 4.615 | 2.436 | 4.949 | | 4.049 | 8.140 | 4.673 | 9.276 | | 13.107 | 31.054 | 19.591 | 51.124
$\chi^{2}_{1}$ p.05 | 0.028 | 0.086 | 0.032 | 0.047 | | 0.210 | 0.552 | 0.226 | 0.606 | | 0.448 | 0.827 | 0.539 | 0.886 | | 0.783 | 0.964 | 0.925 | 0.994
| Boot p.05 | 0.040 | 0.027 | 0.050 | 0.039 | | 0.021 | 0.017 | 0.033 | 0.027 | | 0.019 | 0.012 | 0.019 | 0.020 | | 0.731 | 0.950 | 0.835 | 0.988
$\hat{\theta}$ | | | | | | | | | | | | | | | | | | | |
Bias | 0.141 | 0.135 | 0.067 | 0.066 | | 0.141 | 0.135 | 0.067 | 0.066 | | 0.141 | 0.135 | 0.067 | 0.066 | | 0.747 | 0.731 | 0.639 | 0.632
SD | 0.282 | 0.193 | 0.174 | 0.122 | | 0.282 | 0.193 | 0.174 | 0.122 | | 0.282 | 0.193 | 0.174 | 0.122 | | 0.281 | 0.224 | 0.175 | 0.140
| SE/SD | 0.907 | 0.934 | 0.971 | 0.975 | | 0.907 | 0.934 | 0.971 | 0.975 | | 0.907 | 0.934 | 0.971 | 0.975 | | 0.707 | 0.627 | 0.741 | 0.652
$\tilde{\theta}$ | | | | | | | | | | | | | | | | | | | |
Bias | 0.110 | 0.090 | 0.066 | 0.051 | | 0.017 | 0.005 | 0.003 | -0.003 | | -0.066 | -0.080 | -0.049 | -0.053 | | 0.176 | 0.126 | 0.142 | 0.086
SD | 0.284 | 0.193 | 0.184 | 0.125 | | 0.253 | 0.174 | 0.166 | 0.117 | | 0.239 | 0.165 | 0.158 | 0.113 | | 0.307 | 0.202 | 0.203 | 0.138
| SE/SD | 0.912 | 0.924 | 0.940 | 0.961 | | 0.946 | 0.962 | 0.984 | 0.977 | | 0.941 | 0.958 | 0.999 | 0.980 | | 0.832 | 0.847 | 0.874 | 0.869
| Avg $\hat{K}$ | 20.445 | 27.279 | 25.870 | 36.731 | | 13.713 | 16.601 | 18.482 | 23.625 | | 8.555 | 9.640 | 12.172 | 14.160 | | 9.619 | 11.456 | 12.635 | 15.373
| Avg $\hat{L}$ | 1.992 | 1.996 | 2.302 | 2.283 | | 1.281 | 1.249 | 1.362 | 1.338 | | 1.010 | 1.011 | 1.001 | 1.000 | | 6.724 | 7.369 | 10.186 | 11.654
1000 Monte Carlo (MC) replications. “$H$” is the average of the Hausman test
statistic, across MC replications. “p.05” denotes the rejection rate for a
nominal size of 5%. “$\chi^{2}_{1}$ p.05” is based on the $95^{th}$ percentile
of a central $\chi^{2}_{1}$ distribution. “Boot p.05” is based on the
$95^{th}$ percentile of the empirical distribution determined via 399
bootstrap replications. “Bias” is the mean bias, “SD” and “SE” denote the
standard deviation over the MC replications and the average estimated standard
error, respectively. “Avg $\hat{K}$” and “Avg $\hat{L}$” report the average
number of groups for individuals and time occasions obtained in the first
step.
Table 6: Size and power analyses: Probit model, TW-FE vs TW-GFE
| | DGP-AFE-P | | DGP-IFE-P
---|---|---|---|---
| | $\gamma=0.25$ | | $\gamma=0.25$
| | T=10 | | T=20 | | | T=10 | | T=20 |
| $N$ | 50 | 100 | 50 | 100 | | 50 | 100 | 50 | 100
Hausman | | | | | | | | | |
H | 0.924 | 0.847 | 1.574 | 1.868 | | 1.882 | 4.352 | 2.370 | 4.433
$\chi^{2}_{1}$ p.05 | 0.031 | 0.030 | 0.102 | 0.144 | | 0.152 | 0.367 | 0.196 | 0.381
| Boot p.05 | 0.037 | 0.043 | 0.039 | 0.031 | | 0.093 | 0.221 | 0.158 | 0.337
$\hat{\theta}$ | | | | | | | | | |
Bias | 0.210 | 0.166 | 0.108 | 0.092 | | 0.297 | 0.275 | 0.210 | 0.194
SD | 0.316 | 0.192 | 0.182 | 0.130 | | 0.268 | 0.183 | 0.169 | 0.126
| SE/SD | 0.869 | 0.982 | 0.971 | 0.948 | | 0.851 | 0.867 | 0.883 | 0.828
$\tilde{\theta}$ | | | | | | | | | |
Bias | 0.263 | 0.177 | 0.215 | 0.170 | | 0.185 | 0.120 | 0.137 | 0.090
SD | 0.364 | 0.206 | 0.223 | 0.149 | | 0.299 | 0.189 | 0.207 | 0.135
| SE/SD | 0.829 | 0.963 | 0.903 | 0.916 | | 0.859 | 0.905 | 0.851 | 0.888
| Avg $\hat{K}$ | 13.188 | 16.010 | 17.141 | 21.847 | | 9.784 | 11.490 | 12.528 | 15.121
| Avg $\hat{L}$ | 6.824 | 7.466 | 10.412 | 11.897 | | 6.736 | 7.289 | 10.114 | 11.616
1000 Monte Carlo (MC) replications. “$H$” is the average of the Hausman test
statistic, across MC replications. “p.05” denotes the rejection rate for a
nominal size of 5%. “$\chi^{2}_{1}$ p.05” is based on the $95^{th}$ percentile
of a central $\chi^{2}_{1}$ distribution. “Boot p.05” is based on the
$95^{th}$ percentile of the empirical distribution determined via 399
bootstrap replications. “Bias” is the mean bias, “SD” and “SE” denote the
standard deviation over the MC replications and the average estimated standard
error, respectively. “Avg $\hat{K}$” and “Avg $\hat{L}$” report the average
number of groups for individuals and time occasions obtained in the first
step.
### 4.3 Comparison with CRT and BBP tests
Table 7 reports the simulation results for the CRT and BBP tests under the
null and alternative hypotheses described by DGP-FE-L and DGP-IFE-L. In
particular, the CRT test is implemented by considering the null hypothesis
of no factor structure with time-constant individual effects and the
alternative hypothesis of a factor model with one latent factor (DGP-IFE-L).
The test fails to attain the correct size in short panels, as also reported by
Castagnetti et al. (2015a). The BBP test has good size properties in both the
linear and nonlinear frameworks, while it displays remarkably low power, due
to the absence of a dynamic factor structure in DGP-IFE-L, as also discussed
in Section 3.3.2.
Table 7: Size and power analyses: Linear and probit models, CRT and BBP tests
| | | DGP-FE-L | | DGP-IFE-L
---|---|---|---|---
| | | T=10 | | T=20 | | | T=10 | | T=20 |
| | $N$ | 50 | 100 | 50 | 100 | | 50 | 100 | 50 | 100
Linear | CRT | 0.994 | 0.993 | 1.000 | 1.000 | | 0.998 | 0.998 | 1.000 | 1.000
| BBP | 0.077 | 0.065 | 0.065 | 0.048 | | 0.111 | 0.155 | 0.093 | 0.137
Probit | BBP | 0.067 | 0.055 | 0.071 | 0.06 | | 0.135 | 0.132 | 0.128 | 0.173
### 4.4 Robustness analyses
In this section we investigate the finite-sample behavior of the proposed test
under departures from some of the assumptions listed in Section 3.1 that are
needed for the consistency of the TW-GFE estimator. In particular, we explore
two scenarios potentially relevant in empirical applications: the first is
characterized by dynamic factors, representing a violation of Assumption 4;
the second concerns the lack of informativeness of the population moments
about the unobserved heterogeneity, which is needed for an effective
clustering procedure and is formalized by Assumption 2. For both scenarios we
consider linear models, and the choice of the number of groups is based on the
single value $\gamma=0.25$.
#### 4.4.1 Dynamic factors
We introduce the following generating equations (DGP-ADF)
$\displaystyle y_{it}=x_{it1}\theta_{1}+x_{it2}\theta_{2}+\alpha_{i}+\zeta_{t}+\varepsilon_{it},\qquad x_{itj}=\Gamma_{i}+\zeta_{t}+N(0,1),\quad\mathrm{for}\quad j=1,2,$
where the variables are generated as in DGP-AFE-L, except for the factor
$\zeta_{t}$, which is now generated as follows
$\zeta_{t}:\left\{\begin{array}{ll}u_{1}&t=1\\ \rho\zeta_{t-1}+\sqrt{1-\rho^{2}}\,u_{t}&t=2,\dots,T\end{array}\right.$
with $u_{t}\sim N(0,1)$ and $\rho=0.8$. Under DGP-ADF, the time effects
therefore exhibit a dynamic structure, which effectively violates Assumption
4. The size of the test is then evaluated by contrasting the TW-FE with the
TW-GFE estimator. As for power, we generate samples as in DGP-IFE-L with a
dynamic structure for $\zeta_{t}$ (DGP-IDF).
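A minimal sketch of this factor process, scaled so that its stationary variance equals one, is the following; all names are illustrative.

```python
# Sketch of the AR(1) factor in DGP-ADF/DGP-IDF (rho = 0.8 as in the text).
import numpy as np

def dynamic_factor(T, rho=0.8, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(T)
    zeta = np.empty(T)
    zeta[0] = u[0]                                       # zeta_1 = u_1
    for t in range(1, T):
        zeta[t] = rho * zeta[t - 1] + np.sqrt(1 - rho ** 2) * u[t]
    return zeta
```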
Simulation results are reported in Table 8. It is worth noticing that
violating the i.i.d. assumption for the time heterogeneity does not affect the
finite-sample performance of the proposed test. Interestingly, the dynamic
structure seems to bring more information into the clustering procedure, as
testified by the higher number of groups generated in the kmeans
classification step. This provides a better approximation of the unobserved
heterogeneity and, presumably, a smaller bias. The test also has a
satisfactory power performance. As with DGP-IFE-L, under DGP-IDF, when the
TW-GFE is contrasted with the OW-FE and TW-FE estimators, the test is able to
detect time-varying heterogeneity in the first case, while it fails to
capture the interactive structure when an additive specification is assumed.
Table 8: Size and power analyses with a dynamic factor structure: Linear model
| | DGP-ADF (TW-FE vs TW-GFE) | | DGP-IDF (OW-FE vs TW-GFE) | | DGP-IDF (TW-FE vs TW-GFE)
---|---|---|---|---|---|---
| | T=10 | | T=20 | | | T=10 | | T=20 | | | T=10 | | T=20 |
| $N$ | 50 | 100 | 50 | 100 | | 50 | 100 | 50 | 100 | | 50 | 100 | 50 | 100
Hausman | | | | | | | | | | | | | | |
$H$ | 3.515 | 5.098 | 5.029 | 5.592 | | 27.841 | 58.817 | 52.152 | 112.443 | | 22.640 | 48.781 | 42.039 | 93.978
$\chi^{2}_{2}$ p.05 | 0.171 | 0.287 | 0.265 | 0.272 | | 0.859 | 0.938 | 0.885 | 0.954 | | 0.765 | 0.871 | 0.797 | 0.860
| Boot p.05 | 0.036 | 0.050 | 0.051 | 0.046 | | 0.744 | 0.874 | 0.852 | 0.947 | | 0.612 | 0.778 | 0.746 | 0.835
$\hat{\theta}_{1}$ | | | | | | | | | | | | | | |
Bias | -0.001 | 0.001 | 0.001 | -0.001 | | 0.236 | 0.240 | 0.245 | 0.248 | | 0.213 | 0.216 | 0.221 | 0.224
SD | 0.048 | 0.034 | 0.033 | 0.023 | | 0.086 | 0.063 | 0.074 | 0.053 | | 0.088 | 0.066 | 0.079 | 0.055
| SE/SD | 0.929 | 0.936 | 0.947 | 0.967 | | 0.785 | 0.758 | 0.719 | 0.708 | | 0.769 | 0.731 | 0.678 | 0.686
$\hat{\theta}_{2}$ | | | | | | | | | | | | | | |
Bias | 0.002 | -0.000 | 0.000 | -0.000 | | 0.239 | 0.235 | 0.248 | 0.249 | | 0.216 | 0.212 | 0.224 | 0.226
SD | 0.048 | 0.032 | 0.032 | 0.023 | | 0.090 | 0.064 | 0.078 | 0.054 | | 0.094 | 0.067 | 0.080 | 0.057
| SE/SD | 0.928 | 0.984 | 0.986 | 0.965 | | 0.748 | 0.747 | 0.684 | 0.698 | | 0.725 | 0.723 | 0.666 | 0.667
$\tilde{\theta}_{1}$ | | | | | | | | | | | | | | |
Bias | -0.016 | -0.030 | 0.000 | -0.015 | | 0.024 | -0.009 | 0.040 | 0.000 | | 0.024 | -0.009 | 0.040 | 0.000
SD | 0.080 | 0.060 | 0.057 | 0.042 | | 0.161 | 0.149 | 0.148 | 0.151 | | 0.161 | 0.149 | 0.148 | 0.151
| SE/SD | 0.681 | 0.636 | 0.630 | 0.608 | | 0.459 | 0.357 | 0.371 | 0.262 | | 0.459 | 0.357 | 0.371 | 0.262
$\tilde{\theta}_{2}$ | | | | | | | | | | | | | | |
Bias | -0.014 | -0.033 | -0.001 | -0.014 | | 0.027 | -0.009 | 0.040 | 0.001 | | 0.027 | -0.009 | 0.040 | 0.001
SD | 0.081 | 0.058 | 0.057 | 0.040 | | 0.161 | 0.148 | 0.148 | 0.150 | | 0.161 | 0.148 | 0.148 | 0.150
| SE/SD | 0.670 | 0.665 | 0.639 | 0.639 | | 0.460 | 0.360 | 0.370 | 0.264 | | 0.460 | 0.360 | 0.370 | 0.264
Avg $\hat{K}$ | 36.301 | 62.856 | 38.599 | 67.827 | | 11.415 | 15.618 | 9.491 | 12.376 | | 11.415 | 15.618 | 9.491 | 12.376
Avg $\hat{L}$ | 3.668 | 4.501 | 4.858 | 6.318 | | 4.724 | 5.705 | 6.538 | 8.228 | | 4.724 | 5.705 | 6.538 | 8.228
DGP-ADF refers to the generating equations in Section 4.4.1. 1000 Monte Carlo (MC) replications. “$H$” is
the average of the Hausman test statistic, across MC replications. “p.05”
denotes the rejection rate for a nominal size of 5%. “$\chi^{2}_{2}$ p.05” is
based on the $95^{th}$ percentile of a central $\chi^{2}_{2}$ distribution.
“Boot p.05” is based on the $95^{th}$ percentile of the empirical distribution
determined via 399 bootstrap replications. “Bias” is the mean bias, “SD” and
“SE” denote the standard deviation over the MC replications and the average
estimated standard error, respectively. “Avg $\hat{K}$” and “Avg $\hat{L}$”
report the average number of groups for individuals and time occasions
obtained in the first step.
#### 4.4.2 Non informative moments
In the following scenario we relax Assumption 2 by letting the moments of
$x_{itj}$ and $y_{it}$ include non-Lipschitz-continuous functions of the
unobserved heterogeneity. We therefore generate data according to the
following system (DGP-NIM):
$\displaystyle y_{it}=x_{it1}\theta_{1}+x_{it2}\theta_{2}+\alpha_{i}+\varepsilon_{it},\qquad x_{itj}=\Gamma_{i}+N(0,1),\quad\mathrm{for}\quad j=1,2,\qquad\alpha_{i}=\rho\Gamma_{i}+\sqrt{1-\rho^{2}}\,A_{i},$
where $\rho=0.5$ and $\Gamma_{i},A_{i}\sim\sqrt{\mathrm{U}(0,1)}$. In this
setting, we expect the population moments to poorly discriminate the latent
types, thereby not bringing relevant information into the first step and
worsening the performance of the test.
Table 9 shows the results for this experiment. As expected, the poor
approximation of the unobserved heterogeneity is reflected in the distortion
of the bootstrap empirical size. Even though the bias of the TW-GFE estimator
seems limited, the poor performance of the test might stem from a bias in the
variance estimate, due to possibly unaccounted sources of heterogeneity.
Table 9: Size analysis: DGP-NIM, Linear model, OW-FE vs TW-GFE
| | $\gamma=0.25$ | | |
---|---|---|---|---
| | T=10 | | T=20 |
| $N$ | 50 | 100 | 50 | 100
Hausman | | | | |
$H$ | 3.239 | 6.753 | 2.921 | 5.136
$\chi^{2}_{2}$ p.05 | 0.108 | 0.551 | 0.085 | 0.352
| Boot p.05 | 0.011 | 0.009 | 0.033 | 0.017
$\hat{\theta}_{1}$ | | | | |
Bias | 0.000 | 0.001 | 0.001 | -0.001
SD | 0.048 | 0.034 | 0.033 | 0.023
| SE/SD | 0.924 | 0.939 | 0.965 | 0.979
$\hat{\theta}_{2}$ | | | | |
Bias | 0.002 | 0.000 | 0.000 | -0.001
SD | 0.047 | 0.032 | 0.031 | 0.023
| SE/SD | 0.946 | 0.981 | 1.009 | 0.973
$\tilde{\theta}_{1}$ | | | | |
Bias | -0.021 | -0.021 | -0.010 | -0.011
SD | 0.050 | 0.034 | 0.034 | 0.023
| SE/SD | 0.925 | 0.957 | 0.962 | 0.980
$\tilde{\theta}_{2}$ | | | | |
Bias | -0.019 | -0.021 | -0.010 | -0.011
SD | 0.048 | 0.033 | 0.032 | 0.023
SE/SD | 0.950 | 0.981 | 1.013 | 0.984
Avg $\hat{K}$ | 9.862 | 12.273 | 11.635 | 14.806
Avg $\hat{L}$ | 2.747 | 2.694 | 3.416 | 3.349
1000 Monte Carlo (MC) replications. “$H$” is the average of the Hausman test
statistic, across MC replications. “p.05” denotes the rejection rate for a
nominal size of 5%. “$\chi^{2}_{2}$ p.05” is based on the $95^{th}$ percentile
of a central $\chi^{2}_{2}$ distribution. “Boot p.05” is based on the
$95^{th}$ percentile of the empirical distribution determined via 399
bootstrap replications. “Bias” is the mean bias, “SD” and “SE” denote the
standard deviation over the MC replications and the average estimated standard
error, respectively. “Avg $\hat{K}$” and “Avg $\hat{L}$” report the average
number of groups for individuals and time occasions obtained in the first
step.
## 5 Empirical applications
We evaluate the behavior of our test on real-world data in two applications:
one concerning unobserved heterogeneity in the wage determinants of young
working women, the other concerning a gravity equation for the extensive
margin of trade.
### 5.1 Wage of young working women
We estimate a linear panel data model to study the wage determinants of young
working women. The dataset comes from $10$ non-consecutive waves of the NLSY
survey, in which $315$ employed women, not enrolled in school and with
completed education, were interviewed. Interviews were carried out from
$1968$ to $1988$, but only ten non-consecutive periods are considered for each
subject, in order to have data without missing values. The variables taken
into account are: _age_ in the current year; _marital status_; _never
married_; _not_smsa_, equal to 1 for not living in a standard metropolitan
statistical area; _capital city_, equal to 1 for living in a capital city;
_south_, equal to 1 for living in the south of the US; _total work experience_
and _tenure_, both in years. The dependent variable is _ln wage_, measured as
the logarithm of the ratio between the nominal wage and the GNP deflator.
We perform the test in two settings: first contrasting the TW-GFE with the
OW-FE, and then the TW-GFE with the TW-FE. Table 10 shows the estimates of the
parameters of interest when the OW-FE is considered, while Table 11 presents
the estimates when the TW-FE is considered. We perform the test with 399
bootstrap replications and 6 different values of $\gamma$. The results of our
test strongly suggest opting for a specification with time-varying unobserved
heterogeneity: estimating the model with only time-constant individual effects
leads to a systematic rejection of the null hypothesis with a sufficiently
large number of groups, while the pooled model (TW-GFE with one group) leads
to substantially the same results as the OW-FE. In contrast, the test fails to
detect structures for the time-varying unobserved heterogeneity that are more
sophisticated than the additive one.
It is worth noticing that the quantiles of the central chi-square distribution
are not reliable at all, and inference based on them would be highly
misleading. Moreover, the bootstrap critical values for the OW-FE vs TW-GFE
test reveal a finite-sample distortion affecting the $\chi^{2}$ distribution,
which may be caused by the variance over-estimation in the OW model as a
result of the neglected heterogeneity. Finally, we highlight that minimal
values of $\gamma$ are required in this application in order to find a
sufficiently large number of groups, which is necessary to make the test, and
the TW-GFE itself, reliable.
Table 10: Estimation results and test for wage determinants for young working
women: OW-FE vs TW-GFE
| OW-FE | TW-GFE |
---|---|---|---
| | | $\gamma=0.0025$ | $\gamma=0.025$ | $\gamma=0.05$ | $\gamma=0.25$ | $\gamma=0.5$ | $\gamma=1$ |
age | 0.004 | 0.004 | 0.006 | 0.007 | 0.004 | 0.004 | 0.004 |
| (0.002) | (0.003) | (0.002) | (0.002) | (0.002) | (0.002) | (0.002) |
marital status | 0.027 | 0.022 | 0.023 | 0.025 | 0.026 | 0.026 | 0.026 |
| (0.019) | (0.019) | (0.019) | (0.019) | (0.019) | (0.019) | (0.019) |
never married | -0.044 | -0.048 | -0.048 | -0.049 | -0.046 | -0.046 | -0.046 |
| (0.023) | (0.023) | (0.023) | (0.023) | (0.023) | (0.023) | (0.023) |
not_smsa | -0.217 | -0.221 | -0.224 | -0.227 | -0.227 | -0.227 | -0.227 |
| (0.017) | (0.017) | (0.017) | (0.017) | (0.017) | (0.017) | (0.017) |
capital city | -0.036 | -0.032 | -0.044 | -0.042 | -0.046 | -0.046 | -0.046 |
| (0.016) | (0.016) | (0.016) | (0.016) | (0.016) | (0.016) | (0.016) |
south | -0.208 | -0.214 | -0.206 | -0.203 | -0.201 | -0.201 | -0.201 |
| (0.014) | (0.014) | (0.014) | (0.014) | (0.014) | (0.014) | (0.014) |
total experience | 0.018 | 0.037 | 0.025 | 0.024 | 0.018 | 0.018 | 0.018 |
| (0.003) | (0.004) | (0.003) | (0.003) | (0.003) | (0.003) | (0.003) |
tenure | 0.014 | 0.011 | 0.012 | 0.013 | 0.013 | 0.013 | 0.013 |
| (0.002) | (0.002) | (0.002) | (0.002) | (0.002) | (0.002) | (0.002) |
Hausman test value | | 35.811 | 24.232 | 19.862 | 3.998 | 3.998 | 3.998 |
$\chi^{2}_{8}$ crit. | | 15.507 | 15.507 | 15.507 | 15.507 | 15.507 | 15.507 |
Boot crit. | | 16.169 | 9.811 | 9.664 | 9.816 | 10.037 | 9.466 |
K | | 124.000 | 12.000 | 6.000 | 1.000 | 1.000 | 1.000 |
L | | 9.000 | 5.000 | 4.000 | 1.000 | 1.000 | 1.000 |
Standard errors in parentheses. OW-FE refers to the fixed-effects estimator,
while estimates of the TW-GFE are reported for different values of $\gamma$.
“$\chi^{2}_{8}$ crit.” is the $95^{th}$ percentile of the central chi-square
distribution with 8 degrees of freedom. “Boot crit.” is the $95^{th}$
percentile of the empirical distribution determined via 399 bootstrap
replications. K and L are the numbers of estimated groups based on individual
and time moments, respectively.
Table 11: Test and estimation results of wage determinants for young working
women: TW-FE vs TW-GFE
| TW-FE | TW-GFE
---|---|---
| | | $\gamma=0.0025$ | $\gamma=0.025$ | $\gamma=0.05$ | $\gamma=0.25$ | $\gamma=0.5$ | $\gamma=1$
age | 0.010 | 0.006 | 0.007 | 0.006 | 0.004 | 0.004 | 0.004
| (0.002) | (0.003) | (0.002) | (0.002) | (0.002) | (0.002) | (0.002)
marital status | 0.026 | 0.009 | 0.026 | 0.027 | 0.026 | 0.026 | 0.026
| (0.019) | (0.019) | (0.019) | (0.019) | (0.019) | (0.019) | (0.019)
never married | -0.047 | -0.044 | -0.050 | -0.048 | -0.046 | -0.046 | -0.046
| (0.023) | (0.024) | (0.023) | (0.023) | (0.023) | (0.023) | (0.023)
not_smsa | -0.212 | -0.242 | -0.225 | -0.223 | -0.227 | -0.227 | -0.227
| (0.017) | (0.017) | (0.017) | (0.017) | (0.017) | (0.017) | (0.017)
capital city | -0.030 | -0.048 | -0.042 | -0.043 | -0.046 | -0.046 | -0.046
| (0.016) | (0.017) | (0.016) | (0.016) | (0.016) | (0.016) | (0.016)
south | -0.209 | -0.202 | -0.202 | -0.202 | -0.201 | -0.201 | -0.201
| (0.014) | (0.014) | (0.014) | (0.014) | (0.014) | (0.014) | (0.014)
total experience | 0.035 | 0.036 | 0.027 | 0.022 | 0.018 | 0.018 | 0.018
| (0.004) | (0.004) | (0.004) | (0.003) | (0.003) | (0.003) | (0.003)
tenure | 0.011 | 0.011 | 0.012 | 0.013 | 0.013 | 0.013 | 0.013
| (0.002) | (0.002) | (0.002) | (0.002) | (0.002) | (0.002) | (0.002)
Hausman test value | | 9.966 | 20.754 | 36.889 | 49.873 | 49.873 | 49.873
$\chi^{2}_{r}$ crit. | | 15.507 | 15.507 | 15.507 | 15.507 | 15.507 | 15.507
Boot crit. | | 15.887 | 52.448 | 68.533 | 89.087 | 90.186 | 92.863
K | | 122.000 | 12.000 | 6.000 | 1.000 | 1.000 | 1.000
L | | 10.000 | 4.000 | 3.000 | 1.000 | 1.000 | 1.000
Standard errors in parentheses. TW-FE refers to the two-way fixed-effects
estimator, while estimates of the TW-GFE are reported for different values of $\gamma$.
“$\chi^{2}_{r}$ crit.” is the $95^{th}$ percentile of the centered chi-squared
distribution. “Boot crit.” is the $95^{th}$ percentile of the empirical
distribution determined via 399 bootstrap replications. K and L are the number
of estimated groups based on individual and time moments respectively.
### 5.2 The extensive margin of trade
We apply the proposed test to a dataset consisting of a network of countries
to analyze the extensive margin of trade. We use data from Helpman et al.,
(2008) consisting of a cross section of $157$ countries in $1986$; as in Chen
et al., (2021), we eliminate Congo because it did not export to any country
in 1986. A two-way fixed-effects probit model is used to describe the
probability that an exchange between two countries occurs, including
covariates with the related homophily parameters as well as node-specific
heterogeneity. The dependent variable is binary and denotes whether a trade
flow occurs from country $i$ to country $j$. We include a set of geographical
regressors: _Distance_ , which is the logarithm of the geographical distance
(in kilometers) between the capitals of each pair of countries, and _Border_ ,
which is a dummy indicating whether the two countries share a border.
Furthermore, an additional set of variables captures the institutional and
cultural similarities of each pair of countries: _Legal_ , _Language_ , and
_Currency_ are binary variables equal to 1 if countries $i$ and $j$ share the
same legal origin, the same language, and the same currency (or they are in
the same currency union), respectively; _Religion_ is a measure of religious
similarity, and for details on its construction we refer the reader to
Helpman et al., (2008). Finally, _Colonial Ties_ is a dummy variable taking
value 1 if country $i$ colonized country $j$ (or vice versa) and _FTA_ is a
binary variable indicating whether the two countries belong to a common trade
agreement.
Table 12 presents the estimates of the parameters of interest and the outcome
of the proposed test when the TW-FE is contrasted with the TW-GFE. We consider
here a single value of $\gamma=1$, generating $51$ groups for both the
importer and exporter unobserved heterogeneity.
Estimated coefficients for the TW-FE and TW-GFE differ, although those
arising from the TW-FE are consistent with the results provided by the
empirical literature using the same data, while the TW-GFE results are not
viable as they are plagued by both the incidental parameters problem and the
approximation error. The proposed test is based on 399 bootstrap replications
and fails to reject the null hypothesis, thus suggesting that the additive
specification for the unobserved heterogeneity is complex enough to capture
the underlying traits of trade decisions. This result differs from that of
Chen et al., (2021), who, however, consider the intensive margin of trade and
find that the unobserved heterogeneity could be modeled by an interactive
specification.
Table 12: Empirical application for the extensive margin of trade
| TW-FE | TW-GFE
---|---|---
Distance | -0.664 | -0.444
| (0.021) | (0.027)
Border | -0.375 | -0.069
| (0.095) | (0.103)
Legal | 0.094 | -0.153
| (0.029) | (0.062)
Language | 0.297 | 0.323
| (0.038) | (0.051)
Currency | 0.462 | -0.013
| (0.134) | (0.438)
Religion | 0.255 | 0.462
| (0.060) | (0.066)
Colonial ties | 0.343 | 0.351
| (0.285) | (0.307)
FTA | 2.029 | 1.064
| (0.308) | (0.466)
Hausman test value | | 61.383
$\chi^{2}_{r}$ crit. | | 15.507
Boot crit. | | 458.693
K | | 51.000
L | | 51.000
Number of countries: 157; number of dyads: 24,806. Standard errors in
parentheses. TW-FE refers to the two-way fixed-effects estimator. Estimates of
the TW-GFE are reported for $\gamma=1$. “$\chi^{2}_{r}$ crit.” is the $95^{th}$
percentile of the centered chi-squared distribution. “Boot crit.” is the
$95^{th}$ percentile of the empirical distribution determined via 399
bootstrap replications. K and L are the number of estimated groups based on
individual and time moments, respectively.
## 6 Final remarks
We propose a specification test for the form of the unobserved heterogeneity
in panel data models. The test is based on the recently proposed grouped
fixed-effects approach and serves to detect departures from the commonly
assumed time-invariant or additive fixed-effects specifications.
The main advantage of our proposal is that it allows practitioners to avoid
the specification and estimation of models with complex forms of time-varying
heterogeneity, which might pose identification and computational problems in
both linear and nonlinear models. By contrast, the TW-GFE approach is a rather
simple non-iterative two-step strategy, involving unsupervised clustering in
the first step and estimation of group effects in the second.
The proposed approach is a generalized Hausman test whose asymptotic
distribution is a non-central chi-squared because of the bias arising from
incidental parameters, affecting both of the ML estimators being contrasted
(at least in the non-linear case), and from the approximation error induced
by the discretization of the unobserved heterogeneity performed by the GFE
approach. The use of bootstrap critical values successfully corrects the
empirical size of the test and yields satisfactory power properties. A partial
exception is the case of binary choice models with additive fixed effects, for
which the test exhibits low power. We also show that the test maintains good
finite-sample properties even in the presence of violations of the sampling
assumptions, while its use is not advisable when the information in the data
cannot provide a sufficiently good approximation of the unobserved
heterogeneity.
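To make the mechanics of the procedure concrete, the following minimal Python
sketch computes a generalized Hausman statistic and its bootstrap critical
value. The `fit_a`, `fit_b`, and `resample` callables are illustrative
stand-ins for the TW-FE and TW-GFE estimators and the panel resampling scheme,
and the variance of the contrast is approximated by the sum of the two
variance matrices, which is a simplification.

```python
import numpy as np

def hausman_stat(theta_a, theta_b, var_a, var_b):
    """Generalized Hausman statistic H = d' V^{-} d, with d the contrast
    between two estimators; here V is approximated by the sum of the two
    variance matrices (a simplification of Var(d))."""
    d = theta_a - theta_b
    V = var_a + var_b
    return float(d @ np.linalg.pinv(V) @ d)

def bootstrap_critical_value(fit_a, fit_b, resample, B=399, level=0.95):
    """Empirical 95th percentile of the bootstrap distribution of H,
    mirroring the 399-replication scheme reported in the tables above."""
    stats = []
    for _ in range(B):
        panel = resample()        # draw a bootstrap sample of the panel
        ta, Va = fit_a(panel)     # e.g., TW-FE estimates and variance
        tb, Vb = fit_b(panel)     # e.g., TW-GFE estimates and variance
        stats.append(hausman_stat(ta, tb, Va, Vb))
    return float(np.quantile(stats, level))
```

The test then rejects the simpler specification whenever the observed
statistic exceeds the bootstrap quantile rather than the standard
$\chi^{2}$ critical value.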
The empirical applications considered suggest that a specification with time-
varying additive unobserved heterogeneity is needed when evaluating the wage
determinants for young working women, and that a standard gravity equation
with additive importer and exporter fixed-effects is appropriate to model the
trade extensive margin. The proposed test also emerges as a viable alternative
to existing procedures with short panel datasets.
## References
* Andersen, (1970) Andersen, E. B. (1970). Asymptotic properties of conditional maximum-likelihood estimators. Journal of the Royal Statistical Society, Series B, 32:283–301.
* Ando and Bai, (2016) Ando, T. and Bai, J. (2016). Panel data models with grouped factor structure under unknown group membership. Journal of Applied Econometrics, 31(1):163–191.
* Arellano and Hahn, (2007) Arellano, M. and Hahn, J. (2007). Understanding bias in nonlinear panel models: Some recent developments. Econometric Society Monographs, 43:381.
* Bai, (2009) Bai, J. (2009). Panel data models with interactive fixed effects. Econometrica, 77(4):1229–1279.
* Bartolucci et al., (2015) Bartolucci, F., Belotti, F., and Peracchi, F. (2015). Testing for time-invariant unobserved heterogeneity in generalized linear models for panel data. Journal of Econometrics, 184(1):111–123.
* Bartolucci et al., (2012) Bartolucci, F., Farcomeni, A., and Pennoni, F. (2012). Latent Markov models for longitudinal data. CRC Press.
* Bester and Hansen, (2016) Bester, C. A. and Hansen, C. B. (2016). Grouped effects estimators in fixed effects models. Journal of Econometrics, 190(1):197–208.
* (8) Bonhomme, S., Lamadon, T., and Manresa, E. (2022a). Discretizing unobserved heterogeneity. Econometrica, 90(2):625–643.
* (9) Bonhomme, S., Lamadon, T., and Manresa, E. (2022b). Supplement to “discretizing unobserved heterogeneity”. Econometrica supplementary material, 90(2):1–21.
* Bonhomme and Manresa, (2015) Bonhomme, S. and Manresa, E. (2015). Grouped patterns of heterogeneity in panel data. Econometrica, 83(3):1147–1184.
* (11) Castagnetti, C., Rossi, E., and Trapani, L. (2015a). Inference on factor structures in heterogeneous panels. Journal of econometrics, 184(1):145–157.
* (12) Castagnetti, C., Rossi, E., and Trapani, L. (2015b). Testing for no factor structures: On the use of hausman-type statistics. Economics Letters, 130:66–68.
* Cavaliere et al., (2022) Cavaliere, G., Gonçalves, S., and Nielsen, M. Ø. (2022). Bootstrap inference in the presence of bias. arXiv preprint arXiv:2208.02028.
* Chamberlain, (1980) Chamberlain, G. (1980). Analysis of covariance with qualitative data. The Review of Economic Studies, 47:225–238.
* Chen et al., (2021) Chen, M., Fernández-Val, I., and Weidner, M. (2021). Nonlinear factor models for network and panel data. Journal of Econometrics, 220(2):296–324.
* Dzemski, (2019) Dzemski, A. (2019). An empirical model of dyadic link formation in a network with unobserved heterogeneity. Review of Economics and Statistics, 101(5):763–776.
* Fernández-Val and Weidner, (2016) Fernández-Val, I. and Weidner, M. (2016). Individual and time effects in nonlinear panel models with large n, t. Journal of Econometrics, 192(1):291–312.
* Freeman and Weidner, (2023) Freeman, H. and Weidner, M. (2023). Linear panel regressions with two-way unobserved heterogeneity. Journal of Econometrics, 237(1):105498.
* Hahn and Moon, (2010) Hahn, J. and Moon, H. R. (2010). Panel data models with finite number of multiple equilibria. Econometric Theory, 26(3):863–881.
* Hahn and Newey, (2004) Hahn, J. and Newey, W. (2004). Jackknife and analytical bias reduction for nonlinear panel models. Econometrica, 72:1295–1319.
* Hausman, (1978) Hausman, J. A. (1978). Specification tests in econometrics. Econometrica, pages 1251–1271.
* Heckman and Singer, (1984) Heckman, J. and Singer, B. (1984). A method for minimizing the impact of distributional assumptions in econometric models for duration data. Econometrica: Journal of the Econometric Society, pages 271–320.
* Helpman et al., (2008) Helpman, E., Melitz, M., and Rubinstein, Y. (2008). Estimating trade flows: Trading partners and trading volumes. The Quarterly Journal of Economics, 123:441–487.
* Higgins and Jochmans, (2023) Higgins, A. and Jochmans, K. (2023). Bootstrap inference for fixed-effect models. arXiv preprint arXiv:2201.11156.
* Horowitz, (2019) Horowitz, J. L. (2019). Bootstrap methods in econometrics. Annual Review of Economics, 11:193–224.
* Hsiao, (2018) Hsiao, C. (2018). Panel models with interactive effects. Journal of Econometrics, 206(2):645–673.
* Kapetanios et al., (2023) Kapetanios, G., Serlenga, L., and Shin, Y. (2023). Testing for correlation between the regressors and factor loadings in heterogeneous panels with interactive effects. Empirical Economics, pages 1–49.
* Kim and Sun, (2016) Kim, M. S. and Sun, Y. (2016). Bootstrap and k-step bootstrap bias corrections for the fixed effects estimator in nonlinear panel data models. Econometric Theory, 32(6):1523–1568.
* Li et al., (2003) Li, H., Lindsay, B. G., and Waterman, R. P. (2003). Efficiency of projected score methods in rectangular array asymptotics. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65(1):191–208.
* MacKinnon, (2006) MacKinnon, J. G. (2006). Bootstrap methods in econometrics. Economic Record, 82:S2–S18.
* McLachlan and Peel, (2000) McLachlan, G. and Peel, D. (2000). Finite mixture models. Wiley Series in Probability and Statistics. John Wiley & Sons, Inc.
* Moon and Weidner, (2015) Moon, H. R. and Weidner, M. (2015). Linear regression for panel with unknown number of factors as interactive fixed effects. Econometrica, 83(4):1543–1579.
* Moon and Weidner, (2023) Moon, H. R. and Weidner, M. (2023). Nuclear norm regularized estimation of panel regression models.
* Papke and Wooldridge, (2023) Papke, L. E. and Wooldridge, J. M. (2023). A simple, robust test for choosing the level of fixed effects in linear panel data models. Empirical Economics, 64(6):2683–2701.
* Pesaran, (2006) Pesaran, M. H. (2006). Estimation and inference in large heterogeneous panels with a multifactor error structure. Econometrica, 74(4):967–1012.
* Su et al., (2016) Su, L., Shi, Z., and Phillips, P. C. B. (2016). Identifying latent structures in panel data. Econometrica, 84(6):2215–2264.
* Wang et al., (2023) Wang, Y., Phillips, P. C., and Su, L. (2023). Panel data models with time-varying latent group structures. arXiv preprint arXiv:2307.15863.
* Westerlund, (2019) Westerlund, J. (2019). Testing additive versus interactive effects in fixed-t panels. Economics Letters, 174:5–8.
* Wooldridge, (2021) Wooldridge, J. M. (2021). Two-way fixed effects, the two-way mundlak regression, and difference-in-differences estimators. Available at SSRN 3906345.
# Multilingual AMR Parsing with Noisy Knowledge Distillation
This work was supported by Alibaba Group through the Alibaba Innovative
Research (AIR) Program.
Deng Cai♡ Xin Li♠ Jackie Chun-Sing Ho♡ Lidong Bing♠ Wai Lam♡
♡The Chinese University of Hong Kong
♠DAMO Academy, Alibaba Group
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
jackieho<EMAIL_ADDRESS>
###### Abstract
We study multilingual AMR parsing from the perspective of knowledge
distillation, where the aim is to learn and improve a multilingual AMR parser
by using an existing English parser as its teacher. We constrain our
exploration to a strict multilingual setting: there is but one model to parse
all different languages including English. We identify that noisy input and
precise output are the key to successful distillation. Together with extensive
pre-training, we obtain an AMR parser whose performances surpass all
previously published results on four different foreign languages, including
German, Spanish, Italian, and Chinese, by large margins (up to 18.8 Smatch
points on Chinese and on average 11.3 Smatch points). Our parser also achieves
performance on English comparable to that of the latest state-of-the-art
English-only parser.
## 1 Introduction
Abstract Meaning Representation (AMR) Banarescu et al. (2013) is a broad-
coverage semantic formalism that encodes the meaning of a sentence as a
rooted, directed, and labeled graph, where nodes represent concepts and edges
represent relations among concepts. AMR parsing is the task of translating
natural language sentences into their corresponding AMR graphs, which
encompasses a set of natural language understanding tasks, such as named
entity recognition, semantic role labeling, and coreference resolution. AMR
has proved to be beneficial to a wide range of applications such as text
summarization Liao et al. (2018), machine translation Song et al. (2019), and
question answering Kapanipathi et al. (2020); Xu et al. (2021).
One of the most critical features of the AMR formalism is that it abstracts
away from syntactic realization and surface forms. As shown in Figure 1,
different English sentences with the same meaning correspond to the same AMR
graph. Furthermore, there are no explicit alignments between elements (nodes
or edges) in the graph and words in the text. While this property leads to a
distinct difficulty in AMR parsing, it also suggests the potential of AMR to
work as an interlingua Xue et al. (2014); Hajič et al. (2014); Damonte and
Cohen (2018), which could be useful to multilingual applications of natural
language understanding Liang et al. (2020); Hu et al. (2020). As an example,
in Figure 1 we also represent the semantics of semantically-equivalent
sentences in other languages using the same AMR graph. This defines the
multilingual AMR parsing problem we seek to address in this paper.
Figure 1: An example of AMR. Sentences written in English and other languages
share the same meaning and therefore correspond to the same AMR graph.
Multilingual AMR parsing is an extremely challenging task for several
reasons. First, AMR was initially designed for and heavily biased towards
English; thus, parsers have to overcome structural linguistic divergences
among languages Damonte and Cohen (2018); Zhu et al. (2019). Second,
human-annotated resources for training are only available in English and none
are present for other languages. Moreover, since the AMR graph
involves rich semantic labels, the AMR annotation for other languages can be
labor-intensive and unaffordable. Third, current modeling techniques focus
mostly on English. For example, existing AMR aligners Flanigan et al. (2014);
Pourdamghani et al. (2014); Liu et al. (2018) and widely-used pointer-
generator mechanisms Zhang et al. (2019b); Cai and Lam (2019, 2020) rely on
the textual overlap between English words and AMR node values (i.e.,
concepts).
Some initial attempts Damonte and Cohen (2018); Blloshmi et al. (2020); Sheth
et al. (2021) towards multilingual AMR parsing mainly investigated the
construction of pseudo parallel data via annotation projection. In this paper,
we study multilingual AMR parsing from the perspective of knowledge
distillation Buciluǎ et al. (2006); Ba and Caruana (2014); Hinton et al.
(2015); Kim and Rush (2016), where our primary goal is to improve a
multilingual AMR parser by using an existing English parser as its teacher. We
focus on a strict multilingual setting for developing one AMR parser that can
parse all different languages. In contrast to the language-specific (one
parser one language) setting, our setting is more challenging yet more
appealing in practice. Intuitively, knowledge distillation is effective
because the teacher’s output provides a rich training signal for the student
parser. We develop both the teacher parser and the student parser with
language-agnostic seq2seq design and expect the student parser to imitate the
behaviors of the teacher parser (i.e., English parser) when processing
semantically-equivalent input in other languages. We first show that
multilingual seq2seq pre-training, including language model and machine
translation pre-training, provides an excellent starting point for model
generalization across languages. We further capitalize on the idea that the
student should be robust to noisy input and introduce noise by machine
translation for improving student performance. To mitigate the risk that the
student learns the mistakes made by the teacher, the student is then fine-
tuned with gold AMR graphs.
We present experiments on the benchmark dataset created by Damonte and Cohen
(2018), covering four different languages with no training data, including
German, Spanish, Italian, and Chinese. To cover as many languages as possible,
we also include the original English test set in our evaluation. On four zero-
resource languages, our single universal parser consistently outperforms the
previous best results by large margins (+11.3 Smatch points on average and up
to +18.8 Smatch points). Meanwhile, our parser achieves competitive results on
English even compared with the latest state-of-the-art English AMR parser in
the literature.
To sum up, our contributions are listed below:
* •
We study AMR parsing in a strict multilingual setting: there is but one parser
for all different languages including English.
* •
We propose to train a multilingual AMR parser with multiple pre-training and
fine-tuning stages including noisy knowledge distillation.
* •
We obtain a performant multilingual AMR parser, establishing new state-of-the-
art results on multiple languages. We hope our parser can facilitate the
multilingual applications of AMR.
## 2 Background
### 2.1 Prior Work
Cross-lingual AMR parsing is the task of mapping a sentence in any language X
to the AMR graph of its English translation. To date, there is no human-
annotated X-AMR parallel dataset for training. Therefore, one straightforward
solution is to translate the sentences from X into English and then apply an
English parser Damonte and Cohen (2018); Uhrig et al. (2021). However, it is
argued that the method is not informative in terms of the cross-lingual
properties of AMR Damonte and Cohen (2018); Blloshmi et al. (2020). To tackle
cross-lingual AMR parsing, most previous work relies on pre-trained
multilingual language models and silver training data (i.e., pseudo parallel
data).
#### Pre-trained Multilingual Language Model
Previous work shows that language-independent features provided by pre-
trained multilingual language models can boost cross-lingual parsing
performance. For example, Blloshmi et al. (2020) use mBERT Devlin et al.
(2019) and Sheth et al. (2021) employ XLM-R Conneau et al. (2020).
#### Silver Training Data
There are two typical methods for creating silver training examples: (I)
Parsing English to AMR Damonte and Cohen (2018). This approach creates silver
training examples for the foreign language X through an external X-EN parallel
corpus and an existing English AMR parser. The English sentences of the
parallel corpus are parsed using the existing AMR parser. The resultant AMR
graphs are then used as pseudo targets. Note that the target side of the
constructed X-AMR training corpus is of silver quality. (II) Translating
English to X Blloshmi et al. (2020); Sheth et al. (2021). This approach does
not exploit an external X-EN parallel corpus but makes use of the existing
EN-AMR parallel corpus. It uses off-the-shelf machine translation systems to
translate the English side of the EN-AMR pairs into the foreign language X. It
is worth noting that although the source side of the constructed training
examples may contain noise introduced by automatic translation, the target
side consists of gold AMR graphs. However, the number of training examples
created by this approach is limited by the size of the original English
dataset.
### 2.2 Our Task: Multilingual AMR Parsing
Here, we formally define the task of multilingual AMR parsing. As illustrated
in Figure 1, this task aims to predict the semantic graph given the input
sentence in any language. Specifically, we consider five different languages:
German (DE), Spanish (ES), Italian (IT), Chinese (ZH), and English (EN). The
biggest challenge is that we only have access to a set of human-annotated
English training examples. Formally, denote
$\mathbb{Z}=\{\text{DE},\text{ES},\text{IT},\text{ZH}\}$ as the set of
foreign languages other than EN. Our goal is to develop a multilingual parser
for all languages in $\{\text{EN}\}\cup\mathbb{Z}$. However, there only
exists a set of gold EN-AMR training pairs $(x,y)$, where $x$ and $y$ are the
English sentence and AMR graph, respectively. For any language
$\text{X}\in\mathbb{Z}$, there is no gold training example.
Following the recent state-of-the-art practice for English AMR parsing
Bevilacqua et al. (2021), we formulate the problem as a seq2seq task. The
input sentence serves as the source sequence, while the linearization of the
AMR graph is treated as the target sequence.
## 3 Our Parser
### 3.1 Overview
We choose a vanilla seq2seq architecture Vaswani et al. (2017); Bevilacqua et
al. (2021) for our multilingual AMR parser to dispense with the need for
explicit word-to-node alignments. Unlike Damonte and Cohen (2018) and Sheth
et al. (2021), the advantage of an alignment-free parser is that training does
not depend on noisy alignments derived from automatic cross-lingual aligners.
The training of our parser consists of multiple pre-training and fine-tuning
stages. First, we initialize both the encoder and decoder of our parser using
parameters pre-trained for multilingual denoising autoencoding and
multilingual machine translation. We argue that both pre-training stages boost
model generalization across languages and the latter is especially beneficial
to AMR parsing because translating to a meaning representation resembles
machine translation. Then, we fine-tune our parser in two stages. In the first
stage, we aim to transfer the knowledge of a high-performing English AMR
parser to our multilingual parser via knowledge distillation. Finally, we
fine-tune our parser with gold AMR graphs to alleviate the drawback of
over-fitting to the teacher’s mistakes. Each training stage is detailed in section 3.3
and its individual effect is empirically revealed in section 5.1.
Figure 2: Illustration of different training stages. Stage P1 is omitted due
to space limits.
### 3.2 Base Model
#### Model Architecture
We consider the standard Transformer Vaswani et al. (2017) for seq2seq
modeling. The encoder in the Transformer consists of a stack of multiple
identical layers, each of which has two sub-layers: one implements the multi-
head self-attention mechanism and the other is a position-wise fully connected
feed-forward network. The decoder is also composed of a stack of multiple
identical layers. Each layer in the decoder consists of the same sub-layers as
in the encoder layers plus an additional sub-layer that performs multi-head
attention to the output of the encoder stack. See Vaswani et al. (2017) for
more details.
#### Linearization & Post-processing
To formulate AMR parsing as a seq2seq problem, one needs to first obtain the
linearized sequence representation of AMR graphs. To this end, we adopt the
fully graph-isomorphic linearization techniques as in Bevilacqua et al.
(2021). That is, the graph is recoverable from the linearized sequence without
losing adjacency information. We use special tokens <V0>, <V1>, …, <Vn> to
represent variables in the linearized graph and to handle co-referring nodes.
We make a clear distinction between constants and variables, as variable names
do not carry any semantic information. The graph is linearized through a
depth-first traversal starting from the root. For edge ordering, we use the
default order in the release files of AMR datasets as suggested by Konstas et
al. (2017). The bottom right of Figure 2 illustrates the linearization result
of the AMR graph in Figure 1.
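For concreteness, a minimal sketch of such a depth-first, variable-renaming
linearization is given below; the dictionary-based graph encoding is an
illustrative assumption, not the released implementation of Bevilacqua et al.
(2021).

```python
def linearize(graph, root):
    """Depth-first linearization of an AMR-like graph.

    `graph` maps a variable to (concept, [(relation, target), ...]);
    targets are either variables (keys of `graph`) or constants.
    Variables are renamed to <V0>, <V1>, ... in visit order so that
    co-referring nodes reuse the same token."""
    ids, out = {}, []

    def visit(var):
        if var in ids:                # re-entrant node: emit its token only
            out.append(ids[var])
            return
        ids[var] = f"<V{len(ids)}>"
        concept, edges = graph[var]
        out.append(f"( {ids[var]} {concept}")
        for rel, tgt in edges:
            out.append(rel)
            if tgt in graph:
                visit(tgt)
            else:
                out.append(str(tgt))  # constant node
        out.append(")")

    visit(root)
    return " ".join(out)

# e.g., "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))"
amr = {
    "w": ("want-01", [(":ARG0", "b"), (":ARG1", "g")]),
    "b": ("boy", []),
    "g": ("go-02", [(":ARG0", "b")]),
}
print(linearize(amr, "w"))
# ( <V0> want-01 :ARG0 ( <V1> boy ) :ARG1 ( <V2> go-02 :ARG0 <V1> ) )
```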
The output sequence of our seq2seq model may encode an invalid graph. For
example, the parenthesis parity may be broken, resulting in an incomplete
graph. To ensure the validity of the graph produced in parsing,
post-processing steps such as parenthesis parity restoration and invalid
segment removal are introduced. We use the pre- and post-processing scripts
provided by Bevilacqua et al. (2021) (https://github.com/SapienzaNLP/spring).
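As a minimal sketch of the parenthesis parity restoration step (the released
scripts may differ in details):

```python
def restore_parens(tokens):
    """Drop unmatched ')' and append any missing ')' so that the
    linearized sequence encodes a well-formed, complete graph."""
    depth, fixed = 0, []
    for tok in tokens:
        if tok == ")":
            if depth == 0:           # unmatched closer: skip it
                continue
            depth -= 1
        elif tok == "(":
            depth += 1
        fixed.append(tok)
    fixed.extend([")"] * depth)      # close any graph left open
    return fixed
```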
### 3.3 Training Stages
We now clarify the four different training stages. The whole training process
is referred to as P1$\rightarrow$P2$\rightarrow$F3$\rightarrow$F4.
#### P1: Multilingual Language Model Pre-training
Pre-trained multilingual language representations such as mBERT Devlin et al.
(2019) have greatly improved performance across many cross-lingual language
understanding tasks. For cross-lingual AMR parsing, in particular, Blloshmi et
al. (2020) used mBERT (bert-base-multilingual-cased) Devlin et al. (2019)
while Sheth et al. (2021) employed XLM-R (xlm-roberta-large) Conneau et al.
(2020) to provide language-independent features. Unlike previous work, we
argue that such encoder-only pre-trained models are not the most suitable
choice for our seq2seq parser. Instead, we adopt mBART, an encoder-decoder
denoising language model pre-trained with monolingual corpora in many
languages Liu et al. (2020b), to initialize both the encoder and decoder of
our seq2seq parser.
#### P2: Multilingual Machine Translation Pre-training (MMT-PT)
The task of multilingual machine translation (MMT) is to learn one single
model to translate between various language pairs. Essentially, natural
languages can be considered as informal meaning representations compared to
formal meaning representations such as AMR. On the other hand, AMR can be
regarded as a special language. The above observations connect the dots
between MMT and multilingual AMR parsing, both of which model the process of
digesting the semantics in one form and conveying the same semantics in
another form. Therefore, we argue that pre-training our parser using the MMT
task should be helpful. In fact, the usefulness of MT pre-training has also
been validated in English AMR parsing Xu et al. (2020). In practice, we
directly use the mBARTmmt checkpoint Tang et al. (2020), an MMT model covering
50 languages that is trained from mBART.
#### F3: Knowledge Distillation Fine-tuning (KD-FT)
Motivated by the fact that the parsing accuracy on English is significantly
better than those on other languages, we propose to reduce the performance gap
via knowledge distillation Kim and Rush (2016). Specifically, we first pre-
train a high-performance AMR parser for English and treat it as the teacher
model. By considering our multilingual AMR parser as the student model, the
goal is to transfer the knowledge of the teacher model to the student model.
Intuitively, when feeding an English sentence to the teacher model, the
student model, which receives its translation as input, should imitate the
behaviors of the teacher model and make similar predictions. The strategies we
adopt to achieve this goal are detailed in section 3.4.
#### F4: Gold AMR Graph Fine-tuning (Gold-FT)
Note that in the knowledge distillation stage, the student parser is trained
to match the predictions of the teacher model. A potential risk of such
knowledge distillation is that the mistakes made by the teacher model may be
propagated to the student model as well. Another fine-tuning stage, which we
found useful for alleviating the risk, is to further fine-tune our parser with
gold AMR graphs. Following Blloshmi et al. (2020); Sheth et al. (2021), we
transform the gold standard English datasets into other languages using MT
models for fine-tuning our multilingual AMR parser.
### 3.4 Knowledge Distillation
Knowledge distillation (KD) refers to a class of techniques for training a
student model to imitate a teacher model, achieving close or even better performance.
In contrast to most KD applications that focus on reducing the performance gap
caused by architectural differences, our primary goal is to minimize the
mismatch of model behaviors across languages. That is, we expect the student
and teacher to behave similarly even with different input languages.
Recall that we formulate AMR parsing as a seq2seq problem with the standard
maximum likelihood estimation (MLE) training objective:
$L_{\text{MLE}}=\sum_{t=1}^{|y|}\log p(y_{t}|y_{:<t},x)$ (1)
where $y_{t}$ denotes the $t$-th token in the linearized AMR sequence. One
natural and common method for KD is to replace the discrete target with the
soft token-level distributions provided by the teacher model
$p_{T}(y_{t}|y_{:<t},x^{*})$.
$L_{\text{token}}=\sum_{t=1}^{|y|}\text{KL}(p(y_{t}|y_{:<t},x),p_{T}(y_{t}|y_{:<t},x^{*}))$
where KL computes the Kullback–Leibler divergence between two distributions.
We use $x^{*}$ and $x$ to highlight that the input sentences are in different
languages. The above method is referred to as token-level KD as it attempts to
match the local token distributions of the teacher model. Opposed to token-
level KD, sequence-level KD Kim and Rush (2016) allows knowledge transfer at
the sequence level: $L_{\text{seq}}=\text{KL}(p(y|x),p_{T}(y|x^{*}))$. Due to the
intractability of sequence-level distribution computation, following Kim and
Rush (2016), we replace the teacher’s distribution with its mode.
Specifically, we use beam search to approximate the teacher’s most probable
output, which is then used as the target to train the student model as in Eq.
1.
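For concreteness, a minimal PyTorch-style sketch of the two KD variants
follows; tensor shapes and the Hugging Face-style `generate` call on the
teacher are illustrative assumptions.

```python
import torch.nn.functional as F

def token_level_kd(student_logits, teacher_logits, pad_mask):
    """Token-level KD: KL between the teacher's and student's next-token
    distributions, averaged over non-pad target positions.
    Logits: (batch, len, vocab); pad_mask: (batch, len), 1 = real token."""
    s_logp = F.log_softmax(student_logits, dim=-1)
    t_prob = F.softmax(teacher_logits, dim=-1)
    kl = F.kl_div(s_logp, t_prob, reduction="none").sum(-1)  # (batch, len)
    return (kl * pad_mask).sum() / pad_mask.sum()

def sequence_level_kd_targets(teacher, english_batch, num_beams=5):
    """Sequence-level KD (Kim and Rush, 2016): approximate the teacher's
    mode by beam search; the decoded linearized graphs then serve as MLE
    targets (Eq. 1) for the student fed with (noisy) non-English input."""
    return teacher.generate(**english_batch, num_beams=num_beams)
```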
One appealing property of sequence-level KD is that it does not require gold
AMR graphs. Therefore, it can be performed with an external X-EN parallel
corpus at scale. However, the inherent noise in the teacher’s output hampers
training, with the student often being prone to hallucination Liu et al.
(2020a). To alleviate this problem, we propose to also inject noise into the
input side of the student model. We find that automatic translation can serve
as an effective noise generator for multilingual AMR parsing. That is, instead
of using gold translations, we feed automatic machine translations to the
student model. We find that the noise introduced by machine translation
performs better than random noise, likely because the translations preserve
the most salient semantics.
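A minimal sketch of this noisy-input generation with an off-the-shelf OPUS-MT
model (via the transformers library; the checkpoint name follows the
Helsinki-NLP naming on the Hugging Face hub) is given below.

```python
from transformers import MarianMTModel, MarianTokenizer

def mt_noisy_inputs(english_sentences, tgt="de"):
    """Produce noisy student inputs: translate gold English sentences into
    the student's language with an off-the-shelf OPUS-MT model, so that
    translation errors act as semantics-preserving noise."""
    name = f"Helsinki-NLP/opus-mt-en-{tgt}"
    tokenizer = MarianTokenizer.from_pretrained(name)
    model = MarianMTModel.from_pretrained(name)
    batch = tokenizer(english_sentences, return_tensors="pt", padding=True)
    out = model.generate(**batch)
    return tokenizer.batch_decode(out, skip_special_tokens=True)
```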
## 4 Experimental Setup
### 4.1 Datasets
#### Gold Data
Following conventions, we use the benchmark dataset created in Damonte and
Cohen (2018) as our testbed. This dataset contains human translations of the
test set of AMR2.0 dataset (LDC2017T10) in German (DE), Spanish (ES), Italian
(IT), and Chinese (ZH). For a more complete multilingual setup, we also
include the original English (EN) test set for evaluation. The gold training
corpus in our experiments is the training set of AMR2.0, which contains
36,521 EN-AMR pairs.
#### Silver Data
For other foreign languages (DE, ES, IT, and ZH), we construct silver training
data following Blloshmi et al. (2020). Specifically, we use OPUS-MT Tiedemann
and Thottingal (2020)
(https://huggingface.co/transformers/model_doc/marian.html), an off-
the-shelf translation tool, to translate English sentences in AMR2.0 to other
foreign languages. To ensure the quality of silver data, we filter out data
with less accurate translations via back-translation consistency check. That
is, the translation quality is measured by the cosine similarity between the
original English sentence and its back-translated counterpart using LASER
Artetxe and Schwenk (2019). We refer readers to Blloshmi et al. (2020) for an
exhaustive description of the data filtering process. Detailed statistics of
our training, dev, and test sets are shown in Table 1.
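A minimal sketch of this back-translation consistency check follows; `embed`
stands in for the LASER sentence encoder, and the threshold value is purely
illustrative.

```python
import numpy as np

def consistency_filter(triples, embed, threshold=0.8):
    """Back-translation consistency check: keep a silver example only if
    the original English sentence and its back-translation are close in
    embedding space. `triples` holds (en, x, back_translated_en) items;
    `embed` stands in for the LASER encoder."""
    kept = []
    for en, x, back_en in triples:
        a, b = embed(en), embed(back_en)
        cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        if cos >= threshold:
            kept.append((x, en))  # x is paired with the gold AMR of en
    return kept
```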
#### Knowledge Distillation Data
For the knowledge distillation stage, we use 320K English sentences in the
Europarl corpus Koehn (2005), which contains parallel sentence pairs of
En$\Leftrightarrow$DE, En$\Leftrightarrow$ES, and En$\Leftrightarrow$IT.
Unless otherwise specified, we use sequence-level KD with noisy input from
OPUS-MT. Note that essentially our noisy KD only requires monolingual English
data. Nevertheless, we choose Europarl following Damonte and Cohen (2018);
Blloshmi et al. (2020) and use the gold translations as noise-free input to
demonstrate the impact of our noisy KD comparatively (section 5.2).
### 4.2 Settings
We differentiate two settings for training and evaluating multilingual AMR
parsing.
* •
Language-specific. For each target language, a language-specific parser is
trained.
* •
Multilingual. One single parser is trained to parse all target languages.
While this paper focuses on the multilingual setting, we also report the
results of the language-specific parsers in previous work Damonte and Cohen
(2018); Blloshmi et al. (2020); Sheth et al. (2021) for comparative reference.
### 4.3 Models
#### Model Variants
Our full training pipeline consists of multiple pre-training and fine-tuning
stages. To study the effect of each training stage, we implement a series of
model variants:
* •
w/o MMT-PT. To measure the help from MMT-PT, we remove the second pre-training
stage (P2). The training process becomes P1$\rightarrow$F3$\rightarrow$F4.
* •
w/o KD-FT. To show the benefits from KD, we conduct an ablation experiment
where the KD-FT stage (F3) is skipped. The training process becomes
P1$\rightarrow$P2$\rightarrow$F4.
* •
w/o Gold-FT. To validate the necessity of the fine-tuning with gold AMR graph,
we also report the model results without the final Gold-FT (F4) stage. The
training process is then P1$\rightarrow$P2$\rightarrow$F3.
* •
w/o MMT-PT & KD-FT. We exclude both the MMT-PT (P2) stage and the KD-FT (F3)
stage. This variant (P1$\rightarrow$F4) is reminiscent of the best-performing
model of Blloshmi et al. (2020) that fine-tunes multilingual language model
with silver training data.
* •
w/o MMT-PT & Gold-FT. We also report the model performance without MMT-PT and
Gold-FT for reference (P1$\rightarrow$F3).
Language | Train | Dev | Test
---|---|---|---
English(EN) | 36,521∗ | 1,368∗ | 1,371∗
German(DE) | 34,415 | 1,319 | 1,371∗
Spanish(ES) | 34,552 | 1,325 | 1,371∗
Italian(IT) | 34,521 | 1,322 | 1,371∗
Chinese(ZH) | 33,221 | 1,311 | 1,371∗
Table 1: The number of instances per language and for each data split. ∗ marks gold quality; otherwise silver quality.
Model | | Smatch
---|---|---
| DE | ES | IT | ZH | EN | AVG${}_{\text{X}}$ | AVG
Language-Specific
Damonte and Cohen (2018) | | 39.0 | 42.0 | 43.0 | 35.0 | - | 39.8 | -
Blloshmi et al. (2020) | | 53.0 | 58.0 | 58.1 | 43.1 | - | 53.1 | -
Sheth et al. (2021)$\dagger$ | | 62.7 | 67.9 | 67.4 | - | - | - | -
Multilingual
Blloshmi et al. (2020)$\dagger$ | | 52.1 | 56.2 | 56.7 | - | - | - | -
Blloshmi et al. (2020) | | 49.9 | 53.2 | 53.5 | 41.0 | - | 49.4 | -
Ours | (P1$\rightarrow$P2$\rightarrow$F3$\rightarrow$F4) | 73.1 | 75.9 | 75.4 | 61.9 | 83.9 | 71.6 | 74.0
w/o MMT-PT | (P1$\rightarrow$F3$\rightarrow$F4) | 72.4 | 75.6 | 75.4 | 60.6 | 83.3 | 71.0 | 73.5
w/o KD-FT | (P1$\rightarrow$P2$\rightarrow$F4) | 71.8 | 74.5 | 73.8 | 61.0 | 82.6 | 70.3 | 72.7
w/o Gold-FT | (P1$\rightarrow$P2$\rightarrow$F3) | 70.9 | 74.0 | 73.1 | 59.5 | 82.4 | 69.4 | 72.0
w/o MMT-PT & KD-FT | (P1$\rightarrow$F4) | 70.8 | 73.8 | 73.2 | 59.9 | 81.8 | 69.4 | 71.9
w/o MMT-PT & Gold-FT | (P1$\rightarrow$F3) | 70.0 | 73.3 | 72.7 | 58.4 | 81.4 | 68.6 | 71.2
State-of-the-art English-only Parser
Bevilacqua et al. (2021)$\ddagger$ | | - | - | - | - | 84.3 | - | -
Table 2: Smatch scores on test sets. AVG${}_{\text{X}}$ and AVG denote the
averages over zero-resource languages (DE, ES, IT, and ZH) and all languages,
respectively. $\dagger$ indicates that the results do not include ZH.
$\ddagger$ marks that we report the best score without graph
re-categorization, considering that our models do not use graph
re-categorization either. (Graph re-categorization is a popular technique for
reducing the complexity of AMR graphs, which involves manual effort for
hand-crafting rules. Recent work Bevilacqua et al. (2021) points out that
graph re-categorization may harm the generalization ability to out-of-domain
data.)
#### Implementation Details
Following Bevilacqua et al. (2021), we make slight modifications to the
vocabulary of mBART to better suit linearized AMRs. Specifically, we
augment the original vocabulary of mBART with the names of AMR relations and
frames occurring at least 5 times in the gold training corpus. The augmented
vocabulary allows a more compact target sequence after tokenization. As
introduced in section 3.3, the first two pre-training stages are out of scope
for this paper and we directly load pre-trained model checkpoints,
mBART (facebook/mbart-large-50) Liu et al. (2020b) and
mBARTmmt (facebook/mbart-large-50-many-to-many-mmt) Tang et al. (2020), from
Huggingface’s transformers library Wolf et al. (2020). At each fine-tuning
stage, models are trained for up to 30,000 steps with a batch size of 5,000
graph linearization tokens, with RAdam Liu et al. (2019) optimizer and a
learning rate of 1e-5. Dropout is set to 0.25. We do model selection according
to the performance on dev sets. At prediction time, we set beam size to 5. The
teacher model is separately trained and obtains 84.2 Smatch score on the
English test set, which is close to the recent state-of-the-art result
Bevilacqua et al. (2021). We release our code, data, and models at
https://github.com/jcyk/XAMR.
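A minimal sketch of the vocabulary augmentation with the transformers library
is given below; the token-selection heuristic is a rough illustration of the
frequency rule stated above rather than our exact rule.

```python
from collections import Counter
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

CKPT = "facebook/mbart-large-50-many-to-many-mmt"

def augment_vocab(linearized_graphs, min_freq=5):
    """Add frequent AMR relations (e.g., ':ARG0') and frames (e.g.,
    'want-01') to the mBART vocabulary so that each becomes a single
    token; the selection heuristic below is a rough illustration."""
    counts = Counter(tok for g in linearized_graphs for tok in g.split())
    new_tokens = [t for t, c in counts.items()
                  if c >= min_freq and (t.startswith(":") or "-" in t)]
    tokenizer = AutoTokenizer.from_pretrained(CKPT)
    model = AutoModelForSeq2SeqLM.from_pretrained(CKPT)
    tokenizer.add_tokens(new_tokens)
    model.resize_token_embeddings(len(tokenizer))
    return tokenizer, model
```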
## 5 Experimental Results
The performance of AMR parsing is conventionally measured by Smatch score Cai
and Knight (2013), which quantifies the maximum overlap between two AMR
graphs. The reported results are averaged over 3 runs with different random
seeds.
### 5.1 Main Results
In Table 2, we present the Smatch scores of our models and the best-performing
models in the current literature. Our model with the full training pipeline
achieves new state-of-the-art performance on all four zero-resource
languages, substantially outperforming all previous results. Concretely, the
performance gains over the previous best results Sheth et al. (2021) are 10.4,
8.0, 8.0, and 18.8 Smatch points on German, Spanish, Italian, and Chinese
respectively. This is even more remarkable given that the previous best
results are achieved via a set of language-specific parsers, while ours are
obtained by one single multilingual parser. Notably, our multilingual parser
also obtains performance on English close to that achieved by the
state-of-the-art English-only parser. These results are encouraging for
developing an AMR parser in a strict multilingual setting (i.e., using one
parser for all languages).
The results of our ablated model variants further reveal the source of
performance gains. As seen, each of MMT-PT, KD-FT, and Gold-FT makes an
indispensable contribution to the superior performance. Skipping any of them
leads to a considerable performance drop, and removing two further degrades
the model performance. Concretely, the averaged Smatch score across all
languages (AVG) decreases by 0.5 points when removing MMT-PT, which confirms
our hypothesis that MMT is a beneficial pre-training objective for
multilingual AMR parsing. It is also observed that the AVG score drops from
74.0 to 72.7 ($-$1.3 points) when skipping KD-FT. In other words, introducing
KD-FT boosts the performance by 1.3 Smatch points on average. The improvement
is striking since Ours w/o KD-FT is already a very strong baseline
(AVG$=$72.7). Lastly, by comparing the results of Ours w/o Gold-FT and Ours,
we can see that appending Gold-FT to the preceding training stages yields a
gain of 2.0 AVG points. This demonstrates that KD-FT alone is not sufficient
and that fine-tuning with gold AMR graphs has a complementary effect. Another
interesting finding is that even our worst-performing variant surpasses the
previous best methods, which validates that the pre-trained encoder-decoder
architecture, mBART, is more effective for multilingual AMR parsing than the
encoder-only pre-trained models used in prior work.
### 5.2 Discussions
Now we delve into more discussions on our key innovation, i.e., the knowledge
distillation stage.
#### Effect of Different Knowledge Distillation Methods
As introduced in section 3.4, there are two kinds of knowledge distillation
(KD) methods for seq2seq tasks: token-level KD (tok) and sequence-level KD
(seq). In Table 3, we compare tok, seq, and their combination (seq+tok). For
seq+tok, we train the student on teacher-generated graphs but still use a
token-level KL term between the teacher/student. Note that tok can only
utilize data with gold AMR graphs (i.e., the constructed silver training
data), while seq and seq+tok leverage additional English sentences. Therefore,
we also report the result of seq using the same English sentences as tok,
denoted as seq∗. As seen, seq performs much better than tok, and their
combination does not bring further improvement. However, seq∗ only gives a
similar result to tok. These results show that training on more data is
crucial and using seq alone is sufficient for knowledge transfer.
#### Effect of Noise for Knowledge Distillation
Next, we study the effect of noise during knowledge distillation. Recall that
we use automatic machine translation to generate noisy input for the student
model. To show that noise is an important ingredient for superior performance,
we also conduct experiments where the reference translations in Europarl are
used as noise-free input to the student. Also, to show that the noise from MT
is non-trivial, we further employ BART-style random noise Lewis et al. (2020)
for comparison. BART-style noise masks text spans in the input, and we tune
the rate of word deletion. The results are presented in Table 4. We show that
MT noise is indeed helpful and that its role cannot be replaced by simple
random noise.
Method | DE | ES | IT | ZH | EN | AVG
---|---|---|---|---|---|---
tok | 71.8 | 75.1 | 74.0 | 60.9 | 82.7 | 72.9
seq | 73.1 | 75.9 | 75.4 | 61.9 | 83.9 | 74.0
tok \+ seq | 73.1 | 75.8 | 75.3 | 61.6 | 83.9 | 73.9
seq∗ | 71.9 | 75.0 | 74.1 | 61.2 | 82.9 | 73.0
Table 3: Comparison of different KD methods.
Noise | DE | ES | IT | ZH | EN | AVG
---|---|---|---|---|---|---
None | 72.3 | 75.3 | 74.8 | 61.3 | 83.1 | 73.4
Word deletion
10% | 72.4 | 75.1 | 74.6 | 61.3 | 83.5 | 73.4
15% | 72.4 | 75.1 | 74.7 | 61.6 | 83.3 | 73.4
20% | 72.7 | 75.6 | 75.1 | 61.3 | 83.7 | 73.7
25% | 72.5 | 75.2 | 74.3 | 61.1 | 83.3 | 73.3
30% | 72.5 | 75.3 | 74.7 | 61.3 | 83.5 | 73.4
MT | 73.1 | 75.9 | 75.4 | 61.9 | 83.9 | 74.0
Table 4: Comparison of different noise generators. Word deletion k%: randomly mask k% of words.
Figure 3: Comparison of different data sizes for KD.
#### Effect of Data Sizes for Knowledge Distillation
Lastly, we study the relation between model performance and the size of
monolingual data used for KD. Figure 3 shows that the Smatch scores
(AVG${}_{\text{X}}$ and AVG) grow approximately logarithmically with the data
size for KD.
## 6 Related Work
#### Cross-lingual AMR Parsing
AMR Banarescu et al. (2013) is a semantic formalism initially designed for
encoding the meanings of English sentences. Over the years, a number of
preliminary studies have investigated the potential of AMR to work as an
interlingua Xue et al. (2014); Hajič et al. (2014); Anchiêta and Pardo (2018);
Zhu et al. (2019). These works attempt to refine and align English AMR-like
semantic graphs labeled in different languages. Damonte and Cohen (2018) show
that it is possible to use the original AMR annotations devised for English
as the representation for equivalent sentences in other languages, and they
recently released a cross-lingual AMR evaluation benchmark Damonte and Cohen
(2020).
Cross-lingual AMR parsing suffers severely from the data scarcity issue; there
is no gold-annotated training data for languages other than English. Damonte
and Cohen (2018) propose to build silver training data based on external
bitext resources and English AMR parser. Blloshmi et al. (2020) find that
translating the source side of existing English AMR dataset into other target
languages produces better silver training data. Sheth et al. (2021) focus on
improving cross-lingual word-to-node alignment for training cross-lingual AMR
parsers that rely on explicit alignment. Our work follows the alignment-free
seq2seq formulation Barzdins and Gosko (2016); Konstas et al. (2017); Van
Noord and Bos (2017); Peng et al. (2017); Zhang et al. (2019a); Ge et al.
(2019); Bevilacqua et al. (2021) and we alternatively study this problem from
the perspective of knowledge distillation, which provides a new way to enable
multilingual AMR parsing.
#### Knowledge Distillation for Sequence Generation
Knowledge distillation (KD) is a classic technique originally proposed for
model compression Buciluǎ et al. (2006); Ba and Caruana (2014); Hinton et al.
(2015). KD suggests training a (smaller) student model to mimic a (larger)
teacher model, by minimizing the loss (typically cross-entropy) between the
teacher/student predictions Romero et al. (2015); Yim et al. (2017); Zagoruyko
and Komodakis (2017). KD has been successfully applied to various natural
language understanding tasks Kuncoro et al. (2016); Hu et al. (2018); Sanh et
al. (2019). For sequence generation tasks, Kim and Rush (2016) first introduce
sequence-level KD, which aims to mimic the teacher’s actions at the sequence-
level. KD has been proved useful in a range of sequence generation tasks such
as machine translation Freitag et al. (2017); Tan et al. (2019), non-
autoregressive text generation Gu et al. (2017); Zhou et al. (2019), and text
summarization Liu et al. (2020a). To the best of our knowledge, our paper is
the first work to investigate the potential of knowledge distillation in the
context of cross-lingual AMR parsing.
## 7 Conclusion
We presented a multilingual AMR parser that significantly advances the state-
of-the-art parsing accuracies on multiple languages. Notably, the superior
results are achieved with one single AMR parser. Our parser is trained with
multiple pre-training and fine-tuning stages including a noisy knowledge
distillation stage. We hope our work can facilitate the application of AMR in
multilingual scenarios.
## References
* Anchiêta and Pardo (2018) Rafael Anchiêta and Thiago Pardo. 2018. Towards AMR-BR: A SemBank for Brazilian Portuguese language. In _Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)_ , Miyazaki, Japan. European Language Resources Association (ELRA).
* Artetxe and Schwenk (2019) Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. _Transactions of the Association for Computational Linguistics_ , 7:597–610.
* Ba and Caruana (2014) Lei Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? In _Proceedings of the 27th International Conference on Neural Information Processing Systems-Volume 2_ , pages 2654–2662.
* Banarescu et al. (2013) Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In _Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse_ , pages 178–186, Sofia, Bulgaria. Association for Computational Linguistics.
* Barzdins and Gosko (2016) Guntis Barzdins and Didzis Gosko. 2016. RIGA at SemEval-2016 task 8: Impact of Smatch extensions and character-level neural translation on AMR parsing accuracy. In _Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)_ , pages 1143–1147, San Diego, California. Association for Computational Linguistics.
* Bevilacqua et al. (2021) Michele Bevilacqua, Rexhina Blloshmi, and Roberto Navigli. 2021. One SPRING to rule them both: Symmetric AMR semantic parsing and generation without a complex pipeline. In _Proceedings of AAAI_.
* Blloshmi et al. (2020) Rexhina Blloshmi, Rocco Tripodi, and Roberto Navigli. 2020. XL-AMR: Enabling cross-lingual AMR parsing with transfer learning techniques. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 2487–2500, Online. Association for Computational Linguistics.
* Buciluǎ et al. (2006) Cristian Buciluǎ, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In _Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining_ , pages 535–541.
* Cai and Lam (2019) Deng Cai and Wai Lam. 2019. Core semantic first: A top-down approach for AMR parsing. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 3799–3809, Hong Kong, China. Association for Computational Linguistics.
* Cai and Lam (2020) Deng Cai and Wai Lam. 2020. AMR parsing via graph-sequence iterative inference. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 1290–1301, Online. Association for Computational Linguistics.
* Cai and Knight (2013) Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In _Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 748–752, Sofia, Bulgaria. Association for Computational Linguistics.
* Conneau et al. (2020) Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 8440–8451, Online. Association for Computational Linguistics.
* Damonte and Cohen (2020) Marco Damonte and Shay Cohen. 2020. Abstract meaning representation 2.0-four translations ldc2020t07. _Web Download, Philadelphia: Linguistic Data Consortium_.
* Damonte and Cohen (2018) Marco Damonte and Shay B. Cohen. 2018. Cross-lingual Abstract Meaning Representation parsing. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 1146–1155, New Orleans, Louisiana. Association for Computational Linguistics.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Flanigan et al. (2014) Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the Abstract Meaning Representation. In _Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1426–1436, Baltimore, Maryland. Association for Computational Linguistics.
* Freitag et al. (2017) Markus Freitag, Yaser Al-Onaizan, and Baskaran Sankaran. 2017. Ensemble distillation for neural machine translation. _arXiv preprint arXiv:1702.01802_.
* Ge et al. (2019) DongLai Ge, Junhui Li, Muhua Zhu, and Shoushan Li. 2019. Modeling source syntax and semantics for neural amr parsing. In _IJCAI_ , pages 4975–4981.
* Gu et al. (2017) Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher. 2017. Non-autoregressive neural machine translation. _arXiv preprint arXiv:1711.02281_.
* Hajič et al. (2014) Jan Hajič, Ondřej Bojar, and Zdeňka Urešová. 2014. Comparing Czech and English AMRs. In _Proceedings of Workshop on Lexical and Grammatical Resources for Language Processing_ , pages 55–64, Dublin, Ireland. Association for Computational Linguistics and Dublin City University.
* Hinton et al. (2015) Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. _arXiv preprint arXiv:1503.02531_.
* Hu et al. (2020) Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In _International Conference on Machine Learning_ , pages 4411–4421.
* Hu et al. (2018) Minghao Hu, Yuxing Peng, Furu Wei, Zhen Huang, Dongsheng Li, Nan Yang, and Ming Zhou. 2018. Attention-guided answer distillation for machine reading comprehension. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 2077–2086, Brussels, Belgium. Association for Computational Linguistics.
* Kapanipathi et al. (2020) Pavan Kapanipathi, Ibrahim Abdelaziz, Srinivas Ravishankar, Salim Roukos, Alexander Gray, Ramon Astudillo, Maria Chang, Cristina Cornelio, Saswati Dana, Achille Fokoue, et al. 2020. Question answering over knowledge bases by leveraging semantic parsing and neuro-symbolic reasoning. _arXiv preprint arXiv:2012.01707_.
* Kim and Rush (2016) Yoon Kim and Alexander M. Rush. 2016. Sequence-level knowledge distillation. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_ , pages 1317–1327, Austin, Texas. Association for Computational Linguistics.
* Koehn (2005) Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In _MT summit_ , volume 5, pages 79–86. Citeseer.
* Konstas et al. (2017) Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural AMR: Sequence-to-sequence models for parsing and generation. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 146–157, Vancouver, Canada. Association for Computational Linguistics.
* Kuncoro et al. (2016) Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Distilling an ensemble of greedy dependency parsers into one MST parser. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_ , pages 1744–1753, Austin, Texas. Association for Computational Linguistics.
* Lewis et al. (2020) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 7871–7880, Online. Association for Computational Linguistics.
* Liang et al. (2020) Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, et al. 2020. Xglue: A new benchmark dataset for cross-lingual pre-training, understanding and generation. _arXiv preprint arXiv:2004.01401_.
* Liao et al. (2018) Kexin Liao, Logan Lebanoff, and Fei Liu. 2018. Abstract Meaning Representation for multi-document summarization. In _Proceedings of the 27th International Conference on Computational Linguistics_ , pages 1178–1190, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
* Liu et al. (2019) Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2019. On the variance of the adaptive learning rate and beyond. In _International Conference on Learning Representations_.
* Liu et al. (2020a) Yang Liu, Sheng Shen, and Mirella Lapata. 2020a. Noisy self-knowledge distillation for text summarization. _arXiv preprint arXiv:2009.07032_.
* Liu et al. (2018) Yijia Liu, Wanxiang Che, Bo Zheng, Bing Qin, and Ting Liu. 2018. An AMR aligner tuned by transition-based parser. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 2422–2430, Brussels, Belgium. Association for Computational Linguistics.
* Liu et al. (2020b) Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020b. Multilingual denoising pre-training for neural machine translation. _Transactions of the Association for Computational Linguistics_ , 8:726–742.
* Peng et al. (2017) Xiaochang Peng, Chuan Wang, Daniel Gildea, and Nianwen Xue. 2017. Addressing the data sparsity issue in neural AMR parsing. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers_ , pages 366–375, Valencia, Spain. Association for Computational Linguistics.
* Pourdamghani et al. (2014) Nima Pourdamghani, Yang Gao, Ulf Hermjakob, and Kevin Knight. 2014. Aligning English strings with Abstract Meaning Representation graphs. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 425–429, Doha, Qatar. Association for Computational Linguistics.
* Romero et al. (2015) Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2015. Fitnets: Hints for thin deep nets. In _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_.
* Sanh et al. (2019) Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. _arXiv preprint arXiv:1910.01108_.
* Sheth et al. (2021) Janaki Sheth, Young-Suk Lee, Ramon Fernandez Astudillo, Tahira Naseem, Radu Florian, Salim Roukos, and Todd Ward. 2021. Bootstrapping multilingual amr with contextual word alignments. _arXiv preprint arXiv:2102.02189_.
* Song et al. (2019) Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, and Jinsong Su. 2019. Semantic neural machine translation using AMR. _Transactions of the Association for Computational Linguistics_ , 7:19–31.
* Tan et al. (2019) Xu Tan, Yi Ren, Di He, Tao Qin, Zhou Zhao, and Tie-Yan Liu. 2019. Multilingual neural machine translation with knowledge distillation. _arXiv preprint arXiv:1902.10461_.
* Tang et al. (2020) Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning. _arXiv preprint arXiv:2008.00401_.
* Tiedemann and Thottingal (2020) Jörg Tiedemann and Santhosh Thottingal. 2020. OPUS-MT – building open translation services for the world. In _Proceedings of the 22nd Annual Conference of the European Association for Machine Translation_ , pages 479–480, Lisboa, Portugal. European Association for Machine Translation.
* Uhrig et al. (2021) Sarah Uhrig, Yoalli Rezepka Garcia, Juri Opitz, and Anette Frank. 2021. Translate, then parse! a strong baseline for cross-lingual amr parsing. _arXiv preprint arXiv:2106.04565_.
* Van Noord and Bos (2017) Rik Van Noord and Johan Bos. 2017. Neural semantic parsing by character-based translation: Experiments with abstract meaning representations. _arXiv preprint arXiv:1705.09980_.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _Proceedings of the 31st International Conference on Neural Information Processing Systems_ , pages 6000–6010.
* Wolf et al. (2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 38–45, Online. Association for Computational Linguistics.
* Xu et al. (2020) Dongqin Xu, Junhui Li, Muhua Zhu, Min Zhang, and Guodong Zhou. 2020. Improving AMR parsing with sequence-to-sequence pre-training. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 2501–2511, Online. Association for Computational Linguistics.
* Xu et al. (2021) Weiwen Xu, Huihui Zhang, Deng Cai, and Wai Lam. 2021. Dynamic semantic graph construction and reasoning for explainable multi-hop science question answering. In _Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021_ , pages 1044–1056, Online. Association for Computational Linguistics.
* Xue et al. (2014) Nianwen Xue, Ondřej Bojar, Jan Hajič, Martha Palmer, Zdeňka Urešová, and Xiuhong Zhang. 2014. Not an interlingua, but close: Comparison of English AMRs to Chinese and Czech. In _Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14)_ , pages 1765–1772, Reykjavik, Iceland. European Language Resources Association (ELRA).
* Yim et al. (2017) Junho Yim, Donggyu Joo, Jihoon Bae, and Junmo Kim. 2017. A gift from knowledge distillation: Fast optimization, network minimization and transfer learning. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 4133–4141.
* Zagoruyko and Komodakis (2017) Sergey Zagoruyko and Nikos Komodakis. 2017. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. In _5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings_. OpenReview.net.
* Zhang et al. (2019a) Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019a. AMR parsing as sequence-to-graph transduction. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 80–94, Florence, Italy. Association for Computational Linguistics.
* Zhang et al. (2019b) Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019b. Broad-coverage semantic parsing as transduction. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 3786–3798, Hong Kong, China. Association for Computational Linguistics.
* Zhou et al. (2019) Chunting Zhou, Graham Neubig, and Jiatao Gu. 2019. Understanding knowledge distillation in non-autoregressive machine translation. _arXiv preprint arXiv:1911.02727_.
* Zhu et al. (2019) Huaiyu Zhu, Yunyao Li, and Laura Chiticariu. 2019. Towards universal semantic representation. In _Proceedings of the First International Workshop on Designing Meaning Representations_ , pages 177–181, Florence, Italy. Association for Computational Linguistics.
|
# Constraints on the Nieh-Yan modified teleparallel gravity with gravitational waves
Qiang Wu${}^{a,b}$, Tao Zhu${}^{a,b}$ (corresponding author)<EMAIL_ADDRESS>, Rui Niu${}^{c,d}$, Wen Zhao${}^{c,d}$<EMAIL_ADDRESS>, and Anzhong Wang${}^{e}$<EMAIL_ADDRESS>
${}^{a}$ Institute for Theoretical Physics and Cosmology, Zhejiang University of Technology, Hangzhou, 310032, China
${}^{b}$ United Center for Gravitational Wave Physics (UCGWP), Zhejiang University of Technology, Hangzhou, 310032, China
${}^{c}$ CAS Key Laboratory for Research in Galaxies and Cosmology, Department of Astronomy, University of Science and Technology of China, Hefei, 230026, China
${}^{d}$ School of Astronomy and Space Sciences, University of Science and Technology of China, Hefei, 230026, China
${}^{e}$ GCAP-CASPER, Physics Department, Baylor University, Waco, Texas 76798-7316, USA
###### Abstract
The discovery of gravitational waves (GWs) from compact binary coalescences by the LIGO/Virgo Collaboration provides an unprecedented opportunity for testing gravity in the strong and highly dynamical field regime. Many model-independent tests have been performed by the LIGO/Virgo Collaboration, and no significant deviation from general relativity has been found. In this paper, we study the parity violating effects on the propagation of GWs in the Nieh-Yan modified teleparallel gravity, a theory which modifies general relativity by a parity violating Nieh-Yan term. We calculate the corresponding parity violating waveform of GWs produced by the coalescence of compact binaries. By comparing the two circular polarization modes, we identify the velocity birefringence of GWs during their propagation caused by the parity violating Nieh-Yan term, which appears explicitly in the GW waveform as a phase modification. With such phase modifications to the waveform, we perform full Bayesian inference with the help of the open source software Bilby on the binary black hole (BBH) merger events in the LIGO-Virgo catalogs GWTC-1 and GWTC-2. We do not find any significant evidence of parity violation due to the parity violating Nieh-Yan term and thus place an upper bound on the energy scale $M_{\rm PV}<6.5\times 10^{-42}\;{\rm GeV}$ at 90% confidence level, which represents the first constraint on the Nieh-Yan modified teleparallel gravity so far.
## I Introduction
The direct detections of gravitational waves (GWs) emitted from compact binary coalescences (CBC) by the LIGO/Virgo Collaboration open a new era for exploring the nature of gravity in the strong and highly dynamical field regime gw150914 ; gw170817 ; gw-other ; LIGOScientific:2017ycc . To date, 50 GW events have been reported by the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) and Advanced Virgo, collected in the Gravitational-Wave Transient Catalogs GWTC-1 gwtc1 and GWTC-2 gwtc2 . In addition, two new GW events, GW200105 and GW200115, have been released recently, which are most likely signals from mergers of neutron star–black hole binaries LIGOScientific:2021qlt . With these events, various model-independent tests of general relativity (GR) have been performed by the LIGO/Virgo Collaboration, and no significant deviation from GR has been found gw150914-testGR ; gw170817-testGR ; gw170817-speed ; testGR_GWTC1 ; testGR_GWTC2 .
Although GR has been considered the most successful theory of gravity since it was proposed, it faces difficulties both theoretical (e.g., singularities, quantization) and observational (e.g., dark matter, dark energy). Various modified gravities have been proposed as effective ways to address these anomalies MG1 ; MG2 ; MG3 ; MG4 . Therefore, tests of modified gravities are essential for identifying the correct theory of gravity.
As is well known, symmetry permeates nature and is important to all laws of physics. Thus, one important aspect of testing gravity is to test its symmetries. Parity symmetry is one of the fundamental symmetries of GR. However, nature is known to violate parity, ever since the first discovery of parity violation in the weak interactions parity_violation . On the other hand, when one considers the quantization of gravity, this symmetry could also be violated at high energies. For example, parity violations in gravity can in general arise from the gravitational anomaly of the standard model of elementary particles, the Green-Schwarz anomaly canceling mechanism in string theory, or the scalarization of the Barbero-Immirzi parameter in loop quantum gravity cs1 ; cs2 , see also cs_review for a review. In this sense, parity symmetry can only be treated as an approximate symmetry, which emerges at low energies and is violated at higher energies. With these considerations, many modified gravity theories and phenomenological models with parity violation in the gravitational interaction have been proposed, such as Chern-Simons modified gravity cs_review ; chern-simons1 ; chern-simons2 ; chern-simons3 ; chern-simons4 ; chern-simons5 , the symmetric teleparallel equivalence of GR theory Conroy , Hořava-Lifshitz theories of quantum gravity horava1 ; horava2 ; horava3 ; horava4 ; horava5 ; horava6 , chiral scalar-tensor theory chiral_ST ; chiral_ST1 ; chiral_ST2 , and the standard model extension SME1 ; SME2 ; SME3 ; SME4 ; SME5 .
Parity symmetry implies that flipping left and right does not change the laws of physics. Parity violation in gravity can in general induce an asymmetry in the propagation speed and amplitude damping between the left- and right-hand polarizations of a GW, leading to velocity and amplitude birefringence, respectively. In primordial cosmology, such birefringence phenomena can produce circularly polarized primordial GWs, which leave significant imprints in the temperature and polarization anisotropies of the cosmic microwave background radiation (CMB) horava3 ; horava4 ; horava5 ; PGW1 ; PGW2 ; PGW3 ; Fu:2020tlw . The detection of GWs emitted from compact binaries by LIGO-Virgo provides a great opportunity to test parity violation in gravity as well. Many tests of both the velocity and amplitude birefringence of GWs have been carried out using the observational data from GW events in the LIGO-Virgo catalogs SME1 ; chiral_ST1 ; CS_gb ; SME4 ; SME5 ; Okounkova:2021xjv ; Hu:2020rub ; sai_wang ; tanaka . Recently, the parity violating effect which induces velocity birefringence due to the leading-order higher-derivative modification of the waveform has been constrained through Bayesian parameter estimation on the third Open Gravitational-wave Catalog events yi-fan1 ; Wang:2021gqm . The parity violating energy scale due to the leading-order higher-derivative modification has been constrained to be $M_{\rm PV}>0.14\;{\rm GeV}$ at 90% confidence level, which represents the tightest bound on $M_{\rm PV}$ so far. There, $M_{\rm PV}$ denotes the parity violating energy scale from the leading higher-derivative corrections, corresponding to the case with $\beta_{\mu}=1$ as defined in waveform . It is worth noting that the parity violating energy scale $M_{\rm PV}$ in this paper corresponds to $\beta_{\mu}=-1$ waveform .
Recently, a new parity violating gravity model, the Nieh-Yan modified teleparallel gravity, was proposed in Li:2020xjt ; Li:2021wij . This model is healthy and simple in form. It is constructed on the basis of the teleparallel equivalent of general relativity (TEGR) TEGR , which is formulated in flat spacetime with vanishing curvature and vanishing nonmetricity (see Ref. Bahamonde:2021gfp for a recent review). TEGR is equivalent to GR, and the Nieh-Yan modified teleparallel gravity modifies TEGR by including an extra Nieh-Yan term in the gravitational action, which breaks the parity symmetry in gravity. It is interesting to mention here that such a coupling can also appear in the mechanisms NY-1 ; NY-2 to regularize the infinities in theories on an Einstein-Cartan manifold, similar to the QCD axion coupling in the Peccei-Quinn mechanism NY-3 for a solution to the strong CP problem. In contrast to other parity violating gravities which break parity through higher-order derivative terms, the Nieh-Yan modified teleparallel gravity has no higher derivatives and successfully avoids the ghost mode. The cosmological perturbations and the parametrized post-Newtonian limit of this theory have also been explored recently in Li:2020xjt ; Li:2021wij ; PPN . Some other modified theories in the framework of teleparallel gravity or with a Nieh-Yan term, and their implications for GW observations, have also been considered in Bombacigno:2021bpk ; Bahamonde:2017wwk ; Bahamonde:2015zma ; Hohmann:2020dgy ; Zhang:2021kqn .
Since no higher derivative is introduced in the Nieh-Yan modified teleparallel gravity, the effects of parity violation due to the Nieh-Yan term on the propagation of GWs arise in the low-energy regime. This is in contrast to parity violating gravities based on higher-derivative terms, in which the propagation of GWs is modified in the high-energy regime. It implies that the effect of the Nieh-Yan term is more pronounced for low-frequency GWs. Another important property of GWs in this model is that it only leads to velocity birefringence; there is no amplitude birefringence. Similar properties of GWs can also arise from the low-dimension modifications to GR in the symmetric teleparallel gravity Li:2021mdp ; Conroy and the linear gravity of the standard model extension SME1 ; SME3 ; SMExx . (In Refs. SME1 ; SME3 , only the operators with dimension $d=5$ and $d=6$ are considered. However, as examples, the Nieh-Yan term considered in this paper and the lower-dimension terms in Li:2021mdp ; Conroy show that the inclusion of operators with lower dimension $d=3$ in the linear gravity of the SME is also possible SMExx . It is worth noting that such lower-dimension operators have also been considered in the standard model extension of electrodynamics SME-EM .) In this paper, we study in detail the effects of the velocity birefringence due to the parity violating Nieh-Yan term on the GW waveform. Decomposing the GWs into the left-hand and right-hand circular polarization modes, we find that the effects of velocity birefringence can be explicitly presented as modifications of the GW phase. We also map this phase modification to the parametrized description of parity violating waveforms proposed in waveform . With the modified waveform, we perform full Bayesian inference with the help of the open source software Bilby on the 46 BBH events in the LIGO-Virgo catalogs GWTC-1 and GWTC-2. From our analysis, we do not find any signatures of parity violation due to the parity violating Nieh-Yan term, and we thus place an upper bound on the energy scale $M_{\rm PV}$ of $M_{\rm PV}<6.5\times 10^{-42}\;{\rm GeV}$ at 90% confidence level, which represents the first constraint on the Nieh-Yan modified teleparallel gravity so far.
This paper is organized as follows. In the next section, we present a brief
introduction of the Nieh-Yan modified teleparallel gravity model and then
discuss the propagation of GWs in Sec. III. In Sec. IV, we discuss the
velocity birefringence effects of GWs, and then calculate the waveform of GWs
produced by the coalescence of compact binary systems and particularly focus
on the deviations from those in GR. In Sec. V, we present the basic
statistical framework of Bayesian analysis used in this work and report the
results of constraints on the Nieh-Yan modified teleparallel gravity from the
Bayesian analysis. We finish with concluding remarks and discussions in Sec.
VI.
Throughout this paper, the metric convention is chosen as $(-,+,+,+)$; greek indices $(\mu,\nu,\cdots)$ and latin indices $(a,b,c,\cdots)$, which run over $0,1,2,3$, denote the spacetime and tangent-space indices respectively, while the latin indices $(i,\;j,\;k,\;l,\cdots)$, which run over $1,2,3$, indicate the spatial indices. We choose the units $G=c=1$.
## II The Nieh-Yan modified teleparallel gravity
In this section, we present a brief introduction to the Nieh-Yan modified teleparallel gravity; for details about this theory, see Li:2020xjt ; Li:2021wij and references therein.
The Nieh-Yan modified teleparallel gravity is constructed on the basis of the teleparallel equivalent of general relativity (TEGR) TEGR , which is equivalent to GR but formulated in flat spacetime with vanishing curvature and vanishing nonmetricity. In this theory, the dynamical variable is the tetrad field $e_{a}^{\mu}$. The relation between the metric $g_{\mu\nu}$ and the tetrad field $e^{a}_{\mu}$ reads
$\displaystyle g_{\mu\nu}=e_{\mu}^{a}e_{\nu}^{b}\eta_{ab},$ (2.1)
where $\eta_{ab}={\rm diag}(-1,1,1,1)$ is the Minkowski metric. Therefore, at each point of the spacetime, the tetrad field forms an orthonormal basis for the tangent space. In the construction of TEGR, one normally starts from a flat spacetime where the curvature vanishes and gravity is encoded in a nonzero torsion tensor. The torsion tensor generally depends on both the tetrad field and the spin connection,
$\displaystyle\mathcal{T}^{\lambda}_{\mu\nu}=2e^{\lambda}_{a}(\partial_{[\mu}e_{\nu]}^{a}+\omega^{a}_{b[\mu}e^{b}_{\nu]}),$
(2.2)
where $\omega^{a}_{b\mu}$ is the spin connection which is related to the
Lorentz transformation matrix by
$\displaystyle\omega^{a}_{\;b\mu}=(\Lambda^{-1})^{a}_{\;c}\partial_{\mu}\Lambda^{c}_{\;b},$
(2.3)
and $\omega_{ab\mu}=-\omega_{ba\mu}$ with $\Lambda^{a}_{\;b}$ representing the
element of an arbitrary Lorentz transformation matrix. Here
$\Lambda^{a}_{\;b}$ is position dependent and satisfies the relation
$\eta_{ab}\Lambda^{a}_{\;c}\Lambda^{b}_{\;d}=\eta_{cd}$. With the tetrad
field and the torsion tensor, the action of the TEGR can be written as
$\displaystyle S_{\rm TEGR}=\frac{1}{2}\int d^{4}x\,e\,\mathcal{T}\equiv\int d^{4}x\,e\left(-\frac{1}{2}\mathcal{T}_{\mu}\mathcal{T}^{\mu}+\frac{1}{8}\mathcal{T}_{\alpha\beta\mu}\mathcal{T}^{\alpha\beta\mu}+\frac{1}{4}\mathcal{T}_{\alpha\beta\mu}\mathcal{T}^{\beta\alpha\mu}\right),$ (2.4)
where $e={\rm det}(e^{a}_{\mu})=\sqrt{-g}$ is the determinant of the tetrad,
$\mathcal{T}_{\mu}=\mathcal{T}^{\nu}_{\mu\nu}$ is the torsion vector, and
$\mathcal{T}$ is the torsion scalar. It is interesting to note that the
torsion scalar is related to the Ricci scalar by
$\displaystyle R=-\mathcal{T}+\frac{2}{e}\partial_{\mu}(e\mathcal{T}^{\mu}).$
(2.5)
Therefore, the action of the TEGR (2.4) is identical to the Einstein-Hilbert
action up to a surface term,
$\displaystyle S_{\rm TEGR}=\int
d^{4}x\sqrt{-g}\left[-\frac{1}{2}R(e)-\nabla_{\mu}\mathcal{T}^{\mu}\right].$
(2.6)
In the Nieh-Yan modified teleparallel gravity, the theory of TEGR is modified by introducing a Nieh-Yan term into the TEGR action Li:2021wij ; Li:2020xjt , i.e.,
$\displaystyle S_{\rm NY}=\frac{c}{4}\int d^{4}x\sqrt{-g}\,\theta\,\mathcal{T}_{a\mu\nu}\widetilde{\mathcal{T}}^{a\mu\nu},$ (2.7)
where $c$ is the coupling constant,
$\widetilde{\mathcal{T}}^{a\mu\nu}=(1/2)\varepsilon^{\mu\nu\rho\sigma}\mathcal{T}^{a}_{~{}~{}\rho\sigma}$
is the dual of the torsion two form with
$\mathcal{T}^{a}_{\mu\nu}=2(\partial_{[\mu}e_{\nu]}^{a}+\omega^{a}_{b[\mu}e^{b}_{\nu]})$
and $\varepsilon^{\mu\nu\rho\sigma}$ being the Levi-Civita tensor. As mentioned in Li:2020xjt , the Nieh-Yan term by itself is a topological term and thus does not contribute to the gravitational dynamics. In general, the Nieh-Yan term can be split into two individual parity violating terms Hohmann:2020dgy . To incorporate parity violation from these two terms in the gravitational dynamics, only one of them can be added to the action. As shown in PGG_pv , however, including either term alone leads to a propagating ghost mode, so that such a theory is not healthy. Another way to incorporate parity violating effects is to consider a coupling between the Nieh-Yan term and a scalar field $\theta$. With such a scalar field, the introduction of the Nieh-Yan term (2.7) into the action breaks the parity symmetry of the gravitational interaction. It is shown in Li:2020xjt through perturbative analysis that such a simple theory is ghost-free and healthy. As we already mentioned in the Introduction, such a term can also appear in the mechanisms NY-1 ; NY-2 to regularize the infinities in theories on an Einstein-Cartan manifold, similar to the QCD axion coupling in the Peccei-Quinn mechanism NY-3 for a solution to the strong CP problem.
By taking into account both the kinetic and potential terms of the scalar
field, the full action of the Nieh-Yan modified teleparallel gravity is
$\displaystyle S=S_{\rm TEGR}+S_{\rm NY}+S_{\theta}+S_{\rm m}=\int d^{4}x\sqrt{-g}\left[-\frac{R(e)}{2}+\frac{c}{4}\,\theta\,\mathcal{T}_{a\mu\nu}\widetilde{\mathcal{T}}^{a\mu\nu}+\frac{\mathfrak{b}}{2}\nabla_{\mu}\theta\nabla^{\mu}\theta-\mathfrak{b}V(\theta)\right]+S_{\rm m},$ (2.8)
where $\mathfrak{b}$ is a coupling constant, and the curvature scalar $R(e)$ is defined by the Levi-Civita connection and considered as being fully constructed from the metric, and in turn from the tetrad. In writing the above action, we have dropped all the surface terms arising in (2.6) and (2.7). At the classical level, surface terms do not contribute to the gravitational dynamics and thus will not affect the analysis presented in this paper. It is worth mentioning that the nondynamical surface terms in the gravitational action can play essential roles in the phase-integral approach to quantum gravity, the calculation of black hole entropy, and the interpretation of energy or mass in gravity Jimenez:2021nup , etc.
Then, varying the action with respect to the tetrad field $e^{a}_{\mu}$ and the Lorentz matrix element $\Lambda^{a}_{\;b}$, one obtains
$\displaystyle G^{\mu\nu}+N^{\mu\nu}=T^{\mu\nu}+T^{\mu\nu}_{\theta},$ (2.9)
$\displaystyle N^{[\mu\nu]}=0,$ (2.10)
where $G^{\mu\nu}$ is the Einstein tensor, $T^{\mu\nu}=-(2/\sqrt{-g})(\delta
S_{m}/\delta g_{\mu\nu})$ and
$T^{\mu\nu}_{\theta}=\mathfrak{b}[V(\theta)-\nabla_{\alpha}\theta\nabla^{\alpha}\theta/2]g^{\mu\nu}+\nabla^{\mu}\theta\nabla^{\nu}\theta$
are the energy-momentum tensors for the matter and the scalar field $\theta$
respectively, and
$N^{\mu\nu}=c\,e_{a}^{~{}\,\nu}\partial_{\rho}\theta\,\widetilde{\mathcal{T}}^{a\mu\rho}$.
Variation of the action with respect to the scalar field $\theta$ leads to the
equation of motion for the scalar field, which is
$\displaystyle\mathfrak{b}\Big[\nabla_{\mu}\nabla^{\mu}\theta+V^{\prime}(\theta)\Big]-\frac{c}{4}\mathcal{T}_{a\mu\nu}\widetilde{\mathcal{T}}^{a\mu\nu}=0.$
(2.11)
Here $V^{\prime}(\theta)=dV(\theta)/d\theta$.
It is mentioned in Li:2020xjt ; Li:2021wij that Eq. (2.10) is the antisymmetric part of Eq. (2.9). From (2.10), it is evident that the antisymmetric part of the field equations simply vanishes, and thus there are no antisymmetric degrees of freedom in this theory. This provides six strong constraints on the theory. The theory contains 16+6 basic variables, 16 in the tetrad fields $e_{\mu}^{a}$ and 6 in the Lorentz matrix $\Lambda^{a}_{\;b}$. In the meantime, the theory also has 4 spacetime diffeomorphisms and 6 local Lorentz symmetries. Considering the 6 additional constraints given by (2.10), the theory can have the same physical degrees of freedom as GR. At the linear perturbative level, it is shown in Li:2021wij that both the scalar and vector modes are not dynamical degrees of freedom and only two dynamical tensorial modes exist. It is worth mentioning that degrees of freedom of the theory may also be hidden in certain backgrounds. This is known as the strong coupling problem if such hidden modes do exist strong . It implies that the perturbative analysis around specific background geometries cannot be fully trusted strong . The phenomenon of degrees of freedom being hidden in special backgrounds also appears in $f(T)$ models Ong:2013qja , models of massive gravity DeFelice:2012mx , and the Einsteinian cubic gravity Jimenez:2020gbw .
Similar to the Chern-Simons modified gravity, there are two different versions of the Nieh-Yan modified teleparallel gravity: the dynamical version and the nondynamical version. The dynamical version corresponds to $\mathfrak{b}\neq 0$, while the nondynamical version corresponds to $\mathfrak{b}=0$. When $\mathfrak{b}=0$, the equations of motion reduce to
$\displaystyle G^{\mu\nu}+N^{\mu\nu}=T^{\mu\nu},$ (2.12)
$\displaystyle N^{[\mu\nu]}=0,$ (2.13)
and the equation of motion for the scalar field becomes a constraint
$\displaystyle\mathcal{T}_{a\mu\nu}\widetilde{\mathcal{T}}^{a\mu\nu}=0.$ (2.14)
As shown in Li:2020xjt ; Li:2021wij , the propagation of GWs in both versions follows the same propagation equation; thus in this paper we will not distinguish these two versions and take $\mathfrak{b}=1$ hereafter for simplicity.
## III GWs in the Nieh-Yan modified teleparallel gravity
Let us investigate the propagation of GWs in the Nieh-Yan modified teleparallel gravity with the action given by (2.8). According to Eq. (2.10), the theory does not contain antisymmetric perturbations, so we only focus on the symmetric tensorial perturbations, i.e., the two independent modes of GWs. We consider GWs propagating on a homogeneous and isotropic background. The spatial metric in the flat Friedmann-Robertson-Walker universe is written as
$\displaystyle g_{ij}=a^{2}(\tau)(\delta_{ij}+h_{ij}(\tau,x^{i})),$ (3.1)
where $\tau$ denotes the conformal time, which is related to the cosmic time $t$ by $dt=ad\tau$, and $a$ is the scale factor of the universe. Throughout this paper, we set the present scale factor $a_{0}=1$. $h_{ij}$ denotes the GWs, i.e., the transverse and traceless metric perturbations,
$\displaystyle\partial^{i}h_{ij}=0=h^{i}_{i}.$ (3.2)
The above spatial metric in (3.1) corresponds to the perturbations of the
tetrad fields as
$\displaystyle e_{0}^{0}=a,e_{i}^{0}=0,e^{a}_{0}=0,$ (3.3) $\displaystyle
e_{i}^{a}=a\left(\gamma_{i}^{a}+\frac{1}{2}\gamma^{aj}h_{ij}\right),$ (3.4)
where $\gamma_{i}^{a}$ can be regarded as the spatial tetrad on the three-dimensional spatial hypersurfaces. For a flat universe one has $\delta_{ij}=\delta_{ab}\gamma^{a}_{i}\gamma^{b}_{j}$. Here we would like to mention that the tensor perturbations only come from the tetrad field; the spin connection and the local Lorentz matrices do not contribute to the tensor perturbations.
To proceed, one can substitute the above tetrad fields into the action (2.8) and expand it to second order in $h_{ij}$. After tedious calculations, one finds Li:2020xjt ; Li:2021wij ,
$\displaystyle S^{(2)}=\int
d^{4}x\frac{a^{2}}{8}\left(h^{\prime}_{ij}h^{\prime}_{ij}-\partial_{k}h_{ij}\partial^{k}h^{ij}-c\theta^{\prime}\epsilon_{ijk}h_{il}\partial_{j}h_{kl}\right),$
where $\epsilon_{ijk}$ is the antisymmetric symbol and a prime denotes the derivative with respect to the conformal time $\tau$. We consider GWs propagating in vacuum and ignore the source term. Varying the action with respect to $h_{ij}$, we obtain
$\displaystyle
h^{\prime\prime}_{ij}+2\mathcal{H}h^{\prime}_{ij}-\partial^{2}h_{ij}+\frac{1}{2}c\theta^{\prime}(\epsilon_{lki}\partial_{l}h_{jk}+\epsilon_{lkj}\partial_{l}h_{ik})=0,$
where $\mathcal{H}\equiv a^{\prime}/a$.
In parity-violating gravities, it is convenient to decompose the GWs into circular polarization modes. To study the evolution of $h_{ij}$, we expand it over spatial Fourier harmonics,
$\displaystyle h_{ij}(\tau,x^{i})=\sum_{A={\rm R,L}}\int\frac{d^{3}k}{(2\pi)^{3}}h_{A}(\tau,k^{i})e^{ik_{i}x^{i}}e_{ij}^{A}(k^{i}),$
where $e_{ij}^{A}$ denote the circular polarization tensors and satisfy the relation
$\displaystyle\epsilon^{ijk}n_{i}e_{kl}^{A}=i\rho_{A}e^{jA}_{l},$ (3.8)
with $\rho_{\rm R}=1$ and $\rho_{\rm L}=-1$. We find that the propagation equations of these two modes are decoupled and can be cast into the standard parametrized form proposed in waveform ,
$\displaystyle
h^{\prime\prime}_{A}+(2+\nu_{A})\mathcal{H}h^{\prime}_{A}+(1+\mu_{A})k^{2}h_{A}=0,$
(3.9)
where
$\displaystyle\nu_{A}=0,\;\;\;\mu_{A}=\frac{\rho_{A}c\theta^{\prime}}{k}.$
(3.10)
In the above parametrization, the effects of parity violation are fully characterized by two parameters, $\nu_{A}$ and $\mu_{A}$. As shown in Ref. waveform , such a parametrization provides a unifying low-energy effective description of GWs in generic parity violating gravities, including Chern-Simons modified gravity, ghost-free parity violating scalar-tensor theory, the symmetric teleparallel equivalence of general relativity, Hořava-Lifshitz gravities, and the Nieh-Yan modified teleparallel gravity. The parameter $\mu_{A}$ leads to different velocities of the left-hand and right-hand circular polarizations of GWs, so that the arrival times of the two circular polarization modes can differ. The parameter $\nu_{A}$, on the other hand, leads to different damping rates of the left-hand and right-hand circular polarizations, so that the amplitude of the left-hand circular polarization of GWs will increase (or decrease) during the propagation, while the amplitude of the right-hand mode will decrease (or increase). In the Nieh-Yan modified teleparallel gravity, we have $\nu_{A}=0$ and $\mu_{A}=\rho_{A}c\theta^{\prime}/k$; therefore there is no modification of the damping rate of GWs, and the parity violation due to the Nieh-Yan term can only affect the velocities of GWs. This is the phenomenon of velocity birefringence. It is worth noting that similar corrections with $\mu_{A}\propto 1/k$ can also arise from the lower-dimension operators in the parity violating symmetric teleparallel equivalence of GR theory Conroy ; Li:2021mdp .
## IV Velocity birefringence and phase modifications to the waveform of GWs
In this section, we study the velocity birefringence effects during the propagation of GWs. As shown in waveform , for each circular polarization mode $h_{A}$, the velocity birefringence effect induces a phase correction to the waveform of GWs. In order to derive the phase modification, similar to our previous work waveform , let us first define $u_{A}(\tau)=\frac{1}{2}a(\tau)M_{\rm Pl}h_{A}(\tau)$; then the equation of motion (3.9) can be cast into the form
$\displaystyle\frac{d^{2}u_{A}}{d\tau^{2}}+\left(\omega_{A}^{2}-\frac{a^{\prime\prime}}{a}\right)u_{A}=0,$ (4.1)
where
$\displaystyle\omega_{A}^{2}=k^{2}\left(1+\rho_{A}\frac{c\theta^{\prime}}{k}\right)$
(4.2)
is the modified dispersion relation. Then, one can find that GWs with
different helicities will have different phase velocities
$\displaystyle v_{A}\simeq 1+\rho_{A}\frac{c\theta^{\prime}}{2k}.$ (4.3)
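For clarity, this expression follows from the dispersion relation (4.2) by expanding $v_{A}=\omega_{A}/k$ to leading order in the small quantity $c\theta^{\prime}/k$,
$\displaystyle v_{A}=\frac{\omega_{A}}{k}=\sqrt{1+\rho_{A}\frac{c\theta^{\prime}}{k}}\simeq 1+\rho_{A}\frac{c\theta^{\prime}}{2k},\qquad\left|\frac{c\theta^{\prime}}{k}\right|\ll 1.$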
Since $\rho_{A}$ has opposite signs for the left-hand and right-hand polarization modes, it is straightforward to see that the phase velocities of these two modes are different. For later convenience, one can introduce a characteristic energy scale $M_{\rm PV}=c\theta^{\prime}/a=c\dot{\theta}$, so that
$\displaystyle v_{A}=1+\rho_{A}\frac{aM_{\rm PV}}{2k}.$ (4.4)
Now considering a graviton emitted radially at $r=r_{e}$ and received at
$r=0$, we have,
$\displaystyle\frac{dr}{dt}=-\frac{1}{a}\left[1+\rho_{A}\frac{aM_{\rm
PV}}{2k}\right].$ (4.5)
Note that in the above we have assumed $c\dot{\theta}$ to be a constant.
Integrating this equation from the emission time ($r=r_{e}$) to arrival time
($r=0$), one obtains
$\displaystyle r_{e}=\int_{t_{e}}^{t_{0}}\frac{dt}{a(t)}+\rho_{A}\frac{M_{\rm
PV}}{2k}\int_{t_{e}}^{t_{0}}dt.$ (4.6)
Consider gravitons emitted at two different times $t_{e}$ and
$t_{e}^{\prime}$, with wave numbers $k$ and $k^{\prime}$, and received at
corresponding arrival times $t_{0}$ and $t_{0}^{\prime}$ ($r_{e}$ is the same
for both). Assuming $\Delta t_{e}\equiv t_{e}-t_{e}^{\prime}\ll a/\dot{a}$,
then the difference of their arrival times is given by
$\displaystyle\Delta t_{0}=(1+z)\Delta
t_{e}+\frac{\rho_{A}}{2}\left(\frac{M_{\rm PV}}{k^{\prime}}-\frac{M_{\rm
PV}}{k}\right)\int_{t_{e}}^{t_{0}}dt,$ (4.7)
where $z=1/a(t_{e})-1$ is the cosmological redshift.
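To get a rough sense of the magnitudes involved (an illustrative order-of-magnitude estimate, not part of the original analysis), consider a GW with $f=100\;{\rm Hz}$, i.e. $k=2\pi f\hbar\simeq 4.1\times 10^{-22}\;{\rm GeV}$. For $M_{\rm PV}\sim 6.5\times 10^{-42}\;{\rm GeV}$, the fractional velocity difference between the two circular polarizations is
$\displaystyle\Delta v=\frac{M_{\rm PV}}{k}\simeq 1.6\times 10^{-20},$
which, accumulated over a propagation time of order $1\;{\rm Gpc}/c\simeq 1.0\times 10^{17}\;{\rm s}$, yields an arrival-time difference
$\displaystyle\Delta t\simeq\Delta v\times\frac{1\;{\rm Gpc}}{c}\simeq 1.6\;{\rm ms}.$
This is comparable to the $\sim 10\;{\rm ms}$ GW period at $100\;{\rm Hz}$, and hence shows up as an $\mathcal{O}(1)$ phase shift in the waveform.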
Let us now consider the GW signal emitted from the nonspinning, quasicircular inspiral of a compact binary system in the post-Newtonian approximation. Relative to the waveform in GR, the parity violation due to the Nieh-Yan term modifies the phase of the GWs. In the Fourier domain, $h_{A}$ can be calculated analytically in the stationary phase approximation, and is given by
$\displaystyle h_{A}(f)=\frac{{\cal A}_{A}(f)}{\sqrt{df/dt}}e^{i\Psi(f)},$
(4.8)
where $f$ is the frequency at the detector and $\Psi$ denotes the phase of the
GWs. As shown in waveform , the difference of arrival times induces the
modification of GW phase $\Psi$ as follows,
$\displaystyle\Psi_{A}(f)=\Psi^{\rm GR}_{A}(f)+\rho_{A}\delta\Psi_{1}(f),$
(4.9)
where
$\displaystyle\delta\Psi_{1}(f)=A_{\mu}\ln u$ (4.10)
with
$\displaystyle A_{\mu}=\frac{M_{\rm
PV}}{2H_{0}}\int_{0}^{z}\frac{dz}{(1+z)\sqrt{(1+z)^{3}\Omega_{m}+\Omega_{\Lambda}}}.$
(4.11)
Here $u=\pi{\cal M}f$ with $f=k/2\pi$ being the frequency of GWs and ${\cal
M}=(1+z){\cal M}_{\rm c}$, where ${\cal
M}_{c}\equiv(m_{1}m_{2})^{3/5}/(m_{1}+m_{2})^{1/5}$ is the chirp mass of the
binary system with component masses $m_{1}$ and $m_{2}$. We adopt a Planck
cosmology with $\Omega_{m}=0.315$, $\Omega_{\Lambda}=0.685$, and
$H_{0}=67.4\;{\rm km}\;{\rm s}^{-1}\;{\rm Mpc}^{-1}$. With the above phase
correction, one can write the waveform of GWs with the effects of the Nieh-Yan
term in the form
$\displaystyle h_{A}(f)=h_{A}^{\rm GR}(f)e^{i\rho_{A}\delta\Psi_{1}}.$ (4.12)
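For concreteness, the amplitude $A_{\mu}$ in (4.11) can be evaluated numerically. The following is a minimal sketch (our own illustration, not the authors' code); the unit conversions for $\hbar$ and Mpc are standard constants, and the cosmological parameters are the Planck values quoted above.

```python
import numpy as np
from scipy.integrate import quad

HBAR_GEV_S = 6.582e-25                   # hbar in GeV s
MPC_IN_KM = 3.086e19                     # 1 Mpc in km
OMEGA_M, OMEGA_L = 0.315, 0.685          # Planck values used in the text
H0_GEV = 67.4 / MPC_IN_KM * HBAR_GEV_S   # H0 = 67.4 km/s/Mpc, converted to GeV

def A_mu(M_pv_gev, z):
    """A_mu of Eq. (4.11) for a source at redshift z, with M_PV in GeV."""
    integrand = lambda zp: 1.0 / ((1.0 + zp)
                * np.sqrt((1.0 + zp) ** 3 * OMEGA_M + OMEGA_L))
    integral, _ = quad(integrand, 0.0, z)
    return M_pv_gev / (2.0 * H0_GEV) * integral

# Illustrative check: at the combined 90% bound and a typical redshift,
# A_mu is already a few tenths, i.e. an O(0.1-1) phase amplitude.
print(A_mu(6.5e-42, 0.2))   # ~ 0.39 for these illustrative inputs
```

Note that $H_{0}$ expressed in energy units is itself $\sim 10^{-42}\;{\rm GeV}$, which explains why GW observations are sensitive to $M_{\rm PV}$ of this order: $A_{\mu}$ becomes of order unity once $M_{\rm PV}$ is comparable to $H_{0}$.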
In order to test the Nieh-Yan modified teleparallel gravity with observations of GWs, it is convenient to analyze the GWs in the Fourier domain. The response of a detector to a GW signal $h(f)$ in the Fourier domain can be written in terms of the waveforms $h_{+}$ and $h_{\times}$ as
$\displaystyle h(f)=[F_{+}h_{+}(f)+F_{\times}h_{\times}(f)]e^{-2i\pi f\Delta t},$ (4.13)
where $F_{+}$ and $F_{\times}$ are the beam pattern functions of the GW detectors, which depend on the source location and polarization angle GR_wave , and $\Delta t$ is the arrival time difference between the detector and the geocenter. In GR, the waveforms of the two polarizations $h_{+}(f)$ and $h_{\times}(f)$ are given respectively by GR_wave2
$\displaystyle h^{\rm GR}_{+}=(1+\chi^{2})\mathcal{A}e^{i\Psi},~{}~{}h^{\rm
GR}_{\times}=2\chi\mathcal{A}e^{i(\Psi+\pi/2)},$ (4.14)
where $\mathcal{A}$ and $\Psi$ denote the amplitude and phase of the waveforms
$h^{\rm GR}_{+,\times}$, and $\chi=\cos\iota$ with $\iota$ being the
inclination angle of the binary system. In GR, the explicit forms of
$\mathcal{A}$ and $\Psi$ have been calculated in the high-order PN
approximation (see for instance GR_wave2 and references therein).
Now, using the relationship between $h_{+,\times}$ and $h_{\rm R,L}$, i.e.,
$\displaystyle h_{+}=\frac{h_{\rm L}+h_{\rm R}}{\sqrt{2}},\;\;h_{\times}=\frac{h_{\rm L}-h_{\rm R}}{\sqrt{2}i},$ (4.15)
one can obtain the effects of the Nieh-Yan term on the above waveforms for the plus and cross modes. It is straightforward to obtain
$\displaystyle h_{+}=h_{+}^{\rm GR}\cos(\delta\Psi_{1})+h_{\times}^{\rm GR}\sin(\delta\Psi_{1}),$ (4.16)
$\displaystyle h_{\times}=h_{\times}^{\rm GR}\cos(\delta\Psi_{1})-h_{+}^{\rm GR}\sin(\delta\Psi_{1}).$ (4.17)
Therefore, the Fourier waveform $h(f)$ becomes
$\displaystyle h(f)=\mathcal{A}\delta\mathcal{A}e^{i(\Psi+\delta\Psi)},$
(4.18)
where
$\displaystyle\delta\mathcal{A}=\sqrt{(1+\chi^{2})^{2}F^{2}_{+}+4\chi^{2}F^{2}_{\times}}\times\left[1-\frac{(1-\chi^{2})^{2}F_{+}F_{\times}}{(1+\chi^{2})^{2}F^{2}_{+}+4\chi^{2}F^{2}_{\times}}\delta\Psi_{1}\right],$
$\displaystyle\delta\Psi=\tan^{-1}\left[\frac{2\chi F_{\times}}{(1+\chi^{2})F_{+}}\right]+\frac{2\chi(1+\chi^{2})(F^{2}_{+}+F^{2}_{\times})}{(1+\chi^{2})^{2}F^{2}_{+}+4\chi^{2}F^{2}_{\times}}\delta\Psi_{1}.$ (4.19)
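As a compact illustration of how Eqs. (4.16) and (4.17) act on a GR template, the following minimal sketch (our own illustration, with the chirp mass assumed to be given in seconds, i.e. $G=c=1$ units) applies the birefringent phase rotation to frequency-domain polarization arrays.

```python
import numpy as np

def apply_birefringence(h_plus_gr, h_cross_gr, f, A_mu, chirp_mass_sec):
    """Rotate (h_+, h_x) by delta_Psi_1 = A_mu * ln(pi * M * f),
    following Eqs. (4.10), (4.16), and (4.17)."""
    dpsi = A_mu * np.log(np.pi * chirp_mass_sec * f)
    h_plus = h_plus_gr * np.cos(dpsi) + h_cross_gr * np.sin(dpsi)
    h_cross = h_cross_gr * np.cos(dpsi) - h_plus_gr * np.sin(dpsi)
    return h_plus, h_cross
```

In the GR limit $A_{\mu}\to 0$ the rotation angle vanishes and the GR polarizations are recovered, as expected.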
## V Constraints on Nieh-Yan term
### V.1 Bayesian inference for GW data
In this paper, we use Bayesian inference to obtain constraints on the Nieh-Yan term with selected GW events from the LIGO-Virgo catalogs GWTC-1 and GWTC-2. The Bayesian inference framework has been broadly employed for inferring scientific conclusions from GW data. Given a set of compact binary merger observations with GW strain data $d$, one can infer the distribution of parameters $\vec{\theta}$ by comparing the predicted GW strain in each instrument with the data. Bayes' theorem in the context of GW astronomy reads
$\displaystyle
P({\vec{\theta}}|d,H)=\frac{P(d|\vec{\theta},H)P(\vec{\theta}|H)}{P(d|H)},$
(5.1)
where $H$ is the waveform model, $P(\vec{\theta}|H)$ is the prior distribution
for model parameters $\vec{\theta}$ and $P(d|\vec{\theta},H)$ is the
likelihood for obtaining the data given a specific set of model parameters,
$P(d|H)$ is the marginalized likelihood or evidence for the model or
“hypothesis” $H$, and $P({\vec{\theta}}|d,H)$ denotes the posterior
probability distributions for physical parameters ${\vec{\theta}}$ which
describe the observational data.
In general, the GW signals from compact binary mergers are extremely weak, and the matched filtering method is used to extract these signals from the noise. Assuming Gaussian and stationary noise in the GW detectors, the matched filtering method allows one to define the likelihood function in the form
$\displaystyle\ln P(d|{\vec{\theta}},H)=-\frac{1}{2}\sum_{j=1}^{N}\langle
d_{j}-h({\vec{\theta}})|d_{j}-h({\vec{\theta}})\rangle,$ (5.2)
where $h(\vec{\theta})$ is the GW waveform template response function in model
$H$ and $j$ represents the $j$th GW detector. The noise weighted inner product
$\langle A|B\rangle$ is defined as
$\displaystyle\langle A|B\rangle=4\;{\rm
Re}\left[\int_{0}^{\infty}\frac{A(f)B(f)^{*}}{S(f)}df\right],$ (5.3)
where ${}^{*}$ denotes complex conjugation and $S(f)$ is the power spectral density (PSD) of the detector. In Bayes' theorem, the evidence
$P(d|H)$ in (5.1) can be computed by integrating the likelihood
$P(d|{\vec{\theta}},H)$ of parameters ${\vec{\theta}}$ times the prior
probabilities $P({\vec{\theta}}|H)$ for these parameters within the waveform
model $H$,
$\displaystyle P(d|H)=\int
d{\vec{\theta}}P(d|{\vec{\theta}},H)P({\vec{\theta}}|H).$ (5.4)
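As a minimal illustration of Eqs. (5.2) and (5.3) (our own sketch, not the analysis code), the inner product and single-detector log-likelihood can be discretized on a uniform frequency grid as follows.

```python
import numpy as np

def inner_product(a, b, psd, df):
    """Noise-weighted inner product <a|b> of Eq. (5.3) as a Riemann sum over
    positive frequencies; a, b are complex arrays, psd is S(f), df the spacing."""
    return 4.0 * np.real(np.sum(a * np.conj(b) / psd) * df)

def log_likelihood(data, template, psd, df):
    """Gaussian log-likelihood of Eq. (5.2) for one detector; in a network,
    the contributions of the individual detectors are summed."""
    residual = data - template
    return -0.5 * inner_product(residual, residual, psd, df)
```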
In this work, in order to constrain the Nieh-Yan modified teleparallel gravity, we employ the open-source package Bilby to perform Bayesian inference on the GW data from selected binary black hole merger events in the LIGO-Virgo catalogs GWTC-1 and GWTC-2. For the GR waveform $h^{\rm GR}_{+,\;\times}(f)$, we use the spin precessing waveform IMRPhenomPv2 Schmidt:2014iyl ; Hannam:2013oca for all the BBH events except GW190521, with the parameter vector ${\vec{\theta}}=\{\alpha,\delta,\psi,\phi,t_{\rm c},d_{L},{\cal M},\eta,a_{1},a_{2},\cos\theta_{1},\cos\theta_{2},\phi_{12},\phi_{\rm JL},\theta_{JN}\}$, where $\alpha$ and $\delta$ are the right ascension and
declination of the binary system in the sky, $\psi$ is the polarization angle
of the source defined with respect to the Earth centered coordinates, $\phi$
is the binary phase at a reference frequency, $t_{\rm c}$ is the time of
coalescence, $d_{L}$ is the luminosity distance to the source, ${\cal M}$ is
the detector-frame chirp mass of the binary, $\eta$ is the symmetric mass
ratio, $a_{1}(a_{2})$ is the dimensionless spin magnitude of the larger
(smaller) black hole, $\theta_{1}(\theta_{2})$ is the angle between the spin
direction and the orbital angular momentum of the binary for the larger
(smaller) BH, $\phi_{12}$ is the difference between total and orbital angular
momentum azimuthal angles, $\phi_{\rm JL}$ denotes the difference between the
azimuthal angles of the individual spin direction projections onto the orbital
plane, and $\theta_{JN}$ represents the angle between the total angular
momentum and the line of sight. For GW190521, we use the state-of-the-art
approximant IMRPhenomXPHM which includes the subdominant harmonic modes of GW
and accounts for spin-precession effects for a quasicircular-orbit binary
black hole coalescence Pratten:2020ceb .
In order to constrain the parity violating Nieh-Yan term, we construct the parity-violating waveform from the above template through (4.16) and (4.17), with $\delta\Psi_{1}$ given by (4.10). For this purpose we append one additional parameter $A_{\mu}$, which represents the effects of the parity violation due to the Nieh-Yan modified teleparallel gravity, to the parameter vector ${\vec{\theta}}$. We then consider a series of GW events comprising data $\{d_{i}\}$, described by parameters $\{{\vec{\theta}_{i}}\}$, where $i$ runs from 1 to $N$, with $N$ the number of GW events analyzed in the Bayesian inference. For each event, the posterior for all the parameters describing the event can be written as
$\displaystyle P(A_{\mu},{\vec{\theta}_{i}}|d_{i})=\frac{P(A_{\mu},\vec{\theta}_{i})P(d_{i}|A_{\mu},\vec{\theta}_{i})}{P(d_{i})}.$ (5.5)
To infer the posterior of the parameter $A_{\mu}$, one can marginalize over all the parameters $\vec{\theta}_{i}$ of the individual GW events. This procedure gives the marginal posterior distribution of $A_{\mu}$ for the $i$th GW event,
$\displaystyle P(A_{\mu}|d_{i})=\frac{P(A_{\mu})}{P(d_{i})}\int P(\vec{\theta}_{i})P(d_{i}|A_{\mu},\vec{\theta}_{i})d\vec{\theta}_{i}.$ (5.6)
### V.2 Results of constraints
To date, a total of 50 GW events have been reported in the LIGO-Virgo catalogs GWTC-1 and GWTC-2. Not all of these events are of interest for our Bayesian analysis. For our purpose, we consider all 46 BBH events, as presented in Table 1. We exclude the GW events of binary neutron star or possible neutron star–black hole mergers, since these events are not expected to improve the constraint on $M_{\rm PV}$ drastically. The data for these 46 GW events are downloaded from the Gravitational Wave Open Science Center data_GW . Besides strain data, power spectral densities (PSDs) are also needed for parameter estimation. Instead of directly estimating the PSDs from strain data by the Welch method, we use the event-specific PSDs that are encapsulated in the LVC posterior sample releases for specific events data_GW2 ; data_GW1 . These PSDs are expected to lead to more stable and reliable parameter estimation PSD1 ; PSD2 .
Table 1: 90% credible level upper bounds on $M_{\rm PV}$ from the Bayesian inference by analyzing 46 GW events of BBH in the LIGO-Virgo catalogs GWTC-1 and GWTC-2.

| Catalog | GW event | Constraint [$10^{-41}\;{\rm GeV}$] |
|---|---|---|
| GWTC-1 | GW150914 | 6.5 |
| GWTC-1 | GW151012 | 6.9 |
| GWTC-1 | GW151226 | 23.5 |
| GWTC-1 | GW170104 | 16.2 |
| GWTC-1 | GW170608 | 17.2 |
| GWTC-1 | GW170729 | 2.9 |
| GWTC-1 | GW170809 | 6.4 |
| GWTC-1 | GW170814 | 7.2 |
| GWTC-1 | GW170818 | 4.5 |
| GWTC-1 | GW170823 | 3.4 |
| GWTC-2 | GW190408_181802 | 3.6 |
| GWTC-2 | GW190412 | 7.0 |
| GWTC-2 | GW190413_052954 | 6.6 |
| GWTC-2 | GW190413_134308 | 3.2 |
| GWTC-2 | GW190421_213856 | 6.9 |
| GWTC-2 | GW190424_180648 | 2.5 |
| GWTC-2 | GW190503_185404 | 3.7 |
| GWTC-2 | GW190512_180714 | 4.2 |
| GWTC-2 | GW190513_205428 | 3.8 |
| GWTC-2 | GW190514_065416 | 3.9 |
| GWTC-2 | GW190517_055101 | 9.1 |
| GWTC-2 | GW190519_153544 | 5.0 |
| GWTC-2 | GW190521 | 4.5 |
| GWTC-2 | GW190521_074359 | 3.9 |
| GWTC-2 | GW190527_092055 | 5.1 |
| GWTC-2 | GW190602_175927 | 3.1 |
| GWTC-2 | GW190620_030421 | 3.4 |
| GWTC-2 | GW190630_185205 | 6.3 |
| GWTC-2 | GW190701_203306 | 8.7 |
| GWTC-2 | GW190706_222641 | 5.7 |
| GWTC-2 | GW190707_093326 | 13.2 |
| GWTC-2 | GW190708_232457 | 11.5 |
| GWTC-2 | GW190719_215514 | 7.8 |
| GWTC-2 | GW190720_000836 | 4.4 |
| GWTC-2 | GW190727_060333 | 3.3 |
| GWTC-2 | GW190728_064510 | 9.1 |
| GWTC-2 | GW190731_140936 | 4.4 |
| GWTC-2 | GW190803_022701 | 4.4 |
| GWTC-2 | GW190828_063405 | 3.0 |
| GWTC-2 | GW190828_065509 | 9.6 |
| GWTC-2 | GW190909_114149 | 10.3 |
| GWTC-2 | GW190910_112807 | 2.6 |
| GWTC-2 | GW190915_235702 | 3.0 |
| GWTC-2 | GW190924_021846 | 36.7 |
| GWTC-2 | GW190929_012149 | 11.4 |
| GWTC-2 | GW190930_133541 | 21.6 |
| | Combined | 0.65 |
For these events, we perform Bayesian parameter estimation on $4\;{\rm s}$ or $8\;{\rm s}$ of data over all the GW parameters $\vec{\theta}$ and the parity violating parameter $A_{\mu}$. The priors for the standard GW parameters ${\vec{\theta}}$ are consistent with those used in gwtc1 ; gwtc2 . The prior for $A_{\mu}$ is chosen to be uniform. We use the package Bilby bilby to perform the analysis, and the posterior distribution is sampled with the nested sampling method dynesty over the fiducial BBH parameters and the parity violating parameter $A_{\mu}$. We report our main results below.
For all 46 GW events we analyzed, we find that the posterior distributions of $A_{\mu}$ are consistent with the GR value $A_{\mu}=0$, which means we do not find any signatures of parity violation from the Nieh-Yan modified teleparallel gravity in the data of these events. To illustrate the results for each individual GW event, Fig. 1 shows the marginalized posterior distribution of $A_{\mu}$. In this figure, the region of the posterior between the upper and lower bars denotes the $90\%$ credible interval, and the bar in the middle denotes the median value. The GR value $A_{\mu}=0$ lies well within the 90% credible interval for each GW event.
From the posterior distributions of $A_{\mu}$ and the redshift $z$ obtained for the 46 events, one can convert $A_{\mu}$ and $z$ into $M_{\rm PV}$ through Eq. (4.11). In Fig. 2 we plot the marginalized posterior distributions of $M_{\rm PV}$. The upper bound on $M_{\rm PV}$ for each individual event can then be calculated from the corresponding posterior distribution of $M_{\rm PV}$. In Table 1, we present the 90% credible level upper bounds on $M_{\rm PV}$ from the Bayesian inference for each event. From Table 1, one can see that the best constraints on $M_{\rm PV}$ all come from events with ${\cal M}_{\rm c}\gtrsim 30M_{\odot}$.
The parameter $M_{\rm PV}$ is a universal quantity for all GW events. One can therefore combine the individual posteriors of $M_{\rm PV}$ from each event to obtain an overall constraint. This is done by multiplying the posterior distributions of all these events together through
$\displaystyle P(M_{\rm PV}|\\{d_{i}\\},H)\propto\prod_{i=1}^{N}P(M_{\rm
PV}|d_{i},H),$ (5.7)
where $d_{i}$ denotes data of the $i$th GW event. We find that $M_{\rm PV}$
can be constrained to be
$\displaystyle M_{\rm PV}<6.5\times 10^{-42}\;{\rm GeV}$ (5.8)
at 90% confidence level. We thus conclude that we do not find any significant evidence of parity violation due to the Nieh-Yan modified teleparallel gravity at $M_{\rm PV}>6.5\times 10^{-42}\;{\rm GeV}$. This constraint can in turn be converted into a constraint on the parameter $c\dot{\theta}$,
$\displaystyle c\dot{\theta}<6.5\times 10^{-42}\;{\rm GeV}.$ (5.9)
So far, this upper bound represents the only observational constraint on the Nieh-Yan modified teleparallel gravity.
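As a minimal sketch of Eq. (5.7) (our own illustration; the per-event posterior densities are assumed to have been evaluated on a common $M_{\rm PV}$ grid, e.g. by a kernel density estimate of the samples), the combination and the 90% upper bound can be computed as follows.

```python
import numpy as np

def combined_upper_bound(m_pv_grid, event_posteriors, cl=0.90):
    """Combine per-event posteriors on M_PV, Eq. (5.7), and return the
    upper bound at credible level `cl`."""
    log_post = np.sum(np.log(event_posteriors), axis=0)  # product of posteriors
    post = np.exp(log_post - log_post.max())             # rescale to avoid underflow
    weights = post * np.gradient(m_pv_grid)
    cdf = np.cumsum(weights)
    cdf /= cdf[-1]                                       # normalize the CDF
    return m_pv_grid[np.searchsorted(cdf, cl)]
```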
Figure 1: Violin plots of the posteriors of the parameter $A_{\mu}$. The results are obtained by analyzing the 46 BBH events in the catalogs GWTC-1 and GWTC-2. The region of the posterior between the upper and lower bars denotes the $90\%$ credible interval.
Figure 2: The posterior distributions of $M_{\rm PV}$ from the 46 GW events in the LIGO-Virgo catalogs GWTC-1 and GWTC-2. The solid black curve represents the combined posterior distribution of $M_{\rm PV}$, and the vertical dashed line denotes the 90% upper limit on $M_{\rm PV}$ from the combined result.
## VI Conclusion and outlook
With the discovery of GWs from the coalescence of compact binary systems by the LIGO/Virgo Collaboration, testing gravity in the strong-field regime has become possible. Therefore, studies of GWs in alternative theories of gravity, and the inference of constraints on them from the GW events detected by the LIGO/Virgo Collaboration, are of crucial importance for understanding gravity under extreme conditions. In this paper, we focus on a new parity-violating gravity model, the Nieh-Yan modified teleparallel gravity Li:2020xjt ; Li:2021wij . This model, which modifies TEGR by a parity violating Nieh-Yan term, is healthy and simple in form. Such a term can arise from the mechanisms to regularize the infinities in theories on an Einstein-Cartan manifold, as mentioned in NY-1 ; NY-2 . In contrast to other parity violating gravities which break parity through higher-order derivative terms, the Nieh-Yan modified teleparallel gravity has no higher derivatives and successfully avoids the ghost mode.
In order to test this model with GWs, we have studied in detail the effects of the velocity birefringence due to the parity violating Nieh-Yan term on the GW waveform. Decomposing the GWs into the left-hand and right-hand circular polarization modes, we find that the effects of velocity birefringence can be explicitly presented as modifications of the GW phase. We also mapped this phase modification to the parametrized description of parity violating waveforms proposed in waveform . With the modified waveform, we performed full Bayesian inference with the help of the open source software Bilby on the 46 BBH events in the LIGO-Virgo catalogs GWTC-1 and GWTC-2. From our analysis, we do not find any signatures of parity violation due to the parity violating Nieh-Yan term, and we therefore place an upper bound on the energy scale $M_{\rm PV}$ of $M_{\rm PV}<6.5\times 10^{-42}\;{\rm GeV}$ at 90% confidence level, which represents the first constraint on the Nieh-Yan modified teleparallel gravity so far.
The above constraint on $M_{\rm PV}$ can be straightforwardly mapped to bounds on the lower-dimension parity violating terms in the symmetric teleparallel gravity, or on possible operators with dimension $d=3$ in the linear gravity of the standard model extension. In the framework of the symmetric teleparallel gravity, one can modify the GR-equivalent symmetric teleparallel gravity by a parity violating interaction $c\phi Q\tilde{Q}$, where $Q\tilde{Q}=\epsilon^{\mu\nu\rho\sigma}Q_{\mu\nu\alpha}Q_{\rho\sigma}^{\;\;\;\alpha}$ with $Q_{\mu\nu\sigma}$ being the nonmetricity tensor Li:2021mdp ; Conroy . It is worth noting that such a parity violating interaction can lead to a ghost problem in the vector perturbations Li:2021mdp . The propagation equation of GWs in the symmetric teleparallel gravity with the parity violating term $c\phi Q\tilde{Q}$ takes exactly the same form as (3.9), with $\nu_{A}=0$ and $\mu_{A}=\rho_{A}c\dot{\phi}/(ak)$ Li:2021mdp . This corresponds to $M_{\rm PV}=c\dot{\phi}$; thus one obtains
$\displaystyle|c\dot{\phi}|<6.5\times 10^{-42}\;{\rm GeV}.$ (6.1)
With the continued upgrading of the Advanced LIGO and Virgo detectors, we expect more GW events, especially those with heavier chirp masses and higher redshifts, to be detected in the future. With more data, we expect to significantly improve the constraint on $M_{\rm PV}$ and on the corresponding modified gravities.
## Acknowledgments
T.Z., Q.W., and A.W. are supported in part by the National Key Research and
Development Program of China Grant No. 2020YFC2201503, and the Zhejiang
Provincial Natural Science Foundation of China under Grants No. LR21A050001
and No. LY20A050002, the National Natural Science Foundation of China under
Grant No. 11675143 and No. 11975203, and the Fundamental Research Funds for
the Provincial Universities of Zhejiang in China under Grant No. RF-A2019015.
W.Z. and R.N. are supported by NSFC Grants No. 11773028, No. 11633001, No.
11653002, No. 11421303, No. 11903030, the Fundamental Research Funds for the
Central Universities, and the Strategic Priority Research Program of the
Chinese Academy of Sciences Grant No. XDB23010200.
## References
* (1) B. P. Abbott et al. (LIGO Scientific and Virgo Collaborations), Observation of Gravitational Waves from a Binary Black Hole Merger, Phys. Rev. Lett. 116, 061102 (2016); GW150914: First results from the search for binary black hole coalescence with Advanced LIGO, Phys. Rev. D 93, 122003 (2016); Properties of the Binary Black Hole Merger GW150914, Phys. Rev. Lett. 116, 241102 (2016); GW150914: The Advanced LIGO Detectors in the Era of First Discoveries, Phys. Rev. Lett. 116, 131103 (2016).
* (2) B. P. Abbott et al. (LIGO Scientific and Virgo Collaborations), GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral, Phys. Rev. Lett. 119, 161101 (2017).
* (3) B. P. Abbott et al. (LIGO Scientific and Virgo Collaborations), GW151226: Observation of Gravitational Waves from a 22-Solar-Mass Binary Black Hole Coalescence, Phys. Rev. Lett. 116, 241103 (2016); GW170104: Observation of a 50-Solar-Mass Binary Black Hole Coalescence at Redshift 0.2, Phys. Rev. Lett. 118, 221101 (2017); GW170608: Observation of a 19-solar-mass binary black hole coalescence, Astrophys. J. Lett. 851, L35 (2017).
* (4) B. P. Abbott et al. (LIGO Scientific and Virgo Collaborations), GW170814: A Three-Detector Observation of Gravitational Waves from a Binary Black Hole Coalescence, Phys. Rev. Lett. 119, 141101 (2017).
* (5) B. P. Abbott et al. (LIGO Scientific and Virgo Collaborations), GWTC-1: A Gravitational-Wave Transient Catalog of Compact Binary Mergers Observed by LIGO and Virgo during the First and Second Observing Runs, Phys. Rev. X 9, 031040 (2019).
* (6) R. Abbott et al. (LIGO Scientific and Virgo Collaborations), GWTC-2: Compact Binary Coalescences Observed by LIGO and Virgo During the First Half of the Third Observing Run, Phys. Rev. X 11, 021053 (2021).
* (7) R. Abbott et al. (LIGO Scientific, KAGRA and Virgo Collaborations), Observation of Gravitational Waves from Two Neutron Star–Black Hole Coalescences, Astrophys. J. Lett. 915, L5 (2021).
* (8) B. P. Abbott et al. (LIGO Scientific and Virgo Collaborations), Tests of general relativity with GW150914, Phys. Rev. Lett. 116, 221101 (2016).
* (9) B. P. Abbott et al. (LIGO Scientific and Virgo Collaborations), Tests of General Relativity with GW170817, Phys. Rev. Lett. 123, 011102 (2019).
* (10) B. P. Abbott et al. (LIGO Scientific, Virgo, Fermi-GBM and INTEGRAL Collaborations), Gravitational Waves and Gamma-rays from a Binary Neutron Star Merger: GW170817 and GRB 170817A, Astrophys. J. Lett. 848, L13 (2017).
* (11) B. P. Abbott et al. (LIGO Scientific and Virgo Collaborations), Tests of General Relativity with the Binary Black Hole Signals from the LIGO-Virgo Catalog GWTC-1, Phys. Rev. D 100, 104036 (2019).
* (12) R. Abbott et al. (LIGO Scientific and Virgo Collaborations), Tests of general relativity with binary black holes from the second LIGO-Virgo gravitational-wave transient catalog, Phys. Rev. D 103, 122002 (2021).
* (13) G. Cognola, E. Elizalde, S. Nojiri, S. D. Odintsov, and S. Zerbini, Dark energy in modified Gauss-Bonnet gravity: Late-time acceleration and the hierarchy problem, Phys. Rev. D 73, 084007 (2006).
* (14) E. J. Copeland, M. Sami, and S. Tsujikawa, Dynamics of dark energy, Int. J. Mod. Phys. D 15, 1753 (2006).
* (15) J. Frieman, M. Turner, and D. Huterer, Dark Energy and the Accelerating Universe, Ann. Rev. Astron. Astrophys. 46, 385 (2008).
* (16) M. Li, X. D. Li, S. Wang, and Y. Wang, Dark Energy, Commun. Theor. Phys. 56, 525 (2011).
* (17) T. D. Lee and C. N. Yang, Question of Parity Conservation in Weak Interactions, Phys. Rev. 104, 254 (1956).
* (18) K. Fujikawa, Path Integral Measure for Gauge Invariant Fermion Theories, Phys. Rev. Lett. 42, 1195 (1979).
* (19) K. Fujikawa, Path Integral for Gauge Theories with Fermions, Phys. Rev. D 21, 2848 (1980) [erratum: Phys. Rev. D 22, 1499 (1980)].
* (20) S. Alexander and N. Yunes, Chern-Simons Modified General Relativity, Phys. Rept. 480, 1 (2009).
* (21) R. Jackiw and S. Y. Pi, Chern-Simons modification of general relativity, Phys. Rev. D 68, 104012 (2003).
* (22) N. Yunes, R. O’Shaughnessy, B. J. Owen, and S. Alexander, Testing gravitational parity violation with coincident gravitational waves and short gamma-ray bursts, Phys. Rev. D 82, 064017 (2010).
* (23) K. Yagi, N. Yunes, and T. Tanaka, Gravitational Waves from Quasi-Circular Black Hole Binaries in Dynamical Chern-Simons Gravity, Phys. Rev. Lett. 109, 251105 (2012) [erratum: Phys. Rev. Lett. 116, no.16, 169902 (2016); erratum: Phys. Rev. Lett. 124, no.2, 029901 (2020)].
* (24) S. H. Alexander and N. Yunes, Gravitational wave probes of parity violation in compact binary coalescences, Phys. Rev. D 97, 064033 (2018).
* (25) K. Yagi and H. Yang, Probing Gravitational Parity Violation with Gravitational Waves from Stellar-mass Black Hole Binaries, Phys. Rev. D 97, 104018 (2018).
* (26) A. Conroy and T. Koivisto, Parity-Violating Gravity and GW170817 in Non-Riemannian Cosmology, JCAP 12, 016 (2019).
* (27) P. Horava, Quantum Gravity at a Lifshitz Point, Phys. Rev. D 79, 084008 (2009).
* (28) T. Takahashi and J. Soda, Chiral Primordial Gravitational Waves from a Lifshitz Point, Phys. Rev. Lett. 102, 231301 (2009).
* (29) D. Yoshida and J. Soda, Exploring the string axiverse and parity violation in gravity with gravitational waves, Int. J. Mod. Phys. D 27, 1850096 (2018); M. Satoh and J. Soda, Higher Curvature Corrections to Primordial Fluctuations in Slow-roll Inflation, JCAP 09, 019 (2008); M. Satoh, S. Kanno and J. Soda, Circular Polarization of Primordial Gravitational Waves in String-inspired Inflationary Cosmology, Phys. Rev. D 77, 023526 (2008); J. Soda, H. Kodama, and M. Nozawa, Parity Violation in Graviton Non-gaussianity, JHEP 08, 067 (2011); T. Zhu, Q. Wu, A. Wang, and F. W. Shu, U(1) symmetry and elimination of spin-0 gravitons in Horava-Lifshitz gravity without the projectability condition, Phys. Rev. D 84, 101502 (2011).
* (30) A. Wang, Q. Wu, W. Zhao, and T. Zhu, Polarizing primordial gravitational waves by parity violation, Phys. Rev. D 87, 103512 (2013).
* (31) T. Zhu, W. Zhao, Y. Huang, A. Wang, and Q. Wu, Effects of parity violation on non-gaussianity of primordial gravitational waves in Hořava-Lifshitz gravity, Phys. Rev. D 88, 063508 (2013).
* (32) A. Wang, Hořava gravity at a Lifshitz point: A progress report, Int. J. Mod. Phys. D 26, 1730014 (2017).
* (33) M. Crisostomi, K. Noui, C. Charmousis, and D. Langlois, Beyond Lovelock gravity: Higher derivative metric theories, Phys. Rev. D 97, 044034 (2018).
* (34) A. Nishizawa and T. Kobayashi, Parity-violating gravity and GW170817, Phys. Rev. D 98, 124018 (2018).
* (35) X. Gao and X. Y. Hong, Propagation of gravitational waves in a cosmological background, Phys. Rev. D 101, 064057 (2020).
* (36) V. A. Kostelecký and M. Mewes, Testing local Lorentz invariance with gravitational waves, Phys. Lett. B 757, 510 (2016).
* (37) Q. G. Bailey and V. A. Kostelecky, Signals for Lorentz violation in post-Newtonian gravity, Phys. Rev. D 74, 045001 (2006).
* (38) M. Mewes, Signals for Lorentz violation in gravitational waves, Phys. Rev. D 99, 104062 (2019).
* (39) L. Shao, Combined search for anisotropic birefringence in the gravitational-wave transient catalog GWTC-1, Phys. Rev. D 101, 104019 (2020).
* (40) Z. Wang, L. Shao and C. Liu, New limits on the Lorentz/CPT symmetry through fifty gravitational-wave events, Astrophys. J. 921, 158 (2021).
* (41) A. Lue, L. M. Wang and M. Kamionkowski, Cosmological signature of new parity violating interactions, Phys. Rev. Lett. 83, 1506 (1999).
* (42) S. Alexander and J. Martin, Birefringent gravitational waves and the consistency check of inflation, Phys. Rev. D 71, 063526 (2005).
* (43) N. Bartolo, L. Caloni, G. Orlando and A. Ricciardone, Tensor non-Gaussianity in chiral scalar-tensor theories of gravity, JCAP 03, 073 (2021).
* (44) C. Fu, J. Liu, T. Zhu, H. Yu and P. Wu, Resonance instability of primordial gravitational waves during inflation in Chern–Simons gravity, Eur. Phys. J. C 81, 204 (2021).
* (45) M. Okounkova, W. M. Farr, M. Isi and L. C. Stein, Constraining gravitational wave amplitude birefringence and Chern-Simons gravity with GWTC-2, arXiv:2101.11153 [gr-qc].
* (46) Q. Hu, M. Li, R. Niu and W. Zhao, Joint Observations of Space-based Gravitational-wave Detectors: Source Localization and Implication for Parity-violating gravity, Phys. Rev. D 103, 064057 (2021).
* (47) R. Nair, S. Perkins, H. O. Silva and N. Yunes, Fundamental Physics Implications for Higher-Curvature Theories from Binary Black Hole Signals in the LIGO-Virgo Catalog GWTC-1, Phys. Rev. Lett. 123, 191101 (2019).
* (48) S. Wang and Z. C. Zhao, Tests of CPT invariance in gravitational waves with LIGO-Virgo catalog GWTC-1, Eur. Phys. J. C 80, 1032 (2020).
* (49) K. Yamada and T. Tanaka, Parametrized test of parity-violating gravity using GWTC-1 events, PTEP 2020, 093E01 (2020) [arXiv:2006.11086 [gr-qc]].
* (50) Y. F. Wang, R. Niu, T. Zhu, and W. Zhao, Gravitational Wave Implications for the Parity Symmetry of Gravity in the High Energy Region, Astrophys. J. 908, 58 (2021) [arXiv:2002.05668 [gr-qc]].
* (51) Y. F. Wang, S. M. Brown, L. Shao, and W. Zhao, Tests of Gravitational-Wave Birefringence with the Gravitational-Wave Catalog, arXiv:2109.09718 [astro-ph.HE].
* (52) W. Zhao, T. Zhu, J. Qiao, and A. Wang, Waveform of gravitational waves in the general parity-violating gravities, Phys. Rev. D 101, 024002 (2020); J. Qiao, T. Zhu, W. Zhao, and A. Wang, Waveform of gravitational waves in the ghost-free parity-violating gravities, Phys. Rev. D 100, 124058 (2019) [arXiv:1909.03815 [gr-qc]].
* (53) M. Li, H. Rao, and D. Zhao, A simple parity violating gravity model without ghost instability, JCAP 11, 023 (2020) [arXiv:2007.08038 [gr-qc]].
* (54) M. Li, H. Rao, and Y. Tong, Revisiting a parity violating gravity model without ghost instability: Local Lorentz covariance, Phys. Rev. D 104, 084077 (2021) [arXiv:2104.05917 [gr-qc]].
* (55) J.G. Pereira and R. Aldrovandi, Teleparallel Gravity: An Introduction (Springer, Dordrecht, 2013).
* (56) S. Bahamonde, K. F. Dialektopoulos, C. Escamilla-Rivera, G. Farrugia, V. Gakis, M. Hendry, M. Hohmann, J. L. Said, J. Mifsud, and E. Di Valentino, Teleparallel Gravity: From Theory to Cosmology, [arXiv:2106.13793 [gr-qc]].
* (57) S. Mercuri, Peccei-Quinn mechanism in gravity and the nature of the Barbero-Immirzi parameter, Phys. Rev. Lett. 103, 081302 (2009).
* (58) O. Castillo-Felisola, C. Corral, S. Kovalenko, I. Schmidt and V. E. Lyubovitskij, Axions in gravity with torsion, Phys. Rev. D 91, 085017 (2015).
* (59) R. D. Peccei and H. R. Quinn, CP Conservation in the Presence of Instantons, Phys. Rev. Lett. 38, 1440 (1977).
* (60) H. Rao, Parameterized post-Newtonian limit of the Nieh-Yan modified teleparallel gravity, arXiv:2107.08597 [gr-qc].
* (61) F. Bombacigno, S. Boudet, G. J. Olmo, and G. Montani, Big bounce and future time singularity resolution in Bianchi I cosmologies: The projective invariant Nieh-Yan case, Phys. Rev. D 103, 124031 (2021).
* (62) S. Bahamonde, C. G. Böhmer and M. Krššák, New classes of modified teleparallel gravity models, Phys. Lett. B 775, 37 (2017).
* (63) S. Bahamonde, C. G. Böhmer, and M. Wright, Modified teleparallel theories of gravity, Phys. Rev. D 92, 104042 (2015).
* (64) M. Hohmann and C. Pfeifer, Teleparallel axions and cosmology, Eur. Phys. J. C 81, 376 (2021).
* (65) Y. Zhang and H. Zhang, Distinguish the $f(T)$ model from $\Lambda$CDM model with Gravitational Wave observations, Eur. Phys. J. C 81, 706 (2021).
* (66) M. Li and D. Zhao, A simple parity violating model in the symmetric teleparallel gravity and its cosmological perturbations, arXiv:2108.01337 [gr-qc].
* (67) V. A. Kostelecký and M. Mewes, Lorentz and Diffeomorphism Violations in Linearized Gravity, Phys. Lett. B 779, 136 (2018).
* (68) V. A. Kostelecky and M. Mewes, Electrodynamics with Lorentz-violating operators of arbitrary dimension, Phys. Rev. D 80, 015020 (2009).
* (69) R. Kuhfuss and J. Nitsch, Propagating Modes in Gauge Field Theories of Gravity, Gen. Relat. Gravit. 18, 1207 (1986).
* (70) J. B. Jiménez and T. S. Koivisto, Noether charges in the geometrical trinity of gravity, arXiv:2111.04716 [gr-qc].
* (71) A. Jiménez Cano, Metric-Affine Gauge Theories of Gravity: Foundations and New Insights, Granada U. (2021).
* (72) Y. C. Ong, K. Izumi, J. M. Nester, and P. Chen, Problems with Propagation and Time Evolution in f(T) Gravity, Phys. Rev. D 88, 024019 (2013).
* (73) A. De Felice, A. E. Gumrukcuoglu, and S. Mukohyama, Massive gravity: nonlinear instability of the homogeneous and isotropic universe, Phys. Rev. Lett. 109, 171101 (2012).
* (74) J. B. Jiménez and A. Jiménez-Cano, On the strong coupling of Einsteinian Cubic Gravity and its generalisations, JCAP 01, 069 (2021) [arXiv:2009.08197 [gr-qc]].
* (75) P. Jaranowski, A. Krolak, and B. F. Schutz, Data analysis of gravitational - wave signals from spinning neutron stars. 1. The Signal and its detection, Phys. Rev. D 58, 063001 (1998); W. Zhao and L. Wen, Localization accuracy of compact binary coalescences detected by the third-generation gravitational-wave detectors and implication for cosmology, Phys. Rev. D 97, 064031 (2018).
* (76) B. S. Sathyaprakash and B. F. Schutz, Physics, Astrophysics and Cosmology with Gravitational Waves, Living Rev. Rel. 12, 2 (2009).
* (77) P. Schmidt, F. Ohme, and M. Hannam, Towards models of gravitational waveforms from generic binaries II: Modelling precession effects with a single effective precession parameter, Phys. Rev. D 91, 024043 (2015) [arXiv:1408.1810 [gr-qc]].
* (78) M. Hannam, P. Schmidt, A. Bohé, L. Haegel, S. Husa, F. Ohme, G. Pratten, and M. Pürrer, Simple Model of Complete Precessing Black-Hole-Binary Gravitational Waveforms, Phys. Rev. Lett. 113, 151101 (2014).
* (79) G. Pratten, C. García-Quirós, M. Colleoni, A. Ramos-Buades, H. Estellés, M. Mateu-Lucena, R. Jaume, M. Haney, D. Keitel, and J. E. Thompson, et al. Computationally efficient models for the dominant and subdominant harmonic modes of precessing binary black holes, Phys. Rev. D 103, 104056 (2021).
* (80) R. Abbott et al. (LIGO Scientific and Virgo Collaborations), Open data from the first and second observing runs of Advanced LIGO and Advanced Virgo, SoftwareX 13, 100658 (2021) [arXiv:1912.11716 [gr-qc]].
* (81) B. P. Abbott et al. (LIGO Scientific and Virgo Collaborations), A Guide to LIGO–Virgo Detector Noise and Extraction of Transient Gravitational-Wave Signals, Classical Quantum Gravity 37, 055002 (2020).
* (82) B. P. Abbott et al. (LIGO Scientific and Virgo Collaborations), GWTC-2 Data Release: Parameter Estimation Samples and Skymaps, https://dcc.ligo.org/LIGO-P2000223/public.
* (83) N. J. Cornish and T. B. Littenberg, BayesWave: Bayesian Inference for Gravitational Wave Bursts and Instrument Glitches, Class. Quant. Grav. 32, 135012 (2015).
* (84) T. B. Littenberg and N. J. Cornish, Bayesian inference for spectral estimation of gravitational wave detector noise, Phys. Rev. D 91, 084034 (2015).
* (85) G. Ashton, M. Hübner, P. D. Lasky, C. Talbot, K. Ackley, S. Biscoveanu, Q. Chu, A. Divakarla, P. J. Easter, and B. Goncharov, et al. BILBY: A user-friendly Bayesian inference library for gravitational-wave astronomy, Astrophys. J. Suppl. 241, 27 (2019) [arXiv:1811.02042 [astro-ph.IM]]; I. M. Romero-Shaw, C. Talbot, S. Biscoveanu, V. D’Emilio, G. Ashton, C. P. L. Berry, S. Coughlin, S. Galaudage, C. Hoy, and M. Hübner, et al. Bayesian inference for compact binary coalescences with bilby: validation and application to the first LIGO–Virgo gravitational-wave transient catalogue, Mon. Not. Roy. Astron. Soc. 499, 3295-3319 (2020) [arXiv:2006.00714 [astro-ph.IM]].
# Realizing exceptional points of any order in the presence of symmetry
Sharareh Sayyad<EMAIL_ADDRESS>Flore K. Kunst
<EMAIL_ADDRESS>Max Planck Institute for the Science of Light,
Staudtstraße 2, 91058 Erlangen, Germany
(August 27, 2024)
###### Abstract
Exceptional points (EPs) appear as degeneracies in the spectrum of non-
Hermitian matrices at which the eigenvectors coalesce. In general, an EP of
order $n$ may find room to emerge if $2(n-1)$ real constraints are imposed.
Our results show that these constraints can be expressed in terms of the
determinant and traces of the non-Hermitian matrix. Our findings further
reveal that the total number of constraints may be reduced in the presence of
unitary and antiunitary symmetries. Additionally, we draw generic conclusions
for the low-energy dispersion of the EPs. Based on our calculations, we show
that in odd dimensions the presence of sublattice or pseudo-chiral symmetry
forces $n$th-order EPs to disperse with the $(n-1)$th root. For two-, three-
and four-band systems, we explicitly present the constraints needed for the
occurrence of EPs in terms of system parameters and classify EPs based on
their low-energy dispersion relations.
## I Introduction
The appearance of symmetry-protected degeneracies in the energy dispersion of
various Hermitian topological systems has attracted much attention in the past
decades [1, 2, 3, 4, 5, 6, 7, 8]. These Hermitian topological systems, aside
from their space-group symmetry, are classified using ten symmetry classes [9]
identified based on three discrete symmetries, namely time-reversal symmetry,
particle-hole (or charge conjugation) symmetry, and chiral (or sublattice)
symmetry [7]. Topological semimetals [10, 11, 12] and multifold fermions [13,
14, 15] are excellent representatives of such systems in which two- or multi-
band crossings can be observed in the energy spectra. In the absence of
symmetry, these band touchings are generally unstable in lower-dimensional
models due to the hybridization of the bands, resulting in the gapping out of
degeneracies. However, this band repulsion mechanism is absent in topological
systems in which crystalline symmetries and/or discrete symmetries, e.g.,
time-reversal symmetry, may protect band touching points [16].
It has further been shown that the commonly observed linear energy dispersion
close to nontrivial degeneracies might be forbidden due to certain symmetry
constraints present in some systems [17]. As a result, higher-order band
dispersions, such as cubic or quadratic, may find room to arise close to band
touching manifolds [18, 19, 20]. These distinct characters of energy spectra
are considered as an additional tool to classify various nontrivial
degeneracies in Hermitian systems [16].
The recent surge of theoretical and experimental interest in the field of
non-Hermitian systems has advanced our understanding of the intrinsic
properties of systems with no Hermitian counterparts. Some of these exotic
properties are i) the piling up of bulk states on the boundaries known as the
non-Hermitian skin effect [21], which goes hand in hand with a violation of
the conventional (Hermitian) bulk-boundary correspondence [22, 23, 24], ii)
the emergence of exceptional points (EPs) [25, 26] as defective degeneracies
at which the geometric multiplicity is smaller than the algebraic
multiplicity, and iii) the observation of different non-Hermitian topological
systems due to the closure of non-Hermitian (line or point) gaps [27, 28, 29].
The emergence of these unique properties of non-Hermitian systems is linked to
the extended 38 symmetry classes [25], which are the non-Hermitian
counterparts of the tenfold Altland–Zirnbauer classification [9, 7] in
Hermitian systems. As Hermiticity is not respected in non-Hermitian systems,
particle-hole symmetry ($\rm PHS$ and ${\rm PHS}^{\dagger}$) and time-reversal
symmetry ($\rm TRS$ and ${\rm TRS}^{\dagger}$) acquire two different flavors,
and chiral symmetry ($\rm CS$) is discerned from sublattice symmetry ($\rm
SLS$). These six symmetries combined with pseudo-Hermiticity ($\rm psH$) [30]
give rise to 38 symmetry classes as defined in Ref. [25]. Aside from these
seven symmetries, pseudo-chiral symmetry ($\rm psCS$) [31], inversion ($\cal
I$) symmetry [32], parity ($\cal P$) symmetry and its combination with time-
reversal ($\cal PT$) [33] and particle-hole ($\cal CP$) symmetries [34] have
been considered in exploring various properties of non-Hermitian systems. We
summarize these symmetries in Table 1. We note that these symmetries can also
be written in terms of the classification of random matrices [35] introduced
by Bernard and LeClair [36], see also Appendix A for details.
Among the unique properties of non-Hermitian systems, acquiring a deeper
understanding of exceptional points has been the focus of numerous
recent theoretical [37, 38, 39, 40, 26, 41, 42, 34] and experimental [43, 44,
45, 46, 47] studies because of their putative applications, for instance, in
sensing devices [48, 49] and unidirectional lasing [50, 51]. While the focus
of these works has mainly been on exceptional points of order two, i.e.,
exceptional points at which two eigenvalues coincide and simultaneously
associated eigenvectors coalesce onto one, a recent shift has been made
towards studying the properties of EPs with higher orders [39, 38, 52, 41, 40,
34].
These investigations, which usually explore case studies, mainly address the
following questions: i) How many constraints need to be satisfied
to find an $n$th-order EP, dubbed EP$n$? It has been argued that $2(n-1)$
real constraints should be imposed to detect EP$n$s in systems with no
symmetry [26]. Even though a description for these constraints is discussed in
Ref. 34, a generic recipe to generate and understand these constraints in the
presence of any symmetry is absent in the literature. Nevertheless, it has
been suggested that relating each of these constraints to a momentum
coordinate implies that merely EP2s can be realized in three spatial
dimensions [53, 54]. ii) What role is played by symmetries in the appearance
of EP$n$s? Recent studies reveal that including symmetries may reduce the
number of constraints needed to realize EPs. As a result, various case studies
reported the occurrence of EP2s [39, 38], EP3s [52], and EP4s [41] in one,
two, and three spatial dimensions, respectively. More extended studies have
also explored the link between observing EP$n$s and the presence of either
$\mathcal{PT}$ [40] or antiunitary [34] symmetries. iii) Is it possible to
distinguish EP$n$s based on the low-energy dispersions close to them? Similar
to Hermitian systems, in which linear, cubic, and quadratic dispersions were
reported close to nontrivial degeneracies, an $n$th-root dispersion in the
vicinity of EP$n$s has been identified in numerous cases [39]. A recent study reports the
square-root behavior of band spectra close to EP$3$s in the presence of SLS
[52], where this possibility has also been studied in Ref. 55 without
reference to symmetry.
In this work, we revisit these questions using a generic mathematical
formulation to explore the appearance of EP$n$s in Hamiltonians represented by
$n$-dimensional matrices. Based on our formalism, we are able to count the
number of constraints in the presence of any symmetry and evaluate each
constraint based on the traces and the determinant of the Hamiltonian of our
interests. In particular, we find that one needs to satisfy
$\mathop{\mathrm{tr}}[\mathcal{H}^{k}]=0$ with $k=2,\ldots,n-1$ and
$\det[\mathcal{H}]=0$ to find an EP with order $n$ arriving at a total of
$2(n-1)$ constraints in agreement with the literature. Imposing symmetry
considerations, we show that in the presence of CS, psCS, SLS, psH, $\cal PT$,
and $\cal CP$ symmetries, some traces or the determinant of $n$-band systems
generally disappear. Moreover, when psH, $\mathcal{PT}$ or $\mathcal{CP}$
symmetry is present, we find that the number of constraints is reduced to
half, i.e., $n-1$ constraints. When we instead consider psCS or SLS, we
recover $n$ constraints for $n\in\textrm{even}$ and $n-1$ constraints when
$n\in\textrm{odd}$. CS is only defined in even dimensions, in which case we
find $n-1$ constraints. We summarize these results in Table 2.
We, furthermore, identify conditions to characterize various EP$n$s based on
their low-energy dispersions. To do so, we introduce an alternative approach
based on the Frobenius companion matrix of the characteristic polynomial,
which can be interpreted as representing a perturbation close to an EP$n$.
With this matrix in mind, we rederive the above statement pertaining to the
$2(n-1)$ constraints as well as explicitly calculate the low-energy band
dispersions around an EP$n$. Despite the common assumption that EP$n$s
disperse with the $n$th root, we find that in the presence of SLS or psCS with
$n\in\textrm{odd}$, the leading order term of the dispersion around an EP$n$
generically scales with the $(n-1)$th root.
We emphasize that our formulation is not limited to any specific spatial
dimension. For completeness, we calculate explicit forms of the
nonzero constraints for all twelve symmetries listed in Table 1 for two-,
three-, and four-band systems and present their nonzero parameters.
The outline of this paper is as follows. In Sec. II, we present our generic
mathematical formulation to describe EP$n$s. We further draw generic symmetry-
based arguments on the behavior of EP$n$s when a specific symmetry is
respected. Using the generic decomposition of two-band systems in terms of
Pauli matrices, we discuss the properties of EP$2$s, explicit forms of
constraints, and collections of nonzero parameters in the presence of each
twelve symmetries in Sec. III. In Sections IV and V we pursue similar lines of
thought for EP$3$s and EP$4$s, respectively. Using the Gell-Mann matrices and
their generalization, we rewrite three- and four-band Hamiltonians and
identify their nonzero components when a symmetry constraint is enforced. We
also discuss various possibilities to observe different energy dispersions
close to EP$3$s and EP$4$s in Sections IV and V, respectively. We conclude our
paper in Sec. VI.
Table 1: Summarized symmetries and their associated energy constraints
Symmetry | | Symmetry constraint | | Energy constraint
---|---|---|---|---
Particle-hole symmetry I (PHS) | | ${\cal H}(-\bm{k})=-{\cal C}_{-}{\cal H}^{T}(\bm{k}){\cal C}_{-}^{\dagger}$ | | $\\{\epsilon(\bm{k})\\}=\\{-\epsilon(-\bm{k})\\}$
Particle-hole symmetry II (PHS†) | | ${\cal H}(-\bm{k})=-{\cal T}_{-}{\cal H}^{*}(\bm{k}){\cal T}_{-}^{\dagger}$ | | $\\{\epsilon(\bm{k})\\}=\\{-\epsilon^{*}(-\bm{k})\\}$
Time-reversal symmetry I (TRS) | | ${\cal H}(-\bm{k})={\cal T}_{+}{\cal H}^{*}(\bm{k}){\cal T}_{+}^{\dagger}$ | | $\\{\epsilon(\bm{k})\\}=\\{\epsilon^{*}(-\bm{k})\\}$
Time-reversal symmetry II (TRS†) | | ${\cal H}(-\bm{k})={\cal C}_{+}{\cal H}^{T}(\bm{k}){\cal C}_{+}^{\dagger}$ | | $\\{\epsilon(\bm{k})\\}=\\{\epsilon(-\bm{k})\\}$
Chiral symmetry (CS) | | ${\cal H}(\bm{k})=-\Gamma{\cal H}^{\dagger}(\bm{k})\Gamma^{-1}$ | | $\\{\epsilon(\bm{k})\\}=\\{-\epsilon^{*}(\bm{k})\\}$
Pseudo-chiral symmetry (${\rm psCS}$) | | ${\cal H}^{T}(\bm{k})=-\Lambda{\cal H}(\bm{k})\Lambda^{-1}$ | | $\\{\epsilon(\bm{k})\\}=\\{-\epsilon(\bm{k})\\}$
Sublattice-symmetry (SLS) | | ${\cal H}(\bm{k})=-{\cal S}{\cal H}(\bm{k}){\cal S}^{-1}$ | | $\\{\epsilon(\bm{k})\\}=\\{-\epsilon(\bm{k})\\}$
Pseudo-Hermiticity ($\rm psH$) | | ${\cal H}(\bm{k})=\varsigma{\cal H}^{\dagger}(\bm{k})\varsigma^{-1}$ | | $\\{\epsilon(\bm{k})\\}=\\{\epsilon^{*}(\bm{k})\\}$
Inversion symmetry ($\cal I$) | | ${\cal H}^{\dagger}(-\bm{k})={\cal I}{\cal H}(\bm{k}){\cal I}^{-1}$ | | $\\{\epsilon(\bm{k})\\}=\\{\epsilon^{*}(-\bm{k})\\}$
Parity ($\cal P$) symmetry | | ${\cal H}(-\bm{k})={\cal P}{\cal H}(\bm{k}){\cal P}^{-1}$ | | $\\{\epsilon(\bm{k})\\}=\\{\epsilon(-\bm{k})\\}$
Parity-time ($\cal PT$) symmetry | | ${\cal H}(\bm{k})=({\cal P}{\cal T}_{+}){\cal H}^{*}(\bm{k})({\cal P}{\cal T}_{+})^{-1}$ | | $\\{\epsilon(\bm{k})\\}=\\{\epsilon^{*}(\bm{k})\\}$
Parity-particle-hole (${\cal CP}$) symmetry | | ${\cal H}(\bm{k})=-({\cal CP}){\cal H}^{*}(\bm{k})({\cal CP})^{-1}$ | | $\\{\epsilon(\bm{k})\\}=\\{-\epsilon^{*}(\bm{k})\\}$
Here the unitary operator $A\in\\{\Gamma,\Lambda,\varsigma,{\cal S},{\cal
P},{\cal I}\\}$ obeys $A^{2}=1$, and the anti-unitary operator $A\in\\{{\cal
C}_{\pm},{\cal T}_{\pm}\\}$ satisfies $AA^{*}=\zeta_{A}1$ with $\zeta_{A}=\pm
1$. Note that the spectra of systems with TRS, PHS, TRS† and PHS† exhibit the
Kramers degeneracy [56, 57]. We refer to Appendix B for the specific form of
the symmetry-preserving Hamiltonians.
## II EPs in n-band systems
Given a generic $n\times n$ matrix ${\cal H}$, the characteristic polynomial
is defined by
$\displaystyle{\cal F}_{\lambda}$ $\displaystyle=\det[\lambda\mathbbm{1}-{\cal
H}]$
$\displaystyle=\lambda^{n}-\sigma_{1}\lambda^{n-1}+\ldots+(-1)^{n}\sigma_{n}=0,$
(1)
where
$\displaystyle\sigma_{1}=\mathop{\mathrm{tr}}[{\cal
H}],\qquad\sigma_{n}=\det[{\cal H}],$ (2)
and other $\sigma_{k}$’s are the sum of $k$th order diagonal minors of $\cal
H$. Defining $p_{k}=(-1)^{k}\sigma_{k}$ and $s_{k}=\mathop{\mathrm{tr}}[{\cal
H}^{k}]$, we have
$\displaystyle p_{k}=-\frac{s_{k}+p_{1}s_{k-1}+\ldots+p_{k-1}s_{1}}{k},\quad
k=1,\ldots,n-1.$ (3)
We can thus express all coefficients of ${\cal F}_{\lambda}$ in terms of
$\mathop{\mathrm{tr}}[{\cal H}^{k}]$ and $\det[{\cal H}]$ [58, 59]. Having the
characteristic polynomial in Eq. (1), one can then calculate its discriminant
${\cal D}[{\cal H}]$. ${\cal D}[{\cal H}]$ is zero when ${\cal F}_{\lambda}$
possesses multiple, say $m$ with $m\leq n$, degenerate roots $\lambda_{m}$.
Those $m$ degenerate roots, whose associated eigenvectors in ${\cal H}$
coalesce, are dubbed $m$th order exceptional points (EP$m$s) or defective
degeneracies. As a result, the Jordan canonical form of ${\cal H}$ with EP$m$s
exhibits a Jordan block of dimension $m$ and with eigenvalue $\lambda_{m}$ on
the major diagonal.
Adjusting coefficients in Eq. (1) can give rise to the appearance of EP$n$s in
the eigenspectrum of ${\cal H}$. Subsequently, one can evaluate the number of
constraints to observe EP$n$s. More precisely, by setting
$\sigma_{1}=\mathop{\mathrm{tr}}[{\cal H}]=0$, which is a trivial shift to the
spectrum, we are left with $(n-1)$ complex-valued coefficients, $n-2$
different traces and one determinant. To find EP$n$s, we should thus enforce
$2(n-1)$ constraints, i.e., $\mathop{\mathrm{Re}}[\det[{\cal H}]]=0$,
$\mathop{\mathrm{Im}}[\det[{\cal H}]]=0$,
$\mathop{\mathrm{Re}}[\mathop{\mathrm{tr}}[{\cal H}^{k}]]=0$ and
$\mathop{\mathrm{Im}}[\mathop{\mathrm{tr}}[{\cal H}^{k}]]=0$ with
$k=2,\ldots,n-1$. We emphasize that EP$n$s occur when all of these $2(n-1)$
constraints are simultaneously enforced. In parameter regimes in which a
smaller number of constraints is satisfied, lower-order EPs, i.e., EP$m$s with
$m<n$, may find room to emerge in the spectrum of $n\times n$-dimensional
matrices.
An alternative approach to counting the number of constraints in matrices with
EP$n$s is based on perturbing ${\cal H}$ close to EP$n$s [60]. Here, we
introduce a Jordan block $J_{n}$ as a description for the EP$n$s in ${\cal H}$
with dimension $n$ and, without loss of generality, diagonal value
$\lambda_{n}=0$. Introducing the perturbation matrix $\delta S$, one can find
all insignificant, trivial perturbations using $[\delta S,J_{n}]$ [60]. The
remaining non-trivial perturbation is an $n\times n$ matrix $\delta J_{n}$. The
matrix elements of $\delta J_{n}$ are always zero except for $n-1$ complex-valued
elements $\delta J_{n,j}$ with $j=1,\ldots,n-1$. The sum of the Jordan block
and its nontrivial perturbation, namely $J_{n}+\delta J_{n}$, describes the
low-energy behavior of ${\cal H}$ close to EP$n$s, and reads
$\displaystyle{\cal H}_{0}=J_{n}+\delta J_{n}=\begin{pmatrix}0&1&0&\ldots&0&0\\ 0&0&1&\ldots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\ldots&0&1\\ \delta J_{n,1}&\delta J_{n,2}&\delta J_{n,3}&\ldots&\delta J_{n,n-1}&0\end{pmatrix}.$ (4)
Note that when $\mathop{\mathrm{tr}}[{\cal H}]$ is nonzero, the $(n,n)$ matrix
element of $\delta J_{n}$ is also nonzero. The matrix elements $\delta
J_{n,j}$ are related to the coefficients $\sigma_{k}$ since the characteristic
polynomial of $J_{n}+\delta J_{n}$ is identical to Eq. (1).
In fact, ${\cal H}_{0}$ constitutes the (transpose) _Frobenius companion
matrix_ of the characteristic polynomial in Eq. (1) [61], and each of the
$\delta J_{n,j}$ is proportional to $\sigma_{n+1-j}$, in particular, $\delta
J_{n,j}=(-1)^{n+j}\sigma_{n+1-j}$. This result was also derived in Ref. 62,
and further generalized to describe perturbations of any matrix written in the
Jordan normal form. From this approach, we again realize that $2(n-1)$
constraints are needed to determine the presence of EP$n$s in matrix ${\cal
H}$, i.e., $\mathop{\mathrm{Re}}[\delta J_{n,j}]=0$ and
$\mathop{\mathrm{Im}}[\delta J_{n,j}]=0$ with $j=1,\ldots,n-1$.
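This correspondence is easy to verify numerically. The sketch below (an illustrative addition, assuming Python/NumPy; the function name `companion` is ours) assembles the companion matrix of Eq. (4) from the coefficients $p_{k}=(-1)^{k}\sigma_{k}$ of an arbitrary matrix and confirms that it carries the same characteristic polynomial, and hence the same spectrum:

```python
import numpy as np

def companion(H):
    """Transpose Frobenius companion matrix of Eq. (4): ones on the superdiagonal
    and delta J_{n,j} = (-1)^(n+j) sigma_{n+1-j} = -p_{n+1-j} filling the last row."""
    n = H.shape[0]
    c = np.poly(H)                      # [1, p_1, ..., p_n] with p_k = (-1)^k sigma_k
    H0 = np.diag(np.ones(n - 1, dtype=complex), k=1)
    H0[-1, :] = -c[:0:-1]               # last row: (-p_n, -p_{n-1}, ..., -p_1)
    return H0

H = np.random.randn(3, 3) + 1j * np.random.randn(3, 3)
print(np.allclose(np.sort_complex(np.linalg.eigvals(companion(H))),
                  np.sort_complex(np.linalg.eigvals(H))))  # True: same spectrum
```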
Table 2: Number of constraints to realize EP$n$s in $n$-band systems
Symmetry | | $\\#$ constraints
---|---|---
| | $n\in$ even | $n\in$ odd
CS | | $n-1$ $\begin{cases}\mathop{\mathrm{Re}}[\det[{\cal H}]],\\\ \mathop{\mathrm{Re}}[\mathop{\mathrm{tr}}[{\cal H}^{2l}]],\\\ \mathop{\mathrm{Im}}[\mathop{\mathrm{tr}}[{\cal H}^{2l-1}]].\end{cases}$ | $\quad-$
psCS | | $n\quad\,\,$ $\begin{cases}\det[{\cal H}],\\\ \mathop{\mathrm{tr}}[{\cal H}^{2l}].\end{cases}$ | $n-1$ $\begin{cases}\mathop{\mathrm{tr}}[{\cal H}^{2l}].\end{cases}$
SLS | | $n\quad\,\,$ $\begin{cases}\det[{\cal H}],\\\ \mathop{\mathrm{tr}}[{\cal H}^{2l}].\end{cases}$ | $n-1$ $\begin{cases}\mathop{\mathrm{tr}}[{\cal H}^{2l}].\end{cases}$
psH symmetry | | $n-1$ $\begin{cases}\mathop{\mathrm{Re}}[\det[{\cal H}]],\\\ \mathop{\mathrm{Re}}[\mathop{\mathrm{tr}}[{\cal H}^{k}]].\end{cases}$ | $n-1$ $\begin{cases}\mathop{\mathrm{Re}}[\det[{\cal H}]],\\\ \mathop{\mathrm{Re}}[\mathop{\mathrm{tr}}[{\cal H}^{k}]].\end{cases}$
$\cal PT$ symmetry | | $n-1$ $\begin{cases}\mathop{\mathrm{Re}}[\det[{\cal H}]],\\\ \mathop{\mathrm{Re}}[\mathop{\mathrm{tr}}[{\cal H}^{k}]].\end{cases}$ | $n-1$ $\begin{cases}\mathop{\mathrm{Re}}[\det[{\cal H}]],\\\ \mathop{\mathrm{Re}}[\mathop{\mathrm{tr}}[{\cal H}^{k}]].\end{cases}$
$\mathcal{CP}$ symmetry | | $n-1$ $\begin{cases}\mathop{\mathrm{Re}}[\det[{\cal H}]],\\\ \mathop{\mathrm{Re}}[\mathop{\mathrm{tr}}[{\cal H}^{2l}]],\\\ \mathop{\mathrm{Im}}[\mathop{\mathrm{tr}}[{\cal H}^{2l-1}]].\end{cases}$ | $n-1$ $\begin{cases}\mathop{\mathrm{Im}}[\det[{\cal H}]],\\\ \mathop{\mathrm{Re}}[\mathop{\mathrm{tr}}[{\cal H}^{2l}]],\\\ \mathop{\mathrm{Im}}[\mathop{\mathrm{tr}}[{\cal H}^{2l-1}]].\end{cases}$
Here $k\in\\{1,\ldots n\\}$ and $l\in\\{1,\ldots,n/2\\}$. Details are provided
in Appendix C. Behind the number of constraints we write the specific
constraints that need to be satisfied to find EP$n$s. We note that there is no
entry for CS with $n\in\textrm{odd}$ as this symmetry is not defined in that
case.
From the characteristic polynomial in Eq. (1) as well as the perturbed Jordan
block in Eq. (4), we can also deduce how the EP$n$s disperse. While it is
commonly assumed that the series expansion resulting from a perturbation with
$\omega$ around an EP$n$, i.e., writing $\mathcal{H}_{0}=J_{n}+\omega\delta
J_{n}$, results in the Puiseux series,
$\lambda=\lambda_{0}+\sum_{j=1}^{\infty}\omega^{j/n}\lambda_{j}$ with
$\lambda_{1}=\delta J_{n,1}^{1/n}$, this is not generally the case. Indeed,
_only_ when $\delta J_{n,1}\neq 0$, the Puiseux series is recovered for the
energy eigenvalues close to an EP$n$ [63, 55]. When $\delta J_{n,1}=0$, the
perturbed eigenvalues generally split in different cycles of the form
$\lambda=\lambda_{0}+\sum_{j=1}^{\infty}\omega^{j/p}\lambda_{j}$ with $p<n$,
and the different values of $p$ summing up to $n$ [64, 63, 55].
Let us now see how this translates into our perturbed Jordan block in Eq. (4).
In particular, when $\sigma_{n}\neq 0$ and all other $\sigma_{j}=0$ (or
equivalently, when $\delta J_{n,1}\neq 0$ and all other $\delta J_{n,j}=0$),
we straightforwardly find that the characteristic polynomial reduces to
$\lambda^{n}+(-1)^{n}\sigma_{n}=0$ (or $\lambda^{n}-\delta J_{n,1}=0$). In
this case, the EP$n$ disperses with
$e^{2\pi\mathop{\mathrm{i}}r/n}[(-1)^{n}\sigma_{n}]^{1/n}$
($e^{2\pi\mathop{\mathrm{i}}r/n}[\delta J_{n,1}]^{1/n}$) for $r=1,\ldots,n$
[65, 66]. When $\sigma_{n-1}\neq 0$ and all other $\sigma_{j}=0$, (or
equivalently, when $\delta J_{n,2}\neq 0$ and all other $\delta J_{n,j}=0$,)
we find $\lambda(\lambda^{n-1}+(-1)^{n-1}\sigma_{n-1})=0$ (or
$\lambda(\lambda^{n-1}-\delta J_{n,2})=0$) for the characteristic polynomial.
Now, the EP$n$ disperses with the $(n-1)$th root, i.e.,
$\sim[(-1)^{n-1}\sigma_{n-1}]^{1/(n-1)}$ ($\sim[\delta J_{n,2}]^{1/(n-1)}$),
combined with a flat band with $\lambda=0$. In general, we thus find that when
$\sigma_{j}\neq 0$ ($\delta J_{n,j}\neq 0$) and all other $\sigma_{k}=0$
($\delta J_{n,k}=0$), the low-energy approximation around the EP$n$ reads
$\sim[(-1)^{j}\sigma_{j}]^{1/j}$ ($\sim[\delta J_{n,j}]^{1/(n+1-j)}$). When
all $\sigma_{k}\neq 0$ (or all $\delta J_{n,k}\neq 0$), it is no longer
possible to find complete analytical solutions for the eigenvalues $\lambda$
when $n\geq 5$. Nevertheless, one can numerically compute explicit solutions
for the leading terms [64, 67].
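These scalings are straightforward to confirm numerically. In the sketch below (our own addition, assuming Python/NumPy), we switch on either $\delta J_{4,1}$ or $\delta J_{4,2}$ in a perturbed Jordan block $J_{4}$ and observe the fourth- and third-root dispersions, respectively:

```python
import numpy as np

n = 4
J = np.diag(np.ones(n - 1), k=1)                 # Jordan block describing an EP4
for col, root in [(0, 4), (1, 3)]:               # only dJ_{4,1} or only dJ_{4,2} nonzero
    for omega in [1e-4, 1e-8]:
        Hp = J.copy()
        Hp[-1, col] = omega
        lam = np.max(np.abs(np.linalg.eigvals(Hp)))
        print(f"dJ_(4,{col + 1}) = {omega:.0e}: |lambda| = {lam:.3e},"
              f" omega**(1/{root}) = {omega ** (1 / root):.3e}")
```

For $\delta J_{4,2}$, the remaining eigenvalue stays pinned at $\lambda=0$, reproducing the flat band discussed above.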
So far, we discussed EP$n$s in systems with no additional symmetries. Let us
now see how the presence of symmetries affects the appearance of EP$n$s.
Writing the determinant and traces as $\det[{\cal H}]=\prod_{i}\epsilon_{i}$
and $\mathop{\mathrm{tr}}[{\cal H}^{k}]=\sum_{i}\epsilon_{i}^{k}$ with
$\epsilon_{i}$ the eigenvalues of $\cal H$ allows us to make general
statements when making use of the energy constraints listed in Table 1. We
immediately see that PHS, PHS†, TRS, TRS†, $\mathcal{I}$, and $\mathcal{P}$
symmetry are nonlocal in parameter space as they relate eigenvalues with
momentum ${\bf k}$ to eigenvalues with momentum $-{\bf k}$. As such, the
presence of these symmetries does not reduce the number of constraints but
instead puts a constraint on whether the entries in the Hamiltonian are
symmetric or antisymmetric. We thus find that the number of constraints for
finding EP$n$s in the presence of these symmetries remains at $2(n-1)$. For
the remaining symmetries listed in Table 1, however, there is a reduction in
the number of constraints.
In the presence of SLS and psCS, $\\{\epsilon({\bf k})\\}=\\{-\epsilon({\bf
k})\\}$ dictates that in the case of $n\in\textrm{odd}$, at least one of the
eigenvalues is necessarily zero, such that $\det[{\cal H}]=0$. For
$n\in\textrm{even}$, there is no such argument and we thus generally find
$\det[{\cal H}]\neq 0$. Turning to the traces, we see that
$\mathop{\mathrm{tr}}[{\cal H}^{k}]\neq 0$ when $k\in\mathrm{even}$, while
$\mathop{\mathrm{tr}}[{\cal H}^{k}]=0$ when $k\in\mathrm{odd}$ for all $n$. To
find EP$n$s, we thus need to satisfy $n$ constraints when $n\in\textrm{even}$,
and $n-1$ constraints when $n\in\textrm{odd}$. The fact that $\\{\epsilon({\bf
k})\\}=\\{-\epsilon({\bf k})\\}$ also leads to an interesting consequence when
considering the possibility of realizing lower-order EPs in $n$-band systems,
namely, the addition of an extra band to an $(n-1)$-band system immediately
promotes a possibly existing EP$(n-1)$ to an EP$n$ _as long as_ this
additional band is coupled to the other bands. As such, there is a notion of
fragility in these systems, as also pointed out in Ref. 52 for the case of
SLS. However, if a band is added that does not couple to any of the other
bands, the EP$(n-1)$ survives even though the energy eigenvalues are $n$-fold
degenerate at the EP.
If we instead consider $\mathcal{PT}$ and psH symmetry, we see that
$\\{\epsilon({\bf k})\\}=\\{\epsilon^{*}({\bf k})\\}$ implies $\\{\det[{\cal
H}],\mathop{\mathrm{tr}}[{\cal H}^{k}]\\}\in\mathbb{R},\forall k<n$. This
means that we need to satisfy $n-1$ constraints to find an EP$n$ in agreement
with what is found in Ref. 34. Lastly, considering CS and $\mathcal{CP}$
symmetry, $\\{\epsilon({\bf k})\\}=\\{-\epsilon^{*}({\bf k})\\}$ leads to
$\det[{\cal H}]\in\mathbb{R}$ for $n\in\textrm{even}$, $\det[{\cal H}]\in
i\mathbb{R}$ for $n\in\textrm{odd}$, $\mathop{\mathrm{tr}}[{\cal
H}^{k}]\in\mathbb{R},k\in\textrm{even}$ and $\mathop{\mathrm{tr}}[{\cal
H}^{k}]\in i\mathbb{R},k\in\textrm{odd}$. This gives us again $n-1$
constraints, which was also found in Ref. 34, see also Ref. 68. We summarize
the results for SLS, psCS, CS, $\mathcal{PT}$, psH and $\mathcal{CP}$ symmetry
in Table 2 and refer to Appendix C for details on the derivation of these
findings.
Now turning back to our results for the dispersion around EP$n$s, we see that
in the case of SLS and psCS with $n\in\textrm{odd}$, where
$\det[\mathcal{H}]=0$ (i.e., $\sigma_{n}=0$ or $\delta J_{n,1}=0$), EP$n$s
disperse with $\mathcal{O}(\omega^{1/(n-1)})$. Interestingly, this is the only
instance of symmetries generically preventing the recovery of the $n$th root
dispersion for EP$n$s.
In the following, we explore EP$n$s and the implications of symmetry in
greater detail by deriving exact results. Galois theory [69] implies that
the roots of characteristic polynomials of degree greater than four cannot be
expressed as combinations of radicals of rational functions of the polynomial
coefficients. Therefore, to present analytical results in terms of radicals,
we explore the role of symmetries in modifying the structure and numbers of
constraints to detect EP$n$s with $n=2,3,4$.
Table 3: Number of constraints and parameters to realize degenerate points in
2-band systems
Symmetry | | Operator | | $\\#$ constraints | | $\\#$ parameters
---|---|---|---|---|---|---
No symmetry | | - | | 2 $(\eta,\nu)$ | | 2$\times$3 $(d_{x},d_{y},d_{z})$
PHS with ${\cal C}_{-}{\cal C}_{-}^{*}=1$ | | $\mathbbm{1}_{2}$ | | 2 $(\eta,\nu)$ | | 2$\times$3 $(d_{xIa},d_{xRa},d_{yIs},d_{yRs},d_{zIa},d_{zRa})$
PHS with ${\cal C}_{-}{\cal C}_{-}^{*}=-1$ | | $i\sigma_{y}$ | | 2 $(\eta,\nu)$ | | 2$\times$3 $(d_{xRs},d_{xIs},d_{yRs},d_{yIs},d_{zRs},d_{zIs})$
PHS† with ${\cal T}_{-}{\cal T}_{-}^{*}=1$ | | $\mathbbm{1}_{2}$ | | 2 $(\eta,\nu)$ | | 2$\times$3 $(d_{xRa},d_{xIs},d_{yRs},d_{yIa},d_{zRa},d_{zIs})$
PHS† with ${\cal T}_{-}{\cal T}_{-}^{*}=-1$ | | $i\sigma_{y}$ | | 2 $(\eta,\nu)$ | | 2$\times$3 $(d_{xRs},d_{xIa},d_{yRs},d_{yIa},d_{zRs},d_{zIa})$
TRS with ${\cal T}_{+}{\cal T}_{+}^{*}=1$ | | $\mathbbm{1}_{2}$ | | 2 $(\eta,\nu)$ | | 2$\times$3 $(d_{xRs},d_{xIa},d_{yRa},d_{yIs},d_{zRs},d_{zIa})$
TRS with ${\cal T}_{+}{\cal T}_{+}^{*}=-1$ | | $i\sigma_{y}$ | | 2 $(\eta,\nu)$ | | 2$\times$3 $(d_{xRa},d_{xIs},d_{yRa},d_{yIs},d_{zRa},d_{zIs})$
TRS† with ${\cal C}_{+}{\cal C}_{+}^{*}=1$ | | $\mathbbm{1}_{2}$ | | 2 $(\eta,\nu)$ | | 2$\times$3 $(d_{xRs},d_{xIs},d_{yRa},d_{yIa},d_{zRs},d_{zIs})$
TRS† with ${\cal C}_{+}{\cal C}_{+}^{*}=-1$ | | $i\sigma_{y}$ | | 2 $(\eta,\nu)$ | | 2$\times$3 $(d_{xRa},d_{xIa},d_{yRa},d_{yIa},d_{zRa},d_{zIa})$
CS | | $\sigma_{z}$ | | 1 $(\eta)$ | | 3 $(d_{xR},d_{yR},d_{zI})$
${\rm psCS}$ | | $\sigma_{z}$ | | 2 $(\eta,\nu)$ | | 2$\times$1 $(d_{x})$
SLS | | $\sigma_{z}$ | | 2 $(\eta,\nu)$ | | 2$\times$2 $(d_{x},d_{y})$
$\cal I$ symmetry | | $\sigma_{z}$ | | 2 $(\eta,\nu)$ | | 2$\times$ 3 $(d_{xRs},d_{xIa},d_{yRs},d_{yIa},d_{zRa},d_{zIs})$
$\rm psH$ | | $\sigma_{x}$ | | 1 $(\eta)$ | | 3 $(d_{xR},d_{yI},d_{zI})$
$\cal P$ symmetry | | $\sigma_{x}$ | | 2 $(\eta,\nu)$ | | 2$\times$3 $(d_{xs},d_{ya},d_{za})$
$\cal P$ symmetry | | $\sigma_{z}$ | | 2 $(\eta,\nu)$ | | 2$\times$3 $(d_{xa},d_{ya},d_{zs})$
$\cal PT$ symmetry | | $\sigma_{x}$ | | 1 $(\eta)$ | | 3 $(d_{xR},d_{yR},d_{zI})$
${\cal CP}$ symmetry | | $\sigma_{x}$ | | 1 $(\eta)$ | | 3 $(d_{xI},d_{yI},d_{zR})$
Here $d_{\cal O}=d_{{\cal O}R}+id_{{\cal O}I}$ with ${\cal O}\in\\{x,y,z\\}$.
Symmetric and antisymmetric components of $d_{\cal O}$ with respect to
$\bm{k}\to-\bm{k}$ are labelled by $d_{{\cal O}\alpha s}$ and $d_{{\cal
O}\alpha a}$ with $\alpha\in\\{R,I\\}$, respectively. $\eta$ and $\nu$ are
introduced in Eq. (10). Note that non-zero parameters might vary by changing
the chosen Pauli matrix for each symmetry operator, an example of which is
presented for the parity symmetry for which we include two representations.
Nevertheless, the numbers of parameters and constraints remain intact.
## III EPs in two-band systems
To study second-order EPs, we perform a matrix decomposition in the Pauli
basis. The most generic two-band Hamiltonian in this representation is given
by
$\displaystyle{\cal
H}(\bm{k})=d_{0}(\bm{k})\mathbbm{1}_{2}+\bm{d}(\bm{k})\cdot\bm{\sigma},$ (5)
where $\bm{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})$ is the vector of Pauli
matrices (see Appendix D.1), $\mathbbm{1}_{2}$ is the $2\times 2$ identity
matrix, $\bm{k}$ denotes the momentum with the appropriate dimensions, and
$d_{0}$ and $\bm{d}=(d_{x},d_{y},d_{z})$ are complex-valued momentum dependent
variables. In the following, we drop the momentum dependence for the purpose
of brevity, and reinstate it when needed. Considering
$\bm{d}=\bm{d}_{R}+\mathop{\mathrm{i}}\bm{d}_{I}$ and
${d}_{0}={d}_{0R}+\mathop{\mathrm{i}}{d}_{0I}$ with
$\\{\bm{d}_{R},\bm{d}_{I}\\}\in\mathbb{R}$, the eigenvalues read
$\displaystyle\lambda_{\pm}$
$\displaystyle=d_{0}\pm\sqrt{d_{R}^{2}-d_{I}^{2}+2\mathop{\mathrm{i}}\bm{d}_{R}\cdot\bm{d}_{I}}.$
(6)
The characteristic polynomial given in Eq. (1) in this case reads
$\displaystyle{\cal F}_{\lambda}(\bm{k})$
$\displaystyle=\lambda^{2}-\mathop{\mathrm{tr}}[\mathcal{H}]\lambda+\det[\mathcal{H}]=0,$
(7)
such that
$\lambda=\left(\mathop{\mathrm{tr}}[\mathcal{H}]\pm\sqrt{\mathop{\mathrm{tr}}[\mathcal{H}]^{2}-4\,\det[\mathcal{H}]}\right)/2$.
Comparing these roots with $\lambda_{\pm}$ given in Eq. (6), we get
$\displaystyle\mathop{\mathrm{tr}}[\mathcal{H}]^{2}-4\det[\mathcal{H}]=d_{R}^{2}-d_{I}^{2}+2\mathop{\mathrm{i}}\bm{d}_{R}\cdot\bm{d}_{I}.$
(8)
The degenerate points are then obtained by setting the discriminant of ${\cal
F}_{\lambda}(k)$ in Eq. (7) to zero, i.e.,
$\displaystyle{\cal D}[{\cal H}]$
$\displaystyle=\mathop{\mathrm{tr}}[\mathcal{H}]^{2}-4\det[\mathcal{H}]=0.$
(9)
The defective degenerate points are the EP2s. Without loss of generality, we
can set $\mathop{\mathrm{tr}}[{\cal H}]=2d_{0}=0$. To find EP2s, we introduce
two constraints based on the real ($\eta$) and imaginary ($\nu$) parts of
${\cal D}[{\cal H}]$ as
$\displaystyle\eta=d_{R}^{2}-d_{I}^{2}=0,\quad\&\quad\nu=\bm{d}_{R}\cdot\bm{d}_{I}=0.$
(10)
Here $\eta$ and $\nu$ describe $N$-spatial-dimensional surfaces. Note that in
Hermitian systems, i.e., $\bm{d}_{I}=0$, band touching points occur when all
components of $\bm{d}_{R}$ vanish, amounting to at most three constraints.
In the vicinity of EP$2$s, the Hamiltonian takes the form of the perturbed
Jordan block with dimension $n=2$ in Eq. (4) and reads
$\displaystyle{\cal H}_{0}=\begin{pmatrix}0&1\\ -\det[{\cal H}(\bm{k})]&0\end{pmatrix}.$ (11)
Using the similarity transformation ${\cal H}_{0}=S\Delta S^{-1}$, the
dispersion relation close to the EP$2$s yields $\pm\sqrt{-\det[{\cal
H}(\bm{k})]}$, which are the diagonal elements of $\Delta$. This result is
central in various studies on systems with $\det[{\cal H}(\bm{k})]=-|\bm{k}|$
due to the nonanalytical energy dispersion [54].
In two spatial dimensions, the solutions to $\eta=\nu=0$ [cf. Eq. (10)] describe two
closed curves in $\bm{k}$ space, such that EP2s appear when these curves
intersect. In three spatial dimensions, the intersection between the two-
dimensional surfaces described by $\eta=0$ and $\nu=0$ forms a closed
exceptional curve, which can give rise to exceptional knots and result in
exotic features such as open real/imaginary Fermi surfaces [54].
We now turn to the symmetries listed in Table 1 and see how the presence of
one or the coexistence of multiple symmetries constrains the appearance of
EP2s. This problem for EP2s was also studied in Ref. 39 for the symmetries
defined by Bernard and LeClair (BLC) [36] (see also Appendix A). We cast it
here in the form of the symmetries as given in Table 1, which also includes
additional symmetries to the BLC classification. To demonstrate our procedure,
we treat two symmetries explicitly as well as their combination in the
following.
As an example, we start by considering PHS† symmetry with
$\mathcal{T}_{-}\mathcal{T}_{-}^{*}=-1$. We choose ${\cal
T}_{-}=\mathop{\mathrm{i}}\sigma_{y}$, such that the most generic form of a
particle-hole (PH)-symmetric Hamiltonian from Eq. (5) reads
$\displaystyle{\cal H}_{{\rm PHS}^{\dagger}}$
$\displaystyle=\left(\mathop{\mathrm{i}}d_{0Is}-d_{0Ra}\right)\mathbbm{1}_{2}+\left({\bf
d}_{Rs}-\mathop{\mathrm{i}}{\bf d}_{Ia}\right)\cdot\bm{\sigma}.$ (12)
Here we have introduced an additional label onto $d$, where each of the $d$
parameters is represented as $d_{{\cal O}\alpha}=d_{{\cal O}\alpha s}+d_{{\cal
O}\alpha a}$ with ${\cal O}\in\\{x,y,z\\}$ and $\alpha\in\\{R,I\\}$ where
$d_{{\cal O}\alpha s}\leavevmode\nobreak\ (d_{{\cal O}\alpha a})$ is
(anti-)symmetric under $\bm{k}\to-\bm{k}$, i.e., $d_{{\cal O}\alpha
s}(\bm{k})=d_{{\cal O}\alpha s}(-\bm{k})$ and $d_{{\cal O}\alpha
a}(\bm{k})=-d_{{\cal O}\alpha a}(-\bm{k})$. The trace and determinant then
read
$\displaystyle\mathop{\mathrm{tr}}[\mathcal{H}_{{\rm PHS}^{\dagger}}]$
$\displaystyle=2(\mathop{\mathrm{i}}d_{0Is}-d_{0Ra}),$ (13)
$\displaystyle\det[\mathcal{H}_{{\rm PHS}^{\dagger}}]$
$\displaystyle=(\mathop{\mathrm{i}}d_{0Is}-d_{0Ra})^{2}-d_{Rs}^{2}+d_{Ia}^{2}+2\mathop{\mathrm{i}}{\bf
d}_{Rs}\cdot{\bf d}_{Ia}.$ (14)
Setting the discriminant in Eq. (9) to zero (${\cal D}[\mathcal{H}_{{\rm
PHS}^{\dagger}}]=0$), we immediately find modified $\eta,\nu$ constraints,
which are
$\eta=d_{Rs}^{2}-d_{Ia}^{2}=0,\quad\&\quad\nu={\bf d}_{Rs}\cdot{\bf
d}_{Ia}=0.$ (15)
The presence of PHS† thus does not reduce the number of constraints for
finding EP2s but merely restricts the momentum-dependency of parameters $d$.
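As a quick numerical consistency check of Eq. (12) (an illustrative addition, assuming Python/NumPy; the cosine/sine momentum profiles assigned to the symmetric/antisymmetric components are hypothetical sample choices), one can verify that such a Hamiltonian indeed satisfies ${\cal H}(-k)=-{\cal T}_{-}{\cal H}^{*}(k){\cal T}_{-}^{\dagger}$ with ${\cal T}_{-}=\mathop{\mathrm{i}}\sigma_{y}$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
Tm = 1j * sy                                     # T_- = i sigma_y, Tm Tm^* = -1

vR = np.array([0.3, -0.7, 0.5])                  # amplitudes of d_Rs (even in k)
vI = np.array([0.2, 0.9, -0.4])                  # amplitudes of d_Ia (odd in k)

def H_phsd(k):
    """Eq. (12) with sample profiles: cos(k) parts even, sin(k) parts odd in k."""
    d0 = 1j * 0.1 * np.cos(k) - 0.2 * np.sin(k)  # i d_{0Is} - d_{0Ra}
    d = np.cos(k) * vR - 1j * np.sin(k) * vI     # d_Rs - i d_Ia
    return d0 * np.eye(2) + d[0] * sx + d[1] * sy + d[2] * sz

k = 0.73
print(np.allclose(H_phsd(-k), -Tm @ H_phsd(k).conj() @ Tm.conj().T))  # True
```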
If we instead consider $\mathcal{P}$ symmetry with $\mathcal{P}=\sigma_{x}$,
the most generic form of a parity-symmetric Hamiltonian reads
$\displaystyle{\cal H}_{\mathcal{P}}$
$\displaystyle=d_{0s}\mathbbm{1}_{2}+\left(d_{xs},-d_{ya},-d_{za}\right)\cdot\bm{\sigma}.$
(16)
The trace and determinant then read
$\displaystyle\mathop{\mathrm{tr}}[\mathcal{H}_{\mathcal{P}}]$
$\displaystyle=2(d_{0Rs}+\mathop{\mathrm{i}}d_{0Is}),$ (17)
$\displaystyle\det[\mathcal{H}_{\mathcal{P}}]$
$\displaystyle=(d_{0Rs}+\mathop{\mathrm{i}}d_{0Is})^{2}-d_{R\cal
P}^{2}+d_{I\cal P}^{2}+2i{\bf d}_{R\cal P}\cdot{\bf d}_{I\cal P}.$ (18)
Here we used ${\bf d}_{R\cal P}=(d_{xRs},-d_{yRa},-d_{zRa})$ and ${\bf
d}_{I\cal P}=(d_{xIs},-d_{yIa},-d_{zIa})$. To find EP2s, we satisfy $\eta$ and
$\nu$ constraints for this system as
$\displaystyle\begin{cases}\eta=d_{xRs}^{2}+d_{yRa}^{2}+d_{zRa}^{2}-d_{xIs}^{2}-d_{yIa}^{2}-d_{zIa}^{2}=0,\\\
\nu=d_{xRs}d_{xIs}+d_{yRa}d_{yIa}+d_{zRa}d_{zIa}=0.\end{cases}$ (19)
Similar to PHS†, $\mathcal{P}$ symmetry puts restrictions on the momentum
dependency of the $d_{i}$’s, while not reducing the number of constraints for
realizing EP2s.
If we now consider the presence of both PHS† with ${\cal
T}_{-}=\mathop{\mathrm{i}}\sigma_{y}$ and $\mathcal{P}$ symmetry imposed by
$\sigma_{x}$, we get
$\displaystyle{\cal H}_{\cal{\rm\mathcal{P}-PHS}^{\dagger}}$
$\displaystyle=\mathop{\mathrm{i}}d_{0Is}\mathbbm{1}_{2}+(d_{xRs},-\mathop{\mathrm{i}}d_{yIa},-\mathop{\mathrm{i}}d_{zIa})\cdot\bm{\sigma}.$
(20)
The trace and determinant then read
$\displaystyle\mathop{\mathrm{tr}}[\mathcal{H}_{\cal{\rm\mathcal{P}-PHS}^{\dagger}}]$
$\displaystyle=2\mathop{\mathrm{i}}d_{0Is},$ (21)
$\displaystyle\det[\mathcal{H}_{\cal{\rm\mathcal{P}-PHS}^{\dagger}}]$
$\displaystyle=-d_{0Is}^{2}-d_{xRs}^{2}+d_{yIa}^{2}+d_{zIa}^{2},$ (22)
and we find
$\displaystyle\eta=d_{xRs}^{2}-d_{yIa}^{2}-d_{zIa}^{2}=0,\qquad\&\quad\nu=0.$
(23)
Clearly, one merely needs to satisfy $\eta=0$ to find EP2s in this system.
Therefore, even though PHS† and $\mathcal{P}$ symmetry individually do not
reduce the number of constraints, their combination leaves only a single
nontrivial constraint.
We summarize the results for these and the other symmetries in Table 3. There
we specify the symmetry generator and number of nonvanishing constraints and
$d$ parameters in the presence of each symmetry. Table 4 summarizes various
combinations of psH symmetry and other symmetries in the system. We note that
one can simply compare the parameter lists in the fourth column of Table
3 to see which terms survive in the presence of multiple symmetries. As
expected, the results in Table 3 are in agreement with our general findings
regarding EP$n$s in Table 2.
Table 4: Summary of psH symmetry combined with other symmetries, together with
the numbers of constraints and parameters
Symmetry | | $\\#$ constr. | | $\\#$ parameters
---|---|---|---|---
psH + CS | | 1 $(\eta)$ | | 2 $(d_{xR},d_{zI})$
psH + SLS | | 1 $(\eta)$ | | 2 $(d_{xR},d_{yI})$
psH + $\cal I$ | | 1 $(\eta)$ | | 3 $(d_{xRs},d_{yIa},d_{zIs})$
psH+ PHS with ${\cal C}_{-}{\cal C}_{-}^{*}=1$ | | 1 $(\eta)$ | | 3 $(d_{xRa},d_{yIs},d_{zIa})$
psH+ PHS† with ${\cal T}_{-}{\cal T}_{-}^{*}=1$ | | 1 $(\eta)$ | | 3 $(d_{xRa},d_{yIa},d_{zIs})$
psH+ TRS with ${\cal T}_{+}{\cal T}_{+}^{*}=1$ | | 1 $(\eta)$ | | 3 $(d_{xRs},d_{yIs},d_{zIa})$
psH+ TRS† with ${\cal C}_{+}{\cal C}_{+}^{*}=1$ | | 1 $(\eta)$ | | 3 $(d_{xRs},d_{yIa},d_{zIs})$
psH+ PHS with ${\cal C}_{-}{\cal C}_{-}^{*}=-1$ | | 1 $(\eta)$ | | 3 $(d_{xRs},d_{yIs},d_{zIs})$
psH+ PHS† with ${\cal T}_{-}{\cal T}_{-}^{*}=-1$ | | 1 $(\eta)$ | | 3 $(d_{xRa},d_{yIs},d_{zIa})$
psH+ TRS with ${\cal T}_{+}{\cal T}_{+}^{*}=-1$ | | 1 $(\eta)$ | | 3 $(d_{xRa},d_{yIs},d_{zIs})$
psH+ TRS† with ${\cal C}_{+}{\cal C}_{+}^{*}=-1$ | | 1 $(\eta)$ | | 3 $(d_{xRa},d_{yIa},d_{zIa})$
One can find the nonzero $d$ parameters by keeping those parameters that are
nonzero for each of the combined symmetries individually, as presented in Table
3. While we only list the coexistence of psH symmetry with other non-Hermitian
symmetries, this recipe for determining the number of parameters is generic.
To demonstrate our findings in this section by a concrete example, we now look
at an effective description of the driven-dissipative Kitaev model presented
in Ref. 29. Here the traceless Hamiltonian is given by
$\displaystyle{\cal H}_{\textrm{ddK}}=\begin{pmatrix}-\mathop{\mathrm{i}}2\sqrt{\gamma_{l}\gamma_{g}}&-\mathop{\mathrm{i}}(2Je^{\mathop{\mathrm{i}}k}+\mu)\\ \mathop{\mathrm{i}}(2Je^{-\mathop{\mathrm{i}}k}+\mu)&\mathop{\mathrm{i}}2\sqrt{\gamma_{l}\gamma_{g}}\end{pmatrix},$ (24)
where $k$ stands for the momentum index, $J$ is the nearest-neighbor hopping
amplitude, $\mu$ denotes the chemical potential, and $\gamma_{l}$ and
$\gamma_{g}$ are, respectively, loss and gain coupling rates between the 1D
system and the dissipative reservoir. This model displays TRS† with generator
$\sigma_{z}$, PHS† with generator $\mathbbm{1}$, and CS with generator
$\sigma_{z}$ [29]. The trace and determinant of ${\cal H}_{\textrm{ddK}}$ read
$\displaystyle\mathop{\mathrm{tr}}[{\cal H}_{\textrm{ddK}}]$
$\displaystyle=0,$ (25) $\displaystyle\det[{\cal H}_{\textrm{ddK}}]$
$\displaystyle=4\gamma_{g}\gamma_{l}-4J^{2}-4J\mu\cos(k)-\mu^{2}.$ (26)
As a result, the $\eta$ and $\nu$ constraints read
$\displaystyle\eta$
$\displaystyle=4\gamma_{g}\gamma_{l}-4J^{2}-4J\mu\cos(k)-\mu^{2},$ (27)
$\displaystyle\nu$ $\displaystyle=0.$ (28)
At momenta $k=k_{*}$ at which $\eta=0$, EP2s appear in the spectrum of ${\cal
H}_{\textrm{ddK}}$. For instance, when $k=0$ ($k=\pi$), EP2s
occur when $2\sqrt{\gamma_{l}\gamma_{g}}=2J+\mu$ ($2J-\mu$),
which is consistent with the analysis of Ref. 29.
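This condition can be checked directly with a short numerical sketch (our own addition, assuming Python/NumPy; the parameter values $J=0.5$, $\mu=0.3$ are arbitrary):

```python
import numpy as np

J, mu, k = 0.5, 0.3, 0.0
gl = gg = (2 * J + mu) / 2                # tune 2*sqrt(gl*gg) = 2J + mu at k = 0
H = np.array([[-2j * np.sqrt(gl * gg), -1j * (2 * J * np.exp(1j * k) + mu)],
              [1j * (2 * J * np.exp(-1j * k) + mu), 2j * np.sqrt(gl * gg)]])

eta = 4 * gg * gl - 4 * J**2 - 4 * J * mu * np.cos(k) - mu**2
lam, vec = np.linalg.eig(H)
print(eta)                                 # ~0: the constraint of Eq. (27) is met
print(lam)                                 # two (numerically) degenerate eigenvalues
print(abs(np.vdot(vec[:, 0], vec[:, 1])))  # numerically close to 1: eigenvectors coalesce
```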
## IV EPs in three-band systems
Table 5: Number of constraints and parameters to realize degenerate points in
3-band systems
Symmetry | | Operator | | $\\#$ constr. | | $\\#$ parameters
---|---|---|---|---|---|---
No symmetry | | - | | 2$\times$2 $(\eta,\nu)$ | | 2$\times$8 $(d_{1},d_{2},d_{3},d_{4},d_{5},d_{6},d_{7},d_{8})$
PHS with ${\cal C}_{-}{\cal C}_{-}^{*}=1$ | | $\mathbbm{1}_{3}$ | | 2$\times$2 $(\eta,\nu)$ | | 2$\times$8 $(d_{1Rs},d_{2Rs},d_{3Rs},d_{4Ra},d_{5Ra},d_{6Ra},d_{7Ra},d_{8Ra}$ $d_{1Is},d_{2Is},d_{3Is},d_{4Ia},d_{5Ia},d_{6Ia},d_{7Ia},d_{8Ia})$
PHS† with ${\cal T}_{-}{\cal T}_{-}^{*}=1$ | | $\mathbbm{1}_{3}$ | | 2$\times$2 $(\eta,\nu)$ | | 2$\times$8 $(d_{1Rs},d_{2Rs},d_{3Rs},d_{4Ra},d_{5Ra},d_{6Ra},d_{7Ra},d_{8Ra}$ $d_{1Ia},d_{2Ia},d_{3Ia},d_{4Is},d_{5Is},d_{6Is},d_{7Is},d_{8Is})$
TRS with ${\cal T}_{+}{\cal T}_{+}^{*}=1$ | | $\mathbbm{1}_{3}$ | | 2$\times$2 $(\eta,\nu)$ | | 2$\times$8 $(d_{1Ra},d_{2Ra},d_{3Ra},d_{4Rs},d_{5Rs},,d_{6Rs},d_{7Rs},d_{8Rs}$ $d_{1Is},d_{2Is},d_{3Is},d_{4Ia},d_{5Ia},d_{6Ia},d_{7Ia},d_{8Ia})$
TRS† with ${\cal C}_{+}{\cal C}_{+}^{*}=1$ | | $\mathbbm{1}_{3}$ | | 2$\times$2 $(\eta,\nu)$ | | 2$\times$8 $d_{1Ra},d_{2Ra},d_{3Ra},d_{4Rs},d_{5Rs},d_{6Rs},d_{7Rs},d_{8Rs}$ $d_{1Ia},d_{2Ia},d_{3Ia},d_{4Is},d_{5Is},d_{6Is},d_{7Is},d_{8Is})$
${\rm psCS}$ | | $\frac{\mathbbm{1}_{3}}{3}+M^{7}-\frac{M^{8}}{\sqrt{3}}$ | | 2 $\times$1 $(\eta)$ | | 6 $(d_{2R},d_{2I},d_{4R},d_{4I},d_{6R},d_{6I})$
SLS | | $\frac{\mathbbm{1}_{3}}{3}+M^{7}-\frac{M^{8}}{\sqrt{3}}$ | | 2 $\times$1 $(\eta)$ | | 8 $(d_{1R},d_{1I},d_{3R},d_{3I},d_{4R},d_{4I},d_{6R},d_{6I})$
$\cal I$ symmetry | | $\frac{\mathbbm{1}_{3}}{3}+M^{6}+\frac{M^{7}}{2}+\frac{M^{8}}{2\sqrt{3}}$ | | 2$\times$2 $(\eta,\nu)$ | | 2$\times$8 $(d_{1},d_{2},d_{3Ra},d_{3Is},d_{4},d_{5},d_{6Rs},d_{6Ia},d_{7},d_{8})$
$\rm psH$ | | $\frac{\mathbbm{1}_{3}}{3}+M^{4}-\frac{M^{8}}{\sqrt{3}}$ | | 2 $(\eta_{R},\nu_{R})$ | | 12 $(d_{1I},d_{2R},d_{2I},d_{3R},d_{3I},d_{4R},d_{5R},d_{5I},d_{6R},d_{6I},d_{7I},d_{8R})$
$\cal P$ symmetry | | $\mathop{\mathrm{i}}\frac{\mathbbm{1}_{3}}{3}+M^{7}-\mathop{\mathrm{i}}\frac{M^{8}}{\sqrt{3}}$ | | 2$\times$2 $(\eta,\nu)$ | | 2$\times$8 $(d_{1Ra},d_{2Rs},d_{3Ra},d_{4Ra},d_{5Rs},d_{6Ra},d_{7Rs},d_{8Rs}$ $d_{1Ia},d_{2Is},d_{3Ia},d_{4Ia},d_{5Is},d_{6Ia},d_{7Is},d_{8Is})$
$\cal PT$ symmetry | | $\mathop{\mathrm{i}}\frac{\mathbbm{1}_{3}}{3}+M^{7}-\mathop{\mathrm{i}}\frac{M^{8}}{\sqrt{3}}$ | | 2 $(\eta_{R},\nu_{R})$ | | 12 $(d_{1R},d_{2R},d_{2I},d_{3R},d_{3I},d_{4I},d_{5R},d_{5I},d_{6R},d_{6I},d_{7R},d_{8R})$
$\cal PT$ symmetry | | $\frac{\mathbbm{1}_{3}}{3}+M^{7}-\frac{M^{8}}{\sqrt{3}}$ | | 2 $(\eta_{R},\nu_{R})$ | | 8 $(d_{1R},d_{2I},d_{3R},d_{4I},d_{5R},d_{6I},d_{7},d_{8})$
${\cal CP}$ symmetry | | $\mathop{\mathrm{i}}\frac{\mathbbm{1}_{3}}{3}+M^{7}-\mathop{\mathrm{i}}\frac{M^{8}}{\sqrt{3}}$ | | 2 $(\eta_{R},\nu_{I})$ | | 8 $(d_{1I},d_{2R},d_{3I},d_{4R},d_{5I},d_{6R},d_{7I},d_{8I})$
Here $d_{\cal O}=d_{{\cal O}R}+\mathop{\mathrm{i}}d_{{\cal O}I}$ with ${\cal O}\in\{1,\ldots,8\}$. Symmetric and antisymmetric components of $d_{\cal O}$ with respect to $\bm{k}\to-\bm{k}$ are labelled by $d_{{\cal O}s}$ and $d_{{\cal O}a}$, respectively. Complex-valued $\eta$ and $\nu$ are introduced in Eqs. (32, 33). $\nu_{R},\eta_{R}$ stand for the real components of $\nu,\eta$. Note that the nonzero parameters might vary upon changing the depicted symmetry operators for each symmetry; see, e.g., the two different choices for $\cal PT$ symmetry. Nevertheless, the number of parameters and constraints remains unchanged.
To study EPs of order three, we perform a matrix decomposition in the Gell-
Mann basis. Within this decomposition, the most generic three-band Hamiltonian
is given by
$\displaystyle{\cal
H}(\bm{k})=d_{0}(\bm{k})\mathbbm{1}_{3}+\bm{d}(\bm{k})\cdot{\bf M},$ (29)
where ${\bf M}=(M_{1},M_{2},\ldots,M_{8})$ is the vector of traceless three-
band Gell-Mann matrices (see Appendix D.2), $\mathbbm{1}_{3}$ is the $3\times
3$ identity matrix, $\bm{k}$ denotes the momentum with the appropriate
dimensions, and $(d_{0}(\bm{k}),\bm{d}(\bm{k}))$ are complex-valued momentum-dependent variables.
For the $3\times 3$ matrix ${\cal H}$ in Eq. (29), the characteristic
polynomial in Eq. (1) reads
$\displaystyle{\cal F}_{\lambda}=\lambda^{3}-\mathop{\mathrm{tr}}[{\cal
H}]\lambda^{2}+\frac{(\mathop{\mathrm{tr}}[{\cal
H}])^{2}-\mathop{\mathrm{tr}}[{\cal H}^{2}]}{2}\lambda-\det[{\cal H}]=0.$ (30)
The three solutions $\lambda_{1},\lambda_{2},\lambda_{3}$ of ${\cal
F}_{\lambda}$ are eigenvalues of $\cal H$ in Eq. (29) and are given explicitly
in Appendix E. The associated discriminant for Eq. (30) then reads
$\displaystyle{\cal D}$ $\displaystyle=-\frac{1}{27}[4\eta^{3}+\nu^{2}].$ (31)
Here, the complex-valued constraints read
$\displaystyle\eta$ $\displaystyle=\frac{\mathop{\mathrm{tr}}[{\cal
H}]^{2}}{2}-\frac{3\mathop{\mathrm{tr}}[{\cal H}^{2}]}{2},$ (32)
$\displaystyle\nu$ $\displaystyle=27\det[{\cal
H}]-\frac{5\mathop{\mathrm{tr}}[{\cal
H}]^{3}}{2}+\frac{9\mathop{\mathrm{tr}}[{\cal H}]\mathop{\mathrm{tr}}[{\cal
H}^{2}]}{2}.$ (33)
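As a cross-check of Eqs. (31)-(33), the following sketch (assuming nothing beyond a randomly drawn $3\times 3$ matrix) compares ${\cal D}$ against its defining property, the product of squared eigenvalue differences.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(3, 3)) + 1j*rng.normal(size=(3, 3))

t1, t2 = np.trace(H), np.trace(H @ H)
eta = t1**2/2 - 3*t2/2                               # Eq. (32)
nu = 27*np.linalg.det(H) - 5*t1**3/2 + 9*t1*t2/2     # Eq. (33)
D = -(4*eta**3 + nu**2)/27                           # Eq. (31)

l = np.linalg.eigvals(H)
D_direct = ((l[0]-l[1])*(l[0]-l[2])*(l[1]-l[2]))**2
print(np.allclose(D, D_direct))                      # True up to round-off
```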
In the presence of symmetries, the number of nonzero constraints may reduce
and different $d$’s may vanish. Table 5 summarizes these constraints and the
number of nonzero parameters in Hamiltonians with a specific symmetry, listed
in Table 1. As before, although we depict a particular symmetry generator for
each symmetry, the number of constraints and nonzero parameters do not depend
on our choice of generator. This can be explicitly seen for $\cal PT$
symmetry, for which we have presented two possible symmetry operators. Similar
to the case of EP2s, three-band touchings occur in the Hermitian case ($\bm{d}_{I}=0$) when $\bm{d}_{R}=0$, amounting to at most eight constraints.
To find EP3s, however, we may again set $\mathop{\mathrm{tr}}[{\cal H}]=0$
without loss of generality, and satisfy four real constraints,
$\mathop{\mathrm{Re}}[\det[{\cal H}]]=0$, $\mathop{\mathrm{Im}}[\det[{\cal
H}]]=0$, $\mathop{\mathrm{Re}}[\mathop{\mathrm{tr}}[{\cal H}^{2}]]=0$ and
$\mathop{\mathrm{Im}}[\mathop{\mathrm{tr}}[{\cal H}^{2}]]=0$, or equivalently
$\mathop{\mathrm{Re}}[\eta]=0$, $\mathop{\mathrm{Im}}[\eta]=0$,
$\mathop{\mathrm{Re}}[\nu]=0$ and $\mathop{\mathrm{Im}}[\nu]=0$.
Perturbing close to an EP3 gives [cf. Eq. (4)]
$\displaystyle{\cal H}_{0}=\begin{pmatrix}0&1&0\\ 0&0&1\\ \det[{\cal H}]&\frac{\mathop{\mathrm{tr}}[{\cal H}^{2}]}{2}&0\end{pmatrix}.$ (34)
From this, we see that depending on the values of $\det[\cal H]$ and
$\mathop{\mathrm{tr}}[{\cal H}^{2}]$, it is possible to realize different
types of EP3s. Note that, without loss of generality, we again set
$\mathop{\mathrm{tr}}[{\cal H}]$ to zero.
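A short sketch (for a randomly drawn, assumed traceless $3\times 3$ matrix) confirms that the companion matrix of Eq. (34), built from $\det[{\cal H}]$ and $\mathop{\mathrm{tr}}[{\cal H}^{2}]$ alone, carries the full spectrum of ${\cal H}$.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(3, 3)) + 1j*rng.normal(size=(3, 3))
H -= np.trace(H)/3 * np.eye(3)               # enforce tr[H] = 0

H0 = np.array([[0, 1, 0],
               [0, 0, 1],
               [np.linalg.det(H), np.trace(H @ H)/2, 0]])   # Eq. (34)
print(np.allclose(np.sort_complex(np.linalg.eigvals(H)),
                  np.sort_complex(np.linalg.eigvals(H0))))  # True
```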
For systems in which $\mathop{\mathrm{tr}}[{\cal H}^{2}]\neq 0$ and
$\det[{\cal H}]=0$, the Jordan decomposition of ${\cal H}_{0}$ reveals EPs
whose low-energy bands consist of a flat band with energy $0$ and two bands
with dispersion $\pm\sqrt{\mathop{\mathrm{tr}}[{\cal H}^{2}]/2}$. In Section
II, we showed that $\det[\cal H]$ is always zero for systems with odd $n$ in
the presence of SLS and psCS. In the presence of these symmetries, one can
thus only find this type of EP3. An explicit example of this type of EP3 is
reported in a system with SLS symmetry in Ref. 52, see also Ref. 70.
For Hamiltonians in which by construction $\mathop{\mathrm{tr}}[{\cal
H}^{2}]=0$ and $\det[{\cal H}]\neq 0$, the Jordan decomposition of ${\cal
H}_{0}$ suggests it is possible to get a second type of EP3, whose low-energy
dispersion yields $(-1)^{j+j/3}\sqrt[3]{\det[{\cal H}]}$ for $j=1,2,3$.
A third type of EP3s can emerge when the constraints $\eta$ and $\nu$ in Eqs. (32, 33) are purely real, i.e., $\mathop{\mathrm{Im}}[\nu]=\mathop{\mathrm{Im}}[\det[{\cal H}]]=0$ and $\mathop{\mathrm{Im}}[\eta]=\mathop{\mathrm{Im}}[\mathop{\mathrm{tr}}[{\cal H}^{2}]]=0$. In this case, the low-energy dispersion can be obtained from the generic solution for $\lambda_{j}$ with $j=1,2,3$ in Eqs. (153, 154, 155). If $\mathop{\mathrm{Re}}[\eta]\propto\mathop{\mathrm{Re}}[\mathop{\mathrm{tr}}[{\cal H}^{2}]]$ decreases to zero faster than $\mathop{\mathrm{Re}}[\nu]\propto\mathop{\mathrm{Re}}[\det[{\cal H}]]$ close to the EP3, the dominant terms in the low-energy dispersion are proportional to $\{\sqrt[3]{\mathop{\mathrm{Re}}[\nu]},(\mathop{\mathrm{i}}+\sqrt{3})\sqrt[3]{\mathop{\mathrm{Re}}[\nu]},(\mathop{\mathrm{i}}-\sqrt{3})\sqrt[3]{\mathop{\mathrm{Re}}[\nu]}\}$. This type of EP3 is explicitly studied in a $\cal PT$-symmetric Hamiltonian in Ref. 52.
We summarize these three types of EP3s in Table 6. Without reference to
symmetries, types I and III were also reported in Ref. 55.
Table 6: Various possibilities of EP3s and their energy dispersion
| | Condition | | Energy dispersion
---|---|---|---|---
EP3 0 | | $\eta\neq 0,\nu\neq 0$ | | $(\lambda_{1},\lambda_{2},\lambda_{3})$
EP3 I | | $\det[{\cal H}]=0$ | | $0,\pm\sqrt{\frac{\mathop{\mathrm{tr}}[{\cal H}^{2}]}{2}}$
EP3 II | | $\mathop{\mathrm{tr}}[{\cal H}^{2}]=0$ | | $(-1)^{j+j/3}\sqrt[3]{\det[{\cal H}]}$
EP3 III | | $\mathop{\mathrm{Im}}[\eta]=\mathop{\mathrm{Im}}[\nu]=0$ | | $\sqrt[3]{\mathop{\mathrm{Re}}[\nu]},\alpha\sqrt[3]{\mathop{\mathrm{Re}}[\nu]},\alpha^{*}\sqrt[3]{\mathop{\mathrm{Re}}[\nu]}$
Here $j\in\{1,2,3\}$ and $\alpha=(\mathop{\mathrm{i}}+\sqrt{3})$. Note that in all cases _four_ real constraints should be satisfied to observe EP3s. These constraints are counted by two complex equations, either $(\eta=0,\nu=0)$ or $(\mathop{\mathrm{tr}}[{\cal H}^{2}]=0,\det[{\cal H}]=0)$. $(\lambda_{1},\lambda_{2},\lambda_{3})$ are given in Eqs. (153, 154, 155). For EP3 III, $\mathop{\mathrm{Re}}[\eta]$ goes to zero faster than $\mathop{\mathrm{Re}}[\nu]$. See the main text for details.
Aside from EP3s, three-band systems may also host EP2s. To explore the conditions under which these EPs can be realized, we introduce a subclass of traceless $3\times 3$ Hamiltonians, which reads
$\displaystyle{\cal H}_{1}=\begin{pmatrix}-(b+e)&0&0\\ 0&b&c\\ 0&d&e\end{pmatrix}=\begin{pmatrix}-(b+e)&0_{1\times 2}\\ 0_{2\times 1}&h_{2\times 2}\end{pmatrix},$ (35)
$\displaystyle=h_{3}M^{3}+h_{6}M^{6}+h_{7}M^{7}+h_{8}M^{8},$ (36)
where $b,c,d,e$ are complex values, and $a=-(b+e)$,
$h_{3}=\mathop{\mathrm{i}}(c-d)/2$, $h_{6}=(c+d)/2$, $h_{7}=(a-b)/2$, and
$h_{8}=(a+b-2e)/(2\sqrt{3})$. The associated characteristic polynomial for
${\cal H}_{1}$ in Eq. (35) is
$\displaystyle(b+e+\lambda)\left(be-b\lambda-
cd-e\lambda+\lambda^{2}\right)=0,$ (37)
where the second factor originates from $h_{2\times 2}$. For this factor, we
can write the companion matrix
$\displaystyle h_{2\times 2}=\begin{pmatrix}0&1\\ -\det[h_{2\times 2}]&\mathop{\mathrm{tr}}[h_{2\times 2}]\end{pmatrix},$ (38)
which explicitly shows the possibility of observing EP2s in this subsystem of
three-band Hamiltonians.
Figure 1: (a) The spectrum of the three-band model in Eq. (42) in its
Hermitian limit with $\alpha_{x}=\alpha_{y}=\alpha_{z}=0$. (Middle panels) The
real (b) and imaginary (c) components of the band structure for the non-
Hermitian model in Eq. (42) in the presence of psCS with
$\alpha_{x}=\alpha_{y}=0.3$ and $\alpha_{z}=\mathop{\mathrm{i}}\sqrt{0.6}$.
(Bottom panels) The real (d) and imaginary (e) components of the band structure for the non-Hermitian model in the presence of psCS and ${\cal PT}$ symmetry given in Eq. (52) with $\alpha_{x}=\alpha_{y}=0.3$ and $\alpha_{z}=\mathop{\mathrm{i}}\sqrt{0.6}$. Line colors in the middle and bottom panels are chosen such that the largest (smallest) values are shown in red (blue). Smaller ranges of $k_{x},k_{y}$ are shown for better visibility.
Figure 2: (Upper panels) The same as panels (b,c) in Fig. 1 along the
$k_{x}=k_{y}$ direction. (Bottom panels) The same as panels (d,e) in Fig. 1
along the $k_{x}=k_{y}$ direction.
To explore the effect of imposing symmetries on the behavior of EPs and the
associated conditions for their appearance, we introduce an explicit three-
band model in the following. Our model Hamiltonian reads
$\displaystyle{\cal H}=\left(\begin{array}[]{ccc}0&h_{x}&-h_{y}\\ -h_{x}&0&h_{z}\\ h_{y}&-h_{z}&0\end{array}\right),$ (42)
where $h_{x}=\alpha_{x}+\mathop{\mathrm{i}}\sin(k_{x})$,
$h_{y}=\alpha_{y}+\mathop{\mathrm{i}}\sin(k_{y})$, and
$h_{z}=\alpha_{z}+\mathop{\mathrm{i}}(-2+\cos(k_{x})+\cos(k_{y}))$. Our model
is a non-Hermitian generalization of the effective Hamiltonian for three-fold
fermions at $k_{z}=\pi/2$ introduced in Ref. 13. This model Hamiltonian
displays the pseudo-chiral symmetry with generator $-\mathbbm{1}_{3}$ and
hosts a threefold degeneracy in its Hermitian spectrum at
$\alpha_{x}=\alpha_{y}=\alpha_{z}=0$, as shown in Fig. 1 (a). The traces and
the determinant of this model read
$\displaystyle\mathop{\mathrm{tr}}[{\cal H}]$ $\displaystyle=0,$ (43)
$\displaystyle-\frac{\mathop{\mathrm{tr}}[{\cal H}^{2}]}{2}$
$\displaystyle=\alpha_{x}^{2}+\alpha_{y}^{2}+\alpha_{z}(\alpha_{z}-4\mathop{\mathrm{i}})+2\mathop{\mathrm{i}}\alpha_{x}\sin(k_{x})$
$\displaystyle+\cos(k_{x})(2\mathop{\mathrm{i}}\alpha_{z}-2\cos(k_{y})+4)+2\mathop{\mathrm{i}}\alpha_{y}\sin(k_{y})$
$\displaystyle+(4+2\mathop{\mathrm{i}}\alpha_{z})\cos(k_{y})-6,$ (44)
$\displaystyle\det[{\cal H}]$ $\displaystyle=0.$ (45)
For simplicity, we set $\alpha_{x}=\alpha$, $\alpha_{y}=\alpha$, and $\alpha_{z}=\mathop{\mathrm{i}}\sqrt{2\alpha^{2}}$ with $\alpha$ a real-valued number. The real and imaginary parts of the eigenvalues with
$\alpha=0.3$ are shown in Figs. 1(b,c) and Figs. 2(a,b), respectively, and
reveal that our system exhibits an EP3 when $(k_{x},k_{y})\to 0$. Based on the
low-energy dispersion of the spectrum with one flat (with energy zero) and two
dispersive bands, Table 6 suggests that we are dealing with an EP3 I. To
examine this suggestion, we look at the band structure of our model at small
momenta ($k_{x},k_{y}\to 0$):
$\displaystyle\epsilon_{1}=0,$ (46)
$\displaystyle\epsilon_{2}=-\mathop{\mathrm{i}}\sqrt{-k_{x}^{2}+2\mathop{\mathrm{i}}\alpha(k_{x}+k_{y})-k_{y}^{2}},$ (47)
$\displaystyle\epsilon_{3}=\mathop{\mathrm{i}}\sqrt{-k_{x}^{2}+2\mathop{\mathrm{i}}\alpha(k_{x}+k_{y})-k_{y}^{2}}.$ (48)
The factor under the square root in $\epsilon_{2}$ and $\epsilon_{3}$ is $-\mathop{\mathrm{tr}}[{\cal H}^{2}]/2$. Thus, our model in Eq. (42) indeed gives rise to type I EP3s.
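A quick numerical sketch (with the assumed value $\alpha=0.3$ used in Fig. 1) confirms this: at the origin all three eigenvalues of Eq. (42) coalesce at zero, while slightly away one flat band survives and two bands disperse as $\pm\sqrt{\mathop{\mathrm{tr}}[{\cal H}^{2}]/2}$.

```python
import numpy as np

alpha = 0.3
az = 1j*np.sqrt(2*alpha**2)              # alpha_x = alpha_y = alpha

def H(kx, ky):
    hx = alpha + 1j*np.sin(kx)
    hy = alpha + 1j*np.sin(ky)
    hz = az + 1j*(-2 + np.cos(kx) + np.cos(ky))
    return np.array([[0, hx, -hy],
                     [-hx, 0, hz],
                     [hy, -hz, 0]])

# At the origin all three eigenvalues coalesce at zero (the EP3); the tiny
# residuals reflect the cube-root sensitivity of a defective matrix.
print(np.linalg.eigvals(H(0, 0)))

# Slightly away from the EP3: a zero mode plus two bands +-sqrt(tr[H^2]/2),
# cf. Eqs. (46)-(48).
k = 0.05
t2 = np.trace(H(k, k) @ H(k, k))
print(np.sort_complex(np.linalg.eigvals(H(k, k))))
print(np.sqrt(t2/2), -np.sqrt(t2/2))
```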
Imposing ${\cal PT}$ symmetry with generator
$\mathbbm{1}_{3}/3+M^{7}-M^{8}/\sqrt{3}={\rm diag}(1,-1,1)$ on this model
leads to a pseudo-chiral-${\cal PT}$-symmetric Hamiltonian, which reads
$\displaystyle{\cal H}_{\cal PT}=\left(\begin{array}[]{ccc}0&\mathop{\mathrm{i}}\sin(k_{x})&-\alpha\\ -\mathop{\mathrm{i}}\sin(k_{x})&0&\mathop{\mathrm{i}}h_{\alpha}\\ \alpha&-\mathop{\mathrm{i}}h_{\alpha}&0\end{array}\right),$ (52)
where $h_{\alpha}=\left[\sqrt{2}\alpha+\cos(k_{x})+\cos(k_{y})-2\right]$. The
band structure of this system at $\alpha=0.3$ is plotted in Figs. 1 (d,e) and
Fig. 2 (c,d). Even though we observe three-band crossings in the band
structure of this system, we emphasize that EP2s, instead of EP3s, emerge at
momenta slightly away from the origin. To demonstrate this statement, we look
at the characteristic polynomial, which reads
$\displaystyle-\lambda\left[\lambda^{2}-\Omega_{\alpha}(k_{x},k_{y})\right]=0,$
(53)
where
$\Omega_{\alpha}(k_{x},k_{y})=-\alpha^{2}+4\sqrt{2}\alpha-2\sqrt{2}\alpha\cos(k_{x})-2\cos(k_{x})\cos(k_{y})-\sin^{2}(k_{x})-\cos^{2}(k_{x})+4\cos(k_{x})-2\sqrt{2}\alpha\cos(k_{y})-\cos^{2}(k_{y})+4\cos(k_{y})-4$.
The characteristic polynomial in Eq. (53) factorizes into a first-order and a second-order polynomial, similar to Eq. (37). This means that it is possible to find a unitary transformation such that the Hamiltonian matrix in Eq. (52) features a zero row and a zero column, or in other words, that the zero-energy flat band is not coupled to the other bands. We also note that $\Omega_{\alpha}(k_{x},k_{y})=0$ delineates the region in which EP2s exist.
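A minimal sketch (same assumed $\alpha=0.3$) makes the decoupling explicit: one eigenvalue of Eq. (52) vanishes identically for all momenta, so any band coalescence involves only the two remaining bands.

```python
import numpy as np

alpha = 0.3

def H_PT(kx, ky):
    h_a = np.sqrt(2)*alpha + np.cos(kx) + np.cos(ky) - 2
    return np.array([[0, 1j*np.sin(kx), -alpha],
                     [-1j*np.sin(kx), 0, 1j*h_a],
                     [alpha, -1j*h_a, 0]])

for k in (0.1, 0.7, 2.0):
    w = np.linalg.eigvals(H_PT(k, k))
    print(min(abs(w)))        # ~0 for every k: the flat band is decoupled
```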
Lastly, we consider the case of finding EP2s in a three-band model in the
presence of SLS. We start with the Hamiltonian in Eq. (35), which is SL-
symmetric with $S=\mathbbm{1}_{3}/3+M^{7}-M^{8}/\sqrt{3}={\rm diag}(1,-1,1)$
as defined in Table 5 when $e=-b$, such that ${\cal H}_{1,\textrm{SLS}}=h_{3}M^{3}+h_{6}M^{6}$. The eigenvalues of this Hamiltonian read $0,\pm\sqrt{b^{2}+cd}$. Even though the eigenvalues are threefold degenerate when $b^{2}=-cd$, only two eigenvectors coalesce onto one at this point, and we thus find an EP2 in the system. There are two
important things to note for this example. Firstly, it is only possible to
find EP2s in three-band models with SLS _as long as_ the zero-energy band is
not coupled to the other bands, or in other words, _as long as_ the three-band
model can be described by a Hamiltonian like Eq. (35). Indeed, if the zero-
energy band were to be coupled to the other bands such that the most generic
three-band SL-symmetric Hamiltonian reads ${\cal
H}_{\textrm{SLS}}=d_{1}M^{1}+d_{3}M^{3}+d_{4}M^{4}+d_{6}M^{6}$ [cf. Table 5],
any previously existing EP2 would immediately be promoted to an EP3. Secondly,
to retrieve ${\cal H}_{1,\textrm{SLS}}$, one has to tune $d_{1}=d_{4}=0$. This
means that to find an EP2 in a three-band SL-symmetric model, one has to
satisfy six real constraints, namely the two constraints,
$\mathop{\mathrm{Re}}[b^{2}]=-\mathop{\mathrm{Re}}[cd],\mathop{\mathrm{Im}}[b^{2}]=-\mathop{\mathrm{Im}}[cd]$,
that one needs to satisfy to find an EP2 in the presence of SLS (cf. Table 3)
as well as the additional four constraints
$\mathop{\mathrm{Re}}[d_{1}]=\mathop{\mathrm{Im}}[d_{1}]=\mathop{\mathrm{Re}}[d_{4}]=\mathop{\mathrm{Im}}[d_{4}]=0$.
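The following sketch illustrates this counting (with assumed sample values $b=1$, $c=2$; $d$ is then fixed by the degeneracy condition): the spectrum of ${\cal H}_{1,\textrm{SLS}}$ is threefold degenerate at zero, yet the matrix has rank one, so only a $2\times 2$ Jordan block forms and the degeneracy is an EP2, not an EP3.

```python
import numpy as np

b, c = 1.0, 2.0
d = -b**2/c                      # enforce b^2 = -c*d
H = np.array([[0, 0, 0],
              [0, b, c],
              [0, d, -b]])       # H_{1,SLS}: e = -b in Eq. (35)

print(np.linalg.eigvals(H))      # threefold-degenerate eigenvalue 0
print(np.linalg.matrix_rank(H))  # rank 1: a 2x2 Jordan block plus a
                                 # decoupled zero mode, i.e. an EP2
```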
## V EPs in four-band systems
Table 7: Number of constraints and parameters to realize degenerate points in
4-band systems
Symmetry | | Operator | | $\\#$ constr. | | $\\#$ parameters
---|---|---|---|---|---|---
No symmetry | | - | | 2$\times$3 $(\nu,\eta,\kappa)$ | | 2$\times$15 $(d_{1},d_{2},d_{3},d_{4},d_{5},d_{6},d_{7},d_{8},d_{9},d_{10},d_{11},d_{12},d_{13},d_{14},d_{15})$
PHS with ${\cal C}_{-}{\cal C}_{-}^{*}=1$ | | $\Lambda^{8}+\Lambda^{11}$ | | 2$\times$3 $(\nu,\eta,\kappa)$ | | 2$\times$15 $(d_{1Ra},d_{1Rs},d_{2Ra},d_{3Ra},d_{3Rs},d_{4Ra},d_{4Rs},d_{5Ra},d_{6Ra},d_{6Rs},d_{7Ra},$ $d_{7Rs},d_{8Ra},d_{9Ra},d_{9Rs},d_{10Ra},d_{10Rs},d_{11Ra},d_{12Ra},d_{12Rs},$ $d_{13Ra},d_{13Rs},d_{14Ra},d_{14Rs},d_{15Rs},d_{15Ra},d_{1Ia},d_{1Is},d_{2Ia},d_{3Ia},d_{3Is},$ $d_{4Ia},d_{4Is},d_{5Ia},d_{6Ia},d_{6Is},d_{7Ia},d_{7Is},d_{8Ia},d_{9Ia},d_{9Is},d_{10Ia},d_{10Is},$ $d_{11Ia},d_{12Ia},d_{12Is},d_{13Ia},d_{13Is},d_{14Ia},d_{14Is},d_{15Ia},d_{15Is})$
PHS with ${\cal C}_{-}{\cal C}_{-}^{*}=-1$ | | $-\mathop{\mathrm{i}}(\Lambda^{1}+\Lambda^{6})$ | | 2$\times$3 $(\nu,\eta,\kappa)$ | | 2$\times$15 $(d_{1Rs},d_{2Rs},d_{2Ra},d_{3Ra},d_{3Rs},d_{4Ra},d_{4Rs},d_{5Ra},d_{5Rs},d_{6Rs},$ $d_{7Rs},d_{8Ra},d_{8Rs},d_{9Ra},d_{9Rs},d_{10Ra},d_{10Rs},d_{11Ra},d_{11Rs},$ $d_{12Rs},d_{13Rs},d_{14Ra},d_{14Rs},d_{15Rs},d_{15Ra},d_{1Is},d_{2Ia},$ $d_{2Is},d_{3Ia},d_{3Is},d_{4Ia},d_{4Is},d_{5Ia},d_{5Is},d_{6Is},d_{7Is},d_{8Ia},d_{8Is},d_{9Ia},d_{9Is},$ $d_{10Ia},d_{10Is},d_{11Ia},d_{11Is},d_{12Is},d_{13Is},d_{14Is},d_{14Ia},d_{15Ia},d_{15Is})$
PHS† with ${\cal T}_{-}{\cal T}_{-}^{*}=1$ | | $\Lambda^{8}+\Lambda^{11}$ | | 2$\times$3 $(\nu,\eta,\kappa)$ | | 2$\times$15 $(d_{1Ra},d_{1Rs},d_{2Ra},d_{3Ra},d_{3Rs},d_{4Ra},d_{4Rs},d_{5Ra},d_{6Ra},d_{6Rs},$ $d_{7Ra},d_{7Rs},d_{8Ra},d_{9Ra},d_{9Rs},d_{10Ra},d_{10Rs},d_{11Ra},d_{12Ra},d_{12Rs},$ $d_{13Ra},d_{13Rs},d_{14Ra},d_{14Rs},d_{15Ra},d_{15Rs},d_{1Ia},d_{1Is},d_{2Is},d_{3Ia},d_{3Is},$ $d_{4Ia},d_{4Is},d_{5Is},d_{6Ia},d_{6Is},d_{7Ia},d_{7Is},d_{8Is},d_{9Ia},d_{9Is},d_{10Ia},d_{10Is},$ $d_{11Is},d_{12Ia},d_{12Is},d_{13Ia},d_{13Is},d_{14Ia},d_{14Is},d_{15Is},d_{15Ia})$
PHS† with ${\cal T}_{-}{\cal T}_{-}^{*}=-1$ | | $-\mathop{\mathrm{i}}(\Lambda^{1}+\Lambda^{6})$ | | 2$\times$3 $(\nu,\eta,\kappa)$ | | 2$\times$15 $(d_{1Rs},d_{2Ra},d_{2Rs},d_{3Ra},d_{3Rs},d_{4Ra},d_{4Rs},d_{5Ra},d_{5Rs},d_{6Rs},d_{7Rs},$ $d_{8Ra},d_{8Rs},d_{9Ra},d_{9Rs},d_{10Ra},d_{10Rs},d_{11Ra},d_{11Rs},d_{12Rs},d_{13Rs},$ $d_{14Rs},d_{14Ra},d_{15Ra},d_{15Rs},d_{1Ia},d_{2Ia},d_{2Is},d_{3Ia},d_{3Is},$ $d_{4Ia},d_{4Is},d_{5Ia},d_{5Is},d_{6Ia},d_{7Ia},d_{8Ia},d_{8Is},d_{9Ia},d_{9Is},$ $d_{10Ia},d_{10Is},d_{11Ia},d_{11Is},d_{12Ia},d_{13Ia},d_{14Ia},d_{14Is},d_{15Is},d_{15Ia})$
TRS with ${\cal T}_{+}{\cal T}_{+}^{*}=1$ | | $\Lambda^{8}+\Lambda^{11}$ | | 2$\times$3 $(\nu,\eta,\kappa)$ | | 2$\times$15 $(d_{1Ra},d_{1Rs},d_{2Rs},d_{3Ra},d_{3Rs},d_{4Ra},d_{4Rs},d_{5Rs},d_{6Ra},d_{6Rs},d_{7Ra},$ $d_{7Rs},d_{8Rs},d_{9Ra},d_{9Rs},d_{10Ra},d_{10Rs},d_{11Rs},d_{12Rs},d_{12Ra},d_{13Ra},$ $d_{13Rs},d_{14Ra},d_{14Rs},d_{15Rs},d_{15Ra},d_{1Is},d_{1Ia},d_{2Ia},d_{3Ia},d_{3Is},$ $d_{4Ia},d_{4Is},d_{5Ia},d_{6Ia},d_{6Is},d_{7Ia},d_{7Is},d_{8Ia},d_{9Ia},d_{9Is},$ $d_{10Ia},d_{10Is},d_{11Ia},d_{12Ia},d_{12Is},d_{13Ia},d_{13Is},d_{14Ia},d_{14Is},d_{15Ia},d_{15Is})$
TRS with ${\cal T}_{+}{\cal T}_{+}^{*}=-1$ | | $-\mathop{\mathrm{i}}(\Lambda^{1}+\Lambda^{6})$ | | 2$\times$3 $(\nu,\eta,\kappa)$ | | 2$\times$15 $(d_{1Ra},d_{2Ra},d_{2Rs},d_{3Ra},d_{3Rs},d_{4Ra},d_{4Rs},d_{5Ra},d_{5Rs},$ $d_{6Ra},d_{7Ra},d_{8Ra},d_{8Rs},d_{9Ra},d_{9Rs},d_{10Ra},d_{10Rs},d_{11Ra},d_{11Rs},d_{12Ra},$ $d_{13Ra},d_{14Rs},d_{14Ra},d_{15Ra},d_{15Rs},d_{1Is},d_{2Ia},d_{2Is},d_{3Ia},d_{3Is},$ $d_{4Ia},d_{4Is},d_{5Ia},d_{5Is},d_{6Is},d_{7Is},d_{8Ia},d_{8Is},d_{9Ia},d_{9Is},$ $d_{10Ia},d_{10Is},d_{11Ia},d_{11Is},d_{12Is},d_{13Is},d_{14Ia},d_{14Is},d_{15Is},d_{15Ia})$
TRS† with ${\cal C}_{+}{\cal C}_{+}^{*}=1$ | | $\Lambda^{8}+\Lambda^{11}$ | | 2$\times$3 $(\nu,\eta,\kappa)$ | | 2$\times$15 $(d_{1Ra},d_{1Rs},d_{2Rs},d_{3Ra},d_{3Rs},d_{4Ra},d_{4Rs},d_{5Rs},d_{6Ra},d_{6Rs},$ $d_{7Ra},d_{7Rs},d_{8Rs},d_{9Ra},d_{9Rs},d_{10Ra},d_{10Rs},d_{11Rs},d_{12Ra},d_{12Rs},$ $d_{13Ra},d_{13Rs},d_{14Ra},d_{14Rs},d_{15Rs},d_{15Ra},d_{1Ia},d_{1Is},d_{2Is},d_{3Ia},d_{3Is},$ $d_{4Ia},d_{4Is},d_{6Ia},d_{6Is},d_{5Is},d_{7Ia},d_{7Is},d_{8Is},d_{9Ia},d_{9Is},$ $d_{10Is},d_{10Ia},d_{11Is},d_{12Ia},d_{12Is},d_{13Ia},d_{13Is},d_{14Ia},d_{14Is},d_{15Is},d_{15Ia})$
TRS† with ${\cal C}_{+}{\cal C}_{+}^{*}=-1$ | | $-\mathop{\mathrm{i}}(\Lambda^{1}+\Lambda^{6})$ | | 2$\times$3 $(\nu,\eta,\kappa)$ | | 2$\times$15 $(d_{1Ra},d_{2Ra},d_{2Rs},d_{3Ra},d_{3Rs},d_{4Ra},d_{4Rs},d_{5Ra},d_{5Rs},$ $d_{6Ra},d_{7Ra},d_{8Ra},d_{8Rs},d_{9Ra},d_{9Rs},d_{10Ra},d_{10Rs},$ $d_{11Ra},d_{11Rs},d_{12Ra},d_{13Ra},d_{14Rs},d_{14Ra},d_{15Ra},d_{15Rs},$ $d_{1Ia},d_{2Ia},d_{2Is},d_{3Ia},d_{3Is},d_{4Ia},d_{4Is},d_{5Ia},d_{5Is},$ $d_{6Ia},d_{7Ia},d_{8Ia},d_{8Is},d_{9Ia},d_{9Is},d_{10Ia},d_{10Is},$ $d_{11Ia},d_{11Is},d_{12Ia},d_{13Ia},d_{14Is},d_{14Ia},d_{15Ia},d_{15Is})$
Here $d_{\cal O}=d_{{\cal O}R}+\mathop{\mathrm{i}}d_{{\cal O}I}$ with ${\cal O}\in\{1,\ldots,15\}$. Symmetric and antisymmetric components of $d_{\cal O}$ with respect to $\bm{k}\to-\bm{k}$ are labelled by $d_{{\cal O}s}$ and $d_{{\cal O}a}$, respectively. $\eta,\nu,$ and $\kappa$ are introduced in Eqs. (61, 62, 63). Note that the nonzero parameters might vary upon changing the depicted generators for each symmetry operator. Nevertheless, the number of parameters and constraints remains unchanged.
Table 8: Number of constraints and parameters to realize degenerate points in
4-band systems
Symmetry | | Operator | | $\\#$ constr. | | $\\#$ parameters
---|---|---|---|---|---|---
CS | | $\Gamma_{5}$ | | 3 $(\nu_{R},\eta_{R},\kappa_{I})$ | | 26 $(d_{1R},d_{3R},d_{4R},d_{6R},d_{7R},d_{8R},d_{9R},d_{10R},$ $d_{11R},d_{12R},d_{13R},d_{14R},d_{15R},d_{1I},d_{2I},d_{3I},$ $d_{4I},d_{5I},d_{6I},d_{7I},d_{9I},d_{10I},d_{12I},d_{13I},d_{14I},d_{15I})$
${\rm psCS}$ | | $\Gamma_{5}$ | | 4 $(\eta,\nu)$ | | $2\times 15$ $(d_{1R},d_{2R},d_{3R},d_{4R},d_{5R},d_{6R},d_{7R},d_{8R},d_{9R},d_{10R},$ $d_{11R},d_{12R},d_{13R},d_{14R},d_{15R},d_{1I},d_{2I},d_{3I},d_{4I},d_{5I},d_{6I},$ $d_{7I},d_{8I},d_{9I},d_{10I},d_{11I},d_{12I},d_{13I},d_{14I},d_{15I})$
SLS | | $\mathop{\mathrm{i}}\Gamma_{5}$ | | 4 $(\eta,\nu)$ | | 26 $(d_{1R},d_{3R},d_{4R},d_{6R},d_{7R},d_{8R},d_{9R},d_{10R},$ $d_{11R},d_{12R},d_{13R},d_{14R},d_{15R},d_{1I},d_{3I},d_{4I},d_{6I},$ $d_{7I},d_{8I},d_{9I},d_{10I},d_{11I},d_{12I},d_{13I},d_{14I},d_{15I})$
$\cal I$ symmetry | | $\Lambda^{13}-\frac{\Lambda^{14}}{\sqrt{3}}+\sqrt{\frac{2}{3}}\Lambda^{15}$ | | 2$\times$3 $(\nu,\eta,\kappa)$ | | 2$\times$15 $(d_{1Ra},d_{2Rs},d_{3Ra},d_{4Ra},d_{5Rs},d_{6Ra},d_{7Ra},d_{8Rs},d_{9Ra},d_{10Ra},$ $d_{11Rs},d_{12Ra},d_{13Rs},d_{14Rs},d_{15Rs},d_{1Is},d_{2Ia},d_{3Is},d_{4Is},d_{5Ia},$ $d_{6Is},d_{7Is},d_{8Ia},d_{9Is},d_{10Is},d_{11Ia},d_{12Is},d_{13Ia},d_{14Ia},d_{15Ia})$
$\rm psH$ | | $\Gamma_{1}$ | | 3 ($\eta_{R},\nu_{R},\kappa_{R}$) | | 26 $(d_{1R},d_{3R},d_{4R},d_{6R},d_{7R},d_{8R},d_{9R},d_{10R},$ $d_{11R},d_{12R},d_{13R},d_{14R},d_{15R},d_{1I},d_{2I},d_{3I},$ $d_{4I},d_{5I},d_{6I},d_{7I},d_{9I},d_{10I},d_{12I},d_{13I},d_{14I},d_{15I})$
$\cal P$ symmetry | | $\Lambda^{13}-\frac{\Lambda^{14}}{\sqrt{3}}+\sqrt{\frac{2}{3}}\Lambda^{15}$ | | 2$\times$3 $(\nu,\eta,\kappa)$ | | 2$\times$15 $(d_{1Ra},d_{2Rs},d_{3Ra},d_{4Ra},d_{5Rs},d_{6Ra},d_{8Rs},d_{9Ra},d_{7Ra},d_{10Ra},$ $d_{11Rs},d_{12Ra},d_{13Rs},d_{14Rs},d_{15Rs},d_{1Ia},d_{2Is},d_{3Ia},d_{4Ia},d_{5Is},$ $d_{6Ia},d_{7Ia},d_{8Is},d_{9Ia},d_{10Ia},d_{11Is},d_{12Ia},d_{13Is},d_{14Is},d_{15Is})$
$\cal PT$ symmetry | | ${\cal P}\times(\Lambda^{8}+\Lambda^{11})$ | | 3 $(\eta_{R},\nu_{R},\kappa_{R})$ | | 26 $(d_{1R},d_{2R},d_{3R},d_{4R},d_{5R},d_{6R},d_{7R},d_{8R},d_{9R},$ $d_{10R},d_{11R},d_{12R},d_{13R},d_{14R},d_{15R},d_{3I},d_{4I},$ $d_{1I},d_{6I},d_{7I},d_{9I},d_{10I},d_{12I},d_{13I},d_{14I},d_{15I})$
${\cal CP}$ symmetry | | $(\Lambda^{8}-\Lambda^{11})$ | | 3 $(\eta_{R},\nu_{R},\kappa_{I})$ | | 26 $(d_{1R},d_{3R},d_{4R},d_{6R},d_{7R},d_{9R},d_{10R},$ $d_{12R},d_{13R},d_{14R},d_{15R},d_{1I},d_{2I},d_{3I},d_{4I},d_{5I},$ $d_{6I},d_{7I},d_{8I},d_{9I},d_{10I},d_{11I},d_{12I},d_{13I},d_{14I},d_{15I})$
Here $d_{\cal O}=d_{{\cal O}R}+\mathop{\mathrm{i}}d_{{\cal O}I}$ with ${\cal O}\in\{1,\ldots,15\}$. Symmetric and antisymmetric components of $d_{\cal O}$ with respect to $\bm{k}\to-\bm{k}$ are labelled by $d_{{\cal O}s}$ and $d_{{\cal O}a}$, respectively. $\Gamma_{1}$ and $\Gamma_{5}$ are given in Eqs. (65, 66). $\eta,\nu$ and $\kappa$ are introduced in Eqs. (61, 62, 63). Note that the nonzero parameters might vary upon changing the depicted generators for each symmetry operator. Nevertheless, the number of parameters and constraints remains unchanged.
We now turn to EP4s and present the most generic four-band Hamiltonian
decomposed in the generalized Gell-Mann basis
$\displaystyle{\cal H}(\bm{k})=d_{0}(\bm{k})\mathbbm{1}_{4}+\bm{d}(\bm{k})\cdot\bm{\Lambda},$ (54)
where $\bm{\Lambda}=(\Lambda_{1},\Lambda_{2},\ldots,\Lambda_{15})$ is the
vector of four-band Gell-Mann matrices (see Appendix D.3), $\mathbbm{1}_{4}$
is the $4\times 4$ identity matrix, $\bm{k}$ denotes the momentum with the
appropriate dimensions, and $(d_{0}(\bm{k}),\bm{d}(\bm{k}))$ are complex-
valued momentum-dependent variables.
The associated characteristic polynomial for $\cal H$ in Eq. (54), from Eq.
(1), is given by
$\displaystyle{\cal F}_{\lambda}$
$\displaystyle=\lambda^{4}-a\lambda^{3}+b\lambda^{2}-c\lambda+d=0,$ (55)
where
$\displaystyle a$ $\displaystyle=\mathop{\mathrm{tr}}[{\cal H}],$ (56)
$\displaystyle b$ $\displaystyle=\frac{(\mathop{\mathrm{tr}}[{\cal
H}])^{2}-\mathop{\mathrm{tr}}[{\cal H}^{2}]}{2},$ (57) $\displaystyle c$
$\displaystyle=\frac{\left(\mathop{\mathrm{tr}}[{\cal
H}]^{3}-3\mathop{\mathrm{tr}}[{\cal H}]\mathop{\mathrm{tr}}[{\cal
H}^{2}]+2\mathop{\mathrm{tr}}[{\cal H}^{3}]\right)}{6},$ (58) $\displaystyle
d$ $\displaystyle=\det[{\cal H}].$ (59)
The four solutions $\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}$ of ${\cal
F}_{\lambda}$ are eigenvalues of $\cal H$ in Eq. (54) and are given explicitly
in Appendix F. The discriminant associated with Eq. (55) reads
$\mathcal{D}=\frac{4\eta^{3}-\nu^{2}}{27},$ (60)
where $\eta$, $\nu$ and $\kappa$ are
$\displaystyle\eta$ $\displaystyle=-3ac+b^{2}+12d,$ (61) $\displaystyle\nu$
$\displaystyle=27a^{2}d-9abc+2b^{3}-72bd+27c^{2},$ (62) $\displaystyle\kappa$
$\displaystyle=a^{3}-4ab+8c,$ (63)
with $a,b,c,d$ given in Eqs. (56,57,58,59), respectively. From the structure of this discriminant, one may naively expect that merely _four_ real constraints, namely $\mathop{\mathrm{Re}}[\eta]=\mathop{\mathrm{Im}}[\eta]=\mathop{\mathrm{Re}}[\nu]=\mathop{\mathrm{Im}}[\nu]=0$, should be satisfied to observe EP4s in 4-band systems. However, to force all four roots of the characteristic polynomial to coincide, a third complex constraint, namely $\kappa$ in Eq. (63), must also be set to zero. This can be better understood by following the argument of Sec. II and counting the number of available traces ($\mathop{\mathrm{tr}}[{\cal H}^{2}],\mathop{\mathrm{tr}}[{\cal H}^{3}]$) and the determinant ($\det[{\cal H}]$) in the companion matrix of EP4s given by
$\displaystyle{\cal H}_{0}=\begin{pmatrix}0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ -\det[{\cal H}]&\frac{\mathop{\mathrm{tr}}[{\cal H}^{3}]}{3}&\frac{\mathop{\mathrm{tr}}[{\cal H}^{2}]}{2}&0\end{pmatrix}.$ (64)
Note that, without loss of generality, we set $\mathop{\mathrm{tr}}[{\cal
H}]=0$ as before. As a result, _six_ real constraints should be imposed to
obtain EP4s in a four-band system. We summarize these constraints in the
presence of various symmetries in Tables 7 and 8. Here, aside from considering $\Lambda$ matrices as symmetry generators, we also use two Gamma matrices (cf. Appendix D.3) defined as
$\displaystyle\Gamma_{1}=\sigma_{x}\otimes\mathbbm{1}_{2},$ (65)
$\displaystyle\Gamma_{5}=\sigma_{y}\otimes\tau_{z},$ (66)
where $\bm{\sigma}$ and $\bm{\tau}$ are Pauli matrices. We again note that in the case of a Hermitian model ($\bm{d}_{I}=0$), a four-band crossing requires satisfying the 15 constraints $\bm{d}_{R}=0$.
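As a consistency check of Eqs. (55)-(63), the following sketch draws a random (assumed) $4\times 4$ matrix and verifies that ${\cal D}$ in Eq. (60) coincides with the product of squared eigenvalue differences, the defining property of the discriminant.

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 4)) + 1j*rng.normal(size=(4, 4))

a = np.trace(H)                                          # Eq. (56)
b = (a**2 - np.trace(H @ H))/2                           # Eq. (57)
c = (a**3 - 3*a*np.trace(H @ H) + 2*np.trace(H @ H @ H))/6   # Eq. (58)
d = np.linalg.det(H)                                     # Eq. (59)
eta = -3*a*c + b**2 + 12*d                               # Eq. (61)
nu = 27*a**2*d - 9*a*b*c + 2*b**3 - 72*b*d + 27*c**2     # Eq. (62)
D = (4*eta**3 - nu**2)/27                                # Eq. (60)

l = np.linalg.eigvals(H)
D_direct = np.prod([(l[i]-l[j])**2 for i in range(4) for j in range(i+1, 4)])
print(np.allclose(D, D_direct))                          # True up to round-off
```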
Perturbing in the vicinity of EP4s with $\mathop{\mathrm{tr}}[{\cal H}]=0$ is
described by ${\cal H}_{0}$ in Eq. (64). To find various types of EP4s, we
consider different cases, summarized in Table 9: i) For Hamiltonians with
$\mathop{\mathrm{tr}}[{\cal H}^{2}]=\mathop{\mathrm{tr}}[{\cal H}^{3}]=0$ the
energy dispersion close to EP4s reads
$\mathop{\mathrm{i}}^{k}\sqrt[4]{\det[{\cal H}]}$ with $k=1,2,3,4$. ii) If the
Hamiltonian is constructed in such a way that $\det[{\cal
H}]=\mathop{\mathrm{tr}}[{\cal H}^{2}]=0$, the energy dispersion of ${\cal
H}_{0}$ reads $0,(-1)^{j+j/3}\sqrt[3]{\mathop{\mathrm{tr}}[{\cal H}^{3}]/3}$
with $j=1,2,3$. iii) When $\det[{\cal H}]=\mathop{\mathrm{tr}}[{\cal
H}^{3}]=0$ for a 4-band Hamiltonian, the system exhibits two flat bands with
energy zero and two dispersive bands $\pm\sqrt{\mathop{\mathrm{tr}}[{\cal
H}^{2}]/2}$. iv) The fourth situation arises when, close to an EP4, $\eta$ and $\kappa$ given in Eqs. (61, 63) decrease to zero faster than $\nu$ in Eq. (62). In this case, the low-energy dispersion should be computed from the
general eigenvalues given in Eqs. (156, 157, 158, 159). As a result, the four
bands close to the EP4 are proportional to
$\mp\sqrt{2}\sqrt{-8b-2^{2/3}\sqrt[3]{\nu}}\mp\sqrt{-8b+2^{5/3}\sqrt[3]{\nu}}$.
Table 9: Various possibilities of EP4s and their energy dispersion
| | Condition | | Energy dispersion
---|---|---|---|---
EP4 0 | | $\eta\neq 0,\nu\neq 0,\kappa\neq 0$ | | $(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})$
EP4 I | | $\mathop{\mathrm{tr}}[{\cal H}^{2}]=\mathop{\mathrm{tr}}[{\cal H}^{3}]=0$ | | $\mathop{\mathrm{i}}^{k}\sqrt[4]{\det[{\cal H}]}$
EP4 II | | $\det[{\cal H}]=\mathop{\mathrm{tr}}[{\cal H}^{2}]=0$ | | $0,(-1)^{j+j/3}\sqrt[3]{\mathop{\mathrm{tr}}[{\cal H}^{3}]/3}$
EP4 III | | $\det[{\cal H}]=\mathop{\mathrm{tr}}[{\cal H}^{3}]=0$ | | $0,0,\pm\sqrt{\mathop{\mathrm{tr}}[{\cal H}^{2}]/2}$
EP4 IV | | $\eta,\kappa\to 0$ faster than $\nu\to 0$ | | $\pm\sqrt{2}\omega_{1}\pm\omega_{2}$
Here $k\in\{1,2,3,4\}$, $j\in\{1,2,3\}$, $\omega_{1}=\sqrt{-8b-2^{2/3}\sqrt[3]{\nu}}$ and $\omega_{2}=\sqrt{-8b+2^{5/3}\sqrt[3]{\nu}}$ with $b$ given in Eq. (57). Note that in all cases _six_ real constraints should be satisfied to observe EP4s. These constraints are counted by three complex equations, either $(\eta=0,\nu=0,\kappa=0)$ or $(\mathop{\mathrm{tr}}[{\cal H}^{2}]=0,\mathop{\mathrm{tr}}[{\cal H}^{3}]=0,\det[{\cal H}]=0)$. $(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})$ are given in Eqs. (156, 157, 158, 159).
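For illustration, a minimal sketch of an EP4 I (with an assumed value $\det[{\cal H}]=10^{-4}$): with $\mathop{\mathrm{tr}}[{\cal H}^{2}]=\mathop{\mathrm{tr}}[{\cal H}^{3}]=0$, the companion matrix (64) obeys $\lambda^{4}=-\det[{\cal H}]$, whose four roots realize the $\mathop{\mathrm{i}}^{k}$ pattern of Table 9 up to the overall phase convention of the fourth root.

```python
import numpy as np

det_H = 1e-4                         # assumed small determinant near the EP4
H0 = np.array([[0, 1, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1],
               [-det_H, 0, 0, 0]])   # Eq. (64) with tr[H^2] = tr[H^3] = 0

w = np.sort_complex(np.linalg.eigvals(H0))
roots = np.sort_complex([1j**k * (-det_H + 0j)**0.25 for k in range(4)])
print(np.allclose(w, roots))         # True: dispersion ~ det[H]^(1/4)
```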
Aside from EP4s, one might also encounter EP3s and EP2s in four-band systems.
Let us first consider the case in which EP3s can be realized. The effective
Hamiltonian reads
$\displaystyle{\cal H}_{1}=\begin{pmatrix}a&0&0&0\\ 0&b&c&d\\ 0&e&f&g\\ 0&h&i&j\end{pmatrix}=\begin{pmatrix}a&0_{1\times 3}\\ 0_{3\times 1}&h_{3\times 3}\end{pmatrix}.$ (67)
Without loss of generality we consider $\mathop{\mathrm{tr}}[h_{3\times
3}]=b+f+j=0$. Based on our results in Sec. IV, we conclude that $h_{3\times
3}$ can host EP3s if $\eta$ and $\nu$ given in Eqs. (32, 33) with ${\cal
H}=h_{3\times 3}$ are simultaneously zero.
To explore EP2s in four-band systems, we consider two possibilities: a four-
band system with i) two trivial bands and an EP2, or ii) two EP2s. For the
former scenario, we introduce a generic Hamiltonian which reads
$\displaystyle{\cal H}_{2}=\begin{pmatrix}a&0&0&0\\ 0&b&0&0\\ 0&0&c&d\\ 0&0&e&f\end{pmatrix}=\begin{pmatrix}a&0&0_{1\times 2}\\ 0&b&0_{1\times 2}\\ 0_{2\times 1}&0_{2\times 1}&h_{2\times 2}\end{pmatrix}.$ (68)
Following the results in Sec. III, we conclude that ${\cal H}_{2}$ possesses an EP2 when the $\eta$ and $\nu$ constraints in Eq. (10) are satisfied by $h_{2\times 2}$. The second plausible situation in which EP2s can be detected is described by an effective Hamiltonian given by
$\displaystyle{\cal H}_{3}=\begin{pmatrix}a&b&0&0\\ c&d&0&0\\ 0&0&e&f\\ 0&0&g&h\end{pmatrix}=\begin{pmatrix}\tilde{h}_{2\times 2}&0_{2\times 2}\\ 0_{2\times 2}&\overline{h}_{2\times 2}\end{pmatrix}.$ (69)
${\cal H}_{3}$ displays EP2s if the discriminants of $\tilde{h}_{2\times 2}$ and $\overline{h}_{2\times 2}$ are set to zero, i.e., if Eq. (10) is satisfied for $\tilde{h}_{2\times 2}$ and $\overline{h}_{2\times 2}$. In special cases in which both discriminants vanish in the same parameter regime, two EP2s coexist.
Figure 3: (a) The spectrum of the four-band model in Eq. (74) in its Hermitian
limit with $\alpha_{p}=\alpha_{m}=\alpha_{z}=\alpha_{b}=0$ and
$\theta_{1}=\theta_{2}=\pi/2$. (Middle panels) The real (b) and imaginary (c)
components of the band structure for the non-Hermitian model in Eq. (74) with
$\alpha_{p}=\alpha_{m}=0.15$, $\alpha_{z}=0.15\mathop{\mathrm{i}}$, and
$\alpha_{b}=0$. (Bottom panels) The real (d) and imaginary (e) components of
the band structure for the non-Hermitian model in the presence of psH symmetry
in Eq. (86). Bands in panels (d,e) are twofold degenerate. Line colors in the middle and bottom panels are chosen such that the lowest to highest bands are shown in blue, orange, green, and red, respectively. Figure 4:
(Upper panels) The same as panels (b,c) in Fig. 3 along the $k_{x}=k_{z}$
direction. (Bottom panels) The same as panels (d,e) in Fig. 3 at $k_{z}=0$ and
along the $k_{x}$ direction.
To exemplify the role of symmetries on the low-energy dispersion of EP4s, we
present a case study in the following. We start with a traceless non-Hermitian
four-band model, which reads
$\displaystyle{\cal H}=\left(\begin{array}[]{cccc}0&\alpha_{p}+k_{x}&h_{zz2}&h_{bx}\\ k_{x}-\alpha_{p}&0&\tilde{h}_{bx2}&h_{zz1}\\ \tilde{h}_{zz2}&h_{bx2}&0&k_{x}-\alpha_{m}\\ \tilde{h}_{bx}&\tilde{h}_{zz1}&\alpha_{m}+k_{x}&0\end{array}\right),$ (74)
where $h_{zz2}=\alpha_{z}-e^{\mathop{\mathrm{i}}\theta_{1}}k_{z}$,
$\tilde{h}_{zz2}=-\alpha_{z}-e^{-\mathop{\mathrm{i}}\theta_{1}}k_{z}$,
$h_{bx}=\alpha_{b}+e^{-\mathop{\mathrm{i}}\theta_{2}}k_{x}$,
$\tilde{h}_{bx}=-\alpha_{b}+e^{\mathop{\mathrm{i}}\theta_{2}}k_{x}$,
$\tilde{h}_{bx2}=-\alpha_{b}+e^{-\mathop{\mathrm{i}}\theta_{2}}k_{x}$,
$h_{bx2}=\alpha_{b}+e^{\mathop{\mathrm{i}}\theta_{2}}k_{x}$,
$h_{zz1}=\alpha_{z}+e^{\mathop{\mathrm{i}}\theta_{1}}k_{z}$, and
$\tilde{h}_{zz1}=-\alpha_{z}+e^{-\mathop{\mathrm{i}}\theta_{1}}k_{z}$. Here
$\alpha_{\cal O}$ with ${\cal O}\in\\{p,m,z,b\\}$ are complex-valued non-
Hermitian parameters, $(\theta_{1},\theta_{2})$ denote phase variables, and
$(k_{x},k_{z})$ indicate momenta. When non-Hermitian variables $\alpha_{\cal
O}$ vanish, the Hamiltonian in Eq. (74) describes the low-energy band
structure of four-fold fermions at $k_{y}=0$ [71]. We plot the dispersion
relation for this four-band model in the Hermitian limit with
$\alpha_{p}=\alpha_{m}=\alpha_{z}=\alpha_{b}=0$ and
$\theta_{1}=\theta_{2}=\pi/2$ in Fig. 3(a). This Hermitian band structure
displays a fourfold degeneracy in its spectrum at $k_{x}=k_{z}=0$. The traces
and the determinant of the Hamiltonian at $\theta_{1}=\theta_{2}=\pi/2$ in Eq.
(74) read
$\displaystyle\mathop{\mathrm{tr}}[{\cal H}]$ $\displaystyle=0,$ (75)
$\displaystyle\mathop{\mathrm{tr}}[{\cal H}^{2}]$
$\displaystyle=-2\left(2\alpha_{b}^{2}+\alpha_{m}^{2}+\alpha_{p}^{2}+2\alpha_{z}^{2}\right)+8k_{x}^{2}+4k_{z}^{2},$
(76) $\displaystyle\mathop{\mathrm{tr}}[{\cal H}^{3}]$
$\displaystyle=24\mathop{\mathrm{i}}\alpha_{z}k_{x}^{2},$ (77) $\displaystyle\det[{\cal H}]$
$\displaystyle=k_{x}^{2}\left(-(\alpha_{m}-\alpha_{p})^{2}+4\alpha_{z}^{2}+4k_{z}^{2}\right)$
$\displaystyle\quad+\left(\alpha_{b}^{2}+\alpha_{m}\alpha_{p}+\alpha_{z}^{2}+k_{z}^{2}\right)^{2}.$
(78)
For simplicity, we consider the case $\alpha_{m}=\alpha_{p}=\alpha$, $\alpha_{z}=\mathop{\mathrm{i}}\alpha$, and $\alpha_{b}=0$ with $\alpha$ a real-valued number. In this parameter regime, the constraints in Eqs. (61, 62, 63) read
$\displaystyle\eta$
$\displaystyle=\left(-4k_{x}^{2}-2k_{z}^{2}\right)^{2}+12\left(k_{x}^{2}\left(4k_{z}^{2}-4\alpha^{2}\right)+k_{z}^{4}\right),$
(79) $\displaystyle\frac{\nu}{2}$
$\displaystyle=864\alpha^{2}k_{x}^{4}+\left(-4k_{x}^{2}-2k_{z}^{2}\right)^{3}$
$\displaystyle-36\left(-4k_{x}^{2}-2k_{z}^{2}\right)\left(k_{x}^{2}\left(4k_{z}^{2}-4\alpha^{2}\right)+k_{z}^{4}\right),$
(80) $\displaystyle\kappa$ $\displaystyle=-64\alpha k_{x}^{2}.$ (81)
These constraints simultaneously vanish when $k_{x}=k_{z}=0$. As a result, an EP4 appears in this system, as shown in Figs. 3(b,c) and Fig. 4(a,b). Since $\eta$, $\nu$, and $\kappa$ are nonzero close to this EP4, we identify it as type 0, see Table 9. Aside from the EP4, our model also exhibits EP2s close to $k_{x}=k_{z}\approx 0.47$, as shown in Figs. 3(b,c) and Fig. 4(a,b).
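The following sketch (Python/NumPy; the matrix implements Eq. (74) at $\theta_{1}=\theta_{2}=\pi/2$ with the parameter choice above and the assumed value $\alpha=0.15$ of Fig. 3) evaluates Eqs. (61)-(63) and confirms that $\eta$, $\nu$, and $\kappa$ vanish simultaneously at $k_{x}=k_{z}=0$, where the four eigenvalues coalesce.

```python
import numpy as np

alpha = 0.15
ap = am = alpha
az = 1j*alpha                            # alpha_z = i*alpha, alpha_b = 0

def H(kx, kz):
    e = 1j                               # exp(i*theta) at theta = pi/2
    return np.array([
        [0,           ap + kx,      az - e*kz,  kx/e],
        [kx - ap,     0,            kx/e,       az + e*kz],
        [-az - kz/e,  e*kx,         0,          kx - am],
        [e*kx,       -az + kz/e,    am + kx,    0]])

def constraints(M):                      # Eqs. (61)-(63) with a = tr[M] = 0
    b = -np.trace(M @ M)/2
    c = np.trace(M @ M @ M)/3
    d = np.linalg.det(M)
    return b**2 + 12*d, 2*b**3 - 72*b*d + 27*c**2, 8*c

print(np.round(constraints(H(0.0, 0.0)), 10))       # (0, 0, 0) at the origin
print(np.round(np.linalg.eigvals(H(0.0, 0.0)), 6))  # fourfold zero
```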
To further explore the effect of symmetry on the appearance of EPs, we impose
psH symmetry with generator $\Gamma_{1}$ on our Hamiltonian in Eq. (74). The
psH-symmetric Hamiltonian then reads
$\displaystyle{\cal H}_{\rm psH}=\left(\begin{array}[]{cccc}0&h_{1}&h_{zz}&h_{x2}\\ h_{mpx}&0&h_{x2}&\tilde{h}_{zz}\\ -\tilde{h}_{zz}&h_{x2}&0&h_{mpx}\\ h_{x2}&-h_{zz}&h_{1}&0\end{array}\right),$ (86)
where $h_{1}=\frac{1}{2}(\alpha_{m}+\alpha_{p}+2k_{x})$, $h_{zz}=\alpha_{z}-k_{z}\cos(\theta_{1})$, $\tilde{h}_{zz}=\alpha_{z}+k_{z}\cos(\theta_{1})$, $h_{mpx}=-\frac{\alpha_{m}}{2}-\frac{\alpha_{p}}{2}+k_{x}$, and $h_{x2}=k_{x}\cos(\theta_{2})$. The associated characteristic polynomial at
$\theta_{1}=\theta_{2}=\pi/2$ factorizes into two second-order polynomials as
$\displaystyle\left(-\alpha^{2}-\lambda^{2}+k_{x}^{2}\right)^{2}=0.$ (87)
This twofold degeneracy is evident in Figs. 3(d,e) and Figs. 4(c,d) in which
we plot the band structure of ${\cal H}_{\rm psH}$ at $k_{z}=0$,
$\alpha_{m}=\alpha_{p}=\alpha$, $\alpha_{z}=\mathop{\mathrm{i}}\alpha$,
$\alpha_{b}=0$ with $\alpha=0.2$. Here we see that the bands come in doubly degenerate pairs, as merely two bands are visible. The momenta at which EP2s occur
are $k_{x}=\pm\alpha$. This can be obtained from the associated constraints
for ${\cal H}_{\rm psH}$
$\displaystyle\eta$ $\displaystyle=16\left(k_{x}^{2}-\alpha^{2}\right)^{2},$
(88) $\displaystyle\nu$
$\displaystyle=128\left(k_{x}^{2}-\alpha^{2}\right)^{3},$ (89)
$\displaystyle\kappa$ $\displaystyle=0.$ (90)
$\eta$ and $\nu$ are zero when $k_{x}=\pm\alpha$. Finally, in agreement with our findings, the number of constraints reduces when we impose psH symmetry on our non-Hermitian system in Eq. (74).
## VI Discussion and Conclusion
In this work, we have studied the appearance of exceptional points of any
order in the presence of symmetries. In particular, we have addressed three
questions pertaining to the number of constraints to find EP$n$s, the
implications of symmetries on the number of constraints for realizing EPs, and
the low-energy behavior of these EPs. By expressing the characteristic
polynomial of an $n$-dimensional non-Hermitian Hamiltonian in terms of the
determinant and traces of the Hamiltonian, we have shown that one can identify
$2n-2$ real constraints for finding EP$n$s. We, furthermore, have discussed
that in the presence of various symmetries, the number of constraints may
reduce. Our results show that combining symmetries generally results in
further decreasing the number of constraints. By interpreting the companion
matrix as a perturbation close to an EP$n$, we have explicitly identified
plausible low-energy dispersions of EP$n$s. Besides these general
considerations for EPs of any order, we have derived exact results for EPs of
orders two, three, and four. Through looking at the companion matrix, we have
also calculated explicit expressions for the dispersion around an EP, allowing
us to characterize EP3s and EP4s based on their low-energy spectrum. In
addition, we have presented the appearance of lower-order EPs in $n$-dimensional models and found that EP2s can be realized in three-band systems, while both EP2s and EP3s can appear in four-band systems.
While we have focused on EP$n$s in this work, we emphasize that our results
can be straightforwardly generalized to exceptional structures of higher
dimensions. Associating a parameter with each constraint, we have shown that
EP$n$s generally appear in $(n-1)$-dimensional setups in the presence of, e.g., psH symmetry. Consequently, exceptional one-dimensional lines or two-dimensional surfaces of order $n$ appear generically in $n$- and $(n+1)$-dimensional systems, respectively. In other words, the number of
constraints is related to the _codimension_ of the exceptional structure,
i.e., the difference between the total dimension of the system and the
dimension of the exceptional structure, and our results can thus be readily
applied to study the realization of higher-dimensional exceptional structures
in the presence of symmetries.
Besides exceptional degeneracies, ordinary (Hermitian) degeneracies may appear
in non-Hermitian systems where the eigenvalues coalesce, but the eigenbasis is
complete. As we briefly discussed in Sects. III-V, this requires setting ${\bf
d}={\bf 0}$ for the various Hamiltonians such that these Hamiltonians are
proportional to an identity matrix. Generally, this results in having to
satisfy a large number of constraints to find these degeneracies. Indeed, one
needs to satisfy $2(n^{2}-1)$ constraints to find an ordinary $n$-fold
degeneracy, where $n^{2}-1$ is the number of dimensions of the group SU($n$).
Clearly, ${\bf d}={\bf 0}$ is a solution to the characteristic polynomial in
Eq. (1). We note that one of the crucial differences between EPs and ordinary
degeneracies on the level of polynomial equations sits in the relation between
the characteristic and the minimal polynomials: For EPs, the characteristic
polynomial equals the minimal polynomial, whereas, for ordinary degeneracies,
the characteristic polynomial is a multiple of the minimal polynomial [72].
In Ref. 34, it is stated that symmetry-protected multifold exceptional points
are points at which the symmetry is spontaneously broken. This is indeed the
case for the symmetries the authors consider there (CS, psH, $\cal PT$, and
$\cal CP$ symmetry), which are antiunitary symmetries that are local in
parameter space. We here show that unitary, local symmetries such as SLS and
psCS can also stabilize higher-order EPs in lower dimensions. These EPs do not
mark a transition between broken and unbroken symmetry, thus showing that not
all symmetry-protected EP$n$s necessarily correspond to spontaneous symmetry-
breaking points.
While we have presented an extensive study here on the realization of
exceptional points of any order in the presence of symmetry, we did not touch
upon the possibility of defining topological invariants. Former studies proposed to define a $\mathbb{Z}_{2}$ index based on either
$\textrm{sign}(\det[\cal H])$ ($\textrm{sign}(\det[\mathop{\mathrm{i}}\cal
H])$) in two-band models with $\cal PT$ ($\cal CP$) symmetry [73, 38] or the
sign of the discriminant [34] for systems of any dimension with CS, psH, $\cal
PT$, and $\cal CP$ symmetry. It would be intriguing to investigate whether
more generic invariants could be defined based on our rigorous mathematical
framework. We leave this problem for later studies.
_Acknowledgments.—_ We would like to thank Emil J. Bergholtz for pointing out
Ref. 60.
## References
* Wan _et al._ [2011] X. Wan, A. M. Turner, A. Vishwanath, and S. Y. Savrasov, Topological semimetal and fermi-arc surface states in the electronic structure of pyrochlore iridates, Phys. Rev. B 83, 205101 (2011).
* Burkov _et al._ [2011] A. A. Burkov, M. D. Hook, and L. Balents, Topological nodal semimetals, Phys. Rev. B 84, 235126 (2011).
* Wang _et al._ [2012] Z. Wang, Y. Sun, X.-Q. Chen, C. Franchini, G. Xu, H. Weng, X. Dai, and Z. Fang, Dirac semimetal and topological phase transitions in ${A}_{3}$Bi ($A=$ Na, K, Rb), Phys. Rev. B 85, 195320 (2012).
* Chiu and Schnyder [2014] C.-K. Chiu and A. P. Schnyder, Classification of reflection-symmetry-protected topological semimetals and nodal superconductors, Phys. Rev. B 90, 205136 (2014).
* Senthil [2015] T. Senthil, Symmetry-protected topological phases of quantum matter, Annual Review of Condensed Matter Physics 6, 299 (2015).
* Liang _et al._ [2016] Q.-F. Liang, J. Zhou, R. Yu, Z. Wang, and H. Weng, Node-surface and node-line fermions from nonsymmorphic lattice symmetries, Phys. Rev. B 93, 085427 (2016).
* Chiu _et al._ [2016] C.-K. Chiu, J. C. Y. Teo, A. P. Schnyder, and S. Ryu, Classification of topological quantum matter with symmetries, Rev. Mod. Phys. 88, 035005 (2016).
* Bansil _et al._ [2016] A. Bansil, H. Lin, and T. Das, Colloquium: Topological band theory, Rev. Mod. Phys. 88, 021004 (2016).
* Altland and Zirnbauer [1997] A. Altland and M. R. Zirnbauer, Nonstandard symmetry classes in mesoscopic normal-superconducting hybrid structures, Phys. Rev. B 55, 1142 (1997).
* Xu _et al._ [2011] G. Xu, H. Weng, Z. Wang, X. Dai, and Z. Fang, Chern semimetal and the quantized anomalous Hall effect in ${\mathrm{HgCr}}_{2}{\mathrm{Se}}_{4}$, Phys. Rev. Lett. 107, 186806 (2011).
* Young _et al._ [2012] S. M. Young, S. Zaheer, J. C. Y. Teo, C. L. Kane, E. J. Mele, and A. M. Rappe, Dirac semimetal in three dimensions, Phys. Rev. Lett. 108, 140405 (2012).
* Armitage _et al._ [2018] N. P. Armitage, E. J. Mele, and A. Vishwanath, Weyl and dirac semimetals in three-dimensional solids, Rev. Mod. Phys. 90, 015001 (2018).
* Bradlyn _et al._ [2016] B. Bradlyn, J. Cano, Z. Wang, M. G. Vergniory, C. Felser, R. J. Cava, and B. A. Bernevig, Beyond dirac and weyl fermions: Unconventional quasiparticles in conventional crystals, Science 353, 5037 (2016).
* Flicker _et al._ [2018] F. Flicker, F. de Juan, B. Bradlyn, T. Morimoto, M. G. Vergniory, and A. G. Grushin, Chiral optical response of multifold fermions, Phys. Rev. B 98, 155145 (2018).
* Tian _et al._ [2021] L. Tian, Y. Liu, W.-W. Yu, X. Zhang, and G. Liu, Triple degenerate point in three dimensions: Theory and realization, Phys. Rev. B 104, 045137 (2021).
* Wu _et al._ [2020] W. Wu, Z. M. Yu, X. Zhou, Y. X. Zhao, and S. A. Yang, Higher-order Dirac fermions in three dimensions, Physical Review B 101, 1 (2020).
* Fang _et al._ [2012] C. Fang, M. J. Gilbert, X. Dai, and B. A. Bernevig, Multi-weyl topological semimetals stabilized by point group symmetry, Phys. Rev. Lett. 108, 266802 (2012).
* Yu _et al._ [2018] W. C. Yu, X. Zhou, F.-C. Chuang, S. A. Yang, H. Lin, and A. Bansil, Nonsymmorphic cubic Dirac point and crossed nodal rings across the ferroelectric phase transition in ${\mathrm{LiOsO}}_{3}$, Phys. Rev. Materials 2, 051201 (2018).
* Yu _et al._ [2019] Z. M. Yu, W. Wu, X. L. Sheng, Y. X. Zhao, and S. A. Yang, Quadratic and cubic nodal lines stabilized by crystalline symmetry, Physical Review B 99, 1 (2019).
* Zhang _et al._ [2021] Z. Zhang, Z. M. Yu, and S. A. Yang, Magnetic higher-order nodal lines, Physical Review B 103, 1 (2021).
* Yao and Wang [2018] S. Yao and Z. Wang, Edge states and topological invariants of non-hermitian systems, Phys. Rev. Lett. 121, 086803 (2018).
* Lee [2016] T. E. Lee, Anomalous edge state in a non-hermitian lattice, Phys. Rev. Lett. 116, 133903 (2016).
* Kunst _et al._ [2018] F. K. Kunst, E. Edvardsson, J. C. Budich, and E. J. Bergholtz, Biorthogonal bulk-boundary correspondence in non-hermitian systems, Phys. Rev. Lett. 121, 026808 (2018).
* Kunst and Dwivedi [2019] F. K. Kunst and V. Dwivedi, Non-hermitian systems and topology: A transfer-matrix perspective, Phys. Rev. B 99, 245116 (2019).
* Kawabata _et al._ [2019] K. Kawabata, T. Bessho, and M. Sato, Classification of exceptional points and non-hermitian topological semimetals, Phys. Rev. Lett. 123, 066405 (2019).
* Bergholtz _et al._ [2021] E. J. Bergholtz, J. C. Budich, and F. K. Kunst, Exceptional topology of non-hermitian systems, Rev. Mod. Phys. 93, 015005 (2021).
* Yao _et al._ [2018] S. Yao, F. Song, and Z. Wang, Non-hermitian chern bands, Phys. Rev. Lett. 121, 136802 (2018).
* Yin _et al._ [2018] C. Yin, H. Jiang, L. Li, R. Lü, and S. Chen, Geometrical meaning of winding number and its characterization of topological phases in one-dimensional chiral non-hermitian systems, Phys. Rev. A 97, 052115 (2018).
* Sayyad _et al._ [2021] S. Sayyad, J. Yu, A. G. Grushin, and L. M. Sieberer, Entanglement spectrum crossings reveal non-hermitian dynamical topology, Phys. Rev. Research 3, 033022 (2021).
* Mostafazadeh [2002] A. Mostafazadeh, Pseudo-hermiticity versus pt symmetry: The necessary condition for the reality of the spectrum of a non-hermitian hamiltonian, Journal of Mathematical Physics 43, 205 (2002).
* Rivero and Ge [2020] J. D. H. Rivero and L. Ge, Pseudochirality: A manifestation of noether’s theorem in non-hermitian systems, Phys. Rev. Lett. 125, 083902 (2020).
* Yoshida _et al._ [2021] T. Yoshida, R. Okugawa, and Y. Hatsugai, Discriminant indicators with generalized inversion symmetry (2021).
* Bender and Boettcher [1998] C. M. Bender and S. Boettcher, Real spectra in non-hermitian hamiltonians having $\mathcal{PT}$ symmetry, Phys. Rev. Lett. 80, 5243 (1998).
* Delplace _et al._ [2021] P. Delplace, T. Yoshida, and Y. Hatsugai, Symmetry-protected higher-order exceptional points and their topological characterization, Physical Review Letters 127, 186602 (2021).
* Lee _et al._ [2019] J. Y. Lee, J. Ahn, H. Zhou, and A. Vishwanath, Topological Correspondence between Hermitian and Non-Hermitian Systems: Anomalous Dynamics, Physical Review Letters 123, 206404 (2019).
* Bernard and LeClair [2002] D. Bernard and A. LeClair, A classification of non-hermitian random matrices, Statistical Field Theories , 207–214 (2002).
* Yuce [2018] C. Yuce, Stable topological edge states in a non-hermitian four-band model, Phys. Rev. A 98, 012111 (2018).
* Okugawa and Yokoyama [2019] R. Okugawa and T. Yokoyama, Topological exceptional surfaces in non-Hermitian systems with parity-time and parity-particle-hole symmetries, Physical Review B 99, 1 (2019).
* Budich _et al._ [2019] J. C. Budich, J. Carlström, F. K. Kunst, and E. J. Bergholtz, Symmetry-protected nodal phases in non-Hermitian systems, Physical Review B 99, 1 (2019).
* Stålhammar and Bergholtz [2021] M. Stålhammar and E. J. Bergholtz, Classification of exceptional nodal topologies protected by $\mathcal{PT}$ symmetry, Phys. Rev. B 104, L201104 (2021).
* Crippa _et al._ [2021] L. Crippa, J. C. Budich, and G. Sangiovanni, Fourth-order exceptional points in correlated quantum many-body systems, Physical Review B 104, 1 (2021).
* Yang _et al._ [2021] Z. Yang, A. P. Schnyder, J. Hu, and C. K. Chiu, Fermion Doubling Theorems in Two-Dimensional Non-Hermitian Systems for Fermi Points and Exceptional Points, Physical Review Letters 126, 7 (2021).
* Lü _et al._ [2018] H. Lü, C. Wang, L. Yang, and H. Jing, Optomechanically induced transparency at exceptional points, Phys. Rev. Applied 10, 014006 (2018).
* Miri and Alù [2019] M.-A. Miri and A. Alù, Exceptional points in optics and photonics, Science 363, eaar7709 (2019).
* Dembowski _et al._ [2001] C. Dembowski, H.-D. Gräf, H. L. Harney, A. Heine, W. D. Heiss, H. Rehfeld, and A. Richter, Experimental observation of the topological structure of exceptional points, Phys. Rev. Lett. 86, 787 (2001).
* Sakhdari _et al._ [2021] M. Sakhdari, M. Hajizadegan, Q. Zhong, D. N. Christodoulides, R. El-Ganainy, and P. Y. Chen, Experimental observation of pt symmetry breaking near divergent exceptional points (2021), arXiv:2103.13299 [physics.app-ph] .
* Ding _et al._ [2021] L. Ding, K. Shi, Q. Zhang, D. Shen, X. Zhang, and W. Zhang, Experimental determination of $\mathcal{PT}$-symmetric exceptional points in a single trapped ion, Phys. Rev. Lett. 126, 083604 (2021).
* Kazemi _et al._ [2020] H. Kazemi, M. Y. Nada, A. Nikzamir, F. Maddaleno, and F. Capolino, Experimental demonstration of exceptional points of degeneracy in linear time periodic systems and exceptional sensitivity (2020), arXiv:1908.08516 [physics.app-ph] .
* Wiersig [2020] J. Wiersig, Review of exceptional point-based sensors, Photon. Res. 8, 1457 (2020).
* Longhi and Feng [2017] S. Longhi and L. Feng, Unidirectional lasing in semiconductor microring lasers at an exceptional point, Photon. Res. 5, B1 (2017).
* Huang _et al._ [2017] Y. Huang, Y. Shen, C. Min, S. Fan, and G. Veronis, Unidirectional reflectionless light propagation at exceptional points, Nanophotonics 6, 977 (2017).
* Mandal and Bergholtz [2021] I. Mandal and E. J. Bergholtz, Symmetry and Higher-Order Exceptional Points, Physical Review Letters 127, 186601 (2021).
* Berry [2004] M. Berry, Physics of nonhermitian degeneracies, Czechoslovak Journal of Physics 54, 1039 (2004).
* Carlström and Bergholtz [2018] J. Carlström and E. J. Bergholtz, Exceptional links and twisted Fermi ribbons in non-Hermitian systems, Physical Review A 98, 1 (2018).
* Demange and Graefe [2011] G. Demange and E.-M. Graefe, Signatures of three coalescing eigenfunctions, Journal of Physics A-mathematical and Theoretical 45 (2011).
* Esaki _et al._ [2011] K. Esaki, M. Sato, K. Hasebe, and M. Kohmoto, Edge states and topological phases in non-Hermitian systems, Physical Review B - Condensed Matter and Materials Physics 84, 1 (2011).
* Sato _et al._ [2012] M. Sato, K. Hasebe, K. Esaki, and M. Kohmoto, Time-Reversal Symmetry in Non-Hermitian Systems, Progress of Theoretical Physics 127, 937 (2012).
* Brown [1994] L. Brown, _Quantum Field Theory_ (Cambridge University Press, 1994).
* Curtright _et al._ [2020] T. L. Curtright, D. B. Fairlie, and H. Alshal, A galileon primer (2020), arXiv:1212.6972 [hep-th] .
* Jiang [2020] L. Jiang, _Nonreciprocal dynamics in a cryogenic optomechanical system_ , Ph.D. thesis (2020).
* Brand [1964] L. Brand, The companion matrix and its properties, The American Mathematical Monthly 71, 629 (1964).
* Arnold [1971] V. I. Arnold, On matrices depending on parameters, Russian Mathematical Surveys 26, 29 (1971).
* Ma and Edelman [1998] Y. Ma and A. Edelman, Nongeneric eigenvalue perturbations of jordan blocks, Linear Algebra and Its Applications 273, 45 (1998).
* Moro _et al._ [1997] J. Moro, J. V. Burke, and M. L. Overton, On the lidskii–vishik–lyusternik perturbation theory for eigenvalues of matrices with arbitrary jordan structure, SIAM J. Matrix Anal. Appl. 18, 793 (1997).
* Wilkinson [1965] J. Wilkinson, _The algebraic eigenvalue problem_ (Clarendon, Oxford, 1965).
* Burke and Overton [1992] J. V. Burke and M. L. Overton, Stable perturbations of nonsymmetric matrices, Linear Algebra Appl. 171, 249 (1992).
* Lidskii [1966] V. Lidskii, Perturbation theory of non-conjugate operators, USSR Computational Mathematics and Mathematical Physics 6, 73 (1966).
* [68] We note that what we call chiral symmetry in this paper is called pseudo-chiral symmetry in Ref. 34.
* Everitt [2018] B. Everitt, Galois theory - a first course (2018), arXiv:1804.04657 [math.GR] .
* [70] The parity symmetry in Ref. 52 coincides with the SLS in the nomenclature of this paper.
* Jin _et al._ [2021] L. Jin, Y. Liu, X. Zhang, X. Dai, and G. Liu, Sixfold, fourfold, and threefold excitations in the rare-earth metal carbide ${R}_{2}{\mathrm{C}}_{3}$ (2021).
* Lang [2002] S. Lang, _Algebra_ (Springer, New York, NY, 2002).
* Gong _et al._ [2018] Z. Gong, Y. Ashida, K. Kawabata, K. Takasan, S. Higashikawa, and M. Ueda, Topological phases of non-hermitian systems, Phys. Rev. X 8, 031079 (2018).
* Liu and Chen [2019] C.-H. Liu and S. Chen, Topological classification of defects in non-hermitian systems, Phys. Rev. B 100, 144106 (2019).
* Georgi and Glashow [1982] H. Georgi and S. Glashow, _Lie Algebras In Particle Physics: From Isospin To Unified Theories_, Advanced Book Program (Basic Books, 1982).
* Barnett _et al._ [2012] R. Barnett, G. R. Boyd, and V. Galitski, SU(3) spin-orbit coupling in systems of ultracold atoms, Phys. Rev. Lett. 109, 235308 (2012).
## Appendix A Non-Hermitian Bernard-LeClair symmetries
Bernard and LeClair define non-Hermitian symmetries as follows [36].
* i.
_Q symmetry:_
$\displaystyle{\cal H}(\bm{k})=\varepsilon_{q}q{\cal H}^{\dagger}(\bm{k})q^{-1},\quad q^{2}=\mathbbm{1}.$ (91)
From the Q symmetry, we have
$\displaystyle{\cal H}(\bm{k})q|L_{n}(\bm{k})\rangle$
$\displaystyle=\varepsilon_{q}q{\cal
H}^{\dagger}(\bm{k})|L_{n}(\bm{k})\rangle,$ (92)
$\displaystyle\Rightarrow\epsilon(\bm{k})$
$\displaystyle=\varepsilon_{q}\epsilon^{*}(\bm{k}).$ (93)
The discriminant of ${\cal H}$, given by ${\cal D}(\bm{k}):=(-1)^{N(N-1)/2}\prod_{n\neq n^{\prime}}(\epsilon_{n}-\epsilon_{n^{\prime}})$, then mimics the behaviour of $\epsilon$ and reads
$\displaystyle{\cal D}(\bm{k})=\varepsilon_{q}{\cal D}^{*}(\bm{k}).$ (94)
* ii.
_C symmetry:_
$\displaystyle{\cal H}(-\bm{k})=\varepsilon_{c}c{\cal H}^{T}(\bm{k})c^{-1},\quad cc^{*}=\eta_{c}\mathbbm{1},$ (95)
where $\varepsilon_{c},\eta_{c}\in\{1,-1\}$. From the $\rm C$ symmetry, we
have
$\displaystyle c^{-1}{\cal H}(-\bm{k})|R_{n}(\bm{k})\rangle$
$\displaystyle=\varepsilon_{c}{\cal H}^{T}(\bm{k})c^{-1}|R_{n}(\bm{k})\rangle$
(96) $\displaystyle\Rightarrow\epsilon(-\bm{k})$
$\displaystyle=\varepsilon_{c}\epsilon(\bm{k}).$ (97)
To reach the last equality, we have used $(A-\lambda\mathbbm{1})^{T}=A^{T}-\lambda\mathbbm{1}$ and $\det[M^{T}]=\det[M]$, which lead to the conclusion that $A$ and $A^{T}$ have the same eigenvalues. The discriminant of ${\cal H}$ then reads
$\displaystyle{\cal D}(-\bm{k})=\varepsilon_{c}{\cal D}(\bm{k}).$ (98)
* iii.
_K symmetry:_
$\displaystyle{\cal H}(-\bm{k})=\varepsilon_{k}\kappa{\cal H}^{*}(\bm{k})\kappa^{-1},\quad\kappa\kappa^{*}=\eta_{k}\mathbbm{1},$ (99)
where $\varepsilon_{k},\eta_{k}\in\\{1,-1\\}$. From the $\rm K$ symmetry, we have
$\displaystyle\kappa^{-1}{\cal H}(-\bm{k})|R_{n}(\bm{k})\rangle=\varepsilon_{k}{\cal H}^{*}(\bm{k})\kappa^{-1}|R_{n}(\bm{k})\rangle,$ (100)
$\displaystyle\Rightarrow\epsilon(-\bm{k})=\varepsilon_{k}\epsilon^{*}(\bm{k}).$ (101)
The discriminant of ${\cal H}$ then reads
$\displaystyle{\cal D}(-\bm{k})=\varepsilon_{k}{\cal D}^{*}(\bm{k}).$ (102)
* iv.
_P symmetry:_
$\displaystyle{\cal H}(\bm{k})=-p{\cal H}(\bm{k})p^{-1},\quad p^{2}=\mathbbm{1}.$ (103)
From the $\rm P$ symmetry, we have
$\displaystyle{\cal H}(\bm{k})p|R_{n}(\bm{k})\rangle=-p{\cal H}(\bm{k})|R_{n}(\bm{k})\rangle,$ (104)
$\displaystyle\Rightarrow\epsilon(\bm{k})=-\epsilon(\bm{k}).$ (105)
The discriminant of ${\cal H}$ then reads
$\displaystyle{\cal D}(\bm{k})=-{\cal D}(\bm{k}).$ (106)
Table 10: Summarized Bernard-LeClair symmetries and their associated energy constraints
Symmetry | | Symmetry Constraint | | Energy Constraint
---|---|---|---|---
$\rm Q$ symmetry | | ${\cal H}(\bm{k})=\varepsilon_{q}q{\cal H}^{\dagger}(\bm{k})q^{-1}$ | | $\epsilon(\bm{k})=\varepsilon_{q}\epsilon^{*}(\bm{k})$
$\rm C$ symmetry | | ${\cal H}(-\bm{k})=\varepsilon_{c}c{\cal H}^{T}(\bm{k})c^{-1}$ | | $\epsilon(-\bm{k})=\varepsilon_{c}\epsilon(\bm{k})$
$\rm K$ symmetry | | ${\cal H}(-\bm{k})=\varepsilon_{k}\kappa{\cal H}^{*}(\bm{k})\kappa^{-1}$ | | $\epsilon(-\bm{k})=\varepsilon_{k}\epsilon^{*}(\bm{k})$
$\rm P$ symmetry | | ${\cal H}(\bm{k})=-p{\cal H}(\bm{k})p^{-1}$ | | $\epsilon(\bm{k})=-\epsilon(\bm{k})$
Here $q^{2}=\mathbbm{1}$, $cc^{*}=\eta_{c}\mathbbm{1}$, $\kappa\kappa^{*}=\eta_{k}\mathbbm{1}$, and $p^{2}=\mathbbm{1}$, with $\eta_{\cal O},\varepsilon_{\cal O}\in\\{1,-1\\}$.
The above four unitary matrices satisfy
$\displaystyle c=\varepsilon_{pc}pcp^{T},\quad\kappa=\varepsilon_{p\kappa}p\kappa p^{T},$ (107)
$\displaystyle c=\varepsilon_{qc}qcq^{T},\quad p=\varepsilon_{pq}qpq^{\dagger},$ (108)
where $\varepsilon_{pc},\varepsilon_{p\kappa},\varepsilon_{qc},\varepsilon_{pq}\in\\{-1,1\\}$ [74].
The energy constraints from this classification, summarized in Table 10, are in agreement with our results from the other classification, summarized in Table 1. Note that the nomenclatures of the two classifications are linked as follows: the $C$ symmetry corresponds to the $\rm PHS$/$\rm TRS^{\dagger}$, the $Q$ symmetry corresponds to the $\rm CS$/$\rm psH$, the $K$ symmetry is related to the $\rm TRS$/$\rm PHS^{\dagger}$, and the $P$ symmetry is the same as the $\rm SLS$.
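As a quick numerical illustration of these energy constraints, the following minimal Python sketch (our own check, not part of the original derivation) builds a Q-symmetric Hamiltonian by symmetrizing a random matrix and verifies Eq. (93): the eigenvalue multiset is invariant under $\epsilon\to\varepsilon_{q}\epsilon^{*}$. The choice of $q$ below is an arbitrary example with $q^{2}=\mathbbm{1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps_q = 4, 1
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
q = np.diag([1, 1, -1, -1]).astype(complex)      # illustrative unitary with q^2 = 1

# Project onto the Q-symmetric part: H_q = eps_q * q H_q^dagger q^{-1}
H_q = 0.5 * (H + eps_q * q @ H.conj().T @ q)

ev = np.linalg.eigvals(H_q)
# Eq. (93): the spectrum maps onto itself under eps -> eps_q * eps^*
print(np.sort_complex(ev))
print(np.sort_complex(eps_q * ev.conj()))        # same multiset, up to rounding
```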
## Appendix B Symmetry-allowed Hamiltonians
In Table 11, we summarize the Hamiltonians allowed by each of the symmetries listed in Table 1.
Table 11: Summarized Hamiltonians with a particular symmetry
Symmetry | | Associated Hamiltonians
---|---|---
Particle-hole symmetry I (PHS) | | ${\cal H}_{\rm PHS}=\frac{1}{2}\Big{[}{\cal H}(-\bm{k})-{\cal C}_{-}{\cal H}^{T}(\bm{k}){\cal C}_{-}^{\dagger}\Big{]}$
Particle-hole symmetry II (PHS†) | | ${\cal H}_{\rm PHS^{\dagger}}=\frac{1}{2}\Big{[}{\cal H}(-\bm{k})-{\cal T}_{-}{\cal H}^{*}(\bm{k}){\cal T}_{-}^{\dagger}\Big{]}$
Time-reversal symmetry I (TRS) | | ${\cal H}_{\rm TRS}=\frac{1}{2}\Big{[}{\cal H}(-\bm{k})+{\cal T}_{+}{\cal H}^{*}(\bm{k}){\cal T}_{+}^{\dagger}\Big{]}$
Time-reversal symmetry II (TRS†) | | ${\cal H}_{\rm TRS^{\dagger}}=\frac{1}{2}\Big{[}{\cal H}(-\bm{k})+{\cal C}_{+}{\cal H}^{T}(\bm{k}){\cal C}_{+}^{\dagger}\Big{]}$
Chiral symmetry (CS) | | ${\cal H}_{\rm CS}=\frac{1}{2}\Big{[}{\cal H}(\bm{k})-\Gamma{\cal H}^{\dagger}(\bm{k})\Gamma^{-1}\Big{]}$
Pseudo-chiral symmetry (psCS) | | ${\cal H}_{\rm psCS}=\frac{1}{2}\Big{[}{\cal H}^{T}(\bm{k})-\Lambda{\cal H}(\bm{k})\Lambda^{-1}\Big{]}$
Sublattice-symmetry (SLS) | | ${\cal H}_{\rm SLS}=\frac{1}{2}\Big{[}{\cal S}{\cal H}(\bm{k}){\cal S}^{-1}-{\cal H}(\bm{k})\Big{]}$
Pseudo-Hermiticity ($\rm psH$) | | ${\cal H}_{\rm psH}=\frac{1}{2}\Big{[}{\cal H}(\bm{k})+\varsigma{\cal H}^{\dagger}(\bm{k})\varsigma^{-1}\Big{]}$
Inversion symmetry ($\cal I$) | | ${\cal H}_{\cal I}=\frac{1}{2}\Big{[}{\cal H}^{\dagger}(-\bm{k})+{\cal I}{\cal H}(\bm{k}){\cal I}^{-1}\Big{]}$
Parity ($\cal P$) symmetry | | ${\cal H}_{\cal P}=\frac{1}{2}\Big{[}{\cal H}(-\bm{k})+{\cal P}{\cal H}(\bm{k}){\cal P}^{-1}\Big{]}$
Parity-time ($\cal PT$) symmetry | | ${\cal H}_{\cal PT}=\frac{1}{2}\Big{[}{\cal H}(\bm{k})+({\cal P}{\cal T}_{+}){\cal H}^{*}(\bm{k})({\cal P}{\cal T}_{+})^{-1}\Big{]}$
Parity-particle-hole (${\cal CP}$) symmetry | | ${\cal H}_{\cal CP}=\frac{1}{2}\Big{[}{\cal H}(\bm{k})-({\cal CP}){\cal H}^{*}(\bm{k})({\cal CP})^{-1}\Big{]}$
Here the unitary operator $A\in\\{\Gamma,\Lambda,\varsigma,{\cal S},{\cal P},{\cal I}\\}$ obeys $A^{2}=\mathbbm{1}$, and the anti-unitary operator $A\in\\{{\cal C}_{\pm},{\cal T}_{\pm}\\}$ satisfies $AA^{*}=\zeta_{A}\mathbbm{1}$ with $\zeta_{A}=\pm 1$.
## Appendix C General considerations for the number of constraints to realize EP$n$s
In the main text, we present that $2(n-1)$ constraints should be satisfied to
find an EP$n$. We further show that these constraints explicitly read
$\mathop{\mathrm{Re}}[\det[{\cal H}]]=0$, $\mathop{\mathrm{Im}}[\det[{\cal
H}]]=0$, $\mathop{\mathrm{Re}}[\mathop{\mathrm{tr}}[{\cal H}^{2}]]=0$,
$\mathop{\mathrm{Im}}[\mathop{\mathrm{tr}}[{\cal H}^{2}]]=0$, $\ldots$,
$\mathop{\mathrm{Re}}[\mathop{\mathrm{tr}}[{\cal H}^{n-1}]]=0$ and
$\mathop{\mathrm{Im}}[\mathop{\mathrm{tr}}[{\cal H}^{n-1}]]=0$.
Based on this form of the constraints, we can deduce the following: (i) for $n=2j,j\in\mathbb{Z}\backslash\\{0\\}$, aside from the two constraints for $\det[{\cal H}]=0$, $(n-2)$ constraints set the traces of even powers of ${\cal H}$ to zero, i.e., $\mathop{\mathrm{tr}}[{\cal H}^{2l}]=0$ with $l<j$, and the remaining $(n-2)$ constraints enforce $\mathop{\mathrm{tr}}[{\cal H}^{2l+1}]=0$ with $l<j$. (ii) For $n=2j+1,j\in\mathbb{Z}\backslash\\{0\\}$, two constraints impose $\det[{\cal H}]=0$, $(n-1)$ constraints ensure $\mathop{\mathrm{tr}}[{\cal H}^{2l}]=0$ with $l\leq j$, and the remaining $(n-3)$ constraints impose $\mathop{\mathrm{tr}}[{\cal H}^{2l+1}]=0$ with $l<j$.
In the following, we derive general statements for EP$n$s based on the energy constraints listed in Table 1 and using $\det[{\cal H}]=\prod_{i}\epsilon_{i}$ and $\mathop{\mathrm{tr}}[{\cal H}^{k}]=\sum_{i}\epsilon_{i}^{k}$, with $\epsilon_{i}$ the eigenvalues of ${\cal H}$.
### C.1 Sublattice and pseudo-chiral symmetry
In the presence of SLS or psCS, symmetry constraints enforce that $\\{\epsilon({\bf k})\\}=\\{-\epsilon({\bf k})\\}$. As a result, for $n=2j$ we get $\mathop{\mathrm{tr}}[{\cal H}^{k}]=\epsilon_{1}^{k}+\epsilon_{2}^{k}+\ldots+\epsilon_{j}^{k}+(-\epsilon_{1})^{k}+(-\epsilon_{2})^{k}+\ldots+(-\epsilon_{j})^{k}=\epsilon_{1}^{k}+\epsilon_{2}^{k}+\ldots+\epsilon_{j}^{k}-\epsilon_{1}^{k}-\epsilon_{2}^{k}-\ldots-\epsilon_{j}^{k}=0$, $\forall k\in\textrm{odd}$, while generically $\det[{\cal H}]\neq 0$ and $\mathop{\mathrm{tr}}[{\cal H}^{k}]\neq 0$, $\forall k\in\mathrm{even}$. Therefore, one needs to satisfy $2+(n-2)=n$ constraints to find an EP$n$ with $n=2j$.
When $n=2j+1$, at least one of the eigenvalues needs to be zero, such that
$\det[{\cal H}]=0$. We also find $\mathop{\mathrm{tr}}[{\cal H}^{k}]=0$,
$\forall k\in\mathrm{odd}$ as before. We thus are left with $n-1$ constraints
that need to be satisfied to find an EP$n$ with $n=2j+1$.
### C.2 Parity-time and pseudo-Hermitian symmetries
In the presence of PT or psH symmetry, eigenvalues satisfy $\\{\epsilon({\bf k})\\}=\\{\epsilon^{*}({\bf k})\\}$. This implies that for $n=2j+1$ at least one of the eigenvalues should be real. We denote this real eigenvalue by $\epsilon_{j+1}$ for $n=2j+1$ in the following.
For $n=2j$, we find that $\det[{\cal H}]=\epsilon_{1}\times\epsilon_{2}\times\ldots\times\epsilon_{j}\times\epsilon_{1}^{*}\times\epsilon_{2}^{*}\times\ldots\times\epsilon_{j}^{*}=|\epsilon_{1}|^{2}|\epsilon_{2}|^{2}\ldots|\epsilon_{j}|^{2}\in\mathbb{R}$, whereas for $n=2j+1$, the determinant yields $\det[{\cal H}]=|\epsilon_{1}|^{2}|\epsilon_{2}|^{2}\ldots|\epsilon_{j}|^{2}\epsilon_{j+1}\in\mathbb{R}$ with $\epsilon_{j+1}\in\mathbb{R}$. Similarly, using that $(c^{*})^{k}=(c^{k})^{*}$, we find for $n=2j$ that $\mathop{\mathrm{tr}}[{\cal H}^{k}]=\epsilon_{1}^{k}+\ldots+\epsilon_{j}^{k}+(\epsilon_{1}^{k})^{*}+\ldots+(\epsilon_{j}^{k})^{*}=2\mathop{\mathrm{Re}}[\epsilon_{1}^{k}+\epsilon_{2}^{k}+\ldots+\epsilon_{j}^{k}]\in\mathbb{R}$, and for $n=2j+1$ that $\mathop{\mathrm{tr}}[{\cal H}^{k}]=2\mathop{\mathrm{Re}}[\epsilon_{1}^{k}+\epsilon_{2}^{k}+\ldots+\epsilon_{j}^{k}]+\epsilon_{j+1}^{k}\in\mathbb{R}$.
We thus conclude that $\mathop{\mathrm{Im}}[\det[{\cal H}]]=0$ and $\mathop{\mathrm{Im}}[\mathop{\mathrm{tr}}[{\cal H}^{k}]]=0$ generically, and we are left with $n-1$ constraints to realize EP$n$s, namely, $\mathop{\mathrm{Re}}[\det[{\cal H}]]=0$ and $\mathop{\mathrm{Re}}[\mathop{\mathrm{tr}}[{\cal H}^{k}]]=0$.
### C.3 Chiral and parity-particle-hole symmetries
In the presence of CS or $\mathcal{CP}$ symmetry, eigenvalues display $\\{\epsilon({\bf k})\\}=\\{-\epsilon^{*}({\bf k})\\}$. We note that CS is not defined for $n=2j+1$; as a result, the statements for odd dimensions in the following are relevant only for the $\mathcal{CP}$ symmetry. For $n=2j+1$, we infer from the relation between the sets of eigenvalues that at least one of the eigenvalues is imaginary. We denote this eigenvalue by $\epsilon_{j+1}$ for $n=2j+1$.
For $n=2j$, we find that $\det[{\cal H}]=\epsilon_{1}\times\epsilon_{2}\times\ldots\times\epsilon_{j}\times(-\epsilon_{1}^{*})\times(-\epsilon_{2}^{*})\times\ldots\times(-\epsilon_{j}^{*})=(-1)^{j}|\epsilon_{1}|^{2}|\epsilon_{2}|^{2}\ldots|\epsilon_{j}|^{2}\in\mathbb{R}$, whereas for $n=2j+1$, we find that $\det[{\cal H}]=(-1)^{j}|\epsilon_{1}|^{2}|\epsilon_{2}|^{2}\ldots|\epsilon_{j}|^{2}\epsilon_{j+1}\in\mathop{\mathrm{i}}\mathbb{R}$ with $\epsilon_{j+1}\in\mathop{\mathrm{i}}\mathbb{R}$.
For the traces, we find for $n=2j$ and $k\in\textrm{odd}$ that $\mathop{\mathrm{tr}}[{\cal H}^{k}]=\epsilon_{1}^{k}+\ldots+\epsilon_{j}^{k}+(-\epsilon_{1}^{*})^{k}+\ldots+(-\epsilon_{j}^{*})^{k}=\epsilon_{1}^{k}+\ldots+\epsilon_{j}^{k}-(\epsilon_{1}^{k})^{*}-\ldots-(\epsilon_{j}^{k})^{*}=2\mathop{\mathrm{i}}\mathop{\mathrm{Im}}[\epsilon_{1}^{k}+\ldots+\epsilon_{j}^{k}]\in\mathop{\mathrm{i}}\mathbb{R}$, while for $k\in\textrm{even}$ we get $\mathop{\mathrm{tr}}[{\cal H}^{k}]=\epsilon_{1}^{k}+\ldots+\epsilon_{j}^{k}+(\epsilon_{1}^{k})^{*}+\ldots+(\epsilon_{j}^{k})^{*}=2\mathop{\mathrm{Re}}[\epsilon_{1}^{k}+\ldots+\epsilon_{j}^{k}]\in\mathbb{R}$.
For $n=2j+1$, we find for $k\in\textrm{odd}$ that $\mathop{\mathrm{tr}}[{\cal H}^{k}]=2\mathop{\mathrm{i}}\mathop{\mathrm{Im}}[\epsilon_{1}^{k}+\ldots+\epsilon_{j}^{k}]+\epsilon_{j+1}^{k}\in\mathop{\mathrm{i}}\mathbb{R}$, where we use that $\epsilon_{j+1}^{k}=\mathop{\mathrm{i}}^{k}(\mathop{\mathrm{Im}}[\epsilon_{j+1}])^{k}\in\mathop{\mathrm{i}}\mathbb{R}$ for odd $k$. For $k\in\textrm{even}$, we find $\mathop{\mathrm{tr}}[{\cal H}^{k}]=2\mathop{\mathrm{Re}}[\epsilon_{1}^{k}+\ldots+\epsilon_{j}^{k}]+\epsilon_{j+1}^{k}\in\mathbb{R}$, where we use that $\epsilon_{j+1}^{k}=\mathop{\mathrm{i}}^{k}(\mathop{\mathrm{Im}}[\epsilon_{j+1}])^{k}\in\mathbb{R}$ for even $k$.
For any $n=2j$, we thus get $\det[{\cal H}]\in\mathbb{R}$,
$\mathop{\mathrm{tr}}[{\cal H}^{k}]\in\mathbb{R}$ $\forall k\in\textrm{even}$
and $\mathop{\mathrm{tr}}[{\cal H}^{k}]\in\mathop{\mathrm{i}}\mathbb{R}$
$\forall k\in\textrm{odd}$. This gives us $1+(n-2)/2+(n-2)/2=n-1$ constraints.
For $n=2j+1$, we obtain $\det[{\cal H}]\in\mathop{\mathrm{i}}\mathbb{R}$,
$\mathop{\mathrm{tr}}[{\cal H}^{k}]\in\mathbb{R}$, $\forall k\in\textrm{even}$
and $\mathop{\mathrm{tr}}[{\cal H}^{k}]\in\mathop{\mathrm{i}}\mathbb{R}$,
$\forall k\in\textrm{odd}$ leading to $1+(n-1)/2+(n-3)/2=n-1$ constraints.
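The counting above can be checked numerically. The sketch below (our own illustrative check, with arbitrary example choices for the symmetry operators) builds symmetry-allowed $4\times 4$ Hamiltonians via the projections of Table 11 and prints $\det[{\cal H}]$ and the low-order traces: for SLS the odd traces vanish identically, for psH the determinant and the traces are real, and for $\mathcal{CP}$ the determinant and even traces are real while the odd traces are purely imaginary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

S = np.diag([1.0, -1.0, 1.0, -1.0]).astype(complex)   # example sublattice operator, S^2 = 1
sig = np.kron(np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])).astype(complex)  # example psH operator

H_sls = 0.5 * (S @ H @ S - H)               # Table 11: S H_sls S^{-1} = -H_sls
H_psh = 0.5 * (H + sig @ H.conj().T @ sig)  # Table 11: sig H_psh^dag sig^{-1} = H_psh
H_cp = 0.5 * (H - S @ H.conj() @ S)         # Table 11 (fixed momentum): (CP) H_cp^* (CP)^{-1} = -H_cp

for name, A in [("SLS", H_sls), ("psH", H_psh), ("CP ", H_cp)]:
    t2 = np.trace(A @ A)
    t3 = np.trace(A @ A @ A)
    print(name, "det:", np.round(np.linalg.det(A), 8),
          " tr[H^2]:", np.round(t2, 8), " tr[H^3]:", np.round(t3, 8))
```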
## Appendix D Basis matrices for two-, three-, and four-band systems
### D.1 Basis matrices for two-band systems
The basis matrices for two-band systems are the Pauli matrices, which read
$\displaystyle\sigma_{x}=\begin{pmatrix}0&1\\\ 1&0\end{pmatrix},\quad\sigma_{y}=\begin{pmatrix}0&-\mathop{\mathrm{i}}\\\ \mathop{\mathrm{i}}&0\end{pmatrix},\quad\sigma_{z}=\begin{pmatrix}1&0\\\ 0&-1\end{pmatrix}.$ (109)
### D.2 Basis matrices for three-band systems
The basis matrices for three-band systems are the Gell-Mann matrices, which span the Lie algebra of the SU(3) group,
$\displaystyle\mathop{\mathrm{tr}}[M^{\alpha}]=0\quad\forall\alpha\in\\{1,\ldots
8\\},\,\text{with }(M^{\alpha})^{\dagger}=M^{\alpha},$ (110) $\displaystyle
M^{1}$ $\displaystyle=\begin{pmatrix}0&-\mathop{\mathrm{i}}&0\\\
\mathop{\mathrm{i}}&0&0\\\ 0&0&0\end{pmatrix},\quad
M^{2}=\begin{pmatrix}0&0&-\mathop{\mathrm{i}}\\\ 0&0&0\\\
\mathop{\mathrm{i}}&0&0\end{pmatrix},$ (111) $\displaystyle M^{3}$
$\displaystyle=\begin{pmatrix}0&0&0\\\ 0&0&-\mathop{\mathrm{i}}\\\
0&\mathop{\mathrm{i}}&0\end{pmatrix},\quad M^{4}=\begin{pmatrix}0&1&0\\\
1&0&0\\\ 0&0&0\end{pmatrix},$ (112) $\displaystyle M^{5}$
$\displaystyle=\begin{pmatrix}0&0&1\\\ 0&0&0\\\ 1&0&0\end{pmatrix},\quad
M^{6}=\begin{pmatrix}0&0&0\\\ 0&0&1\\\ 0&1&0\end{pmatrix},$ (113)
$\displaystyle M^{7}$ $\displaystyle=\begin{pmatrix}1&0&0\\\ 0&-1&0\\\
0&0&0\end{pmatrix},\quad M^{8}=\begin{pmatrix}\frac{1}{\sqrt{3}}&0&0\\\
0&\frac{1}{\sqrt{3}}&0\\\ 0&0&-\frac{2}{\sqrt{3}}\end{pmatrix}.$ (114)
These matrices satisfy (anti-)commutation relations and the $SU(3)$ Fierz
completeness relations
$\displaystyle[M^{\alpha},M^{\beta}]$
$\displaystyle=2\mathop{\mathrm{i}}f_{\alpha\beta\gamma}M^{\gamma},$ (115)
$\displaystyle\\{M^{\alpha},M^{\beta}\\}$
$\displaystyle=\frac{4}{3}\delta_{\alpha\beta}\mathbbm{1}+2d_{\alpha\beta\gamma}M^{\gamma},$
(116) $\displaystyle\delta_{il}\delta_{kj}$
$\displaystyle=\frac{1}{3}\delta_{ij}\delta_{kl}+\frac{1}{2}M^{\alpha}_{ij}M^{\alpha}_{kl},$
(117) $\displaystyle M^{\alpha}_{ij}M^{\alpha}_{kl}$
$\displaystyle=\frac{16}{9}\delta_{il}\delta_{kj}-\frac{1}{3}M^{\alpha}_{il}M^{\alpha}_{kj}.$
(118)
Here $d_{\alpha\beta\gamma}$ ($f_{\alpha\beta\gamma}$) are the totally symmetric (antisymmetric) structure constants of SU(3) [75, 76].
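As a sanity check, the following short Python sketch (our own, not part of the original text) constructs the eight matrices listed above and verifies their tracelessness and Hermiticity, Eq. (110), together with the completeness relation of Eq. (117).

```python
import numpy as np

i = 1j
M = [np.array(m, dtype=complex) for m in [
    [[0, -i, 0], [i, 0, 0], [0, 0, 0]],       # M^1
    [[0, 0, -i], [0, 0, 0], [i, 0, 0]],       # M^2
    [[0, 0, 0], [0, 0, -i], [0, i, 0]],       # M^3
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],        # M^4
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],        # M^5
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],        # M^6
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],       # M^7
    np.diag([1, 1, -2]) / np.sqrt(3),         # M^8
]]

assert all(abs(np.trace(m)) < 1e-12 for m in M)        # Eq. (110): tracelessness
assert all(np.allclose(m, m.conj().T) for m in M)      # Eq. (110): Hermiticity

# Eq. (117): delta_il delta_kj = (1/3) delta_ij delta_kl + (1/2) M^a_ij M^a_kl
d = np.eye(3)
lhs = np.einsum('il,kj->ijkl', d, d)
rhs = np.einsum('ij,kl->ijkl', d, d) / 3 \
    + sum(np.einsum('ij,kl->ijkl', m, m) for m in M) / 2
assert np.allclose(lhs, rhs)
print("Gell-Mann identities verified.")
```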
### D.3 Basis matrices for four-band systems
The basis matrices for four-band systems are the generalized Gell-Mann matrices, which span the Lie algebra of the SU(4) group,
$\displaystyle\mathop{\mathrm{tr}}[\Lambda^{\alpha}]=0\quad\forall\alpha\in\\{1,\ldots
15\\},\,\text{with }(\Lambda^{\alpha})^{\dagger}=\Lambda^{\alpha},$ (119)
$\displaystyle\Lambda^{1}$
$\displaystyle=\begin{pmatrix}0&-\mathop{\mathrm{i}}&0&0\\\
\mathop{\mathrm{i}}&0&0&0\\\ 0&0&0&0\\\ 0&0&0&0\\\
\end{pmatrix},\quad\Lambda^{2}=\begin{pmatrix}0&0&-\mathop{\mathrm{i}}&0\\\
0&0&0&0\\\ \mathop{\mathrm{i}}&0&0&0\\\ 0&0&0&0\\\ \end{pmatrix},$ (120)
$\displaystyle\Lambda^{3}$
$\displaystyle=\begin{pmatrix}0&0&0&-\mathop{\mathrm{i}}\\\ 0&0&0&0\\\
0&0&0&0\\\ \mathop{\mathrm{i}}&0&0&0\\\
\end{pmatrix},\quad\Lambda^{4}=\begin{pmatrix}0&0&0&0\\\
0&0&-\mathop{\mathrm{i}}&0\\\ 0&\mathop{\mathrm{i}}&0&0\\\ 0&0&0&0\\\
\end{pmatrix},$ (121) $\displaystyle\Lambda^{5}$
$\displaystyle=\begin{pmatrix}0&0&0&0\\\ 0&0&0&-\mathop{\mathrm{i}}\\\
0&0&0&0\\\ 0&\mathop{\mathrm{i}}&0&0\\\
\end{pmatrix},\quad\Lambda^{6}=\begin{pmatrix}0&0&0&0\\\ 0&0&0&0\\\
0&0&0&-\mathop{\mathrm{i}}\\\ 0&0&\mathop{\mathrm{i}}&0\\\ \end{pmatrix},$
(122) $\displaystyle\Lambda^{7}$ $\displaystyle=\begin{pmatrix}0&1&0&0\\\
1&0&0&0\\\ 0&0&0&0\\\ 0&0&0&0\\\
\end{pmatrix},\quad\Lambda^{8}=\begin{pmatrix}0&0&1&0\\\ 0&0&0&0\\\ 1&0&0&0\\\
0&0&0&0\\\ \end{pmatrix},$ (123) $\displaystyle\Lambda^{9}$
$\displaystyle=\begin{pmatrix}0&0&0&1\\\ 0&0&0&0\\\ 0&0&0&0\\\ 1&0&0&0\\\
\end{pmatrix},\quad\Lambda^{10}=\begin{pmatrix}0&0&0&0\\\ 0&0&1&0\\\
0&1&0&0\\\ 0&0&0&0\\\ \end{pmatrix},$ (124) $\displaystyle\Lambda^{11}$
$\displaystyle=\begin{pmatrix}0&0&0&0\\\ 0&0&0&1\\\ 0&0&0&0\\\ 0&1&0&0\\\
\end{pmatrix},\quad\Lambda^{12}=\begin{pmatrix}0&0&0&0\\\ 0&0&0&0\\\
0&0&0&1\\\ 0&0&1&0\\\ \end{pmatrix},$ (125) $\displaystyle\Lambda^{13}$
$\displaystyle=\begin{pmatrix}1&0&0&0\\\ 0&-1&0&0\\\ 0&0&0&0\\\ 0&0&0&0\\\
\end{pmatrix},\,\Lambda^{14}=\begin{pmatrix}\frac{1}{\sqrt{3}}&0&0&0\\\
0&\frac{1}{\sqrt{3}}&0&0\\\ 0&0&-\frac{2}{\sqrt{3}}&0\\\ 0&0&0&0\\\
\end{pmatrix},$ (126) $\displaystyle\Lambda^{15}$
$\displaystyle=\begin{pmatrix}\frac{1}{\sqrt{6}}&0&0&0\\\
0&\frac{1}{\sqrt{6}}&0&0\\\ 0&0&\frac{1}{\sqrt{6}}&0\\\
0&0&0&-\sqrt{\frac{3}{2}}\\\ \end{pmatrix}.$ (127)
Aside from the above matrices, one can use the $\Gamma$ matrices, a basis of the $\Gamma$-group, as
$\displaystyle\Gamma_{1}=\sigma_{x}\otimes\tau_{0}=\Lambda^{8}+\Lambda^{11}=\left(\begin{array}[]{cccc}0&0&1&0\\\
0&0&0&1\\\ 1&0&0&0\\\ 0&1&0&0\\\ \end{array}\right),$ (132)
$\displaystyle\Gamma_{2}=\sigma_{y}\otimes\tau_{y}=\Lambda^{10}-\Lambda^{9}=\left(\begin{array}[]{cccc}0&0&0&-1\\\
0&0&1&0\\\ 0&1&0&0\\\ -1&0&0&0\\\ \end{array}\right),$ (137)
$\displaystyle\Gamma_{3}=\sigma_{z}\otimes\tau_{0}=\frac{2}{\sqrt{3}}\Lambda^{14}+\sqrt{\frac{2}{3}}\Lambda^{15}=\left(\begin{array}[]{cccc}1&0&0&0\\\
0&1&0&0\\\ 0&0&-1&0\\\ 0&0&0&-1\\\ \end{array}\right),$ (142)
$\displaystyle\Gamma_{4}=\sigma_{y}\otimes\tau_{x}=\Lambda^{3}+\Lambda^{4}=\left(\begin{array}[]{cccc}0&0&0&-\mathop{\mathrm{i}}\\\
0&0&-\mathop{\mathrm{i}}&0\\\ 0&\mathop{\mathrm{i}}&0&0\\\
\mathop{\mathrm{i}}&0&0&0\\\ \end{array}\right),$ (147)
$\displaystyle\Gamma_{5}=\sigma_{y}\otimes\tau_{z}=\Lambda^{2}-\Lambda^{5}=\left(\begin{array}[]{cccc}0&0&-\mathop{\mathrm{i}}&0\\\
0&0&0&\mathop{\mathrm{i}}\\\ \mathop{\mathrm{i}}&0&0&0\\\
0&-\mathop{\mathrm{i}}&0&0\\\ \end{array}\right).$ (152)
The above $\Gamma$ matrices satisfy the Clifford algebra, i.e. $\\{\Gamma^{\mu},\Gamma^{\nu}\\}=2\eta^{\mu\nu}\mathbbm{1}_{4\times 4}$ with $\mu,\nu\in\\{1,\ldots,5\\}$, where $\eta^{\mu\nu}$ denotes the metric signature of the space, i.e., Minkowski or Euclidean signatures. Using $\Gamma$ matrices in Hermitian systems implies the presence of spatial rotations and Lorentz boosts in these systems.
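The Clifford property (here with Euclidean signature, $\eta^{\mu\nu}=\delta^{\mu\nu}$) is easy to verify numerically; the short sketch below (our own illustrative check) builds the five $\Gamma$ matrices from Kronecker products and confirms $\\{\Gamma^{\mu},\Gamma^{\nu}\\}=2\delta^{\mu\nu}\mathbbm{1}_{4\times 4}$.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Gamma matrices of Eqs. (132)-(152), as sigma (x) tau Kronecker products
G = [np.kron(sx, s0), np.kron(sy, sy), np.kron(sz, s0),
     np.kron(sy, sx), np.kron(sy, sz)]

for mu in range(5):
    for nu in range(5):
        anti = G[mu] @ G[nu] + G[nu] @ G[mu]
        assert np.allclose(anti, 2 * (mu == nu) * np.eye(4))
print("Clifford algebra verified for all five Gamma matrices.")
```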
## Appendix E Generic eigenvalue solutions for a three-band model
Here we present the solutions to the third-order characteristic polynomial in Eq. (30), which are the eigenvalues of the corresponding three-band Hamiltonian in the main text. These solutions read
$\displaystyle\lambda_{1}$
$\displaystyle=\frac{1}{6}\left(2^{2/3}\sqrt[3]{\sqrt{4\eta^{3}+\nu^{2}}+\nu}-\frac{2\sqrt[3]{2}\eta}{\sqrt[3]{\sqrt{4\eta^{3}+\nu^{2}}+\nu}}+2\mathop{\mathrm{tr}}[{\cal
H}]\right),$ (153) $\displaystyle\lambda_{2}$
$\displaystyle=\frac{1}{12}\left(2^{2/3}\mathop{\mathrm{i}}\left(\mathop{\mathrm{i}}+\sqrt{3}\right)\sqrt[3]{\sqrt{4\eta^{3}+\nu^{2}}+\nu}+\frac{\sqrt[3]{2}\left(2+2\mathop{\mathrm{i}}\sqrt{3}\right)\eta}{\sqrt[3]{\sqrt{4\eta^{3}+\nu^{2}}+\nu}}+4\mathop{\mathrm{tr}}[{\cal
H}]\right),$ (154) $\displaystyle\lambda_{3}$
$\displaystyle=\frac{1}{12}\left(2^{2/3}\mathop{\mathrm{i}}\left(\mathop{\mathrm{i}}-\sqrt{3}\right)\sqrt[3]{\sqrt{4\eta^{3}+\nu^{2}}+\nu}+\frac{\sqrt[3]{2}\left(2-2\mathop{\mathrm{i}}\sqrt{3}\right)\eta}{\sqrt[3]{\sqrt{4\eta^{3}+\nu^{2}}+\nu}}+4\mathop{\mathrm{tr}}[{\cal
H}]\right).$ (155)
Here $\eta$ and $\nu$ are defined in Eqs. (32, 33) in the main text.
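As a consistency check on these expressions, one can verify symbolically that the three roots sum to $\mathop{\mathrm{tr}}[{\cal H}]$, as required for any characteristic polynomial. The sketch below (our own check) treats $\eta$ and $\nu$, whose definitions live in the main text, as free symbols.

```python
import sympy as sp

eta, nu, trH = sp.symbols('eta nu trH')
C = sp.cbrt(sp.sqrt(4 * eta**3 + nu**2) + nu)

# Eqs. (153)-(155)
l1 = (2**sp.Rational(2, 3) * C - 2 * sp.cbrt(2) * eta / C + 2 * trH) / 6
l2 = (2**sp.Rational(2, 3) * sp.I * (sp.I + sp.sqrt(3)) * C
      + sp.cbrt(2) * (2 + 2 * sp.I * sp.sqrt(3)) * eta / C + 4 * trH) / 12
l3 = (2**sp.Rational(2, 3) * sp.I * (sp.I - sp.sqrt(3)) * C
      + sp.cbrt(2) * (2 - 2 * sp.I * sp.sqrt(3)) * eta / C + 4 * trH) / 12

print(sp.simplify(sp.expand(l1 + l2 + l3)))   # -> trH, i.e. the trace of H
```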
## Appendix F Generic eigenvalue solutions for a four-band model
Here we present the solutions to the fourth-order characteristic polynomial in Eq. (55), which are the eigenvalues of the Hamiltonian in Eq. (54). These solutions read
$\displaystyle\lambda_{1}=$
$\displaystyle\frac{-\sqrt{6}\sqrt{-\frac{3\sqrt{3}\kappa}{\sqrt{3a^{2}-8b+2\
2^{2/3}\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}+\frac{4\sqrt[3]{2}\eta}{\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}}}}+3a^{2}-8b-2^{2/3}\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}-\frac{2\sqrt[3]{2}\eta}{\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}}}}{12}$
$\displaystyle-\frac{\sqrt{3}\sqrt{3a^{2}-8b+2^{5/3}\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}+\frac{4\sqrt[3]{2}\eta}{\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}}}+3a}{12},$
(156) $\displaystyle\lambda_{2}$
$\displaystyle=\frac{\sqrt{6}\sqrt{-\frac{3\sqrt{3}\kappa}{\sqrt{3a^{2}-8b+2\
2^{2/3}\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}+\frac{4\sqrt[3]{2}\eta}{\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}}}}+3a^{2}-8b-2^{2/3}\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}-\frac{2\sqrt[3]{2}\eta}{\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}}}}{12}$
$\displaystyle-\frac{\sqrt{3}\sqrt{3a^{2}-8b+2^{5/3}\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}+\frac{4\sqrt[3]{2}\eta}{\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}}}+3a}{12},$
(157) $\displaystyle\lambda_{3}$
$\displaystyle=\frac{-\sqrt{6}\sqrt{\frac{3\sqrt{3}\kappa}{\sqrt{3a^{2}-8b+2\
2^{2/3}\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}+\frac{4\sqrt[3]{2}\eta}{\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}}}}+3a^{2}-8b-2^{2/3}\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}-\frac{2\sqrt[3]{2}\eta}{\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}}}}{12}$
$\displaystyle+\frac{\sqrt{3}\sqrt{3a^{2}-8b+2^{5/3}\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}+\frac{4\sqrt[3]{2}\eta}{\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}}}+3a}{12},$
(158) $\displaystyle\lambda_{4}$
$\displaystyle=\frac{\sqrt{6}\sqrt{\frac{3\sqrt{3}\kappa}{\sqrt{3a^{2}-8b+2\
2^{2/3}\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}+\frac{4\sqrt[3]{2}\eta}{\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}}}}+3a^{2}-8b-2^{2/3}\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}-\frac{2\sqrt[3]{2}\eta}{\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}}}}{12}$
$\displaystyle+\frac{\sqrt{3}\sqrt{3a^{2}-8b+2^{5/3}\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}+\frac{4\sqrt[3]{2}\eta}{\sqrt[3]{\sqrt{\nu^{2}-4\eta^{3}}+\nu}}}+3a}{12}.$
(159)
Here $\eta$, $\nu$ and $\kappa$ are defined in Eqs. (61, 62, 63) in the main
text.
# Quantum optical classifier with superexponential speedup
Simone Roncallo 0000-0003-3506-9027<EMAIL_ADDRESS>(Dipartimento di Fisica, Università degli Studi di Pavia, Via Agostino Bassi 6, I-27100, Pavia, Italy; INFN Sezione di Pavia, Via Agostino Bassi 6, I-27100, Pavia, Italy)
Angela Rosy Morgillo 0009-0006-6142-0692<EMAIL_ADDRESS>(Dipartimento di Fisica, Università degli Studi di Pavia, Via Agostino Bassi 6, I-27100, Pavia, Italy; INFN Sezione di Pavia, Via Agostino Bassi 6, I-27100, Pavia, Italy)
Chiara Macchiavello 0000-0002-2955-8759<EMAIL_ADDRESS>(Dipartimento di Fisica, Università degli Studi di Pavia, Via Agostino Bassi 6, I-27100, Pavia, Italy; INFN Sezione di Pavia, Via Agostino Bassi 6, I-27100, Pavia, Italy)
Lorenzo Maccone 0000-0002-6729-5312<EMAIL_ADDRESS>(Dipartimento di Fisica, Università degli Studi di Pavia, Via Agostino Bassi 6, I-27100, Pavia, Italy; INFN Sezione di Pavia, Via Agostino Bassi 6, I-27100, Pavia, Italy)
Seth Lloyd 0000-0003-0353-4529<EMAIL_ADDRESS>(Massachusetts Institute of Technology, Cambridge, MA 02139, USA)
###### Abstract
We present a quantum optical pattern recognition method for binary
classification tasks. Without direct image reconstruction, it classifies an
object in terms of the rate of two-photon coincidences at the output of a
Hong-Ou-Mandel interferometer, where both the input and the classifier
parameters are encoded into single-photon states. Our method exhibits the same behaviour as a classical neuron of unit depth. Once trained, it shows a constant $\mathcal{O}(1)$ complexity in the number of computational operations and photons required by a single classification, a superexponential advantage over a classical neuron (whose cost is at least linear in the image resolution). We provide simulations and analytical comparisons with analogous neural network architectures.
Quantum image classifier; Quantum optical neuron; Quantum neural networks;
Hong-Ou-Mandel effect;
## I Introduction
Image classification has been significantly affected by the introduction of
deep learning algorithms, providing several architectures that can learn and
extract image features [1, 2, 3, 4]. The large number of parameters involved
is motivating a consistent effort in reducing the cost of these methods, e.g.
by leveraging all-optical implementations that bypass hardware usage [5, 6, 7,
8, 9, 10, 11], or quantum mechanical effects that can provide a significant
speedup in these computations [12, 13, 14, 15, 16]. Quantum optical neural networks harness the best of both worlds, i.e. deep-learning capabilities realized through quantum optics [17, 18, 19, 20, 21].
In this paper, we introduce a quantum optical setup to classify objects
without reconstructing their images. Our approach relies on the Hong-Ou-Mandel effect, for which the probability that two photons exit a beam splitter in different modes depends on their distinguishability [22]. In our
implementation, an input object is targeted by a single-photon source, and
eventually followed by an arbitrary lens system. The single-photon state
interferes with another one, which encodes a set of trainable parameters, e.g.
through a spatial light modulator. Classification occurs by measuring the rate
of two-photon coincidences at the Hong-Ou-Mandel output (see Fig. 1). The
Hong-Ou-Mandel effect has been successfully applied to quantum kernel
evaluation [23], which can compute distances between pairs of data points in
the feature space. In this case, each point is sent to one branch of the
interferometer, encoded in the temporal modes of a single-photon state. In our
method, the interferometer has only one independent branch, which takes the
spatial modes of a single-photon state reflected off the target object. The
other branch remains fixed after training, and contains the layer of
parameters. After the measurement, the response function of our apparatus
mathematically resembles that of a classical neuron. For this reason, we refer
to our setup as quantum optical neuron. By analytically comparing the resource
cost of the classical and quantum neurons, we show that our method requires
constant $\mathcal{O}(1)$ computational operations and injected photons,
whereas the classical methods are at least linear in the image resolution: a
superexponential advantage.
Figure 1: Quantum optical neuron implemented by the Hong-Ou-Mandel
interferometric setup. An object is targeted by a single-photon source and
classified through the rate of two-photon coincidences at the interferometer
output, without reconstructing its full image. In the top branch, an
additional thin lens can translate the classification problem to the Fourier
domain.
## II METHOD
In this section, we discuss the apparatus of Fig. 1, without explicitly
modelling the probe. Two single-photon states are fed into the left and top
branches of a $50\\!:\\!50$ beam splitter, acting as input and processing
layers, respectively. In the left branch, the single-photon source reflects
off the object, and reaches the beam splitter after a linear optical system.
In the top branch, we consider a generic single-photon state, which depends on
a set of trainable real parameters. We count the two-photon coincidences at
the beam splitter output. We show how to interpret the Hong-Ou-Mandel response
as the one produced by a single-layer neural network-like operation on the
object image.
We call input and probe modes, i.e. $a$ and $b$, the modes fed into the left and top branches of the interferometer, respectively. In the input branch, a single photon with spectrum $\phi$ is generated at the longitudinal origin $z=0$, followed by an object with two-dimensional shape $\mathcal{O}$. An imaging system with transfer function $\mathcal{L}_{d}$, e.g. a pinhole or a linear optical apparatus, is placed after the object. Here, $z_{o}$ and $z_{i}$ are the
longitudinal positions of the object and the image plane, respectively, and
$d=z_{i}-z_{o}$ their displacement.
The output of the imaging optics reads (see Appendix A)
$\ket{\Psi_{\mathcal{I}}}=\int\mathop{}\\!\mathrm{d}^{2}k\
\hat{\mathcal{I}}_{\omega}(k|\mathcal{O})a^{\dagger}_{\omega}(k)\ket{0}\ ,$
(1)
with
$\hat{\mathcal{I}}_{\omega}(\cdot|\mathcal{O})=[(\hat{\phi}_{\omega}\hat{\mathfrak{H}}_{z_{o}})*\hat{\mathcal{O}}]\hat{\mathcal{L}}_{d}$
the total transfer function from the single-photon source to the image plane,
and $a^{\dagger}_{\omega}(k)$ the creation operator of a photon in the input
mode, acting on the vacuum state $\ket{0}$. The hat operator denotes the two-
dimensional Fourier transform on the transverse coordinates plane, $*$ the
convolution operation, $\mathfrak{H}_{z_{o}}$ the transfer function from the
source to the object plane, $k=(k_{x},k_{y})$ the transverse momentum, and
$\omega$ the frequency conjugated to the temporal degree of freedom of the
electromagnetic potential.
In the probe branch, a generic quantum state is prepared, eventually followed
by a linear optical system. At the beam splitter plane, the probe state reads
$\ket{\Psi_{\mathcal{U}}}=\int\mathop{}\\!\mathrm{d}^{2}k\
\hat{\mathcal{U}}_{\omega}(k|\lambda)b_{\omega}^{\dagger}(k)\ket{0}\ ,$ (2)
with $\lambda=\\{\lambda_{i_{1}\ldots i_{n}}\\}$ a collection of (trainable)
parameters, $\mathcal{U}$ the spatial spectrum of the probe, and
$b^{\dagger}_{\omega}(k)$ the creation operator of a photon in the probe mode.
A photodetector is placed at the output of each branch. After feeding both
states into a $50\\!:\\!50$ beam splitter, the rate of two-photon coincidences
reads
$\displaystyle p(1_{a}\cap 1_{b}|\lambda,\mathcal{O})=\frac{1}{2}\left[\alpha_{\lambda}(\mathcal{O})-f_{\lambda}(\mathcal{O})\right]\ ,$ (3)
$\displaystyle\text{with}\ \ \begin{aligned}\alpha_{\lambda}(\mathcal{O})&=||\mathcal{I}_{\omega}(\cdot|\mathcal{O})||^{2}||\mathcal{U}_{\omega}(\cdot|\lambda)||^{2}\ ,\\\ f_{\lambda}(\mathcal{O})&=\left|\langle\mathcal{I}_{\omega}(\cdot|\mathcal{O}),\mathcal{U}_{\omega}(\cdot|\lambda)\rangle\right|^{2}\ ,\end{aligned}$ (4)
where $||\cdot||$ and $\langle\cdot,\cdot\rangle$ denote the $L^{2}$-norm and
inner product, respectively. Here, $\alpha_{\lambda}(\mathcal{O})$ depends on
the normalization of the input and probe states, which can be
$\alpha_{\lambda}<1$ in the presence of optical losses. Whenever the two
spectra are indistinguishable, i.e. when $\mathcal{U}$ perfectly matches
$\mathcal{I}$, coincidences are not observed. On the other hand, the more
distinguishable the input and the probe states are, the smaller
$\langle\mathcal{I}(\cdot|\mathcal{O}),\mathcal{U}(\cdot|\lambda)\rangle$
becomes and the rate of coincidences increases. See Appendix B for a
derivation, and Appendix D, for a similar result in the Fourier domain.
At the image plane $I$, with transverse coordinates $r=(x,y)$, we have
$f_{\lambda}(\mathcal{O})=\left|\int_{I}\mathop{}\\!\mathrm{d}r\
\mathcal{I}_{\omega}(r|\mathcal{O})\mathcal{U}_{\omega}^{*}(r|\lambda)\right|^{2}\
.$ (5)
This integral measures the point-wise overlap between the input image and the
probe. We interpret it as the prediction of our classification model, where
$f_{\lambda}\in[0,1]$ represents the probability that $\mathcal{I}$ belongs to
the class of $\mathcal{U}$. In particular, $f_{\lambda}\to 0$ ($f_{\lambda}\to
1$) when the class of $\mathcal{I}$ is orthogonal to (is the same as)
$\mathcal{U}$. In the next section, we show how to encode a generic class in
$\mathcal{U}$, by means of the optimization of the set of parameters
$\lambda$.
The output measurement introduces a non-linear operation after the beam splitter, represented by the squared absolute value on the right-hand side of Eq. 5. We increase the predictability of our model by enhancing this non-linearity through the following post-processing operations. Consider the sigmoid (logistic) function
$\sigma(x):=\frac{1}{1+e^{-\beta x+\gamma}}\ ,$ (6)
where $\beta,\gamma$ are hyperparameters, i.e. constants with respect to the
training process. We introduce an additional trainable parameter
$b\in\mathbb{R}$, called bias, which, combined with $f_{\lambda}$ and
$\sigma$, yields
$F_{b\lambda}(\mathcal{O})=\sigma(f_{\lambda}(\mathcal{O})+b)\ ,$ (7)
which determines the label predicted by the Hong-Ou-Mandel apparatus. These
modifications can improve the performance of the neuron. The sigmoid increases
the non-linearity introduced by the squared absolute value, and so the
predictability of the model. In addition, the bias is introduced on heuristic
motivations: it compensates the constraint given by the normalization in Eq.
3, while enhancing the robustness of our protocol against optical losses
(which may affect the above-mentioned normalizability, yielding
$\alpha_{\lambda}<1$).
We now discuss the training stage. Consider a training set, i.e. an ensemble
of objects $\\{\mathcal{O}_{j}\\}$ with target labels
$\\{y_{j}\in\\{0,1\\}\\}$. We separately feed each object into the input
branch of the interferometer. Predicted and target classes are compared in
terms of their binary cross-entropy, which is used as loss function of a
gradient descent optimizer. The optimizer updates $\lambda$ through the
derivative of the loss function, whose only model-dependent contribution is
$\partial_{\lambda}f=2\real\left[\langle\mathcal{I}_{\omega},\mathcal{U}_{\omega}\rangle\langle\mathcal{I}_{\omega},\partial_{\lambda}\mathcal{U}_{\omega}\rangle^{*}\right]\
.$ (8)
Ideally, the training is complete after finding a set of parameters that
minimizes the loss. Notice that our model is resilient against the issue of
gradient explosion [24], since it depends on physical data and functions only
(see Appendix C for a discussion).
There is a formal relationship between the post-processed output of the Hong-
Ou-Mandel interferometer of Eq. 7 and that of a classical neuron. Consider
$f_{\lambda}(\mathcal{O})$ discretized and vectorized in a mesh of $N$ cells,
either in the spatial or in the Fourier domain. Then, Eq. 7 corresponds to the
composition of a real-valued neuron, with $N$ trainable weights, square
absolute value activation function and no bias, and a second neuron, with a
scalar unit weight, sigmoid activation function and a trainable bias. Namely
$G_{bw}(x)=\sigma\big{(}|w\cdot x|^{2}+b\big{)}\ ,$ (9)
where $x\in\mathbb{C}^{N}$ is the input, while $w\in\mathbb{C}^{N}$ and
$b\in\mathbb{R}$ are the weights and bias, respectively. We can formally
identify $G_{bw}(x)$ with $F_{b\lambda}(\mathcal{O})$ under the substitution
$\left(x,w\right)\xleftarrow{\sim}\big{(}\mathcal{I}_{\omega}(r|\mathcal{O}),\mathcal{U}_{\omega}(r|\lambda)\big{)}\
,$ (10)
where $\xleftarrow{\sim}$ is the discretization and vectorization to
$\mathbb{C}^{N}$. This analogy is represented in Fig. 2.
Figure 2: Mathematical relationship between the Hong-Ou-Mandel apparatus of
Fig. 1 and the classical neuron of Eq. 9. Each operation is identified with
the corresponding component of the optical interferometer.
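A minimal numpy sketch of Eq. (9), i.e. the classical single-layer equivalent of the quantum optical neuron (function names and the example values below are our own illustrative choices):

```python
import numpy as np

def sigmoid(xi, beta=1.0, gamma=0.0):
    """Sigmoid of Eq. (6) with hyperparameters beta, gamma."""
    return 1.0 / (1.0 + np.exp(-beta * xi + gamma))

def qon_response(x, w, b, beta=1.0, gamma=0.0):
    """Eq. (9): G_bw(x) = sigma(|w . x|^2 + b), the classical analogue of
    the post-processed Hong-Ou-Mandel response F of Eq. (7)."""
    f = np.abs(np.vdot(w, x)) ** 2        # |<I, U>|^2, cf. Eq. (5)
    return sigmoid(f + b, beta, gamma)

# Example: a normalized probe w perfectly matched to the input x gives f = 1
x = np.ones(1024) / np.sqrt(1024)
w = x.copy()
print(qon_response(x, w, b=0.0))          # sigmoid(1.0) ~ 0.73
```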
A classical neuron requires at least $N$ photons and $N$ computational
operations to classify an image composed of $N$ pixels. Our setup bypasses
both costs, by leveraging two essential features. On the one hand, it is
completely optical, avoiding the computational need of processing the image.
On the other hand, it classifies patterns through the Hong-Ou-Mandel effect,
reducing the photon cost of imaging. In both ways, it provides a
superexponential speedup, from $\mathcal{O}(N)$ to $\mathcal{O}(1)$. Photon
losses due to absorption introduce a constant overhead in both the classical
and quantum strategies, which depends on the total reflectivity of the object.
We summarize this discussion in Table 1. See Appendix E for a detailed
derivation.
| | QON | Classical
---|---|---
Computational (# of operations) | $\mathcal{O}(1)$ | $N$
Optical (# of photons): imaging | None | $\Theta(\varsigma^{-2}\langle x\rangle N)$
Optical (# of photons): classification | $\mathcal{O}(\varepsilon^{-2})$ | $\Omega\left(\varepsilon^{-2}\langle x\rangle N\right)$
Table 1: Computational and optical resources comparison between the quantum
optical neuron (QON) and its classical counterparts, when reconstructing and
classifying an image $x$ of $N$ pixels. Here, $\varsigma$ and $\langle
x\rangle$ are the standard deviation and the average brightness of the image
(which depend on the reflectivity of the object), while $\varepsilon$ is the
uncertainty on the classification outcome. Our method achieves a
superexponential speedup over its classical counterpart: $\mathcal{O}(1)$ vs.
$\mathcal{O}(N)$.
## III AMPLITUDE MODULATED PROBE
We specialize our discussion by replacing the generic probe state
$\mathcal{U}$ with a toy model of an amplitude spatial light modulator (SLM),
placed in the top branch of the Hong-Ou-Mandel interferometer, e.g. a liquid
crystal (LC) grid with negligible losses [25]. Different approaches can be
investigated, such as phase-only SLMs [26], which may exhibit superior resilience against losses.
Figure 3: Comparison between the quantum optical neuron (QON), a single
classical neuron and a convolutional network, all trained with the same number
of $\sim 1024$ parameters, optimizer and learning rates. The quantum optical
neuron is modelled by an amplitude modulated probe with resolution of
$32\times 32$ pixels, both in the spatial and in the Fourier domains. The
optimization is performed with learning rates $\eta_{\lambda}=0.075$ and
$\eta_{b}=0.005$. (a) Accuracy versus the number of training epochs for the
MNIST dataset. The models are trained to distinguish among images of _zeros_
and _ones_ , showing compatible results in terms of trainability and accuracy,
whose final value is above $99\%$. The inset is a history plot of the binary
cross-entropy, used as loss function in the gradient descent optimization.
(b-c) Accuracy and binary cross-entropy plots versus the number of training
epochs for the CIFAR-10 dataset. The models are trained to classify images of
_cats_ and _dogs_. Our method reaches an asymptotic accuracy above $58\%$,
showing an advantage with respect to its classical counterparts.
Consider a pattern on a greyscale LC grid with $N$ real amplitudes
$\\{\lambda_{\mu\nu}\\}$. Each pixel, labelled by $(\mu,\nu)$, is represented
by an $L\times L$ square with center $r_{\mu\nu}=(\mu+1/2,\nu+1/2)L$. Upon an
overall parameter-independent normalization, the probe can be approximated as
a combination of top-hat functions
$\mathcal{U}_{\omega}(r|\lambda)=\sum_{\mu,\nu}u(r-r_{\mu\nu})\frac{\lambda_{\mu\nu}}{||\lambda||}\
,$ (11)
where $||\lambda||^{2}=\sum_{\mu,\nu}\lambda_{\mu\nu}^{2}$ and
$u(r):=\theta(r+L/2)-\theta(r-L/2)$, with $\theta$ the two-dimensional
Heaviside step function. Under this choice, Eq. 5 simplifies to
$f_{\lambda}(\mathcal{O})=\left|\sum_{\mu,\nu}(u\star\mathcal{I}_{\omega})(r_{\mu\nu})\frac{\lambda_{\mu\nu}}{||\lambda||}\right|^{2}\
,$ (12)
where $\star$ is the cross-correlation operation. We introduce a bias and a
sigmoid activation function, so that the post-processed output reads
$F_{b\lambda}(\mathcal{O})=\sigma(f_{\lambda}(\mathcal{O})+b)$. Assuming that
$\mathcal{I}$ is real, Eq. 8 simplifies to
$\partial_{\mu\nu}f\simeq
2\frac{\sqrt{f}}{||\lambda||}\left[(u\star\mathcal{I}_{\omega})(r_{\mu\nu})-\sqrt{f}\frac{\lambda_{\mu\nu}}{||\lambda||}\right]\
,$ (13)
with $\partial_{\mu\nu}f:=\partial f/\partial\lambda_{\mu\nu}$. This
expression can be evaluated in an all-optical way, by taking the amplitude
measurement of $\mathcal{I}$ directly in the left branch of the
interferometer, before the beam splitter. This operation can be done off-line,
and once per training object. In the next section, we present a simulation of
these results, for different choices of the dataset.
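For a digitized image, the cell overlaps $(u\star\mathcal{I}_{\omega})(r_{\mu\nu})$ reduce to block sums over the SLM grid. A minimal numpy sketch of the resulting forward model, Eqs. (11)-(12); the names, grid sizes, and random inputs are illustrative choices of ours:

```python
import numpy as np

def cell_overlaps(image, grid):
    """Block sums of the image over a grid x grid SLM mesh: (u * I)(r_mu_nu) of Eq. (12)."""
    h, w = image.shape
    return image.reshape(grid, h // grid, grid, w // grid).sum(axis=(1, 3))

def f_lambda(image, lam):
    """Eq. (12): f = |sum_{mu,nu} (u * I)(r_mu_nu) lambda_mu_nu / ||lambda|| |^2."""
    overlaps = cell_overlaps(image, grid=lam.shape[0])
    return np.abs(np.sum(overlaps * lam) / np.linalg.norm(lam)) ** 2

# Example: a random 64x64 "image" binned onto a 32x32 probe pattern
rng = np.random.default_rng(0)
image = rng.random((64, 64))
image /= np.linalg.norm(image)
lam = rng.random((32, 32))
print(f_lambda(image, lam))   # toy value; the physical top-hat normalization keeps f in [0, 1]
```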
### III.1 Simulations
We present a simulation of the model introduced above, comparing its
performance against those of classical neural network-based techniques, for
different datasets. All the simulations are run in Python and TensorFlow [27],
and summarized in Fig. 3.
We tested our model using two widely recognized datasets: the MNIST, which
contains $28\times 28$ images of handwritten digits from $0$ to $9$, and the
CIFAR-10, comprised of $32\times 32$ color images, distributed across $10$
different classes. We guaranteed a fair comparison by increasing the MNIST
resolution to $32\times 32$ pixels (separately padding each image of the
dataset), while converting the CIFAR-10 to greyscale. We represent each
element of the dataset as $(x_{j},\ y_{j})$, where $y_{j}\in\\{0,1\\}$ is the
true class label, and $x_{j}$ is the input vector, obtained by discretizing
and vectorizing either the amplitudes $\mathcal{I}$ or their Fourier spectrum
$\hat{\mathcal{I}}$, thus bypassing the simulation of the imaging optics. We
adopt the binary cross-entropy as loss function, combined with the standard
(non-stochastic) gradient descent optimizer. We use the accuracy, i.e. the
proportion of correct predictions over the total ones, as figure of merit of
our results.
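For reference, a minimal TensorFlow sketch of the classically simulated neuron of Eq. (9) that we compare against; the layer and its name are our own illustration, mimicking the normalized probe of Eq. (11) for real-valued inputs:

```python
import tensorflow as tf

class QONLayer(tf.keras.layers.Layer):
    """sigma(|w . x|^2 + b), cf. Eq. (9), with w normalized like the probe of Eq. (11)."""
    def build(self, input_shape):
        n = int(input_shape[-1])
        self.w = self.add_weight(shape=(n, 1), initializer="random_normal", trainable=True)
        self.b = self.add_weight(shape=(1,), initializer="zeros", trainable=True)

    def call(self, x):
        w = self.w / tf.norm(self.w)            # probe normalization
        f = tf.square(tf.matmul(x, w))          # f_lambda of Eq. (12), real inputs
        return tf.sigmoid(f + self.b)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32)),
    tf.keras.layers.Flatten(),
    QONLayer(),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.075),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(...) on the padded MNIST or greyscale CIFAR-10 binary pairs
```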
Our model demonstrates significant performance on both datasets (see Fig. 3). On the MNIST, it achieves accuracy rates exceeding $99\%$ when discerning
between _zeros_ and _ones_. In the CIFAR-10, it reaches accuracy above $58\%$,
when distinguishing between _cats_ and _dogs_. This difference reflects the
complexity of the two classification tasks.
We compared our model against conventional neural network designs with a
similar number of parameters. Specifically, we considered a single neuron and
a convolutional neural network, commonly employed in pattern recognition tasks
[28, 29, 2]. Adopting the TensorFlow notation, the convolutional structure is:
Conv2D ($10$, $3\times 3$) $\to$ Conv2D ($4$, $2\times 2$) $\to$ MaxPooling2D
($2\times 2$). Roughly, all the architectures have $\sim 10^{3}$ trainable
parameters. All the models perform equally well on the MNIST dataset, both in terms
of trainability and final accuracy. When applied to the CIFAR-10 dataset, our
classifier outperforms the conventional ones, showing superior efficiency
under a strongly-constrained parameters count. All the findings emphasize the
competitive accuracy of our method, and also its comparative advantage in
pattern recognition tasks with a limited number of parameters.
## IV Conclusions
In summary, we introduced an interferometric setup of a quantum optical
classifier, with the Hong-Ou-Mandel effect as cornerstone of our
classification method. We demonstrated the mathematical relation between our
model and a classical neuron, constrained to unit depth, showing their
similarity in terms of structure and response function. Our design is
completely optical and single-photon based: it provides a superexponential
speedup with respect to its classical counterpart, in terms of number of
photons and computational resources. After modelling the classifier in terms
of a spatial light modulator, we numerically compared our performances against
those of standard neural network architectures, showing comparable to superior
capabilities in terms of accuracy and training convergence, under the same
number of parameters and depending on the pattern complexity.
## Acknowledgements
S.R. acknowledges support from the PRIN MUR Project 2022SW3RPY. A.R.M.
acknowledges support from the PNRR MUR Project PE0000023-NQSTI. C.M.
acknowledges support from the National Research Centre for HPC, Big Data and
Quantum Computing, PNRR MUR Project CN0000013-ICSC. L.M. acknowledges support
from the PRIN MUR Project 2022RATBS4 and from the U.S. Department of Energy,
Office of Science, National Quantum Information Science Research Centers,
Superconducting Quantum Materials and Systems Center (SQMS) under Contract No.
DE-AC02-07CH11359. S.L. acknowledges support from ARO, DOE, and DARPA.
## Data availability
The underlying code that generated the data for this study is openly available
in GitHub [30].
## APPENDIX A SINGLE-PHOTON ENCODING
In this section, we consider the single-photon state obtained at the output of
the left branch of the Hong-Ou-Mandel apparatus, providing a detailed
discussion of Eq. 1. We adopt units in which $c=1$.
Consider a generic single-photon state, generated by a monochromatic source
with longitudinal position $z$
$\ket{\Psi}=\int\mathop{}\\!\mathrm{d}^{3}k\
\hat{\Phi}(\mathbf{k})a^{\dagger}(\mathbf{k})\ket{0}\ ,$ (14)
with momentum spectrum $\Phi$ and $\mathbf{k}=(k_{x},k_{y},k_{z})$. We neglect
the polarization of the photon and consider the single-frequency-mode
assumption [31], i.e. we assume that the wavefront propagates along definite-
sign $z$-directions only. Then, $k=(k_{x},k_{y})$ represents the only
independent degrees of freedom of the single-photon state, which reads
$\displaystyle\ket{\Psi}$ $\displaystyle=\int\mathop{}\\!\mathrm{d}^{2}k\
\hat{\phi}_{\omega}(k)a^{\dagger}_{\omega}(k)\ket{0}\ ,$ (15)
$\displaystyle=\int_{S}\mathop{}\\!\mathrm{d}^{2}r\
\phi_{\omega}(r)a_{\omega}^{\dagger}(r)\ket{0}\ ,$ (16)
where
$\hat{\phi}_{\omega}(k)=\hat{\Phi}\left(k_{x},k_{y},\sqrt{\omega^{2}-k_{x}^{2}-k_{y}^{2}}\right)$
and $r=(r_{x},r_{y})$ labels the transverse coordinates on the source plane
$S$.
For simplicity, we assume that the source is placed at the longitudinal origin
$z=0$. Consider an object with two dimensional shape $\mathcal{O}$, placed at
longitudinal position $z_{o}$. After free-space propagation occurs, the
single-photon spectrum undergoes spatial amplitude modulation [32], that is
$\Psi_{\mathcal{O}}(r)=\mathcal{O}(r)\Psi_{\to}(r)$, with $\Psi_{\to}(r)$ the
spatial input wavefront on the object plane $O$. Namely
$\ket{\Psi_{\mathcal{O}}}=\int_{O}\mathop{}\\!\mathrm{d}^{2}r\
[\phi_{\omega}*\mathfrak{H}_{z_{o}}](r)\mathcal{O}(r)a^{\dagger}_{\omega}(r)\ket{0}\
,$ (17)
where $\mathfrak{H}_{z_{o}}$ denotes the free-space transfer function between
the $S$ and $O$ planes. Using twice the convolution theorem, it follows that
$\ket{\Psi_{\mathcal{O}}}=\int\mathop{}\\!\mathrm{d}^{2}k\ [(\hat{\phi}_{\omega}\hat{\mathfrak{H}}_{z_{o}})*\hat{\mathcal{O}}](k)a^{\dagger}_{\omega}(k)\ket{0}\ .$ (18)
Consider a linear optical system with transfer function $\mathcal{L}$, with image plane at longitudinal position $z_{i}$. By applying again the convolution theorem to $\mathcal{I}_{\omega}(\cdot|\mathcal{O})=\left((\phi_{\omega}*\mathfrak{H}_{z_{o}})\mathcal{O}\right)*\mathcal{L}_{d}$, with $d=z_{i}-z_{o}$,
we obtain
$\ket{\Psi_{\mathcal{I}}}=\int\mathop{}\\!\mathrm{d}^{2}k\
\hat{\mathcal{I}}_{\omega}(k|\mathcal{O})a^{\dagger}_{\omega}(k)\ket{0}\ ,$
(19)
with $\hat{\mathcal{I}}_{\omega}(k|\mathcal{O})=[(\hat{\phi}_{\omega}\hat{\mathfrak{H}}_{z_{o}})*\hat{\mathcal{O}}]\hat{\mathcal{L}}_{d}$.
Notice that $\mathcal{I}_{\omega}(r|\mathcal{O})$ describes the image formed
on a screen placed at distance $d$ from the object.
## APPENDIX B HONG-OU-MANDEL COINCIDENCES
In this section, we compute the rate of coincidences at the output of the
Hong-Ou-Mandel interferometer of Fig. 1, with left and top branch states given
by Eqs. 1 and 2, respectively. We write the input-probe bipartite state as
$\ket{\Psi_{\mathcal{I}}}\otimes\ket{\Psi_{\mathcal{U}}}=\int\mathop{}\\!\mathrm{d}^{2}k_{1}\mathop{}\\!\mathrm{d}^{2}k_{2}\
\hat{\Psi}(k_{1},k_{2})a^{\dagger}(k_{1})b^{\dagger}(k_{2})\ket{0}\ ,$ (20)
with
$\hat{\Psi}(k_{1},k_{2})=\hat{\mathcal{I}}(k_{1}|\mathcal{O})\hat{\mathcal{U}}(k_{2}|\lambda)$,
where we dropped the $\omega$ subscript for simplicity. The $50\\!:\\!50$ beam
splitter acts as the unitary operation [33]
$\begin{cases}a^{\dagger}\to\frac{1}{\sqrt{2}}\left(a^{\dagger}+b^{\dagger}\right)\\\
b^{\dagger}\to\frac{1}{\sqrt{2}}\left(a^{\dagger}-b^{\dagger}\right)\end{cases}\
,$ (21)
yielding
$\ket{\Psi_{\mathcal{I}}}\otimes\ket{\Psi_{\mathcal{U}}}\to\ket{\Phi}=\frac{1}{2}\int\mathop{}\\!\mathrm{d}^{2}k_{1}\mathop{}\\!\mathrm{d}^{2}k_{2}\
\hat{\Psi}(k_{1},k_{2})\left[a^{\dagger}(k_{1})+b^{\dagger}(k_{1})\right]\left[a^{\dagger}(k_{2})-b^{\dagger}(k_{2})\right]\ket{0}\
.$ (22)
Detection of mode $m\in\\{a,b\\}$ is described by the projector
$\Pi_{m}=\int\mathop{}\\!\mathrm{d}^{2}k\
m^{\dagger}(k)\ket{0}\\!\bra{0}m(k)$. The rate of coincidences, i.e. the
probability that one and only one photon is detected in each mode, reads
$\displaystyle p(1_{a}\cap
1_{b})=\Tr[\ket{\Phi}\\!\bra{\Phi}\Pi_{a}\otimes\Pi_{b}]\ ,$ (23)
$\displaystyle\text{with}\
\Pi_{a}\otimes\Pi_{b}=\int\mathop{}\\!\mathrm{d}^{2}k_{3}\mathop{}\\!\mathrm{d}^{2}k_{4}\
a^{\dagger}(k_{3})b^{\dagger}(k_{4})\ket{0}\\!\bra{0}a(k_{3})b(k_{4})\ .$ (24)
By substitution of Eq. 22, we get
$p(1_{a}\cap
1_{b})=\frac{1}{4}\int\prod_{i=1}^{6}\mathop{}\\!\mathrm{d}^{2}k_{i}\
\hat{\Psi}(k_{1},k_{2})\hat{\Psi}^{*}(k_{5},k_{6})W_{1}(k_{1},k_{2},k_{3},k_{4})W_{2}(k_{3},k_{4},k_{5},k_{6})\
,$ (25)
where
$\displaystyle\begin{split}W_{1}(k_{1},k_{2},k_{3},k_{4})&=\bra{0}a(k_{3})b(k_{4})\left[a^{\dagger}(k_{1})a^{\dagger}(k_{2})-a^{\dagger}(k_{1})b^{\dagger}(k_{2})+b^{\dagger}(k_{1})a^{\dagger}(k_{2})-b^{\dagger}(k_{1})b^{\dagger}(k_{2})\right]\ket{0}\\\
&=\delta(k_{2}-k_{3})\delta(k_{1}-k_{4})-\delta(k_{1}-k_{3})\delta(k_{2}-k_{4})\
,\end{split}$ (26)
$\displaystyle\begin{split}W_{2}(k_{3},k_{4},k_{5},k_{6})&=\bra{0}\left[a(k_{6})a(k_{5})-b(k_{6})a(k_{5})+a(k_{6})b(k_{5})-b(k_{6})b(k_{5})\right]a^{\dagger}(k_{3})b^{\dagger}(k_{4})\ket{0}\\\
&=\delta(k_{3}-k_{6})\delta(k_{4}-k_{5})-\delta(k_{3}-k_{5})\delta(k_{4}-k_{6})\
.\end{split}$ (27)
By integrating out the Dirac deltas in Eq. 25, we obtain
$p(1_{a}\cap
1_{b})=\frac{1}{2}\int\mathop{}\\!\mathrm{d}^{2}k_{1}\mathop{}\\!\mathrm{d}^{2}k_{2}\mathop{}\\!\mathrm{d}^{2}k_{5}\mathop{}\\!\mathrm{d}^{2}k_{6}\
\hat{\Psi}(k_{1},k_{2})\hat{\Psi}^{*}(k_{5},k_{6})\left[\delta(k_{1}-k_{5})\delta(k_{2}-k_{6})-\delta(k_{1}-k_{6})\delta(k_{2}-k_{5})\right]\
.$ (28)
Finally, the rate of coincidences reads
$p(1_{a}\cap
1_{b}|\lambda,\mathcal{O})=\frac{1}{2}\int\mathop{}\\!\mathrm{d}^{2}k_{1}\
|\hat{\mathcal{I}}(k_{1}|\mathcal{O})|^{2}\int\mathop{}\\!\mathrm{d}^{2}k_{2}\
|\hat{\mathcal{U}}(k_{2}|\lambda)|^{2}-\frac{1}{2}\left|\int\mathop{}\\!\mathrm{d}^{2}k\
\hat{\mathcal{I}}(k|\mathcal{O})\hat{\mathcal{U}}^{*}(k|\lambda)\right|^{2}\
.$ (29)
More compactly,
$p(1_{a}\cap
1_{b}|\lambda,\mathcal{O})=\frac{1}{2}\left[||\mathcal{I}_{\omega}(\cdot|\mathcal{O})||^{2}||\mathcal{U}_{\omega}(\cdot|\lambda)||^{2}-\left|\langle\mathcal{I}_{\omega}(\cdot|\mathcal{O}),\mathcal{U}_{\omega}(\cdot|\lambda)\rangle\right|^{2}\right]\
,$ (30)
with $||\cdot||$ and $\langle\cdot,\cdot\rangle$ denoting the $L^{2}$-norm and
inner product, which is precisely the result of Eq. 3.
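On a discretized momentum grid, Eq. (30) becomes a pair of norms and an inner product. A minimal numpy sketch of our own (with random, illustrative spectra):

```python
import numpy as np

def coincidence_rate(I_hat, U_hat):
    """Eq. (30): p = (||I||^2 ||U||^2 - |<I, U>|^2) / 2 on a discretized k-grid."""
    norm2_I = np.sum(np.abs(I_hat) ** 2)
    norm2_U = np.sum(np.abs(U_hat) ** 2)
    overlap = np.vdot(U_hat, I_hat)                   # <I, U> up to an irrelevant phase
    return 0.5 * (norm2_I * norm2_U - np.abs(overlap) ** 2)

rng = np.random.default_rng(0)
I_hat = rng.normal(size=256) + 1j * rng.normal(size=256)
I_hat /= np.linalg.norm(I_hat)
print(coincidence_rate(I_hat, I_hat))                 # indistinguishable spectra: p = 0
print(coincidence_rate(I_hat, np.roll(I_hat, 128)))   # distinguishable spectra: p > 0
```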
## APPENDIX C TRAINING
In this section, we discuss how to train the Hong-Ou-Mandel interferometer as
a binary classifier. We separately feed each element of the training set (an
ensemble of objects with known labels) into the input branch of Fig. 1,
comparing the predicted classes with the target ones. We optimize the probe
parameters $\lambda$ by means of the gradient descent algorithm, and using the
binary cross-entropy as loss function.
Consider a training set made of $M$ objects $\\{\mathcal{O}_{j}\\}$, each
associated to a binary target label $y_{j}\in\\{0,1\\}$, with $0\leq j\leq
M-1$. We denote $f^{(j)}_{\lambda}=f_{\lambda}(\mathcal{O}_{j})$ our model
prediction. After feeding $\mathcal{O}_{j}$ into the input branch of the
interferometer
$\displaystyle f^{(j)}_{\lambda}=C-2p(1_{a}\cap
1_{b}|\lambda,\mathcal{O}_{j})\ ,$ (31) $\displaystyle
F_{b\lambda}^{(j)}=\sigma(f^{(j)}_{\lambda}+b)\ ,$ (32)
where $p\in[0,1/2]$. For simplicity, we assumed that the losses are
independent on both the input and the probe, that is
$C:=\alpha_{\lambda}(\mathcal{O}_{j})\ \forall\lambda,j$.
Given a sample object, the binary cross-entropy between the target label and
the predicted one reads
$H\left(y_{j},F_{b\lambda}^{(j)}\right)=-y_{j}\log(F_{b\lambda}^{(j)})-\left(1-y_{j}\right)\log(1-F_{b\lambda}^{(j)})\
.$ (33)
We optimize the probe parameters by means of the gradient descent algorithm,
where the binary cross-entropy, averaged on the training set, is used as loss
function. Namely
$\displaystyle\lambda\to\lambda-\frac{\eta_{\lambda}}{M}\sum_{j=0}^{M-1}\partial_{\lambda}H\left(y_{j},F^{(j)}_{b\lambda}\right)\
,$ (34) $\displaystyle b\to
b-\frac{\eta_{b}}{M}\sum_{j=0}^{M-1}\partial_{b}H\left(y_{j},F^{(j)}_{b\lambda}\right)\
,$ (35)
with $\eta_{\lambda},\eta_{b}$ the learning rates of the probe and bias
parameters, respectively. The derivatives with respect to the parameters and
the bias yield
$\displaystyle\partial_{\lambda}H=\left(\partial_{F}H\right)\left(\partial_{\xi}\sigma\right)\partial_{\lambda}f\
,$ (36)
$\displaystyle\partial_{b}H=\left(\partial_{F}H\right)\partial_{\xi}\sigma\ ,$
(37)
with $\xi_{b\lambda}=f_{\lambda}+b$. Then,
$\displaystyle\partial_{F}H=\frac{F-y}{F(1-F)}\ ,$ (38)
$\displaystyle\partial_{\xi}\sigma=\beta F(1-F)\ ,$ (39)
with $\beta$ the hyperparameter of Eq. 6. For any complex function of real
variable $h:\mathbb{R}\rightarrow\mathbb{C}$, it follows that
$\partial_{\lambda}\left|h(\lambda)\right|=\real\left[h(\lambda)(\partial_{\lambda}h(\lambda))^{*}\right]/\left|h(\lambda)\right|$.
Hence,
$\partial_{\lambda}f=2\real\left[\langle\mathcal{I}_{\omega},\mathcal{U}_{\omega}\rangle\langle\mathcal{I}_{\omega},\partial_{\lambda}\mathcal{U}_{\omega}\rangle^{*}\right]\
.$ (40)
Neglecting the phase of $\langle\mathcal{I},\mathcal{U}\rangle$,
$\partial_{\lambda}f\simeq
2\sqrt{f}\real\left[\langle\mathcal{I}_{\omega},\partial_{\lambda}\mathcal{U}_{\omega}\rangle\right]\
.$ (41)
This assumption, which we verified in our simulations under a self-consistency
test, simplifies the computation of the first factor of Eq. 40, which is
directly determined at the output of the Hong-Ou-Mandel interferometer.
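Putting Eqs. (34)-(41) together, a single gradient-descent update can be sketched as follows; this is a minimal numpy illustration of ours for a real-valued, vectorized probe, with learning rates matching those quoted in Fig. 3:

```python
import numpy as np

def sigmoid(xi, beta=1.0, gamma=0.0):
    return 1.0 / (1.0 + np.exp(-beta * xi + gamma))

def train_step(lam, b, images, labels, eta_lam=0.075, eta_b=0.005, beta=1.0):
    """One update of Eqs. (34)-(35); `images` are the vectorized real amplitudes I_j."""
    g_lam, g_b = np.zeros_like(lam), 0.0
    for I, y in zip(images, labels):
        lam_hat = lam / np.linalg.norm(lam)
        overlap = I @ lam_hat                   # <I, U>, cf. Eq. (12)
        F = sigmoid(overlap**2 + b, beta)       # Eq. (32)
        dH_dxi = beta * (F - y)                 # (dH/dF)(dsigma/dxi), Eqs. (38)-(39)
        # Eq. (13): df/dlambda = 2 <I,U> (I - <I,U> lam_hat) / ||lambda||
        df = 2.0 * overlap * (I - overlap * lam_hat) / np.linalg.norm(lam)
        g_lam += dH_dxi * df                    # Eq. (36)
        g_b += dH_dxi                           # Eq. (37)
    M = len(labels)
    return lam - eta_lam * g_lam / M, b - eta_b * g_b / M   # Eqs. (34)-(35)
```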
## APPENDIX D CLASSIFICATION IN THE FOURIER DOMAIN
In this section, we discuss the effect of adding a single lens in the probe
branch of the Hong-Ou-Mandel interferometer, as shown in Fig. 1. We summarize
the main calculations, which closely follow that of Section II.
A thin lens is placed at one focal length $\ell$ from both the probe image
plane and the beam splitter. In the near-field limit, the lens performs a
Fourier transform of the probe state [31], yielding
$\ket{\Psi_{\mathcal{U}}}\to\ket{\Psi_{\mathcal{U}^{\prime}}}$, where
$\mathcal{U}^{\prime}_{\omega}(r|\lambda)=-i\frac{\omega}{\ell}e^{2i\omega\ell}\hat{\mathcal{U}}_{\omega}\left(\frac{\omega}{\ell}r\big{|}\lambda\right)\ .$ (42)
After the beam splitter, the rate of coincidences is
$\displaystyle p(1_{a}\cap 1_{b}|\lambda,\mathcal{O})=\frac{1}{2}\left[\alpha_{\lambda}(\mathcal{O})-\widetilde{f}_{\lambda}(\mathcal{O})\right]\ ,$ (43)
$\displaystyle\widetilde{f}_{\lambda}(\mathcal{O})=\left|\langle\mathcal{I}_{\omega}(\cdot|\mathcal{O}),\hat{\mathcal{U}}_{\omega}(\cdot|\lambda)\rangle\right|^{2}\
,$ (44)
yielding
$\displaystyle\widetilde{F}_{b\lambda}(\mathcal{O})=\sigma(\widetilde{f}_{\lambda}(\mathcal{O})+b)\
,$ (45)
$\displaystyle\widetilde{f}_{\lambda}(\mathcal{O})=\left|\int_{I}\mathop{}\\!\mathrm{d}r\mathop{}\\!\mathrm{d}r^{\prime}\
\mathcal{I}_{\omega}(r|\mathcal{O})\mathcal{U}_{\omega}^{*}(r^{\prime}|\lambda)e^{ir\cdot
r^{\prime}}\right|^{2}\ ,$ (46)
with $\sigma$ and $b$ the sigmoid activation function and bias, already
introduced in Eq. 7. In contrast to Eq. 5,
$\widetilde{f}_{\lambda}(\mathcal{O})$ is not a point-wise evaluation: it
combines the image spatial modes with the momentum spectrum of the probe
state. Using the duality of the Fourier transform, it follows that
$\widetilde{f}_{\lambda}(\mathcal{O})=\left|\langle\hat{\mathcal{I}}_{\omega}(\cdot|\mathcal{O}),\mathcal{U}_{\omega}(\cdot|\lambda)\rangle\right|^{2}\
,$ (47)
which corresponds to the output of the same scheme of Fig. 1, but with the
thin lens placed in the left branch, before the beam splitter. Equivalently,
this takes the Fourier transform of the image, instead of that of the probe.
In the next section, we leverage this symmetry to simplify both the training
process and the numerical simulations.
The training of the model follows the same procedure of Appendix C. By placing
the lens on the top branch of the interferometer, while using the duality of
the Fourier transform, we get
$\partial_{\lambda}\widetilde{f}\simeq
2\sqrt{\widetilde{f}}\real\left[\langle\hat{\mathcal{I}}_{\omega},\partial_{\lambda}\mathcal{U}_{\omega}\rangle\right]\
.$ (48)
Under the same conditions as in Section III and Appendix C, the last two equations become
$\widetilde{f}_{\lambda}(\mathcal{O})=\left|\sum_{\mu,\nu}(u\star\hat{\mathcal{I}}^{*}_{\omega})(r_{\mu\nu})\frac{\lambda_{\mu\nu}}{||\lambda||}\right|^{2},$ (49)
$\partial_{\mu\nu}\widetilde{f}\simeq 2\frac{\sqrt{\widetilde{f}}}{||\lambda||}\,\mathrm{Re}\left[(u\star\hat{\mathcal{I}}^{*}_{\omega})(r_{\mu\nu})-\sqrt{\widetilde{f}}\frac{\lambda_{\mu\nu}}{||\lambda||}\right],$ (50)
where in the last step we neglected the phase of
$\langle\hat{\mathcal{I}},\mathcal{U}\rangle$. Similarly to Eq. 13, Eq. 50 can
be evaluated in an all-optical way through the characterization of the real
part of $\hat{\mathcal{I}}$, namely, by performing an amplitude and phase
measurement at the output of a thin lens, placed in the left branch, before
the beam splitter. In Fig. 3, we compare the predictability of the neuron in
the spatial and Fourier domains.
## APPENDIX E OPTICAL AND COMPUTATIONAL ADVANTAGE
In this section, we discuss the optical and computational advantage in terms of the
number of photons and operations required for a single image classification.
Assuming that all the parameters have been previously trained to optimal
accuracy, we show that our protocol requires a constant number of resources,
i.e. $\mathcal{O}(1)$ complexity, independently of the input image resolution:
it provides a superexponential speedup over its classical counterpart.
We first discuss the computational advantage of substituting a classical
neuron with a quantum optical one. From now on, we denote by $\Omega$, $\Theta$
and $\mathcal{O}$ the lower, tight, and upper bounds, respectively, on the
number of resources needed by a given (optical or computational) operation.
Consider a digital image $x$ of $N$ pixels, fed into a neuron
$G_{bw}(x)=\sigma(w\cdot x+b)\ ,$ (51)
where $x,w\in\mathbb{R}^{N}$, $b\in\mathbb{R}$ and $\sigma$ is the sigmoid
activation of Eq. 6, with hyperparameters $\beta=1$ and $\gamma=0$. Eq. 51
costs $N$ operations to compute $w\cdot x$. The Hong-Ou-Mandel interferometer
performs the same operation in an all-optical way, leaving only the
$\mathcal{O}(1)$ computational cost of the activation function and bias.
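For concreteness, the classical reference point of Eq. 51 reads as follows (a sketch; the input, weights, and sizes are placeholder assumptions):

```python
import numpy as np

def classical_neuron(x, w, b):
    """Classical counterpart of Eq. 51: O(N) multiply-adds for w.x."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # sigmoid with beta=1, gamma=0

N = 1024                                        # number of pixels (assumed)
rng = np.random.default_rng(2)
x = rng.random(N)                               # grey levels rescaled to [0, 1]
w = rng.normal(size=N) / np.sqrt(N)             # trained weights (placeholder)
print(classical_neuron(x, w, b=0.0))
# The optical neuron replaces the N-term sum w.x by a single coincidence
# measurement; only the O(1) activation and bias remain to be computed.
```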
We now discuss the optical advantage when using coincidences to classify
single-photon states instead of a classical neuron on fully reconstructed
images. After targeting an object with light, a digital image $x$ is an
ensemble of grey levels obtained by counting the number of photons collected
by different pixels on a sensor grid, e.g. a charge-coupled device [34]. Let
$n_{p}$ be the average number of photons in the input state, and $\mu_{i}$ the
average number of photons collected by the $i$-th pixel of the grid, with
$i\in\{0,\ldots,N-1\}$. Assuming perfect quantum efficiency and sufficiently
low exposure times to neglect the saturation of the sensor, the grey values at
each pixel read
$x_{i}=\frac{\mu_{i}L}{\mu_{w}}\ ,$ (52)
with $L$ the number of grey levels, i.e. the depth of the image, and
$\mu_{w}=\max_{i}\mu_{i}$ the maximum number of photons collected in a single
pixel. Hence $x_{i}\in\{0,1,\ldots,L\}$, with $0$ and $L$ labelling the
_black_ and _white_ colors, respectively. Each pixel has variance
$\varsigma_{i}^{2}=\Delta\mu_{i}^{2}L^{2}/\mu_{w}^{2}$, with
$\Delta\mu_{i}^{2}$ the variance on the number of collected photons. For
coherent light, the photo-detection process undergoes the standard quantum
limit (SQL) [35, 36], with Poissonian fluctuations that satisfy
$\Delta\mu_{i}^{2}\simeq\mu_{i}$. The average uncertainty reads
$\varsigma^{2}:=\frac{1}{N}\sum_{i=0}^{N-1}\varsigma_{i}^{2}\stackrel{{\scriptstyle\text{SQL}}}{{\simeq}}\langle x\rangle^{2}Nn_{p}^{-1},$ (53)
with $\langle x\rangle=N^{-1}\sum_{i}x_{i}\in[0,L]$ the average brightness of
the image, which we assume to be independent of its resolution. Hence, the
number of photons $n_{p}$ required by a full image reconstruction with
_average_ variance $\varsigma^{2}$ is $\Theta(\varsigma^{-2}\langle x\rangle
N)$. This is the cost of image reconstruction only. We now take into account
the information propagation through the neuron of Eq. 51.
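Before doing so, it is instructive to put numbers on the reconstruction cost alone, by inverting Eq. 53 for $n_{p}$ (all values below are assumed for illustration):

```python
# Photon cost of full image reconstruction, from Eq. 53:
# varsigma^2 ~ <x>^2 N / n_p   =>   n_p = <x>^2 N / varsigma^2
mean_x, var_target = 128, 1.0            # assumed brightness and target variance
for N in (32**2, 256**2, 1024**2):       # increasing image resolutions
    n_p = mean_x**2 * N / var_target
    print(f"N = {N:>8d}  ->  n_p ~ {n_p:.1e} photons")
# The photon budget grows linearly with the pixel number N.
```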
###### Proposition 1.
Consider a neuron with sigmoid activation function. Suppose that there exists
a sequence of parameters $\{(w_{N},b_{N})\in\mathbb{R}^{N+1}\}_{N\gg 1}$
that optimally solves the $N$-pixel image classification task, with $b_{N}$ and
the $\ell^{1}$-norm $||w_{N}||_{1}$ asymptotically bounded for $N\to\infty$.
Then the number of photons $n_{p}$ required to classify an image $x$ with
uncertainty $\varepsilon$ is $\Omega\left(\varepsilon^{-2}\langle x\rangle
N\right)$.
###### Proof.
Consider the output of the neuron $G_{bw}(x)=\sigma(w_{N}\cdot x+b_{N})$, and
its derivative $\partial G(x)=G_{bw}(x)(1-G_{bw}(x))$. By neglecting the
spatial neighbourhood correlations, which may introduce at most a constant
overhead in our estimation, we propagate the uncertainty of $x$ as
$\varepsilon^{2}=\langle x\rangle(\partial G)^{2}(x)\sum_{i=0}^{N-1}(w_{N})_{i}^{2}x_{i}\,N\tilde{n}^{-1}_{p},$ (54)
where $\tilde{n}_{p}=n_{r}n_{p}$, with $n_{r}$ the number of independent
image acquisitions and classifications. Since black pixels do not contribute to
this summation, we get
$\sum_{i=0}^{N-1}(w_{N})_{i}^{2}x_{i}\geq\sum_{i\notin\mathcal{B}}(w_{N})_{i}^{2}=||w_{N}||^{2}-\sum_{i\in\mathcal{B}}(w_{N})_{i}^{2},$ (55)
with $\mathcal{B}=\{i\in\mathbb{N}\ |\ x_{i}=0,\ 0\leq i\leq N-1\}$ the set of
black pixel labels. However,
$||w_{N}||^{2}\gg\sum_{i\in\mathcal{B}}(w_{N})_{i}^{2}$. Otherwise,
$||w_{N}||^{2}\simeq\sum_{i\in\mathcal{B}}(w_{N})_{i}^{2}$ would imply either
that the image is mostly black, independently of its resolution, or that
$(w_{N})_{i}\simeq 0$ for all non-black pixels, both of which would prevent
the neuron from learning. Substituting into Eq. 54, we get
$\tilde{n}_{p}\geq\varepsilon^{-2}\langle x\rangle(\partial G)^{2}(x)||w_{N}||^{2}N.$ (56)
Since $w_{N}$ is a sequence of non-trivial solutions of the classification
problem, the $\ell^{2}$-norm $||w_{N}||^{2}$ cannot go to zero for
$N\to\infty$. Finally, we show that $(\partial G(x))^{2}$ does not converge to
$0$ for $N\to\infty$. Consider
$(\partial G)^{2}(x)=\frac{e^{-2(w_{N}\cdot x+b_{N})}}{[1+e^{-(w_{N}\cdot x+b_{N})}]^{4}}.$ (57)
If $b_{N}$ is asymptotically bounded, $(\partial G)^{2}$ converges to zero if
and only if $w_{N}\cdot x\to\pm\infty$. By splitting this scalar product into
positive and negative contributions, $w_{N}\cdot x=\sum_{(w_{N})_{i}>0}(w_{N})_{i}x_{i}-\sum_{(w_{N})_{i}<0}|(w_{N})_{i}|x_{i}$,
it follows that
$w_{N}\cdot x\leq\sum_{(w_{N})_{i}>0}(w_{N})_{i}x_{i}\leq L||w_{N}||_{1},$ (58)
$w_{N}\cdot x\geq-\sum_{(w_{N})_{i}<0}|(w_{N})_{i}|x_{i}\geq-L||w_{N}||_{1},$ (59)
namely that $|w_{N}\cdot x|\leq L||w_{N}||_{1}$. Since the $\ell^{1}$-norm is
bounded, $(\partial G)^{2}$ admits a strictly positive lower bound for
$N\to\infty$. Finally, this implies that
$\tilde{n}_{p}=\Omega\left(\varepsilon^{-2}\langle x\rangle N\right)$. ∎
In the previous discussion, two conditions lead to the above lower bound. On
the one hand, $||w_{N}||^{2}\not\to 0$ for $N\to\infty$, which is
essential to guarantee that the neuron is trainable at any resolution. On the
other hand, $||w_{N}||_{1}$ must remain bounded for $N\to\infty$, which is
compatible with LASSO and Tikhonov regularization techniques [37, 38].
We show that our protocol exponentially reduces this cost, requiring only the
estimation of the rate of coincidences of the Hong-Ou-Mandel interferometer of
Fig. 1. Let $\tilde{n}_{p}=2n_{p}$ be the number of input photons, and
$\tilde{p}\in[0,1/2]$ the empirical rate of coincidences. Under the normal
approximation, at the $95\%$ confidence level [39], the estimation
uncertainty reads
$\varepsilon=2\sqrt{\frac{\tilde{p}(1-\tilde{p})}{\tilde{n}_{p}}}\ .$ (60)
Since $4\tilde{p}(1-\tilde{p})\leq 1$, the total number of photons is
$\mathcal{O}(\varepsilon^{-2})$, which is constant with respect to the
resolution of the image.
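A quick Monte Carlo reading of Eq. 60 (the true rate and pair numbers below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
p_true = 0.23                             # assumed true coincidence rate
for n in (10**2, 10**4, 10**6):           # photon pairs spent on the estimate
    p_hat = rng.binomial(n, p_true) / n   # empirical rate of coincidences
    eps = 2 * np.sqrt(p_hat * (1 - p_hat) / n)   # Eq. 60, 95% level
    print(f"{n:>7d} pairs: p = {p_hat:.4f} +/- {eps:.4f}")
# eps depends only on the number of detected pairs, not on the image
# resolution N: the photon cost per classification is O(eps^-2).
```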
In conclusion, the quantum optical neuron provides a superexponential
advantage over its classical counterpart, both in the number of operations and
in the number of photons required to classify a single image. We summarize
these results in Table 1.
## References
* Lecun _et al._ [1998] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition, Proc. IEEE 86, 2278 (1998).
* Krizhevsky _et al._ [2017] A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet classification with deep convolutional neural networks, Commun. ACM 60, 84–90 (2017).
* [3] K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, in _IEEE Conference on Computer Vision and Pattern Recognition_, CVPR ’16, p. 770.
* [4] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, An image is worth 16x16 words: Transformers for image recognition at scale, in _International Conference on Learning Representations_ , ICLR ’21, arXiv:2010.11929 [cs.CV] .
* Shastri _et al._ [2021] B. J. Shastri, A. N. Tait, T. Ferreira de Lima, W. H. P. Pernice, H. Bhaskaran, C. D. Wright, and P. R. Prucnal, Photonics for artificial intelligence and neuromorphic computing, Nat. Photon. 15, 102–114 (2021).
* Lin _et al._ [2018] X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, All-optical machine learning using diffractive deep neural networks, Science 361, 1004–1008 (2018).
* Zuo _et al._ [2019] Y. Zuo, B. Li, Y. Zhao, Y. Jiang, Y.-C. Chen, P. Chen, G.-B. Jo, J. Liu, and S. Du, All-optical neural network with nonlinear activation functions, Optica 6, 1132 (2019).
* Colburn _et al._ [2019] S. Colburn, Y. Chu, E. Shilzerman, and A. Majumdar, Optical frontend for a convolutional neural network, Appl. Opt. 58, 3179 (2019).
* Li _et al._ [2021] S. Li, B. Ni, X. Feng, K. Cui, F. Liu, W. Zhang, and Y. Huang, All-optical image identification with programmable matrix transformation, Opt. Express 29, 26474 (2021).
* Luo _et al._ [2022] Y. Luo, Y. Zhao, J. Li, E. Çetintaş, Y. Rivenson, M. Jarrahi, and A. Ozcan, Computational imaging without a computer: seeing through random diffusers at the speed of light, eLight 2, 4 (2022).
* McMahon [2023] P. L. McMahon, The physics of optical computing, Nat. Rev. Phys. 5, 717–734 (2023).
* Benatti _et al._ [2019] F. Benatti, S. Mancini, and S. Mangini, Continuous variable quantum perceptron, Int. J. Quantum Inf. 17, 1941009 (2019).
* Tacchino _et al._ [2019] F. Tacchino, C. Macchiavello, D. Gerace, and D. Bajoni, An artificial neuron implemented on an actual quantum processor, Npj Quantum Inf. 5, 26 (2019).
* Cerezo _et al._ [2021] M. Cerezo, A. Arrasmith, R. Babbush, S. C. Benjamin, S. Endo, K. Fujii, J. R. McClean, K. Mitarai, X. Yuan, L. Cincio, and P. J. Coles, Variational quantum algorithms, Nat. Rev. Phys. 3, 625–644 (2021).
* Senokosov _et al._ [2024] A. Senokosov, A. Sedykh, A. Sagingalieva, B. Kyriacou, and A. Melnikov, Quantum machine learning for image classification, Mach. Learn.: Sci. Technol. 5, 015040 (2024).
* Cerezo _et al._ [2024] M. Cerezo, M. Larocca, D. García-Martín, N. L. Diaz, P. Braccia, E. Fontana, M. S. Rudolph, P. Bermejo, A. Ijaz, S. Thanasilp, E. R. Anschuetz, and Z. Holmes, Does provable absence of barren plateaus imply classical simulability? Or, why we need to rethink variational quantum computing (2024), arXiv:2312.09121 [quant-ph] .
* Steinbrecher _et al._ [2019] G. R. Steinbrecher, J. P. Olson, D. Englund, and J. Carolan, Quantum optical neural networks, Npj Quantum Inf. 5, 60 (2019).
* Killoran _et al._ [2019] N. Killoran, T. R. Bromley, J. M. Arrazola, M. Schuld, N. Quesada, and S. Lloyd, Continuous-variable quantum neural networks, Phys. Rev. Res. 1, 033063 (2019).
* Sui _et al._ [2020] X. Sui, Q. Wu, J. Liu, Q. Chen, and G. Gu, A review of optical neural networks, IEEE Access 8, 70773 (2020).
* Stanev _et al._ [2023] D. Stanev, N. Spagnolo, and F. Sciarrino, Deterministic optimal quantum cloning via a quantum-optical neural network, Phys. Rev. Res. 5, 013139 (2023).
* Wood _et al._ [2024] C. Wood, S. Shrapnel, and G. J. Milburn, A Kerr kernel quantum learning machine (2024), arXiv:2404.01787 [quant-ph] .
* Hong _et al._ [1987] C. K. Hong, Z. Y. Ou, and L. Mandel, Measurement of subpicosecond time intervals between two photons by interference, Phys. Rev. Lett. 59, 2044 (1987).
* Bowie _et al._ [2023] C. Bowie, S. Shrapnel, and M. J. Kewming, Quantum kernel evaluation via Hong–Ou–Mandel interference, Quantum Sci. Technol. 9, 015001 (2023).
* Glorot and Bengio [2010] X. Glorot and Y. Bengio, Understanding the difficulty of training deep feedforward neural networks, in _Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics_, PMLR, Vol. 9 (2010) pp. 249–256.
* Neff _et al._ [1990] J. Neff, R. Athale, and S. Lee, Two-dimensional spatial light modulators: a tutorial, Proc. IEEE 78, 826 (1990).
* Zhang _et al._ [2014] Z. Zhang, Z. You, and D. Chu, Fundamentals of phase-only liquid crystal on silicon (LCOS) devices, Light Sci. Appl. 3, e213 (2014).
* TensorFlow Developers [2023] TensorFlow Developers, TensorFlow (2023).
* [28] R. Collobert and S. Bengio, Links between perceptrons, MLPs and SVMs, in _Proceedings of the Twenty-First International Conference on Machine Learning_, ICML ’04, p. 23.
* Rosenblatt [1958] F. Rosenblatt, The perceptron: A probabilistic model for information storage and organization in the brain, Psychol. Rev. 65, 386 (1958).
* [30] A. R. Morgillo and S. Roncallo, https://github.com/simoneroncallo/quantum-optical-neuron.
* Rezai and Salehi [2022] M. Rezai and J. A. Salehi, Fundamentals of quantum Fourier optics, IEEE Trans. Quantum Eng. 4, 1 (2022).
* Saleh and Teich [1991] B. E. A. Saleh and M. C. Teich, _Fundamentals of Photonics_ (Wiley, 1991).
* Brańczyk [2017] A. M. Brańczyk, Hong-Ou-Mandel interference (2017), arXiv:1711.00080 [quant-ph] .
* Boyle and Smith [1970] W. S. Boyle and G. E. Smith, Charge coupled semiconductor devices, Bell Syst. Tech. J. 49, 587 (1970).
* Kolobov [1999] M. I. Kolobov, The spatial behavior of nonclassical light, Rev. Mod. Phys. 71, 1539 (1999).
* Delaubert _et al._ [2008] V. Delaubert, N. Treps, C. Fabre, H. A. Bachor, and P. Réfrégier, Quantum limits in image processing, EPL 81, 44001 (2008).
* Santosa and Symes [1986] F. Santosa and W. W. Symes, Linear inversion of band-limited reflection seismograms, SIAM J. Sci. Stat. Comput. 7, 1307–1330 (1986).
* Tikhonov and Glasko [1965] A. Tikhonov and V. Glasko, Use of the regularization method in non-linear problems, USSR Comput. Math. Math. Phys. 5, 93 (1965).
* Rotondi _et al._ [2022] A. Rotondi, P. Pedroni, and A. Pievatolo, _Probability, Statistics and Simulation: With Application Programs Written in R_ (Springer, 2022).
# Gross-Neveu model with O(2)${}_{L}\times$O(2)${}_{R}$ chiral symmetry: duality with Zakharov-Mikhailov model and large $N$ solution

Michael<EMAIL_ADDRESS>Institut für Theoretische Physik, Universität Erlangen-Nürnberg, D-91058 Erlangen, Germany
###### Abstract
The two-flavor Gross-Neveu model with U(2)${}_{L}\times$U(2)${}_{R}$ chiral symmetry
in 1+1 dimensions is used to construct a novel variant of four-fermion
theories with O(2)${}_{L}\times$O(2)${}_{R}$ chiral symmetry. The spontaneous
breaking of the group O(2), a continuous group with two connected components
(rotations and reflections), gives rise to new phenomena. It is ideally suited
to describe a situation where two distinct kinds of condensation compete, in
particular chiral symmetry breaking (particle-hole condensation) and Cooper
pairing (particle-particle condensation). After solving the O(2) chiral Gross-
Neveu model in detail, we demonstrate that it is dual to another classically
integrable model due to Zakharov and Mikhailov. The duality enables us to
solve the quantum version of this model in the large $N$ limit with
semiclassical methods, supporting its integrability at the quantum level. The
resulting model is the unique four-fermion theory sharing the full Pauli-
Gürsey symmetry with free, massless fermions (“perfect Gross-Neveu model”) and
provides us with a solvable model for competing chiral and Cooper pair
condensates, including explicit soliton dynamics and the phase diagram.
## I Introduction
Back in 1978, Zakharov and Mikhailov L1 proved the integrability of three
classical spinor models in 1+1 dimensions for any number of components $N$.
The quantum versions of two of them are by now also well under control, at
least in the large $N$ limit, namely the Gross-Neveu (GN) model L2
${\cal L}_{\rm
GN}=-2i\psi_{1}^{(i)*}\partial\psi_{1}^{(i)}+2i\psi_{2}^{(i)*}\bar{\partial}\psi_{2}^{(i)}+\frac{g^{2}}{2}(\psi_{1}^{(i)*}\psi_{2}^{(i)}+\psi_{2}^{(i)*}\psi_{1}^{(i)})^{2}$
(1)
and the chiral GN model or 2d Nambu–Jona-Lasinio (NJL) model L3
${\cal L}_{\rm
NJL}=-2i\psi_{1}^{(i)*}\partial\psi_{1}^{(i)}+2i\psi_{2}^{(i)*}\bar{\partial}\psi_{2}^{(i)}+2g^{2}(\psi_{1}^{(i)*}\psi_{2}^{(i)})(\psi_{2}^{(j)*}\psi_{1}^{(j)}).$
(2)
We use the notation
$z=x-t,\quad\bar{z}=x+t,\quad\psi_{1}=\psi_{L},\quad\psi_{2}=\psi_{R}$ (3)
and sum implicitly over “color” indices $i,j$ from 1 to $N$. Evidently,
integrability at the classical level allows one to solve the quantized theory
in the large $N$ limit with semiclassical methods, including time dependent
multi-soliton interactions, in explicit analytical form L4 ; L5 ; L6 . At the
classical level (i.e., with $c$-number fermion fields), these two models are
connected to chiral fields on the symplectic group Sp(2$N,\mathbb{R}$) (GN
model) or the special unitary group SU($N$) (NJL model). The third model
presented in L1 and related to chiral fields on the orthogonal group O($N$)
has so far not had any significant impact in particle physics. It is sometimes
referred to as Zakharov-Mikhailov (ZM) model and has the less familiar
Lagrangian
${\cal L}_{\rm
ZM}=-2i\psi_{1}^{(i)*}\partial\psi_{1}^{(i)}+2i\psi_{2}^{(i)*}\bar{\partial}\psi_{2}^{(i)}+g^{2}(\psi_{1}^{(i)*}\psi_{1}^{(j)}-\psi_{1}^{(j)*}\psi_{1}^{(i)})(\psi_{2}^{(i)*}\psi_{2}^{(j)}-\psi_{2}^{(j)*}\psi_{2}^{(i)}).$
(4)
Interestingly, the quantum version of the ZM model has appeared again in a
different context in the meantime. When studying four-fermion theories that
give rise to Cooper pairing as opposed to fermion-antifermion pairing, Chodos,
Minakata and Cooper (CMC) L7 proposed a model whose Lagrangian is equivalent
to
${\cal L}_{\rm
CMC}=-2i\psi_{1}^{(i)*}\partial\psi_{1}^{(i)}+2i\psi_{2}^{(i)*}\bar{\partial}\psi_{2}^{(i)}+2g^{2}(\psi_{1}^{(i)}\psi_{2}^{(i)})(\psi_{2}^{(j)*}\psi_{1}^{(j)*}).$
(5)
They noticed many similarities with the chiral GN model such as asymptotic
freedom, mass generation and a massless bound state. If written in the form
(2) and (5), one sees that the quantized NJL and CMC models are “dual” to each
other in the sense that they are related by a simple Bogoliubov transformation
L8 ,
$\psi_{1}^{(i)}\to\psi_{1}^{(i)\dagger},\quad\psi_{2}^{(i)}\to\psi_{2}^{(i)}.$
(6)
Hence both models are mathematically equivalent, although their physics looks
quite different at first sight. This observation prompted us to study yet
another four-fermion theory obtained by “self-dualizing” the NJL model, i.e.
adding the interaction terms of models (2) and (5) with the same coupling
constant L9 . The resulting theory is singled out from all other variants of
the GN model in that it shares the full Pauli-Gürsey symmetry L10 ; L11 with
free, massless Dirac fermions, in addition to an O($N$) color symmetry. Due to
this high degree of symmetry, it has been dubbed “perfect GN model” (pGN) in
L12 ,
${\cal L}_{\rm
pGN}=-2i\psi_{1}^{(i)*}\partial\psi_{1}^{(i)}+2i\psi_{2}^{(i)*}\bar{\partial}\psi_{2}^{(i)}+2g^{2}\left[(\psi_{1}^{(i)*}\psi_{2}^{(i)})(\psi_{2}^{(j)*}\psi_{1}^{(j)})+(\psi_{1}^{(i)}\psi_{2}^{(i)})(\psi_{2}^{(j)*}\psi_{1}^{(j)*})\right].$
(7)
As a matter of fact, the Lagrangians of the pGN model and the ZM model become
identical once fermion fields are treated as anticommuting variables. This
observation has stimulated our interest in the large $N$ limit of the quantum
ZM model (hereafter referred to as pGN model). However, a systematic solution
of the soliton problem or other questions has so far resisted all our
attempts. Judging from the experience with the GN and NJL models, if the
classical ZM model is integrable, one would expect the same kind of
solvability for the pGN model as for the GN and NJL models.
Here we propose a solution to this problem. We have found a duality between
the one-flavor pGN model and the two-flavor NJL model, for which the general
soliton solution has already been given at large $N$ L13 ; L14 . In the case
of the GN model, the most efficient way to solve soliton dynamics has been to
start from the solution of the NJL model with U(1)${}_{L}\times$U(1)${}_{R}$ chiral
symmetry and specialize to real mean field solutions, thereby solving the
O(1)${}_{L}\times$O(1)${}_{R}$ [or Z${}_{2,L}\times$Z${}_{2,R}$] GN model L6 . Here we
generalize this approach by first reducing known solutions of the
U(2)${}_{L}\times$U(2)${}_{R}$ two-flavor NJL model to a novel O(2)${}_{L}\times$O(2)${}_{R}$
variant of the GN model, again by selecting real mean fields. This model in
turn will be shown to be dual to the pGN model, thus providing the key to the
missing large $N$ solution of the pGN model.
This paper is organized as follows. After a reminder of some elementary facts
about O(2) group theory in Sect. II, we propose the O(2)${}_{L}\times$O(2)${}_{R}$
symmetric descendant of the unitary two-flavor chiral GN model in Sect. III.
The vacuum structure and gap equation are determined in Sect. IV, the meson
spectrum in Sect. V, using the random phase approximation (RPA). Sects. VI and
VII are dedicated to the most elementary solitonic multifermion bound states,
the kink, and interactions of several kinks. In Sect. VIII we then show that
the analogue of twisted kinks exist as constituents of bound states, similar
to what happens in the GN model. The simplest breather is also constructed.
Sect. IX addresses a topic well known from the NJL model, namely massless
multi-fermion bound states and inhomogeneous structures at finite chemical
potentials (chiral spirals, kink-antikink crystal). Sect. X is perhaps the
most important one of this paper. Here we show the equivalence between the
O(2) chiral GN model and the so far unsolved ZM (or pGN) model. In Sect. XI
we summarize our findings, reviewing the preceding results in the light of the
duality from a physical point of view.
## II Elementary group theory: from U(2) to O(2)
In the defining representation, elements of the unitary group
U(2)=U(1)$\times$SU(2) can be parametrized as
$U=e^{-i\psi}e^{i\vec{n}\vec{\tau}\delta},\quad\vec{n}=\left(\begin{array}[]{c}\sin\theta\cos\phi\\\
\sin\theta\sin\phi\\\ \cos\theta\end{array}\right),$ (8)
with $\tau_{i}$ in the standard form of the Pauli matrices. An O(2) matrix is
a real U(2) matrix. There are two distinct ways to get a real matrix out of
(8):
1. $\theta=\phi=\pi/2$, $\psi=0$ (hence $n_{1}=n_{3}=0$):
$R(\delta)=e^{i\tau_{2}\delta}=\left(\begin{array}[]{rr}\cos\delta&\sin\delta\\ -\sin\delta&\cos\delta\end{array}\right),\quad{\rm det}\,R(\delta)=1.$ (9)
2. $\psi=\delta=\pi/2$, $\phi=0$ (hence $n_{2}=0$):
$I(\theta)=n_{1}\tau_{1}+n_{3}\tau_{3}=\tau_{3}e^{i\tau_{2}\theta}=\left(\begin{array}[]{rr}\cos\theta&\sin\theta\\ \sin\theta&-\cos\theta\end{array}\right),\quad{\rm det}\,I(\theta)=-1.$ (12)
The matrix $R(\delta)$ corresponds to a rotation around the center of the
($x,y$) plane by an angle $\delta$. The matrix $I(\theta)$ is a rotation by an
angle $\theta$ followed by a reflection in the new $x$-axis. These two
elements belong to the two connected components of the group O(2)
characterized by their determinants $\pm 1$, with only rotations forming a
subgroup, SO(2). The group manifolds of U(2) and O(2) are S${}^{1}\times$S${}^{3}$
and S${}^{1}+$S${}^{1}$, respectively. What we have done here is the two-dimensional analog
of the transition from U(1) to O(1) (or Z${}_{2}$), where one restricts the function
$e^{-i\psi}$ to its real values, $\pm 1$. Finally, note that O(2) is a
non-Abelian group. Products of its elements can easily be evaluated with the help
of
$I(\theta)=\tau_{3}R(\theta)=R(-\theta)\tau_{3},\quad
R(\theta_{1})R(\theta_{2})=R(\theta_{1}+\theta_{2}).$ (13)
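These relations are easy to confirm numerically (a quick numpy sketch; the angles are arbitrary, and the last assertion is a consequence of Eq. (13) rather than part of it):

```python
import numpy as np

def R(t):      # rotation, Eq. (9)
    return np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])

def I_ref(t):  # rotation followed by reflection, Eq. (12)
    return np.array([[np.cos(t), np.sin(t)], [np.sin(t), -np.cos(t)]])

tau3 = np.diag([1.0, -1.0])
a, b = 0.7, -1.9
assert np.allclose(I_ref(a), tau3 @ R(a))           # I(t) = tau3 R(t)
assert np.allclose(I_ref(a), R(-a) @ tau3)          # I(t) = R(-t) tau3
assert np.allclose(R(a) @ R(b), R(a + b))           # rotations form SO(2)
assert np.allclose(I_ref(a) @ I_ref(b), R(b - a))   # two reflections rotate
print("O(2) multiplication rules of Eq. (13) verified")
```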
## III Gross-Neveu model with O(2)${}_{L}\times$O(2)${}_{R}$ symmetry
The Lagrangian of the U(2)${}_{L}\times$U(2)${}_{R}$ symmetric two-flavor NJL model
reads L14
${\cal L}_{\rm
U(2)}=\bar{\psi}i\partial\\!\\!\\!/\psi+\frac{g^{2}}{4}\left[(\bar{\psi}\psi)^{2}+(\bar{\psi}\vec{\tau}\psi)^{2}+(\bar{\psi}i\gamma_{5}\psi)^{2}+(\bar{\psi}i\gamma_{5}\vec{\tau}\psi)^{2}\right].$
(14)
We choose a “chiral” representation of Dirac matrices ($\gamma_{5}$ diagonal),
$\gamma^{0}=\sigma^{1},\quad\gamma^{1}=i\sigma^{2},\quad\gamma_{5}=-\sigma_{3}.$
(15)
If we expand the isovector interaction terms, the interaction part of
Lagrangian (14) consists of eight squares of bilinears. We can now generate
simpler, Lorentz invariant Lagrangians by deleting some of these terms. We
propose the following choice, keeping only half of the interaction terms:
${\cal L}_{\rm
O(2)}=\bar{\psi}i\partial\\!\\!\\!/\psi+\frac{g^{2}}{4}\left[(\bar{\psi}\psi)^{2}+(\bar{\psi}\tau_{1}\psi)^{2}+(\bar{\psi}\tau_{3}\psi)^{2}+(\bar{\psi}i\gamma_{5}\tau_{2}\psi)^{2}\right].$
(16)
Although it is not obvious, the resulting model has an O(2)${}_{L}\times$O(2)${}_{R}$
chiral symmetry. Let us evaluate the transformation of the four remaining
bilinears induced by orthogonal chiral transformations of the spinor fields.
Isospin rotation of a left-handed spinor around the 2-axis,
$P_{L}e^{i\alpha\tau_{2}}+P_{R}:\quad\left(\begin{array}[]{c}\bar{\psi}\psi\\\
\bar{\psi}i\gamma_{5}\tau_{2}\psi\end{array}\right)^{\prime}=R(-\alpha)\left(\begin{array}[]{c}\bar{\psi}\psi\\\
\bar{\psi}i\gamma_{5}\tau_{2}\psi\end{array}\right),\quad\left(\begin{array}[]{c}\bar{\psi}\tau_{3}\psi\\\
\bar{\psi}\tau_{1}\psi\end{array}\right)^{\prime}=R(\alpha)\left(\begin{array}[]{c}\bar{\psi}\tau_{3}\psi\\\
\bar{\psi}\tau_{1}\psi\end{array}\right).$ (17)
Isospin rotation of a right-handed spinor around the 2-axis,
$P_{L}+P_{R}e^{i\alpha\tau_{2}}:\quad\left(\begin{array}[]{c}\bar{\psi}\psi\\\
\bar{\psi}i\gamma_{5}\tau_{2}\psi\end{array}\right)^{\prime}=R(\alpha)\left(\begin{array}[]{c}\bar{\psi}\psi\\\
\bar{\psi}i\gamma_{5}\tau_{2}\psi\end{array}\right),\quad\left(\begin{array}[]{c}\bar{\psi}\tau_{3}\psi\\\
\bar{\psi}\tau_{1}\psi\end{array}\right)^{\prime}=R(\alpha)\left(\begin{array}[]{c}\bar{\psi}\tau_{3}\psi\\\
\bar{\psi}\tau_{1}\psi\end{array}\right).$ (18)
Isospin reflection and rotation around the 2-axis of a left-handed spinor
$P_{L}\tau_{3}e^{i\alpha\tau_{2}}+P_{R}:\quad\left(\begin{array}[]{c}\bar{\psi}\psi\\\
\bar{\psi}i\gamma_{5}\tau_{2}\psi\end{array}\right)^{\prime}=R(\alpha)\left(\begin{array}[]{c}\bar{\psi}\tau_{3}\psi\\\
\bar{\psi}\tau_{1}\psi\end{array}\right),\quad\left(\begin{array}[]{c}\bar{\psi}\tau_{3}\psi\\\
\bar{\psi}\tau_{1}\psi\end{array}\right)^{\prime}=R(-\alpha)\left(\begin{array}[]{c}\bar{\psi}\psi\\\
\bar{\psi}i\gamma_{5}\tau_{2}\psi\end{array}\right).$ (19)
Isospin reflection and rotation around the 2-axis of a right-handed spinor
$P_{L}+P_{R}\tau_{3}e^{i\alpha\tau_{2}}:\quad\left(\begin{array}[]{c}\bar{\psi}\psi\\\
\bar{\psi}i\gamma_{5}\tau_{2}\psi\end{array}\right)^{\prime}=I(\alpha)\left(\begin{array}[]{c}\bar{\psi}\tau_{3}\psi\\\
\bar{\psi}\tau_{1}\psi\end{array}\right),\quad\left(\begin{array}[]{c}\bar{\psi}\tau_{3}\psi\\\
\bar{\psi}\tau_{1}\psi\end{array}\right)^{\prime}=I(\alpha)\left(\begin{array}[]{c}\bar{\psi}\psi\\\
\bar{\psi}i\gamma_{5}\tau_{2}\psi\end{array}\right).$ (20)
Thus rotations leave the combinations
$(\bar{\psi}\psi)^{2}+(\bar{\psi}i\gamma_{5}\tau_{2}\psi)^{2}$ and
$(\bar{\psi}\tau_{1}\psi)^{2}+(\bar{\psi}\tau_{3}\psi)^{2}$ separately
invariant. Reflections leave only the sum of all four terms invariant, as they
induce hopping between the two pairs of bilinears in addition to a rotation.
This confirms that model (16) indeed possesses an O(2)${}_{L}\times$O(2)${}_{R}$
chiral symmetry. Consequently, the $\tau_{2}$-components of the isospin vector
and axial vector currents are conserved,
$\partial_{\mu}\bar{\psi}\gamma^{\mu}\tau_{2}\psi=0,\quad\partial_{\mu}\bar{\psi}\gamma^{\mu}\gamma_{5}\tau_{2}\psi=0.$
(21)
In addition the model has SU($N$) color symmetry and U(1) fermion number
($\psi\to e^{i\alpha}\psi$). It also shares charge conjugation symmetry with
the GN model, with the familiar consequences, i.e., a real mean field in the
Hartree-Fock (HF) approach and a fermion spectrum that is symmetric about 0,
$C:\quad\psi\to\gamma_{5}\psi^{*},\quad\gamma_{5}H^{*}\gamma_{5}=-H.$ (22)
Here, $H$ is the HF Hamiltonian in coordinate space. In view of the conserved
charges, there are three chemical potentials one can add to the HF
Hamiltonian,
${\cal H}\to{\cal
H}-\mu\psi^{\dagger}\psi-\mu_{2}\psi^{\dagger}\tau_{2}\psi-\mu_{5,2}\psi^{\dagger}\gamma_{5}\tau_{2}\psi.$
(23)
This is important if one considers the phase diagram of the model.
## IV Vacua and dynamical mass
We start solving the chiral O(2) GN model (16) in the large $N$ limit by
determining its gap equation and vacuum structure. The HF equation reads
$(i\gamma^{\mu}\partial_{\mu}-S_{0}-S_{1}\tau_{1}-S_{3}\tau_{3}-i\gamma_{5}P_{2}\tau_{2})\psi=0$
(24)
with the mean fields given by the vacuum expectation values
$\left(\begin{array}[]{c}S_{0}\\\ S_{1}\\\ S_{3}\\\
P_{2}\end{array}\right)=-\frac{g^{2}}{2}\left(\begin{array}[]{c}\langle\bar{\psi}\psi\rangle\\\
\langle\bar{\psi}\tau_{1}\psi\rangle\\\
\langle\bar{\psi}\tau_{3}\psi\rangle\\\
\langle\bar{\psi}i\gamma_{5}\tau_{2}\psi\rangle\end{array}\right).$ (25)
Denoting 2-component isospinors of chirality $L/R$ by $\psi_{1,2}$, Eq. (24)
may be rewritten in canonical form as
$i\partial_{t}\left(\begin{array}[]{c}\psi_{1}\\\
\psi_{2}\end{array}\right)=\left(\begin{array}[]{cc}i\partial_{x}&\Delta^{T}\\\
\Delta&-i\partial_{x}\end{array}\right)\left(\begin{array}[]{c}\psi_{1}\\\
\psi_{2}\end{array}\right).$ (26)
Here, $\Delta$ is the 2$\times$2 flavor matrix
$\Delta=S_{0}+S_{1}\tau_{1}+S_{3}\tau_{3}-iP_{2}\tau_{2}=\left(\begin{array}[]{cc}S_{0}+S_{3}&S_{1}-P_{2}\\\
S_{1}+P_{2}&S_{0}-S_{3}\end{array}\right).$ (27)
The fact that this matrix is real reflects the orthogonal symmetry. In order
to determine the HF ground state, we have to diagonalize the 4$\times$4
Hamiltonian
$h=\left(\begin{array}[]{cc}-k&\Delta^{T}\\\ \Delta&k\end{array}\right)$ (28)
with a constant, real matrix $\Delta$. We find four modes with energies
$\pm\sqrt{m_{1}^{2}+k^{2}},\pm\sqrt{m_{2}^{2}+k^{2}}$ where
$m_{1,2}^{2}=\left(\sqrt{S_{0}^{2}+P_{2}^{2}}\pm\sqrt{S_{1}^{2}+S_{3}^{2}}\right)^{2}.$
(29)
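As a quick numerical cross-check of Eq. (29), one can diagonalize the Hamiltonian (28) for random constant mean fields (a numpy sketch; all numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
S0, S1, S3, P2 = rng.normal(size=4)       # arbitrary constant mean fields
k = 0.83                                  # arbitrary momentum
one = np.eye(2)
tau1 = np.array([[0., 1.], [1., 0.]])
tau3 = np.diag([1., -1.])
mitau2 = np.array([[0., -1.], [1., 0.]])  # -i tau_2, a real matrix

Delta = S0*one + S1*tau1 + S3*tau3 + P2*mitau2        # Eq. (27), real
h = np.block([[-k*one, Delta.T], [Delta, k*one]])     # Eq. (28), symmetric

m1 = np.sqrt(S0**2 + P2**2) + np.sqrt(S1**2 + S3**2)  # Eq. (29)
m2 = abs(np.sqrt(S0**2 + P2**2) - np.sqrt(S1**2 + S3**2))
expected = sorted(s*np.sqrt(m**2 + k**2) for s in (-1, 1) for m in (m1, m2))
print(np.allclose(sorted(np.linalg.eigvalsh(h)), expected))   # True
```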
Minimizing the HF vacuum energy density
${\cal E}_{\rm
vac}=-N\int_{-\Lambda/2}^{\Lambda/2}\frac{dk}{2\pi}\left(\sqrt{k^{2}+m_{1}^{2}}+\sqrt{k^{2}+m_{2}^{2}}\,\right)+\frac{m_{1}^{2}+m_{2}^{2}}{2g^{2}}$
(30)
with respect to $m_{1},m_{2}$ yields two gap equations
$0=1+\frac{Ng^{2}}{2\pi}\ln\frac{m_{i}^{2}}{\Lambda^{2}}\quad(i=1,2)$ (31)
that are only compatible if $m_{1}=m_{2}=m$. The final gap equation is
identical to that of the GN model and has the same implications (dimensional
transmutation, asymptotic freedom). The renormalized vacuum energy density
becomes
${\cal E}_{\rm vac}=-N\frac{m^{2}}{2\pi}.$ (32)
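For completeness, the gap equation (31) can be solved in closed form, which makes dimensional transmutation explicit:
$m=\Lambda\,e^{-\pi/(Ng^{2})},$
so the dimensionless coupling $Ng^{2}$ is traded for the single dimensionful scale $m$, and $g^{2}(\Lambda)\to 0$ logarithmically as $\Lambda\to\infty$ at fixed $m$ (asymptotic freedom).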
According to Eq. (29), there are two possible choices leading to
$m_{1}=m_{2}$:
1) $S_{1}=S_{3}=0,\quad S_{0}^{2}+P_{2}^{2}=m^{2},\quad S_{0}=m\cos\alpha,\quad P_{2}=m\sin\alpha,\quad\Delta=mR(-\alpha),$
2) $S_{0}=P_{2}=0,\quad S_{1}^{2}+S_{3}^{2}=m^{2},\quad S_{3}=m\cos\alpha,\quad S_{1}=m\sin\alpha,\quad\Delta=mI(\alpha).$ (33)
As expected, the vacuum manifold is the group manifold of O(2) comprising two
disjoint circles in the ($S_{0},P_{2}$) and ($S_{1},S_{3})$ planes. For
simplicity we shall denote a vacuum in the ($S_{0},P_{2}$) plane as “rotation
vacuum” and in the ($S_{1},S_{3}$) plane as “reflection vacuum” to indicate
their origin in the two components of the group O(2). When the O(2) chiral
symmetry is spontaneously broken, the system has to pick one circle
(rotation or reflection) and a particular point on this circle (angle
$\alpha$). All of these vacua are of course on an equal footing. If not
indicated otherwise, our standard choice will be ($S_{0}=m,P_{2}=0,\Delta=m$)
for the rotation vacuum and ($S_{1}=0,S_{3}=m,\Delta=m\tau_{3}$) for the
reflection vacuum, so that $\Delta$ is diagonal in the standard vacua.
## V Meson spectrum in random phase approximation (RPA)
Here we follow closely previous works on other GN model variants, using the
equations of motion method for the one-body density matrix $Q(x,y)$ L15 ; L16
. The starting point is the equation
$i\partial_{t}Q(x,y)=-i\left[\partial_{y}Q(x,y)\gamma_{5}+\gamma_{5}\partial_{x}Q(x,y)\right]-\frac{Ng^{2}}{2}\sum_{n=1}^{4}\left\{{\rm Tr}[{\cal O}_{n}Q(x,y)]{\cal O}_{n}Q(x,y)-Q(x,y){\cal O}_{n}{\rm Tr}[{\cal O}_{n}Q(x,y)]\right\}$ (34)
with
${\cal O}_{1}=\gamma^{0},\quad{\cal O}_{2}=\gamma^{0}\tau_{1},\quad{\cal
O}_{3}=\gamma^{0}\tau_{3},\quad{\cal O}_{4}=i\gamma^{1}\tau_{2}$ (35)
matching the interactions in Lagrangian (16). Let us first assume the rotation
vacuum ($S_{0}=m,P_{2}=0$). Using free positive and negative energy spinors
for one flavor ($E_{k}=\sqrt{k^{2}+m^{2}}$)
$\left(\begin{array}[]{cc}-k&m\\ m&k\end{array}\right)u(k)=E_{k}u(k),\quad u(k)=\left(\begin{array}[]{c}u_{1}\\ u_{2}\end{array}\right)=\frac{1}{\sqrt{2E_{k}(E_{k}+k)}}\left(\begin{array}[]{c}m\\ E_{k}+k\end{array}\right),$ (36)
$\left(\begin{array}[]{cc}-k&m\\ m&k\end{array}\right)v(k)=-E_{k}v(k),\quad v(k)=\left(\begin{array}[]{c}v_{1}\\ v_{2}\end{array}\right)=\frac{1}{\sqrt{2E_{k}(E_{k}-k)}}\left(\begin{array}[]{c}m\\ k-E_{k}\end{array}\right),$ (37)
the corresponding free massive states in the 2-flavor case are
$u_{I}=\left(\begin{array}[]{c}u_{1}\\\ 0\\\ u_{2}\\\
0\end{array}\right),\quad u_{II}=\left(\begin{array}[]{c}0\\\ u_{1}\\\ 0\\\
u_{2}\end{array}\right),\quad v_{I}=\left(\begin{array}[]{c}v_{1}\\\ 0\\\
v_{2}\\\ 0\end{array}\right),\quad v_{II}=\left(\begin{array}[]{c}0\\\
v_{1}\\\ 0\\\ v_{2}\end{array}\right).$ (38)
The isospin labels $I,II$ will be denoted by Greek letters below. Linearizing
Eq. (34) in the fluctuation around the vacuum density matrix and sandwiching
it between vacuum and one meson states of momentum $P$ and energy ${\cal
E}(P)$, we arrive at the RPA equations
${\cal E}(P)X_{\alpha\beta}(P,k)=E(k-P,k)X_{\alpha\beta}(P,k)-\frac{Ng^{2}}{2}\sum_{n=1}^{4}v_{\alpha}^{\dagger}(k-P){\cal O}_{n}u_{\beta}(k)\Phi_{n}(P),$
${\cal E}(P)Y_{\alpha\beta}(P,k)=-E(k-P,k)Y_{\alpha\beta}(P,k)+\frac{Ng^{2}}{2}\sum_{n=1}^{4}u_{\alpha}^{\dagger}(k-P){\cal O}_{n}v_{\beta}(k)\Phi_{n}(P),$
$E(k-P,k)=E_{k-P}+E_{k},$ (39)
with
$\Phi_{n}(P)=\int\frac{dq}{2\pi}\left\{{\cal B}^{n}_{\delta\gamma}(P,q)Y_{\gamma\delta}(P,q)+{\cal C}^{n}_{\delta\gamma}(P,q)X_{\gamma\delta}(P,q)\right\}$ (40)
and
${\cal B}^{n}_{\delta\gamma}(P,q)=v_{\delta}^{\dagger}(q){\cal O}_{n}u_{\gamma}(q-P),\quad{\cal C}^{n}_{\delta\gamma}(P,q)=u_{\delta}^{\dagger}(q){\cal O}_{n}v_{\gamma}(q-P).$ (41)
Eqs. (39) are integral equations with a separable kernel, as is characteristic
for all GN type models. They can thus be solved analytically. By first solving
Eq. (39) for $X_{\alpha\beta}$ and $Y_{\alpha\beta}$, we get
$X_{\alpha\beta}(P,k)=\sum_{n=1}^{4}{\cal A}_{\alpha\beta}^{n}(P,k)\Phi_{n}(P),\quad Y_{\alpha\beta}(P,k)=\sum_{n=1}^{4}{\cal D}_{\alpha\beta}^{n}(P,k)\Phi_{n}(P),$ (42)
with the definitions
${\cal A}_{\alpha\beta}^{n}(P,k)=-Ng^{2}\frac{1}{{\cal E}(P)-E(k-P,k)}v_{\alpha}^{\dagger}(k-P){\cal O}_{n}u_{\beta}(k),\quad{\cal D}_{\alpha\beta}^{n}(P,k)=Ng^{2}\frac{1}{{\cal E}(P)+E(k-P,k)}u_{\alpha}^{\dagger}(k-P){\cal O}_{n}v_{\beta}(k).$ (43)
Inserting the results (42) into Eq. (40) leads to the homogeneous linear
system for $\Phi_{n}(P)$,
$\Phi_{n}(P)=\sum_{m}{\cal M}_{nm}(P)\Phi_{m}(P).$ (44)
The 4$\times$4 matrix ${\cal M}_{nm}(P)$ is given by
${\cal M}_{nm}(P)=\int\frac{dq}{2\pi}\left\{{\cal B}_{\delta\gamma}^{n}(P,q){\cal D}_{\gamma\delta}^{m}(P,q)+{\cal C}_{\delta\gamma}^{n}(P,q){\cal A}_{\gamma\delta}^{m}(P,q)\right\}.$ (45)
An explicit analytical calculation shows that ${\cal M}_{nm}$ is diagonal,
hence Eq. (44) reduces to
${\cal M}_{nn}=1\quad({\rm no\ sum}).$ (46)
One finds only two different diagonal matrix elements ${\cal M}_{nn}$. In the
isovector pseudoscalar channel $P_{2}$,
${\cal
M}_{44}=\frac{Ng^{2}}{2}\int\frac{dk}{2\pi}\left(\frac{1}{E(k-P)}+\frac{1}{E(k)}\right)\left(\frac{P^{2}-E^{2}(k-P,k)}{{\cal
E}^{2}(P)-E^{2}(k-P,k)}\right).$ (47)
The choice ${\cal E}^{2}(P)=P^{2}$ converts the condition ${\cal M}_{44}=1$
into the vacuum gap equation. This proves the existence of a massless mode,
the “would be Goldstone boson” fluctuating in the direction tangential to the
vacuum circle. In the other three channels, there is a (marginally bound)
massive state with the common mass $2m$, the same as in the GN model. The
corresponding diagonal matrix elements ${\cal M}_{nn}$ read
${\cal M}_{11}={\cal M}_{22}={\cal
M}_{33}=\frac{Ng^{2}}{2}\int\frac{dk}{2\pi}\left(\frac{1}{E(k-P)}+\frac{1}{E(k)}\right)\left(\frac{4m^{2}+P^{2}-E^{2}(k-P,k)}{{\cal
E}^{2}(P)-E^{2}(k-P,k)}\right).$ (48)
Here the ansatz ${\cal E}^{2}(P)=4m^{2}+P^{2}$ leads again back to the gap
equation. The massive bound states are scalar mesons in the
$S_{0},S_{1},S_{3}$ channels.
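The collapse of the momentum-dependent bracket can also be seen numerically: with ${\cal E}^{2}(P)=P^{2}$ in Eq. (47) [or $4m^{2}+P^{2}$ in Eq. (48)] the last factor equals 1, and ${\cal M}_{nn}$ reduces to the regularized gap-equation integral (a numpy sketch; the cutoff and momentum are assumed values):

```python
import numpy as np

m, Lam, P = 1.0, 1e4, 0.7                  # mass, UV cutoff, meson momentum
Ng2 = 2 * np.pi / np.log(Lam**2 / m**2)    # coupling fixed by gap equation (31)
k = np.linspace(-Lam/2, Lam/2, 2_000_001)
dk = k[1] - k[0]
E = lambda q: np.sqrt(q**2 + m**2)

# With the pole ansatz, the bracket in Eqs. (47)/(48) equals 1, leaving
M = Ng2 / 2 * np.sum(1/E(k - P) + 1/E(k)) * dk / (2 * np.pi)
print(M)    # ~= 1: confirms the massless mode and the mass-2m mesons
```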
One can repeat the same calculation by starting from the reflection vacuum
($S_{3}=m,S_{1}=0$). The only change in the formalism is the fact that the
spinors for isospin down have to be evaluated with $m\to-m$, since the vacuum
has $\Delta=m\tau_{3}$ rather than $\Delta=m$. The results are the same,
except that the massless mode now appears in the $S_{1}$ channel, again
tangential to the reflection vacuum circle in the chosen vacuum point. The
mesons in the $S_{0},S_{3},P_{2}$ channels all have the same mass 2$m$. At
first sight, it looks as if now a scalar meson would be massless and the
pseudoscalar one massive. This is not the case. When using the rotation
vacuum, the parity operation on spinors has the usual form,
$P:\quad\psi(x)\to\gamma^{0}\psi(-x),$ (49)
since the vacuum is also standard. To reach the reflection vacuum $m\tau_{3}$,
one has to perform the chiral transformation $P_{L}\tau_{3}+P_{R}$. Under this
transformation, the matrix $\gamma^{0}$ in (49) goes over into
$\tau_{3}\gamma^{0}$, so that now $S_{0},S_{3},P_{2}$ are scalars whereas
$S_{1}$ is pseudoscalar.
It is noteworthy that all massive mesons found in GN-type models so far share
the common mass $M=2m$. This does not mean that they are non-interacting,
otherwise the dispersion relation would not be ${\cal
E}(P)=\sqrt{4m^{2}+P^{2}}$. This “universality” is most likely a side effect
of the large $N$ limit. The RPA goes beyond the leading order (HF) by taking
into account fluctuations of O($1/\sqrt{N}$). It is plausible that meson
binding energies are suppressed by at least a factor of $1/N$ and therefore
not yet visible at this order of the large $N$ expansion.
We conclude this section with a table comparing the meson content of various
chirally symmetric GN-type models, to emphasize the common aspects as well as
the differences due to different symmetries. Only two rules govern the whole
picture in table 1: The total number of mesons is equal to the number of
squares of bilinears in the interaction Lagrangian, and the number of massless
modes equals the number of flat directions on the vacuum manifold.
| | chiral symmetry | vacuum manifold | mesons | massless | massive |
|---|---|---|---|---|---|
| GN | O(1)${}_{L}\times$O(1)${}_{R}$ | Z${}_{2}$ | 1 | – | 1 |
| NJL | U(1)${}_{L}\times$U(1)${}_{R}$ | S${}^{1}$ | 2 | 1 | 1 |
| isoNJL | SU(2)${}_{L}\times$SU(2)${}_{R}$ | S${}^{3}$ | 4 | 3 | 1 |
| O(2) GN | O(2)${}_{L}\times$O(2)${}_{R}$ | S${}^{1}+$S${}^{1}$ | 4 | 1 | 3 |
| U(2) GN | U(2)${}_{L}\times$U(2)${}_{R}$ | S${}^{1}\times$S${}^{3}$ | 8 | 4 | 4 |
Table 1: Overview of the meson content of different chiral GN models.
## VI Basic kink
A kink denotes a multifermion bound state connecting two different vacua at
$x\to\pm\infty$. We use units where $m=1$ from now on to make contact with the
literature. To set the stage, let us briefly recall the twisted kink of the
one-flavor NJL model originally due to Shei L17 . The kink at rest connecting
the U(1) vacua $e^{i\alpha}$ at $x\to-\infty$ and $e^{i\beta}$ at $x\to\infty$
has the mean field
$\Delta(x)=e^{i\alpha}(1-f(x))+e^{i\beta}f(x),\quad
f(x)=\frac{V(x)}{1+V(x)},\quad V(x)=e^{2x\sin\theta}.$ (50)
We have written it in a form where the interpolating structure is clear, since
$f$ and $(1-f)$ are kink-like scalar functions. The angle $\theta$ in $V(x)$
is called the twist angle and is related to the difference of the two asymptotic
vacuum phases via
$\theta=\frac{1}{2}(\alpha-\beta).$ (51)
Since U(1) is Abelian and $\Delta$ transforms under chiral transformations as
follows
$\psi_{1}\to e^{i\alpha_{1}}\psi_{1},\quad\psi_{2}\to
e^{i\alpha_{2}}\psi_{2},\quad\Delta\to e^{i\alpha_{1}}\Delta
e^{-i\alpha_{2}},$ (52)
the twist angle is chirally invariant. As such it has a physical meaning,
determining the slope of the kink profile and its fermion number. There is a
bound state with energy $\epsilon_{0}=\cos\theta$, occupied by $N\sin\theta$
fermions. If we restrict ourselves to real $\Delta$ as appropriate for the GN
model with discrete chiral symmetry, we have to choose $\theta=\pm\pi/2$ and
either $\alpha=\pi,\beta=0$ (kink) or $\alpha=0,\beta=\pi$ (antikink). Here,
the kink profile can only assume the steepest shape. The bound state moves to
0 energy, the center of the mass gap. We get
$\Delta(x)\to S(x)=\pm\tanh x,$ (53)
an early result attributed to Callan, Coleman, Gross and Zee in Refs. L18 ;
L19 .
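Specializing Eq. (50) this way is easily verified (a short numpy sketch with the kink choice $\alpha=\pi$, $\beta=0$):

```python
import numpy as np

theta = np.pi / 2                     # maximal twist angle
alpha, beta = np.pi, 0.0              # asymptotic vacua e^{i alpha}, e^{i beta}
x = np.linspace(-8, 8, 401)
V = np.exp(2 * x * np.sin(theta))
f = V / (1 + V)
Delta = np.exp(1j*alpha) * (1 - f) + np.exp(1j*beta) * f   # Eq. (50)
print(np.allclose(Delta.imag, 0.0))           # real mean field, as required
print(np.allclose(Delta.real, np.tanh(x)))    # reduces to the GN kink, Eq. (53)
```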
Let us repeat the reduction from unitary to orthogonal chiral symmetry, now
for two flavors. We can start directly from the known result for the twisted
kink in the U(2)${}_{L}\times$U(2)R GN model L13 ; L14 . If the vacuum at
$x\to-\infty$ is taken as 1, the formalism yields the expression
$\Delta(x)=\frac{1+UV(x)}{1+V(x)}$ (54)
with
$V(x)=e^{2\sin\theta x},\quad
U=(1-\vec{p}\,\vec{p}^{\,\dagger})+e^{-2i\theta}\vec{p}\,\vec{p}^{\,\dagger}.$
(55)
$U$ is the vacuum at $x\to\infty$, the vector $\vec{p}$ a complex,
2-dimensional vector normalized to $\vec{p}^{\,\dagger}\vec{p}=1$. We can
interpret the expression for $U$ as spectral representation of a unitary
2$\times$2 matrix with eigenvalues 1 and $e^{-2i\theta}$. Choosing a frame
where $\vec{p}=(1,0)$, we recover the NJL twisted kink for isospin up and the
vacuum for isospin down. This reduces the U(2) twisted kink to the U(1)
twisted kink, with $\theta$ the (chirally invariant) twisting angle. The
unique way to get a real solution as needed for the O(2) model is again to
choose the maximal twist angle, $\theta=\pi/2$. We then recover the real GN
kink in the isospin up state. From the O(2) point of view, the kink connects
the rotational vacuum 1 with the reflection vacuum $\tau_{3}$. It is
impossible to find a kink connecting two points on the same vacuum circle.
Going back to the form (55) of $U$, we can construct a more general kink by
choosing a real $\vec{p}$,
$\vec{p}=\left(\begin{array}[]{c}\cos\alpha\\\ \sin\alpha\end{array}\right).$
(56)
The mean field then becomes the matrix
$\Delta=(1-f)-I(2\alpha)f,\quad f=\frac{V}{1+V}=\frac{1}{2}(1+\tanh x).$ (57)
By a further chiral transformation we arrive at the most general O(2) kink
interpolating between arbitrary points on the rotation and reflection circles,
$\Delta=R(\alpha_{1})(1-f)-I(\alpha_{2})f.$ (58)
Here one has to choose $\alpha=\alpha_{1}+\alpha_{2}$ in $\vec{p}$. This kink
evolves from the rotation vacuum $R(\alpha_{1})$ at $x\to-\infty$ to the
reflection vacuum $-I(\alpha_{2})$ at $x\to\infty$. Unlike the twist angle in
the unitary models, here the angles $\alpha_{1},\alpha_{2}$ have no direct
physical significance, being dependent on the chiral frame chosen. This is
consistent with the fact that the $x$ dependence is independent of the angles
$\alpha_{i}$. The true twist angle $\theta=\pi/2$ is always maximal, like in
the GN model. The opposite kink connecting a point on the reflection circle to
one on the rotation circle can easily be found by changing the sign of $x$,
$\Delta=-I(2\alpha_{2})(1-f)+R(2\alpha_{1})f.$ (59)
The reason why we have written down the most general form of the kinks is the
fact that in scattering or bound state configurations with several kinks, the
vectors $\vec{p}_{i}$ cannot all be rotated simultaneously into a specific
direction. Then the differences $\alpha_{i}-\alpha_{j}$ do acquire physical
significance.
Finally we mention that the mass of the kink is the same as in the GN model,
$M=N/(2\pi)$, independent of its fermion number.
## VII Scattering of kinks
We first use the existing tools to derive the scattering of two twisted kinks in
the U(2)${}_{L}\times$U(2)${}_{R}$ GN model L14 . In order to turn the result into a
solution of the O(2)${}_{L}\times$O(2)${}_{R}$ symmetric model, we choose as input
two real vectors
$\vec{p}_{i}=\left(\begin{array}[]{c}\cos\alpha_{i}\\\
\sin\alpha_{i}\end{array}\right),$ (60)
twist angles $\theta_{i}=\pi/2$ and positions of poles in the complex spectral
parameter plane
$\zeta_{i}=\frac{i}{\eta_{i}},\quad\eta_{i}=\sqrt{\frac{1+v_{i}}{1-v_{i}}}.$
(61)
We also introduce the ratio $\eta=\eta_{1}/\eta_{2}$. The matrix $\omega$ is
taken to be diagonal. The result for kink-antikink scattering in the O(2)
model is the set of real mean fields
${\cal D}S_{0}=1-\left(1-\frac{2(1+\eta^{2})\cos^{2}\alpha_{12}}{(1+\eta)^{2}}\right)V_{1}V_{2},$
${\cal D}S_{1}=-V_{1}\sin 2\alpha_{1}-V_{2}\sin 2\alpha_{2},$
${\cal D}S_{3}=-V_{1}\cos 2\alpha_{1}-V_{2}\cos 2\alpha_{2},$
${\cal D}P_{2}=-\left(\frac{1-\eta}{1+\eta}\right)\sin 2\alpha_{12}\,V_{1}V_{2},$ (62)
with $\alpha_{12}=\alpha_{1}-\alpha_{2}$. Here, ${\cal D}$ is the common
denominator
${\cal D}=1+V_{1}+V_{2}+\kappa
V_{1}V_{2},\quad\kappa=\left(1-\frac{4\eta\cos^{2}\alpha_{12}}{(1+\eta)^{2}}\right).$
(63)
The $V_{i}$ factors carry the ($x,t$) dependence,
$V_{i}=e^{2x_{i}^{\prime}},\quad
x_{i}^{\prime}=\frac{x-x^{0}_{i}-v_{i}t}{\sqrt{1-v_{i}^{2}}}.$ (64)
The vacuum at $x\to-\infty$ has been taken to be 1. The orthogonal matrices
$U_{1},U_{2}$ appear in intermediate states of the scattering process, whereas
$U_{12}$ is the vacuum at $x\to\infty$. One finds
$U_{i}=1-2\vec{p}_{i}\vec{p}_{i}^{\,\dagger}=-I(2\alpha_{i})$ (65)
and
$U_{12}=\frac{1}{1+\eta^{2}-2\eta\cos
2\alpha_{12}}\left(\begin{array}[]{cc}(1+\eta^{2})\cos
2\alpha_{12}-2\eta&(1-\eta^{2})\sin 2\alpha_{12}\\\ -(1-\eta^{2})\sin
2\alpha_{12}&(1+\eta^{2})\cos 2\alpha_{12}-2\eta\end{array}\right).$ (66)
$U_{12}$ is a rotation matrix, in contrast to the reflection matrices $U_{i}$,
$U_{12}=R(\Phi),\quad\tan\Phi=\frac{(1-\eta^{2})\sin
2\alpha_{12}}{(1+\eta^{2})\cos 2\alpha_{12}-2\eta}.$ (67)
The mean field for the kink-antikink collision can be concisely represented as
$\Delta=\frac{1+V_{1}U_{1}+V_{2}U_{2}+\kappa
V_{1}V_{2}U_{12}}{1+V_{1}+V_{2}+\kappa V_{1}V_{2}}$ (68)
where the four matrices ($1,\,U_{1},\,U_{2},\,U_{12}$) are now orthogonal as
opposed to unitary in the U(2) model. The physical meaning of these matrices
is that they represent all possible vacua if the kinks are well separated in
space L14 .
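The structure of these matrices is easy to confirm numerically: the $U_{i}$ of Eq. (65) are reflections, while $U_{12}$ of Eq. (66) is orthogonal with unit determinant, i.e. the rotation $R(\Phi)$ of Eq. (67) (a sketch with arbitrary parameters):

```python
import numpy as np

def R(t):
    return np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])

a1, a2, eta = 0.4, -1.1, 1.7                # arbitrary angles and rapidity ratio
a12 = a1 - a2
p1 = np.array([np.cos(a1), np.sin(a1)])
p2 = np.array([np.cos(a2), np.sin(a2)])
U1 = np.eye(2) - 2*np.outer(p1, p1)         # Eq. (65): reflection, det = -1
U2 = np.eye(2) - 2*np.outer(p2, p2)

den = 1 + eta**2 - 2*eta*np.cos(2*a12)
c = (1 + eta**2)*np.cos(2*a12) - 2*eta
s = (1 - eta**2)*np.sin(2*a12)
U12 = np.array([[c, s], [-s, c]]) / den     # Eq. (66)

print(np.isclose(np.linalg.det(U1), -1), np.isclose(np.linalg.det(U2), -1))
print(np.allclose(U12 @ U12.T, np.eye(2)), np.isclose(np.linalg.det(U12), 1))
print(np.allclose(U12, R(np.arctan2(s, c))))   # Eq. (67): U12 = R(Phi)
```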
Let us pause for a moment and ask ourselves what is the intrinsic (chiral
frame independent) content of expression (68). Chiral symmetry has already
been employed to set the initial vacuum equal to 1. The only residual O(2)
transformations allowed without changing the initial vacuum are
$\Delta\to R(\beta)\Delta R(-\beta),\quad\Delta\to I(\gamma)\Delta I(\gamma).$
(69)
If applied to the final vacuum $U_{12}$, the rotation acts as the identity
whereas the reflection changes the sign of $\Phi$, or, equivalently,
interchanges $\vec{p}_{1}$ and $\vec{p}_{2}$. In the intermediate vacua
$U_{1},U_{2}$, both angles are shifted by the same amount. This reflects the
fact that the only chirally invariant quantity one can form out of the
$\vec{p}_{i}$ is the scalar product $\vec{p}_{1}\vec{p}_{2}$.
Due to the four components $S_{0},S_{1},S_{3},P_{2}$, the kink-kink collision
process looks rather complicated. What happens can be described qualitatively
as follows. Let us assume that kink 1 is incident from the left, kink 2 from
the right. The asymptotic vacua are the rotational vacua 1 (at $x\to-\infty$)
and $R(\Phi)$ (at $x\to\infty$) throughout the collision process. In between
the kinks, the system is in the reflection vacuum $U_{1}$ before the collision
and $U_{2}$ after the collision. The region where the system is in the
reflection vacuum is always bounded by the position of the two kinks. This is
exactly what one would expect from a collision between two domain walls, here
separating the rotational from the reflection phases. The difference between
the U(2) and the O(2) chirally symmetric cases is the fact that the intrinsic
twist of the kinks is always maximal in the O(2) model, similar to the
difference between the U(1) (NJL) and O(1) (GN) models.
Finally, let us remark that one also finds a bound state at rest by
specializing to $\eta_{1}=\eta_{2}=1$. This is similar to the unitary models,
but no such bound state exists in the one-flavor GN model.
## VIII Baryon and breather with twisted kink constituents
Here we start from the general two-soliton solution of the U(2) model L14 and
specialize it as follows. We choose $\eta_{1}=\eta_{2}=1$ (bound state at
rest) and a diagonal matrix $\omega$ (no breathers). In order to get a real
mean field $\Delta$, we have to pair the twist angles to
$(\theta_{1}=\pi-\theta_{2}=\theta)$ and to choose $\omega_{11}=\omega_{22}$,
like in the GN model. Moreover, it turns out that we need
$\alpha_{1}=\alpha_{2}=\alpha$. Consider the simplest case first, $\alpha=0$.
We find
$\Delta=\left(\begin{array}[]{cc}\frac{1+2V\cos
2\theta+V^{2}\cos^{2}\theta}{1+2V+V^{2}\cos^{2}\theta}&0\\\
0&1\end{array}\right),\quad V=e^{2x\sin\theta}.$ (70)
This is nothing but the Dashen, Hasslacher, Neveu (DHN) baryon L18 in the
upper component and the vacuum in the lower component. If $\theta$ is close to
its maximum value of $\pi/2$, the DHN scalar potential has the form of a well
separated kink-antikink pair. Asymptotically, the vacuum is 1, but inside the
baryon the vacuum -1 is approached. In the present two-flavor case, the
corresponding vacua are 1 (rotation) and $\tau_{3}$ (reflection). A more
transparent representation of the baryon mean field (70) is
$\Delta(\alpha=0)=(1-g)-\tau_{3}g,\quad g=\frac{2V\sin^{2}\theta}{1+2V+V^{2}\cos^{2}\theta},\quad S_{\rm DHN}=1-2g.$ (71)
The function $g$ vanishes asymptotically on both sides, in contrast to the
kink-like function $f$ introduced above for a single kink. We can now perform
a rotation, leaving the asymptotic vacua unchanged but transforming the
central vacuum into an arbitrary reflection matrix. This yields the baryon
solution for any $\alpha$,
$\Delta(\alpha)=1-g-I(2\alpha)g.$ (72)
The same result would have been obtained by setting
$\alpha_{1}=\alpha_{2}=\alpha$ rather than 0 from the beginning. The maximum
of $g$ is at
$x_{\rm max}=-\frac{\ln(\cos\theta)}{2\sin\theta},\quad g_{\rm
max}=1-\cos\theta.$ (73)
Hence, at the center of the baryon,
$\Delta=\cos\theta-(1-\cos\theta)I(2\alpha).$ (74)
If $\theta\approx\pi/2$, as appropriate for a well separated kink and antikink,
$\cos\theta\approx 0$ and we see that we are in the reflection phase inside the
baryon. Since the unitary transformation may be viewed as a change of
frame, the angle $\alpha$ is irrelevant for the single baryon. However, it will
become relevant in problems involving more than two kinks. The single particle
spectrum of the O(2) baryon is the same as that of the DHN baryon in the GN
model. There are two bound states at energy $\pm\cos\theta$ and the spectrum
is symmetric about 0. Also the occupation fractions match those of the DHN
baryon.
To get a breather, we have to repeat the above calculation with an off-
diagonal $\omega$-matrix. We choose
$\omega=\left(\begin{array}[]{cc}\cosh\chi&\sinh\chi\\\
\sinh\chi&\cosh\chi\end{array}\right),\quad{\rm det\,}\omega=1.$ (75)
The results for the breather have the same general form as for the baryon,
except that $g$ acquires a time dependence in the rest frame,
$g=\frac{2V\sin\theta\left[\cosh\chi\sin\theta-\sinh\chi\sin(2t\cos\theta+\theta)\right]}{1+2V\left[\cosh\chi-\sin\theta\sinh\chi\sin(2t\cos\theta+\theta)\right]+V^{2}\cos^{2}\theta}.$
(76)
At $\chi=0$ one recovers the static baryon, Eq. (71).
So far, we have essentially reproduced the results of DHN L18 in a new
setting. Now we can also look at more complicated multibaryon and breather
problems where new phenomena are expected. All we have to do is perform the
calculation in the U(2) GN model and choose the parameters such that all mean
fields are real. Judging from the experience with the one flavor GN and NJL
models, this should be more efficient than trying to determine solutions of
the O(2) model directly.
## IX Massless hadrons, chiral spiral and phase diagram
In this section, we briefly consider topics which have come up previously in
the NJL model, the SU(2)${}_{L}\times$SU(2)${}_{R}$ isoNJL model and the
U(2)${}_{L}\times$U(2)${}_{R}$ GN model. Whenever such a model possesses a “chiral
circle” and massless mesons, the possibility arises to generate both massless
multi-fermion bound states and “chiral spiral” type matter phases L15 ; L20 .
Whereas the massless mesons are related to small fluctuations around the
vacuum into some flat direction (Goldstone modes), massless baryons correspond
to one full turn around the chiral circle. The axial anomaly links winding
number to fermion density. Due to the common charge conjugation symmetry, the
situation of the O(2) chiral GN model is perhaps closest to the one of the
isoNJL model. We refer to Sect. V of Ref. L21 for the pertinent discussion in
the isoNJL model. Here, only the condensates $\bar{\psi}\psi$ and
$\bar{\psi}i\gamma_{5}\tau_{3}\psi$ had to be used. The crucial ingredient was
the unitary transformation
$\psi^{\prime}=U\psi=e^{-ibx\gamma_{5}\tau_{3}}\psi.$ (77)
If applied to the vacuum HF equation with potential $\gamma^{0}m$, it
generates an isospin chemical potential $b\tau_{3}$ from the kinetic term and
changes the mass term into the characteristic chiral spiral condensate
$U^{\dagger}(-i\gamma_{5}\partial_{x})U=-i\gamma_{5}\partial_{x}-b\tau_{3},\quad
U^{\dagger}\gamma^{0}U=\gamma^{0}\cos 2bx+i\gamma^{1}\tau_{3}\sin 2bx.$ (78)
This construction cannot be used for generating fermion density and the
corresponding chemical potential in the two-flavor model. Here, the GN model
with discrete chiral symmetry has taught us how to minimize the energy, namely
by generating a real kink-antikink crystal described by cnoidal functions.
Referring to the literature for the details L22 , let us denote the self-
consistent scalar potential by $S_{\rm GN}(x)$, with temperature and density
dependent shape. If one takes into account both fermionic and isospin chemical
potential, the HF potential in the isoNJL model assumes the product form
$\Delta(x)=S_{\rm GN}(x)e^{-ibx\gamma_{5}\tau_{3}}.$ (79)
The axial isospin chemical potential conjugate to the density
$\psi^{\dagger}\gamma_{5}\tau_{3}\psi$ can also be invoked if one is
interested in chirally imbalanced states, but does not affect the mean field.
The phase diagram of the isoNJL model in ($\mu,\mu_{3},T$) space following
from this scenario consists of the GN phase diagram in the ($\mu,T$) plane
translated rigidly into the $\mu_{3}$ direction, see L21 .
What does this teach us about the O(2) chiral GN model? In the results for
the isoNJL model sketched above, only the condensates $\bar{\psi}\psi$ and
$\bar{\psi}i\gamma_{5}\tau_{3}\psi$ have played a role. The choice of the
3-direction in isospin is arbitrary and only used for convenience, since
$\tau_{3}$ is diagonal. One could equally well have used the 2-direction. But
then we would be in the same situation as in the O(2) case with a rotation
vacuum and a chiral circle in the ($S_{0},P_{2}$) plane. There is a one-to-one
correspondence between SU(2) and O(2) chiral GN models, as far as these
particular aspects are concerned. Thus we can borrow the results L21 from the
isoNJL model directly and get massless bound states and the whole phase
diagram of the O(2) model almost for free.
What would happen if we had started from the reflection vacuum in the
($S_{1},S_{3}$) plane instead of the rotation vacuum? Here the transition to
the isoNJL model is less straightforward, but we certainly expect an
equivalent picture. To induce rotation around the vacuum circle in the
($S_{1},S_{3}$) plane now requires the vector transformation
$\psi^{\prime}=U\psi=e^{-ibx\tau_{2}}\psi,$ (80)
without $\gamma_{5}$ in the exponent. This yields an axial isovector chemical
potential and a chiral spiral mean field in the ($S_{1},S_{3}$) reflection
plane,
$U^{\dagger}(-i\gamma_{5}\partial_{x})U=-i\gamma_{5}\partial_{x}-b\gamma_{5}\tau_{2},\quad
U^{\dagger}\gamma^{0}\tau_{3}U=\gamma^{0}\left(\tau_{3}\cos 2bx+\tau_{1}\sin
2bx\right).$ (81)
In this case, it is the axial isospin density that induces the inhomogeneous
chiral spiral structure. This change of vector into axial chemical potentials
is also known from other dualities L8 , so that everything fits nicely
together.
## X Duality between O(2)${}_{L}\times$O(2)R GN model and ${\bf p}$GN model
We label the fermion fields of the O(2) chiral GN model (16) as
$\psi_{k\ell}^{(i)}$ with $k$ the chirality ($1=L,2=R$), $\ell$ the flavor
index (1,2) and $i$ the color index ($i=1,..,N$). Each Dirac field can be
decomposed into two Majorana fields,
$\left(\begin{array}[]{c}\psi_{11}^{(i)}\\\ \psi_{12}^{(i)}\\\
\psi_{21}^{(i)}\\\
\psi_{22}^{(i)}\end{array}\right)=\frac{1}{\sqrt{2}}\left(\begin{array}[]{c}\chi_{1}^{(i)}-i\chi_{1}^{(N+i)}\\\
\chi_{3}^{(i)}-i\chi_{3}^{(N+i)}\\\ \chi_{2}^{(N+i)}+i\chi_{2}^{(i)}\\\
\chi_{4}^{(N+i)}+i\chi_{4}^{(i)}\end{array}\right).$ (82)
The labelling of the Majorana spinors has been chosen for later convenience.
Here we only note that the even subscripts belong to right-handed, the odd
subscripts to left-handed Majorana spinors satisfying the anticommutation
relations
$\left\\{\chi_{n}^{(i)}(x),\chi_{m}^{(j)}(y)\right\\}=\delta_{ij}\delta_{nm}\delta(x-y).$
(83)
The terms in the interaction Lagrangian can be regrouped as follows,
$\displaystyle{\cal L}_{\rm int}$ $\displaystyle=$
$\displaystyle\frac{g^{2}}{4}\left[(\bar{\psi}\psi)^{2}+(\bar{\psi}\tau_{1}\psi)^{2}+(\bar{\psi}\tau_{3}\psi)^{2}+(\bar{\psi}i\gamma_{5}\tau_{2}\psi)^{2}\right]$
(84) $\displaystyle=$
$\displaystyle\frac{g^{2}}{8}\left[(\bar{\psi}(1+\tau_{3})\psi)^{2}+(\bar{\psi}(1-\tau_{3})\psi)^{2}+(\bar{\psi}(\tau_{1}+i\gamma_{5}\tau_{2})\psi)^{2}+(\bar{\psi}(\tau_{1}-i\gamma_{5}\tau_{2})\psi)^{2}\right].$
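Since the regrouping in Eq. (84) is a purely algebraic identity among the four bilinears, it can be verified by treating them as commuting placeholders. A minimal sympy sketch (the symbols $S,S_{1},S_{3},P_{2}$ stand in for $\bar{\psi}\psi$, $\bar{\psi}\tau_{1}\psi$, $\bar{\psi}\tau_{3}\psi$, $\bar{\psi}i\gamma_{5}\tau_{2}\psi$):

```python
import sympy as sp

S, S1, S3, P2 = sp.symbols('S S1 S3 P2')   # placeholders for the bilinears

first_line  = sp.Rational(1, 4) * (S**2 + S1**2 + S3**2 + P2**2)
second_line = sp.Rational(1, 8) * ((S + S3)**2 + (S - S3)**2
                                   + (S1 + P2)**2 + (S1 - P2)**2)
assert sp.expand(first_line - second_line) == 0   # the two lines of (84) agree
```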
The motivation for the last line can be seen once we express everything in
terms of Majorana spinors,
$\sum_{i=1}^{N}\bar{\psi}^{(i)}(1+\tau_{3})\psi^{(i)}=2i\sum_{i=1}^{N}\left(\chi_{1}^{(i)}\chi_{2}^{(i)}+\chi_{1}^{(N+i)}\chi_{2}^{(N+i)}\right)=2i\sum_{i=1}^{2N}\left(\chi_{1}^{(i)}\chi_{2}^{(i)}\right),$
(85)
and similarly
$\displaystyle\sum_{i=1}^{N}\bar{\psi}^{(i)}(1-\tau_{3})\psi^{(i)}=2i\sum_{i=1}^{2N}\left(\chi_{3}^{(i)}\chi_{4}^{(i)}\right),$
$\displaystyle\sum_{i=1}^{N}\bar{\psi}^{(i)}(\tau_{1}+i\gamma_{5}\tau_{2})\psi^{(i)}=2i\sum_{i=1}^{2N}\left(\chi_{1}^{(i)}\chi_{4}^{(i)}\right),$
$\displaystyle\sum_{i=1}^{N}\bar{\psi}^{(i)}(\tau_{1}-i\gamma_{5}\tau_{2})\psi^{(i)}=2i\sum_{i=1}^{2N}\left(\chi_{3}^{(i)}\chi_{2}^{(i)}\right).$
(86)
The Lagrangian of the O(2) chiral GN model is thus turned into
$\displaystyle{\cal L}_{\rm O(2)}$ $\displaystyle=$
$\displaystyle-i\chi_{1}\partial\chi_{1}-i\chi_{3}\partial\chi_{3}+i\chi_{2}\bar{\partial}\chi_{2}+i\chi_{4}\bar{\partial}\chi_{4}$
(87)
$\displaystyle-\frac{g^{2}}{2}\left[(\chi_{1}\chi_{2})^{2}+(\chi_{3}\chi_{4})^{2}+(\chi_{1}\chi_{4})^{2}+(\chi_{3}\chi_{2})^{2}\right],$
with an implicit summation over $2N$ colors ($N$ Dirac fields are equivalent
to 2$N$ Majorana fields). In contrast to the original form in Eq. (84),
expression (87) is manifestly invariant under O(2)${}_{L}\times$O(2)R since
the vector with components ($\chi_{1},\chi_{3}$) transforms under O(2)L, the
vector with components ($\chi_{2},\chi_{4}$) under O(2)R.
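The manifest O(2)${}_{L}\times$O(2)R invariance of (87) can likewise be exhibited with commuting placeholders for the Majorana components (enough to display the rotation covariance of the bilinears, even though the actual fields anticommute); a sketch:

```python
import sympy as sp

c1, c2, c3, c4, aL, aR = sp.symbols('chi1 chi2 chi3 chi4 alphaL alphaR')

# O(2)_L rotates (chi1, chi3); O(2)_R rotates (chi2, chi4)
r1 = sp.cos(aL) * c1 + sp.sin(aL) * c3
r3 = -sp.sin(aL) * c1 + sp.cos(aL) * c3
r2 = sp.cos(aR) * c2 + sp.sin(aR) * c4
r4 = -sp.sin(aR) * c2 + sp.cos(aR) * c4

# the quartic of (87) factorizes as (chi1^2+chi3^2)(chi2^2+chi4^2),
# a product of the two rotation invariants
V = lambda x1, x2, x3, x4: ((x1 * x2)**2 + (x3 * x4)**2
                            + (x1 * x4)**2 + (x3 * x2)**2)
assert sp.simplify(V(r1, r2, r3, r4) - V(c1, c2, c3, c4)) == 0
```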
Consider now the pGN model, Eq. (7), obtained by “self-dualizing” the NJL
model. As pointed out in the introduction, the Lagrangian coincides with that
of the classical ZM model originally written in the form (4). Introducing
Majorana spinors
$\left(\begin{array}[]{c}\psi_{1}^{(i)}\\\
\psi_{2}^{(i)}\end{array}\right)=\frac{1}{\sqrt{2}}\left(\begin{array}[]{c}\chi_{1}^{(i)}-i\chi_{3}^{(i)}\\\
\chi_{4}^{(i)}+i\chi_{2}^{(i)}\end{array}\right),\quad(i=1,..,N),$ (88)
it has already been shown in Ref. L9 that the Lagrangian becomes
$\displaystyle{\cal L}_{\rm pGN}$ $\displaystyle=$
$\displaystyle-i\chi_{1}\partial\chi_{1}-i\chi_{3}\partial\chi_{3}+i\chi_{2}\bar{\partial}\chi_{2}+i\chi_{4}\bar{\partial}\chi_{4}$
(89)
$\displaystyle-g^{2}\left[(\chi_{1}\chi_{2})^{2}+(\chi_{3}\chi_{4})^{2}+(\chi_{1}\chi_{4})^{2}+(\chi_{3}\chi_{2})^{2}\right].$
Remarkably, this expression agrees with (87). The coupling constants differ by
a factor of 2, but this is compensated by the number of colors, $N$ in (89)
instead of $2N$ in (87). Hence the O(2)${}_{L}\times$O(2)R symmetric two-
flavor GN model with $N$ colors is dual to the one-flavor pGN model with $2N$
colors. Since we have derived the O(2) symmetric model from the U(2) NJL model
which has already been solved in the large $N$ limit, we can now easily infer
the solution of the dual model, not yet available in L9 . The solution of many
aspects of the O(2) GN model has already been discussed in the preceding
sections. All we have to do is to re-interpret everything in the dual
language. This is the topic of the following section.
By eliminating the Majorana spinors from (82) and (88), we can express the
Dirac fields of the O(2) GN model through Dirac fields of the pGN model or
vice versa,
$\left(\begin{array}[]{c}\psi_{11}^{(i)}\\\ \psi_{12}^{(i)}\\\
\psi_{21}^{(i)}\\\
\psi_{22}^{(i)}\end{array}\right)=\frac{1}{2}\left(\begin{array}[]{c}\psi_{1}^{(i)}+\psi_{1}^{(i)*}-i\psi_{1}^{(N+i)}-i\psi_{1}^{(N+i)*}\\\
\psi_{1}^{(N+i)}-\psi_{1}^{(N+i)*}+i\psi_{1}^{(i)}-i\psi_{1}^{(i)*}\\\
\psi_{2}^{(i)}-\psi_{2}^{(i)*}-i\psi_{2}^{(N+i)}+i\psi_{2}^{(N+i)*}\\\
\psi_{2}^{(N+i)}+\psi_{2}^{(N+i)*}+i\psi_{2}^{(i)}+i\psi_{2}^{(i)*}\end{array}\right).$
(90)
As one can easily check, this is a Bogoliubov transformation at the level of
Dirac spinors. It would have been difficult to find this transformation
without introducing Majorana spinors at an intermediate step, but we can now
discuss both sides of the duality in the more familiar Dirac language. In
particular, using (90), we can express the relevant bilinears of the two-
flavor model through bilinears of the one-flavor model as follows
$\displaystyle\bar{\psi}^{(i)}\psi^{(i)}$ $\displaystyle=$
$\displaystyle\psi_{1}^{(i)*}\psi_{2}^{(i)}+\psi_{2}^{(i)*}\psi_{1}^{(i)}+\psi_{1}^{(N+i)*}\psi_{2}^{(N+i)}+\psi_{2}^{(N+i)*}\psi_{1}^{(N+i)},$
$\displaystyle\bar{\psi}^{(i)}\tau_{1}\psi^{(i)}$ $\displaystyle=$
$\displaystyle
i\left(\psi_{1}^{(i)*}\psi_{2}^{(i)*}-\psi_{2}^{(i)}\psi_{1}^{(i)}+\psi_{1}^{(N+i)*}\psi_{2}^{(N+i)*}-\psi_{2}^{(N+i)}\psi_{1}^{(N+i)}\right),$
$\displaystyle\bar{\psi}^{(i)}\tau_{3}\psi^{(i)}$ $\displaystyle=$
$\displaystyle\psi_{1}^{(i)}\psi_{2}^{(i)}+\psi_{2}^{(i)*}\psi_{1}^{(i)*}+\psi_{1}^{(N+i)}\psi_{2}^{(N+i)}+\psi_{2}^{(N+i)*}\psi_{1}^{(N+i)*},$
$\displaystyle\bar{\psi}^{(i)}i\gamma_{5}\tau_{2}\psi^{(i)}$ $\displaystyle=$
$\displaystyle
i\left(\psi_{1}^{(i)*}\psi_{2}^{(i)}-\psi_{2}^{(i)*}\psi_{1}^{(i)}+\psi_{1}^{(N+i)*}\psi_{2}^{(N+i)}-\psi_{2}^{(N+i)*}\psi_{1}^{(N+i)}\right).$
(91)
The left hand side refers to the O(2) GN model, the right hand side to the pGN
model, and the equations hold for each color index $i=1,..,N$. The first line
shows that the scalar condensate has the same meaning on both sides of the
duality. The $i\gamma_{5}\tau_{2}$ condensate of the O(2) model corresponds to
the pseudoscalar condensate of the pGN model. The $\tau_{3}$ and $\tau_{1}$
condensates of the O(2) model go over into the two kinds of Cooper pair
condensates.
Important bilinear observables not present in the O(2) Lagrangian are the
isospin vector and axial vector densities (only the $\tau_{2}$-components
belong to a conserved current),
$\displaystyle\psi^{(i)\dagger}\tau_{2}\psi^{(i)}$ $\displaystyle=$
$\displaystyle\psi_{1}^{(i)*}\psi_{1}^{(i)}+\psi_{2}^{(i)*}\psi_{2}^{(i)}+\psi_{1}^{(N+i)*}\psi_{1}^{(N+i)}+\psi_{2}^{(N+i)*}\psi_{2}^{(N+i)},$
$\displaystyle\psi^{(i)\dagger}\gamma_{5}\tau_{2}\psi^{(i)}$ $\displaystyle=$
$\displaystyle-\psi_{1}^{(i)*}\psi_{1}^{(i)}+\psi_{2}^{(i)*}\psi_{2}^{(i)}-\psi_{1}^{(N+i)*}\psi_{1}^{(N+i)}+\psi_{2}^{(N+i)*}\psi_{2}^{(N+i)}.$
(92)
Thus the $\tau_{2}$ isospin density corresponds to the fermion density
$\psi^{\dagger}\psi$ of the pGN model, the $\gamma_{5}\tau_{2}$ axial isospin
density to the fermion axial density $\psi^{\dagger}\gamma_{5}\psi$.
Note that all observables discussed so far are color singlets on both sides of
the duality. It is interesting to understand what happens to the U(1) vector
symmetry and fermion density of the O(2) model upon dualization. If we
translate the U(1) transformation $\psi_{k\ell}^{(i)}\to
e^{i\alpha}\psi_{k\ell}^{(i)}$ into the dual language, we find that it reduces
to the following orthogonal, color dependent transformation
$\displaystyle\left(\begin{array}[]{c}\psi_{1}^{(i)}\\\
\psi_{1}^{(N+i)}\end{array}\right)$ $\displaystyle\to$
$\displaystyle\left(\begin{array}[]{rr}\cos\alpha&\sin\alpha\\\
-\sin\alpha&\cos\alpha\end{array}\right)\left(\begin{array}[]{c}\psi_{1}^{(i)}\\\
\psi_{1}^{(N+i)}\end{array}\right),$ (99)
$\displaystyle\left(\begin{array}[]{c}\psi_{2}^{(i)}\\\
\psi_{2}^{(N+i)}\end{array}\right)$ $\displaystyle\to$
$\displaystyle\left(\begin{array}[]{rr}\cos\alpha&\sin\alpha\\\
-\sin\alpha&\cos\alpha\end{array}\right)\left(\begin{array}[]{c}\psi_{2}^{(i)}\\\
\psi_{2}^{(N+i)}\end{array}\right).$ (106)
This is a subgroup of the original O(2$N$) symmetry of the pGN model with 2$N$
colors. Consequently, the conserved current in the dual pGN model is not a
color singlet, but has the form
$\displaystyle\rho_{R}$ $\displaystyle\to$ $\displaystyle
i\sum_{i=1}^{N}\left(\psi_{2}^{(N+i)*}\psi_{2}^{(i)}-\psi_{2}^{(i)*}\psi_{2}^{(N+i)}\right),$
$\displaystyle\rho_{L}$ $\displaystyle\to$ $\displaystyle
i\sum_{i=1}^{N}\left(\psi_{1}^{(N+i)*}\psi_{1}^{(i)}-\psi_{1}^{(i)*}\psi_{1}^{(N+i)}\right),$
$\displaystyle\rho$ $\displaystyle=$ $\displaystyle\rho_{R}+\rho_{L},\quad
j=\rho_{5}=\rho_{R}-\rho_{L}.$ (107)
Since we are not dealing with a gauge theory and color confinement, we see
nothing wrong with this color dependence.
Finally, let us look at yet another self-dual model that has been discussed in
Ref. L9 . By self-dualizing the GN model with discrete chiral symmetry rather
than the NJL model, one gets the Lagrangian of the self-dual GN (sdGN) model
${\cal L}_{\rm
sdGN}=-2i\psi_{1}^{(i)*}\partial\psi_{1}^{(i)}+2i\psi_{2}^{(i)*}\bar{\partial}\psi_{2}^{(i)}+\frac{g^{2}}{2}\left[(\psi_{1}^{(i)*}\psi_{2}^{(i)}+\psi_{2}^{(i)*}\psi_{1}^{(i)})^{2}+(\psi_{1}^{(i)*}\psi_{2}^{(i)*}+\psi_{2}^{(i)}\psi_{1}^{(i)})^{2}\right].$
(108)
In L9 it came as a surprise that this is again equivalent to two independent
GN models. If we apply the strategy developed in the present section to the
sdGN model and first translate it into Majorana spinors using (88), we find
${\cal L}_{\rm
sdGN}=-i\chi_{1}\partial\chi_{1}-i\chi_{3}\partial\chi_{3}+i\chi_{2}\bar{\partial}\chi_{2}+i\chi_{4}\bar{\partial}\chi_{4}-g^{2}\left[(\chi_{1}\chi_{2})^{2}+(\chi_{3}\chi_{4})^{2}\right].$
(109)
Upon using the “dictionary” (85,86), this is equivalent to a U(2) model where
only two out of the eight original interaction terms are kept,
${\cal
L}=\bar{\psi}i\partial\\!\\!\\!/\psi+\frac{g^{2}}{4}\left[(\bar{\psi}\psi)^{2}+(\bar{\psi}\tau_{3}\psi)^{2}\right].$
(110)
Since the two isospin channels decouple, the fact that one gets two
independent GN models is now trivial. This shows once more the advantage of
using Majorana spinors as an intermediate step for discovering or elucidating
dualities.
## XI Summary: an integrable model with chiral symmetry breaking and Cooper
pairing
There has been quite some interest in four-fermion theories featuring a
competition between particle-hole pairing and Cooper pairing L23 ; L24 ; L25 ;
L26 ; L27 ; L28 , triggered partly by predictions of color superconductivity
in quantum chromodynamics L29 . These works are dealing mostly with the phase
diagrams of the models considered. As we have shown in the present paper, the
pGN model is an example where the coexistence of chiral symmetry breaking
(CSB) and Cooper pairing arises in a highly symmetric fashion. As a
consequence, the resulting model is integrable and can be solved as completely
as the GN or NJL models. This confirms our earlier speculation L9 and extends
the range of integrable field theory models in 1+1 dimensions into an
interesting direction. Crucial for the new insights was a mapping between a
O(2)${}_{L}\times$O(2)R symmetric GN model, which had to be constructed for
this purpose, and the pGN model, as summarized in Table 2. The pGN model has
been obtained by “self-dualizing” the one-flavor NJL model with respect to the
transformation (6),
$\psi=\left(\begin{array}[]{c}\psi_{1}\\\
\psi_{2}\end{array}\right)\quad\longrightarrow\quad\psi_{d}=\left(\begin{array}[]{c}\psi_{1}^{\dagger}\\\
\psi_{2}\end{array}\right).$ (111)
Table 2 shows the correspondence between the condensates of the two equivalent
models. Since the solution of the O(2) chiral GN model could be derived with
the machinery developed for the U(2) chiral GN model, one can take over all
the results collected above and reinterpret them in terms of the physics of
relativistic superconductors. Recall that the Dirac fields and Majorana
spinors have $N$ color components in the O(2) model but 2$N$ color components
in the pGN model. The labelling of the Majorana spinor indices in Eqs. (82)
and (88) has been chosen with hindsight, so that the expressions in the last
column of Table 2 are identical in both models. Likewise, the Lagrangians are
indistinguishable if expressed in terms of Majorana spinors, see Eqs. (87) and
(89).
condensate | O(2)${}_{L}\times$O(2)R | pGN | Majorana spinors
---|---|---|---
$S_{0}$ | $\bar{\psi}\psi$ | $\bar{\psi}\psi$ | $i(\chi_{1}\chi_{2}+\chi_{3}\chi_{4})$
$P_{2}$ | $\bar{\psi}i\gamma_{5}\tau_{2}\psi$ | $\bar{\psi}i\gamma_{5}\psi$ | $i(\chi_{1}\chi_{4}+\chi_{2}\chi_{3})$
$S_{3}$ | $\bar{\psi}\tau_{3}\psi$ | $\bar{\psi}_{d}\psi_{d}$ | $i(\chi_{1}\chi_{2}-\chi_{3}\chi_{4})$
$S_{1}$ | $\bar{\psi}\tau_{1}\psi$ | $\bar{\psi}_{d}i\gamma_{5}\psi_{d}$ | $i(\chi_{1}\chi_{4}-\chi_{2}\chi_{3})$
Table 2: Correspondence between bilinears in the two dual models. The Majorana
notation is common to both, but summation is over $N$ colors (O(2) model) and
2$N$ colors (pGN model), respectively.
Let us briefly review the results of the large $N$ O(2) model in the light of
the pGN model.
* •
Lagrangian: The Lagrangians can be cast into a form which emphasizes the
correspondence, using Table 2 and the definition (111),
$\displaystyle{\cal L}_{\rm O(2)}$ $\displaystyle=$
$\displaystyle\bar{\psi}i\partial\\!\\!\\!/\psi+\frac{g^{2}}{4}\left[(\bar{\psi}\psi)^{2}+(\bar{\psi}i\gamma_{5}\tau_{2}\psi)^{2}+(\bar{\psi}\tau_{3}\psi)^{2}+(\bar{\psi}\tau_{1}\psi)^{2}\right],$
$\displaystyle{\cal L}_{\rm pGN}$ $\displaystyle=$
$\displaystyle\bar{\psi}i\partial\\!\\!\\!/\psi+\frac{g^{2}}{4}\left[(\bar{\psi}\psi)^{2}+(\bar{\psi}i\gamma_{5}\tau_{2}\psi)^{2}+(\bar{\psi}_{d}\psi_{d})^{2}+(\bar{\psi}_{d}i\gamma_{5}\psi_{d})^{2}\right].$
(112)
The O(2)${}_{L}\times$O(2)R chiral symmetry of the first model matches the
Pauli-Gürsey symmetry of the second one.
* •
Vacuum: The O(2) model has two possible vacuum circles referred to as rotation
and reflection vacua above. In the pGN model, this corresponds to the CSB
vacuum (condensates $\bar{\psi}\psi,\bar{\psi}i\gamma_{5}\psi$) or the Cooper
paired vacuum (condensates
$\bar{\psi}_{d}\psi_{d},\bar{\psi}_{d}i\gamma_{5}\psi_{d}$). The two connected
components of the O(2) group are exactly what it takes to describe these two
distinct possibilities. The pGN model gives a more physical picture of what it
means to break chiral symmetry spontaneously if the symmetry group is
continuous, but not connected.
* •
Kink: The kink becomes a domain wall between superconducting and normal phase.
There is no kink inside either of the two phases. One can study dynamical
problems like scattering of any number of such domain walls in closed
analytical form, describing time dependent configurations of CSB and Cooper
pairing domains.
* •
DHN-type baryon: This bound state of two twisted kinks in the O(2) model is
nothing but a configuration where one phase is separated from the other phase
by two walls. If the exterior phase is chosen as Cooper paired phase, then the
interior phase is normal and we get a relativistic toy model for a Josephson
junction. Dynamical problems including several such objects as well as single
domain walls can be solved, as well as the interaction of fermions or Cooper
pairs with domain walls. Breathers can be interpreted as excited, periodically
oscillating Josephson junctions.
* •
Massless many-fermion states and phase diagram: In the pGN model, fermion
density can be generated by a local chiral transformation, like in the NJL
model. This makes the chiral spiral configuration optimal for fermionic
matter, in the normal phase. In the Cooper pairing phase, the same mathematics
would give rise to an inhomogeneous LOFF phase of the superconductor L30 ; L31
. The two are just two ways of interpreting the same physics, in the pGN
model. The phase diagram with vector and axial vector fermion chemical
potentials of the pGN model can be taken over from the NJL model. Fermion
density of the O(2) model becomes a color dependent density (107). If one
introduces the conjugated chemical potential, the kink crystal of the GN model
must come into the picture and color O(2$N$) breaks down to O($N$).
While it is easy to write down four-fermion models with both pp- and ph-
pairing, it is not easy to find integrable ones. The only one which had been
found so far is the sdGN model L9 . Since this has turned out to be a trivial
double copy of the standard GN model, it gives little new insights. All other
integrable models known so far had either CSB or Cooper pairing, but not both.
In this sense, the quantum ZM or pGN model is a novel and potentially useful
member of the family of exactly solvable field theoretic models which deserves
further studies.
## References
* (1) V. E. Zakharov and A. V. Mikhailov, Commun. Math. Phys. 74, 21 (1980).
* (2) D. J. Gross and A. Neveu, Phys. Rev. D 10, 3235 (1974).
* (3) Y. Nambu and G. Jona-Lasinio, Phys. Rev. 124, 246 (1961).
* (4) D. A. Takahashi and M. Nitta, Phys. Rev. Lett. 110, 131601 (2013).
* (5) G. V. Dunne and M. Thies, Phys. Rev. Lett. 111, 121602 (2013).
* (6) G. V. Dunne and M. Thies, Phys. Rev. D 89, 025008 (2014).
* (7) A. Chodos, H. Minakata, and F. Cooper, Phys. Lett. B 449, 260 (1999).
* (8) M. Thies, Phys. Rev. D 68, 047703 (2003).
* (9) M. Thies, Phys. Rev. D 90, 105017 (2014).
* (10) W. Pauli, Nuovo Cimento 6, 204 (1957).
* (11) F. Gürsey, Nuovo Cimento 7, 411 (1958).
* (12) J. Milanovic, Das perfekte Gross-Neveu Modell, Diplomarbeit (Universität Erlangen-Nürnberg, Erlangen, 2004).
* (13) D. A. Takahashi, Phys. Rev. B 93, 024512 (2016).
* (14) M. Thies, J. of Phys. A: Math. Theor. 55, 015401 (2022).
* (15) L. L. Salcedo, S. Levit, and J. W. Negele, Nucl. Phys. B 361, 585 (1991).
* (16) R. Pausch, M. Thies, and V. L. Dolman, Z. Phys. A 338, 441 (1991).
* (17) S.-S. Shei, Phys. Rev. D 14, 535 (1976).
* (18) R. F. Dashen, B. Hasslacher, and A. Neveu, Phys. Rev. D 12, 2443 (1975).
* (19) J. Feinberg, Phys. Rev. D 51, 045021 (2014).
* (20) V. Schön and M. Thies, Phys. Rev. D 62, 096002 (2000).
* (21) M. Thies, Phys. Rev. D 93, 085024 (2016).
* (22) O. Schnetz, M. Thies, and K. Urlichs, Ann. Phys. 314, 425 (2004).
* (23) N. Ilieva and W. Thirring, Nucl. Phys. B 565, 629 (2000).
* (24) A. Chodos, F. Cooper, W. Mao, H. Minakata, and A. Singh, Phys. Rev. D 61, 045011 (2000).
* (25) K. G. Klimenko, R. N. Zhukov, and V. Ch. Zhukovsky, Phys. Rev. D 86, 105010 (2012).
* (26) D. Ebert, T. G. Khunjua, K. G. Klimenko, and V. Ch. Zhukovsky, Int. J. Mod. Phys. A 29, 1450025 (2014).
* (27) D. Ebert, T. G. Khunjua, K. G. Klimenko, and V. Ch. Zhukovsky, Phys. Rev. D 91, 105024 (2015).
* (28) D. Ebert, T. G. Khunjua, K. G. Klimenko, and V. C. Zhukovsky, Phys. Rev. D 93, 105022 (2016).
* (29) K. Rajagopal and F. Wilczek, in At the Frontier of Particle Physics: Handbook of QCD, Boris Ioffe Festschrift, edited by M. Shifman (World Scientific, Singapore, 2001), Vol. 3, Ch. 35, p. 2061.
* (30) P. Fulde and R. A. Ferrell, Phys. Rev. 135, A550 (1964).
* (31) A. I. Larkin and Yu. N. Ovchinnikov, Sov. Phys. JETP 20, 762 (1965).
# A classification of nonexpansive Bratteli-Vershik systems
Karl Petersen Department of Mathematics, CB 3250 Phillips Hall, University of
North Carolina, Chapel Hill, NC 27599 USA<EMAIL_ADDRESS>and Sandi
Shields College of Charleston, 66 George St., Charleston, SC 29424-0001 USA
<EMAIL_ADDRESS>
###### Abstract.
We study simple, properly ordered nonexpansive Bratteli-Vershik ($BV$)
systems. Correcting a mistake in an earlier paper, we redefine the classes of
standard nonexpansive ($SNE$) and strong standard nonexpansive ($SSNE$) systems. We
define also the classes of very well timed and well timed systems, their
opposing classes of untimed and very untimed systems (which feature, as
subclasses of “Case (2)”, in the work of Downarowicz and Maass as well as
Hoynes on expansiveness of $BV$ systems of finite topological rank), and
several related classes according to the existence of indistinguishable pairs
(of some “depth”) and their synchronization (“common cuts”). We establish some
properties of these types of systems and some relations among them. We provide
several relevant examples, including a problematic one that is conjugate to a
well timed system while also (vacuously) in the classes “Case (2)”. We prove
that the class of all simple, properly ordered nonexpansive $BV$ systems is
the disjoint union of the ones conjugate to well timed systems and those
conjugate to untimed systems, thereby showing that nonexpansiveness in $BV$
systems arises in one of two mutually exclusive ways.
###### Key words and phrases:
Bratteli-Vershik system, expansiveness, odometer
###### 2020 Mathematics Subject Classification:
37B10, 37B02, 28D05
## 1\. Introduction
Bratteli-Vershik ($BV$) systems present visually and combinatorially the
hierarchical mechanisms that drive measure-preserving and topological
dynamical systems. Vershik [Vershik1981Uniform, Vershik1981Markov] and Herman-
Putnam-Skau [HPS1992] showed that every measure-preserving system and every
minimal homeomorphism on the Cantor set is isomorphic (in the appropriate
sense) to a $BV$ system. As with any system, it is useful, when possible, to
code the dynamics as a subshift, so that the methods of symbolic dynamics and
formal languages can be brought into action. This is possible exactly when the
system is expansive (in the topological setting), or essentially expansive (in
the measure-preserving setting; see [AFP]). There has been considerable
progress on the question of when $BV$ systems are expansive or not, or when a
sequence of morphisms is recognizable: [dm2008,
BezuglyiKwiatkowskiMedynets2009, FPS2017, AFP, Berthe2017, FPS2020], for
example. Here we explore the reasons why a $BV$ system might be nonexpansive.
We focus on $BV$ systems that are simple (so the topological dynamical system
is minimal) and properly ordered (there is a unique minimal path and a unique
maximal path), but not necessarily of finite topological rank (conjugate to
one with a bounded number of vertices per level).
Many nonexpansive systems are conjugate to odometers, which can be represented
by diagrams of bounded width, or seem to be similar to the Gjerde-Johansen
example [GJ2000]*Figure 4, which has unbounded width and is not conjugate to
an odometer. For brevity we will refer to this system, which motivated much of
this investigation, as the GJ example. In an earlier paper [FPS2017],
abstracting key properties of this example, the authors (including us)
proposed a class of systems (standard nonexpansive, $SNE$) as a model for how
nonexpansivesness can arise in $BV$ systems. Proposition 5.4 of [FPS2017]
asserted that every standard nonexpansive system, according to the definition
given there, has unbounded width and cannot be conjugate to an odometer.
Example 3.7 below shows that neither of these properties is guaranteed under
the old definition of $SNE$; that definition was flawed, being based on the
assumption that two paths with the same sequence of ordinal edge labels had to
move simultaneously to new vertices. We cannot even guarantee that $SNE$
systems, as originally defined, are nonexpansive. Here we correct those
mistakes by providing a new definition of $SNE$ (Definition 3.4), as well as
its invariant version $SSNE$ (Definition 3.5). Nonexpansiveness now follows
from Proposition 3.8, and Proposition 3.11 and Theorem 3.12 show that any
system satisfying the new definition of $SNE$ is not conjugate to any odometer
and has unbounded width.
In Section 4 we study the GJ example in detail, to prove that it is standard
nonexpansive and to show how a slight modification ruins this property. For
the latter, we use the technique of splitting $j$-symbols from [dm2008] and
prove that the $2$-coding of every path in the GJ example is aperiodic
(Observation 4.4) and the related fact that a certain factor system gives rise
to a recognizable family of morphisms (Observation 4.5).
We define two types of systems, well timed or untimed, as well as their
subclasses very well timed and very untimed. The GJ example is very well
timed. The untimed systems are related to what Downarowicz and Maass called
“Case (2)” in [dm2008]. Theorem 5.15 shows that the family of all nonexpansive
systems is the disjoint union of the class of systems which are conjugate to
some well timed system and the class of those conjugate to some untimed
system.
Hoynes [Hoynes2017] used a slightly different “Case (2)”, and both [dm2008]
and [Hoynes2017] used telescoping to arrive at stronger properties. Definition
5.3 states the definitions of these properties and a few of the others that
would arise from permuting or negating the quantifiers concerning depth and
cuts. Proposition 5.7 clarifies what happens to depth and cuts under
telescoping. Working towards the proof of Theorem 5.15, we characterize in
Proposition 5.10 the systems that are conjugate to some well timed system as
those that are “weakly well timed”. We establish relations among these various
types of systems. Example 5.6 shows that odometers can be either untimed but
not very untimed, or very untimed, depending on how they are presented.
Example 5.13 is a system that is weakly well timed and also (vacuously)
satisfies the property in each “Case (2)”; this indicates that, as written,
the proofs of the main theorem of [dm2008, Hoynes2017] are slightly
incomplete.
In Section 6 we describe the possible diagrams for bounded width very untimed
systems: they are all “kite shaped”, like the example in Figure 2. In the
final section we mention several questions suggested by the foregoing, for
example whether there are any untimed systems that have unbounded width and
whether every well timed system is conjugate to a system that is in $SNE$.
###### Acknowledgment.
We thank Sarah Bailey Frick for valuable contributions during the first half
of this project.
## 2\. Some definitions and notation
We deal with Bratteli-Vershik ($BV$) systems $(X,T)$. Each system is built on
an ordered Bratteli diagram, which is a countably infinite, directed, graded
graph. For each $n=0,1,2,\dots$ there is a finite nonempty set of vertices
$V_{n}$. $V_{0}$ consists of a single vertex, called the “root”. The set of
edges is the disjoint union of finite nonempty sets $E_{n},n\geq 0$, with
$E_{n}$ denoting the set of edges with source in $V_{n}$ and target in
$V_{n+1}$. We assume that every vertex has at least one outgoing edge, and
every vertex other than the root has at least one incoming edge. There can be
multiple edges between pairs of vertices. The space $X$ is the set of infinite
paths (sequences $x=x_{0}x_{1}\dots$, each $x_{i}$ denoting an edge from level
$i$ to level $i+1$) starting at the root at level $i=0$. For a path $x$ we
denote by $v_{i}(x)$ the vertex of the path at level $i$. If $i<j$, we will
say that level $j$ is after or later than level $i$, which is earlier or
before level $j$. Since diagrams are often drawn with later levels below
earlier ones, we may sometimes say that level $j$ is below level $i$ if $i<j$
and use up and down to refer to relative positions in such diagrams. $X$ is a
compact metric space when we specify that two paths have distance $1/2^{n}$ if
they agree from levels $0$ to $n$ and disagree leaving level $n$. To avoid
degenerate situations we assume that $X$ is homeomorphic to the Cantor set.
The edges entering each vertex are totally ordered, and this yields a partial
order on the set of infinite paths as follows. Two paths $x$ and $y$ are
comparable in case they are cofinal: there is a smallest $N>0$ such that
$x_{n}=y_{n}$ for all $n\geq N$. In this case $v_{N}(x)=v_{N}(y)$, and
$x_{N-1}\neq y_{N-1}$; we agree that $x<y$ if $x_{N-1}<y_{N-1}$, and $x>y$ if
not. The set of minimal paths, meaning those all of whose edges are minimal
into all of their vertices, will be denoted by $X_{\min}$, and similarly the
set of maximal paths will be denoted by $X_{\max}$. The Vershik map $T$ is
defined from the set of nonmaximal paths to the set of nonminimal paths by
mapping each path $x$ to its successor, the smallest $y>x$. We assume that the
diagrams are properly ordered, which means that there is a unique minimal path
and a unique maximal path. This implies that the Vershik map is perfectly
ordered: it extends to a homeomorphism $T$ on the set $X$ of all infinite
paths from the root by mapping the unique maximal path to the unique minimal
path. (See [BezuglyiKwiatkowskiYassawi2014, BezuglyiYassawi2016]).
The system is called minimal if every orbit is dense, and it is called simple
if it has a telescoping for which there are complete connections between
adjacent levels. It can be proved that a properly ordered system is minimal if
and only if it is simple.
We will say that two systems are conjugate if there is an equivariant
homeomorphism from one to the other that sends minimal points to minimal
points. The relation of diagram equivalence is the one generated by graph
isomorphism and telescoping. Herman-Putnam-Skau [HPS1992]*Section 4 (see also
[GPS1995]*p. 70 and p. 72, Theorem 3.6) characterized conjugacy of minimal
pointed topological dynamical systems defined by properly ordered (called
there “essentially simple”) $BV$ diagrams (with unique minimal points), as
follows: two systems are conjugate if and only if their diagrams are
equivalent, and this happens if and only if there exists a diagram $Z$ that
telescopes on odd levels to a telescoping of one of the diagrams and on even
levels to a telescoping of the other.
We emphasize that in the following, unless stated otherwise, every system is a
$BV$ system defined by a properly ordered, simple diagram, and every conjugacy
is an equivariant homeomorphism between two such systems that sends the unique
minimal point in one system to the unique minimal point in the other (but we
may include reminders about these hypotheses anyway). For convenience we may
use the same symbol, such as $X$ or $Y$, to denote a diagram as well as the
system that it defines, and we may also use the same symbol, such as $T$, for
Vershik maps on different systems, or suppress it entirely. By “pair” we mean
a pair of distinct elements, unless stated otherwise. The width of a level is
the number of vertices at that level. Recall that in [dm2008] the topological
rank of a system $(X,T)$ is defined to be the minimum among all $BV$ systems
$(Y,T)$ conjugate to $(X,T)$ of the supremum of the widths of the levels of
$(Y,T)$. For further terminology and background, see [HPS1992, GPS1995,
Durand2010, FPS2017] and the references cited there.
For each $k\geq 1$ denote by $A_{k}$ the finite alphabet whose elements are
the finite paths (segments, strings of edges) from the root to level $k$. For
each $a\in A_{k}$, the set $[a]=\\{x\in X:x_{0}\dots x_{k-1}=a\\}$ is a clopen
cylinder set, and $\mathcal{P}_{k}=\\{[a]:a\in A_{k}\\}$ is a partition of $X$
into clopen sets. The map $\pi_{k}:X\to A_{k}^{\mathbb{Z}}$ is defined by
$(\pi_{k}x)_{n}=a$ if and only if $T^{n}x\in[a]$. The doubly infinite sequence
$\pi_{k}x$ is called the $k$-coding of $x$. Let $\Sigma_{k}=\pi_{k}X$, and
denote by $\sigma$ the shift transformation on $A_{k}^{\mathbb{Z}}$. Then
$\pi_{k}:(X,T)\to(\Sigma_{k},\sigma)$ is a factor map: it is continuous, onto,
and it commutes with the transformations.
###### Definition 2.1.
For each vertex $v$ at level $k\geq 1$, denote by $P(v)$ the set of paths from
the root to $v$, define the dimension of the vertex $v$ to be $\dim v=|P(v)|$,
and concatenate the elements of $P(v)$, in their lexicographic order (defined
by the edge ordering), as $p_{1}\dots p_{\dim v}$ (so that
$[p_{j+1}]=T[p_{j}]$ for $j=1,\dots,\dim v-1$). We call the string $p_{1}\dots
p_{\dim v}$ on symbols from the alphabet $A_{k}$ the $k$-basic block at $v$
and denote it by $B_{k}(v)$. Each path from the root to level $k$ determines,
by truncation, for each $i<k$ a unique path from the root to level $i$, so
there is a natural factor map $A_{k}\to A_{i}$ which converts $B_{k}(v)$ to a
string, which we denote by $B_{i}(v)$ and call the $i$-basic block at $v$, of
the same length on the alphabet $A_{i}$.
###### Definition 2.2.
The coding by vertices at level $j<n$ of a vertex $w$ at level $n$, denoted by
$C_{j}(w)$, is defined as follows. List in their order in the diagram the
paths entering $w$ from vertices at level $j$ as $\\{p_{1},\dots,p_{m}\\}$ and
denote the source of $p_{i}$ by $v_{i},i=1,\dots,m$. Then $C_{j}(w)=v_{1}\dots
v_{m}$. (For $j=n-1$, this is the “morphism read on $\mathcal{V}_{n}$” in
[Durand2010]*p. 342).
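Definitions 2.1 and 2.2 combine into a simple recursion: for $w$ at level $n>k$, the $k$-basic block $B_{k}(w)$ is the concatenation, over the edges into $w$ taken in their order, of the $k$-basic blocks at their sources, i.e., of $B_{k}(v)$ for $v$ running through $C_{n-1}(w)$. A small sketch with hypothetical diagram data (not an example from the paper):

```python
def k_basic_blocks(seed, codings):
    """Yield {vertex: k-basic block} level by level.

    seed    -- {vertex at level k: its root-to-k paths, in lexicographic order}
    codings -- for n = k, k+1, ...: {vertex at level n+1: ordered list of
               level-n sources of its incoming edges (Definition 2.2)}
    """
    blocks = {v: list(B) for v, B in seed.items()}
    yield blocks
    for coding in codings:
        blocks = {w: [a for v in srcs for a in blocks[v]]
                  for w, srcs in coding.items()}
        yield blocks

# hypothetical data: vertices u, v at level 1 with single paths 'p', 'q';
# level-2 codings C_1(a) = uv and C_1(b) = vuu
for lvl in k_basic_blocks({'u': ['p'], 'v': ['q']},
                          [{'a': ['u', 'v'], 'b': ['v', 'u', 'u']}]):
    print(lvl)  # {'u': ['p'], 'v': ['q']}, then {'a': ['p','q'], 'b': ['q','p','p']}
```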
###### Definition 2.3.
We say that a $BV$ system is nonexpansive (abbreviated $NE$) if for every
$k\geq 1$ there exists a pair of paths with the same $k$-coding (by finite
paths from level $0$ to level $k$).
Thus a system is expansive if and only if there is a $k\geq 1$ such that the
map $\pi_{k}:(X,T)\to(\Sigma_{k},\sigma)$ is injective (and hence a
conjugacy).
###### Definition 2.4.
We say two paths $x$ and $x^{\prime}$ are depth $k\geq 0$ if they have the
same $k$-coding but not the same $(k+1)$-coding.
If two paths have the same $k$-coding, then they agree from the root to level
$k$.
For any pair of (distinct) paths $x$ and $x^{\prime}$ there exists a $j$ such
that $x$ and $x^{\prime}$ do not have the same $j$-coding. Hence, telescoping
can be used to convert a nonexpansive $BV$ system into one with the property
that for every $k\geq 1$ there exists a depth $k$ pair of paths.
###### Definition 2.5.
We say two paths $x$ and $x^{\prime}$ have a (common) $j$ cut if there is an
integer $m$ such that the initial segments of $T^{m}x$ and $T^{m}x^{\prime}$
are minimal into level $j$. Then we say that the pair has a cut at time $m$.
We say that two paths differ at level $j$ if they follow different edges into
level $j$. Note that if two paths $x$ and $x^{\prime}$ differ at level $k+1$
and have a $k+1$ cut, then $m$ can be chosen so that $T^{m}x$ and
$T^{m}x^{\prime}$ are minimal into distinct vertices at level $k+1$. (For
detailed explanation, see top of page 7 in [Hoynes2017]).
## 3\. Standard nonexpansive systems
###### Definition 3.1.
Let $k\geq 1$ and $n>k$. We say that two vertices $v,w$ at level $n$ are
$k$-equivalent, and write $v\sim_{k}w$, if they are strongly uniformly ordered
with respect to level $k$, meaning that the set of paths from vertices at
level $k$ to $v$ is order isomorphic with the set of paths from vertices at
level $k$ to $w$. Equivalently, the $k$-basic blocks at $v$ and $w$ are
identical. (In [FPS2017] a uniformly ordered level $n+1$ was defined to be a
level all of whose vertices had $n$-basic blocks that were powers of a single
block on the alphabet of paths from the root to level $n$.)
Any pair of vertices that is $(k+1)$-equivalent is also $k$-equivalent. (If
$v$ and $w$ at level $n>k+1$ have equal basic blocks in terms of the paths
from the root to level $k+1$, when these blocks are rewritten in terms of
paths to level $k$ the results will still be identical.)
###### Definition 3.2.
We say that two paths $x$ and $x^{\prime}$ are $k$-equivalent at level $n$ if
we have $v_{n}(x)\sim_{k}v_{n}(x^{\prime})$ and moreover their paths from the
root to level $n$ have the same ordinal path label. We say $x$ and
$x^{\prime}$ are $k$-equivalent, and write $x\sim_{k}x^{\prime}$, if they
agree from the root to level $k$ and, for all $n>k$, $x$ and $x^{\prime}$ are
$k$-equivalent at level $n$.
###### Remark 3.3.
Recall that for a path $x$ and $n\geq k\geq 1$, the $k$-basic block
$B_{k}(v_{n}(x))$ lists in their assigned order the truncations to levels from
$0$ to $k$ of paths from the root to $v_{n}(x)$, say as $q_{1}\dots
q_{\dim(v_{n}(x))}$ (see Definition 2.1). Suppose that $i$ is the index in
$[1,\dim(v_{n}(x))]$ for which $q_{i}$ is the initial segment of $x$ from the
root to level $n$. This information can be conveyed by writing out the string
$B_{k}(v_{n}(x))$ with a “dot” immediately preceding $q_{i}$. For example, if
$x$ follows only minimal edges to level $n$, then $B_{k}(v_{n}(x))=.q_{1}\dots
q_{\dim(v_{n}(x))}$. Two paths $x,x^{\prime}$ are $k$-equivalent at level
$n>k$ if $B_{k}(v_{n}(x))=B_{k}(v_{n}(x^{\prime}))$ and, informally, these two
basic blocks have the “dot” in the same place.
###### Definition 3.4.
We say that a nonexpansive $BV$ system is standard nonexpansive (SNE) if for
every $k\geq 1$ there is a pair of $k$-equivalent paths.
###### Definition 3.5.
We say that two paths $x$ and $x^{\prime}$ are $k$-same, and write
$x\approx_{k}x^{\prime}$, if $T^{j}x\sim_{k}T^{j}x^{\prime}$ for all
$j\in\mathbb{Z}$. We say that a system is strong standard nonexpansive (SSNE)
if for every $k\geq 1$ there is a pair of distinct $k$-same paths.
###### Remark 3.6.
The paper [FPS2017] introduced a different definition of standard
nonexpansive: For every $k$ there should exist a pair of paths that agree from
the root to a level $n>k$, have the same sequence of edge labels, and have the
same $k$-basic blocks at all levels after $k$. In the new definition (3.4) we
replace the requirement that the sequences of edge labels be the same with the
stronger requirement that the two paths always have the “dot” in the same
place of their equal $k$-basic blocks.
Proposition 5.4 of [FPS2017] asserted that every standard nonexpansive system,
according to the definition given there, has unbounded width. The following
example shows that this is not correct. In Theorem 3.12 below we show that the
new definition of $SNE$ given here does suffice to guarantee unbounded width.
###### Example 3.7.
This example satisfies the old definition of $SNE$, but not the new one. It
has bounded width and is conjugate to an odometer (since after telescoping to
the levels with exactly two vertices, all levels are uniformly ordered, see
[FPS2017]); so it is a counterexample to Proposition 5.4 of [FPS2017] and the
assertion in its proof that any system conjugate to an odometer cannot have
pairs of paths with the same edge labels.
In this example (see Figure 1) the dark path and the dotted path are identical
from the root to the vertex labeled $b$ at level $3$, where they diverge. They
continue below with the same sequence of ordinal edge labels. At level $1$
(not shown) there are two vertices, $u$ and $v$, and at level $2$ there are
four vertices, with codings by vertices of the previous level
$u^{2},u^{3},uv^{2},v^{2}u$. The vertices $u$ and $v$ connect to $a$ and $b$
via edges in the order that leads to identical codings by vertices
$u^{2}uvvu^{3}$ at $a$ and $u^{3}vvuu^{2}$ at $b$. At all levels $2$ and
later, the paths have identical codings by vertices on the symbols $u$ and
$v$, but the “dots” for the two paths are in different places after level $4$.
After level $0$, the diagram repeats levels and edges with period two, so it
has bounded width.
To find such pairs of paths for values of $k$ larger than $1$, use the
periodicity of the diagram. For example, to deal with $1$ replaced by $3$ (and
hence also by $2$), we may form a pair of paths that is identical from the
root to the vertex labeled $B$ at level $5$, where it splits into two paths
following the same sequence of edge labels as before ($22121212\dots$). These
paths always enter vertices with the same codings by level $3$ vertices $a,b$,
but not with the dot in the same place (after level $6$). Then for $3$
replaced by $5$ (and hence also $4$), take two paths that follow identical
edges from the root to the vertex $D$ at level $7$, where they split and
follow the same sequence of edges $22121\dots$. These paths always enter
vertices with the same codings by level $5$ vertices $A,B$, and hence also by
the vertices at level $4$, but not with the dots in the same place. Etc.
[Figure 1 (two-period diagram): vertex labels $a=u^{2}uvvu^{3}$, $b=u^{3}vvuu^{2}$; $a^{2}$, $a^{3}$, $a.bb$, $b.ba$; $A=a^{2}a.bba^{3}$, $B=a^{3}b.baa^{2}$; $A^{2}$, $A^{3}$, $ABB$, $BBA$; $C$, $D$; $C^{2}$, $C^{3}$, $CDD$, $DDC$; edge labels $2,2,2,2,1,1,2,2,1,1$.]
Figure 1. Part of a diagram whose system satisfies the old definition of $SNE$
but not the new one
The strict synchronizing structure of $k$-equivalent pairs of paths guarantees
that they cannot be separated by their $k$-codings.
###### Proposition 3.8.
In any system, if two paths are $k$-equivalent at infinitely many levels, then
they have the same $k$-codings.
###### Proof.
Suppose that $x,x^{\prime}$ is a pair of distinct paths for which there is an
infinite increasing sequence $(n_{j})$ such that for each $j$ the paths
$x,x^{\prime}$ are $k$-equivalent at level $n_{j}$. Their (identical)
$k$-basic blocks at their $n_{j}$-level vertices
$v_{n_{j}}(x),v_{n_{j}}(x^{\prime})$ have lengths increasing to infinity,
$j\geq 1$. If the lengths of the segments of the blocks both to the left and
right of their dots increase unboundedly, $x$ and $x^{\prime}$ will have
identical $k$-codings. Suppose that the segments to one side of the dot, for
example the left, stay bounded. Then there are $m\in[0,\infty)$ and $j_{0}$
such that the segment to the left has length $m$ at all levels $n_{j}$ for
$j\geq j_{0}$. At these levels $T^{-m}x,T^{-m}x^{\prime}$ are minimal paths
from the root to the vertices $v_{n_{j}}(x),v_{n_{j}}(x^{\prime})$. Since
$n_{j}$ is arbitrarily large, $T^{-m}x$ and $T^{-m}x^{\prime}$ consist
entirely of minimal edges, so both equal the minimal path $x_{\min}$,
contradicting our assumption that the paths are distinct. Analogously, using
uniqueness of the maximal path, the segments of the basic blocks to the right
cannot stay bounded. ∎
###### Corollary 3.9.
Standard nonexpansive implies nonexpansive.
###### Proposition 3.10.
If two paths $x$ and $x^{\prime}$ are $k$-equivalent, then for every $n>k$
they have an $n$ cut, and there is a $j\geq k$ such that they are depth $j$.
Hence, in any system, if two $k$-equivalent paths are depth $n\geq k$, then
they have an $n+1$ cut.
###### Proof.
For any path and any $n>k$, the position of the dot in the $k$-basic block at
level $n$ represents the smallest number of applications of $T^{-1}$ applied
to that path necessary to produce a path that is minimal into level $n$. Since
$x$ and $x^{\prime}$ are $k$-equivalent, the dot is in the same position for
their $k$-basic blocks at level $n$, so the paths $x$ and $x^{\prime}$ have an
$n$ cut.
Given paths $x$ and $x^{\prime}$ in a system that are $k$-equivalent, by
Proposition 3.8 they have the same $k$-codings. There exists a largest $j\geq
k$ such that for all $m\in\mathbb{Z}$, $T^{m}x$ and $T^{m}x^{\prime}$ agree to
level $j$. It follows that $x$ and $x^{\prime}$ are depth $j$. ∎
Note that the converse of each statement in Proposition 3.10 is not true: it
can happen for a depth $k$ pair and some $n>k$ that the $k$-basic block at
$v_{n}(x)$ is a proper prefix of the $k$-basic block at $v_{n}(x^{\prime})$,
and then the paths would not be $k$-equivalent but could still have an $n$
cut.
The following Proposition will be extended in Theorem 5.14.
###### Proposition 3.11.
No $SNE$ system can be conjugate to any odometer.
###### Proof.
Suppose that $(X,T)$ is a system that is conjugate to an odometer. By
[FPS2017]*Theorem 5.3 it has a telescoping that has infinitely many uniformly
ordered levels. We claim that for $k=1$ no pair in the telescoped system can
be $k$-equivalent, so the telescoped system in fact cannot be $SNE$.
Suppose that $x,x^{\prime}$ is a $1$-equivalent pair in the telescoped system.
Since $x\neq x^{\prime}$ but their first edges are equal,
$x_{0}=x^{\prime}_{0}$, the pair first disagrees at some level $j>1$, that is
to say, they follow different edges from level $j-1$ to level $j$. If they
enter the same vertex at level $j$ along these different edges, they cannot
have their dots in the same place. So the paths are at different vertices at
level $j$, and then, by induction, they must be at different vertices at each
level $n$ for all $n\geq j$.
Look at the first uniformly ordered level $n>j$. At the vertices $v_{n}(x)$
and $v_{n}(x^{\prime})$, because of $1$-equivalence we have $(n-1)$-basic
blocks of the same length, and then because of uniform order these
$(n-1)$-basic blocks are equal. Since $x,x^{\prime}$ are at different vertices
at level $n-1$, their dots are at different places in the $(n-1)$-basic blocks
at $v_{n}(x),v_{n}(x^{\prime})$. When these $(n-1)$-basic blocks are expanded
to $1$-basic blocks, the dots for $x,x^{\prime}$ will be at different places,
contradicting $1$-equivalence of $x$ and $x^{\prime}$. ∎
The following Theorem follows directly from the preceding Proposition combined
with the main result of [dm2008], but we give a direct proof here in order to
re-establish Proposition 5.4 of [FPS2017] in our context, with the new
definition of $SNE$.
###### Theorem 3.12.
SNE implies unbounded width.
###### Proof.
Starting with $j_{0}=1$, using Proposition 3.10, we can find two paths
$x^{(0)},y^{(0)}$ that are $j_{0}$-equivalent and follow different edges into
some level $j_{1}>j_{0}$. As above, because they are $j_{0}$-equivalent, they
must pass through different vertices at level $j_{1}$, and hence they pass
through different $j_{0}$-equivalent vertices at all levels $j\geq j_{1}$.
If they pass through $j_{1}$-equivalent vertices at infinitely many levels,
they must have the dot in the same place in their basic blocks at those
levels, because when the $j_{1}$-basic blocks are expanded to $j_{0}$-basic
blocks, they are the same and have the dot in the same place, so the dot had
to be in the same place for the $j_{1}$-basic blocks. Then $x^{(0)}$ and
$y^{(0)}$ would be $j_{1}$-equivalent at these infinitely many levels, and so
by Proposition 3.8 would have the same $j_{1}$-coding, but they do not. Thus
for all large enough $n$, $x^{(0)}$ and $y^{(0)}$ pass through vertices that
are $j_{0}$-equivalent and not $j_{1}$-equivalent.
Continuing, given $K\geq 1$, for each $k=0,\dots,K-1$ we can find $N$,
integers $j_{0}<j_{1}<\dots<j_{K}$, and pairs $x^{(k)},y^{(k)}$ that for each
$n\geq N$ and $k=0,\dots,K-1$ pass through vertices that are
$j_{k}$-equivalent and not $j_{k+1}$-equivalent:
(3.1) $v_{n}(x^{(k)})\sim_{j_{k}}v_{n}(y^{(k)})\quad\text{ while }\quad
v_{n}(x^{(k)})\nsim_{j_{k+1}}v_{n}(y^{(k)}).$
But these pairs of vertices at any level $n\geq N$ are all different, because
not $j_{k}$-equivalent implies not $j_{k+1}$-equivalent. If there are at most
$R$ vertices at every level and $K>R^{2}$, this is a contradiction. ∎
## 4\. Modifying the GJ example to ruin the property $SNE$
We will show that the example of Gjerde and Johansen [GJ2000]*Figure 4, which
has unbounded width and is not conjugate to any odometer, is $SNE$ (even
$SSNE$) by specifying for any $k$ exactly which pairs of paths are
$k$-equivalent. We will also determine for any $k$ which pairs are depth $k$.
Denote the system in the GJ example by $X$. Given $j>0$ there are $2j$
vertices in the diagram at level $j$. For any $i\leq 2j$, let $v(j,i)$ denote
the $i$’th vertex from the left (beginning with $i=1$) in level $j$. For
$i=1,2,\dots$ and $j>i$, paths that at level $j$ pass only through vertices
$v(j,2i)$ or $v(j,2i+1)$ will be said to constitute the $i$’th Morse component
$MC(i)$ of the diagram. In particular, $MC(i)$ begins at level $i+1$.
###### Observation 4.1.
(1) For any $i\geq 1$, at any level of $MC(i)$ the $i$-basic blocks are the
same (at both vertices), as can be verified by writing them out. It can also
be verified that the $(i+1)$-basic blocks at the vertices of $MC(i)$ differ
after level $i+1$.
(2) In the diagram for $X$, each vertex is connected to every vertex at the
previous level by exactly one edge, so all vertices at any level have the same
dimension. From this it can be argued inductively that for every $j\geq 1$ two
paths have the same ordinal path label from the root into level $j$ precisely
when they have the same sequence of edge labels into level $j$.
###### Proposition 4.2.
The following statements hold for the GJ example, $X$.
(1) If two edges at level $n\geq 2$ have the same label, then they have
different targets. If they also have different sources, then they are
contained in $MC(j)$ for some $j<n.$
(2) A pair of paths $x$, $x^{\prime}$ with the same sequence of edge labels
first differ at level $j\geq 1$ if and only if they enter a Morse component at
level $j$ and remain in that Morse component at all levels $j$ and higher,
never again meeting the same vertex.
(3) If two paths have the same sequence of edge labels and first differ at
level $j$, then for some $k<j$ they enter $MC(k)$ at level $j$ and are
$k$-equivalent and not $(k+1)$-equivalent. Conversely, if two paths are
$k$-equivalent and not $(k+1)$-equivalent, then they have the same sequence of
edge labels and enter $MC(k)$ at the level where they first differ.
(4) For any $k\geq 2$, a pair of paths is depth $k$ if and only if those paths
are $k$-equivalent but not $(k+1)$-equivalent. (Equivalently, in view of (3),
two paths are depth $k$ if and only they have the same sequence of edge labels
and enter $MC(k)$ at the level where they first differ.)
###### Proof.
Proof of (1): Fix $n\geq 2$. All edges with source $v(n,1)$ are labeled 1 and
all edges with source $v(n,2n)$ are labeled $2n$. For all $j=1,...,n-1$, all
edges with source $v(n,2j)$ are labeled $2j$, with the exception of the single
edge between $v(n,2j)$ and $v(n+1,2j+1)$, which is labeled $2j+1$. Likewise,
all edges with source $v(n,2j+1)$ are labeled $2j+1$, with the exception of
the single edge between $v(n,2j+1)$ and $v(n+1,2j+1)$, which is labeled $2j$.
If two edges between levels $n$ and $n+1$ have the same label $1$ or $2n$,
they must have the same source (and different targets). If two edges between
levels $n$ and $n+1$ have the same label and different sources, then for some
$j=1,...,n-1$ one of these edges has source $v(n,2j)$, the other has source
$v(n,2j+1)$, and the two edges have different targets and lie in $MC(j)$.
Proof of (2): Two paths with the same sequence of edge labels cannot first
differ into level $1$, because the edges out of $v(1,1)$ all have label $1$,
while the edges out of $v(1,2)$ all have label $2$; so the paths first differ
into level $j\geq 2$, arriving at different vertices at level $j$. Leaving
level $j$, the two paths traverse edges with the same label and different
sources. By part (1), both of these edges are in the same Morse component and
have different targets. Hence, both paths also traverse different edges with
the same label and different sources into level $j+2$. By repeated
applications of part (1) we get that the two paths enter a Morse component at
level $j$ and remain in that Morse component at all levels $j$ and higher.
Conversely, if a pair of paths $x$, $x^{\prime}$ with the same sequence of
edge labels enters a Morse component at level $j$ and remains in that Morse
component at all levels $j$ and higher, then $x$ and $x^{\prime}$ first differ
at level $j$. This is because, from above, the pair cannot differ before
entering a Morse component, so they meet different vertices at a first level
$n>j$. But since two edges in a Morse component with the same label have
different sources, $n=j$. In fact, $x$ and $x^{\prime}$ do not ever meet the
same vertex in that Morse component, since the edges entering any vertex have
distinct labels.
Proof of (3): If paths $x$ and $x^{\prime}$ have the same sequence of edge
labels, then by (2) $x$ and $x^{\prime}$ enter a Morse component $MC(k)$ at
the first level $j>k$ where they differ, and they will meet distinct vertices
in $MC(k)$ at all levels $j$ and higher. Hence, by Observation 4.1 (1), they
will have the same $k$-basic blocks at levels $j$ and higher, whereas their
$(k+1)$-basic blocks at level $j+1$ will differ. By Observation 4.1 (2), they
will have the same ordinal path label from the root into all levels. Thus the
dot will be in the same place in their identical $k$-basic blocks at all
levels $k+1$ and higher: the pair will be $k$-equivalent but not
$(k+1)$-equivalent.
Conversely, we show that any pair of $k$-equivalent paths $x$, $x^{\prime}$
that are not $(k+1)$-equivalent enter $MC(k)$ at the first level where they
differ and have the same sequence of edge labels. By the definition of
$k$-equivalence, such paths agree to level $k$ and their $k$-basic blocks at
levels $k+1$ and higher are the same with the dot in the same place. In
particular, $x$ and $x^{\prime}$ have the same ordinal path label from level
$k$ into any higher level and they agree to level $k$, so they have the same
ordinal path label from the root into any level. Hence, by Observation 4.1
(2), they have the same sequence of edge labels. Denote by $j$ the first level
at which $x$ and $x^{\prime}$ differ (so that $j>k$). Then, by (2), for some
$i<j$ both paths enter $MC(i)$ at level $j$ and are contained in that
component at all higher levels. By the preceding paragraph, $x$ and
$x^{\prime}$ are $i$-equivalent but not $(i+1)$-equivalent. Therefore, $i=k$.
Proof of (4): Suppose that $k\geq 2$ and $x$, $x^{\prime}$ is a depth $k$
pair. Then $x$ and $x^{\prime}$ have the same $k$-coding and therefore the
same $2$-coding. As shown later in Example 4.6, this means that $x$,
$x^{\prime}$ are $2$-equivalent, in other words, at all levels after level
$2$, they have the same $2$-basic block with the dot in the same place. At any
level $n\geq k$, the $k$-basic block $B_{k}(v)$ at any vertex $v$ factors via
the $1$-block code mentioned in Definition 2.1 onto $B_{2}(v)$, and the result
has the same length with the dot in the same position. It follows that the
$k$-basic blocks at $v_{n}(x)$ and $v_{n}(x^{\prime})$ have equal length and
the dot in the same position. Since $x$ and $x^{\prime}$ have the same
$k$-coding, then their $k$-basic blocks at all levels must be the same.
Therefore, $x$ and $x^{\prime}$ are $k$-equivalent. Furthermore, $x$ and
$x^{\prime}$ are not $(k+1)$-equivalent, since at some time in their orbits
they follow different paths into level $(k+1)$ (by definition of depth $k$),
whereas by Proposition 3.8 $(k+1)$-equivalent paths must have the same
$(k+1)$-coding. Hence, for all $k\geq 2$, any depth $k$ pair is $k$-equivalent
but not $(k+1)$-equivalent.
Conversely, for all $k\geq 2$, any pair of paths that is $k$-equivalent has
the same $k$-coding by Proposition 3.8. If that pair is not
$(k+1)$-equivalent, then it is not $i$-equivalent for any $i>k$. As just
shown, this means it cannot be depth $i$ for any $i>k$. Therefore that pair is
depth $k$. ∎
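The labeling rules in the proof of part (1) are compact enough to state as a function. The following sketch is our reconstruction of the GJ edge labels from that proof (the function name and the small brute-force check are ours, not from [GJ2000]):

```python
from itertools import product

def gj_edge_label(n, src, tgt):
    """Label of the edge v(n,src) -> v(n+1,tgt) in the GJ diagram, n >= 2.

    Level n has 2n vertices, and every vertex connects to every vertex at
    the next level by exactly one edge (rules as in Prop. 4.2(1)).
    """
    if src == 1:
        return 1
    if src == 2 * n:
        return 2 * n
    j = src // 2                       # src is 2j or 2j+1 with 1 <= j <= n-1
    if src % 2 == 0:                   # src = v(n, 2j)
        return 2 * j + 1 if tgt == 2 * j + 1 else 2 * j
    return 2 * j if tgt == 2 * j + 1 else 2 * j + 1   # src = v(n, 2j+1)

# brute-force check of part (1) at a few levels:
for n in range(2, 7):
    edges = [(s, t, gj_edge_label(n, s, t))
             for s, t in product(range(1, 2 * n + 1), range(1, 2 * n + 3))]
    for (s1, t1, l1), (s2, t2, l2) in product(edges, repeat=2):
        if (s1, t1) != (s2, t2) and l1 == l2:
            assert t1 != t2            # same label implies different targets
```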
###### Corollary 4.3.
The GJ example is standard nonexpansive.
###### Proof.
By Part (4) of the Proposition, given $k\geq 2$ two paths with the same
sequence of edge labels that enter $MC(k)$ at the level where they first
differ are depth $k$. For example, the path $x$ that passes through the
vertices $v(1,j)$ for all $j,1\leq j\leq k$, then $v(j,2k)$ for all $j>k$ and
the path $y$ that passes through the vertices $v(1,j)$ for $1\leq j\leq k$,
then $v(j,2k+1)$ for all $j>k$, are depth $k$. ∎
The following observations are related to Example 4.6.
###### Observation 4.4.
In the GJ example, the $2$-coding of every path is aperiodic (also sometimes
called “nonperiodic”).
###### Proof.
We will show that the forward coding of the minimal path $x_{\min}$ by
vertices at level $2$, $(v_{2}(T^{j}x),j\geq 0)$, is aperiodic. This implies
that the $2$-coding of $x_{\min}$ is aperiodic, and hence, since the orbit of
$x_{\min}$ is dense, the $2$-coding of every path in $X$ is aperiodic. Fix a
large $n\geq 3$. The idea is to reduce $C_{2}(v(n,1))$ to a long initial block
of the famous Prouhet-Thue-Morse sequence, which is known to be aperiodic.
For $n\geq 2$ we have
(4.1) $C_{n-1}(v(n,1))=v(n-1,1)v(n-1,2)v(n-1,3)\dots v(n-1,2n-2),$
but note that for $n\geq 4$
(4.2) $C_{n-2}(v(n-1,3))=v(n-2,1)v(n-2,3)v(n-2,2)v(n-2,4)\dots v(n-2,2n-4).$
This switch in order of adjacent symbols occurs at every level $n-1\geq 3$.
Working towards the expansion of $C_{2}(v(n,1))$ as a string on symbols
$v(2,m),m=1,2,3,4$, in Equation 4.1 replace each $v(n-1,i)$ by
$C_{n-2}(v(n-1,i))$, then in the result (which is $C_{n-2}(v(n,1))$) replace
each $v(n-2,i)$ by $C_{n-3}(v(n-2,i))$, etc., until we finally arrive at
$C_{2}(v(n,1))$. Note that, reading from left to right in any
$C_{j}(v(n,i)),2\leq j<n$, from time to time the symbols $v(j,2)$ and $v(j,3)$
switch order.
We will now repeat this process of repeatedly expanding $C_{n-1}(v(n,1))$,
deliberately losing some information at each step, to produce for each
$j=n-1,\dots,2$ a string $\tilde{C}_{j}$ on the alphabet
$\\{v(j,2),v(j,3),0_{j}\\}$. In Equation 4.1, replace each $v(n-1,i)$ for $i$
not equal to $2$ or $3$ by $0_{n-1}$, arriving at a block $\tilde{C}_{n-1}$ on
the alphabet $\\{v(n-1,2),v(n-1,3),0_{n-1}\\}$.
Then in $\tilde{C}_{n-1}$ replace each $v(n-1,2)$ by $C_{n-2}(v(n-1,2))$, each
$v(n-1,3)$ by $C_{n-2}(v(n-1,3))$, each $0_{n-1}$ by $(0_{n-2})^{2n-4}$, and
finally each $v(n-2,i)$ for $i$ not equal to $2$ or $3$ by $0_{n-2}$, arriving
at a block $\tilde{C}_{n-2}$ on the alphabet
$\\{v(n-2,2),v(n-2,3),0_{n-2}\\}$. Noting that $|C_{n-2}(v(n-1,i))|=2n-4$ for
all $i$, we have formed a $1$-block factor $\tilde{C}_{n-2}$ of
$C_{n-2}(v(n,1))$ on the alphabet $\\{v(n-2,2),v(n-2,3),0_{n-2}\\}$.
Continue analogously until in the end we have a block $\tilde{C}_{2}$ on the
alphabet $\\{v(2,2),v(2,3),0_{2}\\}$ which is a symbol-by-symbol ($1$-block)
factor of $C_{2}(v(n,1))$.
If the forward coding of $x_{\min}$ by vertices at level $2$ were periodic,
then there would be a nonempty block $P$ on the symbols $v(2,m)\,(m=1,2,3,4)$,
and a (possibly empty) prefix $Q$ of $P$, such that for large enough $n$ we
would have $C_{2}(v(n,1))=P^{k}Q$ for some $k\geq 2$. Then $\tilde{C}_{2}$
would have the form $\tilde{P}^{k}\tilde{Q}$ with $\tilde{P},\tilde{Q}$
$1$-block images of $P,Q$.
But note that for each $j=n-1,\dots,3$ a symbol $v(j,2)$ in $\tilde{C}_{j}$
expands to a block $0_{j-1}v(j-1,2)v(j-1,3)0_{j-1}^{2j-5}$ in
$\tilde{C}_{j-1}$, while a symbol $v(j,3)$ in $\tilde{C}_{j}$ expands to a
block $0_{j-1}v(j-1,3)v(j-1,2)0_{j-1}^{2j-5}$ in $\tilde{C}_{j-1}$. Thus if we
ignore the symbol $0_{j}$ in each $\tilde{C}_{j}$, we see substrings on
$\\{a=v(\cdot,2),b=v(\cdot,3)\\}$ that expand according to the Prouhet-Thue-
Morse (PTM) substitution $a\to ab,b\to ba$.
If $\tilde{C}_{2}$ were of the form $\tilde{P}^{k}\tilde{Q}$ as above, with
$n$ large enough that $k\geq 2$, then deleting the symbol $0_{2}$ from
$\tilde{P}$ and $\tilde{Q}$ would present a long initial block of the PTM
sequence in the form $P_{0}^{k}Q_{0}$ with $P_{0},Q_{0}$ blocks on
$\\{a,b\\}$, and $|P_{0}|\geq 1$ (because $\tilde{C}_{2}$ contains symbols
other than $0_{2}$). But the PTM sequence $abba\,baab\dots$ is aperiodic, in
fact it cannot begin with $BB$ for any block $B$. Therefore the forward coding
of $x_{\min}$ by vertices at level $2$ is not periodic, and hence the
$2$-coding of $x_{\min}$ is not periodic. ∎
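The reduction above leans on the aperiodicity of the Prouhet-Thue-Morse sequence, which is easy to probe experimentally. The following minimal Python sketch (our illustration only, not part of the original argument) generates an initial block of the PTM sequence by iterating the substitution $a\to ab$, $b\to ba$ and checks that it does not begin with $BB$ for any nonempty block $B$.

```python
def ptm_word(iterations: int) -> str:
    """Iterate the Prouhet-Thue-Morse substitution a -> ab, b -> ba,
    starting from 'a'; after n iterations the word has length 2**n."""
    word = "a"
    subst = {"a": "ab", "b": "ba"}
    for _ in range(iterations):
        word = "".join(subst[c] for c in word)
    return word

def starts_with_square(word: str) -> bool:
    """Return True if the word begins with BB for some nonempty block B."""
    return any(word[:m] == word[m:2 * m] for m in range(1, len(word) // 2 + 1))

w = ptm_word(10)
print(w[:16])                  # abbabaabbaababba
print(starts_with_square(w))   # expected False: PTM cannot begin with BB
```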
###### Observation 4.5.
In the GJ example, identifying at every level $j>2$ the vertices other than
$v(j,3)$ produces a sequence of morphisms that forms a recognizable family.
###### Proof.
Denote by $\mathcal{V}_{j}$ the set of vertices in our diagram at level $j$.
For each $j\geq 2$ define an alphabet $A_{j}=\\{D_{j},E_{j}\\}$ and a (many-
to-one) map $\phi_{j}:\mathcal{V}_{j}\to A_{j}$ as follows. Assign to $v(j,3)$
the symbol $D_{j}=\phi_{j}(v(j,3))$, and to each $v(j,i)$ for $i\neq 3$ assign
the symbol $E_{j}=\phi_{j}(v(j,i))$. When we expand each symbol (vertex)
$v(j,i)$ to $C_{j-1}(v(j,i))$ the effect on the $\phi_{j}$-images
($D_{j},E_{j},D_{j-1},E_{j-1}$) at both levels produces the morphism
(4.3) ${E_{j}\to E_{j-1}^{2}D_{j-1}E_{j-1}^{2j-5},\qquad D_{j}\to
E_{j-1}D_{j-1}E_{j-1}^{2j-4}}$
for all $j\geq 3$, in the sense of concatenation of blocks.
Denoting as usual by $A_{j}^{+}$ the set of nonempty words on the alphabet
$A_{j}$, Equation (4.3) defines a sequence of morphisms $\tau_{j}:A_{j}\to
A_{j-1}^{+}$, as (for example) in [Berthe2017]. By keeping track of the
position of $D_{j}$ in codings of paths by images of vertices at level $j$
under $\phi_{j}$ one can prove directly that the sequence is recognizable, in
the sense that every sequence on the alphabet $A_{j}$ has at most one
desubstitution, or factorization, on the alphabet $A_{j+1}$. To see this,
suppose that we are given a bisequence on the alphabet $A_{j}$. When we see a
block $F(j,q)=D_{j}E_{j}^{q}D_{j}$, since the block $D_{j+1}D_{j+1}$ does not
appear in the coding of any path by the images of vertices at level $j+1$, it
must be the case that $F(j,q)$ is a subblock, in a uniquely determined
position, of
(4.4) $\displaystyle
E_{j+1}E_{j+1}=E_{j}E_{j}D_{j}E_{j}^{2j-3}\,\,E_{j}E_{j}D_{j}E_{j}^{2j-3}\quad\text{
if }q=2j-1,$ $\displaystyle
E_{j+1}D_{j+1}=E_{j}E_{j}D_{j}E_{j}^{2j-3}\,\,E_{j}D_{j}E_{j}^{2j-2}\quad\text{
if }q=2j-2,\text{ or }$ $\displaystyle
D_{j+1}E_{j+1}=E_{j}D_{j}E_{j}^{2j-2}\,\,E_{j}E_{j}D_{j}E_{j}^{2j-3}\quad\text{
if }q=2j.$
Thus the coding of an orbit by $D_{2},E_{2}$ determines its coding by
$D_{j},E_{j}$ for all $j>2$. This also follows from [Berthe2017, Theorem 5.1],
since each of these alphabets has only two elements and each infinite
bisequence generated by the sequence of morphisms is aperiodic, by the
arguments in the proof of Observation 4.4. ∎
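To make the desubstitution argument concrete, here is a short Python sketch (ours, not from the paper) of the morphisms $\tau_{j}$ of Equation (4.3), with `E` and `D` standing for $E_{j}$ and $D_{j}$. It confirms the spacings in Equation (4.4): the number of $E$'s between consecutive $D$'s ($2j-1$, $2j-2$ or $2j$) identifies the two-letter block at level $j+1$ that produced them, which is exactly the recognizability used above.

```python
def tau(j: int, symbol: str) -> str:
    """The morphism tau_j of Eq. (4.3): expand a level-j symbol
    ('E' or 'D') into a word on level-(j-1) symbols; requires j >= 3."""
    assert j >= 3
    if symbol == "E":
        return "EE" + "D" + "E" * (2 * j - 5)
    if symbol == "D":
        return "E" + "D" + "E" * (2 * j - 4)
    raise ValueError(symbol)

def expand(word: str, j: int) -> str:
    """Apply tau_j letter by letter (concatenation of blocks)."""
    return "".join(tau(j, c) for c in word)

# Distances between consecutive D's identify the level-(j+1) block:
# 2j-1 -> EE, 2j-2 -> ED, 2j -> DE, so desubstitution is unique.
j = 4
for pair in ("EE", "ED", "DE"):
    w = expand(pair, j + 1)
    first = w.index("D")
    gap = w.index("D", first + 1) - first - 1
    print(pair, "->", w, " E's between D's:", gap)
```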
Note that the natural sequence of morphisms for the diagram defined by the
coding of vertices at each level $j$ beyond some level $j_{0}$ by the vertices
at level $j-1$ cannot be recognizable, because the GJ system is nonexpansive.
(The coding of any path by vertices at some level would then determine the
codings at all subsequent levels and hence the entire path.) Moreover, the BV
system corresponding to the sequence of morphisms $\tau_{j}$ is of finite
topological rank (as well as expansive). In fact it has topological rank $2$:
it cannot be $1$ (conjugate to an odometer) by [FPS2017, Theorem 5.3] and
Observation 4.4, since for every $n>1$ the $n$-coding of the orbit of any path
is not periodic.
We show now how the GJ example can be modified to spoil the strict
requirements of the definition of $SSNE$ to produce an example of a
nonexpansive system that is not even $SNE$ and is not conjugate to any
odometer. (By [dm2008] any such example necessarily has unbounded width.)
Further, this example is very well timed (see Definition 5.2).
In [dm2008], Downarowicz and Maass introduced a handy way to visualize paths
and their orbits in a $BV$ system by means of 3-sidedly infinite arrays of
$j$-symbols. Every vertex $v$ at level $j\geq 0$ has an associated $j$-symbol
which is also labeled $v$. The $j$-symbol is a finite rectangular matrix with
$j+1$ rows consisting of subrectangles ($i$-symbols for $0\leq i\leq j$), as
described in [dm2008, p. 741]. An array represents the entire orbit of a path
and its $k$-codings for all $k\geq 0$. The path itself is indicated by an
arrow pointing to the left edge of its time $0$ rectangle at level $0$, and
this arrow indicates, by extending it vertically downward, all the other
rectangles (vertices) through which the path passes at levels $n>0$. We call
this extension the vertical that corresponds to the path. Further development
and use of these arrays can be seen in [BezuglyiKwiatkowskiMedynets2009] and
[Hoynes2017].
The family of arrays determined by a diagram has a type of consistency called
“agreeable” in [BezuglyiKwiatkowskiMedynets2009, Def. 3.2]: each $j$-symbol
with a fixed name $v$ has for its first $j$ rows the same concatenation of
$(j-1)$-symbols. Conversely, given an agreeable family of arrays, we can
construct its unique associated $BV$ diagram (which may or may not be properly
ordered). For each vertex $v$ at level $j$, the names, in order, of the
$(j-1)$-symbols comprising the first $j$ rows of its associated $j$-symbol
specify the edges connecting certain vertices at level $j-1$ to it in a
certain order, possibly with repeats. This concatenation of $(j-1)$-symbols
determines the $(j-1)$-basic block $B_{j-1}(v)$ at the vertex $v$, since it
lists in order the paths from the root to $v$. The Vershik map $T$ on the
diagram corresponds to sliding each array one “notch”, i.e. level-$0$
rectangle width, to the right. Thus there is a natural correspondence between
$BV$ systems, their diagrams, and agreeable families of arrays; in the
following we deal with them interchangeably.
###### Example 4.6.
Denote the system in the GJ example [GJ2000, Figure 4, p. 1699] by $X$.
In their proof in [dm2008], Downarowicz and Maass make use of a modification
of the Bratteli diagram and hence of all arrays representing orbits of the
system (see [dm2008, pp. 743–744, Figure 4] and [Hoynes2017, p. 204]). We will
use a version of their splitting technique to ensure that for every $i\geq 1$
and every $j\geq i+1$ the two vertices at level $j$ in the $i$’th Morse
component have different $1$-basic blocks.
For every $j\geq 2$ and every $i<j$, we replace the $j$-symbol $v_{i}=v(j,2i)$
with two symbols $v_{i}^{\prime}$ and $v_{i}^{\prime\prime}$ so that
$|B_{1}(v_{i}^{\prime})|=|B_{1}(v(j-1,1))|$ and the concatenation of
$B_{1}(v_{i}^{\prime})$ and $B_{1}(v_{i}^{\prime\prime})$ is $B_{1}(v_{i})$.
In particular, $B_{1}(v_{i}^{\prime})$ is a proper prefix of $B_{1}(v_{i})$.
In the DM arrays, every occurrence of the $j$-symbol with label $v_{i}$ is
then replaced by two $j$-symbols labeled $v_{i}^{\prime}$ and
$v_{i}^{\prime\prime}$ respectively. We leave the $j$-symbols for all other
vertices at level $j$ unchanged, so their $1$-basic blocks are the same in the
new diagram as they are in $X$. The result is that the left vertices in every
Morse component of $X$ have been split into two vertices, whereas the right
vertices remain intact.
Note that for every $j\geq 2$, the modifications at level $j$ extend vertical
bars (i.e. rectangle boundaries) at level $j-1$ in the original diagram for $X$
by only one level. None of these bars is extended further by a modification at
level $j+1$. Hence, no new infinitely long vertical lines consisting entirely
of rectangle boundaries are created. This means that after all modifications,
the system represented by the diagram remains in the class of properly ordered
systems. Moreover, every orbit still meets every vertex, so the system is
simple. Also note that since this new system is conjugate to $X$, it is not
conjugate to any odometer.
The new system has the property that at any level $j\geq 2$ and $i=1,...,j-1$,
the $1$-basic block at the right vertex $v(j,2i+1)$ of the $i$'th Morse
component through level $j$ is longer than the $1$-basic blocks at the two new
vertices. Hence, the right vertex cannot be $1$-equivalent to either of these
new vertices. We claim that for every $k\geq 1$ paths that were $k$-equivalent
in the old diagram are no longer $k$-equivalent in the new diagram. This is
because $k$-equivalent paths in the old diagram are also $1$-equivalent in
that diagram. By Proposition 4.2, any pair of paths that are $1$-equivalent in
the original diagram are contained in a Morse component at all levels after
which they first differ, so they can no longer be $1$-equivalent in the new
diagram. Then since $k$-equivalence implies $1$-equivalence, the paths are no
longer $k$-equivalent in the new diagram.
The $1$-coding of every path in $X$ is periodic with period length 2. However,
at every level $j>2$ the $2$-basic block at $v(j,3)$ differs from the
$2$-basic blocks at all other vertices, and as a result the $2$-coding is
not periodic (see Observation 4.4). We will exploit this property of $2$-basic
blocks to show that no new $2$-equivalent paths are created by our
modifications. It then follows that the new system has no $2$-equivalent
paths, hence is not $SNE$.
By Proposition 3.8, any $2$-equivalent pair in the new system has the same new
$2$-coding. In fact, it is the image of a pair in $X$ that has the same old
$2$-coding. This is because there is an invertible mapping between the old and
the new codings. Specifically, in both the old and the new system each path in
$X$ is represented by a vertical in its array. Row 2 of the array displays the
$2$-coding of the path and the placement of the dot is specified by the
vertical. The splitting of rectangles and relabeling at level 2 is reversible
and does not change the position of the vertical. So a pair of $2$-equivalent
paths in the new diagram is the image of a pair in the diagram for $X$ with
the same $2$-coding. We now show that any such pair in $X$ is $2$-equivalent.
Let $x$, $x^{\prime}$ be paths in the diagram for $X$ with the same
$2$-coding. Using the preceding observations about $D_{j}$ and $E_{j}$ (see
Observation 4.5), we argue inductively that for every $j\geq 3$, any time the
orbit of $x$ is minimal from the root to $v(j,3)$, the orbit of $x^{\prime}$
is as well (and vice versa). Then because all $j$-basic blocks at level $j$
have the same length ($\dim v(j,1)$), any time the orbit of one of these paths
changes vertices at level $j$, the other one does as well. Since $x$ and
$x^{\prime}$ have the same 2-coding, it follows that they must be
$2$-equivalent at level $j$ (i.e., not only do they have the same $2$-basic
block at level $j$ but their dot is in the same place).
First note that, since $x$ and $x^{\prime}$ have the same $2$-coding, their
orbits must meet $v(2,3)$ always at the same time; in particular, any time the
orbit of one of these paths is minimal from the root into $v(2,3)$ the orbit
of the other is as well.
Next fix $j\geq 2$ and assume that the orbits of $x$ and $x^{\prime}$ are
minimal into $v(j,3)$ always at the same time. Since the number of paths is
the same from the root into any vertex at level $j$, these orbits must be
minimal into each vertex at level $j$ always at the same time.
Now suppose that for some $m$, $T^{m}x$ is minimal from the root into
$v(j+1,3)$, so that $\phi_{j+1}(v_{j+1}(T^{m}x))=D_{j+1}$ and
$T^{m+\dim(v(j,1))}x$ is minimal to $v(j,3)$, which maps under $\phi_{j}$ to
$D_{j}$. We claim that since the orbits of $x$ and $x^{\prime}$ always hit
$v(j,3)$ at the same time, we cannot have $v_{j+1}(T^{m}x^{\prime})=v(j+1,i)$
for some $i\neq 3$, i.e. we cannot have
$\phi_{j+1}(v_{j+1}(T^{m}x^{\prime}))=E_{j+1}$ rather than $D_{j+1}$. This
follows from looking at the next vertices at level $j+1$ hit by the orbits of
$x$ and $x^{\prime}$. Since the block $v(j+1,3)v(j+1,3)$ cannot appear in the
coding of any path by vertices at level $j+1$, the orbit of $x$ next hits
$v(j+1,i)$ for some $i\neq 3$, which has image $E_{j+1}$, while the orbit of
$x^{\prime}$ next hits a vertex with image either $E_{j+1}$ or $D_{j+1}$. As
seen in the proof of Observation 4.5, in the block $D_{j+1}E_{j+1}$ (on
symbols $D_{j},E_{j}$) in the coding of the orbit of $x$ (by images of
vertices under $\phi_{j}$), consecutive appearances of $D_{j}$ are separated
by a distance $2j$, while in the two possible blocks $E_{j+1}E_{j+1}$ and
$E_{j+1}D_{j+1}$ in the coding of the orbit of $x^{\prime}$ the consecutive
appearances of $D_{j}$ are separated by distance either $2j-1$ or $2j-2$.
Therefore $v_{j+1}(T^{m}x^{\prime})=v(j+1,3)$.
Since $T^{m+\dim(v(j,1))}x$ is minimal to $v(j,3)$ and the orbits of $x$ and
$x^{\prime}$ always hit $v(j,3)$ at the same time, we must have that
$T^{m+\dim(v(j,1))}x^{\prime}$ is also minimal to $v(j,3)$. Since the dot for
both $\phi_{j}(T^{m+\dim(v(j,1))}x)$ and
$\phi_{j}(T^{m+\dim(v(j,1))}x^{\prime})$ is at the beginning of an appearance
of $D_{j}$, and $D_{j}$ appears only once in each $\phi_{j}(v(j+1,3))$,
applying $T^{-\dim(v(j,1))}$ shows that both $T^{m}x$ and $T^{m}x^{\prime}$
are minimal from the root into $v(j+1,3)$.
## 5\. Well timed and untimed systems
In this section we define several classes of nonexpansive $BV$ systems
($W,W_{0},DM2$, $H2,U,U_{0},U_{1}$, and $U_{2}$) according to various
possibilities for the existence of pairs of paths with cuts. Recall that all
systems under consideration are nonexpansive, properly ordered, and simple. It
will turn out that every such system is conjugate to a system in exactly one
of $W$ (“well timed”) or $U$ (“untimed”).
###### Definition 5.1.
For any class $S$ of systems, we will denote the class of systems conjugate to
some system in $S$ by $CS$:
(5.1) $CS=\\{X:\text{there is }Y\in S\text{ such that }X\text{ is conjugate to
}Y\\}.$
We denote the class of systems not conjugate to any system in $S$ by $NCS$,
and the class of systems not in $S$ by $\neg S$. Note that it is not true that
if $X,Y\in CS$ then $X$ must be conjugate to $Y$.
In their proof that every bounded width system is either expansive or
conjugate to an odometer, Downarowicz and Maass [dm2008] considered a class of
systems that they called Case (2), which we denote here by $DM2$. In
[Hoynes2017] Hoynes’ Case (2) is a slightly weaker condition, which we call
$H2$, apparently still sufficient for the proof to succeed. Downarowicz and
Maass as well as Hoynes actually used stronger properties, opposite to well
timed, which here we call very untimed ($U_{0}$) and $U_{2}$. Here is a list
of some relevant classes of systems, obtained by varying quantifiers.
###### Definition 5.2.
We say that a depth $k$ pair of paths has long cuts if for every $n>k$ the
pair has an $n$ cut.
###### Definition 5.3.
We define the following classes of systems:
(1) We say that a system is well timed if for every $k\geq 1$ for every $j>k$
there is a depth $k$ pair with a $j$ cut. We denote the class of well timed
systems by $W$.
(2) We say that a system is very well timed if for every $k\geq 1$ there
exists a depth $k$ pair with long cuts. We denote the class of very well timed
systems by $W_{0}$.
(3) $DM2$: For infinitely many $k$ there is a $j(k)>k$ such that no depth $k$
pair has a $j(k)$ cut. The smallest such $j(k)$ is called a $k$ cutoff.
(4) $H2$: For infinitely many $k$ for every depth $k$ pair $x,x^{\prime}$
there is a $j(k,x,x^{\prime})>k$ such that $x,x^{\prime}$ has no
$j(k,x,x^{\prime})$ cut. The smallest such $j(k,x,x^{\prime})$ could be called
a $k$ pair cutoff for the pair $x,x^{\prime}$.
(5) $U$ (untimed): For every $k$ there is a $k$ cutoff. (I.e., for every
$k\geq 1$ there is a $j(k)>k$ such that no depth $k$ pair has a $j(k)$ cut.)
(6) $U_{0}$ (very untimed): For every $k$ no depth $k$ pair has a $k+1$ cut.
(I.e., for every $k$, $k+1$ is a $k$ cutoff).
(7) $U_{2}$: For every $k$ there is a depth $k$ pair with no $k+1$ cut.
(8) $U_{1}$: For infinitely many $k$ there is a depth $k$ pair $x,x^{\prime}$
and there is a $j(k,x,x^{\prime})>k$ such that the pair $x,x^{\prime}$ has no
$j(k,x,x^{\prime})$ cut. (I.e., for infinitely many $k$ there is a depth $k$
pair with a pair cutoff.)
Recall that the GJ example is $SNE$ and is very well timed. The modified GJ
example (Example 4.6) is not $SNE$ but it is very well timed, because the only
changes we made to the DM arrays were to add additional vertical bars, which
will not destroy the existing cuts.
###### Remark 5.4.
In [dm2008] the proof of the main theorem in Case (2) begins by telescoping
any system in $DM2$ so that the result is in $U_{0}$. Hoynes
[Hoynes2017, Remark 4.4] does not see why this should always be possible, but
suggests that changing the universal quantifier to existential, i.e. replacing
$DM2$ with $U_{1}$, does allow one to telescope such a system to one in
$U_{2}$, and that should be enough to let the proof proceed.
The proof in [Hoynes2017] assumes $H2$, telescopes so that for every $k\geq 1$
there is a depth $k$ pair, and then telescopes to obtain a system in $U_{2}$.
The proof of Sublemma 4.1, though, applies $H2$ to possibly ineligible pairs
$y_{i},y_{i^{\prime}}$, because $H2$ is not closed under telescoping. Because
the telescoped system is in $U_{2}$, for every $i$ there is a pair
$x_{i},x_{i}^{\prime}$ with no $i+1$ cut. But the pair $y_{i},y_{i^{\prime}}$
of the proof could be depth $i^{\prime}$ and without a cutoff, if it were the
image under the telescoping of a pair of a depth other than one of the
infinitely many “good” $k$ in the definition of $H2$ (see Proposition 5.7).
Indeed, in Example 5.13 we present a system that is in both classes $DM2$
(vacuously) and $WW$ (see Definition 5.8, below) and for which telescoping to
any strictly increasing sequence of levels takes it out of the class $H2$ (and
hence out of $DM2$). Such a system cannot be telescoped into $U_{0}$, because
then it would be in both $CU$ and $WW=CW$ (see Proposition 5.10), but by
Theorem 5.14 these classes are disjoint.
We think that Sublemma 4.1 of [Hoynes2017] can be proved as follows. Assuming
that $H2$ is satisfied nonvacuously, let $i_{0}$ and all the other $i$’s
mentioned in the argument be good $k$’s according to the definition of $H2$.
They may not fill up an interval in the integers, but given $L$ one can pick a
sequence of them of length $L$ and proceed to write the same argument, being
careful to choose the pairs $x_{i},x_{i}^{\prime}$ so that their cutoffs
$j(i,i^{\prime})$ interleave the levels with good $k$’s.
###### Remark 5.5.
(1) $U_{0}\subset U\subset DM2\subset H2$, and $U_{2}\subset U_{1}$.
(2) Each of the classes $W,W_{0},U,U_{0}$ is closed under telescoping. We will
later (in Remark 5.12) provide a proof of this for $W$ and $W_{0}$. To see
that the very untimed property persists under telescoping, note that if in a
telescoping to levels $\\{n_{l},l\geq 0\\}$ of a very untimed system we found
a depth $j$ pair with a $j+1$ cut, that pair would correspond to a pair in the
original system of some depth $k\geq n_{j}$ with an $n_{j+1}$ cut, and hence
with a $k+1$ cut—cf. Proposition 5.7, (3) and (4).
###### Example 5.6.
Every odometer presented with one vertex at every level is in $U_{0}$ (very
untimed). To see this, suppose that $x,x^{\prime}$ are depth $k\geq 1$, so
that at some time $m$ in their orbits they follow different edges from level
$k$ to level $k+1$, i.e. $(T^{m}x)_{k}\neq(T^{m}x^{\prime})_{k}$.
Because $(T^{j}x)_{k}$ and $(T^{j}x^{\prime})_{k}$, $j\in\mathbb{Z}$, follow
the same periodic sequence of edges as $j$ varies, we have that, for all
$j\in\mathbb{Z}$, $(T^{j}x)_{k}\neq(T^{j}x^{\prime})_{k}$. In particular, $x$
and $x^{\prime}$ cannot have a $k+1$ cut, since this would require that for
some $j$ they both follow the unique minimal edge from level $k$ to level
$k+1$.
Note that in this example for every $k\geq 1$ there is a depth $k$ pair, but
that is not a requirement in the definition of $U_{0}$.
We used here a general principle that applies in any system: If level $k+1$ is
strongly uniformly ordered with respect to level $k$ (see Definition 3.1) and
$x,x^{\prime}$ follow edges with different ordinal labels from level $k$ to
level $k+1$, so do $T^{j}x$ and $T^{j}x^{\prime}$ for all $j\in\mathbb{Z}$.
On the other hand, every odometer can be presented (up to conjugacy) with at
least two vertices per level after the root and all levels strongly uniformly
ordered. We claim that for such a system for every $k\geq 1$ there is a depth
$k$ pair with a $k+1$ cut, but no depth $k$ pair can have a $k+2$ cut. Thus
every such system is in $U\setminus U_{0}$, with a $k$ cutoff of $k+2$ for
every $k\geq 1$.
At each level $n\geq 1$, the system has vertices $v(n,1),\dots,v(n,q_{n})$ for
some $q_{n}\geq 2$. Edges with source $v(n,i)$ have ordinal label $i$.
Given $k\geq 1$, let $x$ be a path that is minimal from the root to
$v(k+1,1)$, and let $x^{\prime}$ be a path that is minimal from the root to
$v(k,1)$ at level $k$ and then follows the edge (labeled $1$) to $v(k+1,2)$ at
level $k+1$. Because $T^{j}x,T^{j}x^{\prime}$ are at strongly uniformly
ordered vertices at level $k+1$ for all $j\in\mathbb{Z}$, they follow edges
with the same ordinal label from level $k$ to level $k+1$. Thus $x,x^{\prime}$
have the same $k$-coding and hence the pair is depth $k$. The paths
$x,x^{\prime}$ also follow minimal edges to level $k+1$, so they have a $k+1$
cut. Thus the system is not in $U_{0}$.
We show now that if $x,x^{\prime}$ is a depth $k$ pair, then it cannot have a
$k+2$ cut, so the system is in $U$, with $k$ cutoff equal to $k+2$. For
suppose that $x,x^{\prime}$ is a depth $k$ pair. Applying a power of $T$ if
necessary, we may assume that these paths follow different edges from level
$k$ to level $k+1$. Because level $k+1$ is strongly uniformly ordered with
respect to level $k$, for all $j\in\mathbb{Z}$ the paths $T^{j}x$ and
$T^{j}x^{\prime}$ follow different edges from level $k$ to level $k+1$, and
hence they are at different vertices at level $k+1$:
(5.2) $v_{k+1}(T^{j}x)\neq v_{k+1}(T^{j}x^{\prime}).$
Thus the edges downward from these vertices to level $k+2$ always have
different ordinal labels, precluding existence of a $k+2$ cut.
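In the one-vertex-per-level presentation, a path is just its sequence of ordinal edge labels (a mixed-radix digit string) and the Vershik map is addition of $1$ with carry. The following minimal Python sketch (our own illustration, with made-up radices) checks the persistence of the digit difference used in the arguments above: two paths that agree in the lower digits but differ at some position keep differing there along the whole orbit, so they are never simultaneously minimal at that position, which is why no cut through that level can exist.

```python
def odometer_orbit(digits, radices, steps):
    """Successive digit strings of the mixed-radix odometer: the Vershik
    map adds 1 to the lowest digit and carries upward on overflow."""
    d = list(digits)
    for _ in range(steps):
        yield tuple(d)
        i = 0
        while i < len(d):
            d[i] += 1
            if d[i] < radices[i]:
                break
            d[i] = 0
            i += 1

radices = [2, 3, 2, 4]          # hypothetical numbers of edges per level
x  = [0, 0, 1, 0]               # agree in positions 0,1 (lower digits) ...
xp = [0, 0, 0, 2]               # ... but differ at position 2
for a, b in zip(odometer_orbit(x, radices, 48),
                odometer_orbit(xp, radices, 48)):
    assert a[:2] == b[:2]       # lower digits evolve identically
    assert a[2] != b[2]         # the differing digit never agrees, so the
                                # two paths are never both minimal there
print("no common minimal edge at the differing level: no cut")
```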
We aim to show that $CW\subset NCU$ and $NCW\subset CU$, so that the family of
simple, properly ordered nonexpansive $BV$ systems is the disjoint union of
$CW$ and $CU$:
(5.3) ${W_{0}\subset W\subset CW=NCU\subset NCU_{0}.}$
For this purpose we need to know how pairs of some depth and cuts in a system
relate to those in a telescoping of that system. If $\tilde{X}$ is a
telescoping of $X$, we will call $X$ a lift of $\tilde{X}$. The following
Proposition says, informally, that pairs of some depth in one of $X,\tilde{X}$
telescope (lift) to pairs of a related depth in the other, as do cuts for
those pairs. One consequence is that nonexistence of cutoffs is preserved
under telescoping and lifts.
###### Proposition 5.7.
Let $(X,T)$ be a system and $(\tilde{X},\tilde{T})$ be another system obtained
by telescoping $X$. The following statements hold for every $k\geq 1:$
(1) There exists an $\tilde{i}(k)\leq k$ such that the image of any depth $k$
pair in $X$ under the telescoping is depth $\tilde{i}(k)$ in $\tilde{X}$.
Furthermore, $\tilde{i}(k)\rightarrow\infty$ as $k\rightarrow\infty$.
(2) For all sufficiently large $j$ there exists $\tilde{J}(j)\leq j$ such that
if a depth $k$ pair in $X$ has a $j$ cut then the depth $\tilde{i}(k)$ image
of that pair in $\tilde{X}$ has a $\tilde{J}(j)$ cut. Furthermore,
$\tilde{J}(j)\rightarrow\infty$ as $j\rightarrow\infty$.
(3) For every depth $k$ pair $\tilde{x}^{(k)},\tilde{y}^{(k)}$ in $\tilde{X}$
there exists an $i(\tilde{x}^{(k)},\tilde{y}^{(k)})\geq k$ such that
$\tilde{x}^{(k)},\tilde{y}^{(k)}$ is the image under the telescoping of a
depth $i(\tilde{x}^{(k)},\tilde{y}^{(k)})$ pair in $X$.
(4) For any $j>k$ there exists a $J(j)\geq j$ such that if
$\tilde{x}^{(k)},\tilde{y}^{(k)}$ is a depth $k$ pair in $\tilde{X}$ with a
$j$ cut, then that pair is the image under the telescoping of a depth
$i(\tilde{x}^{(k)},\tilde{y}^{(k)})$ pair in $X$ with a $J(j)$ cut.
###### Proof.
Suppose that $\tilde{X}$ is a telescoping of $X$ to levels $(n_{l},l\geq 0)$
(with the root being at level $n_{0}=0$).
Proof of (1) and (2):
If $k<n_{1}$, then after telescoping from the root to level $n_{1}$, the image
of any pair of paths that is depth $k$ in $(X,T)$ is a pair of paths with
different 1-codings in $(\tilde{X},\tilde{T})$, i.e., $\tilde{i}(k)=0$. So
assume $k\geq n_{1}.$ In other words, there exists $l_{k}\geq 1$ such that
$n_{l_{k}}\leq k<k+1\leq n_{l_{k}+1}$. In particular, $l_{k}\to\infty$ as
$k\to\infty$.
Let $x^{(k)},y^{(k)}$ be a depth $k$ pair of paths in $X$. Since these paths
agree to level $k$ and differ at level $k+1$, they have the same $n_{l_{k}}$
coding, but not the same $n_{l_{k}+1}$ coding. This means row $n_{l_{k}}$ in
the array of $j$-symbols for $x^{(k)}$ is identical to row $n_{l_{k}}$ in the
array for $y^{(k)}$ and is met in the same position by the verticals (see the
discussion preceding Example 4.6) for both paths, whereas the same is not true
for row $n_{l_{k}+1}$. For every $l\geq 1$, the telescoping removes all rows
of the arrays between rows $n_{l}$ and $n_{l+1}$, so that what used to be row
$n_{l}$ becomes row $l$. Afterwards, row $l_{k}$ in the array for the image of
$x^{(k)}$ is identical to row $l_{k}$ in the array for the image of $y^{(k)}$
and is met in the same position by the verticals for both paths, whereas the
same is not true for row $l_{k}+1$. Hence, the image of the pair
$x^{(k)},y^{(k)}$ under the telescoping is depth $l_{k}$. Letting
$\tilde{i}(k)=l_{k}$, (1) is proved.
Suppose that $j>k$ and the paths $x^{(k)}$ and $y^{(k)}$ have a $j$ cut. This
cut appears as a pair of vertical segments in the arrays for the two paths
that begin in the same position in row 0, end in row $j$, and consist entirely
of rectangle boundaries. Find $l_{j}$ such that $n_{l_{j}}\leq j<n_{l_{j}+1}$.
After removing the rows for the telescoping, these vertical segments in the
arrays for $x^{(k)}$ and $y^{(k)}$ extend from level $0$ to level $l_{j}$ and
still consist entirely of rectangle boundaries. Hence both represent minimal
paths in $\tilde{X}$ from the root to level $l_{j}$. It follows that the image
of $x^{(k)},y^{(k)}$ has an $l_{j}$ cut. Letting $\tilde{J}(j)=l_{j}$, (2) is
proved.
Proof of (3) and (4):
Now let $\tilde{x}^{(k)},\tilde{y}^{(k)}$ be a depth $k$ pair of paths in
$\tilde{X}$. This means row $k$ in the array for $\tilde{x}^{(k)}$ is
identical to row $k$ in the array for $\tilde{y}^{(k)}$ and is met in the same
position by the verticals for both paths, whereas the same is not true for row
$k+1$. In the original diagram, there is a pair of paths whose image under the
telescoping is $\tilde{x}^{(k)},\tilde{y}^{(k)}$. If we restore the rows in
their respective arrays that were removed by the telescoping, rows $k$ and
$k+1$ in the arrays for $\tilde{x}^{(k)}$ and $\tilde{y}^{(k)}$ become rows
$n_{k}$ and $n_{k+1}$. Hence, for some
$i(\tilde{x}^{(k)},\tilde{y}^{(k)})\in[n_{k},n_{k+1})$, it must be the case
that these arrays are now the same from row $0$ to row
$i(\tilde{x}^{(k)},\tilde{y}^{(k)})$ with the vertical in the same position,
and that the same is not true for row $i(\tilde{x}^{(k)},\tilde{y}^{(k)})+1$.
It follows that $\tilde{x}^{(k)},\tilde{y}^{(k)}$ is the image of a depth
$i(\tilde{x}^{(k)},\tilde{y}^{(k)})$ pair of paths in $X$ under the
telescoping. This proves (3).
For (4), it is important to note that when we reinsert rows into an array for
$\tilde{X}$ that were removed during the telescoping to get an array for some
path in $X$, any vertical segment bounding a rectangle in row $l\geq 1$ of the
former becomes a rectangle boundary in row $n_{l}$ of the new array. Moreover,
the vertical line containing this rectangle boundary at level $n_{l}$ must
include rectangle boundaries from row $n_{l}$ all the way up to row 0. So any
$j$ cut for a depth $k$ pair in $\tilde{X}$ just becomes longer when we
reinsert into their respective arrays rows that were removed by the
telescoping. Specifically, any $j$ cut for a depth $k$ pair of paths
$\tilde{x}^{(k)},\tilde{y}^{(k)}$ in $\tilde{X}$ corresponds to an $n_{j}$ cut
for the preimage of $\tilde{x}^{(k)},\tilde{y}^{(k)}$ before the telescoping.
Letting $J(j)=n_{j}$, (4) is proved. ∎
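The index bookkeeping in parts (1), (2) and (4) amounts to locating $k$ or $j$ within the telescoping sequence $(n_{l})$. The following small Python sketch (our own illustration, with a made-up sequence of levels) makes this concrete.

```python
import bisect

levels = [0, 3, 7, 12, 20]  # hypothetical telescoping levels n_0 < n_1 < ...

def telescoped_depth(k: int) -> int:
    """tilde_i(k) of part (1): the l_k with n_{l_k} <= k < n_{l_k + 1}
    (this is 0 when k < n_1)."""
    return bisect.bisect_right(levels, k) - 1

def telescoped_cut(j: int) -> int:
    """tilde_J(j) of part (2): a j cut survives as an l_j cut, where
    n_{l_j} <= j < n_{l_j + 1}."""
    return bisect.bisect_right(levels, j) - 1

def lifted_cut(j: int) -> int:
    """J(j) of part (4): a j cut in the telescoped system lifts to an
    n_j cut in the original system."""
    return levels[j]

print(telescoped_depth(5))   # 1, since n_1 = 3 <= 5 < 7 = n_2
print(telescoped_cut(12))    # 3, since n_3 = 12 <= 12 < 20 = n_4
print(lifted_cut(3))         # 12 = n_3
```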
Some of the systems that we shall encounter while proving our main results
will have the property encapsulated in the following definition.
###### Definition 5.8.
We say that a system is _weakly well timed_ if for infinitely many $k$ for
every $j>k$ there is a depth $k$ pair with a $j$ cut. $WW$ denotes the class
of systems with this property.
By definition, $W\subset\neg DM2\subset WW$. We will want to know what happens
to the well timed and $WW$ properties under microscoping and telescoping.
It is not necessarily the case that if $(\tilde{X},\tilde{T})$ is a
telescoping of $(X,T)$ and is well timed, then $(X,T)$ is well timed. For
example, suppose we telescope $X$ to even levels. It could happen that $(X,T)$
has no pairs with odd depth, and yet for every $k\geq 1$ and every $j>k$ there
exists a depth $k$ pair in $(\tilde{X},\tilde{T})$ with a $j$ cut whose lift
to $(X,T)$ is depth $2k$. In this case, $(\tilde{X},\tilde{T})$ is well timed
while $(X,T)$ is not.
###### Lemma 5.9.
Let $(X,T)$ be a system and $(\tilde{X},\tilde{T})$ be another system obtained
by telescoping $X$ to levels $(n_{l},l\geq 0)$.
(1) If $(X,T)\in W$, then $(\tilde{X},\tilde{T})\in W$. Likewise, if $(X,T)\in
W_{0}$, then $(\tilde{X},\tilde{T})\in W_{0}$.
(2) If $(X,T)\in WW$, then $(\tilde{X},\tilde{T})\in WW$.
(3) If $(\tilde{X},\tilde{T})\in WW$, then $(X,T)\in WW$.
###### Proof.
Proof of (1):
Suppose $(X,T)$ is well timed. Fix $k\geq 1$ and let $j>k$. There exists in
$X$ a depth $n_{k}$ pair with an $n_{j}$ cut. As shown in the proof of
Proposition 5.7, parts (1) and (2), after the telescoping this yields a depth
$k$ pair in $\tilde{X}$ with a $j$ cut.
If $(X,T)$ is very well timed, then for every $k\geq 1$ there exists a depth
$n_{k}$ pair with an $n_{j}$ cut for all $j>k$. The above argument shows that
this yields a depth $k$ pair with long cuts after the telescoping.
Proof of (2):
Suppose $(X,T)\in WW$. In other words, there exists an increasing sequence
$(m_{k})$ such that for every $k$ and every $j>m_{k}$ there is a depth $m_{k}$
pair in $(X,T)$ with a $j$ cut.
By Proposition 5.7 (1), for every $k\geq 1$ there exists $\tilde{i}(m_{k})\leq
m_{k}$ such that the image of any depth $m_{k}$ pair in $X$ under the
telescoping is depth $\tilde{i}(m_{k})$ in $\tilde{X}$. Let
$\tilde{m}_{k}=\tilde{i}(m_{k})$. Since $\tilde{i}(k)\rightarrow\infty$ as
$k\rightarrow\infty$, the sequence $(\tilde{m}_{k})$ is increasing.
Given $k\geq 1$ and $j>m_{k}$, find in $(X,T)$ a depth $m_{k}$ pair with a $j$
cut. By Proposition 5.7 (2), there exists a $\tilde{J}(j)\leq j$ such that the
depth $\tilde{m}_{k}$ image of this pair in $(\tilde{X},\tilde{T})$ has a
$\tilde{J}(j)$ cut. Since $\tilde{J}(j)\rightarrow\infty$ as
$j\rightarrow\infty$, it follows that $(\tilde{X},\tilde{T})\in WW$.
Proof of (3):
Suppose that $(\tilde{X},\tilde{T})\in WW$. There exists an increasing
sequence $(\tilde{m}_{k})$ such that for every $k$ and every $j>\tilde{m}_{k}$
there is a depth $\tilde{m}_{k}$ pair
$\tilde{x}^{({\tilde{m}_{k}})},\tilde{y}^{({\tilde{m}_{k}})}$ in $\tilde{X}$
with a $j$ cut.
Fix $k\geq 1$. As we vary $j$, the pair
$\tilde{x}^{({\tilde{m}_{k}})},\tilde{y}^{({\tilde{m}_{k}})}$ may change, and
hence $i(\tilde{x}^{({\tilde{m}_{k}})},\tilde{y}^{({\tilde{m}_{k}})})$ may
vary. However, it was shown in the proof of Proposition 5.7 (3) that for every
depth $\tilde{m}_{k}$ pair
$\tilde{x}^{({\tilde{m}_{k}})},\tilde{y}^{({\tilde{m}_{k}})}$, we have
$n_{\tilde{m}_{k}}\leq
i(\tilde{x}^{({\tilde{m}_{k}})},\tilde{y}^{({\tilde{m}_{k}})})<n_{\tilde{m}_{k}+1}$.
Hence, there exists $m_{k}\in[n_{\tilde{m}_{k}},n_{\tilde{m}_{k}+1})$ such
that for infinitely many $j>\tilde{m}_{k}$, the corresponding pair
$\tilde{x}^{({\tilde{m}_{k}})},\tilde{y}^{({\tilde{m}_{k}})}$ lifts to a depth
$m_{k}$ pair, which by Proposition 5.7 (4) has a $J(j)\geq j$ cut. Since
$J(j)\rightarrow\infty$ as $j\rightarrow\infty$ (and the $m_{k}$ are all
distinct), it follows that $(X,T)\in WW$. ∎
###### Proposition 5.10.
For a system $(X,T)$ the following statements are equivalent:
(1) $(X,T)$ is weakly well timed.
(2) $(X,T)$ has a telescoping that is well timed.
(3) $(X,T)$ is conjugate to a well timed system.
Thus $WW=CW$.
###### Proof.
We show first that (1) implies (2). Suppose there are infinitely many $k$ for
which for every $j>k$ there exists a depth $k$ pair in $X$ with a $j$ cut. In
other words, there exists an increasing sequence $(m_{k})$ such that for all
$k$ and for every $j>m_{k}$ there is a depth $m_{k}$ pair in $X$ with a $j$
cut. Let $\tilde{X}$ be the telescoping of $X$ to levels $(m_{k})$.
Given $k\geq 1$ and $j>m_{k}$, find in $(X,T)$ a depth $m_{k}$ pair with a $j$
cut. By Proposition 5.7 (1) and (2), the image of this pair is depth $k$ with
a $\tilde{J}(j)$ cut. Since $\tilde{J}(j)\rightarrow\infty$ as
$j\rightarrow\infty$, it follows that $\tilde{X}$ is well timed.
That (2) implies (3) is clear, since conjugacy of $BV$ systems is the
equivalence relation that corresponds to the one for diagrams that is
generated by telescoping and isomorphism [HPS1992, Section 4]. So if $(X,T)$
has a telescoping that is well timed then it is conjugate to that well timed
system.
To prove that (3) implies (1), suppose that $(X,T)$ is conjugate to a well
timed system $(Y,S)$. As remarked above, there are then a system $Z$ and
telescopings $\tilde{X}$ of $X$ and $\tilde{Y}$ of $Y$ such that $Z$
telescopes on even levels $0,2,4,\dots$ to $\tilde{X}$ and on odd levels
$0,1,3,\dots$ to $\tilde{Y}$. By Lemma 5.9 (1), any telescoping of a well
timed system is also well timed. So we may assume that $\tilde{Y}=Y$.
Since $W\subset WW$, $Y$ is also weakly well timed. Hence by Lemma 5.9 (3),
$Z\in WW$. Next consider the telescoping of $Z$ to $\tilde{X}$. By Lemma 5.9
(2), $(\tilde{X},\tilde{T})\in WW$.
Applying part (3) of the lemma again, we conclude that $(X,T)\in WW$. ∎
With appropriate adjustments the foregoing Proposition and its proof adapt to
very well timed systems.
###### Proposition 5.11.
For a system $(X,T)$ the following statements are equivalent:
(1) There are infinitely many $k$ for which there exists a depth $k$ pair with
long cuts.
(2) $(X,T)$ has a telescoping that is very well timed.
(3) $(X,T)$ is conjugate to a very well timed system.
###### Remark 5.12.
Part (3) of Proposition 5.11 shows that parts (1) and (2) persist under
telescoping.
###### Example 5.13.
As promised in Remark 5.4, we show that $WW\cap DM2\neq\emptyset$. We modify
the diagram for the GJ example $X$ by changing every other (even-numbered)
Morse component so that all its vertices are uniformly ordered: for all $j\geq
4$ and all even $i$ such that $1\leq i<j-1$ we change the ordering at
$v(j,2i+1)$ so that it is left to right. Let $Y$ denote the new diagram and
system, with the same vertices $v(j,i)$ and “Morse components” $MC(k)$ as in
$X$. Then for every odd $k$, there is still a depth $k$ pair with long cuts
(these are just the same as they were in the GJ example, see Proposition 4.2
and Corollary 4.3), so the new system is in $WW$.
(In more detail, as in Corollary 4.3, let $x$ and $x^{\prime}$ be paths that
agree to level $k+1$ and pass through $v(k+1,2k)$ and $v(k+1,2k+1)$,
respectively, and such that the ordinal edge label for every edge in $x$ and
in $x^{\prime}$ after level $k+1$ is $2k$. In other words, $x$ and
$x^{\prime}$ enter the Morse component $MC(k)$ at its top and then follow its
two sides all the way down. By Proposition 4.2, $x$ and $x^{\prime}$ are depth
$k$. It is easily verified that for every $j>k$, there exists a time $m<0$
such that $T^{m}x$ and $T^{m}x^{\prime}$ agree to level $j-1$ and are minimal
from the root into $v(j,2k)$ and $v(j,2k+1)$, respectively. Therefore $x$ and
$x^{\prime}$ are depth $k$ and have long cuts.)
We show now that for all even $k$ there are no longer any depth $k$ pairs, so
the new system is (vacuously) in $DM2$ (and hence in $H2$). The idea is that,
as in $X$, the only candidates for depth $k$ pairs are paths down $MC(k)$, but
the changed edge orders cause these pairs to have different $2$-codings, so
they have become depth $1$.
All changes to $X$ were at levels $4$ and higher. Moreover, for every $j\geq
2$, the coding of $v(j,3)$ by vertices at level $j-1$ remains the same in $Y$
as it was in $X$; and at vertices in $Y$ where this coding is different than
it was in $X$, it is now the same as the coding at $v(j,1)$. Hence, if as in
Observation 4.5 we expand each symbol (vertex) $v(j,i)$ to $C_{j-1}(v(j,i))$,
the effect on the $\phi_{j}$-images (strings on $D_{j},E_{j},D_{j-1},E_{j-1}$)
produces the same morphisms as before. The argument made in Example 4.6 that
paths in $X$ with the same $2$-coding are $2$-equivalent can then be applied
to show the same holds true in $Y$. Hence, the proof of Proposition 4.2 (4)
still works to show that for any $k\geq 2$ any depth $k$ pair in the new
system represented by $Y$ is $k$-equivalent but not $(k+1)$-equivalent.
Now let $x$, $x^{\prime}$ be a pair of paths in $Y$ that for some $k\geq 2$ is
depth $k$ in the corresponding system. Since Observation 4.1 (2) still holds
for $Y$, we can argue as in the proof of Proposition 4.2 (3) that $x$ and
$x^{\prime}$ in $Y$ have the same sequence of edge labels, enter $MC(k)$ at
the level at which they first differ and are contained in $MC(k)$ at all
subsequent levels. At any level of $Y$ only pairs of edges inside one of the
odd Morse components in $Y$ can have the same label and distinct sources.
Hence, $k$ must be odd, and so there are no pairs of paths with even depth in
the new system.
Telescoping the new diagram to a strictly increasing sequence of levels
$j_{n}$ will produce a system that is not in $H2$ (or $DM2$). This is because
given any $n$ there are an odd integer $k\in[j_{n},j_{n+1})$ and a depth $k$
pair with long cuts. After the telescoping this pair will be depth $n$ (by the
proof of Proposition 5.7 (1)) and will still have long cuts (by Proposition
5.7 (2)), so the telescoped system is in $W_{0}$. By definition, $W_{0}\cap
H2=\emptyset$, so the class $H2$ is not closed under telescoping.
Remark 5.4 mentioned that $H2$ not being closed under telescoping impacts the
proofs in [dm2008, Hoynes2017]. If in some system for every odd $k$ there were
a depth $k$ pair with a cutoff (so the system is nonvacuously in $H2$, unlike
our example), and for every even $k$ there were a depth $k$ pair with long
cuts (so the system is also in $WW$), the same argument as above would show
that telescoping to any strictly increasing sequence of levels produces a
system that is not in $H2$. If the levels were chosen to produce a system in
$U_{2}$, the result would be in $U_{2}\setminus H2$. The proofs in [dm2008,
Hoynes2017] could be clarified by ruling out this case or dealing with it,
maybe along the lines we suggest in Remark 5.4.
The following Theorem subsumes Proposition 3.11.
###### Theorem 5.14.
No system that is conjugate to a well timed system can also be conjugate to an
untimed system: $CW\subset NCU$.
###### Proof.
Suppose that $X$ is a system that is conjugate to a well timed system and is
also conjugate to an untimed system $Y$. As mentioned above, by [HPS1992,
Theorem 4.7], there are then a system $Z$ and telescopings $\tilde{X}$ of $X$
and $\tilde{Y}$ of $Y$ such that $Z$ telescopes on even levels $0,2,4,\dots$
to $\tilde{X}$ and on odd levels $0,1,3,\dots$ to $\tilde{Y}$.
By Proposition 5.10, $X\in WW$. Repeated application of Lemma 5.9 gives
$\tilde{X}\in WW$, then $Z\in WW$, then $\tilde{Y}\in WW$, and finally $Y\in
WW$. This is a contradiction, since by definition no system can be both
untimed and weakly well timed. ∎
###### Theorem 5.15.
The family of nonexpansive systems is the disjoint union of those conjugate to
well timed systems and those conjugate to untimed systems:
(5.4) $NE=CW\sqcup CU.$
###### Proof.
By Theorem 5.14, $CW\subset NCU$, so it remains only to show that $NCW\subset
CU$. By Proposition 5.10, if $(X,T)\in NCW$, then there is a $k_{0}$ such that
for all $k\geq k_{0}$ there is a $k$ cutoff $j(k)>k$: no depth $k$ pair has a
$j(k)$ cut. Let us telescope to levels $n_{l},l\geq 0$, with $n_{1}>k_{0}$ to
produce the system $(\tilde{X},\tilde{T})$. Suppose that $\tilde{k}\geq 1$ and
$\tilde{x},\tilde{y}$ is a depth $\tilde{k}$ pair in $\tilde{X}$ with a
$\tilde{j}$ cut. Proposition 5.7 (3) and (4) tell us that then
$\tilde{x},\tilde{y}$ is the image under the telescoping of a pair $x,y$ in
$X$ of depth $i(\tilde{x},\tilde{y})\in[n_{\tilde{k}},n_{\tilde{k}+1})$ with a
$J(\tilde{j})=n_{\tilde{j}}$ cut. Every $i\in[n_{\tilde{k}},n_{\tilde{k}+1})$
has a cutoff $j(i)$ in $X$, in particular
$J(\tilde{j})<j(i(\tilde{x},\tilde{y}))$. Since
(5.5) $\tilde{j}\leq
J(\tilde{j})<j(i(\tilde{x},\tilde{y}))\leq\max\\{j(i):i\in[n_{\tilde{k}},n_{\tilde{k}+1})\\},$
$\tilde{j}$ is bounded, over all pairs in $\tilde{X}$ of depth $\tilde{k}$.
Thus in $\tilde{X}$ for every $\tilde{k}$ there is a $\tilde{k}$ cutoff. This
shows that telescoping past $k_{0}$ produces a system in $U$, and hence
$(X,T)\in CU$. ∎
## 6\. Describing untimed systems
Now we take a few steps towards determining exactly which $BV$ systems are
very untimed. If a system $(X,T)$ is very untimed and not conjugate to an
odometer, then, by [dm2008], it must have infinite “topological rank”. At the
moment we do not have an example of a very untimed system that has unbounded
width.
###### Lemma 6.1.
If in a $BV$ diagram level $n+1$ is uniformly ordered and has more than one
vertex, then in the $BV$ system there is a depth $n$ pair of paths with an
$n+1$ cut.
###### Proof.
Choose two paths $x,x^{\prime}$ that are minimal into distinct vertices at
level $n+1$. Since level $n+1$ is uniformly ordered, the $n$-basic block at
each vertex at level $n+1$ is periodic with shortest repeating block $P$. (By
this we mean that for each vertex $v$ at level $n+1$, there is an integer
$n_{v}$ such that the $n$-basic block at $v$ is $P^{n_{v}}$). Thus the
$n$-factor is a rotation on $|P|$ points, and the $n$-coding of every path is
the two-sided sequence $P^{\infty}$, with a choice of the center position.
Since $x$ and $x^{\prime}$ are minimal into level $n+1$, their “dot” is at the
beginning of an explicit appearance of $P$ in the $n$-basic blocks at
$v_{n+1}(x)$ and $v_{n+1}(x^{\prime})$ respectively, so $x,x^{\prime}$ have
the same $n$-coding. Hence $x$ and $x^{\prime}$ are depth $n$ with an $n+1$
cut. ∎
###### Proposition 6.2.
Every bounded width very untimed system has only finitely many levels with
more than one vertex.
###### Proof.
By [dm2008], a bounded width nonexpansive system is conjugate to an odometer.
By [FPS2017] (and also mentioned in [GJ2000]) there is a telescoping that has
infinitely many uniformly ordered levels. Let $n+1$ ($n>0$) be a uniformly
ordered level in the telescoped system, which is still very untimed (see
Remark 5.5, (2)). Because of the Lemma and because no depth $n$ pair can have
an $n+1$ cut, this level $n+1$ must have just one vertex. Returning to the
original (not telescoped) system, the level $m$ corresponding to level $n+1$
in the telescoped system has just one vertex. Any level that follows a level
with just one vertex is uniformly ordered, and so, by the Lemma, level $m+1$
in the original system must also have just one vertex. Thus every level in the
original system from level $m+1$ on has just one vertex. ∎
###### Example 6.3.
So one way to produce very untimed systems is to begin with levels with more
than one vertex with edges ordered so that all the edges downward from each
vertex have different ordinal labels. Exiting these levels downward there can
be no cuts. Then continue forever with single-vertex levels connected by
multiple edges. Any such system is conjugate to an odometer, upon telescoping
from the root to the first single-vertex level.
###### Definition 6.4.
A $BV$ system is deterministic if for every $n\geq 1$ the edges downward from
each vertex at level $n$ have different ordinal labels.
The following Proposition provides more detail about the possible form of
deterministic diagrams such as the one in Example 6.3.
###### Proposition 6.5.
Let $(X,T)$ be a deterministic system. For each $n\geq 1$ let $k_{n}$ denote
the number of vertices at level $n$. Then $k_{1}\geq k_{2}\geq\dots$, and
eventually all $k_{n}=1$. (So the diagram has the form of the previous
Example.)
###### Proof.
If $k_{n}<k_{n+1}$ for some $n\geq 1$, then two of the $k_{n+1}$ minimal edges
(labeled $1$) into level $n+1$ would have to share a source vertex at level
$n$, so the label $1$ would be repeated on the edges downward from that
vertex, contradicting determinism. Hence $k_{n}\geq k_{n+1}$ for all $n$.
If eventually all $k_{n}=k$, we claim that there would be $k$ minimal paths,
and so we must have $k=1$. This is because from each low (late) enough level
$n+1$ there are $k$ minimal edges (labeled $1$) upward to $k$ different
vertices at level $n$ (because each vertex at level $n$ has a single edge
downward that is labeled $1$), and hence $k$ minimal paths from level $n+1$ up
to the root. Looking at larger and larger values of $n$, we can see $k$
distinct paths that follow only minimal edges. ∎
###### Example 6.6.
Not every bounded width very untimed system must be deterministic, as Figure 2
shows. The small numbers are ordinal edge labels, and the strings in
parentheses are $1$-basic blocks. In this example the only possible cuts are
for pairs of paths among $x=0auA...$ (thick edges), $x^{\prime}=0avB\dots$
(dashed edges) and $x^{\prime\prime}=0bwB\dots$ (or pairs in the orbits of
such pairs). While $x$ and $x^{\prime}$ have a $2$ cut, they are not depth
$1$; $x$ and $x^{\prime\prime}$ have $2$ and $3$ cuts, but they are not depth
$1$ or $2$; and $x^{\prime}$,$x^{\prime\prime}$ have a $2$ cut, but they are
not depth $1$. In fact there are no depth $1$ or depth $2$ pairs.
Figure 2. A nondeterministic bounded width very untimed system
## 7\. Conclusion and questions
The family of simple, properly ordered Bratteli-Vershik systems is the
disjoint union of the expansive systems, the systems conjugate to well timed
systems, and the systems conjugate to untimed systems:
(7.1) $BV=E\sqcup CW\sqcup CU.$
The foregoing suggests several questions:
1\. Is there an example of an untimed (or even very untimed) system that has
unbounded width? Is there an untimed system that has infinite topological
rank, equivalently is not conjugate to any odometer? Is the class $U$ closed
under conjugacy?
2\. Is every well timed system conjugate to an $SSNE$ (or $SNE$) system? If
so, one could regularize nonexpansiveness by modifying any member of this
class $W$ of well timed $NE$ systems so that it becomes $SSNE$.
3\. There are many questions about the relations among the classes of systems
that we have defined, and others that could be defined. Exactly which classes
can overlap, exactly what inclusions are there among them, exactly which of
these inclusions are strict, etc.? Is every odometer (by which we here mean
all levels strongly uniformly ordered, cf. Example 5.6) in $U$? Is every
system conjugate to an odometer in $U$? Is $SNE$ contained in $W$? Can $H2$
and $W$ intersect? (We know $DM2\cap W=\emptyset$.) Is $U_{1}\cap
W\neq\emptyset$? Is $U_{2}\cap W\neq\emptyset$?
$^{1}$ INAF – Osservatorio Astronomico di Roma, Via Frascati 33, I-00078 Monte Porzio Catone (RM), Italy
$^{2}$ “Sapienza” Università di Roma – Dip. di Fisica, P.le A. Moro 5, I-00185 Roma, Italy
$^{3}$ Università degli Studi di Roma “Tor Vergata” – Dip. di Fisica, Via della Ricerca Scientifica 1, I-00133 Roma, Italy
$^{4}$ INFN – Roma Tor Vergata, Via della Ricerca Scientifica 1, I-00133 Roma, Italy
$^{5}$ University of Maryland – Dept. of Astronomy, College Park, MD-20742, USA
$^{6}$ NASA – Goddard Space Flight Center, Greenbelt Rd. 8800, MD-20771 Greenbelt, USA
$^{7}$ ASI – Space Science Data Center, Via del Politecnico snc, I-00133 Roma, Italy
$^{8}$ Scuola Normale Superiore di Pisa, P.zza dei Cavalieri 7, I-56126 Pisa, Italy
# Unveiling the periodic variability patterns of the X-ray emission from the
blazar PG 1553+113
T. Aniello$^{1,2,3}$, L. A. Antonelli$^{1}$, F. Tombesi$^{1,3,4,5,6}$, A. Lamastra$^{1}$, R. Middei$^{1,7}$, M. Perri$^{1,7}$, F. G. Saturni$^{1,7}$, A. Stamerra$^{1,8}$, F. Verrecchia$^{1,7}$
(Received 18 March 2024; accepted 5 April 2024)
The search for periodicity in the highly variable multi-wavelength emission of
blazars is a key tool for understanding the dynamical processes at work in
this class of active galactic nuclei. The blazar PG 1553+113 is an attractive
target due to the evidence of periodic oscillations observed at different
wavelengths, with solid evidence of a 2.2-year modulation detected in the
$\gamma$-ray, UV and optical bands. We aim to investigate the variability
pattern of the X-ray emission of PG 1553+113 using a light curve more than 10
years long, in order to robustly assess the presence or absence of a periodic
behavior for which the evidence has so far been only marginal. We conducted
detailed statistical analyses, studying in particular the variability
properties of the X-ray emission of PG 1553+113 by computing Lomb-Scargle
periodograms, which are suited to the analysis of unevenly sampled time
series, and by adopting epoch folding techniques. We find a modulation pattern
in the X-ray light curve of PG 1553+113 with a period of $\sim$1.4 years,
about 35% shorter than the one observed in the $\gamma$-ray domain. Our
finding is in agreement with recent spectro-polarimetric analyses and supports
the presence of multiple dynamical phenomena simultaneously at work in the
central engine of this blazar.
###### Key Words.:
black hole physics — galaxies: active — BL Lacertae objects: general — BL
Lacertae objects: individual: PG 1553+113 — X-rays: general — X-rays: galaxies
— X-rays: individual: PG 1553+113
## 1 Introduction
Blazars are a class of active galactic nuclei (AGN) characterized by extreme
luminosity and variability over the entire electromagnetic spectrum. Their
emission is dominated by a single component, i.e. a jet of relativistic
particles pointing directly towards the observer (Urry & Padovani, 1995). From
a spectroscopic point of view, these sources show a peculiar spectral energy
distribution (SED) characterized by a double-humped shape. The low-frequency
peak, which can be observed from the radio up to the X-ray domain, is
attributed to synchrotron radiation arising from high-energy electrons that
spiral around magnetic field lines (Padovani & Giommi, 1995); its properties
mainly depend on the strength of the magnetic field and on the energy
distribution of the relativistic electrons in the jet (Maraschi et al., 1992).
The second hump is usually observed in the X-to-$\gamma$-ray range, and its
origin is commonly associated with inverse Compton (IC) emission (Ghisellini
et al., 1992; Abdo et al., 2011a; Zdziarski & Bottcher, 2015), in which the
seed photons come either from external radiation fields (external Compton; EC)
or from the synchrotron radiation of the jet itself (synchrotron self-Compton;
SSC). Several lines of evidence, from SED modeling (e.g., Abdo et al., 2011b)
and energetic considerations (Zdziarski & Bottcher, 2015; Liodakis &
Petropoulou, 2020) to observations of correlated flux variations across
different wavebands (e.g., Agudo et al., 2011b, a; Liodakis et al., 2018),
and, recently, polarimetry (Middei et al., 2023a; Peirson et al., 2023),
support the leptonic origin of the second hump of the blazars SED.
Figure 1: MWL LCs of PG 1553+113 in the B (yellow dots), M2 (blue dots) and
X-ray (red dots) bands, together with the X-ray spectral index (dark blue
dots), from 2012 to 2023 from the Swift satellite. The X-ray fluxes are in the
0.2–10 keV band. The B, M2 and X-ray LCs are shown along with the
corresponding 1$\sigma$ uncertainties.
A hallmark of the blazar phenomenon is the prominent flux and spectral
variability, which is thought to arise from changes in jet properties such as
its particle density, magnetic field strength and orientation (Padovani &
Giommi, 1995). Variability can be observed over different timescales from
hours up to decades (Kellermann, 1992). In this context, the search for
periodic signals is an object of growing attention, and the high synchrotron
peaked source PG 1553+113 represents one of the most interesting cases. PG
1553+113, a blazar with optical magnitude $V\,\sim\,14.5$ at redshift
$z\,\sim\,0.4\div 0.5$ (Danforth et al., 2010), shows evidence of a
periodicity ($T\sim 2.2$ yr) in the $\gamma$-ray band ($E\geq\,100\,\rm MeV$)
sampled by the Fermi-LAT satellite and at lower frequencies ($R$-band;
Ackermann et al., 2015; Sobacchi et al., 2017; Peñil et al., 2024). A possible
explanation for the periodic signal was proposed by Sobacchi et al. (2017),
who discussed a scenario where a pair of asymmetric supermassive black holes
(SMBHs), the smaller of which carries a jet, interacts to produce a precession
of the jet itself.
The temporal properties of the PG 1553+113 X-ray emission are instead more
debated: Huang et al. (2021) claimed to detect the same periodicity as in the
$\gamma$-ray emission within a scenario in which both SMBHs
possess a jet. Conversely, Peñil et al. (2024), working on a blazar sample,
identified a periodicity of $\sim$1.5 years with a significance of
$\sim$2$\sigma$. In this paper, we report on the temporal properties of PG
1553+113 by investigating the data collected in the rich Swift archive, which
provides 617 X-ray and $>$400 optical/UV observations. By constructing the
source multi-wavelength (MWL) light curves (LCs), we correlate the various
emission bands and search for an unambiguous periodic signal at each
wavelength with robust methods of time series analysis, such as the
construction of the Lomb-Scargle periodograms (Lomb, 1976; Scargle, 1982) and
the application of epoch folding techniques (e.g., Larsson, 1996), with
particular attention to the X-ray signal, in order to identify an associated
periodicity with sufficient significance.
The paper is organised as follows: in Sect. 2 we illustrate the data selection
and reduction; in Sect. 3 we compute correlations between different bands; in
Sect. 4 we present the X-ray variability analysis; in Sect. 5 we estimate the
uncertainty on the X-ray period and assess possible red-noise biases; in
Sect. 6 we discuss our findings; finally, in Sect. 7 we summarize the obtained
results. Throughout the text, we adopt a concordance cosmology with
$H_{0}=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{M}=0.3$ and
$\Omega_{\Lambda}=0.7$.
## 2 Observations and data reduction
Launched in 2004, the Swift satellite carries the X-ray Telescope (XRT), which
is sensitive in the $0.2\div 10$ keV energy band, and the UV/Optical Telescope
(UVOT), capable of observations in the $170\div 600$ nm band. XRT has high
spatial resolution of 18′′, allowing for precise localization of celestial
objects. XRT and UVOT operate simultaneously, to enable concurrent
observations in different electromagnetic bands (Roming et al., 2005; Burrows
et al., 2005). Since our e retrieved the X-ray, UV and optical reduced data
from 2005 to 2023 from the Swift public mirror archive 111Available at
https://swift.ssdc.asi.it/. of the Space Science Data Center (SSDC) at the
Italian Space Agency (ASI). In this analysis, we excluded the data taken
before 2012, characterized by a sparse temporal sampling, to avoid the
introduction of biases that could be ascribed to time intervals containing no
data points.
Correlation | Pearson coeff. | Degrees of freedom
---|---|---
U/X | 0.54 | 265
B/X | 0.50 | 264
V/X | 0.48 | 254
W1/X | 0.57 | 279
M2/X | 0.58 | 269
W2/X | 0.60 | 278
X PhIdx/flux | 0.55 | 303
Table 1: Results of the correlation analysis (Pearson correlation coefficients
and number of degrees of freedom) between pairs of wavebands, and between the
X-ray photon index and flux.
The Swift-XRT observations were carried out in the Windowed Timing (WT) and
Photon Counting (PC) readout modes. The data were first reprocessed locally
with the XRTDAS software package (version v3.7.0), developed by the ASI-SSDC
and included in the NASA-HEASARC HEASoft package (version v6.31.1; available
at https://heasarc.gsfc.nasa.gov/docs/software/heasoft/).
Standard calibration and filtering processing steps were applied to the data
using the xrtpipeline task. The calibration files available from the Swift-XRT
CALDB (version 20220803) were used. Events for the temporal and spectral
analysis were selected within a circle of 20-pixel ($\sim$47′′) radius, while
the background was estimated from nearby circular regions with a radius of 40
pixels. For each observation, the X-ray energy spectrum was first binned with
the grppha tool of the FTOOLS package (available at
https://heasarc.gsfc.nasa.gov/ftools/) to ensure a minimum of 20 counts per
bin, and then modeled using the XSPEC software package (available at
https://heasarc.gsfc.nasa.gov/xanadu/xspec/), adopting a single power-law
model.
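As an illustration of this per-observation procedure, the following is a minimal sketch using the PyXspec interface to XSPEC; the file name, the fitted band and the fit statistic are our placeholders, not the actual analysis scripts.

```python
# Minimal per-observation spectral fit (sketch): a grppha-grouped spectrum
# with >=20 counts per bin is loaded and modeled with a single power law.
import xspec

spec = xspec.Spectrum("obs_grp20.pha")   # placeholder file name
spec.ignore("**-0.3,10.0-**")            # restrict to the fitted band (assumed)
model = xspec.Model("powerlaw")          # single power-law model
xspec.Fit.statMethod = "chi"             # chi-square is valid for grouped data
xspec.Fit.perform()
print("photon index:", model.powerlaw.PhoIndex.values[0])
print("normalization:", model.powerlaw.norm.values[0])
```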
We obtained dereddened UV and optical fluxes with a dedicated ASI-SSDC
pipeline for the analysis of UVOT sky images (Giommi et al., 2012). We first
executed the aperture photometry task included in the UVOT official software
from the HEASoft package (version v6.26), extracting source counts within a
standard 5$\arcsec$ circular aperture and the background counts from three
circular 18$\arcsec$ apertures that were selected to exclude nearby stars. We
then derived the source dereddened fluxes by applying the official UVOT
calibrations from the CALDB (Breeveld et al., 2011), and adopting a standard
UV/optical mean interstellar extinction law (Fitzpatrick, 1999) with a mean
$E(B-V)$ value of 0.0447 mag (Schlafly & Finkbeiner, 2011). We show the X-ray,
UV (M2 filter) and optical ($B$-band) LCs obtained in this way in Fig. 1,
along with the X-ray photon index: a visual inspection already reveals
considerable flux variability. Furthermore, while the UV and optical trends
overlap each other, the X-ray band exhibits some peaks that do not appear in
the other bands, hinting at a possible X-ray periodicity that differs from the
optical/UV one.
## 3 Correlations between different bands, and between photon index and X-ray
flux
Figure 2: PG 1553+113 X-ray photon index–to-flux correlation. The two linear
fits of photon index versus flux and vice-versa (red dotted lines) are shown
along with the corresponding bisector fit (red dashed line). The data points
are color-coded on the basis of their MJD.
We first studied the correlations among the various bands, and
between X-ray photon index and flux. The photon index and the X-ray flux of PG
1553+113 in the 0.3-10 keV band are moderately anti-correlated (see Fig. 2) as
we computed a Pearson coefficient (Bevington & Robinson, 2003) of $r=-0.55$
and an associated null-hypothesis probability $p($¿$r)=3.6\times 10^{-24}$
(see Tab. 1). The X-ray photon index is flatter as the source flux increases.
This is commonly observed in blazars (the so-called “harder when brighter”
behavior; e.g., Giommi et al., 2021), and is expected when particles are
injected and accelerated in the jet (Abdo et al., 2010).
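The correlation analysis itself is straightforward to reproduce; a minimal sketch follows, assuming the flux pairs have already been matched onto simultaneous XRT/UVOT epochs (the matching step and the array names are ours).

```python
import numpy as np
from scipy.stats import pearsonr

def band_correlation(flux_a, flux_b):
    """Pearson coefficient, null-hypothesis probability and degrees of freedom
    for two simultaneously sampled flux series."""
    r, p = pearsonr(np.asarray(flux_a), np.asarray(flux_b))
    dof = len(flux_a) - 2        # degrees of freedom of the Pearson test
    return r, p, dof

# Example with synthetic, moderately correlated data:
rng = np.random.default_rng(0)
x = rng.normal(size=280)
r, p, dof = band_correlation(x, 0.6 * x + rng.normal(size=280))
print(f"r = {r:.2f}, p(>r) = {p:.2e}, dof = {dof}")
```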
We then tested the correlation properties between the UV, optical and X-ray
bands, respectively. Moderate correlations were found as shown in Fig. 3 and
Tab. 1. Such a study highlights the typical behavior of the radiation emitted
from blazars in adjacent wavebands such as the X-rays and the optical/UV,
since the respective emitting regions partially overlap in the jet and produce
photons through the same underlying physical process (i.e. the synchrotron or
IC radiation), or through processes that make them vary in a
quasi-simultaneous way (Dhiman et al., 2021).
Figure 3: Correlations of the UV-to-X-ray and optical-to-X-ray bands of PG
1553+113. As in Fig. 2, the two linear correlation fits (red dotted lines) are
shown along with the corresponding bisector fit (red dashed line).
## 4 Multi-wavelength variability analysis
To investigate the periodic behavior of the X-ray emission in PG 1553+113, we
analyzed the LC by employing numerical methods that are commonly used in
time-series studies. In particular, we mainly
relied on the construction of the Lomb-Scargle (LS) periodogram (Lomb, 1976;
Scargle, 1982), a technique that is widely adopted to search for periodic
signals in astronomical data sets (e.g., VanderPlas, 2018; Vio et al., 2013)
and especially suited for analysing unevenly sampled time series (Baluev,
2008). This method consists in the calculation of the power spectral density
(PSD) of a time series, estimating the signal likelihood at each frequency on
the basis of a least-squares fit of a sinusoidal model to the data. For our
analysis, we adopted the LS routines contained inside the AstroPy (v5.0)
Python package (Astropy Collaboration et al., 2022).
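A minimal sketch of this step with the AstroPy routines is shown below; the synthetic light curve stands in for the Swift data, and the adopted numbers (sampling, noise level) are illustrative only.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Unevenly sampled synthetic LC with a 1.4-yr sinusoid plus noise (times in days).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 11 * 365.25, 600))
y = np.sin(2 * np.pi * t / (1.4 * 365.25)) + rng.normal(0.0, 0.5, t.size)
dy = np.full_like(t, 0.5)

ls = LombScargle(t, y, dy)
frequency, power = ls.autopower()       # least-squares sinusoid fit per frequency
f_peak = frequency[np.argmax(power)]
print("best period [yr]:", 1.0 / f_peak / 365.25)
# False alarm probability of the highest peak under a noise-only hypothesis:
print("FAP:", ls.false_alarm_probability(power.max()))
```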
In carrying out our analysis, we already knew that the optical (and the
associated UV) LCs exhibit a $\sim$2.2-yr period (Sobacchi et al., 2017);
therefore, we expected the LS periodogram of such LCs to yield a comparable
result, as a sanity check of the method. We thus computed the LS periodograms
associated with the PG 1553+113
X-ray, UV and optical LCs, and identified prominent peaks in the PSD. We show
the results in the left panels of Fig. 4: a visual inspection reveals that,
while the main peak associated with the optical and UV variability patterns is
located at the same frequency, a clearly prominent peak also appears for the
X-ray signal, but – at variance with the optical/UV case – is shifted to a
higher frequency (corresponding to a shorter period).
We found that such a peak was located at a frequency of $\sim$2.3 $\times
10^{-8}$ Hz, corresponding to $T_{\rm X}\sim 1.4$ years. This value is
$\sim$35% lower than the $\gamma$-ray period $T_{\gamma}\sim 2.2$ years
identified in the Fermi-LAT data by Ackermann et al. (2015) and in
the optical LC by Sobacchi et al. (2017), thus hinting at the presence of a
different periodic process. To quantify the significance of this peak, we then
estimated its false alarm probability (FAP) level, i.e. the probability of
accidentally obtaining a given peak power due to noise fluctuations. We
considered statistically significant only those peaks whose FAP level was
falling below $\sim$10% (Sturrock & Scargle, 2010), setting the corresponding
LS power of $\sim$0.04 to be our 1$\sigma$ significance; in doing so, we
obtained a significance of 9.2$\sigma$ for the 1.4-yr X-ray peak. We also
repeated the procedure on the entire PG 1553+113 X-ray LC from 2005 to 2023,
i.e. including those data that were discarded in the selection described in
Sect. 2, to check the persistence of the peak in the LS periodogram: the test
yielded the same $T_{\rm X}$ with a significance of 7.5$\sigma$. We argue that
this lower significance is due to the presence of large time gaps in the
complete X-ray LC (see Fig. 1).
Figure 4: Left panels: PG 1553+113 LS periodograms of the X-ray, UV (W2) and
optical bands ($V$; blue solid lines). In each panel, the frequency of the
main peak (black dashed line) is highlighted, and its value is reported (see
legend). Right panels: X-ray, UV and optical LCs (blue points), along with the
relative sinusoids of periods corresponding to the LS most significant
frequencies (red dashed lines). In such panels, the sinusoid maxima
approximately coinciding with flux peaks in the LCs (grey dot-dashed lines)
are marked, and the corresponding period is reported (see legend).
In the UV and optical LS periodograms, the most significant peak lies at a
frequency of $\sim$1.5 $\times 10^{-8}$ Hz, corresponding to $T_{\rm
opt}\sim 2.1$ years with a significance of 5.8$\sigma$ (see Fig. 4). For
completeness, we also tested the LS analysis on the publicly available
Fermi-LAT $\gamma$-ray data (available at
https://fermi.gsfc.nasa.gov/ssc/data/access/) from 2008 to 2023: in doing so,
we again obtained the peak at a frequency of $\sim$1.5 $\times 10^{-8}$ Hz
with a significance of 7.3$\sigma$, corresponding to the well-known
$T_{\gamma}\sim 2.2$ years found by Ackermann et al. (2015). Having retrieved
a result comparable with that reported in the literature for the PG 1553+113
joint optical/$\gamma$-ray signal (Sobacchi et al., 2017), this analysis
strengthens our finding of a discrepant $T_{\rm X}$ with respect to the
variability period of the LCs at other wavebands.
To further confirm the temporal properties of the PG 1553+113 X-ray emission,
we applied two different methods. First, we used the timing tasks provided by
the Xronos software package (available at
https://heasarc.gsfc.nasa.gov/xanadu/xronos/xronos.html), designed for the
analysis of high-energy astrophysical data (Stella & Angelini, 1992). We
started from the power spectrum (powerspec) method to calculate the PSD of the
LCs in each energy band; in this way, we identified the significant peaks
corresponding to potential periods in the X-ray LC that match the results
obtained with the LS analysis. Then, we applied the epoch-folding search
(efsearch) method to compute the distribution of the LC periodicities over a
grid of trial periods: the distribution peak yielded a best-fit period of
$\sim$1.4 years, further confirming the LS findings. Finally, we employed the
epoch folding (efold) method to generate folded LCs at the specific periods of
interest. We show the outcome of this cross-check in Fig. 5.
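For reference, phase folding of the kind performed by efold reduces to a few lines; the sketch below (plain NumPy, with our own array names) averages the flux in phase bins at a trial period. An efsearch-like scan simply repeats it over a period grid, keeping the period that maximizes the chi-square of the folded profile against a constant flux.

```python
import numpy as np

def fold_light_curve(t, y, period, n_bins=20):
    """Fold times t (days) at a trial period and average the flux per phase bin."""
    phase = (t % period) / period
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(phase, edges) - 1
    profile = np.array([y[idx == k].mean() if np.any(idx == k) else np.nan
                        for k in range(n_bins)])
    return 0.5 * (edges[:-1] + edges[1:]), profile
```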
Then, we also used the “significance spectrum” (SigSpec) algorithm developed
for asteroseismology (Reegen, 2007; Chang et al., 2011; Maceroni et al.,
2014), which is based on the analysis of frequency- and phase-dependent
spectral significance levels $S$ of peaks in a discrete Fourier transform of
the signal, through the computation of the probability density function and
its associated FAP due to white noise. Like the LS analysis, this method is
particularly suited for periodicity studies on sparse data. We thus executed
the SigSpec algorithm on the UVOT W2 and $V$ time series and finally on the
X-ray time series, obtaining the highest significance periods $T_{\rm W2}\sim
2.08$ years ($S_{\rm W2}\sim 28.0$), $T_{V}\sim 2.11$ years ($S_{V}\sim 29.6$)
and $T_{\rm X}\sim 1.39$ years ($S_{\rm X}\sim 27.5$), respectively. These
values are also in agreement with those obtained with the LS analysis, further
confirming the soundness of our result.
Figure 5: Left panel: PG 1553+113 PSD of the X-ray LC obtained with the
powerspec package of Xronos. Middle panel: best-fit period (solid line) of the
PG 1553+113 X-ray LC obtained with the efsearch task, along with its numerical
value in seconds corresponding to $\sim$1.4 years (see text). Right panel:
epoch folding of the PG 1553+113 X-ray LC obtained with the efold method on
the basis of the best-fit period determined by efsearch.
## 5 X-ray period uncertainty estimate and red-noise bias analysis
The calculation of the LS periodogram does not provide any estimate of the
uncertainty on the position of the significant peaks. To estimate this
quantity, we performed an extensive LS analysis over many altered versions of
the PG 1553+113 X-ray LC, in which we replaced each point with a new observing
epoch and flux level that differ from the original ones by random amounts
extracted from appropriate distributions centered on the actual data. For
altering the observing epochs, we adopted uniform distributions, with widths
equal to the full width at half maximum (FWHM) of $\sim$1 year derived from
the Gaussian fit to the distribution of best-fit periods computed by the
efsearch task (see Fig. 5); for the flux values, we instead adopted Gaussian
distributions with standard deviations equal to the associated 1$\sigma$
errors. In this way, we produced $10^{3}$ realizations of the PG 1553+113
X-ray LC; for each realization, we then computed the LS periodogram, and
derived the posterior distribution of frequencies of the most significant
peak. The statistical analysis of this distribution yielded a best estimate of
the PG 1553+113 X-ray period of $T_{\rm X}=1.41\pm 0.68$ years.
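A sketch of this Monte Carlo procedure is reported below; t, y and dy stand for the X-ray epochs (days), fluxes and 1$\sigma$ errors, and the FWHM value is the one quoted above.

```python
import numpy as np
from astropy.timeseries import LombScargle

def peak_frequency_distribution(t, y, dy, fwhm_days=365.25, n_real=1000, seed=0):
    """LS peak frequency over jittered realizations of the light curve."""
    rng = np.random.default_rng(seed)
    peaks = np.empty(n_real)
    for i in range(n_real):
        t_i = t + rng.uniform(-fwhm_days / 2, fwhm_days / 2, t.size)  # epoch jitter
        y_i = y + rng.normal(0.0, dy)                                 # flux jitter
        freq, power = LombScargle(t_i, y_i, dy).autopower()
        peaks[i] = freq[np.argmax(power)]
    return peaks  # mean and std give the best-fit period and its uncertainty
```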
Despite the high significance level of our results, we cannot a priori exclude
that the LS analysis is biased by sources of uncertainty that could produce
spurious signals in the periodogram. This could be due, e.g., to non-periodic
processes that can mimic a periodic temporal behavior on the time
scales of our interest, such as random variability in the AGN flux that is
usually distributed according to a red noise spectrum (Bhatta, 2017; Vaughan
et al., 2016). Since the PSD of blazar LCs is of the red-noise type (Vaughan,
2005), the power level – and thus the FAP – is expected to increase at low
frequencies. To assess that our results are not biased by red-noise processes,
we simulated the response of the LS analysis to a sample of mock LCs generated
according to a pure red noise PSD.
To this aim, we generated such LCs by extracting, from a red-noise spectral
distribution (e.g., Gardiner, 1994), a number of mock flux points equal to the
amount of X-ray data at our disposal, and associating each of them with an MJD
time of our observations. We iterated this process $10^{5}$ times to ensure
the statistical robustness of the results; for each mock
LC produced in this way, we computed the associated LS periodogram using the
same methodology described in Sect. 4. Requesting a minimum LS power of
$\sim$0.04 as our 1$\sigma$ level – i.e. the same amount associated with the
threshold FAP of our real data (see Sect. 4) – we found that at most
$\sim$16$\%$ of our mock LCs rise above 5$\sigma$ in the frequency range of
interest $(2\div 3)\times 10^{-8}$ Hz (see Fig. 6). Such a fraction is
non-negligible, and thus suggests that some caution should be adopted in
claiming a firm discovery of the $\sim$1.4-year periodicity; nevertheless,
these findings point toward the plausible detection of a true periodic signal
in the X-ray LC of PG 1553+113 at a $\sim$84% confidence level.
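One standard recipe for such mock LCs is sketched below: Fourier amplitudes are drawn from a power-law PSD $\propto f^{-\beta}$ with random phases, inverted on a regular grid, and then sampled at the real observation epochs. The generator used in the paper follows Gardiner (1994), so the exact recipe and the spectral slope below are our assumptions.

```python
import numpy as np

def red_noise_lc(t_obs, beta=2.0, n_grid=4096, seed=0):
    """Mock red-noise fluxes evaluated at the observed epochs t_obs (days)."""
    rng = np.random.default_rng(seed)
    dt = (t_obs.max() - t_obs.min()) / n_grid
    freqs = np.fft.rfftfreq(n_grid, d=dt)[1:]                # skip f = 0
    amps = freqs ** (-beta / 2.0)                            # |FT| ~ sqrt(PSD)
    spectrum = amps * (rng.normal(size=freqs.size)
                       + 1j * rng.normal(size=freqs.size))   # random phases
    series = np.fft.irfft(np.concatenate(([0.0], spectrum)), n=n_grid)
    t_grid = t_obs.min() + dt * np.arange(n_grid)
    return np.interp(t_obs, t_grid, series)                  # resample at real MJDs
```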
## 6 Discussion
The presence of multiple periodic patterns in the MWL LCs of PG 1553+113 has
been the object of various studies (Ackermann et al., 2015; Sobacchi et al.,
2017; Huang et al., 2021; Peñil et al., 2022; Adhikari et al., 2023);
recently, there has also been an indication of a potential long-term
variability trend
in the $\gamma$-ray emission ($\sim$22 years; Adhikari et al., 2023; Peñil et
al., 2024). Different scenarios have been proposed to explain this behaviour:
the most widely accepted one relies upon the presence, inside the PG 1553+113
central engine, of a binary system of SMBHs in which one of the two carries a
jet that is gravitationally affected by the SMBH motions (Ackermann et al.,
2015). This may happen in several ways, such as (i) a jet precession (Sobacchi
et al., 2017), (ii) a helical shaping of the jet (Abdo et al., 2010), or (iii)
instabilities in the jet structure (Huang et al., 2021). Alternatively, (iv)
accretion modulations (Tavani et al., 2018) and (v) evaporation processes
possibly coexisting with disk overdensities (Adhikari et al., 2023) may also
lead to similar results.
The $\gamma$-ray and optical period was already modeled by Sobacchi et al.
(2017), considering a jet precession with a period $T_{\gamma}\sim 2.2$ years.
To account for an overall different X-ray period, Huang et al. (2021) fit the
Swift-XRT LC with a two-jet model, each carried by one of the PG 1553+113
SMBHs, assuming a precession with $T_{X}\sim 2.2$ years acting on both jets;
however, their result is based on the analysis of a less extended X-ray LC
with respect to ours. Also Peñil et al. (2024), by analyzing the Swift-XRT
data of a sample of $\gamma$-ray detected blazars, found for PG 1553+113 an
X-ray period of $\sim$1.5 years with a significance of $\sim$2$\sigma$ after
averaging on the temporal properties of the entire sample.
A possible explanation that does not take into account a binary SMBH system
could be the cyclic injection of large quantities of matter from the innermost
regions of the central engine into the jet base (Lewis et al., 2019). If this
input of matter occurs regularly, this could produce the emission of a
modulated X-ray signal with a different period with respect to that of the
$\gamma$-ray, UV and optical emission (due to the jet precession). It is
interesting to note that recent observations from the IXPE satellite (Middei
et al., 2023b) indicate the presence of different emitting regions in the jet
structure, hinting at either a stratified jet or different levels of
turbulence inside the jet structure.
The plausible detection of a different time modulation of the PG 1553+113
X-ray emission with respect to the $\gamma$-ray, UV and optical ones further
complicates the road to understanding the physical mechanisms acting in the
central engine of this blazar. Future MWL observations and studies
involving a more detailed modeling of the PG 1553+113 innermost structure
(SMBH system, accretion disk, jet, and the respective interplay), will be
crucial to eventually explain the origin of its temporal properties.
Figure 6: LS periodograms calculated on randomly generated LCs of pure red
noise. For plotting purposes, we show $10^{2}$ (grey solid lines) out of the
total $10^{5}$ realizations, superimposed to the LS periodogram of the real PG
1553+113 X-ray data (blue dot-dashed line) and the corresponding 5$\sigma$
significance level (black dashed line) (see Sect. 4). The relevant frequency
interval for our analysis of $(2\div 3)\times 10^{-8}$ Hz (black dotted lines)
is also highlighted.
## 7 Summary and future work
In this study, we have conducted a comprehensive analysis of the X-ray, UV and
optical data of the blazar PG 1553+113, aimed at investigating the possible
presence of a characteristic X-ray periodicity that differs from the already
ascertained $\gamma$-ray and optical variability period. We summarize our main
findings below:
1. 1.
the PG 1553+113 X-ray, UV and optical LCs are all moderately correlated with
each other according to the Pearson analysis ($r\sim 0.5$); the X-ray photon
index is anti-correlated with the X-ray flux at a similar level. This behavior is
typical of blazars, where the light is emitted almost entirely from the jet
due to the synchrotron and IC processes (Padovani & Giommi, 1995; Maraschi et
al., 1992).
2. 2.
The X-ray LC constructed over $\gtrsim$10 observer-frame years of Swift-XRT
data likely ($>$80% confidence level) exhibits a periodic emission, but with a
shorter characteristic period $T_{\rm X}\sim 1.4$ years with respect to that
found in the optical and $\gamma$-ray bands ($T_{\rm opt}=T_{\rm
UV}=T_{\rm\gamma}\sim 2.2$ years; Ackermann et al., 2015; Sobacchi et al.,
2017; Peñil et al., 2024).
Current scenarios are not able to properly explain such a difference within
the widely accepted framework of a binary system of SMBHs carrying a
precessing jet in the PG 1553+113 central engine (Huang et al., 2021; Tavani
et al., 2018;
Sobacchi et al., 2017; Adhikari et al., 2023); therefore, further theoretical
investigations and observational data are needed to better disentangle the
physical mechanisms that lie at the base of the different variability periods
of the PG 1553+113 MWL emission.
###### Acknowledgements.
We thank the anonymous referee for their helpful comments. We acknowledge M.
Imbrogno (INAF-OAR) for useful discussion about the use of the Xronos
software. This article is part of TA’s work for the Ph.D. in Astronomy,
Astrophysics and Space Science, jointly organized by the “Sapienza” University
of Rome, University of Rome “Tor Vergata” and INAF. Reproduced with permission
from Astronomy & Astrophysics, © ESO.
## References
* Abdo et al. (2010) Abdo, A. A., Ackermann, M., Ajello, M., et al. 2010, ApJS, 188, 405
* Abdo et al. (2011a) Abdo, A. A., Ackermann, M., Ajello, M., et al. 2011a, ApJ, 730, 101
* Abdo et al. (2011b) Abdo, A. A., Ackermann, M., Ajello, M., et al. 2011b, ApJ, 730, 101
* Ackermann et al. (2015) Ackermann, M., Ajello, M., Albert, A., et al. 2015, ApJ, 813, L41
* Adhikari et al. (2023) Adhikari, S., Penil, P., Westernacher-Schneider, J. R., et al. 2023, arXiv e-prints, arXiv:2307.11696
* Agudo et al. (2011a) Agudo, I., Jorstad, S. G., Marscher, A. P., et al. 2011a, ApJ, 726, L13
* Agudo et al. (2011b) Agudo, I., Marscher, A. P., Jorstad, S. G., et al. 2011b, ApJ, 735, L10
* Astropy Collaboration et al. (2022) Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022, ApJ, 935, 167
* Baluev (2008) Baluev, R. V. 2008, MNRAS, 385, 1279
* Bevington & Robinson (2003) Bevington, P. R. & Robinson, D. K. 2003, Data reduction and error analysis for the physical sciences
* Bhatta (2017) Bhatta, G. 2017, ApJ, 847, 7
* Breeveld et al. (2011) Breeveld, A. A., Landsman, W., Holland, S. T., & et al. 2011, AIPC, 1358, 373
* Burrows et al. (2005) Burrows, D. N., Hill, J. E., Nousek, J. A., et al. 2005, Space Sci. Rev., 120, 165
* Chang et al. (2011) Chang, D. C., Ngeow, C. C., & Chen, W. P. 2011, in Astronomical Society of the Pacific Conference Series, Vol. 451, 9th Pacific Rim Conference on Stellar Astrophysics, ed. S. Qain, K. Leung, L. Zhu, & S. Kwok, 143
* Danforth et al. (2010) Danforth, C. W., Keeney, B. A., Stocke, J. T., Shull, J. M., & Yao, Y. 2010, ApJ, 720, 976
* Dhiman et al. (2021) Dhiman, V., Gupta, A. C., Gaur, H., & Wiita, P. J. 2021, MNRAS, 506, 1198
* Fitzpatrick (1999) Fitzpatrick, E. L. 1999, PASP, 111, 63
* Gardiner (1994) Gardiner, C. W. 1994, Handbook of stochastic methods for physics, chemistry and the natural sciences
* Ghisellini et al. (1992) Ghisellini, G., Padovani, P., Celotti, A., & Maraschi, L. 1992, in American Institute of Physics Conference Series, Vol. 254, Testing the AGN paradigm, ed. S. S. Holt, S. G. Neff, & C. M. Urry, 398–408
* Giommi et al. (2021) Giommi, P., Perri, M., Capalbi, M., et al. 2021, MNRAS, 507, 5690
* Giommi et al. (2012) Giommi, P., Polenta, G., Lähteenmäki, A., et al. 2012, A&A, 541, A160
* Huang et al. (2021) Huang, S., Yin, H., Hu, S., et al. 2021, ApJ, 922, 222
* Kellermann (1992) Kellermann, K. I. 1992, Science, 258, 145
* Larsson (1996) Larsson, S. 1996, A&AS, 117, 197
* Lewis et al. (2019) Lewis, T. R., Finke, J. D., & Becker, P. A. 2019, ApJ, 884, 116
* Liodakis & Petropoulou (2020) Liodakis, I. & Petropoulou, M. 2020, ApJ, 893, L20
* Liodakis et al. (2018) Liodakis, I., Romani, R. W., Filippenko, A. V., et al. 2018, MNRAS, 480, 5517
* Lomb (1976) Lomb, N. R. 1976, Ap&SS, 39, 447
* Maceroni et al. (2014) Maceroni, C., Lehmann, H., da Silva, R., et al. 2014, A&A, 563, A59
* Maraschi et al. (1992) Maraschi, L., Celotti, A., & Ghisellini, G. 1992, in Physics of Active Galactic Nuclei, ed. W. J. Duschl & S. J. Wagner, 605
* Middei et al. (2023a) Middei, R., Liodakis, I., Perri, M., et al. 2023a, ApJ, 942, L10
* Middei et al. (2023b) Middei, R., Perri, M., Puccetti, S., et al. 2023b, ApJ, 953, L28
* Padovani & Giommi (1995) Padovani, P. & Giommi, P. 1995, MNRAS, 277, 1477
* Peñil et al. (2022) Peñil, P., Ajello, M., Buson, S., et al. 2022, arXiv e-prints, arXiv:2211.01894
* Peñil et al. (2024) Peñil, P., Westernacher-Schneider, J. R., Ajello, M., et al. 2024, MNRAS, 527, 10168
* Peirson et al. (2023) Peirson, A. L., Negro, M., Liodakis, I., et al. 2023, ApJ, 948, L25
* Reegen (2007) Reegen, P. 2007, A&A, 467, 1353
* Roming et al. (2005) Roming, P. W. A., Kennedy, T. E., Mason, K. O., et al. 2005, Space Sci. Rev., 120, 95
* Scargle (1982) Scargle, J. D. 1982, ApJ, 263, 835
* Schlafly & Finkbeiner (2011) Schlafly, E. F. & Finkbeiner, D. P. 2011, ApJ, 737, 103
* Sobacchi et al. (2017) Sobacchi, E., Sormani, M. C., & Stamerra, A. 2017, MNRAS, 465, 161
* Stella & Angelini (1992) Stella, L. & Angelini, L. 1992, XRONOS, a timing analysis software package : user’s guide : version 3.00
* Sturrock & Scargle (2010) Sturrock, P. A. & Scargle, J. D. 2010, ApJ, 718, 527
* Tavani et al. (2018) Tavani, M., Cavaliere, A., Munar-Adrover, P., & Argan, A. 2018, ApJ, 854, 11
* Urry & Padovani (1995) Urry, C. M. & Padovani, P. 1995, PASP, 107, 803
* VanderPlas (2018) VanderPlas, J. T. 2018, ApJS, 236, 16
* Vaughan (2005) Vaughan, S. 2005, A&A, 431, 391
* Vaughan et al. (2016) Vaughan, S., Uttley, P., Markowitz, A. G., et al. 2016, MNRAS, 461, 3145
* Vio et al. (2013) Vio, R., Diaz-Trigo, M., & Andreani, P. 2013, Astronomy and Computing, 1, 5
* Zdziarski & Bottcher (2015) Zdziarski, A. A. & Bottcher, M. 2015, MNRAS, 450, L21
# Models Out Of Line:
A Fourier Lens On Distribution Shift Robustness
Sara Fridovich-Keil†, Brian R. Bartoldson$\ddagger$, James
Diffenderfer$\ddagger$,
Bhavya Kailkhura$\ddagger$, Peer-Timo Bremer$\ddagger$
†UC Berkeley, $\ddagger$Lawrence Livermore National Laboratory
Corresponding author:<EMAIL_ADDRESS>
###### Abstract
Improving the accuracy of deep neural networks (DNNs) on out-of-distribution
(OOD) data is critical to an acceptance of deep learning (DL) in real world
applications. It has been observed that accuracies on in-distribution (ID)
versus OOD data follow a linear trend and models that outperform this baseline
are exceptionally rare (and referred to as “effectively robust”). Recently,
some promising approaches have been developed to improve OOD robustness: model
pruning, data augmentation, and ensembling or zero-shot evaluating large
pretrained models. However, there still is no clear understanding of the
conditions on OOD data and model properties that are required to observe
effective robustness. We approach this issue by conducting a comprehensive
empirical study of diverse approaches that are known to impact OOD robustness
on a broad range of natural and synthetic distribution shifts of CIFAR-10 and
ImageNet. In particular, we view the “effective robustness puzzle” through a
Fourier lens and ask how spectral properties of both models and OOD data
influence the corresponding effective robustness. We find this Fourier lens
offers some insight into why certain robust models, particularly those from
the CLIP family, achieve OOD robustness. However, our analysis also makes
clear that no known metric is consistently the best explanation (or even a
strong explanation) of OOD robustness. Thus, to aid future research into the
OOD puzzle, we address the gap in publicly-available models with effective
robustness by introducing a set of pretrained models— _RobustNets_ —with
varying levels of OOD robustness.
## 1 Introduction
Deep learning (DL) holds great promise for solving difficult real-world
problems in domains such as healthcare, autonomous driving, and cyber-physical
systems. The major roadblock in adopting DL for real-world applications is
that real-world data often deviates from the training data due to noise,
corruptions, or other changes in distribution that may have temporal or
spatial causes [3, 14, 29, 31]. DL models are known to be highly brittle under
such distribution shifts, which raises the risk of making incorrect
predictions with harmful consequences. Designing approaches to learn models
that achieve both high accuracy on in-distribution data and high robustness to
distribution shifts is paramount to the safe adoption of DL in real-world
applications.
Although a performance drop under distribution shifts is expected, several
recent works [1, 23] have noticed an intriguing phenomenon: accuracy on in-
distribution (ID) versus accuracy on distribution-shifted data obeys a linear
trend across a wide range of models, corruption benchmarks, and prediction
tasks. Unfortunately, as nearly all current DL models bear a substantial loss
under distribution shifts, this linear relationship implies that our current
training strategies are insufficient to bridge the gap between ID and out-of-
distribution (OOD) performance. However, despite being exceedingly rare,
models that break this linear trend to obtain effective robustness [1] exist;
such models defy explanation by the wisdom that OOD accuracy and ID accuracy
should correlate strongly, and are thus “out of line”. Identifying when and
why such models arise is key to overcoming OOD brittleness in DL models.
Related research efforts have focused on improving OOD robustness via new
techniques such as model pruning [7], data augmentation [15], and ensembling
large pre-trained models [25, 34]. While these efforts have created
significant excitement in the ML robustness community, it remains unclear (a)
under which conditions on OOD data these methods can improve effective
robustness (ER), and (b) which underlying model properties make a DNN
effectively robust. A more in-depth understanding of these phenomena is likely
to help in bridging the gap between ID and OOD performance.
Towards achieving this goal, we carry out a comprehensive empirical study
leveraging models trained using each of the aforementioned state-of-the-art
approaches to improving OOD robustness. We analyze the impact of these
robustness improvement methods on various model architectures, two clean
datasets (CIFAR-10 and ImageNet), and a broad range of natural and synthetic
distribution shifts. In particular, we view the OOD robustness puzzle through
a Fourier lens and ask how spectral properties of OOD data and models
influence their effective robustness. In this process, we design new metrics
that capture Fourier sensitivity of models as test data moves farther away
from the training data manifold. We make the following contributions:
* •
_A new perspective on the state of OOD robustness:_ across diverse models,
metrics, and distribution shifts, we observe that no single metric rules them
all, not even in-distribution accuracy. This underscores the potential need
for a multi-faceted approach to understanding OOD robustness, in contrast to
ID generalization where single properties like model flatness can enjoy
predictive success across an array of models [18].
* •
_Design of Fourier sensitivity metrics:_ our new metrics quantify various
model properties, including spectral ones that can explain the effective
robustness of ensembled CLIP models [34] better than ID accuracy.
* •
_The first collection of publicly-available effectively robust models:_ to
help crack the OOD robustness puzzle, we make available (coming soon) a
dataset of CIFAR-10 models with effective robustness that stems from training
under a variety of data augmentation and pruning schemes.
## 2 Related work
Distribution shift fragility is a well-documented phenomenon among neural
networks [28, 29, 14, 13, 22] in which a model trained on an ID dataset
performs markedly worse when evaluated on OOD data, even when humans find the
OOD data just as easy to classify. These OOD accuracy gaps often follow a
linear relationship between ID and OOD accuracy, but the slope and intercept
of the linear trendline varies depending on the specific ID and OOD datasets
[22, 10]. In our work, we focus on two popular image classification benchmark
ID datasets, CIFAR-10 [20] and ImageNet [5], and two OOD test datasets for
each, CIFAR-10.1 [28], CIFAR-10-C [14], ImageNetV2 [29], and ImageNet-C [13].
While most models follow the linear ID-OOD trendlines of Miller et al. [22],
some models are “effectively robust" and appear above the trendline, with OOD
accuracy higher than expected given ID accuracy [1]. These robust models can
arise through various methods: by pruning a model as a sort of regularization
[7], by training with data augmentation intended to mimic the expected OOD
test data [15], or by pretraining on a larger and more diverse dataset and
evaluating zero-shot [25], partially finetuning [1], or interpolating the
weights of a zero-shot and a finetuned model [34].
Although as a community we are beginning to uncover robust training methods,
it remains a mystery why these particular methods achieve robustness on
particular distribution shifts. We study this mystery via a Fourier lens based
on prior work involving both image frequency and function frequency analyses.
Ortiz-Jimenez et al. [24], Yin et al. [36], and Sun et al. [33] study model
sensitivity to Fourier perturbations of the input images and analyze how
different data augmentations produce different Fourier sensitivities (and
correspondingly robustness to image perturbations of different frequencies).
Another line of research focuses on spectral bias [9, 2, 35, 26], wherein
models prioritize learning low frequency functions over the input space. This
bias towards low function frequencies can also affect robustness and is
influenced by training hyperparameters like data augmentation and weight decay
[9].
Among the model properties we study as potential predictors of OOD accuracy
are ID accuracy [22], model Jacobian norm [16], the linear pixel-space
interpolation metrics of Fridovich-Keil et al. [9], and our own Fourier
amplitude and phase interpolation metrics. We study these metrics over a wide
range of robust and nonrobust models, including sparse models from
Diffenderfer et al. [7] on CIFAR-10 and pretrained models from CLIP [25, 34]
and RobustBench [4] on ImageNet.
## 3 Methods
#### Datasets.
We consider two standard image classification tasks: CIFAR-10 [20] and
ImageNet [5]. CIFAR-10 consists of low-resolution (32 by 32 pixels) color
images from 10 animal and object classes, divided into 50000 training and
10000 test images; ImageNet consists of higher-resolution (cropped to 224 by
224 pixels) color images from 1000 animal and object classes, with more than a
million training images and 50000 validation images. For each of these "in-
distribution" (ID) datasets we consider two "out of distribution" (OOD)
datasets: one defined by a set of synthetic corruptions of the original
test/validation images (CIFAR-10-C [14] and ImageNet-C [13], respectively) and
another defined by a re-collection of new test images following as closely as
possible the original dataset creation procedure (CIFAR-10.1 [28] and
ImageNetV2 [29]).
#### Models.
In our CIFAR-10 experiments we use 3 different model architectures: Conv8 [8],
ResNet18 [12], and VGG16 [32]. For each model architecture, we consider a
range of model pruning strategies that have been studied for their effect on
OOD robustness [7, 17] as some of these pruned models were demonstrated to
have higher OOD robustness than dense models. Namely, lottery-ticket style
pruning methods [8, 30, 27, 6] were able to provide robustness gains on
CIFAR-10-C [7]. We outline the various pruning techniques and sparsity levels
we utilized in detail in Section 4.2.
In our ImageNet experiments we use 28 standard (nonrobust) pretrained models
from Torchvision [21], 5 pretrained models from the ImageNet-C leaderboard on
RobustBench [4], and 33 models obtained by robustness-enhancing weight-space
interpolation [34] between zero-shot and ImageNet-finetuned CLIP ViT-B/16,
ViT-B/32, and ViT-L/14 [25].
#### Previously-proposed metrics: ID accuracy, Jacobian norm, pixel
interpolation.
Because OOD robustness is a complex and multi-faceted problem, we make use of
multiple previously-proposed model measurements and apply them to study
effectively robust models. To the best of our knowledge, our work is the first
application of most of these metrics (except for in-distribution accuracy) to
the study of effective robustness. Specifically, we evaluate four previously-
proposed metrics that have been observed to correlate with some notion of
generalization: accuracy on unseen in-distribution data ("ID accuracy") [22],
Jacobian norm [16], and within-class and between-class pixel interpolation
high frequency fraction [9]. These latter metrics are analogous to the high
frequency fractions we compute for amplitude and phase interpolation
(described below), except that interpolating paths simply interpolate in pixel
space between two images, where the two images are from either the same class
("within-class") or different classes ("between-class") [9]. Prior work
observed that in-distribution generalization typically improves as within-
class high frequency fraction decreases and between-class high frequency
fraction increases [9]. Reducing the norm of the Jacobian of model outputs
with respect to input data can push the decision boundary away from the
training points, providing robustness to random perturbations of the input
data [16]. Thus, models with smaller Jacobian norms may perform better on
corrupted/shifted data. We estimate the norm of the Jacobian using a random-
projection-based approach [16]; further details can be found in the
supplement.
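A sketch of this estimator is given below (PyTorch, with our own function names): for standard-normal vectors v, the expectation of $\|J^{\top}v\|^{2}$ equals the squared Frobenius norm of the Jacobian J, so a few vector-Jacobian products per image suffice.

```python
import torch

def jacobian_frobenius_norm(model, x, n_proj=3):
    """Random-projection estimate of the per-image Jacobian Frobenius norm."""
    x = x.clone().requires_grad_(True)
    out = model(x)                                   # (batch, n_classes)
    total = torch.zeros(x.shape[0], device=x.device)
    for _ in range(n_proj):
        v = torch.randn_like(out)                    # E[v v^T] = I
        (g,) = torch.autograd.grad(out, x, grad_outputs=v, retain_graph=True)
        total += g.flatten(1).pow(2).sum(dim=1)      # ||J^T v||^2 per image
    return (total / n_proj).sqrt()
```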
#### Fourier interpolation metrics.
We also introduce novel Fourier interpolation metrics to capture more
information about model behavior that may help answer the robustness puzzle.
Inspired by the pixel-space interpolation procedure of Fridovich-Keil et al.
[9], we propose evaluating the smoothness of a model’s predictions along image
paths that perturb the Fourier amplitude or phase information of one test
image towards another, while preserving the Fourier phase or amplitude
(respectively) information of the original image. The intuition for this type
of interpolation is that most of the semantic content of an image is contained
within its Fourier phases, which encode structural information like edges.
Accordingly, we can perturb the Fourier amplitude of an image without
destroying its semantic meaning, and we can reliably destroy semantic content
by perturbing Fourier phase. Example Fourier amplitude and phase interpolating
paths on ImageNet are shown in Figure 1; example paths on CIFAR-10 are
included in the appendix.
Figure 1: Example Fourier amplitude (top) and phase (bottom) interpolating
paths from ImageNet. Each path includes 100 images; every 15th image is
visualized here. The first image along each path is an unaltered image from
the original test set; the last image has the same Fourier phases (top) or
amplitudes (bottom) as the original but has some of the Fourier amplitudes
(top) or phases (bottom) of a random other image from the validation set.
Amplitude interpolation produces a corruption that preserves semantic content,
whereas phase interpolation destroys semantic content.
More precisely, we compute an interpolating path by first randomly selecting
two images $\textbf{x}_{0}$ and $\textbf{x}_{1}$ from the ID test/validation
set. We perform standard preprocessing on each image, which involves cropping
to a standard size (for ImageNet) and normalizing each pixel by the training
mean and standard deviation (for both CIFAR-10 and ImageNet). We then compute
the two-dimensional discrete Fourier transform (DFT) of each image and
separate the complex-valued results into amplitude and phase components
$\textbf{a}_{i}$ and $\textbf{p}_{i}$ for $i\in\{0,1\}$. To construct an
image along an amplitude interpolating path, we retain the original phase
$\textbf{p}_{0}$ and interpolate the low-frequency amplitude as
$(1-\lambda)\textbf{a}_{0}+\lambda\textbf{a}_{1}$ (while preserving the high
frequency amplitude), where $\lambda\in[0,1]$, and produce an image via the
inverse discrete Fourier transform. Phase interpolation is defined
analogously; we retain the original amplitude and interpolate the low-
frequency phase. We choose a frequency threshold for what is considered low
frequency for interpolation based on visual inspection to ensure that
amplitude interpolation preserves semantic meaning and phase interpolation
destroys it. We interpolate the lowest 40% of image frequencies for both
amplitude and phase on CIFAR-10, the lowest 20% of image frequencies for phase
on ImageNet, and all image frequencies for amplitude on ImageNet. We choose
100 evenly-spaced $\lambda$ values to define each path, and choose 5000 random
paths for CIFAR-10 and 7000 for ImageNet. The images in Figure 1 are rescaled
for visualization (essentially undoing the initial pixel-wise normalization).
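A minimal NumPy sketch of one amplitude-interpolation step is given below (phase interpolation is the symmetric operation on np.angle); the radial low-frequency mask is our reading of the thresholds above, not necessarily the exact masking convention.

```python
import numpy as np

def amplitude_interpolate(x0, x1, lam, low_freq_frac=0.4):
    """Blend x0's low-frequency Fourier amplitudes toward x1's, keep x0's phases.
    x0, x1: (H, W, C) preprocessed images; lam in [0, 1]."""
    f0 = np.fft.fft2(x0, axes=(0, 1))
    f1 = np.fft.fft2(x1, axes=(0, 1))
    a0, a1, p0 = np.abs(f0), np.abs(f1), np.angle(f0)
    fy = np.fft.fftfreq(x0.shape[0])[:, None]
    fx = np.fft.fftfreq(x0.shape[1])[None, :]
    low = (np.hypot(fy, fx) <= 0.5 * low_freq_frac)[..., None]  # fraction of Nyquist
    amp = np.where(low, (1.0 - lam) * a0 + lam * a1, a0)
    return np.fft.ifft2(amp * np.exp(1j * p0), axes=(0, 1)).real
```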
We evaluate each model on each path, producing a probability vector over the
available classes for each image along the path. We compute two metrics based
on these path predictions: high frequency fraction (HFF) and consistent
distance (CD). For high frequency fraction, we take the one-dimensional DFT of
the 100 model predictions along the path, average the Fourier amplitudes among
all the classes, and compute the fraction of the total amplitude above a
frequency threshold. This metric is always between 0 and 1 (usually between
0.1 and 0.3). The higher the average high frequency fraction, the more
sensitive a model is to the given Fourier perturbation of its input images.
We also consider consistent distance, which is defined as the index of the
first image along the path that the model classifies differently (by highest
softmax prediction) than the original image. This metric is always between 1
and 100, and is often at least 40. Consistent distance is intended as a more
intuitive metric to capture the same notion of robustness to corruptions of
image Fourier content. In particular, Fourier amplitude corruptions change
image statistics but do not hamper human semantic identification; a more
robust model in this sense should have a lower high frequency fraction and a
higher consistent distance on average compared to a less robust model.
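Both metrics reduce to a few lines given the (100, n_classes) array of softmax outputs along one path; the sketch below uses our own names, and the frequency-threshold index is an assumption.

```python
import numpy as np

def high_frequency_fraction(preds, thresh_idx=10):
    """Fraction of the prediction signal's DFT amplitude above a threshold."""
    amp = np.abs(np.fft.rfft(preds, axis=0))   # DFT over the 100 path steps
    amp = amp.mean(axis=1)                     # average amplitudes over classes
    return amp[thresh_idx:].sum() / amp.sum()

def consistent_distance(preds):
    """Index of the first image classified differently from the path's start."""
    labels = preds.argmax(axis=1)
    changed = np.nonzero(labels != labels[0])[0]
    return int(changed[0]) if changed.size else len(labels)
```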
## 4 Results
We are interested in understanding _when_ (under what conditions on the
distribution shift) and _why_ (via model properties) neural networks can
exhibit effective robustness. We perform a systematic study of three tunable
"knobs" that are available to the neural network practitioner and have been
shown to impact OOD robustness: pruning [7], data augmentation [15], and
weight ensembling [34]. For each knob, we evaluate a benchmark set of
pretrained CIFAR-10 or ImageNet models on the original in-distribution test
set, several OOD test sets, and a suite of model property metrics capturing
local smoothness (via model Jacobian norm), frequency response to pixel-space
interpolation within and between classes [9], and Fourier amplitude and phase
sensitivity (via high frequency fraction and consistent distance).
Representative results from our analysis of these three knobs are presented in
Sections 4.2, 4.3, and 4.4. For each dataset we consider one natural
distribution shift (CIFAR-10.1 or ImageNetV2) as well as six synthetic
corruptions (from CIFAR-10-C or ImageNet-C) comprised of two low-frequency
corruptions, two mid-frequency corruptions, and two high-frequency corruptions
(in that order). Full results on all the corruptions in CIFAR-10-C and
ImageNet-C are deferred to the supplement. In all figures, we show 95%
confidence intervals around each measurement (Clopper-Pearson for accuracy
measurements and Gaussian bounds for averages of other metrics).
We begin by comparing the spectral properties of different distribution shifts
in Section 4.1. In the following sections we find that the relationship
between model properties and OOD robustness is dependent upon these spectral
statistics of the specific distribution shift. This analysis highlights the
difficulty of using a single model property to understand OOD performance but
also indicates that our proposed metrics may play an important role in solving
the OOD puzzle.
### 4.1 Spectral characterization of distribution shifts
To explore the behavior of models on OOD data, it is beneficial to
characterize the nature of the OOD data with respect to the ID test data.
Specifically, considering the difference in power spectral density (PSD)
between each of the OOD test sets and the original ID test set allows us to
categorize each OOD test set as being concentrated on low, mid, or high
frequency perturbations with respect to the ID test set. We perform this
characterization for CIFAR-10.1 and each of the 15 corruptions in CIFAR-10-C;
computational details for PSD are provided in the appendix. Figure 2 shows the
distribution shift PSDs for CIFAR-10.1 together with two low (brightness,
contrast), mid (defocus blur, pixelate), and high (gaussian noise, impulse
noise) frequency CIFAR-10-C corruptions. Of note, this PSD characterization
illustrates that the OOD shift encountered on CIFAR-10.1 data is composed of
low-frequency information, much like the brightness and contrast CIFAR-10-C
corruptions.
Figure 2: Power spectral densities for a selection of low (CIFAR-10.1,
brightness, contrast), mid (defocus blur, pixelate), and high (gaussian noise,
impulse noise) frequency shifts w.r.t. CIFAR-10.
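A sketch of this characterization (full computational details are in the appendix) is given below: the mean 2D power spectrum of each test set is computed per image, and the OOD-minus-ID difference shows where the shift concentrates its power. Array shapes and names are ours.

```python
import numpy as np

def mean_psd(images):
    """Mean 2D power spectrum of a stack of grayscale images (N, H, W)."""
    f = np.fft.fft2(images, axes=(1, 2))
    return np.fft.fftshift(np.abs(f) ** 2, axes=(1, 2)).mean(axis=0)

def shift_psd(id_images, ood_images):
    """Positive values mark frequencies where the shift adds power."""
    return mean_psd(ood_images) - mean_psd(id_images)
```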
### 4.2 When and why does model pruning confer effective robustness?
Diffenderfer et al. [7] demonstrated the nuanced effect that model pruning
has on OOD robustness, specifically with respect to the distribution shift
from CIFAR-10 to CIFAR-10-C. Notably, pruned models are capable of providing
superior OOD robustness compared to dense, or unpruned, models. Furthermore,
the effect, either positive or negative, and degree of robustness arising from
model pruning is dependent on both the model architecture and the pruning
algorithm. It remains unknown which properties of these pruned models
contribute to their robustness on OOD data.
In an effort to better understand these findings, we further investigate this
behavior by considering the three categories of pruning used in Diffenderfer
et al. [7]: _traditional_ (fine-tuning [11], gradual magnitude pruning [37]),
_rewinding lottery-tickets_ (weight-rewinding [8], learning-rate rewinding
[30]), and _initialization lottery-tickets_ (edgepopup [27], biprop [6]). Note
that we often abbreviate "lottery tickets" as LT. We provide details on each
of the pruning techniques in the supplement. For each pruning strategy and
each architecture, we prune models to 50%, 60%, 80%, 90%, and 95% sparsity.
For all pruning methods, pruning is performed in an unstructured (i.e.,
individual weights are pruned) and global manner (i.e., prune to a given
sparsity across the entire network). Additionally, for traditional and
initialization LTs, pruning was performed in a layerwise fashion (i.e., where
each network layer is pruned to the given sparsity level). In Figure 3 we
investigate _when_ (on which types of distribution shifts) and in Table 1 we
study _why_ (via corresponding model property measurements) these differently-
pruned CIFAR-10 models achieve their varying degrees of robustness.
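As a point of reference for the global, unstructured regime, the one-shot magnitude-pruning call below (PyTorch pruning utilities) removes the smallest-magnitude weights network-wide; it is only a sketch, since the LT methods above additionally involve rewinding or mask-search training schedules.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def global_magnitude_prune(model, sparsity=0.9):
    """One-shot global unstructured pruning of conv/linear weights."""
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (nn.Conv2d, nn.Linear))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=sparsity)   # prune smallest |w| network-wide
    return model
```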
Figure 3: When does pruning affect OOD robustness?
| Traditional | Rewinding LT | Initialization LT
---|---|---|---
$m$ | $R^{2}$ | $m$ | $R^{2}$ | $m$ | $R^{2}$
CIFAR-10.1 | ID accuracy | 0.85 | 0.85 | 0.9 | 0.59 | 0.79 | 0.87
Jacobian | -0.0 | 0.16 | -0.01 | 0.36 | 0.0 | 0.19
Pixel - within | -6.42 | 0.79 | -8.54 | 0.66 | -6.7 | 0.78
Pixel - between | 8.45 | 0.48 | -1.29 | 0.25 | 7.45 | 0.45
Amp - HFF | -6.11 | 0.72 | -5.97 | 0.31 | -4.73 | 0.75
Amp - CD | 0.04 | 0.6 | 0.05 | 0.52 | 0.04 | 0.75
Phase - HFF | -8.09 | 0.26 | -5.66 | 0.24 | -6.78 | 0.41
Phase - CD | 0.03 | 0.33 | 0.01 | 0.26 | 0.02 | 0.4
Brightness | ID accuracy | 1.0 | 0.96 | 0.9 | 0.89 | 0.97 | 0.97
---|---|---|---|---|---|---|---
Jacobian | -0.0 | 0.2 | -0.01 | 0.49 | 0.0 | 0.29
Pixel - within | -7.41 | 0.88 | -7.47 | 0.79 | -8.09 | 0.85
Pixel - between | 9.73 | 0.51 | 0.6 | 0.34 | 9.32 | 0.57
Amp - HFF | -6.73 | 0.7 | -6.47 | 0.53 | -5.9 | 0.88
Amp - CD | 0.04 | 0.63 | 0.04 | 0.67 | 0.05 | 0.84
Phase - HFF | -7.7 | 0.18 | -8.02 | 0.31 | -6.66 | 0.35
Phase - CD | 0.03 | 0.27 | 0.02 | 0.23 | 0.02 | 0.4
Contrast | ID accuracy | 0.91 | 0.85 | 0.89 | 0.66 | 0.87 | 0.91
---|---|---|---|---|---|---|---
Jacobian | -0.0 | 0.15 | -0.01 | 0.4 | 0.0 | 0.25
Pixel - within | -6.59 | 0.74 | -8.13 | 0.64 | -7.4 | 0.81
Pixel - between | 8.08 | 0.39 | 3.92 | 0.3 | 8.46 | 0.51
Amp - HFF | -6.13 | 0.63 | -5.9 | 0.4 | -5.45 | 0.86
Amp - CD | 0.04 | 0.65 | 0.04 | 0.54 | 0.05 | 0.83
Phase - HFF | -8.83 | 0.26 | -9.17 | 0.3 | -7.18 | 0.4
Phase - CD | 0.03 | 0.29 | 0.02 | 0.23 | 0.02 | 0.42
Defocus Blur | ID accuracy | 0.55 | 0.51 | 0.86 | 0.63 | 0.42 | 0.39
---|---|---|---|---|---|---|---
Jacobian | -0.0 | 0.17 | -0.01 | 0.64 | 0.01 | 0.45
Pixel - within | -4.54 | 0.55 | -7.82 | 0.61 | -4.49 | 0.41
Pixel - between | 6.35 | 0.36 | 8.82 | 0.26 | 7.84 | 0.52
Amp - HFF | -4.01 | 0.42 | -5.96 | 0.43 | -2.79 | 0.45
Amp - CD | 0.03 | 0.37 | 0.04 | 0.55 | 0.02 | 0.37
Phase - HFF | -4.78 | 0.16 | -6.51 | 0.34 | 1.13 | 0.18
Phase - CD | 0.02 | 0.19 | 0.01 | 0.37 | -0.0 | 0.12
Pixelate | ID accuracy | 0.73 | 0.44 | 0.85 | 0.61 | -0.31 | 0.33
---|---|---|---|---|---|---|---
Jacobian | -0.0 | 0.11 | -0.01 | 0.75 | 0.0 | 0.13
Pixel - within | -6.8 | 0.46 | -8.09 | 0.6 | 2.67 | 0.22
Pixel - between | 10.92 | 0.45 | 14.0 | 0.25 | 0.01 | 0.23
Amp - HFF | -3.92 | 0.3 | -7.48 | 0.57 | 2.11 | 0.44
Amp - CD | 0.02 | 0.31 | 0.05 | 0.62 | -0.02 | 0.48
Phase - HFF | -0.17 | 0.26 | -8.55 | 0.31 | 11.15 | 0.61
Phase - CD | 0.01 | 0.25 | 0.02 | 0.39 | -0.03 | 0.5
Gaussian Noise | ID accuracy | 0.55 | 0.16 | 0.24 | 0.1 | -1.16 | 0.47
---|---|---|---|---|---|---|---
Jacobian | -0.0 | 0.05 | -0.0 | 0.15 | -0.0 | 0.05
Pixel - within | -6.17 | 0.24 | -2.89 | 0.14 | 10.65 | 0.38
Pixel - between | 7.48 | 0.11 | 9.49 | 0.14 | -10.94 | 0.31
Amp - HFF | -2.28 | 0.09 | -5.56 | 0.11 | 7.09 | 0.47
Amp - CD | 0.01 | 0.12 | 0.03 | 0.17 | -0.06 | 0.47
Phase - HFF | -0.52 | 0.18 | -11.92 | 0.25 | 16.98 | 0.49
Phase - CD | 0.0 | 0.13 | 0.03 | 0.18 | -0.04 | 0.42
Impulse Noise | ID accuracy | 0.04 | 0.0 | -0.17 | 0.5 | -0.58 | 0.39
---|---|---|---|---|---|---|---
Jacobian | 0.0 | 0.05 | 0.01 | 0.5 | -0.0 | 0.05
Pixel - within | -1.26 | 0.03 | 1.31 | 0.54 | 5.47 | 0.33
Pixel - between | 3.21 | 0.07 | -17.81 | 0.35 | -5.25 | 0.28
Amp - HFF | 0.13 | 0.01 | 0.48 | 0.36 | 3.43 | 0.38
Amp - CD | -0.01 | 0.04 | -0.0 | 0.47 | -0.03 | 0.38
Phase - HFF | 4.05 | 0.1 | -7.43 | 0.41 | 8.61 | 0.43
Phase - CD | -0.01 | 0.06 | 0.02 | 0.26 | -0.02 | 0.38
Table 1: Can model metrics explain why pruning affects OOD robustness?
Because different model architectures respond differently to different pruning
methods, in each subplot of Figure 3 we color-code models by their
architecture and fit a separate probit-domain OOD vs. ID accuracy regression
line to each architecture, so that each regression captures a range of
different pruning amounts (what fraction of weights are pruned). We display
these regressions with opacity proportional to the fit quality $R^{2}$, so
that regression lines that fit the data well are dark and easily seen, while
lower-quality linear fits are more transparent. In the corresponding Table 1,
each reported slope ($m$) and $R^{2}$ value is the average of the
corresponding values among the three architectures we consider (a sketch of
this probit-domain regression follows the takeaways below). For ease of
reading, we bold average $R^{2}$ values above 0.5, and among these entries we
highlight the best (bright green) and second-best (faded green) $R^{2}$ values
for each pruning method on each OOD dataset. Although these results are
nuanced and raise additional open questions, we summarize a few key takeaways
from our CIFAR-10 model pruning experiments:
* •
As reported in Miller et al. [22], ID accuracy is often a strong predictor of
OOD accuracy; this trend holds across different pruning methods. However, the
linear trend breaks down for mid and high frequency image corruptions like
pixelate, Gaussian noise, and impulse noise.
* •
Even for low-frequency corruptions, different model architectures might lie on
either the same or different robustness lines (as evidenced by comparing
brightness and contrast).
* •
Initialization LT pruning confers robustness to high-frequency corruptions and
can induce a negative correlation between ID and OOD accuracy for these
corruptions.
* •
Jacobian norm is rarely a good predictor of OOD accuracy, but it does
correlate negatively (lower norm is better) for models pruned by rewinding
methods on mid-frequency corruptions.
* •
As reported in [9], a lower high frequency fraction (HFF, i.e. smoother
predictions) for within-class pixel interpolation is a good predictor of OOD
accuracy, at least for low-frequency distribution shifts where ID accuracy is
also predictive.
* •
Amplitude and phase interpolation are sometimes predictive of OOD accuracy,
particularly for initialization LT pruning, and in the expected direction of
correlation (a better model should have lower HFF and higher CD). However,
even these metrics fail to explain most of the robustness behavior for high
(and some mid) frequency distribution shifts on CIFAR-10.
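The probit-domain fit referenced above is sketched here; accuracies in (0, 1) are mapped through the inverse Gaussian CDF before the linear fit, following the convention of Miller et al. [22].

```python
import numpy as np
from scipy.stats import linregress, norm

def probit_fit(id_acc, ood_acc):
    """Slope m and R^2 of the probit-domain OOD-vs-ID accuracy regression."""
    x = norm.ppf(np.asarray(id_acc))
    y = norm.ppf(np.asarray(ood_acc))
    fit = linregress(x, y)
    return fit.slope, fit.rvalue ** 2
```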
### 4.3 When and why does data augmentation confer effective robustness?
A common practice to protect against distribution shift is to train with data
augmentation [15]. Accordingly, we revisit each of the CIFAR-10 models
considered in Section 4.2 with three different variations of data augmentation
used during training: no augmentation (clean), AugMix [15], and Gaussian noise
augmentation [19] (a minimal transform sketch follows the observations below).
In Figure 4 we investigate _when_ (on which types of
distribution shifts) and in Table 2 we study _why_ (via corresponding model
property measurements) these differently-augmented CIFAR-10 models achieve
varying degrees of robustness. As in Figure 3 and Table 1, we fit a separate
regression line to each model architecture and average the resulting slope and
$R^{2}$ values. The models that contribute to each regression line therefore
share an architecture and an augmentation strategy, but vary by pruning method
and amount. Our findings in this experiment echo those in Section 4.2, but we
make the following additional observations:
* •
The closer the match between the training data and the OOD evaluation data in
terms of their spectral profile, the better ID accuracy, within-class pixel
interpolation, and amplitude interpolation HFF and CD are at predicting OOD
accuracy. This is evident from the high $R^{2}$ values for these metrics when
predicting defocus blur accuracy for models trained with AugMix, and when
predicting Gaussian noise or impulse noise accuracy for models trained with
Gaussian noise.
* •
The closer the match (in terms of PSD) between the training data and the OOD
data, the better the OOD accuracy. This unsurprising result is evident from
the higher accuracies of AugMix-trained models on contrast and defocus blur
corruptions and the higher accuracies of Gaussian-noise-trained models on
Gaussian noise and impulse noise corruptions.
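Concretely, each reported slope $m$ and $R^{2}$ comes from an ordinary least-squares fit computed separately per architecture and then averaged. Below is a minimal sketch of that fitting-and-averaging step; the arrays and architecture names are hypothetical placeholders, not our actual measurements.

```python
import numpy as np
from scipy import stats

def robustness_fit(predictor, ood_acc):
    """Least-squares fit of OOD accuracy against a predictor (e.g., ID
    accuracy or a Fourier metric); returns the slope m and R^2."""
    fit = stats.linregress(predictor, ood_acc)
    return fit.slope, fit.rvalue ** 2

# One fit per architecture; models along a line share an architecture and
# augmentation strategy but vary by pruning method and amount.
per_arch = {  # hypothetical (ID accuracy, OOD accuracy) pairs
    "resnet20": (np.array([0.90, 0.92, 0.94]), np.array([0.82, 0.85, 0.88])),
    "vgg16":    (np.array([0.89, 0.91, 0.93]), np.array([0.80, 0.83, 0.86])),
    "wrn28":    (np.array([0.91, 0.93, 0.95]), np.array([0.83, 0.86, 0.89])),
}
fits = [robustness_fit(x, y) for x, y in per_arch.values()]
m_avg = float(np.mean([m for m, _ in fits]))     # reported average slope
r2_avg = float(np.mean([r2 for _, r2 in fits]))  # reported average R^2
```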
### 4.4 CLIP: Weight ensembling a robust model on ImageNet
Perhaps the most stunning example of OOD robustness for image classification
is CLIP [25], which showed that a large model pretrained on a massive dataset
and evaluated zero-shot on ImageNet achieves effective robustness on many OOD
test sets simultaneously. Andreassen et al. [1] found that finetuning these
robust models towards ImageNet improves ID accuracy on ImageNet but erodes
effective robustness, and Wortsman et al. [34] offered a "best of both worlds"
solution by interpolating in weight space between a pretrained zero-shot model
and its finetuned counterpart, improving ID accuracy on ImageNet without
sacrificing robustness. Accordingly, we study a set of 33 CLIP models obtained
by interpolating the pretrained and finetuned versions of CLIP ViT-B/16,
ViT-B/32, and ViT-L/14 models [25, 34] with various relative weights on the
two checkpoints. We compare this CLIP model set to a benchmark set of 28
standard (nonrobust) ImageNet pretrained models from Torchvision [21] as well
as 5 pretrained models from the RobustBench [4] ImageNet-C leaderboard; these
models typically use data augmentation to improve corruption performance but
are not nearly as effectively robust as the CLIP models. We make the following
observations in Figure 5 and Table 3:
* •
Although in-distribution accuracy is a reasonable predictor of OOD accuracy
for most models and distribution shifts, it performs markedly worse when
predicting OOD accuracy for CLIP models (and a couple of effectively robust
RobustBench models on some OOD shifts).
* •
Fourier interpolation metrics, particularly HFF and CD for amplitude
interpolation, are strong predictors (often $R^{2}\geq 0.9$) of OOD
performance for CLIP models across OOD shifts. This observation helps answer
_why_ CLIP models are more robust: they are less sensitive to perturbations of
Fourier amplitude, which preserve image semantics for humans but confuse
nonrobust models.
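The weight-space interpolation of Wortsman et al. [34] that generates this model set can be sketched as follows. This is a minimal illustration assuming `zero_shot` and `finetuned` are PyTorch modules with identical architectures; the function name and the alpha grid are ours, not theirs.

```python
import copy
import torch

def interpolate_weights(zero_shot, finetuned, alpha):
    """Interpolate two checkpoints of the same architecture in weight space:
    theta(alpha) = (1 - alpha) * theta_zero_shot + alpha * theta_finetuned."""
    merged = copy.deepcopy(zero_shot)
    sd0, sd1 = zero_shot.state_dict(), finetuned.state_dict()
    merged.load_state_dict({k: (1 - alpha) * sd0[k] + alpha * sd1[k] for k in sd0})
    return merged

# Sweeping alpha from 0 (pretrained zero-shot) to 1 (fully finetuned) traces
# out a family of models with varying ID-accuracy/robustness trade-offs.
# models = [interpolate_weights(zs, ft, a.item()) for a in torch.linspace(0, 1, 11)]
```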
Figure 4: When does augmentation affect OOD robustness?
| OOD shift | Metric | Clean $m$ | Clean $R^{2}$ | AugMix $m$ | AugMix $R^{2}$ | Gaussian $m$ | Gaussian $R^{2}$ |
|---|---|---|---|---|---|---|---|
| CIFAR-10.1 | ID accuracy | 0.91 | 0.92 | 0.91 | 0.93 | 0.89 | 0.86 |
| | Jacobian | 0.01 | 0.14 | 0.01 | 0.35 | -0.0 | 0.05 |
| | Pixel - within | -8.45 | 0.79 | -6.88 | 0.86 | -7.13 | 0.73 |
| | Pixel - between | 10.68 | 0.57 | 8.55 | 0.67 | 6.95 | 0.45 |
| | Amp - HFF | -5.43 | 0.79 | -5.93 | 0.83 | -5.64 | 0.71 |
| | Amp - CD | 0.04 | 0.74 | 0.06 | 0.66 | 0.04 | 0.68 |
| | Phase - HFF | -7.36 | 0.36 | 2.63 | 0.06 | 4.18 | 0.09 |
| | Phase - CD | 0.02 | 0.45 | 0.02 | 0.19 | 0.03 | 0.29 |
| Brightness | ID accuracy | 0.99 | 0.98 | 1.02 | 0.99 | 1.01 | 0.97 |
| | Jacobian | 0.0 | 0.12 | 0.01 | 0.37 | 0.0 | 0.06 |
| | Pixel - within | -9.36 | 0.87 | -7.7 | 0.91 | -7.51 | 0.74 |
| | Pixel - between | 11.52 | 0.61 | 9.73 | 0.73 | 8.54 | 0.6 |
| | Amp - HFF | -5.94 | 0.84 | -6.68 | 0.88 | -6.32 | 0.78 |
| | Amp - CD | 0.04 | 0.78 | 0.06 | 0.7 | 0.04 | 0.69 |
| | Phase - HFF | -7.38 | 0.33 | 3.32 | 0.07 | 7.44 | 0.18 |
| | Phase - CD | 0.02 | 0.44 | 0.02 | 0.19 | 0.03 | 0.24 |
| Contrast | ID accuracy | 1.01 | 0.87 | 1.25 | 0.93 | 0.55 | 0.54 |
| | Jacobian | 0.0 | 0.08 | 0.02 | 0.44 | -0.0 | 0.05 |
| | Pixel - within | -9.88 | 0.79 | -9.79 | 0.9 | -5.22 | 0.61 |
| | Pixel - between | 11.55 | 0.5 | 12.76 | 0.78 | 3.12 | 0.16 |
| | Amp - HFF | -6.55 | 0.86 | -8.23 | 0.84 | -3.64 | 0.44 |
| | Amp - CD | 0.05 | 0.83 | 0.07 | 0.6 | 0.02 | 0.47 |
| | Phase - HFF | -9.91 | 0.44 | 8.98 | 0.11 | 0.25 | 0.03 |
| | Phase - CD | 0.03 | 0.53 | 0.02 | 0.12 | 0.02 | 0.22 |
| Defocus Blur | ID accuracy | 0.47 | 0.34 | 0.98 | 0.98 | 0.7 | 0.6 |
| | Jacobian | -0.0 | 0.04 | 0.01 | 0.39 | -0.0 | 0.12 |
| | Pixel - within | -5.33 | 0.45 | -7.44 | 0.94 | -6.77 | 0.72 |
| | Pixel - between | 5.81 | 0.24 | 9.54 | 0.76 | 5.42 | 0.31 |
| | Amp - HFF | -3.27 | 0.42 | -6.39 | 0.88 | -5.19 | 0.64 |
| | Amp - CD | 0.02 | 0.36 | 0.06 | 0.67 | 0.03 | 0.6 |
| | Phase - HFF | -2.38 | 0.11 | 4.59 | 0.08 | 3.9 | 0.08 |
| | Phase - CD | 0.01 | 0.13 | 0.02 | 0.18 | 0.03 | 0.28 |
| Pixelate | ID accuracy | 0.11 | 0.13 | 0.42 | 0.45 | 0.68 | 0.55 |
| | Jacobian | -0.0 | 0.13 | 0.0 | 0.04 | -0.01 | 0.14 |
| | Pixel - within | -2.12 | 0.14 | -3.27 | 0.49 | -6.33 | 0.64 |
| | Pixel - between | 2.03 | 0.16 | 3.7 | 0.29 | 5.31 | 0.3 |
| | Amp - HFF | -0.4 | 0.18 | -3.0 | 0.49 | -5.1 | 0.57 |
| | Amp - CD | -0.0 | 0.13 | 0.03 | 0.47 | 0.03 | 0.54 |
| | Phase - HFF | 5.4 | 0.2 | 2.31 | 0.05 | 5.28 | 0.1 |
| | Phase - CD | -0.01 | 0.15 | 0.01 | 0.08 | 0.03 | 0.25 |
| Gaussian Noise | ID accuracy | -1.11 | 0.23 | 0.08 | 0.11 | 0.88 | 0.94 |
| | Jacobian | -0.03 | 0.31 | -0.01 | 0.11 | -0.0 | 0.08 |
| | Pixel - within | 6.83 | 0.09 | -0.81 | 0.12 | -7.08 | 0.85 |
| | Pixel - between | -17.36 | 0.23 | -0.34 | 0.07 | 7.32 | 0.57 |
| | Amp - HFF | 5.63 | 0.25 | -1.26 | 0.08 | -5.74 | 0.79 |
| | Amp - CD | -0.04 | 0.23 | 0.03 | 0.15 | 0.03 | 0.7 |
| | Phase - HFF | 10.47 | 0.26 | 0.8 | 0.05 | 6.39 | 0.16 |
| | Phase - CD | -0.03 | 0.27 | 0.02 | 0.15 | 0.03 | 0.25 |
| Impulse Noise | ID accuracy | -0.64 | 0.32 | 0.33 | 0.26 | 0.75 | 0.67 |
| | Jacobian | -0.01 | 0.27 | 0.0 | 0.13 | -0.0 | 0.1 |
| | Pixel - within | 4.67 | 0.16 | -2.64 | 0.26 | -6.54 | 0.71 |
| | Pixel - between | -9.44 | 0.26 | 2.62 | 0.23 | 5.93 | 0.42 |
| | Amp - HFF | 3.08 | 0.3 | -2.23 | 0.27 | -5.39 | 0.61 |
| | Amp - CD | -0.02 | 0.31 | 0.03 | 0.25 | 0.03 | 0.53 |
| | Phase - HFF | 5.28 | 0.25 | 0.32 | 0.03 | 4.36 | 0.16 |
| | Phase - CD | -0.01 | 0.27 | 0.02 | 0.2 | 0.03 | 0.2 |
Table 2: Can model metrics explain why augmentation affects OOD robustness?
## 5 Discussion
Our work is only the beginning of a true understanding of what makes models
effectively robust to distribution shifts. For example, Section 4.2 showed
different types of model pruning can confer robustness to different
distribution shifts, and while some of this robustness can be explained by
improved behavior in response to amplitude and phase perturbations, much
remains an open mystery.
As we saw in Section 4.3 (and has been noted by prior work [36, 15]), models
can achieve robustness to some distribution shifts by training with data
augmentations designed to imitate the expected OOD data. This approach can be
a powerful way to mitigate expected test-time distribution shifts but leaves
open the question of how to prepare for the unexpected.
We learned in Section 4.4 that one way in which CLIP models achieve OOD
robustness is by reducing their sensitivity to perturbations of Fourier
amplitude, perhaps making them more attuned to human perceptions of semantic
meaning. This finding provides an exciting opportunity for future work to both
confirm our results across additional robust models and incorporate these
Fourier interpolation metrics into training paradigms, for instance as
regularizers, to explicitly encode robustness in future models.
Figure 5: When and why does CLIP ensembling affect OOD robustness?
| OOD shift | Metric | All $m$ | All $R^{2}$ | CLIP $m$ | CLIP $R^{2}$ | Non-CLIP $m$ | Non-CLIP $R^{2}$ |
|---|---|---|---|---|---|---|---|
| ImageNet V2 | ID accuracy | 0.98 | 0.94 | 0.64 | 0.96 | 0.93 | 1.0 |
| | Jacobian | 0.0 | 0.14 | -0.0 | 0.96 | 0.0 | 0.0 |
| | Pixel - within | -10.2 | 0.88 | -6.21 | 1.0 | -9.8 | 0.92 |
| | Pixel - between | 8.39 | 0.74 | 6.46 | 0.78 | 14.29 | 0.73 |
| | Amp - HFF | -12.27 | 0.36 | -11.56 | 0.57 | -0.66 | 0.0 |
| | Amp - CD | 0.02 | 0.77 | 0.03 | 0.9 | 0.03 | 0.67 |
| | Phase - HFF | -4.68 | 0.05 | -5.05 | 0.08 | 19.65 | 0.42 |
| | Phase - CD | 0.02 | 0.64 | 0.02 | 0.42 | 0.04 | 0.57 |
| Brightness | ID accuracy | 1.06 | 0.91 | 0.66 | 0.79 | 1.12 | 0.92 |
| | Jacobian | 0.0 | 0.06 | -0.0 | 0.81 | -0.0 | 0.0 |
| | Pixel - within | -10.9 | 0.81 | -6.8 | 0.91 | -11.4 | 0.79 |
| | Pixel - between | 8.36 | 0.59 | 6.03 | 0.51 | 18.99 | 0.81 |
| | Amp - HFF | -11.2 | 0.25 | -15.99 | 0.83 | -8.87 | 0.01 |
| | Amp - CD | 0.02 | 0.71 | 0.03 | 0.99 | 0.04 | 0.82 |
| | Phase - HFF | -2.09 | 0.01 | -11.16 | 0.29 | 23.56 | 0.39 |
| | Phase - CD | 0.02 | 0.51 | 0.03 | 0.7 | 0.05 | 0.65 |
| Contrast | ID accuracy | 1.38 | 0.6 | 0.25 | 0.16 | 1.28 | 0.6 |
| | Jacobian | 0.0 | 0.2 | -0.0 | 0.2 | -0.0 | 0.01 |
| | Pixel - within | -13.46 | 0.48 | -3.2 | 0.3 | -12.12 | 0.45 |
| | Pixel - between | 14.19 | 0.67 | 0.83 | 0.01 | 24.8 | 0.7 |
| | Amp - HFF | -23.97 | 0.44 | -13.62 | 0.91 | -26.29 | 0.06 |
| | Amp - CD | 0.04 | 0.86 | 0.02 | 0.64 | 0.05 | 0.85 |
| | Phase - HFF | -12.78 | 0.11 | -15.9 | 0.88 | 25.25 | 0.22 |
| | Phase - CD | 0.04 | 0.69 | 0.03 | 0.98 | 0.07 | 0.59 |
| Defocus Blur | ID accuracy | 1.36 | 0.75 | 0.27 | 0.23 | 1.39 | 0.81 |
| | Jacobian | 0.0 | 0.15 | -0.0 | 0.27 | -0.0 | 0.01 |
| | Pixel - within | -13.75 | 0.66 | -3.38 | 0.38 | -14.13 | 0.7 |
| | Pixel - between | 12.1 | 0.64 | 1.29 | 0.04 | 24.5 | 0.78 |
| | Amp - HFF | -18.59 | 0.35 | -13.03 | 0.94 | -17.62 | 0.03 |
| | Amp - CD | 0.03 | 0.79 | 0.02 | 0.71 | 0.05 | 0.82 |
| | Phase - HFF | -6.15 | 0.03 | -14.51 | 0.83 | 32.63 | 0.43 |
| | Phase - CD | 0.03 | 0.62 | 0.03 | 0.99 | 0.07 | 0.61 |
| Pixelate | ID accuracy | 1.26 | 0.6 | 0.46 | 0.5 | 1.24 | 0.55 |
| | Jacobian | 0.0 | 0.08 | -0.0 | 0.55 | -0.0 | 0.06 |
| | Pixel - within | -12.06 | 0.47 | -5.12 | 0.67 | -11.47 | 0.4 |
| | Pixel - between | 12.07 | 0.59 | 3.47 | 0.22 | 25.27 | 0.71 |
| | Amp - HFF | -18.74 | 0.33 | -15.18 | 0.97 | -29.89 | 0.08 |
| | Amp - CD | 0.03 | 0.76 | 0.03 | 0.93 | 0.05 | 0.83 |
| | Phase - HFF | -5.38 | 0.02 | -14.03 | 0.59 | 32.24 | 0.36 |
| | Phase - CD | 0.03 | 0.55 | 0.03 | 0.93 | 0.07 | 0.57 |
| Gaussian Noise | ID accuracy | 1.6 | 0.65 | 0.44 | 0.46 | 1.73 | 0.65 |
| | Jacobian | 0.0 | 0.05 | -0.0 | 0.5 | -0.0 | 0.04 |
| | Pixel - within | -15.68 | 0.54 | -4.93 | 0.62 | -16.75 | 0.51 |
| | Pixel - between | 13.82 | 0.52 | 3.18 | 0.19 | 33.06 | 0.74 |
| | Amp - HFF | -20.7 | 0.27 | -15.19 | 0.98 | -50.62 | 0.14 |
| | Amp - CD | 0.04 | 0.73 | 0.03 | 0.9 | 0.07 | 0.93 |
| | Phase - HFF | -4.93 | 0.01 | -14.38 | 0.62 | 35.48 | 0.26 |
| | Phase - CD | 0.04 | 0.51 | 0.03 | 0.94 | 0.1 | 0.72 |
| Impulse Noise | ID accuracy | 1.86 | 0.66 | 0.54 | 0.53 | 1.94 | 0.64 |
| | Jacobian | 0.0 | 0.07 | -0.0 | 0.57 | -0.0 | 0.04 |
| | Pixel - within | -18.45 | 0.55 | -5.88 | 0.69 | -19.1 | 0.52 |
| | Pixel - between | 16.68 | 0.56 | 4.15 | 0.25 | 37.04 | 0.72 |
| | Amp - HFF | -25.88 | 0.31 | -17.02 | 0.96 | -58.31 | 0.14 |
| | Amp - CD | 0.05 | 0.76 | 0.03 | 0.94 | 0.08 | 0.9 |
| | Phase - HFF | -7.67 | 0.02 | -15.28 | 0.55 | 40.22 | 0.26 |
| | Phase - CD | 0.05 | 0.55 | 0.04 | 0.91 | 0.11 | 0.69 |
Table 3: Can model metrics explain why CLIP ensembling affects OOD robustness?
Our results provide a new perspective on the effective robustness problem by
showing that no single metric rules them all. This dictates the need for a
multi-faceted analysis to understand OOD robustness, which is in sharp
contrast to the in-distribution setting where a single property such as model
flatness can reasonably explain generalization performance.
#### RobustNets benchmark.
To aid further research into understanding, modeling, and designing for OOD
robustness, we will publish a database of our pretrained models on CIFAR-10
(with diverse effective robustness) with the full range of model
architectures, data augmentations, and pruning types we study. The models we
study on ImageNet are already publicly available. We will also publish the ID
and OOD accuracy values for each model, along with our code for evaluating the
Jacobian norm and the high frequency fraction (HFF) and consistent distance
(CD) for Fourier amplitude and phase interpolation, so that future researchers
can apply it to study new and more robust models as they become available.
## Acknowledgements
We would like to thank Jonathan Frankle, Mitchell Wortsman, and Ludwig Schmidt
for helpful discussion and pointers to resources. SFK acknowledges support
from NSF GRFP. This work was performed under the auspices of the U.S.
Department of Energy by Lawrence Livermore National Laboratory under Contract
DE-AC52-07NA27344 and was supported by the LLNL-LDRD Program under Project No.
2022-SI-004 (LLNL-CONF-835574).
## References
* Andreassen et al. [2021] Anders Andreassen, Yasaman Bahri, Behnam Neyshabur, and Rebecca Roelofs. The evolution of out-of-distribution robustness throughout fine-tuning, 2021. URL https://arxiv.org/abs/2106.15831.
* Basri et al. [2019] Ronen Basri, David Jacobs, Yoni Kasten, and Shira Kritchman. The convergence rate of neural networks for learned functions of different frequencies. 2019. doi: 10.48550/ARXIV.1906.00425. URL https://arxiv.org/abs/1906.00425.
* Bulusu et al. [2020] Saikiran Bulusu, Bhavya Kailkhura, Bo Li, Pramod K Varshney, and Dawn Song. Anomalous example detection in deep learning: A survey. _IEEE Access_ , 8:132330–132347, 2020.
* Croce et al. [2020] Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. Robustbench: a standardized adversarial robustness benchmark, 2020. URL https://arxiv.org/abs/2010.09670.
* Deng et al. [2009] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE conference on computer vision and pattern recognition_ , pages 248–255. Ieee, 2009.
* Diffenderfer and Kailkhura [2021] James Diffenderfer and Bhavya Kailkhura. Multi-prize lottery ticket hypothesis: Finding accurate binary neural networks by pruning a randomly weighted network. _International Conference on Learning Representations_ , 2021.
* Diffenderfer et al. [2021] James Diffenderfer, Brian Bartoldson, Shreya Chaganti, Jize Zhang, and Bhavya Kailkhura. A winning hand: Compressing deep networks can improve out-of-distribution robustness. In _Advances in Neural Information Processing Systems_ , 2021. URL https://proceedings.neurips.cc/paper/2021/file/0607f4c705595b911a4f3e7a127b44e0-Paper.pdf.
* Frankle and Carbin [2018] Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. _arXiv preprint arXiv:1803.03635_ , 2018.
* Fridovich-Keil et al. [2021] Sara Fridovich-Keil, Raphael Gontijo Lopes, and Rebecca Roelofs. Spectral bias in practice: The role of function frequency in generalization. _CoRR_ , abs/2110.02424, 2021. URL https://arxiv.org/abs/2110.02424.
* Guillory et al. [2021] Devin Guillory, Vaishaal Shankar, Sayna Ebrahimi, Trevor Darrell, and Ludwig Schmidt. Predicting with confidence on unseen distributions, 2021. URL https://arxiv.org/abs/2107.03315.
* Han et al. [2015] Song Han, Jeff Pool, John Tran, and William J Dally. Learning both weights and connections for efficient neural networks. _arXiv preprint arXiv:1506.02626_ , 2015.
* He et al. [2015] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015.
* Hendrycks and Dietterich [2019a] Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations, 2019a. URL https://arxiv.org/abs/1903.12261.
* Hendrycks and Dietterich [2019b] Dan Hendrycks and Thomas G. Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In _7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019_. OpenReview.net, 2019b. URL https://openreview.net/forum?id=HJz6tiCqYm.
* Hendrycks et al. [2019] Dan Hendrycks, Norman Mu, Ekin Dogus Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. Augmix: A simple data processing method to improve robustness and uncertainty. In _International Conference on Learning Representations_ , 2019.
* Hoffman et al. [2019] Judy Hoffman, Daniel A Roberts, and Sho Yaida. Robust learning with jacobian regularization. _arXiv preprint arXiv:1908.02729_ , 2019.
* Hooker et al. [2019] Sara Hooker, Aaron Courville, Gregory Clark, Yann Dauphin, and Andrea Frome. What do compressed deep neural networks forget? _arXiv preprint arXiv:1911.05248_ , 2019.
* Jiang et al. [2019] Yiding Jiang, Behnam Neyshabur, Hossein Mobahi, Dilip Krishnan, and Samy Bengio. Fantastic generalization measures and where to find them. _arXiv preprint arXiv:1912.02178_ , 2019.
* Kireev et al. [2021] Klim Kireev, Maksym Andriushchenko, and Nicolas Flammarion. On the effectiveness of adversarial training against common corruptions. _arXiv preprint arXiv:2103.02325_ , 2021.
* Krizhevsky [2009] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
* Marcel and Rodriguez [2010] Sébastien Marcel and Yann Rodriguez. Torchvision the machine-vision package of torch. In _Proceedings of the 18th ACM International Conference on Multimedia_ , MM ’10, page 1485–1488, New York, NY, USA, 2010. Association for Computing Machinery. ISBN 9781605589336. doi: 10.1145/1873951.1874254. URL https://doi.org/10.1145/1873951.1874254.
* Miller et al. [2021a] John Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon, and Ludwig Schmidt. Accuracy on the line: On the strong correlation between out-of-distribution and in-distribution generalization, 2021a. URL https://arxiv.org/abs/2107.04649.
* Miller et al. [2021b] John P Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon, and Ludwig Schmidt. Accuracy on the line: on the strong correlation between out-of-distribution and in-distribution generalization. In _International Conference on Machine Learning_ , pages 7721–7735. PMLR, 2021b.
* Ortiz-Jimenez et al. [2020] Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Hold me tight! influence of discriminative features on deep network boundaries, 2020. URL https://arxiv.org/abs/2002.06349.
* Radford et al. [2021] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision, 2021. URL https://arxiv.org/abs/2103.00020.
* Rahaman et al. [2018] Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred A. Hamprecht, Yoshua Bengio, and Aaron Courville. On the spectral bias of neural networks. 2018. doi: 10.48550/ARXIV.1806.08734. URL https://arxiv.org/abs/1806.08734.
* Ramanujan et al. [2020] Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi, Ali Farhadi, and Mohammad Rastegari. What’s hidden in a randomly weighted neural network? In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 11893–11902, 2020.
* Recht et al. [2018] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do cifar-10 classifiers generalize to cifar-10?, 2018. URL https://arxiv.org/abs/1806.00451.
* Recht et al. [2019] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do imagenet classifiers generalize to imagenet?, 2019. URL https://arxiv.org/abs/1902.10811.
* Renda et al. [2020] Alex Renda, Jonathan Frankle, and Michael Carbin. Comparing rewinding and fine-tuning in neural network pruning, 2020.
* Shankar et al. [2019] Vaishaal Shankar, Achal Dave, Rebecca Roelofs, Deva Ramanan, Benjamin Recht, and Ludwig Schmidt. Do image classifiers generalize across time?, 2019. URL https://arxiv.org/abs/1906.02168.
* Simonyan and Zisserman [2014] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. _arXiv preprint arXiv:1409.1556_, 2014.
* Sun et al. [2021] Jiachen Sun, Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Dan Hendrycks, Jihun Hamm, and Z Morley Mao. Certified adversarial defenses meet out-of-distribution corruptions: Benchmarking robustness and simple baselines. _arXiv preprint arXiv:2112.00659_ , 2021.
* Wortsman et al. [2021] Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, and Ludwig Schmidt. Robust fine-tuning of zero-shot models, 2021. URL https://arxiv.org/abs/2109.01903.
* Xu et al. [2018] Zhi-Qin John Xu, Yaoyu Zhang, and Yanyang Xiao. Training behavior of deep neural network in frequency domain, 2018. URL https://arxiv.org/abs/1807.01251.
* Yin et al. [2019] Dong Yin, Raphael Gontijo Lopes, Jonathon Shlens, Ekin D. Cubuk, and Justin Gilmer. A fourier perspective on model robustness in computer vision, 2019. URL https://arxiv.org/abs/1906.08988.
* Zhu and Gupta [2017] Michael Zhu and Suyog Gupta. To prune, or not to prune: exploring the efficacy of pruning for model compression, 2017.
## Appendix A Appendix
### A.1 Pruning methods
In Section 4.2 we present results considering three categories of model
pruning used in Diffenderfer et al. [7]: _traditional_ (fine-tuning [11],
gradual magnitude pruning [37]), _rewinding lottery-tickets_ (weight-rewinding
[8], learning-rate rewinding [30]), and _initialization lottery-tickets_
(edgepopup [27], biprop [6]). Here we provide a brief description of each
pruning method.
#### Traditional.
As the name suggests, fine-tuning prunes a model at the end of the regular
training period by removing $p\%$ of the weights with the smallest magnitude,
then fine-tunes the remaining weights using the learning rate at the end of
the regular training period. Gradual magnitude pruning progressively removes
weights during training in accordance with a sparsity scheduler (see Fig. 1 in
Zhu and Gupta [37]) until the target sparsity is reached. Typically, some
training takes place before and after pruning in gradual magnitude pruning.
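For illustration, a single global magnitude-pruning step of the kind both methods build on might look like the sketch below. This is a minimal PyTorch sketch; practical implementations typically maintain binary masks so that pruned weights remain zero during subsequent fine-tuning or training.

```python
import torch

def magnitude_prune_(model, sparsity):
    """Zero out the fraction `sparsity` of smallest-magnitude weights,
    pooled globally across all weight tensors of the model."""
    weights = [p for name, p in model.named_parameters() if "weight" in name]
    magnitudes = torch.cat([p.detach().abs().flatten() for p in weights])
    k = max(1, int(sparsity * magnitudes.numel()))
    threshold = magnitudes.kthvalue(k).values  # k-th smallest magnitude
    with torch.no_grad():
        for p in weights:
            p.mul_(p.abs() > threshold)        # keep only weights above threshold
```

Gradual magnitude pruning would apply such a step repeatedly during training, with `sparsity` ramped up according to the scheduler of Zhu and Gupta [37].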
#### Rewinding lottery tickets.
Rewinding lottery-ticket methods perform repeated training-then-pruning steps in
which some percentage of the remaining weights are pruned at each pruning step
(traditionally 20% of the remaining weights are pruned at each step). After
each training-then-pruning step, either the weights and learning rate are
rewound to their values at initialization (i.e. weight-rewinding) or the
learning rate is rewound to a value earlier in the training process (learning-
rate rewinding). The first rewinding lottery ticket method supported the
Lottery Ticket Hypothesis [8], which suggests that sparse subnetworks exist
within randomly initialized dense networks that can be trained to the same
accuracy as dense networks.
#### Initialization lottery tickets.
Stronger versions of this hypothesis, such as the Strong [27] and Multi-Prize
Lottery Ticket Hypotheses [6] suggest that such subnetworks exist at
initialization that achieve comparable accuracy to the dense network without
requiring any training and, further, that these subnetworks’ weights can be
binarized. Edgepopup finds such subnetworks using surrogate scores that
identify the most important weights at initialization. Biprop integrates
weight and/or activation binarization into the search process for such
subnetworks, resulting in binary-weight or binary-weight-and-activation
networks (BNNs). Note that biprop networks in this paper only use binarized
weights.
### A.2 Fourier interpolation on CIFAR-10
Figure 1 in the main paper shows example Fourier amplitude and phase
interpolating paths on ImageNet. Figure 6 shows example Fourier amplitude and
phase interpolating paths on CIFAR-10.
Figure 6: Example Fourier amplitude (top) and phase (bottom) interpolating
paths from CIFAR-10. Each path includes 100 images; every 15th image is
visualized here. The first image along each path is an unaltered image from
the original test set; the last image has the same Fourier phases (top) or
amplitudes (bottom) as the original but has some of the Fourier amplitudes
(top) or phases (bottom) of a random other image from the validation set.
Amplitude interpolation produces a corruption that preserves semantic content,
whereas phase interpolation destroys semantic content.
### A.3 Jacobian norm computational details
We estimate the norm of the Jacobian using a random-projection-based approach
that computes the estimate from a batch of samples of size $B$ and $n_{proj}$
projection vectors [16]. For CIFAR-10 and ImageNet, we set $n_{proj}$ to $10$
and $20$, respectively; we use $B=400$ for both datasets.
Error bars for our estimates are 95% confidence intervals constructed under
the assumption that the $n_{proj}B$ unique Jacobian norm estimates made by the
projection method are i.i.d., which is consistent with the error order of
$(n_{proj}B)^{-\frac{1}{2}}$ expected by Hoffman et al. [16] when $B\gg 1$.
Under this assumption, which may lead to underestimation of the true error if
violated, our choice of $n_{proj}$ and $B$ ensures that the average error bar
is less than 2% of the size of the estimated Jacobian norm (and all error bars
are less than 8% of the size of the estimated Jacobian norm). Our estimated
Jacobian norms are between 60 and 500 for ImageNet models and between 2 and 50
for CIFAR-10 models.
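A minimal sketch of such an estimator is shown below, assuming `model` maps a batch `x` to logits of shape $(B,C)$ and using the identity $\mathbb{E}_{v}\|J^{\top}v\|^{2}=\|J\|_{F}^{2}/C$ for $v$ uniform on the unit sphere in $\mathbb{R}^{C}$; details of the exact estimator in Hoffman et al. [16] may differ.

```python
import torch

def jacobian_norm_estimate(model, x, n_proj=10):
    """Random-projection estimate of the Frobenius norm of the per-sample
    input-output Jacobian, averaged over a batch x of shape (B, ...)."""
    x = x.clone().requires_grad_(True)
    out = model(x)                                  # logits, shape (B, C)
    C = out.shape[1]
    sq_norm_estimates = []
    for _ in range(n_proj):
        v = torch.randn_like(out)
        v = v / v.norm(dim=1, keepdim=True)         # one unit vector per sample
        (g,) = torch.autograd.grad(out, x, grad_outputs=v, retain_graph=True)
        # E_v ||J^T v||^2 = ||J||_F^2 / C, so scale by C to unbias.
        sq_norm_estimates.append(C * g.flatten(1).pow(2).sum(dim=1))
    return torch.stack(sq_norm_estimates).mean().sqrt()
```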
### A.4 Power spectral density of distribution shifts
The PSDs shown in Figure 2 reflect the distribution _shift_ between each OOD
dataset and the original CIFAR-10 test set. As each of the CIFAR-10-C
corruptions are transformations applied to the original CIFAR-10 test images,
the PSD for each CIFAR-10-C corruption is computed by taking the difference
between each corrupted and original test image, computing the PSD of the
resulting difference image, then averaging the PSDs for all images of the
given corruption. As CIFAR-10.1 images are not corruptions of the original
CIFAR-10 test images, the PSD characterization must be computed in a different
manner. First, for each of the ten CIFAR-10 classes we compute the average PSD
for CIFAR-10 test images and CIFAR-10.1 images and take the difference of
these class PSDs. The final approximated PSD representing CIFAR-10.1 shift is
taken to be the average of the difference PSDs for the ten classes.
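Both computations reduce to averaging 2D power spectra. A minimal sketch follows, with hypothetical array names and images assumed to be stacks of shape (N, H, W):

```python
import numpy as np

def mean_psd(images):
    """Average 2D power spectral density over an image stack of shape
    (N, H, W), with zero frequency shifted to the center."""
    spectra = np.fft.fftshift(np.fft.fft2(images), axes=(-2, -1))
    return (np.abs(spectra) ** 2).mean(axis=0)

# CIFAR-10-C: PSD of the per-image difference, corrupted minus clean.
# psd_shift = mean_psd(corrupted_imgs - clean_imgs)

# CIFAR-10.1: no per-image pairing exists, so take per-class PSD
# differences and average them over the ten classes.
# psd_shift = np.mean(
#     [mean_psd(c101_by_class[y]) - mean_psd(c10_by_class[y]) for y in range(10)],
#     axis=0,
# )
```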
In Figure 7, we show the PSDs for all perturbations to CIFAR-10 test data we
considered including the nine CIFAR-10-C corruptions not provided in the main
text (fog, snow, frost, elastic transform, zoom blur, motion blur, glass blur,
JPEG compression, and shot noise). As in the main text, we sort the
corruptions roughly in order of increasing frequency.
Figure 7: Power spectral densities for low (CIFAR-10.1, brightness, contrast,
fog, frost, snow), mid (elastic transform, motion blur, zoom blur, glass blur,
defocus blur, pixelate, JPEG compression), and high (Gaussian noise, impulse
noise, shot noise) frequency shifts with respect to CIFAR-10.
### A.5 Pruning results on additional corruptions from CIFAR-10-C
Figure 8 and Table 4 extend Figure 3 and Table 1 to the nine remaining
corruptions from CIFAR-10-C.
Figure 8: When does pruning affect OOD robustness?
| OOD shift | Metric | Traditional $m$ | Traditional $R^{2}$ | Rewinding LT $m$ | Rewinding LT $R^{2}$ | Initialization LT $m$ | Initialization LT $R^{2}$ |
|---|---|---|---|---|---|---|---|
| Fog | ID accuracy | 0.96 | 0.83 | 0.63 | 0.66 | 0.98 | 0.89 |
| | Jacobian | -0.0 | 0.19 | -0.01 | 0.43 | 0.01 | 0.36 |
| | Pixel - within | -6.74 | 0.72 | -5.6 | 0.68 | -8.65 | 0.84 |
| | Pixel - between | 8.77 | 0.41 | 3.75 | 0.24 | 9.88 | 0.59 |
| | Amp - HFF | -6.91 | 0.68 | -4.54 | 0.38 | -6.1 | 0.84 |
| | Amp - CD | 0.05 | 0.67 | 0.03 | 0.53 | 0.05 | 0.78 |
| | Phase - HFF | -9.69 | 0.27 | -4.88 | 0.29 | -6.36 | 0.33 |
| | Phase - CD | 0.03 | 0.32 | 0.01 | 0.21 | 0.02 | 0.36 |
| Snow | ID accuracy | 0.82 | 0.71 | 1.04 | 0.84 | 0.4 | 0.4 |
| | Jacobian | -0.0 | 0.14 | -0.01 | 0.68 | 0.0 | 0.34 |
| | Pixel - within | -6.97 | 0.76 | -9.12 | 0.8 | -3.48 | 0.46 |
| | Pixel - between | 9.75 | 0.55 | 9.66 | 0.4 | 5.5 | 0.6 |
| | Amp - HFF | -4.94 | 0.44 | -8.36 | 0.66 | -2.46 | 0.4 |
| | Amp - CD | 0.03 | 0.35 | 0.05 | 0.76 | 0.02 | 0.26 |
| | Phase - HFF | -3.8 | 0.12 | -10.37 | 0.4 | 0.55 | 0.04 |
| | Phase - CD | 0.01 | 0.14 | 0.02 | 0.28 | -0.0 | 0.0 |
| Frost | ID accuracy | 0.95 | 0.62 | 1.04 | 0.64 | 0.18 | 0.21 |
| | Jacobian | -0.0 | 0.16 | -0.01 | 0.49 | 0.0 | 0.31 |
| | Pixel - within | -8.3 | 0.69 | -8.82 | 0.62 | -1.67 | 0.17 |
| | Pixel - between | 12.22 | 0.57 | 11.92 | 0.3 | 3.42 | 0.29 |
| | Amp - HFF | -5.81 | 0.4 | -8.44 | 0.47 | -0.92 | 0.26 |
| | Amp - CD | 0.03 | 0.31 | 0.05 | 0.61 | 0.01 | 0.23 |
| | Phase - HFF | -4.79 | 0.16 | -12.66 | 0.43 | 4.57 | 0.19 |
| | Phase - CD | 0.02 | 0.16 | 0.02 | 0.22 | -0.01 | 0.15 |
| Elastic Transform | ID accuracy | 0.73 | 0.78 | 0.89 | 0.81 | 0.49 | 0.51 |
| | Jacobian | -0.0 | 0.22 | -0.01 | 0.7 | 0.01 | 0.48 |
| | Pixel - within | -6.09 | 0.86 | -7.82 | 0.84 | -4.67 | 0.64 |
| | Pixel - between | 8.83 | 0.6 | 6.19 | 0.34 | 6.34 | 0.64 |
| | Amp - HFF | -5.19 | 0.6 | -6.83 | 0.51 | -3.05 | 0.52 |
| | Amp - CD | 0.03 | 0.43 | 0.04 | 0.69 | 0.03 | 0.35 |
| | Phase - HFF | -4.96 | 0.13 | -6.3 | 0.33 | -0.49 | 0.05 |
| | Phase - CD | 0.02 | 0.2 | 0.01 | 0.25 | 0.0 | 0.06 |
| Zoom Blur | ID accuracy | 0.67 | 0.45 | 1.05 | 0.68 | 0.35 | 0.31 |
| | Jacobian | -0.0 | 0.19 | -0.01 | 0.6 | 0.01 | 0.44 |
| | Pixel - within | -5.87 | 0.51 | -9.33 | 0.67 | -4.44 | 0.33 |
| | Pixel - between | 8.37 | 0.35 | 2.55 | 0.3 | 8.85 | 0.47 |
| | Amp - HFF | -4.58 | 0.34 | -7.43 | 0.39 | -2.48 | 0.38 |
| | Amp - CD | 0.03 | 0.3 | 0.05 | 0.54 | 0.02 | 0.32 |
| | Phase - HFF | -4.65 | 0.18 | -7.2 | 0.29 | 3.66 | 0.22 |
| | Phase - CD | 0.02 | 0.2 | 0.01 | 0.23 | -0.01 | 0.18 |
| Motion Blur | ID accuracy | 0.62 | 0.54 | 1.13 | 0.72 | 0.43 | 0.31 |
| | Jacobian | -0.0 | 0.18 | -0.02 | 0.69 | 0.01 | 0.52 |
| | Pixel - within | -4.93 | 0.54 | -10.02 | 0.69 | -4.59 | 0.36 |
| | Pixel - between | 6.87 | 0.33 | 6.94 | 0.36 | 7.72 | 0.5 |
| | Amp - HFF | -4.41 | 0.42 | -9.25 | 0.56 | -2.78 | 0.35 |
| | Amp - CD | 0.03 | 0.38 | 0.06 | 0.63 | 0.02 | 0.27 |
| | Phase - HFF | -5.86 | 0.18 | -10.9 | 0.41 | 1.7 | 0.09 |
| | Phase - CD | 0.02 | 0.23 | 0.02 | 0.28 | -0.0 | 0.05 |
| Glass Blur | ID accuracy | 0.17 | 0.26 | 0.99 | 0.55 | -0.85 | 0.43 |
| | Jacobian | -0.0 | 0.05 | -0.01 | 0.57 | 0.0 | 0.01 |
| | Pixel - within | -2.97 | 0.34 | -8.77 | 0.52 | 7.07 | 0.32 |
| | Pixel - between | 5.93 | 0.21 | 9.81 | 0.28 | -5.23 | 0.2 |
| | Amp - HFF | -0.31 | 0.29 | -8.75 | 0.54 | 5.26 | 0.44 |
| | Amp - CD | -0.0 | 0.36 | 0.05 | 0.56 | -0.05 | 0.49 |
| | Phase - HFF | 2.57 | 0.32 | -13.16 | 0.48 | 14.49 | 0.59 |
| | Phase - CD | -0.01 | 0.32 | 0.03 | 0.45 | -0.04 | 0.54 |
| JPEG Compression | ID accuracy | 0.58 | 0.58 | 0.36 | 0.36 | -0.06 | 0.51 |
| | Jacobian | 0.0 | 0.2 | -0.01 | 0.49 | 0.0 | 0.26 |
| | Pixel - within | -5.48 | 0.66 | -3.19 | 0.43 | 1.36 | 0.48 |
| | Pixel - between | 9.92 | 0.72 | -0.09 | 0.21 | -1.74 | 0.39 |
| | Amp - HFF | -4.26 | 0.48 | -4.26 | 0.37 | 0.53 | 0.52 |
| | Amp - CD | 0.02 | 0.24 | 0.02 | 0.38 | -0.0 | 0.48 |
| | Phase - HFF | -0.89 | 0.06 | -4.44 | 0.28 | 3.26 | 0.24 |
| | Phase - CD | 0.01 | 0.09 | 0.01 | 0.21 | -0.0 | 0.15 |
| Shot Noise | ID accuracy | 0.67 | 0.22 | 0.34 | 0.11 | -0.77 | 0.46 |
| | Jacobian | -0.0 | 0.08 | -0.0 | 0.19 | 0.0 | 0.06 |
| | Pixel - within | -6.76 | 0.29 | -3.82 | 0.19 | 7.06 | 0.37 |
| | Pixel - between | 7.69 | 0.16 | 9.73 | 0.16 | -6.9 | 0.3 |
| | Amp - HFF | -2.99 | 0.1 | -5.63 | 0.14 | 4.66 | 0.46 |
| | Amp - CD | 0.02 | 0.12 | 0.04 | 0.22 | -0.04 | 0.46 |
| | Phase - HFF | -1.88 | 0.16 | -11.59 | 0.3 | 12.66 | 0.45 |
| | Phase - CD | 0.01 | 0.09 | 0.03 | 0.2 | -0.03 | 0.37 |
Table 4: Can model metrics explain why pruning affects OOD robustness?
### A.6 Augmentation results on additional corruptions from CIFAR-10-C
Figure 9 and Table 5 extend Figure 4 and Table 2 to the nine remaining
corruptions from CIFAR-10-C.
Figure 9: When does augmentation affect OOD robustness?
| OOD shift | Metric | Clean $m$ | Clean $R^{2}$ | AugMix $m$ | AugMix $R^{2}$ | Gaussian $m$ | Gaussian $R^{2}$ |
|---|---|---|---|---|---|---|---|
| Fog | ID accuracy | 1.03 | 0.87 | 1.08 | 0.97 | 0.62 | 0.65 |
| | Jacobian | 0.01 | 0.13 | 0.01 | 0.39 | -0.0 | 0.06 |
| | Pixel - within | -9.85 | 0.8 | -8.17 | 0.91 | -5.75 | 0.72 |
| | Pixel - between | 12.02 | 0.56 | 10.58 | 0.77 | 3.78 | 0.22 |
| | Amp - HFF | -6.59 | 0.87 | -7.11 | 0.89 | -4.29 | 0.59 |
| | Amp - CD | 0.05 | 0.81 | 0.07 | 0.69 | 0.03 | 0.58 |
| | Phase - HFF | -9.64 | 0.42 | 4.88 | 0.08 | -0.32 | 0.04 |
| | Phase - CD | 0.03 | 0.51 | 0.02 | 0.17 | 0.03 | 0.34 |
| Snow | ID accuracy | 0.59 | 0.5 | 0.86 | 0.92 | 0.84 | 0.8 |
| | Jacobian | -0.0 | 0.06 | 0.01 | 0.31 | -0.0 | 0.08 |
| | Pixel - within | -6.52 | 0.58 | -6.68 | 0.88 | -7.24 | 0.79 |
| | Pixel - between | 7.08 | 0.29 | 8.48 | 0.71 | 6.52 | 0.43 |
| | Amp - HFF | -3.59 | 0.42 | -5.67 | 0.83 | -5.85 | 0.78 |
| | Amp - CD | 0.03 | 0.4 | 0.05 | 0.66 | 0.04 | 0.75 |
| | Phase - HFF | -2.4 | 0.06 | 4.59 | 0.08 | 4.88 | 0.12 |
| | Phase - CD | 0.01 | 0.08 | 0.02 | 0.18 | 0.03 | 0.31 |
| Frost | ID accuracy | 0.39 | 0.2 | 0.82 | 0.88 | 0.94 | 0.85 |
| | Jacobian | -0.0 | 0.14 | 0.01 | 0.25 | -0.0 | 0.06 |
| | Pixel - within | -5.24 | 0.31 | -6.38 | 0.85 | -7.64 | 0.76 |
| | Pixel - between | 4.29 | 0.13 | 7.97 | 0.66 | 7.48 | 0.48 |
| | Amp - HFF | -2.38 | 0.24 | -5.52 | 0.83 | -6.4 | 0.76 |
| | Amp - CD | 0.02 | 0.21 | 0.05 | 0.68 | 0.04 | 0.7 |
| | Phase - HFF | -0.47 | 0.08 | 4.42 | 0.07 | 6.13 | 0.15 |
| | Phase - CD | 0.0 | 0.08 | 0.02 | 0.16 | 0.03 | 0.26 |
| Elastic Transform | ID accuracy | 0.59 | 0.66 | 0.73 | 0.96 | 0.77 | 0.83 |
| | Jacobian | 0.0 | 0.06 | 0.01 | 0.34 | -0.0 | 0.08 |
| | Pixel - within | -6.11 | 0.77 | -5.61 | 0.94 | -6.82 | 0.86 |
| | Pixel - between | 7.01 | 0.45 | 7.08 | 0.74 | 6.55 | 0.49 |
| | Amp - HFF | -3.59 | 0.58 | -4.83 | 0.87 | -5.3 | 0.77 |
| | Amp - CD | 0.03 | 0.49 | 0.05 | 0.67 | 0.03 | 0.7 |
| | Phase - HFF | -2.83 | 0.09 | 3.28 | 0.08 | 5.16 | 0.12 |
| | Phase - CD | 0.01 | 0.16 | 0.01 | 0.19 | 0.03 | 0.29 |
| Zoom Blur | ID accuracy | 0.48 | 0.28 | 0.95 | 0.96 | 0.69 | 0.57 |
| | Jacobian | -0.0 | 0.06 | 0.01 | 0.37 | -0.0 | 0.12 |
| | Pixel - within | -5.71 | 0.39 | -7.3 | 0.93 | -6.79 | 0.69 |
| | Pixel - between | 6.6 | 0.24 | 9.38 | 0.76 | 5.5 | 0.3 |
| | Amp - HFF | -3.19 | 0.32 | -6.27 | 0.87 | -5.23 | 0.62 |
| | Amp - CD | 0.02 | 0.26 | 0.06 | 0.66 | 0.03 | 0.58 |
| | Phase - HFF | -0.9 | 0.05 | 4.92 | 0.09 | 4.11 | 0.08 |
| | Phase - CD | 0.01 | 0.06 | 0.02 | 0.17 | 0.03 | 0.26 |
| Motion Blur | ID accuracy | 0.56 | 0.41 | 0.99 | 0.95 | 0.67 | 0.53 |
| | Jacobian | -0.0 | 0.04 | 0.01 | 0.37 | -0.0 | 0.11 |
| | Pixel - within | -6.25 | 0.5 | -7.62 | 0.93 | -6.65 | 0.67 |
| | Pixel - between | 6.62 | 0.25 | 9.89 | 0.77 | 5.21 | 0.27 |
| | Amp - HFF | -3.7 | 0.43 | -6.56 | 0.88 | -5.17 | 0.62 |
| | Amp - CD | 0.03 | 0.38 | 0.06 | 0.66 | 0.03 | 0.6 |
| | Phase - HFF | -2.75 | 0.09 | 5.9 | 0.1 | 2.9 | 0.06 |
| | Phase - CD | 0.01 | 0.12 | 0.02 | 0.16 | 0.03 | 0.31 |
| Glass Blur | ID accuracy | -0.43 | 0.13 | 0.31 | 0.27 | 0.39 | 0.34 |
| | Jacobian | -0.01 | 0.23 | -0.0 | 0.14 | -0.01 | 0.22 |
| | Pixel - within | 1.83 | 0.08 | -2.82 | 0.31 | -4.99 | 0.47 |
| | Pixel - between | -5.7 | 0.13 | 3.29 | 0.26 | 1.75 | 0.24 |
| | Amp - HFF | 2.4 | 0.2 | -2.23 | 0.26 | -3.46 | 0.38 |
| | Amp - CD | -0.02 | 0.17 | 0.02 | 0.16 | 0.02 | 0.32 |
| | Phase - HFF | 8.08 | 0.28 | 4.34 | 0.08 | 0.08 | 0.09 |
| | Phase - CD | -0.02 | 0.26 | 0.0 | 0.05 | 0.02 | 0.19 |
| JPEG Compression | ID accuracy | -0.03 | 0.27 | 0.28 | 0.35 | 0.75 | 0.85 |
| | Jacobian | -0.01 | 0.21 | -0.0 | 0.06 | -0.0 | 0.09 |
| | Pixel - within | -0.52 | 0.22 | -2.02 | 0.36 | -6.4 | 0.85 |
| | Pixel - between | -1.5 | 0.17 | 1.92 | 0.13 | 6.08 | 0.48 |
| | Amp - HFF | -0.04 | 0.33 | -2.07 | 0.4 | -5.09 | 0.74 |
| | Amp - CD | -0.0 | 0.28 | 0.03 | 0.53 | 0.03 | 0.68 |
| | Phase - HFF | 1.7 | 0.18 | -0.57 | 0.08 | 5.39 | 0.13 |
| | Phase - CD | -0.0 | 0.22 | 0.01 | 0.23 | 0.03 | 0.27 |
| Shot Noise | ID accuracy | -0.73 | 0.2 | 0.26 | 0.18 | 0.9 | 0.96 |
| | Jacobian | -0.02 | 0.3 | -0.0 | 0.06 | -0.0 | 0.07 |
| | Pixel - within | 3.92 | 0.09 | -2.18 | 0.22 | -7.23 | 0.84 |
| | Pixel - between | -12.26 | 0.2 | 1.73 | 0.09 | 7.45 | 0.57 |
| | Amp - HFF | 3.66 | 0.25 | -2.3 | 0.21 | -5.87 | 0.79 |
| | Amp - CD | -0.03 | 0.23 | 0.03 | 0.3 | 0.04 | 0.71 |
| | Phase - HFF | 7.47 | 0.25 | 1.22 | 0.05 | 6.31 | 0.16 |
| | Phase - CD | -0.02 | 0.26 | 0.02 | 0.15 | 0.03 | 0.25 |
Table 5: Can model metrics explain why augmentation affects OOD robustness?
### A.7 CLIP results on additional corruptions from ImageNet-C
Figure 10 and Table 6 extend Figure 5 and Table 3 to the nine remaining
corruptions from ImageNet-C.
Figure 10: When and why does CLIP ensembling affect OOD robustness?
| OOD shift | Metric | All $m$ | All $R^{2}$ | CLIP $m$ | CLIP $R^{2}$ | Non-CLIP $m$ | Non-CLIP $R^{2}$ |
|---|---|---|---|---|---|---|---|
| Fog | ID accuracy | 1.41 | 0.7 | 0.94 | 0.71 | 1.26 | 0.7 |
| | Jacobian | 0.0 | 0.48 | 0.0 | 0.66 | -0.0 | 0.0 |
| | Pixel - within | -14.49 | 0.57 | -9.87 | 0.66 | -12.4 | 0.57 |
| | Pixel - between | 13.29 | 0.76 | 9.59 | 0.39 | 24.15 | 0.8 |
| | Amp - HFF | -11.58 | 0.62 | -7.99 | 0.93 | -21.57 | 0.05 |
| | Amp - CD | 0.03 | 0.9 | 0.02 | 0.97 | 0.05 | 0.85 |
| | Phase - HFF | -14.72 | 0.43 | -11.73 | 0.91 | 25.81 | 0.28 |
| | Phase - CD | 0.03 | 0.75 | 0.02 | 0.95 | 0.07 | 0.62 |
| Snow | ID accuracy | 1.66 | 0.72 | 1.44 | 0.75 | 1.21 | 0.64 |
| | Jacobian | 0.0 | 0.57 | 0.0 | 0.67 | -0.0 | 0.01 |
| | Pixel - within | -17.05 | 0.58 | -14.89 | 0.67 | -11.69 | 0.5 |
| | Pixel - between | 15.22 | 0.75 | 15.5 | 0.45 | 24.06 | 0.78 |
| | Amp - HFF | -14.64 | 0.74 | -12.02 | 0.93 | -27.11 | 0.08 |
| | Amp - CD | 0.04 | 0.96 | 0.04 | 0.99 | 0.05 | 0.89 |
| | Phase - HFF | -19.15 | 0.54 | -17.5 | 0.9 | 26.64 | 0.3 |
| | Phase - CD | 0.03 | 0.86 | 0.03 | 0.98 | 0.07 | 0.63 |
| Frost | ID accuracy | 1.38 | 0.77 | 1.09 | 0.8 | 1.23 | 0.71 |
| | Jacobian | 0.0 | 0.49 | 0.0 | 0.64 | -0.0 | 0.01 |
| | Pixel - within | -14.3 | 0.63 | -11.29 | 0.72 | -12.17 | 0.58 |
| | Pixel - between | 12.2 | 0.74 | 12.16 | 0.51 | 23.58 | 0.81 |
| | Amp - HFF | -11.0 | 0.64 | -8.8 | 0.93 | -24.59 | 0.07 |
| | Amp - CD | 0.03 | 0.92 | 0.03 | 0.98 | 0.05 | 0.9 |
| | Phase - HFF | -13.7 | 0.43 | -12.47 | 0.85 | 26.54 | 0.32 |
| | Phase - CD | 0.03 | 0.78 | 0.02 | 0.94 | 0.07 | 0.66 |
| Elastic Transform | ID accuracy | 0.97 | 0.76 | 0.82 | 0.92 | 0.75 | 0.59 |
| | Jacobian | 0.0 | 0.41 | 0.0 | 0.42 | -0.0 | 0.01 |
| | Pixel - within | -9.99 | 0.62 | -8.56 | 0.84 | -7.31 | 0.46 |
| | Pixel - between | 9.05 | 0.82 | 10.21 | 0.74 | 15.5 | 0.77 |
| | Amp - HFF | -7.5 | 0.6 | -5.52 | 0.75 | -13.64 | 0.05 |
| | Amp - CD | 0.02 | 0.85 | 0.02 | 0.81 | 0.03 | 0.74 |
| | Phase - HFF | -8.65 | 0.34 | -7.06 | 0.55 | 21.1 | 0.44 |
| | Phase - CD | 0.02 | 0.68 | 0.01 | 0.68 | 0.04 | 0.46 |
| Zoom Blur | ID accuracy | 1.08 | 0.79 | 0.99 | 0.68 | 0.99 | 0.83 |
| | Jacobian | 0.0 | 0.58 | 0.0 | 0.78 | -0.0 | 0.01 |
| | Pixel - within | -11.47 | 0.69 | -10.02 | 0.58 | -10.43 | 0.76 |
| | Pixel - between | 8.33 | 0.58 | 10.61 | 0.4 | 16.94 | 0.74 |
| | Amp - HFF | -8.78 | 0.69 | -8.88 | 0.98 | -3.79 | 0.0 |
| | Amp - CD | 0.02 | 0.85 | 0.02 | 0.94 | 0.03 | 0.75 |
| | Phase - HFF | -11.12 | 0.47 | -12.57 | 0.89 | 23.37 | 0.44 |
| | Phase - CD | 0.02 | 0.78 | 0.02 | 0.95 | 0.04 | 0.52 |
| Motion Blur | ID accuracy | 1.37 | 0.74 | 1.03 | 0.76 | 1.15 | 0.72 |
| | Jacobian | 0.0 | 0.53 | 0.0 | 0.7 | -0.0 | 0.01 |
| | Pixel - within | -14.19 | 0.6 | -10.58 | 0.67 | -11.66 | 0.62 |
| | Pixel - between | 12.65 | 0.77 | 11.25 | 0.47 | 21.15 | 0.76 |
| | Amp - HFF | -11.45 | 0.67 | -8.67 | 0.96 | -13.66 | 0.03 |
| | Amp - CD | 0.03 | 0.91 | 0.02 | 0.96 | 0.04 | 0.79 |
| | Phase - HFF | -14.18 | 0.44 | -12.2 | 0.86 | 28.92 | 0.44 |
| | Phase - CD | 0.03 | 0.78 | 0.02 | 0.94 | 0.06 | 0.52 |
| Glass Blur | ID accuracy | 1.25 | 0.67 | 0.93 | 0.9 | 0.85 | 0.55 |
| | Jacobian | 0.0 | 0.45 | 0.0 | 0.48 | -0.0 | 0.03 |
| | Pixel - within | -12.62 | 0.52 | -9.56 | 0.79 | -8.15 | 0.43 |
| | Pixel - between | 13.03 | 0.88 | 11.74 | 0.74 | 17.35 | 0.72 |
| | Amp - HFF | -10.69 | 0.63 | -6.52 | 0.79 | -22.87 | 0.1 |
| | Amp - CD | 0.03 | 0.88 | 0.02 | 0.83 | 0.03 | 0.82 |
| | Phase - HFF | -12.77 | 0.39 | -8.27 | 0.57 | 21.5 | 0.34 |
| | Phase - CD | 0.02 | 0.72 | 0.01 | 0.72 | 0.05 | 0.55 |
| JPEG Compression | ID accuracy | 1.16 | 0.85 | 0.92 | 0.89 | 1.23 | 0.81 |
| | Jacobian | 0.0 | 0.36 | 0.0 | 0.5 | -0.0 | 0.02 |
| | Pixel - within | -12.29 | 0.73 | -9.5 | 0.8 | -12.42 | 0.69 |
| | Pixel - between | 9.42 | 0.69 | 11.19 | 0.68 | 21.94 | 0.8 |
| | Amp - HFF | -7.82 | 0.51 | -6.59 | 0.82 | -9.03 | 0.01 |
| | Amp - CD | 0.02 | 0.78 | 0.02 | 0.86 | 0.04 | 0.77 |
| | Phase - HFF | -8.7 | 0.27 | -8.57 | 0.63 | 29.26 | 0.44 |
| | Phase - CD | 0.02 | 0.61 | 0.01 | 0.75 | 0.06 | 0.54 |
| Shot Noise | ID accuracy | 1.52 | 0.65 | 0.83 | 0.67 | 1.71 | 0.64 |
| | Jacobian | 0.0 | 0.35 | 0.0 | 0.7 | -0.0 | 0.05 |
| | Pixel - within | -15.38 | 0.51 | -8.45 | 0.58 | -16.57 | 0.5 |
| | Pixel - between | 13.81 | 0.66 | 9.22 | 0.43 | 32.77 | 0.73 |
| | Amp - HFF | -11.21 | 0.47 | -7.28 | 0.92 | -51.38 | 0.14 |
| | Amp - CD | 0.03 | 0.78 | 0.02 | 0.84 | 0.07 | 0.93 |
| | Phase - HFF | -12.72 | 0.26 | -9.81 | 0.76 | 35.09 | 0.26 |
| | Phase - CD | 0.03 | 0.59 | 0.02 | 0.82 | 0.1 | 0.73 |
Table 6: Can model metrics explain why CLIP ensembling affects OOD robustness?
# Topological Defects, Inherent Structures, and Hyperuniformity
Duyu Chen <EMAIL_ADDRESS> Materials Research Laboratory, University of California, Santa Barbara, California 93106, United States
Yu Zheng Department of Physics, Arizona State University, Tempe, AZ 85287
Yang Jiao <EMAIL_ADDRESS> Materials Science and Engineering, Arizona State University, Tempe, AZ 85287; Department of Physics, Arizona State University, Tempe, AZ 85287
###### Abstract
Disordered hyperuniform systems are exotic states of matter that completely
suppress large-scale density fluctuations like crystals, and yet possess no
Bragg peaks similar to liquids or glasses. Such systems have been discovered
in a variety of equilibrium and non-equilibrium physical and biological
systems, and are often endowed with novel physical properties. While it is
well known that long-range interactions are necessary to sustain
hyperuniformity in thermal equilibrium at positive temperatures, such
condition is not required for the realization of disordered hyperuniformity in
systems out of equilibrium. However, the mechanisms associated with the
emergence of disordered hyperuniformity in nonequilibrium systems, in
particular inherent structures (i.e., local potential-energy minima) are often
not well understood, which we will address from a topological perspective in
this work. Specifically, we consider a representative class of disordered
inherent structures which are constructed by continuously introducing randomly
distributed topological defects (dislocations and disclinations) often seen in
colloidal systems and atomic-scale two-dimensional materials. We demonstrate
that these inherent structures can be viewed as topological variants of
ordered hyperuniform states (such as crystals) linked through continuous
topological transformation pathways, which remarkably preserve
hyperuniformity. Moreover, we develop a continuum theory to demonstrate that
the large-scale density fluctuations in these inherent structures are mainly
dominated by the elastic displacement fields resulting from the topological
defects, which at low defect concentrations can be approximated as a
superposition of the displacement fields associated with each individual
defect (strain source). We find that hyperuniformity is preserved as long as
the displacement fields generated by each individual defect decay sufficiently
fast from the source (i.e., the volume integrals of the displacements and
squared displacements caused by an individual defect are finite) and the
displacement-displacement correlation matrix of the system is diagonal and
isotropic. Our results also highlight the importance of decoupling the
positional degrees of freedom from the vibrational degrees of freedom when
looking for disordered hyperuniformity, since the hyperuniformity property is
often cloaked by thermal fluctuations (i.e., vibrational degrees of freedom).
## I Introduction
Disordered hyperuniform (DHU) systems are exotic states of matter Torquato and
Stillinger (2003); Torquato (2018) that lie between a perfect crystal and
liquid. These systems are similar to liquids or glasses in that they are
statistically isotropic and possess no Bragg peaks, and hence lack any
conventional long-range order, and yet they completely suppress large-scale
density fluctuations like crystals and in this sense possess a hidden long-
range order Torquato and Stillinger (2003); Zachary and Torquato (2009);
Torquato (2018). Specifically, the static structure factor $S(k)$, which is
directly proportional to the scattering intensity measured in scattering
experiments, vanishes for DHU systems in the infinite-wavelength (or zero-
wavenumber) limit, i.e., $\lim_{k\rightarrow 0}S(k)=0$, where $k$ is the
wavenumber Torquato and Stillinger (2003); Torquato (2018). Here $S(k)$ is
defined as $S(k)\equiv 1+\rho\tilde{h}(k)$, where $\tilde{h}(k)$ is the
Fourier transform of the total correlation function $h(r)=g_{2}(r)-1$,
$g_{2}(r)$ is the pair correlation function, and $\rho$ is the number density
of the system. Note that this definition implies that the forward scattering
contribution to the diffraction pattern is omitted. Equivalently, the local
number variance $\sigma_{N}^{2}(R)\equiv\langle N^{2}(R)\rangle-\langle
N(R)\rangle^{2}$ associated with a spherical observation window of radius $R$
grows more slowly than the window volume (i.e., a scaling of $R^{d}$ in
$d$-dimensional Euclidean space) for DHU systems in the large-$R$ limit
Torquato and Stillinger (2003); Torquato (2018), where $N(R)$ is the number of
particles in a spherical window with radius $R$ randomly placed into the
system. The small-$k$ scaling behavior of $S(k)\sim k^{\alpha}$ dictates the
large-$R$ asymptotic behavior of $\sigma_{N}^{2}(R)$, based on which all DHU
systems can be categorized into three classes: $\sigma_{N}^{2}(R)\sim R^{d-1}$
for $\alpha>1$ (class I); $\sigma_{N}^{2}(R)\sim R^{d-1}\ln(R)$ for $\alpha=1$
(class II); and $\sigma_{N}^{2}(R)\sim R^{d-\alpha}$ for $0<\alpha<1$ (class
III) Torquato (2018). It is also noteworthy that the direct correlation
function $c(r)$, defined via the Ornstein–Zernike relation $h(r)=c(r)+\rho
c(r)\bigotimes h(r)$ (where $\bigotimes$ denotes convolution) Ornstein and
Zernike (1914), becomes long-ranged in the sense that it has an unbounded
volume integral. This is in diametric contrast to standard thermal critical
points in which $h(r)$ is long-ranged, and hence a system at a hyperuniform
state is considered an “inverted” critical point Torquato and Stillinger
(2003).
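For a finite point pattern of $N$ points in a periodic box, $S(\mathbf{k})$ can be estimated directly as $S(\mathbf{k})=|\sum_{j}e^{-i\mathbf{k}\cdot\mathbf{r}_{j}}|^{2}/N$ at the wave vectors compatible with the box, excluding $\mathbf{k}=0$. A minimal numerical sketch (array names hypothetical):

```python
import numpy as np

def structure_factor(points, k_vectors):
    """S(k) = |sum_j exp(-i k . r_j)|^2 / N for a 2D point pattern of
    shape (N, 2), evaluated at wave vectors of shape (n_k, 2)."""
    N = points.shape[0]
    rho_k = np.exp(-1j * k_vectors @ points.T).sum(axis=1)   # (n_k,)
    return np.abs(rho_k) ** 2 / N

# For a periodic square box of side L, admissible wave vectors are
# k = (2*pi/L) * (n1, n2) with integer n1, n2, excluding k = 0
# (the omitted forward-scattering contribution).
L = 100.0
ns = np.array([(n1, n2) for n1 in range(4) for n2 in range(4)])[1:]
ks = (2 * np.pi / L) * ns
# S = structure_factor(my_points, ks)   # hyperuniform: S(k) -> 0 as |k| -> 0
```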
DHU states have been discovered in a wide spectrum of equilibrium and non-
equilibrium physical and biological systems Gabrielli et al. (2002); Donev et
al. (2005); Zachary et al. (2011a); Jiao and Torquato (2011); Chen et al.
(2014); Zachary and Torquato (2011); Torquato et al. (2015); Uche et al.
(2004); Batten et al. (2008, 2009); Lebowitz (1983); Zhang et al. (2015a, b);
Kurita and Weeks (2011); Hunter and Weeks (2012); Dreyfus et al. (2015);
Hexner and Levine (2015); Jack et al. (2015); Weijs et al. (2015); Torquato et
al. (2008); Feynman and Cohen (1956); Jiao et al. (2014); Mayer et al. (2015);
Hejna et al. (2013); Klatt et al. (2019); Lei et al. (2019); Lei and Ni
(2019); Chremos and Douglas (2018); Rumi et al. (2019); Sánchez et al. (2019,
2020); Huang et al. (2021); Torquato (2021); see Ref. Torquato (2018) for a
thorough overview. The exotic structural features of DHU systems appear to
endow such systems with novel physical properties. For example, disordered
hyperuniform dielectric networks were found to possess complete photonic band
gaps comparable in size to photonic crystals, while at the same time
maintaining statistical isotropy, enabling waveguide geometries not compatible
with photonic crystals Florescu et al. (2009); Man et al. (2013). Moreover,
certain disordered hyperuniform patterns have superior color-sensing
capabilities, as demonstrated by avian photoreceptors Jiao et al. (2014).
Recent evidence also suggests that adding disorder into crystalline low-
dimensional materials in a hyperuniform manner through the introduction of
topological defects may enhance electronic transport in such materials Zheng
et al. (2020); Chen et al. (2021); Zheng et al. (2021), which is complementary
to the conventional wisdom of the landmark “Anderson localization” Anderson
(1958) that disorder generally diminishes electronic transport.
While it is well known that effective long-ranged interactions are required to
drive an equilibrium many-particle system to a hyperuniform state, this
condition is not necessary to achieve hyperuniformity in systems out of
equilibrium Torquato (2018). Among the wide spectrum of hyperuniform
nonequilibrium systems discovered previously, many fall into the category of
inherent structures, i.e., local potential-energy minima associated with
certain forms of interactions Torquato (2018). For instance, a variety of
maximally-random-jammed (MRJ) hard-particle packings Zachary et al. (2011a);
Zachary and Torquato (2011); Zachary et al. (2011b, c); Atkinson et al.
(2013); Chen et al. (2014) are demonstrated to be hyperuniform; since in
athermal systems increasing the density plays the same role as decreasing the
temperature of a molecular liquid, and MRJ packings are local density maxima,
these MRJ packings are considered inherent structures. The amorphous inherent
structures in the quantizer problem also possess a high degree of
hyperuniformity Klatt et al. (2019). Interestingly, avian photoreceptor
patterns are inherent structures associated with isotropic short-range hard-
core repulsions between any pair of cells and isotropic long-range soft-core
repulsions between pairs of cells of the subtype, and they are shown to be
multihyperuniform, i.e., the photoreceptor patterns of both the total
population and the individual cell types are simultaneously hyperuniform Jiao
et al. (2014). Other examples are the inherent structures associated with
the $k$-space overlap potentials, which are shown to be hyperuniform Batten et
al. (2011). It is also noteworthy that not all inherent structures are found
to be hyperuniform Batten et al. (2011). For example, the inherent structures
associated with the Lennard-Jones and steeply repulsive potentials are in
general not hyperuniform due to the dominance of grain boundaries and vacancy
defects Weber and Stillinger (1985); Batten et al. (2011).
Despite the ubiquitous nature of disordered hyperuniform inherent structures,
the mechanisms associated with the emergence of disordered hyperuniformity in
many such systems are still not well understood. In this work, we provide a
topological perspective to shed light on this issue. In particular, we
consider a representative class of disordered inherent structures in two-
dimensional Euclidean space $\mathbb{R}^{2}$, which can be viewed as defected
states of the perfect triangular lattice crystal Halperin and Nelson (1978);
Nelson (1983) obtained by continuously introducing topological defects such as
bound dislocations, free dislocations, and disclinations that are the key
elements in the Kosterlitz-Thouless-Halperin-Nelson-Young (KTHNY) two-stage
melting theory in two dimensions Kosterlitz and Thouless (1973); Young (1979);
Nelson and Halperin (1979). These defects are also commonly seen in 2D
colloidal systems Zahn et al. (1999); Wierschem and Manousakis (2011) and 2D
semiconductors Zheng et al. (2020); Chen et al. (2021) and play an important
role in determining the physical properties of such materials.
Using various structural descriptors, we demonstrate that these inherent
structures preserve the class-I hyperuniformity of the original triangular
lattice crystal. We also show that disclinations result in the strongest
“degradation” of the translational and orientational order of the crystal,
followed by free dislocations and bound dislocations at comparable defect
concentrations. The bond-orientational correlations in these structures
rapidly decay to their long-range values over a short length scale, regardless
of the defect types and concentrations. These behaviors are in stark contrast
to those observed in thermal equilibrium configurations during the 2D
melting process, which show a two-step change in their translational and
bond-orientational order correlations as temperature increases, corresponding
to the two Kosterlitz-Thouless (KT) type transitions (i.e., solid-hexatic, and
hexatic-liquid). In addition, the structures sampled from this 2D melting
process are typically nonhyperuniform. These results highlight the importance
of decoupling the positional degrees of freedom from the vibrational degrees
of freedom and of investigating inherent structures that correspond to local energy
minima of the systems (i.e., positional degrees of freedom) when looking for
disordered hyperuniformity, since the hyperuniformity property is often
cloaked by thermal fluctuations (i.e., vibrational degrees of freedom).
Moreover, we derive a continuum theory to explain the hyperuniformity-
preserving nature of the topological transformations that link the disordered
inherent structures and the original hyperuniform crystals at low defect
concentrations. We demonstrate that the large-scale density fluctuations in
these inherent structures are mainly dominated by the elastic displacement
fields resulting from the topological defects, which at low defect
concentrations can be approximated as a superposition of the displacement fields
associated with each individual defect (strain source). Remarkably, the
class-I hyperuniformity of the original crystal is preserved as long as the
displacement fields of individual defects decay sufficiently fast from the
source (i.e., the volume integrals of the displacements and squared
displacements caused by an individual defect are finite) and the displacement-
displacement correlation matrix of the system is diagonal and isotropic.
In addition, the structure factor approaches zero with a universal quadratic
scaling at small wavenumbers, regardless of the types and exact concentrations
of topological defects. Our numerical results and theoretical analysis uncover
the mechanisms underlying the emergence of disordered hyperuniformity in a
wide spectrum of disordered structures, and provide insights into the discovery,
design, and generation of novel disordered hyperuniform materials.
The rest of the paper is organized as follows: in Sec. II, we describe the
procedures to generate disordered inherent structures via continuous
topological transformations from the reference triangular lattice crystal
state in $\mathbb{R}^{2}$. In Sec. III, we employ various statistical
descriptors to characterize the large-scale structural features, in particular
hyperuniformity of the resulting inherent structures. In Sec. IV, we derive
a continuum theory to explain the class-I hyperuniformity of the inherent
structures. In Sec. V, we provide concluding remarks.
## II Realizations of disordered inherent structures containing randomly
distributed topological defects
### II.1 Dislocations and disclinations induced by topological
transformations
Figure 1: (Color online) Illustration of the formation of inherent structures
containing bound dislocations (top panel), free dislocations (middle panel),
and disclinations with associated dislocations (bottom panel) through series
of topological transformations (i.e., rearrangement of bonding network) and
subsequent structural relaxation in a triangular lattice. Vertices with seven
bonds are highlighted with yellow circles, and vertices with five bonds are
highlighted with green circles.
To introduce bound dislocations (i.e., a pair of dislocations that are next to
each other) into the triangular lattice, we first randomly pick a bond in the
lattice. Note that any bond in the triangular lattice is also the short
diagonal of a rhombus. Next, we break this chosen bond and connect the two
vertices associated with the long diagonal of the corresponding rhombus with a
new bond, resulting in a pair of dislocations next to one another. If the
vertices associated with the old and new bonds all possess six bonds before
the transformation, then the transformation would lead to two five-coordinated
vertices and two seven-coordinated vertices; otherwise, we would obtain
higher-order defected structures. Here we impose the constraint that the
vertices after the transformation should each possess at least five bonds to
ensure local structural stability. The process of introducing a single pair of
bound dislocations is illustrated in the top panel of Fig. 1. We quantify the
amount of bound dislocations by the defect concentration defined as $p\equiv
N_{op}/N_{b}$, where $N_{op}$ is the number of successful topological
transformations, and $N_{b}$ is the number of bonds in the triangular lattice.
Note that a single topological transformation described in this paragraph
would introduce a pair of dislocations, in the absence of other topological
defects.
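To make the bond-flip move concrete, the following is a minimal Python sketch, assuming the bonding network is stored as a `networkx` graph with vertices as nodes; the function name `flip_bond` and its rejection logic are our own illustration, not code from this work.

```python
import networkx as nx

def flip_bond(G, u, v):
    """Break the bond (u, v) (the short diagonal of a rhombus) and bond
    the two vertices on the long diagonal, creating a pair of bound
    dislocations. Returns True on success, False if the move is rejected."""
    common = list(nx.common_neighbors(G, u, v))
    if len(common) != 2:            # (u, v) is not interior to a rhombus
        return False
    w, x = common
    # Reject moves that would leave u or v with fewer than five bonds,
    # the local-stability constraint imposed in the text.
    if G.degree(u) <= 5 or G.degree(v) <= 5 or G.has_edge(w, x):
        return False
    G.remove_edge(u, v)             # break the short diagonal
    G.add_edge(w, x)                # connect the long diagonal
    return True
```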
To generate free dislocations, we start from bound dislocations with the
additional constraint that the initial bound dislocations should consist of
two five-coordinated vertices and two seven-coordinated vertices, and randomly
pick a five-coordinated vertex and a seven-coordinated vertex that are part of
the bound dislocations. We then let these two defected vertices “glide” in the
lattice by continuously breaking existing bonds and forming new bonds. Note
that the direction that the defected vertices can “glide” is fixed once the
two vertices are picked given the local bonding constraints. To form free
dislocations and minimize the spatial correlations of the free dislocations,
we let the defects glide for at least one step; beyond that, the defects have
probability $1/2$ of stopping and probability $1/2$ of continuing to glide at
each lattice site. If the gliding defects stop before hitting any “road
block”, i.e., vertices that are not six-coordinated, then we count this as one
successful topological transformation in the context of free dislocations. The
process of introducing a single pair of free dislocations is illustrated in
the middle panel of Fig. 1. We also experiment with other stopping rules, and
find that the details of different stopping rules do not affect the large-
scale structural features of the resulting structures, which is the focus of
this work. We quantify the amount of free dislocations by the defect
concentration defined as $p\equiv N_{op}/N_{b}$, where $N_{op}$ is the number
of successful topological transformations described in this paragraph, and
$N_{b}$ is the number of bonds in the triangular lattice. Note that similar to
the case of bound dislocations, a single topological transformation described
in this paragraph would introduce a pair of dislocations (each consisting of a
5-coordinated vertex and a 7-coordinated vertex), in the absence of other
topological defects.
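The stopping rule for the glide is geometric: one mandatory step, then a fair coin at every subsequent site. A minimal sketch (our own illustration) is:

```python
import random

def glide_length(min_steps=1, p_stop=0.5):
    """Number of glide steps under the stopping rule described above:
    glide at least `min_steps`, then stop with probability `p_stop` at
    each subsequent lattice site. (The glide also ends early if a
    non-six-coordinated "road block" is encountered, not modeled here.)"""
    steps = min_steps
    while random.random() >= p_stop:
        steps += 1
    return steps
```

With `p_stop = 1/2`, the expected glide length is two lattice steps.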
To generate disclinations, we start from a free dislocation and break the bond
between the seven-coordinated vertex and one of its six-coordinated neighbors
and connect the long diagonal of the corresponding rhombus with a new bond.
This six-coordinated neighbor should have two six-coordinated neighbors that
are not neighbors of the five-coordinated vertex. This bond-breaking and bond-
forming process creates an isolated 5-coordinated vertex, and another
5-coordinated vertex surrounded by two 7-coordinated vertices. We then let one
of the two 7-coordinated vertices and its neighboring 5-coordinated vertex
glide away in the same way as in the case of free dislocations. If these
steps can be completed, then we count this as a successful topological
transformation in the context of disclinations, which would create an isolated
five-fold disclination, an isolated seven-fold disclination, and two free
dislocations (each consisting of a 5-coordinated vertex and a 7-coordinated
vertex), in the absence of other topological defects. Such a topological
transformation is illustrated in the bottom panel of Fig. 1. Note that the
disclinations are accompanied by free dislocations, which is consistent with
previous observations Zahn et al. (1999); Wierschem and Manousakis (2011) that
disclinations typically arise with free dislocations. We quantify the amount
of disclinations by the defect concentration defined as $q\equiv
N_{op}/N_{b}$, where $N_{op}$ is the number of successful topological
transformations described in this paragraph, and $N_{b}$ is the number of
bonds in the triangular lattice. It is noteworthy that structures containing
disclinations at $q$ should be compared to structures containing bound and
free dislocations at $p=\frac{3}{2}q$ for a fair comparison at the same
effective defect concentration: each successful transformation creates six
defected (5- or 7-coordinated) vertices in the disclination case but only
four in the dislocation cases, in the absence of other topological defects.
### II.2 Inherent structures
To obtain the inherent structures, we allow the transformed structures to
undergo elastic relaxation by perturbing the positions of the vertices in a
way that drives the bond lengths in the network towards values associated with
the triangular lattice. In particular, this involves local minimization of the
energy function $E$ defined as follows:
$E=\sum_{\mathrm{bonds}}k_{b}(b_{i}-b_{0})^{2},$ (1)
where $b_{i}$ is the bond length associated with bond $i$, and $b_{0}=1$ is
the side length of a triangle in a triangular lattice. Since we are looking at
local energy minima, the choice of the spring constant $k_{b}$ does not affect
the obtained structure, and without loss of generality, we set $k_{b}$ to
unity.
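For illustration, this relaxation step can be sketched with an off-the-shelf local minimizer; a minimal version (our own, ignoring the periodic-boundary handling an actual simulation would need) is:

```python
import numpy as np
from scipy.optimize import minimize

def relax(positions, bonds, b0=1.0):
    """Locally minimize E = sum_bonds (b_i - b0)^2 (Eq. 1, k_b = 1)
    over the vertex positions.

    positions : (N, 2) array of initial vertex coordinates
    bonds     : list of (i, j) vertex-index pairs, the bond network
    """
    i, j = np.asarray(bonds).T

    def energy(x):
        p = x.reshape(-1, 2)
        b = np.linalg.norm(p[i] - p[j], axis=1)   # current bond lengths
        return np.sum((b - b0) ** 2)

    res = minimize(energy, positions.ravel(), method="L-BFGS-B")
    return res.x.reshape(-1, 2)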
Figure 2: (Color online) (Top section) Representative inherent structures
containing primarily bound dislocations at defect concentration $p$=0.02,
0.06, 0.10, and 0.17, respectively. (Bottom left section) Representative
inherent structures containing primarily free dislocations at defect
concentration $p$=0.02, and 0.04, respectively. (Bottom right section)
Representative inherent structures containing disclinations at defect
concentration $q$=0.01, and 0.015, respectively. Vertices with more than six
bonds are highlighted in yellow, and vertices with fewer than six bonds in
green.
We investigate inherent structures containing the aforementioned three types
of topological defects for a wide range of $p$ (or $q$). In the cases of free
dislocations and disclinations, we generate structures up to values close to
saturation, i.e., where no more topological defects can be inserted into the
system after a sufficiently large number of attempts (e.g., $10N$ attempts,
where $N$ is the number of vertices in the lattice). In the case of bound
dislocations, we stop at $p=0.17$ since increasing $p$ beyond that sometimes
leads to unphysical local bonding networks. In the top section of Fig. 2 we
show representative inherent structures containing primarily bound
dislocations with $N=1200$ particles, in the bottom left section of Fig. 2
representative inherent structures containing primarily free dislocations with
$N=1200$ particles, and in the bottom right section of Fig. 2 representative
inherent structures containing disclinations with $N=1200$ particles.
## III Structural characterization and hyperuniformity
To characterize the inherent structures with the aforementioned topological
defects, in particular at large length scales, we generate configurations with
$N=10,800$ particles at different $p$ (or $q$) and look at various statistics
including pair statistics such as $g_{2}(r)$, $S(k)$ and $\sigma_{N}^{2}(R)$
Torquato and Stillinger (2003); Torquato (2018), and bond-orientational
statistics such as the bond-orientational order metric $Q_{6}$ and correlation
function $C_{6}(r)$ that have been routinely used to study the 2D melting
process Zahn et al. (1999); Wierschem and Manousakis (2011); Li and Ciamarra
(2018). To compute the statistics accurately, we average over 10
configurations at each $p$ (or $q$).
Specifically, the pair correlation function $g_{2}(r)$ is proportional to the
probability density function of finding two centers separated by distance $r$
Torquato (2002), and in practice is computed via the relation
$g_{2}(r)=\frac{\langle N(r)\rangle}{\rho 2\pi r\Delta r},$ (2)
where $\langle N(r)\rangle$ is the average number of particle centers that
fall into the circular ring at distance $r$ from a central particle center
(arbitrarily selected and averaged over all particle centers in the system),
$2\pi r\Delta r$ is the area of the circular ring, and $\rho$ is the number
density of the system Torquato (2002); Atkinson et al. (2013). The static
structure factor $S(k)$ is the Fourier counterpart, and for computational
purposes, $S({k})$ is the angular-averaged version of $S({\bf k})$, which can
be obtained directly from the particle positions ${\bf r}_{j}$, i.e.,
$S({\bf k})=\frac{1}{N}\left|{\sum_{j=1}^{N}\exp(i{\bf k}\cdot{\bf
r}_{j})}\right|^{2}\quad({\bf k}\neq{\bf 0}),$ (3)
where $N$ is the total number of points in the system Zachary and Torquato
(2009); Atkinson et al. (2013); Chen et al. (2014). The trivial forward
scattering contribution (${\bf k}=0$) in Eq. 3 is omitted, which makes Eq. 3
completely consistent with the aforementioned definition of $S(k)$ in the
ergodic infinite-system limit Torquato (2018). To compute $\sigma_{N}^{2}(R)$,
we randomly place circular observation windows with radius $R$ in the system,
and count the number of particles $N(R)$ that fall into the observation
window, which is a random variable. The variance associated with $N(R)$ is
denoted by $\sigma_{N}^{2}(R)\equiv\langle N(R)^{2}\rangle-\langle
N(R)\rangle^{2}$, which measures the density fluctuations of particles within a
window of radius $R$. In this work we sample 100,000 windows at each window
radius $R$ to obtain $\sigma_{N}^{2}(R)$.
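Both quantities follow directly from the particle coordinates. The sketch below (our own; the choice of wavevector grid and the angular averaging of $S(\mathbf{k})$ over shells of $|\mathbf{k}|$ are omitted) illustrates Eq. 3 and the window-sampling estimate of $\sigma_{N}^{2}(R)$ in a periodic square box.

```python
import numpy as np

def structure_factor(points, kvecs):
    """S(k) from Eq. 3: |sum_j exp(i k . r_j)|^2 / N for each wavevector.
    points : (N, 2) particle positions; kvecs : (M, 2) nonzero wavevectors."""
    phases = np.exp(1j * kvecs @ points.T)            # (M, N) array of e^{i k.r}
    return np.abs(phases.sum(axis=1)) ** 2 / len(points)

def number_variance(points, box, R, n_windows=100_000, rng=None):
    """sigma_N^2(R) by dropping circular windows of radius R at random
    into a periodic square box of side `box` (valid for R < box/2)."""
    rng = np.random.default_rng() if rng is None else rng
    counts = np.empty(n_windows)
    for m in range(n_windows):
        d = points - rng.uniform(0, box, size=2)
        d -= box * np.round(d / box)                  # minimum-image convention
        counts[m] = np.count_nonzero((d ** 2).sum(axis=1) < R ** 2)
    return counts.var()                               # <N^2> - <N>^2
```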
On the other hand, the order metric $Q_{6}$ is defined as
$Q_{6}\equiv|\langle\Psi_{6}\rangle|,$ (4)
where
$\Psi_{6}({\bf r}_{i})=\frac{1}{n_{i}}\sum_{j=1}^{n_{i}}e^{i6\theta_{ij}},$ (5)
and $\langle\cdots\rangle$ denotes ensemble average, $n_{i}$ is the number of
neighbors of vertex $i$ located at ${\bf r}_{i}$, and $\theta_{ij}$ is the
polar angle associated with the vector from vertex $i$ to the $j$-th bonded
neighbor of vertex $i$. The bond-orientational correlation function $C_{6}(r)$
is defined as
$C_{6}(r)\equiv\langle\Psi_{6}({\bf r}_{i})\Psi^{*}_{6}({\bf r}_{j})\rangle\big|_{r=|{\bf r}_{i}-{\bf r}_{j}|},$ (6)
where $\Psi^{*}_{6}$ is the complex conjugate of $\Psi_{6}$. In practice, to
calculate $C_{6}(r)$, for each pair of particles located at ${\bf r}_{i}$ and
${\bf r}_{j}$, we compute $\Psi_{6}({\bf r}_{i})\Psi^{*}_{6}({\bf r}_{j})$,
and bin the results according to the distance $r=|{\bf r}_{i}-{\bf r}_{j}|$.
We note that $Q_{6}=1$ and $C_{6}(r)=1$ for a perfect triangular network;
while for the isotropic fluid phase, $Q_{6}\approx 0$ and $C_{6}(r)$ decays
with an exponential envelope at large $r$ Zahn et al. (1999); Wierschem and
Manousakis (2011).
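All three bond-orientational quantities follow directly from the vertex positions and the bonding network; a minimal sketch (our own naming, assuming each vertex's neighbor list is available) is:

```python
import numpy as np

def psi6(positions, neighbors):
    """Per-vertex order parameter Psi_6 (Eq. 5); neighbors[i] lists the
    bonded neighbors of vertex i."""
    psi = np.empty(len(positions), dtype=complex)
    for i, nbrs in enumerate(neighbors):
        v = positions[nbrs] - positions[i]
        psi[i] = np.exp(6j * np.arctan2(v[:, 1], v[:, 0])).mean()
    return psi

def q6(psi):
    """Global order metric Q_6 = |<Psi_6>| (Eq. 4)."""
    return abs(psi.mean())

def c6(positions, psi, r_edges):
    """C_6(r) (Eq. 6): bin Psi_6(r_i) Psi_6*(r_j) by pair distance; the
    imaginary part averages out for an isotropic system."""
    iu = np.triu_indices(len(positions), k=1)
    d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)[iu]
    w = np.real(psi[:, None] * np.conj(psi)[None, :])[iu]
    num, _ = np.histogram(d, bins=r_edges, weights=w)
    cnt, _ = np.histogram(d, bins=r_edges)
    return num / np.maximum(cnt, 1)
```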
### III.1 Bound dislocations
We first present the results of the inherent structures with primarily bound
dislocations at different $p$. In particular, as shown in Fig. 3(a)-(c), the
structure factor $S(k)$ decreases to essentially zero as $k$ approaches zero,
and the local number variance $\sigma_{N}^{2}(R)$ grows roughly linearly with
$R$ at large $R$, indicating the hyperuniformity of these inherent
structures. The pair correlation function $g_{2}(r)$ decays to its long-range
value over a short range of $r$, and the long-range value of $|g_{2}(r)-1|$
and the magnitudes of the Bragg peaks in $S(k)$ also decrease significantly as
$p$ increases, indicating the possible loss of translational order in these
systems as dislocations are introduced into the system. However, we note that
the absence of Bragg peaks alone does not guarantee that the underlying
structure is truly amorphous, since long-range order can be hidden at the two-
point level, but still can be present at higher-point levels, as explicitly
demonstrated by Klatt et al. in the context of random, uncorrelated
displacements of particles on a lattice Klatt et al. (2020). There are clear
wiggles in $S(k)$ at large $k$ and significant oscillations in $g_{2}(r)$ as
well, which are manifestations of the remaining short-range structures in the
defected networks. We further analyze the small-wavenumber behavior of $S(k)$,
and find that the exponent $\alpha$ in $S(k)\sim k^{\alpha}$ oscillates around
2, as shown in Fig. 3(d), demonstrating that bound dislocations preserve the
class-I hyperuniformity of the triangular lattice.
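The exponent reported in Fig. 3(d) can be extracted by a simple log-log fit over a small-$k$ window; a sketch (ours, where the cutoff `k_max` is a choice rather than a value from this work) is:

```python
import numpy as np

def small_k_exponent(k, S, k_max):
    """Fit S(k) ~ k^alpha for 0 < k < k_max via least squares in log-log."""
    m = (k > 0) & (k < k_max) & (S > 0)
    alpha, _ = np.polyfit(np.log(k[m]), np.log(S[m]), 1)
    return alpha
```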
We also analyze the bond-orientational order of the inherent structures. The
results of $Q_{6}$ and $C_{6}(r)$ are shown in Fig. 3(e) and 3(f),
respectively. It can be clearly seen that $Q_{6}$ decreases rapidly as $p$
increases, indicating the loss of the global preferred orientation of the
lattice. On the other hand, $C_{6}(r)$ decays to its long-range value rapidly
over a short length scale regardless of $p$, which can be attributed to the
fact that the bound dislocations are randomly introduced in the system, and
the spatial correlations of defect positions are minimized. The long-range
value of $C_{6}(r)$ also decreases as $p$ increases, indicating the loss of
large-scale orientational correlation as bound dislocations are introduced.
Figure 3: (Color online) Statistics associated with inherent structures
containing primarily bound dislocations at different defect concentrations $p$
with N=10,800 particles. (a) Structure factor $S(k)$. (b) Local number
variance $\sigma_{N}^{2}(R)$. (c) Log-log plot of $|g_{2}(r)-1|$. (d) Small-
wavenumber scaling exponent $\alpha$ of $S(k)$. (e) Bond-orientational order
metric $Q_{6}$. (f) Bond-orientational order correlation function $C_{6}(r)$.
### III.2 Free dislocations
We employ similar procedures to investigate inherent structures containing
primarily free dislocations, and the computed statistics are shown in Fig. 4.
Interestingly, there are many structural similarities that these inherent
structures share with inherent structures containing primarily bound
dislocations. For example, these inherent structures also preserve the class-I
hyperuniformity of the triangular lattice, as manifested by the fact that
$S(k)$ essentially decreases to zero with an approximately quadratic scaling
as $k$ approaches zero and $\sigma_{N}^{2}(R)$ increases linearly as $R$
increases at large $R$. Both $g_{2}(r)$ and $C_{6}(r)$ decay to their
respective long-range values over a short length scale, and $Q_{6}$ decreases
rapidly as $p$ increases, indicating the loss of large-scale structural order
as free dislocations are introduced into the system. However, we note that at
the same $p$, free dislocations degrade the translational and orientational
order of the triangular lattice much more than bound dislocations, as
evidenced by $g_{2}(r)$, $Q_{6}$, and $C_{6}(r)$. This is not surprising, since
the impact of bound dislocations is much more localized than that of free
dislocations.
It is noteworthy that in colloidal systems during 2D melting, as free
dislocations begin to emerge, the systems start to enter the hexatic phase
regime, and the total correlation function $h(r)\equiv g_{2}(r)-1$ and
$C_{6}(r)$ typically show an exponential and an algebraic decay,
respectively Zahn et al. (1999); Wierschem and Manousakis
(2011). These behaviors are distinctly different from those of our inherent
structures containing primarily free dislocations, where $h(r)$ and $C_{6}(r)$
decay to their respective long-range values over a short range of $r$ and
oscillate around certain constants afterwards. These differences may be
attributed to the fact that in our systems, the free dislocations are
introduced in a mostly uncorrelated manner, while in those colloidal systems
during 2D melting, the free dislocations arise as a result of thermal
excitation and possess certain degrees of spatial correlation. Our results
suggest that not only the types of topological defects, but also their spatial
correlations affect the structural behaviors of the defected lattices.
Figure 4: (Color online) Statistics associated with inherent structures
containing primarily free dislocations at different defect concentrations $p$
with N=10,800 particles. (a) Structure factor $S(k)$. (b) Local number
variance $\sigma_{N}^{2}(R)$. (c) Log-log plot of $|g_{2}(r)-1|$. (d) Small-
wavenumber scaling exponent $\alpha$ of $S(k)$. (e) Bond-orientational order
metric $Q_{6}$. (f) Bond-orientational order correlation function $C_{6}(r)$.
### III.3 Disclinations
Next, we investigate inherent structures containing primarily isolated
disclinations, and the results are shown in Fig. 5. Clearly, the inherent
structures preserve the class-I hyperuniformity of the triangular lattice, and
the translational and orientational order of the system are greatly degraded by the
introduced disclinations. Remarkably, both $h(r)$ and $C_{6}(r)$ decay to
their respective long-range values over a short range of $r$ and oscillate
around certain constants afterwards. These results are surprising since
isolated disclinations caused by thermal excitation in colloidal systems were
previously known to induce large-scale structural distortions and to drive the
systems into isotropic liquids, which essentially lose translational and
orientational order and are generally not
hyperuniform Zahn et al. (1999); Wierschem and Manousakis (2011). In
particular, in those systems $h(r)$ and $C_{6}(r)$ both exhibit an exponential
decay as $r$ increases Zahn et al. (1999); Wierschem and Manousakis (2011).
These different behaviors can be attributed to the fact that in our systems
the disclinations are placed into the system at random, which does not affect
large-scale density fluctuations or orientational correlations. Nonetheless,
we find that in our systems disclinations degrade the translational and
orientational order much more than bound and free dislocations at comparable
defect concentration, which is not surprising given that disclinations cause
larger-scale structural distortions than dislocations.
Figure 5: (Color online) Statistics associated with inherent structures
containing disclinations at different defect concentrations $q$ with N=10,800
particles. (a) Structure factor $S(k)$. (b) Local number variance
$\sigma_{N}^{2}(R)$. (c) Log-log plot of $|g_{2}(r)-1|$. (d) Small-wavenumber
scaling exponent $\alpha$ of $S(k)$. (e) Bond-orientational order metric
$Q_{6}$. (f) Bond-orientational order correlation function $C_{6}(r)$.
## IV Continuum theory of hyperuniformity in inherent structures containing
topological defects
In this section, we devise a continuum theory to explain our observations from
Sec. III of the impact of the topological defects on hyperuniformity, i.e.,
why the topological transformations involving dislocations and disclinations
preserve the class-I hyperuniformity of the original triangular-lattice
crystal. We note that the introduction of topological defects preserves the
total number of particles in the system, i.e., no particles are removed or
added. Therefore, any impact on the local number density fluctuations
results from the perturbation of particle positions at the cores of the
defects and the associated elastic displacement field. Specifically, we assume
that the particle displacement (at low defect concentrations) at position
$\mathbf{x}$ is the linear superposition of the displacements introduced by
different topological defects at $\mathbf{r}_{1}$, $\cdots$, $\mathbf{r}_{M}$,
where $M$ is the number of topological defects, i.e.,
$\mathbf{u}(\mathbf{x})=\sum_{i=1}^{M}\mathbf{f}(\mathbf{x}-\mathbf{r}_{i}).$
(7)
Therefore, the average displacement field
$\langle\mathbf{u}(\mathbf{x})\rangle$ is given by
$\begin{split}\langle\mathbf{u}(\mathbf{x})\rangle&=\int\sum_{i=1}^{M}\mathbf{f}(\mathbf{x}-\mathbf{r}_{i})P_{M}(\mathbf{r}^{M})\,d\mathbf{r}^{M}\\
&=\int\mathbf{f}(\mathbf{x}-\mathbf{r}_{1})\rho_{1s}(\mathbf{r}_{1})\,d\mathbf{r}_{1}\\
&=\rho_{s}\int\mathbf{f}(\mathbf{r})\,d\mathbf{r},\end{split}$ (8)
where $P_{M}(\mathbf{r}^{M})$ is the probability density function Torquato and
Stillinger (2003) associated with finding defects $1,2,\cdots,M$ at position
$\mathbf{r}_{1}$, $\mathbf{r}_{2}$, $\cdots$, $\mathbf{r}_{M}$, and
$\rho_{ms}(\mathbf{r}^{m})$ $(m<M)$ is the reduced generic density function
Torquato and Stillinger (2003) of the defects defined as
$\rho_{ms}(\mathbf{r}^{m})=\frac{M!}{(M-m)!}\int\cdots\int
P_{M}(\mathbf{r}^{M})d\mathbf{r}^{M-m},$ (9)
and because of statistical homogeneity, the one-point density function
$\rho_{1s}(\mathbf{r}_{1})$ is equal to the average defect density $\rho_{s}$
in the system. Similarly, the different components of the displacement-
displacement correlation
$\mathbf{\Psi}_{\mu\nu}(\mathbf{r}=\mathbf{y}-\mathbf{x})\equiv\langle
u_{\mu}(\mathbf{x})u_{\nu}(\mathbf{y})\rangle-\langle
u_{\mu}(\mathbf{x})\rangle\langle u_{\nu}(\mathbf{y})\rangle$ are given by
$\begin{split}\Psi_{\mu\nu}(\mathbf{r})&=\int\sum_{i=1}^{M}\sum_{j\neq i}^{M}f_{\mu}(\mathbf{x}-\mathbf{r}_{i})f_{\nu}(\mathbf{y}-\mathbf{r}_{j})P_{M}(\mathbf{r}^{M})\,d\mathbf{r}^{M}\\
&\quad+\int\sum_{i=1}^{M}f_{\mu}(\mathbf{x}-\mathbf{r}_{i})f_{\nu}(\mathbf{y}-\mathbf{r}_{i})P_{M}(\mathbf{r}^{M})\,d\mathbf{r}^{M}\\
&\quad-\int\rho_{s}^{2}f_{\mu}(\mathbf{x}-\mathbf{r}_{1})f_{\nu}(\mathbf{y}-\mathbf{r}_{2})\,d\mathbf{r}_{1}d\mathbf{r}_{2}\\
&=\int\rho_{s}^{2}h_{s}(\mathbf{r}_{2}-\mathbf{r}_{1})f_{\mu}(\mathbf{x}-\mathbf{r}_{1})f_{\nu}(\mathbf{y}-\mathbf{r}_{2})\,d\mathbf{r}_{1}d\mathbf{r}_{2}\\
&\quad+\int\rho_{s}f_{\mu}(\mathbf{x}-\mathbf{r}_{1})f_{\nu}(\mathbf{y}-\mathbf{r}_{1})\,d\mathbf{r}_{1},\end{split}$ (10)
where $h_{s}(\mathbf{r})\equiv
g_{2s}(\mathbf{r})-1=[\rho_{2s}(\mathbf{r})-\rho_{s}^{2}]/\rho_{s}^{2}$ is the
total correlation function of the topological defects. If the topological
defects are randomly introduced into the system, then $h_{s}(\mathbf{r})=0$,
which gives
$\begin{split}\Psi_{\mu\nu}(\mathbf{r})&=\int\rho_{s}f_{\mu}(\mathbf{x}-\mathbf{r}_{1})f_{\nu}(\mathbf{y}-\mathbf{r}_{1})\,d\mathbf{r}_{1}\\
&=\int\rho_{s}f_{\mu}(\mathbf{r}_{1})f_{\nu}(\mathbf{r}_{1}+\mathbf{r})\,d\mathbf{r}_{1}.\end{split}$ (11)
In Fourier space, this corresponds to
$\tilde{\Psi}_{\mu\nu}(\mathbf{k})=\rho_{s}\tilde{f}_{\mu}(\mathbf{k})\tilde{f}_{\nu}^{*}(\mathbf{k})=\rho_{s}\tilde{f}_{\mu}(\mathbf{k})\tilde{f}_{\nu}(-\mathbf{k}),$
(12)
where $\tilde{\mathbf{\Psi}}$, $\tilde{\mathbf{f}}$ are the Fourier transforms
of $\mathbf{\Psi}$ and $\mathbf{f}$, respectively, and
$\tilde{\mathbf{f}}^{*}$ is the complex conjugate of $\tilde{\mathbf{f}}$.
Next, we derive the expression for the structure factor $S(k)$ of a triangular
lattice affected by displacement fields $\mathbf{u}$. Previously, it was shown
Kim and Torquato (2018) in general that when the displacement field has a
finite variance $\langle|\mathbf{u}|^{2}\rangle$ and the displacement-
displacement correlation matrix is isotropic and diagonal, i.e.,
$\Psi_{\mu\nu}(\mathbf{r})=\delta_{\mu\nu}\Psi(\mathbf{r})$, the structure
factor $S(\mathbf{k})$ of a displaced hyperuniform point pattern at small
$|\mathbf{k}|$ is approximated by
$\begin{split}S(\mathbf{k})&\approx\left[|\mathbf{k}|^{2}\Psi(\mathbf{0})+(1-|\mathbf{k}|^{2}\Psi(\mathbf{0}))S_{0}(\mathbf{k})\right]\\
&\quad+\rho_{0}|\mathbf{k}|^{2}\left(\tilde{\Psi}(\mathbf{k})+\int d\mathbf{r}\,h_{0}(\mathbf{r})\Psi(\mathbf{r})e^{-i\mathbf{k}\cdot\mathbf{r}}\right),\end{split}$ (13)
where $S_{0}(\mathbf{k})$ and $h_{0}(\mathbf{r})$ are the structure factor and
total correlation function of the original point patterns. In the cases where
the original point patterns are crystals, we can use the properties of
crystals to simplify Eq. 13. In particular, the structure factor
$S_{0}(\mathbf{k})=0$ holds for $|\mathbf{k}|<K$, where $K$ is the wavenumber
associated with the first Bragg peaks, and the pair correlation function
$g_{20}(\mathbf{r})=h_{0}(\mathbf{r})+1$ is simply a collection of
$\delta$-functions at lattice sites and zero otherwise. By Taylor-expanding
the second line of Eq. 13 at small $k$ and invoking the continuum
approximation, we obtain the following expression:
$\begin{split}S(k)&\approx k^{2}\Psi(0)+\rho_{0}k^{2}\tilde{\Psi}(0)\\
&=\rho_{s}k^{2}\int f_{1}^{2}(\mathbf{r})\,d\mathbf{r}+\rho_{0}\rho_{s}k^{2}|\tilde{f}_{1}(0)|^{2}\\
&=\rho_{s}k^{2}\int f_{2}^{2}(\mathbf{r})\,d\mathbf{r}+\rho_{0}\rho_{s}k^{2}|\tilde{f}_{2}(0)|^{2}.\end{split}$ (14)
Interestingly, Eq. 14 suggests that as long as the volume integrals of
$\mathbf{f}(\mathbf{r})$ and $|\mathbf{f}(\mathbf{r})|^{2}$ are finite, the
structure factor $S(k)$ of the triangular lattice affected by the displacement
fields generated by the collection of randomly distributed source functions
$\mathbf{f}$ scales as $S(k)\sim k^{2}$, which indicates that such
displacement fields preserve the class-I hyperuniformity of the original
crystals.
Subsequently, we test our continuum theory against numerical examples of
inherent structures investigated in Sec. III. We first check whether the
assumptions of our theory are satisfied in these cases. In Fig. 6(a) we
visualize the magnitude of the displacement field $\mathbf{u}$ in an inherent
structure containing a single pair of bound dislocations. Clearly, the
displacement field concentrates around the center of the topological defect,
i.e., the center of the old broken bond (which is the same as the center of
the newly formed bond). We further compute the decay of $|\mathbf{u}(r)|$ as a
function of the distance $r$ from the core of the topological defect, which
appears to decay exponentially as shown in Fig. 6(b), although we note that
$|\mathbf{u}(\mathbf{r})|$ appears to be anisotropic around the core as shown
in Fig. 6(a). The exponential decay suggests that the volume integrals of the
source $\mathbf{f}$ and $|\mathbf{f}|^{2}$ should be finite.
We then look at the cases where a substantial number of topological defects
affect the structures. Specifically, in Fig. 7 we show the spatial
distribution of the vector displacement field $\mathbf{u}(\mathbf{r})$ for
three representative examples: an inherent structure containing primarily
bound dislocations at $p=0.17$, an inherent structure containing primarily
free dislocations at $p=0.04$, and an inherent structure containing
disclinations at $q=0.015$. All of the three fields in Fig. 7 appear to be
approximately isotropic, suggesting a finite $\Psi(0)$. We also compute the
different components of displacement-displacement correlation matrix $\Psi(r)$
for all three cases, and the results are shown in Fig. 8. The off-diagonal
component $\Psi_{12}$ appears to vanish, and the diagonal components are
roughly the same, i.e., $\Psi_{11}(r)\approx\Psi_{22}(r)$, which satisfies the
condition $\Psi_{\mu\nu}(\mathbf{r})=\delta_{\mu\nu}\Psi(\mathbf{r})$ in our
theory. In addition, the diagonal components $\Psi_{11}$ and $\Psi_{22}$ in
Fig. 8 show relatively short-ranged correlations, which are consistent with
the visualizations in Fig. 7.
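The components of $\Psi_{\mu\nu}(r)$ shown in Fig. 8 can be estimated by binning displacement products over particle pairs. A minimal, unoptimized $O(N^{2})$ sketch (our own), assuming displacements are measured from the reference lattice positions:

```python
import numpy as np

def displacement_correlation(ref_pos, u, r_edges):
    """Psi_{mu nu}(r) = <u_mu(x) u_nu(y)> - <u_mu><u_nu>, binned by the
    pair separation r = |y - x| on the reference lattice.

    ref_pos : (N, 2) reference lattice positions
    u       : (N, 2) displacement of each vertex
    """
    du = u - u.mean(axis=0)                       # subtract <u>
    iu = np.triu_indices(len(ref_pos), k=1)
    r = np.linalg.norm(ref_pos[:, None] - ref_pos[None, :], axis=-1)[iu]
    cnt, _ = np.histogram(r, bins=r_edges)
    psi = np.empty((2, 2, len(r_edges) - 1))
    for mu in range(2):
        for nu in range(2):
            w = (du[:, mu, None] * du[None, :, nu])[iu]
            num, _ = np.histogram(r, bins=r_edges, weights=w)
            psi[mu, nu] = num / np.maximum(cnt, 1)
    return psi                                    # psi[0, 1] is Psi_12, etc.
```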
With the assumptions in our theory largely satisfied by our numerical examples
in Sec. III, we proceed to investigate whether the numerically determined
small-$k$ behavior of $S(k)$ matches the prediction by our theory. Indeed, as
shown in Figs. 3-5, at low and intermediate defect concentrations $p$ and $q$,
the scaling exponent $\alpha$ in $S(k)\sim k^{\alpha}$ at small $k$ oscillates
around 2, matching the quadratic scaling predicted by our theory; however, at
very small or large $p$ and $q$ (relative to saturation), the exponent
$\alpha$ slightly deviates from 2. The deviation at very small $p$ or $q$ can
be attributed to the fact that at these defect concentrations the systems are
not entirely homogeneous, which degrades the accuracy of our continuum-theory
prediction; on the other hand, at large $p$ or $q$, the topological defects
begin to interact with each other and modify the inherent structures
accordingly, which is not taken into account in our continuum theory.
Figure 6: (Color online) Displacement field $\mathbf{u}$ in an inherent
structure containing a single pair of bound dislocations. (a) Spatial
distribution of the normalized $|\mathbf{u}(r)|/|\mathbf{u}|_{\max}$. (b) Decay of
$|\mathbf{u}(r)|$ as a function of the distance $r$ from the core of the
topological defect, i.e., the center of the old broken bond (the same as the
center of the newly formed bond), and the red solid line is an exponential fit
(note that the vertical axis is log-scale).
Figure 7: (Color online) (a) Spatial distribution of the vector displacement
field $\mathbf{u}(\mathbf{r})$ in an inherent structure containing primarily
bound dislocations at $p=0.17$ with $N=1200$ particles. (b) Spatial
distribution of the vector displacement field $\mathbf{u}(\mathbf{r})$ in an
inherent structure containing primarily free dislocations at $p=0.04$ with
$N=1200$ particles. (c) Spatial distribution of the vector displacement field
$\mathbf{u}(\mathbf{r})$ in an inherent structure containing disclinations at
$q=0.015$ with $N=1200$ particles.
Figure 8: (Color online) (a) Different components of the displacement-
displacement correlation matrix $\mathbf{\Psi}(r)$ of an inherent structure
containing primarily bound dislocations at $p=0.17$ with $N=10,800$ particles.
(b) Different components of the displacement-displacement correlation matrix
$\mathbf{\Psi}(r)$ of an inherent structure containing primarily free
dislocations at $p=0.04$ with $N=10,800$ particles. (c) Different components
of the displacement-displacement correlation matrix $\mathbf{\Psi}(r)$ of an
inherent structure containing disclinations at $q=0.015$ with $N=10,800$
particles.
## V Conclusions and Discussion
In this work, we made an attempt to elucidate a possible mechanism for the
observed hyperuniformity in disordered inherent structures of a wide spectrum
of systems. In particular, we considered a representative class of disordered
inherent structures which are linked to an original crystal state via
continuous topological transformations involving dislocations and
disclinations. We show via both numerical simulations and theoretical analysis
that these topological transformations preserve the class-I hyperuniformity of
the triangular lattice, and the structure factor $S(k)$ possesses a universal
quadratic scaling at small $k$ at low defect concentrations.
Our continuum theory connects the large-scale density fluctuations in these
inherent structures to the elastic displacement fields resulting from the
topological defects. It indicates that class-I hyperuniformity can be
preserved as long as the displacement fields resulting from individual defects
decay fast enough from the source (i.e., the volume integrals of the
displacements and squared displacements caused by individual defects are
finite) and the displacement-displacement correlation matrix of the system is
diagonal and isotropic. Conceptually, the introduction of topological
defects into a crystal does not affect the average particle density of the
system (since the total number of particles is conserved), and any change in
density fluctuations of the resulting disordered inherent structures could
only come from the elastic displacement fields caused by the topological
defects. As long as these elastic fields are homogenized and sufficiently
localized, the salient features of large-scale density fluctuations of the
original crystal, i.e., hyperuniformity, should be preserved. These results
suggest promising new avenues for the discovery, design, and generation of
novel disordered hyperuniform materials.
Moreover, the inherent structures containing dislocations and disclinations
studied in this work are quite different from the equilibrium structures
containing the same type of topological defects in colloidal systems during 2D
melting Zahn et al. (1999); Wierschem and Manousakis (2011) in terms of
various structural features, in particular the hyperuniformity and the
large-$r$ scaling behavior of $g_{2}(r)$ and $C_{6}(r)$. These differences
suggest that not only the types of defects, but also the spatial correlations
of defects are key to understanding the impact of defects on the structural
features of crystalline systems. These results also highlight the cloaking
effect of thermal fluctuations on hyperuniformity, and indicate that when
looking for disordered hyperuniformity, in many cases one should probably look
at potential-energy minima (local or global), which are only functions of the
positional degrees of freedom and not affected by the vibrational degrees of
freedom.
Here we studied the introduction of topological defects into the triangular
lattice, but given the duality of the triangular and honeycomb lattices,
the duals of the various inherent structures obtained in this work are similar
to the network structures obtained previously Chen et al. (2021) that are
topologically transformed from the honeycomb lattice through the introduction
of Stone-Wales (SW) defects. Interestingly, those defected honeycomb network
structures were found to capture the salient structural features of amorphous
graphene and other 2D materials Chen et al. (2021). However, we note that it
was previously demonstrated that as the SW defect concentration reaches a
certain critical value, the corresponding systems change from class-I
hyperuniformity to class-II hyperuniformity. This can be attributed to the
fact that in those systems the bond angles were also regulated because of the
underlying chemical constraints Chen et al. (2021); the additional coupling
between defects likely modifies the behavior of large-scale density
fluctuations and leads to the change in the class of hyperuniformity at
sufficiently large defect concentrations.
It is also noteworthy that topological defects such as dislocations and
disclinations appear as point defects in two dimensions, while in three
dimensions they are known to appear as line defects. Given this difference, it
would be interesting to explore the impact of topological defects on the
large-scale structural features of crystals in three dimensions, and the
potential findings may shed light on the emergence of hyperuniformity in
certain disordered inherent structures. For example, there were theories
Nelson (1983) suggesting that MRJ packings (or random close packings, as termed
by many experimentalists) might be considered disordered topological
variants of tetrahedral particle packings, just as Frank-Kasper
phases Nelson (1983); Reddy et al. (2018); Barbon et al. (2020) are considered
ordered topological variants.
###### Acknowledgements.
We are grateful to Dr. Jaeuk Kim for very helpful discussions.
## References
* Torquato and Stillinger (2003) S. Torquato and F. H. Stillinger, Phys. Rev. E 68, 041113 (2003).
* Torquato (2018) S. Torquato, Phys. Rep. 745, 1 (2018).
* Zachary and Torquato (2009) C. E. Zachary and S. Torquato, J. Stat. Mech. Theor. Exp. 2009, P12015 (2009).
* Ornstein and Zernike (1914) L. S. Ornstein and F. Zernike, Proc. Akad. Sci. 17, 793 (1914).
* Gabrielli et al. (2002) A. Gabrielli, M. Joyce, and F. S. Labini, Phys. Rev. D 65, 083523 (2002).
* Donev et al. (2005) A. Donev, F. H. Stillinger, and S. Torquato, Phys. Rev. Lett. 95, 090604 (2005).
* Zachary et al. (2011a) C. E. Zachary, Y. Jiao, and S. Torquato, Phys. Rev. Lett. 106, 178001 (2011a).
* Jiao and Torquato (2011) Y. Jiao and S. Torquato, Phys. Rev. E 84, 041309 (2011).
* Chen et al. (2014) D. Chen, Y. Jiao, and S. Torquato, J. Phys. Chem. B 118, 7981 (2014).
* Zachary and Torquato (2011) C. E. Zachary and S. Torquato, Phys. Rev. E 83, 051133 (2011).
* Torquato et al. (2015) S. Torquato, G. Zhang, and F. H. Stillinger, Phys. Rev. X 5, 021020 (2015).
* Uche et al. (2004) O. U. Uche, F. H. Stillinger, and S. Torquato, Phys. Rev. E 70, 046122 (2004).
* Batten et al. (2008) R. D. Batten, F. H. Stillinger, and S. Torquato, J. Appl. Phys. 104, 033504 (2008).
* Batten et al. (2009) R. D. Batten, F. H. Stillinger, and S. Torquato, Phys. Rev. Lett. 103, 050602 (2009).
* Lebowitz (1983) J. L. Lebowitz, Phys. Rev. A 27, 1491 (1983).
* Zhang et al. (2015a) G. Zhang, F. H. Stillinger, and S. Torquato, Phys. Rev. E 92, 022119 (2015a).
* Zhang et al. (2015b) G. Zhang, F. H. Stillinger, and S. Torquato, Phys. Rev. E 92, 022120 (2015b).
* Kurita and Weeks (2011) R. Kurita and E. R. Weeks, Phys. Rev. E 84, 030401(R) (2011).
* Hunter and Weeks (2012) G. L. Hunter and E. R. Weeks, Rep. Prog. Phys. 75, 066501 (2012).
* Dreyfus et al. (2015) R. Dreyfus, Y. Xu, T. Still, L. A. Hough, A. G. Yodh, and S. Torquato, Phys. Rev. E 91, 012302 (2015).
* Hexner and Levine (2015) D. Hexner and D. Levine, Phys. Rev. Lett. 114, 110602 (2015).
* Jack et al. (2015) R. L. Jack, I. R. Thompson, and P. Sollich, Phys. Rev. Lett. 114, 060601 (2015).
* Weijs et al. (2015) J. H. Weijs, R. Jeanneret, R. Dreyfus, and D. Bartolo, Phys. Rev. Lett. 115, 108301 (2015).
* Torquato et al. (2008) S. Torquato, A. Scardicchio, and C. E. Zachary, J. Stat. Mech. Theor. Exp. 2008, P11019 (2008).
* Feynman and Cohen (1956) R. P. Feynman and M. Cohen, Phys. Rev. 102, 1189 (1956).
* Jiao et al. (2014) Y. Jiao, T. Lau, H. Hatzikirou, M. Meyer-Hermann, J. C. Corbo, and S. Torquato, Phys. Rev. E 89, 022721 (2014).
* Mayer et al. (2015) A. Mayer, V. Balasubramanian, T. Mora, and A. M. Walczak, Proc. Natl. Acad. Sci. USA 112, 5950 (2015).
* Hejna et al. (2013) M. Hejna, P. J. Steinhardt, and S. Torquato, Phys. Rev. B 87, 245204 (2013).
* Klatt et al. (2019) M. A. Klatt, J. Lovrić, D. Chen, S. C. Kapfer, F. M. Schaller, P. W. A. Schönhöfer, B. S. Gardiner, A. Smith, G. E. Schröder-Turk, and S. Torquato, Nat. Commun. 10, 1 (2019).
* Lei et al. (2019) Q.-L. Lei, M. P. Ciamarra, and R. Ni, Sci. Adv. 5, eaau7423 (2019).
* Lei and Ni (2019) Q.-L. Lei and R. Ni, Proc. Natl. Acad. Sci. U.S.A. 116, 22983 (2019).
* Chremos and Douglas (2018) A. Chremos and J. F. Douglas, Phys. Rev. Lett. 121, 258002 (2018).
* Rumi et al. (2019) G. Rumi, J. A. Sánchez, F. Elías, R. C. Maldonado, J. Puig, N. R. C. Bolecek, G. Nieva, M. Konczykowski, Y. Fasano, and A. B. Kolton, Phys. Rev. Res. 1, 033057 (2019).
* Sánchez et al. (2019) J. A. Sánchez, R. C. Maldonado, N. R. C. Bolecek, G. Rumi, P. Pedrazzini, M. I. Dolz, G. Nieva, C. J. van der Beek, M. Konczykowski, C. D. Dewhurst, et al., Commun. Phys. 2, 1 (2019).
* Sánchez et al. (2020) J. A. Sánchez, G. Rumi, R. C. Maldonado, N. R. C. Bolecek, J. Puig, P. Pedrazzini, G. Nieva, M. I. Dolz, M. Konczykowski, C. J. van der Beek, et al., Sci. Rep. 10, 1 (2020).
* Huang et al. (2021) M. Huang, W. Hu, S. Yang, Q.-X. Liu, and H. P. Zhang, Proc. Natl. Acad. Sci. U.S.A. 118 (2021).
* Torquato (2021) S. Torquato, Proc. Natl. Acad. Sci. U.S.A. 118 (2021).
* Florescu et al. (2009) M. Florescu, S. Torquato, and P. J. Steinhardt, Proc. Natl. Acad. Sci. U.S.A. 106, 20658 (2009).
* Man et al. (2013) W. Man, M. Florescu, E. P. Williamson, Y. He, S. R. Hashemizad, B. Y. C. Leung, D. R. Liner, S. Torquato, P. M. Chaikin, and P. J. Steinhardt, Proc. Natl. Acad. Sci. U.S.A. 110, 15886 (2013).
* Zheng et al. (2020) Y. Zheng, L. Liu, H. Nan, Z.-X. Shen, G. Zhang, D. Chen, L. He, W. Xu, M. Chen, Y. Jiao, et al., Sci. Adv. 6, eaba0826 (2020).
* Chen et al. (2021) D. Chen, Y. Zheng, L. Liu, G. Zhang, M. Chen, Y. Jiao, and H. Zhuang, Proc. Natl. Acad. Sci. U.S.A. 118, e2016862118 (2021).
* Zheng et al. (2021) Y. Zheng, D. Chen, L. Liu, Y. Liu, M. Chen, H. Zhuang, and Y. Jiao, Phys. Rev. B 103, 245413 (2021).
* Anderson (1958) P. W. Anderson, Phys. Rev. 109, 1492 (1958).
* Zachary et al. (2011b) C. E. Zachary, Y. Jiao, and S. Torquato, Phys. Rev. E 83, 051308 (2011b).
* Zachary et al. (2011c) C. E. Zachary, Y. Jiao, and S. Torquato, Phys. Rev. E 83, 051309 (2011c).
* Atkinson et al. (2013) S. Atkinson, F. H. Stillinger, and S. Torquato, Phys. Rev. E 88, 062208 (2013).
* Batten et al. (2011) R. D. Batten, F. H. Stillinger, and S. Torquato, J. Chem. Phys. 135, 054104 (2011).
* Weber and Stillinger (1985) T. A. Weber and F. H. Stillinger, Phys. Rev. B 31, 1954 (1985).
* Halperin and Nelson (1978) B. I. Halperin and D. R. Nelson, Phys. Rev. Lett. 41, 121 (1978).
* Nelson (1983) D. R. Nelson, Phys. Rev. B 28, 5515 (1983).
* Kosterlitz and Thouless (1973) J. M. Kosterlitz and D. J. Thouless, J. Phys. C 6, 1181 (1973).
* Young (1979) A. Young, Phys. Rev. B 19, 1855 (1979).
* Nelson and Halperin (1979) D. R. Nelson and B. I. Halperin, Phys. Rev. B 19, 2457 (1979).
* Zahn et al. (1999) K. Zahn, R. Lenke, and G. Maret, Phys. Rev. Lett. 82, 2721 (1999).
* Wierschem and Manousakis (2011) K. Wierschem and E. Manousakis, Phys. Rev. B 83, 214108 (2011).
* Li and Ciamarra (2018) Y.-W. Li and M. P. Ciamarra, Phys. Rev. Mater. 2, 045602 (2018).
* Torquato (2002) S. Torquato, _Random Heterogeneous Materials: Microstructure and Macroscopic Properties_ (Springer-Verlag, New York, 2002).
* Klatt et al. (2020) M. A. Klatt, J. Kim, and S. Torquato, Phys. Rev. E 101, 032118 (2020).
* Kim and Torquato (2018) J. Kim and S. Torquato, Phys. Rev. B 97, 054105 (2018).
* Reddy et al. (2018) A. Reddy, M. B. Buckley, A. Arora, F. S. Bates, K. D. Dorfman, and G. M. Grason, Proc. Natl. Acad. Sci. U.S.A. 115, 10233 (2018).
* Barbon et al. (2020) S. M. Barbon, J.-A. Song, D. Chen, C. Zhang, J. Lequieu, K. T. Delaney, A. Anastasaki, M. Rolland, G. H. Fredrickson, M. W. Bates, et al., ACS Macro Lett. 9, 1745 (2020).
# Leveraging Contextual Information for Robustness in Vehicle Routing Problems
Ali İrfan Mahmutoğulları
<EMAIL_ADDRESS>
KU Leuven, Leuven, Belgium Tias Guns
<EMAIL_ADDRESS>
KU Leuven, Leuven, Belgium
###### Abstract
We investigate the benefit of using contextual information in data-driven
demand predictions to solve the robust capacitated vehicle routing problem
with time windows. Instead of estimating the demand distribution or its mean,
we introduce contextual machine learning models that predict demand quantiles
even when the number of historical observations for some or all customers is
limited. We investigate the use of such predicted quantiles to make routing
decisions, comparing deterministic with robust optimization models.
Furthermore, we evaluate the efficiency and robustness of the decisions
obtained, both using exact or heuristic methods to solve the optimization
models. Our extensive computational experiments show that using a robust
optimization model and predicting multiple quantiles is promising when
substantial historical data is available. In scenarios with a limited demand
history, using a deterministic model with just a single quantile exhibits
greater potential. Interestingly, our results also indicate that the use of
appropriate quantile demand values within a deterministic model results in
solutions with robustness levels comparable to those of robust models. This is
important because, in most applications, practitioners use deterministic
models as the industry standard, even in an uncertain environment.
Furthermore, as they present fewer computational challenges and require only a
single demand value prediction, deterministic models paired with an
appropriate machine learning model hold the potential for robust decision-
making.
Keywords: Routing, Robust optimization, Quantile predictions, Contextual
information, Time windows
## 1 Introduction
The increasing availability of data in various industries has had a profound
impact on the applications of operations research (OR) in recent years.
Examples, as mentioned in Bertsimas et al., (2018), include transaction data
stored by retailers, order trends observed by suppliers, and real-time power
usage data in energy markets. This vast amount of transaction data is being
archived by retailers, amounting to terabytes of information. Similarly,
suppliers continuously monitor and track order patterns throughout their
supply chains. Furthermore, energy markets have access to data, including
global weather information, historical demand profiles and real-time power
consumption data. This trend has sparked research focused on integrating
machine learning (ML) and OR, with a strong emphasis on data analysis.
On the other hand, in this rapidly evolving and data-rich environment,
practitioners face the challenge of ensuring the feasibility and quality of
their decisions. Robust optimization models provide a viable solution by
generating tractable problems and delivering solutions that are comparable to
those obtained through alternative methods, given the appropriate uncertainty
sets for the uncertain problem parameters (see, for example, Ben-Tal and
Nemirovski, (2002) and Bertsimas et al., (2011) for a deeper discussion of
robust optimization models). These models prove particularly valuable when
dealing with problems where no distributional information is available for the
problem parameters.
Another recent development in the OR community is the use of contextual
information in decision-making. Contextual information, also known as features
or covariates, refers to auxiliary data that exhibits a correlation with the
actual problem parameters. For instance, the demand for a specific product by
a customer is influenced by the customer’s profile. Thus, in practical
applications, the price of a new product is determined by observing the
customers’ features and predicting their demands, rather than by observing
their true demands. Another example is a real-life scenario
encountered by a large logistics service provider (LSP), which also motivates
our study on the utilization of contextual information for addressing the
robust capacitated vehicle routing problem with time windows (RCVRPTW). The
LSP provides routing services to over 6,000 customers and the demand values
for these customers exhibit uncertainty. Notably, while more than half of all
customers have only one recorded demand observation, a mere 3% of customers
possess over 30 demand observations. This contrast renders accurate
forecasting of each customer’s demand value impossible. However, the LSP
possesses contextual information pertaining to customer orders, including
origin, destination and packaging type. It is known that contextual
information exhibits a correlation with demand values. Occasionally, the LSP
encounters new customers lacking a demand history; nonetheless, certain
contextual information about these new customers remains available, such as
the location of the customers, the types of goods they ship and possibly some
contextual information about the orders, such as packaging types.
Consequently, the LSP aims to devise optimal vehicle routes by effectively
predicting customer demand, thereby enabling the provision of services at a
minimal cost. Moreover, the LSP would like to ensure the robustness of vehicle
routes to avoid infeasible solutions after realizing the actual value of
customer demands since they require expensive recourse actions.
In order to make routing decisions for its vehicles, the LSP uses demand
estimates of customers, denoted by the vector $\tilde{d}$. However, due to the
uncertainty of demand values, some of the routes obtained with $\tilde{d}$ may
become infeasible after the actual demand realization $d$. In such cases, the
vehicles have to diverge from their routes at some point for replenishment or
unloading before serving the remaining customers on the route. Consider a
given route $(0,i_{1},i_{2},\dots,i_{K},0)$ where 0 denotes the depot and
$i_{1},i_{2},\dots,i_{K}$ are the customers served on this route. If the
traveling cost from $i$ to $j$ is $c_{ij}$ then the initial cost of the route
is $c_{0i_{1}}+c_{i_{1}i_{2}}+\cdots+c_{i_{K-1}i_{K}}+c_{i_{K}0}$. Now, assume
that for a specific demand vector $d$, a vehicle with capacity $Q$ can serve
the first $k<K$ customers on the route but is unable to serve the next
customer; that is, $d_{i_{1}}+\cdots+d_{i_{k}}\leq Q$ but $d_{i_{1}}+\cdots+d_{i_{k+1}}>Q$. In
this scenario, the vehicle should visit the depot instead of customer
$i_{k+1}$ immediately after serving customer $i_{k}$ as a recourse action,
incurring an additional cost of $c_{i_{k}0}+c_{0i_{k+1}}-c_{i_{k}i_{k+1}}$ due
to this detour. Note that it is also possible to have multiple detours on a
single route. Then, the overall operational cost is the sum of the initial
route cost and all the required recourse costs due to detours. Another
important measure for evaluating the robustness of solutions is the amount of
time window violations and the percentage of customers that cannot be served
within their specified time windows. Although the planned routes may satisfy
the service time window constraints of all customers, delays caused by detours
can lead to potential violations for some of the customers and result in
additional costs due to customer dissatisfaction.
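To make this cost accounting concrete, the following minimal Python sketch (our own naming; it assumes each detour fully restores the vehicle's capacity before service resumes) evaluates a planned route against a realized demand vector:

```python
def route_cost_with_recourse(route, c, d, Q):
    """Total operational cost of a planned route (0, i1, ..., iK, 0) under
    realized demands: the initial travel cost plus the detour cost of
    every forced depot visit.

    route : vertex sequence starting and ending at the depot 0
    c     : c[i][j], travel cost from i to j
    d     : d[i], realized demand of customer i (d[0] = 0)
    Q     : vehicle capacity
    """
    planned = sum(c[i][j] for i, j in zip(route, route[1:]))
    recourse, load = 0.0, 0.0
    for cur, nxt in zip(route[1:], route[2:]):  # pairs (i_k, i_{k+1})
        load += d[cur]                          # serve customer i_k
        if nxt != 0 and load + d[nxt] > Q:      # next customer does not fit
            recourse += c[cur][0] + c[0][nxt] - c[cur][nxt]
            load = 0.0                          # unload/replenish at the depot
    return planned + recourse
```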
This fact raises the question of how to obtain demand predictions,
$\tilde{d}$, that lead to the best decisions in terms of total operating cost
and robustness of the solution against time window violations. One approach is
to estimate the demand distribution for each customer separately. However,
this may not always be possible due to limited demand realization data for
each customer, as in the LSP case. Alternatively, one can make the routing
decisions based on the mean or average demand predictions for the customers.
However, particularly when demand variability is high, relying solely on
predicted means can result in routes with a large number of customers. This
can be problematic, especially if the actual demand realizations significantly
exceed the mean demand predictions, as it may lead to infeasible routes and
hence expensive recourse actions. In this case, predicting an overestimation,
e.g., a value above the expected demand, will create a buffer for demand
realizations where the true value is higher than the expected value. A
technical way to achieve this is to predict not the expected value but a
quantile, using quantile regression at a specific quantile level. For instance, for a 0.6
quantile value, the prediction will be such that, in expectation, 60% of the
demand realizations will be below the predicted value. Different quantile
values will hence create different amounts of buffer in their predictions while
automatically taking the variance observed in the data into account without
any assumption on the distribution (e.g., the buffer at the 0.6 quantile
for a highly volatile customer will be much larger than that of a low-variance
customer with the same expected mean).
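Concretely, such quantile predictions can be obtained from any learner trained on the pinball (quantile) loss. The sketch below is our own illustration using scikit-learn's gradient boosting; the paper's actual linear and neural-network prediction models are described in Section 5.

```python
from sklearn.ensemble import GradientBoostingRegressor

def fit_quantile_model(X, y, tau):
    """Fit a regressor whose predictions approximate the tau-quantile of
    demand given contextual features, by minimizing the pinball loss.

    X   : (n_samples, n_features) contextual information (e.g., encoded
          origin, destination, packaging type)
    y   : (n_samples,) observed demand values
    tau : target quantile level in (0, 1)
    """
    return GradientBoostingRegressor(loss="quantile", alpha=tau).fit(X, y)

# A single 0.6-quantile prediction for the deterministic model, or a
# (0.1, 0.9) pair to build an interval for the robust model, e.g.:
#   d_tilde = fit_quantile_model(X_hist, y_hist, 0.6).predict(X_new)
```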
Therefore, in this paper, we propose the use of predicted demand quantiles
obtained from the contextual information of the customers to make robust
decisions in vehicle routing applications. The use of quantiles offers two
distinct advantages. Firstly, it does not require any assumptions regarding
the distribution of customer demand; instead, it directly predicts the
quantile values. Secondly, in contrast to mean predictions, it enables the
decision-maker to craft solutions with a degree of robustness by predicting
values beyond the mean or median, such as a quantile for the demands. In the
deterministic formulation of the problem, a single demand prediction is made
for each customer. However, in the robust formulation where demand uncertainty
is explicitly represented in the model, multiple predictions are required
(e.g., two values are needed to define an interval). These predictions are
utilized to create the uncertainty set in the robust formulation, which
includes all possible realizations of the demand values.
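As an illustration of how two predicted quantiles could feed a robust model, the following sketch (ours, not the paper's exact formulation, which is given in Section 3) computes the worst-case load of a route under a budgeted interval uncertainty set in the spirit of Bertsimas and Sim, (2003):

```python
import numpy as np

def worst_case_route_load(d_lo, d_hi, gamma):
    """Worst-case total demand on a route when each customer's demand
    lies in [d_lo_i, d_hi_i] (e.g., predicted 0.1- and 0.9-quantiles)
    and at most `gamma` demands (an integer budget) may sit at their
    upper bounds simultaneously."""
    d_lo = np.asarray(d_lo, dtype=float)
    d_hi = np.asarray(d_hi, dtype=float)
    dev = np.sort(d_hi - d_lo)[::-1]       # largest deviations first
    return d_lo.sum() + dev[:gamma].sum()
```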
Therefore, in this paper, we explore the following research questions:
* •
Can quantile predictions for customer demands be used in the context of
deterministic or robust capacitated vehicle routing problems with time windows
(CVRPTW) when the number of past observations is limited for some or all
customers?
* •
Can the utilization of contextual information from existing and new customers
facilitate the prediction of uncertainty sets for demands in RCVRPTW, thereby
enabling the formulation of robust decisions?
* •
Is it possible for non-expert practitioners to obtain robust solutions by
solving the deterministic CVRPTW using appropriately selected quantile
predictions?
We provide answers to these questions and contribute to the existing
literature in the following ways:
* (i)
We propose contextual prediction models that enable the estimation of customer
demand quantiles or means, even in cases where the number of past observations
is limited for some or all customers. To the best of our knowledge, this study
is the first to consider contextual prediction models in vehicle routing
problems. These models are designed to make predictions for both existing and
new customers based on contextual information such as the type of good,
location of the customer, etc. We explore both linear and non-linear
prediction models, where the non-linear models are defined by neural networks.
The predicted quantile values facilitate the solution of CVRPTW and its robust
counterpart, RCVRPTW, enhancing the decision-making process in terms of cost
and robustness.
* (ii)
We assess the robustness of decisions obtained using mathematical models or
heuristics when quantile demand predictions are used. For small problem
instances, mathematical models can be effectively solved using commercial
mixed-integer programming (MIP) solvers. However, for larger instances,
solvers are not able to find the optimal solution at reasonable solution
times, and therefore heuristic methods can be employed to solve the problem
efficiently. Therefore, we extend the standard implementation of an adaptive
large neighborhood search (ALNS) based heuristic for the deterministic problem
CVRPTW to its robust counterpart RCVRPTW.
* (iii)
We demonstrate that using appropriate quantile demand values in a
deterministic formulation yields solutions that are, in almost all cases, at
least as robust as those obtained from the robust models in the data-scarce setting. This
finding carries significance since the formulation and solution methods for
the robust model may pose challenges for practitioners, while the
deterministic formulation may be preferred by them. Also, the deterministic
models are industry standards, as they are widely utilized in practice.
The rest of the paper is structured as follows: In Section 2, we provide a
comprehensive review of the related literature. In Section 3, we present the
notation, formal definition of the problem and mathematical models for both
the CVRPTW and its robust counterpart, the RCVRPTW. In Section 4, we present
an ALNS heuristic for solving large instances that are not solvable in a
reasonable time with the mathematical models presented in Section 3.
Prediction of quantiles using contextual information is explained in Section
5. The results of our computational experiments are discussed in Section 6.
Finally, we present our concluding remarks and possible extensions in Section
7.
## 2 Literature Review
The literature review section is divided into four subsections to provide a
comprehensive overview of the relevant research. The first subsection focuses
on the related literature concerning robust vehicle routing problems. The
second subsection covers studies that explore the generation of uncertainty
sets in the context of robust optimization problems. The third subsection is
dedicated to studies that specifically explore the utilization of contextual
information in decision-making processes. Lastly, the fourth subsection
presents studies that focus on quantile prediction methods.
### 2.1 Robust Vehicle Routing Problem
Due to the extensive body of literature on vehicle routing problems (VRP),
numerous studies have been dedicated to this particular OR application.
However, for the purpose of this literature review, our focus will primarily
be on the robust VRP literature as it pertains specifically to the problem
addressed in this paper. For a detailed discussion on general VRPs, we refer
interested readers to Laporte, (1992) and Toth and Vigo, (2002).
While uncertainty in the context of VRPs has also been studied in several
works, the majority of the existing work focuses on stochastic programming
models. These models typically assume that the uncertain problem parameters
follow known probability distributions. However, this assumption may not hold
true for many real-life applications. Oyola et al., (2018) present detailed
models and discussions on stochastic VRPs. While stochastic VRP models have
received significant attention, robust optimization approaches in the VRP
context have been less explored.
To the best of our knowledge, Sungur et al., (2008) are the first to address
robust VRPs with uncertain demand values. They propose mathematical models
with Miller-Tucker-Zemlin (MTZ) constraints that incorporate different types
of uncertainty sets, namely convex hull, box, and ellipsoidal. They show that
solving these robust models is computationally equivalent to solving the
deterministic version of the problem with augmented demand values based on the
shape of the uncertainty set. Sungur et al., (2008) focus on worst-case
robustness, meaning that the solutions obtained from their models are feasible
for all demand realizations within the defined uncertainty set. This overly
conservative approach ensures that the solutions are always feasible under any
realization of the demand values. In a more general setting, Bertsimas and
Sim, (2003) prove that any robust combinatorial optimization problem can be
solved by solving its deterministic counterpart a polynomial number of times
if the uncertain parameters are in the objective. Thus, robust VRP with
uncertain arc cost values can be solved by solving $n+1$ deterministic VRPs
where $n$ is the number of arcs. They consider a budgeted uncertainty set
where a parameter $\Gamma$ adjusts the degree of robustness of the solution.
Gounaris et al., (2013) developed exact robust counterparts of the
deterministic capacitated VRP (CVRP) using different formulations, including
two-index, Miller-Tucker-Zemlin, commodity flow, and vehicle assignment
formulations. For each formulation, the authors established necessary and
sufficient conditions to define a robust feasible set of routes. Later,
Gounaris et al., (2016) propose a meta-heuristic for the robust CVRP under
polyhedral demand uncertainty. The key focus of their algorithm is to ensure
that the generated routes remain feasible for all demand realizations.
Munari et al., (2019) address RCVRPTW where both demand and travel time
values are uncertain. Firstly, they introduce a compact mathematical
formulation for the RCVRPTW, leveraging the recursive dynamic programming
representation of budgeted uncertainty sets. This formulation offers a simpler
representation of robustness than other existing formulations based on
dualization of the uncertainty sets. Moreover, it demonstrates improved
overall performance when solved using commercial solvers. Munari et al.,
(2019) also propose a branch-and-price method based on the set partition
formulation of the problem. The results of the computational experiments show
the importance of making robust decisions in the existence of uncertainty for
real-life applications.
Finally, we refer an interested reader to Ordóñez, (2010) for a tutorial
paper on robust VRP models.
### 2.2 Generating Uncertainty Sets in Robust Problems
Traditionally, uncertainty sets in robust optimization problems are assumed to
be given and available to the decision maker. However, recent studies have
shown that it is also possible to construct these sets from the data
available to the decision maker.
Goldfarb and Iyengar, (2003) propose generating uncertainty sets using market
data and a parameter that controls the size of the uncertainty set in the
context of a portfolio optimization problem. They assume that the returns of
random assets are linearly dependent on the factors that drive the market. In
a related study, Tulabandhula and Rudin, (2014) adopt a similar approach by
focusing on designing uncertainty sets for a broad class of decision-making
problems while making minimal assumptions about the distribution. Their work
introduces uncertainty set design based on statistical learning theory.
Similar to our work, Tulabandhula and Rudin, (2014) propose a conditional
quantile prediction method to generate uncertainty sets for robust
optimization problems using contextual information. By utilizing conditional
quantiles, they provide a robustness guarantee within the robust formulation.
However, they do not explicitly mention specific applications or provide
computational results for this approach to generating uncertainty sets.
In their work, Bertsimas et al., (2018) introduce a novel framework for
constructing uncertainty sets in robust optimization using data and hypothesis
tests. Their approach leads to uncertainty sets that offer a probabilistic
guarantee and are typically smaller than sets derived from limited data. This
results in less conservative models compared to traditional robust approaches,
while preserving the same level of robustness guarantees.
Ohmori, (2021) presents an algorithm that uses minimum volume ellipsoids
enclosing $k$-nearest neighbors in feature space as uncertainty sets. The
algorithm employs a nonparametric prediction model, allowing for flexibility
in handling diverse data patterns without making strong distributional
assumptions. This approach provides robustness against prediction errors.
Moreover, the use of ellipsoidal uncertainty sets benefits from extensive
research on efficient algorithms in robust optimization.
In their recent study, Goerigk and Kurtz, (2023) applied unsupervised deep
learning methods to generate uncertainty sets for robust optimization. They
utilized machine learning techniques to uncover anomalies within the data,
leading to the creation of non-convex uncertainty sets. This approach allows
for the derivation of robust solutions by formulating the adversarial problem
as a convex quadratic mixed-integer program. In contrast, we employ contextual
information about the customers to predict multiple quantiles, which are used
for defining the uncertainty sets in the RCVRPTW model.
### 2.3 Contextual Optimization
The emerging paradigm of contextual optimization has attracted significant
interest from both the ML and OR communities. In contextual optimization
problems, decision-making is done without directly observing the uncertain
problem parameters, such as objective function or constraint coefficients.
Instead, decisions are made based on available contextual information, also
referred to as features or covariates.
In their respective works, Ban and Rudin, (2018) and Zhang et al., (2023)
tackle contextual newsvendor problems with different approaches. Ban and
Rudin, (2018) introduce algorithms based on Empirical Risk Minimization and
Kernel-weights Optimization, enabling decision-making solely based on observed
features and eliminating the need for a separate demand estimation step. They
also provide performance bounds specifically for the feature-driven decision
case. On the other hand, Zhang et al., (2023) propose a distributionally
robust framework utilizing the Wasserstein distance to handle uncertainties
associated with unseen data. Their focus is on preventing ill-defined policies
when faced with data that deviates from the observed historical distribution.
In the study conducted by Ban et al., (2019), the authors introduce the
residual tree method as a means to forecast demand for new products using
contextual information. By incorporating covariates and generating multiple
demand values within a scenario tree structure, their approach improves the
accuracy of demand forecasting and allows decision-makers to assess the
robustness of their decisions under various scenarios. Similarly, in the
research by Kannan et al., (2022), the authors predict point estimates and
out-of-sample residuals of leave-one-out models for generating scenarios,
enabling the use of sample average approximation (SAA) to approximate the true
distribution of problem parameters.
In their study, Bertsimas and Kallus, (2020) explore the use of contextual
information in prediction and decision-making. They propose a methodology that
incorporates predictive machine learning methods applied to feature values,
allowing decisions to be made based on a weighted combination of historical
values. The authors consider different approaches, such as $k$-nearest-
neighbors, kernel methods, local linear models, prediction trees, and
ensembles, to determine these weights. Furthermore, the paper investigates
decision-dependent parameters and examines the asymptotic behavior of the
prescriptions. Similarly, in the work of Lin et al., (2022), feature
information of existing products is leveraged to optimize the order quantity
of a new product based on its similarity to the existing products. By
utilizing contextual information, both studies aim to improve the decision-
making processes and achieve better performance in their respective domains.
In contrast to the studies that separate prediction and optimization,
decision-focused learning (DFL) approaches integrate prediction and
optimization into a single pipeline. DFL distinguishes itself from other
contextual methods by incorporating the decision error, which is dependent on
the downstream optimization problem, into the training process. One example is
the work by Bertsimas et al., (2019), where they introduce the concept of
optimal prescriptive trees where the loss to be minimized is a convex
combination of prediction and prescription errors. By obtaining prescriptions
based on feature values, this approach offers benefits such as
interpretability, scalability, generalizability and competitive performance
when compared to alternative methods.
In the study conducted by Chung et al., (2022), contextual optimization is
applied to optimize a healthcare supply chain. The authors propose a method
that learns a weighting of the points in the training data to minimize
decision loss, effectively incorporating contextual information into the
decision-making process.
In this work, we utilize contextual information about the customers to enhance
robust decision-making in the vehicle routing setting. To the best of our
knowledge, no existing literature has explored contextual optimization for
routing problems. An interested reader may refer to Sadana et al., (2023) for
a general discussion on contextual optimization models.
### 2.4 Quantile Regression
In statistics, the $\beta$ quantile of a random variable $X$ corresponds to
the minimum (or infimum) $x$ value such that the cumulative distribution
function $F(x)$ is equal to or greater than $\beta$. For instance, the $0.5$
quantile, also known as the median, is commonly used in statistics. It
represents the value for which half of the data falls below and half falls
above. Quantile regression, introduced by Koenker and Bassett, (1978), is a
statistical technique that focuses on predicting specific quantiles of a
random variable using observed covariates. Unlike traditional regression
methods that estimate the conditional mean, quantile regression provides
estimates for different quantiles of the response variable.
Quantile regression models aim to minimize a tilted absolute value function,
often referred to as the check function or pinball loss function. This
function places different weights on observations that fall above or below the
predicted quantile, which helps capture the asymmetric nature of the
quantiles, unlike the classical mean squared error (MSE). In Section 5, we
provide a more detailed explanation of quantile regression.
The relationship between covariates and predicted quantiles can be linear
(Koenker and Hallock, (2001)) or nonlinear, e.g., defined by a neural network
(Hatalis et al., (2019); Brando et al., (2022)). When multiple quantiles are
predicted simultaneously, a challenge known as quantile crossing may arise,
where the estimated quantile values exhibit an undesirable reversal in their
order as the quantile level increases. For instance, Dai et al., (2022)
propose using smoothing techniques to eliminate crossing quantiles and ensure
a coherent ordering of the predicted quantiles. This phenomenon is not the
main focus of our study and we adopt a simple strategy to mitigate it as
explained in Section 5.
## 3 Mathematical Model
We will first present the mathematical model for the deterministic CVRPTW for
the sake of clarity. Then, we will extend this model to its robust
counterpart. The problem involves a depot and a set of customers to serve,
denoted by $\mathcal{C}=\\{1,\ldots,n\\}$. The CVRPTW can be represented using
a graph $\mathcal{G}=(\mathcal{N},\mathcal{A})$, where
$\mathcal{N}=\mathcal{C}\cup\\{0,n+1\\}$ represents the set of nodes and
$\mathcal{A}=\\{(0,j):j\in\mathcal{C}\\}\cup\\{(i,j):i,j\in\mathcal{C},i\neq
j\\}\cup\\{(i,n+1):i\in\mathcal{C}\\}$ represents the set of arcs. Both nodes
0 and $n+1$ correspond to the depot node.
A vehicle’s route must begin at node 0, visit a subset of customers and
finally return to node $n+1$. The time and cost required to traverse arc
$(i,j)$ are denoted by $t_{ij}$ and $c_{ij}$, respectively. We assume that the
cost is proportional to the travel time. Each customer $i$ has a predicted
demand denoted by $\tilde{d}_{i}$, where $\tilde{d}_{0}=\tilde{d}_{n+1}=0$.
Recall that the routing decisions are made based on a predicted demand vector
$\tilde{d}$, not on the actual customer demands. Moreover, the vehicles have a
capacity denoted by $Q$ and the total demand of the customers served on the
same route cannot exceed this capacity. To serve a customer $i$, it must be
visited within a specific time window $[T^{min}_{i},T^{max}_{i}]$.
Additionally, each customer $i$ has a service time denoted by $s_{i}$.
The binary decision variable $x_{ij}$ indicates if arc $(i,j)$ is used or not.
The continuous decision variable $u_{i}$ indicates the amount of demand served
right after visiting node $i$ by the vehicle to which node $i$ is assigned.
Finally, the continuous
decision variable $w_{i}$ represents the visiting time of node $i$. If the
predicted demands of customers are given by
$\\{\tilde{d}_{i},i\in\mathcal{C}\\}$, we can use the following deterministic
model for CVRPTW:
$\displaystyle\min_{x,u,w}\;\;\sum_{(i,j)\in\mathcal{A}}c_{ij}x_{ij}$ (1)

s.t.

$\sum_{(i,j)\in\mathcal{A}}x_{ij}=1,\;\;\forall j\in\mathcal{C}$ (2)

$\sum_{(i,j)\in\mathcal{A}}x_{ij}-\sum_{(j,i)\in\mathcal{A}}x_{ji}=0,\;\;\forall i\in\mathcal{C}$ (3)

$\sum_{(0,j)\in\mathcal{A}}x_{0j}-\sum_{(i,n+1)\in\mathcal{A}}x_{i(n+1)}=0$ (4)

$u_{j}\geq u_{i}+\tilde{d}_{j}x_{ij}-Q(1-x_{ij}),\;\;\forall(i,j)\in\mathcal{A}$ (5)

$\tilde{d}_{i}\leq u_{i}\leq Q,\;\;\forall i\in\mathcal{C}$ (6)

$w_{j}\geq w_{i}+(s_{i}+t_{ij})x_{ij}-T(1-x_{ij}),\;\;\forall(i,j)\in\mathcal{A}$ (7)

$T^{min}_{i}\leq w_{i}\leq T^{max}_{i},\;\;\forall i\in\mathcal{C}$ (8)

$x_{ij}\in\\{0,1\\},\;\;\forall(i,j)\in\mathcal{A}$

$u_{i}\geq 0,\;w_{i}\geq 0,\;\;\forall i\in\mathcal{C}$
The objective (1) minimizes the total cost. Constraints (2),(3) and (4) define
the routes of vehicles. Constraints (5) and (6) ensure vehicle capacities are
not exceeded for the predicted demand values. Time window restrictions are
given by Constraints (7) and (8) where $T$ is a large positive number.
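To make the formulation concrete, the following gurobipy sketch builds model (1)-(8). It is only an illustration: the helper name `build_cvrptw` and the data containers (dictionaries keyed by nodes and arcs) are our own assumptions, not part of the paper.

```python
# A minimal gurobipy sketch of the deterministic CVRPTW model (1)-(8).
# Assumptions (as in the text): d_tilde[0] = d_tilde[n+1] = 0 and s[0] = 0.
import gurobipy as gp
from gurobipy import GRB

def build_cvrptw(C, A, c, t, s, d_tilde, Q, Tmin, Tmax, bigT):
    n1 = max(C) + 1                                  # returning depot copy n+1
    nodes = [0] + list(C) + [n1]
    m = gp.Model("CVRPTW")
    x = m.addVars(A, vtype=GRB.BINARY, name="x")     # arc usage
    u = m.addVars(nodes, lb=0.0, ub=Q, name="u")     # load after each node
    w = m.addVars(nodes, lb=0.0, name="w")           # visiting times

    m.setObjective(gp.quicksum(c[a] * x[a] for a in A), GRB.MINIMIZE)     # (1)
    for j in C:                                      # (2) enter each customer once
        m.addConstr(gp.quicksum(x[a] for a in A if a[1] == j) == 1)
    for i in C:                                      # (3) flow conservation
        m.addConstr(gp.quicksum(x[a] for a in A if a[0] == i)
                    == gp.quicksum(x[a] for a in A if a[1] == i))
    m.addConstr(gp.quicksum(x[a] for a in A if a[0] == 0)                 # (4)
                == gp.quicksum(x[a] for a in A if a[1] == n1))
    for (i, j) in A:
        # (5) MTZ-style load propagation with predicted demands
        m.addConstr(u[j] >= u[i] + d_tilde[j] * x[i, j] - Q * (1 - x[i, j]))
        # (7) time propagation with service and travel times
        m.addConstr(w[j] >= w[i] + (s[i] + t[i, j]) * x[i, j]
                    - bigT * (1 - x[i, j]))
    for i in C:
        u[i].LB = d_tilde[i]                         # (6)
        w[i].LB, w[i].UB = Tmin[i], Tmax[i]          # (8)
    return m, x
```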
For the robust counterpart, we assume that the demand values belong to
intervals, that is, the demand of customer $i$ is in interval
$[\overline{d}_{i}-\hat{d}_{i},\overline{d}_{i}+\hat{d}_{i}]$ for some base
value $\overline{d}_{i}$ and deviation $\hat{d}_{i}$. We can equivalently
write the demand of customer $i$ as $\overline{d}_{i}+\xi_{i}\hat{d}_{i}$ with
$\xi_{i}\in[-1,1]$ for $i\in\mathcal{C}$ and consider a polyhedral uncertainty
set
$D=\\{d\in\mathbb{R}^{|\mathcal{C}|}:d_{i}=\overline{d}_{i}+\xi_{i}\hat{d}_{i},\sum_{i\in\mathcal{C}}\xi_{i}\leq\Gamma,\xi_{i}\in[0,1],i\in\mathcal{C}\\}$
(9)
In Equation (9), the parameter $\Gamma$ is called the budget of uncertainty.
Indeed, the parameter $\Gamma$ determines the level of risk aversion of the
decision-maker. A larger value of $\Gamma$ corresponds to a larger budget of
uncertainty, indicating that the decision-maker is more risk-averse. We also
assume that all demand values are positive and the worst-case scenario occurs
when the demand for customer $i\in\mathcal{C}$ is
$\overline{\overline{d}}_{i}=\overline{d}_{i}+\hat{d}_{i}$. A feasible route
satisfies Constraints (5) and (6) for all possible demand realizations within
the uncertainty set $D$. Previous work by Munari et al., (2019) employed a
recursive dynamic programming approach to represent the robust constraints
when the budget of uncertainty $\Gamma$ is an integer. In our study, we adopt
a similar methodology following their approach.
Let $(0,\ldots,i,j,\ldots,n+1)$ be a route and $u_{j\gamma}$ be the load of
the vehicle after serving customer $j$ if the worst $\gamma$ customers have
reached their worst-case demand realization before $j$. Then

$u_{j\gamma}=\max\left\\{u_{i\gamma}+\overline{d}_{j},\,u_{i(\gamma-1)}+\overline{\overline{d}}_{j}\right\\},\;\;\forall\gamma\in\\{1,\ldots,\Gamma\\},\;j\in\mathcal{C}$ (10)

$u_{j0}=u_{i0}+\overline{d}_{j}$

$u_{0\gamma}=0,\;\;\forall\gamma\in\\{0,\ldots,\Gamma\\}$

where the former term in the max operator of (10) corresponds to the case when
the demand of customer $j$ is $\overline{d}_{j}$ and the latter one
corresponds to the case when it is $\overline{\overline{d}}_{j}$. Thus, the
following model can be used to solve RCVRPTW, where equation (10) is
represented by a set of linear inequalities.
$\displaystyle\min_{x,u,w}\;\;\sum_{(i,j)\in\mathcal{A}}c_{ij}x_{ij}$ (11)

s.t. Constraints (2)-(4), (7) and (8), and

$u_{j\gamma}\geq u_{i\gamma}+\overline{d}_{j}x_{ij}-Q(1-x_{ij}),\;\;\forall(i,j)\in\mathcal{A},\forall\gamma\in\\{0,\ldots,\Gamma\\}$ (12)

$u_{j\gamma}\geq u_{i(\gamma-1)}+\overline{\overline{d}}_{j}x_{ij}-Q(1-x_{ij}),\;\;\forall(i,j)\in\mathcal{A},\forall\gamma\in\\{1,\ldots,\Gamma\\}$ (13)

$\overline{d}_{i}\leq u_{i\gamma}\leq Q,\;\;\forall i\in\mathcal{C},\forall\gamma\in\\{0,\ldots,\Gamma\\}$ (14)

$x_{ij}\in\\{0,1\\},\;\;\forall(i,j)\in\mathcal{A}$

$u_{i\gamma}\geq 0,\;\;\forall i\in\mathcal{C},\forall\gamma\in\\{0,\ldots,\Gamma\\}$

$w_{i}\geq 0,\;\;\forall i\in\mathcal{C}$
In the robust formulation, constraints (12) and (13) ensure that equation (10)
holds if customer $j$ is visited right after customer $i$. Constraint (14)
ensures that all routes are feasible in a robust sense since $u_{i\gamma}\leq
Q,\forall i\in\mathcal{C},\forall\gamma\in\\{0,\ldots,\Gamma\\}$ implies that
the vehicle capacities are respected for all demand realizations defined by
the uncertainty set (9). Also, note that the deterministic model requires a
demand prediction $\tilde{d}_{i}$ for each customer $i\in\mathcal{C}$.
However, the robust model requires a base demand $\overline{d}_{i}$ and a
worst-case demand value $\overline{\overline{d}}_{i}$ for each customer.
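As a sanity check of recursion (10) and constraint (14), the short Python sketch below evaluates the robust feasibility of a single route; the function name and the example data are hypothetical, not taken from the paper.

```python
# Check robust feasibility of one route via recursion (10): u[g] is the load
# after the current customer when the g "worst" customers visited so far have
# taken their worst-case demand realization.
def route_is_robust_feasible(route, d_bar, d_hat, Q, Gamma):
    """route: customers in visiting order (depot excluded); d_bar/d_hat:
    base demands and deviations; Q: capacity; Gamma: budget of uncertainty."""
    u = [0.0] * (Gamma + 1)                 # depot initial condition u_{0,gamma}=0
    for j in route:
        new_u = [0.0] * (Gamma + 1)
        new_u[0] = u[0] + d_bar[j]
        for g in range(1, Gamma + 1):
            new_u[g] = max(u[g] + d_bar[j],
                           u[g - 1] + d_bar[j] + d_hat[j])  # worst case d_bar + d_hat
        if max(new_u) > Q:                  # constraint (14) violated
            return False
        u = new_u
    return True

# Example: capacity 10, Gamma = 1 protects against one worst-case customer.
print(route_is_robust_feasible([1, 2], {1: 4, 2: 4}, {1: 3, 2: 3}, Q=10, Gamma=1))
# -> False, since 4 + (4 + 3) = 11 > 10
```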
## 4 Adaptive Large Neighbourhood Search
Although the mathematical models presented in the previous section are compact
models with two-index variables, large instances cannot be solved with them in
a reasonable time using commercial MIP solvers. Munari et al., (2019) state
that these formulations can solve small instances with 25 customers optimally
but can only find feasible solutions for problems with 100 customers in
running times of one hour. Therefore, in this section, we present a general
ALNS framework for the solution of larger instances.
ALNS was first proposed by Ropke and Pisinger, (2006) as an extension of the
large neighborhood search (LNS) method of Shaw, (1998) and is an efficient
meta-heuristic search algorithm. The LNS heuristic can search large neighborhoods
of a given solution using a pair of destroy and repair operations. Unlike LNS,
a set of destroy and repair operators are available in ALNS and these
operators are chosen based on their performances in the previous iterations.
We use the ALNS heuristic given in Algorithm 1 for solving CVRPTW or its
robust counterpart RCVRPTW.
Input: A list of customers $\mathcal{C}$,
Travel cost $c_{ij}$ and time $t_{ij}$ for each
$i,j\in\mathcal{C}\cup\\{0\\}$,
Vehicle capacity $Q$,
Service time windows $[T^{min}_{i},T^{max}_{i}]$ for each customer
$i\in\mathcal{C}$,
A predicted demand vector $\tilde{d}$ for CVRPTW,
(or a base demand vector $\overline{d}$ and a worst-case demand vector
$\overline{\overline{d}}$ for RCVRPTW,)
Set of destroy operators $\Psi^{-}$,
Set of repair operators $\Psi^{+}$,
Initial operator weight vector $W$.
create an initial feasible solution $s_{\text{min}}=s$;
while _stopping criteria not met_ do
Select $rep\in\Psi^{+}$, $des\in\Psi^{-}$ according to probabilities $p$
obtained from $W$;
$s^{\prime}\leftarrow rep(des(s))$;
if _accept( $s$, $s^{\prime}$)_ then
$s\leftarrow s^{\prime}$;
if _$c(s) <c(s_{\text{min}})$_ then
$s_{\text{min}}\leftarrow s$;
end if
end if
Update the weight vector $W$;
end while
return _$s_{\text{min}}$_ ;
Algorithm 1 A general ALNS framework for CVRPTW and RCVRPTW.
The algorithm starts with a set of routes obtained by a greedy algorithm where
the routes are generated by adding the nearest unassigned customer to a route
as long as the vehicle capacity is not exceeded. If it is not possible to
serve the next customer with the current vehicle capacity, a new route starts
from the depot. At each iteration of ALNS with the set of destroy and repair
operators $\Psi^{-}$ and $\Psi^{+}$, respectively, a destroy operator
$des\in\Psi^{-}$ and a repair operator $rep\in\Psi^{+}$ are selected to create
another feasible solution $s^{\prime}$ to the problem given the existing
solution $s$. To improve the exploration, even if the new solution
$s^{\prime}$ is worse than $s$ in terms of the objective, we can make an
update $s\leftarrow s^{\prime}$ using the $accept$ function. The $accept$
function in our heuristic is implemented using a simulated annealing
mechanism, facilitating enhanced solution exploration at the beginning of the
optimization process; that is, even if $s^{\prime}$ is worse than $s$ in terms
of the objective value, it is still accepted with probability
$e^{\frac{o(s)-o(s^{\prime})}{temp}}$ where $temp$ is the temperature
parameter and $o(s)$ is the objective function value of solution $s$. The
temperature parameter is decreased at each iteration to simulate cooling and
reduce exploration in the later iterations of the algorithm. ALNS is stopped
after running for a fixed amount of time and the best solution found in terms
of initial cost is recorded.
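For illustration, a minimal version of this acceptance rule could look as follows; the function and parameter names are ours, and $o(\cdot)$ denotes the objective value of a solution.

```python
# A sketch of the simulated-annealing accept rule described above.
import math
import random

def accept(s, s_prime, o, temp):
    if o(s_prime) <= o(s):          # improving moves are always accepted
        return True
    # worsening moves are accepted with probability exp((o(s) - o(s')) / temp)
    return random.random() < math.exp((o(s) - o(s_prime)) / temp)
```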
We use two destroy operators,
$\Psi^{-}=\\{random\\_removal,string\\_removal\\}$. The $random\\_removal$
operator removes a predefined number of customers from the routes of a
feasible solution randomly. The $string\\_removal$ operator removes partial
routes around a randomly selected customer. As for repair operators, we
utilize $\Psi^{+}=\\{greedy\\_repair,regret\\_repair\\}$. The
$greedy\\_repair$ approach inserts unassigned customers into existing routes
in a greedy manner. The $regret\\_repair$ operator first creates a list of
unassigned customers according to their decreasing regrets. Regret is
determined by calculating the cost difference between assigning a customer to
its best position and its second-best position. The $regret\\_repair$ operator
then inserts these unassigned customers from the list into the solution in a
greedy manner, following the order of the list.
ALNS has demonstrated its effectiveness in addressing various routing-type
problems, as shown by recent studies such as Hemmelmayr et al., (2012);
Adulyasak et al., (2012); Demir et al., (2012). Moreover, several open-
source libraries have been developed for the implementation of ALNS. In our
study, we use the ALNS library, which is a Python implementation of the ALNS
heuristic, provided by Wouda and Lan, (2023).
In order to obtain solutions for both mathematical models presented in Section
3 and the heuristic introduced in this section, some predictions of customer
demand values are needed. In the next section, we will present contextual
demand prediction methods with a special emphasis on quantile predictions.
## 5 Contextual Quantile Prediction
Assume that we have the demand history
$\mathcal{D}_{i}=\\{d_{i1},d_{i2},\ldots,d_{in_{i}}\\}$ and an assumption for
the demand distribution for each customer $i\in\mathcal{C}$. In that case, we
can predict the demand distribution of each customer individually when the
number of observations $n_{i}$ is large. For example, the value
$\overline{\mu}_{i}=\frac{1}{n_{i}}\sum_{l=1}^{n_{i}}d_{il}$ can be used to
predict the mean, and
$\frac{1}{n_{i}-1}\sum_{l=1}^{n_{i}}(d_{il}-\overline{\mu}_{i})^{2}$ can be
used to predict the variance for the demand distribution of customer $i$. In
this case, it becomes possible to sample demand values from the predicted
distribution and use them in the decision-making process, such as sample
average approximation for a stochastic optimization setting (see, for example,
Kleywegt et al., (2002)). However, in many practical applications, the number
of observations $n_{i}$ is small for some customers, making it challenging to
make assumptions about the demand distribution in such cases, which prohibits
the sampling process.
Alternatively, the $\beta$ quantile of the demand can be estimated by
minimizing the tilted loss (also called pinball loss) defined as

$\frac{1}{n_{i}}\sum_{l=1}^{n_{i}}\max(\beta(d_{il}-d),(\beta-1)(d_{il}-d))$
(15)
with respect to $d$ (see, for example, Koenker and Hallock, (2001) for a more
detailed discussion). Note that this quantile prediction requires no
assumption on the distribution of demands. The tilted loss assigns asymmetric
weight parameters to account for the under and over-prediction of quantiles.
Figure 1 shows how Equation (15) treats values of $d_{il}$ above and below the
predicted quantile $d$. However, as in the LSP case mentioned in the introduction, there
might be new customers with no prior demand data, resulting in $n_{i}=0$.
Figure 1: Tilted loss function
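A small numpy sketch makes the estimation in (15) concrete; searching over the observed values is sufficient because a minimizer of the pinball loss is an order statistic (the helper names are ours, not from the paper).

```python
# Estimate the beta quantile of observed demands by minimizing the tilted
# (pinball) loss (15) over the observed values themselves.
import numpy as np

def pinball_loss(d, demands, beta):
    e = demands - d  # positive where the candidate d under-predicts
    return np.mean(np.maximum(beta * e, (beta - 1) * e))

def quantile_estimate(demands, beta):
    candidates = np.sort(demands)
    losses = [pinball_loss(d, demands, beta) for d in candidates]
    return candidates[int(np.argmin(losses))]

demands = np.array([12.0, 15.0, 9.0, 14.0, 20.0, 11.0])
print(quantile_estimate(demands, beta=0.5))  # close to np.quantile(demands, 0.5)
```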
Although a small value of $n_{i}$ or $n_{i}=0$ prohibits individual demand
predictions for each customer, there might be available feature information
for all customers. In such cases, a contextual quantile prediction model can
be employed to obtain demand predictions from customer features. This approach
allows us to make predictions based on the contextual information shared by
all customers, thereby overcoming the limitations of sparse demand data for
specific individuals.
Let $f_{i}\in\mathbb{R}^{F}$ represent the feature vector available for each
customer $i$, where the feature information can include a combination of
continuous values, binary values and categorical values. Then, we have the
aggregated demand history
$\mathcal{D}=\\{(f_{i},d_{il}):i\in\mathcal{C},l\in\\{1,\ldots,n_{i}\\}\\}$.
In this case, it is possible to use a parametric function $q(f,\theta)$ as the
predictor, where $f$ and $\theta$ are feature and parameter vectors,
respectively. Then, predicting the mean demand entails finding the
optimal $\theta$ that minimizes the MSE loss function
$\frac{1}{|\mathcal{D}|}\sum_{i}\sum_{l=1}^{n_{i}}(d_{il}-q(f_{i},\theta))^{2}$.
The tilted loss function (15) can also be extended to the contextual case by
defining

$\frac{1}{|\mathcal{D}|}\sum_{i}\sum_{l=1}^{n_{i}}\max(\beta(d_{il}-q(f_{i},\theta)),(\beta-1)(d_{il}-q(f_{i},\theta)))$
(16)
as the loss function for our contextual quantile prediction model. The
function $q$ can take various forms depending on the complexity desired in the
prediction model. One straightforward option is to use an affine function,
such as $q(f,\theta)=\theta_{0}+\theta_{1}^{T}f$. However, to capture
nonlinear relationships, a more complex structure can be employed. For
example, $q(f,\theta)$ can be defined as the output of a feed-forward neural
network, with the parameters represented by $\theta$.
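As an illustration, a Keras sketch of such a network could look as follows, assuming the standard tf.keras API; the single hidden layer of 10 ReLU units anticipates the experimental setup described in Section 6, the features are assumed to be pre-normalized, and all names are our own.

```python
# A sketch of a contextual quantile network trained with loss (16).
import tensorflow as tf

def make_quantile_model(n_features, beta):
    def tilted_loss(y_true, y_pred):     # contextual pinball loss for level beta
        e = y_true - y_pred
        return tf.reduce_mean(tf.maximum(beta * e, (beta - 1.0) * e))

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_features,)),
        tf.keras.layers.Dense(10, activation="relu"),  # single hidden layer
        tf.keras.layers.Dense(1),                      # predicted quantile
    ])
    model.compile(optimizer="adam", loss=tilted_loss)
    return model

# Training sketch (features F assumed pre-normalized, demands d):
# model = make_quantile_model(n_features=6, beta=0.95)
# stop = tf.keras.callbacks.EarlyStopping(monitor="loss", patience=50)
# model.fit(F, d, epochs=1000, callbacks=[stop], verbose=0)
```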
Using the contextual quantile prediction model is also useful in order to
predict uncertainty intervals used in RCVRPTW. For example, it is possible to
predict $0.6$ and $0.95$ quantile values for $\overline{d}_{i}$ and
$\overline{\overline{d}}_{i}$, respectively, and solve RCVRPTW with these
predicted values. Note that occasionally, we can observe crossing quantiles,
that is, $\overline{\overline{d}}_{i}<\overline{d}_{i}$ for one customer after
the prediction. In this case, we simply let
$\overline{\overline{d}}_{i}\leftarrow\max\\{\overline{d}_{i},\overline{\overline{d}}_{i}\\}$
to avoid numerical problems in solving.
Before presenting the results of the computational experiments, we provide a
summary of the prediction and decision pipelines for CVRPTW and RCVRPTW in
Figures 2 and 3, respectively. In both pipelines, the decision-maker first
selects the method to predict demand quantiles (or means) based on observed
features. For solving the deterministic problem, a single quantile value is
predicted for each customer, while for the robust counterpart, two values
representing the base and worst-case demand values are predicted. These
predictions are used to define the uncertainty set $D$ in RCVRPTW. Following
prediction, the decision-maker can opt for either an exact method or a
heuristic approach to solve CVRPTW or RCVRPTW.
Figure 2: The complete process of decision-making for solving the
deterministic model CVRPTW.
Figure 3: The complete process of decision-making for solving the robust model
RCVRPTW.
## 6 Computational Experiments
In this section, we first provide a description of the data and the
experimental setup used in the computational experiments. Later, we present
the results obtained for both the small (25-customer) and large (100-customer)
instances.
For the experiments, we used a PC with 16 GB RAM and an i9-11900H @ 2.50GHz
processor. Python 3.10.9 was used for the experiments, and Gurobi
Optimization, LLC, (2023) version 10.0.1 was used to solve the mixed integer
programs. The details of the implementation and the code to reproduce the
experiments can be found at https://github.com/irfanmahmutogullari/Leveraging-Contextual-Information-for-Robustness-in-Vehicle-Routing-Problems.
### 6.1 Experiment Setting
In our experiments, we use Solomon, (1987) instances, a well-known benchmark
in the literature. The dataset includes 56 instances with 100 customers
grouped into three categories C (c101-109,c201-208), R (r101-112,r201-211) and
RC (rc101-108, rc201-208). To introduce customer features and randomness, we
augment the dataset artificially. For each customer $i\in\mathcal{C}$, we
create a 6-dimensional feature vector
$f_{i}=(f_{i0},f_{i1},f_{i2},f_{i3},f_{i4},f_{i5})$ where $f_{i0}$ is the
demand value of the customer in Solomon data. The other five features are
created by assigning a random value between 0 and 1. The features
$f_{i1},f_{i2},f_{i3},f_{i4}$ and $f_{i5}$ could potentially reflect the
economic and financial conditions of a customer, which can directly influence
the customer’s demand. The demand distribution $D_{i}$ of customer $i$ follows
$D_{i}\sim\tilde{D}_{i}|\tilde{D}_{i}\geq 0$ where
$\tilde{D}_{i}\sim\begin{cases}f_{i0}+\mathcal{N}(-10,1)&\text{with
probability }p_{1}\\\ f_{i0}+\mathcal{N}(-5,1)&\text{with probability
}p_{2}\\\ f_{i0}+\mathcal{N}(0,1)&\text{with probability }p_{3}\\\
f_{i0}+\mathcal{N}(5,1)&\text{with probability }p_{4}\\\
f_{i0}+\mathcal{N}(10,1)&\text{with probability }p_{5}\end{cases}$
where $\mathcal{N}(\mu,\sigma)$ is the normal distribution with mean $\mu$ and
standard deviation $\sigma$. The probabilities $p_{1},\ldots,p_{5}$ are
obtained by
$p_{k}=\frac{e^{f_{ik}}}{\sum_{k^{\prime}=1}^{5}e^{f_{ik^{\prime}}}}$ for
$k\in\\{1,\ldots,5\\}$ and used to introduce multimodality and asymmetry in
demand distribution.
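A numpy sketch of this sampling scheme is given below (the helper names are ours); negative draws are resampled to reflect the conditioning $\tilde{D}_{i}\geq 0$.

```python
# Sample demands from the five-component normal mixture around f_i0 with
# softmax mixture weights computed from features f_i1..f_i5.
import numpy as np

def sample_demand(f, rng, size=1):
    """f: 6-dim feature vector; f[0] is the Solomon demand, f[1:] in [0, 1]."""
    shifts = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])
    weights = np.exp(f[1:6])
    p = weights / weights.sum()                 # softmax probabilities p_1..p_5
    k = rng.choice(5, size=size, p=p)           # pick mixture components
    d = f[0] + shifts[k] + rng.normal(0.0, 1.0, size)
    # condition on non-negative demand by resampling negative draws
    while (neg := d < 0).any():
        k = rng.choice(5, size=int(neg.sum()), p=p)
        d[neg] = f[0] + shifts[k] + rng.normal(0.0, 1.0, int(neg.sum()))
    return d

rng = np.random.default_rng(0)
f_i = np.array([20.0, 0.3, 0.9, 0.1, 0.5, 0.7])
print(sample_demand(f_i, rng, size=5))
```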
Figure 4 illustrates the empirical demand distribution of a customer in
instance c101 for the given feature vector as an example. We also halved the
vehicle capacities in the original data set to observe the effect of demand
uncertainty more clearly.
Figure 4: Histogram of demand for 10,000 sampled scenarios. (Instance = c101)
In our experiments, we investigated various settings concerning the demand
history. We conducted tests for scenarios where we have demand history for all
100 customers (all), half of the customers (half), and a quarter of the
customers (quar). For customers with available demand history, the number of
observations ($n_{i}$) in the data history can take one of three values: 1,
10, and 30. As a result, we have a total of nine different settings, each
representing a different level of data.
After obtaining the demand data, a prediction model is employed to forecast
the demand values to be used in the decision-making process. When not using
contextual information, individual (I) mean or quantile predictions can be
made for customer $i\in\mathcal{C}$ if $n_{i}>1$. On the other hand, in the
contextual setting, both linear (L) and nonlinear (N) prediction models can be
used for predicting the mean and quantiles. In our experiments, we use a
straightforward feedforward neural network with a single hidden layer of 10
neurons with the ReLU activation function. The feature values are normalized
during the training. In order to create and train prediction models, we used
the Keras library. The maximum number of iterations in a training is set to
1,000 and the training is stopped if no improvement is made in the last 50
iterations.
Figure 5 illustrates the quantile predictions for the example customer
mentioned in Figure 4. The blue vertical lines correspond to the empirical
0.50, 0.55,…,0.95 quantile predictions obtained from sampling 10,000
scenarios. When there are 10 observed demand values (indicated by black “x”),
the decision maker can make non-contextual quantile predictions (indicated by
violet “x”) using the realized demands. However, these predictions are highly
influenced by the observed data and may not provide accurate quantile
estimates. On the other hand, the contextual quantile linear and nonlinear
predictions (indicated by a green and red “x”, respectively) offer improved
predictions that better align with the actual quantile values.
Figure 5: The blue vertical lines represent the empirical quantile predictions
obtained from sampling 10,000 scenarios. The black “x” represents the observed
demand values. The violet, green and red “x” represent individual, linear and
nonlinear quantile predictions (Instance = c101).
Using the predicted demand values, the decision maker can solve a
deterministic CVRPTW (D) or its robust counterpart RCVRPTW (R) to make routing
decisions. For small instances, it is feasible to solve the optimization
models presented in Section 3 optimally using a commercial solver. However,
for larger instances, finding the exact solution may not be possible within
reasonable time limits. In such cases, the heuristic presented in Section 4
can be employed to obtain near-optimal solutions efficiently.
For the deterministic models, we adopt the naming convention
D-$\delta$-$\epsilon$, where D refers to deterministic, $\delta\in\\{\text{I
(individual), L (linear prediction model), N (nonlinear prediction model)}\\}$
indicates the prediction method and $\epsilon\in\\{\text{M (indicating
mean)},50,55,\ldots,90,95\\}$ represents the mean or quantile prediction
utilized in the deterministic model. For instance, D-I-M indicates the
solution of the deterministic model, where the mean demand values are
individually used for each customer in the non-contextual setting. On the
other hand, D-N-60 indicates that the 60th quantile predictions are obtained
using the nonlinear prediction model in the deterministic problem.
Similarly, for the robust setting, we employ the naming convention
R-$\delta$-$\overline{\epsilon}$-$\hat{\epsilon}$-$\Gamma\gamma$, where R
denotes robust and $\delta\in\\{\text{I, L, N}\\}$ indicates the prediction
method. $\overline{\epsilon}$ and $\hat{\epsilon}$ indicate the levels of
$\overline{d}$ and $\hat{d}$, respectively, in the robust setting. Finally,
$\gamma$ denotes the budget parameter in the definition of the uncertainty set
(9). For example, R-L-M-95-$\Gamma$1 indicates that the mean and 95th
percentile predictions obtained by the linear prediction model are used for
the robust problem with $\Gamma=1$.
To evaluate the quality of the obtained routes, a simulation is performed by
sampling 10,000 demand values from the true distribution for each customer.
The total cost for these routes is calculated, which includes the initial cost
and the recovery cost. The initial cost of a route is the sum of the used arc
costs as given in (1) and (11). However, due to the uncertainty of demand
values, some routes may become infeasible after demand realization. In such
cases, the recovery cost is the total cost induced by the detours. We assume
that customer $i$ is served at the earliest possible time, which is the
minimum of $T^{min}_{i}$ and the reach time to customer $i$. If the service
beginning time of the customer exceeds $T^{max}_{i}$, a time window violation
occurs, indicating that the customer cannot be served within the specified
time window. Note that the initial routes satisfy time window constraints for
the predicted demand vector; however, detours caused by uncertain demand can
lead to potential violations.
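The following sketch illustrates one way to compute these two cost components for a single route. Since the exact detour policy is not spelled out above, it assumes the classical recourse of detouring to the depot to unload and returning to the same customer, and it omits the time-window bookkeeping for brevity; all names are illustrative.

```python
# Simulate one route under realized demands: the initial cost is the planned
# arc cost, the recovery cost accumulates depot detours triggered whenever
# serving the next customer would exceed the vehicle capacity.
def simulate_route(route, demand, cost, Q, depot=0):
    """route: customers in visiting order; demand: realized demands;
    cost[i][j]: travel cost between nodes i and j."""
    initial, recovery, load = 0.0, 0.0, 0.0
    prev = depot
    for j in route:
        initial += cost[prev][j]
        if load + demand[j] > Q:                         # route became infeasible
            recovery += cost[j][depot] + cost[depot][j]  # unload at depot, return
            load = 0.0
        load += demand[j]
        prev = j
    initial += cost[prev][depot]
    return initial, recovery
```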
### 6.2 Results for Small Instances
We first present the results of the experiments for the C instances with 25
customers. The objective of these experiments is to observe the performance of
the proposed setting when an exact solver is used to solve the MIP formulation
of the problem. By analyzing the results for these smaller instances, we can
gain insights and make informed decisions when scaling up to larger problem
instances.
For each Solomon instance, we consider the first 25 customers in the problem.
In total, there are 17 instances and for each instance, we considered 9
different settings for the available data. In these settings, all indicates
that the demand history exists for all 100 customers. On the other hand, for
half and quar, the demand history is available only for the last 50 and 25
customers in the original instance, respectively.
For each instance and setting, the following models are solved
* •
D-$\delta$-$\epsilon$ for all $\delta\in$ {I,L,N},
$\epsilon\in\\{M,50,55,\ldots,90,95\\}$
* •
R-$\delta$-$\overline{\epsilon}$-$\hat{\epsilon}$-$\Gamma\gamma$ for all
$\delta\in$ {I,L,N}, $\overline{\epsilon}\in\\{M,50,55\\}$,
$\hat{\epsilon}\in\\{90,95\\}$, $\gamma\in\\{1,2\\}$
for 5 randomly generated instances. (Note that, for half and quar the
customers considered in the problem do not have a demand history so the models
D-I-$\epsilon$ and R-I-$\overline{\epsilon}$-$\hat{\epsilon}$-$\Gamma\gamma$
are not available.) In total, there are 39,100 instances. Among these, we
applied a time limit of 60 seconds for each instance, and 20,322 instances
were solved optimally within this time constraint. For the remaining
instances, we kept the best solution obtained within the time limit.
Figure 6 shows the normalized total costs obtained from the simulations for
different models and settings. For each setting, only the 10 best-performing
models are presented. The costs are normalized by calculating their percentage
deviation from the base value for each instance. The base value is determined
as the best performance achieved by any model in the all setting with $n_{i}$
= 30. It is worth noting that this value can be negative, as a model from
another setting may occasionally outperform the best model in the all setting
with $n_{i}$ = 30.
Figure 6: Performance of models for different settings on instances with 25
customers. White and black diamonds represent the average values and outliers,
respectively.
As depicted in Figure 6, the most data-rich scenario, specifically setting all
with $n_{i}$ = 30, shows that the robust model performs best on average. The
model R-L-55-90-$\Gamma$1 exhibits an average cost increase of 2.77% (with a
range of 0% to 11.49%). Additionally, 9 out of the 10 best-performing models
are robust models, suggesting that in a data-rich environment, predicting
multiple quantiles and employing a robust model for decision-making is a
promising approach. Nevertheless, the deterministic model D-L-65 also shows a
competitive performance with a mean cost increase of 2.86% (with a range of 0%
to 13.99%), indicating that even the deterministic model can provide some
level of robustness by using the 65th quantile, a value greater than the mean
or median prediction.
Another observation from Figure 6 is the impact of the amount of available
information on the decision quality. As we move from all to quar, and from
$n_{i}=30$ to $n_{i}=1$, we observe an increase in the average total cost.
This outcome is intuitive, as having more data allows for better prediction,
leading to improved performance in the decision-making process.
As mentioned in the previous section, our aim is to achieve robustness in our
decisions. Interestingly, we observe that the solutions of the deterministic
models can also lead to robust decisions, especially in data-scarce settings.
For instance, when the setting is quar with $n_{i}=1$, the solution of D-N-70
gives the most robust result with an average cost increase of 4.72% (ranging
from 0% to 15.44%). Moreover, 7 out of the 10 best-performing models in terms
of total cost are deterministic models. This suggests that a more complex
prediction strategy, such as predicting multiple quantiles accurately, often
fails due to data scarcity. It is noteworthy that this result aligns with
Elmachtoub et al., (2023), which demonstrates that in cases with sufficient
available data, complex prediction models yield better performance in the
subsequent decision-making process.
Another advantage of deterministic models is that they generally require
simpler models and shorter solution times. Table 1 presents the average and
standard deviation of the running times for both the deterministic and robust
models. On average, the running time for the deterministic model is smaller
than that of the robust models. Additionally, when the level of robustness is
set high ($\Gamma=2$), the robust model requires even more time compared to
the low level of robustness $\Gamma=1$. Additionally, 56.25% of the
deterministic instances were solved optimally within the time limit. For the
cases where $\Gamma$ equals 1 or 2, this fraction is slightly lower at 51.89%
and 47.17%, respectively.
Table 1: The mean and standard deviation of running times (in seconds) for the deterministic and robust models and the fraction of instances solved optimally within the time limit.

| Model | Solution Time: Mean | Solution Time: St. Deviation | Fraction Solved Optimally |
|---|---|---|---|
| Deterministic | 33.94 | 25.35 | 56.25 % |
| Robust $\Gamma$ = 1 | 35.18 | 26.01 | 51.89 % |
| Robust $\Gamma$ = 2 | 38.53 | 24.35 | 47.17 % |
The final observation from Figure 6 is that contextual prediction strongly
outperforms non-contextual individual predictions in terms of robustness. Only
in the most data-rich setting, all with $n_{i}=30$, R-I-55-90-$\Gamma$1
appears among the 10 best-performing models. In all other cases, the
contextual prediction models demonstrate superior robustness compared to
individual predictions. This highlights the importance of utilizing contextual
information when making demand predictions, especially in scenarios with
limited data for individual customers.
Another analysis can be conducted regarding the violation of customer time
windows. As anticipated, the smallest number of time window violations can be
achieved with very conservative models such as D-$\delta$-95 or
R-$\delta$-$55$-$95$-$\Gamma 2$. The reason behind this is that these models
are overly pessimistic, making their route decisions based on excessively
high demand predictions. While this approach ensures that the resulting
solution is feasible for most demand realizations, it may not be cost-
effective.
Therefore, in Table 2, we present the relative increase in the fraction of
customers with time window violations for the models presented in Figure 6
with respect to the model D-$\delta$-95, $\delta\in$ {I,L,N} which gives the
lowest time window violation.
Table 2: Average percentage increase in the customers with time window violations compared to the most conservative model D-$\delta$-95 for each setting and number of observations ($n_{i}$) for small instances.

| $n_{i}$ | all | | half | | quar | |
|---|---|---|---|---|---|---|
| 30 | Model | Mean (Std) | Model | Mean (Std) | Model | Mean (Std) |
| D-L-65 | 9.76 (9.33) | D-N-65 | 10.11 (9.48) | R-N-M-90-$\Gamma$1 | 9.39 (9.28)
| R-L-50-95-$\Gamma$1 | 10.36 (9.41) | R-L-55-90-$\Gamma$1 | 10.57 (8.92) | R-L-50-90-$\Gamma$1 | 9.87 (8.79)
| R-L-55-90-$\Gamma$1 | 10.53 (9.94) | R-L-M-90-$\Gamma$1 | 10.59 (8.63) | D-L-65 | 9.9 (9.34)
| R-N-55-90-$\Gamma$1 | 10.8 (8.79) | R-N-55-90-$\Gamma$1 | 10.68 (8.63) | R-N-M-95-$\Gamma$1 | 10.22 (9.3)
| R-N-55-95-$\Gamma$1 | 10.81 (8.77) | R-L-M-95-$\Gamma$1 | 10.88 (9.67) | R-L-M-95-$\Gamma$1 | 10.57 (8.54)
| R-L-M-95-$\Gamma$1 | 11.28 (9.76) | R-L-50-95-$\Gamma$1 | 10.9 (9.53) | R-N-50-90-$\Gamma$1 | 10.86 (9.45)
| R-N-M-95-$\Gamma$1 | 11.28 (9.76) | R-L-55-95-$\Gamma$1 | 11.05 (9.37) | R-L-50-95-$\Gamma$1 | 10.87 (9.64)
| R-I-55-90-$\Gamma$1 | 11.32 (9.3) | D-N-60 | 11.4 (8.53) | R-L-55-95-$\Gamma$1 | 10.9 (8.97)
| R-L-55-95-$\Gamma$1 | 11.56 (9.61) | D-L-65 | 11.65 (9.54) | R-L-M-90-$\Gamma$1 | 11.06 (8.61)
| R-L-M-90-$\Gamma$1 | 11.78 (9.46) | R-L-50-90-$\Gamma$1 | 11.84 (10.26) | R-L-55-90-$\Gamma$1 | 11.64 (8.39)
10 | Model | Mean (Std) | Model | Mean (Std) | Model | Mean (Std)
| R-I-55-90-$\Gamma$1 | 11.43 (9.56) | D-L-65 | 8.94 (9.48) | R-L-50-90-$\Gamma$1 | 8.73 (9.26)
| R-L-M-95-$\Gamma$1 | 11.45 (8.92) | R-L-M-90-$\Gamma$1 | 9.67 (8.41) | R-L-55-95-$\Gamma$1 | 9.31 (9.25)
| D-N-60 | 11.57 (8.91) | R-L-55-90-$\Gamma$1 | 9.71 (9.04) | D-L-65 | 9.37 (8.29)
| D-L-65 | 11.7 (8.37) | D-N-60 | 9.74 (10.16) | R-L-55-90-$\Gamma$1 | 9.4 (8.95)
| D-L-60 | 11.71 (9.66) | R-L-M-95-$\Gamma$1 | 9.92 (8.46) | R-L-M-95-$\Gamma$1 | 9.44 (8.37)
| R-L-55-95-$\Gamma$1 | 11.75 (9.14) | R-N-M-90-$\Gamma$1 | 10.26 (8.97) | R-N-M-90-$\Gamma$1 | 9.71 (8.59)
| R-L-55-90-$\Gamma$1 | 11.77 (9.56) | R-L-50-90-$\Gamma$1 | 10.34 (8.58) | R-N-50-95-$\Gamma$1 | 10.31 (8.35)
| R-L-M-90-$\Gamma$1 | 11.87 (9.09) | R-N-50-90-$\Gamma$1 | 10.34 (10.36) | R-N-M-95-$\Gamma$1 | 10.41 (9.08)
| R-L-50-90-$\Gamma$1 | 11.93 (9.15) | R-N-M-95-$\Gamma$1 | 10.67 (9.71) | R-L-M-90-$\Gamma$1 | 10.44 (8.36)
| R-L-50-95-$\Gamma$1 | 12.08 (9.45) | R-L-50-95-$\Gamma$1 | 10.96 (8.43) | R-L-50-95-$\Gamma$1 | 10.81 (9.15)
1 | Model | Mean (Std) | Model | Mean (Std) | Model | Mean (Std)
| R-N-M-95-$\Gamma$1 | 6.9 (7.76) | R-N-50-90-$\Gamma$1 | 8.58 (10.17) | D-N-80 | 7.95 (8.36)
| R-L-M-95-$\Gamma$1 | 7.3 (8.27) | R-N-50-95-$\Gamma$2 | 8.74 (9.86) | D-N-70 | 8.18 (8.55)
| R-L-50-90-$\Gamma$2 | 7.42 (7.86) | D-L-55 | 8.8 (10.83) | R-N-55-90-$\Gamma$1 | 8.35 (8.13)
| R-N-55-90-$\Gamma$1 | 7.87 (8.35) | D-N-70 | 8.86 (10.15) | R-N-M-90-$\Gamma$1 | 8.69 (8.57)
| R-L-M-90-$\Gamma$1 | 7.89 (9.09) | D-L-70 | 9.19 (9.91) | R-N-50-95-$\Gamma$1 | 9.55 (8.27)
| D-N-70 | 7.97 (7.54) | R-N-50-95-$\Gamma$1 | 9.9 (10.03) | D-L-65 | 9.6 (7.33)
| R-N-M-90-$\Gamma$1 | 8.04 (8.32) | D-L-60 | 10.04 (10.6) | D-N-60 | 9.73 (8.8)
| D-L-65 | 8.11 (7.92) | R-N-55-90-$\Gamma$1 | 10.23 (9.35) | D-N-65 | 9.74 (8.34)
| D-L-60 | 8.4 (8.3) | D-L-50 | 10.24 (9.12) | D-L-50 | 10.42 (7.7)
| R-L-55-90-$\Gamma$1 | 8.62 (8.22) | D-L-65 | 10.44 (9.05) | D-N-50 | 11.83 (7.95)
Table 2 demonstrates that the methods exhibiting superior cost-based
robustness also perform remarkably well in terms of robustness against time
window violations. Notably, all the methods listed in Table 2 yield almost
equivalent levels of time window robustness, approximately 10 percent worse
than the most conservative setting. It is worth noting that this outcome is
quite interesting, given that the most conservative setting performs poorly in
terms of cost due to its inflated demand predictions.
### 6.3 Results for Large Instances
For the Solomon instances involving 100 customers, we employ an ALNS heuristic
to derive solutions for both CVRPTW and its robust version, RCVRPTW. The ALNS
package already offers a comprehensive framework for CVRPTW, and we have
adapted this framework for RCVRPTW as well.
We use the two destroy operators
$\Psi^{-}=\\{random\\_removal,string\\_removal\\}$ and two repair operators
$\Psi^{+}=\\{greedy\\_repair,regret\\_repair\\}$ in ALNS heuristic as
described in Section 4. The initial weights of the operators are set to one,
and following each iteration, these weights are updated by applying a decay
rate of 0.8 to the existing weights. The new weights assigned to operators in
the current iteration depend on the solutions they produce, whether they are
the best so far, better, accepted or rejected, corresponding to new weights of
25, 5, 1 or 0, respectively. The initial temperature in the simulated
annealing mechanism is tuned to ensure that there is a 0.05 chance of
accepting a solution up to 50% worse than the initial solution, and the
temperature is updated at each iteration such that it reaches 1 in 5,000
iterations. For each instance, we run ALNS for 60 seconds.
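A sketch of this operator selection and weight-update scheme is given below; the exponential-smoothing update of the old weight toward the outcome score is one common convention consistent with the description above, and the names are ours.

```python
# Roulette-wheel operator selection with score-based weight updates
# (decay 0.8; scores 25/5/1/0 for best/better/accepted/rejected outcomes).
import random

SCORES = {"best": 25.0, "better": 5.0, "accepted": 1.0, "rejected": 0.0}
DECAY = 0.8

def select_operator(operators, weights):
    # Operators with higher weights are proportionally more likely to be chosen.
    return random.choices(operators, weights=weights, k=1)[0]

def update_weight(weights, idx, outcome):
    # Smooth the old weight toward the score earned in the current iteration.
    weights[idx] = DECAY * weights[idx] + (1.0 - DECAY) * SCORES[outcome]
```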
Figure 7 illustrates the simulation outcomes in terms of normalized cost,
similar to the small instances. However, an intriguing observation emerges
regarding the performance of deterministic and robust optimization models in
exact and heuristic solution settings. Specifically, in richer data contexts,
the relative performance of deterministic models deteriorates, whereas, in
data-scarce scenarios, their relative performance improves.
In the all setting with $n_{i}=30$, for example, the method D-L-65 ranks as
the third best in terms of average performance in small problems. However, in
the larger instances, it drops to the 10th best position. On the contrary, the
top four methods are all deterministic models for large instances, whereas
this was not the case for small instances. This result underscores the
importance of utilizing complex prediction and optimization models in data-
rich environments, as their advantages become less prominent when data
scarcity is prevalent.
Figure 7: Performance of models for different settings on instances with 100
customers. White and black diamonds represent the average values and outliers,
respectively.
Finally, we present the fractions of customers with time window violations for
the larger instances in Table 3. The table consistently demonstrates that the
deterministic models generally result in fewer time window violations compared
to the robust models. Although the difference is not substantial, this finding
underscores another advantage of employing deterministic models with tailored
demand quantile predictions to attain robust solutions, even though the model
itself does not represent robustness explicitly.
Table 3: Average percentage increase in the customers with time window violations compared to the most conservative model D-$\delta$-95 for each setting and number of observations ($n_{i}$) for large instances.

| $n_{i}$ | all | | half | | quar | |
|---|---|---|---|---|---|---|
| 30 | Model | Mean (Std) | Model | Mean (Std) | Model | Mean (Std) |
| D-L-65 | 50.05 (23.84) | D-N-60 | 49.88 (24.4) | D-L-65 | 49.93 (24.14)
| R-I-55-90-$\Gamma$1 | 51.15 (25.35) | D-N-65 | 49.95 (24.13) | R-L-55-90-$\Gamma$1 | 50.92 (25.91)
| R-L-M-90-$\Gamma$1 | 51.33 (26.41) | D-L-65 | 49.99 (23.9) | R-L-55-95-$\Gamma$1 | 51.23 (25.04)
| R-L-55-95-$\Gamma$1 | 51.41 (25.24) | R-N-55-90-$\Gamma$1 | 50.64 (24.77) | R-L-M-90-$\Gamma$1 | 51.64 (26.48)
| R-L-55-90-$\Gamma$1 | 51.47 (25.72) | R-L-55-90-$\Gamma$1 | 50.81 (25.25) | R-N-50-90-$\Gamma$1 | 51.83 (26.41)
| R-N-55-90-$\Gamma$1 | 51.75 (26.0) | R-L-55-95-$\Gamma$1 | 50.84 (25.23) | R-N-M-95-$\Gamma$1 | 51.96 (26.36)
| R-L-50-95-$\Gamma$1 | 51.88 (26.46) | R-L-M-95-$\Gamma$1 | 51.74 (26.56) | R-L-50-95-$\Gamma$1 | 52.01 (26.49)
| R-N-M-95-$\Gamma$1 | 51.91 (26.34) | R-L-50-90-$\Gamma$1 | 51.76 (26.65) | R-L-50-90-$\Gamma$1 | 52.2 (26.96)
| R-N-55-95-$\Gamma$1 | 52.12 (26.24) | R-L-50-95-$\Gamma$1 | 52.23 (26.15) | R-N-M-90-$\Gamma$1 | 52.34 (26.67)
| R-L-M-95-$\Gamma$1 | 52.17 (26.52) | R-L-M-90-$\Gamma$1 | 52.49 (26.53) | R-L-M-95-$\Gamma$1 | 52.44 (26.36)
10 | Model | Mean (Std) | Model | Mean (Std) | Model | Mean (Std)
| D-L-60 | 49.98 (24.97) | D-N-60 | 49.99 (24.57) | D-L-65 | 49.81 (23.75)
| D-L-65 | 50.18 (23.96) | D-L-65 | 50.12 (24.0) | R-L-55-90-$\Gamma$1 | 51.22 (25.56)
| D-N-60 | 50.69 (25.25) | R-L-55-95-$\Gamma$1 | 51.47 (25.19) | R-N-50-95-$\Gamma$1 | 51.53 (25.29)
| R-I-55-90-$\Gamma$1 | 51.04 (25.15) | R-N-M-90-$\Gamma$1 | 51.74 (26.59) | R-L-55-95-$\Gamma$1 | 51.64 (25.4)
| R-L-55-95-$\Gamma$1 | 51.2 (25.65) | R-N-M-95-$\Gamma$1 | 51.85 (26.78) | R-L-M-90-$\Gamma$1 | 51.82 (26.34)
| R-L-55-90-$\Gamma$1 | 51.32 (25.26) | R-L-50-95-$\Gamma$1 | 51.92 (26.37) | R-N-M-90-$\Gamma$1 | 51.84 (26.89)
| R-L-50-90-$\Gamma$1 | 52.02 (26.59) | R-L-50-90-$\Gamma$1 | 51.95 (26.56) | R-N-M-95-$\Gamma$1 | 51.85 (26.66)
| R-L-50-95-$\Gamma$1 | 52.09 (26.53) | R-L-M-90-$\Gamma$1 | 52.12 (26.23) | R-L-50-90-$\Gamma$1 | 51.98 (26.51)
| R-L-M-95-$\Gamma$1 | 52.16 (26.72) | R-L-M-95-$\Gamma$1 | 52.21 (26.29) | R-L-50-95-$\Gamma$1 | 52.13 (26.4)
| R-L-M-90-$\Gamma$1 | 52.18 (26.41) | R-N-50-90-$\Gamma$1 | 52.56 (26.61) | R-L-M-95-$\Gamma$1 | 52.31 (26.4)
1 | Model | Mean (Std) | Model | Mean (Std) | Model | Mean (Std)
| D-N-70 | 49.47 (22.84) | D-L-70 | 49.0 (22.47) | D-N-70 | 49.18 (22.84)
| D-L-65 | 49.62 (23.91) | D-N-70 | 49.8 (23.6) | D-N-80 | 49.57 (22.58)
| D-L-60 | 50.01 (24.65) | D-L-60 | 50.11 (24.72) | D-N-60 | 50.16 (24.66)
| R-L-55-90-$\Gamma$1 | 50.51 (24.89) | D-L-65 | 50.35 (24.44) | D-L-50 | 50.93 (27.13)
| R-N-55-90-$\Gamma$1 | 50.65 (24.74) | D-L-55 | 50.44 (25.82) | D-N-65 | 50.99 (25.42)
| R-N-M-95-$\Gamma$1 | 51.18 (25.59) | R-N-55-90-$\Gamma$1 | 51.16 (25.17) | D-L-65 | 51.07 (23.43)
| R-N-M-90-$\Gamma$1 | 51.52 (25.62) | D-L-50 | 51.43 (27.28) | R-N-M-90-$\Gamma$1 | 51.12 (25.21)
| R-L-M-90-$\Gamma$1 | 51.59 (26.46) | R-N-50-90-$\Gamma$1 | 51.86 (26.05) | R-N-50-95-$\Gamma$1 | 51.13 (25.26)
| R-L-M-95-$\Gamma$1 | 51.81 (25.82) | R-N-50-95-$\Gamma$2 | 52.05 (25.38) | D-N-50 | 51.32 (26.71)
| R-L-50-90-$\Gamma$2 | 52.2 (24.79) | R-N-50-95-$\Gamma$1 | 52.11 (26.06) | R-N-55-90-$\Gamma$1 | 51.62 (25.22)
## 7 Conclusion
This paper investigates the utilization of contextual information to address
the RCVRPTW, presenting its practical relevance when facing scenarios with
limited demand history for certain customers. More specifically, we
investigated the use of quantile prediction, both for making single
predictions for use in deterministic optimization as well as for predicting
the uncertainty set for robust optimization. Our results also offer insights
into robust decision-making, particularly for new customers lacking historical
data. The approach chosen can vary depending on data availability. In data-
rich contexts, multiple quantile predictions and robust optimization models
are advantageous. Conversely, for data-scarce situations, single predictions
and deterministic optimization models maintain strong performance in terms of
cost and time window violations.
Predicting quantile demands offers an added advantage by enabling robust
decision-making with the flexibility to adjust the level of robustness. Unlike
mean demand predictions, quantile predictions allow the decision-maker to
fine-tune the degree of robustness. These predicted quantiles can be
integrated into both exact solution methods and heuristic approaches.
The deterministic model is the industry standard and is widely used by
practitioners. Our results offer valuable insights for these practitioners,
showing that deterministic models, which are often more prevalent than their
robust counterparts, can provide commendable performance in terms of
robustness. This makes them a viable option for non-experts in the field who
seek user-friendly solutions without compromising on robustness.
The present study addresses prediction and optimization separately, even
though predictions are made with the awareness that the predicted parameters
will be used in an optimization model. An avenue for expanding this research
involves integrating the prediction and decision-making processes in a single
end-to-end pipeline (see, for example, Mandi et al., (2023) for a survey of
the state-of-the-art end-to-end methods for prediction and optimization).
While such a predict-then-optimize framework may pose computational
challenges, it may potentially improve routing decisions since training loss
is directly defined in terms of the decision quality of the downstream
optimization problem. Furthermore, in an end-to-end pipeline, the decision-
maker is not required to predefine a fixed quantile before solving the
optimization problem to achieve the best performance.
## Acknowledgements
This research received funding from the European Research Council (ERC) under
the European Union’s Horizon 2020 research and innovation program (Grant No.
101002802, CHAT-Opt).
## References
* Adulyasak et al., (2012) Adulyasak, Y., Cordeau, J.-F., and Jans, R. (2012). Optimization-Based Adaptive Large Neighborhood Search for the Production Routing Problem. Transportation Science. Publisher: INFORMS.
* Ban et al., (2019) Ban, G.-Y., Gallien, J., and Mersereau, A. J. (2019). Dynamic Procurement of New Products with Covariate Information: The Residual Tree Method. Manufacturing & Service Operations Management, 21(4):798–815.
* Ban and Rudin, (2018) Ban, G.-Y. and Rudin, C. (2018). The Big Data Newsvendor: Practical Insights from Machine Learning.
* Ben-Tal and Nemirovski, (2002) Ben-Tal, A. and Nemirovski, A. (2002). Robust optimization – methodology and applications. Mathematical Programming, 92(3):453–480.
* Bertsimas et al., (2011) Bertsimas, D., Brown, D. B., and Caramanis, C. (2011). Theory and Applications of Robust Optimization. SIAM Review, 53(3):464–501. Publisher: Society for Industrial and Applied Mathematics.
* Bertsimas et al., (2019) Bertsimas, D., Dunn, J., and Mundru, N. (2019). Optimal Prescriptive Trees. INFORMS Journal on Optimization, 1(2):164–183. Publisher: INFORMS.
* Bertsimas et al., (2018) Bertsimas, D., Gupta, V., and Kallus, N. (2018). Data-driven robust optimization. Mathematical Programming, 167(2):235–292.
* Bertsimas and Kallus, (2020) Bertsimas, D. and Kallus, N. (2020). From Predictive to Prescriptive Analytics. Management Science, 66(3):1025–1044. Publisher: INFORMS.
* Bertsimas and Sim, (2003) Bertsimas, D. and Sim, M. (2003). Robust discrete optimization and network flows. Mathematical Programming, 98(1):49–71.
* Brando et al., (2022) Brando, A., Gimeno, J., Rodriguez-Serrano, J., and Vitria, J. (2022). Deep Non-crossing Quantiles through the Partial Derivative. In Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, pages 7902–7914. PMLR. ISSN: 2640-3498.
* Chung et al., (2022) Chung, T.-H., Rostami, V., Bastani, H., and Bastani, O. (2022). Decision-Aware Learning for Optimizing Health Supply Chains. arXiv:2211.08507 [cs].
* Dai et al., (2022) Dai, S., Kuosmanen, T., and Zhou, X. (2022). Non-crossing convex quantile regression. arXiv:2204.01371 [stat].
* Demir et al., (2012) Demir, E., Bektaş, T., and Laporte, G. (2012). An adaptive large neighborhood search heuristic for the Pollution-Routing Problem. European Journal of Operational Research, 223(2):346–359.
* Elmachtoub et al., (2023) Elmachtoub, A. N., Lam, H., Zhang, H., and Zhao, Y. (2023). Estimate-Then-Optimize versus Integrated-Estimation-Optimization versus Sample Average Approximation: A Stochastic Dominance Perspective. arXiv:2304.06833 [cs, stat].
* Goerigk and Kurtz, (2023) Goerigk, M. and Kurtz, J. (2023). Data-driven robust optimization using deep neural networks. Computers & Operations Research, 151:106087.
* Goldfarb and Iyengar, (2003) Goldfarb, D. and Iyengar, G. (2003). Robust Portfolio Selection Problems. Mathematics of Operations Research, 28(1):1–38.
* Gounaris et al., (2016) Gounaris, C. E., Repoussis, P. P., Tarantilis, C. D., Wiesemann, W., and Floudas, C. A. (2016). An Adaptive Memory Programming Framework for the Robust Capacitated Vehicle Routing Problem. Transportation Science, 50(4):1239–1260. Publisher: INFORMS.
* Gounaris et al., (2013) Gounaris, C. E., Wiesemann, W., and Floudas, C. A. (2013). The Robust Capacitated Vehicle Routing Problem Under Demand Uncertainty. Operations Research, 61(3):677–693. Publisher: INFORMS.
* Gurobi Optimization, LLC, (2023) Gurobi Optimization, LLC (2023). Gurobi Optimizer Reference Manual.
* Hatalis et al., (2019) Hatalis, K., Lamadrid, A. J., Scheinberg, K., and Kishore, S. (2019). A Novel Smoothed Loss and Penalty Function for Noncrossing Composite Quantile Estimation via Deep Neural Networks. arXiv:1909.12122 [cs, eess].
* Hemmelmayr et al., (2012) Hemmelmayr, V. C., Cordeau, J.-F., and Crainic, T. G. (2012). An adaptive large neighborhood search heuristic for Two-Echelon Vehicle Routing Problems arising in city logistics. Computers & Operations Research, 39(12):3215–3228.
* Kannan et al., (2022) Kannan, R., Bayraksan, G., and Luedtke, J. R. (2022). Data-Driven Sample Average Approximation with Covariate Information. arXiv:2207.13554 [math, stat].
* Kleywegt et al., (2002) Kleywegt, A. J., Shapiro, A., and Homem-de Mello, T. (2002). The Sample Average Approximation Method for Stochastic Discrete Optimization. SIAM Journal on Optimization, 12(2):479–502. Publisher: Society for Industrial and Applied Mathematics.
* Koenker and Bassett, (1978) Koenker, R. and Bassett, G. (1978). Regression Quantiles. Econometrica, 46(1):33–50. Publisher: [Wiley, Econometric Society].
* Koenker and Hallock, (2001) Koenker, R. and Hallock, K. F. (2001). Quantile Regression. Journal of Economic Perspectives, 15(4):143–156.
* Laporte, (1992) Laporte, G. (1992). The vehicle routing problem: An overview of exact and approximate algorithms. European Journal of Operational Research, 59(3):345–358.
* Lin et al., (2022) Lin, S., Chen, Y. F., Li, Y., and Shen, Z.-J. M. (2022). Data-Driven Newsvendor Problems Regularized by a Profit Risk Constraint. Production and Operations Management, 31(4):1630–1644.
* Mandi et al., (2023) Mandi, J., Kotary, J., Berden, S., Mulamba, M., Bucarey, V., Guns, T., and Fioretto, F. (2023). Decision-Focused Learning: Foundations, State of the Art, Benchmark and Future Opportunities. arXiv:2307.13565 [cs, math].
* Munari et al., (2019) Munari, P., Moreno, A., De La Vega, J., Alem, D., Gondzio, J., and Morabito, R. (2019). The Robust Vehicle Routing Problem with Time Windows: Compact Formulation and Branch-Price-and-Cut Method. Transportation Science, 53(4):1043–1066. Publisher: INFORMS.
* Ohmori, (2021) Ohmori, S. (2021). A Predictive Prescription Using Minimum Volume k-Nearest Neighbor Enclosing Ellipsoid and Robust Optimization. Mathematics, 9(2):119.
* Ordóñez, (2010) Ordóñez, F. (2010). Robust Vehicle Routing. In Hasenbein, J. J., Gray, P., and Greenberg, H. J., editors, Risk and Optimization in an Uncertain World, pages 153–178. INFORMS.
* Oyola et al., (2018) Oyola, J., Arntzen, H., and Woodruff, D. L. (2018). The stochastic vehicle routing problem, a literature review, part I: models. EURO Journal on Transportation and Logistics, 7(3):193–221.
* Ropke and Pisinger, (2006) Ropke, S. and Pisinger, D. (2006). An Adaptive Large Neighborhood Search Heuristic for the Pickup and Delivery Problem with Time Windows. Transportation Science, 40(4):455–472. Publisher: INFORMS.
* Sadana et al., (2023) Sadana, U., Chenreddy, A., Delage, E., Forel, A., Frejinger, E., and Vidal, T. (2023). A Survey of Contextual Optimization Methods for Decision Making under Uncertainty. arXiv:2306.10374 [cs, math].
* Shaw, (1998) Shaw, P. (1998). Using Constraint Programming and Local Search Methods to Solve Vehicle Routing Problems. In Maher, M. and Puget, J.-F., editors, Principles and Practice of Constraint Programming — CP98, Lecture Notes in Computer Science, pages 417–431, Berlin, Heidelberg. Springer.
* Solomon, (1987) Solomon, M. M. (1987). Algorithms for the Vehicle Routing and Scheduling Problems with Time Window Constraints. Operations Research, 35(2):254–265. Publisher: INFORMS.
* Sungur et al., (2008) Sungur, I., Ordóñez, F., and Dessouky, M. (2008). A robust optimization approach for the capacitated vehicle routing problem with demand uncertainty. IIE Transactions, 40(5):509–523.
* Toth and Vigo, (2002) Toth, P. and Vigo, D. (2002). The Vehicle Routing Problem. Discrete Mathematics and Applications. Society for Industrial and Applied Mathematics.
* Tulabandhula and Rudin, (2014) Tulabandhula, T. and Rudin, C. (2014). Robust Optimization using Machine Learning for Uncertainty Sets. arXiv:1407.1097 [cs, math, stat].
* Wouda and Lan, (2023) Wouda, N. A. and Lan, L. (2023). ALNS: a Python implementation of the adaptive large neighbourhood search metaheuristic. Journal of Open Source Software, 8(81):5028.
* Zhang et al., (2023) Zhang, L., Yang, J., and Gao, R. (2023). Optimal Robust Policy for Feature-Based Newsvendor. Management Science. Publisher: INFORMS.
# Contributions to the Muon $g-2$ from a Three-Form Field
Da Huang <EMAIL_ADDRESS>
National Astronomical Observatories, Chinese Academy of Sciences, Beijing, 100012, China; School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China; International Centre for Theoretical Physics Asia-Pacific, Beijing/Hangzhou, China
Yong Tang <EMAIL_ADDRESS>
University of Chinese Academy of Sciences (UCAS), Beijing 100049, China; School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China; National Astronomical Observatories, Chinese Academy of Sciences, Beijing, 100012, China; International Centre for Theoretical Physics Asia-Pacific, Beijing/Hangzhou, China
Yue-Liang Wu <EMAIL_ADDRESS>
Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China; School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China; International Centre for Theoretical Physics Asia-Pacific, Beijing/Hangzhou, China; University of Chinese Academy of Sciences (UCAS), Beijing 100049, China
###### Abstract
We examine contributions to the muon dipole moment $g-2$ from a 3-form field
$\Omega$, which naturally arises in many fundamental theories, such as
string theory and the hyperunified field theory. In particular, by calculating
the one-loop Feynman diagram, we have obtained the leading-order
$\Omega$-induced contribution to the muon $g-2$, which is found to be finite.
Then we investigate the theoretical constraints from perturbativity and
unitarity. Specifically, the unitarity bounds are obtained by computing the
tree-level $\mu^{+}\mu^{-}$ scattering amplitudes for various initial and
final helicity configurations. As a result, despite the strong unitarity
bounds imposed on this model of $\Omega$, we still find a substantial
parameter space which can accommodate the muon $g-2$ data.
## I Introduction
A three-form field, i.e., an antisymmetric rank-three tensor field, can
generically appear in many fundamental theories that attempt to unify General
Relativity and Quantum Field Theory. For example, in the $10$-dimensional
Type-IIA superstring and supergravity theory, a 3-form field occurs as an
important constituent in the Ramond-Ramond sector Green:1987sp ; Green:1987mn
; Polchinski:1998rr . Furthermore, in the recently proposed Hyper-Unified
Field Theory (HUFT) Wu:2015wwa ; Wu:2017rmd ; Wu:2017urh ; Wu:2018xah ;
Wu:2021ign ; Wu:2021ucc ; Wu:2022mzr , such a 3-form field is naturally
identified as a part of the spin gauge field in this 19-dimensional theory.
Unfortunately, the 3-form field in the superstring theory couples to
ordinary matter with gravitational strength Green:1987sp ; Green:1987mn ;
Polchinski:1998rr , so its effects are difficult to observe at low energies. On the
other hand, in the HUFT, by dynamically breaking the conformal symmetry
Wu:2017urh ; Wu:2021ign ; Wu:2021ucc and compactifying the theory down to our
ordinary 4-dimensional spacetime, we expect that a light 3-form field $\Omega$
remains and potentially generates new observable effects Sezgin:1980tp .
In this study, we consider the indirect loop effects induced by a 3-form field
on the muon electromagnetic dipole moment defined by $a_{\mu}=(g-2)_{\mu}/2$,
which might solve the long-standing muon $g-2$ anomaly Workman:2022ynf .
Currently, the comparison between the Standard Model (SM) prediction and the
experimental data shows a remarkable $\sim 4.25\sigma$ discrepancy
Muong-2:2021ojo :
$\displaystyle\Delta a_{\mu}=a_{\mu}^{\rm Exp}-a_{\mu}^{\rm SM}=(251\pm
59)\times 10^{-11},$ (1)
where the experimental value $a_{\mu}^{\rm Exp}=(116\,592\,061\pm 41)\times
10^{-11}$ is obtained by combining the earlier Brookhaven Muong-2:2006rrc and
the latest Fermilab Muon $g-2$ Muong-2:2021ojo data, while the SM prediction
is given by $a^{\rm SM}_{\mu}=(116\,591\,810\pm 43)\times 10^{-11}$ based on
the state-of-art evaluations of various contributions Aoyama:2012wk ;
Aoyama:2019ryr ; Czarnecki:2002nt ; Gnendiger:2013pva ; Davier:2017zfy ;
Keshavarzi:2018mgv ; Colangelo:2018mtw ; Hoferichter:2019mqg ; Davier:2019can
; Keshavarzi:2019abf ; Kurz:2014wya ; Melnikov:2003xd ; Masjuan:2017tvw ;
Colangelo:2017fiz ; Hoferichter:2018kwz ; Gerardin:2019vio ; Bijnens:2019ghy ;
Colangelo:2019uex ; Blum:2019ugy ; Colangelo:2014qya (see e.g. Ref.
Aoyama:2020ynm for a recent review and references therein). In light of these
precise measurements and calculations, this muon $g-2$ anomaly is usually
regarded as an indication of new physics beyond the SM. In the literature,
there have already been many new-physics attempts to resolve this anomaly (see
e.g., Ref. Athron:2021iuf for a recent review and references therein). It is
worth mentioning that, in the HUFT, the 3-form field $\Omega$, as a part of
the spin gauge field, inevitably couples to the SM fermionic fields. Hence, it
is highly likely to generate an extra contribution to the muon $g-2$ via its
interaction with muons. Here we carefully examine whether the one-loop
contributions induced by $\Omega$ can ameliorate or even eliminate the
$(g-2)_{\mu}$ discrepancy. Furthermore, the scenario we are
considering needs a relatively light $\Omega$ and a moderately large coupling
to muons, which might potentially lead to the breakdown of perturbativity and
unitarity. For that reason, we shall also take into account theoretical
constraints from these issues in the framework of effective field theory.
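As a quick arithmetic cross-check of Eq. (1), the central value and uncertainty follow from the numbers quoted above, with the independent errors combined in quadrature (a minimal sketch of ours, not from the paper):

```python
import math

a_exp, sig_exp = 116_592_061e-11, 41e-11   # BNL + FNAL combination
a_sm, sig_sm = 116_591_810e-11, 43e-11     # SM prediction

delta = a_exp - a_sm
sigma = math.hypot(sig_exp, sig_sm)        # independent errors in quadrature
print(f"Delta a_mu = ({delta / 1e-11:.0f} +/- {sigma / 1e-11:.0f}) x 10^-11")
print(f"discrepancy ~ {delta / sigma:.1f} sigma")
```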
The paper is organized as follows. In Sec. II, we present the effective field
theory framework of the 3-form field $\Omega$. In particular, we show the
Lagrangian and the propagator of $\Omega$, together with a short discussion of
the significance of the $\Omega$ mass term. Then the leading-order
contribution to the muon $g-2$ induced by $\Omega$ is given in Sec. III by
calculating the associated one-loop Feynman diagram. Sec. IV is devoted to
studying the theoretical constraints from perturbativity and unitarity on our
3-form model. The numerical results are shown in Sec. V by carefully examining
the parameter space of interest. Finally, we conclude in Sec. VI, together
with a short discussion of the possible collider signatures of the 3-form
field.
## II Lagrangian and Conventions
In this section, we set up our notations and conventions for our model of the
3-form field $\Omega=\Omega_{\mu\nu\rho}\,dx^{\mu}\wedge dx^{\nu}\wedge
dx^{\rho}$. In the fundamental theories such as the Type-IIA superstring
theory or the HUFT, $\Omega$, as a gauge field, couples universally to other
fields or particles. However, these fundamental theories are generically
living in the spacetime with dimensions higher than 4. In order to explain the
physics of our ordinary 4-dimensional spacetime world, one usually invokes
the compactification technique to wrap the extra dimensions in small compact
spaces Polchinski:1998rr ; Green:1987sp ; Green:1987mn . In this process, due
to the wavefunctions of various fields in the extra dimensional space, the
couplings of $\Omega$ with the SM particles can become non-universal. Here we
concentrate on the following 3-form field coupling to the SM muon leptons
Wu:2017urh ; Wu:2018xah ; Sezgin:1979zf 444Note that, in Eq. (2), the
convention for the $\Omega$ coupling $g_{h}$ to muons differs from that given
in Fig. 4 of Ref. Wu:2018xah by a factor $1/4$.
$\displaystyle{\cal L}_{\Omega}\supset-\frac{1}{2\cdot
4!}\Omega^{\mu\nu\rho\sigma}\Omega_{\mu\nu\rho\sigma}-\frac{m_{\Omega}^{2}}{2\cdot
3!}\Omega_{\mu\nu\rho}\Omega^{\mu\nu\rho}-ig_{h}\bar{\mu}\gamma^{\mu\nu\rho}\mu\Omega_{\mu\nu\rho}\,,$
(2)
where $\Omega_{\mu\nu\rho\sigma}$ denotes the four-form field strength of
$\Omega_{\mu\nu\rho}$ defined by
$\displaystyle\Omega_{\mu\nu\rho\sigma}\equiv\partial_{\mu}\Omega_{\nu\rho\sigma}-\partial_{\nu}\Omega_{\rho\sigma\mu}+\partial_{\rho}\Omega_{\sigma\mu\nu}-\partial_{\sigma}\Omega_{\mu\nu\rho}\,,$
(3)
while $\gamma^{\mu\nu\rho}$ can be written as the combination of
$\gamma$-matrices as follows
$\displaystyle\gamma^{\mu\nu\rho}$ $\displaystyle\equiv$
$\displaystyle\gamma^{[\mu}\gamma^{\nu}\gamma^{\rho]}=\frac{1}{3!}\left(\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}-\gamma^{\nu}\gamma^{\mu}\gamma^{\rho}+\gamma^{\nu}\gamma^{\rho}\gamma^{\mu}-\gamma^{\rho}\gamma^{\nu}\gamma^{\mu}+\gamma^{\rho}\gamma^{\mu}\gamma^{\nu}-\gamma^{\mu}\gamma^{\rho}\gamma^{\nu}\right)$
(4) $\displaystyle=$ $\displaystyle
i\epsilon^{\mu\nu\rho\sigma}\gamma_{\sigma}\gamma^{5}\,,$
where $\gamma^{5}\equiv
i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}=-i\epsilon^{\mu\nu\rho\sigma}\gamma_{\mu}\gamma_{\nu}\gamma_{\rho}\gamma_{\sigma}/(4!)$.
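The identity in Eq. (4) can be verified numerically. The sketch below is our own check, assuming the Dirac representation, $\eta={\rm diag}(+,-,-,-)$ and $\epsilon^{0123}=+1$ (the convention consistent with the definition of $\gamma^{5}$ above); the sign of the identity flips under the opposite epsilon convention.

```python
import itertools
import numpy as np

# Dirac representation; eta = diag(+,-,-,-); eps^{0123} = +1.
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
gamma = [np.block([[I2, Z2], [Z2, -I2]])] + \
        [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
gamma5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Signed permutations of the three index slots (the antisymmetrizer).
PERMS3 = [((0, 1, 2), 1), ((1, 2, 0), 1), ((2, 0, 1), 1),
          ((1, 0, 2), -1), ((2, 1, 0), -1), ((0, 2, 1), -1)]

def eps4(*idx):
    """Totally antisymmetric symbol with eps(0, 1, 2, 3) = +1."""
    idx = list(idx)
    if len(set(idx)) < 4:
        return 0
    sign = 1
    for a in range(4):          # bubble sort, counting swaps
        for b in range(3):
            if idx[b] > idx[b + 1]:
                idx[b], idx[b + 1] = idx[b + 1], idx[b]
                sign = -sign
    return sign

ok = True
for mu, nu, rho in itertools.product(range(4), repeat=3):
    trip = (mu, nu, rho)
    lhs = sum(sgn * gamma[trip[p[0]]] @ gamma[trip[p[1]]] @ gamma[trip[p[2]]]
              for p, sgn in PERMS3) / 6
    rhs = sum(1j * eps4(mu, nu, rho, sig) * eta[sig, sig] * gamma[sig] @ gamma5
              for sig in range(4))
    ok = ok and np.allclose(lhs, rhs)
print("Eq. (4) holds for all 64 index triplets:", ok)
```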
In Eq. (2), we have included a mass term to the 3-form field $\Omega$, which
explicitly breaks the original gauge symmetry
$\Omega_{\mu\nu\rho}\to\Omega_{\mu\nu\rho}+3\partial_{[\mu}b_{\nu\rho]}$ with
$b$ a two-form gauge parameter field. Here we do not try to specify the
possible origin of this mass term. Rather, we would like to emphasize its
importance to the theory. Firstly, note that, without this mass term, the
gauge symmetry involving $b$ is recovered in this Lagrangian of $\Omega$.
Thus, the lightness of $\Omega$ is technically natural tHooft:1979rat , which
can be understood in connection with this underlying broken gauge symmetry.
Secondly, further significance of this mass term can be seen by analyzing its
particle content in $\Omega$. In the massless case, $\Omega$ does not possess
any physical propagating degree of freedom (dof) in four spacetime dimensions
Sezgin:1980tp . To see this, let us divide the components of $\Omega$ into two
classes, $\Omega_{0ij}$ and $\Omega_{ijk}$ with the Latin indices denoting the
spatial dimensions. On the one hand, due to the antisymmetric nature of the
field strength, the Lagrangian in Eq. (2) cannot give rise to the kinetic term
for $\Omega_{0ij}$, i.e., $\partial_{0}\Omega_{0ij}$ cannot appear. On the other
hand, $\Omega_{ijk}\equiv\epsilon_{ijk}\phi$, which is a pseudo-scalar
in nature, can be removed by the gauge transformation
$\Omega_{ijk}\to\Omega_{ijk}+3\partial_{[i}b_{jk]}$, so that it is unphysical
as well. However, in the model with a massive 3-form field, the gauge
invariance of $\Omega$ is explicitly broken, so that $\Omega_{ijk}$ remains a
true physical propagating dof in the theory. Following the tensor-projection
operator technique developed in Ref. Rivers:1964 ; Sezgin:1979zf ;
Sezgin:1980tp ; VanNieuwenhuizen:1973fi , we can derive the propagator of the
massive 3-form field $\Omega$ as follows
$\displaystyle
i\Delta^{\Omega}_{\mu\nu\lambda,\,\rho\sigma\kappa}(k)=\frac{-i{\cal
P}_{\mu\nu\lambda\,,\rho\sigma\kappa}(k)}{k^{2}-m_{\Omega}^{2}}\,,$ (5)
where
$\displaystyle{\cal
P}_{\mu\nu\lambda,\,\rho\sigma\kappa}(k)\equiv\frac{1}{6}\left[\theta_{\mu\rho}(\theta_{\nu\sigma}\theta_{\lambda\kappa}-\theta_{\nu\kappa}\theta_{\lambda\sigma})-\theta_{\mu\sigma}(\theta_{\nu\rho}\theta_{\lambda\kappa}-\theta_{\nu\kappa}\theta_{\lambda\rho})-\theta_{\mu\kappa}(\theta_{\nu\sigma}\theta_{\lambda\rho}-\theta_{\nu\rho}\theta_{\lambda\sigma})\right]\,,$
(6)
with
$\displaystyle\theta_{\mu\nu}(k)\equiv\eta_{\mu\nu}-\frac{k_{\mu}k_{\nu}}{m_{\Omega}^{2}}\,.$
(7)
It is well known that there are many problems in constructing interacting
theories for higher-spin fields in flat spacetime (see e.g.,
Ref. Bekaert:2010hw for a recent review and references therein), which is
hampered by various no-go theorems Weinberg:1964ew ; Coleman:1967ad ;
Haag:1974qh ; Grisaru:1976vm ; Weinberg:1980kq ; Porrati:2008rm ;
Benincasa:2007xk in the literature. One may evade these no-go theorems by
introducing higher-derivative couplings Metsaev:2005ar ; Boulanger:2006gr ;
Boulanger:2008tg or placing the theory on the AdS spacetime Fradkin:1986qy ;
Fradkin:1987ks ; Vasiliev:2011knf . Provided that the 3-form field $\Omega$
transforms non-trivially under the Lorentz symmetry group, one may wonder if
our interacting theory might also be constrained by these no-go theorems. Here
we would like to argue that our model of $\Omega$ is not plagued by these
problems. First of all, as shown before, the massive 3-form field actually
contains a single spin-0 pseudoscalar, rather than any higher-spin particles.
Secondly, the previous no-go theorems focused on the massless higher-spin
case, while $\Omega$ is dynamical only when it becomes massive. Thus, $\Omega$
may be decoupled when the energy is lower than its mass, and does not affect
the infrared behavior of the theory. For these two reasons, neither the
Weinberg low-energy theorem Weinberg:1964ew nor the Weinberg-Witten no-go
theorem Weinberg:1980kq applies to the present interacting 3-form
model. It has been shown in the HUFT Wu:2015wwa ; Wu:2017rmd ; Wu:2017urh ;
Wu:2018xah ; Wu:2021ign ; Wu:2021ucc ; Wu:2022mzr that the no-go theorems can
also be avoided by introducing the concept of biframe spacetime.
## III The Muon $g-2$ Contribution
We are now ready to compute the new contribution to the muon anomalous
magnetic moment induced by $\Omega$. The relevant Feynman diagram is shown in
Fig. 1, where the double-wiggle line represents the massive 3-form field.
Figure 1: One-loop Feynman diagrams that would contribute to the muon $g-2$.
By applying the Feynman rules according to the Lagrangian in Eq. (2), the
corresponding amplitude of Fig. 1 is given by
$\displaystyle i{\cal M}$ $\displaystyle=$
$\displaystyle\int\frac{d^{4}l}{(2\pi)^{4}}\left(g_{h}\gamma^{\rho\sigma\kappa}\right)\frac{i}{\not{l}+\not{k}_{2}-m_{\mu}}\left(-ieQ_{\mu}\gamma^{\mu}\right)\frac{i}{\not{l}+\not{k}_{1}-m_{\mu}}\left(g_{h}\gamma^{\tau\nu\lambda}\right)\frac{-i{\cal
P}_{\tau\nu\lambda,\,\rho\sigma\kappa}(l)}{l^{2}-m_{\Omega}^{2}}$ (8)
$\displaystyle=$
$\displaystyle{g_{h}^{2}eQ_{\mu}}\int\frac{d^{4}l}{(2\pi)^{4}}\frac{\gamma^{\rho\sigma\kappa}[\not{l}+\not{k}_{2}+m_{\mu}]\gamma^{\mu}[\not{l}+\not{k}_{1}+m_{\mu}]\gamma^{\tau\nu\lambda}{\cal
P}_{\tau\nu\lambda,\,\rho\sigma\kappa}(l)}{[(l+k_{2})^{2}-m_{\mu}^{2}][(l+k_{1})^{2}-m_{\mu}^{2}](l^{2}-m_{\Omega}^{2})}\,,$
where $Q_{\mu}=-1$ and $m_{\mu}$ stand for the muon electric charge and mass,
respectively. By introducing Feynman parameters and completing the square, the
denominator can be transformed into
$\displaystyle\frac{1}{[(l+k_{2})^{2}-m_{\mu}^{2}][(l+k_{1})^{2}-m_{\mu}^{2}](l^{2}-m_{\Omega}^{2})}=\int^{1}_{0}dxdy\frac{\Gamma(3)}{[l^{2}+2l\cdot(xk_{1}+yk_{2})-(1-x-y)m_{\Omega}^{2}]^{3}}$
(9) $\displaystyle=$
$\displaystyle\Gamma(3)\int^{1}_{0}dxdy\frac{1}{[(l+xk_{1}+yk_{2})^{2}-\Delta]^{3}}\,,$
where
$\displaystyle\Delta\equiv(1-x-y)m_{\Omega}^{2}+(x+y)^{2}m_{\mu}^{2}-xyq^{2}\,,$
(10)
with $q\equiv k_{2}-k_{1}$ the external photon momentum flowing into the
vertex. By shifting the loop momentum as $l\to l-xk_{1}-yk_{2}$, we can
transform the above loop integral into the following form
$\displaystyle i{\cal M}$ $\displaystyle=$ $\displaystyle
eQ_{\mu}g_{h}^{2}\Gamma(3)\int dxdy\int\frac{d^{4}l}{(2\pi)^{4}}{\cal
P}_{\tau\nu\lambda,\,\rho\sigma\kappa}(l-xk_{1}-yk_{2})$ (11)
$\displaystyle\frac{\gamma^{\rho\sigma\kappa}[\not{l}-x\not{k}_{1}+(1-y)\not{k}_{2}+m_{\mu}]\gamma^{\mu}[\not{l}+(1-x)\not{k}_{1}-y\not{k}_{2}+m_{\mu}]\gamma^{\tau\nu\lambda}}{[l^{2}-\Delta]^{3}}\,.$
We now contract the Lorentz indices and classify the obtained terms according
to their divergence degrees. As a result, we find that the terms possibly
proportional to the factor $m_{\mu}(-i\sigma^{\mu\nu}q_{\nu})$ are
logarithmically divergent at most, which indicates that those terms would give
rise to the dominant contribution to the muon $(g-2)$. Thus, we first focus
on these logarithmically divergent terms, which, after some tedious
computation, give
$\displaystyle i{\cal M}^{\rm
log}=eQ_{\mu}g_{h}^{2}m_{\mu}(-i\sigma^{\mu\nu}q_{\nu})\int_{0}^{1}dxdy\frac{6\Gamma(3)}{m_{\Omega}^{2}}2\left[2(x+y)^{2}-6(x+y)+3\right]{\cal
I}_{0}(\Delta)\,,$ (12)
where ${\cal I}_{0}(\Delta)$ is a logarithmically divergent integral
regularized with the dimensional regularization tHooft:1972tcz :
$\displaystyle{\cal
I}_{0}(\Delta)=\frac{i}{16\pi^{2}}\left[\frac{2}{\epsilon}-\gamma+\log\frac{\mu^{2}}{\Delta}\right]\,,$
(13)
where $\epsilon\equiv 4-d$, and $\mu^{2}$ is a reference scale. It can be
shown that the Feynman parameter integral over the UV divergent part of this
amplitude vanishes
$\displaystyle\int^{1}_{0}dx\int^{1-x}_{0}dy[2(x+y)^{2}-6(x+y)+3]=0\,.$ (14)
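Eq. (14) can be checked directly; a one-line sympy verification (ours, not from the paper):

```python
import sympy as sp

x, y = sp.symbols("x y")
integrand = 2*(x + y)**2 - 6*(x + y) + 3
print(sp.integrate(integrand, (y, 0, 1 - x), (x, 0, 1)))   # prints 0
```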
Therefore, the one-loop contribution to the muon $g-2$ induced by $\Omega$ is
finite. By further considering the large mass hierarchy between $\Omega$ and
muons, we expand $\Delta$ in terms of $m_{\mu}^{2}/m_{\Omega}^{2}$ and just
keep the leading-order term. Thus, the remaining part of $i{\cal M}^{\rm log}$
is of ${\cal O}(m_{\mu}q/m_{\Omega}^{2})$, with the result given by
$\displaystyle i{\cal M}^{\rm log}$ $\displaystyle=$ $\displaystyle
eQ_{\mu}g_{h}^{2}m_{\mu}(-i\sigma^{\mu\nu}q_{\nu})\int^{1}_{0}dx\int^{1-x}_{0}dy\frac{12\Gamma(3)}{m_{\Omega}^{2}}[2(x+y)^{2}-6(x+y)+3]\times$
(15) $\displaystyle(-1)\frac{i}{16\pi^{2}}\log(1-x-y)$ $\displaystyle=$
$\displaystyle
i\frac{e}{2m_{\mu}}(i\sigma^{\mu\nu}q_{\nu})\frac{18}{16\pi^{2}}\frac{Q_{\mu}g_{h}^{2}m_{\mu}^{2}}{m_{\Omega}^{2}}\,.$
The above calculation shows that the $\Omega$-induced anomalous contribution
to the muon $g-2$ is dominated by terms of ${\cal
O}(m_{\mu}^{2}/m_{\Omega}^{2})$, so that we need to further consider the
associated terms in the finite integrals. It can be shown by explicit
calculations that these finite terms are given by
$\displaystyle i{\cal M}^{\rm fin}$ $\displaystyle=$ $\displaystyle
eQ_{\mu}g_{h}^{2}m_{\mu}(-i\sigma^{\mu\nu}q_{\nu})\Gamma(3)\int^{1}_{0}dxdy\left(-\frac{i}{16\pi^{2}}\right)\frac{1}{2}(-12)\left[\frac{(x+y)^{2}-5(x+y)+4}{\Delta}\right]$
(16) $\displaystyle\simeq$ $\displaystyle
ieQ_{\mu}g_{h}^{2}m_{\mu}(i\sigma^{\mu\nu}q_{\nu})\frac{\Gamma(3)(-12)}{2(16\pi^{2})m_{\Omega}^{2}}\int^{1}_{0}dx\int^{1-x}_{0}dy(4-x-y)$
$\displaystyle=$ $\displaystyle
i\frac{e}{2m_{\mu}}(i\sigma^{\mu\nu}q_{\nu})\frac{40}{16\pi^{2}}\frac{(-Q_{\mu})g_{h}^{2}m_{\mu}^{2}}{m_{\Omega}^{2}}\,,$
where we also keep terms up to ${\cal O}(qm_{\mu}/m_{\Omega}^{2})$ in the
amplitude.
Note that the amplitude related to the observable muon $g-2$ is usually
parameterized as follows Peskin:1995ev
$\displaystyle i{\cal
M}(\gamma\mu\to\mu)=\bar{u}(k_{2})(-iQ_{\mu})\left[e\gamma_{\mu}F_{1}(q^{2})+\frac{ie\sigma_{\mu\nu}q^{\nu}}{2m_{\mu}}F_{2}(q^{2})\right]u(k_{1})\,,$
(17)
where the measured muon anomalous magnetic moment can be identified as $\Delta
a_{\mu}=F_{2}(0)$. In comparison with this definition, the leading-order
result of the muon $g-2$ generated by $\Omega$ is given by
$\displaystyle\Delta a_{\mu}^{\Omega}=\Delta a^{\rm log}_{\mu}+\Delta a^{\rm
fin}_{\mu}=\frac{11g_{h}^{2}}{8\pi^{2}}\frac{m_{\mu}^{2}}{m_{\Omega}^{2}}\,.$
(18)
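To get a feel for the scales involved, here is a quick numerical evaluation of Eq. (18) (ours; the benchmark values $g_{h}=0.3$ and $m_{\Omega}=250$ GeV anticipate Sec. V), showing that the predicted shift lies in the right ballpark:

```python
import math

m_mu = 0.10566        # muon mass in GeV
m_omega = 250.0       # 3-form mass in GeV (benchmark, see Sec. V)
g_h = 0.3             # Omega-muon coupling (benchmark, see Sec. V)

delta_a = 11 * g_h**2 / (8 * math.pi**2) * (m_mu / m_omega)**2
print(f"Delta a_mu^Omega = {delta_a:.2e}")   # ~2.2e-9, i.e. ~224 x 10^-11,
# within the 2-sigma band of the (251 +/- 59) x 10^-11 anomaly.
```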
## IV Constraints from Perturbativity and Unitarity
The Lagrangian of the 3-form field $\Omega$ in Eq. (2) suffers from the
theoretical constraints such as perturbativity and unitarity. As a
renormalizable dimension-4 operator, the effective interaction of $\Omega$
with muons is subject to the usual perturbativity constraint. Note that the
one-loop correction to this vertex can be estimated as $\delta g_{h}\sim
g_{h}^{3}/(4\pi)^{2}$. The validity of the perturbative expansion requires
this one-loop correction to be smaller than the tree-level coupling $g_{h}$,
which translates into the upper bound $|g_{h}|\lesssim 4\pi$
Nebot:2007bc .
Another restriction on the effective action of $\Omega$ is provided by the
perturbative unitarity Gell-Mann:1969cuq ; Weinberg:1971fb ; Lee:1977yc ;
Lee:1977eg (also see Refs. Glashow:1976nt ; Huffel:1980sk ; Maalampi:1991fb ;
Kanemura:1993hm ; Akeroyd:2000wc ; Das:2015mwa ; Kanemura:2015ska ;
Goodsell:2018tti ; Appelquist:1987cf ; Chaichian:1987zt ; Falkowski:2016glr ;
Banta:2021dek ; SekharChivukula:2019yul ; Chivukula:2020hvi ;
Chivukula:2021xod ; Chivukula:2022tla ; Huang:2022zop for an incomplete list
for the application of unitarity bounds in Beyond-SM theories). In order to
obtain these unitarity bounds, we shall compute amplitudes of $\mu^{-}\mu^{+}$
elastic 2-to-2 scatterings shown in Fig. 2.
Figure 2: Feynman diagrams which give rise to the scattering process
$\mu^{-}\mu^{+}\to\mu^{-}\mu^{+}$: (a) $s$-channel; (b) $t$-channel.
By explicit calculations of these Feynman diagrams, the amplitudes of $s$\-
and $t$-channels are given by
$\displaystyle i{\cal M}_{s}$ $\displaystyle=$
$\displaystyle\left[\bar{v}(p_{2})(g_{h}\gamma^{\rho\sigma\kappa})u(p_{1})\right]\frac{-i{\cal
P}_{\rho\sigma\kappa,\,\mu\nu\lambda}(Q)}{Q^{2}-m_{\Omega}^{2}}\left[\bar{u}(k_{1})(g_{h}\gamma^{\mu\nu\lambda})v(k_{2})\right]$
$\displaystyle=$
$\displaystyle\frac{i6g_{h}^{2}}{m_{\Omega}^{2}}[\bar{u}(k_{1})\gamma^{\mu}\gamma^{5}v(k_{2})][\bar{v}(p_{2})\gamma_{\mu}\gamma^{5}u(p_{1})]\,,$
$\displaystyle i{\cal M}_{t}$ $\displaystyle=$
$\displaystyle\left[\bar{u}(k_{1})(g_{h}\gamma^{\rho\sigma\kappa})u(p_{1})\right]\frac{-i{\cal
P}_{\rho\sigma\kappa,\,\mu\nu\lambda}(q)}{q^{2}-m_{\Omega}^{2}}\left[\bar{v}(p_{2})(g_{h}\gamma^{\mu\nu\lambda})v(k_{2})\right]$
(19) $\displaystyle=$
$\displaystyle\frac{i6g_{h}^{2}}{m_{\Omega}^{2}}[\bar{u}(k_{1})\gamma^{\mu}\gamma^{5}u(p_{1})][\bar{v}(p_{2})\gamma_{\mu}\gamma^{5}u(k_{2})]\,,$
where we have defined $Q\equiv p_{1}+p_{2}=k_{1}+k_{2}$ and $q\equiv
p_{1}-k_{1}=k_{2}-p_{2}$ and used the on-shell conditions for external muons.
Also, we have worked in the high-energy limit $s\equiv Q^{2}\gg m_{\mu}^{2}$
so that the muon mass can be ignored. In order to obtain the explicit
expressions of amplitudes, we need to specify the helicty states for the four
external muons. Before making explicit calculations, we should mention that
the Lagrangian for $\Omega$ in Eq. (2) is parity-invariant ($P$-invariant), so
that two processes related by the parity transformation should have exactly
the same amplitude. For example, we can have the following identities:
$\displaystyle{\cal M}(\mu^{-}_{R}\mu^{+}_{L}\to\mu^{-}_{R}\mu^{+}_{L})={\cal
M}(\mu^{-}_{L}\mu^{+}_{R}\to\mu^{-}_{L}\mu^{+}_{R})\,,~{}{\cal
M}(\mu^{-}_{R}\mu^{+}_{L}\to\mu^{-}_{L}\mu^{+}_{R})={\cal
M}(\mu^{-}_{L}\mu^{+}_{R}\to\mu^{-}_{R}\mu^{+}_{L})\,,$ (20)
and so on. Therefore, the number of independent non-vanishing amplitudes is
greatly reduced. In the following, we provide the calculation details for
various nonzero independent amplitudes.
### IV.1 $\mu^{-}_{R}\mu^{+}_{L}\to\mu^{-}_{R}\mu^{+}_{L}$
In the center-of-mass (com) frame, the momenta and spinors of the incoming
particles are given by
$\displaystyle\mu^{-}_{R}:~{}p_{1}=(E,0,0,E)\,,$
$\displaystyle\mu^{+}_{L}:~{}p_{2}=(E,0,0,-E)\,,$ $\displaystyle
u_{R}(p_{1})=\sqrt{2E}\left(\begin{array}[]{c}0\\\ 0\\\ 1\\\
0\end{array}\right)\,,$ $\displaystyle
v_{L}(p_{2})=\sqrt{2E}\left(\begin{array}[]{c}0\\\ 0\\\ 0\\\
-1\end{array}\right)\,,$ (29)
while the outgoing particles are specified by
$\displaystyle\mu_{R}^{-}:~{}k_{1}=(E,E\sin\theta,0,E\cos\theta)\,,$
$\displaystyle\mu_{L}^{+}:~{}k_{2}=(E,-E\sin\theta,0,-E\cos\theta)\,,$
$\displaystyle u_{R}(k_{1})=\sqrt{2E}\left(\begin{array}[]{c}0\\\ 0\\\
\cos\frac{\theta}{2}\\\ \sin\frac{\theta}{2}\end{array}\right)\,,$
$\displaystyle v_{L}(k_{2})=\sqrt{2E}\left(\begin{array}[]{c}0\\\ 0\\\
\sin\frac{\theta}{2}\\\ -\cos\frac{\theta}{2}\end{array}\right)\,,$ (38)
where we have taken the high-energy limit so that external muon masses can be
neglected. By putting these explicit spinors into the general
$\mu^{-}$$\mu^{+}$ elastic scattering amplitudes in Eq. (IV) for the $s$\- and
$t$-channels, we can obtain the total amplitude as follows
$\displaystyle i{\cal
M}(\mu^{-}_{R}\mu^{+}_{L}\to\mu^{-}_{R}\mu^{+}_{L})=i{\cal M}_{s}+i{\cal
M}_{t}=-24ig_{h}^{2}\left(\frac{s+t}{m_{\Omega}^{2}}\right)\,,$ (39)
where we have used the definitions $s=Q^{2}$ and $t=q^{2}$.
### IV.2 $\mu_{R}^{-}\mu_{L}^{+}\to\mu^{-}_{L}\mu^{+}_{R}$
In this case, the momenta for external muons are still taken as in Eqs. (IV.1)
and (IV.1). Since the incoming $\mu^{-}\mu^{+}$ takes the same helicity
configuration, the corresponding spinors are kept to be the same. However, due
to the helicity flips for the outgoing muon pair, the spinors for these two
particles are modified as follows
$\displaystyle
u_{L}(k_{1})=\sqrt{2E}\left(\begin{array}[]{c}-\sin\frac{\theta}{2}\\\
\cos\frac{\theta}{2}\\\ 0\\\ 0\end{array}\right)\,,$ $\displaystyle
v_{R}(k_{2})=\sqrt{2E}\left(\begin{array}[]{c}\cos\frac{\theta}{2}\\\
\sin\frac{\theta}{2}\\\ 0\\\ 0\end{array}\right)\,.$ (48)
It turns out that the $t$-channel amplitude in Eq. (IV) vanishes, owing to
the helicity mismatch in the spinor bilinears. On the other hand,
the $s$-channel does contribute to the total amplitude as follows
$\displaystyle i{\cal
M}(\mu^{-}_{R}\mu^{+}_{L}\to\mu^{-}_{L}\mu^{+}_{R})=i{\cal
M}_{s}=12ig_{h}^{2}\left(\frac{t}{m_{\Omega}^{2}}\right)\,.$ (49)
### IV.3 $\mu^{-}_{R}\mu^{+}_{R}\to\mu^{-}_{R}\mu^{+}_{R}$
In the com frame, the spinors for incoming and outgoing particles can be given
by
$\displaystyle u_{R}(p_{1})=\sqrt{2E}\left(\begin{array}[]{c}0\\\ 0\\\ 1\\\
0\end{array}\right)\,,$ $\displaystyle
v_{R}(p_{2})=\sqrt{2E}\left(\begin{array}[]{c}1\\\ 0\\\ 0\\\
0\end{array}\right)\,,$ (58) $\displaystyle
u_{R}(k_{1})=\sqrt{2E}\left(\begin{array}[]{c}0\\\ 0\\\
\cos\frac{\theta}{2}\\\ \sin\frac{\theta}{2}\end{array}\right)\,,$
$\displaystyle
v_{R}(k_{2})=\sqrt{2E}\left(\begin{array}[]{c}\cos\frac{\theta}{2}\\\
\sin\frac{\theta}{2}\\\ 0\\\ 0\end{array}\right)\,.$ (67)
By inserting the above explicit expressions of the external muon spinors into
Eq. (IV), we obtain the amplitudes for the $s$\- and $t$-channels. As a
result, it is found that the $s$-channel does not contribute to the
amplitude, while the $t$-channel gives the entire contribution as follows
$\displaystyle i{\cal
M}(\mu^{-}_{R}\mu^{+}_{R}\to\mu^{-}_{R}\mu^{+}_{R})=i{\cal
M}_{t}=-12ig_{h}^{2}\left(\frac{s}{m_{\Omega}^{2}}\right)\,.$ (68)
### IV.4 Numerical Analysis of Unitarity Bounds
Due to the parity symmetry of the Lagrangian in Eq. (2), Eqs. (49), (39) and
(68) give all the nonzero independent amplitudes for the process
$\mu^{-}\mu^{+}\to\mu^{-}\mu^{+}$, which can be used to yield the unitarity
bounds on the 3-form field coupling to muons. Note that the unitarity of the
S-matrix gives the following constraint Lee:1977eg ; Goodsell:2018tti ;
Banta:2021dek ; Huang:2022zop on the $s$-wave projected amplitude
$\displaystyle\mbox{Re}(a_{0})(\sqrt{s})\leqslant\frac{1}{2}\,,$ (69)
where $a_{0}(\sqrt{s})$ is defined as
$\displaystyle a_{0}(\sqrt{s})=\sqrt{\frac{4|{\bf p}_{i}||{\bf
p}_{f}|}{2^{\delta_{i}+\delta_{f}}s}}\frac{1}{32\pi}\int^{1}_{-1}d(\cos\theta){\cal
M}(i\to f)=\sqrt{\frac{4|{\bf p}_{i}||{\bf
p}_{f}|}{2^{\delta_{i}+\delta_{f}}s}}\frac{1}{16\pi}\int^{0}_{-s}\frac{dt}{s}{\cal
M}(i\to f)\,,$ (70)
where the indices $\delta_{i(f)}=1$ if the two particles in the initial
(final) states are identical to one another, otherwise $\delta_{i(f)}=0$.
By making use of the definition of $a_{0}(\sqrt{s})$, we can obtain the
$s$-wave projected amplitudes for different helicity configurations as follows
$\displaystyle|a_{0}(\mu_{R}^{-}\mu_{L}^{+}\to\mu_{R}^{-}\mu_{L}^{+})|$
$\displaystyle=$
$\displaystyle\frac{3g_{h}^{2}}{4\pi}\frac{s}{m_{\Omega}^{2}}\sim\frac{3g_{h}^{2}}{\pi}\frac{\Lambda^{2}}{m_{\Omega}^{2}}\,,$
$\displaystyle|a_{0}(\mu_{R}^{-}\mu_{L}^{+}\to\mu_{L}^{-}\mu_{R}^{+})|$
$\displaystyle=$
$\displaystyle\frac{3g_{h}^{2}}{8\pi}\frac{s}{m_{\Omega}^{2}}\sim\frac{3g_{h}^{2}}{2\pi}\frac{\Lambda^{2}}{m_{\Omega}^{2}}\,,$
$\displaystyle|a_{0}(\mu^{-}_{R}\mu^{+}_{R}\to\mu^{-}_{R}\mu^{+}_{R})|$
$\displaystyle=$
$\displaystyle\frac{3g_{h}^{2}}{4\pi}\frac{s}{m_{\Omega}^{2}}\sim\frac{3g_{h}^{2}}{\pi}\frac{\Lambda^{2}}{m_{\Omega}^{2}}\,,$
(71)
where in the last equality of each equation above we have taken the high-
energy limit $s\sim 4\Lambda^{2}$ with $\Lambda$ as the UV cutoff scale. By
further requiring the upper limit on ${\rm Re}(a_{0})(\sqrt{s})$, we can
obtain the following unitarity bound for each channel
$\displaystyle\mu^{-}_{R}\mu^{+}_{L}\to\mu^{-}_{R}\mu^{+}_{L}/\mu^{-}_{R}\mu^{+}_{R}\to\mu^{-}_{R}\mu^{+}_{R}:$
$\displaystyle|g_{h}|\lesssim\sqrt{\frac{\pi}{6}}\frac{m_{\Omega}}{\Lambda}\,,$
(72) $\displaystyle\mu^{-}_{R}\mu^{+}_{L}\to\mu^{-}_{L}\mu^{+}_{R}:$
$\displaystyle|g_{h}|\lesssim\sqrt{\frac{\pi}{3}}\frac{m_{\Omega}}{\Lambda}\,.$
(73)
Therefore, it is found that the channels
$\mu^{-}_{R}\mu^{+}_{L}\to\mu^{-}_{R}\mu^{+}_{L}$ and
$\mu^{-}_{R}\mu^{+}_{R}\to\mu^{-}_{R}\mu^{+}_{R}$ provide the strongest
constraint with $|g_{h}|\lesssim\sqrt{\pi/6}(m_{\Omega}/\Lambda)$.
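The projection in Eq. (70) and the resulting bounds can be reproduced directly; a minimal sketch of ours (note that $\delta_{i}=\delta_{f}=0$ and $|{\bf p}_{i}|=|{\bf p}_{f}|=\sqrt{s}/2$ in the massless limit, so the kinematic prefactor in Eq. (70) is unity):

```python
import math
import sympy as sp

g, s, t, m = sp.symbols("g_h s t m_Omega", positive=True)

# s-wave projection of Eq. (39) via Eq. (70); the prefactor is 1 here.
M_RLRL = -24 * g**2 * (s + t) / m**2
a0 = sp.integrate(M_RLRL / s, (t, -s, 0)) / (16 * sp.pi)
print(sp.simplify(a0))          # -3*g_h**2*s/(4*pi*m_Omega**2), as in Eq. (71)

# |Re a0| <= 1/2 with s ~ 4*Lambda**2 gives |g_h| <= sqrt(pi/6)*m_Omega/Lambda.
Lam, m_omega = 500.0, 250.0     # GeV, matching the left panel of Fig. 3
g_max = math.sqrt(math.pi / 6) * m_omega / Lam
print(f"|g_h| <= {g_max:.3f}    (so the benchmark g_h = 0.3 is allowed)")
```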
## V Numerical Results
Given the muon $g-2$ expression in Eq. (18) and the strongest unitarity bound
in Eq. (72), we are now ready to explore the parameter space of this model.
The final numerical result is plotted on the $m_{\Omega}$-$|g_{h}|$ plane in Fig.
3, where the cutoff scale entering the unitarity bound is chosen to be
$\Lambda=500$ GeV (left panel) and 1 TeV (right panel). The solid blue band in
each plot corresponds to the parameter space explaining the muon $g-2$ data at
$2\sigma$ CL, while the red shaded region is disfavored by unitarity. Here we
do not show the theoretical limit from perturbativity $|g_{h}|\lesssim 4\pi$,
since it is too weak to be useful in constraining this model. Moreover, in
order to be consistent with the effective field theory treatment, it is
usually required that the mass of $\Omega$ should be much smaller than the
cutoff scale. Thus, we take the upper boundary of the 3-form field $\Omega$
mass to be $m_{\Omega}=300$ GeV (500 GeV) for $\Lambda=500$ GeV (1 TeV) in
Fig. 3.
Figure 3: Parameter spaces on the $m_{\Omega}$-$g_{h}$ plane for the UV cutoff
to be $\Lambda=500$ GeV (left panel) and 1 TeV (right panel), respectively. In
each plot, the blue region represents the parameter space which can explain
the muon $g-2$ at $2\sigma$ CL, while the red region is excluded by the
perturbative unitarity bound. The cyan region on the right panel can account
for at least $50\%$ of the muon $g-2$ anomaly.
In the left panel of Fig. 3 with the cutoff scale $\Lambda=500$ GeV, even
though the unitarity bound begins to restrict the large coupling region, there
is still a considerable amount of parameter space that fully explains the muon
$g-2$ anomaly, with 150 GeV $\leqslant m_{\Omega}\leqslant 300$ GeV and
$0.1\lesssim|g_{h}|\lesssim 0.5$. On the other hand, when the cutoff scale is
increased to $\Lambda\gtrsim 800$ GeV, it is found that the blue colored
region totally resolving the muon $g-2$ discrepancy is ruled out by unitarity
arguments, which is illustrated in the right panel of Fig. 3 for $\Lambda=1$
TeV. This implies that the current muon $g-2$ data favors a light tensor field
$\Omega$ with $m_{\Omega}\sim{\cal O}$(100 GeV) and a low cutoff scale
$\Lambda\lesssim 800$ GeV. However, if we relax our requirement and only need
our 3-form model to account for at least $50\%$ of the muon $g-2$ anomaly,
more parameter space now opens, which is colored in cyan in the plot for
$\Lambda=1$ TeV. It is seen that some of the cyan region now escapes the
strong unitarity bound, indicating that our model of the 3-form field can
explain at least $50\%$ of the muon $g-2$ discrepancy.
## VI Conclusions
In the present work, we have considered the possibility to explain the long-
standing muon $g-2$ anomaly by introducing in the SM a 3-form field $\Omega$,
which can naturally appear in many fundamental theories such as the type IIA
string theory and the HUFT. In particular, we have presented the leading-order
analytic expression of the muon $g-2$ by calculating the one-loop Feynman diagram
induced by $\Omega$. We have also taken into account the theoretical
constraints from perturbativity and unitarity. Especially, we have computed
the independent tree-level $\mu^{+}\mu^{-}$ scattering amplitudes of all
initial and final helicity configurations, from which we have seen that the
channels
$\mu^{-}_{R}\mu^{+}_{L}\to\mu^{-}_{R}\mu^{+}_{L}$/$\mu^{-}_{R}\mu^{+}_{R}\to\mu^{-}_{R}\mu^{+}_{R}$
lead to the most stringent bound on this 3-form field model. Then we have
numerically explored the parameter space which is of interest to explain the
muon $g-2$ anomaly. As a result, despite the strong constraint from the
unitarity, we have still found a substantial parameter region which can solve
the muon $g-2$ discrepancy with the benchmark scenario as $m_{\Omega}\sim 250$
GeV, $|g_{h}|\sim 0.3$, and $\Lambda\sim 500$ GeV. However, when the UV cutoff
scale has risen to a larger value $\Lambda\gtrsim 800$ GeV, all parameter
regions fully resolving the $(g-2)_{\mu}$ anomaly have been excluded by
unitarity. Nevertheless, in the case with $\Lambda=1$ TeV, we have also shown
that at least 50$\%$ of the muon $g-2$ discrepancy can be accounted for by our
3-form field theory without conflicting with the strong unitarity bound.
Note that the interpretation of the muon $g-2$ anomaly requires the 3-form
field $\Omega$ to be relatively light and to have a moderately strong coupling
to $\mu$ leptons, so that it should be well tested by the existing collider
experiments. For example, if we extend this model by assuming that $\Omega$
has a universal coupling to all the SM leptons and quarks, the effective
$\Omega$-gluon interaction could be inevitably generated via SM quark loops.
Thus, the 3-form particle $\Omega$ can be produced at hadronic colliders such
as the LHC mainly through the gluon fusion channel. By observing its decay
products such as $t\bar{t}$, dijets, diphotons, dibosons, and dileptons, such a
universal-coupling model with a light $\Omega$ has already been ruled out by
the current ATLAS ATLAS:2018rvc ; ATLAS:2018qto ; ATLAS:2019nat ;
ATLAS:2019erb ; Wang:2021rvc and CMS CMS:2018rkg ; CMS:2018mgb ; CMS:2019qem
; CMS:2018ipm ; CMS:2018dqv ; Radburn-Smith:2018wfo data. On the other hand,
if this 3-form field is leptophilic, i.e., $\Omega$ only couples to the SM
lepton sector strongly, then the LHC constraints cannot be applied here.
Nevertheless, with the assumption that $\Omega$ interacts with $e^{\pm}$ by a
similar coupling to muons, such a model can be falsified in the near-future
$e^{+}e^{-}$ colliders such as CLIC Linssen:2012hp ; Aicheler:2012bya , ILC
Baer:2013cma , CEPC CEPCStudyGroup:2018rmc ; CEPCStudyGroup:2018ghi and FCC-
ee FCC:2018evy . Note that the cross section of $e^{-}e^{+}\to\mu^{-}\mu^{+}$
mediated by $\Omega$ is given by
$\displaystyle\sigma(e^{-}e^{+}\to\mu^{-}\mu^{+})=\frac{3g_{h}^{4}}{\pi}\frac{s}{m_{\Omega}^{4}}\,,$
(74)
where $s$ is the com energy squared of the $e^{+}e^{-}$ pair. Interestingly,
the cross section in Eq. (74) increases monotonically with $s$ and exhibits
no pole structure at $s=m_{\Omega}^{2}$, which is quite unusual in
an $s$-channel process. Such a feature is caused by the fact that
$(s-m_{\Omega}^{2})$ originally appearing in the denominator of the $\Omega$
propagator in Eq. (5) has been canceled exactly by the same factor generated
in the numerator of the $s$-channel amplitude. Furthermore, if we take
benchmark parameters as $m_{\Omega}=250$ GeV, $g_{h}=0.3$, and $\sqrt{s}\sim
300~{}{\rm GeV}$ which is well below the cutoff scale $\Lambda=500$ GeV as
indicated by the left panel of Fig. 3, the cross section for the
$\mu^{-}\mu^{+}$ pair production induced by $\Omega$ can be estimated to be
$\sim 0.7$ ab. Such a large cross section with its unconventional rise with
$s$ indicates that the process $e^{-}e^{+}\to\mu^{-}\mu^{+}$ is a promising
target in searching for the 3-form field $\Omega$.
Finally, we would like to emphasize that our treatment of the 3-form field
$\Omega$ is, at best, effective only at low energies. Indeed, as clearly seen
from Eqs. (39), (49), (68), and (74), the $\mu^{-}\mu^{+}\to\mu^{-}\mu^{+}$
amplitudes of various polarization configurations and the cross section of
$e^{-}e^{+}\to\mu^{-}\mu^{+}$ increase with the com energy squared $s$ and
would violate unitarity at high energies. This pathology is why we have
introduced a UV cutoff scale $\Lambda$ in the theory. It is highly possible
that the theory is unitarized at energies around $\Lambda$ by some mechanisms,
such as the Higgs or Stückelberg mechanisms, which might also
give the mass to the 3-form field and make $\Omega$ a propagating degree of
freedom. Or $\Omega$ is just an effective description of the dynamics when $s$
is much lower than $\Lambda$, as the massless counterpart is not dynamical at
all. Another theoretical problem related to $\Omega$ is the renormalizability
of the theory. As shown in Eqs. (5) and (6), the propagator of the 3-form
field $\Omega$ does not obey the conventional power-counting rules of SM
scalar or vector particles, since adding more internal $\Omega$ propagators
increases the UV divergences of the loop integrals, rather than
ameliorating the UV behavior. However, in the present work, we concentrate on
explaining the muon $g-2$ in terms of the one-loop effects of the 3-form field
$\Omega$, whose finiteness reveals that its prediction is only determined by
low-energy dynamics, regardless of the high-energy details or the UV
completion of the theory.
## Acknowledgements
DH is supported in part by the National Natural Science Foundation of China
(NSFC) under Grant No. 12005254, the National Key Research and Development
Program of China under Grant No. 2021YFC2203003, and the Key Research Program
of the Chinese Academy of Sciences under Grant No. XDPB15. YT is supported in part
by NSFC under Grants No. 11851302, the Fundamental Research Funds for the
Central Universities and Key Research Program of the Chinese Academy of
Sciences. YLW is supported in part by the National Key Research and
Development Program of China under Grant No. 2020YFC2201501, and NSFC under
Grants No. 11851303, No. 11690022, No. 11747601, the Strategic Priority
Research Program of the Chinese Academy of Sciences under Grant No.
XDB23030100, and NSFC special fund for theoretical physics under Grant No.
12147103.
## References
* (1) M. B. Green, J. H. Schwarz and E. Witten, 1988, ISBN 978-0-521-35752-4
* (2) M. B. Green, J. H. Schwarz and E. Witten, 1988, ISBN 978-0-521-35753-1
* (3) J. Polchinski, Cambridge University Press, 2007, ISBN 978-0-511-25228-0, 978-0-521-63304-8, 978-0-521-67228-3 doi:10.1017/CBO9780511618123
* (4) Y. L. Wu, Phys. Rev. D 93, no.2, 024012 (2016) doi:10.1103/PhysRevD.93.024012 [arXiv:1506.01807 [hep-th]].
* (5) Y. L. Wu, Sci. Bull. 62, no.16, 1109-1113 (2017) doi:10.1016/j.scib.2017.08.005 [arXiv:1705.06365 [hep-th]].
* (6) Y. L. Wu, Eur. Phys. J. C 78, no.1, 28 (2018) doi:10.1140/epjc/s10052-017-5504-3 [arXiv:1712.04537 [hep-th]].
* (7) Y. L. Wu and R. Zhang, Commun. Theor. Phys. 70, no.2, 161 (2018) doi:10.1088/0253-6102/70/2/161 [arXiv:1808.09797 [physics.gen-ph]].
* (8) Y. L. Wu, Int. J. Mod. Phys. A 36, no.28, 2143001 (2021) doi:10.1142/S0217751X21430016 [arXiv:2104.05404 [physics.gen-ph]].
* (9) Y. L. Wu, Int. J. Mod. Phys. A 36, no.28, 2143002 (2021) doi:10.1142/S0217751X21430028 [arXiv:2104.11078 [physics.gen-ph]].
* (10) Y. L. Wu, [arXiv:2208.03290 [hep-th]].
* (11) E. Sezgin and P. van Nieuwenhuizen, Phys. Rev. D 22, 301 (1980) doi:10.1103/PhysRevD.22.301
* (12) R. L. Workman [Particle Data Group], PTEP 2022, 083C01 (2022)
* (13) B. Abi et al. [Muon g-2 Collaboration], Phys. Rev. Lett. 126, no.14, 141801 (2021) doi:10.1103/PhysRevLett.126.141801 [arXiv:2104.03281 [hep-ex]].
* (14) G. W. Bennett et al. [Muon g-2 Collaboration], Phys. Rev. D 73, 072003 (2006) doi:10.1103/PhysRevD.73.072003 [arXiv:hep-ex/0602035 [hep-ex]].
* (15) T. Aoyama, M. Hayakawa, T. Kinoshita and M. Nio, Phys. Rev. Lett. 109, 111808 (2012) doi:10.1103/PhysRevLett.109.111808 [arXiv:1205.5370 [hep-ph]].
* (16) T. Aoyama, T. Kinoshita and M. Nio, Atoms 7, no.1, 28 (2019) doi:10.3390/atoms7010028
* (17) A. Czarnecki, W. J. Marciano and A. Vainshtein, Phys. Rev. D 67, 073006 (2003) [erratum: Phys. Rev. D 73, 119901 (2006)] doi:10.1103/PhysRevD.67.073006 [arXiv:hep-ph/0212229 [hep-ph]].
* (18) C. Gnendiger, D. Stöckinger and H. Stöckinger-Kim, Phys. Rev. D 88, 053005 (2013) doi:10.1103/PhysRevD.88.053005 [arXiv:1306.5546 [hep-ph]].
* (19) M. Davier, A. Hoecker, B. Malaescu and Z. Zhang, Eur. Phys. J. C 77, no.12, 827 (2017) doi:10.1140/epjc/s10052-017-5161-6 [arXiv:1706.09436 [hep-ph]].
* (20) A. Keshavarzi, D. Nomura and T. Teubner, Phys. Rev. D 97, no.11, 114025 (2018) doi:10.1103/PhysRevD.97.114025 [arXiv:1802.02995 [hep-ph]].
* (21) G. Colangelo, M. Hoferichter and P. Stoffer, JHEP 02, 006 (2019) doi:10.1007/JHEP02(2019)006 [arXiv:1810.00007 [hep-ph]].
* (22) M. Hoferichter, B. L. Hoid and B. Kubis, JHEP 08, 137 (2019) doi:10.1007/JHEP08(2019)137 [arXiv:1907.01556 [hep-ph]].
* (23) M. Davier, A. Hoecker, B. Malaescu and Z. Zhang, Eur. Phys. J. C 80, no.3, 241 (2020) [erratum: Eur. Phys. J. C 80, no.5, 410 (2020)] doi:10.1140/epjc/s10052-020-7792-2 [arXiv:1908.00921 [hep-ph]].
* (24) A. Keshavarzi, D. Nomura and T. Teubner, Phys. Rev. D 101, no.1, 014029 (2020) doi:10.1103/PhysRevD.101.014029 [arXiv:1911.00367 [hep-ph]].
* (25) A. Kurz, T. Liu, P. Marquard and M. Steinhauser, Phys. Lett. B 734, 144-147 (2014) doi:10.1016/j.physletb.2014.05.043 [arXiv:1403.6400 [hep-ph]].
* (26) K. Melnikov and A. Vainshtein, Phys. Rev. D 70, 113006 (2004) doi:10.1103/PhysRevD.70.113006 [arXiv:hep-ph/0312226 [hep-ph]].
* (27) P. Masjuan and P. Sanchez-Puertas, Phys. Rev. D 95, no.5, 054026 (2017) doi:10.1103/PhysRevD.95.054026 [arXiv:1701.05829 [hep-ph]].
* (28) G. Colangelo, M. Hoferichter, M. Procura and P. Stoffer, JHEP 04, 161 (2017) doi:10.1007/JHEP04(2017)161 [arXiv:1702.07347 [hep-ph]].
* (29) M. Hoferichter, B. L. Hoid, B. Kubis, S. Leupold and S. P. Schneider, JHEP 10, 141 (2018) doi:10.1007/JHEP10(2018)141 [arXiv:1808.04823 [hep-ph]].
* (30) A. Gérardin, H. B. Meyer and A. Nyffeler, Phys. Rev. D 100, no.3, 034520 (2019) doi:10.1103/PhysRevD.100.034520 [arXiv:1903.09471 [hep-lat]].
* (31) J. Bijnens, N. Hermansson-Truedsson and A. Rodríguez-Sánchez, Phys. Lett. B 798, 134994 (2019) doi:10.1016/j.physletb.2019.134994 [arXiv:1908.03331 [hep-ph]].
* (32) G. Colangelo, F. Hagelstein, M. Hoferichter, L. Laub and P. Stoffer, JHEP 03, 101 (2020) doi:10.1007/JHEP03(2020)101 [arXiv:1910.13432 [hep-ph]].
* (33) T. Blum, N. Christ, M. Hayakawa, T. Izubuchi, L. Jin, C. Jung and C. Lehner, Phys. Rev. Lett. 124, no.13, 132002 (2020) doi:10.1103/PhysRevLett.124.132002 [arXiv:1911.08123 [hep-lat]].
* (34) G. Colangelo, M. Hoferichter, A. Nyffeler, M. Passera and P. Stoffer, Phys. Lett. B 735, 90-91 (2014) doi:10.1016/j.physletb.2014.06.012 [arXiv:1403.7512 [hep-ph]].
* (35) T. Aoyama, et al. Phys. Rept. 887, 1-166 (2020) doi:10.1016/j.physrep.2020.07.006 [arXiv:2006.04822 [hep-ph]].
* (36) P. Athron, C. Balázs, D. H. J. Jacob, W. Kotlarski, D. Stöckinger and H. Stöckinger-Kim, JHEP 09, 080 (2021) doi:10.1007/JHEP09(2021)080 [arXiv:2104.03691 [hep-ph]].
* (37) E. Sezgin and P. van Nieuwenhuizen, Phys. Rev. D 21, 3269 (1980) doi:10.1103/PhysRevD.21.3269
* (38) G. ’t Hooft, NATO Sci. Ser. B 59, 135-157 (1980) doi:10.1007/978-1-4684-7571-5_9
* (39) R. Rivers, Nuovo Cim. 34 (1964) 387
* (40) P. Van Nieuwenhuizen, Nucl. Phys. B 60, 478-492 (1973) doi:10.1016/0550-3213(73)90194-6
* (41) X. Bekaert, N. Boulanger and P. Sundell, Rev. Mod. Phys. 84, 987-1009 (2012) doi:10.1103/RevModPhys.84.987 [arXiv:1007.0435 [hep-th]].
* (42) S. Weinberg, Phys. Rev. 135, B1049-B1056 (1964) doi:10.1103/PhysRev.135.B1049
* (43) S. R. Coleman and J. Mandula, Phys. Rev. 159, 1251-1256 (1967) doi:10.1103/PhysRev.159.1251
* (44) R. Haag, J. T. Lopuszanski and M. Sohnius, Nucl. Phys. B 88, 257 (1975) doi:10.1016/0550-3213(75)90279-5
* (45) M. T. Grisaru, H. N. Pendleton and P. van Nieuwenhuizen, Phys. Rev. D 15, 996 (1977) doi:10.1103/PhysRevD.15.996
* (46) S. Weinberg and E. Witten, Phys. Lett. B 96, 59-62 (1980) doi:10.1016/0370-2693(80)90212-9
* (47) M. Porrati, Phys. Rev. D 78, 065016 (2008) doi:10.1103/PhysRevD.78.065016 [arXiv:0804.4672 [hep-th]].
* (48) P. Benincasa and F. Cachazo, [arXiv:0705.4305 [hep-th]].
* (49) R. R. Metsaev, Nucl. Phys. B 759, 147-201 (2006) doi:10.1016/j.nuclphysb.2006.10.002 [arXiv:hep-th/0512342 [hep-th]].
* (50) N. Boulanger and S. Leclercq, JHEP 11, 034 (2006) doi:10.1088/1126-6708/2006/11/034 [arXiv:hep-th/0609221 [hep-th]].
* (51) N. Boulanger, S. Leclercq and P. Sundell, JHEP 08, 056 (2008) doi:10.1088/1126-6708/2008/08/056 [arXiv:0805.2764 [hep-th]].
* (52) E. S. Fradkin and M. A. Vasiliev, Nucl. Phys. B 291, 141-171 (1987) doi:10.1016/0550-3213(87)90469-X
* (53) E. S. Fradkin and M. A. Vasiliev, Phys. Lett. B 189, 89-95 (1987) doi:10.1016/0370-2693(87)91275-5
* (54) M. A. Vasiliev, Nucl. Phys. B 862, 341-408 (2012) doi:10.1016/j.nuclphysb.2012.04.012 [arXiv:1108.5921 [hep-th]].
* (55) G. ’t Hooft and M. J. G. Veltman, Nucl. Phys. B 44, 189-213 (1972) doi:10.1016/0550-3213(72)90279-9
* (56) M. E. Peskin and D. V. Schroeder, Addison-Wesley, 1995, ISBN 978-0-201-50397-5
* (57) M. Nebot, J. F. Oliver, D. Palao and A. Santamaria, Phys. Rev. D 77, 093013 (2008) doi:10.1103/PhysRevD.77.093013 [arXiv:0711.0483 [hep-ph]].
* (58) M. Gell-Mann, M. L. Goldberger, N. M. Kroll and F. E. Low, Phys. Rev. 179, 1518-1527 (1969) doi:10.1103/PhysRev.179.1518
# An improved Halo Occupation Distribution prescription from UNITsim Hα
emitters: conformity and modified radial profile
Guillermo Reyes-Peraza 1,2, Santiago Avila 1,2,3, Violeta Gonzalez-Perez 2,4, Daniel Lopez-Cano 2,5, Alexander Knebe 2,4,6, Sujatha Ramakrishnan 2,4, Gustavo Yepes 2,4
1 Instituto de Física Teórica UAM-CSIC, c/ Nicolás Cabrera 13-15, 28049 Madrid, Spain
2 Departamento de Física Teórica, Facultad de Ciencias M-8, Universidad Autónoma de Madrid, 28049 Madrid, Spain
3 Institut de Física d'Altes Energies (IFAE), The Barcelona Institute of Science and Technology, Campus UAB, 08193 Bellaterra (Barcelona), Spain
4 Centro de Investigación Avanzada en Física Fundamental (CIAFF), Facultad de Ciencias, Universidad Autónoma de Madrid, 28049 Madrid, Spain
5 Donostia International Physics Center (DIPC), Paseo Manuel de Lardizabal 4, 20018 Donostia-San Sebastián, Spain
6 International Centre for Radio Astronomy Research, University of Western Australia, 35 Stirling Highway, Crawley, Western Australia 6009, Australia
E-mail: [email protected]; <EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
The Euclid mission is poised to make unprecedented measurements in cosmology
from the distribution of galaxies with strong Hα spectral emission lines.
Accurately interpreting this data requires understanding the imprints imposed
by the physics of galaxy formation and evolution on galaxy clustering. In this
work we utilize a semi-analytical model of galaxy formation (SAGE) to explore
the necessary components for accurately reproducing the clustering of Euclid-
like samples of Hα emitters. We focus on developing a Halo Occupation
Distribution (HOD) prescription able to reproduce the clustering of SAGE
galaxies. Typically, HOD models assume that satellite and central galaxies of
a given type are independent events. We investigate the need for conformity,
i.e. whether the average satellite occupation depends on the existence of a
central galaxy of a given type. Incorporating conformity into HOD models is
crucial for reproducing the clustering in the reference galaxy sample. Another
aspect we investigate is the radial distribution of satellite galaxies within
haloes. The traditional density profile models, NFW and Einasto profiles, fail
to accurately replicate the small-scale clustering measured for SAGE satellite
galaxies. To overcome this limitation, we propose a generalization of the NFW
profile, thereby enhancing our understanding of galaxy clustering.
###### keywords:
methods: numerical – galaxies: formation – cosmology: large-scale structure of
Universe
## 1 INTRODUCTION
The study of the large-scale structure in the Universe plays a crucial role in
contemporary cosmology. It serves as a powerful tool for understanding the
fundamental properties of our Universe, including the values of different
cosmological parameters and the nature of dark matter and dark energy. Galaxy
clustering is an observable that allows us to probe both the cosmological
parameters and the physics of galaxy formation.
Over the past few decades, numerous endeavours have been dedicated to creating
large galaxy maps, such as the 2dFGRS (Cole et al., 2005), SDSS (York et al.,
2000; Eisenstein et al., 2005), WiggleZ (Drinkwater et al., 2010; Parkinson et
al., 2012), BOSS (Dawson et al., 2013; Alam et al., 2017), eBOSS (Dawson et
al., 2016; Alam et al., 2021a), and DES (The Dark Energy Survey Collaboration,
2005; Abbott et al., 2018). These endeavours have enhanced our understanding of the Universe by providing measurements of its expansion history and tightening the constraints on dark energy and alternative theories of gravity.
These cosmological surveys have provided the community with large 3-D maps of
galaxies up to $z\sim 1$, allowing for a better understanding of the evolution
of galaxies. Despite significant progress, these areas of investigation remain
open and ongoing.
New generation surveys, including Euclid (Laureijs et al., 2011; Amendola et
al., 2018), DESI (DESI Collaboration et al., 2016), the Nancy Grace Roman
Space Telescope (Spergel et al., 2013, 2015) and the 4-metre Multi-Object
Spectroscopic Telescope (de Jong et al., 2012), are set to map galaxies beyond
$z\sim 1$ and to increase the density of QSOs at higher redshifts. One of the
main tracers beyond $z\sim 1$ are galaxies with strong spectral emission lines
or emission line galaxies (ELGs). In particular, Euclid utilizes near-infrared
grisms to observe galaxies with a strong Hα spectral line, or Hα emitters.
Hα emitters are mostly galaxies with strong star formation rates (e.g. Favole
et al., 2023). From both hydrodynamical and semi-analytical models (SAMs) of
galaxy formation, we expect the connection between galaxies selected by their
star-formation rate and dark matter haloes to be more complex than for
stellar-mass selected samples (e.g. Zheng et al., 2005; Orsi & Angulo, 2018;
Gonzalez-Perez et al., 2020). SAMs of galaxy formation and evolution populate
dark matter haloes at early cosmic times with gas and let them evolve with a
set of coupled differential equations (e.g. Cole et al., 2000; Croton et al.,
2016; Hirschmann et al., 2016; Cora et al., 2018; Lagos et al., 2019). These
equations aim to encapsulate the physical processes that are understood to be
relevant during the formation and evolution of galaxies, such as the cooling
of gas, the formation of stars, the interplay between super massive black
holes and the intergalactic medium, etc (see the reviews on modelling galaxies
by Baugh, 2006; Somerville & Davé, 2015; Wechsler & Tinker, 2018). SAMs are
faster than hydrodynamical simulations (e.g. Schaye et al., 2015; McCarthy et al., 2017; Springel et al., 2018; Pillepich et al., 2018), as they are run on dark-matter-only N-body simulations, at the expense of losing the effect that baryons have on dark matter haloes (Schneider & Teyssier, 2015; Aricò et al., 2020).
In this work we use SAGE, the SAM introduced in Croton et al. (2016), to
explore how Hα emitters populate dark matter halos. SAMs have been previously
used to produce Euclid-like and Roman-like Hα emitter catalogues that have
been used to understand the clustering of Hα galaxies and their relation to
halos (Merson et al., 2018; Zhai et al., 2019, 2021b, 2021a; Knebe et al.,
2022). Here we use the SAGE run on the UNIT dark-matter only simulation
(Chuang et al., 2019), as described and released in the work by Knebe et al.
(2022). This simulation uses the fixed and paired technique to increase its
effective volume, allowing more precise statistical analyses (Angulo &
Pontzen, 2016).
SAMs can produce model galaxies accounting for a large range of physical
phenomena in simulation volumes comparable to those required to interpret
observations from current cosmological surveys. Nevertheless, they are still
too slow to be able to produce many realisations and they require the
construction of merger trees tracing the evolution of halos in fine time
slices, which is not available in many simulations due to computing/storing
limitations. The Halo Occupation Distribution (HOD) model provides a
simplified way to describe the relation between galaxies and haloes and can be
constructed from a single halo catalogue. These simplifications make HOD models valid for the target observables they are constructed to match, but they can show limitations when extended to other observables (e.g. Chaves-Montero et al., 2023).
HOD models are computationally efficient. They have demonstrated their
capability to reproduce the observed clustering across a range of galaxies,
including the dependency with luminosity and colour (Zehavi et al., 2005;
Christodoulou et al., 2012; Carretero et al., 2015). In their basic form, HOD
models assume that the mass of a halo determines the average number of galaxies it can host (e.g. Benson et al., 2000; Berlind et al., 2003; Zheng et al.,
2005). Catalogues of model galaxies produced with HOD models can be produced
fast enough as to choose the best fit to a set of clustering observables using
statistical techniques that require hundreds or thousands of realisations in
large simulation volumes (e.g. Alam et al., 2021b; Yuan et al., 2023a; Rocher
et al., 2023a). HOD models have become a popular tool for producing realistic
mock catalogs in large simulation volumes to match the clustering of a target
galaxy sample (e.g., in BOSS, DES and eBOSS: Manera et al., 2013; Avila et
al., 2018, 2020).
The objective of this work is to construct an HOD model that accurately
reproduces the galaxy clustering of the Euclid Hα emitters modelled by SAGE.
In particular, we explore if two of the usual assumptions made by HOD models
should be challenged when compared with model Hα emitters. One of these usual assumptions is that central and satellite galaxies of a certain type are independent events within a halo, i.e. a lack of conformity. The other is the
modelling of the radial profile of satellite galaxies.
One of the main focuses of this paper is to study the effect that galactic
conformity has on the clustering of Hα emitters. The term galactic conformity
was first coined by Weinmann et al. (2006) to describe strong correlations
between the properties of satellite galaxies and their central galaxies in
data from the Sloan Digital Sky Survey (SDSS). A strong 1-halo conformity for
ELGs has recently been suggested from the comparison of DESI data with model
galaxies produced with different HOD models (Rocher et al., 2023b; Gao et al.,
2023). Overall, galactic conformity remains an active research topic and a
significant puzzle in the formation and evolution of galaxies.
The second main focus of this work is to explore whether the typical
assumptions made by HOD models for the radial profiles of satellite galaxies
are upheld. Commonly, HOD models assume that the distribution of satellite
galaxies within the dark matter halo follows the distribution of the dark
matter itself, or often an NFW (Navarro et al., 1997) or an Einasto (Einasto,
1969) profile. We will try different ways to sample these types of profiles
and also propose an extension.
The plan of this paper is as follows. In § 2 we introduce the UNIT DM simulation. In § 3 we describe the reference model galaxy sample. First (§ 3.1), we introduce UNITsim-SAGE, how our reference Hα galaxy sample for Euclid is built, and we describe the average halo occupation distribution that we find. We then describe in § 3.2 how we build the final reference sample by applying the shuffling method to the previous model sample. In § 4, we detail the various halo occupation models and properties that we employ to generate mock catalogues. § 5 discusses conformity, the method we employ to model it and its effect on galaxy clustering. § 6 is devoted to studying the radial profile of SAGE satellite galaxies and how to reproduce it in order to recover accurate Hα galaxy clustering. Finally, in § 7, we conclude and discuss our findings and future prospects.
## 2 UNIT DM simulation
The UNIT suite is a set of N-body cosmological simulations (Chuang et al.,
2019) designed to cover the expected halo masses for emission line galaxies
(ELGs), in particular [OII] emitters targeted by DESI ($\sim 10^{11}h^{-1}{\rm M}_{\odot}$, Gonzalez-Perez et al., 2020) and Euclid Hα emitters ($\sim 4\times 10^{11}h^{-1}{\rm M}_{\odot}$, Cochrane et al., 2017). To reduce the cosmic
variance in the UNIT simulations, we employ the Fix and Pair method (Angulo &
Pontzen, 2016). Following this method, two pairs of simulations are generated, each with initial conditions whose power spectrum amplitude is fixed to the expected value given by the input power spectrum. Within each pair, the initial conditions of one simulation are set with phases opposite to those of the other (Chuang et al., 2019). These simulations were
generated using GADGET2 (Springel, 2005), which fully solves the gravitational
evolution of the continuous distribution of dark matter. The phase-space halo
finder ROCKSTAR (Behroozi et al., 2013a) was used to identify haloes from the
$129$ existing snapshots. Subsequently, the ConsistentTrees software (Behroozi
et al., 2013a) was employed to calculate their merging histories. The software
tracks dark matter halos (regions of the universe where gravity has made
matter collapse into denser structures) as they evolve over time and move in
space, constructing a tree-structure that illustrates how these halos merge
together to form larger halos. These merger trees are ’consistent’ in the
sense that the mass and position of a halo at a given moment in the tree are
informed by the mass and position of its progenitors and its descendants.
For this work we use simulations with boxes of side $1h^{-1}\text{Gpc}$ and
$4096^{3}$ dark matter particles. The cosmological parameters of these
simulations are: $\Omega_{m,0}=1-\Omega_{\Lambda,0}=0.3089$, $h=0.6774$, $n_{\rm s}=0.9667$, $\sigma_{8}=0.8147$.
We perform our analysis over the simulation snapshot at $z=1.321$, which
approximately corresponds to the mean redshift expected for Euclid Hα
emitters, $0.9<z<1.8$.
## 3 UNITsim-SAGE: Reference galaxy sample
We aim to provide an improved Halo Occupation Distribution model for Euclid-
like Hα emitters, focusing on conformity and satellite radial profiles. Our
reference catalogue of galaxies is generated from the UNITsim-Galaxies
catalogues described in Knebe et al. (2022) and released along it
111http://www.unitsims.org. This catalogue is based on the SAGE semi-
analytical model of galaxy formation and evolution (Croton et al., 2016) by
populating the dark-matter-only UNITsim simulations (section 2). For each
model galaxy, spectral emission lines associated with star formation events
are obtained with the get_emlines model described in Orsi et al. (2014). One
of these spectral lines is the Hα, targeted by Euclid. The spectral emission
lines are then attenuated by dust using a Cardelli et al. (1989) law following
the code developed in Favole et al. (2020). All these models were implemented
in the catalogue in Knebe et al. (2022) and we refer the reader to that
publication for a more detailed description.
In order to single out the modelling of both conformity and the satellite
radial profiles, we use a shuffled sample, SAGEsh (§ 3.2). To construct this
sample, we remove the galaxy assembly bias: the effect that other properties
beyond halo mass have on the distribution of galaxies within haloes.
### 3.1 Model Hα emitters with an Euclid-like number density
Sample | $\frac{d^{2}N(z)}{dz\,dA}$ [$\rm deg^{-2}$] | Number density [$\rm Mpc^{-3}h^{3}$] | Flux cutoff [$\rm erg\,s^{-1}\,cm^{-2}$]
---|---|---|---
Pozzetti nº1 | $4377$ | $1.299\times 10^{-3}$ | $1.041\times 10^{-16}$
Pozzetti nº3 | $\mathbf{2268}$ | $\mathbf{6.731\times 10^{-4}}$ | $\mathbf{1.325\times 10^{-16}}$
Euclid flux | $580.7$ | $1.723\times 10^{-4}$ | $2\times 10^{-16}$
Table 1: Properties of the three model galaxy catalogues described in section 3. The first column shows the name given to each sample. The second column quotes the number of galaxies per square degree per redshift interval, taken from Pozzetti et al. (2016) for the first two samples and, for the third, obtained by applying the expected Euclid Hα flux limit to the SAGE sample. The third column contains the corresponding volume number densities ($n$). The fourth column shows the Hα flux limit needed to recover the target $n$ for the Pozzetti et al. models, and the flux limit expected for Euclid. Our reference model from subsection 3.2 onwards is Pozzetti nº3 (or its shuffled version, see subsection 3.2), which will be referred to as SAGE (or SAGEsh, when shuffled).
In order to produce Euclid-like catalogues of Hα emitters, we aim to reproduce
the mean number density forecasted by Pozzetti et al. (2016) over the entire
redshift range observed by Euclid: $0.9<z<1.8$. For this, we use a simulation
snapshot at $z=1.321$, which approximately corresponds to the average redshift of Euclid's targeted redshift range. This approach is slightly different to
the one taken in Knebe et al. (2022), where we matched the Pozzetti et al.
(2016) differential luminosity function, at the redshift of the snapshot.
Pozzetti et al. (2016) provides a range of forecasts, with the extreme number
densities being $n_{1}=1.299\times 10^{-3}$ $\rm Mpc^{-3}h^{3}$ and
$n_{3}=6.731\times 10^{-4}$ $\rm Mpc^{-3}h^{3}$, which are tabulated in the
first column of Table 1. These volume number densities, $n$, are obtained from
the angular ones, $\eta=\frac{d^{2}N(z)}{dz\,dA}$, provided by Pozzetti et al. (2016) (second column in Table 1), as follows:
$n=\frac{\eta\,180^{2}}{\pi^{2}\,D_{\rm c}^{2}(z)\,\frac{{\rm d}D_{\rm c}(z)}{{\rm d}z}}\,,$ (1)
where $D_{\rm c}(z)$ is the comoving distance to the object at redshift $z$.
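As an illustration, Equation 1 can be evaluated numerically. The following is a minimal sketch assuming the UNIT cosmology of section 2 and the astropy package (the function name is ours; this is not code released with the catalogues):

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

# UNIT cosmology from section 2: h = 0.6774, Omega_m = 0.3089
cosmo = FlatLambdaCDM(H0=67.74, Om0=0.3089)

def angular_to_volume_density(eta, z):
    """Convert eta = d^2N/(dz dA) [deg^-2] into a comoving number
    density [h^3 Mpc^-3], following Equation 1 (dD_c/dz = c/H(z))."""
    h = cosmo.H0.value / 100.0
    d_c = cosmo.comoving_distance(z).to(u.Mpc).value * h          # Mpc/h
    dDc_dz = 299792.458 / cosmo.H(z).to(u.km / u.s / u.Mpc).value * h
    return eta * 180.0**2 / (np.pi**2 * d_c**2 * dDc_dz)

# Pozzetti model 3 (Table 1): eta = 2268 deg^-2 at z = 1.321
print(angular_to_volume_density(2268.0, 1.321))   # ~6.7e-4 h^3 Mpc^-3
```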
We apply the flux cuts indicated in Table 1 to get the extreme number densities forecasted by Pozzetti et al. These are well below $2\times
10^{-16}{\rm ergs\,s^{-1}\,cm}^{-2}$, the expected Hα flux limit for Euclid.
If we apply this cut to the UNITsim-Galaxies catalogue, we obtain a sample
with a number density of $1.723\times 10^{-4}\,{\rm Mpc^{-3}}h^{3}$, far below those from the Pozzetti et al. models. The exact shape of the SAGE Hα luminosity
function is very sensitive to the dust attenuation modelling, in particular at
the tails of the distribution, sampled by the flux limits. An alternative
approach would be to fine tune the dust attenuation model until we match the
Pozzetti et al. (2016) luminosity function as done in Zhai et al. (2019).
However, it is not clear that this approach is more physical than the one taken here, where we follow the approach described in Knebe et al. (2022). A more physical alternative would be to simultaneously fit the dust attenuation parameters, the emission-line modelling and the SAM to the different available observations, with luminosity functions from different lines or even clustering measurements. However, this is beyond the scope of this paper and deserves a stand-alone study.
Figure 1: Top: The mass function of our catalogue of dark matter haloes (solid black, section 2) and of the different Hα galaxy samples considered: the Euclid-flux raw sample (gold) together with the optimistic (green) and reference (red) samples based on the Pozzetti et al. (2016) luminosity functions (see Table 1). The solid curves correspond to central galaxies and the dashed curves to satellite galaxies. The mass function corresponds to the density of haloes/galaxies for a given halo mass divided by the halo mass bin size. To achieve this, histograms of objects in the halo/galaxy catalogues are created as a function of $M_{\rm 200c}$, and they are normalised by the simulation volume ($(1h^{-1}\text{Gpc})^{3}$) and the size of the corresponding $\log M_{\rm halo}$ bin. Bottom: the mean halo occupation distribution as a function of halo mass. This is computed as the ratio of the galaxy to halo mass functions.
In the upper panel of Figure 1, we present the halo mass function of main dark
matter haloes and galaxies for the three samples summarised in Table 1. In
this figure, we separate the contributions of central (solid curves) and
satellite galaxies (dashed curves). The halo mass functions for dark matter
haloes follow a typical shape close to a power law, with a change in the slope
at the massive end. Central Hα emitters follow this trend above a minimum
halo mass. The numbers of both central and satellite galaxies decrease rapidly
below a minimum halo mass. The number of Hα emitters is lower than that for
haloes for all halo masses. As expected, satellite galaxies populate more
massive haloes than centrals.
The average halo occupation distribution (HOD) for model Hα emitters selected
with different number densities is shown in the lower panel of Figure 1.
Satellite Hα emitters follow the typical truncated power law seen for model
satellite galaxies (e.g. Zheng et al., 2005; Avila et al., 2022).
The average HOD for central model galaxies does not reach unity, as it would in a complete sample of galaxies without any particular type selection.
This was expected from previous theoretical works on star-forming and ELGs
(e.g. Zheng et al., 2005; Gonzalez-Perez et al., 2018; Zhai et al., 2021b) and
also found for DESI ELGs (Rocher et al., 2023b). This implies that we do not
expect to find a Hα emitter in the center of every halo above a certain mass
threshold. In Avila et al. (2020), a similar shape for the mean HOD of central galaxies, although with a steeper decay, was described as an asymmetric Gaussian.
From this point onward, we use Pozzetti et al.’s Model 3 by default, labelled
as SAGE. This model has been used as the baseline for several Euclid studies,
as it is expected to be the most realistic.
### 3.2 SAGEsh: the shuffled reference galaxy sample
At first order, galaxy clustering at large scales only depends on the mass of
the host haloes. However, galaxy clustering can be affected by other halo
properties, what is usually known as assembly bias. Assembly bias has been
measured from simulated haloes and galaxies in different ways (e.g. Lacerna &
Padilla, 2011; Contreras et al., 2021; Jiménez et al., 2021). Moreover, there
is compelling observational evidence of galaxy assembly bias (Obuljen et al.,
2020; Yuan et al., 2021; Wang et al., 2022; Alam et al., 2023).
Properties encapsulating the environment of haloes have been found to be the most relevant secondary property (e.g. Xu et al., 2021). However, previous articles show that other internal properties of the halo, such as $V_{\rm max}$ (the maximum circular velocity of the halo) or the halo's own concentration, could explain at least part of the assembly bias. We attempted to construct an HOD prescription based on $V_{\rm max}$ instead of $M_{\rm halo}$ (à la Figure 1) and found that it reduces the assembly bias signal of SAGE galaxies, but does not completely eliminate it. We quantified this effect by checking how well the 2-halo term (large-scale) clustering of SAGE was recovered (see the discussion of Figure 2 below).
This work is focused on the exploration of both conformity and the satellite
radial profile. To isolate the effect that varying these properties have on
clustering we build a reference galaxy sample for which the effect of assembly
bias is removed, SAGEsh. We can achieve this by shuffling all galaxies within
a given halo to the position of another halo of similar mass. This technique
was first proposed in Croton et al. (2007). In this proccess, the relative
positions (and velocities) of galaxies to the center of a halo as well as
internal properties of galaxies are preserved. It is worth noting that the bin
size for shuffling coincides with the same binning used to calculate the mean
galaxy occupation value (Figure 1). Where the range of the mass is 10.5 < $\rm
log_{10}(M_{\rm 200c})$ < 14.5 with a binning of $\Delta$log$M$ = 0.057.
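For concreteness, a minimal sketch of this shuffling step is given below (array names, the bin edges and the fixed random seed are illustrative; this is not the exact implementation used here):

```python
import numpy as np

def shuffle_galaxies(halo_mass, halo_pos, gal_halo_id, gal_pos, dlogm=0.057):
    """Shuffle the galaxy content of haloes among haloes of similar mass
    (Croton et al. 2007). Within each mass bin, the whole galaxy
    population of a halo is moved rigidly (offsets to the halo centre
    preserved) to a randomly chosen halo of the same bin."""
    rng = np.random.default_rng(42)             # arbitrary seed
    bins = np.arange(10.5, 14.5 + dlogm, dlogm)
    bin_idx = np.digitize(np.log10(halo_mass), bins)

    new_pos = gal_pos.copy()
    for b in np.unique(bin_idx):
        haloes = np.flatnonzero(bin_idx == b)
        targets = rng.permutation(haloes)       # target halo per source halo
        for src, dst in zip(haloes, targets):
            members = gal_halo_id == src
            # keep relative positions: subtract old centre, add new one
            new_pos[members] = gal_pos[members] - halo_pos[src] + halo_pos[dst]
    return new_pos
```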
Throughout this paper we quantify the clustering by calculating the real-space two-point correlation function, $\xi$, with the publicly available CUTE code (Alonso, 2012, https://github.com/damonge/CUTE). We use logarithmic binning with 80 bins in total, 8 bins in $r$ per decade, and $r_{\rm max}=140h^{-1}{\rm Mpc}$. The clustering of model Euclid-like Hα emitters before and after the shuffling process is shown in Figure 2. By construction, the two agree for the 1-halo term contribution to the clustering (pairs of galaxies within the same halo). From $\sim 0.2{\rm Mpc}/h$, the clustering of the SAGE and SAGEsh Hα emitters starts to differ, and stabilises at larger scales to a difference of about 10 per cent (this is the difference that was reduced but not eliminated when using a $V_{\rm max}$-based HOD, as discussed at the beginning of this subsection). This difference at the scales dominated by the 2-halo term (pairs of galaxies from different haloes) is due to the formation history and environment of dark matter haloes, i.e. the assembly bias. For scales smaller than $\sim 0.2{\rm Mpc}/h$, the clustering is correctly repopulated, that is, the shuffling does not affect the 1-halo term. Note that the shuffling procedure does not alter the two
properties we are investigating here, the 1-halo conformity and the
distribution of satellite galaxies within haloes as, by construction, the
positions and properties of galaxies within haloes are preserved.
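As an illustration of the binning just described, the sketch below measures a periodic-box $\xi(r)$ with the Corrfunc package; this is our own substitution, since the measurements in this paper are done with CUTE, and the random coordinates stand in for the galaxy positions:

```python
import numpy as np
from Corrfunc.theory import xi

# Logarithmic bins: 8 per decade, 80 bins in total, r_max = 140 Mpc/h
r_max, per_decade, n_bins = 140.0, 8, 80
bin_edges = r_max * 10.0 ** ((np.arange(n_bins + 1) - n_bins) / per_decade)

# Placeholder coordinates in the periodic 1 Gpc/h box
x, y, z = np.random.default_rng(3).uniform(0.0, 1000.0, size=(3, 10_000))
res = xi(boxsize=1000.0, nthreads=4, binfile=bin_edges, X=x, Y=y, Z=z,
         output_ravg=True)
print(res["ravg"], res["xi"])
```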
In what follows we attempt to reproduce the clustering of the shuffled UNITsim-SAGE Hα catalogue, which we label SAGEsh for short. In Figure 2 we can see our starting point, labelled as Vanilla HOD. In that figure, we can also see the proposed model (default, see subsection 4.3) after the improvements developed in section 5 and section 6. The gain in recovering the reference SAGEsh clustering is remarkable, with our default model closely following the clustering of SAGEsh. We describe the main properties of these
two models in the next section.
Figure 2: Top: Two-point correlation function in real space of different model
galaxy catalogues: SAGE (solid orange), shuffled SAGE (SAGEsh, dashed black)
and two HOD catalogues fitted to SAGE. The Vanilla HOD (subsection 4.2, solid
blue) uses NFW radial profile and assumes that central and satellite Hα
galaxies are independent events (no conformity). The default HOD uses the
concept of conformity and the analytical $N(r)$ function implemented for the
satellite galaxies fitted to SAGE (subsection 4.3, solid red). Bottom: Ratios
with respect to SAGEsh clustering. The error bars are calculated as the
standard deviation of 100 realisations. Remember that we have eliminated the
assembly bias, studying Euclid like model galaxies from SAGEsh, since we
decided not to focus on the effects of assembly bias but to analyze other
properties.
## 4 Halo occupation distribution model
In this section we summarise the halo occupation distribution (HOD) model we use to populate the UNIT dark-matter-only simulation with SAGEsh-like galaxies.
Halo Occupation Distribution (HOD) models are widely used to assign galaxies
of different types to dark matter haloes (e.g. Avila et al., 2020). In their
basic form, the mass of the halo determines the average number of galaxies
that it hosts (e.g. Benson et al., 2000; Berlind et al., 2003; Zheng et al.,
2005). Central and satellite galaxies are modelled separately in HOD models.
In order to implement an HOD galaxy assignment prescription, we first need to choose the shape of the mean HOD (§ 4.1.1); then a probability distribution function determines whether a halo contains a central galaxy and how many satellites it hosts (§ 4.1.3). The radial positions (§ 4.1.4) and velocities of satellite galaxies within haloes are chosen next. In this work, we disregard the velocities of satellites as our focus is solely on the two-point correlation function in real space.
Throughout this work all the model galaxy catalogues are generated at
$z=1.321$ and have fixed number density, $6.731\times 10^{-4}$ $\rm
Mpc^{-3}h^{3}$, and linear bias, $1.86$. These are what we refer to as the Euclid-like Hα emitter samples.
We start with a vanilla HOD model (§ 4.2), with assumptions commonly employed
in the literature and propose a new model that better fits the characteristics
of the model Euclid-like Hα emitters from SAGEsh, our default HOD model (§
4.3). The default model incorporates 1-halo conformity and utilizes the
analytical function proposed in subsection 6.5 for the radial profile of our
satellite galaxy distribution. The clustering of mock galaxies produced with
these two models, are compared to the SAGE and SAGEsh in Figure 2. This Figure
shows how the vanilla HOD model departs from the expectation of SAGEsh Hα
emitters for the 1-halo term. We see that this model under-predicts the
clustering by $\sim 20\%$ at $0.3$Mpc$h^{-1}$ and over-predicts it by more
than $50\%$ at $0.1$Mpc$h^{-1}$. On the other hand, the default model
reproduces much better the galaxy clustering at small scales.
### 4.1 Ingredients of our HOD models
Below, we detail the different aspects that make up the HOD models we use for
our analysis. We do not incorporate the velocity profile of the satellites in this study, as we focus only on the clustering in real space.
#### 4.1.1 Mean HOD shape
The expected numbers of central and satellite galaxies within a halo of mass $M$ are represented by $\langle N_{\rm cen}(M)\rangle$ and $\langle N_{\rm sat}(M)\rangle$, respectively. Figure 1 (bottom panel) illustrates the halo occupation function as a function of halo mass, noting again that we focus on model 3. As mentioned in subsection 3.1, to obtain the corresponding mean values, we first count the number of Hα galaxies of each type in a series of mass intervals and determine the average occupation values by dividing by the number of haloes in each interval. These mass intervals range from 10.5 to 14.8 in $\log(M)$.
In this work, we do not fit a smooth curve to the measured mean HOD, but simply use it as measured. This is in order to stay as close as possible to SAGE, avoiding other sources of differences in the clustering and focusing on the main properties studied in this work (conformity and radial profiles). When applying our HOD model to a halo catalogue, for each main halo we read its mass, which falls in one of the mass bins defined in our HOD. We then assign to this halo the expected number of central galaxies, $\langle N_{\rm cen}\rangle$, and of satellite galaxies, $\langle N_{\rm sat}\rangle$, according to the mass bin. The latter ($\langle N_{\rm sat}\rangle$) will be modified by a factor ($K_{1}$ or $K_{2}$, see section 5) whenever we consider conformity.
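A minimal sketch of this measurement of the mean HOD, with illustrative array names and the bin width quoted in subsection 3.2:

```python
import numpy as np

def measured_mean_hod(halo_logm, gal_host_logm, is_central,
                      dlogm=0.057, lo=10.5, hi=14.8):
    """Mean HOD per mass bin, read directly from the catalogues rather
    than fitted with a smooth curve. Inputs are illustrative arrays:
    log-masses of all main haloes and of each galaxy's host halo."""
    edges = np.arange(lo, hi + dlogm, dlogm)
    n_halo, _ = np.histogram(halo_logm, bins=edges)
    n_cen, _ = np.histogram(gal_host_logm[is_central], bins=edges)
    n_sat, _ = np.histogram(gal_host_logm[~is_central], bins=edges)
    with np.errstate(divide="ignore", invalid="ignore"):
        mean_cen = np.where(n_halo > 0, n_cen / n_halo, 0.0)
        mean_sat = np.where(n_halo > 0, n_sat / n_halo, 0.0)
    return edges, mean_cen, mean_sat
```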
#### 4.1.2 Conformity
In this work we define the 1-halo conformity, or just conformity, as
considering that central and satellite galaxies are not independent events for
the HOD model. Conformity turns out to be of great relevance for recovering
the target clustering of SAGEsh Euclid-like Hα emitters.
Reproducing the count of satellite SAGEsh galaxies, both with and without a central galaxy of the same type, requires including 1-halo conformity in our HOD models. In practice, this implies changing the $\langle N_{\rm
sat}\rangle$, depending on whether we have assigned a central (Hα) galaxy or
not. In section 5, we explain in detail the concept of conformity and how we
implement it.
Figure 3: Top: The galaxy probability distribution function (PDF) as a function of the number of satellite galaxies, $N_{\rm sat}$. Results are shown for the SAGEsh (black bars) model galaxies and those from our default
HOD model (filled symbols). Bottom: Ratio between the number of model galaxies
found in both catalogues. The error bars are calculated as the standard
deviation of $100$ realisations.
#### 4.1.3 Galaxy probability distribution function
The probability distribution function (PDF) describes the sampling from the
mean number of galaxies $\langle N_{\rm i}\rangle$ to a specific realization
$N_{\rm i}$ in a given halo mass, ${P}(N_{\rm i}|\langle N_{\rm i}\rangle)$.
In this work, we fix the PDF for both central and satellite galaxies.
For central galaxies, the number of galaxies, $N_{\rm cen}$, can only take the values 0 or 1, following a Bernoulli distribution.
For satellite galaxies, we assume the PDF of a Poisson distribution. This is the most common assumption for HOD models in the literature (e.g. Zheng et al., 2005; Rocher et al., 2023a). However, previous theoretical models have shown that the PDF of satellite ELGs might be better described by a non-Poissonian distribution: Jiménez et al. (2019) found model ELGs to be better described when a super-Poisson variance was assumed, while Avila et al. (2020) and Vos-Ginés et al. (2023) have found hints of eBOSS ELGs following a sub-Poisson distribution.
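In code, this sampling amounts to one draw per galaxy type; a sketch under the assumptions just listed (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)  # arbitrary seed

def sample_occupations(mean_ncen, mean_nsat):
    """Draw one realization of the occupation numbers for a halo:
    a Bernoulli draw for the central and a Poisson draw for the
    satellites, given the mean HOD values of the halo's mass bin."""
    n_cen = rng.binomial(1, mean_ncen)  # Bernoulli(p) = Binomial(1, p)
    n_sat = rng.poisson(mean_nsat)
    return n_cen, n_sat

# e.g. a halo whose mass bin has <N_cen> = 0.05 and <N_sat> = 0.3
print(sample_occupations(0.05, 0.3))
```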
SAGE satellite Hα emitters are well described by a Poisson distribution. We
have reached this conclusion by counting in Figure 3 the number of times that
a halo hosts a given number of satellite galaxies ($N_{\rm sat}$) in our HOD
models compared to those measured from the SAGEsh catalogue.
By assuming a Poisson distribution, we are able to recover very well, within $1\sigma$ (for these comparisons, we assume the noise in the counts follows a Poisson distribution), the number of haloes with no satellites, $N_{\rm sat}=0$ (not shown in Figure 3), and up to two satellites. For haloes with 3 or 4 satellites, we find slightly larger differences, but still within $1.5\sigma$.
Figure 3 shows that larger differences appear for higher $N_{\rm sat}$ values,
which are very rare. In particular, haloes with $N_{\rm sat}\geq 5$ appear to
be poorly described by a Poisson distribution. However, in the SAGEsh
catalogue we only find one halo with $N_{\rm sat}=5$. For our HOD galaxy
catalogues we find no halo with $N_{\rm sat}=5$ in $90$% of the cases run and
a single halo in $10$% of the cases. Thus, we consider the differences found
for massive haloes to be statistically negligible. Even if they were not, we
expect such small differences to have a negligible impact on the clustering,
compared to the effect of assuming conformity or a different radial profile
for satellite galaxies.
Assuming a Poisson distribution is not expected to affect the conclusions drawn in this paper.
#### 4.1.4 Radial profile for satellite galaxies
While central galaxies are always located at the center of their host halo, a
model is needed to place satellite galaxies within haloes. In this work we
explore several different prescriptions for positioning satellite Hα emitters
within their host haloes, which will be discussed in detail in section 6. The prescriptions we have considered in this work are:
* •
Sampling individual halo profiles assuming an NFW given by the individual halo
concentration, as detailed in subsection 6.1.
* •
Adjusting the NFW or Einasto curves to the $r/R_{\rm s}$-stacked profile from
all our Hα galaxies, as detailed in subsection 6.2.
* •
Adjusting the NFW or Einasto curves to the $r/R_{\rm vir}$-stacked profile
from all our Hα galaxies, as detailed in subsection 6.3.
* •
The inherent $N(r)$ SAGEsh distribution profile: We make use of the discrete
(in a histogram) distribution profile of satellite galaxies in SAGEsh as a
function of the distance $r$ and sample from there.
* •
Modified NFW profile. We introduce a generalized version of the NFW density
profile that effectively models the stacked profile of all our Hα galaxies as
a function of $r$. We find an excellent fit with our modified NFW curve
(Equation 14). In subsection 6.5, we describe in detail how we use this
continuous function to accurately fit the distribution profile obtained from
SAGEsh.
### 4.2 Vanilla HOD model
The benchmark model for this work, our Vanilla HOD, makes the following
assumptions:
* •
Mean HOD as described in subsubsection 4.1.1 as read-off from our SAGE-UNITsim
catalogues.
* •
Central and satellite Hα galaxies are independent events (no conformity).
* •
Poisson distribution for the satellite PDF.
* •
The radial profiles for satellite galaxies are obtained sampling a NFW profile
given the concentration of each halo.
Whereas the first point differs slightly from the most common practice, which
would rely on assuming an analytical formula (see discussion in subsubsection
4.1.1 and in subsection 3.1), the rest of the points match what is commonly assumed when generating HOD catalogues (e.g. Avila et al., 2020). As previously
noted, the Vanilla HOD model produces a clustering at small scales very
different from that for SAGEsh Hα emitters, as shown in Figure 2.
### 4.3 Default HOD model
The default HOD model is our proposal to best fit the clustering of SAGEsh
Euclid-like Hα emitters (§ 3). As a summary, these are the choices made for
the default HOD model:
* •
Mean HOD as described in subsubsection 4.1.1, read directly from the measured occupations (whose shapes resemble an asymmetric Gaussian for centrals and a truncated power law for satellite galaxies).
* •
We assume conformity, i.e. central and satellite Hα emitters are not modelled
as independent events. We implement the conformity as a global factor
independent of mass. This factor modifies the average number of satellite galaxies, $\langle N_{\rm sat}\rangle$, depending on whether the central is an Hα-selected galaxy or not, to match the numbers found in SAGEsh.
* •
Poisson distribution for the satellite PDF.
* •
Satellite galaxies are placed in haloes following an analytical function that
encapsulates the number of model Hα emitters found as a function of their
relative distance to the center of the halo. This function is presented in
subsection 6.5, and can be considered as an extension of the NFW profile
equation.
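Putting these choices together, the sketch below shows schematically how the default model populates a single halo; sample_radius is a hypothetical stand-in for the analytical $N(r)$ sampler of subsection 6.5, and the conformity factors enter as described in section 5:

```python
import numpy as np

rng = np.random.default_rng(5)  # arbitrary seed

def populate_halo(halo_pos, mean_ncen, mean_nsat, k1, k2, sample_radius):
    """Schematic population of one halo with the default model.
    sample_radius(n) is a hypothetical helper returning n satellite
    distances drawn from the analytical N(r) of subsection 6.5."""
    galaxies = []
    n_cen = rng.binomial(1, mean_ncen)          # Bernoulli central
    if n_cen:
        galaxies.append(halo_pos)               # central at the halo centre
    # conformity: rescale <N_sat> by K1 or K2 before the Poisson draw
    n_sat = rng.poisson((k1 if n_cen else k2) * mean_nsat)
    for r in sample_radius(n_sat):
        u = rng.normal(size=3)
        u /= np.linalg.norm(u)                  # isotropic direction
        galaxies.append(halo_pos + r * u)
    return np.array(galaxies).reshape(-1, 3)
```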
Figure 2 shows the improvement achieved at small scales when using our
proposed model, with respect to utilizing a vanilla HOD.
The default HOD model summarised in the previous paragraph was obtained from one of the 4 existing SAGE-UNITsim boxes. The level of noise we find in the clustering when applying the default HOD model to the other simulation boxes is similar to that found for the clustering of galaxies modelled by running SAGE on different simulation boxes. Hence, we conclude that our default-HOD
model works well for the other simulation boxes and that any noise in our
inferred parameters would not affect our conclusions.
Our analysis is done, by default, for the number density corresponding to the
Pozzetti model number 3, as indicated in Table 1, and using the simulation
snapshot corresponding to $z=1.3$. However, in this work we also explore how
our conclusions might change when a different number density or redshift is
assumed when analysing the Euclid-like Hα emitters, as discussed below.
#### 4.3.1 Clustering as a function of number density
Figure 4: The two-point correlation function in real space for galaxies generated with HOD models assuming different number densities and redshifts, with different
line-styles as indicated in the legend. The three samples at $z=1.3$ with
number densities indicated in Table 1 are shown as red lines. The samples with
number density fixed to that of Pozzetti model number 3 are shown at different
redshifts with varying colours, as indicated in the legend.
In Figure 4 we compare the clustering of galaxies at $z=1.3$ generated with
HOD models assuming the three number densities summarised in Table 1 (shown in three different line-styles). As expected, the clustering at large scales grows in amplitude with decreasing number density. At $r=10{\rm Mpc}\,h^{-1}$, the amplitude changes by a factor of $1.55$ between the samples with the highest and lowest number densities.
At small scales, there is a boost in the clustering for the sample with the
lowest number density considered. This is partly due to the slight decrease
with increasing redshift of the numbers of model satellite Hα-emitters without
a central counterpart. Bearing in mind that for this exercise we only used one
realisation of the HOD (compared to the $100$ seeds used in the default analysis), we expect that part of these differences could also be due to statistical fluctuations.
#### 4.3.2 Evolution with redshift
Figure 5: Mean halo occupation distribution for galaxies generated with HOD models at different redshifts, as indicated in the legend. The number densities of these samples correspond to those from the differential luminosity function number 3 from Pozzetti et al. and are summarised in Table 2. The contribution from central galaxies is shown as solid lines and that from satellite galaxies as dashed ones.
Redshift (dPozzetti nº3) | Number density [$\rm Mpc^{-3}h^{3}$] | Flux cutoff [$\rm erg\,s^{-1}\,cm^{-2}$]
---|---|---
$0.987$ | $1.381\times 10^{-3}$ | $1.88\times 10^{-16}$
$1.220$ | $7.731\times 10^{-4}$ | $1.50\times 10^{-16}$
$1.425$ | $5.185\times 10^{-4}$ | $1.22\times 10^{-16}$
$1.650$ | $3.371\times 10^{-4}$ | $1.00\times 10^{-16}$
Table 2: The columns in this table contain, from left to right: the redshift of each sample; the number densities ($n$) derived from the differential luminosity function number 3 from Pozzetti et al. at the given redshifts (dPozzetti nº3); and the Hα flux cut applied to the SAGE sample to recover each $n$. Note that for the case of $z=1.3$ we consider a range in redshift, instead of the single redshifts considered here (Table 1).
In this section we discuss how the clustering changes across the Euclid
redshift range, $0.9<z<1.8$. For this, we follow the approach by Knebe et al.
(2022), where we considered 4 different snapshots in this range ($z=0.99$,
$1.22$, $1.42$ and $1.65$) with a number density matched to the differential
luminosity function number 3 from Pozzetti et al. (2016). The number density
and flux cuts resulting at these four redshifts are listed in Table 2. Note
that this procedure is somewhat different to our default $z=1.3$ analysis
where we are targeting the average number density over the Euclid redshift
range, according to Pozzetti et al. model number 3 (Table 1).
Table 2 shows that the target number densities increase by a factor of $4$
from the lowest at $z=1.65$ to the highest at $z=0.99$. The number density for the sample at $z=1.321$ (Table 1) agrees with this trend, despite being calculated differently. The variation in number density is smaller than the
factor of $7$ found between those samples at $z=1.3$ summarised in Table 1.
Accordingly, Figure 5 shows smaller variations of the mean HOD than those seen
in Figure 1 at fixed redshift.
The typical mass of haloes hosting Euclid-like Hα emitters increases slightly
with redshift. Although Figure 5 shows a bigger difference between the sample
at $z=0.987$ and the rest, the change in mean HOD with redshift is small.
Again, we can expect that part of the differences could be due to statistical
fluctuations, as we are only using one HOD realisation instead of the $100$ seeds used for our main analysis. This small variation with redshift is
consistent with the findings from Rocher et al. (2023a), who showed that DESI
OII ELGs do not exhibit significant evolution in the HOD parameters.
Figure 4 also shows a slight increase with redshift of the clustering at large scales. At $r=10{\rm Mpc}\,h^{-1}$ the clustering increases by a factor of $1.62$ from the sample at $z=0.987$ to that at $z=1.650$.
At small scales we do see in Figure 4 a larger variation in clustering. This
might be a consequence of the different slopes of the mean HOD for central
galaxies at larger masses. From Figure 5, it is clear how similar the mean HODs for satellite galaxies are.
The results shown in this section are derived with a single seed for each HOD.
The level of noise we find for the clustering at small scales appears to be consistent among the different redshifts.
## 5 Modelling conformity
Figure 6: Top: Mean number of satellite galaxies per halo as a function of
halo mass: total (orange), with a companion central galaxy of the same type
(blue), and without these centrals (green). Middle: The conformity parameter
$K_{1}(M)$ per mass bin (Equation 4) and the global conformity factor
$K_{1,\rm glob}$ (Equation 6), measured for SAGEsh Euclid-like Hα emitters, as
a function of halo mass. These parameters represent the ratio between the mean number of satellite galaxies in haloes hosting a central galaxy of the same type (blue in the top panel) and the overall mean number of satellites (see Equation 2). Bottom: Similar to the middle panel, but for $K_{2}(M)$ (Equation 5) and the global factor, $K_{2,\rm glob}$ (Equation 7). In this case, the ratio involves the mean number of satellite galaxies in haloes without such a central galaxy (green in the top panel; see Equation 3).
Usually, HOD models assume satellite and central galaxies of a given type to be independent events, i.e., the mean occupation of satellite galaxies does not depend on whether or not the central is of the same type. Here we define the 1-halo conformity as a deviation from independence, i.e., the mean occupation of satellite Hα emitters depends on the presence or absence of a central Hα emitter in that halo.
The phenomenon of galactic conformity refers to the correlation between the
properties of neighbouring galaxies (Weinmann et al., 2006). This is thought
to happen due to galaxies having a common evolution as baryonic matter
collapses within dark matter haloes to form galaxies. Observed galaxies
exhibit variations in mass, shape, star formation activity, colors, and ages,
and these properties tend to be correlated. It is important to differentiate
between one-halo conformity, which measures conformity at small separations
within a single halo, and two-halo conformity, which measures conformity at
larger separations between central galaxies and neighboring galaxies in
adjacent haloes.
Conformity has been detected in simulations (e.g Lacerna et al., 2018;
Ayromlou et al., 2023) and there seems to be growing evidence of it from
observations (e.g. Rocher et al., 2023a; Gao et al., 2023). Galactic
conformity continues to be an active area of study and remains a significant
puzzle in our understanding of galaxy formation and evolution.
We define the mean number of satellite galaxies with a central galaxy, $\langle N_{\rm sat}|N_{\rm cen}=1\rangle$, and without one, $\langle N_{\rm sat}|N_{\rm cen}=0\rangle$, as:
$\langle N_{\rm sat}|N_{\rm cen}=1\rangle=\frac{N_{\rm sat,wc}}{N_{\rm h,wc}}=K_{1}\,\langle N_{\rm sat}\rangle\,,$ (2)
$\langle N_{\rm sat}|N_{\rm cen}=0\rangle=\frac{N_{\rm sat,w/oc}}{N_{\rm h,w/oc}}=K_{2}\,\langle N_{\rm sat}\rangle\,,$ (3)
where $N_{\rm sat,wc}$ and $N_{\rm sat,w/oc}$ denote the number of satellites
with or without a central Hα galaxy. Note that the total number of satellite
galaxies, $N_{\rm sat}$, is the sum of these two values, $N_{\rm sat}=N_{\rm
sat,wc}+N_{\rm sat,w/oc}$. The total number of dark matter haloes with and
without a central Hα galaxy are $N_{\rm h,wc}$ and $N_{\rm h,w/oc}$,
respectively. $\langle N_{\rm sat}\rangle$ is the average value of satellite
galaxies in all types of haloes (with and without central galaxies of the
given type). $K_{1}$ and $K_{2}$ are the conformity factors.
When there is no conformity, $K_{1}=K_{2}=1$, so the mean number of satellite
galaxies is independent of having a central galaxy of the same type, $\langle
N_{\rm sat}|N_{\rm cen}=0,1\rangle=\langle N_{\rm sat}\rangle$.
When there is conformity, $K_{1}\neq 1\neq K_{2}$. To model the 1-halo
conformity, the average number of satellites should depend on the presence or
absence of a central galaxy of the same type.
To investigate conformity, we first examine whether hosting satellite galaxies and hosting a central galaxy are two completely independent events. To do so, we calculate the mean values $\langle N_{\rm sat}|N_{\rm cen}=1\rangle$ and $\langle N_{\rm sat}|N_{\rm cen}=0\rangle$ for our SAGE galaxy sample and compare them to the mean value $\langle N_{\rm sat}\rangle$. In the top panel of Figure 6, we show that the numbers of satellites with and without a central galaxy in our SAGE sample are not reproduced when assuming independence. We can therefore infer that introducing conformity is necessary to accurately reproduce these differences. Demonstrating the crucial role of conformity in replicating the distribution of satellites in SAGE is one of the primary novel findings of this article. It is worth noting that other studies, such as Lacerna et al. (2018) and Rocher et al. (2023b), have also found the need to include conformity.
### 5.1 Mass dependent conformity
For a given bin of halo masses, $M_{i}$, and using the above expressions, it is
possible to define $K_{1}(M_{i})$ and $K_{2}(M_{i})$ as a function of
quantities that we can measure from the reference galaxy catalogue:
$K_{1}(M_{i})=\frac{N_{\rm sat,wc}(M_{i})}{N_{\rm cen}(M_{i})\,\langle N_{\rm sat}(M_{i})\rangle}\,,$ (4)
$K_{2}(M_{i})=\frac{1-K_{1}(M_{i})\,\langle N_{\rm cen}(M_{i})\rangle}{1-\langle N_{\rm cen}(M_{i})\rangle}\,.$ (5)
The above conformity factors, $K_{1}(M_{i})$ and $K_{2}(M_{i})$, can be
computed using these equations by inserting the galaxy/halo counts measured in
our simulation. These factors are then used in an HOD prescription for each mass bin. In practice, we first sample the Bernoulli PDF for centrals in order to determine whether we have $N_{\rm cen}=0$ or $N_{\rm cen}=1$ in a given halo (see the PDF description in subsubsection 4.1.3) and then modify $\langle N_{\rm sat}\rangle$ according to Equation 3 or Equation 2, respectively. Subsequently, we sample the Poisson distribution (again subsubsection 4.1.3) from the modified $\langle N_{\rm sat}\rangle$ ($=\langle N_{\rm sat}\lvert N_{\rm cen}\rangle$) and continue with the rest of the steps of the HOD (subsection 4.1).
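A sketch of the factor computation from the binned counts (Equations 4 and 5; array names are illustrative):

```python
import numpy as np

def conformity_factors(n_sat_wc, n_cen, mean_ncen, mean_nsat):
    """Per-mass-bin conformity factors from measured counts.
    n_sat_wc: satellites in haloes hosting a central Ha galaxy (per bin);
    n_cen:    number of central Ha galaxies (per bin);
    mean_*:   measured mean HOD values (per bin).
    Bins with no centrals yield NaN and need special handling."""
    k1 = n_sat_wc / (n_cen * mean_nsat)                 # Equation 4
    k2 = (1.0 - k1 * mean_ncen) / (1.0 - mean_ncen)     # Equation 5
    return k1, k2
```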
In the middle panel of Figure 6, we can observe the conformity parameter
$K_{1}(M)$ (Equation 4) for our SAGEsh Hα emitters sample as a function of
halo mass. This parameter represents the ratio between the average number of satellite galaxies in haloes hosting a central galaxy of the same type and the overall average number of satellites, depicted in blue. This comparison indicates the presence of conformity, i.e., how much the sample deviates from the scenario where the existence of a satellite galaxy and a central galaxy are two independent events (which would be represented by $K_{1}(M)=1$). In the bottom panel, a similar analysis is performed for $K_{2}(M)$ (Equation 5). In this case, the parameter represents the ratio between the mean number of satellite galaxies in haloes without a central galaxy of the same type and the overall average number of satellites, shown in green.
### 5.2 Global conformity
As an alternative, we consider global conformity factors, $K_{1,\rm glob}$ and
$K_{2,\rm glob}$, that are assumed constant for a given reference galaxy
catalogue and to be the same for all halo masses. $K_{1,\rm glob}$ and
$K_{2,\rm glob}$ are then constants in Equation 2 and Equation 3. In this
case, the global conformity factors are computed as:
$K_{1,\rm glob}=\frac{\sum_{i=1}^{n}N_{\rm sat,wc}(M_{i})}{\sum_{i=1}^{n}\langle N_{\rm cen}(M_{i})\rangle\,N_{\rm sat}(M_{i})}\,,$ (6)
$K_{2,\rm glob}=\frac{\sum_{i=1}^{n}N_{\rm sat,w/oc}(M_{i})}{\sum_{i=1}^{n}\left(1-\langle N_{\rm cen}(M_{i})\rangle\right)\,N_{\rm sat}(M_{i})}\,,$ (7)
where $i$ indicates each of the different mass bins considered. The somewhat
complicated expressions above are a consequence of populating the mean HOD
$\langle N_{\rm sat}\rangle$ in bins of halo mass, hence, with Equation 3 and
Equation 2 also applied in mass bins. This implies that we cannot compute the global factors, $K_{1,\rm glob}$ and $K_{2,\rm glob}$, by simply dropping the mass dependence from Equation 4 and Equation 5. Instead, we need to compute them in individual bins and then perform the sums indicated in Equation 6 and Equation 7.
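The corresponding computation of the global factors, again from the binned counts (Equations 6 and 7; array names are illustrative):

```python
import numpy as np

def global_conformity_factors(n_sat_wc, n_sat_woc, mean_ncen, n_sat):
    """Global conformity factors via bin-wise sums (Equations 6 and 7),
    not a plain average of K1(M) and K2(M). All inputs are arrays over
    the halo mass bins; n_sat is the total satellite count per bin."""
    k1_glob = np.sum(n_sat_wc) / np.sum(mean_ncen * n_sat)
    k2_glob = np.sum(n_sat_woc) / np.sum((1.0 - mean_ncen) * n_sat)
    return k1_glob, k2_glob
```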
The middle panel of Figure 6 shows the global conformity factor $K_{1,\rm glob}$ (Equation 6) for our SAGEsh Hα emitters sample, and the bottom panel shows the other global factor, $K_{2,\rm glob}$ (Equation 7). The global conformity
factors, $K_{1,\rm glob}$ and $K_{2,\rm glob}$, roughly correspond to the
respective mean values of the mass-dependent conformity ones, $K_{1}(M)$ and
$K_{2}(M)$.
Sample | Redshift | $K_{1,\rm glob}$ | $K_{2,\rm glob}$
---|---|---|---
Pozzetti nº1 | $1.3$ | $0.799$ | $1.034$
Pozzetti nº3 | $\mathbf{1.3}$ | $\mathbf{0.708}$ | $\mathbf{1.038}$
Euclid flux | $1.3$ | $0.618$ | $1.033$
dPozzetti nº3 | $0.987$ | $0.741$ | $1.033$
dPozzetti nº3 | $1.220$ | $0.717$ | $1.035$
dPozzetti nº3 | $1.425$ | $0.701$ | $1.040$
dPozzetti nº3 | $1.650$ | $0.718$ | $1.042$
Table 3: Global conformity parameters, $K_{1,\rm glob}$ (Equation 6) and
$K_{2,\rm glob}$ (Equation 7) for galaxies produced with HOD models at
different redshifts and with number densities obtained either in a range of
redshifts, first three rows (Table 1) or from a differential luminosity
function (Table 2). Our default choice is the sample at $z=1.3$ with number
density matching that of Pozzetti model number 3 in a range of redshifts. This
is indicated by using bold face in the table.
We find $K_{1,\rm glob}=0.708$ and $K_{2,\rm glob}=1.038$, implying that we
detect conformity of the order of $\sim 15-20$%. These values are summarised
in Table 3, together with those derived for galaxy samples with the other two
number densities considered in this study (see Table 1).
The global conformity factors remain remarkably constant for samples of Hα
emitters with different number densities. The variations are below $20$% for
$K_{1,\rm glob}$ and below $1$% for $K_{2,\rm glob}$. This suggests that the mechanisms that drive conformity might depend only weakly on environment. This is in line with the proposal that spectral emission lines coming from star-forming regions are triggered by merging events (Yuan et al., 2023b).
The global conformity factors, $K_{1,\rm glob}$ and $K_{2,\rm glob}$, obtained
for samples at different redshifts are summarised in Table 3. We find
variations below $5\%$ for $K_{1,\rm glob}$ and below $1\%$ for $K_{2,\rm
glob}$. The results suggest that there is no evolution in conformity in the
explored redshift range, $0.987<z<1.650$.
### 5.3 Clustering
Figure 7: Ratios of the real space two-point correlation functions from galaxy
catalogues generated with different HOD models with respect to that from the SAGEsh reference galaxy sample. The result from the proposed HOD model (§ 4.3), which
uses global conformity factors, is shown in red. Two modifications of this
model are also shown: one with no conformity (centrals and satellites are
treated as independent events), shown in orange; and one using mass dependent
conformity factors, shown in green. Note that the three HOD catalogues use an
analytical function to model the radial distribution of satellite galaxies (Equation 14). The error bars are calculated as the standard deviation of 100
realisations.
After computing the values of $K_{1}$ and $K_{2}$, either as global factors or as a function of mass, we apply Equation 2 and Equation 3 to our HOD model, based on the global or the mass-dependent conformity, respectively.
Subsequently, we assess their clustering and compare, in Figure 7, the results
to a scenario without conformity (orange dotted dashed). The dot-dashed green
line illustrates the scenario where $K_{1}$ and $K_{2}$ are dependent on the
mass of the bins, while the solid red line corresponds to the global factor
conformity. First of all, it is noticeable that in the case of independence,
the largest deviation from the reference sample in terms of clustering occurs
around $r\sim 0.1\,{\rm Mpc}\,h^{-1}$, resulting in a difference of approximately $60\%$. However, by incorporating conformity, we mitigate this difference by $\sim 40\%$ at this scale. Moreover, both implementations of conformity yield similar outcomes, significantly superior to the scenario without conformity. The clustering is found to be nearly indistinguishable between the two proposed conformity models.
Consequently, for the sake of simplicity, we opt for using global conformity
factors as our default modeling approach. It is important to emphasize that
the results regarding the necessity of including the conformity property align
with what has been found in other studies, as mentioned in Lacerna et al.
(2018) or Rocher et al. (2023b). In the latter, a mean satellite occupation
function was obtained that aligns with physically motivated models of OII
galaxies (ELG) only if conformity between centrals and satellites is
introduced. This implies that the occupation of satellites is conditioned by
the presence of central galaxies of the same type. This aligns with what we obtain in this article, albeit approached from a very different angle.
## 6 The radial profile of satellite galaxies
The distribution of substructure in dark matter haloes has been accurately
described by either a Navarro-Frenk-White profile (NFW, Navarro et al., 1997)
or an Einasto profile (Einasto, 1969). However, the distribution of galaxies
within haloes could be biased with respect to that of dark matter (e.g Yuan et
al., 2023a; Rocher et al., 2023a; Hadzhiyska et al., 2023).
Central galaxies are placed at the center of the potential well of the dark matter halo, so we focus here on satellite galaxies and how they are distributed with respect to the center of their halo.
In this work, we first study the spatial distribution of Euclid-like Hα
satellite galaxies within dark matter haloes in SAGEsh. Then we explore how to
best use the measured distribution to inform HOD models with the aim to
reproduce the SAGEsh 1-halo term clustering.
The radial profile of SAGEsh satellite Hα emitters is shown in Figure 8 as
filled circles. This plot shows the number of satellite galaxies as a function
of their distance to the central galaxy, $r=|\vec{r}|=|\vec{r}_{\rm
sat}-\vec{r}_{\rm h}|$. Note that we have checked for a displacement between the position of the central galaxy and the center of the halo; the differences are smaller than $1\times 10^{-5}$ for all central galaxies, so we can assume that $\vec{r}_{\rm cen}=\vec{r}_{\rm h}$. Satellite galaxies in SAGEsh are preferentially placed at a distance of $\sim 0.2\,{\rm Mpc}\,h^{-1}$ from their central galaxy.
In Figure 8, we present the radial profile as a function of the relative
distance to the halo, $r$, because we have found that this approach provides
an HOD model clustering closer to the reference SAGEsh sample. This result is
shown in Figure 9. It is clear from this figure that the 1-halo clustering is better
recovered when the radial profile as a function of $r$ is best matched,
instead of $r/R_{\rm s}$ or $r/R_{\rm vir}$. This is a surprising result as
haloes of different sizes and masses are being mixed in the radial profile
shown in Figure 8, as a function of $r$.
Typically, HOD models place satellite galaxies in dark matter haloes assuming
the NFW profile given the concentration (or mass) of the halo (e.g. Zheng et
al., 2005). This is typically done halo-by-halo. Another commonly used option is to use the positions of dark matter particles (e.g. Avila et al., 2020;
Rocher et al., 2023a).
Our aim is to have an HOD model with galaxies clustered as close as possible to that from SAGEsh at small scales. To achieve this, we explore HOD models
with different ways of placing satellite galaxies within haloes: (i) NFW
profiles using the concentration from each halo (§ 6.1, NFW halo-by-halo);
(ii) NFW and Einasto curves fitted to the $r/R_{\rm s}$-stacked profiles (§
6.2); (iii) NFW and Einasto curves fitted to the $r/R_{\rm vir}$-stacked
profile (§ 6.3); (iv) radial profile from SAGEsh, as a function of $r$, used either directly as a discrete distribution function or to fit an analytical function (§ 6.4 and § 6.5).
Figure 8 presents the radial distribution of galaxies modelled with all the
approaches described above, together with the SAGEsh distribution. From this
figure, it is clear that the first three approaches cannot reproduce the
radial profile of satellite galaxies in SAGEsh. A similar conclusion was
reached by Qin et al. (2023) also using the SAGE SAM to study the distribution
of galaxies. In their work they applied their proposed extended profile to
SDSS DR10 group catalogue galaxies.
The different HOD approaches described above are implemented following a
similar method. For each halo, we generate a number of random values equal to
the number of satellite galaxies assigned to that halo, which we then
transform to a distance $x$ using an inverse cumulative distribution sampling
of the profile (in terms of $N(x)\propto x^{2}\rho(x)$). This distance $x$ can
be $r/R_{\rm vir}$, $r/R_{\rm s}$ or $r$ depending on the chosen independent
variable in each prescription. The first two distances can be transformed into
$r$, given the halo $R_{\rm vir}$ or $R_{s}$. Once we have $r$, we place the
satellite galaxies by randomly sampling the angular position with respect to
the halo center.
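The sampling step just described is straightforward to implement numerically. Below is a minimal sketch of the inverse cumulative distribution sampling and the isotropic angular placement, assuming a tabulated profile; the function and variable names are illustrative and not taken from our actual pipeline.

```python
import numpy as np

def sample_radii(x_grid, rho, n_sat, rng):
    """Draw n_sat values of x from N(x) ~ x^2 rho(x) by inverting the CDF."""
    pdf = x_grid**2 * rho                 # N(x) proportional to x^2 rho(x)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]                        # normalise the CDF to [0, 1]
    u = rng.uniform(size=n_sat)           # one uniform deviate per satellite
    return np.interp(u, cdf, x_grid)      # numerical inverse of the CDF

def place_satellites(r, centre, rng):
    """Turn radial distances r into 3D positions with isotropic angles."""
    phi = rng.uniform(0.0, 2.0 * np.pi, size=r.size)
    cos_theta = rng.uniform(-1.0, 1.0, size=r.size)   # uniform on the sphere
    sin_theta = np.sqrt(1.0 - cos_theta**2)
    return centre + np.column_stack((r * sin_theta * np.cos(phi),
                                     r * sin_theta * np.sin(phi),
                                     r * cos_theta))

rng = np.random.default_rng(42)
x = np.linspace(1e-3, 3.0, 300)           # x can be r, r/R_s or r/R_vir
rho = 4.0 / (x * (1.0 + x)**2)            # e.g. an NFW shape (Equation 8)
positions = place_satellites(sample_radii(x, rho, 5, rng), np.zeros(3), rng)
```

If $x$ is $r/R_{\rm s}$ or $r/R_{\rm vir}$, the sampled values are multiplied by the corresponding halo radius before the angular placement.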
Figure 8: Radial profile for satellite galaxies as a function of their
distance, $r$, to the center of their host halo. Euclid-like Hα emitters from
SAGEsh are shown as filled symbols with Poisson error bars. The lines
correspond to catalogues produced with different HOD models, as indicated in
the legend. The results from our proposed HOD model (§ 4.3), which uses an
analytical function (Equation 14) to fit the radial distribution of SAGEsh
satellite galaxies, is shown in red. Five modifications of this model are also
shown in the plot: a model assuming a NFW distribution given the concentration
of each halo (solid blue line); models assuming a NFW density profile fitted
to reproduce the average SAGEsh one as a function of $r/R_{\rm vir}$ (dash-
dotted blue line) and $r/R_{\rm s}$ (dashed blue line); models assuming an
Einasto profile fitted to reproduce the average SAGEsh one as a function of
$r/R_{\rm vir}$ (dash-dotted purple line) and $r/R_{\rm s}$ (dashed purple
line). Note that the binning is linear but we plot on a logarithmic scale to properly appreciate the change of slopes.
Figure 9: Ratios of the real
space two-point correlation functions from galaxy catalogues generated with
different HOD models with respect to that from the SAGEsh reference galaxy
sample. The results from our proposed HOD model (§ 4.3), which uses an
analytical function (Equation 14) to fit the radial distribution of SAGEsh
satellite galaxies, is shown in red. Four modifications of this model are also
shown in the plot, with similar colours to those shown in Figure 8. Error bars
are calculated as the standard deviation of 100 realisations with changing
seeds for the generators of random numbers.
### 6.1 NFW halo-by-halo: Sampling individual halo profiles
HOD models usually place satellite galaxies within haloes following the
distribution of dark matter itself (e.g. Avila et al., 2018). This is often
described by a Navarro–Frenk–White (NFW) profile, which was derived from dark
matter only N-body simulations (Navarro et al., 1997). The NFW profile
describes the density of dark matter, $\rho_{\rm NFW}$, as a function of
distance to the center of a halo, $r$:
$\rho_{\rm NFW}(r)=\frac{4\,\rho_{\rm s}}{\frac{r}{R_{\rm
s}}\left(1+\frac{r}{R_{\rm s}}\right)^{2}}\ ,$ (8)
where $R_{\rm s}$ is the scale radius provided by Rockstar, and $\rho_{\rm s}$
is the mass density at this radius. The halo concentration, $C$, relates the
virial and scale radii of haloes, $R_{\rm vir}=CR_{\rm s}$.
Given a halo concentration, the NFW profile can be sampled on a halo-by-halo basis. We
have followed this method to construct catalogues of Euclid-like Hα emitters.
In Figure 8 we compare the global radial profile for satellite galaxies from
this catalogue (dark blue solid line, NFW halo-by-halo) with that from SAGEsh.
The differences are large, in particular below $0.1h^{-1}{\rm Mpc}$. At these
scales, we obtain differences over $70\%$ compared to our SAGEsh sample. The
difference increases for smaller values of $r$.
The NFW halo-by-halo clearly fails to reproduce the global radial profile of
SAGEsh satellite galaxies. This difference results in a very strong clustering
at scales below $\sim 0.2h^{-1}{\rm Mpc}$. The NFW halo-by-halo results in
galaxy clustering over a factor of $2$ above the clustering measured for
SAGEsh galaxies at scales below $0.1h^{-1}{\rm Mpc}$ (Figure 9). This is of great importance, as the NFW halo-by-halo approach is very commonly used in the literature to populate halo catalogues from large simulations.
### 6.2 $r/R_{\rm s}$-stacked profiles
We have fitted the SAGEsh $r/R_{\rm s}$-stacked satellite profile with single NFW (Navarro et al., 1997) and Einasto (Einasto, 1969) parametrisations. To
construct the stacked profile from the SAGEsh sample we start by converting
the counts of satellite galaxies as a function of distance to the central
galaxy, $N(r)$, into densities, with $\rho=N(r)/(4\pi r^{2}{\rm d}r)$. We use
a binning of ${\rm d}r=0.1$.
In the previous section, we introduced the NFW parametrisation of radial
profiles (Equation 8). The Einasto parametrisation can be written as a
function of the scale radius, $R_{\rm s}$, as follows:
$\rho_{\rm Ein}(r)=\rho_{\rm s}\,e^{-\frac{2}{\alpha}\left[\left(\frac{r}{R_{\rm s}}\right)^{\alpha}-1\right]}\,,$ (9)
where $\rho_{\rm s}$ is the mass density at $r=R_{\rm s}$ and $\alpha$
determines the curvature of the profile.
When fitting the SAGEsh $r/R_{\rm s}$-stacked profiles we aim to recover the
total number of satellite galaxies. To achieve this, the NFW (Equation 8) and
Einasto (Equation 9) parametrisations must be integrated up to $r=R_{\rm
vir}$, which is considered the edge of the halo. Thus, we ensure the total
number of satellite galaxies, $N_{\rm sat}$, is preserved by imposing:
$N_{\rm sat}=R_{\rm s}^{3}\int_{0}^{\left(\frac{r}{R_{\rm
s}}\right)=\frac{R_{\rm vir}}{R_{\rm s}}\equiv C}4\pi\left(\frac{r}{R_{\rm
s}}\right)^{2}\rho\left(\frac{r}{R_{\rm s}}\right){\rm d}\left(\frac{r}{R_{\rm
s}}\right)\,.$ (10)
When fitting the NFW and Einasto parametrisations, we find the halo
concentration, $C=R_{\rm vir}/R_{\rm s}$, that gives the closest value of
$N_{\rm sat}$ to the SAGEsh one. Note that the concentration appears as an
integration limit in Equation 10. For both parametrisations, $\rho_{0}$ (the counterpart of $\rho_{\rm s}$ in Equations 8 and 9) provides the normalisation of the fit to the SAGEsh $r/R_{\rm s}$-stacked
satellite profile. The NFW profile has the halo concentration as the only free
parameter, while the Einasto one also has $\alpha$ as a second free parameter.
To perform the fit to the NFW profile, we allow the halo concentration to vary from $0.1$ to $10$. This truncation is chosen based on the scarcity of
satellite galaxies, $N\lesssim 1$, beyond this limit. This choice is
conservative, compared with the results from Ramakrishnan et al. (2019) where
they used dark matter simulations to measure the distribution of different
halo properties. In their study, the halo concentration is found between $0$
and $\sim 30$.
To determine the parameters that yield the best fit to the SAGEsh $r/R_{\rm
s}$-stacked profiles, we iterate through different values of halo
concentration, $C_{i}$, which gives the upper limit of the integration in
Equation 10. The parameters yielding the lowest $\chi^{2}$ per data point (see below) are selected as the optimal fit. The $\chi^{2}$ value is
determined using the following expression:
$\chi^{2}=\sum\frac{(\rho_{\rm SAGE_{sh}}-\rho)^{2}}{\sigma^{2}}\,,$ (11)
where we assume a Poisson error on the counts, which translates to an
uncertainty on the density profile of
$\sigma=\frac{\sqrt{N}}{4\pi\left(\frac{r}{R_{\rm
s}}\right)^{2}\Delta\left(\frac{r}{R_{\rm s}}\right)}\ .$ (12)
We perform the integration within $\Delta(r/R_{\rm s})=0.05$ bins. Thus, for smaller $C_{i}$ values, the number of data points used to evaluate the functions is smaller than for larger ones, due to the profile cutoff set by this integration limit (see Equation 10). To address these differences, we normalise the $\chi^{2}$ value by dividing it by the total number of bins within each interval. We minimise this normalised quantity.
In the case of an Einasto profile, combinations of $C_{i}$ and $\alpha_{i}$ values are sampled from a grid. For $C_{i}$, we follow the same approach as for the NFW profile, and we consider $\alpha_{i}$ values from $0.1$ to $2$, incremented by $\Delta\alpha=0.01$.
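The grid search can be condensed as in the following sketch, assuming the stacked profile is tabulated in arrays; the normalisation is fixed by number conservation (Equation 10), with dimensional prefactors absorbed into $\rho_{\rm s}$, and all names are illustrative rather than production code.

```python
import numpy as np

def einasto(x, rho_s, alpha):
    # Einasto density as a function of x = r/R_s (Equation 9)
    return rho_s * np.exp(-(2.0 / alpha) * (x**alpha - 1.0))

def chi2_per_bin(x, model, data, counts, dx):
    # chi^2 per data point with Poisson errors (Equations 11 and 12)
    good = counts > 0                          # skip empty bins
    sigma = np.sqrt(counts[good]) / (4.0 * np.pi * x[good]**2 * dx)
    return np.mean((data[good] - model[good])**2 / sigma**2)

def fit_einasto(x, rho_sage, counts, n_sat, dx=0.05):
    best = (np.inf, None, None)
    for C in np.arange(0.1, 10.0, 0.05):       # C sets the integration limit
        inside = x <= C                        # profile truncated at x = C
        if not inside.any():
            continue
        for alpha in np.arange(0.1, 2.0, 0.01):
            shape = einasto(x[inside], 1.0, alpha)
            # fix rho_s so the integral up to C recovers N_sat (Equation 10)
            rho_s = n_sat / np.sum(4.0 * np.pi * x[inside]**2 * shape * dx)
            chi2 = chi2_per_bin(x[inside], rho_s * shape,
                                rho_sage[inside], counts[inside], dx)
            if chi2 < best[0]:
                best = (chi2, C, alpha)
    return best                                # (chi^2 per bin, C, alpha)
```

The NFW fit follows the same loop with the $\alpha$ grid removed and Equation 8 in place of the Einasto shape.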
Figure 10: Density profiles of the SAGEsh Euclid-like Hα emitters as a
function of $r/R_{\rm s}$, left, and $r/R_{\rm vir}$, right, in black,
together with fits using either NFW, in blue, or Einasto profiles, in purple.
In the case of density profiles as a function of $r/R_{\rm s}$, the average concentration, $\langle C\rangle$, is a free parameter that appears as an integration limit (see Equation 10). We show as solid lines the fit up to $\langle C\rangle$; beyond this value we show dashed lines. Note that in this
case we are effectively averaging different parts of haloes with different
sizes.
Table 4 presents the best fit parameters to the SAGEsh $r/R_{\rm s}$-stacked
satellite profile for both NFW and Einasto profiles. The left panel of Figure 10 shows the best-fit profiles compared to the Euclid-like Hα emitters. We can
observe that the Einasto density profile fits reasonably well, whereas the NFW
profile is far from being a good fit.
$r/R_{\rm s}$ | NFW | Einasto
---|---|---
$\rho_{0}\left(\frac{h^{3}}{\rm Mpc^{3}}\right)$ | 1227 | 1742
C | 3.60 | 8.20
$\alpha$ | - | 0.83
$r/R_{\rm vir}$ | |
$\rho_{0}\left(\frac{h^{3}}{\rm Mpc^{3}}\right)$ | 3941 | 10593
C | 0.89 | 1.49
$\alpha$ | - | 0.51
Table 4: Best fit parameters to the SAGEsh $r/R_{\rm s}$ and $r/R_{\rm
vir}$-stacked satellite profile for both NFW and Einasto profiles. It should be noted that the best-fit NFW value of $C$ is very similar to what we obtain by averaging the $C_{i}$ values of haloes containing satellite galaxies.
Finally, we generate the corresponding mocks using these fitted $r/R_{\rm s}$ profiles. In Figure 8, we can observe the profiles in terms of $r$ (dashed violet and cyan). We find that even when utilising our fitted $r/R_{\rm s}$-stacked profiles, we still obtain a poor $N(r)$ curve, akin to the one obtained through the halo-by-halo sampling approach (blue curve). This is particularly striking for the Einasto profile, which showed a good fit to the stacked $\rho(r/R_{\rm s})$ profile.
### 6.3 $r/R_{\rm vir}$-stacked profiles
In order to address the next scenario, we begin by rewriting Equations 8 and 9 in terms of the virial radius of the haloes with $R_{\rm vir}=C\,R_{\rm s}$. The virial radius is defined as the radius that contains a mass with a virial overdensity, taken from Bryan & Norman (1998) and internally computed by Rockstar. It is considered the size of the halo, hence objects beyond this radius are considered outside the halo. By now stacking the profile in terms of $r/R_{\rm vir}$ we encapsulate the information on the size of each halo (instead of the concentration, as in subsection 6.2).
Following a similar procedure to the previous case, both classical models have the same number of free and normalisation parameters when fitting in terms of $r/R_{\rm vir}$. However, in this situation the free parameter $C$ does not appear as an integration limit, but rather as a component within the density profile functions for both models. To ensure the conservation of the total number of satellites, we integrate these theoretical expressions up to $r=R_{\rm vir}$, leading to:
$N_{\rm sat}=R_{\rm vir}^{3}\int_{0}^{\left(\frac{r}{R_{\rm
vir}}\right)=\frac{R_{\rm vir}}{R_{\rm vir}}=1}4\pi\left(\frac{r}{R_{\rm
vir}}\right)^{2}\rho\left(\frac{r}{R_{\rm vir}}\right){\rm
d}\left(\frac{r}{R_{\rm vir}}\right)\ $ (13)
It is important to note that, in this case, when performing the fit, we
truncate the density profile obtained from our sample of satellite galaxies at $r=R_{\rm vir}$, as there are very few satellite galaxies beyond this
value. Technically, there should not be any. By definition, there are no
subhaloes outside the virial radius, but there are some exceptions, likely due
to how the merger trees are constructed with ConsistentTrees (Behroozi et al.,
2013b).
For the fitting, we use the same procedure as in the previous section, with a few differences. Now, $C$ does not appear as an integration limit but is explicitly present in the functions of both models. Likewise, in this case we always evaluate the same number of data points from our histograms ($0\leq r\leq R_{\rm vir}$ with $\Delta(r/R_{\rm vir})=0.01$), eliminating the need to normalise by the total number of bins.
In the right panel of Figure 10, we show the fits achieved by the parameter set that minimises the $\chi^{2}$ of the classical models (Einasto and NFW) with respect to our sample of SAGEsh satellite galaxies, when stacking the profiles in the variable $r/R_{\rm vir}$. The resulting parameters for
both models are shown in Table 4. For this case, we can confirm that a decent
qualitative fit has been achieved for both the NFW and Einasto profiles,
unlike the $r/R_{\rm s}$ case. However, the Einasto profile still provides a
better fit, as expected, given that it includes one more free parameter.
It is worth noting that even though both models share the same physical
parameters ($\rho_{s}$, $C$, $\alpha$), the values obtained differ when using
$r/R_{\rm s}$ or $r/R_{\rm vir}$ as our variable (see Table 4). These
differences stem from the lack of a direct correlation between the variables
$R_{\rm s}$ and $R_{\rm vir}$, with a Pearson correlation coefficient of only
$r_{\rm R_{\rm vir},R_{\rm s}}=0.36$. As a result, we find a large dispersion between these variables, between them and the concentration $C$, and also between all these quantities and the $r$ found for the ELGs. This non-unique relation between the variables explains the differences observed in the goodness of fit and in the best-fit parameters when fitting an $r/R_{\rm s}$-stacked profile versus an $r/R_{\rm vir}$-stacked profile.
This level of dispersion is consistent with what has been previously presented in Salvador-Solé et al. (2023). In that article, various models with distinct simulations and redshifts show the existence of a dispersion in concentration at fixed halo mass. The resulting $C-M_{200c}$ diagrams can be extrapolated using the dispersion we find for the variables $R_{\rm s}$ and $R_{\rm vir}$, which are correlated with mass.
In Figure 8, we can observe the profiles as a function of distance ($r$) that
we have obtained from the generated mocks using the average fits of the NFW
and Einasto density profiles in terms of $r/R_{\rm vir}$, as described in this
subsection. Both the Einasto and NFW models show better results when the fits are performed in terms of $r/R_{\rm vir}$ (dash-dotted) rather than $r/R_{\rm s}$ (dashed), when compared with our reference, SAGEsh (black dots). The Einasto model also provides a better fit than the NFW model, at least on the low-$r$ side, in line with the findings in Figure 10. In light of these findings, when studying the clustering of the different samples we only show results for the $r/R_{\rm vir}$-fitted classical profiles (and not the $r/R_{\rm s}$ ones), to avoid overcrowding our figure.
The differences observed in the profile as a function of distance $N(r)$
translate to differences in the observed galaxy clustering. These differences
are reflected in Figure 9, where a strong clustering is observed at scales
below $\sim 0.1h^{-1}{\rm Mpc}$. This difference reaches $\sim 40\%$ for both the NFW and Einasto cases at $r\sim 0.1h^{-1}{\rm Mpc}$. Furthermore, we can confirm that the Einasto fit proves superior to the NFW one at small scales ($<0.1h^{-1}{\rm Mpc}$), consistent with the results obtained earlier in this subsection.
### 6.4 The inherent SAGEsh distribution $\rho_{\rm SAGE_{sh}}(r)$ profile.
In order to achieve a better fit on the small scale in terms of clustering
compared to the various prescriptions discussed in the previous subsections,
we make use of the inherent SAGEsh distribution $\rho_{\rm SAGE_{sh}}(r)$ profile. This allows us to determine the discrete distribution profile (in a histogram) of our sample of satellite galaxies from SAGEsh. We use linear binning with a bin size of $\Delta r=0.01h^{-1}$Mpc. We then use this histogram as a discrete distribution function to generate new catalogues of satellite galaxies with their respective positions. In Figure 9 we can see the clustering we obtain for this case (gold curve), which corresponds to the best results in terms of fitting the small scales, below $r=1h^{-1}{\rm Mpc}$. It is worth noting that for $r\sim 0.1h^{-1}{\rm Mpc}$, this difference is below $25\%$.
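Drawing positions from the measured histogram amounts to discrete sampling with per-bin probabilities, plus a uniform jitter within the selected bin. A minimal sketch (with illustrative names) could read:

```python
import numpy as np

def sample_from_histogram(bin_edges, counts, n_sat, rng):
    # use the measured N(r) histogram as a discrete distribution function
    prob = counts / counts.sum()
    idx = rng.choice(counts.size, size=n_sat, p=prob)   # pick a radial bin
    lo, hi = bin_edges[idx], bin_edges[idx + 1]
    return rng.uniform(lo, hi)                # uniform radius within the bin
```

The angular placement then proceeds as in the sketch given at the beginning of this section.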
### 6.5 Extending NFW for an average $\rho(r)$ profile.
In this subsection, we present an analytical expression that describes the SAGEsh
profile with greater accuracy. Since the previous prescriptions did not
perform adequately in reproducing the radial profile ($N(r)$) of our SAGEsh
satellite sample, we now attempt to directly fit the profile based on the
variable $r$. This variable is the most closely related one to observable
quantities and during our study we have found that the profiles in $r$ are a
better indicator of the measured clustering. For example, as we see in
subsection 6.4, when we use the $N(r)$ measured directly from SAGEsh, we
achieved good results in terms of clustering. Motivated by this, we now perform an analytical fit to $N(r)$, which makes our results more portable to other models and easily implementable in different simulations. After some exploration, we found that the following analytical function describes the $N(r)$ profile of SAGEsh well:
$N(r)=N_{0}\cdot\left(\frac{r}{r_{0}}\right)^{\alpha}\left(1+\left(\frac{r}{r_{0}}\right)^{\beta}\right)^{\kappa}\,,$
(14)
with $N_{0}=3928.273$, $r_{0}=0.34h^{-1}{\rm Mpc}$, $\alpha=1.23$, $\beta=3.19$ and $\kappa=-2.1$. Note that these parameters are free; the quoted values correspond to our binning of $\Delta r=0.01\,h^{-1}{\rm Mpc}$, and the normalisation would change with the binning.
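For reference, Equation 14 with the quoted best-fit parameters is a one-line function. The following is a direct transcription (the variable names are ours), with the caveat that $N_{0}$ is tied to the $\Delta r=0.01\,h^{-1}{\rm Mpc}$ binning:

```python
import numpy as np

# Best-fit parameters of Equation 14 for the default sample (see also Table 5);
# N0 depends on the Delta r = 0.01 Mpc/h binning used in the fit.
N0, R0, ALPHA, BETA, KAPPA = 3928.273, 0.34, 1.23, 3.19, -2.1

def n_of_r(r):
    # satellite counts per radial bin, Equation 14 (r in Mpc/h)
    x = r / R0
    return N0 * x**ALPHA * (1.0 + x**BETA)**KAPPA
```

Since $N(r)$ already contains the $r^{2}$ volume factor, it can be used directly as the (unnormalised) probability density of $r$ in the inverse-CDF sampler sketched at the beginning of this section, without the extra $x^{2}$ weighting.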
In Figure 8 it is evident that our implemented analytical function (solid red line) offers a better fit to the SAGEsh profile than all the other prescriptions used.
The derived analytic expression can be interpreted as a generalization of the
NFW density profile:
$\rho(r)=4\rho_{0}\cdot\left(\frac{r}{r_{0}}\right)^{\alpha-2}\left(1+\left(\frac{r}{r_{0}}\right)^{\beta}\right)^{\kappa}\
,$ (15)
where $\rho_{0}=N_{0}/16\pi r_{0}^{2}$. When $\alpha=1$, $\beta=1$ and $\kappa=-2$, the expression above reduces to the classical NFW profile (Equation 8). In this case, the free parameter $r_{0}$ corresponds to the $R_{\rm s}$ variable in the NFW density profile, and our $\rho_{0}$ to its $\rho_{\rm s}$.
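As a quick check of this limit, substituting the NFW values $\alpha=1$, $\beta=1$ and $\kappa=-2$ into Equation 15 gives
$\rho(r)=4\rho_{0}\left(\frac{r}{r_{0}}\right)^{-1}\left(1+\frac{r}{r_{0}}\right)^{-2}=\frac{4\,\rho_{0}}{\frac{r}{r_{0}}\left(1+\frac{r}{r_{0}}\right)^{2}}\,,$
which coincides with Equation 8 under the identifications $r_{0}\to R_{\rm s}$ and $\rho_{0}\to\rho_{\rm s}$.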
Figure 9 shows how reproducing the global radial profile of SAGEsh satellite
galaxies using an analytical extension of the NFW density profile yields
similar results to those we obtain if we use the histogram of the distribution
profile of satellite galaxies in SAGEsh (gold curve). For small scales, below
$r=1h^{-1}{\rm Mpc}$, our analytical expression (red curve) achieves a
clustering that reproduces that of SAGEsh within $\sim 25\%$.
Sample | Redshift | $\alpha$ | $\beta$ | $\kappa$ | $r_{0}$
---|---|---|---|---|---
Pozzetti nº1 | $1.3$ | $1.29$ | $2.98$ | $-2.21$ | $0.31$
Pozzetti nº3 | $\mathbf{1.3}$ | $\mathbf{1.23}$ | $\mathbf{3.19}$ | $\mathbf{-2.1}$ | $\mathbf{0.34}$
Euclid flux | $1.3$ | $1.1$ | $3.17$ | $-2.49$ | $0.44$
dPozzetti nº3 | $0.987$ | $1.35$ | $2.53$ | $-2.56$ | $0.33$
dPozzetti nº3 | $1.220$ | $1.24$ | $3.05$ | $-2.16$ | $0.34$
dPozzetti nº3 | $1.425$ | $1.17$ | $3.27$ | $-2.09$ | $0.35$
dPozzetti nº3 | $1.650$ | $1.16$ | $3.05$ | $-2.63$ | $0.39$
Table 5: Values of the free parameters in Equations 14 and 15 that best
fit the radial profiles of our reference samples of Euclid-like Hα emitters
for samples at different redshifts and with number densities obtained either
in a range of redshifts, first three rows (Table 1) or from a differential
luminosity function (Table 2). Our default choice is the sample at $z=1.3$
with number density matching that of Pozzetti model number 3 in a range of
redshifts. This is indicated by using bold face in the table.
Figure 11: Radial profile for satellites at $z=1.3$ from our reference sample of Euclid-
like Hα emitters modelled by SAGEsh (filled symbols) assuming two number
densities: Pozzetti model number 1 and Euclid flux (Table 1), as indicated in
the legend. These reference profiles are compared with their best fits to the
analytical function for the radial profiles described by Equation 14 (Table
5). For reference, the profile from the default case, Pozzetti model number 3,
is shown as a red line.
Figure 12: Radial profile for satellites at different redshifts from our reference sample of Euclid-like Hα emitters
modelled by SAGEsh (filled symbols) assuming number densities matching the
differential luminosity function number 3 from Pozzetti et al. (Table 2).
These are compared with the corresponding best fits to the analytical function
described in Equation 14 (Table 5). For reference, the profile from the
default case, Pozzetti model number 3 in a range of redshifts centered at
$z=1.3$, is shown as a red line.
In Figure 11 we compare the radial profile of satellites from the three SAGEsh
Hα samples at $z=1.3$ with different number densities (Table 1). The radial
profiles for the three SAGEsh samples exhibit the same shape with varying
normalisation, as expected from the change in number density. At $r=0.21\,{\rm Mpc}\,h^{-1}$, the value associated with the maximum number of counts for our
reference sample, the count number increases by a factor of $2.19$ from the
reference sample to that with the highest density. This factor practically
coincides with the ratio of total satellite galaxies between both samples,
$2.04$. The count number increases by a factor of $4.15$ from the sample with
the lowest density to the reference one, approaching the value corresponding
to the ratio of total satellite galaxies for that case, which is $3.73$.
The three samples with varying number density at $z=1.3$ are well described by
our proposed extended NFW profile (Equation 14). The best fit parameters
corresponding to each sample are summarised in Table 5. We find variations of
a factor of $1.17$ for $\alpha$, $1.07$ for $\beta$, $1.19$ for $\kappa$, and
$1.42$ for $r_{0}$. Thus, there is not much variation in the shape of the
profiles for these three samples.
In Figure 12 we show the radial profile of SAGEsh satellite samples at
different redshifts (see subsubsection 4.3.2 for details on the construction
of the samples). Following the decrease in number density of the samples
(Table 2), the normalisation of the curves decreases with increasing redshift. At $r=0.21\,{\rm Mpc}\,h^{-1}$, the value associated with the maximum number of
counts for our default sample, the count number increases by a factor of
$4.52$ from the lowest value at $z=1.650$ to that at $z=0.987$. This factor is
close to the ratio of total satellite galaxies between both extreme samples,
which is $4.63$.
At all redshifts, the shape of the profiles remains similar and is well described by Equation 14. Table 5 summarises the best-fit parameters to this equation
for each redshift. The parameter $r_{0}$ remains nearly unchanged with
redshift. We find the other parameters to change by at most a factor of $1.16$
for $\alpha$, $1.29$ for $\beta$, and $1.26$ for $\kappa$.
Therefore, we can conclude that Equation 14 is a good description of the radial profiles of Euclid-like Hα satellite galaxies at different redshifts.
### 6.6 Clustering
In this subsection, we provide a global discussion of the results obtained for the clustering, comparing the two-point correlation functions of all previously mentioned radial profile prescriptions, and we close the respective section. These include sampling individual halo profiles assuming an NFW profile based on halo concentration, using NFW and Einasto density profiles as a function of $r/R_{\rm vir}$ to fit our data, utilising the normalised histogram of SAGEsh satellite galaxies as a function of $r$, and proposing an extension to NFW that fits the overall distribution observed for SAGEsh galaxies. It is important to note that we generated 100 mocks with different seeds for each of the aforementioned prescriptions.
In terms of clustering, we show in Figure 9 how the classical density profiles (Einasto and NFW) do not correctly reproduce the small-scale clustering of SAGEsh satellite galaxies for any of the prescriptions described above (halo-by-halo, $r/R_{\rm s}$-stacked fit, $r/R_{\rm vir}$-stacked fit). The stacked fits show some improvement, but still exhibit a deviation of around $40\%$ at $0.1\,h^{-1}{\rm Mpc}$. Among the stacked fits, the Einasto profile (purple curve) yields better results than the NFW profile (cyan curve), which shows a maximum difference of $60\%$ in the range $r<0.1\,h^{-1}{\rm Mpc}$. Finally, we have the curves corresponding to the radial profile of our SAGEsh sample (yellow curve) and the extended NFW profile that we have implemented (red curve). These fits provide the best results at small scales compared to the previous prescriptions, with differences below $22\%$ at $r\sim 0.1\,h^{-1}{\rm Mpc}$ and ratios closer to 1 than the other profiles.
The great improvement in the small-scale clustering (when compared to our reference SAGEsh sample) introduced by the modified NFW in Equation 15 is one of the main results of this paper. It should be noted that when we analysed the $\chi^{2}$ of all the fits shown in Figure 9, the one that presented the smallest value at small scales was the one using SAGEsh's own distribution profile ($N(r_{i})$): $\chi^{2}(r<1h^{-1}{\rm Mpc})=36.35$, followed by our default model (subsection 4.3, using Equation 15): $\chi^{2}(r<1h^{-1}{\rm Mpc})=42.34$, compared to $\chi^{2}(r<1h^{-1}{\rm Mpc})=654.32$ and $\chi^{2}(r<1h^{-1}{\rm Mpc})=54.01$ for the halo-by-halo and $r/R_{\rm vir}$ Einasto cases, respectively. However, we have chosen Equation 15 as our default model because it is more practical to implement an analytical expression that can be used in other simulations without resorting to the actual profile of the sample or running SAGE again. In light of the clustering results, it is interesting that galaxies do not seem to follow the classical dark matter density profiles, regardless of the prescription adopted. This result aligns with various current studies (Qin et al., 2023).
## 7 Summary and Conclusions
We have explored the essential components needed in an HOD prescription to reproduce the clustering of Euclid-like Hα emitters. In particular, we aim to reproduce the small scales of the real-space two-point correlation function (2PCF) from our reference sample. In this work we have proposed a
Halo Occupation Distribution (HOD) prescription capable of significantly
improving the clustering of model Euclid-like Hα emitters compared to HOD
models without conformity and/or assuming a radial NFW profile based on
individual halo concentration.
We have generated our reference catalogue of model galaxies by running the
semi-analytical model (SAM) of galaxy formation SAGE on the UNITsims dark
matter simulations (Chuang et al., 2019), as outlined in Knebe et al. (2022).
The Euclid-like sample of Hα emitters is obtained by selecting a UNITsim-SAGE galaxy sample at $z=1.321$ with a cut in Hα flux to match a target number
density. The chosen redshift, $z=1.321$, approximately corresponds to the mean
value of the redshift range of the Euclid spectroscopic survey, $0.9<z<1.8$.
The target number density corresponds to that predicted by the Pozzetti et al.
(2016) model number 3 over the entire Euclid redshift range. We then shuffle
the SAGE sample (see subsection 3.2), to remove assembly bias from our
reference sample, SAGEsh (section 3). This last step allows us to focus on the
main goal of the paper: the influence of conformity and the radial profiles on
galaxy clustering without contamination from assembly bias.
Our HOD prescription is very modular, containing a series of ingredients that we have studied in comparison to the clustering measured for the SAGEsh reference sample (section 4). We focus our analysis on the effect that
conformity and the radial profiles have on the final galaxy clustering.
However, we outline here all the ingredients of our HOD model. We do this
following the order in the code and highlighting the choice made for our
default model:
* •
Mean halo occupation shape for centrals, $\langle N_{\rm cen}(M)\rangle$, and
satellites, $\langle N_{\rm sat}(M)\rangle$. Here, we fix these properties to
what is directly measured in SAGE and shown in Figure 1.
* •
Conformity. We investigate whether satellite and central galaxies of a given
type are independent events. We define the 1-halo conformity as deviations
from independence and we quantify this with two factors $K_{1}$ and $K_{2}$
(Equation 4, Equation 5). We consider two cases for modeling conformity: one
where the modification of satellite occupation is performed in mass bins (mass
dependent conformity), and the other with global factors, constant for all
mass bins (global conformity). Introducing conformity greatly improves the
match to the 2PCF of the reference sample at small scales, with respect to the
independent case. Figure 7 shows that the 2PCF of HOD models assuming
independence presents a $\sim 60\%$ difference at $r\sim 0.1\,{\rm Mpc}\,h^{-1}$
with respect to the reference sample. This difference is reduced to $\sim
20\%$ when conformity is introduced in the HOD prescriptions. Both models of
conformity result in similar 2PCF, and thus, we adopt the Global conformity as
our default model for simplicity.
* •
Probability Distribution Function. We assume a Bernoulli distribution for
central galaxies and a Poisson one for satellites. These are good descriptions
for our reference sample (subsubsection 4.1.3).
* •
Radial profile. These are the prescriptions for the radial profiles of
satellite galaxies we have implemented in our HOD models (section 6):
1. (i)
Sampling individual halo profiles assuming an NFW given by the concentration
of each halo. This is usually the default in the literature.
2. (ii)
Adjusting the NFW and Einasto curves to the $r/R_{\rm s}$-stacked profile from our reference sample. Here $r$ is the distance between a satellite galaxy and the center of its host halo, and $R_{\rm s}$ is the scale radius of the halo.
3. (iii)
Adjusting the NFW and Einasto curves to the $r/R_{\rm vir}$-stacked profile
from our reference sample. $R_{\rm vir}$ is the virial radius of the halo.
4. (iv)
The inherent distribution profile of satellite galaxies from our reference
sample, measured directly as a normalised histogram of satellite counts as a
function of $r$.
5. (v)
Modified NFW profile. We introduce a generalized version of the NFW density
profile that effectively models the stacked profile of our reference sample of
galaxies as a function of $r$. We find an excellent fit with the following
modified NFW curve (Equation 15) and we adopt it for our default HOD model:
$\rho(r)=4\rho_{0}\cdot\left(\frac{r}{r_{0}}\right)^{\alpha-2}\left(1+\left(\frac{r}{r_{0}}\right)^{\beta}\right)^{\kappa}\,.$
The NFW and Einasto density profiles are unable to replicate the small-scale clustering of our sample of SAGEsh Euclid-like Hα emitters with any of the implementations tried here. The case of the Einasto $r/R_{\rm vir}$-stacked curve is particularly striking: it shows a very good fit in $\rho(r/R_{\rm vir})$, but does not recover the clustering of the reference sample. We argue that this is due to the low correlation between $r$ and $R_{\rm vir}$ in the reference sample. Therefore, we provide an analytical
expression for the positional profile (v above) of this type of satellite
galaxies, which can be interpreted as an extension of the NFW profile
(Equation 14 and Equation 15).
We find that a good fit to the radial profile, $N(r)$, of our SAGEsh reference sample is a good predictor of how well the clustering of galaxies generated with an HOD model will reproduce the reference one (Figure 9). The goodness of the last two prescriptions above (inherent, iv, or fitted, v) stands out with respect to all the others (Figure 8). Directly using the inherent $N(r)$ profile (iv above) provides a clustering that matches that from the reference sample slightly better than using the analytical fitted expression (v above). However, the gain is small compared to the error bars. As it is difficult to use the inherent SAGEsh $N(r)$ for different simulations or reference samples, we decide to use as our default the analytical expression above (Equation 15).
Our proposed (default) HOD model includes a model for the 1-halo conformity
and a modified NFW radial profile for satellite galaxies. The conformity model
is implemented by computing two constants: $K_{1,\,\rm glob}=0.708$ (Equation
6) and $K_{2,\,\rm glob}=1.038$ (Equation 7); and then applying them within
the HOD model using Equation 2 and Equation 3. We assume that satellite
galaxies occupy haloes following the modified NFW profile described by
Equation 15. This equation has 4 free parameters: $\alpha$, $\beta$, $\kappa$
and $r_{0}$ (subsection 6.5).
The proposed default HOD model improves significantly the small-scale clustering when compared to the benchmark model we started from, the vanilla HOD, which does not include conformity and assumes a radial profile for satellite galaxies based on a halo-by-halo NFW profile (§ 4.2). This is clearly depicted in Figure 2, where we see a $\sim 15\%$ improvement at $r\sim 0.3\,h^{-1}{\rm Mpc}$ and a more than $50\%$ improvement below $\sim 0.1\,h^{-1}{\rm Mpc}$.
There are four SAGE-UNITsim simulation boxes. To understand how much of what
we learn from one simulation box can be extrapolated to the other ones, we
have applied our default HOD model to the other simulation boxes. The measured
clustering is consistent among the boxes, with noise levels similar to those
found for the corresponding variations in SAGE. Hence, we conclude that any
noise in our inferred parameters would not affect our conclusions.
The main conclusion of this work is that modelling the clustering of Euclid-
like Hα emitters requires the inclusion of conformity and a radial
distribution of satellite galaxies as a function of the distance to the
corresponding central galaxy (Equation 15). This conclusion is robust to
different number densities (those summarised in Table 1) and redshifts between
$0.987$ and $1.650$.
The work presented here can have a large impact in the way mock catalogues of
emission line galaxies (ELGs) are generated in the future. This can be of
special relevance for Euclid, but also for DESI, which studies other types of
ELGs. As a next step, we aim to implement the velocity profile of the
satellites to investigate its impact on the study of redshift space
distortions. A natural follow up to this paper would be to study the assembly
bias for Euclid galaxies, by exploring the differences in clustering between
SAGE and SAGEsh (Figure 2).
As we enter the fourth stage of dark energy experiments, with sub-percent
precision cosmology, controlling the origin of any contribution to galaxy
clustering is fundamental. Hence, studies like this one will help us in the
future to understand the different contributions of galaxy formation physics
to the observed galaxy clustering.
## Acknowledgements
We would like to thank the anonymous referee for their insightful comments,
that pushed us to explore how our results change with varying number density
and redshift. GRP is supported by the FPI SEVERO OCHOA (SEV-2016-0597-18-2)
program from the Ministry of Science, Innovation and Universities, with
reference PRE2018-087035. This work has been supported by Ministerio de
Ciencia e Innovación (MICINN) under the following research grants:
PID2021-122603NB-C21 (VGP, AK and GY), PID2021-123012NB-C41 (SA) and
PGC2018-094773-B-C32 (GRP, SA). GRP, SA and VGP have been or are supported by
the Atracción de Talento Contract no. 2019-T1/TIC-12702 granted by the
Comunidad de Madrid in Spain. IFAE is partially funded by the CERCA program of
the Generalitat de Catalunya. AK further thanks Brighter for the ‘around the
world in eighty days’ EP. The UNIT simulations have been run in the
MareNostrum Supercomputer, hosted by the Barcelona Supercomputing Center,
Spain, under the PRACE project number 2016163937.
## Data Availability
The data used in this study are publicly available at http://www.unitsims.org, as described in Knebe et al. (2022). The code developed during this analysis
will be released at a later stage. In the meantime it can be shared upon
request.
## References
* Abbott et al. (2018) Abbott T. M. C., et al., 2018, ApJS, 239, 18
* Alam et al. (2017) Alam S., et al., 2017, MNRAS, 470, 2617
* Alam et al. (2021a) Alam S., et al., 2021a, Phys. Rev. D, 103, 083533
* Alam et al. (2021b) Alam S., et al., 2021b, Phys. Rev. D, 103, 083533
* Alam et al. (2023) Alam S., Paranjape A., Peacock J. A., 2023, arXiv e-prints, p. arXiv:2305.01266
* Alonso (2012) Alonso D., 2012, arXiv e-prints, p. arXiv:1210.1833
* Amendola et al. (2018) Amendola L., et al., 2018, Living Reviews in Relativity, 21, 2
* Angulo & Pontzen (2016) Angulo R. E., Pontzen A., 2016, MNRAS, 462, L1
* Aricò et al. (2020) Aricò G., Angulo R. E., Hernández-Monteagudo C., Contreras S., Zennaro M., Pellejero-Ibañez M., Rosas-Guevara Y., 2020, MNRAS, 495, 4800
* Avila et al. (2018) Avila S., et al., 2018, MNRAS, 479, 94
* Avila et al. (2020) Avila S., et al., 2020, MNRAS, 499, 5486
* Avila et al. (2022) Avila S., Vos-Ginés B., Cunnington S., Stevens A. R. H., Yepes G., Knebe A., Chuang C.-H., 2022, MNRAS, 510, 292
* Ayromlou et al. (2023) Ayromlou M., Kauffmann G., Anand A., White S. D. M., 2023, MNRAS, 519, 1913
* Baugh (2006) Baugh C. M., 2006, Reports on Progress in Physics, 69, 3101
* Behroozi et al. (2013a) Behroozi P. S., Wechsler R. H., Wu H.-Y., 2013a, ApJ, 762, 109
* Behroozi et al. (2013b) Behroozi P. S., Wechsler R. H., Wu H.-Y., Busha M. T., Klypin A. A., Primack J. R., 2013b, ApJ, 763, 18
* Benson et al. (2000) Benson A. J., Cole S., Frenk C. S., Baugh C. M., Lacey C. G., 2000, MNRAS, 311, 793
* Berlind et al. (2003) Berlind A. A., et al., 2003, ApJ, 593, 1
* Bryan & Norman (1998) Bryan G. L., Norman M. L., 1998, ApJ, 495, 80
* Cardelli et al. (1989) Cardelli J. A., Clayton G. C., Mathis J. S., 1989, ApJ, 345, 245
* Carretero et al. (2015) Carretero J., Castander F. J., Gaztañaga E., Crocce M., Fosalba P., 2015, MNRAS, 447, 646
* Chaves-Montero et al. (2023) Chaves-Montero J., Angulo R. E., Contreras S., 2023, MNRAS, 521, 937
* Christodoulou et al. (2012) Christodoulou L., et al., 2012, MNRAS, 425, 1527
* Chuang et al. (2019) Chuang C.-H., et al., 2019, MNRAS, 487, 48
* Cochrane et al. (2017) Cochrane R. K., Best P. N., Sobral D., Smail I., Wake D. A., Stott J. P., Geach J. E., 2017, MNRAS, 469, 2913
* Cole et al. (2000) Cole S., Lacey C. G., Baugh C. M., Frenk C. S., 2000, MNRAS, 319, 168
* Cole et al. (2005) Cole S., et al., 2005, MNRAS, 362, 505
* Contreras et al. (2021) Contreras S., Chaves-Montero J., Zennaro M., Angulo R. E., 2021, MNRAS, 507, 3412
* Cora et al. (2018) Cora S. A., et al., 2018, MNRAS, 479, 2
* Croton et al. (2007) Croton D. J., Gao L., White S. D. M., 2007, MNRAS, 374, 1303
* Croton et al. (2016) Croton D. J., et al., 2016, ApJS, 222, 22
* DESI Collaboration et al. (2016) DESI Collaboration et al., 2016, arXiv e-prints, p. arXiv:1611.00036
* Dawson et al. (2013) Dawson K. S., et al., 2013, AJ, 145, 10
* Dawson et al. (2016) Dawson K. S., et al., 2016, AJ, 151, 44
* Drinkwater et al. (2010) Drinkwater M. J., et al., 2010, MNRAS, 401, 1429
* Einasto (1969) Einasto J., 1969, Astronomische Nachrichten, 291, 97
* Eisenstein et al. (2005) Eisenstein D. J., et al., 2005, ApJ, 633, 560
* Favole et al. (2020) Favole G., et al., 2020, MNRAS, 497, 5432
* Favole et al. (2023) Favole G., et al., 2023, arXiv e-prints, p. arXiv:2303.11031
* Gao et al. (2023) Gao H., et al., 2023, arXiv e-prints, p. arXiv:2309.03802
* Gonzalez-Perez et al. (2018) Gonzalez-Perez V., et al., 2018, MNRAS, 474, 4024
* Gonzalez-Perez et al. (2020) Gonzalez-Perez V., et al., 2020, MNRAS, 498, 1852
* Hadzhiyska et al. (2023) Hadzhiyska B., et al., 2023, MNRAS, 524, 2524
* Hirschmann et al. (2016) Hirschmann M., De Lucia G., Fontanot F., 2016, MNRAS, 461, 1760
* Jiménez et al. (2019) Jiménez E., Contreras S., Padilla N., Zehavi I., Baugh C. M., Gonzalez-Perez V., 2019, MNRAS, 490, 3532
* Jiménez et al. (2021) Jiménez E., Padilla N., Contreras S., Zehavi I., Baugh C. M., Orsi Á., 2021, MNRAS, 506, 3155
* Knebe et al. (2022) Knebe A., et al., 2022, MNRAS, 510, 5392
* Lacerna & Padilla (2011) Lacerna I., Padilla N., 2011, MNRAS, 412, 1283
* Lacerna et al. (2018) Lacerna I., Contreras S., González R. E., Padilla N., Gonzalez-Perez V., 2018, MNRAS, 475, 1177
* Lagos et al. (2019) Lagos C. d. P., et al., 2019, MNRAS, 489, 4196
* Laureijs et al. (2011) Laureijs R., et al., 2011, arXiv e-prints, p. arXiv:1110.3193
* Manera et al. (2013) Manera M., et al., 2013, MNRAS, 428, 1036
* McCarthy et al. (2017) McCarthy I. G., Schaye J., Bird S., Le Brun A. M. C., 2017, MNRAS, 465, 2936
* Merson et al. (2018) Merson A., Wang Y., Benson A., Faisst A., Masters D., Kiessling A., Rhodes J., 2018, MNRAS, 474, 177
* Navarro et al. (1997) Navarro J. F., Frenk C. S., White S. D. M., 1997, ApJ, 490, 493
* Obuljen et al. (2020) Obuljen A., Percival W. J., Dalal N., 2020, J. Cosmology Astropart. Phys., 2020, 058
* Orsi & Angulo (2018) Orsi Á. A., Angulo R. E., 2018, MNRAS, 475, 2530
* Orsi et al. (2014) Orsi Á., Padilla N., Groves B., Cora S., Tecce T., Gargiulo I., Ruiz A., 2014, MNRAS, 443, 799
* Parkinson et al. (2012) Parkinson D., et al., 2012, Phys. Rev. D, 86, 103518
* Pillepich et al. (2018) Pillepich A., et al., 2018, MNRAS, 473, 4077
* Pozzetti et al. (2016) Pozzetti L., et al., 2016, A&A, 590, A3
* Qin et al. (2023) Qin F., Parkinson D., Stevens A. R. H., Howlett C., 2023, arXiv e-prints, p. arXiv:2308.03298
* Ramakrishnan et al. (2019) Ramakrishnan S., Paranjape A., Hahn O., Sheth R. K., 2019, MNRAS, 489, 2977
* Rocher et al. (2023a) Rocher A., et al., 2023a, arXiv e-prints, p. arXiv:2306.06319
* Rocher et al. (2023b) Rocher A., et al., 2023b, arXiv e-prints, p. arXiv:2306.06319
* Salvador-Solé et al. (2023) Salvador-Solé E., Manrique A., Canales D., Botella I., 2023, MNRAS, 521, 1988
* Schaye et al. (2015) Schaye J., et al., 2015, MNRAS, 446, 521
* Schneider & Teyssier (2015) Schneider A., Teyssier R., 2015, J. Cosmology Astropart. Phys., 2015, 049
* Somerville & Davé (2015) Somerville R. S., Davé R., 2015, ARA&A, 53, 51
* Spergel et al. (2013) Spergel D., et al., 2013, arXiv e-prints, p. arXiv:1305.5422
* Spergel et al. (2015) Spergel D., et al., 2015, arXiv e-prints, p. arXiv:1503.03757
* Springel (2005) Springel V., 2005, MNRAS, 364, 1105
* Springel et al. (2018) Springel V., et al., 2018, MNRAS, 475, 676
* The Dark Energy Survey Collaboration (2005) The Dark Energy Survey Collaboration 2005, arXiv e-prints, pp astro–ph/0510346
* Vos-Ginés et al. (2023) Vos-Ginés B., Avila S., Gonzalez-Perez V., Yepes G., 2023, arXiv e-prints, p. arXiv:2310.18189
* Wang et al. (2022) Wang K., Mao Y.-Y., Zentner A. R., Guo H., Lange J. U., van den Bosch F. C., Mezini L., 2022, MNRAS, 516, 4003
* Wechsler & Tinker (2018) Wechsler R. H., Tinker J. L., 2018, ARA&A, 56, 435
* Weinmann et al. (2006) Weinmann S. M., van den Bosch F. C., Yang X., Mo H. J., 2006, MNRAS, 366, 2
* Xu et al. (2021) Xu X., Zehavi I., Contreras S., 2021, MNRAS, 502, 3242
* York et al. (2000) York D. G., et al., 2000, AJ, 120, 1579
* Yuan et al. (2021) Yuan S., Hadzhiyska B., Bose S., Eisenstein D. J., Guo H., 2021, MNRAS, 502, 3582
* Yuan et al. (2023a) Yuan S., et al., 2023a, arXiv e-prints, p. arXiv:2306.06314
* Yuan et al. (2023b) Yuan S., et al., 2023b, arXiv e-prints, p. arXiv:2310.09329
* Zehavi et al. (2005) Zehavi I., et al., 2005, ApJ, 630, 1
* Zhai et al. (2019) Zhai Z., Benson A., Wang Y., Yepes G., Chuang C.-H., 2019, MNRAS, 490, 3667
* Zhai et al. (2021a) Zhai Z., Wang Y., Benson A., Colbert J., Bagley M., Henry A., Baronchelli I., 2021a, arXiv e-prints, p. arXiv:2109.12216
* Zhai et al. (2021b) Zhai Z., Wang Y., Benson A., Chuang C.-H., Yepes G., 2021b, MNRAS, 505, 2784
* Zheng et al. (2005) Zheng Z., et al., 2005, ApJ, 633, 791
* de Jong et al. (2012) de Jong R. S., et al., 2012, in McLean I. S., Ramsay S. K., Takami H., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 8446, Ground-based and Airborne Instrumentation for Astronomy IV. p. 84460T (arXiv:1206.6885), doi:10.1117/12.926239
# Complementarity between quantum entanglement, geometrical and dynamical
appearances in $N$ spin-$1/2$ system under all-range Ising model
Jamal Elfakir<EMAIL_ADDRESS>LPHE-Modeling and Simulation, Faculty
of Sciences, Mohammed V University in Rabat, Rabat, Morocco. Brahim Amghar
<EMAIL_ADDRESS>LPHE-Modeling and Simulation, Faculty of Sciences,
Mohammed V University in Rabat, Rabat, Morocco. Centre of Physics and
Mathematics, CPM, Faculty of Sciences, Mohammed V University in Rabat, Rabat,
Morocco. Abdallah Slaoui<EMAIL_ADDRESS>LPHE-Modeling and
Simulation, Faculty of Sciences, Mohammed V University in Rabat, Rabat,
Morocco. Centre of Physics and Mathematics, CPM, Faculty of Sciences,
Mohammed V University in Rabat, Rabat, Morocco. Mohammed Daoud
<EMAIL_ADDRESS>Department of Physics, Faculty of Sciences, University
Ibn Tofail, Kenitra, Morocco.
###### Abstract
With the growth of geometric science, including the methods of exploring the
world of information by means of modern geometry, there has always been a
mysterious and fascinating ambiguous link between geometric, topological and
dynamical characteristics with quantum entanglement. Since geometry studies
the interrelations between elements such as distance and curvature, it
provides the information sciences with powerful structures that yield
practically useful and understandable descriptions of integrable quantum
systems. We explore these structures here in a physical system of $N$ interacting spin-$1/2$ particles under the all-range Ising model. By performing the system dynamics, we determine the Fubini-Study metric defining the relevant quantum state space. Applying the Gaussian curvature within the scope of the Gauss-Bonnet theorem, we prove that the dynamics takes place on a closed two-dimensional manifold having both a dumbbell-shaped structure and a spherical topology. The geometric and topological phases appearing during the system evolution processes are discussed in detail. Subsequently, we resolve the quantum brachistochrone problem by achieving the time-optimal evolution. By restricting the whole system to a two spin-$1/2$ system, we investigate the relevant entanglement from two viewpoints: the first is of geometric nature
and explores how the entanglement level affects derived geometric structures
such as the Fubini-Study metric, the Gaussian curvature, and the geometric
phase. The second is of dynamic nature and addresses the entanglement effect
on the evolution speed and the related Fubini-Study distance. Further,
depending on the degree of entanglement, we resolve the quantum
brachistochrone problem.
Keywords: Quantum state space, Fubini-Study metric, Gaussian curvature,
Geometric phase, Quantum brachistochrone issue, Quantum entanglement.
## I Introduction
Over the past few years, there has been growing excitement about the
application of geometric ideas to quantum physics. It is argued that the
geometrization of quantum theory provides a significant framework able to
describe, to a large extent, the physical characteristics of solvable quantum
systems [1, 2, 3, 4]. This geometrical approach has introduced the concept of
the quantum phase space, endowed naturally with the Kähler manifold structure,
on which the dynamics of quantum systems is well established [5, 6, 7, 8].
Lately, numerous studies have shown the relevance of geometric structures of
the physical state space for exploring the physical properties of quantum
systems. Indeed, it has been demonstrated that the Fubini-Study distance traveled by a quantum system during evolution along a given curve in the relevant
projective Hilbert space is related to the integral of the energy uncertainty,
which in turn is proportional to the evolution speed [9]. The quantum speed
limit time, which defines the fundamental limit on the rate of evolution of
quantum systems, is also determined by means of Bures length between mixed
quantum states [10]. Additionally, the geometric methods simplify considerably
the resolution of the quantum brachistochrone problem, which is linked to
determining the Hamiltonian that generates the time-optimal evolution between
two states [11, 12, 13]. Efficient quantum circuits in quantum computation with $n$ qutrits have been investigated using Riemannian geometry. Indeed, it
has been proven that the optimal quantum circuits are basically equivalent to
the shortest path between two points in a specific curved geometry of
SU($3^{n}$) [14], analogous to the qubit case wherein the geodesic in
SU($2^{n}$) is involved [15]. For other further dynamic properties explored on
the basis of geometric approaches, we recommend that readers look at the
papers [16, 17, 18, 19].
Currently, geometric quantum mechanics, which forms the theoretical
information geometric science, in which the quantum phenomena are handled
geometrically in the space of quantum states. One can cite, for instance, the
quantum entanglement being an intriguing physical resource in the protocols of
quantum information theory [20, 21, 22, 23, 24]. Importantly, entanglement is
shown to be closely related to the Mannoury-Fubini-Study distance separating
the entangled state and the nearest disentangled state [25]. Moreover, the
euclidean distance of an entangled state to the disentangled states is equal
to the maximal violation of a generalized Bell inequality with the tangent
functional as entanglement witness [26]. The connection between quantum
entanglement and the state space geometry has been thoroughly explored for a
spin-s system with long-range Ising interaction [27]. Further to that, the
geometrical description of entanglement is also explored within the backdrop
of the Hopf fibration, which is a topological map compactifying the related
quantum state space to an another lower-dimensional space referred to as the
Hopf bundle [28, 29, 30]. For additional findings highlighting the interplay
between quantum entanglement and geometrical characteristics, see, e.g.,
Refs.[31, 32, 33, 34].
Another significant concept that has received much attention in quantum
physics is the geometric phase, a remarkable geometric characteristic in
quantum evolution processes [35, 36, 37, 38]. It can be viewed as the holonomy
acquired by the state vector after a parallel transport along the evolution
trajectory [39, 40]. The geometric phase is now intimately related to other
geometrical features that define the quantum state spaces. In effect, it has
been proven that the geometric phase can be expressed as the line integral of
Berry-Simon connection along the Fubini-Study distance separating the two
quantum states over the corresponding projective Hilbert space [7, 41]. On the
practical side, several recent investigations have shown the important role of the geometric phase in the advancement of quantum information science.
Indeed, it is a valuable feature for generating quantum logic gates that are
critical to quantum computation [42, 43, 44]. Furthermore, the conditional
phase gate has been experimentally demonstrated for nuclear magnetic resonance
[45] and trapped ions [46]. The interplay between quantum entanglement and
topological and geometric phases is also extensively studied in the two-qudit
systems [47, 48, 49]. Other geometric phase applications have been realized in
Refs.[50, 51, 52, 53].
The main purpose of this work is to highlight the geometrical and dynamical
characteristics of a many-body system, represented here by an $N$ spin-$1/2$
system under the all-range Ising model, and their connection to quantum
entanglement. It is noteworthy that the ideas explored in this paper were
primarily inspired by the findings obtained by Krynytskyi and Kuzmak in
Ref.[27]. As a matter of fact, by performing the system dynamics, we derive
the Fubini-Study metric identifying the associated quantum state space.
Moreover, examining the Gaussian curvature (G-curvature) in the framework of
the Gauss-Bonnet theorem, we determine the topology and the structure of this
space. Afterward, we explore the acquired geometric and topological phases and
tackle the quantum brachistochrone issue. Finally, we give a detailed
explanation of the geometrical and dynamical characteristics of two
interacting spin-$1/2$ under the Ising model in connection to quantum
entanglement.
The rest of this paper is structured as follows. In Sec.II, by carrying out
the dynamics of $N$ interacting spin-$1/2$ under all-range Ising model, we
give the Fubini-Study metric and identify the associated quantum state space.
Moreover, by investigating the G-curvature within the scope of the Gauss-
Bonnet theorem, we uncover the topology and the structure of this space. The
geometric and topological phases emerging from the system evolution processes,
over the resulting state space, are thoroughly discussed in Sec.III. The
quantum brachistochrone problem is also addressed based on the evolution
velocity as well as the related Fubini-Study distance, in Sec.IV. In Sec.V, we
study the entanglement between two interacting spins-$1/2$ under the Ising model
from two different perspectives: the first is of a geometric nature and
investigates how the entanglement degree impacts derived geometric features
such as the Fubini-Study metric, the G-curvature, and the geometric phase. The
second is of a dynamic type and addresses the entanglement effect on evolution
speed and the corresponding Fubini-Study distance. Further to this, we resolve
the quantum brachistochrone problem based on quantum entanglement. We supply
concluding remarks in Sec.VI.
## II Unitary evolution and the quantum state manifold of $N$ spin-$1/2$
system
### II.1 Physical model and unitary quantum evolution
To start, the considered system is composed of $N$ qubits represented by $N$
interacting spin-$1/2$ under all-range Ising model described by the following
Hamiltonian
$\mathrm{H}=\mathtt{J}\left(\sum_{i=1}^{N}S^{z}_{i}\right)^{2},$ (1)
where $\mathtt{J}$ is the coupling constant characterizing the interaction and
$\mathtt{S}_{i}^{z}$ denotes the $z$-component of the spin operator
$\textbf{S}_{i}=(\mathtt{S}_{i}^{x},\mathtt{S}_{i}^{y},\mathtt{S}_{i}^{z})^{T}$
associated with the $i$th spin-$1/2$ (i.e., the $i$th qubit), which fulfills the
eigenvalue equation
$\mathtt{S}_{i}^{z}\left|{{\mathsf{m}_{i}}}\right\rangle=\hbar{\mathsf{m}_{i}}\left|{{\mathsf{m}_{i}}}\right\rangle,$
(2)
where $\mathtt{S}_{i}^{\alpha}=\frac{\hbar}{2}\sigma_{i}^{\alpha}$ and
$\sigma_{i}^{\alpha}$ $(\alpha=x,y,z)$ are the Pauli matrices,
$\mathsf{m}_{i}=\pm 1/2$ represents the possible values of the
projection of the $i$th spin onto the $z$-axis (in units of $\hbar$), and
$\left|{{\mathsf{m}_{i}}}\right\rangle$ denotes the associated eigenstates. It
is worth noting that the components of spin-$1/2$ operators
$\mathtt{S}_{i}^{x},\,\mathtt{S}_{i}^{y},$ and $\mathtt{S}_{i}^{z}$ satisfy
the algebraic structure of the su(2) Lie algebra:
$\left[{\mathtt{S}_{i}^{\alpha},\mathtt{S}_{j}^{\beta}}\right]=i\hbar{\delta_{ij}}\sum\limits_{\gamma=x,y,z}\epsilon^{\alpha\beta\gamma}{\mathtt{S}_{i}^{\gamma}},$
(3)
where $\delta_{ij}$ and $\epsilon^{\alpha\beta\gamma}$ designate the Kronecker and Levi-
Civita symbols, respectively. It is straightforward to see that for an even
number of spins, the above Hamiltonian has $(N/2+1)$ eigenvalues, whereas for
an odd number, it has $(N+1)/2$ eigenvalues. Explicitly, the eigenvalues and
related eigenstates are provided as follows
$\begin{array}{cc}\frac{\mathtt{J}\hbar^{2}}{4}N^{2}&|\uparrow\uparrow\uparrow\ldots\uparrow\uparrow\rangle,|\downarrow\downarrow\downarrow\ldots\downarrow\downarrow\rangle;\\
\frac{\mathtt{J}\hbar^{2}}{4}(N-2)^{2}&|\downarrow\uparrow\uparrow\ldots\uparrow\uparrow\rangle,|\uparrow\downarrow\uparrow\ldots\uparrow\uparrow\rangle,\ldots,|\uparrow\uparrow\uparrow\ldots\uparrow\downarrow\rangle,\\
&|\uparrow\downarrow\downarrow\ldots\downarrow\downarrow\rangle,|\downarrow\uparrow\downarrow\ldots\downarrow\downarrow\rangle,\ldots,|\downarrow\downarrow\downarrow\ldots\downarrow\uparrow\rangle;\\
\frac{\mathtt{J}\hbar^{2}}{4}(N-4)^{2}&|\downarrow\downarrow\uparrow\ldots\uparrow\uparrow\rangle,|\downarrow\uparrow\downarrow\ldots\uparrow\uparrow\rangle,\ldots,|\uparrow\uparrow\uparrow\ldots\downarrow\downarrow\rangle,\\
&|\uparrow\uparrow\downarrow\ldots\downarrow\downarrow\rangle,|\uparrow\downarrow\uparrow\ldots\downarrow\downarrow\rangle,\ldots,|\downarrow\downarrow\downarrow\ldots\uparrow\uparrow\rangle;\\
\ldots&\ldots\end{array}$ (4)
Taking into account all possible combinations of spin states (i.e.,
$|\uparrow\rangle$ and $|\downarrow\rangle$), one finds that each eigenvalue
$\mathtt{J}\hbar^{2}(N-2p)^{2}/4$ corresponds to $2\mathrm{C}_{N}^{p}$
eigenstates, where $\mathrm{C}_{N}^{p}$ stands for the binomial coefficient and
the index runs over $p=0,\ldots,N/2$ for the particle number $N$ even and
$p=0,\ldots,(N-1)/{2}$ for $N$ odd. We presume that the evolution of
the $N$ spin-$1/2$ system starts with the initial state
$|\Psi_{i}\rangle=|+\rangle^{\otimes N},$ (5)
where
$|+\rangle=\cos\frac{\theta}{2}|\uparrow\rangle+\sin\frac{\theta}{2}e^{i\varphi}|\downarrow\rangle,$
is the eigenstate of the spin-$1/2$ projection operator along the
direction denoted by the unit vector
$\textbf{n}=(\sin\theta\cos\varphi,\sin\theta\sin\varphi,\cos\theta)$, where
$\theta$ and $\varphi$ designate the polar and azimuthal angles, respectively.
In this respect, the initial state (5) of the system can be rewritten, using
the binomial theorem, as
$\left|\Psi_{i}\right\rangle=\sum_{p=0}^{N}\cos^{N-p}\frac{\theta}{2}\sin^{p}\frac{\theta}{2}e^{ip\varphi}\sum_{i_{1}<i_{2}<\ldots<i_{p}=1}^{N}\sigma_{i_{1}}^{x}\sigma_{i_{2}}^{x}\ldots\sigma_{i_{p}}^{x}|\uparrow\rangle^{\otimes
N},$ (6)
where we set $\hbar=1$, indicating that the energy is measured in
frequency units. To investigate the geometrical, topological, and dynamical
features of the system under consideration in the remainder of this paper, we
need to evolve the $N$ spin-1/2 system, maintained initially in the starting
state (6), by applying the time evolution propagator
$\mathcal{P}(t)=e^{-i\mathrm{H}t}$. In this perspective, the evolving state of
the system is obtained as
$\left|\Psi(t)\right\rangle=\sum_{p=0}^{N}\cos^{N-p}\left(\frac{\theta}{2}\right)\sin^{p}\left(\frac{\theta}{2}\right)\exp\left\{-i\left[\frac{\xi(t)}{4}(N-2p)^{2}-p\varphi\right]\right\}\sum_{i_{1}<i_{2}<\ldots<i_{p}=1}^{N}\sigma_{i_{1}}^{x}\sigma_{i_{2}}^{x}\ldots\sigma_{i_{p}}^{x}|\uparrow\uparrow\ldots\uparrow\rangle,$
(7)
with $\xi(t)=\mathtt{J}t$. In this way, we arrive at the evolved
state of a collection of $N$ qubits (i.e., $N$ spins-$1/2$), which depends
explicitly on three parameters, namely the spherical angles $(\theta,\varphi)$
and the time $t$. It is also intriguing to note that the state (7) fulfills
the periodic requirement
$\left|\Psi(\xi)\right\rangle=\left|\Psi(\xi+2\pi)\right\rangle$ (up to a
global phase) along the temporal parameter, implying that $\xi\in[0,2\pi]$.
Accordingly, one can predict that the dynamics of the system occurs on a
closed three-dimensional manifold.
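For readers who wish to check this periodicity numerically, the following minimal Python sketch (ours, assuming NumPy is available; the helper psi() is not part of the model) builds the evolved state (7) for a small $N$ and verifies that $\xi$ and $\xi+2\pi$ define the same ray in the projective Hilbert space. Since the Hamiltonian (1) is diagonal in the computational basis, the propagator reduces to the phase factors $e^{-i\xi(N-2p)^{2}/4}$, where $p$ counts the down-spins of a basis state.

import numpy as np
from functools import reduce

def psi(N, theta, phi, xi):
    # Evolved state (7): apply exp(-i H t) to |+>^(x N), with xi = J*t and hbar = 1.
    up, down = np.array([1, 0], complex), np.array([0, 1], complex)
    plus = np.cos(theta / 2) * up + np.sin(theta / 2) * np.exp(1j * phi) * down
    state = reduce(np.kron, [plus] * N)                       # initial state (5)
    p = np.array([bin(i).count("1") for i in range(2 ** N)])  # down-spins per basis state
    return np.exp(-1j * xi * (N - 2 * p) ** 2 / 4) * state    # diagonal propagator

N, theta, phi, xi = 4, 0.7, 0.3, 1.1
a, b = psi(N, theta, phi, xi), psi(N, theta, phi, xi + 2 * np.pi)
print(abs(abs(np.vdot(a, b)) - 1) < 1e-12)   # True: same point of projective space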
### II.2 Geometry and topology of the resulting state manifold
After evolving the $N$ spin-$1/2$ system by means of the time-evolution operator
and determining the evolved state (7), we now uncover the geometry and
topology associated with the relevant quantum state space, which includes all
possible states that the system may reach during evolution. For this task, we
must compute the Fubini-Study metric, which defines the infinitesimal distance
$d\mathtt{S}$ between two adjoining pure quantum states
$|\Psi(\zeta^{\mu})\rangle$ and $|\Psi(\zeta^{\mu}+d\zeta^{\mu})\rangle$,
having the following form [54, 55]
$d\mathtt{S}^{2}=\mathrm{g}_{\mu\nu}d\zeta^{\mu}d\zeta^{\nu},$ (8)
where $\zeta^{\mu}$ are the physical parameters $\theta,\varphi$ and $\xi$
(i.e., representing the dynamical degrees of freedom of the considered system)
specifying the evolved state (7) and $\mathrm{g}_{\mu\nu}$ denote the
components of this metric tensor given by
$\mathrm{g}_{\mu\nu}=\mathrm{Re}\left(\left\langle\Psi_{\mu}|\Psi_{\nu}\right\rangle-\left\langle\Psi_{\mu}|\Psi\right\rangle\left\langle\Psi|\Psi_{\nu}\right\rangle\right),$
(9)
with
$\left|\Psi_{\mu,\nu}\right\rangle=\frac{\partial}{\partial\zeta^{\mu,\nu}}|\Psi\rangle$.
Using the definition (8), one finds, after a straightforward calculation, the
explicit form of the Fubini-Study metric as follows
$\displaystyle d\mathtt{S}^{2}=$ $\displaystyle
d\mathtt{S}_{i}^{2}+\frac{1}{4}N(N-1)\sin^{2}\theta\left[N-1-\left(N-\frac{3}{2}\right)\sin^{2}\theta\right]d\xi^{2}$
$\displaystyle+\frac{1}{4}N(N-1)\cos\theta\sin^{2}\theta d\varphi d\xi,$ (10)
where
$d\mathtt{S}_{i}^{2}=\frac{N}{4}\left(d\theta^{2}+\sin^{2}\theta
d\varphi^{2}\right),$ (11)
is the squared line element defining the sphere of possible
initial states of the system. This can be clearly seen by taking $\xi=0$
(i.e., no evolution) in the metric (II.2): the state space then reduces to a
sphere of radius $\sqrt{N}/2$. Furthermore, the space of
$N$ spin-$1/2$ states resulting from the temporal evolution is effectively a
closed three-dimensional manifold. Note that the components of the underlying
metric (II.2) are $\varphi$-independent, signifying that the quantum state
spaces with a predesignated azimuthal angle have the same geometry. Hence, we
draw the conclusion that the appropriate quantum state space (i.e., quantum
phase space) corresponding to the $N$ qubits under consideration is a two-
parametric curved manifold, identified by the following metric tensor
$d\mathtt{S}^{2}=\frac{N}{4}d\theta^{2}+\frac{1}{4}N(N-1)\sin^{2}\theta\left[N-1-\left(N-\frac{3}{2}\right)\sin^{2}\theta\right]d\xi^{2}.$
(12)
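To make the passage from the definition (9) to the closed form (12) concrete, the following sketch (ours, assuming NumPy) evaluates the metric components of the state (7) by central finite differences and compares them with $\mathrm{g}_{\theta\theta}=N/4$ and the $\mathrm{g}_{\xi\xi}$ appearing in (12).

import numpy as np
from functools import reduce

def psi(N, th, ph, xi):
    # evolved state (7); the Hamiltonian (1) is diagonal in the computational basis
    plus = np.array([np.cos(th / 2), np.sin(th / 2) * np.exp(1j * ph)])
    p = np.array([bin(i).count("1") for i in range(2 ** N)])
    return np.exp(-1j * xi * (N - 2 * p) ** 2 / 4) * reduce(np.kron, [plus] * N)

def metric(N, th, ph, xi, h=1e-5):
    # diagonal components of (9) from central finite differences
    s = psi(N, th, ph, xi)
    d_th = (psi(N, th + h, ph, xi) - psi(N, th - h, ph, xi)) / (2 * h)
    d_xi = (psi(N, th, ph, xi + h) - psi(N, th, ph, xi - h)) / (2 * h)
    comp = lambda u: (np.vdot(u, u) - abs(np.vdot(s, u)) ** 2).real
    return comp(d_th), comp(d_xi)

N, th = 3, 0.9
g_tt, g_xx = metric(N, th, 0.4, 0.6)
s2 = np.sin(th) ** 2
print(np.isclose(g_tt, N / 4, atol=1e-6))                                  # True
print(np.isclose(g_xx, N * (N - 1) * s2 * (N - 1 - (N - 1.5) * s2) / 4,
                 atol=1e-6))                                               # True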
To further characterize this state space, we are going to determine its
topology. For this aim, we begin by assessing the corresponding G-curvature,
which can be defined in terms of the relevant metric tensor (12) in the form
[56]
$\displaystyle\mathrm{K}=\frac{1}{\left(\mathrm{g}_{\theta\theta}\mathrm{g}_{\xi\xi}\right)^{1/2}}$
$\displaystyle\left[\frac{\partial}{\partial\xi}\left(\left(\frac{\mathrm{g}_{\xi\xi}}{\mathrm{g}_{\theta\theta}}\right)^{1/2}\Gamma_{\theta\theta}^{\xi}\right)\right.$
$\displaystyle\left.-\frac{\partial}{\partial\theta}\left(\left(\frac{\mathrm{g}_{\xi\xi}}{\mathrm{g}_{\theta\theta}}\right)^{1/2}\Gamma_{\theta\xi}^{\xi}\right)\right],$
(13)
where $\Gamma_{\theta\theta}^{\xi}$ and $\Gamma_{\theta\xi}^{\xi}$ account for
the Christoffel symbols given by
$\Gamma_{\theta\theta}^{\xi}=-\frac{1}{2\mathrm{g}_{\xi\xi}}\left(\frac{\partial\mathrm{g}_{\theta\theta}}{\partial\xi}\right),\quad\text{and}\quad\Gamma_{\theta\xi}^{\xi}=\frac{1}{2\mathrm{g}_{\xi\xi}}\left(\frac{\partial\mathrm{g}_{\xi\xi}}{\partial\theta}\right).$
(14)
It is intriguing to see that the temporal component
$\mathrm{g}_{\xi\xi}$ of the metric vanishes at the points $\theta=0,\pi$.
This implies that the G-curvature is not defined at these positions; hence,
it exhibits a singularity at each of these two points, while being well
defined at all other positions within the space of $N$
spin-$1/2$ states. Inserting the metric components $\mathrm{g}_{\theta\theta}$
and $\mathrm{g}_{\xi\xi}$ of the metric (12) into equation (13), the explicit
expression of the relevant G-curvature reads
$\mathrm{K}=\frac{8}{N}\left[2-\frac{(2N-3)\cos^{2}\theta+N}{\left((2N-3)\cos^{2}\theta+1\right)^{2}}\right].$
(15)
Note that the state space curvature (15) is affected only by the initial
parameters $\theta$ and $N$, while it is independent of $\xi$, which carries
the temporal evolution; the state space curvature is therefore independent of
the system dynamics. Furthermore, the G-curvature (15) satisfies the periodic
requirement $\mathrm{K}(\theta)=\mathrm{K}(\theta+\pi)$, which is consistent
with the fact that the resulting quantum phase space (12) is a closed
two-dimensional manifold.
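The passage from the metric (12) to the curvature (15) through the formulas (13)-(14) can also be reproduced symbolically. The following SymPy sketch (ours, given purely as an illustration) evaluates the difference between the curvature computed from (13)-(14) and the closed form (15) at sample values of $N$ and $\theta$.

import sympy as sp

theta, xi, N = sp.symbols("theta xi N", positive=True)
g_tt = N / 4                                                   # g_theta,theta of (12)
g_xx = N * (N - 1) * sp.sin(theta) ** 2 \
       * (N - 1 - (N - sp.Rational(3, 2)) * sp.sin(theta) ** 2) / 4

# Christoffel symbols (14); g_tt does not depend on xi, so the first one vanishes
G1 = -sp.diff(g_tt, xi) / (2 * g_xx)
G2 = sp.diff(g_xx, theta) / (2 * g_xx)
K = (sp.diff(sp.sqrt(g_xx / g_tt) * G1, xi)
     - sp.diff(sp.sqrt(g_xx / g_tt) * G2, theta)) / sp.sqrt(g_tt * g_xx)

K15 = (8 / N) * (2 - ((2 * N - 3) * sp.cos(theta) ** 2 + N)
                 / ((2 * N - 3) * sp.cos(theta) ** 2 + 1) ** 2)
delta = K - K15
print(all(abs(sp.N(delta.subs({N: n, theta: t0}))) < 1e-9
          for n in (2, 3, 5) for t0 in (0.4, 1.1, 2.0)))       # True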
Figure 1: The dependence of the G-curvature (15) on the initial parameter
$\theta$ for some spin-$1/2$ numbers.
From the analysis of Fig.(1), showing the G-curvature behavior with respect
to the initial parameters $(\theta,N)$, we see that the state space geometry
is symmetric with respect to the centerline at $\theta=\pi/2$, which is the
position of minimal curvature. More specifically, in the region
$\theta\in[0,\pi/2]$, the G-curvature declines and the state space
geometry therefore takes a concave shape, whereas in the region
$\theta\in[\pi/2,\pi]$, the G-curvature increases to its maximum value, and
the space geometry therefore takes a convex shape. As a result, we infer
that the $N$ spin-$1/2$ state space has a dumbbell-shape structure.
Additionally, we notice that for $N>2$, the curvature takes negative values
for certain values of $\theta$. This is congruent with the findings given in
Ref. [27].
Considering this fact, in addition to the existence of two singularities in
the G-curvature (15), we come to the conclusion that there are two conical
defects within the quantum phase space (12); the first one is situated near
the location $\theta=0$, whereas the second one is situated near the
location $\theta=\pi$. In light of these outcomes, let us now explore the
topology associated with the space of $N$ spin-$1/2$ states (12). To achieve
this, we have to compute the integer Euler characteristic $\chi(\mathrm{M})$
($\mathrm{M}$ stands for the state space (12)) provided by the Gauss-Bonnet
theorem as [56]
$\frac{1}{2\pi}\left[\int_{\mathrm{M}}\mathrm{K}d\mathrm{S}+\oint_{\partial\mathrm{M}}\mathrm{k}_{\mathrm{g}}d{l}\right]=\chi(\mathrm{M}),$
(16)
where the geometric invariants $d\mathrm{S}$, $\mathrm{k}_{\mathrm{g}}$ and
$d{l}$ denote, respectively, the surface element, geodesic curvature and line
element. Additionally, the first and second terms on the left-hand side of the
equation (16) represent, respectively, the bulk and border contributions to
the Euler characteristic identifying the state space topology. Furthermore,
the Gauss-Bonnet theorem (16) can be established in terms of the state space
geometry (12) in the form
$\int_{0}^{\pi}\int_{0}^{{2\pi}}\mathrm{K}\left(\mathrm{g}_{\theta\theta}\mathrm{g}_{\xi\xi}\right)^{1/2}d\theta
d\xi+{\Lambda}=2\pi\chi(\mathrm{M}),$ (17)
where $\Lambda$ stands for the Euler boundary integral including the
contribution of the conical defects. After a simple calculation, the
Gauss-Bonnet theorem (17) reads as
$4\pi(N-1)+\Lambda=2\pi\chi(\mathrm{M}).$ (18)
Therefore, to find the Euler characteristic $\chi$, we must first determine
the Euler boundary integral $\Lambda$. For this purpose, we presume that the
angular defects are situated very near the singular points $\theta=0,\pi$. In
this view, the underlying metric (12) can be evaluated, in the vicinity of
these two singular positions, up to second order in $\theta$. Indeed, we
obtain
$d\mathtt{S}^{2}=\frac{N}{4}d\theta^{2}+\frac{1}{4}N(N-1)^{2}\theta^{2}d\xi^{2}.$
(19)
As a further note, the solid angle of a revolution cone having the angle at
the peak $2\theta$ is defined by $\Omega=2\pi(1-\cos\theta)$ where the second
term of this definition represents the partial solid angle sketched out by the
system around the cone apex during evolution. Taking into account the
closeness of the angular defects to the two singular points, we can easily find
$2\pi\cos\theta\approx\frac{\mathtt{S}\left(2\pi\right)}{\mathbf{d}}=\frac{2\pi\sqrt{\mathrm{g}_{\xi\xi}}}{\sqrt{\mathrm{g}_{\theta\theta}}\theta},$
(20)
where $\mathtt{S}(2\pi)$ stands for the distance traveled by the system
within the period of time $\textbf{t}=2\pi/\mathtt{J}$ around one of the two
singular positions $(\theta=0$ or $\theta=\pi)$, while $\mathbf{d}$ corresponds
to the distance between the evolution trajectory of the system and the
relevant singular position. As a result, the angular defects are explicitly
given by
$\Lambda=2\left[2\pi-\frac{2\pi\sqrt{\mathrm{g}_{\xi\xi}}}{\sqrt{\mathrm{g}_{\theta\theta}}\theta}\right]=4\pi\left(2-N\right),$
(21)
where the factor of 2 accounts for the two singular points.
Putting the equation (21) into (18), one finds the Euler characteristic
$\chi(\mathrm{M})=2$, showing that the quantum phase space associated with the
$N$ spin-$1/2$ system has a spherical topology. The coming section will be
devoted to a thorough analysis of the geometric phase that the $N$ spin-$1/2$
state (7) can accumulate when subjected to cyclic and arbitrary evolution
processes on the underlying quantum state space (12).
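As a numerical consistency check of equation (18), one may evaluate the bulk contribution to the Gauss-Bonnet theorem (17) directly. The sketch below (ours, assuming SciPy) confirms that the bulk term equals $4\pi(N-1)$, so that together with $\Lambda=4\pi(2-N)$ one indeed recovers $\chi(\mathrm{M})=2$.

import numpy as np
from scipy.integrate import quad

def bulk_term(N):
    # xi-independent integrand of (17); the xi-integration contributes a factor 2*pi
    E = N / 4
    G = lambda th: N * (N - 1) * np.sin(th) ** 2 \
                   * (N - 1 - (N - 1.5) * np.sin(th) ** 2) / 4
    K = lambda th: (8 / N) * (2 - ((2 * N - 3) * np.cos(th) ** 2 + N)
                              / ((2 * N - 3) * np.cos(th) ** 2 + 1) ** 2)
    val, _ = quad(lambda th: K(th) * np.sqrt(E * G(th)), 0, np.pi)
    return 2 * np.pi * val

print(all(np.isclose(bulk_term(N), 4 * np.pi * (N - 1)) for N in (2, 3, 6)))  # True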
## III Geometrical phases acquired by the $N$ spin-$1/2$ state
Having investigated the geometry and topology of the $N$ spin-$1/2$ state
space identified by the metric tensor (12), let us now focus on the geometric
phase that the evolving state (7) can acquire for both arbitrary and cyclic
evolutionary processes.
### III.1 Geometrical phase during an arbitrary evolution
In this instance, we presume that the $N$ spin-$1/2$ system evolves
arbitrarily along any evolution path on the closed two-dimensional manifold
(12). In this picture, the geometric phase gained by the evolved state (7) is
given by
$\Phi_{g}(t)=\arg\langle\Psi_{i}|\Psi(t)\rangle-{\mathrm{Im}}\int_{0}^{t}\langle\Psi(t^{\prime})|\frac{\partial}{\partial
t^{\prime}}|\Psi(t^{\prime})\rangle dt^{\prime},$ (22)
which is defined as the difference between the total phase and the dynamical
phase [47, 57]. To calculate the geometric phase, we must first compute the
total phase acquired by the system. The overlap (i.e., the transition-probability
amplitude) between the starting state (6) and the ending state (7) is obtained as
$\langle\Psi_{i}|\Psi(t)\rangle=\sum_{p=0}^{N}\mathrm{C}_{N}^{p}\cos^{2(N-p)}\left(\frac{\theta}{2}\right)\sin^{2p}\left(\frac{\theta}{2}\right)e^{-\frac{i\xi}{4}(N-2p)^{2}}.$
(23)
Inserting the expression of the overlap (23) into the first term on the
right-hand side of equation (22), the total phase gained by the $N$
spin-$1/2$ system reads
$\Phi_{\operatorname{tot}}=-\arctan\left[\frac{\sum\limits_{p=0}^{N}\mathrm{C}_{N}^{p}\cos^{2(N-p)}\frac{\theta}{2}\sin^{2p}\frac{\theta}{2}\sin\left(\frac{\xi(N-2p)^{2}}{4}\right)}{\sum\limits_{p=0}^{N}\mathrm{C}_{N}^{p}\cos^{2(N-p)}\frac{\theta}{2}\sin^{2p}\frac{\theta}{2}\cos\left(\frac{\xi(N-2p)^{2}}{4}\right)}\right].$
(24)
It is interesting to observe that the total phase (24) comprises two distinct
phase components: the first is of geometrical origin (known as the geometrical
phase), and it is strongly related to the geometrical and topological features
that characterize the quantum state space of such systems [47, 56]. This
geometric component is explained by the implicit reliance of the total phase
(24) on the G-curvature (15) and the component $\mathrm{g}_{\xi\xi}$ of the
metric (12), as they all depend on the parameters $(N,\theta)$. The second one
is of dynamical origin (known as the dynamical phase), and it results from the
time evolution of the Hamiltonian eigenstates (4). Furthermore, the global phase
(24) exhibits a non-linear time dependence and fulfills the following periodic
conditions:
$\Phi_{\operatorname{tot}}(\xi+4\pi)=\Phi_{\operatorname{tot}}(\xi)\qquad\text{for}\;N\;\text{even},$
(25)
and
$\qquad\Phi_{\operatorname{tot}}(\xi+8\pi)=\Phi_{\operatorname{tot}}(\xi)\qquad\text{for}\;N\;\text{odd}.$
(26)
The dynamical phase, on the other hand, can be calculated by plugging the
evolved state (7) into the second term on the right-hand side of equation (22).
Indeed, one finds
$\Phi_{\operatorname{dyn}}=-\frac{\xi
N}{4}\left(N\cos^{2}\theta+\sin^{2}\theta\right).$ (27)
It is proportional to the evolution time, meaning that the dynamical phase
primarily informs us about the time spent by the system during evolution. On the other
hand, the geometric phase that can be accrued by the $N$ spin-$1/2$ state (5),
during any arbitrary evolution over the quantum phase space (12), is obtained
as
$\displaystyle\Phi_{\operatorname{g}}=$
$\displaystyle-\arctan\left[\frac{\sum\limits_{p=0}^{N}\mathrm{C}_{N}^{p}\cos^{2(N-p)}\frac{\theta}{2}\sin^{2p}\frac{\theta}{2}\sin\left(\frac{\xi(N-2p)^{2}}{4}\right)}{\sum\limits_{p=0}^{N}\mathrm{C}_{N}^{p}\cos^{2(N-p)}\frac{\theta}{2}\sin^{2p}\frac{\theta}{2}\cos\left(\frac{\xi(N-2p)^{2}}{4}\right)}\right]$
$\displaystyle+\frac{\xi N}{4}\left(N\cos^{2}\theta+\sin^{2}\theta\right).$
(28)
It is clear that the resulting geometric phase (III.1) varies (i.e.,
accumulates or loses) non-linearly with time, reflecting its dynamical
character. Moreover, it depends on the degrees of freedom $(\theta,\xi)$
specifying the physical states over the quantum phase space (12), which means
that the geometric phase depends on the shape of the evolution trajectory
followed by the system, while its reliance on the initial parameters
$(N,\theta)$ signifies that it is also sensitive to the state space geometry.
Accordingly, we conclude that the geometric phase (III.1) can be exploited to
parameterize the possible evolution trajectories of this system. This result
can find applications in quantum computation, because such quantum phases can
be used to design logic gates that are helpful for building good quantum
algorithms [44, 58]. Let us now turn to a special scenario in which we
investigate the geometric phase accrued by the $N$ spin-$1/2$ state (7) over a
very brief period of time. In this framework, by expanding the exponential
factor given in (23) up to second order in $\xi$, one obtains
$\langle\Psi_{i}|\Psi(t)\rangle\simeq
1+\frac{\xi^{2}N(N-1)}{64}\left[4(N-1)(N+2)\cos^{2}\theta-(N-3)(N-2)\sin^{2}2\theta+4(3N-2)\right]-i\frac{\xi
N}{4}\left(N\cos^{2}\theta+\sin^{2}\theta\right).$ (29)
In this perspective, the geometric phase (III.1) can be expressed as
$\Phi_{\operatorname{g}}\simeq-\arctan\left[\frac{{\xi
N}\left(N\cos^{2}\theta+\sin^{2}\theta\right)}{4+\frac{\xi^{2}N(N-1)}{16}\left[4(N-1)(N+2)\cos^{2}\theta-(N-3)(N-2)\sin^{2}2\theta+4(3N-2)\right]}\right]+\frac{\xi
N}{4}\left(N\cos^{2}\theta+\sin^{2}\theta\right).$ (30)
Note that for $\xi=0$, the system does not gain any quantum phase, because
its state remains confined to the starting state (6) (i.e., no
evolution). As we can see, the greater the number of particles $N$, the
more dominant the dynamical phase. Further, in the thermodynamic limit
($N\to\infty$), the total phase cancels out and therefore the geometric and
dynamical phases coincide, up to a sign, at any moment in the evolution
process. This offers us the opportunity of measuring the geometric phase
experimentally because, in this situation, it can be obtained from the
temporal integral of the average energy of the Ising Hamiltonian (1).
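The decomposition (22) of the geometric phase can be checked directly for small systems. The sketch below (ours, assuming NumPy and SciPy) compares the numerically computed argument of the overlap (23) with the closed form (24) and then assembles the geometric phase (28) using the dynamical phase (27).

import numpy as np
from functools import reduce
from scipy.special import comb

def psi(N, th, ph, xi):   # evolved state (7), as in the earlier sketches
    plus = np.array([np.cos(th / 2), np.sin(th / 2) * np.exp(1j * ph)])
    p = np.array([bin(i).count("1") for i in range(2 ** N)])
    return np.exp(-1j * xi * (N - 2 * p) ** 2 / 4) * reduce(np.kron, [plus] * N)

N, th, ph, xi = 3, 0.8, 0.5, 0.9
overlap = np.vdot(psi(N, th, ph, 0.0), psi(N, th, ph, xi))          # Eq. (23)
p = np.arange(N + 1)
w = comb(N, p) * np.cos(th / 2) ** (2 * (N - p)) * np.sin(th / 2) ** (2 * p)
tot = -np.arctan(np.sum(w * np.sin(xi * (N - 2 * p) ** 2 / 4))
                 / np.sum(w * np.cos(xi * (N - 2 * p) ** 2 / 4)))   # Eq. (24)
dyn = -(xi * N / 4) * (N * np.cos(th) ** 2 + np.sin(th) ** 2)       # Eq. (27)
print(np.isclose(np.angle(overlap), tot))   # True: total phase matches Eq. (24)
print(tot - dyn)                            # geometric phase, Eq. (28)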
### III.2 Geometrical phase under a cyclic evolution
Here, we are focusing on the study of the geometrical phase resulting from the
cyclic evolution of the $N$ spin-$1/2$ system. In this regard, the wave
function (7) satisfies the cyclic requirement
$|\Psi({T})\rangle=e^{i\Phi_{\operatorname{tot}}}|\Psi(0)\rangle$ wherein
${T}$ represents the time span for a cyclic evolution. The AA-geometric phase
(often referred to as the Aharonov-Anandan phase) gained by the system after a
cyclic evolution (i.e., closed curve on the related parameter space) is given
by [36, 59]
$\Phi_{\operatorname{g}}^{\mathrm{AA}}=i\int_{0}^{{T}}\langle\tilde{\Psi}(t)|\frac{\partial}{\partial
t}|\tilde{\Psi}(t)\rangle dt,$ (31)
where $|\tilde{\Psi}(t)\rangle$ denotes the Anandan-Aharonov section given in
Ref. [36]. It is identified through the evolved state (7) as
$|\tilde{\Psi}(t)\rangle=e^{-i{f}(t)}|\Psi(t)\rangle$, where ${f}(t)$ is any
smooth function satisfying $f({T})-f(0)=\Phi_{\operatorname{tot}}$. In this
view, the AA-geometric phase (31) can be rewritten as
$\Phi_{\operatorname{g}}^{\mathrm{AA}}=\int_{0}^{{T}}d\Phi_{\operatorname{tot}}+i\int_{0}^{{T}}\langle\Psi(t)|\frac{\partial}{\partial
t}|\Psi(t)\rangle dt.$ (32)
Substituting equations (24) and (27) into (32), one obtains the AA-geometric
phase acquired by the $N$ spin-$1/2$ system (7) during a cyclic evolution in
the form
$\Phi_{\operatorname{g}}^{\mathrm{AA}}=-\frac{\pi}{2}N(N-1)\sin^{2}\theta.$
(33)
Thus, the obtained AA-geometric phase (33) is independent of the system
dynamics. Rather, it depends only on the initial parameters $\theta$ and
$N$ (i.e., the starting state), which control the shape of the state space (12).
As a result, we conclude that the AA-geometric phase is impacted by the state
space geometry and not by the evolution path followed by the system; hence the
cyclic evolution paths are not parameterizable by this cyclic phase. Further,
using equation (15) in (33), one can relate the AA-geometric phase to
the G-curvature as follows
$\Phi_{\operatorname{g}}^{\mathrm{AA}}=\frac{{\pi
N(N-1)}}{2}\left[{\frac{{-56+3N\left({16-(N-1)\mathrm{K}}\right)}}{{\left({2N-3}\right)\left({N\mathrm{K}-16}\right)}}}\right].$
(34)
In Fig.(2), we depict the behavior of the cyclic geometrical phase (33) with
respect to the starting parameters $(N,\theta)$. We find that it behaves
similarly to the G-curvature (see Figure 1).
Figure 2: The dependence of the AA-geometric phase (33) on the initial
parameter $\theta$ for some spin-$1/2$ numbers.
Specifically, the AA-geometric phase (33) decreases in the region
$\theta\in[0,\pi/2]$, in which the state space geometry has a concave shape
(i.e., the G-curvature decreases), whereas it grows in the region
$\theta\in[\pi/2,\pi]$, in which this geometry has a convex form (i.e., the
G-curvature increases). Thus, we conclude that the AA-geometric phase (33)
also has a symmetric behavior along the parameter $\theta$; this is due to the
dumbbell-shape structure of the state space. Additionally, it is
interesting to discuss the topological phase that can emerge during the cyclic
evolution of the wave function (7). In reality, it constitutes the part of the
cyclic geometric phase that does not receive any dynamical contribution.
Explicitly, it is found as
$\Phi_{\operatorname{top}}^{\mathrm{AA}}=-\frac{\pi}{2}N^{2}.$ (35)
The resulting topological phase (35) is proportional to the square of the
particle number. In particular, this phase takes fractional multiples of $\pi$
for $N$ odd and integer multiples of $\pi$ for $N$ even. This shows that the
topological phase (resp. the particle number) is relevant for controlling the
state space topology given in (18). This provides the possibility to
parametrize the closed evolution paths traversed by the system through the
resulting topological phase (35). This result looks very interesting in
quantum computing, particularly in the search for efficient quantum
circuits [53, 60]. This issue can be closely connected to the determination of
the optimal evolution path of the system under consideration, by evaluating
the evolution speed as well as the related Fubini-Study distance. This is the
quantum brachistochrone problem, which will be tackled in the succeeding
section.
## IV Speed and quantum brachistochrone issue for $N$ spin-$1/2$ system
Now, we will exploit the Riemannian geometry identifying the quantum state
space (12) to investigate some dynamical properties of the system. In
particular, we examine the evolution speed and the geodesic distance measured
by the Fubini-Study metric (12) in order to solve the related quantum
brachistochrone problem [11, 61]. This issue is often linked to achieving
time-optimal evolution, which is characterized by maximal speed together with
the shortest possible trajectory between the starting state (6) and the ending
state (7). In other terms, solving this dynamical problem amounts to finding
the shortest period of evolution.
### IV.1 Speed of quantum evolution
In order to evaluate the evolution speed, we presume that the evolution of the
$N$ spin-$1/2$ system depends only on time while leaving all other parameters
unchanged. In this picture, the metric tensor (12) simplifies to
$d\mathtt{S}^{2}=\frac{1}{4}N(N-1)\sin^{2}\theta\left[N-1-\left(N-\frac{3}{2}\right)\sin^{2}\theta\right]d\xi^{2}.$
(36)
This shows that the dynamics of the system occurs on a circle of radius
$\sqrt{\mathrm{g}_{\xi\xi}}$. Therefore, the evolution speed of the $N$
spin-$1/2$ state (7) takes the form [9]
$\mathrm{V}=\frac{d\mathtt{S}}{dt}=\Delta\mathrm{E},$ (37)
where $\Delta\mathrm{E}$ designates the energy uncertainty of the above
Hamiltonian (1). From the analysis of the equation (37), we notice that the
larger the energy uncertainty, the faster the system evolves, and vice versa.
By setting the equation (36) into (37), the evolution velocity of the wave
function (7) is found as
$\mathrm{V}=\frac{\mathtt{J}}{2}\sqrt{N(N-1)\sin^{2}\theta\left[N-1-\left(N-\frac{3}{2}\right)\sin^{2}\theta\right]}.$
(38)
Thus, the evolution rapidity is affected by the coupling interaction
$\mathtt{J}$ and the initial parameters ($\theta$, $N$), i.e., the choice of
the starting state. From the equation (38), we remark that the larger the
particle number and the coupling constant, the faster the system evolves, with
the exception of $\theta=0$ or $\theta=\pi$, in which the evolution velocity
(38) cancels out $(\mathrm{V}=0)$ regardless of these physical parameters.
This is justified by the fact that at these two singular points the initial
state reduces to an eigenstate of the Hamiltonian (1), so no evolution takes
place; accordingly, the G-curvature (15) is also singular there.
Since the speed depends on the parameter $\theta$, it is also linked both to
the G-curvature (15) and to the geometric phase (III.1), including the
AA-geometric phase (33). This can be clearly seen in the figure (3)
displaying its dependence on the initial parameters $(\theta,N)$.
Figure 3: The dependence of the evolution speed (38) on the initial parameter
$\theta$ for some spin-$1/2$ numbers with $\mathtt{J}=1$.
Note that the evolution speed (38) exhibits a symmetric behavior with respect
to $\theta=\pi/2$, reflecting the dumbbell-shape structure of the state space
geometry. This makes perfect sense because the dynamics of the $N$
spin-$1/2$ system takes place precisely on this quantum phase space.
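The identification of the evolution speed with the energy uncertainty in (37) can also be tested numerically. Since the Hamiltonian (1) is diagonal in the computational basis, $\Delta\mathrm{E}$ follows directly from the populations of the initial state, and the sketch below (ours, assuming NumPy) confirms that it reproduces the velocity (38).

import numpy as np
from functools import reduce

N, th, J = 4, 1.0, 1.0
plus = np.array([np.cos(th / 2), np.sin(th / 2) * np.exp(0.3j)])
prob = np.abs(reduce(np.kron, [plus] * N)) ** 2           # initial-state populations
p = np.array([bin(i).count("1") for i in range(2 ** N)])
E = J * (N - 2 * p) ** 2 / 4                              # spectrum of the Hamiltonian (1)
dE = np.sqrt(prob @ E ** 2 - (prob @ E) ** 2)             # energy uncertainty
s2 = np.sin(th) ** 2
V = (J / 2) * np.sqrt(N * (N - 1) * s2 * (N - 1 - (N - 1.5) * s2))   # Eq. (38)
print(np.isclose(V, dE))                                  # True: V = Delta E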
### IV.2 Resolution of the quantum brachistochrone problem
To address this dynamical issue, we should determine the shortest duration
necessary to carry out the time-optimal evolution of the considered
system (7). To accomplish this, one begins by maximizing the evolution
velocity (38) by solving the equation $d\mathrm{V}/d\theta=0$, which yields
$N(N-1)(N-1-(2N-3)\sin^{2}\theta)\sin 2\theta=0,$ (39)
which entails that
$\sin\theta_{\max}=\sqrt{\frac{N-1}{2N-3}}.$ (40)
Therefore, the highest speed that the $N$ spin-$1/2$ system can achieve reads
as
$\mathrm{V}_{\max}=\mathtt{J}(N-1)\sqrt{\frac{N(N-1)}{8(2N-3)}}.$ (41)
In this way, we manage to establish the maximum velocity, which, for a given
coupling $\mathtt{J}$, depends only on the number of particles making up the
system. Let us now investigate the
geodesic distance that the system traverses between the departure state (6)
and the arrival state (7). For this, utilizing the equation (37), one finds
$\mathtt{S}=\frac{{\xi}}{2}\sqrt{N(N-1)\sin^{2}\theta\left[N-1-\left(N-\frac{3}{2}\right)\sin^{2}\theta\right]}.$
(42)
As the evolution rapidity (38) is time-independent, the Fubini-Study distance
(42) traveled by the system has a linear behavior with time. Additionally, we
note that, at each instant, the evolution speed and the distance have a
similar behavior against the physical parameters $(\theta,\mathtt{J},N)$ of
the system. Specifically, at the singular points $\theta=0,\pi$, the distance
vanishes, which is reasonable because at these points the system does not
evolve. Thereby, one concludes that the Fubini-Study
distance (42) exhibits a local minimum at the point $\theta=\pi/2$, given
by
$\mathtt{S}_{\min}=\frac{{\xi}}{2}\sqrt{\frac{N(N-1)}{2}}.$ (43)
Thus, the minimum feasible time required for the system to conduct any quantum
evolution reads
$\mathrm{t}_{\min}=\frac{\mathtt{S}_{\min}}{\mathrm{V}_{\max}}=\frac{t}{(N-1)}\sqrt{2N-3}.$
(44)
This is the shortest duration needed to achieve a time-optimal evolution over
the state circle (36). In specific terms, the result (44) defines the optimal
evolution condition, which is typified by the fastest evolution and the
shortest path joining the initial and final states. Therefore, the optimal
evolution states can be produced through the following unitary transformation
$\left|\Psi_{i}\right\rangle\rightarrow|\Psi(\mathrm{t}_{\min})\rangle=e^{-i\mathrm{H}\mathrm{t}_{\min}}\left|\Psi_{i}\right\rangle.$
(45)
The set of these states makes up an optimal state circle described by
the Fubini-Study metric of the form
$d\mathtt{S}^{2}_{\mathrm{op}}=\frac{1}{4}N(N-1)\sin^{2}\theta\left[N-1-\left(N-\frac{3}{2}\right)\sin^{2}\theta\right]d\xi^{2}_{\min}.$
(46)
with $\xi_{\min}=\mathtt{J}\mathrm{t}_{\min}$. On the other hand, we remark
that the ratio of the optimal time (44) to the ordinary time is influenced
solely by the particle number; this implies that the state circle topology
affects the optimal evolution time as well. The optimal time is also
proportional to the ordinary time $t$ (i.e., the time of evolution over the
state circle (36)) with a positive proportionality factor, meaning that the
optimal and ordinary times have the same monotonicity.
Particularly, one finds that for $N=2$ (i.e., two-spin-$1/2$ system) these two
types of time coincide $(\mathrm{t}_{\min}=t)$, while for $N\geq 3$ (i.e., $N$
spin-$1/2$ system), we discover that the optimal time (44) is strictly lower
than the ordinary time, and therefore the time-optimal evolution is
achievable. Besides, in the thermodynamic limit $(N\to\infty)$, the optimal
time decreases to zero $(\mathrm{t}_{\min}\to 0)$. In this respect, the
optimal state circle (46) coincides with a straight line since its radius
$\sqrt{\mathrm{g_{\xi_{\min}\xi_{\min}}}}$ becomes infinite. As a result, we
infer that the particle number and the ordinary time are two physical
magnitudes exploitable for performing the time-optimal evolutions in such
integrable systems. At the end of this section, we note that it is intriguing
to relate the geometric and dynamic structures that we explored above with
quantum entanglement, as a physical resource of great relevance in quantum
information tasks. The next section focuses on this subject.
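The brachistochrone analysis of this section lends itself to a direct numerical test. The sketch below (ours, assuming NumPy) maximizes the speed (38) on a grid of angles and verifies the optimal angle (40), the maximal velocity (41), and the optimal-to-ordinary time ratio (44).

import numpy as np

def brachistochrone(N, J=1.0):
    th = np.linspace(0.0, np.pi / 2, 200001)
    s2 = np.sin(th) ** 2
    V = (J / 2) * np.sqrt(N * (N - 1) * s2 * (N - 1 - (N - 1.5) * s2))  # Eq. (38)
    i = V.argmax()
    ok_angle = np.isclose(np.sin(th[i]) ** 2, (N - 1) / (2 * N - 3),
                          atol=1e-4)                                     # Eq. (40)
    ok_speed = np.isclose(V[i], J * (N - 1) * np.sqrt(N * (N - 1) / (8 * (2 * N - 3))),
                          rtol=1e-6)                                     # Eq. (41)
    ratio = (J / 2) * np.sqrt(N * (N - 1) / 2) / V[i]   # t_min / t = S_min / (V_max t)
    ok_time = np.isclose(ratio, np.sqrt(2 * N - 3) / (N - 1), rtol=1e-6)  # Eq. (44)
    return ok_angle and ok_speed and ok_time

print(all(brachistochrone(N) for N in (2, 3, 5, 10)))     # True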
## V Geometric and dynamic pictures of the entanglement for two-spin system
$(N=2)$
In this section, we shall study the quantum entanglement exchanged between two
interacting spins under the Ising model from two different perspectives: the
first is geometric in nature and investigates the entanglement impact on
derived geometric features such as the Fubini-Study metric, the G-curvature,
and the geometric phase under arbitrary and cyclic evolutions. The second is
dynamical in nature and explores the entanglement effect on the evolution
speed as well as on the Fubini-Study distance covered by the system.
Importantly, we address the quantum brachistochrone problem in terms of the
entanglement degree of the two-spin system.
### V.1 Entanglement degree of the two-spin system
The wave function of the global quantum system (7) reduces for a two-spin
system to the form
$\displaystyle|\Psi(t)\rangle=$ $\displaystyle
e^{-i\xi(t)}\cos^{2}\frac{\theta}{2}|\uparrow\uparrow\rangle+\frac{1}{2}e^{i\varphi}\sin\theta(|\uparrow\downarrow\rangle+|\downarrow\uparrow\rangle)$
$\displaystyle+e^{i(2\varphi-\xi(t))}\sin^{2}\frac{\theta}{2}|\downarrow\downarrow\rangle.$
(47)
Hence, the two-spin state space, on which the dynamics of the system takes
place, is defined by the following metric tensor
$d\mathtt{S}^{2}=\frac{1}{2}d\theta^{2}+\frac{1}{4}\sin^{2}\theta\left(2-\sin^{2}\theta\right)d\xi^{2}.$
(48)
Using the Wootters concurrence expression given in Ref. [62], one obtains,
after a simple calculation, the entanglement amount contained in the two-spin
state (V.1) in the form
$\boldsymbol{\mathscr{C}}=\sin^{2}\theta|\sin\xi|.$ (49)
It is the same for any two spins of the entire system (7). In other terms,
each spin pair is quantum-correlated as much as any other pair. Interestingly,
we observe that the two-spin entanglement (49) evolves periodically with time,
signifying that it is impacted by the dynamics of the system. Moreover, it
relies on the initial parameter $\theta$, showing that the entanglement degree
is also governed by the starting state choice. Notice that for $\xi=\pi/2$ and
$\theta=\pi/2$, the two-spin state (V.1) reaches its maximum entanglement
value $(\boldsymbol{\mathscr{C}}=1)$, whereas for $\theta=0$ or $\pi$, the two
spins can never be entangled $(\boldsymbol{\mathscr{C}}=0)$, because the
corresponding initial states $|\Psi_{i}\rangle=|\uparrow\uparrow\rangle$ or
$|\downarrow\downarrow\rangle$ are Hamiltonian eigenstates. This can also be
justified topologically and geometrically by the existence of a conical
defect close to each of these two singular points.
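For a pure two-qubit state $a|\uparrow\uparrow\rangle+b|\uparrow\downarrow\rangle+c|\downarrow\uparrow\rangle+d|\downarrow\downarrow\rangle$, the Wootters concurrence reduces to $\boldsymbol{\mathscr{C}}=2|ad-bc|$; the short sketch below (ours, assuming NumPy) checks this formula against the amplitudes of the state (V.1) and the closed form (49).

import numpy as np

theta, phi, xi = 1.1, 0.4, 0.7
# amplitudes of the two-spin state (V.1) in the basis {uu, ud, du, dd}
a = np.exp(-1j * xi) * np.cos(theta / 2) ** 2
b = c = 0.5 * np.exp(1j * phi) * np.sin(theta)
d = np.exp(1j * (2 * phi - xi)) * np.sin(theta / 2) ** 2
C = 2 * abs(a * d - b * c)                  # pure-state Wootters concurrence
print(np.isclose(C, np.sin(theta) ** 2 * abs(np.sin(xi))))   # True: Eq. (49)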
### V.2 Geometrical picture of the entanglement
In order to evoke the geometric aspect of the quantum correlations between the
two spins under study, we suggest a thorough description illustrating the
nexus between the entanglement and the geometric structures derived above.
Inserting equation (49) into (48), we express the Fubini-Study metric
identifying the two-spin state space in terms of the concurrence as
$d{\mathtt{S}^{2}}=\frac{{d{\boldsymbol{\mathscr{C}}^{2}}}}{{8\boldsymbol{\mathscr{C}}(\left|{\sin\xi}\right|-\boldsymbol{\mathscr{C}})}}-\frac{{d\boldsymbol{\mathscr{C}}d\xi}}{{4\tan\xi(\left|{\sin\xi}\right|-\boldsymbol{\mathscr{C}})}}+\frac{\boldsymbol{\mathscr{C}}}{4}\left({\frac{1}{{2{{\tan}^{2}}\xi(\left|{\sin\xi}\right|-\boldsymbol{\mathscr{C}})}}+\frac{{2\left|{\sin\xi}\right|-\boldsymbol{\mathscr{C}}}}{{{{\sin}^{2}}\xi}}}\right)d{\xi^{2}}.$
(50)
This metric can be brought into the diagonal form
$d\mathtt{S}^{2}=\frac{1}{8\boldsymbol{\mathscr{C}}_{r}\left(1-\boldsymbol{\mathscr{C}}_{r}\right)}d\boldsymbol{\mathscr{C}}_{r}^{2}+\frac{1}{4}\boldsymbol{\mathscr{C}}_{r}\left(2-\boldsymbol{\mathscr{C}}_{r}\right)d\xi^{2},$
(51)
where $\boldsymbol{\mathscr{C}}_{r}={\boldsymbol{\mathscr{C}}}/{|\sin\xi|}$
denotes the reduced concurrence, varying in the interval $[0,1]$. Thereby, we
have managed to reparameterize the relevant phase space (48) according to the
amount of entanglement shared between the two spins and the evolution time,
which are two measurable physical magnitudes. This demonstrates the
feasibility of examining, experimentally, all the geometrical, topological and
dynamical characteristics of this state space, namely the phase space
geometry, the quantum phases, the evolution speed, and the geodesic distance
covered by the two-spin system (V.1) during its evolution. Importantly, the
quantum entanglement serves in shrinking the state space dimension. For
instance, the states of two spins with the same entanglement level
$(\text{i.e.,}\,\boldsymbol{\mathscr{C}}=\text{constant})$ are located on
closed one-dimensional manifolds defined by
$d{\mathtt{S}^{2}}=\frac{\boldsymbol{\mathscr{C}}}{4}\left({\frac{1}{{2{{\tan}^{2}}\xi(\left|{\sin\xi}\right|-\boldsymbol{\mathscr{C}})}}+\frac{{2\left|{\sin\xi}\right|-\boldsymbol{\mathscr{C}}}}{{{{\sin}^{2}}\xi}}}\right)d{\xi^{2}}.$
(52)
They are, in fact, closed curves, along the metric component
$\mathrm{g}_{\xi\xi}$, on the whole state space (50). On the other hand, the
states with the same degree of reduced entanglement
$(\text{i.e.,}\,\boldsymbol{\mathscr{C}}_{r}=\text{constant})$ are located on
circles identified by
$d\mathtt{S}^{2}=\frac{1}{4}\boldsymbol{\mathscr{C}}_{r}\left(2-\boldsymbol{\mathscr{C}}_{r}\right)d\xi^{2},$
(53)
whose radii
$\mathtt{R}=\sqrt{\boldsymbol{\mathscr{C}}_{r}\left(2-\boldsymbol{\mathscr{C}}_{r}\right)}/2$
depend on the level of reduced entanglement taken into account. In this way,
we demonstrate the relevance of utilizing quantum correlations to minimize the
dimensionality of the state space (50). This finding is applicable to all
integrable quantum systems.
In the same framework, we can also investigate the influence of entanglement
on the G-curvature of the two-spin phase space (50). As a matter of fact,
putting equation (49) into (15), we derive the curvature in terms of the
concurrence in the form
$\mathrm{K}=4\left[{2+\frac{{\left|{\sin\xi}\right|\left({\boldsymbol{\mathscr{C}}-3\left|{\sin\xi}\right|}\right)}}{{{{\left({\boldsymbol{\mathscr{C}}-2\left|{\sin\xi}\right|}\right)}^{2}}}}}\right].$
(54)
This outcome demonstrates again the explicit reliance of the state space
geometry on the entanglement amount exchanged between the two interacting
spins. From equation (54), we observe that for $\xi=0$ (i.e., no
evolution) the G-curvature is $\boldsymbol{\mathscr{C}}$-independent and
takes the constant value $\mathrm{K}=8$, associated with the sphere of
initial states (11), while for $\xi>0$ (i.e., the evolution case) it is
$\boldsymbol{\mathscr{C}}$-dependent and its behavior versus the entanglement
entanglement amount exchanged between the two spins increases. This can be
explained by the fact that the existence of quantum correlations causes a
decrease in the state space curvature. Further, the G-curvature reaches
negative values for the entanglement degrees verifying the condition
${\left|{\sin\xi}\right|\left({\boldsymbol{\mathscr{C}}-3\left|{\sin\xi}\right|}\right)}<-2{{{\left({\boldsymbol{\mathscr{C}}-2\left|{\sin\xi}\right|}\right)}^{2}}}.$
(55)
This elucidates the impact of the quantum correlations between the two
particles on the compactification of the related state space (50) when
requirement (55) is met. It is interesting to note that the separable states
$(\boldsymbol{\mathscr{C}}=0)$ are housed in the regions of highest curvature
$\mathrm{K}_{\max}=5$, whereas the states of maximum entanglement
$(\boldsymbol{\mathscr{C}}=1)$ are housed in the regions of lowest curvature
${{\mathrm{K}}_{\min}}=4\left[{2-\frac{{\left|{\sin\xi}\right|\left({3\left|{\sin\xi}\right|-1}\right)}}{{{{\left({2\left|{\sin\xi}\right|-1}\right)}^{2}}}}}\right].$
(56)
Figure 4: The G-curvature (54) versus the concurrence (49) for some values of
$\xi$.
Thus, we conclude that information about the entanglement degree of the
two-spin system (V.1) allows us to determine its localization on the
corresponding phase space (50). This clearly illustrates the deterministic
character of geometric quantum mechanics, which is based on the
geometrization of the Hilbert space through the concept of a quantum phase
space analogous to that of classical mechanics [4, 6].
The connection between the geometric phase and the quantum entanglement can
also be explored here. Indeed, inserting equation (49) into (III.1), we
obtain the geometric phase gained by the two-spin state (V.1) in terms of the
concurrence as
$\Phi_{\operatorname{g}}=-\arctan\left[\dfrac{(2\left|{\sin\xi}\right|-\boldsymbol{\mathscr{C}})\sin\xi}{(2\left|{\sin\xi}\right|-\boldsymbol{\mathscr{C}})\cos\xi+\boldsymbol{\mathscr{C}}}\right]+\xi\left(1-\dfrac{\boldsymbol{\mathscr{C}}}{2\left|{\sin\xi}\right|}\right).$
(57)
Notice that the geometric phase is determined by two new physical degrees
of freedom: entanglement and time. This means that it depends on each point
(i.e., physical state of the system) of the underlying phase space (50).
Consequently, we can say that the geometric phase depends both on the path
followed by the system and the geometry of the state space. Since the
geometrical phase is defined in terms of these two measurable physical
magnitudes (i.e., $\boldsymbol{\mathscr{C}}$ and $\xi$), this offers us the
ability to measure it experimentally for any arbitrary evolution process of
the system. This result is extremely important because it can be exploited to
build efficient quantum circuits based on the amount of entanglement exchanged
between the two interacting spins. To further highlight the interplay between
the geometric phase and entanglement, we have plotted Eq. (57) as a function
of the concurrence for some values of $\xi$ in Figure (5). We observe that the
geometric phase (57) gained by the two-spin system (V.1) during its evolution
from the separable state $(\boldsymbol{\mathscr{C}}=0)$ to the maximum
entanglement state $(\boldsymbol{\mathscr{C}}=1)$ exhibits an approximately
parabolic behavior. From here, we can divide its evolution into two main
stages: the first stage involves the decrease of the geometric phase along the
concurrence interval
$\boldsymbol{\mathscr{C}}\in[0,\boldsymbol{\mathscr{C}}_{\text{c}}]$, where
$\boldsymbol{\mathscr{C}}_{\text{c}}$ denotes the critical entanglement degree
at which this phase reaches its minimum value (see figure (5)); it is given
explicitly as
$\boldsymbol{\mathscr{C}}_{\text{c}}=\sin\xi-\cot\frac{\xi}{2}\sqrt{\frac{{\sin\xi}}{\xi}\left({2-\xi\sin\xi-2\cos\xi}\right)}.$
(58)
Figure 5: The geometric phase (57) versus the concurrence (49) for some values
of $\xi$.
In this stage, the evolving state (V.1) acquires a negative geometric phase,
which can be interpreted as the geometric phase part lost by the system.
Geometrically, we can say that during parallel transport, the state vector
(V.1) rotates clockwise (i.e., makes an angle of negative sign) with respect
to the separable state (i.e., the starting state). Thereby, we discover that, in
the region $[0,\boldsymbol{\mathscr{C}}_{\text{c}}]$, the quantum correlations
favor the loss of the geometric phase. The second stage concerns the
geometric phase increase along the interval
$\boldsymbol{\mathscr{C}}\in[\boldsymbol{\mathscr{C}}_{\text{c}},1]$ (i.e.,
reverse behavior), the evolving state (V.1) accumulates a positive geometric
phase, which can be viewed as the geometric phase part gained by the system.
We can say, geometrically, that during parallel transport, the state vector
(V.1) rotates counterclockwise (i.e., makes an angle of positive sign) with
respect to the separable state. In this way, we find that, in the region
$[\boldsymbol{\mathscr{C}}_{\text{c}},1]$, the quantum correlations favor the
gain of the geometric phase. Accordingly, the geometric phase behavior versus
the entanglement is approximately symmetric with respect to the critical value
$\boldsymbol{\mathscr{C}}_{\text{c}}$; this is mainly due to the dumbbell-
shape structure of the underlying phase space (50). On the practical side, the
quantum entanglement is then an interesting physical resource that can be
exploited experimentally to control the geometric phase resulting from the
evolution processes of the two-spin system.
In the cyclic evolution scenario, the geometric phase can also be investigated
in connection with the entanglement. Indeed, inserting equation (49) into
(33), we obtain the AA-geometric phase accumulated by the evolved state (V.1) in
relation to the concurrence as
$\Phi_{\operatorname{g}}^{\mathrm{AA}}=-{\pi}\frac{\boldsymbol{\mathscr{C}}}{\left|{\sin\xi}\right|}.$
(59)
It is proportional to the entanglement level between the two spins with a
negative proportionality factor, implying that the AA-geometric phase
decreases linearly as the entanglement increases. For the sake of clarity, we
display this behavior in Figure (6); indeed, we observe that the more the
system is entangled, the larger the negative AA-geometric phase it
accumulates.
Figure 6: The AA-geometric phase (59) versus the concurrence (49) for some
values of $\xi$.
This is roughly the same behavior as we observed for the geometric phase (57)
in the first stage (i.e., in the region
$[0,\boldsymbol{\mathscr{C}}_{\text{c}}]$), and hence the same interpretations
can be provided for the AA-geometric phase. Regarding the topological phase
resulting from the cyclic evolutions of the two-spin system, it is given by
$\Phi_{\operatorname{top}}^{\mathrm{AA}}=-2\pi$. Thus, it is unaffected by the
entanglement, because it constitutes the part of the AA-geometric phase
receiving no contribution from the dynamical part.
### V.3 Dynamical picture of the entanglement
To close this section, it would be interesting to examine the dynamical aspect
of entanglement by highlighting the link between the entanglement amount
exchanged between the two spins and the relevant dynamical properties, such as
the evolution speed and the traveled geodesic distance during a given
evolution process over the resulting phase space (50). As a result, we address
the quantum brachistochrone problem by relying on the entanglement level of
the two-spin system. To accomplish this, inserting equation (49) into (38),
the related evolution speed can be expressed in terms of the concurrence as
follows
$\mathrm{V}=\frac{\mathtt{J}}{2\left|\sin\xi\right|}\sqrt{{\boldsymbol{\mathscr{C}}}\left(2\left|\sin\xi\right|-{\boldsymbol{\mathscr{C}}}\right)}.$
(60)
In this way, we manage to relate the rapidity of the two-spin system with its
entanglement degree. In other words, the result (60) reflects the explicit
relatedness between the system dynamics and the evolvement of the quantum
correlations. Consequently, we infer that the dynamical characteristics of
such a system are determinable through its entanglement state. The dependence
of the evolution speed on the concurrence is shown in Figure (7).
Interestingly, the variation of the evolution velocity (60) is split into two
different parts: the first part shows the increase of the evolution velocity
of the two-spin state until it attains its highest value
$\mathrm{V}_{\max}={\mathtt{J}}/2$, matching the critical entanglement level
$\boldsymbol{\mathscr{C}}=\boldsymbol{\mathscr{C}}_{\text{c}}^{\prime}=\left|\sin\xi\right|$.
This proves that, in this part, the existence of quantum correlations speeds
up the system evolution over the related phase space (50).
Figure 7: The evolution speed (60) versus the concurrence (49) for some values
of $\xi$ with $\mathtt{J}=1$.
The second part concerns the concurrence interval
$\boldsymbol{\mathscr{C}}\in[\boldsymbol{\mathscr{C}}_{\text{c}}^{\prime},1]$,
in which the evolution speed reverses its monotonicity: it diminishes
continuously until it reaches its local minimum
$\mathrm{V}(\boldsymbol{\mathscr{C}}=1)$. This signifies that, in this second
part, the quantum correlations slow down the evolution. As a result, we
conclude that the system dynamics is controllable by its entanglement level.
This outcome can be usefully exploited in quantum information protocols.
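The equivalence between the two parametrizations (38) and (60) of the speed, as well as the critical level $\boldsymbol{\mathscr{C}}_{\text{c}}^{\prime}=|\sin\xi|$ at which $\mathrm{V}_{\max}=\mathtt{J}/2$ is attained, can be confirmed with the following sketch (ours, assuming NumPy).

import numpy as np

J, xi = 1.0, 0.7
theta = np.linspace(0.0, np.pi, 401)                  # includes theta = pi/2 exactly
s2 = np.sin(theta) ** 2
C = s2 * abs(np.sin(xi))                              # concurrence, Eq. (49)
V38 = (J / 2) * np.sqrt(2 * s2 * (1 - s2 / 2))        # Eq. (38) with N = 2
V60 = (J / (2 * abs(np.sin(xi)))) * np.sqrt(C * (2 * abs(np.sin(xi)) - C))  # Eq. (60)
print(np.allclose(V38, V60))                          # True: (60) reproduces (38)
i = V60.argmax()
print(np.isclose(V60[i], J / 2),                      # V_max = J/2 ...
      np.isclose(C[i], abs(np.sin(xi))))              # ... at C'_c = |sin(xi)|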
Utilizing equation (37), we can also establish the Fubini-Study distance
traveled by the two-spin state (V.1) in relation to the concurrence; it is
found as
$\mathtt{S}=\frac{\xi}{2\left|\sin\xi\right|}\sqrt{{\boldsymbol{\mathscr{C}}}\left(2\left|\sin\xi\right|-{\boldsymbol{\mathscr{C}}}\right)}.$
(61)
Thereby, we arrive at expressing the Fubini-Study distance (i.e., dynamical
observable), relating any two quantum states over the two-spin phase space
(50), in terms of the entanglement level and the evolution time. This proves
again the feasibility to investigate experimentally the dynamical properties,
which will motivate their exploitation in the novel applications of quantum
technology [63].
Figure 8: The Fubini-Study distance (61) versus the concurrence (49) for some
values of $\xi$.
By scrutinizing the two figures (7) and (8), we find that the Fubini-Study
distance (61) exhibits, with respect to the entanglement degree, the same
behavior as the evolution velocity (60), and thus the same conclusions can be
drawn. Let us now solve the quantum brachistochrone issue for the two
interacting spins based on their entanglement. To realize this, we maximize
the evolution speed (60) with respect to the concurrence; the shortest time
required to achieve a time-optimal evolution over the relevant phase space
(50) then reads as
$\boldsymbol{\tau}=\frac{\mathtt{S}}{\mathrm{V}_{\max}}=\frac{\xi}{\mathtt{J}\left|\sin\xi\right|}\sqrt{{\boldsymbol{\mathscr{C}}}\left(2\left|\sin\xi\right|-{\boldsymbol{\mathscr{C}}}\right)}.$
(62)
So, the optimal time (62) depends on the ordinary time, the coupling
constant, and the entanglement level of the system. Specifically, we observe
that for $\boldsymbol{\mathscr{C}}=0$ the optimal time vanishes
$(\boldsymbol{\tau}=0)$, because the evolving state (V.1) then coincides with
the disentangled starting state $|\Psi_{i}\rangle=|++\rangle$ (i.e., no
evolution). For the critical entanglement level
$\boldsymbol{\mathscr{C}}=\boldsymbol{\mathscr{C}}_{\text{c}}^{\prime}$ the
optimal time attains its highest value $(\boldsymbol{\tau}=t)$, signifying
that the optimal and ordinary evolutions of the two-spin system coincide,
whereas for
$\boldsymbol{\mathscr{C}}\in\,]0,\boldsymbol{\mathscr{C}}_{\text{c}}^{\prime}[\,\cup\,]\boldsymbol{\mathscr{C}}_{\text{c}}^{\prime},1]$
the optimal time is strictly less than the ordinary time
$(\boldsymbol{\tau}<t)$. In this respect, the optimal evolution states can be
generated via the unitary operation given by
$\left|\Psi_{i}\right\rangle\rightarrow|\Psi(\boldsymbol{\tau})\rangle=e^{-i\mathrm{H}\boldsymbol{\tau}}\left|\Psi_{i}\right\rangle.$
(63)
In fact, they make up a one-dimensional space of optimal states, over the
whole state space (50), defined by the metric tensor
$d{{\mathtt{S}}}^{2}_{\text{opt}}=\frac{{\boldsymbol{\mathscr{C}}}}{4\sin^{2}\xi}\left(2\left|\sin\xi\right|-\boldsymbol{\mathscr{C}}\right)d\boldsymbol{\xi}^{2},$
(64)
with $\boldsymbol{\xi}=\mathtt{J}\boldsymbol{\tau}$. The behavior of the
optimal time (62) according to the entanglement is illustrated in Figure (9):
we discover that the lower the entanglement level (resp. the ordinary time) of
the two spins, ${\boldsymbol{\mathscr{C}}}\to 0$ (resp. $t\to 0$), the shorter
the optimal time $\boldsymbol{\tau}\to 0$.
Figure 9: The optimal time (62) versus the concurrence (49) for some values of
$\xi$ with $\mathtt{J}=1$.
Accordingly, we conclude that the
entanglement and the ordinary time are two physical magnitudes exploitable for
realizing the optimal evolutions in such integrable systems. These kinds of
evolutions are of paramount importance in quantum computing for designing good
quantum algorithms [64, 65].
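Finally, the properties of the optimal time (62) discussed above, namely $\boldsymbol{\tau}\leq t$ with equality at the critical level $\boldsymbol{\mathscr{C}}_{\text{c}}^{\prime}$ and $\boldsymbol{\tau}\to 0$ as $\boldsymbol{\mathscr{C}}\to 0$, admit a quick numerical confirmation (sketch ours, assuming NumPy).

import numpy as np

J, xi = 1.0, 0.8
t = xi / J                                            # ordinary evolution time
C = np.linspace(0.0, 1.0, 501)
tau = (xi / (J * abs(np.sin(xi)))) * np.sqrt(C * (2 * abs(np.sin(xi)) - C))  # Eq. (62)
print(np.all(tau <= t + 1e-12))                       # True: tau never exceeds t
i = np.argmin(np.abs(C - abs(np.sin(xi))))            # critical level C'_c = |sin xi|
print(np.isclose(tau[i], t, atol=1e-3), tau[0] == 0.0)  # tau = t at C'_c; tau(0) = 0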
## VI Conclusion and outlook
To summarize, we investigated a physical system consisting of $N$ interacting
spins-$1/2$ under the all-range Ising model. We assumed that the starting
state is the tensor product of $N$ eigenstates of the spin-$1/2$ projection
operator along the direction defined by the unit vector $\textbf{n}$. After
applying the time evolution propagator, the obtained evolved state is
identified by three
dynamical degrees of freedom, which are the spherical angles
$(\theta,\varphi)$ and the evolution time $t$. We established the Fubini-Study
metric and found that the related quantum phase space is a closed two-
dimensional manifold. Moreover, by examining the G-curvature in the framework
of the Gauss-Bonnet theorem, we demonstrated that this state space has both a
dumbbell-shape structure and a spherical topology. The geometric phase
acquired by the $N$ spin-$1/2$ system is also studied within the arbitrary and
cyclic evolution processes. This is achieved by evaluating the difference
between the total and dynamic phases. We found that the geometric phase, in
the arbitrary evolution case, varies nonlinearly with the time, reflecting its
dynamic character. Geometrically, we showed that it is affected both by the
evolution trajectory taken by the system and the associated state space
geometry. On this view, we concluded that the geometric phase can be exploited
to parameterize the possible evolution trajectories of the system. This result
may be interesting for building robust quantum gates. Further, in the
thermodynamic limit ($N\to\infty$), the total phase cancels out and therefore
the geometric and dynamical phases coincide, up to a sign, at any moment in
the evolution process. This offers the opportunity of measuring the geometric
phase experimentally. In the cyclic evolution case, we have calculated the AA-
geometric phase and discovered that it is independent of the system dynamics.
Otherwise, it depends only on the initial state choice (i.e., the initial
parameters), signifying that it is influenced by the state space geometry and
not by the evolution path followed by the system. Hence the cyclic evolution
paths are not parameterizable by the AA-geometric phase. We have also derived
the topological phase appearing naturally during cyclic evolutions. We found
that it is proportional to the square of the particle number $N^{2}$,
especially it takes fractional values for $N$ odd and multiples of $\pi$ for
$N$ even. This signifies that the number of spins influences the topology of
the state space. The evolution speed and the Fubini-Study distance separating
the quantum states are also well examined. As a result, we resolved the
quantum Brachistochrone problem for the $N$ spin-$1/2$ system by determining
the shortest time (i.e., the optimal time) required to perform a time-optimal
evolution, and defined the underlying optimal state space. In this
perspective, we discovered that for $N=2$ (i.e., two spin$-1/2$ system) the
optimal and ordinary times coincide $(\mathrm{t}_{\min}=t)$, while for $N\geq
3$ (i.e., $N$ spin-$1/2$ system), the optimal time (44) is strictly lower than
the ordinary time, and thus the time-optimal evolution is achievable. Besides,
in the thermodynamic limit $(N\to\infty)$, the optimal time decreases to zero
$(\mathrm{t}_{\min}\to 0)$. In this scheme, the optimal state circle coincides
with a straight line since its radius becomes infinite. Thereby, we inferred
that the particle number and the ordinary time are two physical magnitudes
exploitable for performing the time-optimal evolutions in such integrable
systems.
On the other hand, by restricting the whole system to a two-spin system, i.e., two interacting spins under the Ising model, we have studied the quantum
entanglement via two approaches. The first approach is of geometric nature, in
which we give the Fubini-Study metric in connection to the Wootters
concurrence as a quantum correlation quantifier. This outcome may be
interesting for the experimental handling of the geodesic distance between
entangled states and also in the state space geometry adjustment. We proved
that an increase in the entanglement degree between the two spins causes a
decrease in the state space curvature until it reaches negative values,
showing the quantum correlation effect in the compactification of the related
state space. Additionally, the entanglement can be used to identify the
quantum states over the space of states, for example, the states of maximum
entanglement are housed in the regions of lowest curvature, whereas the
separable states are housed in the regions of highest curvature. The geometric
phase acquired by the two-spin system is sufficiently discussed in relation to
the entanglement. We explored that the geometric phase exhibits two different
behaviors with respect to the critical entanglement level
$\boldsymbol{\mathscr{C}}_{\text{c}}$
$(\Phi_{\operatorname{g}}(\boldsymbol{\mathscr{C}}_{\text{c}})={\Phi_{\operatorname{g}}}_{\min})$:
the first one is in the interval $[0,\boldsymbol{\mathscr{C}}_{\text{c}}]$, wherein the quantum correlations favor the loss of the geometric phase, while the second one is in the interval $[\boldsymbol{\mathscr{C}}_{\text{c}},1]$, wherein the quantum correlations favor the gain of the geometric phase. In
the cyclic evolution scenario, the AA-geometric phase behaves similarly to the
geometric phase in the first interval
$[0,\boldsymbol{\mathscr{C}}_{\text{c}}]$. This highlights the significance of
quantum entanglement for controlling the geometric phase evolvement in such
spin systems. The second approach is of a dynamic nature: we linked the evolution speed with the concurrence and observed that the speed displays two
different behaviors with respect to the critical entanglement level
$\boldsymbol{\mathscr{C}}_{\text{c}}^{\prime}$
$(\mathrm{V}(\boldsymbol{\mathscr{C}}_{\text{c}}^{\prime})=\mathrm{V}_{\max})$:
the first one is in the interval
$[0,\boldsymbol{\mathscr{C}}_{\text{c}}^{\prime}]$, wherein the quantum
correlations speed up the system evolution over the relevant phase space,
while the second one is in the interval
$[\boldsymbol{\mathscr{C}}_{\text{c}}^{\prime},1]$, wherein the quantum
correlations slow down this evolution. Accordingly, we concluded that the
system dynamics is controllable by its entanglement level. The same behavior
is noticed for the Fubini-Study distance separating the entangled states.
Finally, we solved the quantum brachistochrone problem based on the
entanglement amount exchanged between the two spins. We inferred that the
quantum entanglement and the ordinary time are two physical magnitudes
exploitable to realize the time-optimal evolution in such a spin system. Thus,
we were able to illustrate, to a significant extent, the connection between
quantum entanglement and the geometric and dynamical characteristics
characterizing the considered two-spin phase space.
## References
* [1] T. W. Kibble, Geometrization of quantum mechanics, Commun. Math. Phys, 65 (1979), 189-201.
* [2] J. Anandan, A geometric approach to quantum mechanics, Found. Phys, 21 (1991), 1265-1284.
* [3] A. Ashtekar and T. A. Schilling, Geometrical formulation of quantum mechanics, On Einstein’s Path: Essays in Honor of Engelbert Schucking, (1999), 23-65.
* [4] D. C. Brody and L. P. Hughston, Geometric quantum mechanics, J. Geom. Phys, 38 (2001), 19-53.
* [5] J. P. Provost and G. Vallee, Riemannian structure on manifolds of quantum states, Commun. Math. Phys, 76 (1980), 289-301.
* [6] W. M. Zhang, Quantum nonintegrability in finite systems, Phys. Rep, 252 (1995), 1-100.
* [7] A. Botero, Geometric phase and modulus relations for probability amplitudes as functions on complex parameter spaces, J. Math. Phys, 44 (2003), 5279-5295.
* [8] H. Heydari, Geometry and structure of quantum phase space, Found. Phys, 45 (2015), 851-857.
* [9] J. Anandan and Y. Aharonov, Geometry of quantum evolution, Phys. Rev. Lett, 65 (1990), 1697.
* [10] S. Deffner and E. Lutz, Energy–time uncertainty relation for driven quantum systems, J. Phys. A: Math. Theor, 46 (2013), 335302.
* [11] B. Amghar and M. Daoud, Geometrical aspects and quantum brachistochrone problem for a collection of N spin-s system with long-range Ising-type interaction, Phys. Lett. A, 384 (2020), 126682.
* [12] C. M. Bender, D. C. Brody, H. F. Jones and B. K. Meister, Faster than Hermitian quantum mechanics, Phys. Rev. Lett, 98 (2007), 040403.
* [13] A. M. Frydryszak and V. M. Tkachuk, Quantum brachistochrone problem for a spin-1 system in a magnetic field, Phys. Rev. A, 77 (2008), 014103.
* [14] B. Li, Z. H. Yu and S. M. Fei, Geometry of quantum computation with qutrits, Sci. Rep, 3 (2013), 1-6.
* [15] M. A. Nielsen, M. R. Dowling, M. Gu and A. C. Doherty, Quantum computation as geometry, Science, 311 (2006), 1133-1135.
* [16] M. R. Dowling and M. A. Nielsen, The geometry of quantum computation, Quantum Information $\&$ Computation, 8 (2008), 861-899.
* [17] S. Deffner and E. Lutz, Generalized Clausius inequality for nonequilibrium quantum processes, Phys. Rev. Lett, 105 (2010), 170402.
* [18] D. P. Pires, L. C. Céleri and D. O. Soares-Pinto, Geometric lower bound for a quantum coherence measure, Phys. Rev. A, 91 (2015), 042330.
* [19] D. P. Pires, M. Cianciaruso, L. C. Céleri, G. Adesso and D. O. Soares-Pinto, Generalized geometric quantum speed limits, Phys. Rev. X, 6 (2016), 021031.
* [20] R. Horodecki, P. Horodecki, M. Horodecki and K. Horodecki, Quantum entanglement, Reviews of modern physics, 81 (2009), 865.
* [21] J. Elfakir, B. Amghar and M. Daoud, Geometrical and dynamical description of two interacting spins under the XXZ-type Heisenberg model, Int. J. Geom. Methods Mod, 20 (2023), 2350006-201.
* [22] M. El Kirdi, A. Slaoui, N. Ikken, M. Daoud and R. A. Laamara, Controlled quantum teleportation between discrete and continuous physical systems, Phys. Scr, 98 (2023) 025101.
* [23] L. Amico, R. Fazio, A. Osterloh and V. Vedral, Entanglement in many-body systems, Reviews of modern physics, 80 (2008), 517.
* [24] B. Amghar and M. Daoud, Quantum state manifold and geometric, dynamic and topological phases for an interacting two-spin system, Int. J. Geom. Methods Mod, 17 (2020), 2050030.
* [25] P. Levay, The geometry of entanglement: metrics, connections and the geometric phase, J. Phys. A: Math. Gener, 37 (2004), 1821.
* [26] R. A. Bertlmann, H. Narnhofer and W. Thirring, Geometric picture of entanglement and Bell inequalities, Phys. Rev. A, 66 (2002), 032319.
* [27] Y. S. Krynytskyi and A. R. Kuzmak, Geometry and speed of evolution for a spin-s system with long-range zz-type Ising interaction. Ann. Phys, 405 (2019), 38-53.
* [28] R. Mosseri, Two-qubit and three-qubit geometry and Hopf fibrations, Topology in condensed matter, (2006), 187-203.
* [29] R. Mosseri and R. Dandoloff, Geometry of entangled states, Bloch spheres and Hopf fibrations, J. Phys. A: Math. Gener, 34 (2001), 10243.
* [30] B. Amghar and M. Daoud, Geometrical description of the dynamics of entangled two-qubit states under $U(2)\times U(2)$ local unitary operations, Quantum Inf. Process, 20 (2021), 1-21.
* [31] B. Amghar, A. Slaoui, J. Elfakir and M. Daoud, Geometrical, topological, and dynamical description of N interacting spin-s particles in a long-range Ising model and their interplay with quantum entanglement, Phys. Rev. A, 107 (2023), 032402.
* [32] F. Verstraete, J. Dehaene and B. De Moor, On the geometry of entangled states, J. Mod. Opt, 49 (2002), 1277-1287.
* [33] K. C. Ha and S. H. Kye, Geometry for separable states and construction of entangled states with positive partial transposes, Phys. Rev. A, 88 (2013), 024302.
* [34] J. E. Avron and O. Kenneth, Entanglement and the geometry of two qubits, Ann. Phys, 324 (2009), 470-496.
* [35] M. V. Berry, Quantal phase factors accompanying adiabatic changes, Proc. R. Soc. Lond. A. Math. Phys. Sci, 392 (1984), 45-57.
* [36] Y. Aharonov and J. Anandan, Phase change during a cyclic quantum evolution, Phys. Rev. Lett, 58 (1987), 1593.
* [37] J. Anandan, The geometric phase, Nature, 360 (1992), 307-313.
* [38] A. Carollo, I. Fuentes-Guridi, M. F. Santos and V. Vedral, Geometric phase in open systems. Phys. Rev. lett, 90 (2003), 160402.
* [39] O. Andersson, Holonomy in quantum information geometry. arXiv preprint arXiv:1910.08140, (2019).
* [40] E. Demler and S. C. Zhang, Non-Abelian holonomy of BCS and SDW quasiparticles, Ann. Phys, 271 (1999), 83-119.
* [41] J. Samuel and R. Bhandari, General setting for Berry’s phase, Phys. Rev. Lett, 60 (1988), 2339.
* [42] X. Wang, A. Sørensen and K. Mølmer, Multibit gates for quantum computing, Phys. Rev. lett, 86 (2001), 3907.
* [43] F. Kleißler, A. Lazariev and S. Arroyo-Camejo, Universal, high-fidelity quantum gates based on superadiabatic, geometric phases on a solid-state spin-qubit at room temperature, Npj Quantum Inf, 4 (2018), 49.
* [44] R. Das, S. K. Kumar and A. Kumar, Use of non-adiabatic geometric phase for quantum computing by NMR. J. Magn. Reson, 177 (2005), 318-328.
* [45] J. A. Jones, V. Vedral, A. Ekert and G. Castagnoli, Geometric quantum computation using nuclear magnetic resonance, Nature, 403 (2000), 869-871.
* [46] L. M. Duan, J. I. Cirac and P. Zoller, Geometric manipulation of trapped ions for quantum computation, Science, 292 (2001), 1695-1697.
* [47] L. E. Oxman and A. Z. Khoury, Fractional topological phase for entangled qudits, Phys. Rev. Lett, 106 (2011), 240503.
* [48] A. A. Matoso, X. Sánchez-Lozano, W. M. Pimenta, P. Machado, B. Marques, F. Sciarrino, … and S. Pádua, Experimental observation of fractional topological phases with photonic qudits, Phys. Rev. A, 94 (2016), 052305.
* [49] A. Z. Khoury and L. E. Oxman, Topological phase structure of entangled qudits, Phys. Rev. A, 89 (2014), 032106.
* [50] M. Johansson, M. Ericsson, K. Singh, E. Sjöqvist and M. S. Williamson, Topological phases and multiqubit entanglement, Phys. Rev. A, 85 (2012), 032112.
* [51] L. E. Oxman, A. Z. Khoury, F. C. Lombardo and P. I. Villar, Two-qudit geometric phase evolution under dephasing, Ann. Phys, 390 (2018), 159-179.
* [52] V. Vedral, Geometric phases and topological quantum computation, Int. J. Quantum Inf., 1 (2003), 1-23.
* [53] Y. Huang and X. Chen, Quantum circuit complexity of one-dimensional topological phases, Phys. Rev. B, 91 (2015), 195143.
* [54] V. M. Tkachuk, Fundamental problems of quantum mechanics. Ivan Franko National University of Lviv, Lviv, (2011).
* [55] S. Abe, Quantized geometry associated with uncertainty and correlation, Phys. Rev. A, 48 (1993), 4102.
* [56] M. Kolodrubetz, V. Gritsev and A. Polkovnikov, Classifying and measuring geometry of a quantum ground state manifold, Phys. Rev. B, 88 (2013), 064304.
* [57] N. Mukunda and R. Simon, Quantum kinematic approach to the geometric phase. I. General formalism, Ann. Phys, 228 (1993), 205-268.
* [58] X. Wang and P. Zanardi, Simulation of many-body interactions by conditional geometric phases. Phys. Rev. A, 65 (2002), 032327.
* [59] A. K. Pati, New derivation of the geometric phase. Phys. Lett. A, 202 (1995), 40–45.
* [60] P. Roushan, C. Neill, Y. Chen, M. Kolodrubetz, C. Quintana, N. Leung,… and J. M. Martinis, Observation of topological transitions in interacting quantum circuits, Nature, 515 (2014), 241-244.
* [61] A. Mostafazadeh, Quantum brachistochrone problem and the geometry of the state space in pseudo-Hermitian quantum mechanics, Phys. Rev. Lett, 99 (2007), 130502.
* [62] W. K. Wootters, Entanglement of Formation of an Arbitrary State of Two Qubits, Phys. Rev. Lett, 80 (1998), 2245.
* [63] K. Sato, S. Nakazawa, S. Nishida, R. D. Rahimi, T. Yoshino, Y. Morita, … and T. Takui, Novel applications of ESR/EPR: quantum computing/quantum information processing, EPR of Free Radicals in Solids II: Trends in Methods and Applications, (2012), 163-204.
* [64] A. C. C. de Albornoz, J. Taylor and V. Cărare, Time-optimal implementations of quantum algorithms. Phys. Rev. A, 100 (2019), 032329.
* [65] A. Carlini, A. Hosoya, T. Koike and Y. Okudaira, Time-optimal unitary operations. Phys. Rev. A, 75 (2007), 042308.
# Cryptanalysis of a Cayley Hash Function Based on
Affine Maps in one Variable over a Finite Field
Bianca Sosnovski Department of Mathematics and Computer Science
Queensborough Community College, CUNY<EMAIL_ADDRESS>
###### Abstract.
Cayley hash functions are cryptographic hashes constructed from Cayley graphs
of groups. The hash function proposed by Shpilrain and Sosnovski (2016), based
on affine maps in one variable over a finite field, was proven insecure. This
paper shows that the variation proposed by Ghaffari and Mostaghim (2018) that
uses Shpilrain and Sosnovski’s hash is also insecure. We demonstrate its
security vulnerability by constructing collisions.
Keywords: Cryptography, hash functions, Cayley hash functions.
The author received support for this project provided by a PSC-CUNY grant,
jointly funded by the Professional Staff Congress and the City University of
New York.
## 1\. Introduction
Hash functions are an essential tool for cryptography. Today’s security of
much of our communication relies on cryptographic protocols that ensure
confidentiality, integrity and authentication, and many such protocols use
hash functions as building blocks. Hash functions are fundamental in
constructing cryptographic protocols, such as database indexing, data
compression, password storage, digital signatures, encryption schemes, and key
derivation systems.
However, not every hash function is good enough for cryptography.
Cryptographic hash functions are hash functions that satisfy desired security
properties such as preimage and collision resistance and may be used in
cryptographic applications.
Furthermore, many cryptosystems in use today are based on finite abelian
groups. Some cryptographic systems will be vulnerable to attacks once large
quantum computers are made possible. Though the current state of quantum
computing is still in its infancy, it is a step forward in the direction where
classical cryptography may be compromised. In [1], hash-based public-key
signatures are one of the classes of cryptographic systems that may resist
quantum attacks and require a standard cryptographic hash function.
Provably secure hash functions are hashes whose security is implied by the
assumption of the hardness of a mathematical problem. Examples of provable-
secure hash functions are the Cayley hash functions. Cayley hash functions are
families of hash functions constructed from Cayley graphs of groups [13].
The security of Cayley hash functions would follow from the alleged hardness
of a mathematical problem related to the Cayley graph regarding a generating
set of the underlying group [4, 13]. Since Cayley hashes involve non-abelian groups, they are a priori resistant to quantum attacks and may be good candidates for post-quantum cryptography [8].
In 1991, Zémor introduced the first Cayley hash function that has as
generators the matrices
$\left(\begin{array}[]{cc}1&1\\\ 0&1\end{array}\right)\mbox{ and
}\left(\begin{array}[]{cc}1&0\\\ 1&1\end{array}\right)$
and its hash values are elements in $SL_{2}(\mathbb{F}_{p})$ for $p$ prime.
It was broken by Tillich and Zémor in 1994, who then proposed the hash function whose generators are
$\left(\begin{array}[]{cc}\alpha&1\\\ 1&0\end{array}\right)\mbox{ and }\left(\begin{array}[]{cc}\alpha&\alpha+1\\\ 1&1\end{array}\right)$
with $\alpha$ a root of an irreducible polynomial $p(x)$ of degree $n$ in the ring of polynomials $\mathbf{F}_{2}[x]$, where $\mathbf{F}_{2}$ is the field with two elements. The above matrices are generators of the Cayley graph
for the group $SL_{2}(\mathbb{F}_{2^{n}})$ with
$\mathbb{F}_{2^{n}}\approx\mathbf{F}_{2}[x]/(p(x))$ where $(p(x))$ is the
ideal generated by an irreducible polynomial $p(x)$.
The Tillich-Zémor hash function was broken in 2009 when Grassl et al. [7]
established a connection between the Tillich-Zémor function and maximal length
chains in the Euclidean algorithm for polynomials over the field with two
elements. Other instances of Cayley hashes based on expander graphs have been
proposed after Tillich-Zémor functions. Detailed discussions of Cayley hash
functions can be found in [11, 12, 13, 4]. These Cayley hashes also have been
proven insecure.
Though many instances of Cayley hash functions have been proved insecure, the
algorithms used to break Cayley hash functions target specific vulnerabilities
of each underlying group used and do not invalidate the generic scheme of
these functions. The factorization, representation and balance problems in
non-abelian groups still are potentially hard problems for general parameters
of Cayley hash functions. There are still Cayley hash functions that remain
unbroken (e.g., [2, 5, 19]).
It may seem concerning that so many hashes have been proven insecure, but this is also essential and encouraging in cryptography, since it
demonstrates that the community invests a lot of time and energy in
cryptanalysis to ensure algorithms are evaluated and that new ones are
developed to sustain quantum attacks. The more researchers and scientists have
looked at these algorithms and they remain unbroken, the higher our level of
confidence in them.
This paper proves that the hash function proposed by Ghaffari and Mostaghim [6], which uses the hash proposed by Shpilrain and Sosnovski [16] that was proven insecure by Monico [10], is not collision-resistant. To show that Ghaffari and Mostaghim's function is also insecure, we apply Monico's algorithm for finding second preimages of the Shpilrain-Sosnovski hash function to produce collisions for the Ghaffari-Mostaghim hash function.
The remainder of the paper is organized as follows. In Section 2, we recall
some basic definitions and properties of a cryptographic hash function.
Section 3 briefly describes the Shpilrain-Sosnovski and Ghaffari-Mostaghim hashes. Section 4 presents a summary of the cryptanalysis of the Shpilrain-Sosnovski hash function. In Section 5, we present our main result about the security of the Ghaffari-Mostaghim hash:
###### Theorem.
Ghaffari-Mostaghim hash is not collision-resistant.
## 2\. Preliminaries
Hash functions are used as compact representations, or digital fingerprints,
of data to provide message integrity.
###### Definition 1.
A hash function $h:\\{0,1\\}^{*}\longrightarrow\\{0,1\\}^{n}$ is an easy-to-
compute111Easy to compute or computationally feasible means polynomial time
and space or, in practice, within a certain number of machine operations or time units [9]. function that converts a variable-length input into a fixed-length
output. A cryptographic hash function $h$ must satisfy at least one of the
following properties.
* •
_Preimage resistance_ : Given a hash value $y$ for which a corresponding input
is not known, it is computationally infeasible (or hard) to find any input $x$
such that $y=h(x)$.
* •
_Second-preimage resistance_ : Given an input $x_{1}$ it is computationally
infeasible to find another input $x_{2}$ where $x_{1}\neq x_{2}$ such that
$h(x_{1})=h(x_{2})$.
* •
_Collision resistance_ : It is computationally infeasible to find any two
inputs $x_{1}$ and $x_{2}$ where $x_{1}\neq x_{2}$ such that
$h(x_{1})=h(x_{2})$.
A collision-resistant hash function is also second-preimage resistant. Preimage resistance does not guarantee second-preimage resistance, and second-preimage resistance does not ensure preimage resistance [9].
It is well known that expander graphs are used to produce pseudorandom
behavior. This pseudorandom behavior is due to the rapid mixing of Markov
chains on expander graphs. The initial idea was to use groups whose Cayley graphs with respect to a set of generators are expander graphs to design collision-resistant hash functions.
###### Definition 2.
Let $G$ be a finite group with a set of generators $\mathcal{S}$ that has the
same size as the text alphabet222In general, we can consider plaintexts as
strings of symbols from a text alphabet $\\{1,2,\ldots,k\\}$ for $k\geq 2$.
Conventionally, we use the text alphabet as $\\{0,1\\}$ for binary strings.
$\mathcal{A}$. Choose a function: $\pi:\mathcal{A}\to\mathcal{S}$ such that
$\pi$ defines a one-to-one correspondence between $\mathcal{A}$ and
$\mathcal{S}$. A _Cayley hash_ $h$ is a function whose hash value of the text
$x_{1}x_{2}\dots x_{k}$ is the group element $h(x_{1}x_{2}\dots
x_{k})=\pi(x_{1})\pi(x_{2})\dots\pi(x_{k})$.
For example, a Cayley hash has $A$ and $B$ as the generators of the underlying
group $G$ with the bit assignments $0\mapsto A$ and $1\mapsto B$. The bit
string 101011 is hashed to the group product $BABAB^{2}$.
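As a minimal sketch of this walk, assuming the Zémor generators from the introduction and a small toy prime (an illustration, not a secure parameter choice):

```python
import numpy as np

# Toy prime p and the Zemor generators; pi maps bits to generators.
p = 1000003
A = np.array([[1, 1], [0, 1]], dtype=object)    # pi(0) = A
B = np.array([[1, 0], [1, 1]], dtype=object)    # pi(1) = B

def cayley_hash(bits):
    """Walk the Cayley graph: one generator multiplication per input bit."""
    h = np.array([[1, 0], [0, 1]], dtype=object)
    for b in bits:
        h = h.dot(B if b == "1" else A) % p
    return h

print(cayley_hash("101011"))   # the group product B A B A B^2 reduced mod p
```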
In constructing hash functions from expander Cayley graphs, the input to the
hash function gives directions for walking around the graph (without
backtracking), and the hash output is the end vertex of the walk.
For the Cayley hashes described in this paper, the alphabet used corresponds
to $\\{0,1\\}$. One of the advantages of this design is that the computation
of the hash value can be easily parallelized due to the concatenation property
$\pi(xy)=\pi(x)\pi(y)$ for any texts $x$ and $y$ in $\\{0,1\\}^{*}$. Unlike
the SHA family of hash functions that hash blocks of input, this type of
function hashes each bit individually.
The security properties of Cayley hash functions are strongly related to the
hardness of mathematical problems.
Let $G$ be a group and $\mathcal{S}=\\{s_{1},\ldots s_{k}\\}\subset G$ be a
generating set of $G$. Let $L$ be polylogarithmic (small) in the size of $G$.
* •
_Balance problem:_ Find an efficient algorithm that returns two words
$m_{1}\ldots m_{l}$ and $m^{\prime}_{1}\ldots m^{\prime}_{l^{\prime}}$ with
$l,l^{\prime}<L$, $m_{i},m^{\prime}_{i}\in\\{1,\ldots,k\\}$ that yield equal
products in $G$, that is,
$\prod\limits_{i=1}^{l}s_{m_{i}}=\prod\limits_{i=1}^{l^{\prime}}s_{m^{\prime}_{i}}$
* •
_Representation problem:_ Find an efficient algorithm that returns a word
$m_{1}\ldots m_{l}$ with $l<L$, $m_{i}\in\\{1,\ldots,k\\}$ such that
$\prod\limits_{i=1}^{l}s_{m_{i}}=1$.
* •
_Factorization problem:_ Find an efficient algorithm that given any element
$g\in G$ returns a word $m_{1}\ldots m_{l}$ with $l<L$,
$m_{i}\in\\{1,\ldots,k\\}$ such that $\prod\limits_{i=1}^{l}s_{m_{i}}=g$.
A Cayley hash function is collision-resistant if the balance problem is hard
in the underlying group. Suppose the representation problem is hard in the
group. In that case, the associated Cayley hash is second preimage resistant,
and it is preimage resistant if and only if the corresponding factorization
problem is hard in the group [11, 15].
Other requirements considered by Tillich and Zémor [20, 18] in the
construction of Cayley hash functions are that the Cayley graph of $G$ with generator set $\mathcal{S}$ has a large girth and a small diameter.
## 3\. Cayley hash functions
### 3.1. The Shpilrain-Sosnovski hash function
In [16], the authors presented a Cayley hash function that uses linear
functions in one variable over $\mathbb{F}_{p}$ with composition operation.
The semigroup generated by $f(x)=ax+b$ and $g(x)=cx+d$ under composition is
isomorphic to the semigroup generated by
$A=\left(\begin{array}[]{cc}a&b\\\ 0&1\end{array}\right)\mbox{ and
}B=\left(\begin{array}[]{cc}c&d\\\ 0&1\end{array}\right)$
under matrix multiplication. Using results about the freeness of upper
triangular matrices by Cassaigne et al. [3], they showed that the semigroup of
linear functions over $\mathbb{Z}$ is free if the generators of the semigroup
do not commute and $a,c\geq 2$.
The functions $f_{0}(x)=2x+1\mod p$ and $f_{1}(x)=3x+1\mod p$ with $p>3$ are
considered the generators of the proposed hash function. The hash value is
obtained by first computing the product $h(b_{1}b_{2}\cdots
b_{k})=f_{b_{1}}f_{b_{2}}\cdots f_{b_{k}}\pmod{p}$ where $b_{i}\in\\{0,1\\}$
for $1\leq i\leq k$. The corresponding product linear function is of the form
$\ell(x)=rx+s$ where $s,r\in\mathbb{Z}_{p}$, and the hash value is defined as
$H(b_{1}b_{2}\cdots b_{k})=(r+s,s)$.
The corresponding hash functions are very efficient. A bit string of length
$n$ can be hashed by performing at most $2n$ multiplications and about $2n$
additions in $\mathbb{F}_{p}$.
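A minimal sketch of this computation follows, with a toy prime standing in for the recommended $p\approx 2^{512}$.

```python
# A sketch of the Shpilrain-Sosnovski hash, accumulating the product linear
# function l(x) = r*x + s as the pair (r, s).

def ss_hash(bits, p):
    r, s = 1, 0                                  # identity function l(x) = x
    for b in bits:
        a, c = (2, 1) if b == "0" else (3, 1)    # f_0(x)=2x+1, f_1(x)=3x+1
        r, s = (r * a) % p, (r * c + s) % p      # right-multiply by f_b
    return ((r + s) % p, s)                      # output H = (r + s, s)

p = 2**127 - 1                                   # toy prime, far below 2^512
print(ss_hash("101011", p))
```

Each bit costs two multiplications and two additions here, consistent with the operation count above.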
An advantage of this hash function is that the output bit strings have length
$2\log p$, while the Tillich-Zémor hash function outputs bit strings of length
$4\log p$. Concerning the security of the hash function, the authors recommend
that $p\approx 2^{512}$ or larger to prevent generic attacks. With this
recommended parameter $p$, there will be no collisions unless the length of at
least one of the colliding strings is at least 323. For a short input text
(323 bits or less), the authors recommend padding to extend its length to 512
bits. Subgroup attacks and attacks using elements of small orders can be
prevented by choosing $p$ such that $p=2q+1$ where $q$ is a “large” prime.
### 3.2. The Ghaffari-Mostaghim hash function
As discussed in [16], preimages can be easily computed for short messages in
the Shpilrain-Sosnovski hash, and the option suggested to avoid it is using
padding.
Ghaffari and Mostaghim use a similar idea introduced in [11] to modify the
linear hash function above. The functions $f_{0}(x)=2x+1\mod p$ and
$f_{1}(x)=3x+1\mod p$, where $p>3$ is a prime, are also considered as
generators in this Cayley hash. Let $H(m_{1}m_{2}\cdots
m_{l})=f_{m_{1}}f_{m_{2}}\cdots f_{m_{l}}\pmod{p}$ for $m=m_{1}m_{2}\cdots
m_{l}\in\\{0,1\\}^{*}$. Define the new function
$H_{2}(m)=H(m\parallel(H(m)\oplus c_{rnd}))$, where $c_{rnd}$ is a constant
bit string whose bits look random. $H_{2}$ is meant to be a more secure
version of $H$, especially for short messages, and also avoids the issue of
malleability.
To make the factorization problem harder, Ghaffari and Mostaghim [6] suggested
the following variation. Let $G$ be the group generated by $f_{0}$ and $f_{1}$
over $\mathbb{Z}_{p}$, $t>1$ an integer and $g\in
G\setminus\\{e,f_{0},f_{1}\\}$, where $e$ is the identity element of $G$.
Define $\widehat{H}:\\{0,1\\}^{*}\to G$ by
$\widehat{H}(m)=\prod_{i=1}^{l}C_{i}$ where
$C_{i}=\begin{cases}f_{m_{i}}&\mbox{if }t\nmid i\\\ f_{m_{i}}g&\mbox{if }t\mid
i\end{cases}.$
Now define $\widehat{H}_{2}(m)=\widehat{H}(m\parallel(\widehat{H}(m)\oplus
c_{rnd}))$.
For an input bit string of length $l$, the computation of $\widehat{H}$
requires $\lfloor l/t\rfloor$ multiplications more than the original Cayley
hash function proposed by Shpilrain and Sosnovski, thus not affecting too much
the performance of the hash.
Ghaffari and Mostaghim showed that $\widehat{H}$ is at least as secure as the
Shpilrain-Sosnovski hash function $H$, and consequently, so is
$\widehat{H}_{2}$.
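A short sketch of $\widehat{H}$ follows, representing each linear function $ax+b$ as a pair $(a,b)$; the element $g$ is assumed to be supplied as such a pair, and the $\widehat{H}_{2}$ wrapper is omitted since the bit encoding of $\widehat{H}(m)\oplus c_{rnd}$ is not specified here.

```python
# Linear functions a*x + b are pairs (a, b), composed as in the matrix
# product of Section 3.1.

def lin_mul(u, v, p):
    """Multiply two linear functions given as (r, s) pairs, mod p."""
    (r1, s1), (r2, s2) = u, v
    return ((r1 * r2) % p, (r1 * s2 + s1) % p)

def gm_hash(bits, g, t, p):
    """H-hat: append the assumed group element g after every t-th bit."""
    h = (1, 0)                                   # identity function
    for i, b in enumerate(bits, start=1):        # positions are 1-indexed
        f = (2, 1) if b == "0" else (3, 1)       # f_{m_i}
        h = lin_mul(h, f, p)
        if i % t == 0:                           # C_i = f_{m_i} g when t | i
            h = lin_mul(h, g, p)
    return h

p, t = 2**127 - 1, 3
g = (6, 3)            # toy choice: g = f_0 f_1, i.e., 2(3x+1)+1 = 6x+3
print(gm_hash("101011", g, t, p))
```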
## 4\. Monico’s Algorithm
Monico [10] developed an attack showing that the Shpilrain-Sosnovski hash function is not second-preimage resistant for inputs larger than about 1.9 MB with parameter $p\approx 2^{256}$. In Monico's method, the original bit string is not even
required, and having only a bound on its length suffices (preimage weakness).
In Monico’s attack, a hash value $(x,y)$ in $\mathbb{F}_{p}^{2}$ of a bit
string of known length $L$ is given and inverted to $(r,s)=(x-y,y)$. Since
$r=2^{a}3^{b}$ where $a$ is the number of zeros in the original bit string,
and $b$ is the number of ones (or vice-versa), then $L=a+b$. The values of $a$
and $b$ can be recovered with $O(L\log L)$ operations over $\mathbb{F}_{p}$ by
precomputing $L$ powers of $2$, sorting them out and then computing and
testing $r,3^{-1}r,3^{-2}r,\ldots$ until one of the values in the sequence
matches one of the precomputed powers of 2.
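A sketch of this first recovery step is given below, using a dictionary lookup in place of sorting (a minor variation).

```python
# Given the inverted hash component r = 2^a * 3^b mod p and the known length
# L = a + b, recover (a, b) by precomputing powers of 2 and repeatedly
# stripping factors of 3.

def recover_exponents(r, L, p):
    pow2 = {pow(2, a, p): a for a in range(L + 1)}   # table of 2^a mod p
    inv3 = pow(3, -1, p)                             # 3^{-1} mod p
    for b in range(L + 1):
        if r in pow2 and pow2[r] + b == L:           # r = 2^a with a + b = L
            return pow2[r], b
        r = (r * inv3) % p                           # peel off one factor of 3
    return None

p = 2**127 - 1                                       # toy prime
a, b = 57, 43
r = pow(2, a, p) * pow(3, b, p) % p
print(recover_exponents(r, a + b, p))                # (57, 43)
```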
Let $n=\min\\{a,b\\}$,
$Y=\left(\begin{array}[]{cc}r&s\\\ 0&1\end{array}\right)\mbox{ and
}U=\left(\begin{array}[]{cc}r&u\\\ 0&1\end{array}\right),$
where $U$ is a suitable matrix whose factorization in generators
$A=\left(\begin{array}[]{cc}2&1\\\ 0&1\end{array}\right)\mbox{ and
}B=\left(\begin{array}[]{cc}3&1\\\ 0&1\end{array}\right)$
is known and determined by the values of $a$ and $b$ found in the first step.
The attack aims to transform $U$ into $Y$ by replacing several of the leading
$AB$ factors of $U$ with $BA$. To do so, one must find
$\mathbf{x}\in\\{0,1\\}^{n}$ such that
$\displaystyle\sum_{j=0}^{n-1}x_{j}6^{j}\equiv t\pmod{p}$ where
$t=s-u\pmod{p}$ (for more details, see [10]).
To provide a probabilistic algorithm to find such $\mathbf{x}$, Monico reduced
the problem to a dense instance of the Random Modular Subset Sum Problem
(RMSSP), which was considered by Lyubashevsky (2005). Heuristically, his
algorithm is expected to succeed as long as the original bit string had at
least $n$ zeros and $n$ ones for some $n\geq 2^{\sqrt{2\log_{2}p}}$. According
to Monico, the algorithm’s expected running time is $O(n^{2}\log n)$ with an
implied constant small enough to keep the attack practical for $p\approx
2^{256}$.
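To make the congruence concrete, here is a toy meet-in-the-middle solver for small $n$; it is not the RMSSP algorithm of [10], which is what keeps the attack practical at scale.

```python
from itertools import combinations

# Toy solver for sum_j x_j * 6^j = t (mod p) with x_j in {0,1}: enumerate
# partial sums over each half of the indices and match them in a dictionary.

def subset_sum_6(t, n, p):
    w = [pow(6, j, p) for j in range(n)]
    half = n // 2
    left = {}                                        # partial sum -> index set
    for k in range(half + 1):
        for idx in combinations(range(half), k):
            left[sum(w[j] for j in idx) % p] = idx
    for k in range(n - half + 1):
        for idx in combinations(range(half, n), k):
            rest = (t - sum(w[j] for j in idx)) % p
            if rest in left:
                chosen = set(left[rest]) | set(idx)
                return [1 if j in chosen else 0 for j in range(n)]
    return None

p, n = 10007, 16
t = sum(pow(6, j, p) for j in (1, 4, 9, 12)) % p     # a solvable instance
x = subset_sum_6(t, n, p)
print(sum(x[j] * pow(6, j, p) for j in range(n)) % p == t)   # True
```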
## 5\. Cryptanalysis of the Ghaffari-Mostaghim hash
This section uses Monico's algorithm to produce collisions for the Ghaffari-Mostaghim hash function.
###### Lemma 1.
A collision for $\widehat{H}$ is also a collision for $\widehat{H}_{2}$.
###### Proof.
Suppose that $m$ and $m^{\prime}$ are two bit strings such that
$\widehat{H}(m)=\widehat{H}(m^{\prime})$.
$\begin{array}[]{ccc}\widehat{H}_{2}(m)&=&\widehat{H}\left(m\parallel(\widehat{H}(m)\oplus
c_{rnd})\right)\\\ &=&\widehat{H}(m)\widehat{H}(\widehat{H}(m)\oplus
c_{rnd})\\\
&=&\widehat{H}(m^{\prime})\widehat{H}(\widehat{H}(m^{\prime})\oplus
c_{rnd})\\\
&=&\widehat{H}\left(m^{\prime}\parallel(\widehat{H}(m^{\prime})\oplus
c_{rnd})\right)\\\ &=&\widehat{H}_{2}(m^{\prime})\end{array}$
∎
###### Lemma 2.
A collision for $H$ is also a collision for $H_{2}$.
###### Proof.
Similar to the proof in Lemma 1.
∎
###### Theorem 3.
Ghaffari-Mostaghim hash is not collision-resistant.
###### Proof.
Monico’s algorithm can find collisions for $\widehat{H}_{2}$.
Let $t>1$ and $g\in G$ as described in the construction of $\widehat{H}_{2}$.
We can use the algorithm to find a preimage for $g^{-1}$ under the hash $H$,
say $b^{\prime}$ such that $H(b^{\prime})=g^{-1}$.
Suppose that for a given bit string $m=m_{1}m_{2}\cdots m_{l_{1}}$, Monico’s
algorithm returns a bit string $m^{\prime}=m_{1}^{\prime}m_{2}^{\prime}\cdots
m_{l_{2}}^{\prime}$ such that $H(m)=H(m^{\prime})$ with $m\neq m^{\prime}$. We
insert $b^{\prime}$ into the bit strings $m$ and $m^{\prime}$ immediately after each bit position that is a multiple of $t$, obtaining the following
$m_{*}=m_{1}m_{2}\cdots m_{t}b^{\prime}\cdots m_{l_{1}}$
$m_{*}^{\prime}=m_{1}^{\prime}m_{2}^{\prime}\cdots
m_{t}^{\prime}b^{\prime}\cdots m_{l_{2}}^{\prime}$
We have that $m_{*}\neq m_{*}^{\prime}$ and they are collisions for
$\widehat{H}$ since
$\begin{array}[]{lll}\widehat{H}(m_{*})&=&\widehat{H}(m_{1}m_{2}\cdots
m_{t}b^{\prime}\cdots m_{l_{1}})\\\ &=&H(m_{1})H(m_{2})\cdots
H(m_{t})H(b^{\prime})\cdots H(m_{l_{1}})\\\ &=&f_{m_{1}}f_{m_{2}}\cdots
f_{m_{t}}gH(b^{\prime})\cdots f_{m_{l_{1}}}\\\ &=&f_{m_{1}}f_{m_{2}}\cdots
f_{m_{t}}gg^{-1}\cdots f_{m_{l_{1}}}\\\ &=&H(m)\end{array}$
$\begin{array}[]{lll}\widehat{H}(m_{*}^{\prime})&=&\widehat{H}(m_{1}^{\prime}m_{2}^{\prime}\cdots
m_{t}^{\prime}b^{\prime}\cdots m_{l_{2}}^{\prime})\\\
&=&H(m_{1}^{\prime})H(m_{2}^{\prime})\cdots
H(m_{t}^{\prime})H(b^{\prime})\cdots H(m_{l_{2}}^{\prime})\\\
&=&f_{m_{1}^{\prime}}f_{m_{2}^{\prime}}\cdots
f_{m_{t}^{\prime}}gH(b^{\prime})\cdots f_{m_{l_{2}}^{\prime}}\\\
&=&f_{m_{1}^{\prime}}f_{m_{2}^{\prime}}\cdots f_{m_{t}^{\prime}}gg^{-1}\cdots
f_{m_{l_{2}}^{\prime}}\\\ &=&H(m^{\prime})\end{array}$
This shows that Monico’s algorithm produces collisions for $\widehat{H}$.
Therefore, $\widehat{H}_{2}$ is not collision-resistant by Lemma 1.
∎
## 6\. Conclusion
This paper proves that the variant proposed by Ghaffari and Mostaghim is
insecure. Our approach is to consider the mathematical structure of the design
of the hash and apply the algorithm that finds second preimages and collisions
for the Shpilrain-Sosnovski hash function.
The algorithms used to break Cayley hash functions target specific
vulnerabilities of each underlying group used and do not invalidate the
generic scheme of these functions. Petit and Quisquater [14, 15] suggested
that security might be recovered for the Cayley hash function design by
introducing new generators.
Although many Cayley hash functions have been proven insecure, and their use
significantly compromises the security of computer systems, we learn from
prior vulnerabilities to develop more robust hash functions. It is essential
to research improvements and develop new designs for Cayley hash functions
that can sustain quantum attacks.
## References
* [1] Bernstein, D.J.: Introduction to post-quantum cryptography. In: Bernstein, D.J., Buchmann, J., Dahmen, E. (eds) Post-Quantum Cryptography. Springer, Berlin, Heidelberg (2009), https://doi.org/10.1007/978-3-540-88702-7˙1
* [2] Bromberg, L., Shpilrain, V., Vdovina, A.: Navigating in the Cayley graph of $SL_{2}(\mathbf{F}_{p})$ and applications to hashing. Semigroup Forum 94, pp. 314–324 (2017), https://doi.org/10.1007/s00233-015-9766-5
* [3] Cassaigne, J., Harju, T., Karhumäki, J.: On the undecidability of freeness of matrix semigroups. International Journal of Algebra and Computation 09(03n04), pp. 295–305 (Jun 1999), http://dx.doi.org/10.1142/S0218196799000199
* [4] Charles, D.X., Lauter, K.E., Goren, E.Z.: Cryptographic hash functions from expander graphs. Journal of Cryptology 22(1), pp. 93–113 (Jan 2009), https://doi.org/10.1007/s00145-007-9002-x
* [5] Le Coz, C., Battarbee, C., Flores, R., Koberda, T., Kahrobaei, D.: Post-quantum hash functions using $\mathrm{SL}_{n}(\mathbb{F}_{p})$. (Jul 2022), https://doi.org/10.1142/S0218196799000199
* [6] Ghaffari, M.H., Mostaghim, Z.: More secure version of a Cayley hash function. Groups Complexity Cryptology 10(1) (Apr 2018), http://dx.doi.org/10.1515/gcc-2018-0002
* [7] Grassl, M., Ilić, I., Magliveras, S., Steinwandt, R.: Cryptanalysis of the Tillich-Zémor hash function. Journal of Cryptology 24(1), pp. 148–156 (Jan 2011), https://doi.org/10.1007/s00145-010-9063-0
* [8] Jo, H., Yamasaki, Y.: LPS-type Ramanujan graphs. 2018 International Symposium on Information Theory and Its Applications (ISITA), pp. 399–403 (2018), https://doi.org/10.23919/ISITA.2018.8664284
* [9] Menezes, A. J., van Oorschot, P. C., Vanstone, S. A.: Handbook of applied cryptography. CRC press (2001)
* [10] Monico, C.: Cryptanalysis of a hash function and the modular subset sum problem. http://www.math.ttu.edu/~cmonico/research/linearhash.pdf (2019)
* [11] Petit, C.: On graph-based cryptographic hash functions. Ph.D. thesis, Université Catholique de Louvain (2009)
* [12] Petit, C., Lauter, K., Quisquater, J.J.: Full cryptanalysis of LPS and Morgenstern hash functions. In: International Conference on Security and Cryptography for Networks. pp. 263–277. Springer (2008)
* [13] Petit, C., Lauter, K.E., Quisquater, J.J.: Cayley hashes: A class of efficient graph-based hash functions. https://christophe.petit.web.ulb.be/files/Cayley.pdf (preprint 2007)
* [14] Petit, C., Quisquater, J.J.: Preimages for the Tillich-Zémor hash function. In: Biryukov, A., Gong, G., Stinson, D.R. (eds.) Selected Areas in Cryptography. pp. 282–301. Springer Berlin Heidelberg, Berlin, Heidelberg (2011)
* [15] Petit, C., Quisquater, J.J.: Rubik’s for cryptographers. Notices of the American Mathematical Society 60(6), pp. 733–739 (2013)
* [16] Shpilrain, V., Sosnovski, B.: Compositions of linear functions and applications to hashing. Groups Complexity Cryptology 8(2) (Jan 2016), http://dx.doi.org/10.1515/gcc-2016-0016
* [17] Tillich, J.P., Zémor, G.: Group-theoretic hash functions. In: Algebraic Coding: First French–Israeli Workshop. pp. 90–110. Springer (1994), https://doi.org/10.1007/3-540-57843-9
* [18] Tillich, J.P., Zémor, G.: Hashing with ${SL}_{2}$. In: Desmedt, Y.G. (ed.) Advances in Cryptology — CRYPTO ’94. pp. 40–49. Springer Berlin Heidelberg, Berlin, Heidelberg (1994)
* [19] Yuan, S.: Cryptographic hash functions from sequences of lifted Paley graphs. In: Kahrobaei, D., Cavallo, B., Garber, G. (eds.) Algebra and Computer Science. AMS (2016), http://dx.doi.org/10.1090/conm/677/13629
* [20] Zémor, G.: Hash functions and graphs with large girths. In: Davies, D.W. (ed.) Advances in Cryptology — EUROCRYPT ’91. pp. 508–511. Springer Berlin Heidelberg, Berlin, Heidelberg (1991)
# Matched Illumination Waveforms using Multi-Tone Sinusoidal Frequency
Modulation
###### Abstract
This paper explores the design of constant modulus Matched-Illumination (MI)
waveforms using the Multi-Tone Sinusoidal Frequency Modulation (MTSFM)
waveform model. MI waveforms are optimized for detecting targets in known
noise and clutter Power Spectral Densities (PSDs). There exist well-defined
information theoretic methods that describe the design of MI waveforms for a
myriad of target/noise/clutter models. However, these methods generally only
produce the magnitude square of the MI waveform’s spectrum. Additionally, the
waveform’s time-series is not guaranteed to be constant modulus. The MTSFM is
a constant modulus waveform model with a discrete set of design coefficients.
The coefficients are adjusted to synthesize constant modulus waveforms that
approximate the ideal MI waveform’s spectrum. Simulations demonstrate that the
MTSFM’s detection performance closely approximates an ideal MI waveform
spectrum and generally outperforms flat spectrum waveforms across a range of
transmit energies when the noise and clutter PSDs vary greatly across the
operational band.
Index Terms— Matched-Illumination Waveform, Adaptive Waveform Design, Multi-
Tone Sinusoidal Frequency Modulation.
## 1 Introduction
A fundamental problem in radar system design is the choice of transmit
waveform and receiver processing for detecting targets of interest. There
exist well-defined information theoretic methods in the published literature
that describe how to design waveforms and receivers for optimal target
detection in given noise/clutter Power Spectral Densities (PSDs) [1, 2, 3, 4,
5, 6]. Using a transmit energy constraint, these methods detail the structure
of the magnitude square of the waveform’s spectrum, also known as the Energy
Spectral Density (ESD). Such waveform design methods have demonstrated clear
improvement in detection performance compared to waveforms with a flat ESD
such as the Linear Frequency Modulated (LFM) waveform [4, 5]. The improved
detection performance of these Matched Illumination (MI) waveforms over flat
spectrum waveforms is especially noticeable in scenarios where there is
limited available transmit energy and the noise/clutter PSDs vary widely in
magnitude across the operational band of frequencies [5].
While these MI waveform design techniques specify the optimal waveform’s ESD
shape, they do not directly specify how to synthesize the waveform time-series
that realizes that ESD shape. Since the optimization is over a finite band,
the resulting waveform time-series cannot be perfectly time-limited.
Additionally, it is generally desirable for waveforms to possess a constant
modulus to facilitate transmission on practical transmitter electronics, a
property that information theoretic MI waveform synthesis methods do not
guarantee. There have been a number of efforts to develop phase-retrieval
algorithms that synthesize a constant modulus waveform whose ESD closely
approximates the ideal MI waveform’s ESD [7, 8, 9].
Recently, the Multi-Tone Sinusoidal Frequency Modulated (MTSFM) waveform was
developed for use as an adaptive FM waveform model for cognitive radar and
sonar systems [10]. The MTSFM is a constant modulus waveform model with a
discrete set of parameters that can be adjusted to synthesize novel waveforms
with desirable characteristics. Previous work in [10, 11] demonstrated that
the MTSFM’s design coefficients can be finely tuned to produce waveforms with
specific Ambiguity Function (AF) and Auto-Correlation Function (ACF)
properties, a common focus area for adaptive waveform design [12, 13, 14, 15].
This paper explores applying the MTSFM waveform model to the MI waveform
design problem using the foundational methods developed by Kay in [4] for
point-like targets. Simulations demonstrate that the MTSFM’s detection
performance approaches that of the ideal MI waveform and generally outperforms
flat spectrum waveforms across a range of transmit energies when the
magnitudes of the noise and clutter PSDs vary substantially across the
operational band of frequencies.
## 2 Waveform Signal Model and the Optimal Detection Waveform Problem
This section describes the MI waveform design technique for point targets in
noise and clutter whose respective PSDs are known [4]. This section
additionally describes the MTSFM waveform model and how it can be adapted to
produce constant amplitude waveforms that approximate the ideal MI waveform’s
ESD. This paper assumes the waveform $s\left(t\right)$ with Fourier transform
$S\left(f\right)$ is basebanded and occupies a bandwidth $W$. The waveform is
defined over the time interval $-T/2\leq t\leq T/2$ with duration $T$ and
energy $E$ expressed as
$s\left(t\right)=\sqrt{\dfrac{E}{T}}\operatorname{rect}\left(t/T\right)e^{j\varphi\left(t\right)}$
(1)
where $\varphi\left(t\right)$ is the waveform’s instantaneous phase. The
waveform’s frequency modulation function $m\left(t\right)$ is expressed as
$m\left(t\right)=\dfrac{1}{2\pi}\dfrac{d\varphi\left(t\right)}{dt}.$ (2)
### 2.1 Designing Matched Illumination Waveforms
This paper uses the MI waveform model developed by Kay in [4] and is described
by the block diagram shown in Figure 1. The waveform $s\left(t\right)$ is
transmitted into the medium. The return signal is a combination of the return
from the target, clutter, and additive noise. The target is modeled as a point
reflector with impulse response $g\left(t\right)=A\delta\left(t\right)$ where
$\delta\left(t\right)$ is an impulse function and $A$ is a complex reflecting
parameter modeled as a complex normal distribution
$A\sim\mathcal{CN}\left(0,\sigma_{A}^{2}\right)$. This target model can be
readily generalized to include extended targets as was demonstrated in [2].
The noise $n\left(t\right)$ is modeled as a complex Gaussian random process
with PSD $P_{n}\left(f\right)$. The channel impulse response $h\left(t\right)$
is convolved with the transmit waveform producing the clutter signal
$c\left(t\right)=h\left(t\right)*s\left(t\right)$. Assuming the channel is a Gaussian random process with zero mean and PSD $P_{h}\left(f\right)$, the PSD of the clutter can correspondingly be expressed
as $P_{c}\left(f\right)=|S\left(f\right)|^{2}P_{h}\left(f\right)$. As shown in
Figure 1, these terms combine to produce the return signal $x\left(t\right)$
expressed as
$x\left(t\right)=As\left(t\right)+h\left(t\right)*s\left(t\right)+n\left(t\right).$
(3)
Note that this model assumes the target and clutter are stationary and thus
this signal model does not contain Doppler shifted echo signals or clutter.
This was primarily utilized in [4] for simplicity in deriving the optimal
waveform/receiver configuration. However, this model also represents a worst
case scenario. Many radar/sonar systems exploit target Doppler in order to
separate the target’s echo signal from clutter. For stationary targets and
clutter this is not possible, and thus target detection performance is purely
dependent upon receiver design and shaping of the waveform's ESD.
Kay [4] then used this model to derive the optimal Neyman-Pearson detector,
expressed in the frequency domain as
$\left|\sum_{m=-M/2}^{M/2}\dfrac{X\left(f_{m}\right)S^{*}\left(f_{m}\right)}{P_{h}\left(f_{m}\right)|S\left(f_{m}\right)|^{2}+P_{n}\left(f_{m}\right)}\right|^{2}>\gamma$
(4)
where $f_{m}=m/T$, $M=\lceil WT\rceil$, and $\gamma$ is the detection
threshold. The optimal receiver’s detection performance is determined by the
metric
$d^{2}=\sigma_{A}^{2}\int_{-W/2}^{W/2}\dfrac{|S\left(f\right)|^{2}}{P_{h}\left(f\right)|S\left(f\right)|^{2}+P_{n}\left(f\right)}df.$
(5)
The waveform is also constrained to possess finite energy across the
operational band of frequencies $W$
$E=\int_{W}|S\left(f\right)|^{2}df.$ (6)
The waveform that maximizes $d^{2}$ possesses the ESD
$E_{s}\left(f\right)=|S\left(f\right)|^{2}=\max\left(\dfrac{\sqrt{P_{n}\left(f\right)/\lambda}-P_{n}\left(f\right)}{P_{h}\left(f\right)},0\right)$
(7)
where the parameter $\lambda$ is found from the energy constraint in (6). This
involves solving the following expression
$\int_{W}\max\left(\dfrac{\sqrt{P_{n}\left(f\right)/\lambda}-P_{n}\left(f\right)}{P_{h}\left(f\right)},0\right)df=E.$
(8)
The value for $\lambda$ can be solved for numerically given
$P_{n}\left(f\right)$ and $P_{h}\left(f\right)$. Using the receiver in (5),
Using the receiver in (4), the Receiver Operating Characteristic (ROC), which relates the probability of detection $P_{D}$ to the probability of false alarm $P_{FA}$, is completely characterized by the detection metric defined in (5) and is expressed as [4]
$P_{D}=P_{FA}^{\frac{1}{1+d^{2}}}.$ (9)
Setting $d^{2}=0$ results in the line of no discrimination ROC curve where
$P_{D}=P_{FA}$. As $d^{2}\to\infty$, the ROC curve approaches the perfect
detector. Maximizing the detection metric $d^{2}$ via the MI waveform
described by (7) and (8) and its corresponding receiver (4) will therefore
maximize the detection probability $P_{D}$ for a given fixed false alarm
probability $P_{FA}$. Thus, the detection metric $d^{2}$ in (5) is the primary
figure of merit this paper uses to evaluate MI waveform designs using the
model developed by Kay [4].
Fig. 1: Block diagram describing the target scene model. The received signal
at the target’s time-delay of arrival is a superposition of a scaled version
of the transmitted waveform $s\left(t\right)$ plus additive noise
$n\left(t\right)$ and clutter $c\left(t\right)$ from the target scene.
### 2.2 The MTSFM Waveform Model
The MTSFM waveform is realized by representing the modulation function as a
finite Fourier series expansion. While this paper focuses on modulation
functions composed solely of sine harmonics for simplicity, the model can be
readily generalized to include cosine harmonics as well. The MTSFM’s
modulation function is expressed as
$m\left(t\right)=\sum_{k=1}^{K}b_{k}\sin\left(\frac{2\pi kt}{T}\right).$ (10)
The corresponding phase modulation function is expressed as
$\varphi\left(t\right)=-\sum_{k=1}^{K}\beta_{k}\cos\left(\frac{2\pi
kt}{T}\right),$ (11)
where $\beta_{k}=\left(\frac{b_{k}T}{k}\right)$ are the waveform’s modulation
indices and serve as a discrete set of $K$ parameters that adapts the
waveform’s characteristics. Inserting (11) into (1) yields the MTSFM
waveform’s time-series.
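For concreteness, a minimal synthesis sketch of (1) and (10)-(11) follows; the modulation indices are arbitrary illustration values rather than an optimized design.

```python
import numpy as np

# Synthesize an MTSFM time series from given modulation indices beta_k.
T, fs, K, E = 1.0, 4096, 4, 1.0
beta = np.array([2.0, -0.7, 0.3, 0.1])           # illustrative beta_k values
t = np.arange(-T / 2, T / 2, 1 / fs)
k = np.arange(1, K + 1)[:, None]

phi = -np.sum(beta[:, None] * np.cos(2 * np.pi * k * t / T), axis=0)  # Eq. (11)
s = np.sqrt(E / T) * np.exp(1j * phi)            # Eq. (1), constant modulus
print(np.allclose(np.abs(s), np.sqrt(E / T)))    # True: unit envelope
```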
The MTSFM waveform time-series can also be represented as a complex Fourier
series expressed as [10]
$s\left(t\right)=\sqrt{\frac{E}{T}}\operatorname{rect}\left(t/T\right)\sum_{m=-\infty}^{\infty}c_{m}e^{j\frac{2\pi
mt}{T}}$ (12)
where the Fourier series coefficients $c_{m}$ are the Modified Generalized
Bessel Functions (M-GBFs) [16] with integer order $m$ expressed as
$\mathcal{I}_{m}^{1:K}\left(\\{-j\beta_{k}\\}\right)$. The expansion in (12)
shows that the MTSFM belongs to the family of generalized multi-carrier
waveform models such as Orthogonal Frequency Division Multiplexing (OFDM)
[17]. A unique characteristic of the MTSFM model is that unlike standard OFDM
models with a generic set of coefficients $c_{m}$, the GBF coefficients of the
MTSFM model ensure that the resulting waveform is constant modulus [10, 18]. The
spectrum of the MTSFM waveform is expressed as an orthonormal superposition of
frequency shifted $\operatorname{sinc}$ functions [10]
$S\left(f\right)=\sqrt{ET}\sum_{m=-\infty}^{\infty}\mathcal{I}_{m}^{1:K}\left(\\{-j\beta_{k}\\}\right)\operatorname{sinc}\left[\pi
T\left(f-\frac{m}{T}\right)\right].$ (13)
The design goal is to now use the MTSFM waveform model to approximate the
optimal ESD of the MI waveform design problem specified by (7) and (8).
### 2.3 The Design of MI Waveforms using the MTSFM Model
This section describes a heuristic structured phase retrieval method to design
a constant amplitude MTSFM waveform whose ESD $|S\left(f\right)|^{2}$ closely
approximates the ESD of the MI waveform defined in Section 2.1 denoted as
$|S_{o}\left(f\right)|^{2}$. The first step is to find
$|S_{o}\left(f\right)|^{2}$ using (7) and (8). Since the MI waveform design
method is concerned only with the ESD, the phase of the MI waveform’s spectrum
can be ignored and therefore $S_{o}\left(f\right)=|S_{o}\left(f\right)|$.
Discretizing $S_{o}\left(f\right)$, the generic OFDM coefficients $c_{m}$ can
be solved in matrix form via
$\underline{\mathbf{s_{o}}}=\mathbf{X}\mathbf{\underline{c}}$. Here,
$\underline{\mathbf{s_{o}}}$ is the discrete vector form of
$S_{o}\left(f\right)$, the vector $\underline{\mathbf{c}}$ represents the
generic OFDM coefficients $c_{m}$, and $\mathbf{X}$ is a matrix composed of
discretized frequency shifted versions of the $\operatorname{sinc}$ function
in (13) with frequency spacing $f_{m}=m/T$ as in (4). The frequency spacing
results in $\mathbf{X}$ being square and invertible. Thus a unique solution to
$c_{m}$ exists by solving
$\underline{\mathbf{s_{o}}}=\mathbf{X}\mathbf{\underline{c}}$ via
$\underline{\mathbf{c}}=\mathbf{X}^{-1}\underline{\mathbf{s_{o}}}$.
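A short numerical sketch of this solve follows; the target spectrum below is a placeholder shape standing in for $|S_{o}(f)|$. Note that on the grid $f_{m}=m/T$ the shifted sinc columns are orthonormal, so $\mathbf{X}$ is numerically the identity matrix and the system is trivially well conditioned.

```python
import numpy as np

# Build the matrix of frequency-shifted sinc functions on f_m = m/T and
# solve s_o = X c for the generic OFDM coefficients c_m.
T, M = 1.0, 64
m = np.arange(-M // 2, M // 2 + 1)
f = m / T                                        # frequency grid f_m = m/T

def sinc_basis(f, m, T):
    # Column m is sinc[pi*T*(f - m/T)]; np.sinc(x) = sin(pi x)/(pi x)
    # already includes the factor of pi.
    return np.sinc(T * (f[:, None] - m[None, :] / T))

X = sinc_basis(f, m, T)                          # square and invertible
S_o = np.exp(-(f / 8.0) ** 2)                    # placeholder |S_o(f)| shape
c = np.linalg.solve(X, S_o)                      # unique coefficients c_m
print(np.allclose(X @ c, S_o))                   # True
```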
The resulting coefficients $c_{m}$ are generic OFDM coefficients and not
guaranteed to synthesize a constant modulus waveform. Therefore, synthesizing
a constant modulus MTSFM waveform that approximates $|S_{o}\left(f\right)|$
requires finding an M-GBF-based fit to the coefficients $c_{m}$. The coefficients $c_{m}$ are real but not necessarily positive, whereas the M-GBF coefficients can be complex-valued. Thus, the authors propose synthesizing a
MTSFM approximation to the ideal MI waveform by minimizing the following
distance metric between the magnitudes $|c_{m}|^{2}=c_{m}^{2}$ and
$|\mathcal{I}_{m}^{1:K}\left(\\{-j\beta_{k}\\}\right)|^{2}$
$\min_{\beta_{k}}~F\left(\\{\beta_{k}\\}\right)=\|c_{m}^{2}-E|\mathcal{I}_{m}^{1:K}\left(\\{-j\beta_{k}\\}\right)|^{2}\|_{2}^{2}\quad\text{s.t.}\quad\sum_{k}k\beta_{k}\in\left(1\pm\delta\right)\kappa.$ (14)
where $\kappa$ is the region of support of $c_{m}$ such that
$\sum_{m\in\kappa}c_{m}^{2}\cong E$ and the $\sum_{k}k\beta_{k}$ term loosely
approximates the M-GBF’s region of support [16]. Note that the $E$ term in
front of the M-GBF argument ensures proper scaling with $c_{m}^{2}$ since
$\sum_{m}c_{m}^{2}=E$ and
$\sum_{m}|\mathcal{I}_{m}^{1:K}\left(\\{-j\beta_{k}\\}\right)|^{2}=1$ [16].
The quartic objective function defined in (14) is loosely similar to those
defined in other generalized phase retrieval problems [19, 20]. The quartic
nature of this distance metric between $c_{m}^{2}$ and
$|\mathcal{I}_{m}^{1:K}\left(\\{-j\beta_{k}\\}\right)|^{2}$ coupled with the
highly oscillatory nature of the M-GBFs with varying arguments make the
objective function defined in (14) nonconvex. Such an objective function makes
it unlikely that common iterative methods will solve this problem without
special consideration to initialization. However, efforts in the literature
[21, 22] have demonstrated that heuristic methods work surprisingly well on
these nonconvex problems and produce useful results.
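The following heuristic sketch illustrates one way to attack (14). Instead of evaluating the M-GBFs directly, it computes the waveform's Fourier coefficients with an FFT (these coincide with the M-GBF coefficients up to a phase from the time origin; only magnitudes are used) and minimizes the squared distance with a general-purpose optimizer, omitting the support constraint for brevity. It is an illustration of the fitting idea, not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import minimize

T, fs, K = 1.0, 1024, 4
N = int(T * fs)
t = np.arange(N) / fs - T / 2
k = np.arange(1, K + 1)[:, None]

def mtsfm_coeffs(beta):
    # Fourier coefficients c_m of the constant-modulus envelope via the DFT.
    phi = -np.sum(beta[:, None] * np.cos(2 * np.pi * k * t / T), axis=0)
    return np.fft.fft(np.exp(1j * phi)) / N

# Demo target: the |c_m|^2 of a known design, so a good fit is achievable.
target = np.abs(mtsfm_coeffs(np.array([1.5, -0.4, 0.2, 0.05]))) ** 2

def objective(beta):
    return np.sum((target - np.abs(mtsfm_coeffs(beta)) ** 2) ** 2)

rng = np.random.default_rng(0)
res = minimize(objective, x0=rng.standard_normal(K), method="Nelder-Mead")
print(res.fun)    # small residual when a matching design is recovered
```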
## 3 Two Illustrative Design Examples
The following simulations demonstrate the MTSFM-based fit to the MI waveform
problem. Figure 2 shows the noise and clutter PSDs for two scenarios as well
as the ideal MI waveforms for several energy values. These simulations
utilize the same point target statistics ($\sigma_{A}^{2}=1$) and noise PSDs
while using different clutter PSDs. The noise PSD has a relatively broad
valley centered about DC. Noise only water-filling techniques commonly
utilized in MI waveform design would therefore emphasize most of the
waveform’s energy about DC. However, the clutter also heavily influences the
MI waveform’s ESD. The first scenario’s clutter PSD is oscillatory across most
of the operational band with a distinct peak at DC (i.e., the “clutter-peak” case). The second scenario’s clutter PSD is largely flat across the operational band with a distinct notch centered about DC (i.e., the “clutter-notch” case). Using (7) and (8) produces MI waveforms whose ESDs vary roughly
20 dB across the operational band.
Fig. 2: Illustration of the noise/clutter PSDs and their corresponding MI
waveform ESDs for several energy values.
Figure 3 shows box-whisker plots of the detection metric $d^{2}$ of the MTSFMs
fitted to the ideal MI waveform ESDs across a range of energy values for both
scenarios shown in Figure 2. For each energy value, 1000 MTSFM waveforms each
with a different set of initial modulation indices $\beta_{k}$ were fit to the
ideal MI waveform’s ESD using (14) with $\delta=0.2$. The black circles denote
statistical outliers. Additionally, the detection metric for an LFM waveform
with equal RMS bandwidth
$\beta_{rms}^{2}=\left(2\pi\right)^{2}/E\int_{W}f^{2}|S\left(f\right)|^{2}df$
to that of the MI and MTSFM waveforms is also shown for each energy value. For
the clutter-peak case, the MTSFMs outperform the LFM often for lower energies
but noticeably less so for higher energies. One potential explanation for this
result is that for higher energies, the MI waveform starts to resemble a flat
spectrum waveform. An analysis of the trials showed that the MTSFM tends to
better fit spectral shapes where there is notable variation across the
operational band. For the clutter-notch case every MTSFM outperformed the LFM
for energy values $E>1$. This is likely because the MI waveform’s ESD shapes
possess a distinct peak at DC, a spectral shape the MTSFM is much better
suited to fitting than the flat spectrum LFM.
Fig. 3: Box-whisker plots of the detection metric $d^{2}$ of 1000 MTSFM trials
for each energy value and the detection metrics for both the ideal MI
waveforms and LFM waveforms with equivalent RMS bandwidth for each energy
value.
## 4 Conclusion
This paper explored applying the MTSFM waveform model to the MI waveform
design problem for point-like targets using the model in [4]. The M-GBF
coefficients that describe the MTSFM are fit to a set of ideal OFDM
coefficients $c_{m}$ via the nonconvex distance metric in (14). Simulations
show that the MTSFM on average produces waveforms whose detection performance
tends to exceed that of spectrally flat waveforms when the ideal MI waveform’s
ESD varies substantially across the operational band. There are several future
avenues to pursue with this work. The most obvious is expanding this analysis
to extended targets with MTSFM waveforms whose modulation functions include
cosine and sine harmonics which produce a richer set of realizable spectral
shapes. Another avenue is refining the phase retrieval process using methods
from [21] to produce MTSFM waveforms whose ESDs more tightly fit the ideal MI
waveform’s ESD.
## References
* [1] M. R. Bell, “Information theory and radar waveform design,” IEEE Transactions on Information Theory, vol. 39, no. 5, pp. 1578–1597, 1993.
* [2] R. A. Romero, J. Bae, and N. A. Goodman, “Theory and application of SNR and mutual information matched illumination waveforms,” IEEE Transactions on Aerospace and Electronic Systems, vol. 47, no. 2, pp. 912–927, 2011.
* [3] S. U. Pillai, H. S. Oh, D. C. Youla, and J. R. Guerci, “Optimal transmit-receiver design in the presence of signal-dependent interference and channel noise,” IEEE Transactions on Information Theory, vol. 46, no. 2, pp. 577–584, 2000.
* [4] S. M. Kay, “Optimal signal design for detection of Gaussian point targets in stationary Gaussian clutter/reverberation,” IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 1, pp. 31–41, 2007.
* [5] S. U. Pillai, D. C. Youla, H. S. Oh, and J. R. Guerci, “Optimum transmit-receiver design in the presence of signal-dependent interference and channel noise,” in Conference Record of the Thirty-Third Asilomar Conference on Signals, Systems, and Computers, 1999, vol. 2, pp. 870–875 vol.2.
* [6] J. R. Guerci and S. U. Pillai, “Theory and application of optimum transmit-receive radar,” in Record of the IEEE 2000 International Radar Conference, 2000, pp. 705–710.
* [7] L. K. Patton and B. D. Rigling, “Phase retrieval for radar waveform optimization,” IEEE Transactions on Aerospace and Electronic Systems, vol. 48, no. 4, pp. 3287–3302, 2012.
* [8] S. U. Pillai, K. Y. Li, and H. Beyer, “Reconstruction of constant envelope signals with given Fourier transform magnitude,” in 2009 IEEE Radar Conference, 2009, pp. 1–4.
* [9] J. Bae and N. A. Goodman, “Evaluation of modulus-constrained matched illumination waveforms for target identification,” in 2010 IEEE Radar Conference, 2010, pp. 871–876.
* [10] D. A. Hague, “Adaptive transmit waveform design using multitone sinusoidal frequency modulation,” IEEE Transactions on Aerospace and Electronic Systems, vol. 57, no. 2, pp. 1274–1287, 2021.
* [11] D. A. Hague, “Target resolution properties of the multi-tone sinusoidal frequency modulated waveform,” in 2018 IEEE Statistical Signal Processing Workshop (SSP), 2018, pp. 752–756.
* [12] A. Aubry, A. De Maio, B. Jiang, and S. Zhang, “Ambiguity function shaping for cognitive radar via complex quartic optimization,” IEEE Transactions on Signal Processing, vol. 61, no. 22, pp. 5603–5619, Nov 2013.
* [13] R. Zhou, Z. Zhao, and D. P. Palomar, “Unified framework for minimax mimo transmit beampattern matching under waveform constraints,” in 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2019, pp. 4150–4154.
* [14] L. Wu and D. P. Palomar, “Sequence design for spectral shaping via minimization of regularized spectral level ratio,” IEEE Transactions on Signal Processing, vol. 67, no. 18, pp. 4683–4695, Sep. 2019.
* [15] A. Bose, N. Mohammadi, and M. Soltanalian, “Designing signals with good correlation and distribution properties,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, pp. 4349–4353.
* [16] G. Dattoli and A. Torre, Theory and Applications of Generalized Bessel Functions, Aracne Editrice, 1996.
* [17] M. Bică and V. Koivunen, “Generalized multicarrier radar: Models and performance,” IEEE Transactions on Signal Processing, vol. 64, no. 17, pp. 4389–4402, 2016.
* [18] D. A. Hague and J. R. Buck, “An experimental evaluation of the generalized sinusoidal frequency modulated waveform for active sonar systems,” The Journal of the Acoustical Society of America, vol. 145, no. 6, pp. 3741–3755, 2019.
* [19] Ju Sun, Qing Qu, and John Wright, “A geometric analysis of phase retrieval,” Foundations of Computational Mathematics, vol. 18, no. 5, pp. 1131–1198, 2018.
* [20] E. J. Candés, X. Li, and M. Soltanolkotabi, “Phase retrieval via Wirtinger flow: Theory and algorithms,” IEEE Transactions on Information Theory, vol. 61, no. 4, pp. 1985–2007, 2015.
  * [21] N. Vaswani, “Nonconvex structured phase retrieval: A focus on provably correct approaches,” IEEE Signal Processing Magazine, vol. 37, no. 5, pp. 67–77, 2020.
  * [22] Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: A contemporary overview,” IEEE Signal Processing Magazine, vol. 32, no. 3, pp. 87–109, 2015.
# HyP2 Loss: Beyond Hypersphere Metric Space for Multi-label Image Retrieval
Chengyin Xu (SIGS, Tsinghua University), Zenghao Chai (SIGS, Tsinghua
University), Zhengzhuo Xu (SIGS, Tsinghua University), Chun Yuan (SIGS,
Tsinghua University; Peng Cheng National Laboratory), Yanbo Fan (Tencent AI
Lab), and Jue Wang (Tencent AI Lab)
(2022)
###### Abstract.
Image retrieval has become an increasingly appealing technique with broad
multimedia application prospects, where deep hashing serves as the dominant
branch towards low storage and efficient retrieval. In this paper, we carry
out in-depth investigations on metric learning in deep hashing for
establishing a powerful metric space in multi-label scenarios, where pair
loss suffers from high computational overhead and convergence difficulty, while
proxy loss is theoretically incapable of expressing the profound label
dependencies and exhibits conflicts in the constructed hypersphere space. To
address these problems, we propose a novel metric learning framework with Hybrid
Proxy-Pair Loss (HyP2 Loss) that constructs an expressive metric space with
efficient training complexity w.r.t. the whole dataset. The proposed HyP2 Loss
focuses on optimizing the hypersphere space by learnable proxies and
excavating data-to-data correlations of irrelevant pairs, which integrates
the sufficient data correspondence of pair-based methods and the high efficiency of
proxy-based methods. Extensive experiments on four standard multi-label
benchmarks justify that the proposed method outperforms the state-of-the-art, is
robust among different hash bits, and achieves significant performance gains
with a faster, more stable convergence speed. Our code is available at
https://github.com/JerryXu0129/HyP2-Loss.
Image Retrieval, Deep Hashing, Multi-label, Metric Learning
MM ’22: Proceedings of the 30th ACM International Conference on Multimedia, October 10–14, 2022, Lisboa, Portugal. ISBN: 978-1-4503-9203-7/22/10. DOI: 10.1145/3503161.3548032. CCS concepts: Computing methodologies, Image representations; Information systems, Top-k retrieval in databases.
Figure 1. Retrieval comparisons between previous methods (Zhang et al., 2020; Kim et al., 2020) & ours on the Flickr-25k (Huiskes and Lew, 2008) dataset (top) & NUS-WIDE (Chua et al., 2009) dataset (bottom). The Top-$10$ images are returned according to the Hamming distance between the query image and the database. The number below each retrieved image indicates the matched categories; a green (red) box indicates matched categories $\geq 75\%$ ($\leq 25\%$). The proposed HyP2 Loss outperforms the state-of-the-art (Kim et al., 2020; Zhang et al., 2020) with fewer contradictions and misclassified results.
## 1\. Introduction
The past decades have witnessed the arrival of the era of big data; floods of
images are uploaded to social platforms and search engines day and night,
which calls for more efficient and accurate image retrieval in multimedia
applications (Feng et al., 2020; Xia et al., 2021; Tu et al., 2021; Cui et
al., 2021; Li et al., 2021). Real-world images typically contain more than one
attribute. Hence, the multi-label retrieval task (Rodrigues et al., 2020; Li
et al., 2021) serves as a crucial and more challenging branch of large-scale
image retrieval.
Hashing techniques (Datta et al., 2008; Zhang and Rui, 2013) are widely used
to accelerate retrieval due to the low storage and computation costs. The
retrieval system can utilize an efficient bit-wise XOR operation to estimate
the distance between hash code pairs. Following the prosperous progress of
Deep Neural Networks (DNNs) in visual recognition, deep hashing (Xia et al.,
2014; Cao et al., 2017) has attracted considerable attention and become one of the
most substantial research topics in the image retrieval community (Wang et
al., 2018; Chen et al., 2021; Wang et al., 2016a). The target of deep hashing
is to project numerous samples into a metric space and then convert
them into compact binary codes through hash functions. The parameterized
networks are optimized such that semantically similar data (_i.e_., images
with the same categories) are well clustered and distributed in the
established metric space. The quality and expressiveness of the metric space
are optimized through elaborately designed loss functions in a supervised manner
during metric learning.
Pair-based methods (Zhao et al., 2015; Lai et al., 2015; Huang et al., 2018;
Lai et al., 2016; Zhang et al., 2020) are predominant in multi-label
retrieval, as they directly consider data-to-data connections in a mini-batch.
However, such approaches confront prohibitively high training complexity,
requiring quadratic (Zhang et al., 2020; Chopra et al., 2005; Hadsell et al., 2006;
Harwood et al., 2017; Bromley et al., 1993) or even higher (Schroff et al.,
2015; Sohn, 2016; Song et al., 2016) complexity w.r.t. the number of training
samples. Furthermore, the data-to-data correlations in a mini-batch can
deteriorate the robustness and degrade the learned metric space because of
increased overfitting risks and training instability.
Compared to pair-based methods, proxy-based methods (Kim et al., 2020;
Movshovitz et al., 2017), which embed samples into a proxy-centered hypersphere
space, are more efficient and effective in single-label retrieval. However,
in multi-label scenarios, we observe and theoretically prove that they are
limited in expressing profound correlations. With the exponential growth of
label combinations, the inclusive (_i.e_., relevant categories)
and exclusive (_i.e_., irrelevant categories) relations cannot be well
established under proxy-loss supervision. Hence, proxy-based methods exhibit
conflicts (see Fig. 2) and are unsatisfactory in such scenarios.
To overcome the above weaknesses, we propose Hybrid Proxy-Pair Loss (HyP2
Loss) to embed samples into an expressive hyperspace that establishes abundant label
correlations with efficient training complexity. Concretely, we conceive
the first part of HyP2 Loss by setting a learnable proxy for each category
such that the established metric space roughly clusters samples of similar
categories. Note that simply adding the pair loss would introduce overwhelming
training complexity and is fruitless for performance gains. We instead
design additional irrelevant pair constraints as the second part to compensate
for the missing multi-label data-to-data correlations, which enables HyP2
Loss to alienate irrelevant samples and avoid attribute conflicts effectively.
Finally, we design the overall loss function with a learning algorithm that
ensures more efficient training complexity than pair-based methods, since the
predominant part during training is linearly correlated to the total training
dataset. The elaborately designed loss functions and training framework
guarantee that the established metric space contains expressive multi-label
correlations.
To justify the effectiveness and efficiency of our framework, we conduct
comprehensive experiments on four multi-label benchmark datasets, _i.e_.,
Flickr-25k (Huiskes and Lew, 2008), VOC-2007 (Everingham et al., 2010),
VOC-2012 (Everingham et al., 2010), and NUS-WIDE (Chua et al., 2009). Compared
to existing state-of-the-art (Hoe et al., 2021; Zhang et al., 2020; Yuan et
al., 2020; Huang et al., 2018), the proposed method outperforms existing
techniques among different hash bits and backbones quantitatively and
qualitatively with better convergence speed and stability. Additionally, in-
depth ablation studies and visualization results justify the effectiveness of
our mechanism and the designed loss function.
To summarize, our main contributions are three-fold:
  * •
We prove that the hypersphere metric space established by existing proxy-based
methods is limited in expressing the profound inclusive and exclusive
relations in multi-label scenarios. To the best of our knowledge, this is the
first work to theoretically analyze the upper bound of the distinguishable
hypersphere number in the metric space for the multi-label image retrieval task.
  * •
We propose HyP2 Loss, a novel loss function that integrates the efficient time
complexity of proxy-based methods and the strong data correlations of pair-based
methods. In particular, the elaborately designed multi-label proxy loss,
irrelevant pair loss, and overall learning framework contribute to
embedding multi-label images into an expressive metric space.
  * •
We conduct extensive experiments on four benchmarks to demonstrate the
superiority and robustness of our proposed HyP2 Loss, which is also applicable
to various deep hashing methods and backbones. HyP2 Loss outperforms the
existing state-of-the-art in both convergence speed and retrieval accuracy.
## 2\. Related Work
Many hashing algorithms (Datta et al., 2008; Zhang and Rui, 2013; Gong et al.,
2013; Weiss et al., 2008; Liu et al., 2012; Shen et al., 2015) have been
proposed to obtain compact binary codes, which are seminal solutions to reduce
the storage and calculation overhead in large-scale image retrieval. Deep
hashing (Chen et al., 2021; Wang et al., 2016a; Xia et al., 2014; Cao et al.,
2017) has become the mainstream owing to the superiority of CNNs (LeCun et al.,
1998; Krizhevsky et al., 2017; Szegedy et al., 2015) in feature extraction,
especially in scenarios where images are associated with more than one
attribute. Common to all is the design of loss functions that establish
a powerful metric space and precise hashing positions.
Pair-based Methods. Pair-based methods (Zhao et al., 2015; Lai et al., 2016;
Wu et al., 2017; Huang et al., 2018; Zhang et al., 2020; Ma et al., 2021a) are
predominant in multi-label retrieval (Rodrigues et al., 2020), which focus on
exploring data-to-data relations from the paired samples through metric
learning. In the field of image retrieval, Contrastive loss (Chopra et al.,
2005; Hadsell et al., 2006) innovatively determines the gradient descent
directions by estimating the similarity between feature vector pairs. Based on
it, CNNH (Xia et al., 2014) and DPSH (Li et al., 2016) utilize CNNs to extract
the features of given images. To mitigate the local-optimum risks in pair-wise
loss (Li et al., 2016), Triplet Loss (Schroff et al., 2015; Wang et al.,
2016b) associates the anchor with one positive and one negative sample in the
loss calculation process.
Recently, researchers have concentrated on exploring the profound attribute
correlations in challenging multi-label retrieval (Rodrigues et al., 2020).
The seminal work DSRH (Zhao et al., 2015) introduces a CNN-based Triplet Loss to
estimate the semantic distance according to the sorted labels. IAH (Lai et
al., 2016) divides hash codes into groups to separately excavate instance-
aware image representations. DMSSPH (Wu et al., 2017) and RCDH (Ma et al.,
2021a) further improve performance by considering semantic similarity
through supervision on grouped labels and additional regularization. Pair-based
methods fully excavate the data correlations and exhibit satisfactory
performance. However, their training complexity generally grows quadratically or
cubically with the number of images in the entire large-scale dataset. Hence, such
approaches suffer from high computational cost and convergence difficulty,
which are even more severe and inescapable in multi-label scenarios.
Proxy-based Methods. To address the challenging issues in pair-based methods,
proxy-based methods are proposed to improve model robustness with efficient
training complexity in single-label scenarios. Some methods (Yuan et al.,
2020; Hoe et al., 2021; Fan et al., 2020) attempt to alleviate the training
difficulty by fixing manually-selected or predefined hash centers. Such
predefined centers are regarded as specific proxies for corresponding
categories. Hence, the training complexity is affordable because each sample
only interacts with a few class proxies.
However, the artificially designed proxies ignore the intra-class and
inter-class semantic relationships. To fill this gap, existing state-of-the-
art (Movshovitz et al., 2017; Aziere and Todorovic, 2019; Kim et al., 2020)
regard the hash centers as trainable parameters. Proxy NCA (Movshovitz et al.,
2017) calculates the distance between each proxy and positive & negative
samples, while Manifold Proxy Loss (Aziere and Todorovic, 2019) improves
performance with the manifold-aware distance to measure the semantic
similarity. Proxy Anchor Loss (Kim et al., 2020) integrates the advantages of
both pair-based and proxy-based schemes with the Log-Sum-Exp function. It
individually considers the distances between different samples and proxies to
tackle the hard-pair challenges.
Although proxy-based methods achieve performance better than or on par with
pair-based ones, with faster convergence speed and promising training overhead,
on single-label datasets (Krizhevsky et al., 2009; Russakovsky et
al., 2015), they often fail in multi-label scenarios and hence have not
been fully investigated there. We elaborate on the reasons and propose our
novel HyP2 Loss solution below.
## 3\. Methodology
### 3.1. Task Definition
We are given a training set
$\mathscr{D}_{M}\coloneqq\{({\bm{x}}_{i},\bm{y}_{i})\}_{i=1}^{M}$ composed
of $M$ data points
$\mathcal{X}\coloneqq\{\bm{x}_{i}\}_{i=1}^{M}\in\mathds{R}^{D\times M}$ and
corresponding labels
$\mathcal{Y}\coloneqq\{\bm{y}_{i}\}_{i=1}^{M}\in\{0,1\}^{C\times M}$,
where $D$ represents the resolution of the images and $C$ denotes the number
of categories. The image $\bm{x}_{i}$ contains the attribute of class
$y_{j}$ iff ${\bm{y}_{i}}_{(j)}=1$. In multi-label scenarios, each sample
contains at least one attribute, _i.e_.,
$\sum\nolimits_{j=1}^{C}{\bm{y}_{i}}_{(j)}\geq 1$. The target of deep
hashing is to learn a feature extractor $\mathcal{F}$ parameterized by
$\Theta$ that encodes each data point $\bm{x}_{i}$ into a compact $K$-bit
feature vector
$\bm{v}_{i}\coloneqq\mathcal{F}_{\Theta}(\bm{x}_{i})\in\mathds{R}^{K}$ in the
metric space, which is then mapped into a $K$-bit binary hash code
$\bm{b}_{i}\coloneqq\mathcal{H}(\bm{v}_{i})\in\{-1,1\}^{K}$ through the
hashing function $\mathcal{H}$ in the Hamming space. Hence, the image-wise
similarity is preserved in the Hamming space. For a given query image
$\bm{x}_{q}$, we sort the hash codes of all samples in the database
according to their Hamming distance and return the Top-$N$ images as the
query results. The core challenge of this task is to learn a reliable feature
extractor $\mathcal{F}_{\Theta}^{*}$ that clusters images of different categories
into proper and distinguishable hash positions.
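To make the retrieval pipeline concrete, the following is a minimal PyTorch sketch of the binarization and Hamming-ranking steps described above; the sign-based hashing function and all variable names are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def hash_codes(v: torch.Tensor) -> torch.Tensor:
    """Binarize K-dim feature vectors v (N, K) into codes in {-1, +1}^K."""
    return torch.sign(v)

def retrieve_top_n(q: torch.Tensor, db: torch.Tensor, n: int) -> torch.Tensor:
    """Rank database codes db (M, K) by Hamming distance to query code q (K,).

    For codes in {-1, +1}^K, Hamming distance = (K - <b_q, b_i>) / 2, so
    ranking by inner product is equivalent and needs no bit-wise operations.
    """
    dist = 0.5 * (q.numel() - db @ q)
    return torch.argsort(dist)[:n]  # indices of the Top-N database images
```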
Figure 2. Difference and novelty of HyP2 Loss _vs_. previous losses. (a) Pair loss. (b) Proxy loss. (c) Illustration of the proposed HyP2 Loss. (d) Retrieval results. Pair-based methods require quadratic or even cubic complexity w.r.t. the whole dataset. Proxy-based methods are efficient but only consider proxy-to-data relations; they ignore data-to-data relations and exhibit conflicts in multi-label retrieval. The novel HyP2 Loss extends proxy loss into multi-label scenarios to construct a powerful metric space beyond hypersphere ones: the Multi-label Proxy Loss $\mathcal{L}_{proxy}$ establishes a proper metric space with training efficiency guaranteed, while the elaborately designed irrelevant pair constraint $\mathcal{L}_{neg\_pair}$ alienates irrelevant pairs to address conflicts between $(\bm{v}_{1},\bm{v}_{2})$. Given the query, proxy-based methods misclassify irrelevant samples into the Top-$N$ results; as a comparison, HyP2 Loss achieves more accurate top returned retrieval results.
### 3.2. Motivation
Upper Bound of Distinguishable Hypersphere Number in Metric Space. Metric
learning serves as the substantial procedure of deep hashing. It focuses on
embedding 2D images that consist of various attributes into the
$K$-dimensional metric space (_i.e_., through their feature vectors $\bm{v}$),
where similar samples (_i.e_., with closer categories) should get clustered.
For image retrieval, it is challenging to establish a precise mapping that
encodes the input images into the ideal metric space, especially on large-
scale datasets (Russakovsky et al., 2015) where training complexity must be
considered.
In single-label retrieval (_e.g_., ImageNet (Russakovsky et al., 2015), CIFAR
(Krizhevsky et al., 2009)), the label correlations are restricted to one
positive label against all negative others. Hence, proxy-based methods (Movshovitz et
al., 2017; Aziere and Todorovic, 2019; Kim et al., 2020) are effective at
embedding samples around the class proxies in the metric space. Ultimately,
samples tend to distribute within a hypersphere centered at predefined or
learnable proxies, where similar proxies are close while heterogeneous ones
are well alienated. In multi-label retrieval, however, the number of inclusive and
exclusive relations (_i.e_., the relevant and irrelevant categories the metric
space must distinguish) is exponential w.r.t. the category number $C$, and the
isotropic (_i.e_., perfectly symmetrical) hypersphere has an inherent
side-effect in such scenarios. When a sample is associated with more than two
labels, the label correlations cannot be fully expressed in the hypersphere
space. Specifically, we have the following Theorem 3.1 illustrating the
upper bound of the distinguishable hypersphere number $\Omega(K,C)$ (or,
equivalently, the maximum number of inclusive and exclusive relations) in
multi-label scenarios. Please refer to the supplementary material for the proof.
###### Theorem 3.1.
For the $K$-dimensional metric space $\mathds{R}^{K}$ with $C$ hyperspheres
$\mathds{S}\subset\mathds{R}^{K}$, the distinguishable hypersphere
number $\Omega(K,C)$ cannot reach the ideal
$\Omega^{*}(K,C)=\sum\nolimits_{c=0}^{C}\tbinom{C}{c}=2^{C}$ when $C>K+1$. Its
upper bound is limited to:
(1)
$\tilde{\Omega}(K,C)=\mathop{\sup}\nolimits_{\mathds{S}}{\Omega(K,C)}=\tbinom{C-1}{K}+\sum\nolimits_{k=0}^{K}\tbinom{C}{k}<2^{C}$
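The gap in Eq. (1) is easy to evaluate numerically. Below is a small Python sketch that transcribes the bound directly; the sample $(K,C)$ values are illustrative only.

```python
from math import comb

def distinguishable_upper_bound(K: int, C: int) -> int:
    """Upper bound on the distinguishable hypersphere number, Eq. (1)."""
    return comb(C - 1, K) + sum(comb(C, k) for k in range(K + 1))

# Once C > K + 1, the bound falls short of the ideal 2^C relations.
for K, C in [(2, 4), (12, 21), (48, 81)]:
    print(f"K={K}, C={C}: bound={distinguishable_upper_bound(K, C)}, ideal={2 ** C}")
```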
Embedding Position Conflicts in Multi-label Scenarios. Another limitation of
proxy-based methods in multi-label scenarios is that some irrelevant samples
(_i.e_., without any shared categories) are inevitably embedded into
nearby positions. The primary reason is that the proxy loss only considers the
proxy-to-data distance and misses data-to-data constraints; irrelevant
samples associated with different attributes can therefore be encoded into
close positions near the midpoints of their category proxies.
The proxy loss enforces samples with multiple attributes to be embedded in the
middle of their proxies, because such a hash position ensures that images with
an identical multi-label set are retrieved first by a query image, followed by
images containing partially overlapping attributes. Hence, proxy loss
encourages multi-label samples to converge near the middle of their proxies to
achieve the optimal solution. Intuitively, as illustrated in Fig. 2(c),
suppose the proxy set
$\mathscr{P}_{4}=\{\bm{p}_{1},\cdots,\bm{p}_{4}\}$ is well embedded into the
metric space, where the proxy loss between any sample $(\bm{x},\bm{y})$ and
$\mathscr{P}$ has converged. Then image $\bm{x}_{1}$ associated with
$y_{1},y_{2}$ (_e.g_., dog & cat) will be embedded into the middle of
$\bm{p}_{1}$ and $\bm{p}_{2}$, while image $\bm{x}_{2}$ with $y_{3},y_{4}$
(_e.g_., cow & sheep) will be embedded into the middle of $\bm{p}_{3}$ and
$\bm{p}_{4}$. However, the missing data-to-data correlations ignore the
attribute conflicts between the irrelevant $\bm{x}_{1}$ and $\bm{x}_{2}$.
Although proxy loss explicitly alienates $\bm{x}_{1}$ from
$\bm{p}_{3}$ and $\bm{p}_{4}$, $\bm{x}_{1}$ is not guaranteed to stay away
from the middle of $\bm{p}_{3}$ and $\bm{p}_{4}$, which is exactly the
embedding position of $\bm{x}_{2}$. Hence, the dissimilar samples $\bm{x}_{1}$
and $\bm{x}_{2}$ are entangled in the metric space. For a given query
$\bm{x}_{q}$ with attributes $y_{1},y_{2}$, $\bm{x}_{1}$ may be retrieved
first, followed by the closest sample $\bm{x}_{2}$, while some relevant
samples $(\bm{x}_{3},y_{1})$ or $(\bm{x}_{4},\{y_{1},y_{4}\})$ are ignored.
Such misclassification conflicts become more prominent in multi-label
datasets; see Fig. 1 for examples in which the proxy-based method wrongly
retrieves results for the given multi-label images.
### 3.3. Hybrid Proxy-Pair Loss
Sec. 3.2 reveals the primary reasons that proxy-based methods are
unsatisfactory in multi-label scenarios: simply embedding samples
around a proxy-centered hypersphere cannot comprehensively capture
the combinations among various categories, and the proxy-to-data
supervision ignores data-to-data attribute conflicts. As a result, some
crucial label correlations may not be well expressed, especially on large-
scale datasets with limited $K$-bit hash codes.
The above observations motivate us to consider the data-to-data relations that
contribute to a powerful metric space representing the correlations among
various attributes. To avoid constraining the metric space to an isotropic
hypersphere, and without loss of training efficiency, we propose Hybrid
Proxy-Pair Loss (HyP2 Loss) for metric learning, which extends the proxy loss into
challenging multi-label scenarios while compensating for the local-optimum and
overfitting risks of pair loss in exploring data-to-data relations. The
carefully designed HyP2 Loss yields a superior metric space that fully expresses
the profound label correspondences. The overall framework of HyP2 Loss is
illustrated in Fig. 2, and the details of each component are elaborated as
follows.
Multi-label Proxy Loss. Firstly, we set $C$ learnable proxies
$\mathscr{P}_{C}=\{\bm{p}_{1},\cdots,\bm{p}_{C}\}$, where each
$\bm{p}_{i}\in\mathscr{P}_{C}$ is a compact $K$-bit vector exclusive to its
category. For a given feature vector
$\bm{v}_{i}\coloneqq\mathcal{F}_{\Theta}(\bm{x}_{i})$ and corresponding label
$\bm{y}_{i}$, the energy term between $\bm{v}_{i}$ and $\bm{p}_{j}$ is
$\cos_{+}(\bm{v}_{i},\bm{p}_{j})\coloneqq-\cos\langle\bm{v}_{i},\bm{p}_{j}\rangle=-\frac{|\bm{v}_{i}\cdot\bm{p}_{j}|}{|\bm{v}_{i}|\cdot|\bm{p}_{j}|}$
iff they form a positive pair, _i.e_., ${\bm{y}_{i}}_{(j)}=1$. Otherwise,
$\bm{v}_{i}$ and $\bm{p}_{j}$ form a negative pair, and the energy term is defined
as
$\cos_{-}(\bm{v}_{i},\bm{p}_{j})\coloneqq\left(\cos\langle\bm{v}_{i},\bm{p}_{j}\rangle-\zeta\right)_{+}=\max\left(\frac{|\bm{v}_{i}\cdot\bm{p}_{j}|}{|\bm{v}_{i}|\cdot|\bm{p}_{j}|}-\zeta,0\right)$,
where $\zeta=\zeta(C,K)$ is a margin term that follows HHF (Xu et al., 2021).
Then, the first term of HyP2 Loss is designed as the Multi-label Proxy Loss
$\mathcal{L}_{proxy}$, which only optimizes the distance between proxies and
samples, as Eq. 2 illustrates.
(2) $\begin{aligned}
\mathcal{L}_{proxy}(\mathscr{D}_{M},\mathscr{P}_{C})&=\frac{\sum\nolimits_{i=1}^{M}\sum\nolimits_{j=1}^{C}\mathds{1}({\bm{y}_{i}}_{(j)}=1)\cos_{+}(\bm{v}_{i},\bm{p}_{j})}{\sum\nolimits_{i=1}^{M}\sum\nolimits_{j=1}^{C}\mathds{1}({\bm{y}_{i}}_{(j)}=1)}\\
&+\frac{\sum\nolimits_{i=1}^{M}\sum\nolimits_{j=1}^{C}\mathds{1}({\bm{y}_{i}}_{(j)}=0)\cos_{-}(\bm{v}_{i},\bm{p}_{j})}{\sum\nolimits_{i=1}^{M}\sum\nolimits_{j=1}^{C}\mathds{1}({\bm{y}_{i}}_{(j)}=0)}\end{aligned},$
where $\mathds{1}(\cdot)$ is the indicator function that equals $1$ ($0$)
iff $(\cdot)$ is True (False). The denominator terms balance $\cos_{+}(\cdot)$
and $\cos_{-}(\cdot)$, avoiding the gradient bias introduced by the
overabundance of negative pairs. The Multi-label Proxy Loss establishes a
primary metric space ensuring that samples are distributed among the cluster
centers (_c.f_. Fig. 4), where correlated labels are properly clustered
and irrelevant sample-proxy pairs are roughly alienated.
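As a concrete illustration, the following is a minimal PyTorch sketch of Eq. 2 over a mini-batch. The fixed scalar margin `zeta` is a simplifying assumption (the paper's $\zeta(C,K)$ follows HHF), and all function and variable names are ours.

```python
import torch
import torch.nn.functional as F

def proxy_loss(v: torch.Tensor, proxies: torch.Tensor, y: torch.Tensor,
               zeta: float = 0.1) -> torch.Tensor:
    """Multi-label Proxy Loss (Eq. 2): v (B, K), proxies (C, K), y (B, C) in {0, 1}."""
    # Cosine similarity between every sample and every class proxy.
    sim = F.normalize(v, dim=1) @ F.normalize(proxies, dim=1).T  # (B, C)
    pos, neg = y.bool(), ~y.bool()
    # Positive pairs: pull each sample toward all of its class proxies.
    l_pos = (-sim[pos]).sum() / pos.sum().clamp(min=1)
    # Negative pairs: push apart only those still inside the margin zeta.
    l_neg = (sim[neg] - zeta).clamp(min=0).sum() / neg.sum().clamp(min=1)
    return l_pos + l_neg
```

The separate positive/negative normalizers mirror the denominators in Eq. 2 that balance the two terms against the overabundance of negative pairs.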
Irrelevant Pair Loss. Secondly, we focus on exploring data-to-data
correlations to explicitly enforce the alienation of irrelevant samples. To
achieve this, we define irrelevant pairs as follows: $(\bm{v}_{i},\bm{v}_{j})$
with labels $(\bm{y}_{i},\bm{y}_{j})$ form an irrelevant pair iff
$|\bm{y}_{i}\cdot\bm{y}_{j}|=0$ and $|\bm{y}_{i}|>1,|\bm{y}_{j}|>1$. Note that
the total number of such irrelevant pairs is far smaller than the total $M\times
M$ pairs; the ratio is reported as $\eta$ in Tab. 2. Suppose the subset
$\mathscr{D}_{M^{\prime}}^{\prime}\coloneqq\{({\bm{x}}_{i},\bm{y}_{i})\}_{i=1}^{M^{\prime}}\subseteq\mathscr{D}_{M}$
is composed of the $M^{\prime}$ samples that each contain more than one
category. Then the second term of HyP2 Loss is the Irrelevant Pair Loss
$\mathcal{L}_{neg\_pair}$, defined as Eq. 3.
(3)
$\mathcal{L}_{neg\_pair}(\mathscr{D}_{M^{\prime}}^{\prime})=\frac{\sum\nolimits_{i=1}^{M^{\prime}}\sum\nolimits_{j=1}^{M^{\prime}}\mathds{1}(|\bm{y}_{i}\cdot\bm{y}_{j}|=0)\cos_{-}(\bm{v}_{i},\bm{v}_{j})}{\sum\nolimits_{i=1}^{M^{\prime}}\sum\nolimits_{j=1}^{M^{\prime}}\mathds{1}(|\bm{y}_{i}\cdot\bm{y}_{j}|=0)},$
where $\cos_{-}(\bm{v}_{i},\bm{v}_{j})$ indicates the pair-wise similarity of
the given irrelevant samples. Compared to the entire $M\times M$ computation, the
proposed $\mathcal{L}_{neg\_pair}(\mathscr{D}_{M^{\prime}}^{\prime})$ only
considers a limited set of pairs to mine data-to-data correlations, alienating
irrelevant samples effectively without loss of efficiency.
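A mini-batch sketch of Eq. 3 in the same hedged style follows; it reuses the margin form of $\cos_{-}$ from Eq. 2 and assumes binary label rows, with all names ours.

```python
import torch
import torch.nn.functional as F

def irrelevant_pair_loss(v: torch.Tensor, y: torch.Tensor,
                         zeta: float = 0.1) -> torch.Tensor:
    """Irrelevant Pair Loss (Eq. 3): penalize similar codes for samples with
    disjoint label sets, restricted to multi-label samples."""
    vn = F.normalize(v, dim=1)
    sim = vn @ vn.T                              # (B, B) pairwise cosine
    multi = y.sum(dim=1) > 1                     # more than one category
    disjoint = (y.float() @ y.float().T) == 0    # no shared category
    # Self-pairs are excluded automatically since y_i . y_i >= 1 for labeled rows.
    mask = disjoint & multi.unsqueeze(0) & multi.unsqueeze(1)
    if mask.sum() == 0:                          # no irrelevant pair in this batch
        return v.new_zeros(())
    return (sim[mask] - zeta).clamp(min=0).sum() / mask.sum()
```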
Algorithm 1 The training algorithm of the proposed HyP2 Loss for metric learning.
Require: Training dataset $\mathscr{D}_{M}$, hash code length $K$, mini-batch size $B$.
Ensure: Optimized network $\mathcal{F}_{\Theta}^{*}$ and proxy set $\mathscr{P}^{*}_{C}$.
1: Initialize $\Theta\leftarrow\Theta^{(0)}$, $\mathscr{P}\leftarrow\mathscr{P}^{(0)}$, epoch $T\leftarrow 0$;
2: repeat
3: Randomly sample a mini-batch $\mathscr{D}_{B}\coloneqq\{(\bm{x}_{i},\bm{y}_{i})\}_{i=1}^{B}$;
4: Compute feature vector $\bm{v}_{i}\coloneqq\mathcal{F}_{\Theta}^{(T)}(\bm{x}_{i})$ for each $(\bm{x}_{i},\bm{y}_{i})\in\mathscr{D}_{B}$ by forward propagation;
5: Compute Multi-label Proxy Loss $\mathcal{L}_{proxy}(\mathscr{D}_{B},\mathscr{P}^{(T)}_{C})$ via Eq. 2;
6: Compute Irrelevant Pair Loss $\mathcal{L}_{neg\_pair}(\mathscr{D}_{B^{\prime}}^{\prime})$ via Eq. 3;
7: Compute Total Loss $\mathcal{L}_{total}(\mathscr{D}_{B},\mathscr{P}^{(T)}_{C})$ via Eq. 4;
8: Compute gradients $\frac{\partial\mathcal{L}_{total}}{\partial\cos\langle\bm{v}_{i},\bm{p}_{j}\rangle}$ and $\frac{\partial\mathcal{L}_{total}}{\partial\cos\langle\bm{v}_{i},\bm{v}_{j}\rangle}$ via Eq. 6;
9: Update $\Theta^{(T+1)}$ and $\mathscr{P}^{(T+1)}$ by back propagation;
10: $T\leftarrow T+1$;
11: until convergence
12: Return $\Theta^{*}\leftarrow\Theta^{(T)}$, $\mathscr{P}^{*}\leftarrow\mathscr{P}^{(T)}$.
Overall Loss & Gradient of HyP2 Loss. Finally, the overall HyP2 Loss is the
weighted sum of the above two loss terms, giving
$\mathcal{L}_{total}$ as Eq. 4 illustrates.
(4)
$\mathcal{L}_{total}(\mathscr{D}_{M},\mathscr{P}_{C})=\mathcal{L}_{proxy}(\mathscr{D}_{M},\mathscr{P}_{C})+\beta\mathcal{L}_{neg\_pair}(\mathscr{D}_{M^{\prime}}^{\prime}),$
where $\beta$ is a hyperparameter that balances the constraints between the
multi-label proxy term and the irrelevant pair term.
To optimize the parameterized network $\mathcal{F}_{\Theta}$ and proxy
set $\mathscr{P}_{C}$, the objective is to minimize
$\mathcal{L}_{total}$ over the given training set and the learnable proxy set, as
Eq. 5 illustrates.
(5)
$\Theta^{*},\mathscr{P}^{*}=\mathop{\arg\min}_{\Theta,\mathscr{P}}\mathcal{L}_{total}(\mathscr{D}_{M},\mathscr{P}_{C})$
To achieve this, the gradients of the HyP2 Loss in Eq. 4 w.r.t.
$\cos\langle\bm{v}_{i},\bm{p}_{j}\rangle$ and
$\cos\langle\bm{v}_{i},\bm{v}_{j}\rangle$ are given by Eq. 6.
(6)
$\frac{\partial\mathcal{L}_{total}}{\partial\cos\langle\bm{v}_{i},\bm{p}_{j}\rangle}=\begin{cases}-\frac{1}{\sum\nolimits_{i=1}^{M}\sum\nolimits_{j=1}^{C}\mathds{1}({\bm{y}_{i}}_{(j)}=1)},&{\bm{y}_{i}}_{(j)}=1\\ \frac{1}{\sum\nolimits_{i=1}^{M}\sum\nolimits_{j=1}^{C}\mathds{1}({\bm{y}_{i}}_{(j)}=0)},&{\bm{y}_{i}}_{(j)}=0,\ \cos\langle\bm{v}_{i},\bm{p}_{j}\rangle>\zeta\\ 0,&{\bm{y}_{i}}_{(j)}=0,\ \cos\langle\bm{v}_{i},\bm{p}_{j}\rangle\leq\zeta\end{cases}$
$\frac{\partial\mathcal{L}_{total}}{\partial\cos\langle\bm{v}_{i},\bm{v}_{j}\rangle}=\begin{cases}\frac{\beta}{\sum\nolimits_{i=1}^{M^{\prime}}\sum\nolimits_{j=1}^{M^{\prime}}\mathds{1}(|\bm{y}_{i}\cdot\bm{y}_{j}|=0)},&|\bm{y}_{i}\cdot\bm{y}_{j}|=0,\ \cos\langle\bm{v}_{i},\bm{v}_{j}\rangle>\zeta\\ 0,&|\bm{y}_{i}\cdot\bm{y}_{j}|=0,\ \cos\langle\bm{v}_{i},\bm{v}_{j}\rangle\leq\zeta\end{cases}$
Eq. 6 shows that minimizing the HyP2 Loss pulls $\bm{v}_{i}$ and
$\bm{p}_{j}$ together if the two share the same attributes, and
simultaneously separates irrelevant proxy-to-data and data-to-data pairs.
When HyP2 Loss converges, we thus construct the powerful metric space by
mapping the images from the database into continuous feature vectors and
binarizing them into hash codes in the Hamming space for efficient retrieval.
### 3.4. Overview of Learning Algorithm
Training Algorithm. With the novel HyP2 Loss, the constructed metric
space is more powerful than those of existing proxy-based methods (Movshovitz et al.,
2017; Aziere and Todorovic, 2019; Kim et al., 2020) both theoretically and
experimentally, because HyP2 Loss explicitly enforces the established metric
space to consider the data-to-data correspondences, tackling the conflicts
effectively. The training algorithm is presented in Algo. 1.
During training, the standard back-propagation algorithm
(Rumelhart et al., 1986) with mini-batch gradient descent is used to
optimize the network.
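For reference, the following is a minimal PyTorch training-loop sketch mirroring Algo. 1 with the two loss sketches above. The optimizer settings echo Sec. 4.1 (SGD, momentum 0.9, weight decay 5e-4, AlexNet learning rates), while `model`, `loader`, and the random proxy initialization are illustrative assumptions.

```python
import torch

def train_hyp2(model, loader, C: int, K: int, beta: float = 1.0,
               zeta: float = 0.1, epochs: int = 100, device: str = "cuda"):
    """Jointly optimize the network and the C learnable K-dim proxies (Algo. 1)."""
    proxies = torch.nn.Parameter(torch.randn(C, K, device=device))
    opt = torch.optim.SGD(
        [{"params": model.parameters(), "lr": 0.01},
         {"params": [proxies], "lr": 0.001}],
        momentum=0.9, weight_decay=5e-4)
    model.to(device).train()
    for _ in range(epochs):
        for x, y in loader:                            # mini-batch (line 3)
            x, y = x.to(device), y.to(device)
            v = model(x)                               # forward pass (line 4)
            loss = proxy_loss(v, proxies, y, zeta) \
                 + beta * irrelevant_pair_loss(v, y, zeta)  # Eq. 4 (lines 5-7)
            opt.zero_grad()
            loss.backward()                            # back propagation (lines 8-9)
            opt.step()
    return model, proxies
```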
Table 1. Training complexity of HyP2 _vs_. previous state-of-the-art. Note that proxy-based methods are unqualified for multi-label metric learning, while pair-based methods require $\mathcal{O}(M^{2})$ or even $\mathcal{O}(M^{3})$ time complexity. As a comparison, HyP2 has a more efficient training complexity since the irrelevant pairs are a minority of the whole dataset.
Type | Method | Time Complexity
---|---|---
Proxy | Proxy NCA (Movshovitz et al., 2017) | $\mathcal{O}(MC)$
| Proxy Anchor (Kim et al., 2020) | $\mathcal{O}(MC)$
| OrthoHash (Hoe et al., 2021) | $\mathcal{O}(MC)$
| SoftTriple (Qian et al., 2019) | $\mathcal{O}(MCk^{2})$
Pair | Contrastive (Chopra et al., 2005; Hadsell et al., 2006; Bromley et al., 1993) | $\mathcal{O}(M^{2})$
| HashNet (Cao et al., 2017) | $\mathcal{O}(M^{2})$
| DHN (Zhu et al., 2016) | $\mathcal{O}(M^{2})$
| IDHN (Zhang et al., 2020) | $\mathcal{O}(M^{2})$
| Triplet (Smart) (Harwood et al., 2017) | $\mathcal{O}(M^{2})$
| Triplet (Semi-Hard) (Schroff et al., 2015) | $\mathcal{O}(M^{3}/B^{2})$
| $N$-pair (Sohn, 2016) | $\mathcal{O}(M^{3})$
| Lifted Structure (Song et al., 2016) | $\mathcal{O}(M^{3})$
Ours | HyP2 Loss | $\mathcal{O}(MC+\eta M^{2})$
Table 2. Statistics of four benchmarks, where $\bm{\eta}$ indicates the ratio
of irrelevant sample pairs with multiple labels to all sample pairs in the
dataset.
Datasets | # Dataset | $C$ | # Database | # Train | # Query | $\eta$
---|---|---|---|---|---|---
Flickr-25k | 24,581 | 38 | 19,581 | 4,000 | 1,000 | 0.286
NUS-WIDE | 195,834 | 21 | 183,234 | 10,500 | 2,100 | 0.242
VOC-2007 | 9,963 | 20 | 5,011 | 5,011 | 4,952 | 0.062
VOC-2012 | 11,540 | 20 | 5,717 | 5,717 | 5,823 | 0.055
Time Complexity Analysis. The proposed method converges faster and is proven
more efficient and stable than pair-based methods (Zhao et al., 2015;
Lai et al., 2016; Huang et al., 2018; Ma et al., 2021a; Zhang et al., 2020),
as we justify in Fig. 3. Below we analyze the training complexity of
HyP2 Loss. $M$, $C$, $B$, and $k$ denote the training sample number,
category number, mini-batch size, and the number of proxies per category,
respectively. $\eta$ is specific to HyP2 Loss and indicates the
ratio of irrelevant sample pairs with multiple labels to all $M\times M$ pairs
in the dataset. We omit $k\equiv 1$ in single-proxy methods (Movshovitz et
al., 2017; Kim et al., 2020) and ours for simplicity; $k$ is nontrivial for
methods managing multiple proxies per class such as SoftTriple Loss (Qian et al.,
2019).
Tab. 1 comprehensively compares the training complexity of HyP2 Loss (ours) to
state-of-the-art pair-based and proxy-based methods. The complexity of HyP2
Loss is $\mathcal{O}(MC+\eta M^{2})$ since it compares each sample with its
positive and negative proxies and its irrelevant samples (if they exist) in a mini-
batch. More specifically, in Eq. 4, the first summation requires
$M_{+}$ ($M_{-}$) calculations for positive (negative) proxy-to-data pairs,
respectively, so its total training complexity is
$\mathcal{O}(M_{+}C+M_{-}C)=\mathcal{O}(MC)$. The second term requires
$\eta M^{2}$ calculations, one per irrelevant pair. The first term of
HyP2 Loss is linearly correlated with $M$ and $C$, while the second term is
significantly reduced because $\eta$ is generally much less than $1$.
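To make $\eta$ concrete, the fraction reported in Tab. 2 can be estimated directly from the binary label matrix; a one-function sketch (with our naming) follows. For datasets as large as NUS-WIDE, the $M\times M$ mask would in practice be accumulated blockwise rather than materialized at once.

```python
import torch

def irrelevant_ratio(y: torch.Tensor) -> float:
    """Fraction of irrelevant multi-label pairs among all M x M pairs (eta).

    y: (M, C) binary label matrix. A pair counts iff the two label sets are
    disjoint and both samples carry more than one label.
    """
    multi = y.sum(dim=1) > 1
    disjoint = (y.float() @ y.float().T) == 0
    mask = disjoint & multi.unsqueeze(0) & multi.unsqueeze(1)
    return mask.float().mean().item()
```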
## 4\. Experiment
Table 3. mAP performance by Hamming Ranking for different hash bits ($K\in\{12,24,36,48\}$) on Flickr-25k (mAP@1,000) and NUS-WIDE (mAP@5,000) with AlexNet. ∗: reported results under the same experimental settings from (Zhang et al., 2020). †: our reproduced results through the publicly available models. Bold (underlined) values indicate the best (second best).
Method | Pub. | Flickr-25k: 12 | 24 | 36 | 48 | avg. $\Delta$ | NUS-WIDE: 12 | 24 | 36 | 48 | avg. $\Delta$
---|---|---|---|---|---|---|---|---|---|---|---
DLBHC (Lin et al., 2015)∗ | CVPR'15 | 0.724 | 0.757 | 0.757 | 0.776 | - | 0.570 | 0.616 | 0.621 | 0.635 | -
DQN (Cao et al., 2016)∗ | AAAI'16 | 0.809 | 0.823 | 0.830 | 0.827 | 0.069 | 0.711 | 0.733 | 0.745 | 0.749 | 0.124
DHN (Zhu et al., 2016)† | AAAI'16 | 0.817 | 0.831 | 0.829 | 0.851 | 0.079 | 0.720 | 0.742 | 0.741 | 0.749 | 0.128
HashNet (Cao et al., 2017)∗ | ICCV'17 | 0.791 | 0.826 | 0.841 | 0.848 | 0.073 | 0.643 | 0.694 | 0.737 | 0.750 | 0.095
DMSSPH (Wu et al., 2017)∗ | ICMR'17 | 0.780 | 0.808 | 0.810 | 0.816 | 0.050 | 0.671 | 0.699 | 0.717 | 0.727 | 0.093
IDHN (Zhang et al., 2020)† | TMM'20 | 0.827 | 0.823 | 0.822 | 0.828 | 0.071 | 0.772 | 0.790 | 0.795 | 0.803 | 0.180
Proxy Anchor (Kim et al., 2020)† | CVPR'20 | 0.796 | 0.831 | 0.834 | 0.853 | 0.075 | 0.767 | 0.802 | 0.809 | 0.815 | 0.188
CSQ (Yuan et al., 2020)† | CVPR'20 | 0.795 | 0.819 | 0.849 | 0.857 | 0.077 | 0.692 | 0.754 | 0.757 | 0.769 | 0.132
OrthoHash (Hoe et al., 2021)† | NeurIPS'21 | 0.837 | 0.869 | 0.877 | 0.891 | 0.115 | 0.770 | 0.802 | 0.810 | 0.825 | 0.191
DCILH (Ma et al., 2021b)† | TMM'21 | 0.852 | 0.879 | 0.884 | 0.888 | 0.122 | 0.775 | 0.793 | 0.797 | 0.804 | 0.182
HyP2 Loss (Ours) | - | 0.845 | 0.881 | 0.893 | 0.901 | 0.127 | 0.794 | 0.822 | 0.831 | 0.843 | 0.212
In this section, we describe the datasets used for evaluation, the test
protocols, and the implementation details. To evaluate our method, we fairly
conduct experiments against existing state-of-the-art (Zhang et al., 2020;
Yuan et al., 2020; Hoe et al., 2021; Huang et al., 2018; Kim et al., 2020) and
previous methods (Lin et al., 2015; Cao et al., 2016; Zhu et al., 2016; Cao et
al., 2017; Wu et al., 2017; Lai et al., 2015; Liu et al., 2019; Lai et al.,
2016) on four standard multi-label benchmarks, and justify the superiority of
the proposed method both quantitatively and qualitatively. Finally, we explore
and conduct in-depth analyses of how each component of the proposed framework
contributes to the performance.
### 4.1. Implementation Details
We implement the proposed method in the PyTorch framework (Paszke et al.,
2019) and train on a single NVIDIA RTX 3090 GPU. We comprehensively adopt
AlexNet (Krizhevsky et al., 2017) and GoogLeNet (Szegedy et al., 2015)
pretrained on ImageNet (Russakovsky et al., 2015) as the backbones to justify
the robustness of the proposed method. We fine-tune the pretrained backbones
for all layers up to the FC layer and map the output layer into
$K$-dimensional hash bits. We adopt stochastic gradient descent (SGD) (Bottou,
2010) to optimize the network with momentum $0.9$ and weight decay $5e-4$. The
initial learning rates for optimizing network $\mathcal{F}_{\Theta}$/proxies
$\mathscr{P}$ are $0.01$/$0.001$ in AlexNet and $0.02$/$0.02$ in GoogLeNet,
respectively. The learning rate decreases by $0.5$ every $10$ epochs with
$100$ epochs in total.
### 4.2. Dataset & Evaluation Metrics
Four standard benchmarks Flickr-25k (Huiskes and Lew, 2008), VOC-2007
(Everingham et al., 2010), VOC-2012 (Everingham et al., 2010), and NUS-WIDE
(Chua et al., 2009) are adopted for evaluation. The statistics of the four
datasets are summarized in Tab. 2, and the detailed descriptions are as
follows.
Flickr-25k. The Flickr-25k dataset contains $25,000$ images. We follow (Zhang
et al., 2020; Lai et al., 2015) to remove the noisy images that do not contain
any labels. The remaining $24,581$ images cover $38$ categories in total.
Among them, $4,000$ samples are randomly selected as the training set and $1,000$
samples as the query set, with the rest of the images constructing the database.
VOC-2007 & VOC-2012. VOC-2007 (VOC-2012) contains $9,963$ ($11,540$) images in
total, each image attaches to a label containing several of the $20$
categories. We follow (Huang et al., 2018) to construct the training set and
database of the two datasets for the experiment separately, with $5,011$
($5,717$) samples in total. The officially provided query set with $4,952$
($5,823$) samples is used for evaluation.
NUS-WIDE. The NUS-WIDE dataset contains $269,648$ images, and each image is
assigned to several of the $81$ categories. We follow (Zhang et al., 2020; Lai et
al., 2015) to select the most frequent $21$ categories and the $195,834$ images
containing these attributes. We randomly select $10,500$ and $2,100$ samples
as the training and query sets, respectively, and the rest of the samples
constitute the database.
Table 4. mAP performance by Hamming Ranking for different hash bits ($K\in\{16,32,48,64\}$) on VOC-2007 (mAP@5,011) and VOC-2012 (mAP@5,717) with GoogLeNet. ∗: reported results under the same experimental settings from (Huang et al., 2018). †: our reproduced results through the publicly available models. Bold (underlined) values indicate the best (second best).
Method | Pub. | VOC-2007: 16 | 32 | 48 | 64 | avg. $\Delta$ | VOC-2012: 16 | 32 | 48 | 64 | avg. $\Delta$
---|---|---|---|---|---|---|---|---|---|---|---
DHN (Zhu et al., 2016)† | AAAI'16 | 0.735 | 0.743 | 0.737 | 0.728 | - | 0.722 | 0.721 | 0.718 | 0.701 | -
NINH (Lai et al., 2015)∗ | CVPR'15 | 0.746 | 0.816 | 0.840 | 0.851 | 0.077 | 0.731 | 0.788 | 0.809 | 0.822 | 0.072
DSH (Liu et al., 2019)∗ | CVPR'16 | 0.763 | 0.767 | 0.769 | 0.775 | 0.033 | 0.753 | 0.766 | 0.776 | 0.782 | 0.054
IAH (Lai et al., 2016)∗ | TIP'16 | 0.800 | 0.862 | 0.878 | 0.883 | 0.120 | 0.794 | 0.844 | 0.862 | 0.864 | 0.126
OLAH (Huang et al., 2018)∗ | TIP'18 | 0.849 | 0.899 | 0.906 | 0.914 | 0.156 | 0.830 | 0.887 | 0.904 | 0.908 | 0.167
IDHN (Zhang et al., 2020)† | TMM'20 | 0.772 | 0.801 | 0.796 | 0.772 | 0.050 | 0.785 | 0.805 | 0.797 | 0.785 | 0.078
Proxy Anchor (Kim et al., 2020)† | CVPR'20 | 0.752 | 0.802 | 0.836 | 0.841 | 0.072 | 0.722 | 0.795 | 0.804 | 0.823 | 0.071
OrthoHash (Hoe et al., 2021)† | NeurIPS'21 | 0.831 | 0.876 | 0.902 | 0.909 | 0.144 | 0.823 | 0.885 | 0.893 | 0.900 | 0.160
HyP2 Loss (Ours) | - | 0.862 | 0.917 | 0.932 | 0.937 | 0.176 | 0.841 | 0.903 | 0.917 | 0.925 | 0.181
Evaluation Protocol. We follow (Jang et al., 2021; Zhao et al., 2015; Lai et
al., 2015; Xu et al., 2021) to employ four metrics for quantitative
evaluation: 1) mean average precision (mAP@$N$); 2) precision w.r.t. the Top-$N$
returned images (Top-$N$ curves); 3) the average $l_{2}$ distance of each
sample to its corresponding cluster center ($d_{intra}$); and 4) the average
$l_{2}$ distance of each cluster to the closest irrelevant cluster center
($d_{inter}$). For mAP@$N$ computation, we select the Top-$N$
images from the ranked retrieval results. A returned image and the
query image are considered similar iff they share at least one label.
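The mAP@$N$ metric under this "share at least one label" relevance rule can be computed as in the following minimal NumPy sketch; the function and variable names are ours, and codes are assumed to be in $\{-1,+1\}^{K}$.

```python
import numpy as np

def mean_average_precision(q_codes, db_codes, q_labels, db_labels, topn):
    """mAP@N over binary hash codes; an image is relevant iff it shares
    at least one label with the query (as in the protocol above)."""
    aps = []
    for q, yq in zip(q_codes, q_labels):
        dist = 0.5 * (q_codes.shape[1] - db_codes @ q)   # Hamming distance
        rank = np.argsort(dist)[:topn]                   # Top-N ranked list
        rel = (db_labels[rank] @ yq) > 0                 # shares >= 1 label
        if rel.sum() == 0:
            aps.append(0.0)
            continue
        prec = np.cumsum(rel) / (np.arange(topn) + 1)    # precision at each rank
        aps.append((prec * rel).sum() / rel.sum())       # average precision
    return float(np.mean(aps))
```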
### 4.3. Quantitative Comparison
Baselines & Settings. We compare the proposed method with 1). standard
baselines, including HashNet (Cao et al., 2017), DMSSPH (Wu et al., 2017), DQN
(Cao et al., 2016), DLBHC (Lin et al., 2015), DHN (Zhu et al., 2016), NINH
(Lai et al., 2015), DSH (Liu et al., 2019), and IAH (Lai et al., 2016), 2).
state-of-the-art deep hashing methods, including IDHN (Zhang et al., 2020),
Proxy Anchor (Kim et al., 2020), CSQ (Yuan et al., 2020), OrthoHash (Hoe et
al., 2021), OLAH (Huang et al., 2018), and DCILH (Ma et al., 2021b). Note that
OLAH and DCILH are two state-of-the-art deep hashing methods specifically
designed for multi-label image retrieval. Besides, Proxy Anchor is the state-
of-the-art proxy-based method. We verify the robustness of such proxy-based
methods in multi-label scenarios to elaborate on how the proposed method
improves the metric space and retrieval performance effectively.
Specifically, to justify the effectiveness of the proposed method, we compare
methods using AlexNet in Flickr-25k and NUS-WIDE among hash bits
$K\in\\{12,24,36,48\\}$ in Tab. 3, and further compare on another backbone
(_i.e_., GoogLeNet) in VOC-2007 and VOC-2012 among $K\in\\{16,32,48,64\\}$ in
Tab. 4, respectively.
Figure 3. Convergence comparisons on Flickr-25k (left) & NUS-WIDE (right) (48 hash bits with AlexNet). x-axis: training time (sec.); y-axis: mAP@1,000 (left) & mAP@5,000 (right) on the query dataset. HyP2 Loss achieves a faster convergence speed with a more stable training process.
Results & Analysis. As Tab. 3 and Tab. 4 illustrate, HyP2 Loss outperforms
existing methods over different hash bits on the four benchmarks, which
justifies the robustness and effectiveness of the proposed method. Notably,
when the hash bits are few (_e.g_., $12$-bit on Flickr-25k/NUS-WIDE and
$16$-bit on VOC-2007/VOC-2012), the proposed method achieves $10.20\%$
performance gains on average compared to Proxy Anchor (Kim et al., 2020), the
state-of-the-art proxy-based method in image retrieval.
This confirms that the metric space established by proxy loss is insufficient to
express the profound label correlations, yielding unsatisfactory mAP and
misclassified retrieval results. As a comparison, HyP2 Loss effectively
improves the metric space through additional constraints that explicitly go
beyond the isotropic hypersphere space, and thus improves the retrieval accuracy
remarkably.
### 4.4. Qualitative Comparison
Convergence Comparison. To demonstrate that the convergence speed of the
proposed method outperforms existing methods, we compare (Zhu et al., 2016;
Zhang et al., 2020; Kim et al., 2020; Hoe et al., 2021) to HyP2 Loss in
Flickr-25k and NUS-WIDE datasets. The visualized results are presented in Fig.
3.
Fig. 3 shows that the proposed method achieves a more stable and faster
convergence speed with higher performance compared to the previous state-of-
the-art. Note that pair-based methods (Zhu et al., 2016; Zhang et al., 2020)
exhibit training disturbance because the loss function is restricted in a
mini-batch and thus lacks generalization, while OrthoHash (Hoe et al., 2021)
confronts overfitting risks. As a comparison, Proxy Anchor (Kim et al., 2020)
and ours show better stability during the whole training process.
(a) DHN (Zhu et al., 2016) (b) IDHN (Zhang et al., 2020) (c) Proxy Anchor (Kim et al., 2020) (d) Pair Loss (e) Proxy Loss (f) HyP2 Loss (Ours)
Figure 4. Visualized t-SNE comparisons on the VOC-2007 dataset with $48$ hash bits. Scatters of the same color indicate the same categories. The red circles mark irrelevant samples (labeled "bus & car" and "person & train" in panels (e) and (f)).
t-SNE Plots. To observe how the proposed method contributes to a better metric
space, we use t-SNE (Van der Maaten and Hinton, 2008) to map the $K$-dimensional
feature vectors onto 2D plots. For each sample, we assign different colors
around its neighbourhood to present its attributes. The visualized
comparisons of DHN (Zhu et al., 2016), IDHN (Zhang et al., 2020), Proxy
Anchor (Kim et al., 2020), and the pair loss and proxy loss baselines against the
proposed HyP2 Loss on the VOC-2007 dataset are illustrated in Fig. 4.
Fig. 4 shows that our method achieves a visually better data distribution,
especially on the confusing samples. The red circles in Fig. 4(e) and Fig.
4(f) show how HyP2 Loss solves the conflicts in proxy loss. Specifically, the
proxy loss improperly embeds irrelevant samples with multiple labels (bus,
car) and (person, train) into nearby positions, which damages the retrieval
accuracy. As a comparison, HyP2 Loss resolves the irrelevant conflicts and
thereby achieves a visually better metric space with superior performance.
Top-$\bm{N}$ curve. To further demonstrate that HyP2 Loss genuinely provides
quality search outcomes, we present the precision for the Top-$N$ retrieved
images in Fig. 5. We can observe that the proposed HyP2 Loss consistently
establishes the state-of-the-art retrieval performance with higher scores over
different hash bits.
Figure 5. Performance of different methods on the Flickr-25k (top) and NUS-WIDE (bottom) datasets. From left to right: Top-$N$ curves (x-axis (Top-$N$): $100\rightarrow 1,000$; y-axis (Precision): $0\rightarrow 1$) w.r.t. $12, 24, 36, 48$ hash bits, respectively. Our HyP2 Loss consistently outperforms previous methods on different datasets among various hash bit lengths.
Table 5. Ablation study & hyperparameter analysis of HyP2 Loss. We report mAP@1,000 (5,011) on Flickr-25k (VOC-2007) to show how the proposed method improves performance. $\beta=1$ ($0.5$) is empirically best for the two datasets.
Method | Flickr-25k: 12 | 24 | 36 | 48 | VOC-2007: 16 | 32 | 48 | 64
---|---|---|---|---|---|---|---|---
Pair Loss | 0.779 | 0.832 | 0.837 | 0.847 | 0.732 | 0.766 | 0.777 | 0.781
Proxy Loss | 0.787 | 0.834 | 0.856 | 0.871 | 0.767 | 0.820 | 0.873 | 0.887
Proxy + Pair Loss | 0.817 | 0.851 | 0.857 | 0.870 | 0.818 | 0.843 | 0.875 | 0.881
HyP2 Loss $(\beta=0.50)$ | 0.838 | 0.876 | 0.893 | 0.896 | 0.862 | 0.917 | 0.932 | 0.937
HyP2 Loss $(\beta=0.75)$ | 0.842 | 0.877 | 0.891 | 0.896 | 0.859 | 0.915 | 0.930 | 0.935
HyP2 Loss $(\beta=1.00)$ | 0.845 | 0.881 | 0.893 | 0.901 | 0.857 | 0.912 | 0.926 | 0.934
HyP2 Loss $(\beta=1.25)$ | 0.841 | 0.877 | 0.891 | 0.897 | 0.843 | 0.909 | 0.927 | 0.935
### 4.5. Ablation Study
To justify how each component of HyP2 Loss contributes to a more powerful
metric space, we conduct in-depth ablation studies on investigating the
effectiveness of Multi-label Proxy Loss, Irrelevant Pair Loss, and the
combination of them with different $\beta$, the results of mAP, $d_{intra}$
and $d_{inter}$ are shown in Tab. 5 and Tab. 6.
As Tab. 5 and Tab. 6 illustrate, either pair loss or proxy loss fails to
achieve satisfactory performance due to its inherent limitations as we
analyzed before, and a simple combination of the two terms with $\beta$ is
invalid but confronts overwhelmed training overhead. Specifically, since
smaller $d_{intra}$ indicates better cluster performance, while larger
$d_{inter}$ indicates better disentangle ability on confusion samples. The
pair loss constructs a sparse metric space that fails to cluster samples
tightly, while the proxy loss fails to distinguish the confusing samples that
introduces misclassified results. As a comparison, the proposed HyP2 Loss not
only achieves remarkable performance gains, but also ensures better
$d_{intra}$ and $d_{inter}$ that establishes superior metric space compared to
others, which demonstrates its robustness and retrieval accuracy.
Table 6. Ablation study of HyP2 Loss. We report $\bm{d_{intra}}$ and
$\bm{d_{inter}}$ in Flickr-25k to show HyP2 Loss establishes a better metric
space that ensures the two metrics simultaneously.
Method | $d_{intra}$ $\downarrow$: 12 | 24 | 36 | 48 | $d_{inter}$ $\uparrow$: 12 | 24 | 36 | 48
---|---|---|---|---|---|---|---|---
Pair Loss | 3.043 | 3.453 | 4.320 | 4.749 | 2.264 | 2.868 | 3.480 | 4.152
Proxy Loss | 2.094 | 2.695 | 3.174 | 3.763 | 2.564 | 3.322 | 4.144 | 4.890
Proxy + Pair Loss | 2.387 | 2.792 | 3.351 | 3.734 | 2.402 | 2.995 | 3.524 | 3.837
HyP2 Loss | 1.859 | 2.395 | 2.877 | 3.237 | 3.633 | 4.331 | 4.560 | 5.202
Finally, since the hyperparameter $\beta$ controls the contribution of each
component in HyP2 Loss, we investigate the influence of different values of
$\beta$ and find that $\beta$ should be adjusted per dataset but is
stable across different hash bits, while HyP2 Loss maintains relatively high
performance over a wide range $\beta\in[0.50,1.25]$. In Tab. 5, our empirical
study shows that HyP2 Loss performs best on Flickr-25k (VOC-2007) when
$\beta=1.00$ ($0.50$), respectively.
## 5\. Conclusion
In this paper, we theoretically analyze the primary reasons that
proxy-based methods are disqualified for multi-label retrieval, and propose
the novel HyP2 Loss, which preserves the efficient training complexity of proxy
loss while adding an irrelevant-pair constraint term that compensates for the
limitations of the hypersphere metric space. We conduct extensive experiments
that justify the superiority of the proposed method on four standard benchmarks
with different backbones and hash bits. Both quantitative and qualitative
results demonstrate that the proposed HyP2 Loss enables fast, reliable, and
robust convergence, and constructs a powerful metric space that improves the
retrieval performance significantly.
Acknowledgment. This work was supported by SZSTC Grant No.
JCYJ20190809172201639 and WDZC20200820200655001, Shenzhen Key Laboratory
ZDSYS20210623092001004.
## Appendix A Appendix
### A.1. Missing Proofs
###### Theorem A.1.
For the $K$-dimensional metric space $\mathds{R}^{K}$ with $C$ hyperspheres
$\mathds{S}\subset\mathds{R}^{K}$, the maximum number of distinguishable
regions ${\Omega}(K,C)$ cannot reach the ideal value
$\Omega^{*}(K,C)=\sum\nolimits_{c=0}^{C}\tbinom{C}{c}=2^{C}$ when $C>K+1$. The
upper bound is limited to:
(7)
$\tilde{\Omega}(K,C)=\mathop{\sup}\nolimits_{\mathds{S}}{{\Omega}(K,C)}=\tbinom{C-1}{K}+\sum\nolimits_{k=0}^{K}\tbinom{C}{k}<2^{C}$
###### Proof.
To begin, we deduce the ideal number of distinguishable regions
$\Omega^{*}(K,C)$ in the $K$-dimensional space with $C$ categories. Note that
each hypersphere encloses one category’s cluster center. Ideally, every
subset of $c\in\{0,\cdots,C\}$ hyperspheres should correspond to a
distinguishable region, and the number of such subsets, denoted $n(K,c)$, is:
(8) $n(K,c)=\tbinom{C}{c}$
Hence, according to the binomial theorem, the ideal number of distinguishable
regions for $C$ hyperspheres is the sum over all $c\in\{0,\cdots,C\}$, as
Eq. 9 illustrates.
(9) $\Omega^{*}(K,C)=\sum\nolimits_{c=0}^{C}\tbinom{C}{c}=2^{C}$
Then, note that the ideal distinguishable hyperspace number can only get
achieved when the hyperspace is unconstrained (_i.e_., it can have any
intersections to each other). When it comes to $K$-dimensional isotropic
hypersphere, we raise Lemma. A.2 to further demonstrate the upper bound in the
$C$ hypersphere metric space.
###### Lemma A.2.
When $K\geq 2$, two $K$-dimensional hyperspheres intersect in at most one
$(K-1)$-dimensional hypersphere. Specifically, two 2D circles intersect in at
most one $1$-dimensional hypersphere, i.e., a pair of points forming the
boundary of a line segment.
We denote by $\Omega(K,C)$ the maximum number of distinguishable regions
produced by $C$ hyperspheres in $K$-dimensional space; correspondingly, $C-1$
hyperspheres produce at most $\Omega(K,C-1)$ regions. When the $C$-th
hypersphere is added to the existing $C-1$ hyperspheres, according to Lemma
A.2 it intersects each of them in at most one $(K-1)$-dimensional
hypersphere, _i.e_., at most $C-1$ such $(K-1)$-dimensional hyperspheres are
drawn on its surface. These cut the surface of the $C$-th hypersphere into at
most $\Omega(K-1,C-1)$ $(K-1)$-dimensional patches, and each patch bisects
one existing region of the $K$-dimensional space into two parts. So we have
the following recurrence relation:
(10) $\Omega(K,C)=\Omega(K,C-1)+\Omega(K-1,C-1)$
Note that a single $K$-dimensional hypersphere separates the space into only
two regions, _i.e_., the inside and the outside of the hypersphere. Hence, we
have the initial condition that for every $K\geq 2$ and $C=1$:
(11) $\Omega(K,1)\equiv 2$
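As a sanity check (not part of the original proof), the recurrence of Eq. 10
with the base cases of Eq. 11 and the 2D closed form derived below (Eq. 13)
can be evaluated numerically and compared against the closed-form bound of
Eq. 18; a minimal Python sketch for illustration:

```python
from math import comb

def omega(K, C):
    """Maximum number of distinguishable regions cut out by C
    K-dimensional hyperspheres, via the recurrence of Eq. 10 with the
    base cases of Eq. 11 (C = 1) and Eq. 13 (K = 2, derived below)."""
    if C == 1:
        return 2                      # inside / outside of one hypersphere
    if K == 2:
        return C * C - C + 2          # closed form for circles (Eq. 13)
    return omega(K, C - 1) + omega(K - 1, C - 1)

def omega_closed(K, C):
    """Closed-form upper bound of Eq. 18."""
    return comb(C - 1, K) + sum(comb(C, k) for k in range(K + 1))

for K in range(2, 8):
    for C in range(1, 13):
        assert omega(K, C) == omega_closed(K, C)
        # equality up to C = K + 1 (Eq. 19), strict gap beyond (Theorem A.1)
        assert (omega_closed(K, C) == 2 ** C) == (C <= K + 1)
```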
Without loss of generality, we first consider the simplest case, $K=2$, where
the $K$-dimensional hypersphere degenerates into a 2D circle. Suppose there
are $C-1$ circles in the plane and one additional circle is added; to achieve
the maximum number of distinguishable regions, each of the $C-1$ existing
circles should intersect the additional circle.
Then, according to Lemma A.2, the existing circles intersect the additional
circle in at most $(C-1)$ pairs of 1D points. These $2\times(C-1)$ points
divide the additional circle into $2\times(C-1)$ arcs, each of which bisects
an existing region. As a result, the additional circle introduces at most
$2\times(C-1)$ new regions. We can obtain that:
(12) $\displaystyle\Omega(2,C)$ $\displaystyle=\Omega(2,C-1)+2\times(C-1)$
$\displaystyle=\Omega(2,C-2)+2\times(C-1)+2\times(C-2)$ $\displaystyle=\cdots$
$\displaystyle=\Omega(2,1)+2\times(C-1+C-2+\cdots+1)$
According to Eq. 11, we have $\Omega(2,1)=2$, _i.e_., one 2D circle can
represent two distinguishable regions at maximum, then:
(13) $\begin{aligned} \Omega(2,C)&=2+2\times(C-1+C-2+\cdots+1)\\\
&=2+C\times(C-1)\\\ &=C^{2}-C+2\\\
&=\frac{(C-1)(C-2)}{2}+\left(1+C+\frac{C(C-1)}{2}\right)\\\
&=\tbinom{C-1}{2}+\sum\nolimits_{k=0}^{2}\tbinom{C}{k}\end{aligned},$
which satisfies the proposition when $K=2$.
Then, we generalize to the $K$-dimensional situation and prove Theorem A.1 by
mathematical induction below.
• a) Considering the initial situation: for any $K\geq 2,K\in N_{+}$ and
$C=1$, Eq. 11 gives:
(14) $\Omega(K,C)=\Omega(K,1)=2=\tbinom{0}{K}+\sum\nolimits_{k=0}^{K}\tbinom{1}{k},$
which satisfies the proposition when $C=1$. $\triangleleft$
• b) For any $K\geq 2,K\in N_{+}$ and a specific $C\geq 1,C\in N_{+}$, we
assume the upper bound of the distinguishable region number $\Omega(K,C)$
satisfies:
(15) $\Omega(K,C)=\tbinom{C-1}{K}+\sum\nolimits_{k=0}^{K}\tbinom{C}{k}.$ $\triangleleft$
• c) Then, consider $\Omega(K,C+1)$. When $K=2$, Eq. 13 shows that
$\Omega(2,C+1)$ satisfies the proposition. When $K>2$, according to Eq. 10,
we have:
(16) $\displaystyle\Omega(K,C+1)=\Omega(K,C)+\Omega(K-1,C)$
$\displaystyle=\tbinom{C-1}{K}+\sum\nolimits_{k=0}^{K}\tbinom{C}{k}+\tbinom{C-1}{K-1}+\sum\nolimits_{k=0}^{K-1}\tbinom{C}{k}$
$\displaystyle=\tbinom{C-1}{K}+\tbinom{C-1}{K-1}+\tbinom{C}{0}+\sum\nolimits_{k=1}^{K}\tbinom{C}{k}+\sum\nolimits_{k=0}^{K-1}\tbinom{C}{k}$
$\displaystyle=\left(\tbinom{C-1}{K}+\tbinom{C-1}{K-1}\right)+\tbinom{C}{0}+\left(\tbinom{C}{0}+\tbinom{C}{1}\right)+\cdots+\left(\tbinom{C}{K-1}+\tbinom{C}{K}\right)$
Note that $\tbinom{m}{n}=\tbinom{m-1}{n}+\tbinom{m-1}{n-1}$, then we have:
(17) $\begin{aligned} \Omega(K,C+1)&=\tbinom{C}{K}+\tbinom{C}{0}+\tbinom{C+1}{1}+\tbinom{C+1}{2}+\cdots+\tbinom{C+1}{K}\\ &=\tbinom{C}{K}+\sum\nolimits_{k=0}^{K}\tbinom{C+1}{k}\end{aligned},$
which satisfies the proposition for $C+1$. $\triangleleft$
Finally, by mathematical induction, for any $K\geq 2,C\geq 1,K,C\in N_{+}$,
the upper bound of the number of distinguishable regions in $K$-dimensional
metric space is obtained as:
(18)
$\displaystyle\tilde{\Omega}(K,C)=\mathop{\sup}\nolimits_{\mathds{S}}{{\Omega}(K,C)}=\tbinom{C-1}{K}+\sum\nolimits_{k=0}^{K}\tbinom{C}{k}$
When $C=K+1$, the ideal number ${\Omega}^{*}(K,C)$ equals the upper bound
$\tilde{\Omega}(K,C)$, because the hash bit length is large enough to
enumerate all possible label combinations, as Eq. 19 illustrates.
(19) $\displaystyle\tilde{\Omega}(K,C)$
$\displaystyle=\tbinom{K+1-1}{K}+\sum\nolimits_{k=0}^{K}\tbinom{K+1}{k}$
$\displaystyle=\tbinom{K}{K}+\tbinom{K+1}{0}+\cdots+\tbinom{K+1}{K}$
$\displaystyle=\tbinom{K+1}{0}+\cdots+\tbinom{K+1}{K}+\tbinom{K+1}{K+1}$
$\displaystyle=\sum\nolimits_{k=0}^{K+1}\tbinom{K+1}{k}=2^{K+1}$
$\displaystyle=2^{C}={\Omega}^{*}(K,C)$
Then, we will prove that $\tilde{\Omega}(K,C)<{\Omega}^{*}(K,C)$ when $C>K+1$
by mathematical induction below.
• a) Considering the initial situation: for any $K\geq 2,K\in N_{+}$, when
$C=K+2$, according to Eq. 10, we have:
(20) $\displaystyle\Omega(K,C)=\Omega(K,K+2)$
$\displaystyle=\Omega(K,K+1)+\Omega(K-1,K+1)$
$\displaystyle=\Omega(K,K+1)+\Omega(K-1,K)+\Omega(K-2,K)$
$\displaystyle=\cdots$
$\displaystyle=\Omega(K,K+1)+\Omega(K-1,K)+\cdots+\Omega(3,4)+\Omega(2,4)$
As Eq. 13 and Eq. 19 illustrate, we have:
(21) $\displaystyle\Omega(K,C)$ $\displaystyle=2^{K+1}+2^{K}+\cdots+2^{4}+4^{2}-4+2$ $\displaystyle=2^{K+2}-2$ $\displaystyle=2^{C}-2$ $\displaystyle<2^{C}={\Omega}^{*}(K,C),$
which satisfies the proposition when $C=K+2$. $\triangleleft$
• b) For any $K\geq 2,K\in N_{+}$ and a specific $I\geq 2,I\in N_{+}$, let
$C=K+I$; we assume the inequality $\tilde{\Omega}(K,C)<\Omega^{*}(K,C)$
holds:
(22) $\tilde{\Omega}(K,C)=\tilde{\Omega}(K,K+I)<\Omega^{*}(K,K+I)=2^{K+I}.$ $\triangleleft$
• c) Then, when $C=K+I+1$, we have:
(23) $\displaystyle\Omega(K,C)=\Omega(K,K+I+1)$
$\displaystyle=\Omega(K,K+I)+\Omega(K-1,K+I)$
$\displaystyle=\Omega(K,K+I)+\Omega(K-1,K+I-1)+\Omega(K-2,K+I-1)$
$\displaystyle=\cdots$
$\displaystyle=\Omega(K,K+I)+\Omega(K-1,K+I-1)+\cdots+\Omega(3,3+I)$
$\displaystyle\quad+\Omega(2,3+I)$
$\displaystyle<2^{K+I}+2^{K+I-1}+\cdots+2^{3+I}+(3+I)^{2}-(3+I)+2$
$\displaystyle=2^{K+I+1}-2^{3+I}+I^{2}+5I+8$
Note that $2^{3+I}>I^{2}+5I+8$ when $I\geq 2$, then we have:
(24) $\displaystyle\Omega(K,C)$ $\displaystyle<2^{K+I+1}=\Omega^{*}(K,C),$
which satisfies the proposition when $C=K+I+1$. $\triangleleft$
Finally, according to mathematical induction,
$\tilde{\Omega}(K,C)<{\Omega}^{*}(K,C)$ when $C>K+1$.
∎
###### Theorem A.3.
When the proxies have converged to fixed positions (i.e., the angle between
each proxy pair is constant), the best position for a $2$-label sample is the
angular midpoint of its $2$ positive proxies. The $n$-label scenario can be
deduced in a similar fashion.
###### Proof.
Suppose the feature vector $\bm{v_{x}}$ contains $2$ labels $y_{1}$, $y_{2}$,
with corresponding proxies $\bm{p}_{1}$, $\bm{p}_{2}$. Let
$\theta_{1}=\langle\bm{v_{x}},\bm{p}_{1}\rangle$,
$\theta_{2}=\langle\bm{v_{x}},\bm{p}_{2}\rangle$,
$\theta_{3}=\langle\bm{p}_{1},\bm{p}_{2}\rangle$, where
$\theta_{1},\theta_{2},\theta_{3}\in[0,\pi]$.
Note that the effect of negative proxies is negligible near convergence,
because they are far from $\bm{v_{x}}$, so we only consider the gradients
from $\bm{p}_{1}$ and $\bm{p}_{2}$.
Then we have $\mathcal{L}_{+}=-(\cos\theta_{1}+\cos\theta_{2})$, so the
objective is $\arg\max(\cos\theta_{1}+\cos\theta_{2})$ accordingly.
If $\bm{v_{x}}$ is non-coplanar with $\bm{p}_{1}$, $\bm{p}_{2}$, consider its
projection $\bm{v_{x}}^{\prime}$ onto the plane spanned by
$\bm{p}_{1},\bm{p}_{2}$, with corresponding angles $\theta_{1}^{\prime}$,
$\theta_{2}^{\prime}$; then
$\theta_{1}^{\prime}<\theta_{1},\theta_{2}^{\prime}<\theta_{2}\Rightarrow\cos\theta_{1}^{\prime}+\cos\theta_{2}^{\prime}>\cos\theta_{1}+\cos\theta_{2}$.
Hence, the objective is optimized when $\bm{v_{x}},\bm{p}_{1},\bm{p}_{2}$ are
coplanar, so that $\theta_{1}^{\prime}=\theta_{1},\theta_{2}^{\prime}=\theta_{2}$
and $\theta_{3}=\theta_{1}+\theta_{2}$. Then we have:
(25)
$\arg\min\mathcal{L}_{+}=\arg\max((1+\cos\theta_{3})\cos\theta_{1}+\sin\theta_{3}\sqrt{1-\cos^{2}\theta_{1}})$
To obtain the extreme point of $\mathcal{L}_{+}$, let:
(26)
$\frac{\partial\mathcal{L}_{+}}{\partial\cos\theta_{1}}=1+\cos\theta_{3}-\sin\theta_{3}\frac{\cos\theta_{1}}{\sqrt{1-\cos^{2}\theta_{1}}}=0$
Note that $1+\cos\theta_{3}\geq 0$, $\sin\theta_{3}\geq 0$,
$\sqrt{1-\cos^{2}\theta_{1}}\geq 0$, we can get $\cos\theta_{1}\geq 0$. Thus
$\theta_{1}\in[0,\frac{\pi}{2}]$. Then, it is easy to obtain that:
(27)
$\displaystyle\frac{1-\cos^{2}\theta_{3}}{2+2\cos\theta_{3}}=1-\cos^{2}\theta_{1}$
$\displaystyle\Rightarrow\cos^{2}\theta_{1}=1-\frac{1-\cos^{2}\theta_{3}}{2+2\cos\theta_{3}}$
$\displaystyle=\cos^{2}\frac{\theta_{3}}{2}$
Considering the domain of $\theta_{1},\theta_{2},\theta_{3}$, we have
$\cos\theta_{1}=\cos\frac{\theta_{3}}{2}\Rightarrow\theta_{1}=\theta_{2}=\frac{\theta_{3}}{2}$,
_i.e_., $\bm{v_{x}}$ will be embedded into the middle of the two proxies.
Similarly, the conclusion extends to $n$-label scenarios, where the optimum
is attained when $\bm{v_{x}}^{*}$ lies at the angular center of the $n$
positive proxies, as claimed in the main paper.
∎
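As a quick numerical cross-check of Theorem A.3 (an illustration, not part of
the proof), one can scan $\theta_{1}$ over $[0,\theta_{3}]$ for several proxy
separations $\theta_{3}$ and confirm that the coplanar objective
$\cos\theta_{1}+\cos(\theta_{3}-\theta_{1})$ peaks at the angular midpoint:

```python
import numpy as np

# Spot-check: for each fixed proxy angle theta3, the coplanar objective
# cos(theta1) + cos(theta3 - theta1) is maximized at theta1 = theta3 / 2.
for theta3 in np.linspace(0.1, np.pi - 0.1, 7):
    theta1 = np.linspace(0.0, theta3, 200001)
    objective = np.cos(theta1) + np.cos(theta3 - theta1)
    best = theta1[np.argmax(objective)]
    assert abs(best - theta3 / 2) < 1e-4, (theta3, best)
```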
# First-principles calculation of the electronic and optical properties of
Gd2FeCrO6 double perovskite: Effect of Hubbard U parameter
Subrata Das a,b, M. D. I. Bhuyan a and M. A. Basith a <EMAIL_ADDRESS>
a Department of Electrical and Electronic Engineering, Bangladesh University
of Engineering and Technology, Dhaka-1000, Bangladesh; b Nanotechnology
Research Laboratory, Department of Physics, Bangladesh University of
Engineering and Technology, Dhaka-1000, Bangladesh.
DOI: 10.1016/j.jmrt.2021.06.026
###### Abstract
We have synthesized Gd2FeCrO6 (GFCO) double perovskite which crystallized in
monoclinic structure with P21/n space group. The UV-visible and
photoluminescence spectroscopic analyses confirmed its direct band gap
semiconducting nature. Here, by employing experimentally obtained structural
parameters in first-principles calculation, we have reported the spin-
polarized electronic band structure, charge carrier effective masses, density
of states, electronic charge density distribution and optical absorption
property of this newly synthesized GFCO double perovskite. Moreover, the
effects of on-site d-d Coulomb interaction energy (Ueff) on the electronic and
optical properties were investigated by applying a range of Hubbard Ueff
parameter from 0 to 6 eV to the Fe-3d and Cr-3d orbitals within the
generalized gradient approximation (GGA) and GGA+U methods. Notably, when we
applied Ueff in the range of 1 to 5 eV, both the up-spin and down-spin band
structures were observed to be direct. The charge carrier effective masses
were also found to enhance gradually from Ueff = 1 eV to 5 eV, however, these
values were anomalous for Ueff = 0 and 6 eV. These results suggest that Ueff
should be limited within the range of 1 to 5 eV to calculate the structural,
electronic and optical properties of GFCO double perovskite. Finally, we
observed that for Ueff = 3 eV, the theoretically calculated optical band gap
of $\sim$1.99 eV matched well with the experimentally obtained value of
$\sim$2.0 eV. The outcomes of our findings imply that a Ueff value of 3 eV
most accurately localizes the Fe-3d and Cr-3d orbitals of GFCO while keeping the
effect of self-interaction error from the other orbitals almost negligible.
Therefore, we may recommend Ueff = 3 eV for first-principles calculation of
the electronic and optical properties of GFCO double perovskite that might
have potential in photocatalytic and related solar energy applications.
## I Introduction
Over the past few decades, the double perovskite oxides A2BB′O6 (A = rare
earth metal ions, B = transition metal ions) have gained immense research
interest owing to their rich multifunctional characteristics and potential
applications in the next generation spintronic devices Vasala and Karppinen
(2015); Gray _et al._ (2010); Gaikwad _et al._ (2019); Das _et al._ (2008);
Cerón _et al._ (2019); Yin _et al._ (2019). Recently, several double
perovskites have been reported to possess promising applicability in the field
of photocatalysis, photovoltaic devices and photo(electro)chemical energy
storage systems Lin _et al._ (2021); Mohassel _et al._ (2020); Kangsabanik
and Alam (2020); Yin _et al._ (2019). Especially, A2FeCrO6 double perovskites
(A = Pr, Bi etc.) having two 3d transition elements at B and B′ sites have
demonstrated fascinating optoelectronic properties such as favorable band gap
energy and strong absorbance in the visible regime of the solar spectrum Wu
_et al._ (2020); Gaikwad _et al._ (2019); Nechache _et al._ (2015).
Therefore, it is intriguing to investigate other members of the A2FeCrO6
double perovskite family for potential applications in electronics,
photochemistry and optical technologies.
Nevertheless, it is difficult to synthesize a perfectly ordered structure of
A2FeCrO6 double perovskites because of the similar ionic radii of Fe and
Cr ions Gray _et al._ (2010); Nair _et al._ (2014). Hence, the double
perovskites of this family remain less explored compared to analogous
double perovskite materials. Recently, we have successfully synthesized
nanoparticles of double perovskite Gd2FeCrO6 (GFCO) for the first time by
optimizing the synthesis condition of a citrate-based sol-gel technique and
extensively investigated their crystallographic and chemical structure as well
as magnetic and optical behaviors Bhuyan _et al._ (2021). Interestingly, the
favorable surface morphology, optimal direct band gap of $\sim$2.0 eV and the
band edge positions of synthesized GFCO double perovskite have revealed its
promising potential for visible light driven photocatalysis and related
applications. Unfortunately, the complexity of the experimental conditions
and, to some extent, the unavailability of the required experimental set-up
make it difficult to understand their optical and electronic characteristics
at the atomic level. Such impediments can be overcome by systematically
performing density functional theory (DFT) based first-principles
calculations Zhang _et al._ (2016).
However, it should be noted that the standard DFT methods, for instance local
density approximation (LDA) and generalized gradient approximation (GGA) of
the exchange-correlation functional have some limitations in analyzing
correctly the electronic properties of strongly correlated systems like GFCO
Shenton _et al._ (2017). To be specific, the LDA and GGA methods have
demonstrated systemic failures to explain the on-site Coulomb interactions of
highly localized electrons because of the erroneous electron self-interaction
Himmetoglu _et al._ (2014). One of the notable deficiencies of standard
approximations is the underestimation of the optical band gap in
semiconducting double perovskites which might have potential in
photocatalytic and optoelectronic applications Terakura _et al._ (1984);
Brown and Page (2020). These limitations of standard DFT can be reasonably
corrected by GGA+U method in which on-site Hubbard-like correction is applied
to the effective potential Lu and Liu (2014); Wang _et al._ (2006); Zhang
_et al._ (2010). Notably, two free parameters, U and J are required to
effectively tune the on-site Coulomb and exchange interactions, respectively
Shenton _et al._ (2017). In an approach proposed by Dudarev et al. Dudarev
_et al._ (1998), these two parameters can be combined into a single Hubbard
Ueff correction parameter, where Ueff = U-J Wang _et al._ (2006).
Typically, within GGA+U calculation, the Ueff value is selected such that the
calculated band gap matches with the experimentally obtained one Shenton _et
al._ (2017). However, a number of recent investigations Shenton _et al._
(2017); Perdew (1985) have demonstrated that choosing Ueff parameter only to
match the band gap, may introduce spurious effects for strongly correlated
materials. For instance, Shenton et al. Shenton _et al._ (2017) reported that
a Ueff value of 5 eV or larger is required to match the experimental band gap
of BiFeO3. However, the ordering of the Fe d orbitals at the conduction band
minimum of BiFeO3 inverts for Ueff $>$ 4 eV. Hence, careful consideration is
required to apply the most accurate Hubbard Ueff parameter in GGA so that the
theoretical band gap value closely matches with the experimental one without
introducing any significant error in the character of electronic band edges.
To the best of our knowledge, the influences of Hubbard Ueff parameter on the
optical and electronic properties of double perovskites possessing two 3d
transition elements like GFCO have not been extensively investigated yet.
Therefore, in the current work, we have extensively investigated the effect of
Hubbard Ueff parameter on the crystallographic parameters, spin-polarized
electronic band structure and optical properties of our recently synthesized
GFCO nanoparticles by performing first-principles calculation via both GGA and
GGA+U methods. To ensure reliability, we have employed our experimentally
obtained structural parameters for theoretical analysis. We observed that the
variation in Ueff had insignificant effect on the structural parameters of
GFCO. However, the character and curvature of the electronic band edges could
not be determined accurately without applying Ueff (i.e., for Ueff = 0 eV),
nor for Ueff $>$ 5 eV in the GGA+U calculation. Finally, considering the theoretically
calculated optical band gap, we conjectured that a Ueff value of 3 eV is
reasonable to employ to the Fe-3d and Cr-3d orbitals of GFCO double perovskite
within this calculation.
## II Experimental and computational details
Double perovskite GFCO nanoparticles were synthesized by adopting a standard
citrate-based sol-gel technique, as discussed in details in our previous work
Bhuyan _et al._ (2021). The crystallographic phase, lattice parameters, bond
angles and bond lengths of as-synthesized GFCO were determined by Rietveld
refinement of the powder XRD data using FullProf computer program package
Rodriguez-Carvajal (1990). An ultraviolet-visible (UV-visible)
spectrophotometer (UV-2600, Shimadzu) was used to obtain absorbance spectrum
of the as-synthesized GFCO for wavelengths ranging from 200 to 800 nm. Steady-
state photoluminescence (PL) spectroscopy was conducted at room temperature by
Spectro Fluorophotometer (RF-6000, Shimadzu). In the present investigation,
based on our experimental findings, spin-polarized optical properties and
accurate electronic band structure of as-prepared GFCO nanoparticles were
determined theoretically by DFT based first-principles calculation.
Figure 1: Plane-wave cutoff energy convergence for structural optimization.
The theoretical calculations were carried out using both generalized gradient
approximation (GGA) and GGA+U methods within the plane wave pseudopotential
(PWPP) framework as implemented in the Cambridge Serial Total Energy Package
(CASTEP) Segall _et al._ (2002); Rozilah _et al._ (2017). The
crystallographic structural parameters obtained from the Rietveld refined
powder XRD spectrum of GFCO Bhuyan _et al._ (2021) were employed for DFT
calculation. Prior to calculation, the geometry was optimized via the
Broyden-Fletcher-Goldfarb-Shanno (BFGS) scheme, applying an energy tolerance
of 10$^{-5}$ eV/atom, a maximum force of 0.05 eV/Å and a maximum stress of
0.1 GPa Fischer and Almlof (1992). The Gd-4f$^{8}$5s$^{2}$5p$^{6}$6s$^{2}$,
Fe-3d$^{6}$4s$^{2}$, Cr-3s$^{2}$3p$^{6}$3d$^{5}$4s$^{1}$ and
O-2s$^{2}$2p$^{4}$ electrons
were treated as valence electrons. The plane-wave cutoff energy convergence
result for structural optimization is demonstrated in Fig. 1 where the dashed
line represents the default energy cutoff. As can be observed, 450 eV energy
cutoff was found sufficient to achieve the converged ground-state energy of
GFCO. Hence, the plane-wave basis set was employed with the optimized energy
cutoff of 450 eV. Moreover, Brillouin-zone integrations were carried out with
a 5$\times$5$\times$3 Monkhorst-Pack k-point mesh Monkhorst and Pack (1976).
Spin-polarized mode was enabled during the self-consistent field (SCF)
calculations and an SCF tolerance of 2$\times$10$^{-6}$ eV per atom was used.
Notably, to describe the exchange-correlation energy, at first we have used
the GGA (Ueff = 0 eV) method based on the Perdew-Burke-Ernzerhof functional
(PBE) within on-the-fly generated ultrasoft pseudopotentials (USP). Then a
spin-polarized calculation was carried out to confirm the exact ground state
of GFCO double perovskite Zhang _et al._ (2010). Further, the GGA+U
calculation was performed with different values of Ueff to investigate the
effect of Hubbard Ueff parameter on the structural, electronic and optical
properties of the GFCO ground state Dudarev _et al._ (1998). To be specific,
Ueff was varied from 1 to 6 eV for Fe-3d and Cr-3d orbitals whereas for Gd-4f
orbital, Ueff was kept fixed at 6 eV in accordance with previous
investigations Rozilah _et al._ (2017); Chakraborty _et al._ (2008).
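For illustration, the cutoff-convergence test behind Fig. 1 amounts to
raising the cutoff until the ground-state energy change per atom falls below
a tolerance. A minimal sketch, where `total_energy` (and `run_single_point`
in the usage line) is a hypothetical stand-in for a single-point CASTEP run,
and the 20-atom cell count and 10$^{-3}$ eV/atom tolerance are assumptions
rather than values from the text:

```python
def converge_cutoff(total_energy, cutoffs_ev, tol_ev_per_atom=1e-3, n_atoms=20):
    """Return the first cutoff whose total-energy change per atom, relative
    to the previous (coarser) cutoff, drops below the tolerance."""
    previous = None
    for ecut in sorted(cutoffs_ev):
        energy = total_energy(ecut)   # total energy of the cell, in eV
        if previous is not None and abs(energy - previous) / n_atoms < tol_ev_per_atom:
            return ecut
        previous = energy
    raise RuntimeError("not converged over the tested cutoffs")

# e.g. converge_cutoff(run_single_point, range(300, 701, 50))
```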
The optical absorption coefficient was determined using the equation
$\alpha(\omega)=\sqrt{2}\,\omega\sqrt{\sqrt{\varepsilon_{1}^{2}(\omega)+\varepsilon_{2}^{2}(\omega)}-\varepsilon_{1}(\omega)}$,
where $\varepsilon_{1}(\omega)$ and $\varepsilon_{2}(\omega)$ denote the
frequency-dependent real and imaginary parts of the dielectric function and
$\omega$ represents the photon frequency Saha _et al._ (2000).
$\varepsilon_{1}(\omega)$ was calculated from $\varepsilon_{2}(\omega)$ via
the Kramers-Kronig relationship Zhang _et al._ (2008).
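Given $\varepsilon_{1}(\omega)$ and $\varepsilon_{2}(\omega)$ on a common
frequency grid, the quoted expression is direct to evaluate; a minimal numpy
sketch (units follow whatever convention the dielectric data use; for
illustration only):

```python
import numpy as np

def absorption_coefficient(omega, eps1, eps2):
    """Evaluate alpha(w) = sqrt(2) * w * sqrt(|eps(w)| - eps1(w)) with
    |eps| = sqrt(eps1^2 + eps2^2), as quoted from Saha et al. (2000).
    omega, eps1, eps2: arrays sampled on a common frequency grid."""
    modulus = np.sqrt(eps1**2 + eps2**2)   # |eps| >= eps1, so sqrt is safe
    return np.sqrt(2.0) * omega * np.sqrt(modulus - eps1)
```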
## III Results and discussion
### III.1 Crystal structure
Table 1: Lattice parameters, monoclinic angle and unit cell volume of
Gd2FeCrO6 for different values of Ueff obtained via first-principles
calculation along with the corresponding experimental values.
| Experimental value | Ueff= 0 eV | Ueff = 1 eV | Ueff = 2 eV | Ueff = 3 eV | Ueff = 4 eV | Ueff = 5 eV | Ueff = 6 eV
---|---|---|---|---|---|---|---|---
a ($\AA$) | 5.359 | 5.397 | 5.442 | 5.448 | 5.457 | 5.466 | 5.470 | 5.502
b ($\AA$) | 5.590 | 5.601 | 5.676 | 5.682 | 5.689 | 5.698 | 5.701 | 5.750
c ($\AA$) | 7.675 | 7.688 | 7.781 | 7.798 | 7.808 | 7.820 | 7.821 | 7.886
$\beta$ (∘) | 89.958 | 89.691 | 90.004 | 90.003 | 89.998 | 89.991 | 89.990 | 89.989
Volume($\AA$3) | 229.92 | 232.43 | 240.35 | 241.42 | 242.46 | 243.73 | 243.85 | 249.54
In our previous investigation Bhuyan _et al._ (2021), we have extensively
investigated the crystallographic structure of our as-synthesized GFCO
nanoparticles at room temperature by performing Rietveld refinement analysis
of their powder XRD pattern. Notably, it was observed that GFCO crystallizes
in monoclinic structure with P21/n space group. The lattice parameters of GFCO
unit cell were found to be $a$ = 5.359(1) $\AA$, $b$ = 5.590(2) $\AA$, $c$ =
7.675(3) $\AA$, monoclinic angle $\beta$ = 89.958(1)∘ with cell volume 229.920
$\AA^{3}$. In the present work, we have performed first-principles calculation
to determine the structural parameters of GFCO by GGA (Ueff = 0 eV) and GGA+U
(Ueff = 1 to 6 eV) approaches. The calculated lattice constants $a$, $b$ and
$c$, monoclinic angle ($\beta$) along with unit cell volume are tabulated in
Table 1. For comparison, we have also included the experimentally obtained
structural parameters in the table. Noticeably, the lattice constants and
monoclinic angles obtained via first-principles calculation were found to be
slightly larger than the experimental values. Nevertheless, this mismatch
remained nominal, i.e., within 3% of the experimental results, which is
consistent with a number of previous investigations Shenton _et al._ (2017);
Mann _et al._ (2016). For instance, the lattice parameters of CO2-metal
organic framework calculated with Hubbard U corrections were reported to
remain within 3% of experimental values Mann _et al._ (2016). Moreover, it
can be observed that the calculated lattice parameters and unit cell volume
enhanced with the increment of Ueff from 0 to 6 eV Shenton _et al._ (2017).
In contrast, the monoclinic angle obtained by the GGA+U method decreased with
increasing Ueff in the range of 1 to 6 eV. Such observations are in good
agreement with previous investigations of related materials Shenton _et al._
(2017); Neaton _et al._ (2005).
Figure 2: Experimentally obtained (a) Tauc plot for direct optical band gap
estimation and (b) steady-state photoluminescence spectrum of Gd2FeCrO6
nanoparticles Bhuyan _et al._ (2021).
### III.2 Experimentally obtained optical properties
The optical characteristics of as-synthesized GFCO nanoparticles were
extensively investigated by obtaining their UV-visible absorbance spectrum and
was reported previously Bhuyan _et al._ (2021). The absorbance data was
employed to calculate the optical band gap of synthesized GFCO double
perovskite using Tauc relation Tauc _et al._ (1966). The generated Tauc plot
for estimating the direct optical band gap of GFCO nanoparticles is shown in
Fig. 2(a). The abscissa intercept of the tangent to the linear region of the
curve demonstrated that the optical band gap value is $\sim$2.0 eV.
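For reference, the extrapolation behind Fig. 2(a) can be scripted directly:
for a direct allowed transition, $(\alpha h\nu)^{2}$ rises linearly with
photon energy, and the abscissa intercept of that line estimates the gap. A
minimal sketch, where the fit window over the linear region is a user-chosen
assumption:

```python
import numpy as np

def tauc_direct_gap(photon_ev, alpha, fit_window):
    """Direct-transition Tauc analysis: fit the linear rise of
    (alpha * E)^2 inside fit_window = (E_lo, E_hi) in eV and return the
    abscissa intercept, i.e. the optical band gap estimate in eV."""
    y = (alpha * photon_ev) ** 2
    mask = (photon_ev >= fit_window[0]) & (photon_ev <= fit_window[1])
    slope, intercept = np.polyfit(photon_ev[mask], y[mask], 1)
    return -intercept / slope    # (alpha*E)^2 = 0  =>  E = Eg
```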
To further verify the estimated direct optical band gap of GFCO perovskite,
the steady-state PL spectrum of the synthesized material was recorded for an
excitation wavelength of 230 nm Bhuyan _et al._ (2021). The position of the
PL peak in Fig. 2(b) confirmed that the band gap value of GFCO is $\sim$1.98
eV which closely matches with the direct band gap value obtained from the Tauc
plot (Fig. 2(a)). Therefore, both the UV-visible and PL spectroscopic analyses
revealed that our as-synthesized nanostructured GFCO is a direct band-gap
double perovskite with a band gap value of $\sim$2.0 eV. The experimentally
observed optical properties of GFCO perovskite suggest the semiconducting
nature of GFCO and, most importantly, demonstrate its ability to absorb light
in the visible region of the solar spectrum.
Figure 3: Electronic band structure of Gd2FeCrO6 for (a) Ueff = 0 eV, (b)
Ueff = 1 eV, (c) Ueff = 3 eV, and (d) Ueff = 6 eV. Black and blue curves
represent up-spin and down-spin orientations, respectively. The energy ranges
from -3 to 3 eV and the zero is set to the Fermi energy EF.
### III.3 Theoretical investigation of electronic properties
#### III.3.1 Electronic band structure
The theoretical investigation was initiated by calculating the spin-polarized
electronic band structure of GFCO within the GGA method (Ueff = 0 eV) as shown
in Fig. 3(a). The dotted horizontal line between the valence and conduction
bands represents the Fermi level Hou _et al._ (2014). As can be observed, for
up-spin orientation, we obtained a direct electronic band gap of 0.84 eV
whereas for down-spin, the band gap is found to be indirect having a value of
0.27 eV. Notably, both of these values are much smaller than the direct
optical band gap $\sim$2.0 eV of as-synthesized GFCO as confirmed by UV-
visible and PL spectroscopic analyses. It is well known that the electronic
band gap of any material should be larger than its optical band gap Bredas
(2014); hence we may infer that the band structure obtained for Ueff = 0 eV
is incorrect. Therefore, we have determined the electronic band structure of GFCO
via GGA+U method with Ueff = 1 to 6 eV. The electronic band structures
obtained for Ueff = 1, 3, 6 eV are presented in Fig. 3(b), (c) and (d),
respectively and the band structures calculated for Ueff = 2, 4, 5 eV are
provided in Fig. S1 of Electronic Supplementary Information (ESI). As can be
seen in these two figures, the gap in the up-spin band enlarges with
increasing Ueff which can be attributed to the enhanced localization of the
Fe-3d and Cr-3d orbitals due to increased Ueff Shenton _et al._ (2017). It is
worth noting that for Ueff = 1 to 5 eV, both the valence band maximum (VBM)
and conduction band minimum (CBM) were located at the A symmetry point,
indicating a direct band structure.
However, at Ueff = 6 eV, we obtained an indirect up-spin band structure which
is inconsistent with all the previous cases. It should be noted that so far
both experimental and theoretical calculations provided strong evidence in
support of direct band structure of GFCO. Therefore, the indirect band
structure obtained for Ueff = 6 eV calls into question the applicability of
employing large Hubbard parameter (i.e. Ueff $>$ 5 eV) in first-principles
calculations of GFCO double perovskite Shenton _et al._ (2017). Moreover, it
is interesting to note that the theoretically calculated up-spin band gap
values were within the range of 2.46 to 2.84 eV which ensures the
semiconducting nature of GFCO double perovskite as was also evident from the
UV-visible and PL spectroscopic analyses (Fig. 2).
In the case of down-spin orientation, for all values of Ueff (1 to 6 eV), both
the CBM and VBM were located at the G symmetry point, again indicating a
direct band structure. Interestingly, the band gap value increased
monotonically from Ueff = 1 to 5 eV but an anomalous decrease can be observed
at Ueff = 6 eV (Fig. 3(d)). Therefore, it might be conjectured that the
optimized value of Ueff for first-principles calculation of GFCO would be less
than 6 eV Shenton _et al._ (2017).
Figure 4: Variation in absolute charge carrier effective mass as a function
of Ueff. me∗ and mh∗ are the electron and hole effective masses,
respectively, in units of the electron rest mass, mo.
For understanding the carrier transport in the material, we have quantified
the curvature at band extrema by calculating the technologically important
charge carrier effective masses using the following expression Reunchan _et al._
(2016).
$m^{*}=\hbar^{2}\left(\frac{\mathrm{d}^{2}E}{\mathrm{d}k^{2}}\right)^{-1}$ (1)
Here, E is the band-edge energy as a function of wave-vector k. The electron
effective mass (me∗) was calculated by parabolic fitting of the E-k curve
within the small region of wave-vector near the CBM Reunchan _et al._ (2016);
Das _et al._ (2019). The hole effective mass (mh∗) was estimated by analyzing
the region near the VBM using similar approach Reunchan _et al._ (2016). The
variations of me∗ and mh∗ of GFCO double perovskite as a function of Ueff are
illustrated in Fig. 4 both for up-spin and down-spin orientations. For up-spin
band structure, we can observe a reduction in me∗ from 15.5mo to 10.6mo for
Ueff = 0 eV to Ueff = 1 eV suggesting an increase in curvature at the CBM.
Between Ueff of 1 to 5 eV, the values of me∗ changed nominally ($\sim$1.3mo).
Further, for Ueff = 6 eV, an enhancement of 3.5mo can be noticed which
corresponds to the reduction in curvature at CBM. Clearly, in Fig. 4, we can
observe a similar dependence of mh∗ on Ueff both for up and down-spin
orientations. Moreover, for down-spin band structure, a notable increase can
be noticed in me∗ from 26.1mo to 81mo for Ueff = 5 eV to Ueff = 6 eV. However,
the variation was comparatively smaller ($\sim$ 11.2mo) in the range of Ueff =
0 to 5 eV. It is worth mentioning that while varying the values of Ueff for
electronic band structure calculation, we had kept the structural parameters
fixed at experimentally obtained values. Hence, we can confirm that the
variations in me∗ and mh∗ with Ueff are solely due to the electronic effects.
Such a variation trend again suggests that it is worthwhile to restrict Ueff
to the range of 1 to 5 eV for GFCO double perovskite.
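As a practical illustration of Eq. 1 and the parabolic-fitting procedure
described above (a sketch under our own conventions, not the authors'
script), the effective mass follows from the quadratic coefficient of a local
fit to the band edge:

```python
import numpy as np

HBAR2_OVER_2M0 = 3.80998   # hbar^2 / (2 m0) in eV * Angstrom^2

def effective_mass(k, E):
    """Parabolic fit E(k) ~ E0 + a (k - k0)^2 near a band extremum (Eq. 1),
    so m* = hbar^2 / (2a). k in 1/Angstrom, E in eV; pass only the few
    points closest to the CBM or VBM. Returns |m*| in units of m0."""
    a = np.polyfit(k, E, 2)[0]     # quadratic coefficient of the fit
    return abs(HBAR2_OVER_2M0 / a)
```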
#### III.3.2 Density of states
Figure 5: Calculated total density of states (TDOSs) and partial density of
states (PDOSs) of Gd-4f, Fe-3d, Cr-3d and O-2p orbitals for both up-spin and
down-spin channels. The panels (a)–(d) show the DOSs for Ueff = 0, 1, 3, 6 eV,
respectively. The zero is set to the Fermi energy.
In order to resolve the contribution of each individual orbital to the
electronic bands of GFCO double perovskite, we have calculated the total
density of states (TDOS) and partial density of states (PDOS) for Gd-4f,
Fe-3d, Cr-3d and O-2p orbitals via GGA (Ueff = 0 eV) and GGA+U methods (Ueff =
1 to 6 eV). The DOSs obtained for Ueff = 0, 1, 3, 6 eV are presented in Fig.
5(a), (b), (c) and (d), respectively and the DOSs calculated for Ueff = 2, 4,
5 eV are provided in Fig. S2 of ESI.
Beginning with the character of the conduction band (E-EF$>$ 0 eV) obtained
for up-spin orientation, we can make the following observations. At Ueff = 0
eV, Fig. 5(a), the conduction band lying at around 1 eV was made up of a
hybridization of Fe-3d and O-2p orbitals. This band disappeared after
considering the effect of Hubbard Ueff which corresponds to the decrement in
me∗ at Ueff = 1 eV (Fig. 5(b)). Notably, the conduction band (at around 2.5
eV) obtained for Ueff = 1 eV had primarily the characteristics of Cr-3d with a
minor contribution from O-2p. Further, as shown in Fig. 5(c) and ESI Fig. S2,
with the increase of Ueff up to 5 eV, this conduction band shifted to higher
energy resulting in the enlargement of band gap. Also, the contribution of
Cr-3d to the up-spin conduction band enhanced with increasing Ueff.
Interestingly, at Ueff = 6 eV (Fig. 5(d)), we observed that the computed PDOS
of the Gd-4f orbital altered radically and the conduction band arose mainly
from the Gd-4f orbital with a minor contribution from the Cr-3d orbital.
Such an anomaly is another indication of the limitation of Ueff $>$ 5 eV in
accurately describing the electronic band structure of GFCO double perovskite.
Now, if we analyze the valence bands (E-EF$<$ 0 eV) of up-spin electronic band
structure shown in Fig. 5(a), it can be observed that the valence band at Ueff
= 0 eV was composed of Cr-3d, O-2p as well as Fe-3d orbitals. However, when we
employed Ueff, the contribution of Fe-3d diminished. To be specific, the
up-spin valence band for Ueff = 1 eV was made up of the hybridization of Cr-3d
and O-2p states (Fig. 5(b)). From Fig. 5(c), (d) and ESI Fig. S2, it can be
noticed that with increasing Ueff, the contribution of Cr-3d state gradually
decreased leaving only O-2p to dominate the valence band at Ueff = 6 eV which
can be associated with the enhancement of mh∗.
Further, Fig. 5(a) demonstrates that at Ueff = 0 eV, the conduction band
(E-EF$>$ 0 eV) of down-spin structure was dominated mostly by Fe-3d orbital
with a negligible contribution from O-2p. As can be noticed in Fig. 5(b), (c)
and ESI Fig. S2, the characteristics of the conduction band did not change
much due to the effect of Hubbard Ueff up to 5 eV. Only a gradual shifting of
conduction band to a higher energy can be noticed with increasing Ueff as
expected Shenton _et al._ (2017). However, when we employed a Ueff of 6 eV
(Fig. 5(d)), a new flat conduction band appeared at around 1.25 eV owing to
the sole contribution of Gd-4f state. Furthermore, in the case of the down-
spin valence band (E-EF$<$ 0 eV), a significant influence of Ueff can be
observed. Without considering the Coulomb repulsion effect (i.e. for Ueff = 0
eV in Fig. 5(a)), we obtained the valence band near the Fermi level which is
attributed to the hybridization of Fe-3d and O-2p orbitals. After applying
Ueff = 1 eV (Fig. 5(b)), this band disappeared followed by the emergence of a
new band at around -1.5 eV which shifted gradually to higher energy with
increasing Ueff of up to 6 eV (Fig. 5(b)-(d) and ESI Fig. S2). Notably, the
formation of this band has the contribution from only O-2p orbital.
#### III.3.3 Mulliken population analysis
Table 2: Mulliken effective charges of individual atoms, bond populations and
bond lengths of Gd2FeCrO6 for different values of Ueff obtained via Mulliken
population analysis.
 | Ueff = 0 eV | Ueff = 1 eV | Ueff = 2 eV | Ueff = 3 eV | Ueff = 4 eV | Ueff = 5 eV | Ueff = 6 eV
---|---|---|---|---|---|---|---
Atom | Mulliken effective charge (e) |
Gd | 1.52 | 1.49 | 1.49 | 1.49 | 1.49 | 1.50 | 1.50
Fe | 0.59 | 0.78 | 0.79 | 0.80 | 0.82 | 0.82 | 0.84
Cr | 0.53 | 0.58 | 0.59 | 0.61 | 0.62 | 0.63 | 0.65
O | -0.69 | -0.72 | -0.73 | -0.73 | -0.74 | -0.74 | -0.75
Bond | Bond population |
Gd-O | 0.1450 | 0.1587 | 0.1600 | 0.1625 | 0.1637 | 0.1662 | 0.1700
Fe-O | 0.3200 | 0.3100 | 0.3166 | 0.3166 | 0.3166 | 0.3166 | 0.3066
Cr-O | 0.3533 | 0.3200 | 0.3200 | 0.3200 | 0.3200 | 0.3133 | 0.3100
O-O | -0.0350 | -0.0333 | -0.0333 | -0.0333 | -0.0333 | -0.0333 | -0.0333
Bond | Bond length (Å) |
Gd-O | 2.475 | 2.501 | 2.504 | 2.505 | 2.507 | 2.507 | 2.524
Fe-O | 2.023 | 2.051 | 2.061 | 2.065 | 2.065 | 2.069 | 2.083
Cr-O | 2.026 | 2.036 | 2.042 | 2.049 | 2.050 | 2.057 | 2.079
The effective atomic charge, bond population and bond length in a crystalline
solid can be obtained from Mulliken population analysis which provides insight
into the distribution of electrons among different parts of the atomic bonds,
covalency of bonding as well as bond strength Mulliken (1955); Segall _et
al._ (1996). Mulliken effective charge, Q($\alpha$) of a particular atom
$\alpha$ can be calculated using the following expression Mulliken (1955).
$Q(\alpha)=\sum_{k}\omega_{k}\sum_{\mu}^{on\;\alpha}\sum_{\nu}P_{\mu\nu}(k)S_{\mu\nu}(k)$
(2)
where Pμν denotes an element of the density matrix and Sμν refers to the
overlap matrix. The bond population, P($\alpha\beta$), between two atoms
$\alpha$ and $\beta$ can be expressed as Mulliken (1955):
$P(\alpha\beta)=\sum_{k}\omega_{k}\sum_{\mu}^{on\;\alpha}\sum_{\nu}^{on\;\beta}2P_{\mu\nu}(k)S_{\mu\nu}(k)$
(3)
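Given the per-k-point density and overlap matrices from the electronic-
structure code (assumed available; an illustration only), Eq. 2 reduces to a
k-weighted sum of elementwise products; a minimal numpy sketch of the atomic
gross populations, from which the effective charges in Table II follow, by
the usual Mulliken convention, as the valence charge minus the population:

```python
import numpy as np

def mulliken_gross_populations(P_k, S_k, weights, orbital_atom):
    """Gross Mulliken population on each atom, following Eq. 2:
    sum over k of w_k * sum_{mu on atom} sum_nu P_munu(k) S_munu(k).
    P_k, S_k: per-k-point (n_orb x n_orb) density / overlap matrices;
    orbital_atom: length-n_orb map from orbital index to atom index."""
    per_orbital = np.zeros(len(orbital_atom))
    for w, P, S in zip(weights, P_k, S_k):
        per_orbital += w * np.real((P * S).sum(axis=1))  # sum_nu P_munu S_munu
    pops = np.zeros(int(np.max(orbital_atom)) + 1)
    for mu, atom in enumerate(orbital_atom):
        pops[atom] += per_orbital[mu]
    return pops

# Effective charge as in Table II: Q(atom) = valence charge - population.
```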
Table II provides the calculated Mulliken effective charges of individual
atoms, bond populations and bond lengths between different atoms in GFCO
crystal structure. Noticeably, for all values of Ueff, the Mulliken effective
charges of the individual Gd, Fe, Cr and O atoms are found to be reasonably
smaller than their formal ionic charges which are +3, +3, +3, and -2,
respectively. Such difference between Mulliken effective and formal ionic
charges is an indication of the existence of mixed ionic and covalent bonding
in GFCO Segall _et al._ (1996). It should be noted that small Mulliken
effective charge of an atom is associated with its high level of covalency and
vice versa Mulliken (1955); Rozilah _et al._ (2017). Therefore, it can be
inferred that the GFCO double perovskite contains chemical bonding with
prominent covalency. Further, in Table II, an enhancement can be observed in
the effective charges of the Fe, Cr and O atoms after employing Ueff in the
first-principles calculation (Ueff = 1 to 6 eV). This outcome indicates that
the on-site Coulomb interaction reduces the degree of bond covalency in GFCO
to some extent.
Table II also presents the bond populations and bond lengths of Gd–O, Fe–O,
Cr–O and O-O bonds in GFCO double perovskite as obtained for Ueff = 0 to 6 eV.
It is noteworthy that a large positive value of bond population is associated
with high degree of covalency whereas a small bond population indicates high
degree of ionicity in the chemical bond Ching and Xu (1999). In the present
investigation, the bond populations of the Gd–O, Fe–O and Cr–O were determined
to be positive whereas the bond population of O-O was negative for all values
of Ueff. This result suggests that no bonds had formed between the O atoms in
GFCO double perovskite Chakraborty _et al._ (2008); Yaakob _et al._ (2015).
Moreover, the calculated bond populations of Fe-O and Cr-O are found to be
considerably larger than that of Gd-O. Such observation implies that the Fe-O
and Cr-O bonds possess higher degree of covalency as compared to Gd-O bonds.
It is also worth noticing that the bond populations of Fe-O and Cr-O
calculated by GGA+U method (Ueff =1 to 6 eV) are lower than the values
obtained by GGA method (Ueff = 0 eV). This result provides further evidence
for the influence of Coulomb repulsion to reduce the covalency of Fe-O and
Cr-O bonding in GFCO as was observed before by analyzing the calculated
Mulliken effective charges. Furthermore, Table II demonstrates that the bond
lengths of Fe-O and Cr-O are reasonably smaller than that of Gd-O which can be
attributed to their larger bond population and consequently, higher degree of
covalency as compared to Gd-O bonding.
#### III.3.4 Electron charge density
Figure 6: Electronic charge density along z-axis of Gd2FeCrO6 for (a) Ueff =
0 eV, (b) Ueff = 1 eV, (c) Ueff = 3 eV, and (d) Ueff = 6 eV.
For further understanding of the chemical bonding nature, we have determined
the electron charge density distribution of GFCO double perovskite along the
z-axis by GGA (Ueff = 0 eV) and GGA+U (Ueff = 1, 3 and 6 eV) methods as
illustrated in Fig. 6. It is worth noting that a typical covalent bond between
two atoms involves overlapping of electron clouds from both of them and the
electrons remain concentrated in the overlapping region Phillips (1968). In
Fig. 6, for all values of Ueff, a larger overlap of electron cloud can be
observed between Fe/Cr and O atoms as compared to Gd and O atoms. Such
observation implies that the covalent bonds between Fe/Cr and O in GFCO are
considerably stronger than the bond between Gd and O as was also revealed by
Mulliken population analysis.
Moreover, it can be clearly seen that no electron clouds are concentrated in
the region between one of the two Gd atoms and O, which implies the formation
of an ionic bond between these two atoms Craig _et al._ (1954); Rozilah _et
al._ (2017). Notably, the charge sharing between Fe/Cr and O can be attributed
to the hybridization between the Fe/Cr-3d and O-2p orbitals, while the Gd-O
bond formation is associated with the hybridization between the Gd-4f and O-2p
orbitals, as demonstrated by the DOS curves (Fig. 5).
Furthermore, close inspection of Fig. 6(a) and (b) reveals that the overlap of
electron clouds between the Fe/Cr and O atoms becomes noticeably narrower once
the effect of the on-site Coulomb interaction (Ueff) is included in the
first-principles calculation. Thereafter, the electron charge density in the
overlapped region between the Fe/Cr and O atoms increases with increasing Ueff
(see Fig. 6(c) and (d)), which is consistent with the outcome of the Mulliken
population analysis.
Figure 7: (a) Variation of theoretically obtained absorption coefficient of
GFCO perovskite as a function of wavelength for different Ueff. (b) The
absorption coefficient vs. Ueff for some fixed values of the wavelength.
### III.4 Light absorption property
The absorption coefficient provides valuable information about a material’s
light-harvesting ability. The optical absorption coefficient of GFCO has been
evaluated by first-principles calculation via GGA (Ueff = 0 eV) and GGA+U
(Ueff = 1 to 6 eV) approaches using a polarization technique which includes
the electric field vector as an isotropic average in all directions Brik _et
al._ (2014). To resolve distinct absorption peaks, a small smearing value of
0.5 eV was used. Fig. 7(a) illustrates the calculated
absorption coefficients of GFCO double perovskite as a function of wavelength
to demonstrate the effect of Hubbard Ueff parameter on its light absorption
property. In Fig. 7(a), for all values of Ueff, two absorption peaks can be
clearly observed in the UV region which indicates the strong UV light
absorption capacity of GFCO. Noticeably, the stronger absorption coefficient
peak lies at around 120 nm and it undergoes a slight red-shift with increasing
Ueff. On the contrary, the weaker absorption peak at around 320 nm is slightly
blue-shifted for higher values of Ueff. For comparison, we have also provided
the absorbance spectrum of GFCO obtained via UV-visible spectroscopy in ESI
Fig. S3 Bhuyan _et al._ (2021). It is worth noticing that the experimentally
obtained spectrum has two additional bands in the visible region along with
the two bands we obtained theoretically in the UV regime. This might be
related to the fact that the DFT calculations were performed at 0 K whereas
the experiment was conducted at room temperature. The discrepancy between the
experimental and theoretical absorbance spectra of GFCO can be attributed to
this difference in conditions Honglin _et al._ (2014); Harun _et al._ (2020).
Fig. 7(b) shows the absorption coefficient vs. Ueff curves of GFCO for some
fixed values of wavelength ranging from 150 to 600 nm. We can observe that for
the wavelength of 150 nm, the absorption coefficient attains its minimum and
maximum values for Ueff = 4 eV and Ueff = 5 eV, respectively, suggesting the
underestimation and overestimation of the coefficient at these Ueff values.
Similarly, it can be found that except at Ueff = 3 eV, the absorption
coefficient is either overestimated or underestimated at all other values of
Ueff for any of the specific wavelengths such as 200, 250, 300 nm etc. This
intriguing observation is highlighted by the rectangle in Fig. 7(b).
### III.5 Comparison of experimental and theoretical optical band gaps
Further, we have employed the calculated absorption coefficients to
theoretically calculate the optical band gap values of GFCO double perovskite
using Tauc relation Tauc _et al._ (1966). Fig. 8 shows the variation in
theoretically calculated direct band gap values as a function of Ueff.
Noticeably, a direct band gap value of 0.5 eV was obtained by GGA method (Ueff
= 0 eV) which is significantly smaller than the experimental value $\sim$2.0
eV. Further, with the increase of Ueff up to 5 eV, an almost linear increase
can be observed in the direct optical band gap values of GFCO double
perovskite. However, an anomalous decrease can be noticed for a further
increase of Ueff to 6 eV. It is intriguing to note that the direct optical
band gap ($\sim$1.99 eV) obtained for Ueff = 3 eV matches well with the
experimentally obtained one ($\sim$2.0 eV) which is marked by a circle in Fig.
8. Also, for Ueff = 6 eV, we found the calculated band gap value ($\sim$2.08
eV) to be quite close to the experimental one. However, as demonstrated above,
the character and curvature of the conduction and valence bands change
unphysically for Ueff $>$ 5 eV. Hence, fitting Ueff to the band gap alone may
yield erroneous results for such double perovskite materials, especially in
cases where the band edge character plays a crucial role.
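For illustration only, a minimal sketch of the Tauc extrapolation for a direct allowed transition, for which $(\alpha h\nu)^{2}$ is linear in $h\nu$ just above the absorption edge; the data below are synthetic, not the calculated GFCO spectra:

```python
import numpy as np

# Synthetic absorption data (hypothetical; the real input would be the
# calculated absorption coefficient alpha versus photon energy).
E = np.linspace(1.0, 4.0, 300)                 # photon energy h*nu (eV)
Eg_true = 2.0
alpha = np.where(E > Eg_true, np.sqrt(np.clip(E - Eg_true, 0, None)) / E, 0.0)

# Tauc relation for a direct allowed transition: (alpha*h*nu)^2 = A*(h*nu - Eg)
tauc = (alpha * E) ** 2

# Fit a straight line to the steep region above the edge and extrapolate to
# tauc = 0; the energy-axis intercept gives the optical band gap.
mask = (tauc > 0.2 * tauc.max()) & (tauc < 0.8 * tauc.max())
slope, intercept = np.polyfit(E[mask], tauc[mask], 1)
Eg_est = -intercept / slope
print(f"Estimated direct band gap: {Eg_est:.2f} eV")   # ~2.0 eV for this data
```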
Finally, it is fascinating to note that our predictions for all the properties
of GFCO (i.e. structural, electronic and optical) considering Ueff = 3 eV
closely matched with the experimental results without any over or
underestimation of band gap values. This implies that the Ueff value of 3 eV
most accurately localized the Fe-3d and Cr-3d orbitals of GFCO. Moreover, the
almost accurate estimation of band gap values suggests that the effect of
self-interaction error from the other orbitals of GFCO was almost negligible
Brown and Page (2020); Perdew (1985). Therefore we recommend Ueff = 3 eV for
the GGA+U calculation of electronic and optical properties of GFCO double
perovskite.
Figure 8: Variation in theoretically calculated direct optical band gap as a
function of Ueff. The red circle represents the experimentally obtained
optical band gap value.
## IV Conclusions
We have employed the first-principles based GGA and GGA+U methods to calculate
the spin-polarized electronic band structure, Mulliken bond population,
electron charge density distribution and optical characteristics of our newly
synthesized direct band-gap semiconductor Gd2FeCrO6 (GFCO) double perovskite
for a range of Ueff between 0 and 6 eV, applied to the Fe-3d and Cr-3d
orbitals. The structural parameters of the monoclinic GFCO crystal varied
nominally with Ueff. To the contrary, the variation of Ueff demonstrated
significant effect on the electronic band structure which indicates the
importance of employing reasonable value of Ueff to correct the over-
delocalization of the Fe/Cr-3d states. For Ueff $>$ 5 eV, the computed partial
density of states for Gd-4f orbital radically altered which had significantly
changed the band structure. In particular, we observed that the character and
curvature of the conduction and valence bands largely varied for Ueff $>$ 5 eV
leading to enormous changes in calculated charge carrier effective masses.
Notably, in the case of Ueff = 3 eV, the theoretically calculated direct
optical band gap $\sim$1.99 eV matched well with the experimental value
$\sim$2.0 eV. These findings justify that it might be worthwhile to employ
Ueff = 3 eV to accurately calculate the structural, electronic and optical
properties of GFCO double perovskite. The outcome of this investigation might
be useful for a keen understanding of the electronic and optical properties of
this newly synthesized double perovskite and related material systems for
photocatalytic applications via band gap engineering. This study also reveals
the importance of conducting systematic analysis of the influence of on-site
Coulomb interaction on band gaps, as well as on the electronic structure as a
whole, for other related double perovskite materials for which experimental
data are not yet available.
## Acknowledgements
The computational facility provided by the Institute of Information and
Communication Technology (IICT), Bangladesh University of Engineering and
Technology (BUET) is sincerely acknowledged. The authors would also like to
acknowledge the Committee for Advanced Studies and Research (CASR), BUET.
## Data availability
The raw and processed data required to reproduce these findings cannot be
shared at this time due to technical or time limitations.
## Supplementary Information
Additional electronic supplementary information (ESI) is available. See DOI:
10.1016/j.jmrt.2021.06.026
## References
* Vasala and Karppinen (2015) S. Vasala and M. Karppinen, Progress in solid state chemistry 43, 1 (2015).
* Gray _et al._ (2010) B. Gray, H. N. Lee, J. Liu, J. Chakhalian, and J. Freeland, Applied Physics Letters 97, 013105 (2010).
* Gaikwad _et al._ (2019) V. M. Gaikwad, M. Brahma, R. Borah, and S. Ravi, Journal of Solid State Chemistry 278, 120903 (2019).
* Das _et al._ (2008) H. Das, U. V. Waghmare, T. Saha-Dasgupta, and D. Sarma, Physical review letters 100, 186402 (2008).
* Cerón _et al._ (2019) J. G. Cerón, J. A. Rodríguez, A. Rosales-Rivera, J. C. Farfán, J. C. Vasquez, D. L. Téllez, J. Roa-Rojas, _et al._ , Journal of Materials Research and Technology 8, 3978 (2019).
* Yin _et al._ (2019) W.-J. Yin, B. Weng, J. Ge, Q. Sun, Z. Li, and Y. Yan, Energy & Environmental Science 12, 442 (2019).
* Lin _et al._ (2021) C. Lin, Y. Zhao, Y. Liu, W. Zhang, C. Shao, and Z. Yang, Journal of Materials Research and Technology 11, 1645 (2021).
* Mohassel _et al._ (2020) R. Mohassel, M. Amiri, A. K. Abbas, A. Sobhani, M. Ashrafi, H. Moayedi, and M. Salavati-Niasari, Journal of Materials Research and Technology 9, 1720 (2020).
* Kangsabanik and Alam (2020) J. Kangsabanik and A. Alam, The Journal of Physical Chemistry Letters 11, 5148 (2020).
* Wu _et al._ (2020) H. Wu, Z. Pei, W. Xia, Y. Lu, K. Leng, and X. Zhu, Journal of Alloys and Compounds 819, 153007 (2020).
* Nechache _et al._ (2015) R. Nechache, C. Harnagea, S. Li, L. Cardenas, W. Huang, J. Chakrabartty, and F. Rosei, Nature Photonics 9, 61 (2015).
* Nair _et al._ (2014) H. S. Nair, R. Pradheesh, Y. Xiao, D. Cherian, S. Elizabeth, T. Hansen, T. Chatterji, and T. Brückel, Journal of applied physics 116, 123907 (2014).
* Bhuyan _et al._ (2021) M. Bhuyan, S. Das, and M. Basith, Journal of Alloys and Compounds 878, 160389 (2021).
* Zhang _et al._ (2016) R. Zhang, J. Zhao, Y. Yang, Z. Lu, and W. Shi, Computational Condensed Matter 6, 5 (2016).
* Shenton _et al._ (2017) J. K. Shenton, D. R. Bowler, and W. L. Cheah, Journal of Physics: Condensed Matter 29, 445501 (2017).
* Himmetoglu _et al._ (2014) B. Himmetoglu, A. Floris, S. De Gironcoli, and M. Cococcioni, International Journal of Quantum Chemistry 114, 14 (2014).
* Terakura _et al._ (1984) K. Terakura, T. Oguchi, A. Williams, and J. Kübler, Physical Review B 30, 4734 (1984).
* Brown and Page (2020) J. J. Brown and A. J. Page, The Journal of Chemical Physics 153, 224116 (2020).
* Lu and Liu (2014) D. Lu and P. Liu, The Journal of chemical physics 140, 084101 (2014).
* Wang _et al._ (2006) L. Wang, T. Maxisch, and G. Ceder, Physical Review B 73, 195107 (2006).
* Zhang _et al._ (2010) S. Zhang, O. Y. Kontsevoi, A. J. Freeman, and G. B. Olson, Physical Review B 82, 224107 (2010).
* Dudarev _et al._ (1998) S. Dudarev, G. Botton, S. Savrasov, C. Humphreys, and A. Sutton, Physical Review B 57, 1505 (1998).
* Perdew (1985) J. P. Perdew, International Journal of Quantum Chemistry 28, 497 (1985).
* Rodriguez-Carvajal (1990) J. Rodriguez-Carvajal, in _FULLPROF: a program for Rietveld refinement and pattern matching analysis, Abstracts of the Satellite meeting on powder diffraction of the XV congress of the IUCr, Toulouse, France_ , Vol. 127 (1990).
* Segall _et al._ (2002) M. Segall, P. J. Lindan, M. a. Probert, C. J. Pickard, P. J. Hasnip, S. Clark, and M. Payne, Journal of physics: condensed matter 14, 2717 (2002).
* Rozilah _et al._ (2017) R. Rozilah, M. Yaakob, Z. Mohamed, and A. Yahya, Materials Research Express 4, 066103 (2017).
* Fischer and Almlof (1992) T. H. Fischer and J. Almlof, The Journal of Physical Chemistry 96, 9768 (1992).
* Monkhorst and Pack (1976) H. J. Monkhorst and J. D. Pack, Physical review B 13, 5188 (1976).
* Chakraborty _et al._ (2008) M. Chakraborty, P. Pal, and B. R. Sekhar, Solid state communications 145, 197 (2008).
* Saha _et al._ (2000) S. Saha, T. Sinha, and A. Mookerjee, Physical Review B 62, 8828 (2000).
* Zhang _et al._ (2008) X. Zhang, M. Guo, W. Li, and C. Liu, Journal of Applied Physics 103, 063721 (2008).
* Mann _et al._ (2016) G. W. Mann, K. Lee, M. Cococcioni, B. Smit, and J. B. Neaton, The Journal of chemical physics 144, 174104 (2016).
* Neaton _et al._ (2005) J. Neaton, C. Ederer, U. Waghmare, N. Spaldin, and K. Rabe, Physical Review B 71, 014113 (2005).
* Tauc _et al._ (1966) J. Tauc, R. Grigorovici, and A. Vancu, physica status solidi (b) 15, 627 (1966).
* Hou _et al._ (2014) Y. Hou, H. Xiang, and X. Gong, Physical Review B 89, 064415 (2014).
* Bredas (2014) J.-L. Bredas, Materials Horizons 1, 17 (2014).
* Reunchan _et al._ (2016) P. Reunchan, A. Boonchun, and N. Umezawa, Physical Chemistry Chemical Physics 18, 23407 (2016).
* Das _et al._ (2019) S. Das, M. Bhowal, and S. Dhar, Journal of Applied Physics 125, 075705 (2019).
* Mulliken (1955) R. Mulliken, The Journal of Chemical Physics 23, 1841 (1955).
* Segall _et al._ (1996) M. Segall, R. Shah, C. J. Pickard, and M. Payne, Physical Review B 54, 16317 (1996).
* Ching and Xu (1999) W. Ching and Y.-N. Xu, Physical Review B 59, 12815 (1999).
* Yaakob _et al._ (2015) M. Yaakob, M. Taib, L. Lu, O. Hassan, and M. Yahya, Materials Research Express 2, 116101 (2015).
* Phillips (1968) J. Phillips, Physical Review 166, 832 (1968).
* Craig _et al._ (1954) D. P. Craig, A. Maccoll, R. Nyholm, L. Orgel, and L. Sutton, Journal of the Chemical Society (Resumed) , 332 (1954).
* Brik _et al._ (2014) M. Brik, O. Parasyuk, G. Myronchuk, and I. Kityk, Materials Chemistry and Physics 147, 155 (2014).
* Honglin _et al._ (2014) L. Honglin, L. Yingbo, L. Jinzhu, and Y. Ke, Journal of alloys and compounds 617, 102 (2014).
* Harun _et al._ (2020) K. Harun, N. A. Salleh, B. Deghfel, M. K. Yaakob, and A. A. Mohamad, Results in Physics 16, 102829 (2020).
# Magnetic Reconnection on a Klein Bottle
Luke Xia<EMAIL_ADDRESS>University of California, Irvine 92697, USA M.
Swisdak<EMAIL_ADDRESS>Institute for Research in Electronics and Applied
Physics, University of Maryland, College Park, Maryland 20742, USA
###### Abstract
We present a new boundary condition for simulations of magnetic reconnection
based on the topology of a Klein bottle. When applicable, the new condition is
computationally cheaper than fully periodic boundary conditions, reconnects
more flux than systems with conducting boundaries, and does not require
assumptions about regions external to the simulation as is necessary for open
boundaries. The new condition reproduces the expected features of
reconnection, but cannot be straightforwardly applied in systems with
asymmetric upstream plasmas.
## I Introduction
During reconnection oppositely directed components of a magnetic field come
together at an X-point and undergo a topological reordering that enables the
transfer of magnetic energy to the surrounding plasma. In the collisionless
plasmas of space and astrophysics, reconnection has particular importance due
to its role in catalyzing energy release over a wide range of scales in
flares, jets, and other phenomena.
Unsurprisingly then, numerical simulations of reconnection have been a subject
of significant interest. Any simulation entails certain fundamental choices,
among them the boundary conditions imposed on the computational domain.
Idealized simulations – i.e., those not modeling a particular event – often
begin in a configuration based on the Harris sheet [1], a steady-state
solution of the Vlasov equation in which the magnetic field has the simple
profile
$\mathbf{B}=B_{0}\tanh(y/w_{0})\,\mathbf{\hat{x}},$ (1)
with $w_{0}$ a constant. (For definiteness, we assume the simulation domain
occupies the space $-L_{x}/2\leq x\leq L_{x}/2$, $-L_{y}/2\leq y\leq L_{y}/2$.
A fully three-dimensional domain is not necessary for reconnection to occur
and so the $z$ coordinate will be suppressed.) The reversal of the field
across $y=0$ implies the existence of a current sheet, with the current
pointing in the $-z$ direction.
The form of the magnetic field in the Harris sheet imposes constraints on the
boundary conditions. Periodic boundaries are typically used in the horizontal
($x$) direction, but the anti-symmetric nature of $\mathbf{B}$,
$B_{x}(y=-L_{y}/2)=-B_{x}(y=L_{y}/2)$, precludes the use of periodic
boundaries in the vertical ($y$) direction. Two options are usually
considered. The first is the inclusion of hard conducting walls at the top and
bottom boundaries of the domain. Appropriate boundary conditions are imposed
on the electromagnetic fields (e.g., Dirichlet for $E_{x}$, $B_{y}$, and
$E_{z}$ and Neumann for the other components) while particles specularly
reflect from the walls. The influential GEM Challenge [2] used these
conditions and so hereinafter we will use its name to denote them. For the GEM
Challenge boundary conditions the computational domain is topologically
equivalent (homeomorphic) to a cylinder.
The second common option is to add a second current sheet of opposite
orientation so that
$\mathbf{B}=B_{0}\left[\tanh\left(\frac{y-L_{y}/4}{w_{0}}\right)-\tanh\left(\frac{y+L_{y}/4}{w_{0}}\right)+1\right]\,\mathbf{\hat{x}}$
(2)
This choice allows periodic boundary conditions in the vertical direction
because $B_{x}(y=-L_{y}/2)=B_{x}(y=L_{y}/2)$. (For both choices, one usually
also requires $w_{0}\ll L_{y}$ so that $\partial B_{x}/\partial y$, and hence
the current, is negligible for $y=\pm L_{y}/2$.) The implementation of fully
periodic boundaries is typically straightforward since no special conditions
need to be placed on the electromagnetic fields or particles at any point in
the grid. In this case, the computational domain is homeomorphic to a torus.
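As a minimal illustration (ours, not from the original work), the two initial field profiles can be constructed and compared on a uniform grid:

```python
import numpy as np

B0, w0, Ly = 1.0, 0.5, 12.8           # field strength, sheet width, box height
y = np.linspace(-Ly / 2, Ly / 2, 256)

# Single current sheet (Eq. 1): B_x is antisymmetric across y = 0, so
# B_x(-Ly/2) = -B_x(Ly/2) and vertical periodicity is impossible.
Bx_single = B0 * np.tanh(y / w0)

# Double current sheet (Eq. 2): B_x takes the same value at y = -Ly/2 and
# y = +Ly/2, which permits doubly periodic (toroidal) boundary conditions.
Bx_double = B0 * (np.tanh((y - Ly / 4) / w0)
                  - np.tanh((y + Ly / 4) / w0) + 1.0)
```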
Either choice comes with limitations. Simulations using the GEM Challenge
boundary conditions will stagnate when the reconnected flux accumulating in
the magnetic islands presses against the hard boundaries (see Figure 3 below).
Fully periodic boundary conditions employing the field described in equation 2
can mitigate this stagnation by offsetting the X-points on the two current
layers by $L_{x}/2$, thus allowing the flux accumulating in the island of one
layer to drive inflow towards the X-line on the other. However, keeping the
amount of flux available to reconnect the same for both types of boundary
conditions (equivalently, keeping the aspect ratios of the current sheets the
same) requires a doubling of the size of the computational domain, and hence
the computational cost, in the fully periodic case. Less-common alternatives,
e.g., open boundaries or driven reconnection [3, 4, 5, 6], increase the
computational complexity and can only partially mitigate these issues (often
while raising new ones).
Here we present a novel type of periodic boundary condition in which the
computational domain is homeomorphic to a Klein bottle. Making this choice
allows the simulation to enjoy the benefits of the two major options discussed
above: the computational expense of the GEM Challenge conditions coupled with
the self-driving of the fully periodic domain. We show that these boundary
conditions preserve the important properties of magnetic reconnection. Section
II discusses our computational methods, including the implementation of the
new boundary condition. Results from the simulations are presented in Section
III, while Section IV offers our conclusions.
## II Computational Methods
### II.1 Code
The simulations are performed with p3d, a massively parallel particle-in-cell
(PIC) code with a long history of simulating magnetic reconnection [7]. It
employs units in which a reference density $n_{0}$ and magnetic field strength
$B_{0}$ define the units of length, the proton inertial length
$d_{p0}=c/\omega_{p0}$ (where $\omega_{p0}=\sqrt{4\pi n_{0}e^{2}/m_{p}}$ is
the proton plasma frequency), and time, the proton cyclotron time
$\Omega_{p0}^{-1}=m_{p}c/eB_{0}$. Velocities are measured in units of a
reference Alfvén speed $v_{A0}=\sqrt{B_{0}^{2}/4\pi m_{p}n_{0}}$ while
electric fields and temperatures are normalized to $v_{A0}B_{0}/c$ and
$m_{p}v_{A0}^{2}$, respectively. The simulations presented here follow
particles in a 2x3v (also known as 2.5D) phase space in which no variations
are allowed in the third spatial dimension (i.e., $\partial/\partial z=0$).
With the exception of a slight alteration in the size of the computational
domain, the parameters are chosen to match those of the GEM Challenge [2]. In
order to reduce the separation between the spatial scales of the protons and
the electrons, and hence the computational expense, the electron mass is taken
to be $m_{e}/m_{p}=0.04$, which implies the electron inertial length
$d_{e}=0.2d_{p}$. For similar computational reasons, the ratio of the speed of
light to the Alfvén speed (equivalently, the ratio of the proton plasma and
cyclotron frequencies) is taken to be 20. The initial electron and proton
temperatures are $1/12$ and $5/12$, respectively. The timestep is chosen to
satisfy the CFL condition and, as a consequence, particles can travel at most
one grid cell per timestep.
For later reference, it is useful to discuss the layout of the computational
domain in more detail than usual. To parallelize a computation, p3d breaks the
domain into subdomains in configuration space, each of which is assigned to a
single processor. The processors are organized into a rectangular grid of size
$P=P_{x}\times P_{y}\times P_{z}$ and each processor is further subdivided
into $N_{x}\times N_{y}\times N_{z}$ grid points. For notational simplicity we
will assume $N_{z}=P_{z}=1$ in what follows, but the extension to multiple
processors and gridpoints in the third dimension is straightforward. Each
processor has an overall label that can range from $0$ to $P-1$ as well as a
label in each dimension: $p_{x}\in\\{0,1,2,...,P_{x}-1\\}$,
$p_{y}\in\\{0,1,2,...,P_{y}-1\\}$, and $p_{z}\in\\{0,1,2,...,P_{z}-1\\}$. The
overall label and the triplet $(p_{x},p_{y},p_{z})$ are unique to each
processor and it is straightforward, given the dimensions of the computational
domain, to convert between the two. For example, the processor at the bottom-
left of the domain, $P=0$, has triplet $(0,0,0)$. The processor to its
immediate right, $P=1$, has triplet $(1,0,0)$ and the processor immediately
above is $P=P_{x}$ with triplet $(0,1,0)$. For communication purposes, every
processor needs to know the identities of the other processors bordering it,
and while this calculation is simple for those on the interior of the
computational domain, it can be more complicated for those on the boundaries.
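The conversion between the overall label and the triplet is simple integer arithmetic; the sketch below (our own illustration in Python; p3d itself is not written in Python) assumes the $P_{z}=1$ layouts used here:

```python
def label_to_triplet(P, Px, Py):
    """Overall processor label -> (px, py, pz) for a Px x Py x 1 grid."""
    return (P % Px, P // Px, 0)

def triplet_to_label(px, py, pz, Px, Py):
    """(px, py, pz) -> overall processor label."""
    return px + py * Px + pz * Px * Py

# Examples from the text for a 16 x 8 grid:
assert label_to_triplet(0, 16, 8) == (0, 0, 0)    # bottom-left processor
assert label_to_triplet(1, 16, 8) == (1, 0, 0)    # its right-hand neighbor
assert label_to_triplet(16, 16, 8) == (0, 1, 0)   # processor immediately above
```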
### II.2 Klein Bottles and Processor Mapping
Figure 1: Immersion of a two-dimensional Klein bottle in three-dimensional
space.
The novel boundary condition presented here is based on the topology of a
Klein bottle, a two-dimensional non-orientable manifold. (The perhaps-more-
familiar Möbius strip is another example of a non-orientable manifold.) A
Klein bottle is a surface with no distinct front or back side that can be
obtained by identifying and gluing together specific edges of a rectangle in a
non-trivial way, resulting in a structure that cannot be embedded smoothly in
three-dimensional space without self-intersections or singularities (see
Figure 1 for a representation). Higher-dimensional analogues of Klein bottles
can be similarly constructed. For example, mapping the edges of a rectangular
prism in a particular way produces a system homeomorphic to a Klein geometry
in $\mathbb{R}^{4}$.
Figure 2: Topological mappings of (i) a torus (ii) a Klein bottle (iii) a
phase-shifted Klein bottle. The horizontal boundaries are periodic. Squares
represent processors and those of the same color abut in the vertical
direction. The arrow demonstrates how a vector transforms across the boundary.
In numerical simulations, the identification and gluing-together process is
equivalent to describing how processors abut one another when crossing a
boundary. This process is given a pictorial representation in Figure 2. In
this visualization, small squares represent individual processors; squares
painted the same color neighbor one another in the vertical direction. (Only
the top and bottom boundaries are of interest, as the horizontal direction is
assumed to be periodic in each panel.) In the configuration shown, with
$P=P_{x}\times P_{y}\times P_{z}=16\times 8\times 1=128$, the processors are
numbered from $0$ to $127$, with those on the bottom row having
$P\in\\{0,1,...15\\}$. The fact that the doubly periodic boundary conditions
described in Section I are topologically equivalent to a torus is demonstrated
in the first panel. Travelling downward from the bottom row brings one to the
top row, where the processors have $P\in\\{112,113,...127\\}$. In other words,
processors next to the boundary (on either side) with triplets of general form
$(i,j,0)$ border processors with triplets of the form $(i,P_{y}-(j+1),0)$. The
overplotted arrow illustrates how a vector transforms when moving between the
top and bottom rows of processors. Note that interpreting the arrow as a
magnetic field vector makes it clear why a system with a single current sheet
– across which the field must switch direction – cannot use these boundary
conditions.
To map a two-dimensional sheet to a Klein bottle, one connects the left and
right sides together as usual. However, processors on the top and bottom
boundaries now adjoin counterparts at mirrored positions on the $x$-axis, as
shown in Figure 2(ii). A processor on the top or bottom boundary with triplet
of the form $(i,j,0)$ borders a processor with triplet
$(P_{x}-(i+1),P_{y}-(j+1),0)$. (Note the set of neighboring gridpoints within
processors are also mirrored, so that the leftmost gridpoint on, for example,
the bottom row of processor 1 is considered to be directly above the rightmost
gridpoint in the top row of processor 126.) As a consequence, vectors flip
orientation across the boundary, as shown by the overplotted arrow.
Implementation of this boundary condition requires several algorithmic
changes. Processors on the top and bottom rows have a new set of neighboring
processors and the $x$ and $z$ components of any vector quantity, e.g.,
particle velocities and electromagnetic fields, must be flipped across the
boundary. (In practice, processors in p3d interact with neighbors through
guard cells. Any changes to the signs of gridded quantities required by the
boundary conditions occurs when these cells are updated.) This means that only
one current sheet needs to be included in the domain, as the reversal in
$B_{x}$ is naturally accommodated.
However, the basic Klein boundary condition shown in Figure 2(ii) is not ideal
for reconnection simulations. The reason becomes clear after consideration of
the magnetic configuration at late times. Reconnection proceeding at the
X-line (assumed to be at the center of the domain), produces an island of
magnetic flux centered on the left/right (they are equivalent) boundary. This
island grows, eventually becoming large enough to approach the top and bottom
boundaries. In runs employing the GEM Challenge boundary condition, this
island presses against the conducting walls, producing a back-pressure that
resists further reconnection and eventually stifles the simulation. The same
issue bedevils the Klein boundary condition, although in that case the back-
pressure is not provided by conducting walls but by the island itself – growth
of the island in, for instance, the bottom-left of the domain, presses against
the island in the upper-right.
Fortunately, a straightforward adjustment that mimics the behavior of doubly
periodic boundary conditions can alleviate this issue by, in essence, creating
self-driving reconnection. Arranging for the island growing on the left and
right sides of the domain to emerge from the top and bottom boundaries in the
middle of the domain extends the time over which reconnection can proceed. The phase
shift is produced by displacing the set of neighboring processors for either
the top or bottom boundary by $L_{x}/2$, forming the configuration displayed
in Figure 2(iii). Returning to the example of a $16\times 8$-processor domain,
each processor with $P\in\\{0,1,\dots 7\\}$ now neighbors the corresponding
processor $P\in\\{119,118,\dots 112\\}$, and vice versa, while each processor
$P\in\\{8,9,\dots 15\\}$ abuts the corresponding processor
$P\in\\{127,126,\dots,120\\}$. More generally, a processor on the border with
triplet $(i,j,0)$ directly borders a processor with triplet
$([P_{x}/2-(i+1)]\,\bmod P_{x},P_{y}-(j+1),0)$. As with the unshifted Klein
boundaries, the set of neighboring gridpoints are again mirrored across the
boundary so, for example, the leftmost gridpoint in the bottom row of
processor 1 is considered to be directly above the rightmost gridpoint of
processor 118.
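The three vertical-neighbor rules of Figure 2 can be collected into one small function; the following sketch is our own illustration (not p3d source code) and reproduces the $16\times 8$ examples quoted above:

```python
def vertical_neighbor(i, j, Px, Py, topology):
    """Processor triplet across the top/bottom boundary from (i, j, 0)."""
    jn = Py - (j + 1)
    if topology == "torus":                        # Fig. 2(i): doubly periodic
        return (i, jn, 0)
    if topology == "klein":                        # Fig. 2(ii): mirrored in x
        return (Px - (i + 1), jn, 0)
    if topology == "shifted_klein":                # Fig. 2(iii): mirrored in x
        return ((Px // 2 - (i + 1)) % Px, jn, 0)   # and displaced by Lx/2
    raise ValueError(topology)

# 16 x 8 examples from the text (overall label = px + py * Px):
px, py, _ = vertical_neighbor(0, 0, 16, 8, "shifted_klein")
assert px + py * 16 == 119                 # processor 0 abuts processor 119
px, py, _ = vertical_neighbor(8, 0, 16, 8, "shifted_klein")
assert px + py * 16 == 127                 # processor 8 abuts processor 127
```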
## III Simulation Results
We now present results from simulations using the various types of boundary
conditions discussed above. The computational domain measures $L_{x}\times
L_{y}=51.2d_{p}\times 12.8d_{p}$ for the GEM Challenge and shifted Klein cases
and is doubled in $L_{y}$ for the doubly periodic case. (In other words, each
reconnecting current sheet has the same aspect ratio in all runs.) The
reconnecting component of the magnetic field has the initial form shown in
either equation 1 or 2 while the density has the asymptotic value of
$0.2n_{0}$ and varies as necessary to ensure force balance in the $y$
direction. Section III.1 considers the case where the asymptotic fields are
anti-parallel, as in the original GEM Challenge, while Section III.2 adds a
$B_{z}$ component that reduces the shear angle between the asymptotic fields.
### III.1 Anti-Parallel Reconnection
Figure 3 shows a comparison between runs with the GEM Challenge and the
shifted Klein boundary conditions with the three panels in each column
depicting the simulations at $t\Omega_{cp}=20$, $40$, and $60$, respectively.
In each, the out-of-plane current density is shown in color and is overplotted
with magnetic field lines. For the GEM Challenge boundaries (left column), the
islands of reconnected magnetic flux grow freely at first (top panel) until
they begin to interact with the walls at the top and bottom of the domain
(middle panel). The increasing pressure in the island exerts a growing force
on plasma ejected from the X-point and suppresses further reconnection (bottom
panel).
Figure 3: Out-of-plane electron current density and magnetic field lines at
$t\Omega_{cp}=20$, $40$, and $60$ for the GEM Challenge boundary conditions
(left) and the Klein boundary conditions (right).
In contrast, the panels in the right column of Figure 3 show results from an
otherwise-identical simulation that employs the shifted Klein boundary
conditions described above. By the time of the middle panel the effects of the
boundary conditions are already noticeable in the magnetic field lines since,
unlike in the GEM Challenge simulation, they are free to pass through the top
and bottom boundaries. By late time (bottom panel), the islands of reconnected
flux have grown substantially and, due to the shift at the boundaries,
continue to drive reconnection at the X-line.
The lack of hard walls at the top and bottom boundaries makes the Klein
boundary conditions similar to the doubly-periodic case. Figure 4 presents a
comparison of several quantities at $t\Omega_{cp}=60$, a time of robust
reconnection, from runs with both boundary conditions. The panels from the
Klein boundaries (left column) show the entire domain, while those from the
doubly periodic case (right column) show only one of the two current sheets.
The panels depict, from top to bottom, $J_{ez}$, $v_{ex}$, $v_{ix}$, $B_{z}$,
and $E_{y}$. Each horizontal pair of images uses the same color scale and so
may be directly compared. The two columns are quite similar, suggesting that
the Klein boundary conditions do not significantly affect the evolution of the
system.
Figure 4: Comparison of results with the Klein (left column) and doubly
periodic (right column) boundary conditions at $t\Omega_{cp}=60$. (A)-(B):
Out-of-plane electron current density ($J_{ez}$); (C)-(D) Horizontal electron
velocity ($v_{ex}$); (E)-(F): Horizontal ion velocity ($v_{ix}$); (G)-(H):
$B_{z}$; (I)-(J): $E_{y}$
We next compare all three runs in Figure 5, which plots the reconnected flux,
$\Delta\psi$, and the reconnection rate, $d(\Delta\psi)/dt$, versus time for
the different cases: GEM Challenge in blue dashes, doubly periodic in red dot-
dashes, and shifted Klein in black. Contours of $\psi$ are the magnetic field
lines shown in Figures 3 and 4; in the first and third cases, $\Delta\psi$
is simply the difference in $\psi$ between the X- and O-points. In the doubly
periodic case the curve plots the average of the differences observed in the
two current sheets. Each of the simulations exhibit an initial overshoot where
the reconnection rate spikes to $\approx 0.25$, but in the GEM Challenge case
that spike is followed by a steady decline, with reconnection essentially
ending after $t\Omega_{cp}\approx 50$ when $\Delta\psi$ plateaus. In contrast,
the doubly periodic and shifted Klein runs both settle into a state with
reasonably constant reconnection between $t\Omega_{cp}=40$ and
$t\Omega_{cp}=80$ with $d(\Delta\psi)/dt\approx 0.15$. The large excursions
near $t\Omega_{cp}=60$ correspond to the time when the once-reconnected field
lines surrounding the magnetic islands interact with an X-point a second time,
either that on the opposite current sheet for the doubly periodic case or the
original one for the shifted Klein (see Figure 3). It is interesting to note
that the reconnection rate remains roughly unchanged after these fluctuations.
The elapsed time (measured on a clock) for the three computations differed by
less than $3\%$. However, the GEM Challenge and shifted Klein cases were run
on the same number of processors, while the doubly periodic case, for which
the computational domain was doubled in size, used twice as many. Hence, while
the doubly periodic and shifted Klein cases found very similar physical
results, the former incurred twice the computational expense of the latter.
(For the relatively small simulations considered here, p3d exhibits near-
perfect weak scaling and so any extra overhead incurred by doubling the number
of processors is minimal.)
Figure 5: Reconnected flux (top), $\Delta\psi$, and reconnection rate
(bottom), $d(\Delta\psi)/dt$ versus time for the GEM Challenge (blue dash),
doubly periodic (red dot-dash), and shifted Klein (black) boundary conditions.
### III.2 Guide Field Reconnection
Adding a spatially constant guide field, $B_{g}$ to the initial conditions of
the GEM Challenge and doubly periodic cases is straightforward. The Klein case
is more subtle. Recall that the $x$ and $z$ components of vectors are flipped
when crossing the Klein boundary. As a consequence, a spatially constant guide
field would exhibit a discontinuous jump – a gridpoint at the top or bottom
boundary with $B_{z}=B_{g}$ would be adjacent to one with $B_{z}=-B_{g}$. Such
a configuration would generate a large current and lead to instability. This
behavior arises because Klein bottles are non-orientable, which means there is
ambiguity in the definition of "out of the page" and "into the page". (This is
perhaps most easily seen with the Möbius strip. If one takes an arrow pointing
normal to the surface and translates it once around the strip to the starting
location, the head will point in the opposite direction.)
To sidestep this obstacle, we can slightly modify the boundary condition.
Write $B_{z}$ as the sum of a constant background $B_{z0}$ and a perturbation:
$B_{z}=B_{z0}+\delta B_{z}$. Then, rather than apply the Klein boundary to the
full magnitude, only change the sign of the perturbation $\delta B_{z}$ when
it crosses the boundary. Note that in the run discussed in Section III.1,
$B_{z0}=0$ so this procedure is identical to modifying the full $B_{z}$. It
might be argued that, for consistency, such a modification should be applied
to any $z$ component (e.g., $E_{z}$ or $J_{z}$), but in the initial states
considered here all other quantities have no uniform component and so making
such an adjustment would have no effect.
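In terms of guard-cell updates, the modification amounts to flipping only the perturbed part of $B_{z}$; a schematic version (our own abstraction, assuming the x-mirrored boundary row has already been received from the neighboring processor) is:

```python
import numpy as np

def fill_bz_guard_cells(Bz_neighbor_row, Bz0):
    """Guard-cell values of B_z across a Klein boundary with guide field Bz0.

    Only the perturbation delta B_z = B_z - Bz0 changes sign across the
    non-orientable boundary; the uniform guide field is carried through
    unchanged, avoiding an artificial current sheet at the boundary.
    """
    delta_Bz = Bz_neighbor_row - Bz0
    return Bz0 - delta_Bz      # equivalently 2*Bz0 - Bz_neighbor_row
```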
We now compare results from three simulations with $B_{g}=1$. Figure 6 shows
the evolution with the GEM Challenge boundary conditions (left) and with the
Klein boundaries incorporating the modification discussed above (right). As is
typical, the addition
of a guide field breaks a symmetry of the system and produces differences
between the separatrices, while the reconnection layer becomes more likely to
form plasmoids, as can be seen in the middle panels. As with the anti-parallel
case shown in Figure 3, the GEM Challenge system is restricted by the walls
and stagnates by the last panel. The Klein boundaries again allow flux to pass
through them as can be seen by the field lines intersecting the top and bottom
of the domain.
A comparison between the Klein and doubly periodic runs shows more significant
differences than in the anti-parallel case. Figure 7 compares the two in the
same format as Figure 4, but at an earlier time, $t\Omega_{cp}=45$. The reason
for this choice will be discussed below. The plasmoids in the Klein case
distort the reconnecting layer, but the overall structure is generally
similar. (Note that the color bar is the same for each pair of images so
direct comparisons can be made.) The largest differences come in panels G and
H, which show $B_{z}$ in the two runs. While the fields in the islands of
reconnected flux are roughly comparable, bands of low-strength (blue) field
are apparent just upstream of the current sheet in the doubly periodic case.
These features represent the last vestiges of unreconnected plasma being
pushed towards the X-line by an island expanding on the other (unseen) sheet.
These features do not appear in the Klein boundary case, although hints are
visible at the top and bottom of the domain. Still, the boundary
conditions are clearly beginning to retard the inflow.
The effects of the various boundary conditions can also be seen in Figure 8,
which shows the reconnected fluxes and reconnection rates for the three runs
in the same format as Figure 5. After the peak at $t\Omega_{cp}\approx 25$,
the GEM Challenge run essentially stagnates, with the reconnection rate
falling to zero. The other two cases exhibit a similar earlier peak and
decline, but both have a later period of increasingly fast reconnection with
the Klein case peaking at $t\Omega_{cp}\approx 45$, the time shown in Figure
7. After this point, however, the Klein system stagnates while the doubly
periodic system continues to enjoy robust reconnection. It is worth noting
that simulations are usually halted once current sheets begin interacting with
each other, so the lack of agreement after $t\Omega_{cp}\approx 50-55$ is not
significant from a practical standpoint.
The elapsed times for the three computations were again similar, differing by
less than $4\%$. But, as previously, the doubly periodic case incurred twice
the computational expense due to its larger domain.
Figure 6: Out-of-plane electron current density and magnetic field lines at
$t\Omega_{cp}=20$, $40$, and $60$ for the GEM Challenge boundary conditions
(left) and the Klein boundary conditions (right) in runs with $B_{g}=1$.
Figure 7: Comparison of results with the Klein (left column) and doubly
periodic (right column) boundary conditions at $t\Omega_{cp}=45$ in runs with
$B_{g}=1$. (A)-(B): Out-of-plane electron current density ($J_{ez}$); (C)-(D)
Horizontal electron velocity ($v_{ex}$); (E)-(F): Horizontal ion velocity
($v_{ix}$); (G)-(H): $B_{z}$; (I)-(J): $E_{y}$ Figure 8: Reconnected flux
(top), $\Delta\psi$, and reconnection rate (bottom), $d(\Delta\psi)/dt$ versus
time for the GEM Challenge (blue dash), doubly periodic (red dot-dash), and
shifted Klein (black) boundary conditions in runs with a guide field
$B_{g}=1$.
## IV Discussion
We have shown that a novel boundary condition based on the topology of a Klein
bottle can be used in numerical simulations of magnetic reconnection. The new
boundary condition combines the advantages of other common options by being
both self-reinforcing (as in systems with doubly periodic boundary conditions)
and computationally compact (as seen in the single current sheet of the GEM
Challenge boundaries).
The extension of Klein boundary conditions to full three-dimensional (3x3v)
simulations should be straightforward. The additional dimension can be
physically pictured as adding a "thickness" to the Klein bottle in Figure 1.
From a computational perspective, the most significant change is to the
identification of neighboring processors across the Klein boundary.
Neighboring processors in $z$ must change in the manner shown in panel (ii) of
Figure 2 – recall that the $z$ component of vectors flips across the Klein
boundary – but do not have to undergo the shift of panel (iii).
However, regardless of the dimensionality, Klein boundary conditions do have
limitations. Symmetric reconnection, in which the asymptotic plasmas on the
two sides of the current sheet share the same characteristics, is an
idealization, albeit a common one, that sidesteps many of the complexities of
actual systems. In reality, asymmetries can exist in any or all of the plasma
density, plasma temperature, strength of the reconnecting component of the
field, strength of the guide field, etc., and at some locations (e.g., the
terrestrial magnetopause), reconnection is nearly always asymmetric. Such
complications are straightforward to include with GEM Challenge or doubly
periodic boundary conditions because the two sides of a reconnecting current
sheet represent distinct plasmas. For Klein boundaries there is no distinction
between the two sides, and so simulating asymmetric reconnection while
maintaining the associated benefits (low computational cost and non-
stagnation) is not straightforward.
Finally, the topological equivalence of the doubly periodic system to a torus
implies the existence of certain conservation laws. Specifically, consider the
surface integral of Faraday’s Law, $\partial\mathbf{B}/\partial t=K\,\bm{\nabla}\times\mathbf{E}$, where the proportionality constant $K$
depends on the choice of units. Performing a surface integral and invoking
Stokes’ theorem transforms the right-hand side into a boundary integral, which
vanishes because a torus has no boundary. Hence, the magnetic flux through the
plane of the simulation must be constant, which can be verified numerically. A
Klein bottle also has no boundary, but it is non-orientable and so Stokes’
theorem does not apply. As a consequence, the magnetic flux through the plane
of the simulation can, and does, change when Klein boundary conditions are
employed. While this difference does not appear to affect the dynamics of
reconnection, further research will be necessary to explore whether it has
physical significance.
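The numerical verification mentioned above reduces to tracking the discretized surface integral of $B_{z}$ over time; schematically (array and function names are our own, not p3d conventions):

```python
import numpy as np

def total_flux(Bz, dx, dy):
    """Magnetic flux through the simulation plane, Phi = sum(B_z) * dx * dy.

    With doubly periodic (toroidal) boundaries Phi is conserved in time;
    with Klein boundaries it can, and does, change.
    """
    return np.sum(Bz) * dx * dy
```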
###### Acknowledgements.
The authors acknowledge the support of NASA grants 80NSSC22K0352,
18-DRIVE18_2-0029 ("Our Heliospheric Shield"), and 80NSSC22M0164. The authors also
acknowledge UMD’s TREND program, funded through NSF PHY-2150399. The authors
would like to thank P. R. Colarco and E. A. Colarco for providing visual
inspiration. The simulations were carried out at the National Energy Research
Scientific Computing Center (NERSC). The data used to perform the analysis and
construct the figures for this paper are preserved at
https://doi.org/10.5281/zenodo.12785961. Further data from the associated runs
are stored at the NERSC High Performance Storage System and are available upon
request.
## References
* Harris [1962] E. G. Harris, Nuovo Cim. 23, 115 (1962).
* Birn _et al._ [2001] J. Birn, J. F. Drake, M. A. Shay, B. N. Rogers, R. E. Denton, M. Hesse, M. Kuznetsova, Z. W. Ma, A. Bhattacharjee, A. Otto, and P. L. Pritchett, J. Geophys. Res. 106, 3715 (2001).
* Horiuchi, Pei, and Sato [2001] R. Horiuchi, W. Pei, and T. Sato, Earth Planets Space 53, 439 (2001).
* Daughton, Scudder, and Karimabadi [2006] W. Daughton, J. Scudder, and H. Karimabadi, Phys. Plasmas 13, 072101 (2006), 10.1063/1.2218817.
* Divin _et al._ [2007] A. V. Divin, M. I. Sitnov, M. Swisdak, and J. F. Drake, Geophys. Res. Lett. 34, L09109 (2007), 10.1029/2007GL029292.
* Roytershteyn _et al._ [2010] V. Roytershteyn, W. Daughton, S. Dorfman, Y. Ren, H. Ji, M. Yamada, H. Karimabadi, L. Yin, B. J. Albright, and K. J. Bowers, Phys. Plasmas 17 (2010), 10.1063/1.3399787.
* Zeiler _et al._ [2002] A. Zeiler, D. Biskamp, J. F. Drake, B. N. Rogers, M. A. Shay, and M. Scholer, J. Geophys. Res. 107, 1230 (2002).
# Universal Hydrodynamic Transport Times
in the Normal Phase of a Unitary Fermi Gas
Xiang Li, J. Huang, and J. E. Thomas Department of Physics, North Carolina
State University, Raleigh, NC 27695, USA
###### Abstract
We determine the hydrodynamic relaxation times $\tau_{\eta}$ for the shear
viscosity and $\tau_{\kappa}$ for the thermal conductivity in the normal phase
of a unitary Fermi gas confined in a box potential. Using a kinetic theory
relaxation model, we extract $\tau_{\eta}$ and $\tau_{\kappa}$ from the time-
dependent free-decay of a spatially periodic density perturbation, yielding
the universal density-shift coefficients for the shear viscosity and thermal
conductivity.
Measurements of the universal hydrodynamic transport properties of a unitary
Fermi gas connect ultracold atoms to nuclear matter [1, 2, 3] and provide new
challenges to theoretical predictions [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
15]. A unitary Fermi gas is a strongly interacting, scale-invariant, quantum
many-body system, created by tuning a trapped, two-component cloud near a
collisional (Feshbach) resonance [16]. At resonance, the thermodynamic and
transport properties are universal functions of the density and temperature
[17], permitting parameter-free comparisons with predictions. Early
measurements on expanding Fermi gas clouds with nonuniform density [18, 19]
have made way for new measurements in optical box potentials [20], where the
density is nearly uniform [21, 22, 23, 24, 25, 26]. For a unitary Fermi gas,
the second bulk viscosity vanishes, as predicted for scale-invariant systems
[27, 28] and demonstrated in experiments on conformal symmetry [29]. Hence, at
temperatures above the superfluid transition [30], the normal phase of a
unitary gas affords the simplest universal system for hydrodynamic transport
measurements, as the static transport properties comprise only the shear
viscosity $\eta$ and the thermal conductivity $\kappa_{T}$.
Remarkably, the measured shear viscosity and thermal conductivity in the
normal phase are well fit by the simple expressions [23, 31],
$\eta=\frac{15}{32\sqrt{\pi}}\frac{(mk_{B}T)^{3/2}}{\hbar^{2}}+\alpha_{2\eta}\,\hbar
n_{0}$ (1)
and
$\kappa_{T}=\frac{15}{4}\frac{k_{B}}{m}\,\eta(\alpha_{2\eta}\rightarrow\alpha_{2\kappa})$
(2)
with $k_{B}$ the Boltzmann constant and $m$ the atom mass. The density shift
coefficients $\alpha_{2\eta}$ and $\alpha_{2\kappa}$ are fit parameters. Here,
the temperature $T$ and density $n_{0}$ contributions can be understood by
dimensional analysis. For the shear viscosity, with a dimension of
momentum/area, we expect $\eta\propto\hbar/L^{3}$, with $L$ a length scale. At
high temperature, $L\rightarrow\lambda_{T}$, the thermal de Broglie wavelength
$\propto T^{-1/2}$. At lower temperature, where the cloud is degenerate,
$1/L^{3}=n_{0}$. For both $\eta$ and $\kappa_{T}$, the leading $T^{3/2}$
dependence is obtained by variational calculations for a unitary gas in the
two-body Boltzmann limit [4, 12, 6]. Density shifts can be shown to arise from
Pauli blocking in an ideal Fermi gas, but are reduced by in medium effects in
a unitary Fermi gas [13]. In contrast to the $T^{3/2}$ coefficients, the
density shift coefficients $\alpha_{2\eta}$ and $\alpha_{2\kappa}$ are unknown
universal constants in the early stages of measurement, with measured values
that are sensitive to the finite hydrodynamic relaxation times.
Figure 1: A unitary Fermi gas is loaded into a repulsive box potential created
by two digital micromirror devices DMDs (top,right). A third DMD (bottom)
generates a static spatially periodic perturbation $\delta U$ with an
adjustable wavelength, creating spatially periodic 1D density profiles (left).
After $\delta U$ is abruptly extinguished, the dominant Fourier component
exhibits an oscillatory decay (see Fig. 2).
In this work, we employ a hydrodynamic relaxation model based on kinetic
theory to extract the universal hydrodynamic relaxation times $\tau_{\eta}$
and $\tau_{\kappa}$ from the measured time-dependent free-decay of a spatially
periodic density perturbation in a unitary Fermi gas. The extracted relaxation
times determine the static shear viscosity and thermal conductivity, yielding
the two universal density shift parameters $\alpha_{2\eta}$,
$\alpha_{2\kappa}$, corrected for the finite response time over which the
viscous force and heat current relax to their Navier-Stokes forms.
The experiments employ ultracold $^{6}$Li atoms in a balanced mixture of the two
lowest hyperfine states, which are evaporatively cooled in a CO$_{2}$ laser trap
and loaded into a box potential, Fig. 1, producing a sample of nearly uniform
density $n_{0}$. The box comprises six sheets of blue-detuned light, created
by two digital micromirror devices (DMDs) [23, 21] (top and right). The box
potential $U_{0}(\mathbf{r})$ yields a rectangular density profile with
typical dimensions $(x,y,z)=(52\times 50\times 150)\,\mu$m. The density varies
slowly in the direction of the long ($z$) axis, due to the harmonic confining
potential $\propto z^{2}$ arising from the curvature of the bias magnetic
field, which has little effect on the shorter $x$ and $y$ axes. The typical
total central density is $n_{0}=3.3\times 10^{11}$ atoms/cm$^{3}$, with the Fermi
energy $\epsilon_{F}\equiv k_{B}T_{F}=k_{B}\times 0.18\,\mu$K and Fermi
speed $v_{F}\simeq 2.25$ cm/s. The box depth $U_{0}\simeq 0.75\,\mu$K (see
Ref. [23]).
We add a third DMD, Fig. 1 (bottom), to independently project a static optical
potential $\delta U$ that varies spatially with an adjustable wavelength
$\lambda$ along one axis, creating a sinusoidal density perturbation. Once
equilibrium is established, the perturbing potential is abruptly extinguished,
causing an oscillatory decay of the measured density perturbation $\delta
n(z,t)=n(z,t)-n_{0}(z)$. By performing a fast Fourier transform (FFT) of
$\delta n(z,t)$ at each time, in a region containing an integer number
(typically 3-4) of spatial periods, we obtain $\delta n(q,t)$, Fig. 2.
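The extraction of $\delta n(q,t)$ can be sketched as follows (synthetic profile for illustration; the analysis window is assumed to contain an integer number of periods so that $q=2\pi/\lambda$ falls exactly on an FFT bin):

```python
import numpy as np

def fourier_amplitude(delta_n, dz, wavelength):
    """Amplitude of the Fourier component of delta_n(z) at q = 2*pi/wavelength.

    delta_n is sampled on a uniform grid of spacing dz whose length covers an
    integer number of spatial periods of the perturbation.
    """
    N = delta_n.size
    k = np.fft.rfftfreq(N, d=dz)               # spatial frequencies q / (2*pi)
    amps = np.fft.rfft(delta_n) / N
    idx = np.argmin(np.abs(k - 1.0 / wavelength))
    return 2.0 * np.abs(amps[idx])             # one-sided amplitude

# Synthetic check: 3 periods of a 40-micron perturbation on a 120-micron window.
z = np.linspace(0.0, 120.0, 480, endpoint=False)
dn = 0.05 * np.cos(2 * np.pi * z / 40.0)
print(fourier_amplitude(dn, z[1] - z[0], 40.0))    # ~0.05
```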
Figure 2: Fourier component of the density perturbation $\delta n(q,t)$ with
$q=2\pi/\lambda$, for wavelengths $\lambda=40.0\,\mu$m, $31.3\,\mu$m, and
$22.7\,\mu$m, at reduced temperatures $T/T_{F}=0.42$, $0.36$, and $0.32$,
respectively. Dots (data); Curves: hydrodynamic relaxation time model. The
error bars are the standard deviation of the mean of $\delta n(q,t)$ for 5-8
runs, taken in random time order.
As shown previously [23], $\delta n(q,t)$ contains a thermally diffusive mode
($\simeq 30$%) that decays at a rate $\propto\kappa_{T}$ and an oscillating
first sound mode, which decays at a rate dependent on both $\eta$ and
$\kappa_{T}$. The relaxation model determines the response of the viscous
force and heat current at the first sound frequency in the sound mode, as well
as the response of the heat current at the decay rate in the thermally
diffusive mode.
We derive the relaxation model in the linear response regime by constructing
four coupled equations: two describe the changes in the density $\delta
n(z,t)$ and temperature $\delta T(z,t)$, and two describe the relaxation of
the viscous force and heat current. We ignore the box potential, since we
measure the free-decay over time scales that avoid perturbing $\delta n(z,t)$
in the measured central region by reflections from the boundaries. After the
perturbing potential is extinguished, the density change obeys [31],
$\delta\ddot{n}=c_{T}^{2}\,\partial_{z}^{2}(\delta n+\delta\tilde{T})+\delta
Q_{\eta}\,.$ (3)
The first term in Eq. 3 arises from the pressure change with $c_{T}$ the
isothermal sound speed, and
$\delta\dot{Q}_{\eta}+\frac{1}{\tau_{\eta}}\,\delta
Q_{\eta}=\frac{4}{3}\,\frac{p}{mn_{0}}\,\partial_{z}^{2}\delta\dot{n}$ (4)
describes the relaxation of the viscous damping force [31]. Here the pressure
is $p=\frac{2}{5}n\epsilon_{F}f_{E}(\theta)$, where the universal function
$f_{E}(\theta)$ has been measured [30], $n_{0}$ is the background density, and
we have used the continuity equation to eliminate the velocity field. For fast
relaxation, Eq. 4 yields the usual Navier-Stokes form for $\delta Q_{\eta}$ in
Eq. 3 with $\eta=\tau_{\eta}p$ the static shear viscosity, independent of the
single particle phase space distribution [31].
In Eq. 3, we have defined a scaled temperature,
$\delta\tilde{T}=n_{0}\beta\,\delta T$ with a dimension of density, where
$\beta=-1/n(\partial n/\partial T)_{P}$ is the thermal expansivity [31]. We
find
$\delta\dot{\tilde{T}}=\epsilon_{LP}\,\delta\dot{n}+\delta Q_{\kappa}\,.$ (5)
The first term describes the adiabatic change in the temperature with
$\epsilon_{LP}\equiv c_{P_{1}}/c_{V_{1}}-1$ the Landau-Placzek parameter. The
heat capacities per particle at constant volume $c_{V_{1}}$ and at constant
pressure $c_{P_{1}}$ are determined by the measured equation of state
$f_{E}(\theta)$ [30, 31] and
$\delta\dot{Q}_{\kappa}+\frac{1}{\tau_{\kappa}}\,\delta
Q_{\kappa}=\frac{5}{2}\frac{k_{B}}{m}\frac{p}{n_{0}c_{V_{1}}}\,\partial_{z}^{2}\delta\tilde{T}$
(6)
describes the relaxation of the heat current. For fast relaxation, Eq. 6
yields the usual heating rate $\delta Q_{\kappa}$ in Eq. 5 with
$\kappa_{T}=\frac{5}{2}\frac{k_{B}}{m}\tau_{\kappa}p$ the static thermal
conductivity. Here, the factor $5/2$ is dependent on a Maxwell-Boltzmann
approximation for the single particle phase space distribution [31].
A spatial Fourier transform of Eqs. 3-6 yields coupled linear equations for
$\delta\ddot{n}(q,t)$, $\delta\dot{\tilde{T}}(q,t)$,
$\delta\dot{Q}_{\eta}(q,t)$, and $\delta\dot{Q}_{\kappa}(q,t)$ with
$q=2\pi/\lambda$. As the system is initially in mechanical equilibrium and
isothermal, only $\delta n(q,0)\equiv A\neq 0$. Fig. 2 shows fits of the
relaxation model (solid curves) to typical data (scaled by $1/A$), where the
fit parameters are $c_{T}q,\tau_{\eta},\tau_{\kappa}$, and the amplitude $A$.
As in our previous work [23], the wavelength of the perturbation and the fit
frequency $c_{T}q$ self-consistently determine the sound speed $c_{T}$ and the
corresponding reduced temperature $T/T_{F}=\theta(c_{T}/v_{F})$ from
$f_{E}(\theta)$, with the Fermi speed $v_{F}$ given for the average central
density $n_{0}$ [31].
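The self-consistent step amounts to inverting a known monotonic relation between $\theta$ and $c_{T}/v_{F}$. A minimal sketch, assuming a tabulated map $\theta\mapsto c_{T}/v_{F}$ derived from $f_{E}(\theta)$ (the callable below is a hypothetical stand-in for the equation-of-state analysis of Refs. [30, 31]):

```python
from scipy.optimize import brentq

def reduced_temperature(cT, vF, ct_over_vf, theta_lo=0.2, theta_hi=1.0):
    """Invert theta -> c_T/v_F to obtain T/T_F from the fitted sound speed.

    ct_over_vf : callable returning c_T/v_F as a function of theta,
                 tabulated from the measured f_E(theta) (illustrative).
    """
    return brentq(lambda th: ct_over_vf(th) - cT / vF, theta_lo, theta_hi)
```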
Figure 3: Hydrodynamic relaxation times in units of the Fermi time
$\tau_{F}\equiv\lambda_{F}/v_{F}=\pi\hbar/\epsilon_{F}$ versus reduced
temperature $T/T_{F}$.
a) $\tau_{\eta}$ for the shear viscosity (blue circles). b) $\tau_{\kappa}$
for the thermal conductivity (blue squares). Red curves include the density
shift coefficients $\alpha_{2\eta}=0.45$ and $\alpha_{2\kappa}=0.22$. Red
dashed curves: High temperature limits, where $\alpha_{2\eta}=0$ and
$\alpha_{2\kappa}=0$ and $\tau_{\kappa}/\tau_{\eta}=3/2$. (c,d) Wavelength
dependence for $\theta\simeq 0.30$. Error bars are statistical [32]. (color
online)
Our extracted relaxation times $\tau_{\eta}$ for the shear viscosity and
$\tau_{\kappa}$ for the thermal conductivity are shown as functions of
$\theta=T/T_{F}$ in Fig. 3. The relaxation times are given in units of the
Fermi time $\tau_{F}\equiv\lambda_{F}/v_{F}=\pi\hbar/\epsilon_{F}\simeq
120\,\mu$s. The wavelength dependence of $\tau_{\eta}$ and $\tau_{\kappa}$ is
shown for $\theta\simeq 0.30$, demonstrating negligible $\lambda$-dependence
as expected. As discussed below, the fitted density shift coefficients
$\alpha_{2\eta}>\alpha_{2\kappa}$ make $\tau_{\kappa}/\tau_{\eta}\simeq 1.2$ for
$\theta\simeq 0.4$, deviating significantly from the high temperature limit
(dashed red curves), where $\tau_{\kappa}/\tau_{\eta}=3/2$ [13, 31].
The measured $\tau_{\eta}$ determines the static shear viscosity
$\eta=\tau_{\eta}\,p$ shown in Fig. 4 a (Blue circles). Eq. 1, in units of
$\hbar n_{0}$, gives
$\eta(\theta)=\alpha_{3/2}\,\theta^{3/2}+\alpha_{2\eta}\,,$ (7)
where $\alpha_{3/2}=\frac{45\pi^{3/2}}{64\sqrt{2}}\simeq 2.77$ [4, 12, 6]. The
red curve in Fig. 4 a shows the fit for $\eta(\theta)$ with the density shift
coefficient $\alpha_{2\eta}$ as the only fit parameter, yielding
$\alpha_{2\eta}=0.45(04)$. Fig. 3 a shows the corresponding curve for
$\tau_{\eta}=\eta/p$, with $p=\frac{2}{5}n\epsilon_{F}\\!f_{E}(\theta)$. The
red-dashed curve in Fig. 4 a is the high temperature limit,
$\alpha_{2\eta}=0$.
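Since $\alpha_{3/2}$ is fixed by theory, the one-parameter fit of Eq. 7 reduces to a weighted average of the residuals after subtracting the high-temperature term. A minimal sketch (variable names are ours, not the authors' code):

```python
import numpy as np

ALPHA_32 = 45.0 * np.pi**1.5 / (64.0 * np.sqrt(2.0))  # ~2.77, fixed [4, 12, 6]

def fit_density_shift(theta, eta, sigma):
    """Weighted one-parameter fit of Eq. 7 in units of hbar*n0:
    eta(theta) = ALPHA_32 * theta**1.5 + alpha_2eta.
    theta, eta, sigma: arrays of reduced temperatures, measured
    viscosities, and statistical errors."""
    w = 1.0 / sigma**2
    resid = eta - ALPHA_32 * theta**1.5
    alpha_2eta = np.sum(w * resid) / np.sum(w)
    return alpha_2eta, 1.0 / np.sqrt(np.sum(w))  # value and 1-sigma error
```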
Figure 4: Transport properties obtained from the measured transport times
$\tau_{\eta}$ and $\tau_{\kappa}$ versus reduced temperature $\theta=T/T_{F}$.
a) Shear viscosity (Blue circles). b) Thermal conductivity (Blue squares). c)
First sound diffusivity (Blue Triangles). Red solid curves include the density
shift coefficients $\alpha_{2\eta}=0.45$ in Eq. 7 and $\alpha_{2\kappa}=0.22$
in Eq. 8. Red-dashed curves are the high temperature limits, where
$\alpha_{2\eta}=0$ and $\alpha_{2\kappa}=0$. Error bars are statistical.
(color online)
Similarly, the measured $\tau_{\kappa}$ determines the static thermal
conductivity $\kappa_{T}=\frac{5}{2}\frac{k_{B}}{m}\tau_{\kappa}\,p$ shown in
Fig. 4 b (Blue squares). In the high temperature two-body Boltzmann limit, one
can show that $\tau_{\kappa}/\tau_{\eta}=3/2$ for any isotropic collision
cross section $d\sigma/d\Omega$, so that
$\kappa_{T}=\frac{15}{4}\frac{k_{B}}{m}\,\eta$. Eq. 2, in units of $n_{0}\hbar
k_{B}/m$, gives
$\kappa_{T}(\theta)=\frac{15}{4}\,(2.77\,\theta^{3/2}+\alpha_{2\kappa})\,.$
(8)
The red curve in Fig. 4 b shows the fit of $\kappa_{T}(\theta)$ with the
density shift coefficient $\alpha_{2\kappa}$ as the only fit parameter,
yielding $\alpha_{2\kappa}=0.22(03)$, i.e., the shift is $15/4\times 0.22$ in
units of $n_{0}\hbar k_{B}/m$. Fig. 3 b shows the corresponding curve for
$\tau_{\kappa}=\frac{2}{5}\frac{m}{k_{B}}\kappa_{T}/p$. The red-dashed curve
in Fig. 4 b is the high temperature limit, $\alpha_{2\kappa}=0$.
Finally, Fig. 4 c shows the corresponding first sound diffusivity [33, 31]
$D_{1}$ in units of $\hbar/m$. The measured transport times determine
$D_{1}=\frac{8}{15}\,\tau_{\eta}\frac{\epsilon_{F}}{\hbar}\,f_{E}(\theta)+\frac{2}{3}\tau_{\kappa}\frac{\epsilon_{F}}{\hbar}\,\theta$
(Blue triangles). The red-solid curve gives $D_{1}$ in terms of the fits for
the static shear viscosity and thermal conductivity
$D_{1}=4/3\,(2.77\,\theta^{3/2}\\!\\!+0.45)+(n\,k_{B}T/p)\,(2.77\,\theta^{3/2}\\!\\!+0.22)$.
Here, the thermal conductivity contribution includes a factor $(n\,k_{B}T/p)$
for the unitary gas, where $p$ is the pressure [31]. The red-dashed curve is
the high temperature limit $D_{1}=7/3\times 2.77\,\theta^{3/2}$.
In conclusion, we have used a kinetic theory model to extract the transport
relaxation times for the shear viscosity and thermal conductivity of a normal-
phase unitary Fermi gas in a box potential. The relaxation times of the
thermal current and viscous force are found to be comparable to the Fermi time
$\lambda_{F}/v_{F}$, small, but not negligible, compared to the time scales
for the oscillatory decay of the density perturbation. The measured transport
times enable new estimates of the universal density shift corrections for both
the static shear viscosity, $\Delta\eta=n_{0}\hbar\,\alpha_{2\eta}$ with
$\alpha_{2\eta}=0.45(04)$ and the static thermal conductivity
$\Delta\kappa_{T}=n_{0}\hbar\frac{k_{B}}{m}\frac{15}{4}\,\alpha_{2\kappa}$,
where $\alpha_{2\kappa}=0.22(03)$. For the measured range of $\theta$, our
$\tau_{\eta}$ and $\eta$ are in reasonable agreement with the predictions of
Enss et al., see Figs. 6 (where $\tau_{F}\equiv\hbar/\epsilon_{F}$) and 7 of
Ref. [8]. However, the measured values of $\kappa_{T}$ are significantly
smaller than predicted in [13] and larger than predicted in [15]. It can be
shown that density shifts of $\eta$ and $\kappa_{T}$, i.e., $\alpha_{2\eta}$
and $\alpha_{2\kappa}$, arise from Pauli blocking in an ideal Fermi gas.
However, the results for the ideal gas are too large and are mitigated by
strong interactions in a unitary Fermi gas [13]. Our measurements emphasize
the need for improved calculations of the leading density-dependent
corrections to the hydrodynamic transport properties as well as more
sophisticated relaxation models.
We thank Thomas Schäfer for stimulating discussions and Ilya Arakelyan for
help with the new optical system. Primary support for this research is
provided by the National Science Foundation (PHY-2006234 and PHY-2307107).
Additional support is provided by the Air Force Office of Scientific Research
(FA9550-22-1-0329).
∗Corresponding author<EMAIL_ADDRESS>
## References
* Adams _et al._ [2012] A. Adams, L. D. Carr, T. Schäfer, P. Steinberg, and J. E. Thomas, Strongly correlated quantum fluids: ultracold quantum gases, quantum chromodynamic plasmas and holographic duality, New J. Phys. 14, 115009 (2012).
* Bloch _et al._ [2012] I. Bloch, J. Dalibard, and S. Nascimbène, Quantum simulations with ultracold quantum gases, Nature Physics 8, 267 (2012).
* Strinati _et al._ [2018] G. C. Strinati, P. Pieri, G. Röpke, P. Schuck, and M. Urban, The BCS-BEC crossover: From ultra-cold Fermi gases to nuclear systems, Physics Reports 738, 1 (2018).
* Bruun and Smith [2007] G. M. Bruun and H. Smith, Shear viscosity and damping for a Fermi gas in the unitary limit, Phys. Rev. A 75, 043612 (2007).
* Taylor and Randeria [2010] E. Taylor and M. Randeria, Viscosity of strongly interacting quantum fluids: Spectral functions and sum rules, Phys. Rev. A 81, 053610 (2010).
* Braby _et al._ [2010] M. Braby, J. Chao, and T. Schäfer, Thermal conductivity and sound attenuation in dilute atomic Fermi gases, Phys. Rev. A 82, 033619 (2010).
* Braby _et al._ [2011] M. Braby, J. Chao, and T. Schäfer, Viscosity spectral functions of the dilute fermi gas in kinetic theory, New Journal of Physics 13, 035014 (2011).
* Enss _et al._ [2011] T. Enss, R. Haussmann, and W. Zwerger, Viscosity and scale invariance in the unitary Fermi gas, Annals Phys. 326, 770 (2011).
* Guo _et al._ [2011] H. Guo, D. Wulin, C.-C. Chien, and K. Levin, Microscopic approach to shear viscosities of unitary Fermi gases above and below the superfluid transition, Phys. Rev. Lett. 107, 020403 (2011).
* Wlazłowski _et al._ [2012] G. Wlazłowski, P. Magierski, and J. E. Drut, Shear viscosity of a unitary Fermi gas, Phys. Rev. Lett. 109, 020406 (2012).
* Romatschke and Young [2013] P. Romatschke and R. E. Young, Implications of hydrodynamic fluctuations for the minimum shear viscosity of the dilute Fermi gas at unitarity, Phys. Rev. A 87, 053606 (2013).
* Bluhm _et al._ [2017] M. Bluhm, J. Hou, and T. Schäfer, Determination of the density and temperature dependence of the shear viscosity of a unitary Fermi gas based on hydrodynamic flow, Phys. Rev. Lett. 119, 065302 (2017).
* Frank _et al._ [2020] B. Frank, W. Zwerger, and T. Enss, Quantum critical thermal transport in the unitary Fermi gas, Phys. Rev. Research 2, 023301 (2020).
* Hofmann [2020] J. Hofmann, High-temperature expansion of the viscosity in interacting quantum gases, Phys. Rev. A 101, 013620 (2020).
* Zhou and Ma [2021] H. Zhou and Y. Ma, Thermal conductivity of an ultracold Fermi gas in the BCS-BEC crossover, Sci. Rep. 11, 1228 (2021).
* O’Hara _et al._ [2002] K. M. O’Hara, S. L. Hemmer, M. E. Gehm, S. R. Granade, and J. E. Thomas, Observation of a strongly interacting degenerate Fermi gas of atoms, Science 298, 2179 (2002).
* Ho [2004] T.-L. Ho, Universal thermodynamics of degenerate quantum gases in the unitarity limit, Phys. Rev. Lett. 92, 090402 (2004).
* Cao _et al._ [2011] C. Cao, E. Elliott, J. Joseph, H. Wu, J. Petricka, T. Schäfer, and J. E. Thomas, Universal quantum viscosity in a unitary Fermi gas, Science 331, 58 (2011).
* Joseph _et al._ [2015] J. A. Joseph, E. Elliott, and J. E. Thomas, Shear viscosity of a unitary Fermi gas near the superfluid phase transition, Phys. Rev. Lett. 115, 020401 (2015).
* Navon _et al._ [2021] N. Navon, R. P. Smith, and Z. Hadzibabic, Quantum gases in optical boxes, Nat. Phys. 17, 1334 (2021).
* Baird _et al._ [2019] L. Baird, X. Wang, S. Roof, and J. E. Thomas, Measuring the hydrodynamic linear response of a unitary Fermi gas, Phys. Rev. Lett. 123, 160402 (2019).
* Patel _et al._ [2020] P. B. Patel, Z. Yan, B. Mukherjee, R. J. Fletcher, J. Struck, and M. W. Zwierlein, Universal sound diffusion in a strongly interacting Fermi gas, Science 370, 1222 (2020).
* Wang _et al._ [2022] X. Wang, X. Li, I. Arakelyan, and J. E. Thomas, Hydrodynamic relaxation in a strongly interacting Fermi gas, Phys. Rev. Lett. 128, 090402 (2022).
* Li _et al._ [2022] X. Li, X. Luo, S. Wang, K. Xie, X.-P. Liu, H. Hu, Y.-A. Chen, X.-C. Yao, and J.-W. Pan, Second sound attenuation near quantum criticality, Science 375, 528 (2022).
* Hu _et al._ [2018] H. Hu, P. Zou, and X.-J. Liu, Low-momentum dynamic structure factor of a strongly interacting Fermi gas at finite temperature: A two-fluid hydrodynamic description, Phys. Rev. A 97, 023615 (2018).
* Yan _et al._ [2024] Z. Yan, P. B. Patel, B. Mukherjee, C. J. Vale, R. J. Fletcher, and M. W. Zwierlein, Thermography of the superfluid transition in a strongly interacting Fermi gas, Science 383, 629 (2024).
* Son [2007] D. T. Son, Vanishing bulk viscosities and conformal invariance of the unitary Fermi gas, Phys. Rev. Lett. 98, 020604 (2007).
* Hou _et al._ [2013] Y.-H. Hou, L. P. Pitaevskii, and S. Stringari, Scaling solutions of the two-fluid hydrodynamic equations in a harmonically trapped gas at unitarity, Phys. Rev. A 87, 033620 (2013).
* Elliott _et al._ [2014] E. Elliott, J. A. Joseph, and J. E. Thomas, Observation of conformal symmetry breaking and scale invariance in expanding Fermi gases, Phys. Rev. Lett. 112, 040405 (2014).
* Ku _et al._ [2012] M. Ku, A. T. Sommer, L. W. Cheuk, and M. W. Zwierlein, Revealing the superfluid lambda transition in the universal thermodynamics of a unitary Fermi gas, Science 335, 563 (2012).
* [31] See the Supplemental Material for discussion of the linearized hydrodynamic equations, the kinetic theory relaxation model, and the determination of the static transport properties.
* [32] The vertical error bars in Figs. 3 and 4 denote $\pm\sqrt{2\epsilon_{ii}}$, where $\epsilon_{ij}$ is the error matrix obtained from $\chi^{2}(\tau_{\eta},\tau_{\kappa})$ with $A$ and $c_{T}$ fixed.
* Landau and Lifshitz [1959] L. D. Landau and E. M. Lifshitz, _Fluid Mechanics, Course of Theoretical Physics Vol. VI_ (Pergamon Press, Oxford, 1959).
## Appendix A Supplemental Material
In this supplemental material, we derive a kinetic theory relaxation model,
which is used to extract the hydrodynamic transport times $\tau_{\eta}$ for
the shear viscosity and $\tau_{\kappa}$ for the thermal conductivity from the
measured free oscillatory decay of a spatially periodic density perturbation
in a normal fluid unitary Fermi gas.
### A.1 Hydrodynamic linear response for a normal fluid.
We consider a normal phase unitary Fermi gas, which is a single component
fluid with a mass density $\rho\equiv n\,m$, where $n$ is the total particle
density (we assume a 50-50 mixture of two components) and $m$ is the atom
mass. $\rho({\mathbf{r}},t)$ satisfies the continuity equation,
$\partial_{t}\rho+\partial_{i}(\rho\,v_{i})=0,$ (S1)
where a sum over $i=x,y,z$ is implied. The mass flux (momentum density) is
$\rho\,v_{i}$, with $v_{i}({\mathbf{r}},t)$ the velocity field.
Our experiments measure the response of the density in the central region of
the box over short enough time scales that forces arising from the walls of
the box potential can be neglected. As the perturbing potential $\delta U=0$
during the measured evolution, the momentum density and corresponding momentum
flux $\rho\,v_{i}v_{j}$ obey
$\partial_{t}(\rho\,v_{i})+\partial_{j}(\rho\,v_{i}v_{j})=-\partial_{i}p-\partial_{j}p^{1}_{ij},$
(S2)
where $-\partial_{i}p$ is the force per unit volume arising from the scalar
pressure $p$ and $\partial_{j}p^{1}_{ij}$ is the viscous force per unit
volume, which we determine using a kinetic theory relaxation model in Section A.2.
Taking the divergence of Eq. S2 and using Eq. S1, we immediately obtain
$-\partial_{t}^{2}\rho+\partial_{i}\partial_{j}(\rho\,v_{i}v_{j})=-\partial_{i}^{2}p-\partial_{i}\partial_{j}p^{1}_{ij}.$
(S3)
In the linear response regime, the second term on the left hand side is second
order in small quantities and can be dropped. Specializing to one dimension,
and taking $\delta n=n-n_{0}$, with $n_{0}$ the background density, the
change in density $\delta n(z,t)$ obeys
$\partial_{t}^{2}\delta n=\frac{1}{m}\partial_{z}^{2}\,\delta
p\,+\frac{1}{m}\,\partial_{z}^{2}p^{1}_{zz}\,.$ (S4)
As our initial conditions are isothermal, we find the pressure change $\delta
p=p-p_{0}$ in terms of the changes in the density $\delta n$ and temperature
$\delta T$. In this case, the pressure change can be written in the form [23]
$\delta p=mc_{T}^{2}(\delta n+\delta\tilde{T})$, so that
$\displaystyle\delta\ddot{n}=c_{T}^{2}\,\partial_{z}^{2}(\delta
n+\delta\tilde{T})+\frac{1}{m}\,\partial_{z}^{2}p^{1}_{zz}\,,$ (S5)
where $\delta\tilde{T}\equiv n_{0}\,\beta\,\delta{T}$ with
$\beta=-1/n\,(\partial n/\partial T)_{p}$ the expansivity.
Next, we require the evolution equation for $\delta T$, which obeys [23],
$\delta\dot{T}=\epsilon_{LP}\,\frac{\delta\dot{n}}{\beta\,n_{0}}+\frac{\delta\dot{q}}{n_{0}c_{V_{1}}},$
(S6)
where $\epsilon_{LP}\equiv c_{P_{1}}/c_{V_{1}}-1$ is the Landau-Placzek
parameter, which describes the adiabatic change in the temperature arising
from the change in density. Here, $c_{V_{1}}$, $c_{P_{1}}$ are the heat
capacities per particle. The heat flow per unit volume
$\delta\dot{q}=-\partial_{z}J_{E}$, where $J_{E}$ is the energy current, which
we determine using a kinetic theory relaxation model in Section A.2. Multiplying Eq.
S6 by $n_{0}\,\beta$, we obtain
$\delta\dot{\tilde{T}}=\epsilon_{LP}\,\delta\dot{n}-\frac{\beta}{c_{V_{1}}}\,\partial_{z}J_{E}.$
(S7)
### A.2 Kinetic theory relaxation model
In this section, we derive the relaxation model equations for a normal phase
unitary Fermi gas, which determine how the viscous force and heat current
relax to their Navier-Stokes forms. To proceed, we rewrite Eq. S5 as
$\displaystyle\delta\ddot{n}=c_{T}^{2}\,\partial_{z}^{2}(\delta
n+\delta\tilde{T})+\delta Q_{\eta}\,,$ (S8)
where
$\delta Q_{\eta}\equiv\frac{1}{m}\,\partial_{z}^{2}p^{1}_{zz}.$ (S9)
Similarly, Eq. S7 is rewritten as
$\delta\dot{\tilde{T}}=\epsilon_{LP}\,\delta\dot{n}+\delta Q_{\kappa}\,,$
(S10)
with
$\delta Q_{\kappa}\equiv-\frac{\beta}{c_{V_{1}}}\,\partial_{z}J_{E}\,.$ (S11)
We derive the evolution equations for $\delta Q_{\eta}$ (see Eq. S37) and
$\delta Q_{\kappa}$ (see Eq. S53) using a relaxation time approximation for
the Boltzmann equation. In this case, the single particle phase space
distribution $f(\mathbf{r},\mathbf{v},t)$ obeys
$\partial_{t}f+\mathbf{v}\cdot\nabla
f=-\frac{1}{\tau}\,(f-f_{0})\equiv-\frac{1}{\tau}\,f_{1}$ (S12)
where $\tau$ is the relaxation time and $f=f_{0}+f_{1}$, with $f_{0}$ the
equilibrium distribution.
In the high-temperature Maxwell-Boltzmann limit, the equilibrium distribution
is
$f_{0}=n_{0}\,W_{0}(\mathbf{U}),$ (S13)
where $\mathbf{U}=\mathbf{v}-\mathbf{u}$ is the particle velocity relative to
the stream velocity $\mathbf{u}(\mathbf{r},t)$ and
$\int\\!d^{3}\mathbf{U}\,W_{0}(\mathbf{U})=1$. Here,
$W_{0}(\mathbf{U})=\frac{e^{-\mathbf{U}^{2}/v_{0}^{2}}}{(v_{0}\sqrt{\pi})^{3}}$
(S14)
and $v_{0}=\sqrt{2k_{B}T/m}$ is the thermal speed. In general, the background
temperature spatially varies, $T_{0}\equiv T_{0}(\mathbf{r})$. For convenience,
we drop the subscript 0 and use $T$ for the temperature in deriving the
relaxation equations.
Without specifying the phase space distribution $f$, the pressure tensor is
given by
$p_{ij}=m\int\\!d^{3}\mathbf{U}\,U_{i}U_{j}\,f(\mathbf{r},\mathbf{v},t).$
(S15)
Taking $p_{ij}=p^{0}_{ij}+p^{1}_{ij}$, the scalar pressure $p_{0}\equiv p$, is
immediately obtained from $p^{0}_{ij}=\delta_{ij}\,p$ with $f=f_{0}$ and
$i=j=x$,
$p=m\int d^{3}\mathbf{U}\,U^{2}_{x}\,f_{0}.$ (S16)
Writing $\int\\!d^{3}\mathbf{U}\,U^{2}_{x}\,f_{0}=n_{0}\langle
U^{2}_{x}\rangle\equiv n_{0}\,\overline{U^{2}_{x}}$, we have
$p=n_{0}m\,\overline{U^{2}_{x}},$ (S17)
which gives $p\rightarrow n_{0}\,k_{B}T_{0}$ in the Maxwell-Boltzmann limit,
Eq. S14.
#### A.2.1 Shear viscosity
We find the relaxation equation for $\delta Q_{\eta}$ of Eq. S9 from that of
$p^{1}_{ij}$, assuming that the stream velocity $\mathbf{u}(\mathbf{r},t)$ is position
dependent, producing a shear stress. Here, we assume that the background
temperature $T_{0}$ is spatially constant. To proceed, we first consider Eq.
S15 for $i\neq j$. The equilibrium distribution $f_{0}$ is a symmetric
function of $\mathbf{U}$, so that
$\int\\!d^{3}\mathbf{U}U_{i}U_{j}\,f_{0}=0.$ (S18)
Then Eq. S15 with $f=f_{0}+f_{1}$ and Eq. S18 yield
$p^{1}_{ij}=m\\!\int\\!d^{3}\mathbf{U}\,U_{i}U_{j}\,f=m\\!\int\\!d^{3}\mathbf{U}\,U_{i}U_{j}\,f_{1}.$
(S19)
Multiplying Eq. S12 by $\int\\!d^{3}\mathbf{U}\,U_{i}U_{j}$ and using Eq. S19,
we obtain
$\dot{p}^{1}_{ij}+m\int\\!d^{3}\mathbf{U}\,U_{i}U_{j}\,v^{k}\partial_{k}f=-\frac{1}{\tau_{\eta}}\,p^{1}_{ij},$
(S20)
where we define $\tau\equiv\tau_{\eta}$ for the shear viscosity. Then,
$\dot{p}^{1}_{ij}+\frac{1}{\tau_{\eta}}p^{1}_{ij}=-I_{ij},$ (S21)
where
$I_{ij}=m\int\\!d^{3}\mathbf{U}\,U_{i}U_{j}\,v^{k}\partial_{k}f=m\int\\!d^{3}\mathbf{U}\,U_{i}U_{j}\,v^{k}\frac{\partial
U^{l}}{\partial x^{k}}\frac{\partial f}{\partial U^{l}}.$ (S22)
For fast relaxation, where $f_{1}\rightarrow-\tau_{\eta}v_{k}\partial_{k}f$ in
Eq. S12, we see that $p^{1}_{ij}\simeq-\tau_{\eta}I_{ij}$ is already first
order in $\tau_{\eta}$, so that we can take $f\rightarrow f_{0}$ in Eq. S22.
Using $\mathbf{U}=\mathbf{v}-\mathbf{u}$,
$\frac{\partial U^{l}}{\partial x^{k}}=-\frac{\partial u^{l}}{\partial
x^{k}}.$ (S23)
Then with $v_{k}=U_{k}+u_{k}$ in Eq. S22, we obtain
$I_{ij}=-m\frac{\partial u^{l}}{\partial
x^{k}}\int\\!d^{3}\mathbf{U}\,U_{i}U_{j}(U_{k}+u_{k})\frac{\partial
f_{0}}{\partial U^{l}}.$ (S24)
Integrating by parts, we then have
$I_{ij}=m\frac{\partial u^{l}}{\partial
x^{k}}\int\\!d^{3}\mathbf{U}\,\frac{\partial}{\partial
U^{l}}[U_{i}U_{j}(U_{k}+u_{k})]\,f_{0}.$ (S25)
Using $\partial U_{i}/\partial U_{l}=\delta_{il}$ and defining $\int
d^{3}\mathbf{U}\,g(\mathbf{U})f_{0}(\mathbf{U})=n_{0}\langle
g(\mathbf{U})\rangle$, Eq. S25 can be rewritten as
$I_{ij}=m\frac{\partial u^{l}}{\partial
x^{k}}\,n_{0}\left\\{\delta_{il}\langle
U_{j}(U_{k}+u_{k})\rangle+\delta_{jl}\langle
U_{i}(U_{k}+u_{k})\rangle+\delta_{kl}\langle U_{i}U_{j}\rangle\right\\}$ (S26)
Eq. S26 is simplified with
$n_{0}\langle
U_{j}(U_{k}+u_{k})\rangle=\int\\!d^{3}\mathbf{U}\,U_{j}(U_{k}+u_{k})\,f_{0}=n_{0}\,\delta_{jk}\overline{U^{2}_{x}},$
(S27)
where the $u_{k}$ term vanishes since $f_{0}$ is symmetric in $U_{j}$. Hence,
we can write $\langle U_{i}U_{j}\rangle=\delta_{ij}\overline{U^{2}_{x}}$. With
$p=n_{0}m\,\overline{U^{2}_{x}}$ from Eq. S17, we have
$I_{ij}=p\,\frac{\partial u^{l}}{\partial
x^{k}}\,\\{\delta_{il}\delta_{jk}+\delta_{jl}\delta_{ik}+\delta_{kl}\delta_{ij}\\}\,.$
(S28)
Carrying out the sums over repeated indices, we obtain
$I_{ij}=p\left(\frac{\partial u^{i}}{\partial x^{j}}+\frac{\partial
u^{j}}{\partial x^{i}}+\delta_{ij}\nabla\cdot\mathbf{u}\right),$ (S29)
where the $\delta_{ij}$ term vanishes for $i\neq j$.
To determine $p^{1}_{ij}$ for all $i,j$, we consider the symmetric second rank
traceless pressure tensor,
$p^{1}_{ij}\rightarrow
m\\!\int\\!d^{3}\mathbf{U}\,\left(U_{i}U_{j}-\frac{1}{3}\delta_{ij}\mathbf{U}^{2}\right)\,f_{1}\,,$
(S30)
where the $f_{0}$ part of $f$ vanishes because it is a scalar function of
$\mathbf{U}$. Since $\mathbf{U}^{2}=Tr\\{U_{i}U_{j}\\}$, evaluating Eq. S30
just changes $I_{ij}$ in Eq. S21 and Eq. S29 to
$I_{ij}\rightarrow I_{ij}-\frac{1}{3}\delta_{ij}Tr\\{I_{ij}\\}\equiv
p\,\sigma_{ij}\,.$ (S31)
The $\delta_{ij}$ term in Eq. S29 makes no contribution to the symmetric
traceless tensor, yielding
$\sigma_{ij}=\frac{\partial u^{i}}{\partial x^{j}}+\frac{\partial
u^{j}}{\partial x^{i}}-\frac{2}{3}\,\delta_{ij}\nabla\cdot\mathbf{u}\,.$ (S32)
With Eqs. S31 and S32, Eq. S21 determines the relaxation equation for the
shear stress tensor,
$\dot{p}^{1}_{ij}+\frac{1}{\tau_{\eta}}\,p^{1}_{ij}=-p\,\sigma_{ij}\,.$ (S33)
For small $\tau_{\eta}$, $\dot{p}^{1}_{ij}\ll p^{1}_{ij}/\tau_{\eta}$, we see
that
$p^{1}_{ij}\rightarrow-\tau_{\eta}p\,\sigma_{ij}=-\eta\,\sigma_{ij},$ (S34)
where the static shear viscosity is
$\eta=\tau_{\eta}\,p.$ (S35)
Now we can evaluate the relaxation equation for $\delta Q_{\eta}$ of Eq. S9.
We evaluate Eq. S32 for $\sigma_{zz}$ and eliminate the velocity field using
current conservation Eq. S1. To first order in small quantities,
$\delta\dot{n}+n_{0}\,\partial_{z}v_{z}=0$, yielding
$\sigma_{zz}=\frac{4}{3}\,\partial_{z}v_{z}=-\frac{4}{3}\,\frac{\delta\dot{n}}{n_{0}}\,.$
(S36)
We find $\delta\dot{Q_{\eta}}$ using Eq. S9, $\delta
Q_{\eta}\equiv\frac{1}{m}\,\partial_{z}^{2}p^{1}_{zz}$. With Eq. S36 and Eq.
S33 we obtain finally,
$\delta\dot{Q_{\eta}}+\frac{1}{\tau_{\eta}}\delta
Q_{\eta}=\frac{4}{3}\,\frac{p}{mn_{0}}\,\partial^{2}_{z}\delta\dot{n}\,.$
(S37)
For fast relaxation, $\delta\dot{Q_{\eta}}\ll\delta Q_{\eta}/\tau_{\eta}$, we
find,
$\delta
Q_{\eta}\rightarrow\frac{4}{3}\,\frac{\tau_{\eta}p}{mn_{0}}\,\partial^{2}_{z}\delta\dot{n}=\frac{4}{3}\,\frac{\eta}{mn_{0}}\,\partial^{2}_{z}\delta\dot{n}\,.$
(S38)
In this limit, Eq. S8 with Eq. S38 reproduces the Navier-Stokes form used in
Ref. [23]. We note that Eq. S35 for $\eta$ and Eq. S37 for
$\delta\dot{Q_{\eta}}$ are independent of the form of the single particle
phase space distribution, $f_{0}(\mathbf{r},\mathbf{v})$, which has not been
explicitly used to obtain the background pressure $p$ from Eq. S17.
#### A.2.2 Thermal conductivity
Next, we find the relaxation equation for $\delta Q_{\kappa}$ of Eq. S11 from
that of the 1D energy current, $J_{E}$,
$J_{E}=\int d^{3}\mathbf{v}\,v_{z}\frac{m}{2}\,\mathbf{v}^{2}\,f.$ (S39)
To find the relaxation equation of the energy current, we assume that the
system is in mechanical equilibrium with a stream velocity $\mathbf{u}=0$, so
that $\mathbf{U}=\mathbf{v}$ in Eq. S14, and we include a temperature gradient
$T(\mathbf{r})$. Eq. S39 shows that $J_{E}$ vanishes for $f=f_{0}$, since the
integrand would be odd in $v_{z}$.
Using Eq. S39 and Eq. S12 with $\tau\equiv\tau_{\kappa}$, we have
$\dot{J}_{E}+\frac{m}{2}\,\partial_{z}\\!\\!\int
d^{3}\mathbf{v}\,v_{z}^{2}\mathbf{v}^{2}\,f_{0}=-\frac{1}{\tau_{\kappa}}\,J_{E}$
(S40)
or
$\dot{J}_{E}+\frac{1}{\tau_{\kappa}}\,J_{E}=-\partial_{z}I_{zz}.$ (S41)
Here
$I_{zz}\equiv\frac{m}{2}\,\int
d^{3}\mathbf{v}\,v_{z}^{2}\,\mathbf{v}^{2}\,f_{0}.$ (S42)
To evaluate Eq. S42, we require an explicit form for the equilibrium phase
space distribution, which we take to be a Maxwell-Boltzmann distribution,
$f_{0}(\mathbf{r},\mathbf{v})=n(\mathbf{r})\,W_{0}(\mathbf{v})$, where
$W_{0}(\mathbf{v})=\frac{e^{-\mathbf{v}^{2}/v_{0}^{2}}}{(v_{0}\sqrt{\pi})^{3}}$
(S43)
with $v_{0}=\sqrt{2k_{B}T/m}$ the thermal speed. As discussed above, we assume
that $T=T(\mathbf{r})$ spatially varies, producing a temperature gradient. Then,
$I_{zz}=\frac{m}{2}\,\frac{1}{3}\,\langle v^{4}\rangle\,n(\mathbf{r}).$ (S44)
Here,
$\langle v^{4}\rangle=\int
d^{3}\mathbf{v}\,v^{4}\,W_{0}(\mathbf{v})=\frac{4}{\sqrt{\pi}}v_{0}^{4}\int
d\left(\frac{v}{v_{0}}\right)\left(\frac{v}{v_{0}}\right)^{6}\,e^{-(v/v_{0})^{2}},$
(S45)
which yields
$\langle
v^{4}\rangle=\frac{15}{4}\,v_{0}^{4}=15\,\left(\frac{k_{B}T}{m}\right)^{2}.$
(S46)
Using Eq. S46 in Eq. S44, and the pressure
$p(\mathbf{r})=n(\mathbf{r})\,k_{B}T(\mathbf{r})$ from Eqs. S17 and S43, we
have
$I_{zz}=\frac{5}{2}\frac{k_{B}}{m}\,n(\mathbf{r})\,k_{B}T^{2}(\mathbf{r})=\frac{5}{2}\frac{k_{B}}{m}\,p(\mathbf{r})\,T(\mathbf{r}).$
(S47)
With Eq. S41, we then obtain
$\dot{J}_{E}+\frac{1}{\tau_{\kappa}}\,J_{E}=-\frac{5}{2}\frac{k_{B}}{m}\,\partial_{z}[\,p(\mathbf{r})\,T(\mathbf{r})\,].$
(S48)
For pure heat flow, mechanical equilibrium requires
$\nabla p(\mathbf{r})=0.$ (S49)
Therefore, Eq. S48 becomes
$\dot{J}_{E}+\frac{1}{\tau_{\kappa}}J_{E}=-\frac{5}{2}\frac{k_{B}}{m}\,p\,\,\partial_{z}\delta
T,$ (S50)
where we suppress the argument $\mathbf{r}$. Here, the temperature
$T=T_{0}+\delta T$, with $T_{0}$ the uniform background temperature, so that
$\partial_{z}T=\partial_{z}\delta T$.
For fast relaxation, where $\dot{J}_{E}\ll J_{E}/\tau_{\kappa}$, Eq. S50 gives
$J_{E}\rightarrow-\frac{5}{2}\frac{k_{B}}{m}\,\tau_{\kappa}\,p\,\,\partial_{z}\delta
T\equiv-\kappa_{T}\,\partial_{z}\delta T,$ (S51)
where the static thermal conductivity is
$\kappa_{T}=\frac{5}{2}\frac{k_{B}}{m}\,\tau_{\kappa}\,p\,.$ (S52)
We find $\delta\dot{Q}_{\kappa}$ using Eq. S11, $\delta
Q_{\kappa}\equiv-(\beta/c_{V_{1}})\,\partial_{z}\,J_{E}$. Then, operating on
Eq. S50 with $-(\beta/c_{V_{1}})\,\partial_{z}$, we have
$\delta\dot{Q}_{\kappa}+\frac{1}{\tau_{\kappa}}\delta{Q}_{\kappa}=\frac{5}{2}\frac{k_{B}}{m}\,\frac{p}{n_{0}c_{V_{1}}}\,\,\partial^{2}_{z}\,\delta\tilde{T},$
(S53)
where $\delta\tilde{T}=n_{0}\beta\,\delta T$ (see Eq. S5). For fast
relaxation, where
$\delta\dot{Q}_{\kappa}\ll\frac{1}{\tau_{\kappa}}\delta{Q}_{\kappa}$, Eq. S53
gives
$\delta{Q}_{\kappa}\rightarrow\frac{5}{2}\frac{k_{B}}{m}\,\frac{\tau_{\kappa}\,p}{n_{0}c_{V_{1}}}\,\partial^{2}_{z}\delta\tilde{T}=\frac{\kappa_{T}}{n_{0}c_{V_{1}}}\,\partial^{2}_{z}\,\delta\tilde{T}.$
(S54)
In this limit, Eq. S10 with Eq. S54 reproduces the Navier-Stokes form used in
Ref. [23]. We note that the coefficient $5/2$ in Eq. S52 for $\kappa_{T}$ and
in Eq. S53 for $\delta\dot{Q}_{\kappa}$ is dependent on the assumed Maxwell-
Boltzmann approximation for the phase space distribution $f_{0}$, which was
needed to evaluate Eq. S42, yielding Eq. S47.
### A.3 Relaxation of the Fourier Components
In the experiments, as described in the main text, a spatially periodic
perturbing potential $\delta U$ is used to create a spatially periodic density
perturbation $\delta n(z,t=0)$, with a wavelength $\lambda$ and corresponding
wavevector $q=2\pi/\lambda$. After the system reaches mechanical and thermal
equilibrium, $\delta U$ is abruptly extinguished and $\delta n(z,t)$ is
measured. A spatial Fourier transform of the relaxing density perturbation
$\delta n(z,t)$ yields the time dependence of the Fourier component, $\delta
n(q,t)$. The evolution of $\delta n(q,t)$ is readily determined from the
Fourier transforms of Eqs. S8, S37, S10 and S53. Defining $\delta
n(q,t)\equiv\delta n$, $\delta\tilde{T}(q,t)\equiv\delta\tilde{T}$, $\delta
Q_{\eta}(q,t)\equiv\delta Q_{\eta}$, and $\delta Q_{\kappa}(q,t)\equiv\delta
Q_{\kappa}$, we find
$\displaystyle\delta\ddot{n}=-\omega_{T}^{2}\,\delta
n-\omega_{T}^{2}\,\delta\tilde{T}+\delta Q_{\eta}\hskip 18.06749pt{\rm
with}\hskip 18.06749pt\omega_{T}\equiv c_{T}q\,,$ (S55)
$\delta\dot{Q_{\eta}}=-\frac{1}{\tau_{\eta}}\delta
Q_{\eta}-\Omega_{\eta}^{2}\,\delta\dot{n}\hskip 18.06749pt{\rm with}\hskip
18.06749pt\Omega_{\eta}^{2}\equiv\frac{4}{3}\,\frac{p}{mn_{0}}\,q^{2}\,,$
(S56)
and
$\delta\dot{\tilde{T}}=\epsilon_{LP}\,\delta\dot{n}+\delta Q_{\kappa}\,,\hskip
18.06749pt{\rm where}$ (S57)
$\delta\dot{Q}_{\kappa}=-\frac{1}{\tau_{\kappa}}\delta{Q}_{\kappa}-\Omega_{\kappa}^{2}\,\delta\tilde{T}\hskip
18.06749pt{\rm with}\hskip
18.06749pt\Omega_{\kappa}^{2}=\frac{5}{2}\frac{k_{B}}{m}\,\frac{p}{n_{0}c_{V_{1}}}\,\,q^{2}.$
(S58)
For a unitary Fermi gas, the pressure $p$ and internal energy density ${\cal
E}$ have the universal forms
$p=\frac{2}{5}\,n\epsilon_{F}(n)\,f_{E}(\theta)=\frac{2}{3}\,{\cal E},$ (S59)
where the universal function $f_{E}(\theta)$ has been measured [30] as a
function of the reduced temperature $\theta\equiv T/T_{F}$. Here, the local
Fermi energy $\epsilon_{F}(n)=\frac{\hbar^{2}}{2m}(3\pi^{2}n)^{2/3}$ with $n$
the total density for a 50-50 mixture of two spin states. Taking
$\epsilon_{F}(n)=mv_{F}^{2}/2$, which defines the Fermi speed $v_{F}$, we can
write in Eq. S56
$\Omega_{\eta}^{2}\equiv\frac{4}{15}\,\omega_{F}^{2}\,f_{E}(\theta),$ (S60)
where we define $\omega_{F}\equiv v_{F}q$. Similarly, we can write in Eq. S58
$\Omega_{\kappa}^{2}\equiv\frac{1}{2}\,\frac{k_{B}}{c_{V_{1}}}\,\omega_{F}^{2}\,f_{E}(\theta).$
(S61)
In the short relaxation time limit, we see that $\delta
Q_{\eta}\rightarrow-\gamma_{\eta}\delta\dot{n}$, with
$\gamma_{\eta}\equiv\tau_{\eta}\Omega_{\eta}^{2}=\frac{4}{3}\frac{\eta}{n_{0}m}\,q^{2}$
and $\delta Q_{\kappa}\rightarrow-\gamma_{\kappa}\delta\tilde{T}$, with
$\gamma_{\kappa}\equiv\tau_{\kappa}\Omega_{\kappa}^{2}=\frac{\kappa_{T}}{n_{0}c_{V_{1}}}\,q^{2}$,
reproducing the results in the supplement of Ref. [23].
The linear time-dependent equations S55, S56, S57, and S58 are easily solved
to determine $\delta n(q,t)$, with the initial conditions $\delta n(q,0)=A$,
$\delta\tilde{T}(q,0)=0$, $\delta Q_{\eta}(q,0)=0$, and $\delta
Q_{\kappa}(q,0)=0$. The data are fit using the amplitude $A$, isothermal
frequency $\omega_{T}$, and relaxation times $\tau_{\eta}$ and $\tau_{\kappa}$
as free parameters. As in our previous experiments [23], the ratio
$\omega_{T}/\omega_{F}$ self-consistently determines the reduced temperature
$\theta$ in the fits.
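For concreteness, a minimal sketch of this forward model, integrating Eqs. S55-S58 with the stated initial conditions (assuming SciPy; $\Omega_{\eta}^{2}$ and $\Omega_{\kappa}^{2}$ follow from Eqs. S60 and S61 once $\theta$ is known):

```python
import numpy as np
from scipy.integrate import solve_ivp

def dn_model(t, A, omega_T, tau_eta, tau_kappa, eps_LP, Om_eta2, Om_kappa2):
    """Integrate Eqs. S55-S58; state y = [dn, dn_dot, dT_tilde, Q_eta, Q_kappa]."""
    def rhs(_, y):
        dn, v, dT, Qe, Qk = y
        return [v,
                -omega_T**2 * (dn + dT) + Qe,       # Eq. S55
                eps_LP * v + Qk,                    # Eq. S57
                -Qe / tau_eta - Om_eta2 * v,        # Eq. S56
                -Qk / tau_kappa - Om_kappa2 * dT]   # Eq. S58
    sol = solve_ivp(rhs, (t[0], t[-1]), [A, 0.0, 0.0, 0.0, 0.0],
                    t_eval=t, rtol=1e-8, atol=1e-10)
    return sol.y[0]   # model delta n(q, t), to be fit to the data
```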
In the main text, the relaxation times are given in units of the “Fermi time,”
which we take to be $\tau_{F}\equiv\lambda_{F}/v_{F}$, i.e., the time for an
atom to move a Fermi wavelength at the Fermi speed. With
$\lambda_{F}=2\pi\hbar/(mv_{F})$, and $mv_{F}^{2}=2\epsilon_{F}$, the Fermi
time is
$\tau_{F}=\frac{\pi\hbar}{\epsilon_{F}}.$ (S62)
### A.4 Static Transport Properties
With the relaxation times determined from the fits, the static shear viscosity
Eq. S35 is determined in units of $\hbar n_{0}$ by
$\eta=\tau_{\eta}\,p=\alpha_{\eta}\,\hbar n_{0}\,,$ (S63)
where Eq. S59 for $p$ gives the dimensionless shear viscosity coefficient,
$\alpha_{\eta}=\frac{2\pi}{5}\,\frac{\tau_{\eta}}{\tau_{F}}\,f_{E}(\theta).$
(S64)
Similarly, the static thermal conductivity Eq. S52 is determined in units of
$\hbar n_{0}\,k_{B}/m$ by
$\kappa_{T}=\frac{5}{2}\frac{k_{B}}{m}\,\tau_{\kappa}\,p=\alpha_{\kappa}\,\frac{k_{B}}{m}\,\hbar
n_{0}\,,$ (S65)
where the dimensionless thermal conductivity coefficient is
$\alpha_{\kappa}=\pi\,\frac{\tau_{\kappa}}{\tau_{F}}\,f_{E}(\theta).$ (S66)
Finally, the first sound diffusivity [23, 33] is determined from the static
transport properties,
$D_{1}=\frac{4}{3}\frac{\eta}{n_{0}m}+\left(\frac{1}{c_{V_{1}}}-\frac{1}{c_{P_{1}}}\right)\frac{\kappa_{T}}{n_{0}}.$
(S67)
For a unitary Fermi gas in a 50-50 mixture of two spin states and total
density $n$ [23, 22],
$\frac{1}{c_{V_{1}}}-\frac{1}{c_{P_{1}}}=\frac{4}{15}\frac{nT}{p}=\frac{1}{k_{B}}\frac{2}{3}\frac{\theta}{f_{E}(\theta)}.$
(S68)
Then
$\frac{D_{1}}{\hbar/m}=\frac{4}{3}\,\alpha_{\eta}+\frac{2}{3}\frac{\theta}{f_{E}(\theta)}\,\alpha_{\kappa}.$
(S69)
Using Eqs. S64 and S66, we then obtain $D_{1}$ in units of $\hbar/m$ from the
measured relaxation times,
$\frac{D_{1}}{\hbar/m}=\frac{8\pi}{15}\,\frac{\tau_{\eta}}{\tau_{F}}\,f_{E}(\theta)+\frac{2\pi}{3}\,\frac{\tau_{\kappa}}{\tau_{F}}\,\theta.$
(S70)
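The conversion from fitted relaxation times to the dimensionless transport coefficients and $D_{1}$ is a direct evaluation of Eqs. S64, S66, and S70; a sketch, with `f_E` standing in for the measured equation-of-state function:

```python
import numpy as np

def static_transport(tau_eta, tau_kappa, tau_F, theta, f_E):
    """Eqs. S64, S66, S70: transport coefficients from fitted relaxation
    times; eta = alpha_eta*hbar*n0, kappa_T = alpha_kappa*(k_B/m)*hbar*n0,
    D1 in units of hbar/m."""
    fe = f_E(theta)
    alpha_eta = (2.0 * np.pi / 5.0) * (tau_eta / tau_F) * fe      # Eq. S64
    alpha_kappa = np.pi * (tau_kappa / tau_F) * fe                # Eq. S66
    D1 = (8.0 * np.pi / 15.0) * (tau_eta / tau_F) * fe \
         + (2.0 * np.pi / 3.0) * (tau_kappa / tau_F) * theta      # Eq. S70
    return alpha_eta, alpha_kappa, D1
```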
For comparison with the static transport properties shown in the main text,
which are obtained from the measured relaxation times using Eqs. S64, S66, and
S70, we parameterize $\alpha_{\eta}$ as
$\alpha_{\eta}=\alpha_{3/2}\,\theta^{3/2}+\alpha_{2\eta}.$ (S71)
Here, the first term is the high temperature limit, which is obtained from a
variational calculation [4],
$\alpha_{3/2}=45\pi^{3/2}/(64\sqrt{2})=2.76849\simeq 2.77$ and
$\alpha_{2\eta}$ is the universal density shift coefficient, which is used as
a fit parameter. Similarly, we parameterize $\alpha_{\kappa}$ as
$\alpha_{\kappa}=\frac{15}{4}\,\left(\alpha_{3/2}\,\theta^{3/2}+\alpha_{2\kappa}\right).$
(S72)
Here, the leading factor of $15/4$ is chosen to yield the correct high
temperature limit for the ratio $\kappa_{T}/\eta$. In this limit, Eqs. S65 and
S63 yield $\kappa_{T}=\frac{15}{4}\frac{k_{B}}{m}\,\eta$, where we have used
$\tau_{\kappa}/\tau_{\eta}=3/2$, which can be shown to hold for any isotropic
collision cross section.
With Eqs. S71 and S72, the relaxation times are then parameterized in the main
text by inverting Eqs. S64 and S66 as
$\frac{\tau_{\eta}}{\tau_{F}}=\alpha_{\eta}\,\frac{5}{2\pi}\frac{1}{f_{E}(\theta)}$
(S73)
and
$\frac{\tau_{\kappa}}{\tau_{F}}=\alpha_{\kappa}\,\frac{1}{\pi}\frac{1}{f_{E}(\theta)}.$
(S74)
Fig. S1 shows the sound diffusivity obtained from the measured relaxation
times, Eq. S70, which is compared to the sound diffusivity measured via sound
attenuation by Patel et al. in Ref. [22]. We see that the high temperature
behavior obtained from Eq. S69 using the fitted density shift coefficients for
the shear viscosity and thermal conductivity in Eqs. S71 and S72 is in good
agreement with the high temperature sound attenuation measurements. However,
the lower temperature sound diffusivity measurements of Ref. [22] exhibit a
nearly constant upward shift relative to the sound diffusivity obtained from
the relaxation time experiments.
Figure S1: Sound diffusivity from measured $\tau_{\eta}$, $\tau_{\kappa}$,
$D_{1}[\hbar/m]=\frac{8\pi}{15}\,\frac{\tau_{\eta}}{\tau_{F}}\,f_{E}(\theta)+\frac{2\pi}{3}\frac{\tau_{\kappa}}{\tau_{F}}\,\theta$
(Blue dots), in units of $\hbar/m$ versus reduced temperature
$\theta=T/T_{F}$. Red solid curve:
$D_{1}[\hbar/m]=\frac{4}{3}\,(2.77\,\theta^{3/2}\\!\\!+0.44)+\frac{5}{2}\frac{\theta}{f_{E}(\theta)}\,(2.77\,\theta^{3/2}\\!\\!+0.22)$.
Red-dashed curve (high temperature limit):
$f_{E}(\theta)\rightarrow\frac{5}{2}\,\theta$, $D_{1}[\hbar/m]=7/3\times
2.77\,\theta^{3/2}$. Sound diffusivity data of Ref. [22] (Orange dots). Error
bars (blue dots) are statistical. (color online)
Numerical simulation of an extensible capsule using regularized
Stokes kernels and overset finite differences
Dhwanit Agarwal
_Oden Institute, University of Texas at Austin
Austin, TX, 78712_
<EMAIL_ADDRESS>
George Biros
_Oden Institute, University of Texas at Austin
Austin, TX, 78712_
<EMAIL_ADDRESS>
###### Abstract
In this paper, we present a novel numerical scheme for simulating deformable
and extensible capsules suspended in a Stokesian fluid. The main feature of
our scheme is a partition-of-unity (POU) based representation of the surface
that enables asymptotically faster computations compared to spherical-
harmonics based representations. We use a boundary integral equation
formulation to represent and discretize hydrodynamic interactions. The
boundary integrals are weakly singular. We use the quadrature scheme based on
the regularized Stokes kernels (given in [32]). We also use partition-of-unity
based finite differences to compute the interfacial forces. Given an
$N$-point surface discretization, our numerical
scheme has fourth-order accuracy and $\mathcal{O}(N)$ asymptotic complexity,
which is an improvement over the $\mathcal{O}(N^{2}\log N)$ complexity of a
spherical harmonics based spectral scheme that uses product-rule quadratures
[34]. We use GPU acceleration and demonstrate the ability of our code to
simulate complex shapes at high resolution. We study capsules that
resist shear and tension and their dynamics in shear and Poiseuille flows. We
demonstrate the convergence of the scheme and compare with the state of the
art.
###### Contents
1 Introduction
2 Problem formulation
  2.1 Formulation
  2.2 Boundary integral formulation
3 Numerical algorithms
  3.1 Surface parameterization
  3.2 Surface discretization
  3.3 Smooth surface integrals
  3.4 Singular integration
  3.5 Surface derivatives
  3.6 Time stepping and the overall algorithm
4 Results
  4.1 Integration results
  4.2 Derivative results
  4.3 Accuracy and convergence of the full numerical scheme
  4.4 Relaxation of capsule
  4.5 Capsule in shear flow
  4.6 Capsule in Poiseuille flow
  4.7 Timing results
5 Fast multipole method based acceleration
6 Conclusions
A Appendix
  A.1 Transition maps
  A.2 Surface derivative formulas
  A.3 The choice of parameter $r_{0}$ for the partition of unity
  A.4 Effect of the blending process in calculating derivatives
  A.5 Surface derivative and singular integration errors for shear flow and Poiseuille flow terminal shapes
  A.6 Numerical errors for different reduced volume $\nu$
  A.7 Sensitivity of the singular quadrature scheme to the regularization parameter $\delta$
  A.8 Wall clock times for our GPU accelerated code _vs_ the spherical harmonics CPU code
## 1 Introduction
Capsules suspended in a Stokesian fluid describe complex biological flows and
microfluidic devices [35, 12, 20]. In particular, we are interested in modeling
red blood cell (RBC) suspensions. Several previous works [13, 19, 8, 23, 42]
have modeled red blood cells as elastic membranes filled with a Newtonian
fluid and suspended in a Newtonian fluid, also referred to as _“capsules”_. In
this work, we present a numerical scheme for simulating a single deformable
three-dimensional capsule suspended in Stokes flow where its membrane resists
shear and tension [30, 29] (see Figure 1). The scheme can be extended using
existing techniques to simulating an arbitrary number of capsules [34].
_Related work:_ Stokesian flows with deformable capsules involve moving
interfaces, fluid-structure interaction, large deformations, near and long-
range stiff interactions, and often require high-order surface derivatives.
Efficient methods for simulating Stokesian capsule suspensions include
immersed boundary/interface methods [3], lattice-Boltzmann methods [22], and
boundary integral equations methods [27, 42, 34, 11]. Each of these methods
trades-off aspects of accuracy, simplicity of implementation, the ability to
simulate complex physics, and the numerical efficiency. Without advocating a
particular approach, we focus on integral equation formulations and discuss
the related literature in somewhat greater detail.
A nice feature of boundary integral formulations for Stokes flows is that they
require only the discretization of the capsule surface. One difficulty is that
they require specialized singular quadrature schemes to compute layer
potentials. Spectrally accurate boundary integral methods [34, 42] have used
spherical harmonics representations for the surface and surface fields,
_e.g.,_ elastic forces, velocity, and shape, to simulate the time evolution of
the surface. This surface representation is spectrally accurate in the degree
$p$ of the spherical harmonics that translates to $O(p^{2})$ discretization
points. For spectrally accurate singular quadrature, the authors in [42] used
the Bruno-Kunyansky quadrature [7, 41] that uses a floating partition of unity
to resolve weakly singular points around each global quadrature point. This
scheme has $O(p^{4}\log{p})$ work complexity and can be reduced to $O(p^{3})$
using an FFT-based scheme that embeds the surface [7] in a volumetric
Cartesian grid. The authors in [34] adapted the Graham-Sloan quadrature [17]
to the Stokes kernel. Given $\mathcal{O}(p^{2})$ global points to represent
the capsule, the scheme uses a product quadrature rule that requires
$\mathcal{O}(p^{2})$ spherical harmonics rotations. For small $p$, the fastest
implementation uses dense linear algebra at a $\mathcal{O}(p^{6})$ cost; this
becomes prohibitively expensive for large $p$. Fast rotation schemes based
on Legendre transforms and FFTs can lower the cost to $O(p^{4}\log p)$. The
scheme has excellent accuracy, and even for small values of $p$ for nearly-
spherical shapes it outperforms the Bruno-Kunyansky quadrature. This is due to
Bruno-Kunyansky’s highly localized floating-partition of unity that increases
the resolution requirements. Despite the spectral convergence of these methods
in the degree $p$ of spherical harmonics used, the spherical harmonics
representation has difficulties in resolving shapes with high curvature, like
the shapes shown in Figure 13(c) and Figure 16(c). Indeed, increasing the
degree $p$ of the spherical harmonics representation is too expensive [1, 2]
because the product quadrature is not amenable to acceleration via fast
multipole methods. An alternative is to use a more classical
surface representation that avoids global polynomials. Several studies [6, 5,
43, 31, 11] have used triangulation-based surface representations to provide
more control over local resolution. These schemes are effective but they
provide only second-order accurate surface representations. Using a lower-
order representation makes the computation of higher order derivatives more
difficult and complicates the quadrature rules, although recent advances in the
latter may circumvent this difficulty [36, 18]. We remark that we prefer
schemes that require no expensive geometry-dependent rule precomputation as the
capsule geometry evolves in time.
_Our Contributions:_ We present a new high-order numerical scheme for
simulating the dynamics of three-dimensional extensible capsules. The new
scheme allows non-uniform discretization and fast evaluations of integral
operators. Our singular quadrature is based on the regularized Stokes kernels
in [32]. We use multiple overlapping patches to parameterize the capsule
surface assuming a spherical topology surface at all times. We use a fourth-
order finite difference scheme using an overset grid based discretization of
the patches to calculate the interfacial elastic forces. For $N$ number of
discretization points of the surface, our scheme has $O(N^{2})$ work
complexity, which is similar to the lower order boundary element schemes [11]
and is fourth order accurate. Hence, our scheme offers a high order accuracy
and independent control over local resolution. Our singular quadrature scheme
is not a product quadrature, which allows acceleration via fast multipole
methods (FMMs) [33]. With FMM acceleration, our numerical scheme becomes an
$O(N)$ scheme.
To summarize, our scheme comprises the following main components:
* We describe an overlapping patch based parameterization for capsules that are diffeomorphic to the unit sphere.
* We provide a finite difference scheme based on the overset grid discretization of these patches to calculate the surface derivatives.
* We evaluate a high order singular quadrature scheme based on the regularized Stokes kernels given in [32].
* We use GPU acceleration for the evaluation of the singular quadrature to perform high resolution simulations.
_Limitations:_ We assume a spherical topology for the capsule surface, which
may not hold in certain biophysical flows. In this paper, we do not consider
bending elastic forces [28, 11, 34], but the scheme can be extended to include
them. We do not consider the viscosity contrast, _i.e.,_ the fluid inside and
outside the capsule have the same viscosity. Our algorithm can be extended to
include differences in the interior and exterior fluid viscosity by including
a double layer Stokes potential term in the problem formulation as described
in detail in [28]. The regularized Stokes double layer potential is provided
in [32]. Our scheme requires us to choose several parameters that can affect
the accuracy of the scheme, for example, the configuration of patches, the
extent of overlap of the patches, the partition of unity functions, and a
regularization parameter for the singular quadrature. In future work, instead
of a regular grid, we want to switch to a quadtree-based representation.
We also do not provide curvature adaptive surface parameterization. Our scheme
does not provide an arbitrary order of accuracy. We remark, however, that a
recently developed scheme [39] can be used instead of [32] to provide local
quadrature with higher accuracy.
_Outline of the paper:_ In the next section, we state the differential and the
boundary integral formulation of the dynamics of an extensible capsule
suspended in Stokes flow. In Section 3, we describe in detail the surface
parameterization and the numerical schemes (differentiation, integration and
time stepping) we use to simulate the capsule dynamics. In Section 4, we
report the numerical results demonstrating the convergence and accuracy of the
numerical schemes along with the results of capsule dynamics in shear and
Poiseuille flow. In Section 5, we discuss the FMM based acceleration of our
scheme and show results for the speedup obtained via acceleration using a
single level FMM. In Section 6, we provide a brief summary of the paper along
with the conclusions and the future directions that can be taken to improve
upon this work.
Figure 1: A representation of the problem setup. The grey filled region is the
interior of the capsule with membrane $\gamma$. Exterior of the capsule is
filled with a Newtonian fluid and the capsule is suspended freely in it.
$\bm{u}_{\infty}$ is the imposed background fluid velocity.
## 2 Problem formulation
In this section, we formally describe the mathematical formulation of the
problem and introduce the notation. We discuss the differential formulation in
Section 2.1 and then specify the boundary integral formulation of the problem
in Section 2.2. A representation of the setup is shown in Figure 1. Our
discussion follows the work in [27, 34].
### 2.1 Formulation
Let $\gamma$ be the membrane of an extensible capsule filled with a viscous
Newtonian fluid of viscosity $\mu_{i}$ and suspended in another viscous
Newtonian fluid of viscosity $\mu_{e}$. In this work, we assume that the
interior and exterior fluids have same viscosity, _i.e.,_
$\mu_{i}=\mu_{e}=\mu$. The microscopic length scale of the problem implies
that the dynamics of the problem can be described by the Stokes equation. The
boundary value problem is given by [34]:
$\displaystyle-\mu\Delta\bm{u}(\bm{x})+\nabla p(\bm{x})$ $\displaystyle=0\
\quad\forall\bm{x}\in\mathbb{R}^{3}\backslash\gamma,$ (2.1)
$\displaystyle\nabla\cdot\bm{u}(\bm{x})$
$\displaystyle=0\quad\forall\bm{x}\in\mathbb{R}^{3}\backslash\gamma,$ (2.2)
$\displaystyle[[-p\bm{n}+\mu(\nabla\bm{u}+\nabla\bm{u}^{T})\bm{n}]]$
$\displaystyle=\bm{f}\quad\textit{ on }\gamma,$ (2.3)
$\displaystyle\bm{u}(\bm{x})$
$\displaystyle\longrightarrow\bm{u}_{\infty}\quad\textit{ for
}\bm{x}\longrightarrow\infty,$ (2.4)
$\displaystyle\frac{\partial\bm{x}}{\partial t}$
$\displaystyle=\bm{u}\quad\textit{ on }\gamma.$ (2.5)
Here $\mu$ is the viscosity of the exterior and interior fluid, $\bm{u}$ is
the velocity of the fluid, $p$ is the pressure, [[$q$]] represents the jump in
a quantity $q$ across the capsule membrane $\gamma$, $\bm{n}$ is the unit
normal to the capsule membrane, $\bm{f}$ is the total force exerted by the
capsule membrane onto the fluid, $\bm{u}_{\infty}$ is the imposed background
velocity far away from the capsule. Equation 2.1 is the Stokes equation
representing the conservation of momentum, Equation 2.2 is the conservation of
mass, Equation 2.3 is the balance of force on the interface between fluid and
membrane, Equation 2.4 sets the far field velocity to be the background
velocity and Equation 2.5 enforces the no-slip condition on the capsule
membrane.
_Interfacial force:_ Now we discuss in detail the interfacial force $\bm{f}$
exerted by capsule membrane onto the surrounding fluid due to membrane
elasticity. We only consider the elastic forces due to in-plane shear
deformations of the capsule. Previous works have also included bending forces
[42, 11, 38] but we leave it for future work and focus mainly on the numerical
schemes in this paper. The in-plane shear force $\bm{f}_{s}$ is equal to the
surface divergence of the symmetric part of the in-plane shear stress tensor
denoted by $\Lambda$ [29, 42]. Thus, we write,
$\displaystyle\bm{f}=\bm{f}_{s}=\nabla_{\gamma}\cdot\Lambda\textit{. }$ (2.6)
_In-plane shear stress tensor $\Lambda$:_ Our discussion follows the work
based on the Skalak model in [27, 29]. The in-plane shear stress tensor
$\Lambda$ is a function of the surface deformation gradient of the capsule
surface relative to the stress-free reference configuration of the membrane.
Let $\gamma$ be the surface in the current configuration and $\gamma_{r}$ be
the reference configuration of the membrane. If $\bm{x}_{r}\in\gamma_{r}$ is a
point that maps to $\bm{x}\in\gamma$ in the current configuration, let
$\xi(\bm{x}_{r})=\bm{x}$ be the bijective map between the configurations. The
deformation gradient $\bm{F}$ is defined as
$\bm{F}=\frac{\partial\xi}{\partial\bm{x}_{r}}.$ If $\bm{n}_{r}$ is the normal
to the reference configuration at $\bm{x}_{r}\in\gamma_{r}$ and $\bm{n}$ is
the normal to the current configuration at $\bm{x}\in\gamma$, then the
relative surface deformation gradient $\bm{F}_{S}$ is defined as
$\bm{F}_{S}=(\bm{I}-\bm{n}\bm{n}^{T})\bm{F}(\bm{I}-\bm{n}_{r}\bm{n}_{r}^{T})$,
where $\bm{I}$ is the identity tensor. It follows from the above equation that
$\bm{F}_{S}$ maps the surface tangents $\bm{a}_{1r}$ and $\bm{a}_{2r}$ in the
reference configuration $\gamma_{r}$ to the tangents $\bm{a}_{1}$ and
$\bm{a}_{2}$ in the current configuration $\gamma$. Also, it maps the
reference normal $\bm{n}_{r}$ to $0$. Thus, the following set of equations at
every point $\bm{x}_{r}$ uniquely determine $\bm{F}_{S}$ at that point on the
reference configuration: $\bm{F}_{S}\bm{a}_{1r}=\bm{a}_{1},\textit{
}\bm{F}_{S}\bm{a}_{2r}=\bm{a}_{2},\textit{ }\bm{F}_{S}\bm{n}_{r}=0.$ The left
Cauchy-Green deformation tensor $\bm{V}^{2}$ and the surface projection tensor
$\bm{P}$ at point $\bm{x}$ in the current configuration are then defined as
$\displaystyle\bm{V}^{2}(\bm{x})=\bm{F}_{s}(\xi^{-1}(\bm{x}))(\bm{F}_{s}(\xi^{-1}(\bm{x})))^{T},\textit{
}\bm{P}(\bm{x})=\bm{I}-\bm{n}\bm{n}^{T}.$ (2.7)
Following the work in [30, 29], if $\lambda_{1}^{2},\lambda_{2}^{2}$ are the
two non-zero eigenvalues of $\bm{V}^{2}$, we define two scalar invariants
$I_{1}=\lambda_{1}^{2}+\lambda_{2}^{2}-2\textit{ and
}I_{2}=\lambda_{1}^{2}\lambda_{2}^{2}-1$. We now define the shear stress
tensor $\Lambda$ at every point $\bm{x}$ on the current configuration as
$\displaystyle\Lambda(\bm{x})=\frac{E_{s}}{2J_{s}}(I_{1}+1)\bm{V}^{2}(\bm{x})+\frac{J_{s}}{2}(E_{D}I_{2}-E_{s})\bm{P}(\bm{x}),$
(2.8)
where $J_{s}=\lambda_{1}\lambda_{2}$, $E_{s}$ is the elastic shear modulus of
the membrane and $E_{D}$ is the dilatation modulus. High values of the shear
modulus $E_{s}$ represent higher membrane shear resistance. The dilatation
modulus $E_{D}$ controls the local membrane extensibility.
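As a concrete illustration of how Equation 2.8 is evaluated pointwise, the following Python sketch computes $\Lambda$ at one surface point from $\bm{F}_{S}$ and the current normal; this is our illustration of the formulas above, not the paper's implementation.

```python
import numpy as np

def shear_stress_tensor(FS, n, Es, ED):
    """In-plane shear stress Lambda at one surface point (Eq. 2.8).
    FS : 3x3 relative surface deformation gradient F_S
    n  : unit normal in the current configuration
    """
    V2 = FS @ FS.T                              # left Cauchy-Green tensor
    ev = np.linalg.eigvalsh(V2)                 # ascending; ev[0] ~ 0 (normal)
    lam1_sq, lam2_sq = ev[1], ev[2]             # the two non-zero eigenvalues
    I1 = lam1_sq + lam2_sq - 2.0
    I2 = lam1_sq * lam2_sq - 1.0
    Js = np.sqrt(lam1_sq * lam2_sq)
    P = np.eye(3) - np.outer(n, n)              # surface projection tensor
    return (Es / (2.0 * Js)) * (I1 + 1.0) * V2 + (Js / 2.0) * (ED * I2 - Es) * P
```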
### 2.2 Boundary integral formulation
Following the work in [29, 27], Equations 2.1, 2.2, 2.3, 2.4 and 2.5 can be
formulated into integral equations on the capsule membrane $\gamma$. This
formulation can be written as:
$\displaystyle\bm{f}(\bm{x})=\nabla_{\gamma}\cdot\Lambda(\bm{x})\quad\forall\bm{x}\in\gamma,$
(2.9)
$\displaystyle\bm{u}(\bm{x})=\bm{u}_{\infty}(\bm{x})+\mathcal{S}_{\gamma}[\bm{f}](\bm{x})\quad\forall\bm{x}\in\gamma,$
(2.10) $\displaystyle\frac{\partial\bm{x}}{\partial
t}=\bm{u}(\bm{x})\quad\forall\bm{x}\in\gamma.$ (2.11)
Here $\Lambda(\bm{x})$ is computed using Equation 2.8.
$\mathcal{S}_{\gamma}[\bm{f}]$ represents the single layer potential of layer
density $\bm{f}$ over the capsule membrane $\gamma$ defined as follows:
$\displaystyle\mathcal{S}_{\gamma}[\bm{f}](\bm{x})=\int_{\gamma}\mathcal{G}(\bm{x},\bm{y})\bm{f}(\bm{y})d\gamma,$
(2.12)
where $\mathcal{G}$ is the Stokes kernel given by
$\mathcal{G}(\bm{x},\bm{y})=\frac{1}{8\pi\mu}\left(\frac{I}{||\bm{r}||}+\frac{\bm{r}\otimes\bm{r}}{||\bm{r}||^{3}}\right)$
with $\bm{r}=\bm{x}-\bm{y}$. Given the initial position of the capsule and its
stress-free reference configuration, we will use Equations 2.9, 2.10 and 2.11
to simulate the time evolution of capsule under the imposed background flow
$\bm{u}_{\infty}$.
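To make the discretization of Equation 2.12 concrete, the sketch below evaluates the single layer potential with a plain weighted quadrature rule. Such a rule is accurate only when the target is far from the surface; near the singularity the paper switches to the regularized kernels of [32]. Names and data layout are illustrative.

```python
import numpy as np

def stokes_single_layer(x, y_pts, f_pts, w, mu):
    """Evaluate S_gamma[f](x) of Eq. 2.12 with a smooth quadrature rule.
    y_pts : (N,3) quadrature nodes on gamma
    f_pts : (N,3) traction density at the nodes
    w     : (N,) weights (including area element and POU factors)
    """
    r = x - y_pts                                   # (N,3)
    rnorm = np.linalg.norm(r, axis=1)               # (N,)
    # G(x,y) f = f/|r| + r (r.f)/|r|^3, summed with quadrature weights
    term1 = f_pts / rnorm[:, None]
    term2 = r * ((r * f_pts).sum(axis=1) / rnorm**3)[:, None]
    return (w[:, None] * (term1 + term2)).sum(axis=0) / (8.0 * np.pi * mu)
```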
## 3 Numerical algorithms
In this section, we describe the numerical algorithms we use to simulate the
dynamics of the capsule. We discretize Equations 2.9, 2.10 and 2.11 discussed
in previous section. In Section 3.1, we discuss our parameterization of
capsule surface (assuming it is diffeomorphic to the unit sphere) using
multiple patches. In Section 3.2, we discuss the discretization of the surface
based on our parameterization of the surface. In Section 3.3, we discuss the
numerical scheme for the surface integration of a smooth function. In Section
3.4, we discuss the numerical scheme for the singular integration to compute
the Stokes single layer potential. In Section 3.5, we discuss the numerical scheme
for calculating the surface derivatives for the computation of the elastic
force. Finally in Section 3.6, we discuss the time stepping we use to simulate
the capsule dynamics and provide a summary of the overall algorithm.
### 3.1 Surface parameterization
We assume the capsule membrane $\gamma$ to be smooth and diffeomorphic to the
unit sphere $\mathbb{S}^{2}$ embedded in $\mathbb{R}^{3}$. Thus, we have a
smooth bijective map $\phi:\mathbb{S}^{2}\longrightarrow\gamma$ with the
smooth inverse $\phi^{-1}$. We define $n_{p}$ number of overlapping patches
$\\{\mathcal{P}_{i}^{0}\\}_{i=1}^{i=n_{p}}$, where
$\mathcal{P}_{i}^{0}\subset\mathbb{S}^{2}$ form an open cover of the unit
sphere $\mathbb{S}^{2}$, _i.e.,_
$\bigcup_{i=1}^{n_{p}}\mathcal{P}_{i}^{0}=\mathbb{S}^{2}$. Additionally, we
assume each patch $\mathcal{P}_{i}^{0}$ is diffeomorphic to an open set
$\mathcal{U}_{i}$ ($\subset\mathbb{R}^{2}$) via a coordinate chart
$\eta_{i}^{0}$, _i.e.,_
$\eta_{i}^{0}:\mathcal{U}_{i}\longrightarrow\mathcal{P}_{i}^{0}$ is a
diffeomorphism. The domain $\mathcal{U}_{i}$ is called the coordinate domain
for the patch $\mathcal{P}^{0}_{i}$. Thus, the set
$\mathcal{A}^{0}=\\{(\mathcal{U}_{i},\eta_{i}^{0})\\}_{i=1}^{n_{p}}$ forms an
atlas for the manifold $\mathbb{S}^{2}$ [10]. We also know that such an atlas
admits a smooth partition of unity subordinate to the open cover
$\\{\mathcal{P}_{i}^{0}\\}_{i=1}^{n_{p}}$ [37]. We choose a smooth partition
of unity $\\{\psi_{i}^{0}\\}_{i=1}^{n_{p}}$ where
$\psi_{i}^{0}:\mathbb{S}^{2}\longrightarrow\mathbb{R}$ such that
$\mathsf{supp}(\psi_{i}^{0})\subset\mathcal{P}_{i}^{0}$ for
$i=1,\ldots,n_{p}$, where $\mathsf{supp}(\cdot)$ denotes the support of the
function. Thus, for every $\bm{x}_{0}\in\mathbb{S}^{2}$,
$\sum_{i=1}^{n_{p}}\psi_{i}^{0}(\bm{x}_{0})=1$. The precise definition of
$\psi_{i}^{0}$ is given in Equation 3.9.
Define $\mathcal{P}_{i}:=\phi(\mathcal{P}_{i}^{0})$. Now
$\\{\mathcal{P}_{i}\\}_{i=1}^{n_{p}}$ form a set of overlapping patches that
cover the surface $\gamma$, _i.e.,_
$\bigcup_{i=1}^{n_{p}}\mathcal{P}_{i}=\gamma$. We then define coordinate maps
$\eta_{i}:\mathcal{U}_{i}\longrightarrow\mathcal{P}_{i}$, where
$\eta_{i}\equiv\phi\circ\eta_{i}^{0}$ and therefore, a corresponding atlas
$\mathcal{A}:=\\{(\mathcal{U}_{i},\eta_{i})\\}_{i=1}^{n_{p}}$ for the capsule
surface $\gamma$. A representation of the parameterization is shown in Figure
2. Since the patches are overlapping, a point $\bm{x}\in\gamma$ can belong to
multiple $\mathcal{P}_{i}$. For $\bm{x}\in\gamma$, let us define the set
$\mathcal{I}_{\bm{x}}=\\{1\leq i\leq n_{p}\,|\,\bm{x}\in\mathcal{P}_{i}\\}$. Thus, $\mathcal{I}_{\bm{x}}$ is the collection
of indices $i$ for which patch $\mathcal{P}_{i}$ contains $\bm{x}$. We also
define $\mathcal{P}_{ij}=\mathcal{P}_{i}\cap\mathcal{P}_{j}$ and
$\mathcal{U}_{ij}=\eta_{i}^{-1}(\mathcal{P}_{ij})$ for $i,j=1,\ldots,n_{p}$.
We also define transition maps
$\tau_{ij}:\mathcal{U}_{ij}\longrightarrow\mathcal{U}_{j}$ as
$\tau_{ij}\equiv(\eta_{j}^{0})^{-1}\circ\eta_{i}^{0}|_{\mathcal{U}_{ij}}$.
Furthermore, the diffeomorphism $\phi$ allows us to create a partition of
unity $\\{\psi_{i}\\}_{i=1}^{n_{p}}$ on $\gamma$ by defining it as
$\displaystyle\psi_{i}(\bm{x})=\psi_{i}^{0}(\phi^{-1}(\bm{x})),\quad\forall\bm{x}\in\gamma.$
(3.1)
For any function $f:\gamma\longrightarrow\mathbb{R}^{d}$, we can write
$\displaystyle
f(\bm{x})=\sum_{i=1}^{n_{p}}f(\bm{x})\psi_{i}(\bm{x}),\quad\forall\bm{x}\in\gamma.$
(3.2)
If $f$ is smooth, then $f\psi_{i}$ is smooth and compactly supported in
$\mathcal{P}_{i}$ for $i=1,\ldots,n_{p}$. We use this partition of unity
representation to compute derivatives and integrals on $\gamma$.
In this paper, we do not consider the adaptive case, and we assume that the
patch parameterization remains unchanged through the calculation. In our
numerical experiments, we used just six patches. Precisely, consider
$\mathcal{U}_{i}=(0,\pi)\times(0,\pi),\ i=1,\ldots,6$. The coordinate
maps $\\{\eta_{i}^{0}\\}_{i=1}^{6}$ are given below:
$\displaystyle\eta_{1}^{0}(u,v)$
$\displaystyle=(\sin{u}\cos{v},\sin{u}\sin{v},\cos{u}),$ (3.3)
$\displaystyle\eta_{2}^{0}(u,v)$
$\displaystyle=(-\sin{u}\cos{v},-\sin{u}\sin{v},\cos{u}),$ (3.4)
$\displaystyle\eta_{3}^{0}(u,v)$
$\displaystyle=(\sin{u}\sin{v},-\sin{u}\cos{v},\cos{u}),$ (3.5)
$\displaystyle\eta_{4}^{0}(u,v)$
$\displaystyle=(-\sin{u}\sin{v},\sin{u}\cos{v},\cos{u}),$ (3.6)
$\displaystyle\eta_{5}^{0}(u,v)$
$\displaystyle=(\sin{u}\cos{v},-\cos{u},\sin{u}\sin{v}),$ (3.7)
$\displaystyle\eta_{6}^{0}(u,v)$
$\displaystyle=(\sin{u}\cos{v},\cos{u},-\sin{u}\sin{v}).$ (3.8)
These six charts form six hemispherical patches $\\{\mathcal{P}_{i}^{0}\\}_{i=1}^{6}$ that cover the unit sphere as shown in Figure 3. The computation of the transition maps $\tau_{ij}$ is relatively simple and is given in Section A.1. We use the bump function $b(r)=e^{\frac{2e^{-1/|r|}}{|r|-1}}$ for $|r|<1$ (and $b(r)=0$ otherwise) to construct
the partition of unity functions on each patch. For the particular
parameterization we use, for each patch of the unit sphere, we define
$\displaystyle\psi_{i}^{0}(\bm{x}_{0})=\frac{b\left(\frac{d(\bm{x}_{0},\eta_{i}^{0}(\pi/2,\pi/2))}{r_{0}}\right)}{\sum_{j=1}^{n_{p}}b\left(\frac{d(\bm{x}_{0},\eta_{j}^{0}(\pi/2,\pi/2))}{r_{0}}\right)},\
\forall\bm{x}_{0}\in\mathbb{S}^{2}$ (3.9)
where $d(\bm{x}_{0},\bm{y}_{0})$ denotes the great circle distance between
$\bm{x}_{0}$ and $\bm{y}_{0}$ on the unit sphere, and $r_{0}>0$ determines the
support of the partition of unity inside the patch. The argument of $b(\cdot)$
is the normalized great circle distance from a point $\bm{x}_{0}$ to the center of
the patch $\eta_{i}^{0}(\pi/2,\pi/2)$, where the normalizing factor $r_{0}$ is
chosen to be $r_{0}=5\pi/12$ (see Section A.3). Now, for a given surface
$\gamma$ and diffeomorphism $\phi$, we readily have an atlas and the
transition maps for the parameterization of $\gamma$. The corresponding
partition of unity on $\gamma$ is available using Equation 3.1 and Equation
3.9.
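For concreteness, a minimal Python sketch of this construction is given below (assuming NumPy; for our six charts, the patch centers $\eta_{i}^{0}(\pi/2,\pi/2)$ evaluate to $\pm\bm{e}_{y}$, $\pm\bm{e}_{x}$ and $\pm\bm{e}_{z}$). It evaluates the partition of unity of Equation 3.9 at a point on the unit sphere.

```python
import numpy as np

# Patch centers eta_i^0(pi/2, pi/2) of the six hemispherical charts (Eqs. 3.3-3.8).
CENTERS = np.array([[0, 1, 0], [0, -1, 0], [1, 0, 0],
                    [-1, 0, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
R0 = 5 * np.pi / 12  # support radius r_0 of the partition of unity

def bump(r):
    """Bump function b(r) = exp(2 e^{-1/|r|} / (|r| - 1)) for |r| < 1, else 0."""
    r = np.abs(r)
    out = np.zeros_like(r)
    inside = r < 1
    ri = r[inside]
    with np.errstate(divide="ignore"):
        out[inside] = np.exp(2.0 * np.exp(-1.0 / ri) / (ri - 1.0))
    return out

def partition_of_unity(x0):
    """psi_i^0(x0) of Eq. 3.9 for a unit vector x0; returns an array of length 6."""
    d = np.arccos(np.clip(CENTERS @ x0, -1.0, 1.0))  # great-circle distances
    b = bump(d / R0)
    return b / b.sum()

psi = partition_of_unity(np.array([0.0, 0.0, 1.0]))
assert abs(psi.sum() - 1.0) < 1e-14  # the weights sum to unity
```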
Figure 2: Here we summarize the notation for the atlas construction. The unit
sphere $\mathbb{S}^{2}$ is on the left and the capsule $\gamma$ is on the
right with the diffeomorphism $\phi:\mathbb{S}^{2}\longrightarrow\gamma$. We
show two overlapping patches colored in red and blue. The first patch
$\mathcal{P}_{1}^{0}$ is the red colored arc from point $a_{0}$ to $b_{0}$ on
$\mathbb{S}^{2}$. Its corresponding patch $\mathcal{P}_{1}$ on the capsule
surface $\gamma$ is shown in red as the arc from points $a_{1}$ to $b_{1}$.
The second patch $\mathcal{P}_{2}^{0}$ is the blue colored arc from point
$c_{0}$ to $d_{0}$ on $\mathbb{S}^{2}$. Its corresponding patch
$\mathcal{P}_{2}$ on the capsule surface $\gamma$ is shown in blue as the arc
from points $c_{1}$ to $d_{1}$. Their corresponding coordinate domains
$\mathcal{U}_{1}$ and $\mathcal{U}_{2}$ are also shown in the red and blue
color. The coordinate charts $\eta_{i}^{0}:\mathcal{U}_{i}\longrightarrow\mathcal{P}^{0}_{i},\ i=1,2$, are shown as dashed lines in the respective colors. The diffeomorphism $\phi$ also gives the coordinate charts $\eta_{i}:\mathcal{U}_{i}\longrightarrow\mathcal{P}_{i},\ i=1,2$, for the
patches on the capsule surface $\gamma$ shown as colored dashed lines from
$\mathcal{U}_{i}$ to $\mathcal{P}_{i}$. The corresponding partition of unity
functions $\psi^{0}_{i},i=1,2,$ with
$\mathsf{supp}(\psi^{0}_{i})\subset\mathcal{P}^{0}_{i}$ are drawn over
coordinate domains $\mathcal{U}_{i}$ for visual clarity since
$\mathcal{P}_{i}^{0}$ is diffeomorphic to $\mathcal{U}_{i}$.
Figure 3: The six hemispherical patches forming an open cover of the unit
sphere $\mathbb{S}^{2}$. Each one is represented by the black grid. a)–f)
$\mathcal{P}_{i}^{0}$ for $i=1,2,\ldots,6$.
### 3.2 Surface discretization
We now describe in detail the discretization of the surface $\gamma$ using the
parameterization described above in Section 3.1. As mentioned above,
$\mathcal{U}_{i}=(0,\pi)\times(0,\pi)$ for our parameterization. We use the following $m_{\mathrm{th}}$-order uniform grid in $\mathcal{U}_{i}$:
$\displaystyle U^{m,i}_{j,k}=\left(\frac{j\pi}{m},\frac{k\pi}{m}\right),\
\forall j,k\in\\{1,\ldots,m-1\\}.$ (3.10)
We use $h_{m}$ to denote the $m_{\mathrm{th}}$-order grid spacing in each
$\mathcal{U}_{i}$ space, _i.e.,_ $h_{m}=\frac{\pi}{m}$. The discretization
points for each patch $\mathcal{P}_{i}^{0}$ on the unit sphere are given by
$\displaystyle X^{0,m,i}_{j,k}=\eta_{i}^{0}(U^{m,i}_{j,k}),\ \forall
j,k\in\\{1,\ldots,m-1\\}.$ (3.11)
The discretization points for each patch $\mathcal{P}_{i}$ on $\gamma$ are
given by
$\displaystyle X^{m,i}_{j,k}=\eta_{i}(U^{m,i}_{j,k}),\ \forall
j,k\in\\{1,\ldots,m-1\\}.$ (3.12)
Thus, each patch contains $(m-1)^{2}$ discretization points for an
$m_{\mathrm{th}}$-order grid leading to a total of $N=6(m-1)^{2}$ points for
the surface. The dynamics of the capsule are represented as the time
trajectories $X^{m,i}_{j,k}(t)$. The partition of unity values at the
discretization points $\psi^{m,i}_{j,k}=\psi_{i}(X^{m,i}_{j,k})$ can be
precomputed and stored to be used later for computing integrals and
derivatives (see Equation 3.1). The sample grids for a unit sphere and an
ellipsoid are given in Figure 4 and Figure 5. To get the diffeomorphism $\phi$
for an ellipsoid $\gamma$ given by the level surface
$\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}+\frac{z^{2}}{c^{2}}=1$, we consider
the spherical angles parameterization
$\beta(u,v):[0,\pi]\times[0,2\pi)\longrightarrow\gamma$ given by
$\displaystyle\beta(u,v)=(a\sin{u}\cos{v},b\sin{u}\sin{v},c\cos{u}),u\in[0,\pi],v\in[0,2\pi).$
(3.13)
We also consider the spherical angles parameterization of the unit sphere
$\beta_{0}(u,v):[0,\pi]\times[0,2\pi)\longrightarrow\mathbb{S}^{2}$ given by
$\displaystyle\beta_{0}(u,v)=(\sin{u}\cos{v},\sin{u}\sin{v},\cos{u}),u\in[0,\pi],v\in[0,2\pi).$
(3.14)
The diffeomorphism $\phi:\mathbb{S}^{2}\longrightarrow\gamma$ is then given as
$\phi(\bm{x}_{0})=\begin{cases}\beta(\beta_{0}^{-1}(\bm{x}_{0}))&\text{ if
}\bm{x}_{0}\in\mathbb{S}^{2}\backslash\\{(0,0,-1),(0,0,1)\\},\\\
(0,0,-c)&\text{ if }\bm{x}_{0}=(0,0,-1),\\\ (0,0,c)&\text{ if
}\bm{x}_{0}=(0,0,1).\\\ \end{cases}$
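A minimal sketch of this discretization is given below (assuming NumPy; note that for the ellipsoid the composition $\beta\circ\beta_{0}^{-1}$ reduces to the componentwise scaling $\phi(x,y,z)=(ax,by,cz)$, which the sketch exploits).

```python
import numpy as np

def grid(m):
    """m-th order uniform grid U^{m,i} in (0, pi) x (0, pi) (Eq. 3.10)."""
    t = np.pi * np.arange(1, m) / m              # j*pi/m, j = 1, ..., m-1
    u, v = np.meshgrid(t, t, indexing="ij")
    return u.ravel(), v.ravel()                  # (m-1)^2 points per patch

# The six coordinate charts eta_i^0 of Eqs. 3.3-3.8.
CHARTS = [
    lambda u, v: np.stack([np.sin(u)*np.cos(v),  np.sin(u)*np.sin(v),  np.cos(u)], -1),
    lambda u, v: np.stack([-np.sin(u)*np.cos(v), -np.sin(u)*np.sin(v), np.cos(u)], -1),
    lambda u, v: np.stack([np.sin(u)*np.sin(v), -np.sin(u)*np.cos(v),  np.cos(u)], -1),
    lambda u, v: np.stack([-np.sin(u)*np.sin(v), np.sin(u)*np.cos(v),  np.cos(u)], -1),
    lambda u, v: np.stack([np.sin(u)*np.cos(v), -np.cos(u),  np.sin(u)*np.sin(v)], -1),
    lambda u, v: np.stack([np.sin(u)*np.cos(v),  np.cos(u), -np.sin(u)*np.sin(v)], -1),
]

def discretize(m, a=1.0, b=1.0, c=1.0):
    """Points X^m on the ellipsoid x^2/a^2 + y^2/b^2 + z^2/c^2 = 1 (Eq. 3.12)."""
    u, v = grid(m)
    x0 = np.concatenate([eta(u, v) for eta in CHARTS])  # points on S^2, (N, 3)
    return x0 * np.array([a, b, c])                     # phi is a componentwise scaling

X = discretize(m=16, a=0.6)
assert X.shape == (6 * 15**2, 3)                        # N = 6 (m-1)^2 = 1350
```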
_Remark 1:_ Given an initial capsule surface $\gamma$, we only use the
diffeomorphism $\phi:\mathbb{S}^{2}\longrightarrow\gamma$ to compute the atlas
$\mathcal{A}$, the partition of unity $\\{\psi_{i}\\}_{i=1}^{n_{p}}$ for
$\gamma$ and the transition maps $\tau_{ij}$ for the coordinate domains. Once
computed, we initialize the surface discretization points (see Equation 3.12),
and track their time trajectories to simulate the dynamics of the capsule. We
do not need to use $\phi$ afterwards for capsule dynamics simulation. We do
not change the partition of unity values after initialization. Hence, we have
$\psi^{m,i}_{j,k}=\psi_{i}(X^{m,i}_{j,k}(t_{0}))=\psi_{i}(X^{m,i}_{j,k}(t))$
where $t_{0}$ is the initial time. These partition of unity values are
computed at initialization and stored for subsequent use. Also, the transition
maps between the coordinate domains remain unchanged and are precomputed for later use.
_Notation:_ For brevity, we use $\bm{U}^{m}$ to denote the $N\times 2$ matrix containing all the $m_{\mathrm{th}}$-order grid points, _i.e.,_ $\bm{U}^{m}=[\\{U^{m,i}_{j,k}\\}_{1\leq j,k\leq m-1,1\leq i\leq n_{p}}]^{T}$. Similarly, we define $\bm{X}^{m}=[\\{X^{m,i}_{j,k}\\}_{1\leq j,k\leq m-1,1\leq i\leq n_{p}}]^{T}$ to be the $N\times 3$ matrix of corresponding discretization points on $\gamma$, $\bm{\Psi}^{m}=[\\{\psi^{m,i}_{j,k}\\}_{1\leq j,k\leq m-1,1\leq i\leq n_{p}}]^{T}$ to be the $N\times 1$ column vector containing the values of
partition of unity functions and $\bm{F}^{m}$ to be the $N\times 3$ matrix
containing the interfacial force $\bm{f}$ values at discretization points of
the $m_{\mathrm{th}}$-order grid. Further, we use $\bm{U}^{m,i}$ to denote the
$(m-1)^{2}\times 2$ matrix of grid points belonging to $\mathcal{U}_{i}$.
Similarly, we use $\bm{X}^{m,i},\bm{\Psi}^{m,i},\bm{F}^{m,i}$ to denote the
values in the patch $\mathcal{P}_{i}$. For the unit sphere, we denote the
corresponding discretization points as $\bm{X}^{0,m}$. The partition of unity
column vector on $\bm{X}^{0,m}$ is the same as $\bm{\Psi}^{m}$.
_Remark 2:_ While we do not need $\phi$ for simulating capsule dynamics after
the initialization of atlas, transition maps and partition of unity, we use
$||\nabla_{\mathbb{S}^{2}}\phi||_{\infty}$ as a quality metric to gauge the
smoothness of the capsule surface $\gamma$ during the simulations in this
paper (see Section 3.6). To compute
$||\nabla_{\mathbb{S}^{2}}\phi||_{\infty}$, we will need the discretization
points $\bm{X}^{0,m}$. Hence, we store these values as well. Note that
$\phi(\bm{X}^{0,m})=\bm{X}^{m}$.
(a) $m=8$
(b) $m=16$
(c) $m=32$
Figure 4: Representation of discretization of a unit sphere using
$m_{\mathrm{th}}$-order grids for (a) $m=8$, (b) $m=16$, (c) $m=32$.
(a) $m=8$
(b) $m=16$
(c) $m=32$
Figure 5: Representation of discretization of the ellipsoid
$x^{2}/a^{2}+y^{2}/b^{2}+z^{2}/c^{2}=1$ with $a=0.5,b=1,c=1$, using
$m_{\mathrm{th}}$-order grids for (a) $m=8$, (b) $m=16$, (c) $m=32$.
### 3.3 Smooth surface integrals
Let $f:\gamma\longrightarrow\mathbb{R}^{d}$ be a smooth function. We can write
its surface integral as a sum of the integral over all the patches using the
partition of unity $\\{\psi_{i}\\}_{i=1}^{n_{p}}$. Then, we use the product
periodic trapezoidal rule using the function values at the grid points
mentioned in Equation 3.12, which gives superalgebraic convergence [7]. We
write
$\displaystyle\int_{\gamma}fd\gamma=\sum_{i=1}^{n_{p}}\int_{\mathcal{P}_{i}}f\psi_{i}d\gamma=\sum_{i=1}^{n_{p}}\int_{\mathcal{U}_{i}}f\psi_{i}W_{i}d\mathcal{U}_{i}\approx\sum_{i=1}^{n_{p}}\left(\sum_{j,k\in\\{1,\ldots,m-1\\}}f(X^{m,i}_{j,k})\psi^{m,i}_{j,k}W_{i}(X_{j,k}^{m,i})h_{m}^{2}\right),$
(3.15)
where $W_{i}$ is the surface area element for patch $\mathcal{P}_{i}$
(numerical computation of $W_{i}$ is discussed in Section 3.5).
_Work complexity:_ The work complexity to compute this integral numerically for the $m_{\mathrm{th}}$-order grid is $O(n_{p}m^{2})$. In our specific parameterization, $n_{p}=6$. Hence, the work required is $O(6m^{2})$.
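A minimal sketch of this quadrature (assuming the pointwise values of $f$, the partition of unity and the area element at the grid points have already been assembled into flat arrays of length $N$, following the notation of Section 3.2):

```python
import numpy as np

def surface_integral(f_vals, psi_vals, W_vals, m):
    """Approximate the surface integral of f over gamma (Eq. 3.15).

    f_vals, psi_vals, W_vals: arrays of length N = 6 (m-1)^2 holding f, the
    partition of unity and the area element at the grid points X^m.
    """
    h_m = np.pi / m                  # grid spacing in each coordinate domain
    return np.sum(f_vals * psi_vals * W_vals) * h_m**2

# Example: with f_vals = np.ones(N), this returns the surface area of gamma.
```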
### 3.4 Singular integration
We use the regularized Stokes kernel described in [32] to evaluate Stokes
potentials to high-order accuracy using the discretization described above in
Section 3.2.
Consider a smooth vector-valued function
$\bm{f}:\gamma\longrightarrow\mathbb{R}^{3}$. The single-layer Stokes
potential of $\bm{f}$ at $\bm{x}\in\gamma$ as described in Section 2.2 can be
written as
$\displaystyle\mathcal{S}_{\gamma}[\bm{f}](\bm{x})=\frac{1}{8\pi\mu}\int_{\gamma}\left(\frac{\bm{f}(\bm{y})}{r}+(\bm{f}(\bm{y})\cdot(\bm{x}-\bm{y}))(\bm{x}-\bm{y})\frac{1}{r^{3}}\right)d\gamma,$
(3.16)
where $r=||\bm{x}-\bm{y}||$. The regularized version of Stokes single layer
potential [32] is given as
$\displaystyle\mathcal{S}_{\gamma}^{\delta}[\bm{f}](\bm{x})=\frac{1}{8\pi\mu}\int_{\gamma}\left(\frac{\bm{f}(\bm{y})s_{1}(r/\delta)}{r}+(\bm{f}(\bm{y})\cdot(\bm{x}-\bm{y}))(\bm{x}-\bm{y})\frac{s_{2}(r/\delta)}{r^{3}}\right)d\gamma,$
(3.17)
where $\delta>0$ is the regularization parameter and $s_{1},s_{2}$ are
smoothing factors such that $\frac{s_{1}(r/\delta)}{r}$ and
$\frac{s_{2}(r/\delta)}{r^{3}}$ are smooth functions as $r\longrightarrow 0$.
These smoothing factors ensure that $S_{\gamma}^{\delta}[\bm{f}]$ is an
integral involving a smooth kernel and can be evaluated using the periodic
trapezoidal rule as in Section 3.3. Following [32, 24], we choose the
smoothing factors $s_{1}$ and $s_{2}$ to be
$\displaystyle s_{1}(r)$
$\displaystyle=\operatorname{Erf}(r)-\frac{2}{3}r(2r^{2}-5)\frac{e^{-r^{2}}}{\sqrt{\pi}},$
(3.18) $\displaystyle s_{2}(r)$
$\displaystyle=\operatorname{Erf}(r)-\frac{2}{3}r(4r^{4}-14r^{2}+3)\frac{e^{-r^{2}}}{\sqrt{\pi}},$
(3.19)
where $\operatorname{Erf}(\cdot)$ is the error function. Using Taylor
expansion, we can show that as $r\longrightarrow 0$,
$\frac{s_{1}(r/\delta)}{r}\longrightarrow\frac{16}{3\delta\sqrt{\pi}}$ and
$\frac{s_{2}(r/\delta)}{r^{3}}\longrightarrow\frac{32}{3\delta^{3}\sqrt{\pi}}$.
This choice of smoothing factors ensures that the regularized Stokes potential
is $O(\delta^{5})$ accurate [32]. This regularized integral is discretized by
the trapezoidal rule for smooth functions. However, the question is what
should be the relation between $m$ and $\delta$?
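Before addressing that question, we note that the smoothing factors themselves are straightforward to implement. The following sketch (assuming SciPy's `erf`; the $r\longrightarrow 0$ Taylor limits above are used as fallbacks at $r=0$) evaluates $s_{1}$, $s_{2}$ and the regularized kernel factors:

```python
import numpy as np
from scipy.special import erf

def s1(r):
    """Smoothing factor s_1 (Equation 3.18)."""
    return erf(r) - (2.0/3.0) * r * (2*r**2 - 5) * np.exp(-r**2) / np.sqrt(np.pi)

def s2(r):
    """Smoothing factor s_2 (Equation 3.19)."""
    return erf(r) - (2.0/3.0) * r * (4*r**4 - 14*r**2 + 3) * np.exp(-r**2) / np.sqrt(np.pi)

def kernel_factors(r, delta):
    """Return s_1(r/delta)/r and s_2(r/delta)/r^3, with their r -> 0 limits at r = 0."""
    r = np.asarray(r, dtype=float)
    safe = np.where(r > 0, r, 1.0)  # avoid division by zero; limits fill r = 0
    k1 = np.where(r > 0, s1(r / delta) / safe,
                  16.0 / (3.0 * delta * np.sqrt(np.pi)))
    k2 = np.where(r > 0, s2(r / delta) / safe**3,
                  32.0 / (3.0 * delta**3 * np.sqrt(np.pi)))
    return k1, k2
```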
_Regularization parameter $\delta$:_ The choice of regularization parameter
$\delta$ is crucial. As discussed in [4, 32], the chosen $\delta$ should be large enough that the regularization error dominates the discretization error in computing the integral in Equation 3.17. In our simulations, when computing the Stokes potential for a target point $X^{m,i}_{j,k}$ in patch $\mathcal{P}_{i}$, we choose a $\mathcal{P}_{i}$-dependent regularization parameter $\delta^{m,i}_{j,k}$ at every time step as the capsule evolves. It is defined by
$\displaystyle\delta^{m,i}_{j,k}=C\delta^{*}\mbox{~{}where~{}}\delta^{*}=\max_{l,l^{\prime}\in\\{1,\ldots,m-1\\}}d_{e}\left(X^{m,i}_{l,l^{\prime}},\mathcal{X}(X^{m,i}_{l,l^{\prime}})\right).$
(3.20)
Here, we choose the constant $C=1$ (see Section A.7 for more details on the choice of $C$),
$\mathcal{X}(X^{m,i}_{l,l^{\prime}})=\\{X^{m,i}_{l-a,l^{\prime}-b}:a,b\in\\{-1,0,1\\}\\}$
refers to the set of points which are adjacent neighbors of
$X^{m,i}_{l,l^{\prime}}$ and $d_{e}(a,B)$ denotes the maximum Euclidean
distance between $a$ and the points in set $B$. For brevity, we define
$\bm{\delta}^{m}=[\\{\delta^{m,i}_{j,k}\\}_{1\leq j,k\leq m-1,1\leq i\leq
n_{p}}]^{T}$. We experimentally observed that our choice of $\delta$ improves the accuracy of the singular quadrature in our simulations as the capsule changes shape and the patches evolve over time, as opposed to using a fixed global regularization parameter $\delta=C^{\prime}h_{m}$ where $C^{\prime}$ is a positive constant (as in [32]). To illustrate this, we tabulate the relative errors for different values of $C^{\prime}$ when using the fixed global regularization parameter $\delta=C^{\prime}h_{m}$ on a sample ellipsoidal shape in Table 11(a).
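A sketch of this computation for a single patch (assuming the patch's discretization points are stored as an $(m-1)\times(m-1)\times 3$ array, so that the adjacent neighbors $\mathcal{X}(\cdot)$ of Equation 3.20 are the adjacent array entries):

```python
import numpy as np

def delta_star(X_patch):
    """delta* of Eq. 3.20 for one patch.

    X_patch: (m-1, m-1, 3) array of discretization points X^{m,i}.
    Returns the maximum Euclidean distance between any point and its
    adjacent grid neighbors; delta^{m,i} = C * delta_star with C = 1.
    """
    n0, n1 = X_patch.shape[:2]
    d_max = 0.0
    for da in (-1, 0, 1):
        for db in (-1, 0, 1):
            if da == 0 and db == 0:
                continue
            # Pair each point with its (da, db)-shifted neighbor.
            a = X_patch[max(0, da):n0 + min(0, da), max(0, db):n1 + min(0, db)]
            b = X_patch[max(0, -da):n0 + min(0, -da), max(0, -db):n1 + min(0, -db)]
            d_max = max(d_max, np.linalg.norm(a - b, axis=-1).max())
    return d_max
```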
_Upsampling:_ In our experiments, the observed prefactor in our weakly
singular quadrature scheme is quite significant. For example, at small $m$, say $m=16$, we can resolve the surface well but we need more quadrature points to resolve the weakly singular integrals. For this reason we use upsampling, especially in the small-$m$ regime. Using numerical experiments, we determined that four-times upsampling works well. Thus, we use an upsampled grid with
$N_{\mathrm{up}}=6(4m-1)^{2}$ grid points for evaluating the single layer
potential. We use cubic spline interpolation [9] on each patch to upsample the
surface discretization points $\bm{X}^{m}$ and surface interfacial force
$\bm{F}^{m}$. We define this upsampling operator as $\bm{I}_{u}^{m}$. Thus,
the upsampled surface points and surface force vector can be written as
$\displaystyle\bm{X}^{m}_{u}=\bm{I}_{u}^{m}\bm{X}^{m},\bm{F}^{m}_{u}=\bm{I}_{u}^{m}\bm{F}^{m}.$
(3.21)
The partition of unity values on the upsampled grid, denoted by
$\bm{\Psi}^{m}_{u}$, can also be precomputed and stored for use. We use the upsampled values to compute the Stokes single layer potential (Equation 3.17) and then downsample the Stokes potential, via the downsampling operator $\bm{I}_{d}^{m}$, to get the values on the $m_{\mathrm{th}}$-order grid. Note that both $\bm{I}_{u}^{m}$ and $\bm{I}_{d}^{m}$ are cubic spline based weighted interpolation operators.
Their matrices can be precomputed and stored for use throughout the
simulation. We also compute the $\bm{\delta}^{m}_{u}$ for the upsampled grid
using Equation 3.20.
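As an illustration of the upsampling step, the following sketch uses SciPy's `RectBivariateSpline` with cubic degree as the per-patch interpolant; the precomputed sparse operator $\bm{I}_{u}^{m}$ would be assembled from the same spline weights. Each coordinate of $\bm{X}^{m}$ and each component of $\bm{F}^{m}$ is upsampled patch by patch in this way.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def upsample_patch(F_patch, m, factor=4):
    """Cubic-spline upsampling of one scalar field on one patch.

    F_patch: (m-1, m-1) array of values at the grid U^{m,i} (Eq. 3.10).
    Returns values on the (factor*m - 1) x (factor*m - 1) upsampled grid;
    fine nodes outside the coarse nodes are extrapolated by the spline.
    """
    t = np.pi * np.arange(1, m) / m                        # coarse grid nodes
    tu = np.pi * np.arange(1, factor * m) / (factor * m)   # fine grid nodes
    spline = RectBivariateSpline(t, t, F_patch, kx=3, ky=3)
    return spline(tu, tu)
```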
_Work complexity:_ The work complexity for the evaluation of the Stokes single layer potential on the upsampled grid for a single target point is $O(96m^{2})$. For $N_{\mathrm{up}}$ target points in the upsampled grid, the total work complexity for
computing single layer potential is $O(9216m^{4})$. The complexity for
upsampling and downsampling is $O(m^{2})$ since $\bm{I}_{u}^{m}$ and
$\bm{I}_{d}^{m}$ are sparse linear operators. Hence, the total work complexity
is $O(m^{4})$.
_GPU acceleration:_ The evaluation of the single layer potential on the upsampled grid is the most expensive part of our numerical scheme. To accelerate this computation, we use a CUDA implementation of the Stokes single layer potential based on the all-pairs $O(N^{2})$ calculation on the GPU (for details refer to [25]).
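For reference, a minimal NumPy version of the all-pairs evaluation of Equation 3.17 is sketched below (it reuses the `kernel_factors` helper sketched in Section 3.4 and the flat quadrature arrays of Section 3.2; a CUDA port parallelizes the outer loop over target points):

```python
import numpy as np

def single_layer(X, F, psi, W, delta, m, mu=1.0):
    """Regularized Stokes single layer potential (Eq. 3.17) at every point of X.

    X: (N, 3) points, F: (N, 3) densities; psi, W, delta: (N,) arrays of
    partition-of-unity weights, area elements and regularization parameters.
    """
    quad_w = psi * W * (np.pi / m) ** 2        # per-point quadrature weights
    S = np.zeros_like(X)
    for i in range(X.shape[0]):                # loop over target points
        r_vec = X[i] - X                       # (N, 3) displacements x - y
        r = np.linalg.norm(r_vec, axis=1)
        k1, k2 = kernel_factors(r, delta[i])   # smoothed 1/r and 1/r^3 (Sec. 3.4)
        fr = np.einsum("nj,nj->n", F, r_vec)   # f(y) . (x - y)
        S[i] = (F * k1[:, None] + r_vec * (fr * k2)[:, None]).T @ quad_w
    return S / (8.0 * np.pi * mu)
```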
### 3.5 Surface derivatives
In this section we discuss in detail our numerical scheme for calculating
surface derivatives. The derivatives are needed for computing the interfacial
force $\bm{f}$ and the surface area element $W$ (for Equation 3.15) at each
discretization point. The first step in the computation of the interfacial
force $\bm{f}$ is the computation of the shear stress tensor $\Lambda$. The
computation of $\Lambda$ requires the surface tangents and normals in the
current configuration (see Equation 2.8). After computing the stress tensor
$\Lambda$, the second step is to compute the interfacial force by calculating
the surface divergence of $\Lambda$ as $\bm{f}=\nabla_{\gamma}\cdot\Lambda$.
For the computation of surface area element $W$, we need the coefficients
$E,F,G$ of the first fundamental form of the surface. The formulas for the
surface divergence, denoted by $\nabla_{\gamma}\cdot$, and for the surface
area element $W$ along with $E,F,G$ are summarized in the Section A.2.
Let $\eta(u,v):\mathcal{U}\longrightarrow\mathcal{P}\subset\gamma$ be a
surface patch of $\gamma$ where $\eta\in\\{\eta_{1},\ldots,\eta_{n_{p}}\\}$.
Let $\eta(u_{0},v_{0})=\bm{x}\in\mathcal{P}$. Below, we summarize the steps
required to evaluate the required shape derivatives
$\bm{f}(\eta(u_{0},v_{0}))$ and $W(\eta(u_{0},v_{0}))$.
1. 1.
We need to compute the tangents, denoted by $\bm{x}_{u}$ and $\bm{x}_{v}$, and
the unit normal $\bm{n}(\bm{x})$ for the current and the reference
configuration. These quantities are given by
$\bm{x}_{u}=\frac{\partial\eta}{\partial
u}|_{(u_{0},v_{0})},\bm{x}_{v}=\frac{\partial\eta}{\partial
v}|_{(u_{0},v_{0})}$ and
$\bm{n}(\bm{x})=\frac{\bm{x}_{u}\times\bm{x}_{v}}{||\bm{x}_{u}\times\bm{x}_{v}||}$.
Given these derivatives we can then compute $\Lambda(\eta(u_{0},v_{0}))$.
2. 2.
The next step is to compute the surface divergence of $\Lambda(\eta(u,v))$ at
$(u_{0},v_{0})$. This involves computation of $E,F,G$ and partial $u,v$
derivatives of the components of $\Lambda(\eta(u,v))$ (see Section A.2). The
computation of $E,F,G$ at point $\bm{x}$ is straightforward using $\bm{x}_{u}$
and $\bm{x}_{v}$. The surface area element is then computed using
$W(\bm{x}(u_{0},v_{0}))=\sqrt{EG-F^{2}}$.
We note here from the above summary that the computation of both $\bm{f}$ and
$W$ boils down to multiple computations of the $u$ and $v$ partial derivatives
of functions like the coordinate chart $\eta(u,v)$ and the components of
$\Lambda(\eta(u,v))$. Now given an arbitrary function
$g:\gamma\longrightarrow\mathbb{R}$, we consider the parametric function
$\tilde{g}^{i}\equiv
g(\eta_{i}(u,v)):\mathcal{U}_{i}\longrightarrow\mathbb{R}$ for
$i=1,\ldots,n_{p}$. We denote its partial $u,v$ derivatives as
$\tilde{g}^{i}_{u}$ and $\tilde{g}^{i}_{v}$. We describe below the numerical
scheme to compute these partial derivatives $\tilde{g}^{i}_{u}$ and
$\tilde{g}^{i}_{v}$. This scheme is then sufficient to compute $\bm{f}$ and
$W$ at the discretization grid $\bm{U}^{m}$ using the formulas mentioned in
Section A.2. Before we describe the scheme in detail, we summarize the three
main steps of the scheme below:
Figure 6: A 2D representation of a patch extension to apply the central finite
difference stencil. The unit sphere $\mathbb{S}^{2}$ is on the left and the
capsule $\gamma$ is on the right with the diffeomorphism
$\phi:\mathbb{S}^{2}\longrightarrow\gamma$. The discrete points on the patch
$\mathcal{P}_{2}^{0}$ are represented by the blue colored dotted arc from
point $c_{0}$ to $d_{0}$ on $\mathbb{S}^{2}$. Its corresponding patch
$\mathcal{P}_{2}$ on the capsule surface $\gamma$ is shown in dotted blue line
as the arc from points $c_{1}$ to $d_{1}$. Its corresponding coordinate domain
$\mathcal{U}_{2}$ is also shown in blue. The extended grid points are shown with the '$+$' symbol alongside $\mathcal{U}_{2}$. The extended discretization points on $\mathbb{S}^{2}$ and $\gamma$ are shown with the '$\times$' symbol. The coordinate map $\eta_{2}^{0}$ we use in our parameterization has a natural extension, with the same expression as in Equations 3.3–3.8, to the extended domain with extended grid points.
1. 1.
_Patch extension:_ We intend to use central finite difference stencil to
compute $\tilde{g}^{i}_{u}$ and $\tilde{g}^{i}_{v}$ at the discretization
points for $i=1,\ldots,n_{p}$. To apply central difference at the grid points
near the boundary of each coordinate domain $\mathcal{U}_{i}$, we need
function values on ghost grid points outside the discretization grid in
$\mathcal{U}_{i}$. We call this process of obtaining values on the extended grid points _patch extension_. We obtain the values of $\tilde{g}^{i}$ on these ghost grid points using the patch extension process. Let
$\bm{x}_{1}=\eta_{i}(u_{1}^{i},v_{1}^{i})$ for $i=1,\ldots,n_{p}$, be an
extended discretization point (see Figure 6). The value of the function
$\tilde{g}^{i}$ at the extended grid point $(u_{1}^{i},v_{1}^{i})$, is given
in the continuous form as the weighted average of the values across the
patches as follows:
$\displaystyle\tilde{g}^{i}(u_{1}^{i},v_{1}^{i})=\sum_{1\leq j\leq
n_{p}}\tilde{g}^{j}(u_{1}^{j},v_{1}^{j})\psi_{j}(\bm{x}_{1}).$ (3.22)
Here, the partition of unity values are used as the weights for each patch. In
the continuous case,
$\tilde{g}^{i}(u_{1}^{i},v_{1}^{i})=\tilde{g}^{j}(u_{1}^{j},v_{1}^{j})$ for
$i,j=1,\ldots,n_{p}$. Hence, it is easy to see that Equation 3.22 is valid in the continuous setting by putting $\tilde{g}^{i}(u_{1}^{i},v_{1}^{i})=\tilde{g}^{j}(u_{1}^{j},v_{1}^{j})$ for $i,j=1,\ldots,n_{p}$ and noting that the partition of unity values add up to unity. In the discrete case, the above equation
provides a unique definition of discretized function values at every extended
grid point since $\tilde{g}^{i}(u_{1}^{i},v_{1}^{i})$ and
$\tilde{g}^{j}(u_{1}^{j},v_{1}^{j})$ are not necessarily the same for
$i,j\in\\{1,\ldots,n_{p}\\}$ in the discrete case due to numerical errors.
2. 2.
_Finite difference:_ After extension of function values to ghost grid points,
we use central finite difference operator on extended grids to obtain
$\tilde{g}^{i}_{u}$ and $\tilde{g}^{i}_{v}$ on the grid points $\bm{U}^{m,i}$
for $i=1,\ldots,n_{p}$.
3. 3.
_Blending:_ We note that a point $\bm{x}\in\gamma$ can belong to several
different patches. We suppose $\bm{x}=\eta_{i}(u_{0}^{i},v_{0}^{i})$ for
$i=1,\ldots,n_{p}$. The interfacial force $\bm{f}(\bm{x})$ is intrinsic to the
surface $\gamma$ and is independent of the local surface parameterization.
Using patch parameterization dependent numerical derivatives
$\tilde{g}^{i}_{u}$ and $\tilde{g}^{i}_{v}$ leads to different discrete
derivative values at $(u_{0}^{i},v_{0}^{i})$ for different $i=1,\ldots,n_{p}$.
To get unique derivative values consistent across all the patches, we _blend_ the derivatives across all the patches. The blending process takes the weighted average of the derivative values across all the patches, except that we must also transform the derivatives from, say, the domain $\mathcal{U}_{j}$ to the domain $\mathcal{U}_{i}$. This transformation is given by the Jacobian
$\bm{J}_{\tau_{ij}}$ of the transition map
$\tau_{ij}(u,v)=(\tau_{ij}^{(1)}(u,v),\tau_{ij}^{(2)}(u,v))\in\mathcal{U}_{j}$
as follows:
$\displaystyle\begin{bmatrix}\tilde{g}_{u}^{i}\\\
\tilde{g}_{v}^{i}\end{bmatrix}=\bm{J}_{\tau_{ij}}\begin{bmatrix}\tilde{g}_{u}^{j}\\\
\tilde{g}_{v}^{j}\end{bmatrix},$ (3.23)
where the Jacobian matrix is
$\displaystyle\bm{J}_{\tau_{ij}}=\begin{bmatrix}\partial\tau_{ij}^{(1)}/\partial
u&\partial\tau^{(2)}_{ij}/\partial u\\\ \partial\tau_{ij}^{(1)}/\partial
v&\partial\tau_{ij}^{(2)}/\partial v\end{bmatrix}.$ (3.24)
Here, $\tau_{ij}^{(1)},\tau_{ij}^{(2)}$ are defined as the first and second
component respectively of the transition map, _i.e.,_
$\tau_{ij}(u,v)=(\tau_{ij}^{(1)}(u,v),\tau_{ij}^{(2)}(u,v))\in\mathcal{U}_{j}$.
Both the RHS and the LHS in Equation 3.23 are evaluated at
$(u^{i}_{0},v^{i}_{0})$. Using this Jacobian transformation, we define the
blending process as the weighted average using partition of unity values as
weights for the patches:
$\displaystyle\begin{bmatrix}\tilde{g}_{u}^{i}\\\
\tilde{g}_{v}^{i}\end{bmatrix}=\sum_{j=1}^{n_{p}}\psi_{j}(\bm{x})\bm{J}_{\tau_{ij}}\begin{bmatrix}\tilde{g}_{u}^{j}\\\
\tilde{g}_{v}^{j}\end{bmatrix}$ (3.25)
for all $i=1,\ldots,n_{p}$, where $\bm{x}=\eta_{i}(u^{i}_{0},v^{i}_{0})$. The
partition of unity values $\psi_{j}$ form the respective weights for the coordinate domain $\mathcal{U}_{i}$, as in Equation 3.22. Notice that Equation 3.25 is trivially valid in the continuous case since Equation 3.23
holds and $\psi_{j}$ sum to unity. The above equation can be expanded as
follows:
$\displaystyle\tilde{g}^{i}_{u}(u^{i}_{0},v^{i}_{0})=\tilde{g}^{i}_{u}(u^{i}_{0},v^{i}_{0})\psi_{i}(\bm{x})+\sum_{j\neq
i,1\leq j\leq
n_{p}}\left(\tilde{g}_{u}^{j}\left.\frac{\partial\tau_{ij}^{(1)}}{\partial
u}\right|_{(u^{i}_{0},v^{i}_{0})}+\tilde{g}_{v}^{j}\left.\frac{\partial\tau_{ij}^{(2)}}{\partial
u}\right\rvert_{(u^{i}_{0},v^{i}_{0})}\right)\psi_{j}(\bm{x})$ (3.26)
$\displaystyle\tilde{g}^{i}_{v}(u^{i}_{0},v^{i}_{0})=\tilde{g}^{i}_{v}(u^{i}_{0},v^{i}_{0})\psi_{i}(\bm{x})+\sum_{j\neq
i,1\leq j\leq
n_{p}}\left(\tilde{g}_{u}^{j}\left.\frac{\partial\tau_{ij}^{(1)}}{\partial
v}\right\rvert_{(u^{i}_{0},v^{i}_{0})}+\tilde{g}_{v}^{j}\left.\frac{\partial\tau_{ij}^{(2)}}{\partial
v}\right\rvert_{(u^{i}_{0},v^{i}_{0})}\right)\,\psi_{j}(\bm{x}).$ (3.27)
The terms inside the parentheses in Equation 3.26 and Equation 3.27 come from
the Jacobian transformation in Equation 3.23. We list all the transition maps
for the specific parameterization we use in Section A.1. Equation 3.26 and
Equation 3.27 describe the blending equation we use to get the weighted
average of derivatives across all patches.
We now describe in detail all the three steps mentioned above and write the
discretized versions of these steps.
_Notation:_ We use $\tilde{\bm{g}}^{m}$ to denote the column vector of the
values of $\tilde{g}^{i}$ at the grid points $\bm{U}^{m,i}$ for all
$i=1,\ldots,n_{p}$. We use $\tilde{\bm{g}}^{m,i}$ to denote the column vector
of the values at the grid points $\bm{U}^{m,i}$.
_Patch extension:_ As discussed above, for the discretization points near the
boundary of $\mathcal{U}_{i},i=1,\ldots,n_{p}$, we need function values on
_ghost points_ lying outside $\mathcal{U}_{i}$ in order to use the central FDM
stencil. In our simulation, we use a fifth order 7-point symmetric 1D-stencil
[16] for each of the partial derivatives. Thus, in order to calculate the
derivative values on the grid $\bm{U}^{m}$, we need the values of $\tilde{g}$
on the extended grid (including ghost nodes) given by
$\displaystyle U^{m,i,ext}_{j,k}=\left(\frac{j\pi}{m},\frac{k\pi}{m}\right),\
\forall j,k\in\\{-2,-1,\ldots,m+1,m+2\\},$ (3.28)
where the set of points defined by
$G^{m,i}:=\\{U^{m,i,ext}_{j,k}\\}_{j,k\in\\{-2,-1,\ldots,m+1,m+2\\}}\backslash\\{U^{m,i,ext}_{j,k}\\}_{j,k\in\\{1,\ldots,m-1\\}}$
denotes the ghost discretization points for $\mathcal{U}_{i}$. We define the
extended coordinate chart domain as $\mathcal{U}^{m,ext}_{i}=\left(\frac{-3\pi}{m},\frac{(m+3)\pi}{m}\right)\times\left(\frac{-3\pi}{m},\frac{(m+3)\pi}{m}\right)$, containing the extended grid points as defined in Equation 3.28. The parameterization of the unit sphere $\eta_{i}^{0}$ has a natural extension to $\mathcal{U}_{i}^{m,ext}$ with the same function expressions as given in Equations 3.3–3.8. Using the diffeomorphism $\phi$, we also have mappings
$\eta_{i}^{m,ext}:\mathcal{U}_{i}^{m,ext}\longrightarrow\gamma$. We define
$\mathcal{P}_{i}^{m,ext}:=\eta_{i}^{m,ext}(\mathcal{U}_{i}^{m,ext})$. We
define the matrix of extended grid points for all the coordinate domains as
$\bm{U}^{m,ext}=[\\{U^{m,i,ext}_{j,k}\\}_{-2\leq j,k\leq m+2,1\leq i\leq
n_{p}}]^{T}$, while $\bm{U}^{m,i,ext}$ refers to the matrix of extended grid
points for the coordinate domain $\mathcal{U}_{i}$. The corresponding
discretization points on the surface $\gamma$ are theoretically given by
$\displaystyle X^{m,i,ext}_{j,k}=\eta_{i}\left(U^{m,i,ext}_{j,k}\right),\
\forall j,k\in\\{-2,\ldots,m+2\\}.$ (3.29)
Let the matrix of all surface discretization nodes on the extended patches be $\bm{X}^{m,ext}=[\\{X^{m,i,ext}_{j,k}\\}_{-2\leq j,k\leq m+2,1\leq i\leq n_{p}}]^{T}$. The goal of patch extension is to find the values of the
function $\tilde{g}^{i}$ on $\bm{U}^{m,i,ext}$, denoted by
$\tilde{\bm{g}}^{m,i,ext}$, using the known values on $\bm{U}^{m,i}$. The
discretized form of patch extension is given as
$\displaystyle\tilde{\bm{g}}^{m,i,ext}$ $\displaystyle=\sum_{1\leq j\leq
n_{p}}\left(\bm{I}_{ij}^{m,ext}\tilde{\bm{g}}^{m,j}\right)\left.\psi_{j}\right\rvert_{\bm{X}^{m,i,ext}}.$
(3.30)
This equation is the discretized version of Equation 3.22. Here, we additionally use cubic spline interpolation, denoted by the operator
$\bm{I}_{ij}^{m,ext}$, to get the values of $\tilde{g}^{j}$ on
$\bm{U}^{m,i,ext}$ using the known values on the discrete grid $\bm{U}^{m,j}$.
_Finite Differences:_ We use $\bm{D}^{m,i}_{u}$ and $\bm{D}^{m,i}_{v}$ to
denote the central finite difference operators for computing partial
derivative with respect to $u$ and $v$ respectively on the
$m_{\mathrm{th}}$-order grid in coordinate chart domain $\mathcal{U}_{i}$.
Thus, we write the partial derivatives on $\bm{U}^{m,i}$ as
$\displaystyle\tilde{\bm{g}}_{u}^{m,i}=\bm{D}^{m,i}_{u}\tilde{\bm{g}}^{m,i,ext},\tilde{\bm{g}}_{v}^{m,i}=\bm{D}^{m,i}_{v}\tilde{\bm{g}}^{m,i,ext},$
(3.31)
where $\tilde{\bm{g}}^{m,i,ext}$ is calculated using Equation 3.30.
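A sketch of this step for a single patch and the $u$-derivative is given below (assuming the standard symmetric 7-point central difference coefficients; the $v$-derivative is computed identically along the other grid axis):

```python
import numpy as np

# Symmetric 7-point central difference coefficients for the first derivative
# (standard finite-difference table values; divided by the grid spacing below).
STENCIL = np.array([-1.0, 9.0, -45.0, 0.0, 45.0, -9.0, 1.0]) / 60.0

def du_patch(G_ext, m):
    """Partial u-derivative on U^{m,i} from values on the extended grid.

    G_ext: (m+5, m+5) array of values at the extended grid points
    U^{m,i,ext} (indices j, k = -2, ..., m+2 of Eq. 3.28).
    Returns the (m-1, m-1) array of derivative values at U^{m,i}.
    """
    out = np.zeros((m - 1, m - 1))
    for s, c in enumerate(STENCIL):
        # Row offset s-3 relative to each interior point (rows 3..m+1 here).
        out += c * G_ext[s:s + m - 1, 3:m + 2]
    return out / (np.pi / m)
```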
_Blending:_ As discussed above, applying Equation 3.31 can lead to different
numerical values of derivatives at the same discretization point for different
domains $\mathcal{U}_{i}$. Therefore, we use blending to get unique values
across all the patches. The discretized versions of the blending equations
Equation 3.26 and Equation 3.27 are given as follows:
$\displaystyle\tilde{\bm{g}}_{u}^{m,i}=\tilde{\bm{g}}_{u}^{m,i}\left.\psi_{i}\right\rvert_{\bm{X}^{m,i}}+\sum_{j\neq
i,1\leq j\leq
n_{p}}\left(\left(\bm{I}_{ij}^{m}\tilde{\bm{g}}_{u}^{m,j}\right)\left.\frac{\partial\tau_{ij}^{(1)}}{\partial
u}\right\rvert_{\bm{U}^{m,i}}+\left(\bm{I}_{ij}^{m}\tilde{\bm{g}}_{v}^{m,j}\right)\left.\frac{\partial\tau_{ij}^{(2)}}{\partial
u}\right\rvert_{\bm{U}^{m,i}}\right)\left.\psi_{j}\right\rvert_{\bm{X}^{m,i}},$
(3.32)
$\displaystyle\tilde{\bm{g}}_{v}^{m,i}=\tilde{\bm{g}}_{v}^{m,i}\left.\psi_{i}\right\rvert_{\bm{X}^{m,i}}+\sum_{j\neq
i,1\leq j\leq
n_{p}}\left(\left(\bm{I}_{ij}^{m}\tilde{\bm{g}}_{u}^{m,j}\right)\left.\frac{\partial\tau_{ij}^{(1)}}{\partial
v}\right\rvert_{\bm{U}^{m,i}}+\left(\bm{I}_{ij}^{m}\tilde{\bm{g}}_{v}^{m,j}\right)\left.\frac{\partial\tau_{ij}^{(2)}}{\partial
v}\right\rvert_{\bm{U}^{m,i}}\right)\left.\psi_{j}\right\rvert_{\bm{X}^{m,i}},$
(3.33)
for $i=1,\ldots,n_{p}$, where $\bm{I}_{ij}^{m}$ is the cubic spline interpolation operator. It returns approximate function values on the $\bm{U}^{m,i}$ grid from the known values on the grid $\bm{U}^{m,j}$.
The three steps described above, namely the patch extension, the finite differences and the blending of derivatives, constitute our numerical scheme to calculate the interfacial forces using the formulas given in Section A.2.
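A compact sketch of the discrete blending step of Equation 3.32 is given below; the containers `I`, `dtau1_du`, `dtau2_du` and `psi` are hypothetical stand-ins for the precomputed interpolation operators $\bm{I}_{ij}^{m}$, the transition-map derivatives and the partition-of-unity weights.

```python
import numpy as np

def blend_u(gu, gv, I, dtau1_du, dtau2_du, psi, i, n_p=6):
    """Blended u-derivative on patch i (discrete form of Eq. 3.32).

    gu[j], gv[j]: flat arrays of raw derivatives on patch j's grid.
    I[i][j]: (sparse) interpolation matrix from patch j's grid to patch i's.
    dtau1_du[i][j], dtau2_du[i][j]: transition-map derivatives on U^{m,i}.
    psi[i][j]: partition-of-unity weights of patch j evaluated at X^{m,i}.
    """
    out = gu[i] * psi[i][i]
    for j in range(n_p):
        if j == i:
            continue
        out += (I[i][j] @ gu[j] * dtau1_du[i][j]
                + I[i][j] @ gv[j] * dtau2_du[i][j]) * psi[i][j]
    return out
```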
_Work complexity:_ The finite difference operators $\bm{D}_{u}^{m,i}$ and
$\bm{D}_{v}^{m,i}$ and the inter-patch interpolation operators
$\bm{I}_{ij}^{m},i,j\in\\{1,\ldots,n_{p}\\}$, can all be precomputed and
stored for application as sparse matrices. The work complexity of patch
extension is of the order of number of additional points in the extended grid,
_i.e.,_ $O(n_{p}(m+5)^{2}-n_{p}(m-1)^{2})\equiv O(n_{p}m)$. The work required
for applying finite difference operators is $O(n_{p}m^{2})$. Finally, the work
required for blending derivatives across patches is $O(n_{p}m^{2})$. Thus, the
total work complexity for calculating derivatives is $O(n_{p}m^{2})$. In our
implementation $n_{p}=6$, so the work complexity is $O(6m^{2})$.
_Accuracy:_ While the finite difference operators we use are fifth order accurate, our numerical scheme is expected to be fourth order accurate because we use cubic splines for patch extension and blending. This is confirmed in our numerical experiments, as tabulated in Table 2(a) and Table 2(b) and discussed in later sections.
_Remark 3:_ We mention that the main purpose of blending is to get consistent
unique derivative values by taking weighted average across different patches
since a point $\bm{x}\in\gamma$ can belong to multiple patches. We note here
that the blending also improves the accuracy of the derivatives as evidenced
in the results tabulated in Table 7 and discussed in Section A.4.
### 3.6 Time stepping and the overall algorithm
Here, we describe the time stepping scheme we use to solve the boundary
integral formulation given by Equations 2.9, 2.10 and 2.11. The equations can
be reformulated into an initial value problem as
$\displaystyle\frac{\partial\bm{x}}{\partial t}$
$\displaystyle=\bm{u}_{\infty}(\bm{x})+\mathcal{S}_{\gamma}[\bm{f}](\bm{x}),$
(3.34) $\displaystyle=\mathcal{L}(t,\bm{x}),\ \forall\bm{x}\in\gamma,$ (3.35)
where the initial position and the initial deformation gradient tensor are known.
We use an explicit Runge-Kutta-Fehlberg 4(5) [14, 15] adaptive time stepping
scheme to simulate the time evolution of capsules. This scheme is fourth-order
accurate in time. We use fixed relative tolerance $\epsilon_{tol}=10^{-6}$ for
adaptive time stepping in our simulations. Below we give the overall algorithm
for the evaluation of the right hand side of Equation 3.35:
1. 1.
Given the initial capsule surface, we first initialize the atlas, the
transition maps and its derivatives (for Equation 3.26 and Equation 3.27). We
precompute $\bm{\Psi}^{m},\bm{\Psi}_{u}^{m},\bm{I}_{u}^{m},\bm{I}_{d}^{m},\bm{I}_{ij}^{m},\bm{I}_{ij}^{m,ext},\bm{D}_{u}^{m,i}$, and $\bm{D}_{v}^{m,i}$ for the $m_{\mathrm{th}}$-order
grid. We also precompute the reference tangents and normals at the
discretization points using the derivative scheme in Section 3.5. Let us
denote these reference tangents by $\bm{X}_{\partial u}^{r,m},\bm{X}_{\partial
v}^{r,m}$ and the normals by $\bm{N}^{r,m}$ respectively.
2. 2.
At a time instant $t$, we have the discretization points $\bm{X}^{m}$ on the capsule surface. We compute the tangents and the normals at all
discretization points in the current configuration using the scheme in Section
3.5. Let us denote these by $\bm{X}_{\partial u}^{m},\bm{X}_{\partial v}^{m}$
and $\bm{N}^{m}$.
3. 3.
Given the reference and the current tangents along with the normals, we
compute the coefficients of the first fundamental form, the surface area
element $\bm{W}^{m}$ and the shear stress tensor $\bm{\Lambda}^{m}$.
4. 4.
We compute the interfacial force
$\bm{F}^{m}=\nabla_{\gamma}\cdot\bm{\Lambda}^{m}$ using the derivative scheme
in Section 3.5.
5. 5.
We compute the upsampled quantities
$\bm{X}^{m}_{u}=\bm{I}^{m}_{u}\bm{X}^{m},\bm{F}^{m}_{u}=\bm{I}_{u}^{m}\bm{F}^{m},\bm{W}^{m}_{u}=\bm{I}_{u}^{m}\bm{W}^{m}$
and the regularization parameters $\bm{\delta}^{m}_{u}$.
6. 6.
We compute the Stokes single layer potential on the upsampled grid, denoted by
$\bm{S}^{m}_{u}$, using $\bm{X}^{m}_{u},\bm{F}^{m}_{u},\bm{\Psi}_{u}^{m}$ and
$\bm{W}^{m}_{u}$.
7. 7.
We downsample the Stokes potential $\bm{S}^{m}=\bm{I}_{d}^{m}\bm{S}^{m}_{u}$
and add $\bm{u}_{\infty}(\bm{X}^{m})$ to get the discretized version of the
RHS of Equation 3.35 at time instant $t$ and position $\bm{X}^{m}$.
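As an illustrative driver, the following minimal sketch uses SciPy's adaptive `RK45` (Dormand-Prince) integrator as a stand-in for the Runge-Kutta-Fehlberg 4(5) scheme with the same relative tolerance; `evaluate_velocity` is a hypothetical helper encapsulating steps 2-7 above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x_flat):
    """Discretized RHS of Eq. 3.35: background flow plus single layer potential."""
    X = x_flat.reshape(-1, 3)       # current discretization points X^m
    U = evaluate_velocity(t, X)     # hypothetical helper: steps 2-7 above
    return U.ravel()

# X0: (N, 3) matrix of initial discretization points (Eq. 3.12).
sol = solve_ivp(rhs, t_span=(0.0, 0.5), y0=X0.ravel(),
                method="RK45", rtol=1e-6)
X_final = sol.y[:, -1].reshape(-1, 3)   # capsule shape at T = 0.5
```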
_Accuracy and work complexity of the overall algorithm:_ Our quadrature
scheme, differentiation scheme and the time stepping scheme are all fourth
order convergent. Hence, our overall scheme is fourth order convergent. We
observe this convergence numerically in the convergence results in Table 3. The work complexity of our quadrature scheme and the differentiation scheme is $O(m^{4})$ and $O(m^{2})$ respectively for the $m_{\mathrm{th}}$-order grid. Thus, the overall work complexity of our algorithm for a single time step is $O(m^{4})$.
_Computation of $||\nabla_{\mathbb{S}^{2}}\phi||_{\infty}$_: In the results
section, we monitor the norm of the surface gradient of
$\phi:\mathbb{S}^{2}\longrightarrow\gamma$ with respect to the unit sphere
$\mathbb{S}^{2}$ which measures the smoothness of the capsule surface during
the simulation. It serves as a proxy for measuring the stability of capsule
dynamics during the simulation. This surface gradient
$\nabla_{\mathbb{S}^{2}}\phi$ is calculated using the surface gradient formula
given in the Section A.2 by using the first fundamental form coefficients
$E,F,G$ for the unit sphere and the surface area element $W$ of the unit
sphere $\mathbb{S}^{2}$. We use the derivative scheme in Section 3.5 to
calculate the first fundamental form using the stored discrete points
$\bm{X}^{0,m}$ on the sphere and noting that $\phi(\bm{X}^{0,m})=\bm{X}^{m}$.
## 4 Results
Now we discuss the various results to verify the accuracy and convergence of
our numerical schemes. We also present the results of simulation of extensible
capsule under shear flow and the Poiseuille flow. We validate our simulations
with the existing results in the literature. We also use the spherical
harmonics based numerical scheme used in [34] for quantitative comparisons
with our simulations. Below we give a quick summary of the spherical harmonics
based scheme used in [34].
_Spherical harmonics:_ Spherical harmonics provide an orthonormal basis [26]
for the square-integrable functions defined on the sphere $\mathbb{S}^{2}$.
Hence, they provide a spectral representation of the surfaces and have been
used for simulating Stokesian particulate flows in [34]. Below we summarize
the number of discretization points used and the work complexity for the
differentiation and singular integration in this scheme.
1. 1.
_Discretization:_ We denote by $p$ the degree of spherical harmonics used to
represent the surface and the surface fields. For degree $p$, the scheme uses $2p(p+1)$ discretization points.
2. 2.
_Singular quadrature:_ The scheme uses Graham-Sloan quadrature [17] to compute
the Stokes layer potential which is $O(p^{5})$ in work complexity.
3. 3.
_Differentiation:_ For surface derivatives, the scheme uses spherical
harmonics based spectral differentiation which has work complexity $O(p^{3})$.
Thus, the overall work complexity of the algorithm is $O(p^{5})$.
### 4.1 Integration results
Here, we discuss the numerical accuracy and convergence of our integration
schemes discussed in Section 3.3 and Section 3.4. We present the relative
error in computing Stokes single layer potential with and without upsampling
on two different surfaces. First, we take an ellipsoid surface given by
$\displaystyle\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}+\frac{z^{2}}{c^{2}}=1,$
(4.1)
where $a=0.6,b=1,c=1$. The relative error in a quantity $q$, denoted by
$\epsilon_{q}$, is defined as
$\epsilon_{q}:=\frac{||q-q_{\mathrm{ref}}||_{\infty}}{||q_{\mathrm{ref}}||_{\infty}}$,
where $q_{\mathrm{ref}}$ is the reference solution. We use spherical harmonics solutions with degree $p=64$ as the high-accuracy reference solutions [34]. We report the relative errors in computing the single
layer Stokes potential with and without upsampling, denoted by
$\epsilon^{up}_{\mathcal{S}[\bm{f}]}$ and $\epsilon_{\mathcal{S}[\bm{f}]}$
respectively, in Table 1(b).
As a second example, we take a 4-bump shape which can be written in standard
spherical parameters as
$\displaystyle\bm{X}(u,v)=\rho(u,v)\begin{pmatrix}\sin{u}\cos{v}\\\
\sin{u}\sin{v}\\\ \cos{u}\\\ \end{pmatrix},\ \forall (u,v)\in[0,\pi]\times[0,2\pi)$ (4.2)
where $\rho(u,v)=1+e^{-3\,\mathrm{Re}(Y_{lm}(u,v))}$, with $l=3$ and $m=2$. This 4-bump shape is shown in Figure 7. We report the relative errors in the Stokes single layer potential on this surface in Table 1(b).
The upsampling improves the accuracy of the integration scheme and enables the long time-horizon simulations that we discuss in the later sections. As evident from the tabulated results, upsampling for the singular quadrature is required to get a similar number of accurate digits as the derivative scheme (Table 2(a) and Table 2(b)). (The digits of accuracy for the mean curvature $H$ and the Gaussian curvature $K$ are lower than for the upsampled quadrature, but since they are not required in the shear force calculation, we do not need upsampling for derivatives. However, since bending forces [34] require curvature calculations, upsampling for derivatives could be desirable if a bending force is to be included.) The reference solution is computed using the Graham-Sloan quadrature [17, 34] for spherical harmonics order $p=64$. We also present the relative errors in computing the area $A$ and the volume $V$ to demonstrate the convergence and accuracy of our integration scheme for smooth functions. (The volume $V$ of the surface $\gamma$ is given by $V=\int_{enc(\gamma)}dV$, where $enc(\gamma)$ refers to the volume enclosed by $\gamma$. Using the divergence theorem, we can write it as the smooth surface integral $V=\int_{\gamma}\bm{M}(x,y,z)\cdot\bm{n}\,d\gamma$, where $\bm{M}(x,y,z)=(x,0,0)$.) For further verification, and to study the change in relative errors with the reduced volume $\nu$, we provide the error in our upsampled quadrature scheme in Section A.6. In Section A.5, we also tabulate the relative errors demonstrating the convergence of our derivative and integration schemes for the complex shapes obtained in the actual shear and Poiseuille flow simulations discussed in the later sections. This further verifies the correctness of our code.
Table 1: Relative error in the computation of the Stokes single layer potential (with density $\bm{f}=(x^{2},y^{2},z^{2})$) on (a) an ellipsoid and (b) the 4-bump shape (as shown in Figure 7). $\epsilon_{\mathcal{S}[\bm{f}]}$ is the relative error with no upsampling and $\epsilon^{up}_{\mathcal{S}[\bm{f}]}$ is the relative error with 4$\times$ upsampling. $\epsilon_{A}$ and $\epsilon_{V}$ are the relative errors in computing the area and the volume respectively. $N$ is the number of discretization points. The reference solution is computed using $p=64$ spherical harmonics (see [34]).
$m$ | $N$ | $\epsilon_{\mathcal{S}[\bm{f}]}$ | $\epsilon^{up}_{\mathcal{S}[\bm{f}]}$ | $\epsilon_{A}$ | $\epsilon_{V}$
---|---|---|---|---|---
$8$ | $294$ | $6\times 10^{-2}$ | $7\times 10^{-3}$ | $5\times 10^{-3}$ | $3\times 10^{-3}$
$16$ | $1350$ | $8\times 10^{-3}$ | $4\times 10^{-4}$ | $5\times 10^{-4}$ | $4\times 10^{-4}$
$32$ | $5766$ | $4\times 10^{-4}$ | $2\times 10^{-5}$ | $1\times 10^{-5}$ | $1\times 10^{-5}$
$64$ | $23\,814$ | $1\times 10^{-5}$ | $1\times 10^{-6}$ | $1\times 10^{-6}$ | $1\times 10^{-6}$
(a) Relative error in the Stokes single layer potential on an ellipsoid of reduced volume $\nu=0.9$, given by $\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}+\frac{z^{2}}{c^{2}}=1$, where $a=0.6,b=1,c=1$.
$m$ | $N$ | $\epsilon_{\mathcal{S}[\bm{f}]}$ | $\epsilon^{up}_{\mathcal{S}[\bm{f}]}$ | $\epsilon_{A}$ | $\epsilon_{V}$
---|---|---|---|---|---
$8$ | $294$ | $5\times 10^{-1}$ | $4\times 10^{-1}$ | $2\times 10^{-1}$ | $2\times 10^{-1}$
$16$ | $1350$ | $2\times 10^{-1}$ | $9\times 10^{-2}$ | $3\times 10^{-2}$ | $4\times 10^{-2}$
$32$ | $5766$ | $3\times 10^{-2}$ | $8\times 10^{-3}$ | $2\times 10^{-3}$ | $3\times 10^{-3}$
$64$ | $23\,814$ | $3\times 10^{-3}$ | $6\times 10^{-4}$ | $2\times 10^{-4}$ | $2\times 10^{-4}$
(b) Relative error in Stokes single layer potential for the 4-bump shape.
### 4.2 Derivative results
Now we discuss the results showing the accuracy and convergence of our numerical differentiation scheme. We report the relative errors in the derivative calculations on the ellipsoid and the 4-bump shape in Table 2(a) and Table 2(b) respectively. The derivatives converge as the grid spacing $h_{m}$ decreases. Empirically, the convergence is of fourth order, as predicted, because of the cubic spline interpolation used in the patch extension operators and in blending. We also provide verification of the derivatives for ellipsoids of different reduced volumes and tabulate the results in Section A.6.
Figure 7: The 4-bump shape with its first three patches shaded with dark mesh.
Table 2: Relative error in the derivative scheme for (a) an ellipsoid and (b)
the 4-bump shape (shown in Figure 7). We report relative error in surface
normals $\bm{n}$, mean curvature $H$, Gaussian curvature $K$ and surface
divergence $div_{\gamma}$. $N$ is the number of discretization points.
Spherical harmonics results for $p=64$, using the algorithms in [34], are used
as true values and the error is computed relative to those values. Surface
divergence is computed for the smooth function
$\bm{g}(x,y,z)=(x^{2},y^{2},z^{2})$ on the surface.
m | $N$ | $\epsilon_{\bm{n}}$ | $\epsilon_{H}$ | $\epsilon_{K}$ | $\epsilon_{div_{\gamma}}$
---|---|---|---|---|---
$8$ | $294$ | $5\times 10^{-3}$ | $4\times 10^{-3}$ | $6\times 10^{-3}$ | $4\times 10^{-2}$
$16$ | $1350$ | $2\times 10^{-4}$ | $5\times 10^{-4}$ | $1\times 10^{-4}$ | $3\times 10^{-3}$
$32$ | $5766$ | $1\times 10^{-5}$ | $3\times 10^{-5}$ | $6\times 10^{-5}$ | $1\times 10^{-4}$
$64$ | $23\,814$ | $7\times 10^{-7}$ | $2\times 10^{-6}$ | $3\times 10^{-6}$ | $1\times 10^{-5}$
(a) Relative error in derivatives for an ellipsoid of reduced volume
$\nu=0.9$, given by
$\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}+\frac{z^{2}}{c^{2}}=1$, where
$a=0.6,b=1,c=1$.
m | $N$ | $\epsilon_{\bm{n}}$ | $\epsilon_{H}$ | $\epsilon_{K}$ | $\epsilon_{div_{\gamma}}$
---|---|---|---|---|---
$8$ | $294$ | $2\times 10^{-1}$ | $3\times 10^{-1}$ | $8\times 10^{-1}$ | $6\times 10^{-1}$
$16$ | $1350$ | $6\times 10^{-2}$ | $1\times 10^{-1}$ | $1\times 10^{-1}$ | $3\times 10^{-1}$
$32$ | $5766$ | $6\times 10^{-3}$ | $1\times 10^{-2}$ | $2\times 10^{-2}$ | $6\times 10^{-2}$
$64$ | $23\,814$ | $5\times 10^{-4}$ | $1\times 10^{-3}$ | $2\times 10^{-3}$ | $5\times 10^{-3}$
(b) Relative error in derivatives for the 4-bump shape.
### 4.3 Accuracy and convergence of the full numerical scheme
Now, we discuss the accuracy and convergence of our full numerical solver for
the extensible capsule simulation. To this end, we take an ellipsoidal capsule
with an initial surface configuration given by
$\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}+\frac{z^{2}}{c^{2}}=1$, where
$a=0.9,b=1,c=1$. The initial shape is taken to be the stress-free reference
configuration. We do the simulations under two different imposed background
flows, _i.e,_ shear flow and Poiseuille flow. The flows are described
mathematically as
$\displaystyle\bm{u}_{\infty}(x,y,z)$
$\displaystyle=\dot{\gamma}(y,0,0),\textit{ (shear flow) },$ (4.3)
$\displaystyle\bm{u}_{\infty}(x,y,z)$
$\displaystyle=(\alpha(R_{0}^{2}-y^{2}-z^{2}),0,0)\textit{ (Poiseuille flow)
},$ (4.4)
where $\dot{\gamma}$ is the shear rate of the shear flow, $\alpha$ controls
the curvature of the Poiseuille flow and $R_{0}$ is the radius of the circular
cross-section of the Poiseuille flow. We set $\dot{\gamma}=1$, $\alpha=1$ and
$R_{0}=5$ for these simulations. We use the differentiation, integration and
the time stepping schemes discussed above to simulate these setups for a fixed
time horizon $[0,T]$ using our solver. In Table 3, we tabulate the relative errors in the area $A$, the volume $V$ and the moment of inertia tensor $J$ of the final capsule shape for different values of the discretization order $m$. We observe
fourth order convergence in the relative errors. The parachute shapes at time $T=0.5$ for the Poiseuille flow simulations for different levels of discretization are given in Figure 8.
Table 3: Self-convergence results for an extensible capsule simulation under
background (a) shear flow (given by $\bm{u}_{\infty}(x,y,z)=(y,0,0)$) and (b)
Poiseuille flow (given by $\bm{u}_{\infty}(x,y,z)=((25-y^{2}-z^{2}),0,0)$).
The initial shape is an ellipsoid given by $x^{2}/a^{2}+y^{2}/b^{2}+z^{2}/c^{2}=1,$ where $a=0.9,b=1.0,c=1.0$. We set the shear modulus to be $E_{s}=2$ and the dilatation modulus to be $E_{D}=20$. We simulate the capsule for a time horizon $[0,T]$ and report the relative errors in the area $A$, the volume $V$ and the moment of inertia tensor $J$ of the capsule shape at time $T=0.5$. The reference solution for the relative error is the numerical solution computed using the $m=48$ grid. We use the Runge-Kutta-Fehlberg time stepping scheme (see Section 3.6) with a fixed relative tolerance of $\epsilon_{tol}=10^{-6}$.
$m$ | $\epsilon_{A}$ | $\epsilon_{V}$ | $\epsilon_{J}$
---|---|---|---
$8$ | $2\times 10^{-2}$ | $4\times 10^{-2}$ | $3\times 10^{-2}$
$16$ | $2\times 10^{-3}$ | $1\times 10^{-3}$ | $2\times 10^{-3}$
$32$ | $1\times 10^{-4}$ | $1\times 10^{-4}$ | $1\times 10^{-4}$
(a) Relative errors in the capsule dynamics driven by a shear flow.
$m$ | $\epsilon_{A}$ | $\epsilon_{V}$ | $\epsilon_{J}$
---|---|---|---
$8$ | $2\times 10^{-2}$ | $4\times 10^{-2}$ | $2\times 10^{-2}$
$16$ | $3\times 10^{-3}$ | $1\times 10^{-3}$ | $2\times 10^{-3}$
$32$ | $2\times 10^{-4}$ | $1\times 10^{-4}$ | $1\times 10^{-4}$
(b) Relative errors in the capsule dynamics driven by a background Poiseuille
flow.
(a)
(b)
(c)
(d)
Figure 8: The parachute shape under background Poiseuille flow (see Equation
4.4) obtained after time $T=0.5$ for different levels of discretization (a)
$m=8$, (b) $m=16$, (c) $m=32$ and (d) $m=48$ for $E_{s}=2,E_{D}=20$. The initial shape is the same as the stress-free reference state and is given by $x^{2}/a^{2}+y^{2}/b^{2}+z^{2}/c^{2}=1,$ where $a=0.9,b=1.0,c=1.0$.
### 4.4 Relaxation of capsule
Now, we simulate the relaxation of a capsule to its stress-free reference
state to validate our code. We take a capsule with an initial shape of a unit
sphere. The initial shape is also the stress-free reference configuration of
the capsule. We simulate the dynamics of the capsule under a background linear
shear flow with shear rate $\dot{\gamma}=1$ for a fixed time horizon
$[0,T_{1}]$ followed by a zero background flow velocity for $[T_{1},T]$. We
report the resulting shapes at different times in Figure 9. We observe that
once the background flow diminishes the capsule returns to the stress-free
state, _i.e.,_ a unit sphere. Using the moment of inertia tensor $J$ and
volume $V$ of the capsule, we compute the instantaneous Taylor asphericity
parameter [21] $D_{a}$ of the capsule shape defined as
$\displaystyle D_{a}=\frac{L-S}{L+S},$ (4.5) $\displaystyle\textit{ where
}S=\sqrt{\frac{J_{xx}+J_{yy}-\sqrt{(J_{xx}-J_{yy})^{2}+4J_{xy}^{2}}}{2V}}$
$\displaystyle,\textit{
}L=\sqrt{\frac{J_{xx}+J_{yy}+\sqrt{(J_{xx}-J_{yy})^{2}+4J_{xy}^{2}}}{2V}}.$
(4.6)
For a sphere, $D_{a}=0$. We plot the Taylor asphericity as a function of time in Figure 9(d) to monitor the relaxation back to the reference unit-sphere shape. As expected, we observe that the capsule relaxes back to a unit sphere when the background flow is removed, as $D_{a}$ drops back to zero.
Our results agree quantitatively with the results obtained using the spherical harmonics based scheme of [34].
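A small sketch of this diagnostic (assuming the moment of inertia components and the volume have already been obtained from the surface quadrature of Section 3.3):

```python
import numpy as np

def taylor_asphericity(Jxx, Jyy, Jxy, V):
    """Taylor asphericity D_a = (L - S) / (L + S) (Eqs. 4.5-4.6)."""
    disc = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
    S = np.sqrt((Jxx + Jyy - disc) / (2.0 * V))
    L = np.sqrt((Jxx + Jyy + disc) / (2.0 * V))
    return (L - S) / (L + S)

# For a unit sphere, Jxx = Jyy and Jxy = 0, so D_a = 0.
```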
(a) $t=0$
(b) $t=T_{1}$
(c) $t=T$
(d)
Figure 9: Shapes obtained at different times during simulation of the
relaxation of the capsule with initial spherical shape as the stress-free
reference state. We impose background shear flow
$\bm{u}_{\infty}(x,y,z)=(y,0,0)$ for time horizon $[0,T_{1}]$. This is
followed by zero background velocity from $[T_{1},T]$. The capsule relaxes
back to the stress-free reference shape as expected. We take
$m=16,T_{1}=1,T=17.5,E_{s}=2$ and $E_{D}=20$. (a) Shape at time $t=0$. (b)
Shape at time $t=T_{1}$. (c) Shape at time $t=T$. (d) The plot of Taylor
asphericity $D_{a}$ vs time $t$, where $D_{a}$ increases until $t=1$ under the background shear flow and then drops to zero during the relaxation phase when the background flow is zero. The blue solid line is the plot using our scheme and the red dotted line is the plot using the spherical harmonics based scheme (with spherical harmonic degree $p=32$).
### 4.5 Capsule in shear flow
Now, we present results for the steady shapes of an extensible capsule under background shear flow (see Equation 4.3). We start with a stress-free
spherical capsule. Under linear shear flow, the capsule is known to take a
terminal nearly-ellipsoidal shape and exhibit a stable tank treading motion
[11, 21]. The terminal shape and inclination angle of its major axes with the
flow direction depends on three key parameters, namely, the shear rate
$\dot{\gamma}$ of the flow, the shear modulus $E_{s}$ and the dilatation
modulus $E_{D}$ of the membrane. We simulate these dynamics for different
values of these parameters and plot the terminal inclination angles $\theta$
(with respect to the flow direction) in Figure 10(a). We compare our results
with the numerical results from [11] and obtain good quantitative agreement.
We also plot the evolution of Taylor asphericity $D_{a}$ with time in Figure
10(b) and observe good agreement with the numerical results obtained using the
spherical harmonics based scheme mentioned in [34]. Figures 11, 12 and 13 show the terminal shapes of different reduced volumes obtained in shear flows using our code. Our code is able to resolve shapes of different reduced volumes obtained for a wide range of ratios of the shear modulus $E_{s}$ to the dilatation modulus $E_{D}$.
To further demonstrate the effectiveness of four times upsampling, we plot the
norm of the gradient of the mapping $\phi$ from a unit sphere to the capsule
surface $\gamma$ with time in shear flow simulations for different grid orders
$m$ in Figure 14. The gradient norm serves as a proxy for the stability of the
capsule shape. As evident from the plots, we need high numerical accuracy to
do long time horizon simulations. We observe that grid order $m=16$ with four
times upsampling is sufficient to do shear flow simulations. Lower upsampling
factors or lower grid order $m$ results in unstable gradients which blow up
over long time scales, as shown in Figure 14.
Figure 10: Validation of shear flow simulation results computed using $m=16$.
(a) Plot of terminal inclination angle $\theta$ for different shear rates and
membrane mechanical properties ($E_{s}$ and $E_{D}$). We compare our results
with results in [11] and obtain good quantitative agreement. (b) We plot the
evolution of Taylor asphericity $D_{a}$ of the capsule vs time and compare
our results (solid lines) with the spherical harmonics ($p=32$) based scheme
in [34] (dashed lines). We observe good quantitative agreement. For
$E_{D}=200$, the spherical harmonics code is unable to resolve the shapes and
the surface fields and therefore, we do not provide plots for those.
Figure 11: Snapshots of capsule shape at different time instants for shear
flow simulation with $\dot{\gamma}=1,E_{s}=2,E_{D}=200$. (a) $t=0$, (b)
$t=0.25$ and (c) $t=7.5$. The shape shown in (c) is the terminal shape.
Figure 12: Snapshots of capsule shape at different time instants for shear
flow simulation with $\dot{\gamma}=0.5,E_{s}=2,E_{D}=1$. (a) $t=0.5$, (b)
$t=2$ and (c) $t=15$. The shape shown in (c) is the terminal shape.
Figure 13: Snapshots of capsule shape at different time instants for shear
flow simulation with $\dot{\gamma}=6,E_{s}=2,E_{D}=1$. (a) $t=0.0417$, (b)
$t=0.25$ and (c) $t=1.25$. The shape shown in (c) is the terminal shape.
(a) Terminal reduced volume $\nu=0.85$.
(b) Terminal reduced volume $\nu=0.65$.
Figure 14: Plots of the $||\nabla_{\mathbb{S}^{2}}\phi||_{\infty}$ vs time
$t$ in shear flow simulation where $\phi:\mathbb{S}^{2}\longrightarrow\gamma$
is the mapping from the unit sphere to the capsule $\gamma$. The initial shape
is a stress-free unit sphere with (a) $\dot{\gamma}=1$, $E_{s}=2$ and $E_{D}=20$
and (b) $\dot{\gamma}=1.5$, $E_{s}=2$ and $E_{D}=1$. The plots show the
effectiveness of four times upsampling in doing stable long time horizon
simulations while two times upsampling fails for $m=16$.
### 4.6 Capsule in Poiseuille flow
In Figure 16, we present the terminal shapes for a capsule under Poiseuille
flow (see Equation 4.4) for different membrane elasticity parameters leading
to shapes of reduced volume as low as $\nu=0.6$. As for the shear flow
simulations, we plot the norm of the gradient of the mapping $\phi$ from a unit
sphere to the capsule surface $\gamma$ with time for the Poiseuille flow
simulations for different grid orders $m$ in Figure 15. We observe that grid
order $m=32$ with four times upsampling is sufficient to do Poiseuille flow
simulations. Lower upsampling factors or lower grid order $m$ results in
unstable gradients which blow up over long time scales, as shown in Figure 15.
(a) Terminal reduced volume $\nu=0.96$.
(b) Terminal reduced volume $\nu=0.86$.
Figure 15: Plots of the $||\nabla_{\mathbb{S}^{2}}\phi||_{\infty}$ vs time
$t$ in Poiseuille flow simulation where
$\phi:\mathbb{S}^{2}\longrightarrow\gamma$ is the mapping from the unit sphere
to the capsule $\gamma$. The initial shape is a stress-free unit sphere with (a)
$\alpha=1.5$, $R_{0}=5$, $E_{s}=2$ and $E_{D}=200$ and (b) $\alpha=1.5$,
$R_{0}=5$, $E_{s}=2$ and $E_{D}=20$. The plots show that $m=32$ with four times
upsampling gives stable simulations for Poiseuille flow, while lower values of
$m$ are not enough to resolve the shapes in Poiseuille flow.
Figure 16: Terminal parachute shapes of varying reduced volumes $\nu$ under
Poiseuille flow. (a) Terminal shape of reduced volume $\nu=0.96$ obtained for
Poiseuille flow simulation with $\alpha=1.5,E_{s}=2,E_{D}=200$. (b) Terminal
shape of reduced volume $\nu=0.85$ obtained for Poiseuille flow simulation
with $\alpha=1.5,E_{s}=2,E_{D}=20$. (c) Terminal shape of reduced volume
$\nu=0.6$ obtained for Poiseuille flow simulation with
$\alpha=1.5,E_{s}=2,E_{D}=1$. We use grid order $m=32$ and the Poiseuille flow
cross-section radius $R_{0}=5$ in these simulations.
### 4.7 Timing results
We compute the GPU wall clock times required for our scheme and give their
breakdown over the different stages of the scheme. To this end, we take an
initially spherical capsule in a stress-free state (with $E_{s}=2,E_{D}=20$) and
simulate its dynamics under shear flow with shear rate $\dot{\gamma}=1$ till
time $T=0.1$. We do these simulations for different grid order $m$ and plot
the total time and its breakdown for different stages in Figure 17(b). We
observe that for $m\geq 16$, the majority of the time is spent computing the
singular quadrature. In fact, for $m\geq 32$ virtually all of the time is
consumed in computing the quadrature. The wall clock times required to compute
the singular quadrature once are also plotted separately in Figure 17(a). We
also provide a comparison of the wall clock time per time step taken by our
GPU accelerated scheme with the spherical harmonics CPU code [34] in Section
A.8.
Figure 17: (a) GPU wall clock times for computing singular layer quadrature
with four times upsampling for different grid orders $m=8,16,32,48$. (b)
Breakdown of wall clock times for shear flow simulation (with initial shape as
unit sphere over time horizon $T=0.1$) over different stages (computation of
singular quadrature, computation of interfacial forces and the rest of the
algorithm) for different grid orders $m$.
## 5 Fast multipole method based acceleration
Our quadrature does not involve a product quadrature and is therefore amenable
to acceleration via the fast multipole method (FMM) [33]. A full FMM
acceleration reduces the time complexity of our scheme to $O(N)$, allowing us
to perform high resolution simulations. We do not implement a multilevel FMM
for the purpose of this work; instead, we focus on showcasing the correctness
and advantages of our numerical scheme using a simple single level FMM
acceleration. Based on the kernel-independent FMM of [40], we describe our
scheme below; a schematic code sketch follows the list.
1. We first use a $k$-means clustering algorithm to cluster the discretization
points on the capsule surface for a reasonable value of $k$, giving $k$
clusters. We found that $k=100$ gives a relative accuracy of $10^{-5}$, which
is sufficient for the simulations in this paper.
2. Each cluster is imagined to be enclosed by a surrounding cubical box, called
an equivalent surface, with equivalent sources placed on a regular grid on the
boundary of this cubical box. Thus, we have $k$ equivalent surfaces; let each
equivalent surface have $N_{\mathrm{eq}}$ equivalent sources. We also consider
a slightly bigger cube surrounding this enclosing cube, which is used as a
check surface. The density on the equivalent sources lying on the equivalent
surface is estimated by equating the potential on the check surface due to the
actual point sources (_i.e.,_ discretization points) on the part of the capsule
surface lying inside that box or equivalent surface.
3. We apply direct all-pairs single layer potential calculation for the
neighboring boxes, while the equivalent sources are used to calculate the
single layer potential for the discretization points in the far field (_i.e.,_
the discretization points inside non-neighboring boxes).
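The following is a schematic, self-contained sketch of the single level scheme above (all function names are ours; a scalar $1/(4\pi r)$ kernel stands in for the vector-valued regularized Stokes kernel, and the neighbour test is simplified to a centre-distance criterion), intended only to illustrate the structure of steps 1–3:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def kernel(targets, sources):
    """Scalar single layer kernel 1/(4*pi*r); the zeroed diagonal is a
    crude placeholder for the regularized self-interaction."""
    r = np.linalg.norm(targets[:, None, :] - sources[None, :, :], axis=-1)
    with np.errstate(divide="ignore"):
        K = 1.0 / (4.0 * np.pi * r)
    K[r < 1e-12] = 0.0
    return K

def cube_surface(center, half, n):
    """n x n grid of points on each face of a cube: the 'equivalent'
    and 'check' surfaces of the kernel-independent FMM."""
    t = np.linspace(-half, half, n)
    g1, g2 = map(np.ravel, np.meshgrid(t, t, indexing="ij"))
    pts = []
    for axis in range(3):
        others = [a for a in range(3) if a != axis]
        for sign in (-half, half):
            face = np.zeros((n * n, 3))
            face[:, others[0]], face[:, others[1]], face[:, axis] = g1, g2, sign
            pts.append(face)
    return np.unique(np.vstack(pts), axis=0) + center

def single_level_fmm(points, density, k=100, near_factor=2.0):
    centers, labels = kmeans2(points, k, minit="++")          # step 1
    idx = [np.flatnonzero(labels == i) for i in range(k)]
    halves = np.array([np.abs(points[j] - centers[i]).max()
                       for i, j in enumerate(idx)])
    equiv_src, equiv_w = [], []
    for i in range(k):                                        # step 2
        eq = cube_surface(centers[i], 1.1 * halves[i], 4)     # equivalent surface
        ck = cube_surface(centers[i], 1.6 * halves[i], 4)     # check surface
        rhs = kernel(ck, points[idx[i]]) @ density[idx[i]]
        w, *_ = np.linalg.lstsq(kernel(ck, eq), rhs, rcond=None)
        equiv_src.append(eq)
        equiv_w.append(w)
    # step 3: near field directly, far field via equivalent sources
    dists = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    near = dists <= near_factor * (halves[:, None] + halves[None, :])
    potential = np.zeros(len(points))
    for i in range(k):
        acc = np.zeros(len(idx[i]))
        for j in range(k):
            if near[i, j]:   # neighbours: all-pairs direct sum
                acc += kernel(points[idx[i]], points[idx[j]]) @ density[idx[j]]
            else:            # far field: equivalent sources
                acc += kernel(points[idx[i]], equiv_src[j]) @ equiv_w[j]
        potential[idx[i]] = acc
    return potential
```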
We present convergence and speedup results for calculating the single layer
potential on an ellipsoid of reduced volume $\nu=0.9$ in Table 4(a). While we
do not see speedup for low values of $m$ because of FMM overhead computations,
we see about a two-times speedup for $m=96$ using our implementation of the
single level FMM on GPU.
Table 4: Relative error and speedup for FMM accelerated simulation (with 4x
upsampling) of a capsule suspended in Poiseuille flow as in Figure 16(c). All
pairs direct calculation for $m=32$ (with 4x upsampling) is used as the
reference solution and the relative error is computed in the moment of inertia
tensor of the capsule at the end of simulation. Time reported is wall clock
time in seconds taken by one time step of the simulation.
$t_{\mathrm{direct}}$ is direct calculation time and $t_{\mathrm{FMM}}$ is
time taken with the FMM acceleration. Number of equivalent surfaces or boxes
is fixed at 100. We vary the number of equivalent sources for an equivalent
surface, denoted by $N_{eq}$, with $m$.
m | $N$ | $N_{\mathrm{eq}}$ | $t_{\mathrm{direct}}$ | $t_{\mathrm{FMM}}$ | $\mathrm{Speedup}$ | $\epsilon_{\mathrm{FMM}}$
---|---|---|---|---|---|---
$32$ | $5766$ | $96$ | $10.78$ | $10.66$ | $1.02$ | $6\text{\times}{10}^{-3}$
$64$ | $23\,814$ | $128$ | $163.21$ | $104.20$ | $1.57$ | $4\text{\times}{10}^{-5}$
$96$ | $54\,150$ | $256$ | $905.31$ | $501.76$ | $1.81$ | $5\text{\times}{10}^{-6}$
(a)
## 6 Conclusions
In this work, we described a novel numerical scheme to simulate Stokesian
particulate flows. We described an overlapping patch based discretization of
the surfaces diffeomorphic to the unit sphere. Our numerical scheme uses the
regularized Stokes kernels [32] and finite differences on overset grid to
calculate the Stokes layer potential and interfacial elastic forces. We
presented a battery of results for the verification of our numerical scheme
using results from previous literature. We used our numerical scheme to
simulate an extensible capsule suspended in Stokes flow and validated our
simulations. Our numerical scheme is a fourth order convergent scheme that is
$O(N^{2})$ in work complexity, where $N$ is the number of discretization
points, and can be accelerated to run with $O(N)$ work complexity using a
multilevel FMM. This is much better than the asymptotic work complexity of the
spherical harmonics based spectral scheme [34]. Our scheme also allows for
independent control over local resolution due to the patch based
parameterization of the surface. We used GPU acceleration to demonstrate the
ability of our code to simulate complex shapes with high resolution.
In this paper, we implemented a single level FMM to demonstrate FMM based
speedup. A GPU based implementation of a multilevel FMM will further enable us
to do highly accurate simulations in a reasonable time and help us better
study Stokesian particulate flows. We leave this as the subject of our future
work.
## Appendix A Appendix
### A.1 Transition maps
Here, we list the expressions for the transition maps $\tau_{ij}$ for
$i=1,\ldots,6$, for the parameterization we use. The expressions are given as
follows:
$\displaystyle\tau_{12}(u,v)$
$\displaystyle=(u,\pi+v),\tau_{13}(u,v)=(u,\frac{\pi}{2}+v),\tau_{14}(u,v)=(u,v-\frac{\pi}{2}),$
(A.1) $\displaystyle\tau_{15}(u,v)$
$\displaystyle=\left(\cos^{-1}(-\sin{u}\sin{v}),\cos^{-1}\left(\frac{\sin{u}\cos{v}}{\sqrt{\sin^{2}{u}\cos^{2}{v}+\cos^{2}{u}}}\right)\right),$
(A.2) $\displaystyle\tau_{16}(u,v)$
$\displaystyle=\left(\cos^{-1}(\sin{u}\sin{v}),\cos^{-1}\left(\frac{\sin{u}\cos{v}}{\sqrt{\sin^{2}{u}\cos^{2}{v}+\cos^{2}{u}}}\right)\right).$
(A.3)
The transition maps for the next three patches are given below:
$\displaystyle\tau_{21}(u,v)$
$\displaystyle=(u,\pi+v),\tau_{23}(u,v)=(u,v-\frac{\pi}{2}),\tau_{24}(u,v)=(u,v+\frac{\pi}{2}),$
(A.4) $\displaystyle\tau_{25}(u,v)$
$\displaystyle=\left(\cos^{-1}(\sin{u}\sin{v}),\cos^{-1}\left(\frac{-\sin{u}\cos{v}}{\sqrt{\sin^{2}{u}\cos^{2}{v}+\cos^{2}{u}}}\right)\right),$
(A.5) $\displaystyle\tau_{26}(u,v)$
$\displaystyle=\left(\cos^{-1}(-\sin{u}\sin{v}),\cos^{-1}\left(\frac{-\sin{u}\cos{v}}{\sqrt{\sin^{2}{u}\cos^{2}{v}+\cos^{2}{u}}}\right)\right),$
(A.6) $\displaystyle\tau_{31}(u,v)$
$\displaystyle=(u,v-\pi/2),\tau_{32}(u,v)=(u,v+\frac{\pi}{2}),\tau_{34}(u,v)=(u,v+\pi),$
(A.7) $\displaystyle\tau_{35}(u,v)$
$\displaystyle=\left(\cos^{-1}(\sin{u}\cos{v}),\cos^{-1}\left(\frac{\sin{u}\sin{v}}{\sqrt{\sin^{2}{u}\sin^{2}{v}+\cos^{2}{u}}}\right)\right),$
(A.8) $\displaystyle\tau_{36}(u,v)$
$\displaystyle=\left(\cos^{-1}(-\sin{u}\cos{v}),\cos^{-1}\left(\frac{\sin{u}\sin{v}}{\sqrt{\sin^{2}{u}\sin^{2}{v}+\cos^{2}{u}}}\right)\right),$
(A.9) $\displaystyle\tau_{41}(u,v)$
$\displaystyle=(u,v+\pi/2),\tau_{42}(u,v)=(u,v-\frac{\pi}{2}),\tau_{43}(u,v)=(u,v+\pi),$
(A.10) $\displaystyle\tau_{45}(u,v)$
$\displaystyle=\left(\cos^{-1}(-\sin{u}\cos{v}),\cos^{-1}\left(\frac{-\sin{u}\sin{v}}{\sqrt{\sin^{2}{u}\sin^{2}{v}+\cos^{2}{u}}}\right)\right),$
(A.11) $\displaystyle\tau_{46}(u,v)$
$\displaystyle=\left(\cos^{-1}(\sin{u}\cos{v}),\cos^{-1}\left(\frac{-\sin{u}\sin{v}}{\sqrt{\sin^{2}{u}\sin^{2}{v}+\cos^{2}{u}}}\right)\right).$
(A.12)
The transition maps for $\mathcal{P}_{5}^{0}$ and $\mathcal{P}_{6}^{0}$ are
given as follows:
$\displaystyle\tau_{51}(u,v)$
$\displaystyle=\left(\cos^{-1}(\sin{u}\sin{v}),\cos^{-1}\left(\frac{\sin{u}\cos{v}}{\sqrt{\sin^{2}{u}\cos^{2}{v}+\cos^{2}{u}}}\right)\right),$
(A.13) $\displaystyle\tau_{52}(u,v)$
$\displaystyle=\left(\cos^{-1}(\sin{u}\sin{v}),\cos^{-1}\left(\frac{-\sin{u}\cos{v}}{\sqrt{\sin^{2}{u}\cos^{2}{v}+\cos^{2}{u}}}\right)\right),$
(A.14) $\displaystyle\tau_{53}(u,v)$
$\displaystyle=\left(\cos^{-1}(\sin{u}\sin{v}),\cos^{-1}\left(\frac{\cos{u}}{\sqrt{\sin^{2}{u}\cos^{2}{v}+\cos^{2}{u}}}\right)\right),$
(A.15) $\displaystyle\tau_{54}(u,v)$
$\displaystyle=\left(\cos^{-1}(\sin{u}\sin{v}),\cos^{-1}\left(\frac{-\cos{u}}{\sqrt{\sin^{2}{u}\cos^{2}{v}+\cos^{2}{u}}}\right)\right),$
(A.16) $\displaystyle\tau_{56}(u,v)$ $\displaystyle=(\pi-u,v),$ (A.17)
$\displaystyle\tau_{61}(u,v)$
$\displaystyle=\left(\cos^{-1}(-\sin{u}\sin{v}),\cos^{-1}\left(\frac{\sin{u}\cos{v}}{\sqrt{\sin^{2}{u}\cos^{2}{v}+\cos^{2}{u}}}\right)\right),$
(A.18) $\displaystyle\tau_{62}(u,v)$
$\displaystyle=\left(\cos^{-1}(-\sin{u}\sin{v}),\cos^{-1}\left(\frac{-\sin{u}\cos{v}}{\sqrt{\sin^{2}{u}\cos^{2}{v}+\cos^{2}{u}}}\right)\right),$
(A.19) $\displaystyle\tau_{63}(u,v)$
$\displaystyle=\left(\cos^{-1}(-\sin{u}\sin{v}),\cos^{-1}\left(\frac{\cos{u}}{\sqrt{\sin^{2}{u}\cos^{2}{v}+\cos^{2}{u}}}\right)\right),$
(A.20) $\displaystyle\tau_{64}(u,v)$
$\displaystyle=\left(\cos^{-1}(-\sin{u}\sin{v}),\cos^{-1}\left(\frac{-\cos{u}}{\sqrt{\sin^{2}{u}\cos^{2}{v}+\cos^{2}{u}}}\right)\right),$
(A.21) $\displaystyle\tau_{65}(u,v)$ $\displaystyle=(\pi-u,v).$ (A.22)
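As a minimal sketch, a few of these maps can be transcribed directly into code (function names are ours; the patch coordinate conventions come from the parameterization in Equation 3.8, which is not reproduced in this appendix):

```python
import numpy as np

# Direct transcription of Eqs. (A.1)-(A.3); the domains are the
# pairwise patch overlaps of the unit-sphere parameterization.

def tau_12(u, v):
    return u, np.pi + v

def tau_15(u, v):
    su, cu, sv, cv = np.sin(u), np.cos(u), np.sin(v), np.cos(v)
    denom = np.sqrt(su**2 * cv**2 + cu**2)
    return np.arccos(-su * sv), np.arccos(su * cv / denom)

def tau_16(u, v):
    su, cu, sv, cv = np.sin(u), np.cos(u), np.sin(v), np.cos(v)
    denom = np.sqrt(su**2 * cv**2 + cu**2)
    return np.arccos(su * sv), np.arccos(su * cv / denom)

# Example evaluation at an interior point of a patch overlap
print(tau_15(np.pi / 3, np.pi / 6))
```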
### A.2 Surface derivative formulas
Here, we list the formulas for the first fundamental form coefficients
$E,F,G$, the unit normal $\bm{n}$, the second fundamental form coefficients
$L,M,N$, the mean curvature $H$, the Gaussian curvature $K$, the surface
divergence (of a vector field $\bm{g}$) and the surface gradient (of a scalar
function $g$) in terms of the local parameterization
$\bm{x}(u,v):D\subset\mathbb{R}^{2}\longrightarrow\gamma$. We use these
quantities in the computation of interfacial force and the verification of
surface derivatives.
Table 5: Formulas used for computing various surface derivative quantities.
Symbol | Definition | Symbol | Definition
---|---|---|---
$E$ | $\bm{x}_{u}\cdot\bm{x}_{u}$ | $M$ | $\bm{x}_{uv}\cdot\bm{n}$
$F$ | $\bm{x}_{u}\cdot\bm{x}_{v}$ | $N$ | $\bm{x}_{vv}\cdot\bm{n}$
$G$ | $\bm{x}_{v}\cdot\bm{x}_{v}$ | $H$ | $\frac{EN-2FM+GL}{2W^{2}}$
$W$ | $\sqrt{EG-F^{2}}$ | $\nabla_{\gamma}g$ | $\frac{G\bm{x}_{u}-F\bm{x}_{v}}{W^{2}}g_{u}+\frac{E\bm{x}_{v}-F\bm{x}_{u}}{W^{2}}g_{v}$
$\bm{n}$ | $\frac{\bm{x}_{u}\times\bm{x}_{v}}{W}$ | $\nabla_{\gamma}\cdot\bm{g}$ | $\frac{G\bm{x}_{u}-F\bm{x}_{v}}{W^{2}}\cdot\bm{g}_{u}+\frac{E\bm{x}_{v}-F\bm{x}_{u}}{W^{2}}\cdot\bm{g}_{v}$
$L$ | $\bm{x}_{uu}\cdot\bm{n}$ | $K$ | $\frac{LN-M^{2}}{W^{2}}$
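These formulas translate directly into code; below is a minimal sketch (our function name), verified on the standard spherical chart $\bm{x}(u,v)=(\sin u\cos v,\sin u\sin v,\cos u)$, for which $H=-1$ and $K=1$ with the outward normal under this sign convention:

```python
import numpy as np

def surface_quantities(x_u, x_v, x_uu, x_uv, x_vv):
    """Fundamental forms, unit normal, mean and Gaussian curvature
    from the partial derivatives of a local chart x(u, v) (Table 5)."""
    E, F, G = np.dot(x_u, x_u), np.dot(x_u, x_v), np.dot(x_v, x_v)
    W = np.sqrt(E * G - F ** 2)
    n = np.cross(x_u, x_v) / W
    L, M, N = np.dot(x_uu, n), np.dot(x_uv, n), np.dot(x_vv, n)
    H = (E * N - 2 * F * M + G * L) / (2 * W ** 2)
    K = (L * N - M ** 2) / W ** 2
    return E, F, G, W, n, L, M, N, H, K

# Analytic derivatives of the spherical chart at (u, v) = (1.0, 0.5)
u, v = 1.0, 0.5
x_u  = np.array([np.cos(u) * np.cos(v),  np.cos(u) * np.sin(v), -np.sin(u)])
x_v  = np.array([-np.sin(u) * np.sin(v), np.sin(u) * np.cos(v),  0.0])
x_uu = -np.array([np.sin(u) * np.cos(v), np.sin(u) * np.sin(v),  np.cos(u)])
x_uv = np.array([-np.cos(u) * np.sin(v), np.cos(u) * np.cos(v),  0.0])
x_vv = np.array([-np.sin(u) * np.cos(v), -np.sin(u) * np.sin(v), 0.0])
*_, H, K = surface_quantities(x_u, x_v, x_uu, x_uv, x_vv)
assert np.isclose(H, -1.0) and np.isclose(K, 1.0)
```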
### A.3 The choice of parameter $r_{0}$ for the partition of unity
Here, we briefly discuss our choice of parameter $r_{0}$ in the construction
of partition of unity in Section 3.1. We note here that for the specific
parameterization of the unit sphere we use (see Equation 3.8), we require
$r_{0}>\frac{3\pi}{12}$ to ensure that every point on $\mathbb{S}^{2}$ belongs
to the support of $\psi_{i}^{0}$ for at least one $i\in\{1,\ldots,6\}$. If
this is not the case, then $\{\psi_{i}^{0}\}_{i=1}^{6}$ ceases to be a
partition of unity on the unit sphere. Additionally, we need
$r_{0}<\frac{\pi}{2}$ so that the support of each $\psi_{i}^{0}$ is compactly
contained in $\mathcal{P}_{i}^{0}$. We tabulate in Table 6 some numerical
results for the relative errors in the surface derivatives on the unit sphere
for different values of $\frac{\pi}{2}>r_{0}>\frac{3\pi}{12}$. We find that
$r_{0}=\frac{5\pi}{12}$ gives optimal accuracy among the values of $r_{0}$ we
experimented with.
Table 6: Relative errors in surface derivatives. We compare the relative errors for different values of $r_{0}$.
$m$ | $r_{0}/\pi$ | $\epsilon_{\bm{n}}$ | $\epsilon_{H}$
---|---|---|---
$8$ | $5.5\text{/}12$ | $2\text{\times}{10}^{-4}$ | $1.3\text{\times}{10}^{-3}$
$8$ | $5\text{/}12$ | $2\text{\times}{10}^{-4}$ | $1.5\text{\times}{10}^{-3}$
$8$ | $4\text{/}12$ | $4\text{\times}{10}^{-4}$ | $1.6\text{\times}{10}^{-3}$
$16$ | $5.5\text{/}12$ | $8\text{\times}{10}^{-6}$ | $1.7\text{\times}{10}^{-5}$
$16$ | $5\text{/}12$ | $7\text{\times}{10}^{-6}$ | $1.8\text{\times}{10}^{-5}$
$16$ | $4\text{/}12$ | $7\text{\times}{10}^{-6}$ | $1.9\text{\times}{10}^{-5}$
$32$ | $5.5\text{/}12$ | $4\text{\times}{10}^{-7}$ | $1.3\text{\times}{10}^{-6}$
$32$ | $5\text{/}12$ | $4\text{\times}{10}^{-7}$ | $1.7\text{\times}{10}^{-6}$
$32$ | $4\text{/}12$ | $4\text{\times}{10}^{-7}$ | $1.9\text{\times}{10}^{-6}$
### A.4 Effect of the blending process in calculating derivatives
Here, we provide the relative errors in computing the normals $\bm{n}$ and the
mean curvature $H$ for the unit sphere with and without the blending process
discussed in Section 3.5. The results are tabulated in Table 7 and show that
blending improves the accuracy of the derivatives.
Table 7: Relative error in computing surface normals $\bm{n}$ and the mean curvature $H$ for the unit sphere with and without the blending. $\epsilon^{\mathrm{b}}$ are the errors with blending and $\epsilon^{\mathrm{nb}}$ are the errors without the blending process in computing derivatives.
$m$ | $\epsilon_{\bm{n}}^{\mathrm{b}}$ | $\epsilon_{H}^{\mathrm{b}}$ | $\epsilon_{\bm{n}}^{\mathrm{nb}}$ | $\epsilon_{H}^{\mathrm{nb}}$
---|---|---|---|---
$8$ | $2\text{\times}{10}^{-4}$ | $1.3\text{\times}{10}^{-3}$ | $3\text{\times}{10}^{-3}$ | $2\text{\times}{10}^{-2}$
$16$ | $8\text{\times}{10}^{-6}$ | $1.7\text{\times}{10}^{-5}$ | $9\text{\times}{10}^{-5}$ | $2\text{\times}{10}^{-3}$
$32$ | $4\text{\times}{10}^{-7}$ | $1.3\text{\times}{10}^{-6}$ | $3\text{\times}{10}^{-6}$ | $3\text{\times}{10}^{-4}$
### A.5 Surface derivative and singular integration errors for shear flow and
Poiseuille flow terminal shapes
To further verify the convergence and accuracy of our derivative and
integration schemes, we upsample the low reduced volume shapes obtained in
Figure 13(c) and Figure 16(c) to $p=64$ spherical harmonics and use the
spherical harmonics derivatives and Graham-Sloan quadrature [34] as the
reference values to study the convergence of our schemes. We report the
relative errors for these shapes in Table 8(b).
Table 8: Relative errors in the derivative scheme and singular quadrature for
the shapes in Figure 13(c) and Figure 16(c) using our scheme (at the top for
different grid orders $m$ with four times upsampling to compute the quadrature).
The shapes are upsampled to spherical harmonics degree $p=64$ and the error is
computed relative to those values. Surface divergence is computed for the
smooth function $\bm{f}(x,y,z)=(x^{2},y^{2},z^{2})$ on the surface.
m | $N$ | $\epsilon^{up}_{\mathcal{S}[\bm{f}]}$ | $\epsilon_{\bm{n}}$ | $\epsilon_{H}$ | $\epsilon_{K}$ | $\epsilon_{div_{\gamma}}$
---|---|---|---|---|---|---
$8$ | $294$ | $5\text{\times}{10}^{-2}$ | $9\text{\times}{10}^{-2}$ | $5\text{\times}{10}^{-1}$ | $1$ | $9\text{\times}{10}^{-2}$
$16$ | $1350$ | $2\text{\times}{10}^{-2}$ | $6\text{\times}{10}^{-2}$ | $2\text{\times}{10}^{-1}$ | $2\text{\times}{10}^{-1}$ | $5\text{\times}{10}^{-2}$
$32$ | $5766$ | $5\text{\times}{10}^{-3}$ | $9\text{\times}{10}^{-3}$ | $8\text{\times}{10}^{-2}$ | $9\text{\times}{10}^{-2}$ | $9\text{\times}{10}^{-3}$
$64$ | $23\,814$ | $5\text{\times}{10}^{-4}$ | $9\text{\times}{10}^{-4}$ | $8\text{\times}{10}^{-3}$ | $1\text{\times}{10}^{-2}$ | $9\text{\times}{10}^{-4}$
(a) Relative error in singular quadrature and derivatives for the shape in
Figure 13(c) of reduced volume $\nu=0.6$.
m | $N$ | $\epsilon^{up}_{\mathcal{S}[\bm{f}]}$ | $\epsilon_{\bm{n}}$ | $\epsilon_{H}$ | $\epsilon_{K}$ | $\epsilon_{div_{\gamma}}$
---|---|---|---|---|---|---
$8$ | $294$ | $8\text{\times}{10}^{-2}$ | $9\text{\times}{10}^{-2}$ | $7\text{\times}{10}^{-1}$ | $8\text{\times}{10}^{-1}$ | $9\text{\times}{10}^{-2}$
$16$ | $1350$ | $2\text{\times}{10}^{-2}$ | $5\text{\times}{10}^{-2}$ | $2\text{\times}{10}^{-1}$ | $3\text{\times}{10}^{-1}$ | $6\text{\times}{10}^{-2}$
$32$ | $5766$ | $3\text{\times}{10}^{-3}$ | $8\text{\times}{10}^{-3}$ | $6\text{\times}{10}^{-2}$ | $7\text{\times}{10}^{-2}$ | $9\text{\times}{10}^{-3}$
$64$ | $23\,814$ | $6\text{\times}{10}^{-4}$ | $9\text{\times}{10}^{-4}$ | $1\text{\times}{10}^{-2}$ | $1\text{\times}{10}^{-2}$ | $8\text{\times}{10}^{-4}$
(b) Relative error in singular quadrature and derivatives for the shape in
Figure 16(c) of reduced volume $\nu=0.60$.
### A.6 Numerical errors for different reduced volume $\nu$
Here, we tabulate errors in computing the singular quadrature and surface
derivatives using our numerical scheme. We report these errors for ellipsoids
of varying reduced volumes $\nu\in\\{0.92,0.78,0.5,0.3\\}$ in Table 9(b) and
Table 10(b). The tables demonstrate that the accuracy deteriorates with
decreasing reduced volume. For this reason, it is in general harder to do long
time horizon simulations of capsules with lower reduced volumes.
Table 9: Relative errors in the derivative scheme and singular quadrature for
ellipsoids of reduced volume (a) $\nu=0.92$ and (b) $\nu=0.78$ using our
scheme (at the top for different grid orders $m$ with four times upsampling to
compute the quadrature) and using the spherical harmonics scheme [34] (at the bottom
for different spherical harmonics degrees $p$). Spherical harmonics results
for $p=64$ are used as true values and the error is computed relative to those
values. Surface divergence is computed for the smooth function
$\bm{f}(x,y,z)=(x^{2},y^{2},z^{2})$ on the surface.
m | $N$ | $\epsilon^{up}_{\mathcal{S}[\bm{f}]}$ | $\epsilon_{\bm{n}}$ | $\epsilon_{H}$ | $\epsilon_{K}$ | $\epsilon_{div_{\gamma}}$
---|---|---|---|---|---|---
$8$ | $294$ | $7\text{\times}{10}^{-3}$ | $5\text{\times}{10}^{-3}$ | $4\text{\times}{10}^{-3}$ | $6\text{\times}{10}^{-3}$ | $4\text{\times}{10}^{-2}$
$16$ | $1350$ | $4\text{\times}{10}^{-4}$ | $2\text{\times}{10}^{-4}$ | $5\text{\times}{10}^{-4}$ | $1\text{\times}{10}^{-3}$ | $3\text{\times}{10}^{-3}$
$32$ | $5766$ | $2\text{\times}{10}^{-5}$ | $1\text{\times}{10}^{-5}$ | $3\text{\times}{10}^{-5}$ | $6\text{\times}{10}^{-5}$ | $1\text{\times}{10}^{-4}$
$64$ | $23\,814$ | $1\text{\times}{10}^{-6}$ | $7\text{\times}{10}^{-7}$ | $2\text{\times}{10}^{-6}$ | $3\text{\times}{10}^{-6}$ | $1\text{\times}{10}^{-5}$
p | $N$ | $\epsilon_{\mathcal{S}[\bm{f}]}$ | $\epsilon_{\bm{n}}$ | $\epsilon_{H}$ | $\epsilon_{K}$ | $\epsilon_{div_{\gamma}}$
$8$ | $144$ | $4\text{\times}{10}^{-5}$ | $3\text{\times}{10}^{-3}$ | $2\text{\times}{10}^{-3}$ | $6\text{\times}{10}^{-3}$ | $6\text{\times}{10}^{-3}$
$16$ | $544$ | $3\text{\times}{10}^{-8}$ | $1\text{\times}{10}^{-5}$ | $1\text{\times}{10}^{-5}$ | $4\text{\times}{10}^{-5}$ | $2\text{\times}{10}^{-5}$
$24$ | $1200$ | $5\text{\times}{10}^{-11}$ | $4\text{\times}{10}^{-8}$ | $6\text{\times}{10}^{-8}$ | $2\text{\times}{10}^{-7}$ | $1\text{\times}{10}^{-7}$
$32$ | $2112$ | $6\text{\times}{10}^{-13}$ | $5\text{\times}{10}^{-11}$ | $8\text{\times}{10}^{-10}$ | $4\text{\times}{10}^{-9}$ | $3\text{\times}{10}^{-9}$
(a) Relative error in singular quadrature and derivatives for an ellipsoid of
reduced volume $\nu=0.9$, given by
$\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}+\frac{z^{2}}{c^{2}}=1$, where
$a=0.6,b=1,c=1$.
m | $N$ | $\epsilon^{up}_{\mathcal{S}[\bm{f}]}$ | $\epsilon_{\bm{n}}$ | $\epsilon_{H}$ | $\epsilon_{K}$ | $\epsilon_{div_{\gamma}}$
---|---|---|---|---|---|---
$8$ | $294$ | $7\text{\times}{10}^{-3}$ | $3\text{\times}{10}^{-2}$ | $2\text{\times}{10}^{-2}$ | $4\text{\times}{10}^{-2}$ | $4\text{\times}{10}^{-2}$
$16$ | $1350$ | $4\text{\times}{10}^{-4}$ | $9\text{\times}{10}^{-4}$ | $3\text{\times}{10}^{-3}$ | $5\text{\times}{10}^{-3}$ | $9\text{\times}{10}^{-4}$
$32$ | $5766$ | $2\text{\times}{10}^{-5}$ | $8\text{\times}{10}^{-5}$ | $3\text{\times}{10}^{-4}$ | $5\text{\times}{10}^{-4}$ | $1\text{\times}{10}^{-4}$
$64$ | $23\,814$ | $1\text{\times}{10}^{-6}$ | $4\text{\times}{10}^{-6}$ | $1\text{\times}{10}^{-5}$ | $3\text{\times}{10}^{-5}$ | $9\text{\times}{10}^{-6}$
p | $N$ | $\epsilon_{\mathcal{S}[\bm{f}]}$ | $\epsilon_{\bm{n}}$ | $\epsilon_{H}$ | $\epsilon_{K}$ | $\epsilon_{div_{\gamma}}$
$8$ | $144$ | $7\text{\times}{10}^{-4}$ | $2\text{\times}{10}^{-2}$ | $3\text{\times}{10}^{-2}$ | $6\text{\times}{10}^{-2}$ | $4\text{\times}{10}^{-2}$
$16$ | $544$ | $2\text{\times}{10}^{-6}$ | $6\text{\times}{10}^{-4}$ | $1\text{\times}{10}^{-3}$ | $3\text{\times}{10}^{-3}$ | $1\text{\times}{10}^{-3}$
$24$ | $1200$ | $3\text{\times}{10}^{-8}$ | $2\text{\times}{10}^{-5}$ | $5\text{\times}{10}^{-5}$ | $1\text{\times}{10}^{-4}$ | $5\text{\times}{10}^{-5}$
$32$ | $2112$ | $5\text{\times}{10}^{-10}$ | $1\text{\times}{10}^{-7}$ | $6\text{\times}{10}^{-7}$ | $9\text{\times}{10}^{-6}$ | $4\text{\times}{10}^{-6}$
(b) Relative error in singular quadrature and derivatives for an ellipsoid of
reduced volume $\nu=0.78$, given by
$\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}+\frac{z^{2}}{c^{2}}=1$, where
$a=0.4,b=1,c=1$.
Table 10: Relative errors in the derivative scheme and singular quadrature for
ellipsoids of reduced volume (a) $\nu=0.5$ and (b) $\nu=0.3$ using our scheme
(at the top for different grid orders $m$ with four times upsampling to
compute the quadrature) and using the spherical harmonics scheme
[34] (at the bottom for different spherical harmonics degrees $p$). Spherical
harmonics results for $p=64$ are used as true values and the error is computed
relative to those values. Surface divergence is computed for the smooth
function $\bm{f}(x,y,z)=(x^{2},y^{2},z^{2})$ on the surface.
m | $N$ | $\epsilon^{up}_{\mathcal{S}[\bm{f}]}$ | $\epsilon_{\bm{n}}$ | $\epsilon_{H}$ | $\epsilon_{K}$ | $\epsilon_{div_{\gamma}}$
---|---|---|---|---|---|---
$8$ | $294$ | $2\text{\times}{10}^{-2}$ | $2\text{\times}{10}^{-1}$ | $2\text{\times}{10}^{-1}$ | $3\text{\times}{10}^{-1}$ | $4\text{\times}{10}^{-2}$
$16$ | $1350$ | $4\text{\times}{10}^{-3}$ | $9\text{\times}{10}^{-3}$ | $3\text{\times}{10}^{-2}$ | $6\text{\times}{10}^{-2}$ | $3\text{\times}{10}^{-3}$
$32$ | $5766$ | $7\text{\times}{10}^{-4}$ | $8\text{\times}{10}^{-4}$ | $4\text{\times}{10}^{-3}$ | $6\text{\times}{10}^{-3}$ | $1\text{\times}{10}^{-4}$
$64$ | $23\,814$ | $6\text{\times}{10}^{-5}$ | $9\text{\times}{10}^{-5}$ | $4\text{\times}{10}^{-4}$ | $7\text{\times}{10}^{-4}$ | $1\text{\times}{10}^{-5}$
p | $N$ | $\epsilon_{\mathcal{S}[\bm{f}]}$ | $\epsilon_{\bm{n}}$ | $\epsilon_{H}$ | $\epsilon_{K}$ | $\epsilon_{div_{\gamma}}$
$8$ | $144$ | $9\text{\times}{10}^{-3}$ | $9\text{\times}{10}^{-2}$ | $3\text{\times}{10}^{-1}$ | $4\text{\times}{10}^{-1}$ | $2\text{\times}{10}^{-3}$
$16$ | $544$ | $2\text{\times}{10}^{-4}$ | $1\text{\times}{10}^{-2}$ | $6\text{\times}{10}^{-2}$ | $9\text{\times}{10}^{-2}$ | $4\text{\times}{10}^{-2}$
$24$ | $1200$ | $1\text{\times}{10}^{-5}$ | $3\text{\times}{10}^{-3}$ | $1\text{\times}{10}^{-2}$ | $2\text{\times}{10}^{-2}$ | $7\text{\times}{10}^{-3}$
$32$ | $2112$ | $2\text{\times}{10}^{-6}$ | $1\text{\times}{10}^{-4}$ | $1\text{\times}{10}^{-3}$ | $5\text{\times}{10}^{-3}$ | $6\text{\times}{10}^{-4}$
(a) Relative error in singular quadrature and derivatives for an ellipsoid of
reduced volume $\nu=0.5$, given by
$\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}+\frac{z^{2}}{c^{2}}=1$, where
$a=0.2,b=1,c=1$.
m | $N$ | $\epsilon^{up}_{\mathcal{S}[\bm{f}]}$ | $\epsilon_{\bm{n}}$ | $\epsilon_{H}$ | $\epsilon_{K}$ | $\epsilon_{div_{\gamma}}$
---|---|---|---|---|---|---
$8$ | $294$ | $4\text{\times}{10}^{-2}$ | $4\text{\times}{10}^{-1}$ | $5\text{\times}{10}^{-1}$ | $6\text{\times}{10}^{-1}$ | $4\text{\times}{10}^{-2}$
$16$ | $1350$ | $1\text{\times}{10}^{-2}$ | $9\text{\times}{10}^{-2}$ | $2\text{\times}{10}^{-1}$ | $3\text{\times}{10}^{-1}$ | $3\text{\times}{10}^{-3}$
$32$ | $5766$ | $4\text{\times}{10}^{-3}$ | $2\text{\times}{10}^{-2}$ | $4\text{\times}{10}^{-2}$ | $7\text{\times}{10}^{-2}$ | $2\text{\times}{10}^{-3}$
$64$ | $23\,814$ | $1\text{\times}{10}^{-3}$ | $2\text{\times}{10}^{-3}$ | $8\text{\times}{10}^{-3}$ | $9\text{\times}{10}^{-3}$ | $9\text{\times}{10}^{-4}$
p | $N$ | $\epsilon_{\mathcal{S}[\bm{f}]}$ | $\epsilon_{\bm{n}}$ | $\epsilon_{H}$ | $\epsilon_{K}$ | $\epsilon_{div_{\gamma}}$
$8$ | $144$ | $4\text{\times}{10}^{-2}$ | $2\text{\times}{10}^{-1}$ | $1$ | $1$ | $4\text{\times}{10}^{-1}$
$16$ | $544$ | $3\text{\times}{10}^{-3}$ | $1\text{\times}{10}^{-1}$ | $3\text{\times}{10}^{-1}$ | $4\text{\times}{10}^{-1}$ | $2\text{\times}{10}^{-1}$
$24$ | $1200$ | $5\text{\times}{10}^{-4}$ | $4\text{\times}{10}^{-2}$ | $1\text{\times}{10}^{-1}$ | $1\text{\times}{10}^{-1}$ | $6\text{\times}{10}^{-2}$
$32$ | $2112$ | $4\text{\times}{10}^{-5}$ | $5\text{\times}{10}^{-3}$ | $3\text{\times}{10}^{-2}$ | $4\text{\times}{10}^{-2}$ | $5\text{\times}{10}^{-3}$
(b) Relative error in singular quadrature and derivatives for an ellipsoid of
reduced volume $\nu=0.3$ given by
$\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}+\frac{z^{2}}{c^{2}}=1$, where
$a=0.1,b=1,c=1$.
### A.7 Sensitivity of the singular quadrature scheme to the regularization
parameter $\delta$
Here, we discuss the effect of the choice of regularization parameter $\delta$
on the singular quadrature accuracy. We experiment with different values of
the constant $C$ in Equation 3.20. We tabulate the error in singular
quadrature for different values of $C$ in Table 11(a). We find that $C=1$
works best for the parameterization we have chosen. We also tabulate in the same table
the results for fixed global regularization parameter $\delta=Ch_{m}$ and show
that our patch-dependent regularization parameter gives better accuracy and
convergence compared to the fixed regularization parameter.
Table 11: Sensitivity of singular quadrature with respect to the regularization parameter $\delta$. The first three columns show the relative errors for the patch-dependent $\delta=C\delta^{*}$ where $C$ is a constant. The last three columns show the relative errors for the fixed global regularization parameter $\delta=Ch_{m}$.
$m$ | $\delta=0.5\delta^{*}$ | $\delta=\delta^{*}$ | $\delta=2\delta^{*}$ | $\delta=0.5h_{m}$ | $\delta=h_{m}$ | $\delta=2h_{m}$
---|---|---|---|---|---|---
$8$ | $2\text{\times}{10}^{-2}$ | $7\text{\times}{10}^{-3}$ | $1\text{\times}{10}^{-2}$ | $2\text{\times}{10}^{-2}$ | $7\text{\times}{10}^{-3}$ | $1\text{\times}{10}^{-2}$
$16$ | $6\text{\times}{10}^{-3}$ | $4\text{\times}{10}^{-4}$ | $1\text{\times}{10}^{-3}$ | $6\text{\times}{10}^{-3}$ | $5\text{\times}{10}^{-4}$ | $5\text{\times}{10}^{-3}$
$32$ | $3\text{\times}{10}^{-3}$ | $2\text{\times}{10}^{-5}$ | $6\text{\times}{10}^{-5}$ | $4\text{\times}{10}^{-3}$ | $1\text{\times}{10}^{-4}$ | $1\text{\times}{10}^{-3}$
$64$ | $2\text{\times}{10}^{-4}$ | $1\text{\times}{10}^{-6}$ | $6\text{\times}{10}^{-6}$ | $7\text{\times}{10}^{-4}$ | $6\text{\times}{10}^{-5}$ | $6\text{\times}{10}^{-4}$
(a) Relative error in the computation of Stokes single layer potential (with
density $\bm{f}=(x^{2},y^{2},z^{2})$) on the ellipsoid
$\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}+\frac{z^{2}}{c^{2}}=1$, where
$a=0.4,b=1,c=1$, for different choices of regularization parameter $\delta$.
Reference solution is computed using $p=64$ spherical harmonics.
### A.8 Wall clock times for our GPU accelerated code _vs_ the spherical
harmonics CPU code
We report the wall clock time per time step required for our GPU-accelerated
code compared to the spherical harmonics CPU code in Table 12. We note that
the spherical harmonics discretization of degree $p$ contains $2p(p+1)$ points
compared to $6(m-1)^{2}$ for our scheme with an $m$th-order grid.
Since we use four times upsampling, the number of discretization points for
evaluating the singular quadrature is even higher, at
$N_{\mathrm{up}}=6(4(m-1))^{2}$ in our scheme (consistent with Table 12). Even
though our discretization has many more points than the spherical harmonics
one, our scheme has lower runtime for higher order grids and hence scales
better than the spherical harmonics as the number of points increases. Since
the quadrature scheme requires most of the time in our simulations
(Figure 17(b)), our scheme can be further accelerated using fast
multipole methods (FMMs) [33] and can be used to do faster simulations at
higher resolutions. We leave the FMM acceleration as the subject for future
work.
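As a quick arithmetic check of the point counts (a sketch; loop bounds chosen to match the rows of Table 12):

```python
# Point counts quoted in the text and Table 12: N = 6*(m-1)**2 for an
# m-th order grid, N_up = 6*(4*(m-1))**2 with four times upsampling,
# and 2*p*(p+1) for a spherical harmonics discretization of degree p.
for m in (8, 16, 32):
    print(m, 6 * (m - 1) ** 2, 6 * (4 * (m - 1)) ** 2)
# -> (8, 294, 4704), (16, 1350, 21600), (32, 5766, 92256)
for p in (8, 16, 32, 64):
    print(p, 2 * p * (p + 1))   # -> 144, 544, 2112, 8320
```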
Table 12: Wall clock time per time step (denoted by $T_{\mathrm{step}}$ in _seconds_) for our code with 4x upsampling (on the left) vs the spherical harmonics scheme (on the right). $N$ denotes the number of discretization points. $N_{\mathrm{up}}$ is the upsampled number of discretization points used for the singular quadrature in our scheme.
$m$ | $N$ | $N_{\mathrm{up}}$ | $T_{\mathrm{step}}$ | $p$ | $N$ | $T_{\mathrm{step}}$
---|---|---|---|---|---|---
$8$ | $294$ | $4704$ | $0.61$ | $8$ | $144$ | $0.75$
$16$ | $1350$ | $21\,600$ | $1.26$ | $16$ | $544$ | $1.92$
$32$ | $5766$ | $92\,256$ | $10.10$ | $32$ | $2112$ | $10.05$
$48$ | $13\,254$ | $381\,024$ | $46.46$ | $64$ | $8320$ | $87.86$
## References
* [1] D. Agarwal and G. Biros. Stable shapes of three-dimensional vesicles in unconfined and confined poiseuille flow. Phys. Rev. Fluids, 5:013603, 2020.
* [2] D. Agarwal and G. Biros. Shape dynamics of a red blood cell in poiseuille flow. Phys. Rev. Fluids, 7(9):093602, 2022.
* [3] P. Bagchi. Mesoscale simulation of blood flow in small vessels. Biophys. J., 92(6):1858–1877, 2007.
* [4] J. Beale, W. Ying, and J. Wilson. A simple method for computing singular or nearly singular integrals on closed surfaces. Commun. Comput. Phys., 20(3):733–753, 2016.
* [5] T. Biben, A. Farutin, and C. Misbah. Three-dimensional vesicles under shear flow: numerical study of dynamics and phase diagram. Phys. Rev. E, 83:031921, 2011.
* [6] G. Boedec, M. Leonetti, and M. Jaeger. 3D vesicle dynamics simulations with a linearly triangulated surface. J. Comput. Phys., 230:1020–1034, 2011.
* [7] O. Bruno and L. Kunyansky. A fast, high-order algorithm for the solution of surface scattering problems: basic implementation, tests, and applications. Journal of Computational Physics, 169:80–110, 2001.
* [8] D. Cordasco, A. Yazdani, and P. Bagchi. Comparison of erythrocyte dynamics in shear flow under different stress-free configurations. Phys. Fluids, 26:041902, 2014.
* [9] C. de Boor. A practical guide to splines. Springer-Verlag, 1978.
* [10] M. Delfour and J. Zolésio. Shapes and geometries: metrics, analysis, differential calculus, and optimization. 2011.
* [11] A. Farutin, T. Biben, and C. Misbah. 3D numerical simulations of vesicle and inextensible capsule dynamics. Journal of Computational Physics, 275:539, 2014.
* [12] D. A. Fedosov, W. Pan, B. Caswell, G. Gompper, and G. E. Karniadakis. Predicting human blood viscosity in silico. Proc. Natl. Acad. Sci., 108:11772, 2011.
* [13] D. A. Fedosov, M. Peltomäki, and G. Gompper. Deformation and dynamics of red blood cells in flow through cylindrical microchannels. Soft Matter, 10:4258, 2014.
* [14] E. Fehlberg. Classical fifth-, sixth-, seventh-, and eighth-order Runge-Kutta formulas with stepsize control. NASA Technical Report, 287, 1968.
* [15] E. Fehlberg. Low-order classical Runge-Kutta formulas with stepsize control and their application to some heat transfer problems. National Aeronautics and Space Administration, 315, 1969.
* [16] B. Fornberg. Generation of finite difference formulas on arbitrarily spaced grids. Mathematics of Computation, 51(184):699–706, 1988.
* [17] I. Graham and I. Sloan. Fully discrete spectral boundary integral methods for Helmholtz problems on smooth closed surfaces in $\mathbb{R}^{3}$. Numerische Mathematik, 92, 2002.
* [18] L. Greengard, M. O’Neil, M. Rachh, and F. Vico. Fast multipole methods for the evaluation of layer potentials with locally-corrected quadratures. Journal of Computational Physics: X, 10:100092, 2021.
* [19] A. Guckenberger, A. Kihm, T. John, C. Wagner, and S. Gekle. Numerical-experimental observation of shape bistability of red blood cells flowing in a microchannel. Soft Matter, 14:2032, 2018.
* [20] T. Kruger, M. Gross, D. Raabe, and F. Varnik. Predicting human blood viscosity in silico. Soft Matter, 9:9008, 2013.
* [21] E. Lac, D. Barthès-Biesel, N. Pelekasis, and J. Tsamopoulos. Spherical capsules in three-dimensional unbounded Stokes flows: effect of the membrane constitutive law and onset of buckling. J. Fluid Mech., 516:303–334, 2004.
* [22] R. MacMeccan, J. Clausen, G. Neitzel, and C. Aidun. Simulating deformable particle suspensions using a coupled lattice-Boltzmann and finite-element method. J. Fluid Mech., 618:13–39, 2009.
* [23] J. Mauer, S. Mendez, L. Lanotte, F. Nicoud, M. Abkarian, G. Gompper, and D. A. Fedosov. Flow-induced transitions of red blood cell shapes under shear. Phys. Rev. Lett., 121:118103, 2018.
* [24] H.-N. Nguyen and R. Cortez. Reduction of the regularization error of the method of regularized stokeslets for a rigid object immersed in a three dimensional Stokes flow. Commun. Comput. Phys., 15:126–152, 2014.
* [25] L. Nylons, M. Harris, and J. Prins. Fast N-body simulation with CUDA. GPU Gems 3. Addison Wesley, page 677–795, 2007.
* [26] S. Orszag. Fourier series on spheres. Monthly Weather Review, 102:56–75, 1974.
* [27] C. Pozrikidis. Boundary integral and singularity methods for linearized viscous flow. Cambridge University Press, Cambridge, 1992.
* [28] C. Pozrikidis. Effect of membrane bending stiffness on the deformation of capsules in simple shear flow. Journal of Fluid Mechanics, 440:269–291, 2001.
* [29] C. Pozrikidis. Interfacial dynamics for Stokes flow. Journal of Computational Physics, 169:250–301, 2001.
* [30] R. Skalak, A. Tozeren, R. P. Zarda, and S. Chien. Strain energy function of red blood cell membranes. Biophys. J., 13:245, 1973.
* [31] S. Sukumaran and U. Seifert. Influence of shear flow on vesicles near a wall: a numerical study. Phys. Rev. E, 64:011916, 2001.
* [32] S. Tlupova and J. Beale. Regularized single and double layer integrals in 3D Stokes flow. Journal of Computational Physics, 386:568, 2019.
* [33] A. Tornberg and L. Greengard. A fast multipole method for the three-dimensional Stokes equations. J. Comput. Phys., 227:1613–1619, 2008.
* [34] S. K. Veerapaneni, A. Rahimian, G. Biros, and D. Zorin. A fast algorithm for simulating vesicle flows in three dimensions. Journal of Computational Physics, 230:5610, 2011.
* [35] V. Vitkova, M.-A. Mader, B. Polack, C. Misbah, and T. Podgorski. Micro-Macro link in rheology of erythrocyte and vesicle suspensions? Biophys. J., 95:L33, 2008.
* [36] M. Wala and A. Klöckner. A fast algorithm for quadrature by expansion in three dimensions. Journal of Computational Physics, 388:655–689, 2019.
* [37] F. W. Warner. Foundations of differentiable manifolds and lie groups. New York: Springer-Verlag, 1983.
* [38] J. Weiner. On a problem of Chen, Willmore, et al. Indiana University, Mathematics Journal, 27:19–35, 1978.
* [39] B. Wu and P.-G. Martinsson. A unified trapezoidal quadrature method for singular and hypersingular boundary integral operators on curved surfaces. SIAM Journal on Numerical Analysis, 61(5):2182–2208, 2023.
* [40] L. Ying, G. Biros, and D. Zorin. A kernel-independent adaptive fast multipole algorithm in two and three dimensions. Journal of Computational Physics, 196:591–626, 2004.
* [41] L. Ying, G. Biros, and D. Zorin. A high-order 3D boundary integral equation solver for elliptic PDEs in smooth domains. Journal of Computational Physics, 219(1):247–275, 2006.
* [42] H. Zhao, A. Isfahani, L. Olson, and J. Freund. A spectral boundary integral method for flowing blood cells. Journal of Computational Physics, 229:10, 2010.
* [43] H. Zhao, A. Spann, and E. Shaqfeh. The dynamics of a vesicle in a wall-bound shear flow. Phys. Fluids, 23:121901, 2011.
# An information-based metric for observing strategy optimization,
demonstrated in the context of photometric redshifts with applications in
cosmology
Alex I. Malz,1 Francois Lanusse,2 John Franklin Crenshaw,3 & Melissa L.
Graham4
1Ruhr-University Bochum, German Centre for Cosmological Lensing,
Universitätsstraße 150, 44801 Bochum, Germany
2AIM, CEA, CNRS, Université Paris-Saclay, Université Paris Diderot, Sorbonne
Paris Cité, F-91191 Gif-sur-Yvette, France
3Department of Physics, University of Washington, Box 351560, Seattle, WA
98195
4DiRAC Institute, Department of Astronomy, University of Washington, Box
351580, U.W., Seattle, WA 98195, USA
E-mail<EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
The observing strategy of a galaxy survey influences the degree to which its
resulting data can be used to accomplish any science goal. LSST is thus
seeking metrics of observing strategies for multiple science cases in order to
optimally choose a cadence. Photometric redshifts are essential for many
extragalactic science applications of LSST’s data, including but not limited
to cosmology, but there are few metrics available, and they are not
straightforwardly integrated with metrics of other cadence-dependent
quantities that may influence any given use case. We propose a metric for
observing strategy optimization based on the potentially recoverable mutual
information about redshift from a photometric sample under the constraints of
a realistic observing strategy. We demonstrate a tractable estimation of a
variational lower bound of this mutual information implemented in a public
code using conditional normalizing flows. By comparing the recoverable
redshift information across observing strategies, we can distinguish between
those that preclude robust redshift constraints and those whose data will
preserve more redshift information, to be generically utilized in a downstream
analysis. We recommend the use of this versatile metric to observing strategy
optimization for redshift-dependent extragalactic use cases, including but not
limited to cosmology, as well as any other science applications for which
photometry may be modeled from true parameter values beyond redshift.
(Code available at https://github.com/aimalz/TheLastMetric)
###### keywords:
surveys – galaxies: distances and redshifts – methods: statistical
## 1 Introduction
The Vera C. Rubin Observatory will produce a catalog of tens of billions of
astronomical objects over the course of the ten-year Legacy Survey of Space
and Time (LSST; Ivezić et al. 2019). The quality and quantity of resulting
data will depend on LSST’s observing strategy (OS), which encompasses the
choice of frequency and duration of visits to each portion of the night sky
across each of LSST’s $ugrizy$ filters as a function of the survey’s duration.
As the OS directly impacts the science one can accomplish with the resulting
data (Jones et al., 2020), LSST’s Science Collaborations (SCs) are directing
considerable effort to optimizing the choice of OS (e.g., LSST Science
Collaboration et al., 2017; Graham et al., 2018; Lochner et al., 2021, to name
but a few).
Though the space of all OSs is very high-dimensional, a decision of OS may be
informed by how each science goal is affected by each OS considered. LSST has
developed two tools (https://pstn-051.lsst.io/) to facilitate an optimal
choice of OS. OpSim (LSST, 2016; Delgado et al., 2014) forecasts the impact of
an OS on the properties of the resulting photometric observations LSST will
deliver. The MAF (Metrics Analysis Framework) (LSST, 2017) was established to
ensure that the choice of OS would be well-informed by all science cases,
whose proponents are invited to include one or more metrics to be
automatically evaluated for each set of simulated observational conditions.
Though each science application may favor a different OS, a fair choice may be
made by reviewing how all science goals are affected.
The utility of a gargantuan catalog of extragalactic objects, such as what
LSST will provide, relies on the accuracy and precision of its constraints on
their redshifts, which, for a photometric survey such as LSST, represent a
dominant factor in the error budgets of most if not all extragalactic science
applications. Without access to high-fidelity spectroscopic redshift
measurements, users of LSST’s extragalactic catalog will rely on photometric
redshift (photo-$z$) estimates, which suffer from multiple forms of
uncertainty, even under idealized conditions (see Schmidt et al., 2020, and
extensive references therein).
Redshift uncertainty thus represents one of LSST’s greatest liabilities and
a matter of the utmost importance to multiple LSST SCs (Awan et al., 2016; Lochner
et al., 2018; Scolnic et al., 2018; Almoubayyed et al., 2020), particularly
the Dark Energy SC (DESC), Transients and Variable Stars (TVS) SC, and
Galaxies SC, but photo-$z$ metrics remain underdeveloped. Some metrics are
straightforward, such as the 10-year coadded depth or the number of supernovae
with more than 10 epochs, and can be directly predicted from the OpSim
metadata and visualized as sky maps with HEALpix (Górski et al., 2005).
However, photo-$z$ performance is quantified by derived statistics of a
particular photo-$z$ estimator on an entire simulated galaxy catalog, often as
a function of true redshift. The holy grail of OS metrics would be a map of
“goodness of photo-$z$ quality” insensitive to photo-$z$ estimator for each
HEALpix pixel, a goal that has not yet been achieved, let alone with enough
computational efficiency for practical MAF integration.
Multiple aspects of the current approach to photo-$z$ metrics for OS
optimization would benefit from improvement, ideally addressing as many as
possible of the following needs:
* An observing strategy metric for photo-$z$s should be agnostic to the choice
of estimation method as well as whether point estimates or photo-$z$
posteriors are used.
* A metric for photo-$z$ should be adaptable to integrate with metrics of
additional quantities sensitive to observing strategy.
* An observing strategy metric for photo-$z$s should not preclude direct
comparison of overall metrics between science goals nor between analysis
approaches for a shared science goal.
* Any metric included in the MAF should be fast and scalable; simulation and
propagation of mock data through an entire analysis pipeline is not feasible.
This paper explores a potential OS metric of estimated redshift quality that
represents an improvement upon established metrics along the above axes. Our
metric relies directly on estimating the mutual information between photometry
and redshift, in other words, quantifying how much information is gained on
the redshift of galaxies by having access to the photometry under a given OS.
This paper is not the first use of an information criterion for optimization
in the context of redshift estimation. Malz et al. (2018) used an
information-theoretic metric to optimize the storage parameterization of
photo-$z$ posteriors; Kalmbach et al. (2020) used a metric of mutual
information more closely related to that of this work to optimize filter
design for photo-$z$s. In their work, photometric data with Gaussian error was
simulated using a simple redshift prior and a small number of galaxy SED
templates. This simple forward model provided an analytic method for
calculating the mutual information.
In this paper, we leverage recent advances in machine learning to enable the
calculation of the mutual information contained in more complex simulations
for which an analytic model may be unavailable, as could be anticipated of
even idealized data processed through OpSim under a realistically complex OS.
In Section 2, we introduce the mathematical framework for the variational
mutual information lower bound that serves as the basis for our metric. In
Section 3, we demonstrate the metric in the context of OS optimization for
LSST. In Section 4, we summarize its strengths and future directions for
its development and application.
## 2 Method: Variational Mutual Information Lower Bound
This section introduces TheLastMetric (TLM), denoted as $ת$ (the last letter
of the Hebrew alphabet, pronounced “tav”), an estimator-independent metric
for parameters of interest conditioned on photometry under any OS. Though we
derive it in the context of redshift-dependent cosmological probes, its
mathematical structure is not inherently restricted to redshift; as it is so
broadly applicable across science cases, one may jokingly exaggerate that
it’s the penultimate OS metric, hence its name.
In Section 2.1, we introduce the mathematical formalism and review relevant
information-theoretic concepts. In Section 2.2 we derive the metric itself,
and in Section 2.3 we describe the model by which the metric is calculated.
### 2.1 An information theoretic view of observing strategies
Every science case seeks a metric of how informative the data corresponding to
each OS is with respect to some physical parameters of interest. The
properties of the telescope and its OS correspond to a transformation of the
underlying true data in the universe. Such true data would correspond to
photons that could be observed only by an impossibly perfect, idealized
instrument that collects complete, noiseless data, in contrast to what we can
observe from any real telescope under a given OS, a subset of those photons
restricted by the OS and convolved with instrumental errors. Typically, the
information content of recovered physical parameters is determined by running
the observed data through an end-to-end analysis pipeline, which is
impractical for a MAF. This is particularly true for cosmological
applications, but while our derived cosmological constraints depend on the
analysis procedure (Chang et al., 2018), the potentially recoverable
cosmological information content does not. (Though we derive and demonstrate
our metric with this application in mind, such a statement is no less true of
other physical parameters of interest to a variety of science cases, making
this metric broadly applicable beyond cosmology or even redshifts.)
Let us begin by quantifying the total cosmological information content of an
idealized survey with perfect photometry using information theory. We denote
the random variable representing cosmological parameters as $\Theta$ and a
random variable representing survey photometry as $X_{phot}$. The information
content about cosmological parameters $\theta\sim\Theta$ due to photometry
$x_{phot}\sim X_{phot}$ can be described by the mutual information between
$\Theta$ and $X_{phot}$, defined as
$I(\Theta\ ;X_{phot})\equiv\mathbb{E}_{p(\theta,x_{phot})}\
\left[\log\frac{p(\theta,x_{phot})}{p(\theta)p(x_{phot})}\right].$ (1)
The mutual information between two random variables represents the reduction
in uncertainty in one due to knowing the other. It is desirable for this
mutual information to be large, i.e. we want the photometry $x_{phot}$ to be
very informative about the cosmological parameters $\theta$.
In practice, of course, we do not have access to perfect photometry, and a
typical cosmological analysis pipeline does not directly relate photometry to
cosmology in a single step; instead, typical analyses constitute a pipeline
from observed photometry $X_{phot}^{obs}$ to cosmology $\Theta$ via a number
of intermediate quantities. The relationship between cosmological parameters
$\Theta$, redshift distribution $Z$, true photometry $X_{phot}^{true}$, and
observed photometry $X_{phot}^{obs}$ is an example of a Markov chain
$\Theta\rightarrow Z\rightarrow X_{phot}^{true}\rightarrow X_{phot}^{obs}$
satisfying
$p(\theta,z,x_{phot}^{true},x_{phot}^{obs})=p(\theta)\ p(z|\theta)\
p(x_{phot}^{true}|z)\ p(x_{phot}^{obs}|x_{phot}^{true}).$ (2)
The amount of information each stage of this chain retains about cosmology can
be expressed using the data processing inequality (Cover & Thomas, 2006),
$I(\Theta\ ;\ Z)\geq I(\Theta\ ;\ X_{phot}^{true})\geq I(\Theta\ ;\
X_{phot}^{obs})\;,$ (3)
which can be intuitively understood as saying that information can only be
lost through the steps of a chain.
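As a concrete illustration, the inequality can be checked numerically for a toy discrete chain (all distributions below are invented for the example):

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in nats for a discrete joint distribution p_xy."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log(p_xy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(0)
p_theta = np.array([0.5, 0.5])                   # prior over Theta
p_z_given_theta = rng.dirichlet(np.ones(4), 2)   # channel Theta -> Z
p_x_given_z = rng.dirichlet(np.ones(4), 4)       # channel Z -> X

p_tz = p_theta[:, None] * p_z_given_theta        # joint (Theta, Z)
p_tx = p_tz @ p_x_given_z                        # joint (Theta, X)
I_tz, I_tx = mutual_information(p_tz), mutual_information(p_tx)
assert I_tz >= I_tx - 1e-12   # data processing inequality, Eq. (3)
```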
Every application of LSST data aims to minimize the information loss in the
steps of this chain, or one analogous to it, that are under our control. An
example of a step we as experimenters control is the choice of OS, which
determines the information loss in the stage of this chain relating galaxies’
true redshifts to their observed photometry, $I(Z\ ;\ X_{phot}^{obs})$, since
this quantity depends on the specific OS that transforms
$X_{phot}^{true}\rightarrow X_{phot}^{obs}$. Again following the data
processing inequality, we expect the following:
$I(Z\ ;\ X_{phot}^{true})\geq I(Z\ ;\ X_{phot}^{obs})\;,$ (4)
i.e. that the mutual information between redshift and observed photometry
would be saturated if perfect, true, unobservable photometry were available,
but the OS determines how closely we can approximate that bound.
Our goal is to compare the mutual information $I(Z\ ;\ X_{phot}^{obs})$ for
different OSs, with the understanding that the OS that can achieve the highest
mutual information at this level of the chain will also maximize the overall
mutual information with respect to cosmology $I(\Theta\ ;\ X_{phot}^{obs})$
for a fixed analysis pipeline. (As was suggested in the preamble, this paper
concerns the mutual information of redshift and photometry, but the derivation
is just as valid for the mutual information $I(\Psi;X_{phot}^{obs})$ of
observed photometry with some other parameter $\Psi$, e.g. stellar mass, used
in any science application, even beyond cosmology.) Given
this scope, we henceforth use $X_{phot}$ as shorthand for $X_{phot}^{obs}$.
The next challenge to discuss is a practical computation of the mutual
information.
### 2.2 Tractable variational lower bound on the mutual information
Evaluating the mutual information as defined in Equation 1 is in general
extremely challenging, as in most cases only samples from the distributions
involved are accessible, not the underlying distributions themselves. In
recent years, however, the concept of mutual information has found many
applications in the machine learning literature (e.g. Tishby & Zaslavsky,
2015; Alemi et al., 2017; Bachman et al., 2019; see also the recent review of
Poole et al. 2019), which has driven significant research into tractable estimators of
lower bounds.
To achieve a tractable expression for the mutual information $I(Z;X_{phot})$
of redshift and photometry, let us first introduce the entropy
$H(Z)\equiv-\int dz\ p(z)\log p(z)$ (5)
of the redshift distribution $p(z)$, which quantifies our uncertainty on the
random variable $Z$. Using Equation 5, we rewrite the mutual information
$I(Z;X_{phot})$ of redshift and photometry in the following way:
$\displaystyle I(Z;X_{phot})$ $\displaystyle=\mathbb{E}_{p(z,x_{phot})}\
\left[\log\frac{p(z|x_{phot})}{p(z)}\right]$
$\displaystyle=\mathbb{E}_{p(z,x_{phot})}\left[\log
p(z|x_{phot})\right]-\mathbb{E}_{p(z)}\left[\log p(z)\right]$
$\displaystyle=\mathbb{E}_{p(z,x_{phot})}\left[\log
p(z|x_{phot})\right]+H(Z).$ (6)
As we do not directly have access to the posterior distribution
$p(z|x_{phot})$, the first term is unknown, making the mutual information
itself intractable.
However, the overall expression for the mutual information can be bounded from
below by introducing a variational approximation $q_{\varphi}(z|x_{phot})$ for
the posterior $p(z|x_{phot})$, which leads to the variational lower bound
introduced in Barber & Agakov (2003):
$\displaystyle I(Z;X_{phot})$
$\displaystyle=\mathbb{E}_{p(z,x_{phot})}\left[\log\frac{p(z|x_{phot})}{q_{\varphi}(z|x_{phot})}\right]$
$\displaystyle+\mathbb{E}_{p(z,x_{phot})}\left[\log
q_{\varphi}(z|x_{phot})\right]+H(Z)$ (7)
$\displaystyle=\mathcal{D}_{KL}\left[p(z|x_{phot})||q_{\varphi}(z|x_{phot})\right]$
$\displaystyle+\mathbb{E}_{p(z,x_{phot})}\left[\log
q_{\varphi}(z|x_{phot})\right]+H(Z).$ (8)
In this expression, $\mathcal{D}_{KL}[p(z|x_{phot})||q_{\varphi}(z|x_{phot})]$
is the Kullback-Leibler Divergence (KLD), a directional measure of the loss of
information due to using the variational model $q_{\varphi}(z|x_{phot})$ as an
approximation to the true, unknown, posterior distribution $p(z|x_{phot})$.
Because this KLD is non-negative, this last expression can be used to provide
the following lower bound on the mutual information:
$\displaystyle I(Z;X_{phot})$
$\displaystyle\geq\mathbb{E}_{p(z,x_{phot})}\left[\log
q_{\varphi}(z|x_{phot})\right]+H(Z)\equiv\Upsilon,$ (9)
thereby providing the definition of TheLastMetric (TLM).
Not only is this expression now tractable, as it can be estimated simply by
optimization of the variational parameters $\varphi$, but, moreover, the bound
is tight when $q_{\varphi}(z|x_{phot})=p(z|x_{phot})$ is true. Like the KLD,
$\Upsilon$ has units of information, which are nats in the
base-$e$ convention used in this paper but can be trivially converted to, e.g.
base-2 bits.
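To make Equation 9 concrete, here is a minimal sketch of how $\Upsilon$ can be estimated from a finite sample, assuming access to the true redshifts and to the per-galaxy log-posteriors $\log q_{\varphi}(z|x_{phot})$ returned by a trained variational model; the function and argument names are ours, and the histogram entropy estimator is one simple choice among many.

```python
import numpy as np

def tlm_lower_bound(z, log_q, n_bins=100):
    """Monte Carlo estimate of the variational bound in Equation 9 (nats).

    z     : true redshifts of the galaxy sample
    log_q : log q_phi(z_i | x_phot_i) under the trained variational model
    """
    # First term: sample average of the log-posterior over p(z, x_phot).
    first_term = np.mean(log_q)

    # Entropy H(Z) of Equation 5 from a histogram estimate of p(z).
    density, edges = np.histogram(z, bins=n_bins, density=True)
    widths = np.diff(edges)
    mask = density > 0
    entropy = -np.sum(density[mask] * np.log(density[mask]) * widths[mask])

    return first_term + entropy
```

Up to Monte Carlo and histogram-binning noise, this estimate can only fall below the true mutual information, which is what makes maximizing it across OSs meaningful.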
### 2.3 Lower bound implementation using conditional normalizing flows
The lower bound on mutual information introduced in the previous section
relies on having access to a parametric conditional density
$q_{\varphi}(z|x_{phot})$, also known as the variational distribution, which
is optimized to match the true posterior $p(z|x_{phot})$. Any parametric
conditional density estimator could be used for this purpose, but the more
expressive the model, the tighter the lower bound will be.
In this work, we approximate $p(z|x_{phot})$ with a Normalizing Flow (NF)
(Jimenez Rezende & Mohamed, 2015; Dinh et al., 2015), a flexible class of deep
generative models that represents the current state-of-the-art on many density
estimation tasks (Kobyzev et al., 2020). NFs are Latent Variable Models
(LVMs), which model a given distribution $p(x)$ over a target variable $x$ by
introducing 1) a latent variable $z$ that follows a known prior distribution
(typically a multivariate normal distribution) $p(z)$ as $z\sim p(z)$ and 2) a
parametric mapping $f_{\varphi}$ that maps this latent variable $z$ to a point
in the target distribution $x$ according to $x=f_{\varphi}(z)$. This is no
different from other deep generative latent variable models like Variational
Autoencoders (Kingma & Welling, 2013) or Generative Adversarial Networks
(Goodfellow et al., 2014), but what sets NFs apart is that they are
specifically designed using a bijective mapping $f_{\varphi}$. In the case of
a bijection, the probability density function $q_{\varphi}$ of the model can
be expressed as
$q_{\varphi}(x)=p(z=f^{-1}_{\varphi}(x))\left|\det\frac{\partial
f_{\varphi}}{\partial x}(x)\right|^{-1},$ (10)
where $\det\frac{\partial f_{\varphi}}{\partial x}$ is the Jacobian
determinant of $f_{\varphi}$, which accounts for how $f_{\varphi}$ distorts
volume elements. This expression is nothing more than the change of variable
formula for probabilities, but it gives NFs a crucial advantage over other LVM
models: their probability density function has an explicit closed-form
expression. In other words, for a given set of parameters $\varphi$ we can
explicitly compute the probability $q_{\varphi}(x)$ of a data point $x$ under
the model.
A Conditional NF (CNF) can be trivially made by introducing a conditional
variable $y$ in the mapping $f_{\varphi}(z;y)$ (Winkler et al., 2019) that
preserves the bijectivity of the mapping for all $y$, so Equation 10 still
applies. Thus the conditional distribution modeled by the flow is simply:
$q_{\varphi}(x|y)=p(z=f^{-1}_{\varphi}(x;y))\left|\det\frac{\partial
f_{\varphi}}{\partial x}(x;y)\right|^{-1}.$ (11)
In practice, as the mapping $f_{\varphi}$ is typically implemented using a
neural network, making a normalizing flow conditional simply amounts to adding
the variable $y$ as an input to the networks parameterizing the flow.
Because (C)NFs have tractable likelihoods, they can be trained by directly
optimizing the probability of the training set under the model. For CNFs, the
training loss takes the following form
$\mathcal{L}=-\mathbb{E}_{p(x,y)}\left[\log q_{\varphi}(x|y)\right],$ (12)
which can be shown to minimize the KLD
$\mathcal{D}_{KL}[p(x|y)||q_{\varphi}(x|y)]$, i.e. driving the (approximating)
model distribution to be close to the (true) data distribution. We note that
the CNF’s loss function given by Equation 12 is the negative of the first term of
the variational lower bound in Equation 9. In other words, training the CNF with
this loss function is exactly equivalent to maximizing the variational mutual
information lower bound.
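As an illustration of Equations 11 and 12 (and not of the RQ-NSC architecture actually used in this work), the following sketch implements the simplest possible conditional flow, a single affine bijection $x=f_{\varphi}(z;y)=\mu(y)+\sigma(y)\,z$ with a standard-normal latent, and trains it by gradient descent on the loss of Equation 12; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: x depends linearly on the conditional variable y, with noise.
y = rng.uniform(-1.0, 1.0, size=5000)
x = 2.0 * y + 0.5 * rng.standard_normal(5000)

# Conditional affine bijection x = f(z; y) = mu(y) + sigma(y) * z with a
# standard-normal latent z; Equation 11 then gives
#   q(x|y) = N((x - mu(y)) / sigma(y); 0, 1) / sigma(y).
params = np.zeros(4)  # [w_mu, b_mu, w_s, b_s]; log sigma(y) = w_s * y + b_s

def nll_and_grads(params, x, y):
    w_mu, b_mu, w_s, b_s = params
    mu = w_mu * y + b_mu
    log_sigma = w_s * y + b_s
    z = (x - mu) * np.exp(-log_sigma)  # inverse bijection f^{-1}(x; y)
    # Loss of Equation 12 for this model (negative mean log-likelihood).
    nll = np.mean(0.5 * z**2 + log_sigma) + 0.5 * np.log(2.0 * np.pi)
    # Analytic per-sample gradients, chained onto the four weights.
    g_mu = -z * np.exp(-log_sigma)  # d nll / d mu
    g_ls = 1.0 - z**2               # d nll / d log_sigma
    grads = np.array([np.mean(g_mu * y), np.mean(g_mu),
                      np.mean(g_ls * y), np.mean(g_ls)])
    return nll, grads

for _ in range(2000):  # plain gradient descent
    nll, grads = nll_and_grads(params, x, y)
    params -= 0.1 * grads

print(params)  # approaches [2.0, 0.0, 0.0, log(0.5) ~ -0.69]
```

Swapping the affine map for a rational-quadratic spline whose knot parameters are computed by a neural network from $y$ recovers the conditional RQ-NSC setup of Section 3.2, without changing the loss.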
The approach presented here to build a practical lower bound is agnostic to
the NF architecture employed, but the specific choice should be appropriate to
the details of the problem at hand. We defer the details of the model used in
this work to Section 3.2.
## 3 Demonstration in the context of redshifts for LSST
The necessary and sufficient conditions favoring the use of TLM are that it
must be at least as effective as established metrics and have some additional
advantage(s), such as utility, interpretability, and/or efficiency. To
demonstrate the former, we perform controlled experiments on appropriate data,
described in Section 3.1, using an implementation of a variational
approximation to $\Upsilon$, presented in Section 3.2. The latter
is discussed in Section 3.3, where the results of this experiment are
presented.
Figure 1: CMNN-estimated photo-$z$ statistics for a wide variety of OpSim conditions (grey), the baseline OpSim (black), and the exemplary OpSim OS simulations used in this pilot study (colors as in legend). The baseline_v1.5 OS has mid-quality results, and the simulations used in this analysis were chosen because they represent a range of photo-$z$ qualities that stand out as better and worse than baseline_v1.5. These results should not be taken as representative of the future absolute quality of LSST photo-$z$, but the differences in the results are representative of the impact of different OpSim strategies on the photo-$z$ results.

OpSim OS Simulation | m5_u | m5_g | m5_r | m5_i | m5_z | m5_y
---|---|---|---|---|---|---
baseline_v1.5 | 25.86 | 27.02 | 26.99 | 26.42 | 25.70 | 24.94
footprint_stuck_rolling_v1.5 | 25.56 | 26.68 | 26.62 | 26.06 | 25.33 | 24.61
ddf_heavy_nexp2_v1.6 | 25.57 | 26.82 | 26.84 | 26.26 | 25.57 | 24.82
footprint_newA_v1.5 | 25.75 | 26.87 | 26.85 | 26.29 | 25.55 | 24.78
third_obs_pt60_v1.5 | 25.87 | 27.03 | 26.99 | 26.43 | 25.70 | 24.93
barebones_v1.6 | 26.00 | 27.13 | 27.07 | 26.57 | 25.78 | 25.05

Table 1: Simulated median 5$\sigma$ limiting magnitudes in extragalactic
regions for coadded images from the wide-fast-deep survey, from the
baseline_v1.5 OpSim OS simulation and the five exemplary OpSim OS simulations
used in this work.

Figure 2: The true catalog redshift distribution for the
mock test galaxy sets used to simulate photo-$z$ results for each OpSim OS
simulation. Differences between the redshift distributions across OpSim OS
simulations are due to statistical fluctuations and correspond to a $0.5\%$
difference in entropy $H(Z)$, meaning they are statistically
indistinguishable.
### 3.1 Data: simulated photo-$z$ catalogs
For this work, we use simulations based on the same mock galaxy catalog to
which the Color-Matched Nearest-Neighbors (CMNN) photo-$z$ estimator666A
demonstration of the CMNN photo-$z$ estimator is available on GitHub at
https://github.com/dirac-institute/CMNN_Photoz_Estimator. was applied to
produce LSST-like photo-$z$ results in Graham et al. (2018, 2020) and the same
set of OpSim conditions Lochner et al. (2021) used for DESC’s assessment of
the impact of OS on multi-probe cosmological constraints. As an overview, we
use CMNN to generate mock photometry for a given OpSim simulation, then use
the traditional photo-$z$ metrics evaluated on CMNN’s photo-$z$ estimates to
identify exemplary OSs to which we then apply TLM.
For each OpSim OS simulation, we first determine the 5$\sigma$ limiting
magnitude of the 10-year coadded images from the wide-fast-deep program in sky
regions (HEALpix $\sim$220 arcmin wide). We identify regions as extragalactic
fields (i.e., appropriate for cosmological studies) if their Galactic dust
extinction was E(B-V)$<$0.2 mag and if they received at least 5 visits per
year in all six $ugrizy$ filters, and then calculate the median 10-year depths
over all extragalactic fields, which are reported in Table 1. For every OpSim
OS simulation, these 5$\sigma$ depths in each filter, $m_{5}$, are passed to
the CMNN Estimator, which uses them to calculate magnitude errors for a
catalog of mock galaxies: $\sigma_{\rm rand}^{2}=(0.04-\gamma)x+\gamma x^{2}$,
where $x=10^{0.4(m_{\rm true}-m_{5})}$, $m_{\rm true}$ is the true catalog
apparent magnitude, and $\gamma$ is a filter-dependent factor which accounts
for the effect of, e.g., sky brightness (see Section 3.2 of Ivezić et al.,
2019). The CMNN Estimator then simulates observed apparent magnitudes by
drawing a random value from a normal distribution with a standard deviation
equal to $\sigma_{\rm rand}$ and adding it to the true catalog magnitude.
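The following sketch reproduces this error model; the structure follows the formulas quoted above, while the specific $\gamma$ and $m_{5}$ values below are only placeholders for the filter-dependent factors of Ivezić et al. (2019) and the depths of Table 1.

```python
import numpy as np

def simulate_observed_mags(m_true, m5, gamma, rng):
    """Simulate observed apparent magnitudes in one filter, following
    the recipe described in the text."""
    x = 10.0 ** (0.4 * (m_true - m5))
    sigma_rand = np.sqrt((0.04 - gamma) * x + gamma * x**2)
    return m_true + rng.normal(0.0, sigma_rand)

rng = np.random.default_rng(42)
m_true = rng.uniform(22.0, 26.0, size=100_000)
# m5 = 26.42 mimics the baseline_v1.5 i-band depth from Table 1;
# gamma = 0.039 is an illustrative placeholder value.
m_obs = simulate_observed_mags(m_true, m5=26.42, gamma=0.039, rng=rng)
```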
Test- and training-sets are drawn randomly (without replacement) from the mock
galaxy catalog, and photo-$z$ estimates for the test set are generated by
identifying training-set galaxies with similar colors, i.e. a subset of color-
matched nearest-neighbors in 5-dimensional color space, and adopting the true
redshift of one of these training-set galaxies, chosen at random, as the
test-set galaxy’s photo-$z$.
standardize the mock catalogs used for all simulations, we applied the same
cuts on the observed apparent magnitudes of 25.0, 26.0, 26.0, 25.0, 24.8, and
24.0 mag in filters $ugrizy$ to both the test and training sets, about half a
magnitude brighter than the brightest 5$\sigma$ limiting depth of any given
OpSim OS simulation, as in Lochner et al. (2021).
It is important to note that for all of our simulations, the test- and
training-sets were drawn from the same intrinsic mock catalog, which means
they are perfectly matched in terms of their redshift and apparent magnitude
distributions. While this would be concerning if we aimed to evaluate the
realistic performance of the CMNN estimator, its role in this study is not to
produce LSST’s “official” or even “best” photo-$z$s; rather, we use it as a
forward model of the relationship between the quality of the input photometry
and the relative quality of the resulting photo-$z$ estimates, which provides
us with mock photometry under a given OS upon which we demonstrate TLM.
Several statistical measures are commonly used to evaluate the quality of
test-set photo-$z$ estimates based on the photo-$z$ error, $\Delta
z\equiv(z_{true}-z_{phot})/(1+z_{phot})$. While the bias $\langle\Delta
z\rangle$, representing systematic over- or under-estimates of point
estimates, was calculated, the values under these idealized conditions of a
perfectly representative and complete training-set were too small and similar
for bias to discriminate between OSs. However, our experimental design does
produce simulated photo-$z$ results for which the standard deviation and
fraction of outliers are relatively improved or degraded in a way that
correlates with the 5$\sigma$ depths in the six LSST filters. For a robust
standard deviation in $\Delta z$ we use the interquartile range (IQR) divided
by 1.349, making the fraction of outliers the fraction of test-set galaxies
with $|\Delta z|>3\times$ the robust standard deviation or $>3\times 0.06$,
whichever is larger (matching the definition of the LSST Science Requirements
Document, Ivezić & the LSST Science Collaboration, 2013).
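Both statistics are straightforward to compute from the photo-$z$ errors; a minimal sketch implementing the definitions above (function name ours):

```python
import numpy as np

def photoz_stats(z_true, z_phot):
    """Robust standard deviation (IQR / 1.349) and outlier fraction of
    the photo-z error Delta z, as defined in the text."""
    dz = (z_true - z_phot) / (1.0 + z_phot)
    q25, q75 = np.percentile(dz, [25, 75])
    sigma_robust = (q75 - q25) / 1.349
    # Outliers: |dz| > 3 * robust std or > 3 * 0.06, whichever is larger.
    cut = 3.0 * max(sigma_robust, 0.06)
    return sigma_robust, np.mean(np.abs(dz) > cut)
```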
In Figure 1, we show the robust standard deviation and fraction of
outliers777See Graham et al. (2020) for a full description of the standard
deviation and fraction of outliers statistics. in bins of photo-$z$ for a wide
variety of OpSim OS simulations, highlighting the baseline OpSim OS simulation
and a selection of five additional OpSim OS simulations which produced notably
better or worse results than the baseline that we use in this work, whose
median 10-year depths are provided in Table 1.
In Figure 2, we show the distribution of true catalog redshift for the test-
set galaxies used for each simulation. Because the same magnitude cuts were
applied to the test and training sets for all simulations, the training sets
have similar redshift distributions. The differences between the lines are
just the statistical random fluctuations caused by drawing a new test subset
(of 50000) from the greater mock galaxy catalog (of millions of galaxies) for
each simulation. The drop in the number of galaxies in high redshift bins is
realistic for the applied cuts on apparent magnitude, and is the cause for
some of the observed scatter in the photo-$z$ statistical results seen in
Figure 1.
Figure 3: Histogram of the redshift distributions for baseline_v1.5 under
normalizing flows with different values of the tuning parameter $K$. The true
distribution is in gray, while the distribution learned by normalizing flows
with various values of $K$ are displayed in color. Aside from a high-redshift
artifact for the extreme $K=2$ flow, the choice of $K$ appears to have little
effect.
### 3.2 Implementation: density estimation with pzflow
We build the NF lower bound discussed in Section 2.3 using pzflow (Crenshaw,
2021), a GPU-enabled python package for normalizing flows built on Jax
(Bradbury et al., 2018). We refer to our public implementation as
TheLastMetric 888https://github.com/aimalz/TheLastMetric.
For the latent distribution $p(z)$, we use the uniform distribution
$\mathcal{U}(0,3.2)$, as sampling and density estimation are trivial, and the
domain matches the compact support of the redshifts in our data set. Matching
the features of the latent space to the data eases training and prevents any
potential unphysical outliers.
For the bijection $f_{\varphi}$, we use a rational-quadratic neural spline
coupling (RQ-NSC; Durkan et al., 2019), which is a state-of-the-art bijection
capable of modeling high-dimensional distributions with hundreds of modes
while remaining efficient at both sampling and density estimation. The RQ-NSC transforms
galaxy redshifts with monotonically-increasing piecewise combinations of $K$
segments, each of which is a rational-quadratic function. The bijection
parameters $\varphi$ are the values and derivatives of these rational-
quadratic functions at $K+1$ spline knots. The value of $K$ impacts the
resolution of the distribution learned by the NF, with large $K$ corresponding
to high resolution. After fixing $K$, a neural network calculates the values
and derivatives of the $K+1$ knots from the conditional variables: the five
LSST colors ($u-g$, $g-r$, $r-i$, $i-z$, $z-y$) and the $r$ band magnitude
(which serves as a proxy for overall luminosity). After assessing several
configurations for the neural architecture of our NFs, we adopted a single RQ-
NSC coupling layer, parameterized by a dense neural network with 2 layers of
size 128 and ReLU non-linearities.
To confirm robustness to the choice of tuning parameter $K$, we train flows
with $K=$ 2, 8, 16, and 32 for each of the six OSs under consideration. Each
flow is trained by minimizing the loss of Equation 12 with respect to the
parameters $\varphi$ using the Adam optimizer (Kingma & Ba, 2014). We train
with a learning rate (lr) of $10^{-3}$ for 100 epochs, followed by lr
$=2\times 10^{-4}$ for 100 epochs, followed by lr $=10^{-4}$ for 50 epochs.
For each flow, this takes about 1 minute on a Tesla P100 12GB GPU (or about
twice that on a CPU). Figure 3 shows the redshift distributions learned by the
four flows trained on the baseline_v1.5 OS. Aside from a high-redshift
artifact for $K=2$, the choice of $K$ has little effect on the redshift
distribution learned by the flow. Similar behavior is observed for the
redshift distributions of the other OSs, as well as the cross-correlations
with galaxy colors. Thus, for the remainder of this work, we will just use
$K=16$.
Now that we have set the value of $K$, for each OS we train nine additional
flows with $K=16$ and the same training schedule. The 10 flows per OS will be
used as a deep ensemble (Lakshminarayanan et al., 2016) to account for the
epistemic uncertainty of the model. Deep ensembles perform approximate
Bayesian marginalization (Wilson & Izmailov, 2020) over network parameters by
independently initializing and training neural network parameters a number of
different times. In the case of a non-convex loss function, this procedure
often results in solutions that live in distinct basins of attraction in
parameter space (Fort et al., 2020) and is therefore preferable to methods
that approximately marginalize over single basins of attraction, such as the
Laplace (MacKay, 1992) and SWAG (Maddox et al., 2019) approximations. We
calculate a distribution and report a mean of $\Upsilon$ for each
OS based on the ten trained NFs.
Figure 4: Redshift-binned average of the negative log-posterior $-\log
q_{\varphi}(z|x_{phot})$ under each OpSim OS simulation, analogous to Figure 1
(i.e. lower is better). This component of TLM indicates the uncertainty on
redshifts given observed photometry, for instance, the reduction in redshift
uncertainty as the Balmer break passes between the LSST photometric filters,
and otherwise follows the dominant trend of the traditional photo-$z$ metrics
by worsening at high redshift. Though the general ranking of OSs matches that
of the traditional metrics, the reorderings between redshift bins are not as
severe, an indication of the robustness of TLM.

Figure 5: The distribution of $\Upsilon$ estimates for each OS. The shaded
distributions represent an estimate of the epistemic errors in the evaluation
of the metric, obtained using a deep ensemble approach. Mean $\Upsilon$
estimates indicated by vertical lines are obtained by averaging the deep
ensemble values. The stochasticity exceeds the difference between metric
values for two pairs of OSs, but stratification by TLM is otherwise robust.

Figure 6: A comparison between the mean $\Upsilon$ and the traditional
photo-$z$ metrics of intrinsic scatter (top panel) and outlier rate (bottom
panel) from the CMNN estimator, calculated for the redshift range $0.3<z_{\rm
phot}<3.0$ for each OS (colors). Both TLM and the traditional photo-$z$
metrics penalize the footprint_stuck_rolling_v1.5 and ddf_heavy_nexp2_v1.6
OSs, favor barebones_v1.6, and generally agree on the ranking of OSs.
### 3.3 Results
As a hypothesis, we expect that TLM will confirm the hierarchy of OSs
corresponding to the traditional photo-$z$ statistics; as will be discussed in
Section 4, the advantages of TLM are its potential extensions, but we first
establish its consistency with our intuition based on the established
photo-$z$ metrics.
Before considering TLM’s value for each OS, we aim to build some intuition of
how it behaves in practice. We therefore begin by considering the behavior of
a key component of $\Upsilon$: the per-galaxy log-posterior $\log
q_{\varphi}(z|x_{phot})$, which quantifies how probable the true redshift of a
galaxy with photometry $x_{phot}$ is under the approximated posterior redshift
distribution. This value would be high for narrow posteriors centered on the
true redshifts, indicating that the photometry is very constraining of
redshift. Alternatively, a low value indicates that the posterior is not very
concentrated and/or offset from the true redshift, meaning that the photometry
is not very constraining.
Figure 4 shows the redshift-binned negative expected value $\langle-\log
q_{\varphi}(z|x_{phot})\rangle$ for different OSs; the minus sign is included
for easier comparison with Figure 1, i.e. lower is better. We confirm the
conclusions of the CMNN statistics of Figure 1: towards higher redshifts,
photometry becomes less constraining, in the same way that the scatter and
outlier rate increases for CMNN, and the barebones_v1.6 OS consistently
outperforms the footprint_stuck_rolling_v1.5 OS at all redshifts. We also
observe from Figure 4 that the ordering of OSs by these curves does not
significantly depend on redshift, which is largely consistent with the
findings of the traditional metrics of Figure 1. This indicates that there is
not a sub-range of redshifts for which a given strategy would outperform the
others, which implies that a single OS could be optimal for both low-redshift
and high-redshift science use cases.
In addition, $\langle-\log q_{\varphi}(z|x_{phot})\rangle$ achieves a series
of local minima, corresponding to increased information content, around the
redshifts where the 4000 Å Balmer break, a broad feature of galaxy spectra
important for photo-$z$ estimation, crosses between the LSST photometric
filters. This provides a strong consistency check that our implementation of
the metric is capturing the real, physical mutual information between
photometry and redshift. It also demonstrates consistency with the results of
Kalmbach et al. (2020), who found that using information theory to optimize
filters for photo-$z$s corresponds to designing filters that can optimally
constrain the location of the Balmer break as it moves across the optical
wavelength range.
As described in Section 2.2, $\Upsilon$ itself is obtained by
taking the expectation of the log-posterior $\log q_{\varphi}(z|x_{phot})$
over the entire sample, thus capturing how informative the observed photometry
is about the redshifts across the whole population, and adding an entropy term
$H(Z)$ which only depends on the redshift distribution.
The distributions of $\Upsilon$ from the deep ensembles for each OS
are shown in Figure 5, recalling that higher $\Upsilon$ is better.
The epistemic uncertainty in TheLastMetric dominates $\Upsilon$’s
discriminatory power for two pairs of OSs, but there is a clear four-tiered
hierarchy of the redshift information each OS’s photometry preserves.
Figure 6 shows the mean values $\langle\Upsilon\rangle$ of these
distributions, plotted against the two relevant canonical photo-$z$ metrics of
Figure 1 for each OS, confirming that the behavior of $\Upsilon$
is qualitatively similar to that of the traditional photo-$z$ metrics999It
also confirms the close correlation between the traditional intrinsic scatter
and outlier rate, as the latter is defined in terms of the former., favoring
barebones_v1.6 and disfavoring footprint_stuck_rolling_v1.5.
## 4 Discussion & conclusions
In this paper, we introduce TheLastMetric, $\Upsilon$, a metric of
mutual information, and apply it in the context of observing strategy
optimization for an astronomical survey with diverse scientific goals. We also
release TheLastMetric to the community as an implementation of a variational
lower bound on the mutual information of photometry with respect to redshift.
We demonstrate the calculation of TheLastMetric on mock photometric galaxy
catalogs corresponding to exemplary observing strategies for LSST, confirming
that it is qualitatively similar to conventional photo-$z$ metrics.
TheLastMetric offers distinct advantages addressing key needs for observing
strategy metrics for LSST’s diverse extragalactic goals:
* •
TheLastMetric is a measure of information in units of nats, meaning it and any
science-case specific extensions thereof are directly comparable, enabling the
isolation of the relative importance of science goals from the raw values of
their observing strategy metrics.
* •
TheLastMetric does not assume any photo-$z$ estimator, freeing it from
assumptions of photo-$z$ template libraries, training sets, and other priors,
as well as from the computational overhead associated with many popular
photo-$z$ estimators.
* •
TheLastMetric is applicable across redshift-dependent science cases as well as
to other quantities informed by photometry that is influenced by observing
strategy.
TheLastMetric is not without its own assumptions, of course. We show that it is
robust to the tuning parameters of TheLastMetric’s back-end, but evaluation
on draws from the conditional normalizing flow model rather than the same data
upon which it was trained would, strictly speaking, be more self-consistent.
Furthermore, though TheLastMetric eliminates the computational expense of
estimating photo-$z$s, it retains the traditional metrics’ computational
overhead of simulating a mock galaxy catalog from OpSim parameters.
We note that the entropy $H(Z)$ of the redshift distribution of the mock
galaxy catalog, which factors into $\Upsilon$ in Equation 9 and
thus influences Figures 5 and 6, may differ between cosmological probes or
other science cases that use subsamples of galaxies under different selection
functions. Though the entropy term $H(Z)$ would need to be recomputed for the
anticipated redshift distribution of the science-motivated subsample, that
term is subdominant in magnitude as well as trivial to calculate. Since the
expected value of Equation 9 is defined in terms of $p(z,x_{phot})$,
$\Upsilon$ could be recalculated under a different redshift
distribution without requiring retraining of TheLastMetric to obtain the $\log
q_{\varphi}(z|x_{phot})$ for each mock galaxy. Thus TheLastMetric is
extensible to redshift-dependent science cases without increasing
computational expense beyond what is required of the current photo-$z$
metrics.
Mathematically, TheLastMetric is not inherently exclusive to redshift and may
be extended to any parameter of interest available in the truth catalog of a
mock galaxy sample to yield an interpretable metric of how informative the
photometry is about that parameter. Use of TheLastMetric and potential
extensions thereof, within and beyond cosmological applications, will enable
the identification of an appropriate observing strategy for LSST. We thus
recommend TheLastMetric’s inclusion in the MAF and motivate future development
into further mutual information metrics specific to individual science cases
or probes of a single science application.
## Acknowledgements
This work was incubated at the August 2020 TVS SC MAF
Hackathon101010https://lsst-tvssc.github.io/metricshackathon2020, which was
supported by an LSSTC Enabling Science small programs grant.
AIM acknowledges support from the Max Planck Society and the Alexander von
Humboldt Foundation in the framework of the Max Planck-Humboldt Research Award
endowed by the Federal Ministry of Education and Research. JFC is supported by
the U.S. Department of Energy, Office of Science, under Award DE-SC-0011635,
as well as the National Science Foundation, Division Of Astronomical Sciences,
under Award AST-1715122, and the Office of Advanced Cyberinfrastructure, under
Award OAC-1739419. MLG acknowledges support from the DIRAC Institute in the
Department of Astronomy at the University of Washington. The DIRAC Institute
is supported through generous gifts from the Charles and Lisa Simonyi Fund for
Arts and Sciences, and the Washington Research Foundation.
## References
* Alemi et al. (2017) Alemi A., Fischer I., Dillon J., Murphy K., 2017, in ICLR. https://arxiv.org/abs/1612.00410
* Almoubayyed et al. (2020) Almoubayyed H., et al., 2020, Monthly Notices of the Royal Astronomical Society, 499, 1140
* Awan et al. (2016) Awan H., et al., 2016, ApJ, 829, 50
* Bachman et al. (2019) Bachman P., Devon Hjelm R., Buchwalter W., 2019, arXiv e-prints, p. arXiv:1906.00910
* Barber & Agakov (2003) Barber D., Agakov F., 2003, in Proceedings of the 16th International Conference on Neural Information Processing Systems. NIPS’03. MIT Press, Cambridge, MA, USA, p. 201–208
* Bradbury et al. (2018) Bradbury J., et al., 2018, JAX: composable transformations of Python+NumPy programs, http://github.com/google/jax
* Chang et al. (2018) Chang C., et al., 2018, Mon Not R Astron Soc, 482, 3696
* Cover & Thomas (2006) Cover T. M., Thomas J. A., 2006, Elements of Information Theory. Wiley
* Crenshaw (2021) Crenshaw J. F., 2021, jfcrenshaw/pzflow: v1.6.0, doi:10.5281/zenodo.4679913, https://doi.org/10.5281/zenodo.4679913
* Delgado et al. (2014) Delgado F., Saha A., Chandrasekharan S., Cook K., Petry C., Ridgway S., 2014, in Modeling, Systems Engineering, and Project Management for Astronomy VI. International Society for Optics and Photonics, p. 915015
* Dinh et al. (2015) Dinh L., Krueger D., Bengio Y., 2015, in Bengio Y., LeCun Y., eds, Proceedings of the 3rd International Conference on Learning Representations. San Diego, CA (arXiv:1410.8516)
* Durkan et al. (2019) Durkan C., Bekasov A., Murray I., Papamakarios G., 2019, in Wallach H. M., Larochelle H., Beygelzimer A., d’Alché-Buc F., Fox E. B., Garnett R., eds, Advances in Neural Information Processing Systems 32. Curran Associates, Inc., Vancouver, Canada, pp 7511–7522 (arXiv:1906.04032)
* Fort et al. (2020) Fort S., Hu H., Lakshminarayanan B., 2020, arXiv:1912.02757 [cs, stat]
* Goodfellow et al. (2014) Goodfellow I. J., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A., Bengio Y., 2014, arXiv e-prints, p. arXiv:1406.2661
* Górski et al. (2005) Górski K. M., Hivon E., Banday A. J., Wandelt B. D., Hansen F. K., Reinecke M., Bartelmann M., 2005, ApJ, 622, 759
* Graham et al. (2018) Graham M. L., Connolly A. J., Ivezić Ž., Schmidt S. J., Jones R. L., Mario Jurić Daniel S. F., Yoachim P., 2018, AJ, 155, 1
* Graham et al. (2020) Graham M. L., et al., 2020, AJ, 159, 258
* Ivezić & the LSST Science Collaboration (2013) Ivezić Ž., the LSST Science Collaboration 2013, http://ls.st/LPM-17
* Ivezić et al. (2019) Ivezić Ž., et al., 2019, ApJ, 873, 111
* Jimenez Rezende & Mohamed (2015) Jimenez Rezende D., Mohamed S., 2015, arXiv e-prints, p. arXiv:1505.05770
* Jones et al. (2020) Jones R. L., Yoachim P., Ivezic Z., Neilsen E. H., Ribeiro T., 2020, Technical Report 051, Survey Strategy and Cadence Choices for the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST). LSST
* Kalmbach et al. (2020) Kalmbach J. B., VanderPlas J. T., Connolly A. J., 2020, The Astrophysical Journal, 890, 74
* Kingma & Ba (2014) Kingma D. P., Ba J., 2014, arXiv e-prints, p. arXiv:1412.6980
* Kingma & Welling (2013) Kingma D. P., Welling M., 2013, arXiv e-prints, p. arXiv:1312.6114
* Kobyzev et al. (2020) Kobyzev I., Prince S. J. D., Brubaker M. A., 2020, IEEE Transactions on Pattern Analysis and Machine Intelligence, pp 1–1
* LSST (2016) LSST 2016, OpSim 3.3.8
* LSST (2017) LSST 2017, MAF 2.4.0
* LSST Science Collaboration et al. (2017) LSST Science Collaboration et al., 2017, arXiv e-prints, p. arXiv:1708.04058
* Lakshminarayanan et al. (2016) Lakshminarayanan B., Pritzel A., Blundell C., 2016, arXiv e-prints, p. arXiv:1612.01474
* Lochner et al. (2018) Lochner M., et al., 2018, arXiv:1812.00515 [astro-ph]
* Lochner et al. (2021) Lochner M., et al., 2021, arXiv:2104.05676 [astro-ph]
* MacKay (1992) MacKay D. J., 1992, PhD thesis, California Institute of Technology
* Maddox et al. (2019) Maddox W., Garipov T., Izmailov P., Vetrov D., Wilson A. G., 2019, arXiv:1902.02476 [cs, stat]
* Malz et al. (2018) Malz A. I., Marshall P. J., DeRose J., Graham M. L., Schmidt S. J., Wechsler R., Collaboration) L. D. E. S., 2018, AJ, 156, 35
* Poole et al. (2019) Poole B., Ozair S., van den Oord A., Alemi A. A., Tucker G., 2019, CoRR, abs/1905.06922
* Schmidt et al. (2020) Schmidt S. J., et al., 2020, Mon Not R Astron Soc, 499, 1587
* Scolnic et al. (2018) Scolnic D. M., et al., 2018, arXiv:1812.00516 [astro-ph]
* Tishby & Zaslavsky (2015) Tishby N., Zaslavsky N., 2015, CoRR, abs/1503.02406
* Wilson & Izmailov (2020) Wilson A. G., Izmailov P., 2020, arXiv:2002.08791 [cs, stat]
* Winkler et al. (2019) Winkler C., Worrall D., Hoogeboom E., Welling M., 2019, arXiv:1912.00042 [cs, stat]
# Universal behaviour of majority bootstrap percolation on high-dimensional
geometric graphs
Maurício Collares and Joshua Erde and Anna Geisler and Mihyun Kang Institute
of Discrete Mathematics, Graz University of Technology, Steyrergasse 30, 8010
Graz, Austria<EMAIL_ADDRESS>{erde, geisler}<EMAIL_ADDRESS>
###### Abstract.
Majority bootstrap percolation is a monotone cellular automaton that can be
thought of as a model of infection spreading in networks. Starting with an
initially infected set, new vertices become infected once more than half of
their neighbours are infected. The average case behaviour of this process was
studied on the $n$-dimensional hypercube by Balogh, Bollobás and Morris, who
showed that there is a phase transition as the typical density of the
initially infected set increases: For small enough densities the spread of
infection is typically local, whereas for large enough densities typically the
whole graph eventually becomes infected. Perhaps surprisingly, they showed
that the critical window in which this phase transition occurs is bounded away
from $1/2$, and they gave bounds on its width on a finer scale. In this paper
we consider the majority bootstrap percolation process on a class of high-
dimensional geometric graphs which includes many of the graph families on
which percolation processes are typically considered, such as grids, tori and
Hamming graphs, as well as other well-studied families of graphs such as
(bipartite) Kneser graphs, including the odd graph and the middle layer graph.
We show similar quantitative behaviour in terms of the location and width of
the critical window for the majority bootstrap percolation process on this
class of graphs.
###### 1991 Mathematics Subject Classification:
60K35, 60C05 (Primary)
## 1\. Introduction
### 1.1. Motivation
Bootstrap percolation is a process on a graph which models the spread of an
infection through a population. This model was first considered by Chalupa,
Leath and Reich [20] to describe and analyse magnetic systems, and has been
widely used to describe other interacting particle systems, see [37].
More generally, in the physical sciences and in particular statistical
physics, models in which a system evolves according to a set of ‘local’ and
homogeneous update rules are known as _cellular automata_. Examples include the
Ising model and lattice gas automata [31, 36]. Bootstrap percolation is then an
example of a _monotone_ cellular automaton – the ‘state’ (infected or
uninfected) of each site can only change in one direction, and the update rule
is _homogeneous_ (the same for each vertex) and _local_ (determined by the
states of the neighbours).
These models were first analysed rigorously by Schonmann [50]. A related
model, introduced by Bollobás [16], is weak saturation, in which the edges of
a graph $G$ get activated if they complete a copy of a fixed graph $H$.
Bootstrap percolation has been used to model various processes, from the zero-
temperature Ising model in physics [27] to neural networks [5] or opinion
spreading processes in sociology [30]. For a survey on monotone cellular
automata results related to bootstrap percolation see [47] and for physical
applications [46].
The bootstrap percolation process on a graph $G$ evolves in discrete time
steps, which we call _rounds_ , starting with an initial set $A_{0}\subseteq
V(G)$ of infected vertices. In each round, new vertices become infected if at
least a certain threshold $r$ of their neighbours have already been infected.
Since infected vertices never recover, the set $A_{i}$ of vertices which are
infected by the $i$-th round is non-decreasing. In other words, we have a
sequence of sets $A_{0}\subseteq A_{1}\subseteq\ldots$ where
$A_{i}=A_{i-1}\cup\left\\{v\in V(G)\colon\left|N(v)\cap
A_{i-1}\right|\geqslant r\right\\}$ for $i\in\mathbb{N}.$ We will refer to
this process as _$r$ -neighbour bootstrap percolation_. If the infection
spreads to the entire population, i.e., if $\bigcup_{i=0}^{\infty}A_{i}=V(G)$,
then the initial set $A_{0}$ is said to _percolate_ on $G$.
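The process is simple to simulate directly from this definition; the sketch below (with names of our choosing) runs $r$-neighbour bootstrap percolation on a graph given as an adjacency dictionary, only re-examining vertices adjacent to newly infected ones.

```python
import random

def bootstrap_percolation(adj, initially_infected, r):
    """Run r-neighbour bootstrap percolation until the infected set
    stabilises, and return the final infected set."""
    infected = set(initially_infected)
    frontier = set(infected)
    while frontier:
        # Only uninfected neighbours of newly infected vertices can
        # cross the threshold in the next round.
        candidates = {u for v in frontier for u in adj[v]} - infected
        frontier = {u for u in candidates
                    if sum(w in infected for w in adj[u]) >= r}
        infected |= frontier
    return infected

# Example: majority bootstrap percolation (r = d/2) on the hypercube Q_n,
# with a p-random initially infected set.
n = 10
adj = {v: [v ^ (1 << i) for i in range(n)] for v in range(2**n)}
A0 = {v for v in range(2**n) if random.random() < 0.4}
print(len(bootstrap_percolation(adj, A0, r=n // 2)) == 2**n)
```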
It is apparent that given a graph $G$ and an initially infected set $A_{0}$,
the $r$-neighbour bootstrap percolation process is deterministic, and so each
subset $A_{0}\subseteq V(G)$ is either percolating or non-percolating.
Typically, given a graph or a family of graphs, one can ask about the _worst
case_ or extremal behaviour – what are the ‘minimal’ percolating sets [14, 25,
48] – or about the _typical_ behaviour – are ‘most’ sets percolating or non-
percolating? A natural way to frame the latter question is to look at the
probability $\Phi(p,G)$ that a _random_ subset of a fixed density $p$
percolates. More precisely, in random bootstrap percolation, given a graph $G$
and $p\in(0,1)$, we let ${\bf A}_{p}$ be a $p$-random subset of $V(G)$ where
each vertex in $V(G)$ is included in ${\bf A}_{p}$ independently with
probability $p$ and define
$\Phi(p,G):=\mathbb{P}[{\bf A}_{p}\text{ percolates on }G].$
Since the $r$-neighbour bootstrap percolation process is monotone, it is clear
that the probability of percolation $\Phi(p,G)$ is monotonically increasing in
$p$. It follows from standard results on the existence of thresholds for
monotone graph properties [29] that there is a _threshold_ for random
bootstrap percolation – the percolation probability transitions continuously
from almost $0$ to almost $1$ in a very small window around the _critical
probability_
$p_{c}(G):=\inf\left\\{p\in(0,1)\colon\Phi(p,G)\geqslant\frac{1}{2}\right\\}.$
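Reusing `bootstrap_percolation` from the sketch above, both $\Phi(p,G)$ and $p_{c}(G)$ can be approximated by Monte Carlo; this is an illustration only, since near the threshold the sampling noise calls for far more trials.

```python
import random

def phi(adj, p, r, trials=200):
    """Monte Carlo estimate of Phi(p, G)."""
    V = list(adj)
    hits = 0
    for _ in range(trials):
        A0 = {v for v in V if random.random() < p}
        hits += len(bootstrap_percolation(adj, A0, r)) == len(V)
    return hits / trials

def critical_probability(adj, r, tol=1e-3):
    """Bisection for the smallest p with Phi(p, G) >= 1/2, exploiting
    the monotonicity of p -> Phi(p, G)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if phi(adj, mid, r) >= 0.5:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0
```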
In this paper we will consider this process on a family of graphs
$(G_{n})_{n\in\mathbb{N}}$, where the critical probability $p_{c}(G_{n})$ is
then a function of $G_{n}$. Asymptotically, this transition from percolation
almost never to almost surely then happens in some _critical window_ , an
interval $I_{n}\subseteq[0,1]$ below which $\Phi(p,G_{n})=o_{n}(1)$ and above
which $\Phi(p,G_{n})=1-o_{n}(1)$. We say there is a _sharp threshold_ if the
width of the critical window is small in comparison to the critical
probability, i.e., if the quotient satisfies
$\lim_{n\to\infty}\frac{|I_{n}|}{p_{c}(G_{n})}=0.$
In the first appearance of bootstrap percolation [20] the authors gave some
bounds for the location and width of the critical window in bootstrap
percolation on the _Bethe lattice_. Subsequent work in the area of physics
also focused on lattice-like structures and observed threshold phenomena [1].
The simplest examples of lattices, and the most well-studied in terms of
bootstrap percolation, come from finite grids $[n]^{d}$ and in particular the
_$d$ -dimensional hypercube_ $[2]^{d}$, which we will denote by $Q_{d}$.
Holroyd [35] demonstrated the existence of a sharp threshold in $2$-neighbour
bootstrap percolation in two-dimensional grids $[n]^{2}$, i.e., the case
$r=d=2$ and for arbitrary $n\in\mathbb{N}$. In three dimensions, bounds on the
critical probability for $r=3$ were determined by Cerf and Cirillo [18]. Later
work of Balogh, Bollobás and Morris [7, 9, 10] showed that there is a sharp
threshold in grids in arbitrary dimension. Cerf and Manzo [19] gave the
critical probability for all finite grids with $3\leqslant r\leqslant d$ up to
constant factors. This line of work culminated in the breakthrough result of
Balogh, Bollobás, Duminil-Copin and Morris [8], which gave a sharp threshold
in all dimensions:
###### Theorem 1.1 ([8, Theorem 1]).
Let $d\geqslant r\geqslant 1$ and consider the random $(r+1)$-neighbour
bootstrap percolation process. Then
$p_{c}\left([n]^{d}\right)=\left(\frac{\lambda(d,r)+o(1)}{\log_{r}(n)}\right)^{d-r},$
where $\lambda(d,r)$ is an explicit constant depending on $d$ and $r$, and
$\log_{r}$ is the iterated logarithm.
Here, and throughout the paper, our asymptotics will be in terms of the
parameter $n$ unless otherwise stated.
As well as in lattice-like graphs, the typical behaviour of bootstrap
percolation has been considered on many other graph classes such as binomial
random graphs [38, 40], inhomogeneous random graphs [28] and infinite trees
[13].
In many physical applications there is a related model which is also natural
to consider, where a vertex becomes infected once at least half of its
neighbours have been infected [27]. We call this process the _majority
bootstrap percolation process_. Note that, in general, this update rule is no
longer homogeneous, as we have a different infection threshold $r(v)=d(v)/2$
for each vertex $v$. However, for $d$-regular graphs, this is again an example
of $r$-neighbour bootstrap percolation with $r=d/2$. Note that, when analysing
the asymptotic behaviour of this process on a family of regular graphs
$(G_{n})_{n\in\mathbb{N}}$, even though the infection threshold $r$ is fixed
for each $G_{n}$, it might vary as a function of $n$.
It is not hard to see that, for any $n$-regular graph $G$ whose order is sub-
exponential in $n$, there is a sharp threshold for percolation in the random
majority bootstrap percolation process near the point $p=\frac{1}{2}$. Indeed,
by standard concentration results, if $p>\frac{1}{2}+\varepsilon$, then with
probability tending to one as $n\to\infty$ more than half of the neighbours of
_every_ vertex in $G$ will lie in the initially infected set, and so the
process will percolate after a single round. Conversely, if
$p<\frac{1}{2}-\varepsilon$, then with probability tending to one as
$n\to\infty$ less than half the neighbours of _every_ vertex in $G$ will lie
in the initially infected set, and so the process will stabilise with the
infected set being the initial set, which is most likely not the entire vertex
set.
Whilst this behaviour cannot be _universal_ to all $n$-regular graphs
(consider for example a disjoint union of many copies of a fixed $n$-regular
graph), it was shown by Bollobás, Balogh and Morris [11] that this sharp
threshold occurs as well for graphs of order super-exponential in $n$ which
satisfy certain nice structural properties. The key point here is to have some
control on the _neighbourhood expansion_ of the host graph, to ensure that all
vertices at distance $i$ from a fixed vertex $x$ have ‘many’ neighbours at
distance $i+1$ from $x$.
###### Theorem 1.2 ([11, Theorem 2.2], informal).
Let $G$ be a regular graph whose order is sufficiently small in terms of the
neighbourhood expansion of the graph and consider the random majority
bootstrap percolation process. Then
$p_{c}(G)=\frac{1}{2}+o(1).$
Furthermore, in the case of the hypercube, they looked at the location and
width of the critical window on a finer scale, and found that the point
$p=\frac{1}{2}$ actually lies outside, and in fact above, the critical window.
###### Theorem 1.3 ([11, Theorem 2.1]).
Let $Q_{n}$ be the $n$-dimensional hypercube and consider the random majority
bootstrap percolation process where the set of initially infected vertices is
a $p$-random subset ${\bf A}_{p}$ with
$p:=\frac{1}{2}-\frac{1}{2}\sqrt{\frac{\log n}{n}}+\frac{\lambda\log\log
n}{\sqrt{n\log n}}.$
Then
$\lim_{n\to\infty}\Phi(p,Q_{n})=\left\\{\begin{array}[]{c@{\quad\textup{if}
\quad}l}0&\lambda\leqslant-2,\\\\[4.30554pt]
1&\lambda>\frac{1}{2}.\end{array}\right.$
Since in this context it makes sense to consider the critical probability as
having an ‘expansion’ around $p=\frac{1}{2}$, it will be useful to introduce
some definitions so as to be able to talk about the strength of bounds on the
location and width of the critical window.
Suppose a property has a critical window of the form $I=p_{1}+p_{2}+\ldots\pm
p_{k}$ where $p_{i}=o_{n}(p_{i-1})$ for each $2\leqslant i<k$ and
$p_{k}=O_{n}(p_{k-1})$. Then, whenever $p_{i}=o_{n}(p_{i-1})$ we say that
$p_{i}$ is the _$i$ -th term_ in the expansion of the critical probability and
the _width_ of the critical window is at most $p_{k}$. In this way, Theorem
1.3 determines the first two terms in the expansion of the critical
probability to be $p_{1}=\frac{1}{2}$ and $p_{2}=-\frac{1}{2}\sqrt{\frac{\log
n}{n}}$, and bounds the width of the critical window to be
$O\left(\frac{\log\log n}{\sqrt{n\log n}}\right)$ for random majority
bootstrap percolation on the $n$-dimensional hypercube.
### 1.2. Main Results
The aim of this paper is to investigate the random majority bootstrap
percolation process on _high-dimensional geometric_ graphs. The
$n$-dimensional hypercube has many equivalent representations: For example, it
is the nearest neighbour graph on the set of points $\\{0,1\\}^{n}$; it is the
Cayley graph of $\mathbb{Z}_{2}^{n}$ with the standard generating set; it is
the Cartesian product of $n$ copies of a single edge; and it is the skeleton
of a particularly simple convex polytope in $\mathbb{R}^{n}$ – all of which in
some sense witness the fact that $Q_{n}$ is high-dimensional.
We will study the majority bootstrap percolation process on other graphs with
similar high-dimensional representations. For example, many natural classes of
lattice-like graphs can be realised as Cartesian products of small graphs,
such as grids, tori and Hamming graphs, which are the Cartesian products of
paths, cycles and complete graphs, respectively. Such graphs are commonly
studied in percolation theory [2, 24, 26, 32] and in particular in the context
of bootstrap percolation [14, 41].
Another well-studied class of high-dimensional graphs are Kneser graphs and
bipartite Kneser graphs, which encode how different subsets of a fixed set
intersect. Of particular interest here are the middle layer graph $M_{n}$ and
the odd graph $O_{n}$ whose combinatorial properties have been extensively
studied [15, 49, 42]. The odd graph $O_{n}$ belongs to the family of
_generalised odd graphs_ that are related to incidence geometries and in
particular to _near polygons_ , see [17]. Another well-known example of a
generalised odd graph is the $(n-1)$-dimensional _folded hypercube_
$\tilde{Q}_{n}$ obtained by joining each pair of antipodal points in the
hypercube $Q_{n-1}$. This graph has also commonly been used as a lower-
diameter alternative to the hypercube in the context of network topology [4].
Percolation processes have also been considered on some of these graphs, in
the context of the transference of combinatorial results to sparse random
substructures [12].
In this paper we will study a general class
$\mathcal{H}=\bigcup_{K\in\mathbb{N}}\mathcal{H}(K)$ of graphs satisfying
certain structural properties (see Section 3 for formal definition), which are
satisfied by various high-dimensional geometric graphs, and in particular the
graphs listed above. Roughly speaking, $\mathcal{H}(K)$ consists of graphs
whose structure is _close_ – in a quantitative manner in terms of the
parameter $K$ – to being controlled by some coordinate system, which may only
be defined locally with respect to each vertex and need not be globally
coherent, where edges correspond to changing a single coordinate.
We will show that there is a certain _universal behaviour_ to the random
majority bootstrap percolation process on graphs in the class
$\mathcal{H}(K)$, which is controlled in some way by the degree sequence, and
we will in particular determine the first few terms in the expansion of the
critical probability for regular graphs in $\mathcal{H}(K)$. In fact, we show
that the critical window does not contain $\frac{1}{2}$, for _arbitrary_
graphs $G$ in the class $\mathcal{H}(K)$, where the upper and lower boundary
of the critical window are functions of the minimum and maximum degree of $G$,
denoted by $\delta(G)$ and $\Delta(G)$, respectively.
Our main theorem is as follows.
###### Theorem 1.4.
Let $(G_{n})_{n\in\mathbb{N}}$ be a sequence of graphs in $\mathcal{H}(K)$ for
some $K\in\mathbb{N}$ such that $\delta(G_{n})\to\infty$ as $n\to\infty$ and
let $p=p(n)\in[0,1]$. Consider the random majority bootstrap percolation
process where the set of initially infected vertices is a $p$-random subset
${\bf A}_{p}$. Then for any constant $\varepsilon>0$,
$\lim_{n\to\infty}\Phi(p,G_{n})=\left\\{\begin{array}[]{c@{\quad\textup{if}
\quad}l}0&p<\frac{1}{2}-\left(\frac{1}{2}+\varepsilon\right)\sqrt{\frac{\log\delta(G_{n})}{\delta(G_{n})}},\\\\[4.30554pt]
1&p>\frac{1}{2}-\left(\frac{1}{2}-\varepsilon\right)\sqrt{\frac{\log\Delta(G_{n})}{\Delta(G_{n})}}.\end{array}\right.$
In particular, if the graphs $G_{n}$ are regular, then Theorem 1.4 determines
the first two terms in the expansion of the critical probability, as in
Theorem 1.3 for the hypercube.
###### Corollary 1.5.
Let $(G_{n})_{n\in\mathbb{N}}$ be a sequence of $d$-regular graphs in
$\mathcal{H}(K)$ for some $K\in\mathbb{N}$ such that $d=d(n)\to\infty$ as
$n\to\infty$ and let $p=p(n)\in[0,1]$. Consider the random majority bootstrap
percolation process where the set of initially infected vertices is a
$p$-random subset ${\bf A}_{p}$. Then for any constant $\varepsilon>0$,
$\lim_{n\to\infty}\Phi(p,G_{n})=\left\\{\begin{array}[]{c@{\quad\textup{if}
\quad}l}0&p<\frac{1}{2}-\left(\frac{1}{2}+\varepsilon\right)\sqrt{\frac{\log
d}{d}},\\\\[4.30554pt]
1&p>\frac{1}{2}-\left(\frac{1}{2}-\varepsilon\right)\sqrt{\frac{\log
d}{d}}.\end{array}\right.$
In particular, this holds if $G_{n}$ is
* •
an $n$-dimensional regular Cartesian product graph whose base graphs have
bounded size;
* •
the $n$-dimensional middle layer graph $M_{n}$;
* •
the $n$-dimensional odd graph $O_{n}$; or
* •
the $n$-dimensional folded hypercube $\tilde{Q}_{n}$.
Note that $M_{n}$, $O_{n}$ and $\tilde{Q}_{n}$ are $n$-regular.
Corollary 1.5 shows that for _$d$ -regular_ graphs in $\mathcal{H}(K)$ there
is a sharp percolation threshold for the random majority bootstrap percolation
process. More precisely, we find that the behaviour in Theorem 1.3, when
scaled correctly, is in some way _universal_ for these high-dimensional
graphs, in that the first two terms in the expansion of the critical
probability are $\frac{1}{2}-\frac{1}{2}\sqrt{\frac{\log d}{d}}$ and the
critical window has width $o\left(\sqrt{\frac{\log d}{d}}\right)$: In other
words, for any _$d$ -regular_ graph $G$ in $\mathcal{H}(K)$, the critical
probability satisfies
$p_{c}(G)=\frac{1}{2}-\frac{1}{2}\sqrt{\frac{\log
d}{d}}+o\left(\sqrt{\frac{\log d}{d}}\right).$
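To give a sense of scale, the short computation below evaluates these first two terms for a few degrees, illustrating how slowly $p_{c}$ approaches $\frac{1}{2}$ as $d$ grows.

```python
import math

# First two terms of the expansion: p_c ~ 1/2 - (1/2) * sqrt(log d / d).
for d in (10, 100, 1000, 10000):
    print(d, 0.5 - 0.5 * math.sqrt(math.log(d) / d))
# d = 10 gives ~0.26, d = 100 gives ~0.39, d = 10000 gives ~0.48.
```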
### 1.3. Proof Techniques
In [11] Balogh, Bollobás and Morris show that, at least in the $n$-dimensional
hypercube, the event that a vertex becomes infected is in some sense ‘locally
determined’. Indeed, for the $1$-statement they show that with probability
tending to one as $n\to\infty$ (whp) every vertex becomes infected after at
most eleven rounds. For the $0$-statement they analyse a related process,
which stochastically dominates the majority bootstrap percolation process, and
show that whp this process stabilises after three rounds, and in fact whp a
positive proportion of the vertices remains uninfected. Note that whether or
not a vertex $v$ is infected after $k$ rounds only depends on the set of
initially infected vertices at distance at most $k$ from $v$.
A key tool in our proof of Theorem 1.4 is that the degree distributions of
graphs in $\mathcal{H}(K)$ are locally quite ‘flat’, in the sense that the
degrees in a small neighbourhood of a vertex $v$ are relatively close to the
degree of $v$. Roughly, such graphs look _locally approximately regular_. If,
as in the work of Balogh, Bollobás and Morris [11], we expect the probability
that a vertex $v$ becomes infected to be ‘locally determined’, we might hope
that for each vertex $v$ there is a critical probability $\tilde{p}_{c}(v)$,
which depends only on its degree $d(v)$, such that significantly above this
threshold it is very likely that $v$ becomes infected, whereas significantly
below this threshold there is a positive probability that $v$ remains
uninfected, at least for the first few rounds of the process. In particular,
we should expect the process to percolate when $p\gg\max_{v\in
V(G)}\tilde{p}_{c}(v)$, and we should expect the process not to percolate when
$p\ll\min_{v\in V(G)}\tilde{p}_{c}(v)$.
Let us describe in more detail how we make this heuristic argument precise.
For the $1$-statement we note that, since we are considering probabilities
close to $\frac{1}{2}$, the probability that a vertex is not infected in the
first round is roughly $\frac{1}{2}$. We first show that, for each $v\in V(G)$,
when $p$ is above $\tilde{p}_{c}(v):=\frac{1}{2}-\frac{1}{2}\sqrt{\frac{\log
d(v)}{d(v)}}$, the probability that $v$ is _not infected_ after two rounds is
already much smaller, at most $\frac{1}{4}$ (Lemma 4.1).
We then bootstrap this result twice, first to show that the probability that
$v$ is not infected after five rounds is exponentially small in $d(v)$ (Lemma
4.2), and then again to show that the probability that $v$ is not infected
after eleven rounds is super-exponentially small in $d(v)$ (Lemma 4.3). Since
we assume that the graphs in $\mathcal{H}(K)$ are not too large, roughly of
exponential order, this is sufficient to deduce the $1$-statement using a
union bound. To avoid dependencies when bootstrapping our results, and to
simplify our analysis, we actually analyse a slightly more restrictive
bootstrap percolation process, which is dominated by the original process,
where the local infection parameter $r(v)$ is slightly larger than
$\frac{d(v)}{2}$. This broad strategy for proving the 1-statement follows the
approach of [11].
A key property of graphs in $\mathcal{H}(K)$ which allows us to bootstrap
these results is a certain “fractal self-symmetry”, which roughly says that we
can split these graphs into many copies of (lower-dimensional) graphs having the
same structural properties. This property has also been key to the study of
bond and site percolation on high-dimensional graphs [22, 23, 24].
For the $0$-statement, we analyse a slight variant of the majority bootstrap
process which dominates the original process, where the infection parameter
$r$ varies over time, starting slightly below $\frac{d(v)}{2}$ and increasing
to $\frac{d(v)}{2}$ in a finite number of rounds. This idea, which originates
in [11], allows us to assume that vertices which become infected in round $i$
for $i>1$ have a significant number of neighbours which became infected in
round $i-1$. We first show that in this new process, a vertex which is not
infected by the second round is extremely unlikely to become infected in the
third round, and so whp this process stabilises after two rounds (Lemma 5.1).
Here, a judicious choice of parameterisation simplifies the analysis of this
process, allowing us to conclude that the process stabilises after only two
rounds. We then conclude by showing that whp some vertices are not infected by
the second round (Lemma 5.2).
The paper is structured as follows: In Section 2 we introduce some notation
and probabilistic tools that we use. In Section 3 we explicitly describe the
structural assumptions satisfied by our class $\mathcal{H}(K)$ of high-
dimensional graphs. The proof of Theorem 1.4 is then split up into a proof of
the $1$-statement in Section 4 and a proof of the $0$-statement in Section 5,
with some of the technical details deferred to Sections 6 and 7. Afterwards we
give specific examples of several graphs that are contained in the class
$\mathcal{H}$ of graphs in Section 8. We close with a discussion of the limits
of the techniques used and some open questions in Section 9.
## 2\. Preliminaries
For $n\in\mathbb{N}$ we denote by $[n]$ the set of integers up to $n$, i.e.,
$[n]:=\\{1,\dots,n\\}$. Given $y,z\in\mathbb{R}$ we will write $y\pm z$ to
denote the interval $[y-z,y+z]$. Similar to $O(\cdot)$ notation, an inclusion
is meant whenever $\pm$ appears in the context of an equality. For example,
$x=y\pm z$ means that $x\in[y-z,y+z]$. Whenever the base of a logarithm is not
specified, we use the natural logarithm, i.e., $\log x=\ln x=\log_{e}x$. For
ease of presentation we will omit floor and ceiling signs in calculations.
### 2.1. Probabilistic tools
In this section we will state a number of probabilistic tools which are used
throughout the paper.
We will assume knowledge of Markov’s and Chebyshev’s inequalities as standard
(see [3] for basic probabilistic background). However, throughout the paper we
will need much more precise tail bounds, both from above and below, for
binomial and related distributions. The first is a standard form of the
Chernoff bounds, see for example [3, Appendix A].
###### Lemma 2.1.
Let $d\in\mathbb{N}$, $0<p<1$, and $X\sim\operatorname{Bin}(d,p)$. Then
1. $(a)$
For every $t\geqslant 0$,
$\mathbb{P}\big{[}|X-dp|\geqslant
t\big{]}\;\leqslant\;2\exp\left(-\frac{2t^{2}}{d}\right);$
2. $(b)$
For every $b\geqslant 1$,
$\mathbb{P}[X\geqslant bdp]\leqslant\left(\frac{e}{b}\right)^{bdp}.$
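As a quick numerical sanity check of part (a), not part of the original statement, one can compare the empirical tail of a binomial sample against the bound:

```python
import numpy as np

rng = np.random.default_rng(1)
d, p, t = 1000, 0.5, 50
samples = rng.binomial(d, p, size=200_000)
empirical = np.mean(np.abs(samples - d * p) >= t)
bound = 2.0 * np.exp(-2.0 * t**2 / d)
print(empirical, bound)  # empirical tail (~0.002) lies below the bound (~0.013)
```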
At times we will also need an anti-concentration result for binomial random
variables. As these converge in distribution to a normal distribution, we will
derive this result from tail bounds for the standard normal distribution
$N(0,1)$.
Proofs can be found in [51, Section 5.6].
###### Lemma 2.2.
For $d\in\mathbb{N}$, $p=p(d)\in(0,1)$ and $f(d)=o(d^{1/6})$, it holds that
$\mathbb{P}\left[\frac{\operatorname{Bin}(d,p)-dp}{\sqrt{dp(1-p)}}\geqslant
f(d)\right]=(1+o(1))\mathbb{P}\left[N(0,1)\geqslant f(d)\right].$
Moreover, if $f(d)\to\infty$ as $d\to\infty$, the probability that the
standard normal distribution exceeds $f(d)$ satisfies
$\mathbb{P}[N(0,1)\geqslant
f(d)]=\frac{1+o(1)}{f(d)\sqrt{2\pi}}\exp\left(-\frac{f(d)^{2}}{2}\right).$
Since binomial random variables can be written as sums of independent
Bernoulli random variables, the standard inequality due to Hoeffding [33]
readily implies the following lemma.
###### Lemma 2.3.
Let $k,d_{1},\ldots,d_{k}\in\mathbb{N}$ and $p\in(0,1)$. Let
$X_{i}\sim\operatorname{Bin}(d_{i},p)$ for each $i\in[k]$, let
$Y=\sum_{i=1}^{k}iX_{i}$, and let $D(k)=\sum_{i=1}^{k}i^{2}d_{i}$. Then, for
every $\tau>0$,
$\mathbb{P}\big{[}Y\geqslant\mathbb{E}[Y]+\tau\big{]}\leqslant\exp\left(-\frac{2\tau^{2}}{D(k)}\right)\leqslant\exp\left(-\frac{2\tau^{2}}{k\cdot\mathbb{E}[Y]}\right).$
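Again purely as an illustration, a small Monte Carlo experiment can be used to sanity-check the first bound of Lemma 2.3; the parameters $d_{i}$, $p$ and $\tau$ below are arbitrary.

```python
# Illustration only: Monte Carlo sanity check of the first bound in Lemma 2.3.
import random
from math import exp

random.seed(0)
p = 0.5
d = {1: 40, 2: 30, 3: 20}                    # d_i for i = 1, 2, 3
D = sum(i * i * di for i, di in d.items())   # D(k) = sum_i i^2 * d_i
EY = p * sum(i * di for i, di in d.items())  # E[Y]
tau = 25.0

trials = 50_000
hits = 0
for _ in range(trials):
    y = sum(i * sum(random.random() < p for _ in range(di)) for i, di in d.items())
    hits += y >= EY + tau
print(hits / trials, "<=", exp(-2 * tau**2 / D))  # bound exp(-2*tau^2/D(k))
```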
We will also need the following lemma from [11] which broadly tells us that,
if we think of a $\operatorname{Bin}(n,p)$ random variable as the number of
successful trials in a sequence of $n$ independent trials with success
probability $p$, then there is only a slight correlation between the result
and the outcome of the first trial.
###### Lemma 2.4 ([11], Lemmas 3.8 and 3.9).
Let $d\in\mathbb{N}$, let $p=p(d)=\frac{1}{2}-o(1)$. Let $X_{1},\ldots,X_{d}$
be independent and identically distributed $\textup{Ber}(p)$ random variables
and let $X=\sum_{i=1}^{d}X_{i}\sim\operatorname{Bin}(d,p)$. Then, for any
$0\leqslant m=m(d)\leqslant d/2$,
$\mathbb{P}\left[X\geqslant
m\nonscript\>\middle|\nonscript\>\mathopen{}X_{1}=1\right]=\big{(}1+o(1)\big{)}\,\mathbb{P}\big{[}X\geqslant
m\big{]}$
and
$\mathbb{P}\left[X_{1}=1\nonscript\>\middle|\nonscript\>\mathopen{}X\geqslant
m\right]=\big{(}1+o(1)\big{)}\,\mathbb{P}\big{[}X_{1}=1\big{]}.$
By induction and the law of total probability, it follows from Lemma 2.4 that
for any bounded $k\in\mathbb{N}$ and any
$\varepsilon=(\varepsilon_{1},\ldots,\varepsilon_{k})\in\\{0,1\\}^{k}$,
$\mathbb{P}\big{[}X\geqslant
m\nonscript\>\big{|}\nonscript\>\mathopen{}X_{1}=\varepsilon_{1},\ldots,X_{k}=\varepsilon_{k}\big{]}=\big{(}1+o(1)\big{)}\,\mathbb{P}\big{[}X\geqslant
m\big{]}.$
Finally we will need the fact that the median of a binomial random variable is
essentially equal to its mean, see for example [39, Theorem 1].
###### Proposition 2.5.
Let $X\sim\operatorname{Bin}(d,p)$. Then the median of $X$ is either
$\lfloor\mathbb{E}[X]\rfloor$ or $\lceil\mathbb{E}[X]\rceil$, and thus
$\mathbb{P}[X\geqslant C]\geqslant\frac{1}{2}$ for any integer $C$ such that
$C\leqslant\mathbb{E}[X]$.
## 3\. Nice structural properties of geometric graphs
In this section we discuss the properties of the graphs in the class
$\mathcal{H}(K)$. Roughly, one can think of these graphs as having a structure
which is in some sense _close_ to being governed by some _local coordinate
system_, where edges are only allowed between vertices which differ in a
single coordinate. Crucially there are only ever a bounded number of
neighbours in each coordinate.
Given a graph $G$ we will write $d(G)$ for the average degree of $G$,
$\delta(G)$ for the minimum degree of $G$ and $\Delta(G)$ for the maximum
degree of $G$. Given two vertices $x,y\in V(G)$ we will write
$\operatorname{dist}_{G}(x,y)$ for the _distance_ between $x$ and $y$ in $G$,
that is, the length of the shortest $x-y$ path in $G$. Given
$k\in\mathbb{N}\cup\\{0\\}$ and $x\in V(G)$ we let
$B_{G}(x,k):=\\{y\in V(G)\colon\operatorname{dist}_{G}(x,y)\leqslant k\\}$
be the _ball of radius $k$_ centred at $x$ and
$S_{G}(x,k):=B_{G}(x,k)\setminus B_{G}(x,k-1)$
be the _sphere of radius $k$_ centred at $x$, where $S_{G}(x,0):=\\{x\\}$.
Note that $B_{G}(x,0)=\\{x\\}$, $S_{G}(x,1)=N_{G}(x)$, and
$B_{G}(x,1)=\\{x\\}\cup N_{G}(x)$. When the underlying graph $G$ is clear from
the context, we will omit the subscript in this notation.
Given $K\in\mathbb{N}$ define the _class $\mathcal{H}(K)$ of geometric graphs_
recursively by taking the single-vertex graph $K_{1}\in\mathcal{H}(K)$ and
then taking every graph $G$ satisfying Properties P1, P2, P3, P4, P5 and P6
below.
###### Property P1 (Locally almost regular).
For every $x\in V(G)$, $\ell\in\mathbb{N}$ and $y\in S(x,\ell)$, we have
$|d(x)-d(y)|\leqslant K\ell.$
In this case we say that $G$ is _locally $K$-almost regular_. Heuristically,
vertices at a small distance should agree in almost all their coordinates, and
so their degrees, which are the sum of the number of neighbours in each
coordinate, should be similar.
###### Property P2 (Bounded backwards expansion).
For every $x\in V(G)$, $\ell\in\mathbb{N}$ and $y\in S(x,\ell)$, we have
$|N(y)\cap B(x,\ell)|\leqslant K\ell.$
In this case we say that $G$ has _$K$ -bounded backwards expansion_. Note that
having bounded backwards expansion is qualitatively equivalent to having good
_neighbourhood expansion_ as in Theorem 1.2 from [11]. Heuristically, a vertex
$y$ at distance $\ell$ from $x$ differs from $x$ in at most $\ell$
coordinates, and the only neighbours of $y$ which are not further away from
$x$ also differ in a subset of these coordinates. Note that Properties P1 and
P2 imply that
$|S(x,\ell)|=\Theta(d(x)^{\ell}).$
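As a concrete illustration, both properties are easy to verify on the hypercube $Q_{n}$ (one of the motivating examples for the class $\mathcal{H}(K)$): $Q_{n}$ is $n$-regular, and a vertex $y\in S(x,\ell)$ has exactly $\ell$ neighbours in $B(x,\ell)$, obtained by flipping back one of the $\ell$ coordinates in which $y$ differs from $x$. The following sketch checks this numerically; the bitmask encoding of vertices is an implementation choice, not notation from the paper.

```python
# Checking Properties P1 and P2 on the hypercube Q_n (here n = 10), with
# vertices encoded as n-bit masks and edges between masks at Hamming distance 1.
n = 10
x = 0  # the all-zeros vertex

def dist(u, v):
    return bin(u ^ v).count("1")  # Hamming distance in Q_n

for ell in range(1, 5):
    sphere = [y for y in range(2**n) if dist(x, y) == ell]
    # P1 holds with any K >= 1: Q_n is n-regular, so |d(x) - d(y)| = 0.
    # P2 with K = 1: each y in S(x, ell) has exactly ell neighbours in B(x, ell).
    back = max(sum(dist(x, y ^ (1 << i)) <= ell for i in range(n)) for y in sphere)
    assert back == ell
    print(ell, len(sphere), back)  # |S(x, ell)| = binom(n, ell)
```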
###### Property P3 (Typical local structure).
For every $x\in V(G)$ there is a set $D\subseteq V(G)\setminus\\{x\\}$ of
_non-typical_ vertices such that for every $\ell\in\mathbb{N}$ the following
hold:
1. $(i)$
$|D\cap S(x,\ell)|\leqslant K^{\ell-1}d(x)^{\ell-1}$;
2. $(ii)$
$|D\cap N(y)|\leqslant K\ell$ for every vertex $y\in S(x,\ell)\setminus D$;
3. $(iii)$
every two vertices in $S(x,\ell)\setminus D$ have at most one common neighbour
in $S(x,\ell+1)\setminus D$.
For every graph $G$ that fulfils Property P3 and every $x\in V(G)$, we set
$S_{0}(x,\ell):=S(x,\ell)\setminus D.$
Heuristically, since the number of neighbours in each coordinate is always
bounded, a typical vertex at distance $\ell$ from $x$ differs in precisely
$\ell$ coordinates from $x$. The set $D$ takes care of the non-typical
vertices, as well as taking into account the ‘error’ in our approximate
coordinate system. Conditions i and ii say that the set $D$ is small and
locally sparse, whereas Condition iii clarifies a property we should expect of
$S_{0}(x,\ell)$ – if two vertices $u$ and $w$ differ from $x$ in precisely
$\ell$ coordinates there is at most one vertex $v$ which differs from $x$ in
precisely $\ell+1$ coordinates and differs from $u$ and $w$ in a single
coordinate.
###### Property P4 (Projection).
For every $x\in V(G)$, $\ell\in\mathbb{N}$ and $y\in S(x,\ell)$, there is a
subgraph $G(y)$ of $G$ such that the following hold:
1. $(i)$
$y\in V(G(y))$;
2. $(ii)$
$G(y)\in\mathcal{H}(K)$;
3. $(iii)$
$V(G(y))\cap B(x,\ell-1)=\varnothing$;
4. $(iv)$
$|d_{G(y)}(w)-d_{G}(w)|\leqslant K\ell$ for all $w\in V(G(y))$.
In this case we say that $G$ has the _$K$ -projection property_.
Heuristically, the idea here is that vertices at a short distance will only
differ in a small number of ‘coordinates’, and by fixing these coordinates and
allowing the others to vary, we should obtain a lower-dimensional graph $G(y)$
with similar properties to $G$ which is disjoint from $B(x,\ell-1)$. This
property reflects a notion of fractal self-symmetry of the graph.
We note that in all our applications, the graphs $G(y)$ can in fact be taken
to be ‘lower dimensional’ graphs with a similar description to $G$. For
example, in the case of the hypercube these projections can be taken to be
subcubes, although in general they might come from a slightly more general
family than the original graph $G$.
###### Property P5 (Separation).
For every $x\in V(G)$, $\ell\in\mathbb{N}$ and $y\in S_{0}(x,\ell)$, we have
$|B(y,2\ell-1)\cap S_{0}(x,\ell)|\leqslant\ell K^{\ell-1}d(x)^{\ell-1}.$
In this case we say that $G$ satisfies the _$K$ -separation property_.
Heuristically, vertices in $S_{0}(x,\ell)\subseteq S(x,\ell)$ will typically
differ from $x$ in $\ell$ coordinates, and so for each vertex $y\in
S_{0}(x,\ell)$ the vertices in $B(y,2\ell-1)\cap S_{0}(x,\ell)$ will typically
agree with $y$ in one coordinate in which it differs from $x$, leaving one
fewer ‘degree of freedom’ for the choice of such vertices.
In particular, the $K$-separation property implies the following separation
lemma.
###### Lemma 3.1 (Separating partition).
Suppose $G$ satisfies Properties P3 and P5 for some $K\in\mathbb{N}$. For each
$x\in V(G)$ and $\ell\in\mathbb{N}$, there exists a partition $\mathcal{P}$ of
$S(x,\ell)$,
$S(x,\ell)=P_{1}\cup\dots\cup P_{m},$
into $m$ disjoint sets $P_{1},\ldots,P_{m}$ such that the following hold:
1. $(i)$
$m\leqslant(\ell+1)K^{\ell-1}d(x)^{\ell-1}$;
2. $(ii)$
$\operatorname{dist}(y_{1},y_{2})\geqslant 2\ell$ for all distinct
$y_{1},y_{2}\in P_{j}$ and every $j\in[m]$.
###### Proof.
By Property P3 there are at most $K^{\ell-1}d(x)^{\ell-1}$ vertices in $D\cap
S(x,\ell)$. Choose a new partition class $P_{i}$ for each such vertex.
Choose the partition of $S_{0}(x,\ell)$ greedily by putting each vertex $w\in
S_{0}(x,\ell)$ in the first partition class such that the distance condition
is satisfied. For $y\in S_{0}(x,\ell)$, by Property P5 we have
$|B(y,2\ell-1)\cap S_{0}(x,\ell)|\leqslant\ell K^{\ell-1}d(x)^{\ell-1}$. Thus
we get at most this many partition classes for $S_{0}(x,\ell)$.
Combining the number of partition classes from $S(x,\ell)\cap D$ and
$S_{0}(x,\ell)$ proves the claim. ∎
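The greedy construction above is entirely algorithmic. The following sketch makes it explicit for a generic graph given as an adjacency dictionary; for simplicity it does not treat the exceptional set $D$ separately, so while it always returns classes of pairwise distance at least $2\ell$, the number of classes may exceed the bound stated in Lemma 3.1.

```python
# A sketch of the greedy partitioning in the proof of Lemma 3.1.
from collections import deque

def ball(graph, source, radius):
    """Vertex set of B(source, radius), computed by breadth-first search."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if dist[u] < radius:
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
    return dist.keys()

def separating_partition(graph, sphere, ell):
    """Greedily split `sphere` into classes of pairwise distance >= 2*ell."""
    classes = []
    for w in sphere:
        near = ball(graph, w, 2 * ell - 1)  # vertices within distance 2*ell - 1 of w
        # place w in the first class none of whose vertices lies in B(w, 2*ell - 1)
        for cls in classes:
            if cls.isdisjoint(near):
                cls.add(w)
                break
        else:
            classes.append({w})
    return classes
```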
We note that while Properties P1, P2, P3, P4 and P5 make assertions for each
$\ell\in\mathbb{N}$, in fact in the proof of Theorem 1.4 we will only need
these properties for $\ell\leqslant 6$. Finally, our proof methods require
that the graph $G$ is not too large.
###### Property P6 (Exponential order).
$G$ has at most $\exp(K\delta(G))$ vertices.
## 4\. Proof of the 1-statement of Theorem 1.4
Throughout the rest of the paper we assume that $G$ is a graph in
$\mathcal{H}(K)$ for some $K\in\mathbb{N}$ (so $G$ satisfies Properties P1,
P2, P3, P4, P5 and P6 for this $K$), and all asymptotics will be as
$\delta(G)\to\infty$.
In this section we will prove the 1-statement of Theorem 1.4: For any
$\varepsilon>0$, if
$p>\frac{1}{2}-\left(\frac{1}{2}-\varepsilon\right)\sqrt{\frac{\log\Delta(G)}{\Delta(G)}},$
then
$\Phi(p,G):=\mathbb{P}[{\bf A}_{p}\text{ percolates on }G]\to 1.$
In fact, we will prove that
$\mathbb{P}[\exists x\in V(G):x\notin A_{11}]=o(1).$
Along the proof we will introduce auxiliary lemmas, but their proofs are
deferred to Section 6.
For each vertex $x\in V(G)$ let
$\sigma(x):=\sqrt{\frac{\log d(x)}{d(x)}}$
and choose the probability $p$ to be significantly larger than
$\tilde{p}_{c}(x):=\frac{1}{2}-\frac{1}{2}\sigma(x).$
More precisely, for $c<\frac{1}{2}$ we let
$p:=\frac{1}{2}-c\sigma(x).$
Since the initially infected set of vertices $A_{0}$ is distributed as a
$p$-random subset ${\bf A}_{p}$ of $V(G)$, the probability that a vertex $x$
lies in $A_{0}$ is approximately $\frac{1}{2}$. We start by showing that the
probability that $x$ is infected after the first two rounds is already
significantly larger than $\frac{1}{2}$.
In fact, we will show a slightly stronger statement, bounding the probability
that a vertex $x$ is infected in the first two rounds of a slightly weaker
infection process, where the threshold to infect a vertex $v$ is raised from
$\frac{d(v)}{2}$ to $\frac{d(v)}{2}+m$ for some fixed
$m\in\mathbb{N}\cup\\{0\\}$. More precisely, we define recursively
$A_{0}(m):=A_{0}$ and for each $i\in\mathbb{N}\cup\\{0\\}$
$A_{i+1}(m):=A_{i}(m)\cup\left\\{v\in V(G)\colon|A_{i}(m)\cap
N(v)|\geqslant\frac{d(v)}{2}+m\right\\}.$
Note that if $m=0$, then $A_{i}=A_{i}(m)$ for all $i\in\mathbb{N}$ and that
for any $m\geqslant 0$,
$A_{i}(m)\subseteq A_{i}\text{ for all }i\in\mathbb{N}.$
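The process defined above is straightforward to simulate. The following minimal Python sketch (illustrative only; `graph` is an adjacency dictionary and all names are ours) implements exactly this recursion, with $A_{0}$ a $p$-random subset of the vertex set.

```python
# A simulation sketch of the m-shifted majority bootstrap process A_i(m).
import random

def majority_bootstrap(graph, p, m=0, max_rounds=20, seed=0):
    rng = random.Random(seed)
    infected = {v for v in graph if rng.random() < p}  # A_0(m) := A_0 ~ A_p
    for _ in range(max_rounds):
        newly = {v for v in graph if v not in infected
                 and sum(u in infected for u in graph[v])
                     >= len(graph[v]) / 2 + m}         # threshold d(v)/2 + m
        if not newly:
            break                                      # the process has stabilised
        infected |= newly
    return infected
```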
The event that a vertex $x$ is infected by the second round depends only on
vertices at distance at most two from $x$. More precisely, whether $x$ is
infected in the second round only depends on how many neighbours of $x$ are
infected initially and how many are infected in the first round. The number of
initially infected neighbours is given by a binomial random variable with
expectation $d(x)/2-c\sqrt{d(x)\log d(x)}$, and so in order to infect $x$ in
the second round, on average we need around $c\sqrt{d(x)\log d(x)}$ many
additional neighbours of $x$ to be infected in the first round.
For each neighbour of $x$ there is a small but non-negligible chance that it
is infected in the first round. Indeed, since the number of initially infected
neighbours of a vertex is distributed binomially, we can use the anti-
concentration statement in Lemma 2.2 to calculate quite precisely the
probability that it is infected in the first round, which is
$\Omega\left(d(x)^{-2c^{2}}\right)$, as we will show later, and so the expected
number of neighbours of $x$ which will be infected in the first round is of
order $\Omega\left(d(x)^{1-2c^{2}}\right)\gg\sqrt{d(x)\log d(x)}$.
Furthermore, the local structure of graphs in $\mathcal{H}(K)$ ensures that
for two neighbours of $x$ the events that they get infected in the first round
are close to independent. This allows us to show that it is reasonably likely
that $x$ will be infected in the second round using a second moment argument.
###### Lemma 4.1.
Let $m\in\mathbb{N}\cup\\{0\\}$, $x\in V(G)$, $c<\frac{1}{2}$,
$p=\frac{1}{2}-c\sigma(x)$ and $A_{0}\sim{\bf A}_{p}$. If there is a
$K\in\mathbb{N}$ such that $G\in\mathcal{H}(K)$, then
$\mathbb{P}\left[x\in A_{2}(m)\right]\geqslant\frac{3}{4}+o(1).$
We note that for the proof of Lemma 4.1 we will only need Properties P1, P2
and P3 (see Section 6).
Next we show that, above the probability
$\tilde{p}_{c}(x)=\frac{1}{2}-\frac{1}{2}\sigma(x)$, the probability that a
vertex $x$ is not infected shrinks quickly, from a constant probability in the
second round to an exponentially small one by the fifth round.
Let us sketch the main ideas. If a vertex $x$ is not infected by the fifth
round, then there is a subset $T^{\prime}\subseteq N(x)$ of size
$\frac{d(x)}{2}$ which is not infected by the fourth round. Since no vertex in
$T^{\prime}$ is infected by the fourth round, and each vertex in $T^{\prime}$
has degree approximately $d(x)$, there are at most roughly $d(x)|T^{\prime}|/2$
edges from $T^{\prime}$ to $A_{3}$. However, using the explicit structure
of our graph $G\in\mathcal{H}(K)$, it is easy to show that the number of edges
from $T^{\prime}$ to the sphere $S(x,2)$ is approximately $d(x)|T^{\prime}|$.
Since, by Lemma 4.1, each vertex is in $A_{2}$ (and hence also in $A_{3}$)
with probability larger than say $2/3$, it should be very unlikely that fewer
than half the edges from $T^{\prime}$ to $S(x,2)$ go to vertices in $A_{3}$.
However, due to the dependencies between the events that vertices in $S(x,2)$
lie in $A_{3}$, it is difficult to make this precise.
Instead, we look one round further – if many vertices in $T=N(T^{\prime})\cap
S(x,2)$ are not infected by the third round then we again find that at most
half of the edges from $T$ to the sphere $S(x,3)$ go to vertices in $A_{2}$.
However, by Lemma 3.1 we can partition $S(x,3)$ into $O(d^{2})$ sets whose
pairwise distance is at least $4$. In particular, in any partition class the
events that the vertices lie in $A_{2}$ are mutually independent. By a double-
counting argument we can find one partition class $B\subseteq S(x,3)$ in which
less than half of the edges from $T$ to $B$ go to vertices in $A_{2}$, which
is very unlikely since each vertex in $B$ lies in $A_{2}$ with probability at
least $2/3$ and these events are independent.
However, whether the vertices in $B$ lie in $A_{2}$ might still depend on the
sets of initially infected vertices in $S(x,1)$ and $S(x,2)$ which also
influence our choices of $T^{\prime},T$ and $B$. To get around this issue we
use Property P4 to find for each $y\in B$ a slightly smaller graph
$G(y)\in\mathcal{H}(K)$ with $G(y)\subseteq G\setminus B(x,2)$, where the
vertex degrees and structure are approximately preserved. This guarantees that
infection with a slightly increased threshold in $G(y)$ is sufficient to imply
infection in $G$, and allows us to apply Lemma 4.1 to $y$ inside the subgraph
$G(y)$.
###### Lemma 4.2.
Let $x\in V(G)$, $c<\frac{1}{2}$, and $p=\frac{1}{2}-c\sigma(x)$. If there is
a $K\in\mathbb{N}$ such that $G\in\mathcal{H}(K)$, then there exists a
$\beta>0$ (independent of $x$) such that
$\mathbb{P}\big{[}x\notin A_{5}\big{]}\leqslant\exp\bigl{(}-\beta
d(x)\bigr{)}.$
We note that for the proof of Lemma 4.2 we will use Properties P1, P2, P3, P4
and P5 (see Section 6).
Finally, we bootstrap Lemma 4.2 to show that after six more rounds the
probability that a vertex is not infected shrinks even further, and becomes
super-exponentially small. Since the order of the graphs in Theorem 1.4 are
exponential in their vertex degrees, this will be enough to deduce that, above
an appropriate threshold, whp all vertices will become infected by the
eleventh round.
Here we can afford to be slightly less careful than in Lemma 4.2. If a vertex
$x$ is not infected by the $k$-th round, it is relatively easy to show that
the number of vertices at distance $\ell$ which are not infected by the
$(k-\ell)$-th round must be growing like $\Theta\left(d(x)^{\ell}\right)$. In
particular, if $x$ is not infected by the eleventh round, there is some set
$T^{\prime}$ with $\Theta(d(x)^{6})$ many vertices in the sphere $S(x,6)$
which are uninfected by the fifth round. By Lemma 3.1 we can partition
$S(x,6)$ into $O(d(x)^{5})$ many sets within which all vertices have pairwise
distance at least twelve. By an averaging argument, some partition class $P$
must contain a large number of vertices in $T^{\prime}$. However, since the
events that the vertices of $P$ are not infected by the fifth round are
independent, and each is unlikely by Lemma 4.2, it follows from Chernoff’s
inequality (Lemma 2.1) that it is extremely unlikely that any partition class
contains a large number of vertices in $T^{\prime}$.
###### Lemma 4.3.
Let $x\in V(G)$, $c<\frac{1}{2}$, and $p=\frac{1}{2}-c\sigma(x)$. If there is
a $K\in\mathbb{N}$ such that $G\in\mathcal{H}(K)$, then there exists a
$\beta>0$ (independent of $x$) such that
$\mathbb{P}\big{[}x\notin A_{11}\big{]}<\exp\bigl{(}-\beta d(x)^{2}\bigr{)}.$
We note that for the proof of Lemma 4.3 we will use Properties P1, P2, P3, P4
and P5 (see Section 6).
We are now in a position to prove the $1$-statement of Theorem 1.4.
###### Proof of $1$-statement in Theorem 1.4.
Let $K\in\mathbb{N}$ and let $(G_{n})_{n\in\mathbb{N}}$ be a sequence of
graphs such that $G_{n}\in\mathcal{H}(K)$ and $\delta(G_{n})\to\infty$ as
$n\to\infty$. We write $G\coloneqq G_{n}$.
Let $\varepsilon>0$ and let
$p>\frac{1}{2}-\left(\frac{1}{2}-\varepsilon\right)\sqrt{\frac{\log\Delta(G)}{\Delta(G)}}.$
Since $d\mapsto\frac{1}{2}-\left(\frac{1}{2}-\varepsilon\right)\sqrt{(\log
d)/d}$ is an increasing function in $d$, by Lemma 4.3 there exists $\beta>0$
such that for every $x\in V(G)$
$\mathbb{P}\left[x\notin A_{11}\right]\leqslant\exp(-\beta
d(x)^{2})\leqslant\exp(-\beta\delta(G)^{2}).$
Hence, by Property P6 we have $|V(G)|\leqslant\exp(K\delta(G))$ and so
$\displaystyle 1-\Phi(p,G)=\mathbb{P}[{\bf A}_{p}\text{ does not percolate on
}G]\ $ $\displaystyle\leqslant\mathbb{P}[\exists x\in V(G):x\notin A_{11}]$
$\displaystyle\leqslant|V(G)|\cdot\exp(-\beta\delta(G)^{2})=o(1),$ (1)
concluding the proof. ∎
###### Remark 4.4.
As mentioned before, Lemmas 4.1, 4.2 and 4.3 do not use Property P6. The union
bound in the last step of (1) is the only use of Property P6 in the proof
above. Therefore, the $1$-statement also holds for graphs satisfying
Properties P1, P2, P3, P4 and P5 and $|V(G)|=\exp(o(\delta(G)^{2}))$.
## 5\. Proof of the 0-statement of Theorem 1.4
In this section we will prove the 0-statement of Theorem 1.4: For any
$\varepsilon>0$, if
$p<\frac{1}{2}-\left(\frac{1}{2}+\varepsilon\right)\sqrt{\frac{\log\delta(G)}{\delta(G)}},$
then
$\Phi(p,G):=\mathbb{P}[{\bf A}_{p}\text{ percolates on }G]\to 0.$
Along the proof we will need some auxiliary lemmas, which are proved in
Section 7. In fact, instead of directly analysing the majority bootstrap
percolation process, we will analyse a generalised process which dominates the
original majority bootstrap percolation process. This new process introduced
in [11] is called the _$\operatorname{Boot}_{k}(\gamma)$ process_: Given
$k\in\mathbb{N}$ and some function $\gamma\colon V(G)\to\mathbb{R}^{+}$, we
recursively define $\hat{A}_{0}:=A_{0}$ and for each
$\ell\in\mathbb{N}\cup\\{0\\}$
$\hat{A}_{\ell+1}:=\hat{A}_{\ell}\cup\left\\{x\in
V(G):\left|N(x)\cap\hat{A}_{\ell}\right|\geqslant\frac{d(x)}{2}-\max\\{0,k-\ell\\}\cdot\gamma(x)\right\\}.$
In other words, the initial infection set is the same as in the majority
bootstrap percolation process, but the infection spreads more easily in the
first $k$ rounds. More precisely, a vertex $x$ is infected in the first round
if it has $d(x)/2-k\cdot\gamma(x)$ infected neighbours, and this requirement
is gradually strengthened over the first $k$ rounds. After the $k$-th round,
the process evolves exactly as the majority bootstrap percolation process
would do.
In particular, given $A_{0}$, we note that
$A_{i}\subseteq\hat{A}_{i}\quad\text{ for all}\quad i\in\mathbb{N}.$
Crucially, however, if a vertex $x$ becomes infected in round $\ell+1\leqslant
k$ of the $\operatorname{Boot}_{k}(\gamma)$ process, then at least $\gamma(x)$
of its neighbours must have become infected in round $\ell$. This simplifies
the task of showing a vertex does _not_ become infected.
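In the same illustrative style as the sketch in Section 4, the $\operatorname{Boot}_{k}(\gamma)$ process differs only in its round-dependent threshold schedule; here `gamma` is a function mapping each vertex to $\gamma(v)$.

```python
# A sketch of the Boot_k(gamma) recursion: in round l+1 a vertex v becomes
# infected once it has at least d(v)/2 - max(0, k - l) * gamma(v) infected
# neighbours; after round k this is the usual majority bootstrap threshold.
def boot_k(graph, initially_infected, gamma, k):
    infected = set(initially_infected)  # \hat{A}_0 := A_0
    l = 0
    while True:
        newly = {v for v in graph if v not in infected
                 and sum(u in infected for u in graph[v])
                     >= len(graph[v]) / 2 - max(0, k - l) * gamma(v)}
        if not newly:
            return infected, l          # stabilised: \hat{A}_{l+1} = \hat{A}_l
        infected |= newly
        l += 1
```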
For our application, we fix $k=2$ and for each $x\in V(G)$ we let
$\gamma(x):=\sqrt{\frac{d(x)}{\vartheta(d(x))}},$
where $\vartheta(d)=\sqrt{\log d}$. Our goal is to show that if $p$ is small
enough, i.e., $p\leqslant\frac{1}{2}-c\sqrt{\frac{\log\delta(G)}{\delta(G)}}$
for $c>1/2$, then whp
$\hat{A}_{2}=\hat{A}_{3}\neq V(G).$
In other words, whp the process $\operatorname{Boot}_{2}(\gamma)$ stabilises
after two rounds. Therefore,
$\bigcup_{i=0}^{\infty}A_{i}\subseteq\bigcup_{i=0}^{\infty}\hat{A}_{i}=\hat{A}_{2}\neq
V(G)$
and hence the majority bootstrap percolation process does not percolate.
Note that our choice of the parameter $\gamma(\cdot)$ simplifies the argument
presented in [11] and allows us to study the first three instead of four
rounds of the process, while obviating the need for some of the counting
arguments given in [11]. The choice of this parameter has to be such that
$\gamma(x)$ is asymptotically smaller than $d(x)\sigma(x)=\sqrt{d(x)\log
d(x)}$; indeed, since $\vartheta(d)=\sqrt{\log d}$ we have
$\gamma(x)=\sqrt{d(x)}\,(\log d(x))^{-1/4}$, and hence
$\gamma(x)/\sqrt{d(x)\log d(x)}=(\log d(x))^{-3/4}=o(1)$. Our choice of
$\gamma(x)$ to be very close to this bound allows us to simplify the argument,
at the cost of giving a weaker bound on the width of the critical window.
We start by showing that it is very likely that the
$\operatorname{Boot}_{2}(\gamma)$ process stabilises by the second round, by
bounding the probability that a vertex is infected in the third round. Recall
that, as defined in Section 4,
$\sigma(x):=\sqrt{\frac{\log d(x)}{d(x)}}$
for each vertex $x$.
###### Lemma 5.1.
Let $x\in V(G)$, $c>\frac{1}{2}$ and $p=\frac{1}{2}-c\sigma(x)$. If there is a
$K\in\mathbb{N}$ such that $G\in\mathcal{H}(K)$, then there exists a $\beta>0$
(independent of $x$) such that
$\mathbb{P}[x\in\hat{A}_{3}\setminus\hat{A}_{2}]\leqslant\exp\left(-\beta\gamma(x)^{2}\log
d(x)\right).$
We note that the proof of Lemma 5.1 uses Properties P1, P2 and P3 (see Section
7).
We will also need to show that it is unlikely that the
$\operatorname{Boot}_{2}(\gamma)$ process fully percolates by the second
round. Since it is very likely that around half of the vertices are initially
infected, and we can quite precisely bound the probability that a vertex is
infected in the first round, it will be sufficient to bound the probability
that a vertex is infected in the second round.
###### Lemma 5.2.
Let $x\in V(G)$, $c>\frac{1}{2}$ and $p=\frac{1}{2}-c\sigma(x)$. If there is a
$K\in\mathbb{N}$ such that $G\in\mathcal{H}(K)$, then
$\mathbb{P}[x\in\hat{A}_{2}\setminus\hat{A}_{1}]\leqslant\exp\left(-\sqrt{d(x)}\right).$
We note that the proof of Lemma 5.2 needs Properties P1, P2 and P3 (see
Section 7).
Finally, we observe that Lemmas 5.1 and 5.2 together with Property P6 imply
that whp the $\operatorname{Boot}_{2}(\gamma)$-process stabilises after the
second round, without fully percolating. We are now ready to prove the
$0$-statement of our main theorem.
###### Proof of the 0-statement of Theorem 1.4.
Let $K\in\mathbb{N}$ and let $(G_{n})_{n\in\mathbb{N}}$ be a sequence of
graphs such that $G_{n}\in\mathcal{H}(K)$ and $\delta(G_{n})\to\infty$ as
$n\to\infty$. We write $G:=G_{n}$.
Let $\varepsilon>0$ and let
$p\leqslant\frac{1}{2}-\left(\frac{1}{2}+\varepsilon\right)\sqrt{\frac{\log\delta(G)}{\delta(G)}}.$
Again, since the function
$d\mapsto\frac{1}{2}-\left(\frac{1}{2}+\varepsilon\right)\sqrt{(\log d)/d}$ is
increasing in $d$, it follows from Lemma 5.2 that for every $x\in V(G)$
$\mathbb{P}\big{[}x\in\hat{A}_{2}\setminus\hat{A}_{1}\big{]}=o(1).$
Furthermore, since $\gamma(x)=o(\sigma(x)d(x))$, it is a simple consequence of
Chernoff’s inequality (Lemma 2.1) that for every $x\in V(G)$
$\mathbb{P}\big{[}x\in\hat{A}_{1}\setminus\hat{A}_{0}\big{]}\leqslant\mathbb{P}\Big{[}\operatorname{Bin}\bigl{(}d(x),p\bigr{)}\geqslant\frac{d(x)}{2}-2\gamma(x)\Big{]}=o(1).$
Hence, by Markov’s inequality whp
$\big{|}\hat{A}_{2}\setminus\hat{A}_{0}\big{|}=o\bigl{(}|V(G)|\bigr{)}$. It
follows from Chernoff’s inequality (Lemma 2.1) that whp
$\big{|}\hat{A}_{0}\big{|}=\big{|}A_{0}\big{|}\leqslant\frac{3}{4}|V(G)|$, and
hence whp
$\big{|}\hat{A}_{2}\big{|}=\big{|}\hat{A}_{2}\setminus\hat{A}_{0}\big{|}+\big{|}\hat{A}_{0}\big{|}<|V(G)|.$
On the other hand, by Lemma 5.1 there exists $\beta>0$ such that for every
$x\in V(G)$
$\mathbb{P}\big{[}x\in\hat{A}_{3}\setminus\hat{A}_{2}\big{]}=\exp\left(-\beta\gamma(x)^{2}\log
d(x)\right)=\exp\bigl{(}-\omega(\delta(G))\bigr{)}.$
By Property P6, we have $|V(G)|\leqslant\exp(K\delta(G))$, and using the union
bound we get that whp $\hat{A}_{3}=\hat{A}_{2}$. It follows that whp
$A_{i}\subseteq\hat{A}_{i}=\hat{A}_{2}\neq V(G)$ for all $i\geqslant 2$.
Therefore
$\Phi(p,G):=\mathbb{P}[{\bf A}_{p}\text{ percolates on
}G]=\mathbb{P}\left[\bigcup_{i=0}^{\infty}A_{i}=V(G)\right]=o(1),$
completing the proof. ∎
## 6\. Eleven rounds suffice
In this section we prove the auxiliary lemmas (Lemmas 4.1, 4.2 and 4.3) needed
for the proof of the $1$-statement of Theorem 1.4 in Section 4.
Throughout this section we let $x\in V(G)$, $c<\frac{1}{2}$,
$p=\frac{1}{2}-c\sigma(x)$, where $\sigma(x)=\sqrt{\frac{\log d(x)}{d(x)}}$,
and we fix a $K\in\mathbb{N}$ such that $G\in\mathcal{H}(K)$. We start the
section with a simple corollary of the Central Limit Theorem (Lemma 2.2).
###### Lemma 6.1.
Let $d^{\prime}=\Theta(d(x))$ and $C\in\mathbb{R}$ be a constant. Then
$\mathbb{P}\left[\operatorname{Bin}(d^{\prime},p)\geqslant\frac{d^{\prime}}{2}+C\right]=(1+o(1))\mathbb{P}\left[\operatorname{Bin}(d^{\prime},p)\geqslant\frac{d^{\prime}}{2}\right].$
###### Proof.
Observe that since $(d^{\prime})^{-1/2}\ll\sigma(x)\ll(d^{\prime})^{-1/3}$,
the function
$f_{L}(d^{\prime})\coloneqq\frac{cd^{\prime}\sigma(x)+L}{\sqrt{d^{\prime}p(1-p)}}$
satisfies $1\ll f_{L}\ll(d^{\prime})^{1/6}$ for any constant $L$. Therefore,
by Lemma 2.2, it suffices to show that
$\mathbb{P}[N(0,1)\geqslant
f_{C}(d^{\prime})]=(1+o(1))\mathbb{P}[N(0,1)\geqslant f_{0}(d^{\prime})].$ (2)
Since $|f_{C}-f_{0}|=O((d^{\prime})^{-1/2})$, we may estimate
$\frac{1}{f_{C}\cdot\sqrt{2\pi}}\cdot\exp\left(-\frac{f_{C}^{2}}{2}\right)=\frac{1+o(1)}{f_{0}\cdot\sqrt{2\pi}}\cdot\exp\biggl{(}-\frac{f_{0}^{2}}{2}+O\bigl{(}(f_{C}-f_{0})\cdot
f_{0}\bigr{)}\biggr{)}=\frac{1+o(1)}{f_{0}\cdot\sqrt{2\pi}}\cdot\exp\left(-\frac{f_{0}^{2}}{2}\right).$
By the second part of Lemma 2.2, this implies (2), hence proving the lemma. ∎
We proceed to prove the aforementioned auxiliary lemmas.
### 6.1. Proof of Lemma 4.1
As in Lemma 4.1, we let $m\in\mathbb{N}\cup\\{0\\}$ and aim to prove
$\mathbb{P}\left[x\in A_{2}(m)\right]\geqslant\frac{3}{4}+o(1).$
Let $X_{0}:=N(x)\cap A_{0}$ be the set of neighbours of $x$ which are
initially infected and note that
$|X_{0}|\sim\operatorname{Bin}(d(x),p).$
Next we let
$X_{1}:=N(x)\cap(A_{1}(m)\setminus A_{0})$
be the set of the neighbours of $x$ which become infected in the first round.
We note that $x\in A_{2}(m)$ if and only if either $x\in A_{0}$ or
$|X_{0}|+|X_{1}|\geqslant\frac{d(x)}{2}+m$, and so we have
$\mathbb{P}\left[x\in A_{2}(m)\right]=\mathbb{P}\left[x\in
A_{0}\right]+\mathbb{P}\left[x\notin
A_{0}\right]\cdot\mathbb{P}\Big{[}|X_{0}|+|X_{1}|\geqslant\frac{d(x)}{2}+m\nonscript\>\Big{|}\nonscript\>\mathopen{}x\notin
A_{0}\Big{]}.$ (3)
In order to calculate the second term in (3) we consider the event
$\mathcal{E}_{0}$ that $|X_{0}|\geqslant\frac{d(x)}{2}+m-\ell$ and the event
$\mathcal{E}_{1}$ that $|X_{1}|\geqslant\ell$ for a judicious choice of
$m\leqslant\ell\leqslant\frac{d(x)}{2}+m$ which will be given later. By (3),
we have
$\displaystyle\mathbb{P}\left[x\in A_{2}(m)\right]$
$\displaystyle\geqslant\mathbb{P}\left[x\in A_{0}\right]+\mathbb{P}\left[x\notin
A_{0}\right]\cdot\mathbb{P}\left[\mathcal{E}_{0}\wedge\mathcal{E}_{1}\nonscript\>\middle|\nonscript\>\mathopen{}x\notin
A_{0}\right]$
$\displaystyle=p+(1-p)\cdot\mathbb{P}\left[\mathcal{E}_{0}\nonscript\>\middle|\nonscript\>\mathopen{}x\not\in
A_{0}\right]\cdot\mathbb{P}\left[\mathcal{E}_{1}\nonscript\>\middle|\nonscript\>\mathopen{}\mathcal{E}_{0}\wedge\left(x\notin
A_{0}\right)\right]$
$\displaystyle=p+(1-p)\cdot\mathbb{P}\left[\mathcal{E}_{0}\right]\cdot\mathbb{P}\left[\mathcal{E}_{1}\nonscript\>\middle|\nonscript\>\mathopen{}\mathcal{E}_{0}\wedge\left(x\notin
A_{0}\right)\right],$ (4)
where the last equality is because $\mathcal{E}_{0}$ is independent of the
event that $x\notin A_{0}$.
For ease of notation, let us write $\mathbb{P}^{*}$ for the probability
distribution conditioned on the events $\\{x\notin A_{0}\\}$ and
$\mathcal{E}_{0}$. Note that, for $y\in N(x)$, we may use the independence of
the events $\\{y\notin A_{0}\\}$ and $\\{x\notin A_{0}\\}$ as well as Lemma
2.4 on correlations to deduce that
$\displaystyle\mathbb{P}^{*}\left[y\notin A_{0}\right]=\mathbb{P}[y\notin
A_{0}\mid(x\notin A_{0})\wedge\mathcal{E}_{0}]=\mathbb{P}[y\notin
A_{0}\nonscript\>\big{|}\nonscript\>\mathopen{}\mathcal{E}_{0}]=(1+o(1))(1-p).$
(5)
We begin by estimating the conditional expectation of $|X_{1}|$.
###### Claim 6.2.
$\mathbb{E}^{*}\big{[}|X_{1}|\big{]}=\mathbb{E}\big{[}|X_{1}|\mid(x\notin
A_{0})\wedge\mathcal{E}_{0}\big{]}=\Omega\left(\frac{d(x)^{1-2c^{2}}}{\sqrt{\log
d(x)}}\right).$
###### Proof of Claim 6.2.
Let us suppose that $x\notin A_{0}$ and let $y\in N(x)$. We have
$\displaystyle\mathbb{P}^{*}\Big{[}y\in X_{1}\Big{]}=\mathbb{P}^{*}[y\notin
A_{0}]\cdot\mathbb{P}^{*}\Big{[}|N(y)\cap
A_{0}|\geqslant\frac{d(y)}{2}+m\nonscript\>\Big{|}\nonscript\>\mathopen{}y\notin
A_{0}\Big{]}.$ (6)
To bound the second term in (6), we note that
$\big{|}N(y)\cap\big{(}\\{x\\}\cup N(x)\big{)}\big{|}\leqslant K$ by Property
P2. Furthermore, we have $d(y)\geqslant d(x)-K$ since $G$ is $K$-locally
almost regular by Property P1. Hence, conditioned on the events $\\{x,y\notin
A_{0}\\}$ and $\mathcal{E}_{0}$, $|N(y)\cap A_{0}|$ stochastically dominates a
$\operatorname{Bin}(d^{\prime},p)$ random variable for $d^{\prime}:=d(x)-2K$.
Recalling that $p=\frac{1}{2}-c\sigma(x)=\frac{1}{2}-c\sqrt{\frac{\log
d(x)}{d(x)}}$, we have
$\mathbb{P}^{*}\left[|N(y)\cap
A_{0}|\geqslant\frac{d(y)}{2}+m\nonscript\>\middle|\nonscript\>\mathopen{}y\not\in
A_{0}\right]\geqslant\mathbb{P}\left[\operatorname{Bin}\left(d^{\prime},p\right)\geqslant\frac{d^{\prime}}{2}+m+K\right].$
(7)
From (6), (5), (7) and Lemma 6.1, we obtain
$\displaystyle\mathbb{P}^{*}\big{[}y\in
X_{1}\big{]}\geqslant(1+o(1))(1-p)\cdot\mathbb{P}\left[\operatorname{Bin}\left(d^{\prime},p\right)\geqslant\frac{d^{\prime}}{2}\right].$
(8)
We will bound the second term on the right-hand side of (8) by applying Lemma
2.2 for $\operatorname{Bin}(d^{\prime},p)$. To do that, we need to scale the
binomial random variable accordingly. Note that
$d^{\prime}\left(\frac{1}{2}-p\right)=cd^{\prime}\sigma(x)=\left(1+O\left(\frac{1}{d(x)}\right)\right)c\sqrt{d(x)\log
d(x)}$
and
$d^{\prime}p(1-p)=d^{\prime}\left(\frac{1}{4}-c^{2}\sigma(x)^{2}\right)=\left(1+O\left(\frac{\log
d(x)}{d(x)}\right)\right)\frac{d(x)}{4}.$
We may apply Lemma 2.2 to bound the right-hand side of (8) and get
$\mathbb{P}\left[\operatorname{Bin}\left(d^{\prime},p\right)\geqslant\frac{d^{\prime}}{2}\right]=(1+o(1))\mathbb{P}[N(0,1)\geqslant
f(d^{\prime})]$ (9)
for
$f(d^{\prime}):=\frac{d^{\prime}(1/2-p)}{\sqrt{d^{\prime}p(1-p)}}=\left(1+O\left(\frac{\log
d(x)}{d(x)}\right)\right)2c\sqrt{\log d(x)}.$
For this value of $f$, we may estimate
$\displaystyle\mathbb{P}[N(0,1)\geqslant f(d^{\prime})]$
$\displaystyle=\frac{1+o(1)}{f(d^{\prime})\sqrt{2\pi}}\exp\left(-\frac{1}{2}f(d^{\prime})^{2}\right)$
$\displaystyle=\frac{1+o(1)}{2c\sqrt{2\pi\log d(x)}}\exp\bigl{(}-2c^{2}\log
d(x)+O(1)\bigr{)}$ $\displaystyle=\Omega\left(\frac{d(x)^{-2c^{2}}}{\sqrt{\log
d(x)}}\right).$ (10)
Hence, by (8), (9) and (10), we obtain
$\mathbb{E}^{*}\left[|X_{1}|\right]=d(x)\cdot\mathbb{P}^{*}\Big{[}y\in
X_{1}\Big{]}=\Omega\left(\frac{d(x)^{1-2c^{2}}}{\sqrt{\log d(x)}}\right),$
(11)
as desired. ∎
We now proceed towards bounding the conditional variance of $|X_{1}|$. Let $M$
be the set of pairs of distinct vertices $y,z\in N(x)$ that have _at most_ two
common neighbours (including $x$). We continue with the following claim, which
states that the events $\\{y\in X_{1}\\}$ and $\\{z\in X_{1}\\}$ for such pairs are
not closely correlated.
###### Claim 6.3.
If $(y,z)\in M$, then
$\mathbb{P}^{*}\left[(y\in X_{1})\wedge(z\in X_{1})\right]-\mathbb{P}^{*}[y\in
X_{1}]\cdot\mathbb{P}^{*}[z\in X_{1}]=o\left(\mathbb{P}^{*}[y\in
X_{1}]\cdot\mathbb{P}^{*}[z\in X_{1}]\right).$
###### Proof.
Write $d=d(x)$, and let $d^{\prime}=d-2K$, $d^{\prime}_{0}=d/2+m-4K$ and
$d^{\prime}_{1}=d/2+m+4K$. We claim that for $w\in\\{y,z\\}$,
$\mathbb{P}\big{[}\operatorname{Bin}(d^{\prime},p)\geqslant
d^{\prime}_{0}\big{]}\geqslant\mathbb{P}^{*}\left[w\in
X_{1}\nonscript\>\middle|\nonscript\>\mathopen{}w\notin
A_{0}\right]\geqslant\mathbb{P}\big{[}\operatorname{Bin}(d^{\prime},p)\geqslant
d^{\prime}_{1}\big{]}.$ (12)
Indeed, assume $w=y$ without loss of generality, and observe that
$\displaystyle\mathbb{P}^{*}\left[y\in
X_{1}\nonscript\>\middle|\nonscript\>\mathopen{}y\notin A_{0}\right]$
$\displaystyle\geqslant\mathbb{P}^{*}\left[y\in
X_{1}\nonscript\>\middle|\nonscript\>\mathopen{}(y\notin A_{0})\wedge(N(y)\cap
N(x)\cap A_{0}=\varnothing)\right]$ $\displaystyle=\mathbb{P}\left[y\in
X_{1}\nonscript\>\middle|\nonscript\>\mathopen{}(x,y\notin
A_{0})\wedge(N(y)\cap N(x)\cap A_{0}=\varnothing)\right]$
$\displaystyle\geqslant\mathbb{P}\left[\operatorname{Bin}(d(y)-K,p)\geqslant
d(y)/2+m\right],$
where the first step follows by stochastic domination. Since $d(y)=d\pm K$ by
Property P1, this proves the last inequality in (12). A similar argument,
conditioning on $N(y)\cap N(x)\subseteq A_{0}$ instead and using that
$\mathbb{P}[\operatorname{Bin}(d^{\prime},p)\geqslant
d_{0}^{\prime}]\geqslant\mathbb{P}[\operatorname{Bin}(d(y),p)\geqslant
d_{0}^{\prime}+3K]$, proves the first inequality.
Let $F=\\{w\in V(G)\setminus\\{x,y,z\\}:|N(w)\cap\\{x,y,z\\}|\geqslant 2\\}$,
and observe that $|F|\leqslant 2K$ by Property P2 and our assumption that $y$
and $z$ have at most one common neighbour besides $x$. Since $F$ contains
$N(y)\cap N(z)$, if we condition on $(F\cup\\{x,y,z\\})\cap A_{0}$ then the
events $\\{y\in X_{1}\\}$ and $\\{z\in X_{1}\\}$ are conditionally independent
and binomially distributed and so by a similar argument, conditioning on
either $F=\varnothing$ or $F\subseteq A_{0}$ allows us to deduce that
$\mathbb{P}\big{[}\operatorname{Bin}(d^{\prime},p)\geqslant
d^{\prime}_{0}\big{]}^{2}\geqslant\mathbb{P}^{*}\left[(y\in X_{1})\wedge(z\in
X_{1})\nonscript\>\middle|\nonscript\>\mathopen{}y,z\notin
A_{0}\right]\geqslant\mathbb{P}\big{[}\operatorname{Bin}(d^{\prime},p)\geqslant
d^{\prime}_{1}\big{]}^{2}.$ (13)
By (12), (13) and Lemma 6.1, we obtain
$\mathbb{P}^{*}\left[(y\in X_{1})\wedge(z\in
X_{1})\nonscript\>\middle|\nonscript\>\mathopen{}y,z\notin
A_{0}\right]=(1+o(1))\mathbb{P}^{*}\left[y\in
X_{1}\nonscript\>\middle|\nonscript\>\mathopen{}y\notin
A_{0}\right]\cdot\mathbb{P}^{*}\left[z\in
X_{1}\nonscript\>\middle|\nonscript\>\mathopen{}z\notin A_{0}\right].$ (14)
Therefore, it only remains to show that
$\mathbb{P}^{*}[y,z\notin A_{0}]=(1+o(1))\mathbb{P}^{*}[y\notin
A_{0}]\cdot\mathbb{P}^{*}[z\notin A_{0}],$ (15)
since multiplying (14) and (15) finishes the proof of the claim. To do so, let
$\mathbb{P}^{\prime}$ be the probability measure obtained by conditioning on
$\\{x\notin A_{0}\\}$, and observe that, by Bayes’ theorem,
$\mathbb{P}^{*}[y,z\notin A_{0}]=\mathbb{P}^{\prime}[y,z\notin
A_{0}\nonscript\>\big{|}\nonscript\>\mathopen{}\mathcal{E}_{0}]=\frac{\mathbb{P}^{\prime}[\mathcal{E}_{0}\nonscript\>\big{|}\nonscript\>\mathopen{}y,z\notin
A_{0}]}{\mathbb{P}^{\prime}[\mathcal{E}_{0}]}\cdot\mathbb{P}^{\prime}[y,z\notin
A_{0}].$ (16)
The fraction is $1+o(1)$ by Lemma 2.4. Since $\mathbb{P}^{\prime}[y,z\notin
A_{0}]=\mathbb{P}[y,z\notin A_{0}]=(1-p)^{2}$ and $\mathbb{P}^{*}[y\notin
A_{0}]\mathbb{P}^{*}[z\notin A_{0}]=(1+o(1))(1-p)^{2}$ by (5), (16) implies
(15) and proves the claim. ∎
As a direct consequence of Claim 6.3 we have
$\displaystyle\sum_{(y,z)\in M}\mathbb{P}^{*}\left[(y\in X_{1})\wedge(z\in
X_{1})\right]-\mathbb{P}^{*}[y\in X_{1}]\cdot\mathbb{P}^{*}[z\in
X_{1}]=o\left(\mathbb{E}^{*}[|X_{1}|]^{2}\right).$ (17)
This allows us to obtain our bound on the conditional variance of $|X_{1}|$.
###### Claim 6.4.
$\mathbb{V}^{*}\left[|X_{1}|\right]=o\left(\mathbb{E}^{*}[|X_{1}|]^{2}\right).$
Figure 1. The set $X_{0}=N(x)\cap A_{0}$ is depicted in blue, and the set
$X_{1}=N(x)\cap(A_{1}\setminus A_{0})$ is depicted patterned. The red pair of
vertices has two common neighbours in $S(x,2)$ and is thus in $M^{\prime}$,
while the yellow pair of vertices has just one common neighbour in $S(x,2)$.
###### Proof of Claim 6.4.
Let $M^{\prime}$ be the set of pairs of distinct vertices in $N(x)$ that have
strictly more than two common neighbours (see Figure 1), which we will call
bad pairs. We claim that $|M^{\prime}|=O(d(x))$. Indeed,
$|\binom{N(x)}{2}\setminus\binom{S_{0}(x,1)}{2}|=O(d(x))$ by Property P3i, so
there are only $O(d(x))$ bad pairs not contained in $S_{0}(x,1)$. Also due to
Property P3i, we have that $N(x)\cup(D\cap S(x,2))$ has size $O(d(x))$, and
due to Property P2 each of its elements can be a common neighbour of $O(1)$
many pairs of neighbours of $x$, contributing $O(d(x))$ many bad pairs in
total. Since by Property P3iii any two vertices in $S_{0}(x,1)$ have at most
one common neighbour in $S_{0}(x,2)$, there are no further bad pairs to
consider. Summing up, we have
$\displaystyle|M^{\prime}|=O(d(x)).$ (18)
From (17) and (18) it follows that
$\displaystyle\mathbb{V}^{*}\left[|X_{1}|\right]$ $\displaystyle=\sum_{y,z\in
N(x)}\mathbb{P}^{*}\left[(y\in X_{1})\wedge(z\in
X_{1})\right]-\mathbb{P}^{*}[y\in X_{1}]\cdot\mathbb{P}^{*}[z\in X_{1}]$
$\displaystyle\leqslant\sum_{(y,z)\in M}\mathbb{P}^{*}\left[(y\in
X_{1})\wedge(z\in X_{1})\right]-\mathbb{P}^{*}[y\in
X_{1}]\cdot\mathbb{P}^{*}[z\in X_{1}]$
$\displaystyle\qquad+\mathbb{E}^{*}[|X_{1}|]+|M^{\prime}|$
$\displaystyle\leqslant
o\left(\mathbb{E}^{*}[|X_{1}|]^{2}\right)+O(d(x))=o\left(\mathbb{E}^{*}[|X_{1}|]^{2}\right),$
finishing the claim. ∎
###### Proof of Lemma 4.1.
Recall that $|X_{0}|\sim\operatorname{Bin}(d(x),p)$, where
$p=\frac{1}{2}-c\sigma(x)=\frac{1}{2}-c\sqrt{\frac{\log d(x)}{d(x)}}$ for
$c<1/2$. Taking $\ell:=\mathbb{E}^{*}\left[|X_{1}|\right]/2$ we have
$\ell=\Omega\bigl{(}d(x)^{1-2c^{2}}/\sqrt{\log d(x)}\bigr{)}$ by Claim 6.2,
and thus
$\mathbb{E}[|X_{0}|]=\left(\frac{1}{2}-c\sigma(x)\right)d(x)=\frac{d(x)}{2}-c\sqrt{d(x)\log
d(x)}>\frac{d(x)}{2}+m-\ell.$ (19)
By Proposition 2.5 we then have
$\mathbb{P}[\mathcal{E}_{0}]=\mathbb{P}\left[|X_{0}|\geqslant\frac{d(x)}{2}+m-\ell\right]\geqslant\frac{1}{2}.$
(20)
Furthermore, by Claims 6.2 and 6.4 it follows from Chebyshev’s inequality that
$\mathbb{P}^{*}\left[\mathcal{E}_{1}\right]=1-\mathbb{P}^{*}\left[|X_{1}|<\ell\right]=1-\mathbb{P}^{*}\left[|X_{1}|<\frac{\mathbb{E}^{*}\left[|X_{1}|\right]}{2}\right]=1-o(1).$
(21)
Recalling from (4) that $\mathbb{P}[x\in
A_{2}(m)]\geqslant p+(1-p)\cdot\mathbb{P}[\mathcal{E}_{0}]\cdot\mathbb{P}^{*}[\mathcal{E}_{1}]$
and using (20) and (21),
$\displaystyle\mathbb{P}\left[x\in A_{2}(m)\right]$ $\displaystyle\geqslant
p+(1-o(1))\frac{1-p}{2}\geqslant\frac{3}{4}+o(1),$
finishing the proof of the lemma. ∎
###### Remark 6.5.
Let $\lambda\in\mathbb{R}$ satisfy $\lambda>1/2$, and assume now that $c$ may
depend on $d(x)$. The inequality in (19) holds as long as
$c<\frac{1}{2}-\frac{\lambda\log\log d(x)}{\log d(x)}.$
Using this, one may check that the $1$-statement of Theorem 1.4 holds for
every
$p\geqslant\frac{1}{2}-\frac{1}{2}\sqrt{\frac{\log\Delta(G)}{\Delta(G)}}+\frac{\lambda\log\log\Delta(G)}{\sqrt{\Delta(G)\log\Delta(G)}},$
recovering the bound obtained in [11, Theorem 2.1] (see Theorem 1.3) if
$G\in\mathcal{H}(K)$ is $n$-regular.
### 6.2. Proof of Lemma 4.2
In this section we will prove Lemma 4.2 about the existence of a constant
$\beta>0$ (independent of $x$) such that
$\mathbb{P}\big{[}x\notin A_{5}\big{]}\leqslant\exp\bigl{(}-\beta
d(x)\bigr{)}.$
Throughout this section we let $0<\alpha_{1}\ll\alpha_{2}\ll\alpha_{3}\ll
K^{-10}$ be sufficiently small constants and let $d=d(x)$.
Let $\mathcal{E}$ be the event that $|N(x)\cap
A_{0}|\geqslant\left(\frac{1}{2}-\alpha_{1}\right)d$. Since $|N(x)\cap
A_{0}|\sim\operatorname{Bin}(d,p)$ and $p=\frac{1}{2}-o(1)$, by Chernoff’s
inequality (Lemma 2.1) it holds that
$\mathbb{P}[\mathcal{E}]\geqslant 1-\exp\left(-\frac{\alpha_{1}d}{2}\right).$
(22)
In what follows we may assume that $\mathcal{E}$ holds deterministically.
Suppose that $x\notin A_{5}$. Then, less than $\frac{d}{2}$ of the vertices in
$N(x)$ are infected by the fourth round. In particular, there is some subset
$T^{\prime}\subseteq N(x)$ with $|T^{\prime}|=\frac{d}{2}$ such that
$T^{\prime}\cap A_{4}=\varnothing$. Similarly, by Property P1, each vertex
$y\in T^{\prime}$ has less than $\frac{d(y)}{2}\leqslant\frac{d+K}{2}$
neighbours in $A_{3}$. Since $T^{\prime}\subseteq N(x)=S(x,1)$, it follows
that
$\phi(T^{\prime}):=e\left(T^{\prime},A_{3}\cap
S(x,2)\right)<\frac{(d+K)d}{4}.$
Hence, in order to prove Lemma 4.2, it will be sufficient to show that
$\sum_{\begin{subarray}{c}T^{\prime}\subseteq N(x)\setminus A_{0}\\\
|T^{\prime}|=\frac{d}{2}\end{subarray}}\mathbb{P}\left[\phi(T^{\prime})<\frac{(d+K)d}{4}\right]\leqslant
e^{-\alpha_{2}d}.$ (23)
Note that, since we are assuming $\mathcal{E}$ holds, the number of possible
sets $T^{\prime}\subseteq N(x)\setminus A_{0}$ with $|T^{\prime}|=\frac{d}{2}$
is at most
$\binom{\left(\frac{1}{2}+\alpha_{1}\right)d}{d/2}=\binom{\left(\frac{1}{2}+\alpha_{1}\right)d}{\alpha_{1}d}\leqslant\left(\frac{e\left(\frac{1}{2}+\alpha_{1}\right)}{\alpha_{1}}\right)^{\alpha_{1}d}\leqslant\left(\frac{e}{\alpha_{1}}\right)^{\alpha_{1}d}\leqslant
e^{\alpha_{2}d},$ (24)
by our choice of $\alpha_{1}$ and $\alpha_{2}$. Therefore, the following claim
readily implies (23).
###### Claim 6.6.
For each $T^{\prime}\subseteq N(x)$ with $|T^{\prime}|=\frac{d}{2}$,
$\mathbb{P}\left[\phi(T^{\prime})<\frac{d^{2}}{4}+c\sqrt{d^{3}\log
d}\right]\leqslant e^{-2\alpha_{2}d}.$ (25)
###### Proof of Claim 6.6.
Let $T^{\prime}\subseteq N(x)$ with $|T^{\prime}|=\frac{d}{2}$ be fixed, and
let
$T:=N(T^{\prime})\cap S(x,2)$
(see Figure 2). By Properties P1 and P2, we have
$e(T^{\prime},T)=\frac{d^{2}}{2}+O(d)$ and
$|T|\geqslant\frac{d^{2}}{2K}+O(d)$. In particular, since
$p=\frac{1}{2}-c\sigma(x)$, the expected value of $e(T^{\prime},T\cap A_{0})$
is $\frac{d^{2}}{4}-\frac{c}{2}\sqrt{d^{3}\log d}+o(d^{3/2})$ and so, by
Hoeffding’s inequality (Lemma 2.3) and the fact that every vertex of $T$ has
at most $2K$ backwards neighbours by Property P2, we obtain
$\mathbb{P}\left[e(T^{\prime},T\cap A_{0})<\frac{d^{2}}{4}-c\sqrt{d^{3}\log
d}\right]<\exp\left(-\frac{c^{2}d\log d}{2K}\right).$ (26)
Figure 2. The set $T^{\prime}\subseteq N(x)\setminus A_{4}$ is depicted
patterned. We will estimate the density of certain types of edges between
$T=N(T^{\prime})\cap S(x,2)$ and $S_{0}(x,3)$, where $D\cap S(x,3)$ is
depicted in grey. Applying Lemma 3.1, we obtain a partition of $S_{0}(x,3)$
into sets of vertices with pairwise distance at least $6$.
In particular, we can bound (25) by showing that it is (exponentially in $d$)
unlikely that very few vertices in $T$ are in $A_{3}\setminus A_{0}$ as in the
following claim, which will be proved after finishing the proof of Claim 6.6.
###### Claim 6.7.
$\displaystyle\mathbb{P}\left[\left|T\cap\left(A_{3}\setminus
A_{0}\right)\right|\leqslant 2c\sqrt{d^{3}\log
d}\right]\leqslant\exp(-3\alpha_{2}d).$
To finish the proof, note that $e(T^{\prime},T\cap(A_{3}\setminus
A_{0}))\geqslant|T\cap(A_{3}\setminus A_{0})|$ since every vertex of $T$ has a
neighbour in $T^{\prime}$. It then follows from (26) and Claim 6.7 that
$\displaystyle\mathbb{P}\left[\phi(T^{\prime})<\frac{d^{2}}{4}+c\sqrt{d^{3}\log
d}\right]$ $\displaystyle\leqslant\mathbb{P}\left[e(T^{\prime},T\cap
A_{0})+\left|T\cap\left(A_{3}\setminus
A_{0}\right)\right|<\frac{d^{2}}{4}+c\sqrt{d^{3}\log d}\right]$
$\displaystyle\leqslant\exp\left(-\frac{c^{2}d\log
d}{2K}\right)+\exp(-3\alpha_{2}d)\leqslant\exp(-2\alpha_{2}d),$
establishing Claim 6.6. ∎
###### Proof of Claim 6.7.
We first note that by an application of Chernoff’s inequality (Lemma 2.1),
$\mathbb{P}\left[\left|T\setminus
A_{0}\right|\geqslant\frac{d^{2}}{8K}\right]\geqslant
1-\exp\left(-\Omega(d^{2})\right).$ (27)
We will assume in what follows that this holds deterministically.
Recall that $G$ is locally $K$-almost regular by Property P1, so every vertex
in $T\subseteq S(x,2)$ has degree $d\pm 2K$. Since $G$ has $K$-bounded
backwards expansion by Property P2, every vertex in $S(x,2)$ has at most $2K$
neighbours in $S(x,1)\cup S(x,2)$. Furthermore, by Property P3ii, we have that
every vertex $y\in S_{0}(x,2)$ has at most $2K$ neighbours in $D$, and so it
follows that every such $y$ has at least $d(y)-4K\geqslant d-6K$ neighbours in
$S_{0}(x,3)$. Moreover, $e(T\setminus S_{0}(x,2),S_{0}(x,3))=O(d^{2})$ by
Properties P1 and P3i. Therefore,
$e\left(T\setminus A_{0},S_{0}(x,3)\right)=\left|T\setminus A_{0}\right|(d\pm
6K)+O(d^{2})=\left|T\setminus A_{0}\right|d+O(d^{2}).$ (28)
Suppose now that the event in the claim statement,
$\left|T\cap\left(A_{3}\setminus A_{0}\right)\right|\leqslant
2c\sqrt{d^{3}\log d}$, holds. Every vertex $y\in T\setminus A_{3}$ has at most
$\frac{d(y)}{2}\leqslant\frac{d}{2}+K$ neighbours in $S_{0}(x,3)\cap A_{2}$
and hence
$\displaystyle e\left(T\setminus A_{0},S_{0}(x,3)\cap A_{2}\right)$
$\displaystyle\leqslant\left|T\setminus
A_{3}\right|\left(\frac{d}{2}+K\right)+\left|T\cap\left(A_{3}\setminus
A_{0}\right)\right|(d+2K)$ $\displaystyle\leqslant\left|T\setminus
A_{0}\right|\left(\frac{d}{2}+K\right)+3c\sqrt{d^{5}\log d}.$ (29)
Hence, it follows from (28) and (29) that
$\displaystyle e\left(T\setminus A_{0},S_{0}(x,3)\setminus A_{2}\right)$
$\displaystyle=e\left(T\setminus A_{0},S_{0}(x,3)\right)-e\left(T\setminus
A_{0},S_{0}(x,3)\cap A_{2}\right)$ $\displaystyle\geqslant\left|T\setminus
A_{0}\right|\cdot\frac{d}{2}-4c\sqrt{d^{5}\log d}.$ (30)
Therefore, the average density from $T\setminus A_{0}$ to $S_{0}(x,3)\setminus
A_{2}$ is at least around half the density from $T\setminus A_{0}$ to
$S_{0}(x,3)$. We will restrict ourselves to a subset $B$ of $S_{0}(x,3)$ with
a similar property such that distinct vertices in $B$ lie at distance at least
six in $G$, in order to ensure that the events that the vertices of $B$ lie in
$A_{2}$ are independent.
Using Lemma 3.1, we can find a partition $\mathcal{P}$ of $S_{0}(x,3)$ with
$|\mathcal{P}|\leqslant 4K^{2}d^{2}$ (see Figure 2) such that, for each
$P\in\mathcal{P}$, the family of events $\left\\{y\in A_{2}:y\in P\right\\}$
is independent. We claim that there must be some $P\in\mathcal{P}$ such that
$e\left(T\setminus A_{0},P\setminus
A_{2}\right)\geqslant\max\left\\{\frac{5}{12}e\left(T\setminus
A_{0},P\right),\alpha_{3}d\right\\}.$ (31)
Indeed, assume every $P\in\mathcal{P}$ violates (31). Then let
$\mathcal{P}_{1}$ be the family of sets $P\in\mathcal{P}$ such that
$e\left(T\setminus A_{0},P\setminus A_{2}\right)<\alpha_{3}d$. Similarly, let
$\mathcal{P}_{2}$ be the family of sets $P\in\mathcal{P}$ such that
$e\left(T\setminus A_{0},P\setminus A_{2}\right)<\frac{5}{12}e\left(T\setminus
A_{0},P\right)$. Then we have
$\displaystyle e\left(T\setminus A_{0},S_{0}(x,3)\setminus A_{2}\right)$
$\displaystyle\leqslant\sum_{P\in\mathcal{P}_{1}}e\left(T\setminus
A_{0},P\setminus A_{2}\right)+\sum_{P\in\mathcal{P}_{2}}e\left(T\setminus
A_{0},P\setminus A_{2}\right)$
$\displaystyle\leqslant|\mathcal{P}|\alpha_{3}d+\sum_{P\in\mathcal{P}}\frac{5}{12}e\left(T\setminus
A_{0},P\right)$ $\displaystyle\leqslant
4\alpha_{3}K^{2}d^{3}+\frac{5}{12}e\left(T\setminus A_{0},S_{0}(x,3)\right)$
$\displaystyle\leqslant\left|T\setminus
A_{0}\right|d\cdot\left(\frac{5}{12}+32K^{3}\alpha_{3}+O(1/d)\right),$ (32)
where the last line follows from (27) and (28). If $\alpha_{3}$ is
sufficiently small, however, this contradicts (30). Since we assumed
$|T\cap\left(A_{3}\setminus A_{0}\right)|\leqslant 2c\sqrt{d^{3}\log d}$,
combining (32) with (27) proves that
$\mathbb{P}\left[\left|T\cap\left(A_{3}\setminus A_{0}\right)\right|\leqslant
2c\sqrt{d^{3}\log
d}\right]\leqslant\exp\left(-\Omega(d^{2})\right)+\sum_{P\in\mathcal{P}}\mathbb{P}\left[P\text{
satisfies }(31)\right].$ (33)
We now show that the right-hand side of (33) is small. Let $P$ be such that
(31) holds. For each $y\in P$, consider the event $\\{y\in A_{2}\\}$, which we
denote by $\mathcal{E}_{y}$. By Lemma 4.1, for each $y\in P$ we have
$\mathbb{P}\left[\mathcal{E}_{y}\right]\geqslant\frac{3}{4}+o(1)\geqslant\frac{2}{3}$,
and since all vertices in $P$ are at pairwise distance at least six, the
events $\left\\{\mathcal{E}_{y}\colon y\in P\right\\}$ are mutually
independent.
However, the event $\mathcal{E}_{y}$ is _not_ independent of $B(x,2)\cap
A_{0}$, and so in particular not independent of the distribution of $T\cap
A_{0}$. To get around this issue, we will use the $K$-projection property
(Property P4) for $\ell=3$, which implies that for every $y\in P$ there is a
subgraph $G(y)\subseteq G$ which is also in $\mathcal{H}(K)$, contains $y$, is
disjoint from $B(x,2)$ and satisfies $d_{G(y)}(w)\geqslant d_{G}(w)-3K$ for
any $w\in V(G(y))$ (see Figure 3).
Figure 3. For two vertices $y_{1},y_{2}\in S_{0}(x,3)$, by applying Property
P4 there exist subgraphs $G(y_{1})$ and $G(y_{2})$ that are disjoint from
$B(x,2)$ and lie in $\mathcal{H}(K)$.
In particular, recall $\sigma(x)=\sqrt{\frac{\log d(x)}{d(x)}}$ and observe
that
$\sigma(x)=(1+o(1))\sqrt{\frac{\log d_{G(y)}(y)}{d_{G(y)}(y)}}.$
Let us consider the bootstrap percolation process restricted to $G(y)$ with
the set of initially infected vertices $A^{y}_{0}(m):=A_{0}\cap V(G(y))$ and
infection threshold $r(w)=\frac{d_{G(y)}(w)}{2}+m$ for each $w\in V(G(y))$.
$i\in\mathbb{N}$ we define
$A^{y}_{i}(m):=A^{y}_{i-1}(m)\cup\left\\{w\in
V(G(y))\;:\;\left|N_{G(y)}(w)\cap
A^{y}_{i-1}(m)\right|\geqslant\frac{d_{G(y)}(w)}{2}+m\right\\}.$
Applying Lemma 4.1 to $G(y)$ with $m=2K$, we deduce that
$\mathbb{P}\left[y\in
A^{y}_{2}(2K)\right]\geqslant\frac{3}{4}+o(1)\geqslant\frac{2}{3}.$ (34)
However, since
$\frac{d_{G(y)}(w)}{2}\geqslant\frac{d_{G}(w)-3K}{2}\geqslant\frac{d_{G}(w)}{2}-2K$
for each $w\in V(G(y))$, it follows that
$A^{y}_{2}(2K)\subseteq A_{2}$
and hence we can deduce that $\mathbb{P}\left[y\in
A_{2}\right]\geqslant\frac{2}{3}$ for each $y\in P$.
At this point, we expose the initially-infected vertices in $T$, i.e., the set
$T\cap A_{0}$. By Property P2, we have that each vertex of $P\subseteq
S_{0}(x,3)$ has at most $3K$ neighbours in $T\subseteq S(x,2)$. For each
$i\in[3K]$, let $b_{i}$ be the number of elements of $P$ with $i$ neighbours
in $T\setminus A_{0}$. It follows that $e\left(T\setminus
A_{0},P\right)=\sum_{i=1}^{3K}ib_{i}$ and $e\left(T\setminus A_{0},P\setminus
A_{2}\right)$ is stochastically dominated by $Y=\sum_{i=1}^{3K}iB_{i}$, where
$B_{i}\sim\operatorname{Bin}\left(b_{i},\frac{1}{3}\right)$. Letting
$\tau\coloneqq\max\left\\{\frac{5}{12}\sum_{i=1}^{3K}ib_{i},\alpha_{3}d\right\\}-\frac{1}{3}\sum_{i=1}^{3K}ib_{i}\geqslant\max\left\\{\frac{1}{12}\left(\sum_{i=1}^{3K}ib_{i}\right),\frac{1}{5}\alpha_{3}d\right\\},$
we have by Hoeffding’s inequality (Lemma 2.3) that
$\displaystyle\mathbb{P}\left[P\text{ satisfies
}(31)\right]\leqslant\mathbb{P}\left[Y\geqslant\frac{1}{3}\sum_{i=1}^{3K}ib_{i}+\tau\right]\leqslant\exp\left(-\frac{2\tau^{2}}{3K\cdot\mathbb{E}[Y]}\right).$
Using that
$\displaystyle 3\cdot\mathbb{E}[Y]=\sum_{i=1}^{3K}ib_{i}\leqslant
12\tau\qquad\text{ and
}\qquad\frac{\tau}{6K}\geqslant\frac{\alpha_{3}d}{30K}\geqslant 4\alpha_{2}d,$
we then have that $\mathbb{P}[P\text{ satisfies
}(31)]\leqslant\exp(-4\alpha_{2}d)$. Since
$|\mathcal{P}|=O(d^{2})$, we obtain by (33) and the union bound that
$\displaystyle\mathbb{P}\left[\left|T\cap\left(A_{3}\setminus
A_{0}\right)\right|\leqslant 2c\sqrt{d^{3}\log
d}\right]\leqslant\exp\left(-3\alpha_{2}d\right),$
proving Claim 6.7. ∎
We are now ready to finish the proof of Lemma 4.2.
###### Proof of Lemma 4.2.
Note that by (22) and (23), we have
$\mathbb{P}\left[x\not\in
A_{5}\right]\leqslant\exp\left(-\frac{\alpha_{1}d}{2}\right)+\exp(-\alpha_{2}d)\leqslant\exp\left(-\frac{\alpha_{1}d}{4}\right),$
and so Lemma 4.2 holds with $\beta=\alpha_{1}/4$. ∎
### 6.3. Proof of Lemma 4.3
In this section we will prove Lemma 4.3 about the existence of a constant
$\beta>0$ (independent of $x$) such that
$\mathbb{P}\big{[}x\notin A_{11}\big{]}\leqslant\exp\bigl{(}-\beta
d(x)^{2}\bigr{)}.$
Throughout this section we let $d=d(x)$.
###### Proof of Lemma 4.3.
Suppose that $x\notin A_{11}$, and let $T_{\ell}\coloneqq S(x,\ell)\setminus
A_{11-\ell}$. By definition, no vertex of $T_{\ell}$ is infected by time
$11-\ell$. Therefore, each $y\in T_{\ell}$ has at most $d(y)/2$ neighbours in
$A_{11-(\ell+1)}$. Hence, by Properties P1 and P2, each vertex of $T_{\ell}$
has at least $(d-3K\ell)/2$ neighbours in $T_{\ell+1}$. Moreover, since $G$
has $K$-bounded backwards expansion (Property P2), every vertex in
$S(x,\ell+1)$ has at most $K(\ell+1)$ neighbours in $S(x,\ell)$. Therefore,
$|T_{\ell+1}|\geqslant\frac{|T_{\ell}|\cdot(d-3K\ell)}{2K(\ell+1)}=\Omega\big{(}|T_{\ell}|\cdot
d\big{)}.$
Since we are assuming $x\notin A_{11}$, we have $|T_{0}|=1$, and therefore by
induction there exists a constant $\alpha>0$ such that
$|T_{\ell}|\geqslant\alpha d^{\ell}$ for every $0\leqslant\ell\leqslant 6$.
Hence, if $x\notin A_{11}$, we may take a subset $T^{\prime}\subseteq
T_{6}=S(x,6)\setminus A_{5}$ of size $\alpha d^{6}$.
By Lemma 3.1 there is a partition $S(x,6)=P_{1}\cup\dots\cup P_{m}$ where
$m=O\left(d^{5}\right)$ such that $\operatorname{dist}(y_{1},y_{2})\geqslant
12$ for each $j\in[m]$ and distinct $y_{1},y_{2}\in P_{j}$. We claim that for
$\varepsilon\ll\alpha$ sufficiently small there is some $j\in[m]$ such that
$|P_{j}|\geqslant\varepsilon d\qquad\text{and}\qquad|P_{j}\cap
T^{\prime}|\geqslant\varepsilon|P_{j}|.$ (35)
Indeed, $|S(x,6)|=O(d^{6})$ by Property P1. Hence, if (35) fails to hold for
every $j\in[m]$, then
$|T^{\prime}|\leqslant\varepsilon dm+\varepsilon|S(x,6)|<\alpha d^{6},$
a contradiction.
Let $P$ be such that (35) holds. For each $y\in P$, consider the event
$\\{y\notin A_{5}\\}$, which we denote by $\mathcal{E}_{y}$. The events
$\left\\{\mathcal{E}_{y}\colon y\in P\right\\}$ are independent, and by Lemma
4.2 there is some $\beta^{\prime}>0$ (independent of $y$) such that
$\mathbb{P}\left[y\not\in A_{5}\right]\leqslant\exp(-\beta^{\prime}d(y))$ for
every $y\in P$. Since $d(y)\geqslant d/2$ for every $y\in P$, we have that
$|P\setminus A_{5}|$ is stochastically dominated by a
$\operatorname{Bin}(|P|,p^{\prime})$ random variable with
$p^{\prime}\coloneqq\exp(-\beta^{\prime}d/2)$. By (35) and Chernoff’s
inequality (Lemma 2.1b), there is some constant
$\beta^{\prime\prime}(\beta^{\prime},\varepsilon)>0$ such that
$\mathbb{P}\big{[}|P\cap T^{\prime}|\geqslant\varepsilon|P|\big{]}\leqslant\mathbb{P}\big{[}|P\setminus A_{5}|\geqslant\varepsilon|P|\big{]}\leqslant\left(\frac{ep^{\prime}}{\varepsilon}\right)^{\varepsilon|P|}\leqslant\exp\left(-\beta^{\prime\prime}d^{2}\right).$
(36)
By a union bound over the partition classes $P_{j}$ ($j\in[m]$), we obtain
$\displaystyle\mathbb{P}\left[x\not\in A_{11}\right]\leqslant\mathbb{P}\Big{[}\text{some }P_{j}\text{ satisfies }(35)\Big{]}\leqslant m\exp\left(-\beta^{\prime\prime}d^{2}\right)\leqslant\exp\left(-\frac{\beta^{\prime\prime}d^{2}}{2}\right),$
and so the statement holds with $\beta=\beta^{\prime\prime}/2$. ∎
## 7\. Dominating process: stabilisation after two rounds
In this section we prove the auxiliary lemmas (Lemmas 5.1 and 5.2) needed for
the proof of the $0$-statement of Theorem 1.4 in Section 5. Throughout this
section we let $x\in V(G)$, $c>\frac{1}{2}$, $\vartheta(d)=\sqrt{\log d}$,
$\sigma(x)=\sqrt{\frac{\log
d(x)}{d(x)}},\qquad\gamma(x)=\sqrt{\frac{d(x)}{\vartheta(d(x))}},$
and $p=\frac{1}{2}-c\sigma(x)$. To ease notation let $d=d(x)$,
$\sigma=\sigma(x)$ and $\gamma=\gamma(x)$. Assume there is a $K\in\mathbb{N}$
such that $G\in\mathcal{H}(K)$.
Observe that if $w\in V(G)$ is such that $\operatorname{dist}(x,w)$ is
constant, then by Property P1 and the asymptotic estimates $\log(d+O(1))/\log
d=1+O(1/d)$ and $\sqrt{1+O(1/d)}=1+O(1/d)$,
$\gamma(w)=\gamma\cdot\left(1+O\left(\frac{1}{d(x)}\right)\right)=\gamma+o(1).$
(37)
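In more detail, since $d(w)=d(x)+O(1)$ by Property P1 and $\log(d+O(1))=\log d\cdot\left(1+O\left(\frac{1}{d\log d}\right)\right)$,

$\gamma(w)^{2}=\frac{d(w)}{\sqrt{\log d(w)}}=\frac{d+O(1)}{\sqrt{\log d}}\cdot\left(1+O\left(\frac{1}{d\log d}\right)\right)^{-1/2}=\gamma^{2}\left(1+O\left(\frac{1}{d}\right)\right),$

and taking square roots using $\sqrt{1+O(1/d)}=1+O(1/d)$ yields (37).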
### 7.1. Proof of Lemma 5.1
In this section we will prove Lemma 5.1 about the existence of a constant
$\beta>0$ (independent of $x$) such that
$\mathbb{P}[x\in\hat{A}_{3}\setminus\hat{A}_{2}]\leqslant\exp\left(-\beta\gamma(x)^{2}\log
d(x)\right).$
Let $\mathcal{W}$ be the set of pairs $(W_{1},W_{2})$ with $W_{i}\subseteq
S_{0}(x,i)$ satisfying
$W_{2}\subseteq
N(W_{1}),\qquad|W_{1}|=\gamma-3K,\qquad|W_{2}|=(\gamma-3K)^{2}/2K,$
which will be called _witnesses_. The _weight_ of a witness $(W_{1},W_{2})$ is
defined as $\zeta(W_{2})\coloneqq e(W_{2},S_{0}(x,3))$. We observe that, for
every $(W_{1},W_{2})\in\mathcal{W}$, we have
$\zeta(W_{2})=|W_{2}|(d+O(1)),$ (38)
since every vertex of $W_{2}$ has degree $d\pm 2K$ by Property P1 and every
vertex of $S_{0}(x,2)$ has at most $4K$ neighbours outside $S_{0}(x,3)$ by
Properties P2 and P3ii.
The definition of witness is motivated by the following claim (see Figure 4).
Figure 4. For a vertex
$x\in(\hat{A}_{3}\setminus\hat{A}_{2})$ there is a set $W_{1}\subseteq
S_{0}(x,1)\cap(\hat{A}_{2}\setminus\hat{A}_{1})$ of size $\gamma-3K$. For each
$w\in W_{1}$ there is a set $W_{2,w}\subseteq
S_{0}(x,2)\cap\hat{A}_{1}\setminus\hat{A}_{0}$, and their union is
$W_{2}^{\prime}$, which contains a subset $W_{2}$ of size exactly
$(\gamma-3K)^{2}/2K$.
###### Claim 7.1.
If $x\in\hat{A}_{3}\setminus\hat{A}_{2}$, there exists a witness
$(W_{1},W_{2})\in\mathcal{W}$ such that
$Z\geqslant\mathbb{E}[Z]+(c\sigma d-3\gamma)|W_{2}|,$
where $Z=Z(W_{2}):=e\bigl{(}W_{2},S_{0}(x,3)\cap\hat{A}_{0}\bigr{)}$.
###### Proof of Claim 7.1.
We start by observing that, by definition of the
$\operatorname{Boot}_{2}(\gamma)$ process, entering $\hat{A}_{3}$ requires
$d/2$ infected neighbours, while entering $\hat{A}_{2}$ requires only
$d/2-\gamma$. Therefore, the event
$\\{x\in\hat{A}_{3}\setminus\hat{A}_{2}\\}$ implies the existence of a set
$W_{1}^{\prime}\subseteq S(x,1)\cap(\hat{A}_{2}\setminus\hat{A}_{1})$ of
recently-infected neighbours of $x$ of size $|W_{1}^{\prime}|=\gamma$. Since
$|D\cap N(x)|\leqslant 1\leqslant 3K$ by Property P3i, we may take a subset
$W_{1}\subseteq W_{1}^{\prime}\cap S_{0}(x,1)$ of size $\gamma-3K$.
Similarly, since each $w\in W_{1}$ is in $\hat{A}_{2}\setminus\hat{A}_{1}$, it
has $\gamma(w)\geqslant\gamma-1$ neighbours in
$\hat{A}_{1}\setminus\hat{A}_{0}$, where the inequality uses (37). Moreover,
since each $w\in S_{0}(x,1)$ has at most $K$ neighbours in $B(x,1)$ by
Property P2 and at most $K$ neighbours in $D$ by Property P3ii, for each $w\in
W_{1}$ the set $W_{2,w}\coloneqq N(w)\cap
S_{0}(x,2)\cap(\hat{A}_{1}\setminus\hat{A}_{0})$ has size at least $\gamma-3K$
(see Figure 4). Moreover, every element of
$W_{2}^{\prime}:=\bigcup_{w\in W_{1}}W_{2,w}$
has at most $2K$ neighbours in $W_{1}$ by Property P2, and so
$|W_{2}^{\prime}|\geqslant\frac{(\gamma-3K)^{2}}{2K}$ by double counting. We
may therefore choose a $W_{2}\subseteq W_{2}^{\prime}$ of size
$\frac{(\gamma-3K)^{2}}{2K}$.
It remains to show that $Z(W_{2})\geqslant\mathbb{E}[Z(W_{2})]+(c\sigma d-3\gamma)|W_{2}|$. By definition of $\zeta$ and $Z$, we have
$\mathbb{E}[Z]=\zeta p$. On the other hand,
$W_{2}\subseteq\hat{A}_{1}\setminus\hat{A}_{0}$ by construction. Therefore, by
Properties P1, P2 and P3ii, every $w\in W_{2}\subseteq S_{0}(x,2)$ has at
least $\frac{d(w)}{2}-2\gamma(w)-4K=\frac{d}{2}-2\gamma+O(1)$ neighbours in
$S_{0}(x,3)\cap\hat{A}_{0}$. Since $p=1/2-c\sigma$, using (38) we obtain
$Z(W_{2})\geqslant|W_{2}|(d/2-2\gamma+O(1))=\zeta/2-|W_{2}|(2\gamma+O(1))\geqslant\zeta
p+(c\sigma d-3\gamma)|W_{2}|,$
where we used that $\zeta p=(\zeta/2)-c\sigma|W_{2}|(d+O(1))$ and $\gamma\gg
1\gg c\sigma$ in the last inequality. ∎
Therefore, if the process has not stabilised after three rounds, there is a
witness such that $Z$ exceeds its expectation. On the other hand, the next
claim shows that this is a low probability event. To simplify notation, we set
$s\coloneqq(\gamma-3K)^{2}/2K$ in the remainder of this section. Recall that
every witness $(W_{1},W_{2})$ has $|W_{2}|=s$.
###### Claim 7.2.
For every witness $(W_{1},W_{2})$, we have
$\mathbb{P}\left[Z(W_{2})\geqslant\mathbb{E}[Z]+(c\sigma
d-3\gamma)s\right]\leqslant\exp\left(-(2+o(1))c^{2}s\log d\right),$
where $Z=Z(W_{2}):=e\bigl{(}W_{2},S_{0}(x,3)\cap\hat{A}_{0}\bigr{)}$.
###### Proof of Claim 7.2.
Recall that by Property P2, a vertex in $S_{0}(x,3)$ has at most $3K$
neighbours in $S(x,2)$. For $i\in[3K]$, let $X_{i}$ be the set of elements of
$S_{0}(x,3)$ having $i$ neighbours in $W_{2}$. We have
$\zeta=\zeta(W_{2})=\sum_{i=1}^{3K}i\cdot|X_{i}|,$
and letting $M\coloneqq\sum_{i=1}^{3K}i^{2}|X_{i}|$ we have by Lemma 2.3 that,
for any $t\geqslant 0$,
$\mathbb{P}\left[Z(W_{2})\geqslant\mathbb{E}[Z]+t\right]\leqslant\exp\left(-\frac{2t^{2}}{M}\right).$
(39)
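For clarity, we spell out why Lemma 2.3 applies with this normalising factor: each vertex of $S_{0}(x,3)$ lies in $\hat{A}_{0}$ independently with probability $p$, so

$Z(W_{2})=\sum_{u\in S_{0}(x,3)}e\bigl{(}W_{2},\\{u\\}\bigr{)}\cdot\mathbb{1}\bigl{[}u\in\hat{A}_{0}\bigr{]}$

is a sum of independent random variables, in which the summand corresponding to $u\in X_{i}$ takes values in $\\{0,i\\}$; Hoeffding’s inequality for sums with ranges of length $i$ then gives (39) with $M=\sum_{i=1}^{3K}i^{2}|X_{i}|$.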
Let $X_{\geqslant 2}$ be the set of elements of $S_{0}(x,3)$ having at least
two neighbours in $W_{2}$. Observe that $|X_{\geqslant
2}|\leqslant\binom{|W_{2}|}{2}\leqslant\frac{\gamma^{4}}{2}=o(sd)$, since any
two elements of $S_{0}(x,2)$ have at most one neighbour in $S_{0}(x,3)$ by
Property P3iii. Therefore, recalling (38), we have
$M=\sum_{i=1}^{3K}i^{2}|X_{i}|\leqslant|X_{1}|+o(sd)\leqslant(1+o(1))\zeta=(1+o(1))sd$
and with $t=(c\sigma d-3\gamma)s$ the right hand-side of (39) is
$\exp\left(-\frac{2s^{2}(c\sigma
d-3\gamma)^{2}}{(1+o(1))sd}\right)\leqslant\exp\left(-(2+o(1))sc^{2}\sigma^{2}d\right)=\exp(-(2+o(1))c^{2}s\log
d),$
since $\sigma d\gg\gamma$. This proves the claim. ∎
We are now ready to prove Lemma 5.1.
###### Proof of Lemma 5.1.
Suppose there is an $x\in\hat{A}_{3}\setminus\hat{A}_{2}$ and consider the
witness $(W_{1},W_{2})$ from Claim 7.1. Recall that every witness satisfies
$|W_{2}|=s=(\gamma-3K)^{2}/2K$. Since $W_{2}\subseteq N(W_{1})$ and $\gamma
d/s=\Theta(d/\gamma)=\Theta(\sqrt{d\vartheta})$, for each $W_{1}\subseteq
N(x)$ there are at most
$\binom{|N(W_{1})|}{s}\leqslant\binom{\gamma
d}{s}\leqslant\exp\Bigl{(}(1/2+o(1))s\log d\Bigr{)}$
choices for $W_{2}$ with $|W_{2}|=s$. Therefore, we may use Claim 7.2 and the
union bound to obtain
$\mathbb{P}\left[x\in\hat{A}_{3}\setminus\hat{A}_{2}\right]\leqslant\exp\Bigl{(}(1/2+o(1))s\log
d-(2+o(1))c^{2}s\log d\Bigr{)}.$
There is some $\varepsilon>0$ such that $c=\frac{1}{2}+\varepsilon$, and
therefore
$\mathbb{P}\left[x\in\hat{A}_{3}\setminus\hat{A}_{2}\right]\leqslant\exp(-(2\varepsilon+o(1))s\log
d)\leqslant\exp\left(-\frac{\varepsilon}{2K}\cdot\gamma^{2}\log d\right),$
and so Lemma 5.1 holds with $\beta=\frac{\varepsilon}{2K}$. ∎
### 7.2. Proof of Lemma 5.2
In this section we will prove Lemma 5.2, claiming
$\mathbb{P}[x\in\hat{A}_{2}\setminus\hat{A}_{1}]\leqslant\exp\left(-\sqrt{d(x)}\right).$
The proof is very similar to the proof of Lemma 5.1 in the previous section.
As in the proof of Claim 7.1, since $x\in\hat{A}_{2}\setminus\hat{A}_{1}$,
there exists a set $W^{\prime}\subseteq
N(x)\cap(\hat{A}_{1}\setminus\hat{A}_{0})$ of size $\gamma$. Take $W$ to be a
subset of $W^{\prime}\cap S_{0}(x,1)$ of size $\gamma-1$, which is possible by
Property P3i, and define
$\zeta^{\prime}:=e(W,S_{0}(x,2)).$
By Properties P2 and P3ii, each $w\in W\subseteq S_{0}(x,1)$ has at most $2K$
neighbours outside $S_{0}(x,2)$. Together with Property P1 this implies that
$\zeta^{\prime}=|W|(d+O(1)).$ (40)
Moreover, since every $w\in W$ is infected in the first round, we have
$|N(w)\cap S_{0}(x,2)\cap\hat{A}_{0}|\geqslant
d(w)/2-2\gamma+O(1)=d/2-2\gamma+O(1),$
and so
$Z^{\prime}(W)\coloneqq
e(W,S_{0}(x,2)\cap\hat{A}_{0})\geqslant|W|(d/2-2\gamma+O(1))\geqslant\zeta^{\prime}p+(c\sigma
d-3\gamma)|W|,$
observing that $\mathbb{E}[Z^{\prime}]=\zeta^{\prime}p$.
By Property P2, every vertex in $S_{0}(x,2)$ has at most $2K$ neighbours in
$W\subseteq B(x,2)$. If we let $X^{\prime}_{i}$ denote the set of elements of
$S_{0}(x,2)$ with $i$ neighbours in $W$ for $i\in[2K]$, we have that
$\zeta^{\prime}=\sum_{i=1}^{2K}i|X^{\prime}_{i}|.$
Similarly as in the proof of Claim 7.2, let $X^{\prime}_{\geqslant 2}$ be the
set of elements of $S_{0}(x,2)$ having at least two neighbours in $W$. We have
$|X^{\prime}_{\geqslant 2}|\leqslant\binom{|W|}{2}\leqslant\gamma^{2}=o(\gamma
d)$, since any two elements of $S_{0}(x,1)$ have at most one common neighbour
in $S_{0}(x,2)$ by Property P3iii, and therefore, by (40),
$\sum_{i=1}^{2K}i^{2}|X^{\prime}_{i}|\leqslant|X^{\prime}_{1}|+o(\gamma
d)\leqslant(1+o(1))\zeta^{\prime}=(1+o(1))\gamma d.$
By Lemma 2.3, for $t=|W|(c\sigma d-3\gamma)$ we have that
$\mathbb{P}[Z^{\prime}(W)\geqslant\mathbb{E}[Z^{\prime}]+t]\leqslant\exp\left(-\frac{2(\gamma-1)^{2}(c\sigma
d-3\gamma)^{2}}{(1+o(1))\gamma d}\right)=\exp\left(-(2+o(1))c^{2}\gamma\log
d\right).$ (41)
Moreover, since $W\subseteq N(x)$ and
$d/(\gamma-1)=\Theta(\sqrt{d\vartheta})$, there are at most
$\binom{d}{\gamma-1}\leqslant\exp\Bigl{(}(1/2+o(1))\gamma\log d\Bigr{)}$ (42)
choices for $W$. Combining (41) and (42) and using the fact that $c>1/2$ we
obtain
$\mathbb{P}[x\in\hat{A}_{2}\setminus\hat{A}_{1}]\leqslant\exp\left(-(1+o(1))2c^{2}\gamma\log
d+(1/2+o(1))\gamma\log d\right)\leqslant\exp(-\Omega(\gamma\log d)).$
Since $\log d\gg\vartheta(d)$, it follows that $\gamma\log d\gg\sqrt{d}$,
finishing the proof of Lemma 5.2.
## 8\. Examples of geometric graphs
We dedicate this section to the illustration of the class
$\mathcal{H}=\bigcup_{K\in\mathbb{N}}\mathcal{H}(K)$ of high-dimensional
geometric graphs by giving some examples of graph classes that are contained
in $\mathcal{H}$. In addition to the formal proofs that the required
properties are satisfied, we provide some intuition about the _local
coordinate system_ that makes these graphs _high-dimensional_.
### 8.1. Cartesian Product graphs
###### Definition 8.1.
For $n\in\mathbb{N}$ let $(H_{i})_{i\in[n]}$ be a sequence of connected
graphs, called _base graphs_. We define the _(Cartesian) product graph_
$G=\mathbin{\Box}_{i=1}^{n}H_{i}$ as the graph with vertex set
$V(G)\coloneqq\prod_{i=1}^{n}V(H_{i})=\\{(x_{1},\dots,x_{n})\mid x_{i}\in
V(H_{i})\text{ for all }i\in[n]\\}$
and with edge set
$E(G)\coloneqq\left\\{\bigl{\\{}x,y\bigr{\\}}\colon\text{ there is some
}i\in[n]\text{ such that }\\{x_{i},y_{i}\\}\in E(H_{i})\text{ and
}x_{j}=y_{j}\text{ for all }j\neq i\right\\}.$
In the case of product graphs, there is a clear coordinate system coming from
the product structure.
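To make the coordinate system concrete, here is a minimal sketch (ours, not from the paper; it assumes the Python library networkx) that builds a small Cartesian product graph and checks the degree identity $d_{G}(x)=\sum_{i\in[n]}d_{H_{i}}(x_{i})$ used in the proofs below. The factors chosen are arbitrary illustrative examples.

```python
# Minimal sketch (illustrative, not from the paper): Cartesian product graphs
# with networkx, and the degree identity behind Property P1.
from functools import reduce

import networkx as nx

# base graphs H_1, ..., H_n of bounded order (arbitrary small examples)
factors = [nx.path_graph(2), nx.path_graph(3), nx.cycle_graph(3)]
G = reduce(nx.cartesian_product, factors)  # vertices are nested tuples ((x1,x2),x3)

def coords(v):
    """Flatten networkx's nested product labels into a tuple (x1, ..., xn)."""
    return coords(v[0]) + (v[1],) if isinstance(v[0], tuple) else v

# d_G(x) = sum_i d_{H_i}(x_i): each coordinate contributes its factor degree
for x in G.nodes:
    assert G.degree(x) == sum(H.degree(c) for H, c in zip(factors, coords(x)))

print(G.number_of_nodes(), "vertices; all degree checks passed")
```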
###### Lemma 8.2.
Let $C>1$ be a constant and let $H_{1},\ldots,H_{n}$ be graphs such that
$1<|H_{i}|\leqslant C$ for all $i\in[n]$. Then
$G_{n}=\mathbin{\Box}_{i=1}^{n}H_{i}\in\mathcal{H}(C)$ for all
$n\in\mathbb{N}$.
###### Proof.
Write $G\coloneqq G_{n}$. Given distinct vertices $x,y\in V(G)$, let us define
$I(x,y)\coloneqq\\{i\in[n]\colon x_{i}\neq y_{i}\\},$
noting that $1\leqslant|I(x,y)|\leqslant\operatorname{dist}(x,y)$.
###### Proof of Property P1.
This property of (Cartesian) product graphs was already observed by Lichev
[44]. Indeed, given $x\in V(G)$, $\ell\in\mathbb{N}$ and $y\in S(x,\ell)$, we
see that
$|d(x)-d(y)|=\left|\sum_{i\in[n]}d_{H_{i}}(x_{i})-\sum_{i\in[n]}d_{H_{i}}(y_{i})\right|=\left|\sum_{i\in
I(x,y)}(d_{H_{i}}(x_{i})-d_{H_{i}}(y_{i}))\right|\leqslant C|I(x,y)|\leqslant
C\ell.$
###### Proof of Property P2.
Given $x\in V(G)$, $\ell\in\mathbb{N}$ and $y\in S(x,\ell)$, the neighbours of
$y$ in $B(x,\ell)$ must differ from $y$ in coordinates in $I(x,y)$. Hence
there are at most $C\ell$ neighbours of $y$ in $B(x,\ell)$ as
$|I(x,y)|\leqslant\ell$.
###### Proof of Property P3.
Given $x\in V(G)$ let
$D\coloneqq\\{y\in V(G)\;:\;|I(x,y)|\neq\operatorname{dist}(x,y)\\},$
so that for all $\ell\in\mathbb{N}$
$S_{0}(x,\ell)\coloneqq S(x,\ell)\setminus D=\\{y\in
S(x,\ell)\;:\;|I(x,y)|=\ell\\}.$
Since a vertex in $S(x,\ell)\cap D$ differs from $x$ in at most $\ell-1$
coordinates it is clear that $|S(x,\ell)\cap D|\leqslant
C^{\ell-1}\binom{n}{\ell-1}\leqslant C^{\ell-1}d^{\ell-1}$, and so P3i holds.
Furthermore, if $y\in S_{0}(x,\ell)$, then a neighbour of $y$ in $D$ must
differ from $y$ in some coordinate in $I(x,y)$, and hence there are at most
$C\ell$ of them, and so P3ii holds. Finally, if $w,y\in S_{0}(x,\ell)$ are
distinct, and have a common neighbour $z\in S_{0}(x,\ell+1)$, then it is easy
to verify that $|I(x,w)\cup I(x,y)|=\ell+1$ and hence there is a unique common
neighbour $z\in S_{0}(x,\ell+1)$ which agrees with $w$ on $I(x,w)$, with $y$
on $I(x,y)$ and with $x$ on $[n]\setminus(I(x,w)\cup I(x,y))$. Hence P3iii holds.
###### Proof of Property P4.
We induct on the dimension $n$ of the product graph $G_{n}$. For the base
case, it is easy to verify that if $|V(G)|\leqslant C$, then
$G\in\mathcal{H}(C)$. Let $x\in V(G),\ell\in\mathbb{N}$ and $y\in S(x,\ell)$.
Let $G(y)\coloneqq\left(\mathbin{\Box}_{i\not\in
I(x,y)}H_{i}\right)\mathbin{\Box}\left(\mathbin{\Box}_{i\in
I(x,y)}\\{y_{i}\\}\right)$. Clearly $y\in V(G(y))$ and $V(G(y))\cap
B(x,\ell-1)=\varnothing$.
Furthermore, for any $w\in V(G(y))$,
$|d_{G(y)}(w)-d_{G}(w)|=\left|\sum_{i\not\in
I(x,y)}d_{H_{i}}(w_{i})-\sum_{i\in[n]}d_{H_{i}}(w_{i})\right|=\sum_{i\in
I(x,y)}d_{H_{i}}(w_{i})\leqslant C|I(x,y)|\leqslant C\ell.$
Finally, since $|I(x,y)|\geqslant 1$, $G(y)$ is a product graph of dimension
at most $n-1$, and so by the induction hypothesis $G(y)\in\mathcal{H}(C)$.
###### Proof of Property P5.
Let $x\in V(G)$, $\ell\in\mathbb{N}$ and $y\in S_{0}(x,\ell)$. Let
$Z\coloneqq B(y,2\ell-1)\cap S_{0}(x,\ell)=\\{z\in
S_{0}(x,\ell)\;:\;\operatorname{dist}(z,y)\leqslant 2\ell-1\\}.$
Note that, for any $x\in V(G)$, $y\in S_{0}(x,\ell)$ and $z\in S_{0}(x,\ell)$
we have $I(x,z)\mathbin{\triangle}I(x,y)\subseteq I(y,z)$ and since
$|I(x,z)|=|I(x,y)|=\ell$, we have
$|I(x,z)\mathbin{\triangle}I(x,y)|=2|I(x,z)\setminus I(x,y)|$. Hence,
$\operatorname{dist}(y,z)\geqslant|I(y,z)|\geqslant|I(x,z)\mathbin{\triangle}I(x,y)|=2|I(x,z)\setminus
I(x,y)|.$ (43)
Rearranging this we have
$|I(x,z)\setminus I(x,y)|\leqslant\tfrac{1}{2}\operatorname{dist}(y,z)<\ell$
for every $z\in Z$, and hence
$Z\subseteq Z^{\prime}\coloneqq\\{z\in S(x,\ell)\colon|I(x,z)\setminus I(x,y)|\leqslant\ell-1\\}.$
The choices for $z\in Z^{\prime}$ may be bounded by first choosing $I(x,z)$ of
size $\ell$ with $I(x,z)\cap I(x,y)\neq\varnothing$ and then choosing the
values for the coordinates in $I(x,z)$. Therefore,
$|Z^{\prime}|\leqslant\ell\binom{n-1}{\ell-1}C^{\ell-1}\leqslant\ell
C^{\ell-1}d(x)^{\ell-1}$
since $d(x)\geqslant n$.
###### Proof of Property P6.
Finally, it is clear that $|V(G)|\leqslant C^{n}$ and $\delta(G)\geqslant n$,
and hence $|V(G)|\leqslant\exp(\log
C\cdot\delta(G))\leqslant\exp(C\delta(G))$.
This concludes the proof of Lemma 8.2. ∎
###### Remark 8.3.
In particular, the $n$-dimensional hypercube is in $\mathcal{H}(2)$ and we can
pick $D=\varnothing$ in this case.
Note that, under the condition of Lemma 8.2, all degrees in $G_{n}$ are in
$\Theta(n)$, and so $\delta(G_{n})$ and $\Delta(G_{n})$ differ by at most a
constant multiplicative factor. In particular, Theorem 1.4 implies that for
_any_ such product graph, the critical window has width
$O\left(\sqrt{\frac{\log n}{n}}\right)$.
### 8.2. The middle layer and odd graphs
###### Definition 8.4.
The _middle layer graph_ $M_{n}$ is the graph whose vertices are all vectors
of length $2n-1$ where either $n$ or $n-1$ many entries are $1$, while all
other entries are $0$. Two vertices are connected if they differ in exactly
one coordinate.
We note that the middle layer graph is not a (Cartesian) product graph, since
it can be seen to have girth six. Note that the middle layer graph is a
subgraph of a hypercube, i.e., $M_{n}\subseteq Q_{2n-1}$, and again here there
is a clear coordinate system underlying the graph structure. Furthermore,
$M_{n}$ is an isometric subgraph of $Q_{2n-1}$, i.e.,
$\operatorname{dist}_{M_{n}}(u,v)=\operatorname{dist}_{Q_{2n-1}}(u,v)$ for all
$u,v\in V(M_{n})$. Since $Q_{2n-1}\in\mathcal{H}(2)$, this allows us to
transfer some properties from the hypercube directly.
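As an aside, the following minimal sketch (ours, not from the paper; it assumes networkx) constructs $M_{n}$ concretely, representing vertices as subsets of $[2n-1]$ of sizes $n-1$ and $n$ that are adjacent when one contains the other, and checks $n$-regularity, bipartiteness and the absence of $4$-cycles, consistent with the girth-six claim above.

```python
# Minimal sketch (illustrative, not from the paper): the middle layer graph M_n.
from itertools import combinations

import networkx as nx

def middle_layer_graph(n):
    ground = range(2 * n - 1)
    lower = [frozenset(c) for c in combinations(ground, n - 1)]
    upper = [frozenset(c) for c in combinations(ground, n)]
    G = nx.Graph()
    # differing in exactly one coordinate <=> one set properly contains the other
    G.add_edges_from((a, b) for a in lower for b in upper if a < b)
    return G

M3 = middle_layer_graph(3)                       # 20 vertices
assert all(d == 3 for _, d in M3.degree)         # M_n is n-regular
assert nx.is_bipartite(M3)
# no two vertices share two common neighbours, so there is no 4-cycle;
# together with bipartiteness this is consistent with girth six
assert all(len(set(M3[u]) & set(M3[v])) <= 1
           for u, v in combinations(list(M3.nodes), 2))
print("checks passed")
```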
###### Lemma 8.5.
The middle layer graph $M_{n}\in\mathcal{H}(4)$ for all $n\in\mathbb{N}$.
###### Proof.
Let $G=M_{n}$. Given distinct vertices $x,y\in V(G)$, let us define
$I(x,y)\coloneqq\\{i\in[n]\colon x_{i}\neq y_{i}\\},$
noting that $|I(x,y)|=\operatorname{dist}(x,y)$.
###### Proof of Property P1.
$G$ is $n$-regular.
###### Proof of Property P2.
The property is preserved under taking isometric subgraphs.
###### Proof of Property P3.
For every $x\in V(G)$ let $D=\varnothing$, so that for all $\ell\in\mathbb{N}$
we have $S_{0}(x,\ell)=S(x,\ell)$ and thus P3i and P3ii hold trivially. Since P3iii is satisfied by the hypercube $Q_{2n-1}$ with $D=\varnothing$, and is
preserved under taking isometric subgraphs, it also holds in $M_{n}$ with
$D=\varnothing$.
###### Proof of Property P4.
We induct on $n$. Let $x\in V(G),\ell\in\mathbb{N}$ and $y\in S(x,\ell)$.
Assume first that $\ell$ is even. Let $V(y)=\prod_{i\in
I(x,y)}\\{y_{i}\\}\times\prod_{i\notin I(x,y)}\\{0,1\\}$ and set
$G(y)=G[V(y)]$, i.e., we fix the coordinates where $y$ differs from $x$ and
let the other coordinates vary. It is easy to verify that $G(y)$ is isomorphic
to $M_{n-\ell/2}$ and so by the induction hypothesis, we have
$G(y)\in\mathcal{H}(4)$. Furthermore, since $M_{n}$ is $n$-regular and
$M_{n-\ell/2}$ is $(n-\ell/2)$-regular, it follows that for any $w\in
V(G(y))$, we have $|d_{G(y)}(w)-d_{G}(w)|=\ell/2$. Finally, it is simple to
check that $y\in V(G(y))$ and $V(G(y))\cap B(x,\ell-1)=\varnothing$.
The case when $\ell$ is odd is similar: we fix an extra coordinate to balance the number of fixed $0$ coordinates and the number of fixed $1$ coordinates.
###### Proof of Property P5.
The property is satisfied by the hypercube $Q_{2n-1}$ with $D=\varnothing$ and
$K=2$, and is preserved under taking isometric subgraphs. However, the degrees
of vertices are not the same in both of these graphs. Nevertheless, since
$Q_{2n-1}$ is $(2n-1)$-regular and $M_{n}$ is $n$-regular, it follows that for
each $y\in S_{M_{n}}(x,\ell)$,
$|B_{M_{n}}(y,2\ell-1)\cap S_{M_{n}}(x,\ell)|\leqslant\ell
2^{\ell-1}(2n-1)^{\ell-1}\leqslant\ell 4^{\ell-1}d_{M_{n}}(y)^{\ell-1}.$
###### Proof of Property P6.
Since $V(M_{n})\subseteq V(Q_{2n-1})$, we have $|V(M_{n})|\leqslant 2^{2n-1}$
and it follows that $|V(G)|\leqslant\exp(4\delta(M_{n}))$.
This concludes the proof of Lemma 8.5. ∎
We also consider the _odd graph_ , which is a special _Kneser graph_.
###### Definition 8.6.
For $n,k\in\mathbb{N}$ with $n\geqslant k$ the _Kneser graph_ $K(n,k)$ is the
graph with vertex set $V(K(n,k))=[n]^{(k)}=\binom{[n]}{k}$ consisting of all
the $k$-element subsets of $[n]$, where two $k$-element subsets are adjacent
in $K(n,k)$ if they are disjoint.
Instead of directly proving that $O_{n}\coloneqq K(2n-1,n-1)$ lies in
$\mathcal{H}$ we will use the fact, which is essentially folklore, that
$M_{n}$ and $O_{n}$ are locally isomorphic as follows.
###### Lemma 8.7.
Let $x\in[2n-1]^{(n-1)}$ and let $\ell<n-1$. Then $B_{O_{n}}(x,\ell)\cong
B_{M_{n}}(x,\ell)$.
###### Proof.
Let $x\in V(O_{n})$, so $x$ is a subset of $[2n-1]$ of size $n-1$. We define a
map between the balls $B_{O_{n}}(x,\ell)$ and $B_{M_{n}}(x,\ell)$. For each
$k\in\bigl{[}\lfloor\frac{\ell}{2}\rfloor\bigr{]}$ and $y\in S_{O_{n}}(x,2k)$
we map the set $y$ to its indicator function in $\\{0,1\\}^{2n-1}$. For every
$k\in\bigl{[}\lfloor\frac{\ell-1}{2}\rfloor\bigr{]}$ and $y\in
S_{O_{n}}(x,2k+1)$ we map $y$ to the indicator function of its complement
$[2n-1]\setminus y$. Note that the complement has size $n$ and hence
corresponds to a vertex in $V(M_{n})$.
The map obtained in this way is indeed an isomorphism: If $y,z\in
B_{O_{n}}(x,\ell)$ are such that $\\{y,z\\}\in E(O_{n})$, then $y\cap
z=\varnothing$. Since the odd girth of $O_{n}$ is $2n-1$, without loss of
generality $y\in S_{O_{n}}(x,2k)$ for some
$k\in\bigl{[}\lfloor\frac{\ell}{2}\rfloor\bigr{]}$ and $z\in
S_{O_{n}}(x,2k^{\prime}+1)$ for some
$k^{\prime}\in\bigl{[}\lfloor\frac{\ell-1}{2}\rfloor\bigr{]}$, where
$k^{\prime}=k\pm 1$. But then $y\subseteq[2n-1]\setminus z$ and hence the
image of the edge $\\{y,z\\}$ under the mapping is an edge in $E(M_{n})$. A
similar argument shows that every edge in $B_{M_{n}}(x,\ell)$ is the image of
an edge in $B_{O_{n}}(x,\ell)$ under the mapping and hence, this mapping gives
an isomorphism from $B_{O_{n}}(x,\ell)$ to $B_{M_{n}}(x,\ell)$ for any
$\ell<n-1$. ∎
So the odd graph $O_{n}$ is ‘locally’ isomorphic to a graph, namely the middle layer graph $M_{n}$, which we have already shown lies in $\mathcal{H}$.
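As a quick sanity check on Lemma 8.7, here is a minimal numerical sketch (ours, not from the paper; it assumes networkx) that verifies the ball isomorphism $B_{O_{n}}(x,\ell)\cong B_{M_{n}}(x,\ell)$ for $n=4$ and all $\ell<n-1$.

```python
# Minimal sketch (illustrative, not from the paper): Lemma 8.7 for small n.
from itertools import combinations

import networkx as nx

def middle_layer_graph(n):
    lower = [frozenset(c) for c in combinations(range(2 * n - 1), n - 1)]
    upper = [frozenset(c) for c in combinations(range(2 * n - 1), n)]
    G = nx.Graph()
    G.add_edges_from((a, b) for a in lower for b in upper if a < b)
    return G

def odd_graph(n):  # O_n = K(2n-1, n-1): (n-1)-sets, adjacent iff disjoint
    verts = [frozenset(c) for c in combinations(range(2 * n - 1), n - 1)]
    G = nx.Graph()
    G.add_edges_from((a, b) for a, b in combinations(verts, 2) if not a & b)
    return G

n = 4
O, M = odd_graph(n), middle_layer_graph(n)
x = frozenset(range(n - 1))      # any (n-1)-set; a vertex of both graphs
for l in range(1, n - 1):        # Lemma 8.7 requires l < n-1
    assert nx.is_isomorphic(nx.ego_graph(O, x, radius=l),
                            nx.ego_graph(M, x, radius=l))
print("balls agree for l = 1, ...,", n - 2)
```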
As Properties P1, P2, P3 and P5 are all of local nature, they continue to hold
in $O_{n}$ for $\ell<n-1$, which is already sufficient for our proof.
Furthermore, since the diameter of $O_{n}$ is $n-1$, there is only one further
case to consider, in which all of the properties above are trivially satisfied
with say $K=4$, for which also Property P6 trivially holds.
Finally, it remains to show that Property P4 holds in $O_{n}$. Let $x\in
V(O_{n})$, $\ell\in\mathbb{N}$ and $y\in S(x,\ell)$.
Assume first that $\ell=2k$ is even. By the structure of the odd graph, the
sets $a\coloneqq x\setminus y$ and $b\coloneqq y\setminus x$ have size $k$.
Pick a set $a^{\prime}\subseteq x\cap y$ and a set
$b^{\prime}\subseteq[2n-1]\setminus(x\cup y)$, both of size $k$ and set
$V(y)\coloneqq\left\\{v\in V(O_{n})\;:\;\big{(}(b\cup a^{\prime}\subseteq v)\wedge((b^{\prime}\cup a)\cap v=\varnothing)\big{)}\;\vee\;\big{(}(b^{\prime}\cup a\subseteq v)\wedge((b\cup a^{\prime})\cap v=\varnothing)\big{)}\right\\}.$
Let $G(y)$ be the induced subgraph of $O_{n}$ on the vertex set $V(y)$. Then
$y\in V(G(y))$ as $b\cup a^{\prime}\subseteq y\text{ and }(b^{\prime}\cup
a)\cap y=\varnothing$. For each vertex $w\in V(y)$ we obtain $|w\cap
x|\geqslant k$ and $|w\setminus x|\geqslant k$. The former inequality implies
that $\operatorname{dist}(x,w)\geqslant 2k+1$ if $\operatorname{dist}(x,w)$ is
odd, and the latter implies that $\operatorname{dist}(x,w)\geqslant 2k$
otherwise, and thus $w\notin B(x,\ell-1)$. Furthermore, $G(y)$ can be seen to
be isomorphic to $M_{n-\ell}$ under the mapping which takes each $w\in V(y)$
with $b^{\prime}\cup a\subseteq w\text{ and }(b\cup a^{\prime})\cap
w=\varnothing$ to its complement. Thus by Lemma 8.5 we have
$G(y)\in\mathcal{H}(4)$ and since $G$ is $n$-regular and $M_{n-\ell}$ is
$(n-\ell)$-regular we get $|d_{G(y)}(w)-d_{G}(w)|=\ell\leqslant 4\ell$ for
each $w\in V(y)$.
The case when $\ell=2k+1$ is similar. In this case, $a\coloneqq x\cap y$ is a
set of size $k$ and $b\coloneqq[2n-1]\setminus(x\cup y)$ is a set of size
$k+1$. Pick a set $a^{\prime}\subseteq x\setminus y$ of size $k$ and a set
$b^{\prime}\subseteq y\setminus x$ of size $k+1$, and consider the set $V(y)$
as defined above. Taking $G(y)$ to be the induced subgraph of $O_{n}$ on
$V(y)$, we note that $y\in V(G(y))$ as $b^{\prime}\cup a\subseteq y\text{ and
}(b\cup a^{\prime})\cap y=\varnothing$. Again, for each vertex $w\in V(y)$ we
obtain $|w\cap x|\geqslant k$ and $|w\setminus x|\geqslant k+1$ and thus
$w\notin B(x,\ell-1)$. As before, $G(y)$ can be seen to be isomorphic to
$M_{n-\ell}$ under the mapping which takes each $w\in V(y)$ with $b\cup a^{\prime}\subseteq w\text{ and }(b^{\prime}\cup a)\cap w=\varnothing$ to its complement, and the degrees in $G$ and $G(y)$ differ by $\ell$.
###### Lemma 8.8.
The odd graph $O_{n}\in\mathcal{H}(4)$ for all $n\in\mathbb{N}$.
### 8.3. The folded hypercube
###### Definition 8.9.
For $n\in\mathbb{N}$ the _folded hypercube_ $\tilde{Q}_{n}$ is a graph on the
same vertex set as the $(n-1)$-dimensional hypercube, i.e.,
$V(\tilde{Q}_{n})=\\{0,1\\}^{n-1}$. It is obtained from $Q_{n-1}$ by joining
each vertex $x$ to its antipodal vertex $\tilde{x}$, where $\tilde{x}$ is the
vertex differing from $x$ in every coordinate. That is, $\tilde{Q}_{n}$ is
obtained by adding all edges $\\{x,\tilde{x}\\}$ to the edge set of $Q_{n-1}$.
We can think of the underlying coordinate system here as being
$\\{0,1\\}^{n}$, where the first $n-1$ coordinates represent the ‘actual’
vertex $v\in V(Q_{n-1})$ and the final coordinate represents movement to the
antipodal point.
We note that it is relatively easy to see that the folded hypercube is not
expressible as the product of bounded order graphs. Indeed, as we will see
shortly, $\tilde{Q}_{n}$ is locally isomorphic to $Q_{n}$, and so every
connected subgraph of $\tilde{Q}_{n}$ of bounded order is a subgraph of a
hypercube. However, it is easy to show that any product graph in which every
factor is a subgraph of a hypercube is at most as dense as a hypercube, and
$\tilde{Q}_{n}$ has strictly more edges than $Q_{n-1}$. In fact, with more
care, it can be shown that $\tilde{Q}_{n}$ is not expressible as a Cartesian
product even with factors of unbounded order.
As with $M_{n}$ and $O_{n}$, the folded hypercube is ‘locally’ isomorphic to a graph, namely the hypercube $Q_{n}$, which we have already shown lies in $\mathcal{H}$. Indeed the following is shown in [43].
###### Lemma 8.10.
Let $x\in\\{0,1\\}^{n-1}$ and $y\in\\{0,1\\}^{n}$ and let
$\ell<\left\lfloor\frac{n}{2}\right\rfloor$. Then
$B_{\tilde{Q}_{n}}(x,\ell)\cong B_{Q_{n}}(y,\ell)$.
As before, since by Remark 8.3 we have $Q_{n}\in\mathcal{H}(2)$, and
Properties P1, P2, P3 and P5 are all of local nature, they continue to hold in
$\tilde{Q}_{n}$ for $\ell<\left\lfloor\frac{n}{2}\right\rfloor$, which is
already sufficient for our proof. Furthermore, since the diameter of
$\tilde{Q}_{n}$ is $\left\lfloor\frac{n}{2}\right\rfloor$, there is only one
further case to consider, in which all of the properties are trivially
satisfied with say $K=3$, for which also Property P6 trivially holds.
Finally, it is a simple exercise to show that Property P4 holds in
$\tilde{Q}_{n}$. Indeed, since all vertices at distance at most $\ell-1$ from
a vertex $x$ differ from $x$ or $\tilde{x}$ in at most $\ell-1$ coordinates,
given a vertex $w\in S_{\tilde{Q}_{n}}(x,\ell)$ it is easy to find a subgraph
of $\tilde{Q}_{n}$ containing $w$ which is isomorphic to $Q_{n-2\ell+2}$ and
disjoint from $B_{\tilde{Q}_{n}}(x,\ell-1)$ by fixing appropriately chosen
coordinate sets similarly as in the proof of Lemma 8.2.
###### Lemma 8.11.
The folded hypercube $\tilde{Q}_{n}\in\mathcal{H}(3)$ for all
$n\in\mathbb{N}$.
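As a sanity check on the local structure used above, here is a minimal sketch (ours, not from the paper; it assumes networkx) that builds $\tilde{Q}_{n}$ for $n=7$, confirms $n$-regularity, and verifies the ball isomorphism of Lemma 8.10 for $\ell<\lfloor n/2\rfloor$.

```python
# Minimal sketch (illustrative, not from the paper): the folded hypercube is
# locally a hypercube (Lemma 8.10).
import networkx as nx

def folded_hypercube(n):
    G = nx.hypercube_graph(n - 1)                 # vertices: 0/1-tuples, length n-1
    for v in list(G.nodes):
        G.add_edge(v, tuple(1 - c for c in v))    # join v to its antipode
    return G

n = 7
FQ, Q = folded_hypercube(n), nx.hypercube_graph(n)
assert all(d == n for _, d in FQ.degree)          # (n-1) cube edges + 1 antipodal
x, y = (0,) * (n - 1), (0,) * n
for l in range(1, n // 2):                        # Lemma 8.10 needs l < floor(n/2)
    assert nx.is_isomorphic(nx.ego_graph(FQ, x, radius=l),
                            nx.ego_graph(Q, y, radius=l))
print("local isomorphism verified for l = 1, ...,", n // 2 - 1)
```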
## 9\. Discussion
In this paper we extended the results of Balogh, Bollobás and Morris [11] to a
large class of high-dimensional geometric graphs. It is perhaps natural to ask
what the limits of our proof methods are, and what structural conditions are
necessary to exhibit a similar behaviour in terms of the location and width of
the critical window, which in both cases seem to be controlled by the local
structure of the graphs.
###### Question 9.1.
Let $G$ be an $n$-regular graph. Under what assumptions are the first two
terms in the expansion of the critical probability of majority bootstrap
percolation in $G$ given by $\frac{1}{2}-\frac{1}{2}\sqrt{\frac{\log n}{n}}$?
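To illustrate the kind of transition Question 9.1 asks about, here is a minimal Monte Carlo sketch (ours, not from the paper; it assumes numpy, and all parameters are illustrative) of majority bootstrap percolation on $Q_{n}$ near the predicted critical probability. At such small $n$ finite-size effects are substantial, so the output is only indicative.

```python
# Minimal sketch (illustrative, not from the paper): majority bootstrap
# percolation on the hypercube Q_n around p = 1/2 - (1/2) sqrt(log n / n).
import math

import numpy as np

rng = np.random.default_rng(0)

def percolates(n, p):
    """Infect each vertex of Q_n independently with probability p, then
    repeatedly infect vertices with at least n/2 infected neighbours."""
    N = 1 << n
    nbrs = np.array([[v ^ (1 << i) for i in range(n)] for v in range(N)])
    infected = rng.random(N) < p
    while True:
        counts = infected[nbrs].sum(axis=1)       # infected neighbours of each v
        new = infected | (counts >= n / 2)        # majority rule, monotone
        if new.sum() == infected.sum():
            return bool(new.all())
        infected = new

n, trials = 12, 20
pc = 0.5 - 0.5 * math.sqrt(math.log(n) / n)
for p in (pc - 0.05, pc, pc + 0.05):
    hits = sum(percolates(n, p) for _ in range(trials))
    print(f"p = {p:.3f}: fully infected in {hits}/{trials} trials")
```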
Our proof methods seem to rely on the high-dimensional geometric structure of the graphs; however, there are many other graphs which seem inherently high-dimensional that our results do not cover. For example, the _halved cube_ is
the graph on $\\{0,1\\}^{n}$ where two vertices are adjacent if they have
Hamming distance exactly two. Geometrically, this is the $1$-skeleton of the
polytope constructed from an _alternation_ of a hypercube. Whilst this graph
is regular and highly symmetric, the presence of many triangles, and indeed
large cliques in every neighbourhood, cause the property of bounded backwards
expansion (Property P2) to fail and so the graph does not lie in the class
$\mathcal{H}$ as defined in Section 3. Furthermore, motivated by the case of
the middle layer graph and the odd graph, it would be interesting to know if
similar behaviour is present in (bipartite) Kneser graphs with a larger range
of parameters or in other graphs arising from intersecting set systems such as
Johnson graphs. Here, whilst the graphs still display a sort of fractal self-
symmetry, an issue arises with the quantitative aspects of Property P4, since
the degrees in these projections drop too quickly and this effect dominates
the variations in the number of infected vertices in a vertex’s neighbourhood.
Another interesting example comes from the _permutahedron_ , a well-studied
graph which like the hypercube has many equivalent ‘high-dimensional’
representations. Here, whilst Properties P1, P2, P3, P4 and P5 are satisfied,
the graph is too large to satisfy Property P6, its order being superexponential in its regularity. In a forthcoming paper [21] we will show that the analogue of
Corollary 1.5 also holds in the permutahedron.
In comparison to Theorem 1.3, we give a weaker bound on the width of the
critical window. As pointed out in Remark 6.5, our methods recover the upper
bound in Theorem 1.3, that is, if $\lambda>\frac{1}{2}$ and
$p=\frac{1}{2}-\frac{1}{2}\sqrt{\frac{\log\Delta(G_{n})}{\Delta(G_{n})}}+\frac{\lambda\log\log\Delta(G_{n})}{\sqrt{\Delta(G_{n})\log\Delta(G_{n})}},$
then $\lim_{n\to\infty}\Phi(p,G_{n})=1$. However, in the $0$-statement
slightly stronger structural assumptions and more delicate counting arguments
are required to further bound the width in this manner. Since some of these
structural assumptions do not hold in all of our examples (for example in
$M_{n}$ and $O_{n}$), and also for ease of presentation, we have given a
simpler exposition which just determines the first two terms in the expansion
of the critical probability. However, these extra assumptions and counting
arguments will be necessary in order to determine the threshold for the
permutahedron, and so the details of how to recover the stronger bound on the
width of the critical window will be covered in [21].
In the light of the dependence of our bounds on the minimum and maximum
degree, it is also interesting to ask what can happen when the host graph is
irregular. Whilst Theorem 1.4 implies that the critical window is still
bounded away from $\frac{1}{2}$ for graphs in $\mathcal{H}$, it is not clear
if the second term in the expansion of the critical probability is controlled
by the maximum degree, the minimum degree or the average degree of the graph
(if by any of them). In the particular case where $G$ is a product of many
_stars_ , we will show in a forthcoming paper [21] that the upper bound in
Theorem 1.4 is in fact tight, but it is not clear if this behaviour is
universal even in irregular product graphs.
It is perhaps also interesting to consider how the process evolves with other
non-constant infection thresholds. For example, if we take $r(v)=\alpha\cdot
d(v)$ for some constant $\alpha>0$, then the arguments in this paper (with the
application of Hoeffding’s inequality in (39) replaced by an application of
[45, Theorem 2.7]) show that for $d$-regular graphs in $\mathcal{H}$ the
$r$-neighbour bootstrap percolation process undergoes a similar transition
around
$p=\alpha-\sqrt{\alpha(1-\alpha)}\cdot\sqrt{\frac{\log d}{d}},$
and likely this will remain true when $\alpha$ is a very slowly shrinking
function of $d$. It would be interesting to consider what happens for smaller
$\alpha$, for example $d^{-\beta}$ for some $0<\beta<1$.
###### Question 9.2.
Let $G$ be a $d$-regular high-dimensional graph and consider the
$r$-neighbour bootstrap percolation process where $r=\sqrt{d}$. What can we
say about the location and width of the critical window?
In another direction, for small constant values of the infection threshold
$r$, the average case behaviour of the $r$-neighbour bootstrap percolation
process on the hypercube has also been considered, where in particular for
$r=2$ a threshold for percolation was determined by Balogh and Bollobás [6]
and a sharper threshold was shown by Balogh, Bollobás and Morris [9]. It would
be interesting to know if a corresponding threshold could be determined in
other high-dimensional graphs, see [41] for an analysis of bootstrap
percolation on Hamming graphs. Moreover, even for the hypercube $Q_{n}$, very
little is known about the typical behaviour of the process for $r\geqslant 3$,
see for example [9, Conjecture 6.3].
The majority bootstrap percolation process has also been studied in the
binomial random graph $G(m,q)$ [34, 52]. Perhaps surprisingly, in [34] it is
shown that if the edge probability $q$ is close to the connectivity threshold,
then the first two terms in the expansion of the critical probability for
$G(m,q)$ agree with those given by Bollobás, Balogh and Morris for the
hypercube $Q_{n}$ in Theorem 1.3, taking $n=(m-1)q$ to be the expected degree
of a vertex. The authors conjecture that this behaviour is in fact universal
to all approximately regular graphs with a similar density to the hypercube
and sufficiently strong expansion properties. In this context, it would be
interesting to study the majority bootstrap percolation process on random
subgraphs of high-dimensional graphs, in particular of the hypercube, above
the connectivity threshold.
###### Question 9.3.
Let $(Q_{n})_{q}$ denote a random subgraph of the hypercube $Q_{n}$, in which
each edge of $Q_{n}$ is retained independently with probability $q\in(0,1)$.
What is the critical probability for majority bootstrap percolation on
$(Q_{n})_{q}$ for $q>\frac{1}{2}$?
### Acknowledgements
The authors were supported in part by the Austrian Science Fund (FWF)
[10.55776/{DOC183, F1002, P36131, W1230}]. For the purpose of open access, the
authors have applied a CC BY public copyright licence to any Author Accepted
Manuscript version arising from this submission.
## References
* [1] M. Aizenman and J. L. Lebowitz “Metastability effects in bootstrap percolation” In _J. Phys. A_ 21.19, 1988, pp. 3801–3813 URL: http://stacks.iop.org/0305-4470/21/3801
* [2] M. Ajtai, J. Komlós and E. Szemerédi “Largest random component of a $k$-cube” In _Combinatorica_ 2.1, 1982, pp. 1–7 DOI: 10.1007/BF02579276
* [3] N. Alon and J. H. Spencer “The probabilistic method”, Wiley Series in Discrete Mathematics and Optimization John Wiley & Sons, Inc., Hoboken, NJ, 2016, pp. xiv+375
* [4] A. El-Amawy and S. Latifi “Properties and performance of folded hypercubes” In _IEEE Transactions on Parallel & Distributed Systems_ 2.01 IEEE Computer Society, 1991, pp. 31–42
* [5] H. Amini “Bootstrap percolation in living neural networks” In _J. Stat. Phys._ 141.3, 2010, pp. 459–475 DOI: 10.1007/s10955-010-0056-z
* [6] J. Balogh and B. Bollobás “Bootstrap percolation on the hypercube” In _Probab. Theory Related Fields_ 134.4, 2006, pp. 624–648 DOI: 10.1007/s00440-005-0451-6
* [7] J. Balogh and B. Bollobás “Sharp thresholds in bootstrap percolation” In _Physica A: Stat. Mechanics Applications_ 326.3, 2003, pp. 305–312 DOI: 10.1016/S0378-4371(03)00364-9
* [8] J. Balogh, B. Bollobás, H. Duminil-Copin and R. Morris “The sharp threshold for bootstrap percolation in all dimensions” In _Trans. Amer. Math. Soc._ 364.5, 2012, pp. 2667–2701 DOI: 10.1090/S0002-9947-2011-05552-2
* [9] J. Balogh, B. Bollobás and R. Morris “Bootstrap percolation in high dimensions” In _Combin. Probab. Comput._ 19.5-6, 2010, pp. 643–692 DOI: 10.1017/S0963548310000271
* [10] J. Balogh, B. Bollobás and R. Morris “Bootstrap percolation in three dimensions” In _Ann. Probab._ 37.4, 2009, pp. 1329–1380 DOI: 10.1214/08-AOP433
* [11] J. Balogh, B. Bollobás and R. Morris “Majority bootstrap percolation on the hypercube” In _Combin. Probab. Comput._ 18.1-2, 2009, pp. 17–51 DOI: 10.1017/S0963548308009322
* [12] J. Balogh, B. Bollobás and B. P. Narayanan “Transference for the Erdős-Ko-Rado theorem” Id/No e23 In _Forum Math. Sigma_ 3, 2015, pp. 18 DOI: 10.1017/fms.2015.21
* [13] J. Balogh, Y. Peres and G. Pete “Bootstrap percolation on infinite trees and non-amenable groups” In _Combin. Probab. Comput._ 15.5, 2006, pp. 715–730 DOI: 10.1017/S0963548306007619
* [14] M. Bidgoli, A. Mohammadian and B. Tayfeh-Rezaie “Percolating sets in bootstrap percolation on the Hamming graphs and triangular graphs” In _European J. Combin._ 92, 2021, Paper No. 103256, 16 pp. DOI: 10.1016/j.ejc.2020.103256
* [15] N. Biggs “Some odd graph theory”, Combinatorial mathematics, 2nd int. Conf., New York 1978, Ann. New York Acad. Sci. 319, 71-81 (1979)., 1979
* [16] B. Bollobás “Weakly $k$-saturated graphs” In _Beiträge zur Graphentheorie (Kolloquium, Manebach, 1967)_ B. G. Teubner Verlagsgesellschaft, Leipzig, 1968, pp. 25–31
* [17] A. E. Brouwer, A. M. Cohen and A. Neumaier “Distance-regular graphs” 18, Ergeb. Math. Grenzgeb., 3. Folge Berlin etc.: Springer-Verlag, 1989
* [18] R. Cerf and E. N. M. Cirillo “Finite size scaling in three-dimensional bootstrap percolation” In _Ann. Probab._ 27.4, 1999, pp. 1837–1850 DOI: 10.1214/aop/1022677550
* [19] R. Cerf and F. Manzo “The threshold regime of finite volume bootstrap percolation” In _Stochastic Process. Appl._ 101.1, 2002, pp. 69–82 DOI: 10.1016/S0304-4149(02)00124-2
* [20] J. Chalupa, P. L. Leath and G. R. Reich “Bootstrap percolation on a Bethe lattice” In _J. Phys. C: Solid State Physics_ 12.1, 1979, pp. L31 DOI: 10.1088/0022-3719/12/1/008
* [21] M. Collares, J. Erde, A. Geisler and M. Kang “Majority bootstrap percolation on the permutahedron and other high-dimensional graphs” In preparation
* [22] M. Collares, J. Doolittle and J. Erde “The evolution of the permutahedron”, 2024 arXiv:2404.17260 [math.CO]
* [23] S. Diskin and M. Krivelevich “Supercritical site percolation on the hypercube: small components are small” In _Comb. Probab. Comput._ 32.3, 2023, pp. 422–427 DOI: 10.1017/S0963548322000323
* [24] S. Diskin, J. Erde, M. Kang and M. Krivelevich “Percolation on high-dimensional product graphs”, 2022 arXiv:2209.03722 [math.CO]
* [25] P. Dukes, J. Noel and A. Romer “Extremal bounds for 3-neighbor bootstrap percolation in dimensions two and three” In _SIAM J. Discrete Math._ 37.3, 2023, pp. 2088–2125 DOI: 10.1137/22M1534195
* [26] P. Erdős and J. Spencer “Evolution of the $n$-cube” In _Comput. Math. Appl._ 5.1, 1979, pp. 33–39 DOI: 10.1016/0898-1221(81)90137-1
* [27] L. R. Fontes, R. H. Schonmann and V. Sidoravicius “Stretched exponential fixation in stochastic Ising models at zero temperature” In _Comm. Math. Phys._ 228.3, 2002, pp. 495–518 DOI: 10.1007/s002200200658
* [28] N. Fountoulakis, M. Kang, C. Koch and T. Makai “A phase transition regarding the evolution of bootstrap processes in inhomogeneous random graphs” In _Ann. Appl. Probab._ 28.2, 2018, pp. 990–1051 DOI: 10.1214/17-AAP1324
* [29] E. Friedgut “Every monotone graph property has a sharp threshold” In _Proc. Amer. Math. Soc._ 124, 1996 DOI: 10.1090/S0002-9939-96-03732-X
* [30] M. Granovetter “Threshold models of collective behavior” In _Amer. J. Sociology_ 83.6, 1978, pp. 1420–1443 DOI: 10.1086/226707
* [31] J. Hardy, Y. Pomeau and O. Pazzis “Time evolution of a two-dimensional classical lattice system” In _Phys. Rev. Lett._ 31 American Physical Society, 1973, pp. 276–279 DOI: 10.1103/PhysRevLett.31.276
* [32] M. Heydenreich and R. van der Hofstad “Random graph asymptotics on high-dimensional tori” In _Comm. Math. Phys._ 270.2, 2007, pp. 335–358 DOI: 10.1007/s00220-006-0152-8
* [33] W. Hoeffding “Probability inequalities for sums of bounded random variables” In _J. Amer. Statist. Assoc._ 58, 1963, pp. 13–30 URL: http://links.jstor.org/sici?sici=0162-1459(196303)58:301%3C13:PIFSOB%3E2.0.CO;2-D&origin=MSN
* [34] C. Holmgren, T. Juškevičius and N. Kettle “Majority bootstrap percolation on $G(n,p)$” In _Electron. J. Comb._ 24.1, 2017, pp. research paper p1.132 URL: www.combinatorics.org/ojs/index.php/eljc/article/view/v24i1p1
* [35] A. E. Holroyd “Sharp metastability threshold for two-dimensional bootstrap percolation” In _Probab. Theory Related Fields_ 125.2, 2003, pp. 195–224 DOI: 10.1007/s00440-002-0239-x
* [36] E. Ising “Beitrag zur Theorie des Ferromagnetismus” In _Zeitschrift für Physik_ 31, 1925, pp. 253–258 DOI: 10.1007/BF02980577
* [37] E. Jacob et al. “Interacting particle systems” In _ESAIM, Proc. Surv._ 60, 2017, pp. 246–265 DOI: 10.1051/proc/201760246
* [38] S. Janson, T. Łuczak, T. Turova and T. Vallier “Bootstrap percolation on the random graph $G_{n,p}$” In _Ann. Appl. Probab._ 22.5, 2012, pp. 1989–2047 DOI: 10.1214/11-AAP822
* [39] R. Kaas and J. M. Buhrman “Mean, median and mode in binomial distributions” In _Statist. Neerlandica_ 34.1, 1980, pp. 13–18 DOI: 10.1111/j.1467-9574.1980.tb00681.x
* [40] M. Kang and T. Makai “Bootstrap percolation on $G(n,p)$ revisited” In _Proceedings of the 27th International Conference on Probabilistic, Combinatorial and Asymptotic Methods for the Analysis of Algorithms—AofA’16_ Jagiellonian Univ., Dep. Theor. Comput. Sci., Kraków, 2016, pp. 12
* [41] M. Kang, M. Missethan and D. Schmid “Bootstrap percolation on the high-dimensional Hamming graph”, 2024 arXiv:2406.13341 [math.CO]
* [42] H. A. Kierstead and W. T. Trotter “Explicit matchings in the middle levels of the Boolean lattice” In _Order_ 5.2, 1988, pp. 163–171 DOI: 10.1007/BF00337621
* [43] S. Klavžar, J. Koolen and H. M. Mulder “Graphs which locally mirror the hypercube structure” In _Inf. Process. Lett._ 71.2, 1999, pp. 87–90 DOI: 10.1016/S0020-0190(99)00089-7
* [44] L. Lichev “The giant component after percolation of product graphs” In _J. Graph Theory_ 99.4, 2022, pp. 651–670 DOI: 10.1002/jgt.22758
* [45] C. McDiarmid “Concentration” In _Probabilistic methods for algorithmic discrete mathematics_ Berlin: Springer, 1998, pp. 195–248
* [46] R. Morris “Bootstrap percolation, and other automata” In _European J. Combin._ 66, 2017, pp. 250–263 DOI: 10.1016/j.ejc.2017.06.024
* [47] R. Morris “Monotone cellular automata” In _Surveys in combinatorics 2017_ 440, London Math. Soc. Lecture Note Ser. Cambridge Univ. Press, Cambridge, 2017, pp. 312–371
* [48] N. Morrison and J. A. Noel “Extremal bounds for bootstrap percolation in the hypercube” In _J. Combin. Theory Ser. A_ 156, 2018, pp. 61–84 DOI: 10.1016/j.jcta.2017.11.018
* [49] T. Mütze “Proof of the middle levels conjecture” In _Proc. Lond. Math. Soc. (3)_ 112.4, 2016, pp. 677–713 DOI: 10.1112/plms/pdw004
* [50] R. H. Schonmann “On the behavior of some cellular automata related to bootstrap percolation” In _Ann. Probab._ 20.1, 1992, pp. 174–193 URL: http://links.jstor.org/sici?sici=0091-1798(199201)20:1%3C174:OTBOSC%3E2.0.CO;2-5&origin=MSN
* [51] J. Spencer and L. Florescu “Asymptopia” 71, Stud. Math. Libr. Providence, RI: American Mathematical Society (AMS), 2014
* [52] S. Ö. Stefánsson and T. Vallier “Majority bootstrap percolation on the random graph ${G}(n,p)$”, 2015 arXiv:1503.07029 [math.PR]
Revised February 2024

# Kerr-Newman and Electromagnetic Acceleration
Paul H. Frampton<EMAIL_ADDRESS>
Dipartimento di Matematica e Fisica ”Ennio De Giorgi”,
Università del Salento and INFN-Lecce,
Via Arnesano, 73100 Lecce, Italy.
Previous discussions of charged dark matter neglected Primordial Black Hole
(PBH) spin and employed the Reissner-Nordstrom metric. In Nature we expect the
PBHs to possess spin, which requires use of the technically more challenging Kerr-Newman metric. It is shown that the use of the K-N metric retains the principal properties already obtained using the R-N metric; in particular, the dominance of Coulomb repulsion requires super-extremality and the presence of naked singularities. In this sense, the spin of the PBHs is not an essential
complication.
## 1 Introduction
In [1, 2], a cosmological model was introduced which departs from the
conventional wisdom that the energy pie for the visible universe is
approximately 5% normal matter, 25% dark matter and 70% dark energy. The term
dark energy was introduced in 1998 in association with the then newly-
discovered accelerated expansion. In the novel model, the energy pie is
revised to 5% normal matter, 95% dark matter and 0% dark energy, which is replaced by what we may call charged dark matter.
We are discussing a toy model which is not intended to describe Nature
precisely but is intended to be sufficiently realistic that some general
suggestions about the origins of dark energy can be made. The dark matter
constituents are assumed to be PBHs with a spread of masses from one hundred
to at least one trillion solar masses.
We shall not endeavour to invent a constituent of dark energy directly but
shall conclude that dark energy, describable in an opaque manner by a
cosmological constant $\Lambda\sim(meV)^{4}$, and also possibly interpretable
as a vacuum energy, can be better understood as a result of electromagnetic
forces between extremely heavy PBHs which are a sector of dark matter but,
unlike lighter PBHs, carry electric charge.
The present article is concerned with one issue which is whether the spin of
the PBHs is an essential complication. To check this out, we must promote the
Reissner-Nordstrom metric used in [2] to the Kerr-Newman metric which is
technically more complicated and, again to anticipate our result, we shall
conclude that spin does not alter our main conclusions that the PBHs need to
be super-extremal and to possess naked singularities.
This maximally violates the cosmic censorship hypothesis, which is
nevertheless unproven.
The biggest assumption in this approach to replace dark energy by charged dark
matter is that Primordial Extremely Massive Black Holes (PEMBHs) with
unscreened electric charges exist. It is normally expected that electric
charges are screened from distant observers by accretion of material with
counterbalancing charge. While this is undoubtedly true for stars, planets and
galaxies, there is no observational evidence for or against this compensation
of charge happening for this heavier mass range.
Before PEMBHs are discovered, a promising precursor would be the discovery of
Primordial Intermediate Mass Black Holes (PIMBHs) in the mass range 100 to
$10^{5}$ solar masses as constituents of the Milky Way dark matter. This may
well be possible in the foreseeable future using microlensing of the stars in
the Magellanic Clouds by, for example, the Vera Rubin Observatory (formerly LSST) in Chile.
## 2 Kerr-Newman metric
The most general metric for describing a black hole with non-zero charge and
spin is [3]
$ds^{2}=c^{2}d\tau^{2}=-\left(\frac{dr^{2}}{\Delta}+d\theta^{2}\right)\rho^{2}+\left(cdt-a\sin^{2}\theta
d\phi\right)^{2}\frac{\Delta}{\rho^{2}}-\left((r^{2}+a^{2})d\phi-
acdt\right)^{2}\frac{\sin^{2}\theta}{\rho^{2}}$ (1)
in which $(r,\theta,\phi)$ are spherical polar coordinates and
$a=\frac{J}{Mc};~{}~{}~{}\rho^{2}=r^{2}+a^{2}\cos^{2}\theta;~{}~{}~{}\Delta=r^{2}-r_{S}r+a^{2}+r_{Q}^{2}=r^{2}f(r)$
(2)
with $f(r)=\left(1-\frac{r_{S}}{r}+\frac{a^{2}+r_{Q}^{2}}{r^{2}}\right)$ and
$r_{S}=\frac{2GM}{c^{2}};~{}~{}~{}r_{Q}^{2}=\frac{Q^{2}G}{4\pi\epsilon_{0}c^{4}}$
(3)
The coefficient of $dr^{2}$ is
$-\left(\frac{\rho^{2}}{\Delta}\right)=-\frac{r^{2}+a^{2}\cos^{2}\theta}{r^{2}f(r)}.$
(4)
To obtain the gravitational stress-energy tensor $T_{\mu\nu}^{(GRAV)}$
involves taking the second derivative of the metric by employing the
expression
$\displaystyle T_{\mu\nu}^{(GRAV)}=-\frac{1}{8\pi G}\left(G_{\mu\nu}+\Lambda g_{\mu\nu}\right)+\frac{1}{16\pi G(-g)}\left((-g)(g_{\mu\nu}g_{\alpha\beta}-g_{\mu\alpha}g_{\nu\beta})\right)_{,\alpha\beta}$ (5)
where the final subscripts represent simple partial derivatives. We shall need
also to evaluate the electromagnetic counterpart
$T_{\mu\nu}^{(EM)}=F_{\mu\alpha}g^{\alpha\beta}F_{\nu\beta}-\frac{1}{4}g_{\mu\nu}F^{\alpha\beta}F_{\alpha\beta}.$
(6)
Let us begin with $T_{\mu\nu}^{(GRAV)}$. This calculation involves taking up
to the second derivatives of the KN metric in Eq.(1). For the function $f(r)$:
$\frac{\partial}{\partial r}f(r)\sim O(1/r^{2})~{}~{}~{}~{}~{}\frac{\partial^{2}}{\partial r^{2}}f(r)\sim O(1/r^{3})$ (7) $\frac{\partial}{\partial r}(f(r))^{2}\sim O(1/r^{2})~{}~{}~{}~{}~{}\frac{\partial^{2}}{\partial r^{2}}(f(r))^{2}\sim O(1/r^{3})$ (8) $\frac{\partial}{\partial r}(f(r))^{-1}\sim O(1/r^{2})~{}~{}~{}~{}~{}\frac{\partial^{2}}{\partial r^{2}}(f(r))^{-1}\sim O(1/r^{3})$ (9) $\frac{\partial}{\partial r}(f(r))^{-2}\sim O(1/r^{2})~{}~{}~{}~{}~{}\frac{\partial^{2}}{\partial r^{2}}(f(r))^{-2}\sim O(1/r^{3})$ (10)
It is not difficult to check, using these derivatives, that every term in Eq.(5) except the cosmological constant term $-\frac{\Lambda}{8\pi G}g_{\mu\nu}$ falls off as $O(1/r^{2})$.
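These falloff rates are easy to cross-check symbolically; the following minimal sketch (ours, not in the original; it assumes sympy) confirms that the first derivatives decay as $O(1/r^{2})$ and the second derivatives as $O(1/r^{3})$, since all of the printed limits are finite and non-zero.

```python
# Minimal sketch (illustrative, not from the paper): large-r falloff of the
# derivatives of f(r) = 1 - r_S/r + (a^2 + r_Q^2)/r^2.
import sympy as sp

r, rS, rQ, a = sp.symbols("r r_S r_Q a", positive=True)
f = 1 - rS / r + (a**2 + rQ**2) / r**2

for g in (f, f**2, 1 / f, 1 / f**2):
    lead1 = sp.limit(sp.diff(g, r) * r**2, r, sp.oo)      # coefficient of 1/r^2
    lead2 = sp.limit(sp.diff(g, r, 2) * r**3, r, sp.oo)   # coefficient of 1/r^3
    print(lead1, lead2)
```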
In $T_{\mu\nu}^{(EM)}$, we can straightforwardly see that both terms in Eq.(6)
fall off as $O(1/r^{2})$.
We deduce that
$T_{\mu\nu}^{(GRAV)}+T_{\mu\nu}^{(EM)}=-\left(\frac{\Lambda}{8\pi
G}\right)g_{\mu\nu}+O(1/r^{2}).$ (11)
Thus, the principal conclusions for spinless PBHs found in [2] carry over to
PBHs with spin. In particular, the 1/r expansion is unchanged and the EoS for
the dark energy remains $\omega=-1$. The requirement for super-extremality and
naked singularities, to allow Coulomb repulsion to exceed gravitational
attraction, remains the same as in the spinless case.
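The horizon structure behind these statements can be read off from $\Delta(r)=0$: real roots (horizons) exist if and only if $(r_{S}/2)^{2}\geqslant a^{2}+r_{Q}^{2}$, and super-extremal charge and spin remove both horizons, leaving a naked singularity. A minimal sketch (ours, not from the paper; geometric units and illustrative numbers, not fitted to PEMBHs):

```python
# Minimal sketch (illustrative, not from the paper): Kerr-Newman horizons as
# roots of Delta(r) = r^2 - r_S r + a^2 + r_Q^2.
import math

def kn_horizons(r_S, a, r_Q):
    disc = (r_S / 2) ** 2 - a**2 - r_Q**2
    if disc < 0:
        return None                              # super-extremal: no horizon
    return (r_S / 2 - math.sqrt(disc), r_S / 2 + math.sqrt(disc))

print(kn_horizons(r_S=2.0, a=0.3, r_Q=0.4))      # sub-extremal: two horizons
print(kn_horizons(r_S=2.0, a=0.6, r_Q=0.8))      # extremal: repeated root
print(kn_horizons(r_S=2.0, a=0.8, r_Q=0.9))      # super-extremal: None
```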
## 3 Varying cosmological ”constant” $\Lambda(t)$
The Friedmann expansion equation keeping only dark energy is
$\left(\frac{\dot{a}}{a}\right)^{2}=\frac{\Lambda_{0}}{3}$ (12)
which, normalizing to $a(t_{0})=1$, leads to a time dependence of $a(t)$ which
is exponentially growing
$a(t)=\exp\left(\sqrt{\frac{\Lambda_{0}}{3}}(t-t_{0})\right)$ (13)
In the EAU-model, the dark energy is replaced by charged dark matter and we
know from [2] that the equation of state for the dark energy is precisely
$\omega=-1$.
However, the time-dependence of $a(t)$ is no longer exponential. If we use the
dual dark energy description it is governed not by Eq.(12) but by a time-
dependent cosmological ”constant” $\Lambda(t)$
$\left(\frac{\dot{a}}{a}\right)^{2}\approx\frac{\Lambda(t)}{3}$ (14)
The time-dependence follows from the usual dilution of dark matter
$\Lambda(t)\approx a(t)^{-3}\approx t^{-2}$ (15)
so that adopting as the present value $\Lambda(t_{0})=(2~{}meV)^{4}$ we arrive
at the examples of future values for $\Lambda(t)$ provided in Table 1. In the last row, for example, at a time which is $\sim 10^{5}$ times the present age of the universe, the cosmological constant has decreased by a factor $\sim 10^{-10}$. This behaviour implies that the extroverse never expands to the very large size discussed for the $\Lambda CDM$-model in [4].
Table 1: Cosmological “constant” $\Lambda(t)$ at the present time $t_{0}$ and at later times.

time | $\Lambda(t)$
---|---
$t_{0}$ | $(2.0~meV)^{4}$
$t_{0}+10~Gy$ | $(1.0~meV)^{4}$
$t_{0}+100~Gy$ | $(700~\mu eV)^{4}$
$t_{0}+1~Ty$ | $(230~\mu eV)^{4}$
$t_{0}+1~Py$ | $(7.4~\mu eV)^{4}$
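The scaling behind Table 1 is $\Lambda(t)^{1/4}\propto t^{-1/2}$. A minimal sketch (ours, not from the paper; it assumes a present age $t_{0}\approx 13.8$ Gy, which is not stated in the text) reproduces the last three rows of Table 1 to rounding:

```python
# Minimal sketch (illustrative, not from the paper): Lambda(t) ∝ t^{-2}, so the
# energy scale Lambda(t)^{1/4} ∝ t^{-1/2}.  Assumes t_0 ≈ 13.8 Gy.
t0, scale0 = 13.8, 2.0                       # Gy, meV

for dt in (10, 100, 1e3, 1e6):               # Gy offsets, as in Table 1
    scale = scale0 * (t0 / (t0 + dt)) ** 0.5
    print(f"t0 + {dt:g} Gy: Lambda^(1/4) ≈ {1000 * scale:.0f} micro-eV")
```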
## 4 Previous Work
There are in the literature several papers on theoretical astrophysics and cosmology that cast possible doubt on whether electrically-charged black holes
exist in Nature and on whether small charge asymmetries can exist within
galaxies and clusters of galaxies, and within the whole visible universe.
Since the existence of charged black holes is crucial to the EAU model [1, 2],
the present section addresses past work and attempts to convince the reader
that there is no known fatal flaw in EAU.
It is generally accepted that electrically neutral black holes exist in
Nature. Since the main result of the present article is that black hole spin
is an inessential complication in the EAU model, we may associate electrically
neutral BHs with the Schwartzschild (S) metric and electrically charged BHs
with the Reissner-Nordstrom (RN) metric. Both S and RN metrics are exact
solutions of general relativity. The RN metric is richer than the S metric in
that, depending on the electric charge Q, relative to the mass M, it has sub-
extremal (two horizons), extremal (one horizon) and super-extremal (no
horizon) varieties. Mathematically, however, both S and RN are solutions of
Einstein’s field equations and there appears to be no special reason why
Nature should simultaneously adopt the S solution and eschew the RN solution.
It is of considerable interest that in 1959 Lyttleton and Bondi [5] considered
electric charge as underlying the cosmological expansion, by introducing a
small difference between the proton and electron charges. Their theory did not
survive quantitative testing of the neutrality of the hydrogen atom. The LB
model is obviously different from the EAU model, although it does involve a
remarkably similar charge asymmetry $\epsilon_{Q}\sim 10^{-18}$ for the
universe as we have for clusters.
By studying the baryon budget [6, 7] and making reasonable assumptions, one can show that, at a scale of 2 Mpc in the EAU model and unlike in the LB model, there is no charge asymmetry. This scale is larger than a galaxy. In the EAU
model, all processes conserve electric charge and so certainly the universe
remains electrically neutral unless there was, for an unknown reason, an
initial condition that imposed $|\epsilon_{Q}|\neq 0$ at the Big Bang.
One study of the cross-sections for capture of particles by an RN black hole was made by Zakharov [8]. This study included, however, capture only of electrically neutral particles and hence neglected the effects of Coulomb forces.
In the EAU model, we have assumed that the fraction $f_{DM}$ of the dark
matter made up of PBHs is $f_{DM}=1$. The PBHs fall into different mass ranges
for PIMBHs, PSMBHs and PEMBHs and in the example we gave the PEMBHs contribute
$f_{DM}=0.5$. In [9] an upper limit $f_{DM}=0.01$ was derived for mass $M=10^{12}M_{\odot}$, which is contradictory. However, we have reason to
believe that the constraints in [9] are far too severe. This can be traced
back to the well known ROM paper[10] which introduced the popular $f_{DM}$
exclusion plot.
In the case of ROM, too severe limits were caused by the use, in numerical
analysis, of an oversimplified accretion model (Bondi model) which assumes
spherical symmetry and radial inflow. This naive model can overestimate
accretion, compared to Nature, not just by a factor of 2 but even by some 4 or 5 orders of magnitude [11]. Subsequent exclusion plots appear to have been
overly influenced by ROM’s results. The fact that ROM imposed too severe
constraints has been confirmed to us by its senior author (J.P. Ostriker,
private communication).
There are other mechanisms proposed in the literature able to produce a
charged BH. For instance, in [12], the author argued that a rotating BH
embedded in a plasma may generate a magnetic field and may acquire an electric
charge. However, a recent investigation in [13] has shown that the charge in
[12] is in fact an upper limit and generally is quite small.
To summarise this section, previous work on these issues is undoubtedly
interesting but does not reveal a fatal flaw in the EAU model which replaces
dark energy by charged dark matter.
Dark energy can be simply and accurately parametrised by the cosmological
constant $\Lambda$ introduced by Einstein for a different, almost
antithetical, reason. It may equally be regarded as a type of vacuum energy.
In particle theory, however, vacuum energy is associated with the spontaneous
breaking of a symmetry and it is unclear which symmetry is involved.
The EAU model can provide a more physical picture than either. The model is
very speculative and cries out for more theoretical work to confirm its
internal consistency.
## 5 Discussion
Dark matter and dark energy are the two major unanswered questions in
cosmology. In this article we have discussed a model which suggests specific
answers. The model has the following ingredients.
In the Milky Way, the dark matter constituents are PIMBHs with masses between
100 and 100,000 solar masses. We cannot exaggerate the importance of detecting
these, in our opinion most easily by seeking microlensing light curves with
durations in excess of two years. Such a success would suggest that the idea
that DM constituents are PBHs is on the right track.
PBHs with higher masses between a million and a hundred billion solar masses,
PSMBHs, include the supermassive black holes known to exist at galactic
centres. The PIMBHs and PSMBHs are, like all stars and planets, electrically
neutral and so experience only the attractive gravitational force. The PSMBHs
may be formed from lighter seeds such as PIMBHs by merging and accretion.
At the next and final higher mass scale are the PEMBHs with at least a
trillion solar masses and carrying unscreened like-sign charges of order $\sim
10^{40}$ Coulombs, a charge which corresponds only to a $\sim 10^{-18}$ charge
asymmetry. These electric charges are not compensated by accreted halos of
opposite charge, and hence experience long-range electromagnetic repulsion.
This is the charged dark matter which replaces what was given the name dark
energy immediately after accelerated cosmic expansion was discovered.
The purpose of the present paper is to show that this model of dark matter and
energy [2] survives the generalisation to spinning black holes. We had
expected this to be far more difficult than it is, as we have shown rather
simply by extending the Reissner-Nordstrom metric to the Kerr-Newman metric.
If our EAU model is correct, the present decade promises to resolve the dark
matter and energy issues. The dark energy will have a dual description either
as charged dark matter or as a cosmological constant $\Lambda(t)$ with
constant equation of state $\omega(t)=P(t)/\rho(t)=-1$ with high accuracy. The
cosmological "constant" itself falls in the future as $\Lambda(t)\propto
t^{-2}$.
This leads to quite a different prediction for the future fate of the universe
than in the conventional $\Lambda CDM$ model with constant $\Lambda$. Instead
of exponential superluminal expansion of the extroverse [4], the growth is as
a power law, and the extroverse does not grow to a gargantuan size within a
trillion years; because of this far gentler rate of growth, billions of
other galaxies remain within the visible universe forever.
In the EAU model, we need not include dark energy as a slice of the energy
pie-chart, which becomes simply 5% baryonic matter and 95% dark matter. With
irrational optimism, we may expect that by the end of this decade conventional
wisdom will change to where the accelerated expansion is ascribed to
electromagnetic repulsion rather than an unspecified form of repulsive
gravity.
Let us re-emphasize that the EAU-model is speculative and based on assumptions
which have already been entertained in the existing literature but which have
not been rigorously justified. This fact was already expressed above. Our
aim in the EAU-model was to construct a cosmological theory in which dark
energy, as originally understood, is removed and replaced by charged dark
matter.
Before delving further into the rôle of the PEMBHs, let us first address the
simpler question of why the PIMBHs are electrically neutral, like stars and
galaxies, while the PEMBHs carry negative electric charge. This is due to the
fact that the PIMBHs are formed during the first second after the Big Bang
when the selective accretion of electrons over protons is unavailable. By
contrast the PEMBHs are formed much later, in the dark ages after the creation
of the Cosmic Microwave Background (CMB), when the difference in thermal
velocities of electrons and protons leads to the mechanism for acquisition of
a net negative electric charge.
In the EAU model, the same-sign-charged PEMBHs are so massive, at least a
trillion solar masses, that they are not associated with a specific galaxy or
cluster but each moves on its own under the Coulomb repulsion of the other
PEMBHs. An electric charge asymmetry
$\epsilon_{Q}\equiv(Q_{+}-Q_{-})(Q_{+}+Q_{-})^{-1}$ is induced in the
intergalactic and intracluster gas by the residual protons, and it is
straightforward to estimate it as $\epsilon_{Q}\sim 10^{-18}$. This
charge-asymmetric gas is not in the PEMBH environment
so there is no opportunity to set up Debye screening of the PEMBH negative
charge and hence long-range inter-PEMBH electric repulsion exists.
There may have developed an incorrect prejudice, based on the observed
electric neutrality of stars and planets, of galaxies and of the dark matter
inside galaxies and clusters, that any macroscopic astrophysical object,
including a PEMBH, must be electrically neutral. If the EAU model is correct,
this prejudice must be discarded.
## Acknowledgement
We are grateful to an anonymous referee for providing a summary of past
research.
## References
* [1] P.H. Frampton,
Electromagnetic Accelerating Universe.
Phys. Lett. B835, 137480 (2022).
arXiv:2210.10632.
* [2] P.H. Frampton,
A Model of Dark Matter and Energy.
Mod. Phys. Lett. A38, 2350032 (2023).
arXiv:2301.10719.
* [3] R.P. Kerr,
Gravitational Field of Spinning Mass as an Example of
Algebraically Special Metrics.
Phys. Rev. Lett. 11, 237 (1963).
* [4] P.H. Frampton,
Cyclic Entropy: An Alternative to Inflationary Cosmology.
Int. J. Mod. Phys. A30, 1550129 (2015).
arXiv:1501.03054.
* [5] R.A. Lyttleton and H. Bondi,
On the Physical Consequences of a General Excess of Charge.
Proc. Roy. Soc. Lond. A252, 313 (1959).
* [6] Y. Rasera and R. Teyssier,
The History of the Baryonic Budget: Cosmic Logistics
in a Hierarchical Universe.
Astron. Astrophys. 445, 1 (2006).
arXiv:astro-ph/0505473[astro-ph].
* [7] F. Durier and J.A. de Freitas Pacheco,
Baryons in the Universe from Cosmological Simulations.
AIP Conf. Proc. 1471, 10 (2012).
* [8] A.F. Zakharov,
Particle Capture Cross-Sections for a
Reissner-Nordstrom Black Hole.
Class. Quant. Grav. 11, 1027 (1994).
* [9] B. Carr and J. Silk,
Primordial Black Holes as Generators of Cosmic Structures.
MNRAS 478, 3756 (2018).
arXiv:1801.00672 [astro-ph.CO].
* [10] M. Ricotti, J. P. Ostriker, and K. J. Mack,
Effect of Primordial Black Holes on the Cosmic Microwave
Background and Cosmological Parameter Estimates.
Astrophys. J. 680, 829 (2008).
arXiv:0709.0524 [astro-ph].
* [11] C.Y. Kuo, et al.
Measuring Mass Accretion Rate onto a
Supermassive Black Hole in M87
using Faraday Rotation Measure
with the Submillimeter Array.
Astrophys. J. Lett. 783, L33 (2014).
arXiv:1402.5238[astro-ph.GA].
* [12] R.M. Wald,
Black Hole in a Uniform Magnetic Field.
Phys. Rev. D10, 1680 (1974).
* [13] S.S. Komissarov,
Electrically-Charged Black Holes and the
Blandford-Znajek Mechanism.
MNRAS. 512, 2798 (2022).
arXiv:2108.08161[astro-ph.HE].
# A Complete Quantitative Axiomatisation of Behavioural Distance of Regular
Expressions
Wojciech Różowski
###### Abstract
Deterministic automata have traditionally been studied from the point of
view of language equivalence, but another perspective is given by the
canonical notion of _shortest-distinguishing-word_ distance quantifying the
similarity of states. Intuitively, the longer the word needed to observe a
difference between two states, the closer their behaviour is. In this paper,
we give a sound and complete axiomatisation of the
_shortest-distinguishing-word_ distance between regular languages. Our
axiomatisation relies on a recently developed quantitative analogue of
equational logic, allowing one to manipulate rational-indexed judgements of
the form $e\equiv_{\varepsilon}f$, meaning _term $e$ is approximately
equivalent to term $f$ within the error margin of $\varepsilon$_. The
technical core of the paper is dedicated to the completeness argument, which
draws on techniques from order theory and Banach spaces to simplify the
calculation of the behavioural distance to the point where it can be
mimicked by axiomatic reasoning.
###### keywords:
Regular Expressions, Behavioural Distances, Quantitative Equational Theories
## 1 Introduction
Transition systems have been widely employed to model computational phenomena.
In theoretical computer science, it is customary to model computations as
transition systems and subsequently reason about their equivalence or
similarity. Classical examples include checking language equivalence of
deterministic finite automata using Hopcroft and Karp’s algorithm [18] or
constructing bisimulations between labelled transition systems [25].
Throughout the years, especially in the concurrency theory community,
researchers have studied a plethora of different notions of behavioural
equivalences and preorders one could impose on a transition system [40].
However, in many practical applications, especially when dealing with
probabilistic or quantitative transition systems, asking about such classical
notions of equivalence (or similarity) could be too strict, and it might be
more reasonable to ask quantitative questions about how far apart the
behaviour of the two states is.
A growing line of work on _behavioural distances_ [37, 5, 38, 39, 13] answers
this problem by equipping state-spaces of transition systems with
(pseudo)metric structures quantifying the dissimilarity of states. In such a
setting, states at distance zero are not necessarily the same, but rather
equivalent with respect to some classical notion of behavioural equivalence.
In a nutshell, equipping transition systems with such a notion of distance
crucially relies on the possibility of _lifting_ the distance between the
states to the distance on the observable behaviour of the transition system.
Behavioural distances were originally studied in the context of probabilistic
transition systems [16, 38], where observable behaviour is in the form of
probability distribution among possible transitions. In such a case, the
necessary lifting of distances between states to distances between probability
distributions of possible outcomes relies on the famous
Kantorovich/Wasserstein liftings, studied traditionally in transportation
theory [41]. In general, transition systems can be viewed more abstractly
through a well-established category-theoretic framework of coalgebras for an
endofunctor [27]. Recent work [5] generalised the Kantorovich/Wasserstein
lifting to lifting endofunctors (modelling one-step behaviour of transition
systems) from the category of sets and functions to the category of
(pseudo)metric spaces and nonexpansive functions, thus making it possible to equip a
multitude of different kinds of transition systems with a sensible notion of
behavioural distance.
Traditionally, besides looking at behavioural equivalence/similarity purely
from the algorithmic point of view, one can look at those problems
axiomatically, by describing behaviours of transition systems as expressions
and by providing formal systems based on (in)equational logic for reasoning
about equivalence/similarity of the transition systems described by the
expressions. Classic examples include reasoning about language equivalence of
Kleene’s regular expressions representing deterministic finite automata using
inference systems of Salomaa [29] or Kozen [19], or reasoning about
bisimilarity of finite-state labelled transition systems through Milner’s
calculus of finite-state behaviours [24].
In this paper, we are interested in a similar axiomatic point of view, but in
the case of behavioural distances. Unfortunately, the classical (in)equational
logic cannot be applied here, as it has no way of talking about approximate
equivalence. Instead, we rely on the quantitative analogue of equational logic
[22], which deals with the statements of the form $e\equiv_{\varepsilon}f$,
intuitively meaning _term $e$ is within the distance of at most
$\varepsilon\in\mathbb{Q}^{+}$ from the term $f$_. While the existing work [4,
2, 1] looked at quantitative axiomatisations of behavioural distance for
probabilistic transition systems calculated through the
Kantorovich/Wasserstein lifting, which can be thought of as a special case of
the abstract coalgebraic framework relying on lifting endofunctors to the
category of pseudometric spaces, the notions of behavioural distance for other
kinds of transition systems have not been axiomatised before. It turns out
that the approach to completeness used in [2] relies on properties which are
not unique to distances obtained through the Kantorovich/Wasserstein lifting
and can be employed to give complete axiomatisations of behavioural distances
for other kinds of transition systems obtained through the coalgebraic
framework [5]. In this paper, as a starting point, we look at one of the
simplest instantiations of that abstract framework in the case of
deterministic automata, yielding _shortest-distinguishing-word_ distance. To
illustrate that notion of distance, consider the following three deterministic
finite automata:
[Figure: three deterministic finite automata over the alphabet $\\{a\\}$. Left: a single accepting initial state $\mathbf{q_{0}}$ with an $a$-labelled self-loop. Middle: initial state $\mathbf{r_{0}}$ with $a$-transitions $\mathbf{r_{0}}\to\mathbf{r_{1}}\to\mathbf{r_{2}}$, where $\mathbf{r_{0}}$ and $\mathbf{r_{1}}$ are accepting and $\mathbf{r_{2}}$ is a non-accepting sink. Right: a single non-accepting initial state $\mathbf{s_{0}}$ with an $a$-labelled self-loop.]
No two of the above automata are language equivalent. Their languages are,
respectively (from the left), $\\{\epsilon,a,aa,aaa,\dots\\}$,
$\\{\epsilon,a\\}$ and $\emptyset$. However, one could argue that the
behaviour of the middle automaton is closer to that of the left one than to
that of the right one. In particular, the languages of the left and middle
automata agree on all words of length less than two, while the left and
right ones disagree on all words. One can make this idea precise by
providing a $1$-bounded metric
$d_{\mathcal{L}}:\mathcal{P}(A^{*})\times\mathcal{P}(A^{*})\to[0,1]$ on the
set of all formal languages over some fixed alphabet $A$ given by the
following formula, where $\lambda\in]0,1[$ and $L,M\subseteq A^{*}$:
$d_{\mathcal{L}}(L,M)=\begin{cases}\lambda^{|w|}&w\text{ is the shortest word
that belongs to only one of }L\text{ and }M\\\ 0&\text{if }L=M\end{cases}$ (1)
If we set $\lambda=\frac{1}{2}$, then
$d_{\mathcal{L}}(\\{\epsilon,a,aa,aaa,\dots\\},\\{\epsilon,a\\})=\frac{1}{4}$
and $d_{\mathcal{L}}(\\{\epsilon,a,aa,aaa,\dots\\},\emptyset)=1$, which allows
us to state formally that the behaviour of the middle automaton is a better
approximation of the left one than of the right one. Observe that we
excluded $\lambda=0$ and $\lambda=1$, as in both cases $d_{\mathcal{L}}$ would
degenerate into a pseudometric placing all languages at distance zero or one.
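To make Equation 1 concrete, the following minimal Python sketch (our own illustration; the function name, the membership-predicate interface and the length cutoff are ours, not part of the formal development) approximates $d_{\mathcal{L}}$ by searching for a shortest distinguishing word:

```python
from itertools import product

def shortest_dist(in_L, in_M, alphabet, lam=0.5, max_len=20):
    """Approximate d_L(L, M): lam**|w| for the shortest word w that belongs
    to exactly one of L and M; 0 if none is found up to length max_len."""
    for n in range(max_len + 1):
        for letters in product(alphabet, repeat=n):
            w = "".join(letters)
            if in_L(w) != in_M(w):
                return lam ** n
    return 0.0  # the languages agree on all words up to the cutoff

# The three example languages over {a}: a*, {eps, a} and the empty language.
in_left   = lambda w: True            # a* contains every word over {a}
in_middle = lambda w: w in ("", "a")
in_right  = lambda w: False

print(shortest_dist(in_left, in_middle, "a"))  # 0.25, witnessed by w = aa
print(shortest_dist(in_left, in_right, "a"))   # 1.0, witnessed by w = eps
```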
Automata in the example above correspond to the regular expressions $a^{*}$,
$a+1$ and $0$ respectively. In order to determine the distance between
arbitrary regular expressions $e$ and $f$ one would have to construct
corresponding deterministic finite automata and calculate (or approximate) the
distance between their languages. Instead, as a main contribution of this
paper, we present a sound and complete quantitative inference system for
reasoning about the shortest-distinguishing-word distance of languages denoted
by regular expressions in question. Formally speaking, if
$\llbracket-\rrbracket:\mathsf{Exp}\to\mathcal{P}(A^{*})$ is a function taking
regular expressions to their languages, then our inference system satisfies
the following:
$\vdash e\equiv_{\varepsilon}f\iff d_{\mathcal{L}}(\llbracket
e\rrbracket,\llbracket f\rrbracket)\leq\varepsilon$
Although much of our development is grounded in category theory and coalgebra,
we spell out all the definitions and results concretely, without the need for
specialised language. We organise the paper as follows:
* •
In Section 2 we review basic definitions from automata theory and recall the
semantics of regular expressions through Brzozowski derivatives [10]. Then, in
order to talk about distances, we state basic definitions and properties
surrounding (pseudo)metric spaces.
* •
In Section 3 we instantiate the framework of coalgebraic behavioural metrics
[5] to the concrete case of deterministic automata. We recall the abstract
results from [5] in simple automata-theoretic terms.
* •
In Section 4 we start by recalling the definitions surrounding the
quantitative equational theories [22] from the literature. We then present the
axioms of our inference system for the shortest-distinguishing-word distance
of regular expressions, give a soundness result and provide a discussion of
the axioms. The interesting insight is that when relying on quantitative
equational theories which contain an infinitary rule capturing the notion of
convergence, there is no need for any fixpoint introduction rule. We
illustrate this by axiomatically deriving Salomaa’s fixpoint rule for regular
expressions [29].
* •
The key result of our paper is contained in Section 5, where we prove
completeness of our inference system. The heart of the argument relies on
showing that the behavioural distance of regular expressions can be
approximated from above using Kleene’s fixpoint theorem, which can then be
mimicked by means of axiomatic reasoning. This part of the paper
makes heavy use of the order-theoretic and Banach space structures carried by
the sets of pseudometrics over a given set.
* •
We conclude in Section 6, review related literature, and sketch directions for
future work.
## 2 Preliminaries
We start by recalling basic definitions surrounding deterministic automata,
regular expressions and (pseudo)metric spaces from the literature.
### Deterministic automata.
A deterministic automaton $\mathcal{M}$ with inputs in a finite alphabet $A$
is a pair $(M,\langle o_{M},t_{M}\rangle)$ consisting of a set of states $M$
and a pair of functions $\langle o_{M},t_{M}\rangle$, where
$o_{M}:M\to\\{0,1\\}$ is the _output_ function which determines whether a
state $m$ is final ($o_{M}(m)=1$) or not ($o_{M}(m)=0$), and $t_{M}:M\to M^{A}$ is
the _transition_ function, which, given an input letter $a$ determines the
next state. If the set $M$ of states is finite, then we call an automaton
$\mathcal{M}$ a deterministic finite automaton (DFA). We will frequently write
$m_{a}$ to denote $t_{M}(m)(a)$ and refer to $m_{a}$ as the derivative of $m$
for the input $a$. The definition of derivatives can be inductively extended to
words $w\in A^{\ast}$ by setting $m_{\epsilon}=m$ and
$m_{aw^{\prime}}=(m_{a})_{w^{\prime}}$. Note that our definition of
deterministic automaton slightly differs from the most common one in the
literature, by not explicitly including the initial state. Instead of talking
about the language of the automaton, we will talk about the languages of
particular states of the automaton. Given a state $m\in M$, we write
$L_{\mathcal{M}}(m)\subseteq{A^{\ast}}$ for its language, which is formally
defined by $L_{\mathcal{M}}(m)=\\{w\in A^{\ast}\mid o_{M}(m_{w})=1\\}$. Given two
deterministic automata $(M,\langle o_{M},t_{M}\rangle)$ and $(N,\langle
o_{N},t_{N}\rangle)$, a function $h:M\to N$ is a homomorphism if it preserves
outputs and input derivatives, that is $o_{N}(h(m))=o_{M}(m)$ and
$h(m)_{a}=h(m_{a})$. The set of all languages $\mathcal{P}(A^{\ast})$ over an
alphabet $A$ can be made into a deterministic automaton
$(\mathcal{P}(A^{\ast}),\langle o_{L},t_{L}\rangle)$, where for
$l\in\mathcal{P}(A^{\ast})$ the output function is given by
$o_{L}(l)=[\epsilon\in l]$ and for all $a\in A$ the input derivative is
defined to be $l_{a}=\\{w\mid aw\in l\\}$. This automaton is _final_ , that is
for any other automaton $\mathcal{M}=(M,\langle o_{M},t_{M}\rangle)$ there
exists a unique homomorphism from $M$ to $\mathcal{P}(A^{\ast})$, which is
precisely given by the map $L_{\mathcal{M}}:M\to\mathcal{P}(A^{\ast})$ taking
each state $m\in M$ to its language. Given a set of states
$M^{\prime}\subseteq M$, we write $\langle
M^{\prime}\rangle_{{\mathcal{M}}}\subseteq M$ for the smallest set of states
reachable from $M^{\prime}$ through the transition function of the automaton
$\mathcal{M}$. Clearly, $(\langle M^{\prime}\rangle_{{\mathcal{M}}},\langle
o_{M},t_{M}\rangle)$ is a deterministic automaton. We will abuse the notation
and write $\langle M^{\prime}\rangle_{{\mathcal{M}}}$ for $(\langle
M^{\prime}\rangle_{{\mathcal{M}}},\langle o_{M},t_{M}\rangle)$. The canonical
inclusion map $\iota:\langle M^{\prime}\rangle_{{\mathcal{M}}}\hookrightarrow
M$ given by $\iota(m)=m$ for all $m\in\langle
M^{\prime}\rangle_{{\mathcal{M}}}$ is a homomorphism from $\langle
M^{\prime}\rangle_{{\mathcal{M}}}$ to $\mathcal{M}$. In the case of singleton
and two-element sets of states, we will simplify the notation and write
$\langle m\rangle_{{\mathcal{M}}}$ and $\langle
m,m^{\prime}\rangle_{{\mathcal{M}}}$.
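As a concrete illustration, here is one possible Python rendering of these definitions (a sketch with naming of our own choosing; the paper fixes no implementation). An automaton is a pair of output and transition functions, word derivatives $m_{w}$ are computed letter by letter, and $w\in L_{\mathcal{M}}(m)$ is decided by checking $o_{M}(m_{w})=1$:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DFA:
    out: Callable[[object], int]           # o_M : M -> {0, 1}
    step: Callable[[object, str], object]  # t_M : M -> M^A, as (state, letter)

    def derivative(self, m, w):
        # m_eps = m and m_{a w'} = (m_a)_{w'}
        for a in w:
            m = self.step(m, a)
        return m

    def accepts(self, m, w):
        # w is in L_M(m) iff o_M(m_w) = 1
        return self.out(self.derivative(m, w)) == 1

# The middle automaton of Section 1: r0 -a-> r1 -a-> r2 (a non-accepting
# sink), with r0 and r1 accepting; the language of r0 is {eps, a}.
middle = DFA(out=lambda m: 1 if m in ("r0", "r1") else 0,
             step=lambda m, a: {"r0": "r1", "r1": "r2", "r2": "r2"}[m])
assert middle.accepts("r0", "a") and not middle.accepts("r0", "aa")
```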
### Regular expressions.
We let $e,f$ range over _regular expressions over $A$_ generated by the
following grammar:
$e,f\in\mathsf{Exp}::=0\mid 1\mid a\in A\mid e+f\mid e\mathbin{;}f\mid
e^{\ast}$
The standard interpretation of regular expressions
$\llbracket-\rrbracket:\mathsf{Exp}\to\mathcal{P}(A^{\ast})$ is inductively
defined by the following:
$\llbracket 0\rrbracket=\emptyset\quad\llbracket
1\rrbracket=\\{\epsilon\\}\quad\llbracket a\rrbracket=\\{a\\}\quad\llbracket
e+f\rrbracket=\llbracket e\rrbracket\cup\llbracket f\rrbracket\quad\llbracket
e\mathbin{;}f\rrbracket=\llbracket e\rrbracket\diamond\llbracket
f\rrbracket\quad\llbracket e^{\ast}\rrbracket=\llbracket e\rrbracket^{\ast}$
We write $\epsilon$ for the empty word. Given $L,M\subseteq A^{\ast}$, we
define $L\diamond M=\\{lm\mid l\in L,m\in M\\}$, where mere juxtaposition
denotes concatenation of words. $L^{\ast}$ denotes the _asterate_ of the
language $L$ defined as $L^{\ast}=\bigcup_{i\in\mathbb{N}}L^{i}$ with
$L^{0}=\\{\epsilon\\}$ and $L^{n+1}=L\diamond L^{n}$.
### Brzozowski derivatives.
Kleene’s famous theorem states that the formal languages accepted by DFAs
are in one-to-one correspondence with the formal languages definable by regular
expressions. One direction of this theorem involves constructing a DFA for an
arbitrary regular expression. The most common way is via the Thompson
construction, $\epsilon$-transition removal and determinisation.
recall a direct construction due to Brzozowski [10], in which the set
$\mathsf{Exp}$ of regular expressions is equipped with a structure of
deterministic automaton $\mathcal{R}=(\mathsf{Exp},\langle
o_{\mathcal{R}},t_{\mathcal{R}}\rangle)$ through so-called Brzozowski
derivatives [10]. The output derivative
$o_{\mathcal{R}}:\mathsf{Exp}\to\\{0,1\\}$ is defined inductively by the
following for $a\in A$ and $e,f\in\mathsf{Exp}$:
$\displaystyle o_{\mathcal{R}}(0)=0\quad o_{\mathcal{R}}(1)=1\quad
o_{\mathcal{R}}(a)=0$ $\displaystyle
o_{\mathcal{R}}(e+f)=o_{\mathcal{R}}(e)\vee o_{\mathcal{R}}(f)\quad
o_{\mathcal{R}}(e\mathbin{;}f)=o_{\mathcal{R}}(e)\wedge
o_{\mathcal{R}}(f)\quad o_{\mathcal{R}}(e^{\ast})=1$
Similarly, the transition derivative $t_{\mathcal{R}}\in\mathsf{Exp}\to
A\to\mathsf{Exp}$ denoted $t_{\mathcal{R}}(e)(a)=(e)_{a}$ is defined by the
following:
$\displaystyle(0)_{a}=0\quad(1)_{a}=0\quad(a^{\prime})_{a}=\begin{cases}1&a=a^{\prime}\\\ 0&a\neq a^{\prime}\end{cases}$
$\displaystyle(e+f)_{a}=(e)_{a}+(f)_{a}\quad(e\mathbin{;}f)_{a}=(e)_{a}\mathbin{;}f+o_{\mathcal{R}}(e)\mathbin{;}(f)_{a}\quad(e^{\ast})_{a}=(e)_{a}\mathbin{;}e^{\ast}$
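These derivatives translate directly into code. The following sketch (the tuple encoding of expressions is our own choice, made purely for illustration) implements $o_{\mathcal{R}}$ and $(e)_{a}$, so that membership of a word $w$ in $\llbracket e\rrbracket$ is decided by computing $o_{\mathcal{R}}(e_{w})$:

```python
# Expressions as tuples: ("0",), ("1",), ("ch", a),
# ("+", e, f), (";", e, f) and ("*", e).

def o(e):
    tag = e[0]
    if tag in ("0", "ch"): return 0
    if tag in ("1", "*"): return 1
    if tag == "+": return o(e[1]) | o(e[2])
    if tag == ";": return o(e[1]) & o(e[2])

def d(e, a):
    tag = e[0]
    if tag in ("0", "1"): return ("0",)
    if tag == "ch": return ("1",) if e[1] == a else ("0",)
    if tag == "+": return ("+", d(e[1], a), d(e[2], a))
    if tag == ";":  # (e;f)_a = e_a ; f + o(e) ; f_a
        left = (";", d(e[1], a), e[2])
        return ("+", left, d(e[2], a)) if o(e[1]) else left
    if tag == "*": return (";", d(e[1], a), e)  # (e*)_a = e_a ; e*

def member(e, w):
    for a in w:
        e = d(e, a)
    return o(e) == 1

a_star = ("*", ("ch", "a"))
assert member(a_star, "aaa")
assert not member(("+", ("ch", "a"), ("1",)), "aa")  # a + 1 rejects aa
```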
The canonical language-assigning homomorphism from $\mathcal{R}$ to
$\mathcal{L}$ happens to coincide with the semantics map
$\llbracket-\rrbracket$ assigning a language to each regular expression.
###### Lemma 2.1 ([32, Theorem 3.1.4]).
For all $e\in\mathsf{Exp}$, $\llbracket e\rrbracket=L_{\mathcal{R}}(e)$.
Instead of looking at the infinite-state automaton defined on the state-space of
all regular expressions, we can restrict ourselves to the subautomaton
$\langle e\rangle_{{\mathcal{R}}}$ of $\mathcal{R}$ while still obtaining the
semantics of $e$.
###### Lemma 2.2.
For all $e\in\mathsf{Exp}$, $\llbracket e\rrbracket=L_{\langle
e\rangle_{{\mathcal{R}}}}(e)$
Unfortunately, for an arbitrary regular expression $e\in\mathsf{Exp}$, the
automaton $\langle e\rangle_{{\mathcal{R}}}$ is not guaranteed to have a
finite set of states. However, simplifying the transition derivatives by
removing duplicates in expressions of the form $e_{1}+\dots+e_{n}$
guarantees a finite number of states reachable from any expression. Formally
${{\mathrel{\dot{\equiv}}}}\subseteq{\mathsf{Exp}\times\mathsf{Exp}}$ be the
least congruence relation closed under ${{(e+f)+g}{\leavevmode\nobreak\
{\mathrel{\dot{\equiv}}}\leavevmode\nobreak\ }{e+(f+g)}}$ (Associativity),
${e+f}{\leavevmode\nobreak\ {\mathrel{\dot{\equiv}}}\leavevmode\nobreak\
}{f+e}$ (Commutativity) and ${e}{\leavevmode\nobreak\
{\mathrel{\dot{\equiv}}}\leavevmode\nobreak\ }{e+e}$ (Idempotence) for all
$e,f,g\in\mathsf{Exp}$. We will write
${{\mathsf{Exp}}/{{\mathrel{\dot{\equiv}}}}}$ for the quotient of
$\mathsf{Exp}$ by the relation ${\mathrel{\dot{\equiv}}}$ and
$[-]_{{\mathrel{\dot{\equiv}}}}:\mathsf{Exp}\to{\mathsf{Exp}}/{{\mathrel{\dot{\equiv}}}}$
for the canonical map taking each expression $e\in\mathsf{Exp}$ into its
equivalence class $[e]_{{\mathrel{\dot{\equiv}}}}$ modulo
${\mathrel{\dot{\equiv}}}$. Because of [27, Proposition 5.8],
${\mathsf{Exp}}/{{\mathrel{\dot{\equiv}}}}$ can be equipped with a structure
of deterministic automaton
$\mathcal{Q}=({\mathsf{Exp}}/{{\mathrel{\dot{\equiv}}}},\langle
o_{\mathcal{Q}},t_{\mathcal{Q}}\rangle)$, where for all $e\in\mathsf{Exp},a\in
A$, $o_{\mathcal{Q}}([e]_{{\mathrel{\dot{\equiv}}}})=o_{\mathcal{R}}(e)$ and
$([e]_{{\mathrel{\dot{\equiv}}}})_{a}=[e_{a}]_{{\mathrel{\dot{\equiv}}}}$,
which makes the quotient map
$[-]_{{\mathrel{\dot{\equiv}}}}:\mathsf{Exp}\to{\mathsf{Exp}}/{{\mathrel{\dot{\equiv}}}}$
into an automaton homomorphism from the Brzozowski automaton $\mathcal{R}$
into $\mathcal{Q}$. This automaton enjoys the following property:
###### Lemma 2.3 ([10, Theorem 4.3]).
For any $e\in\mathsf{Exp}$, the set $\langle
e\rangle_{{\mathcal{Q}}}\subseteq{\mathsf{Exp}}/{{\mathrel{\dot{\equiv}}}}$ is
finite.
Through an identical line of reasoning as before (Lemma 2.2), we can show
that:
###### Lemma 2.4.
For all $e\in\mathsf{Exp}$,
$L_{\langle[e]_{{\mathrel{\dot{\equiv}}}}\rangle_{{\mathcal{Q}}}}([e]_{{\mathrel{\dot{\equiv}}}})=\llbracket
e\rrbracket$
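Operationally, the quotient amounts to normalising the sums occurring in derivatives: flatten nested $+$, deduplicate and sort the summands, so that ACI-equivalent expressions share one canonical representative. A possible sketch (our own, reusing the tuple encoding and the derivative function `d` from the previous snippet) then enumerates the reachable states, of which there are finitely many by Lemma 2.3:

```python
def summands(e):
    # Flatten a tree of +'s into its list of non-sum summands.
    if e[0] == "+":
        return summands(e[1]) + summands(e[2])
    return [e]

def norm(e):
    """A canonical representative of e modulo associativity,
    commutativity and idempotence of +."""
    if e[0] == "+":
        parts = sorted({norm(s) for s in summands(e)})
        acc = parts[0]
        for p in parts[1:]:
            acc = ("+", acc, p)  # re-associate to the left
        return acc
    if e[0] == ";": return (";", norm(e[1]), norm(e[2]))
    if e[0] == "*": return ("*", norm(e[1]))
    return e

def reachable(e, alphabet):
    """States of the subautomaton of Q generated by [e]."""
    start = norm(e)
    seen, todo = {start}, [start]
    while todo:
        cur = todo.pop()
        for a in alphabet:
            nxt = norm(d(cur, a))
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen

print(len(reachable(("*", ("ch", "a")), "a")))  # a small finite number
```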
### (Pseudo)metric spaces.
Let $\top\in\left]0,\infty\right]$ be a fixed maximal element. A
$\top$-bounded _pseudometric_ on a set $X$ (equivalently $\top$-_pseudometric_
or even just a _pseudometric_ if $\top$ is clear from the context) is a
function $d:X\times X\to[0,\top]$ satisfying $d(x,x)=0$ (_reflexivity_),
$d(x,y)=d(y,x)$ (_symmetry_) and $d(x,z)\leq d(x,y)+d(y,z)$ (_triangle
inequality_) for all $x,y,z\in X$. If additionally $d(x,y)=0$ implies $x=y$,
$d$ is called a $\top$-_metric_. A _(pseudo)metric_ space is a pair $(X,d)$
where $X$ is a set and $d$ is a (pseudo)metric on $X$. Given pseudometric
spaces $(X,d_{X})$ and $(Y,d_{Y})$, we call a map $f:X\to Y$ _nonexpansive_ ,
if for all $x,x^{\prime}\in X$, $d_{Y}(f(x),f(x^{\prime}))\leq
d_{X}(x,x^{\prime})$ and an _isometry_ if
$d_{Y}(f(x),f(x^{\prime}))=d_{X}(x,x^{\prime})$. A simple example of a
pseudometric is the discrete metric, which can be defined on any set $X$ by
$d_{X}(x,x)=0$ for all $x\in X$ and $d_{X}(x,y)=\top$ for $x,y\in X$ such that
$x\neq y$. The set $D_{X}$ of (pseudo)metrics over some fixed set $X$ can be
equipped with a partial order structure given by the pointwise order, i.e.
$d\sqsubseteq d^{\prime}\iff\forall x,y\in X.\,d(x,y)\leq
d^{\prime}(x,y)$.
###### Lemma 2.5 ([5, Lemma 3.2]).
$(D_{X},\sqsubseteq)$ is a complete lattice. The join of an arbitrary set of
pseudometrics $D\subseteq D_{X}$ is taken pointwise, i.e. $\left(\sup
D\right)(x,y)=\sup\\{d(x,y)\mid d\in D\\}$ for $x,y\in X$. The meet of $D$ is
defined to be $\inf D=\sup\\{d\mid d\in D_{X},\forall{d^{\prime}\in
D},d\sqsubseteq d^{\prime}\\}$.
Crucially for our completeness proof, if we are dealing with descending
chains, that is sequences $\\{d_{i}\\}_{i\in\mathbb{N}}$, such that
$d_{i}\sqsupseteq d_{i+1}$ for all $i\in\mathbb{N}$, then we can also
calculate infima in the pointwise way. (Lemma 2.6 is one of the intermediate
results used in the proof of [2, Lemma 5.6], communicated to us by the
authors of [2]; as it was excluded from that paper, we include it along with
its proof for the sake of completeness.)
###### Lemma 2.6.
Let $\\{d_{i}\\}_{i\in\mathbb{N}}$ be an infinite descending chain in the
lattice $(D_{X},\sqsubseteq)$ of pseudometrics over some fixed set $X$. Then
$(\inf\\{d_{i}\mid i\in\mathbb{N}\\})(x,y)=\inf\\{d_{i}(x,y)\mid
i\in\mathbb{N}\\}$ for any $x,y\in X$.
###### Proof 2.7.
It suffices to argue that $d(x,y)=\inf\\{d_{i}(x,y)\mid i\in\mathbb{N}\\}$ is
a pseudometric. For reflexivity, observe that $d(x,x)=\inf\\{d_{i}(x,x)\mid
i\in\mathbb{N}\\}=\inf\\{0\\}=0$ for all $x\in X$. For symmetry, we have that
$d(x,y)=\inf\\{d_{i}(x,y)\mid i\in\mathbb{N}\\}=\inf\\{d_{i}(y,x)\mid
i\in\mathbb{N}\\}=d(y,x)$ for any $x,y\in X$. The only difficult case is
the triangle inequality. First, let $i,j\in\mathbb{N}$ and define $k=\max(i,j)$.
Since the chain is descending, $d_{k}\sqsubseteq d_{i}$ and $d_{k}\sqsubseteq d_{j}$, so
$d_{k}(x,y)+d_{k}(y,z)\leq d_{i}(x,y)+d_{j}(y,z)$. Therefore
$\inf\\{d_{l}(x,y)+d_{l}(y,z)\mid l\in\mathbb{N}\\}$ is a lower bound of
$d_{i}(x,y)+d_{j}(y,z)$ for any $i,j\in\mathbb{N}$ and hence it is below the
greatest lower bound, that is, $\inf\\{d_{l}(x,y)+d_{l}(y,z)\mid
l\in\mathbb{N}\\}\leq\inf\\{d_{i}(x,y)+d_{j}(y,z)\mid i,j\in\mathbb{N}\\}$. We
can use that property to show the following: $d(x,z)=\inf\\{d_{i}(x,z)\mid
i\in\mathbb{N}\\}\leq\inf\\{d_{i}(x,y)+d_{i}(y,z)\mid
i\in\mathbb{N}\\}\leq\inf\\{d_{i}(x,y)+d_{j}(y,z)\mid
i,j\in\mathbb{N}\\}=\inf\\{d_{i}(x,y)\mid
i\in\mathbb{N}\\}+\inf\\{d_{j}(y,z)\mid j\in\mathbb{N}\\}=d(x,y)+d(y,z)$,
which completes the proof.
Additionally, the set of pseudometrics can be equipped with a norm. We write
$\overline{\mathbb{R}}=[-\infty,\infty]$ for the set of extended reals. For
any set $X$, the set of functions $\overline{\mathbb{R}}^{X\times X}$, which
is a superset of $D_{X}$, can be seen as a Banach space [26] (complete normed
vector space) by means of the sup-norm $\|d\|=\sup_{x,y\in X}|d(x,y)|$. This
structure will implicitly underly some of the claims used as intermediate
steps in the proof of completeness.
## 3 Behavioural distance
We now instantiate the abstract coalgebraic framework [5] to the case of
deterministic automata relying on the lifting described in [5, Example 5.33].
We concretise the generic results from that paper and spell them in simple
automata-theoretic terms.
### Lifting pseudometrics.
Let $\mathcal{M}=(M,\langle o_{M},t_{M}\rangle)$ be a deterministic automaton.
Its one-step observable behaviour (after applying the output and transition
derivatives) can be seen as pairs of the type $\\{0,1\\}\times M^{A}$, where
the first component determines whether the given state is accepting or not and
the second one gives successor state for each letter from the input alphabet.
Let’s say we have two such observations $\langle o_{1},f_{1}\rangle,\langle
o_{2},f_{2}\rangle\in\\{0,1\\}\times M^{A}$. If we have some notion of a
distance defined on the state-space of our automaton, or, speaking more
formally, a $1$-pseudometric $d:M\times M\to[0,1]$, then we can _lift_ this
notion of distance to a distance on observations, given by the following:
$d_{\\{0,1\\}\times M^{A}}(\langle o_{1},f_{1}\rangle,\langle
o_{2},f_{2}\rangle)=\max\\{d_{2}(o_{1},o_{2}),\lambda\cdot\max_{a\in
A}d(f_{1}(a),f_{2}(a))\\}\qquad\lambda\in\left]0,1\right[$
The definition above involves $d_{2}$, the discrete metric on the set
$\\{0,1\\}$. One can observe that $d_{\\{0,1\\}\times M^{A}}$ is again a
$1$-pseudometric, but this time defined on the set $\\{0,1\\}\times M^{A}$
instead. We now move on to showing how one could use this lifting in order to
equip a state-space of automaton $\mathcal{M}$ with a sensible notion of
behavioural pseudometric.
### Behavioural pseudometric.
Given a $1$-pseudometric $d:M\times M\to[0,1]$ on the state-space of the
automaton, we can use our lifting to produce a new pseudometric
$\Phi_{\mathcal{M}}(d):M\times M\to[0,1]$ on the same set, which would
calculate a distance between an arbitrary pair of states by first applying the
output and transition derivatives to obtain a pair of observations and then by
calculating the distance between them using the aforementioned lifting of
pseudometric $d$ to the pseudometric defined on the set $\\{0,1\\}\times
M^{A}$. Formally speaking, define a map $\Phi_{\mathcal{M}}:D_{M}\to D_{M}$ on
the lattice of $1$-pseudometrics on $M$ given by the following:
$\Phi_{\mathcal{M}}(d)(m,m^{\prime})=\max\\{d_{2}(o_{M}(m),o_{M}(m^{\prime})),\lambda\cdot\max_{a\in
A}d(m_{a},m^{\prime}_{a})\\}\qquad\lambda\in\left]0,1\right[$
The construction above only tells us how to construct new pseudometrics on the
state-space of the automaton out of existing ones, but does not give one to
start with. It turns out that the map $\Phi_{\mathcal{M}}$ is a monotone
mapping on the lattice of $1$-pseudometrics on the set $M$ [5, Lemma 6.1].
Because of that, one can use the Knaster-Tarski fixpoint theorem [35] and
construct its least fixed point, explicitly given by
$d_{\mathcal{M}}=\inf\\{d\mid d\in D_{M}\wedge\Phi_{\mathcal{M}}(d)\sqsubseteq
d\\}$. Pseudometrics that are fixpoints of $\Phi_{\mathcal{M}}$ intuitively
interact well with the automaton structure, as they satisfy the property that
the distance between two states is the same as the distance between their
observable behaviour calculated using the lifting. Taking the least such
pseudometric satisfies several desirable properties [5] and thus we will call
$d_{\mathcal{M}}$ a behavioural pseudometric on the automaton $\mathcal{M}$.
First of all, preserving automaton transitions also preserves behavioural
distances.
###### Proposition 3.1.
Let $\mathcal{M}=(M,\langle o_{M},t_{M}\rangle)$ and $\mathcal{N}=(N,\langle
o_{N},t_{N}\rangle)$ be deterministic automata. If $h:M\to N$ is a
homomorphism, then it is also an isometric mapping between pseudometric spaces
$(M,d_{\mathcal{M}})$ and $(N,d_{\mathcal{N}})$.
If we look at the final automaton on the set of all formal languages over a
fixed finite alphabet $A$, then one can easily verify that the behavioural
distance given by the least fixpoint construction precisely corresponds to
Equation 1 defining the shortest-distinguishing-word distance stated in
Section 1. In general, states of an arbitrary deterministic automaton
that the behavioural pseudometric places at distance zero are
language equivalent. When we look at $d_{\mathcal{L}}$ defined on the states
of the final automaton, whose state-space consists of formal languages,
language equivalence corresponds to equality of states. In other words,
$d_{\mathcal{L}}$ becomes a metric.
###### Lemma 3.2.
Let $\mathcal{M}=(M,\langle o_{M},t_{M}\rangle)$ be an arbitrary deterministic
automaton and let $\mathcal{L}=(\mathcal{P}(A^{\ast}),\langle
o_{L},t_{L}\rangle)$ be a deterministic automaton structure on the set of all
languages over an alphabet $A$.
1. 1.
$(\mathcal{P}(A^{\ast}),d_{\mathcal{L}})$ is a metric space.
2. 2.
For any $m,m^{\prime}\in M$, $d_{\mathcal{M}}(m,m^{\prime})=0\iff
L_{\mathcal{M}}(m)=L_{\mathcal{M}}(m^{\prime})$.
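Before moving on, note that for a finite automaton the behavioural pseudometric can be computed numerically: since $\lambda<1$, one can check that $\Phi_{\mathcal{M}}$ is a contraction in the sup-norm, so iterating it from the bottom pseudometric converges to $d_{\mathcal{M}}$. A minimal sketch (our own naming, assuming the state space is listed explicitly):

```python
def behavioural_distance(states, alphabet, out, step, lam=0.5, tol=1e-9):
    """Iterate Phi_M(d)(m, n) = max(d2(o(m), o(n)), lam * max_a d(m_a, n_a))
    to a fixpoint, starting from the everywhere-zero pseudometric."""
    dist = {(m, n): 0.0 for m in states for n in states}
    while True:
        new = {}
        for m in states:
            for n in states:
                base = 0.0 if out(m) == out(n) else 1.0  # discrete metric d2
                succ = max(dist[(step(m, a), step(n, a))] for a in alphabet)
                new[(m, n)] = max(base, lam * succ)
        if max(abs(new[k] - dist[k]) for k in dist) < tol:
            return new
        dist = new

# The three automata of Section 1, combined into one state space.
states = ["q0", "r0", "r1", "r2", "s0"]
out = lambda m: 1 if m in ("q0", "r0", "r1") else 0
step = lambda m, a: {"q0": "q0", "r0": "r1", "r1": "r2",  # |A| = 1, so the
                     "r2": "r2", "s0": "s0"}[m]           # letter is ignored
d = behavioural_distance(states, "a", out, step)
print(d[("q0", "r0")], d[("q0", "s0")])  # 0.25 1.0, matching Section 1
```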
## 4 Quantitative Axiomatisation
In order to provide a quantitative inference system for reasoning about the
behavioural distance of languages denoted by regular expressions, we first
recall the definition of quantitative equational theories from the existing
literature [22, 2] following the notational conventions from [2]. We then
present our axiomatisation and demonstrate its soundness. The interesting
thing about our axiomatisation is the lack of any fixpoint introduction rule.
We show that in the case of the quantitative analogue of equational logic [22]
containing the infinitary rule capturing the notion of convergence, we can use
our axioms to derive Salomaa’s fixpoint rule from his axiomatisation of
language equivalence of regular expressions [29].
### Quantitative equational theories.
Let $\Sigma$ be an algebraic signature (in the sense of universal algebra
[11]) consisting of operation symbols $f_{n}\in\Sigma$ of arity
$n\in\mathbb{N}$. If we write $X$ for the countable set of _metavariables_ ,
then $\mathbb{T}(\Sigma,X)$ denotes a set of freely generated terms over $X$
built from the signature $\Sigma$. As a notational convention, we will use
letters $t,s,u,\ldots\in\mathbb{T}(\Sigma,X)$ to denote terms. By a
_substitution_ we mean a function of the type $\sigma\colon
X\to\mathbb{T}(\Sigma,X)$ allowing one to replace metavariables with terms. Each
substitution can be inductively extended to terms in a unique way by setting
$\sigma(f(t_{1},\dots,t_{n}))=f(\sigma(t_{1}),\dots,\sigma(t_{n}))$ for each
operation symbol $f_{n}\in\Sigma$ from the signature. We will write
$\mathcal{S}(\Sigma)$ for the set of all substitutions. Given two terms
$t,s\in\mathbb{T}(\Sigma,X)$ and a nonnegative rational number
$\varepsilon\in\mathbb{Q}^{+}$ denoting the distance between the terms, we
call $t\equiv_{\varepsilon}s$ a _quantitative equation (of type $\Sigma$)_.
Notation-wise, we will write $\mathcal{E}(\Sigma)$ to denote the set of all
quantitative equations (of type $\Sigma$) and we will use the capital Greek
letters $\Gamma,\Theta,\ldots\subseteq\mathcal{E}(\Sigma)$ to denote the
subsets of $\mathcal{E}(\Sigma)$. By a _deducibility relation_ we mean a
binary relation denoted
${\vdash}\subseteq\mathcal{P}({\mathcal{E}(\Sigma)})\times\mathcal{E}(\Sigma)$.
Similarly, to the classical equational logic, we will use the following
notational shorthands $\Gamma\vdash
t\equiv_{\varepsilon}s\iff(\Gamma,t\equiv_{\varepsilon}s)\in{\vdash}$ and
$\vdash t\equiv_{\varepsilon}s\iff\emptyset\vdash t\equiv_{\varepsilon}s$.
Furthermore, following the usual notational conventions, we will write
$\Gamma\vdash\Theta$ as a shorthand for the situation when $\Gamma\vdash
t\equiv_{\varepsilon}s$ holds for all $t\equiv_{\varepsilon}s\in\Theta$. To
call $\vdash$ a _quantitative deduction system (of type $\Sigma$)_ it needs to
satisfy the following rules of inference:
$\displaystyle(\textsf{Refl})\quad$ $\displaystyle\vdash t\equiv_{0}t\,,$
$\displaystyle(\textsf{Symm})\quad$
$\displaystyle\\{t\equiv_{\varepsilon}s\\}\vdash s\equiv_{\varepsilon}t\,,$
$\displaystyle(\textsf{Triang})\quad$
$\displaystyle\\{t\equiv_{\varepsilon}u,u\equiv_{\varepsilon^{\prime}}s\\}\vdash
t\equiv_{\varepsilon+\varepsilon^{\prime}}s\,,$
$\displaystyle(\textsf{Max})\quad$
$\displaystyle\\{t\equiv_{\varepsilon}s\\}\vdash
t\equiv_{\varepsilon+\varepsilon^{\prime}}s\,,\text{ for all
$\varepsilon^{\prime}>0$}\,,$ $\displaystyle(\textsf{Cont})\quad$
$\displaystyle\\{t\equiv_{\varepsilon^{\prime}}s\mid\varepsilon^{\prime}>\varepsilon\\}\vdash
t\equiv_{\varepsilon}s\,,$ $\displaystyle(\textsf{NExp})\quad$
$\displaystyle\\{t_{1}\equiv_{\varepsilon}s_{1},\ldots,t_{n}\equiv_{\varepsilon}s_{n}\\}\vdash
f(t_{1},\dots,t_{n})\equiv_{\varepsilon}f(s_{1},\dots,s_{n})\,,\text{ for all
$f_{n}\in\Sigma$}\,,$ $\displaystyle(\textsf{Subst})\quad$ If $\Gamma\vdash
t\equiv_{\varepsilon}s$, then
$\sigma(\Gamma)\vdash\sigma(t)\equiv_{\varepsilon}\sigma(s)$, for all
$\sigma\in\mathcal{S}(\Sigma)$ $\displaystyle(\textsf{Cut})\quad$
$\displaystyle\text{If $\Gamma\vdash\Theta$ and $\Theta\vdash
t\equiv_{\varepsilon}s$, then $\Gamma\vdash t\equiv_{\varepsilon}s$}\,,$
$\displaystyle(\textsf{Assum})\quad$ $\displaystyle\text{If
$t\equiv_{\varepsilon}s\in\Gamma$, then $\Gamma\vdash
t\equiv_{\varepsilon}s$}\,.$
where $\sigma(\Gamma)=\left\\{\sigma(t)\equiv_{\varepsilon}\sigma(s)\mid
t\equiv_{\varepsilon}s\in\Gamma\right\\}$. Finally, by a _quantitative
equational theory_ we mean a set ${\mathcal{U}}$ of universally quantified
_quantitative inferences_
$\\{t_{1}\equiv_{\varepsilon_{1}}s_{1},\dots,t_{n}\equiv_{\varepsilon_{n}}s_{n}\\}\vdash
t\equiv_{\varepsilon}s\,,$ with _finitely many premises_ , closed under
$\vdash$-derivability.
### Quantitative algebras.
Quantitative equational theories lie on the syntactic side of the picture. On
the semantic side, we have their models called _quantitative algebras_ ,
defined as follows.
###### Definition 4.1 ([22, Definition 3.1]).
A quantitative algebra is a tuple
$\mathcal{A}=(A,\Sigma^{\mathcal{A}},d^{\mathcal{A}})$, such that
$(A,\Sigma^{\mathcal{A}})$ is an algebra for the signature $\Sigma$ and
$(A,d^{\mathcal{A}})$ is an $\infty$-pseudometric space such that for all operation
symbols $f_{n}\in\Sigma$, for all $1\leq i\leq n$, $a_{i},b_{i}\in A$,
$d^{\mathcal{A}}(a_{i},b_{i})\leq\varepsilon$ implies
$d^{\mathcal{A}}(f^{\mathcal{A}}(a_{1},\dots,a_{n}),f^{\mathcal{A}}(b_{1},\dots,b_{n}))\leq\varepsilon$.
Consider a quantitative algebra
$\mathcal{A}=(A,\Sigma^{\mathcal{A}},d^{\mathcal{A}})$. Given an assignment
$\iota\colon X\to A$ of meta-variables from $X$ to elements of carrier $A$,
one can inductively extend it to $\Sigma$-terms $t\in\mathbb{T}(\Sigma,X)$ in
a unique way. We will abuse the notation and just write $\iota(t)$ for the
interpretation of the term $t$ in quantitative algebra $\mathcal{A}$. We will
say that $\mathcal{A}$ _satisfies_ the quantitative inference $\Gamma\vdash
t\equiv_{\varepsilon}s$, written
$\Gamma\models_{\mathcal{A}}t\equiv_{\varepsilon}s$, if for any assignment of
the meta-variables $\iota\colon X\to A$ it is the case that for all
$t^{\prime}\equiv_{\varepsilon^{\prime}}s^{\prime}\in\Gamma$ we have that
$d^{\mathcal{A}}(\iota(t^{\prime}),\iota(s^{\prime}))\leq\varepsilon^{\prime}$
implies $d^{\mathcal{A}}(\iota(t),\iota(s))\leq\varepsilon$. Finally, we say
that a quantitative algebra $\mathcal{A}$ _satisfies_ (or is a _model_ of) the
quantitative theory ${\mathcal{U}}$, if whenever $\Gamma\vdash
t\equiv_{\varepsilon}s\in{\mathcal{U}}$, then
$\Gamma\models_{\mathcal{A}}t\equiv_{\varepsilon}s$.
### Quantitative algebra of regular expressions.
From now on, let’s focus on the signature
$\Sigma=\\{0_{0},1_{0},+_{2},\mathbin{;}_{2},{(-)^{\ast}}_{1}\\}\cup\\{a_{0}\mid
a\in A\\}$, where $A$ is a finite alphabet. This signature consists of all
operations of regular expressions. We can easily interpret all those
operations in the set $\mathsf{Exp}$ of all regular expressions, using trivial
interpretation functions eg. $+^{\mathcal{B}}(e,f)=e+f$, which interpret the
operations by simply constructing the appropriate terms. Formally speaking, we
can do this because the set $\mathsf{Exp}$ is the carrier of initial algebra
[11] (free algebra over the empty set of generators) for the signature
$\Sigma$.
To make this algebra into a quantitative algebra, we first equip the set
$\mathsf{Exp}$ with an $\infty$-pseudometric, given by
$d^{\mathcal{B}}(e,f)=d_{\mathcal{L}}(\llbracket e\rrbracket,\llbracket
f\rrbracket)$ for all $e,f\in\mathsf{Exp}$. Recall that $d_{\mathcal{L}}$ used
in the definition above is a behavioural pseudometric on the final
deterministic automaton carried by the set $\mathcal{P}(A^{\ast})$ of all
formal languages over an alphabet $A$. In other words, we define the distance
between arbitrary expressions $e$ and $f$ to be the distance between formal
languages $\llbracket e\rrbracket$ and $\llbracket f\rrbracket$ calculated
through the shortest-distinguishing-word metric. It turns out that in such a
situation all the interpretation functions of the $\Sigma$-algebra structure on
$\mathsf{Exp}$ are non-expansive with respect to the pseudometric defined
above. In other words, we have that:
###### Lemma 4.2.
$\mathcal{B}=(\mathsf{Exp},\Sigma^{\mathcal{B}},d^{\mathcal{B}})$ is a
quantitative algebra.
### Axiomatisation.
In order to talk about the quantitative algebra $\mathcal{B}$ of the
behavioural distance of regular expressions in an axiomatic way, we introduce
the quantitative equational theory REG (Figure 1).
Nondeterministic choice:
$(\mathsf{SL1})\ \vdash e+e\equiv_{0}e$
$(\mathsf{SL2})\ \vdash e+f\equiv_{0}f+e$
$(\mathsf{SL3})\ \vdash(e+f)+g\equiv_{0}e+(f+g)$
$(\mathsf{SL4})\ \vdash e+0\equiv_{0}e$
$(\mathsf{SL5})\ \\{e\equiv_{\varepsilon}g,f\equiv_{\varepsilon^{\prime}}h\\}\vdash e+f\equiv_{\max(\varepsilon,\varepsilon^{\prime})}g+h$
Sequential composition:
$(\mathsf{1S})\ \vdash 1\mathbin{;}e\equiv_{0}e$
$(\mathsf{S})\ \vdash e\mathbin{;}(f\mathbin{;}g)\equiv_{0}(e\mathbin{;}f)\mathbin{;}g$
$(\mathsf{S1})\ \vdash e\mathbin{;}1\equiv_{0}e$
$(\mathsf{0S})\ \vdash 0\mathbin{;}e\equiv_{0}0$
$(\mathsf{S0})\ \vdash e\mathbin{;}0\equiv_{0}0$
$(\mathsf{D1})\ \vdash e\mathbin{;}(f+g)\equiv_{0}e\mathbin{;}f+e\mathbin{;}g$
$(\mathsf{D2})\ \vdash(e+f)\mathbin{;}g\equiv_{0}e\mathbin{;}g+f\mathbin{;}g$
Loops:
$(\mathsf{Unroll})\ \vdash e^{\ast}\equiv_{0}e\mathbin{;}e^{\ast}+1$
$(\mathsf{Tight})\ \vdash(e+1)^{\ast}\equiv_{0}e^{\ast}$
Behavioural pseudometric:
$(\textsf{Top})\ \vdash e\equiv_{1}f$
$(\textsf{$\lambda$-Pref})\ \\{e\equiv_{\varepsilon}f\\}\vdash a\mathbin{;}e\equiv_{\varepsilon^{\prime}}a\mathbin{;}f$, for $\varepsilon^{\prime}\geq\lambda\cdot\varepsilon$
Figure 1: Axioms of the quantitative equational theory REG for
$e,f,g\in\mathsf{Exp}$ and $a\in A$.
The first group of axioms captures properties of the nondeterministic choice
operator $+$ (SL1-SL5). The first four axioms (SL1-SL4) are the usual laws of
semilattices with bottom element $0$. (SL5) is a quantitative axiom allowing
one to reason about distances between sums of expressions in terms of
distances between expressions being summed. Moreover, (SL1-SL5) are axioms of
so-called _Quantitative Semilattices with zero_ , which have been shown to
axiomatise the Hausdorff metric [22]. The sequencing axioms (1S), (S1), (S)
state that the set $\mathsf{Exp}$ of regular expressions has the structure of
a monoid (with neutral element $1$) with absorbent element $0$ (0S), (S0).
Additionally, (D1-D2) talk about interaction of the nondeterministic choice
operator $+$ with sequential composition. The loop axioms (Unroll) and (Tight)
are directly inherited from Salomaa’s axiomatisation of language equivalence
of regular expressions [29]. (Unroll) axiom associates loops with their
intuitive behaviour of choosing, at each step, between successful termination
and executing the loop body once. (Tight) states that the loop whose body
might instantly terminate, causing the next loop iteration to be executed
immediately is provably equivalent to a different loop, whose body does not
contain immediate termination. The last remaining group are behavioural
pseudometric axioms. (Top) states that any two expressions are at most in
distance one from each other. Finally, ($\lambda$-Pref) captures the fact that
prepending the same letter to arbitrary expressions shrinks the distance
between them by the factor of $\lambda\in]0,1[$ (used in the definition of
$d^{\mathcal{B}}$). This axiom is adapted from the axiomatisation of
discounted probabilistic bisimilarity distance [2].
Through a simple induction on the length of derivation, one can verify that
indeed $\mathcal{B}$ is a model of the quantitative theory REG.
###### Theorem 4.3.
(Soundness) The quantitative algebra
$\mathcal{B}=(\mathsf{Exp},\Sigma^{\mathcal{B}},d^{\mathcal{B}})$ is a model
of the quantitative theory $\mathsf{REG}$. In other words, for any
$e,f\in\mathsf{Exp}$ and $\varepsilon\in\mathbb{Q}^{+}$, if $\Gamma\vdash
e\equiv_{\varepsilon}f\in\mathsf{REG}$, then
$\Gamma\models_{\mathcal{B}}e\equiv_{\varepsilon}f$
###### Proof 4.4.
By structural induction on the judgement $\Gamma\vdash
e\equiv_{\varepsilon}f\in\mathsf{REG}$. The $(\textsf{Subst})$, $(\textsf{Cut})$
and $(\textsf{Assum})$ deduction rules from classical logic hold immediately.
The soundness of $(\textsf{Refl})$, $(\textsf{Symm})$, $(\textsf{Triang})$,
$(\textsf{Cont})$ and $(\textsf{Max})$ follows from the fact that
$d^{\mathcal{B}}$ is a pseudometric. $(\textsf{NExp})$ follows from the fact
that interpretations of symbols from the algebraic signature are nonexpansive
(Lemma 4.2). Recall that
$d^{\mathcal{B}}=d_{\mathcal{L}}\circ(\llbracket-\rrbracket\times\llbracket-\rrbracket)$.
The soundness of $(\textsf{Top})$ follows from the fact that $d_{\mathcal{L}}$
is a 1-pseudometric. Additionally, for all axioms in the form $\vdash
e\equiv_{0}f$ it suffices to show that $\llbracket e\rrbracket=\llbracket
f\rrbracket$. $(\mathsf{SL1})$, $(\mathsf{SL2})$, $(\mathsf{SL3})$,
$(\mathsf{SL4})$, $(\mathsf{1S})$, $(\mathsf{S})$, $(\mathsf{S1})$,
$(\mathsf{0S})$, $(\mathsf{S0})$, $(\mathsf{D1})$, $(\mathsf{D2})$,
$(\mathsf{Unroll})$ and $(\mathsf{Tight})$ are taken from Salomaa’s
axiomatisation of language equivalence of regular expressions [29] and thus
both sides of those equations denote the same formal languages [42, Theorem
5.2]. For $(\textsf{$\lambda$-Pref})$ assume that the premise is satisfied in
the model, that is $d_{\mathcal{L}}(\llbracket e\rrbracket,\llbracket
f\rrbracket)\leq\varepsilon$. Let
$\varepsilon^{\prime}\geq\lambda\cdot\varepsilon$. We show the following:
$\displaystyle d^{\mathcal{B}}(a\mathbin{;}e,a\mathbin{;}f)$
$\displaystyle=d_{\mathcal{L}}(\llbracket a\mathbin{;}e\rrbracket,\llbracket
a\mathbin{;}f\rrbracket)$ (Def. of $d^{\mathcal{B}}$)
$\displaystyle=\Phi_{\mathcal{L}}(d_{\mathcal{L}})(\llbracket
a\mathbin{;}e\rrbracket,\llbracket a\mathbin{;}f\rrbracket)$ ($d_{\mathcal{L}}$ is a
fixpoint of $\Phi_{\mathcal{L}}$)
$\displaystyle=\max\\{d_{2}(o_{\mathcal{L}}(a\mathbin{;}e),o_{\mathcal{L}}(a\mathbin{;}f)),\lambda\cdot\max_{a^{\prime}\in
A}d_{\mathcal{L}}(\llbracket a\mathbin{;}e\rrbracket_{a^{\prime}},\llbracket
a\mathbin{;}f\rrbracket_{a^{\prime}})\\}$ (Def. of $\Phi_{\mathcal{L}}$) $\displaystyle=\lambda\cdot
d_{\mathcal{L}}(\llbracket e\rrbracket,\llbracket f\rrbracket)$ (Def. of final
automaton) $\displaystyle\leq\lambda\cdot\varepsilon\leq\varepsilon^{\prime}$
(Assumptions)
Finally, $(\mathsf{SL5})$ is derivable from the other axioms. (We nevertheless
include $(\mathsf{SL5})$ as an axiom to highlight the similarity of our
inference system with axiomatisations of language equivalence of regular
expressions [29, 19] containing the axioms of semilattices with bottom. In the
previous work [22], $(\mathsf{SL1-SL5})$ are precisely the axioms of
_Quantitative Semilattices with zero_ axiomatising the Hausdorff distance.) If
$\varepsilon=\max(\varepsilon,\varepsilon^{\prime})$, then
$\\{e\equiv_{\varepsilon}g\\}\vdash
e\equiv_{\max(\varepsilon,\varepsilon^{\prime})}g$ holds by
$(\textsf{Assum})$. If $\varepsilon<\max(\varepsilon,\varepsilon^{\prime})$, then
we can derive the quantitative judgement above using $(\textsf{Max})$. By a
similar line of reasoning, we can show that
$\\{f\equiv_{\varepsilon^{\prime}}h\\}\vdash
f\equiv_{\max(\varepsilon,\varepsilon^{\prime})}h$. Finally, using
$(\textsf{Cut})$ and $(\textsf{NExp})$, we can show that
$\\{e\equiv_{\varepsilon}g,f\equiv_{\varepsilon^{\prime}}h\\}\vdash
e+f\equiv_{\max(\varepsilon,\varepsilon^{\prime})}g+h$ as desired.
We now revisit the example from Section 1. Recall that the initial states of
the left and middle automata can be represented as $a^{*}$ and $a+1$,
respectively. The shortest word distinguishing the languages denoted by these
expressions is $aa$. If we fix $\lambda=\frac{1}{2}$, then
$d_{\mathcal{L}}(\llbracket a^{*}\rrbracket,\llbracket
a+1\rrbracket)=\frac{1}{4}=\left(\frac{1}{2}\right)^{|aa|}$. We can derive
this distance by means of axiomatic reasoning in the quantitative
equational theory REG in the following way:
###### Example 4.5.
$\displaystyle\vdash a^{*}$ $\displaystyle\equiv_{1}0$ (Top)
$\displaystyle\vdash a\mathbin{;}a^{*}$
$\displaystyle\equiv_{\frac{1}{2}}a\mathbin{;}0$ ($\lambda$-Pref)
$\displaystyle\vdash a\mathbin{;}a^{*}+1$
$\displaystyle\equiv_{\frac{1}{2}}a\mathbin{;}0+1$ ($\vdash 1\equiv_{0}1$ and
SL5) $\displaystyle\vdash a^{*}$ $\displaystyle\equiv_{\frac{1}{2}}1$ (Triang,
Unroll, S0 and SL4) $\displaystyle\vdash a\mathbin{;}a^{*}$
$\displaystyle\equiv_{\frac{1}{4}}a\mathbin{;}1$ ($\lambda$-Pref)
$\displaystyle\vdash a\mathbin{;}a^{*}+1$
$\displaystyle\equiv_{\frac{1}{4}}a\mathbin{;}1+1$ ($\vdash 1\equiv_{0}1$ and
SL5) $\displaystyle\vdash a^{*}$ $\displaystyle\equiv_{\frac{1}{4}}a+1$
(Triang, Unroll and S1)
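To make the value being derived concrete, the following is a minimal Python sketch (purely illustrative and not part of the formal development; the helper names are our own) that brute-forces the shortest-distinguishing-word distance up to a length bound. With $\lambda=\frac{1}{2}$ it reports $0.25$ for $a^{*}$ versus $a+1$, matching the derivation above.

```python
# Sketch: brute-force the shortest-distinguishing-word distance
# d_L(L1, L2) = lam ** (length of the shortest word lying in exactly
# one of the two languages), assuming lam = 1/2 and membership
# predicates for the two languages.
from itertools import product

lam = 0.5
A = ['a']  # the alphabet

def in_a_star(w):      # language of a*: all words over {a}
    return all(c == 'a' for c in w)

def in_a_plus_one(w):  # language of a + 1: {'', 'a'}
    return w in ('', 'a')

def distance(L1, L2, max_len=10):
    # Enumerate words by increasing length; the first disagreement
    # witnesses the shortest distinguishing word.
    for n in range(max_len + 1):
        for w in map(''.join, product(A, repeat=n)):
            if L1(w) != L2(w):
                return lam ** n
    return 0.0  # no distinguishing word up to the length bound

print(distance(in_a_star, in_a_plus_one))  # 0.25 = (1/2)**len('aa')
```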
### (The lack of) the fixpoint axiom.
Traditionally, completeness proofs for inference systems for behavioural
equivalence of expressions featuring recursive constructs, such as the Kleene
star or $\mu$-recursion [24], rely crucially on fixpoint introduction rules.
Such rules allow one to show that an expression is provably equivalent to a
looping construct if it exhibits some form of self-similarity, typically
subject to productivity constraints. As an illustration, Salomaa’s
axiomatisation of language equivalence of regular expressions incorporates the
following inference rule:
$\inferrule{g\equiv e\mathbin{;}g+f\qquad\epsilon\notin\llbracket
e\rrbracket}{g\equiv e^{*}\mathbin{;}f}$ (2)
The side condition on the right states that the loop body is _productive_ ,
that is, the deterministic automaton corresponding to the expression $e$
cannot reach acceptance without performing any transitions; equivalently, the
language $\llbracket e\rrbracket$ does not contain the empty word. One might
reasonably expect REG to require a similar rule in order to be complete,
especially since it should be able to prove language equivalence of regular
expressions (by proving that they are at distance zero from each other).
Furthermore, all axioms of Salomaa except Equation 2 are contained in REG as
rules for distance zero.
It turns out that in the presence of the infinitary continuity rule (Cont) of
quantitative deduction systems and the ($\lambda$-Pref) axiom of REG,
Salomaa’s inference rule (Equation 2) becomes derivable for distance zero.
First of all, one can show that ($\lambda$-Pref) can be generalised from
prepending single letters to prepending any regular expression satisfying the
side condition from Equation 2.
###### Lemma 4.6.
Let $e,f,g\in\mathsf{Exp}$, such that $\epsilon\notin\llbracket e\rrbracket$.
Then, $\\{f\equiv_{\varepsilon}g\\}\vdash
e\mathbin{;}f\equiv_{\varepsilon^{\prime}}e\mathbin{;}g$ is derivable using
the axioms of REG for all $\varepsilon^{\prime}\geq\lambda\cdot\varepsilon$.
With the above lemma in hand, one can inductively show that if
$g\equiv_{0}e\mathbin{;}g+f$ and $\epsilon\notin\llbracket e\rrbracket$, then
$g$ gets arbitrarily close to $e^{*}\mathbin{;}f$. Intuitively, the more we
unroll the loop in $e^{*}\mathbin{;}f$ using (Unroll) and the more we unroll
the definition of $g$, the closer the two expressions become.
###### Lemma 4.7.
Let $e,f,g\in\mathsf{Exp}$, such that $\epsilon\notin\llbracket e\rrbracket$
and let $n\in\mathbb{N}$. Then, $\\{g\equiv_{0}e\mathbin{;}g+f\\}\vdash
g\equiv_{\varepsilon}e^{*}\mathbin{;}f$ is derivable using the axioms of
$\mathsf{REG}$ for all $\varepsilon\geq\lambda^{n}$.
With the result above in hand, we can use the infinitary (Cont) rule, which
captures the limiting property of a decreasing chain of overapproximations of
the distance, to show the derivability of Salomaa’s inference rule.
###### Lemma 4.8.
Let $e,f,g\in\mathsf{Exp}$, such that $\epsilon\notin\llbracket e\rrbracket$.
Then, $\\{g\equiv_{0}e\mathbin{;}g+f\\}\vdash g\equiv_{0}e^{*}\mathbin{;}f$ is
derivable using the axioms of REG.
###### Proof 4.9.
To deduce that $\vdash g\equiv_{0}e^{*}\mathbin{;}f$ using (Cont) it suffices
to show that $\vdash g\equiv_{\varepsilon}e^{*}\mathbin{;}f$ for all
$\varepsilon>0$. To do so, pick an arbitrary $\varepsilon>0$ and let
$N=\lceil\log_{\lambda}\varepsilon\rceil$. Observe that
$\lambda^{N}=\lambda^{\lceil\log_{\lambda}\varepsilon\rceil}\leq\lambda^{\log_{\lambda}\varepsilon}=\varepsilon$.
Because of Lemma 4.7 we have that $\vdash
g\equiv_{\varepsilon}e^{*}\mathbin{;}f$, which completes the proof.
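As a quick numeric sanity check of this choice of $N$ (the same bound reappears in the proof of Lemma 4.2 in Appendix C), here is a tiny Python computation; the values $\lambda=\frac{1}{2}$ and $\varepsilon=0.2$ are our own illustrative choices:

```python
import math

lam, eps = 0.5, 0.2                 # illustrative values
N = math.ceil(math.log(eps, lam))   # ceil(log_0.5(0.2)) = ceil(2.32...) = 3
assert lam ** N <= eps              # 0.125 <= 0.2, as the proof requires
```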
## 5 Completeness
We now move on to the central result of this paper, which is the completeness
of REG with respect to the shortest-distinguishing-word metric on the
languages denoted by regular expressions. We use the strategy from the proof of
completeness of the quantitative axiomatisation of the probabilistic
bisimilarity distance [2]. It turns out that the results from [2] rely on
properties that are not unique to the Kantorovich/Wasserstein lifting and can
also be established for instances of the abstract coalgebraic framework [5].
The heart of our argument relies on the fact that the distance between
languages denoting regular expressions can be calculated in a simpler way than
applying the Knaster-Tarski fixpoint theorem while looking at the infinite-
state final automaton of all formal languages over some fixed alphabet. In
particular, regular expressions denote the behaviour of finite-state
deterministic automata. Since automata homomorphisms are non-expansive
mappings, the distance between languages $\llbracket e\rrbracket$ and
$\llbracket f\rrbracket$ of some arbitrary regular expressions
$e,f\in\mathsf{Exp}$ is the same as the distance between states in some DFA
whose languages corresponds to $\llbracket e\rrbracket$ and $\llbracket
f\rrbracket$. To be precise, we will look at the finite subautomaton
$\langle[e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}}\rangle_{{\mathcal{Q}}}$
of the ${\mathrel{\dot{\equiv}}}$ quotient of the Brzozowski automaton. The
reason we care about deterministic finite automata is that one can calculate
the behavioural distance between two of their states through iterative
approximation from above, and this calculation can also be derived
axiomatically using the (Cont) rule of quantitative deduction systems. We
start by showing how this simplification works and then move on to
establishing completeness.
### Behavioural distance on finite-state automata.
Consider a deterministic automaton $\mathcal{M}=(M,\langle
o_{M},t_{M}\rangle)$. Taking the least fixpoint of the monotone endomap
$\Phi_{\mathcal{M}}:D_{M}\to D_{M}$ on the complete lattice of
$1$-pseudometrics on the set $M$ yields $d_{\mathcal{M}}$, the behavioural
pseudometric on the states of the automaton $\mathcal{M}$. It is noteworthy
that $\Phi_{\mathcal{M}}$ exhibits two generic properties. Firstly,
$\Phi_{\mathcal{M}}$ behaves well within the Banach space structure given by
the supremum norm.
###### Lemma 5.1.
$\Phi_{\mathcal{M}}:D_{M}\to D_{M}$ is nonexpansive with respect to the
supremum norm. In other words, for all $d,d^{\prime}\in D_{M}$ we have that
$\|\Phi_{\mathcal{M}}(d^{\prime})-\Phi_{\mathcal{M}}(d)\|\leq\|d^{\prime}-d\|$.
###### Proof 5.2.
We can safely assume that $d\sqsubseteq d^{\prime}$, as the other case is
symmetric. It suffices to show that for all $m,m^{\prime}\in M$,
$\Phi_{\mathcal{M}}(d^{\prime})(m,m^{\prime})-\Phi_{\mathcal{M}}(d)(m,m^{\prime})\leq\|d^{\prime}-d\|$.
First, let’s consider the case when $o_{M}(m)\neq o_{M}(m^{\prime})$ and hence
$d_{2}(o_{M}(m),o_{M}(m^{\prime}))=1$. In such a scenario, both values equal $1$
and hence
$\Phi_{\mathcal{M}}(d^{\prime})(m,m^{\prime})-\Phi_{\mathcal{M}}(d)(m,m^{\prime})=0\leq\|d^{\prime}-d\|$.
From now on, we will assume that $o_{M}(m)=o_{M}(m^{\prime})$ and hence
$d_{2}(o_{M}(m),o_{M}(m^{\prime}))=0$. We have the following
$\displaystyle\Phi_{\mathcal{M}}(d^{\prime})(m,m^{\prime})-\Phi_{\mathcal{M}}(d)(m,m^{\prime})$
$\displaystyle=\lambda\cdot\max_{a\in
A}d^{\prime}(m_{a},m^{\prime}_{a})-\lambda\cdot\max_{a\in
A}d(m_{a},m^{\prime}_{a})$ $\displaystyle=\lambda\cdot\left(\max_{a\in
A}d^{\prime}(m_{a},m^{\prime}_{a})-\max_{a\in
A}d(m_{a},m^{\prime}_{a})\right)$
$\displaystyle\leq\lambda\cdot\left(\max_{a\in
A}\\{d^{\prime}(m_{a},m^{\prime}_{a})-d(m_{a},m^{\prime}_{a})\\}\right)$
$\displaystyle\leq\lambda\cdot\sup_{n,n^{\prime}\in
M}\\{d^{\prime}(n,n^{\prime})-d(n,n^{\prime})\\}$
$\displaystyle=\lambda\cdot\|d^{\prime}-d\|\leq\|d^{\prime}-d\|$
Secondly, it turns out that $\Phi_{\mathcal{M}}$ has only one fixpoint. This
means that if we want to calculate $d_{\mathcal{M}}$, it suffices to look at
any fixpoint of $\Phi_{\mathcal{M}}$. This will enable a simpler
characterisation than the one given by the Knaster-Tarski fixpoint theorem.
###### Lemma 5.3.
$\Phi_{\mathcal{M}}$ has a unique fixed point.
###### Proof 5.4.
Let $d,d^{\prime}\in D_{M}$ be two fixed points of $\Phi_{\mathcal{M}}$, that
is $\Phi_{\mathcal{M}}(d)=d$ and $\Phi_{\mathcal{M}}(d^{\prime})=d^{\prime}$.
We can safely assume that $d\sqsubseteq d^{\prime}$, as the other case is
symmetric. We wish to show that $d=d^{\prime}$ and to do so we will use proof
by contradiction.
Assume that $d\neq d^{\prime}$; hence there exist $m,m^{\prime}\in M$
such that $d(m,m^{\prime})<d^{\prime}(m,m^{\prime})$ and
$\|d^{\prime}-d\|=d^{\prime}(m,m^{\prime})-d(m,m^{\prime})\neq 0$. First,
consider the case when $o_{M}(m)\neq o_{M}(m^{\prime})$. In such a case, both
$d(m,m^{\prime})$ and $d^{\prime}(m,m^{\prime})$ are equal to $1$ (as $d$ and
$d^{\prime}$ are fixed points) and hence
$\|d^{\prime}-d\|=0$, which is a contradiction. From now on, we can safely
assume that $o_{M}(m)=o_{M}(m^{\prime})$. Through a line of reasoning
identical to the proof of Lemma 5.1, we can show that
$\|\Phi_{\mathcal{M}}(d^{\prime})-\Phi_{\mathcal{M}}(d)\|\leq\lambda\cdot\|d^{\prime}-d\|$.
Since both $d$ and $d^{\prime}$ are fixed points, this would mean that
$\|d^{\prime}-d\|\leq\lambda\cdot\|d^{\prime}-d\|$. Since
$\lambda\in\left]0,1\right[$, this would imply that $\|d^{\prime}-d\|=0$,
again a contradiction.
In particular, we will rely on the characterisation given by the Kleene
fixpoint theorem [30, Theorem 2.8.5], which allows one to obtain the greatest
fixpoint of an endofunction on a complete lattice as the infimum of the
decreasing sequence of ever finer approximations obtained by repeatedly
applying the function to the top element of the lattice.
###### Theorem 5.5 (Kleene fixpoint theorem).
Let $(X,\sqsubseteq)$ be a complete lattice with a top element $\top$ and
$f:X\to X$ an endofunction that is $\omega$-cocontinuous, that is, for
any decreasing chain $\\{x_{i}\\}_{i\in\mathbb{N}}$ it holds that
$\inf_{i\in\mathbb{N}}\\{f(x_{i})\\}=f\left(\inf_{i\in\mathbb{N}}x_{i}\right)$.
Then, $f$ possesses a greatest fixpoint, given by
$\operatorname{gfp}(f)=\inf_{i\in\mathbb{N}}\\{f^{(i)}(\top)\\}$, where
$f^{(n)}$ denotes the $n$-fold self-composition of $f$, given inductively by
$f^{(0)}(x)=x$ and $f^{(n+1)}(x)=f^{(n)}(f(x))$ for all $x\in X$.
The theorem above requires the endomap to be $\omega$-cocontinuous. Luckily,
this is the case for $\Phi_{\mathcal{M}}$ if we restrict our attention to DFAs.
To show it, we directly follow the line of reasoning from [2, Lemma 5.6],
which generalises a similar line of reasoning for $\omega$-continuity from [36,
Theorem 1]. First, using Lemma 2.6, we show that decreasing chains of
pseudometrics over a finite set converge to their infimum. This result is a
minor re-adaptation of [36, Theorem 1], implicitly used in [2, Lemma 5.6].
###### Lemma 5.6.
Let $\\{d_{i}\\}_{i\in\mathbb{N}}$ be an infinite descending chain in the
lattice $(D_{X},\sqsubseteq)$, where $X$ is a finite set. The sequence
$\\{d_{i}\\}_{i\in\mathbb{N}}$ converges (in the sense of convergence in the
Banach space) to $d(x,y)=\inf_{i\in\mathbb{N}}d_{i}(x,y)$.
###### Proof 5.7.
Let $\varepsilon>0$ and let $x,y\in X$. Since
$d(x,y)=\inf_{i\in\mathbb{N}}d_{i}(x,y)$ there exists an index
$m_{x,y}\in\mathbb{N}$ such that for all $n\geq m_{x,y}$,
$|d_{n}(x,y)-d(x,y)|<\varepsilon$. Now, let $N=\max\\{m_{x,y}\mid x,y\in
X\\}$. This is well-defined because $X$ is finite. Therefore, for all $n\geq
N$ and $x,y\in X$, $|d_{n}(x,y)-d(x,y)|<\varepsilon$ and hence
$\|d_{n}-d\|<\varepsilon$.
We can now use the above to show the desired property, by re-adapting [36,
Theorem 1].
###### Lemma 5.8.
If $\mathcal{M}$ is a deterministic finite automaton, then
$\Phi_{\mathcal{M}}$ is $\omega$-cocontinuous.
###### Proof 5.9.
Let $\\{d_{i}\\}_{i\in\mathbb{N}}$ be a decreasing chain in
$(D_{M},\sqsubseteq)$; since $\mathcal{M}$ is a DFA, the state space $M$ is
finite. By Lemma 5.6, the chain $\\{d_{i}\\}_{i\in\mathbb{N}}$ converges to
$\inf_{i\in\mathbb{N}}d_{i}$. Since $\Phi_{\mathcal{M}}$ is nonexpansive
(Lemma 5.1), it is also continuous (in the sense of Banach space
continuity) and therefore $\\{\Phi_{\mathcal{M}}(d_{i})\\}_{i\in\mathbb{N}}$
converges to $\Phi_{\mathcal{M}}\left(\inf_{i\in\mathbb{N}}d_{i}\right)$.
Recall that $\Phi_{\mathcal{M}}$ is monotone, which makes
$\\{\Phi_{\mathcal{M}}(d_{i})\\}_{i\in\mathbb{N}}$ into a chain, which by
Lemma 2.6 and Lemma 5.6 converges to
$\inf_{i\in\mathbb{N}}\\{\Phi_{\mathcal{M}}(d_{i})\\}$. Since limit points are
unique,
$\inf_{i\in\mathbb{N}}\\{\Phi_{\mathcal{M}}(d_{i})\\}=\Phi_{\mathcal{M}}\left(\inf_{i\in\mathbb{N}}d_{i}\right)$.
We can combine the preceding results and provide a straightforward
characterisation of the distance between languages represented by arbitrary
regular expressions, denoted as $e,f\in\mathsf{Exp}$. Utilising a simple
argument based on Proposition 3.1, which asserts that automata homomorphisms
are nonexpansive, one can demonstrate that the distance between $\llbracket
e\rrbracket$ and $\llbracket f\rrbracket$ in the final automaton is equivalent
to the distance between $[e]_{{\mathrel{\dot{\equiv}}}}$ and
$[f]_{{\mathrel{\dot{\equiv}}}}$ in
$\langle[e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}}\rangle_{{\mathcal{Q}}}$.
This is the least subautomaton of $\mathcal{Q}$ that contains the derivatives
(modulo ${\mathrel{\dot{\equiv}}}$) reachable from
$[e]_{{\mathrel{\dot{\equiv}}}}$ and $[f]_{{\mathrel{\dot{\equiv}}}}$.
Importantly, this automaton is finite (Lemma 2.3), allowing us to apply the
Kleene fixpoint theorem to calculate the distance.
Let ${\Psi}_{e,f}^{(0)}$ denote the discrete metric on the set
$\langle[e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}}\rangle_{{\mathcal{Q}}}$
(the top element of the lattice of pseudometrics over that set). Define
${\Psi}_{e,f}^{(n+1)}=\Phi_{\langle[e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}}\rangle_{{\mathcal{Q}}}}({\Psi}_{e,f}^{(n)})$.
Additionally, leveraging the fact that infima of decreasing chains are
calculated pointwise (Lemma 2.6), we can conclude with the following:
###### Lemma 5.10.
For all $e,f\in\mathsf{Exp}$, the underlying pseudometric of the quantitative
algebra $\mathcal{B}$ can be given by
$d^{\mathcal{B}}(e,f)=\inf_{i\in\mathbb{N}}\left\\{{\Psi}_{e,f}^{(i)}\left([e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}}\right)\right\\}$
In simpler terms, we have demonstrated that the behavioural distance between a
pair of arbitrary regular expressions can be calculated as the infimum of a
decreasing sequence of approximations of the actual distance from above.
Alternatively, one could calculate the same distance as the supremum of
increasing approximations from below, using the Kleene fixpoint theorem for
the least fixpoint. We chose the former approach because our proof of
completeness relies on the (Cont) rule of quantitative deduction systems. This
rule essentially states that to prove that two terms are at a specific
distance, it suffices to prove it for all approximations of that distance from
above. This allows us to replicate the fixpoint calculation through axiomatic
reasoning.
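Before moving on, here is a small Python sketch of this iteration (our own illustration, not part of the formal development), run on a hand-coded four-state automaton corresponding to the derivatives of $a^{*}$ and $a+1$ from Example 4.5, with $\lambda=\frac{1}{2}$. Starting from the discrete metric, the approximants ${\Psi}^{(i)}$ stabilise at $\frac{1}{4}$, the distance derived earlier.

```python
# Sketch: the Kleene iteration from Lemma 5.10 on a finite DFA.
# States stand for the derivatives of a* and a+1 (hand-coded here);
# 'accept' is the output map and 'step' the a-transition, lam = 1/2.
lam = 0.5
accept = {'a*': True, 'a+1': True, '1': True, '0': False}
step = {'a*': 'a*', 'a+1': '1', '1': '0', '0': '0'}
states = list(accept)

def phi(d):
    """One application of the monotone map Phi to a pseudometric d."""
    return {(m, n): max(float(accept[m] != accept[n]),
                        lam * d[step[m], step[n]])
            for m in states for n in states}

# Top of the lattice: the discrete 1-pseudometric.
d = {(m, n): float(m != n) for m in states for n in states}
for i in range(5):
    print(i, d['a*', 'a+1'])
    d = phi(d)
# Prints 1.0, 0.5, 0.25, 0.25, 0.25: the approximants converge to 1/4.
```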
### Completeness result.
We start by recalling that regular expressions satisfy a certain decomposition
property: each expression can be reconstructed from its small-step semantics,
up to $\equiv_{0}$. This property, often referred to as the fundamental
theorem of Kleene Algebra/regular expressions (in analogy with the fundamental
theorem of calculus, and following the terminology of Rutten [27] and Silva
[32]), is useful in further steps of the proof of completeness.
###### Theorem 5.11.
(Fundamental Theorem) For any $e\in\mathsf{Exp}$, $\vdash
e\equiv_{0}\sum_{a\in A}a\mathbin{;}(e)_{a}+o_{\mathcal{R}}(e)$ is
derivable using the axioms of REG.
The theorem above makes use of the $n$-ary generalised sum operator, which is
well defined because of the (SL1-SL4) axioms of REG. Suppose now that we are
interested in the distance between some expressions $e,f\in\mathsf{Exp}$. As
mentioned before, we will rely on
$\langle[e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}}\rangle_{{\mathcal{Q}}}$,
the least subautomaton of the ${\mathrel{\dot{\equiv}}}$ quotient of the
Brzozowski automaton containing the states reachable from
$[e]_{{\mathrel{\dot{\equiv}}}}$ and $[f]_{{\mathrel{\dot{\equiv}}}}$. Recall
that by Lemma 2.3 its state space is finite. It turns out that the
approximations from above (from Lemma 5.10) to the distance between any pair
of states in that automaton can be derived by means of axiomatic reasoning.
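Since both the Fundamental Theorem and the proof below manipulate expressions through their outputs $o_{\mathcal{R}}(e)$ and derivatives $(e)_{a}$, the following Python sketch may help fix intuitions. It implements the standard Brzozowski construction [10]; the tuple encoding and helper names are our own illustration.

```python
# Sketch: Brzozowski outputs and derivatives for expressions built
# from 0, 1, letters, +, ; and *, encoded as nested tuples.
ZERO, ONE = ('0',), ('1',)

def o(e):
    """Output o(e): 1 if the empty word is in [[e]], else 0."""
    t = e[0]
    if t == '0': return 0
    if t == '1': return 1
    if t == 'lit': return 0
    if t == '+': return max(o(e[1]), o(e[2]))
    if t == ';': return o(e[1]) * o(e[2])
    if t == '*': return 1

def d(a, e):
    """The a-derivative (e)_a."""
    t = e[0]
    if t in ('0', '1'): return ZERO
    if t == 'lit': return ONE if e[1] == a else ZERO
    if t == '+': return ('+', d(a, e[1]), d(a, e[2]))
    if t == ';':
        left = (';', d(a, e[1]), e[2])
        return ('+', left, d(a, e[2])) if o(e[1]) else left
    if t == '*': return (';', d(a, e[1]), e)

def member(w, e):
    """w is in [[e]] iff the output of the word derivative is 1."""
    for a in w:
        e = d(a, e)
    return bool(o(e))

a_star = ('*', ('lit', 'a'))
a_plus_one = ('+', ('lit', 'a'), ONE)
print(member('aa', a_star), member('aa', a_plus_one))  # True False
```

The word $aa$ is accepted from $a^{*}$ but not from $a+1$, matching the shortest distinguishing word found in Example 4.5.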
###### Lemma 5.12.
Let $e,f\in\mathsf{Exp}$ be arbitrary regular expressions and let
$[g]_{{\mathrel{\dot{\equiv}}}},[h]_{{\mathrel{\dot{\equiv}}}}\in\langle[e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}}\rangle_{{\mathcal{Q}}}$.
For all $i\in\mathbb{N}$, and
$\varepsilon\geq{\Psi}_{e,f}^{(i)}\left([g]_{{\mathrel{\dot{\equiv}}}},[h]_{{\mathrel{\dot{\equiv}}}}\right)$,
one can derive $\vdash g\equiv_{\varepsilon}h$ using the axioms of
$\mathsf{REG}$.
###### Proof 5.13.
We proceed by induction on $i$. For the base case, observe that
${\Psi}_{e,f}^{(0)}$ is the discrete $1$-pseudometric on the set
$\langle[e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}}\rangle_{{\mathcal{Q}}}$,
so that
${\Psi}_{e,f}^{(0)}([g]_{{\mathrel{\dot{\equiv}}}},[h]_{{\mathrel{\dot{\equiv}}}})=0$
if $g{\mathrel{\dot{\equiv}}}h$, and
${\Psi}_{e,f}^{(0)}([g]_{{\mathrel{\dot{\equiv}}}},[h]_{{\mathrel{\dot{\equiv}}}})=1$
otherwise. In the first case, we immediately have that $g\equiv_{0}h$, because
${\mathrel{\dot{\equiv}}}$ is contained in the distance-zero axioms of REG. In
the latter case, we can just use $(\textsf{Top})$ to show that $g\equiv_{1}h$.
Then, in both cases, we can apply $(\textsf{Max})$ to obtain $\vdash
g\equiv_{\varepsilon}h$, since
$\varepsilon\geq{\Psi}_{e,f}^{(0)}([g]_{{\mathrel{\dot{\equiv}}}},[h]_{{\mathrel{\dot{\equiv}}}})$.
For the induction step, let $i=j+1$ and derive the following:
$\displaystyle\varepsilon\geq{\Psi}_{e,f}^{(j+1)}([g]_{{\mathrel{\dot{\equiv}}}},[h]_{{\mathrel{\dot{\equiv}}}})\iff\varepsilon\geq\Phi_{\langle[e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}}\rangle_{{\mathcal{Q}}}}\left({\Psi}_{e,f}^{(j)}\right)\left([g]_{{\mathrel{\dot{\equiv}}}},[h]_{{\mathrel{\dot{\equiv}}}}\right)$
(Def. of ${\Psi}_{e,f}^{(j+1)}$) $\displaystyle\iff$
$\displaystyle\varepsilon\geq\max\left\\{d_{2}(o_{\mathcal{Q}}({[g]}_{{\mathrel{\dot{\equiv}}}}),o_{\mathcal{Q}}([h]_{{\mathrel{\dot{\equiv}}}})),\lambda\cdot\max_{a\in
A}\left\\{{\Psi}_{e,f}^{(j)}\left({[g]_{{\mathrel{\dot{\equiv}}}}}_{a},{[h]_{{\mathrel{\dot{\equiv}}}}}_{a}\right)\right\\}\right\\}$
(Def. of $\Phi$) $\displaystyle\iff$
$\displaystyle\varepsilon\geq\max\left\\{d_{2}\left(o_{\mathcal{R}}(g),o_{\mathcal{R}}(h)\right),\lambda\cdot\max_{a\in
A}\left\\{{\Psi}_{e,f}^{(j)}\left({[{(g)}_{a}]_{{\mathrel{\dot{\equiv}}}}},{[{(h)}_{a}]_{{\mathrel{\dot{\equiv}}}}}\right)\right\\}\right\\}$
(Def. of $\mathcal{Q}$) $\displaystyle\iff$ $\displaystyle{\varepsilon\geq
d_{2}(o_{\mathcal{R}}(g),o_{\mathcal{R}}(h))}\text{ and for all $a\in
A$},\
\varepsilon\cdot\lambda^{-1}\geq{\Psi}_{e,f}^{(j)}\left({[{(g)}_{a}]_{{\mathrel{\dot{\equiv}}}}},{[{(h)}_{a}]_{{\mathrel{\dot{\equiv}}}}}\right)$
Firstly, since $d_{2}$ is the discrete $1$-pseudometric on the set
$\\{0,1\\}$, we can use $(\textsf{Refl})$ or $(\textsf{Top})$ depending on
whether $o_{\mathcal{R}}(g)=o_{\mathcal{R}}(h)$ and then apply
$(\textsf{Max})$ to derive $\vdash
o_{\mathcal{R}}(g)\equiv_{\varepsilon}o_{\mathcal{R}}(h)$.
Let $a\in A$. We will show that $\vdash
a\mathbin{;}(g)_{a}\equiv_{\varepsilon}a\mathbin{;}(h)_{a}$. Since
$\varepsilon\cdot\lambda^{-1}$ is not guaranteed to be rational, we cannot
immediately apply the induction hypothesis. Instead, we rely on the (Cont) rule.
First, pick an arbitrary rational $\varepsilon^{\prime}$ strictly greater than
$\varepsilon$ and fix $\\{r_{n}\\}_{n\in\mathbb{N}}$ to be any decreasing
sequence of rationals that converges to $\lambda^{-1}$. Let $r_{N}$ be an
element of that sequence such that
$\varepsilon^{\prime}\geq\lambda\cdot\varepsilon\cdot r_{N}$. It is always
possible to pick such an element because $\\{\lambda\cdot
r_{n}\\}_{n\in\mathbb{N}}$ is a decreasing sequence that converges to $1$ and
$\varepsilon^{\prime}>\varepsilon$. Since $\varepsilon\cdot
r_{N}\geq\varepsilon\cdot\lambda^{-1}$ and $\varepsilon\cdot
r_{N}\in\mathbb{Q}^{+}$, we can use the induction hypothesis and derive
$\vdash(g)_{a}\equiv_{\varepsilon\cdot r_{N}}(h)_{a}$. Then, by the
$(\textsf{$\lambda$-Pref})$ axiom, we have that $\vdash
a\mathbin{;}(g)_{a}\equiv_{\varepsilon^{\prime}}a\mathbin{;}(h)_{a}$. Since we
have shown this for an arbitrary $\varepsilon^{\prime}>\varepsilon$, by the (Cont) rule
we have that $\vdash
a\mathbin{;}(g)_{a}\equiv_{\varepsilon}a\mathbin{;}(h)_{a}$. Using (SL5), we
can combine all subexpressions involving the output and transition derivatives
into the following:
$\vdash\sum_{a\in
A}a\mathbin{;}(g)_{a}+o_{\mathcal{R}}(g)\equiv_{\varepsilon}\sum_{a\in
A}a\mathbin{;}(h)_{a}+o_{\mathcal{R}}(h)$
Since both sides are the normal forms of $g$ and $h$ given by Theorem
5.11, we can apply (Triang) on both sides and obtain $\vdash
g\equiv_{\varepsilon}h$, thus completing the proof.
At this point, we have done all the hard work, and establishing completeness
involves a straightforward argument that utilises the (Cont) rule and the
lemma above.
###### Theorem 5.14 (Completeness).
For any $e,f\in\mathsf{Exp}$ and $\varepsilon\in\mathbb{Q}^{+}$, if
$\models_{\mathcal{B}}e\equiv_{\varepsilon}f$, then $\vdash
e\equiv_{\varepsilon}f\in\mathsf{REG}$.
###### Proof 5.15.
Assume that $\models_{\mathcal{B}}e\equiv_{\varepsilon}f$, which by the
definition of $\models_{\mathcal{B}}$ is equivalent to
$d^{\mathcal{B}}(e,f)\leq\varepsilon$. In order to use the (Cont) rule to derive
$\vdash e\equiv_{\varepsilon}f$, we need to be able to show $\vdash
e\equiv_{\varepsilon^{\prime}}f$ for all $\varepsilon^{\prime}>\varepsilon$.
Because of the iterative characterisation of $d^{\mathcal{B}}$ from Lemma 5.10, we
have that $\inf_{i\in
\mathbb{N}}\\{{\Psi}_{e,f}^{(i)}([e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}})\\}<\varepsilon^{\prime}$.
Since $\varepsilon^{\prime}$ is strictly above the infimum of the descending
chain of approximants, there exists a point $i\in\mathbb{N}$ such that
$\varepsilon^{\prime}>{\Psi}_{e,f}^{(i)}\left([e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}}\right)$.
We can show this by contradiction.
Assume that for all $i\in\mathbb{N}$,
$\varepsilon^{\prime}\leq{\Psi}_{e,f}^{(i)}\left([e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}}\right)$.
This would make $\varepsilon^{\prime}$ a lower bound of the chain
$\left\\{{\Psi}_{e,f}^{(i)}([e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}})\right\\}_{i\in\mathbb{N}}$,
and in such a case $\varepsilon^{\prime}$ would be less than or equal to the
infimum of that chain, which by assumption is less than or equal to
$\varepsilon$. By transitivity, we would obtain
$\varepsilon^{\prime}\leq\varepsilon$, which contradicts
$\varepsilon^{\prime}>\varepsilon$.
Using the fact shown above, we can use Lemma 5.12 to obtain $\vdash
e\equiv_{\varepsilon^{\prime}}f\in\mathsf{REG}$ for any
$\varepsilon^{\prime}>\varepsilon$, which completes the proof.
In simpler terms, the (Cont) rule enables us to demonstrate that two terms are
at a specific distance by examining all strict overapproximations of that
distance. Due to the iterative nature outlined in Lemma 5.10, this implies
that we only need to consider finite approximants used in the Kleene fixpoint
theorem. Each of those finite approximants can be axiomatically derived using
Lemma 5.12.
## 6 Discussion
We have presented a sound and complete axiomatisation of the shortest-
distinguishing-word distance between languages representing regular
expressions, through a quantitative analogue of equational logic [22]. Before
our paper, the only axiomatised behavioural distances were those of
probabilistic/weighted transition systems, obtained through (variants of) the
Kantorovich/Wasserstein lifting [21, 12, 4, 2, 1], whereas we looked at a
behavioural distance obtained through a more general coalgebraic framework
[5]. Outside of the coalgebra community, the shortest-distinguishing-word
distance and its variants also appear in the model checking [20] and automata
learning [14] literature.
We have followed the strategy for proving completeness from [2]. The
interesting insight about that strategy is that it relies on properties that
are not exclusive to distances obtained through the Kantorovich/Wasserstein
lifting and can be established for notions of behavioural distance for other
kinds of transition systems stemming from the coalgebraic framework. In
particular, one needs to show that the monotone map on the lattice of
pseudometrics used in defining the distance of finite-state systems is
nonexpansive with respect to the sup norm (and hence $\omega$-cocontinuous)
and has a unique fixpoint, thus allowing one to characterise the behavioural
distance as the greatest fixpoint obtained through the Kleene fixpoint
theorem. This point of view allows one to reconstruct the fixpoint calculation
in terms of axiomatic manipulation involving the (Cont) rule, eventually
leading to completeness.
We have additionally observed that in the presence of the infinitary (Cont)
rule and the ($\lambda$-Pref) axiom, there is no need for a fixpoint rule,
which is commonplace in axiomatisations of regular expressions but also
in other work on distances. In particular, the previous work on axiomatising a
discounted probabilistic bisimilarity distance [2] includes both
($\lambda$-Pref) and a fixpoint introduction rule, but its proof of
completeness [2, Theorem 6.4] does not involve the fixpoint introduction rule
at any point. We are highly confident that in the case of that axiomatisation,
the fixpoint introduction rule could be derived from the other axioms, in a
similar fashion to the way we derived Salomaa’s rule for introducing the Kleene
star [29]. Additionally, we are interested in to what extent this argument
relates to the recent study of fixpoints in quantitative equational theories [23].
Moreover, the axiomatisations from [1, 2] rely on a slight modification of
quantitative equational theories, which drops the requirement that all
operations from the signature be nonexpansive. This is dictated by the fact
that the interpretation of $\mu$-recursion in Stark and Smolka’s probabilistic
process algebra [34] can increase behavioural distances in the case of
unguarded recursion, while in regular expressions recursive behaviour is
introduced through the Kleene star, whose interpretation is nonexpansive with
respect to the shortest-distinguishing-word distance. This allowed us to fit
directly into the original framework of quantitative equational theories. The
earlier work [4] focusing on Markov processes [8] also relies on quantitative
equational theories, but its syntax does not involve any recursive primitives.
Instead, the recursive behaviour is introduced through Cauchy completion of a
pseudometric induced by the axioms. The earliest works on axiomatising
behavioural distances of weighted [21] and probabilistic [12] transition
systems, which predate the introduction of quantitative equational theories,
rely on ad-hoc inference systems that cannot be easily generalised.
The pioneering works [13, 38, 36, 39, 37, 3] laid the foundations for behavioural
(pseudo)metrics of various flavours of probabilistic transition systems. The
coalgebraic point of view [5] made it possible to generalise these ideas to a
wide range of transition systems, by moving from the Kantorovich/Wasserstein
lifting to the abstract setting of lifting endofunctors from the category of
sets to the category of pseudometric spaces. Building upon this theory,
further lines of work were dedicated to asymmetric distances (called
hemimetrics) through the theory of quasi-lax liftings [43], fuzzy analogues of
Hennessy-Milner logic characterising behavioural distance [17, 6], fibrational
generalisations involving quantale-enriched categories [7], up-to techniques
allowing for efficient approximation of behavioural distances [9], and
quantitative analogues of van Glabbeek’s linear-time branching-time spectrum [15].
In this paper, we have focused on the simplest and most intuitive
instantiation of the coalgebraic framework, namely deterministic
automata, but the natural next step would be to generalise our results to a
wider class of transition systems. A good starting point could be to consider
coalgebras for _polynomial_ endofunctors, in the fashion of the framework of
_Kleene Coalgebra_ [32]. Alternatively, it would be interesting to look at
recent work on a family of process algebras parametric in an equational theory
representing the branching constructs [31] and study its generalisations to
quantitative equational theories. A related and interesting avenue for future
work is the equational axiomatisation of behavioural equivalence of Guarded
Kleene Algebra with Tests (GKAT) [33, 31] and its probabilistic extension
(ProbGKAT) [28], whose completeness results rely on a powerful
uniqueness-of-solutions axiom (UA). The soundness of UA in both cases is shown
through an involved argument relying on equipping the transition systems giving
the operational semantics with a form of behavioural distance and showing that
recursive specifications describing finite-state systems correspond to certain
contractive mappings. It may be more sensible, particularly for ProbGKAT, to
consider quantitative axiomatisations in the first place and give the proofs
of completeness through the pattern explored in this paper.
## References
* [1] Giorgio Bacci, Giovanni Bacci, Kim G. Larsen, and Radu Mardare. Complete axiomatization for the total variation distance of markov chains. In Sam Staton, editor, Proceedings of the Thirty-Fourth Conference on the Mathematical Foundations of Programming Semantics, MFPS 2018, Dalhousie University, Halifax, Canada, June 6-9, 2018, volume 341 of Electronic Notes in Theoretical Computer Science, pages 27–39. Elsevier, 2018. URL: https://doi.org/10.1016/j.entcs.2018.03.014, doi:10.1016/J.ENTCS.2018.03.014.
* [2] Giorgio Bacci, Giovanni Bacci, Kim G. Larsen, and Radu Mardare. A complete quantitative deduction system for the bisimilarity distance on markov chains. Log. Methods Comput. Sci., 14(4), 2018. doi:10.23638/LMCS-14(4:15)2018.
* [3] Giorgio Bacci, Giovanni Bacci, Kim G. Larsen, and Radu Mardare. Converging from branching to linear metrics on markov chains. Math. Struct. Comput. Sci., 29(1):3–37, 2019. doi:10.1017/S0960129517000160.
* [4] Giorgio Bacci, Radu Mardare, Prakash Panangaden, and Gordon D. Plotkin. An algebraic theory of markov processes. In Anuj Dawar and Erich Grädel, editors, Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2018, Oxford, UK, July 09-12, 2018, pages 679–688. ACM, 2018. doi:10.1145/3209108.3209177.
* [5] Paolo Baldan, Filippo Bonchi, Henning Kerstan, and Barbara König. Coalgebraic behavioral metrics. Log. Methods Comput. Sci., 14(3), 2018. doi:10.23638/LMCS-14(3:20)2018.
* [6] Harsh Beohar, Sebastian Gurke, Barbara König, and Karla Messing. Hennessy-milner theorems via galois connections. In Bartek Klin and Elaine Pimentel, editors, 31st EACSL Annual Conference on Computer Science Logic, CSL 2023, February 13-16, 2023, Warsaw, Poland, volume 252 of LIPIcs, pages 12:1–12:18. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2023. URL: https://doi.org/10.4230/LIPIcs.CSL.2023.12, doi:10.4230/LIPICS.CSL.2023.12.
* [7] Harsh Beohar, Sebastian Gurke, Barbara König, Karla Messing, Jonas Forster, Lutz Schröder, and Paul Wild. Expressive quantale-valued logics for coalgebras: an adjunction-based approach. CoRR, abs/2310.05711, 2023. URL: https://doi.org/10.48550/arXiv.2310.05711, arXiv:2310.05711, doi:10.48550/ARXIV.2310.05711.
* [8] Richard Blute, Josée Desharnais, Abbas Edalat, and Prakash Panangaden. Bisimulation for labelled markov processes. In Proceedings, 12th Annual IEEE Symposium on Logic in Computer Science, Warsaw, Poland, June 29 - July 2, 1997, pages 149–158. IEEE Computer Society, 1997. doi:10.1109/LICS.1997.614943.
* [9] Filippo Bonchi, Barbara König, and Daniela Petrisan. Up-to techniques for behavioural metrics via fibrations. In Sven Schewe and Lijun Zhang, editors, 29th International Conference on Concurrency Theory, CONCUR 2018, September 4-7, 2018, Beijing, China, volume 118 of LIPIcs, pages 17:1–17:17. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2018. URL: https://doi.org/10.4230/LIPIcs.CONCUR.2018.17, doi:10.4230/LIPICS.CONCUR.2018.17.
* [10] Janusz A. Brzozowski. Derivatives of regular expressions. J. ACM, 11(4):481–494, 1964. doi:10.1145/321239.321249.
* [11] Stanley Burris and H P Sankappanavar. A Course in Universal Algebra. Lecture Notes in Statistics. Springer, New York, NY, November 1981.
* [12] Pedro R. D’Argenio, Daniel Gebler, and Matias David Lee. Axiomatizing bisimulation equivalences and metrics from probabilistic SOS rules. In Anca Muscholl, editor, Foundations of Software Science and Computation Structures - 17th International Conference, FOSSACS 2014, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2014, Grenoble, France, April 5-13, 2014, Proceedings, volume 8412 of Lecture Notes in Computer Science, pages 289–303. Springer, 2014. doi:10.1007/978-3-642-54830-7\\_19.
* [13] Josée Desharnais, Vineet Gupta, Radha Jagadeesan, and Prakash Panangaden. Metrics for labelled markov processes. Theor. Comput. Sci., 318(3):323–354, 2004. URL: https://doi.org/10.1016/j.tcs.2003.09.013, doi:10.1016/J.TCS.2003.09.013.
* [14] Tiago Ferreira, Gerco van Heerdt, and Alexandra Silva. Tree-based adaptive model learning. In Nils Jansen, Mariëlle Stoelinga, and Petra van den Bos, editors, A Journey from Process Algebra via Timed Automata to Model Learning - Essays Dedicated to Frits Vaandrager on the Occasion of His 60th Birthday, volume 13560 of Lecture Notes in Computer Science, pages 164–179. Springer, 2022. doi:10.1007/978-3-031-15629-8\\_10.
* [15] Jonas Forster, Lutz Schröder, and Paul Wild. Quantitative graded semantics and spectra of behavioural metrics. CoRR, abs/2306.01487, 2023. URL: https://doi.org/10.48550/arXiv.2306.01487, arXiv:2306.01487, doi:10.48550/ARXIV.2306.01487.
* [16] Alessandro Giacalone, Chi-Chang Jou, and Scott A. Smolka. Algebraic reasoning for probabilistic concurrent systems. In Manfred Broy and Cliff B. Jones, editors, Programming concepts and methods: Proceedings of the IFIP Working Group 2.2, 2.3 Working Conference on Programming Concepts and Methods, Sea of Galilee, Israel, 2-5 April, 1990, pages 443–458. North-Holland, 1990.
* [17] Sergey Goncharov, Dirk Hofmann, Pedro Nora, Lutz Schröder, and Paul Wild. Kantorovich functors and characteristic logics for behavioural distances. In Orna Kupferman and Pawel Sobocinski, editors, Foundations of Software Science and Computation Structures - 26th International Conference, FoSSaCS 2023, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2023, Paris, France, April 22-27, 2023, Proceedings, volume 13992 of Lecture Notes in Computer Science, pages 46–67. Springer, 2023. doi:10.1007/978-3-031-30829-1\\_3.
* [18] John E. Hopcroft and Richard M. Karp. A linear algorithm for testing equivalence of finite automata. 1971. URL: https://api.semanticscholar.org/CorpusID:120207847.
* [19] Dexter Kozen. A completeness theorem for kleene algebras and the algebra of regular events. Inf. Comput., 110(2):366–390, 1994. URL: https://doi.org/10.1006/inco.1994.1037, doi:10.1006/INCO.1994.1037.
* [20] Marta Z. Kwiatkowska. A metric for traces. Inf. Process. Lett., 35(3):129–135, 1990. doi:10.1016/0020-0190(90)90061-2.
* [21] Kim G. Larsen, Uli Fahrenberg, and Claus R. Thrane. Metrics for weighted transition systems: Axiomatization and complexity. Theor. Comput. Sci., 412(28):3358–3369, 2011. URL: https://doi.org/10.1016/j.tcs.2011.04.003, doi:10.1016/J.TCS.2011.04.003.
* [22] Radu Mardare, Prakash Panangaden, and Gordon D. Plotkin. Quantitative algebraic reasoning. In Martin Grohe, Eric Koskinen, and Natarajan Shankar, editors, Proceedings of the 31st Annual ACM/IEEE Symposium on Logic in Computer Science, LICS ’16, New York, NY, USA, July 5-8, 2016, pages 700–709. ACM, 2016. doi:10.1145/2933575.2934518.
* [23] Radu Mardare, Prakash Panangaden, and Gordon D. Plotkin. Fixed-points for quantitative equational logics. In 36th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2021, Rome, Italy, June 29 - July 2, 2021, pages 1–13. IEEE, 2021. doi:10.1109/LICS52264.2021.9470662.
* [24] Robin Milner. A complete inference system for a class of regular behaviours. J. Comput. Syst. Sci., 28(3):439–466, 1984. doi:10.1016/0022-0000(84)90023-0.
* [25] David Michael Ritchie Park. Concurrency and automata on infinite sequences. In Theoretical Computer Science, 1981. URL: https://api.semanticscholar.org/CorpusID:206841958.
* [26] Walter Rudin. Functional Analysis. International Series in Pure & Applied Mathematics. McGraw Hill Higher Education, Maidenhead, England, 2 edition, October 1990.
* [27] Jan J. M. M. Rutten. Universal coalgebra: a theory of systems. Theor. Comput. Sci., 249(1):3–80, 2000. doi:10.1016/S0304-3975(00)00056-6.
* [28] Wojciech Różowski, Tobias Kappé, Dexter Kozen, Todd Schmid, and Alexandra Silva. Probabilistic guarded KAT modulo bisimilarity: Completeness and complexity. In Kousha Etessami, Uriel Feige, and Gabriele Puppis, editors, 50th International Colloquium on Automata, Languages, and Programming, ICALP 2023, July 10-14, 2023, Paderborn, Germany, volume 261 of LIPIcs, pages 136:1–136:20. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2023. URL: https://doi.org/10.4230/LIPIcs.ICALP.2023.136, doi:10.4230/LIPICS.ICALP.2023.136.
* [29] Arto Salomaa. Two complete axiom systems for the algebra of regular events. J. ACM, 13(1):158–169, 1966. doi:10.1145/321312.321326.
* [30] Davide Sangiorgi. Coinduction and the duality with induction, page 28–88. Cambridge University Press, 2011.
* [31] Todd Schmid, Wojciech Rozowski, Alexandra Silva, and Jurriaan Rot. Processes parametrised by an algebraic theory. In Mikolaj Bojanczyk, Emanuela Merelli, and David P. Woodruff, editors, 49th International Colloquium on Automata, Languages, and Programming, ICALP 2022, July 4-8, 2022, Paris, France, volume 229 of LIPIcs, pages 132:1–132:20. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022. URL: https://doi.org/10.4230/LIPIcs.ICALP.2022.132, doi:10.4230/LIPICS.ICALP.2022.132.
* [32] A.M Silva. Kleene coalgebra. PhD thesis, Radboud Universiteit Nijmegen, 2010.
* [33] Steffen Smolka, Nate Foster, Justin Hsu, Tobias Kappé, Dexter Kozen, and Alexandra Silva. Guarded kleene algebra with tests: verification of uninterpreted programs in nearly linear time. Proc. ACM Program. Lang., 4(POPL):61:1–61:28, 2020. doi:10.1145/3371129.
* [34] Eugene W. Stark and Scott A. Smolka. A complete axiom system for finite-state probabilistic processes. In Gordon D. Plotkin, Colin Stirling, and Mads Tofte, editors, Proof, Language, and Interaction, Essays in Honour of Robin Milner, pages 571–596. The MIT Press, 2000.
* [35] Alfred Tarski. A lattice-theoretical fixpoint theorem and its applications. Pacific Journal of Mathematics, 5(2):285 – 309, 1955.
* [36] Franck van Breugel. On behavioural pseudometrics and closure ordinals. Inf. Process. Lett., 112(19):715–718, 2012. URL: https://doi.org/10.1016/j.ipl.2012.06.019, doi:10.1016/J.IPL.2012.06.019.
* [37] Franck van Breugel. Probabilistic bisimilarity distances. ACM SIGLOG News, 4(4):33–51, 2017. doi:10.1145/3157831.3157837.
* [38] Franck van Breugel and James Worrell. Towards quantitative verification of probabilistic transition systems. In Fernando Orejas, Paul G. Spirakis, and Jan van Leeuwen, editors, Automata, Languages and Programming, 28th International Colloquium, ICALP 2001, Crete, Greece, July 8-12, 2001, Proceedings, volume 2076 of Lecture Notes in Computer Science, pages 421–432. Springer, 2001. doi:10.1007/3-540-48224-5\\_35.
* [39] Franck van Breugel and James Worrell. Approximating and computing behavioural distances in probabilistic transition systems. Theor. Comput. Sci., 360(1-3):373–385, 2006. URL: https://doi.org/10.1016/j.tcs.2006.05.021, doi:10.1016/J.TCS.2006.05.021.
* [40] Rob J. van Glabbeek. The linear time-branching time spectrum (extended abstract). In Jos C. M. Baeten and Jan Willem Klop, editors, CONCUR ’90, Theories of Concurrency: Unification and Extension, Amsterdam, The Netherlands, August 27-30, 1990, Proceedings, volume 458 of Lecture Notes in Computer Science, pages 278–297. Springer, 1990. URL: https://doi.org/10.1007/BFb0039066, doi:10.1007/BFB0039066.
* [41] Cédric Villani. Optimal Transport. Springer Berlin Heidelberg, 2009. URL: http://dx.doi.org/10.1007/978-3-540-71050-9, doi:10.1007/978-3-540-71050-9.
* [42] Jana Wagemaker, Marcello M. Bonsangue, Tobias Kappé, Jurriaan Rot, and Alexandra Silva. Completeness and incompleteness of synchronous kleene algebra. In Graham Hutton, editor, Mathematics of Program Construction - 13th International Conference, MPC 2019, Porto, Portugal, October 7-9, 2019, Proceedings, volume 11825 of Lecture Notes in Computer Science, pages 385–413. Springer, 2019. doi:10.1007/978-3-030-33636-3\\_14.
* [43] Paul Wild and Lutz Schröder. Characteristic logics for behavioural hemimetrics via fuzzy lax extensions. Log. Methods Comput. Sci., 18(2), 2022. URL: https://doi.org/10.46298/lmcs-18(2:19)2022, doi:10.46298/LMCS-18(2:19)2022.
## Appendix A Preliminaries
See 2.2
###### Proof A.1.
Let $\iota:\langle e\rangle_{{\mathcal{R}}}\hookrightarrow\mathsf{Exp}$ be the
canonical inclusion homomorphism. Composing it with $L_{\mathcal{R}}$, the unique
homomorphism from $\mathcal{R}$ into the final automaton $\mathcal{L}$, yields
a homomorphism $L_{\mathcal{R}}\circ\iota$ from $\langle
e\rangle_{{\mathcal{R}}}$ to the final automaton, which is the same as
$L_{\langle e\rangle_{{\mathcal{R}}}}$, since homomorphisms into the final
automaton are unique. Using Lemma 2.2, we can show the following:
$\llbracket
e\rrbracket=L_{\mathcal{R}}(e)=L_{\mathcal{R}}(\iota(e))=L_{\langle
e\rangle_{{\mathcal{R}}}}(e)$
## Appendix B Behavioural distances
See 3.1
###### Proof B.1.
Follows from [5, Theorem 5.23] and [5, Lemma 6.1]
See 3.2
###### Proof B.2.
1. Follows from [5, Example 5.33], [5, Lemma 5.24] and [5, Theorem 6.10].
2.
Let $m,m^{\prime}\in M$. Recall that by Proposition 3.1,
$d_{\mathcal{M}}(m,m^{\prime})=d_{\mathcal{L}}(L_{\mathcal{M}}(m),L_{\mathcal{M}}(m^{\prime}))$.
The implication $d_{\mathcal{M}}(m,m^{\prime})=0\implies
L_{\mathcal{M}}(m)=L_{\mathcal{M}}(m^{\prime})$ follows from the fact that
$d_{\mathcal{L}}$ is a metric. The converse holds because of [5, Lemma 6.6].
## Appendix C Quantitative algebras
See 4.2
###### Proof C.1.
Since $d_{\mathcal{L}}$ is a pseudometric,
$d^{\mathcal{B}}=d_{\mathcal{L}}\circ(\llbracket-\rrbracket\times\llbracket-\rrbracket)$
is also a pseudometric. We now verify the nonexpansivity of the interpretations of
operations with non-zero arity. Let $e,f,g,h\in\mathsf{Exp}$,
$d^{\mathcal{B}}(e,g)\leq\varepsilon$ and
$d^{\mathcal{B}}(f,h)\leq\varepsilon$.
1.
We show that $d^{\mathcal{B}}(e+f,g+h)\leq\varepsilon$. In the case when
$\varepsilon=0$, the proof simplifies to showing that if $\llbracket
e\rrbracket=\llbracket g\rrbracket$ and $\llbracket f\rrbracket=\llbracket
h\rrbracket$ then $\llbracket e+f\rrbracket=\llbracket g+h\rrbracket$, which
holds immediately. For the remaining case, when $\varepsilon>0$, let
$n=\lceil\log_{\lambda}\varepsilon\rceil$. Observe that in such a case, we
have that $d^{\mathcal{B}}(e,g)\leq\lambda^{n}$ and
$d^{\mathcal{B}}(f,h)\leq\lambda^{n}$. Using it, we can deduce that
$\llbracket e\rrbracket$ and $\llbracket g\rrbracket$ (and similarly
$\llbracket f\rrbracket$ and $\llbracket h\rrbracket$) agree on all words of
length strictly below $n$ (because the shortest word for which they disagree
is at least of length $n$). To put that more formally:
$\forall w\in A^{\ast}\ldotp|w|<n\implies\left(w\in\llbracket e\rrbracket\iff
w\in\llbracket g\rrbracket\right)\wedge\left(w\in\llbracket f\rrbracket\iff
w\in\llbracket h\rrbracket\right)$
Let $w\in A^{\ast}$, such that $|w|<n$. We have that
$\displaystyle w\in\llbracket e+f\rrbracket$ $\displaystyle\iff w\in\llbracket
e\rrbracket\cup\llbracket f\rrbracket\iff\left(w\in\llbracket
e\rrbracket\right)\vee\left(w\in\llbracket f\rrbracket\right)$
$\displaystyle\iff\left(w\in\llbracket
g\rrbracket\right)\vee\left(w\in\llbracket h\rrbracket\right)$ ($|w|<n$)
$\displaystyle\iff w\in\llbracket g+h\rrbracket$
And thus $\llbracket e+f\rrbracket$ and $\llbracket g+h\rrbracket$ agree on
all words of length below $n$, and therefore
$d^{\mathcal{B}}(e+f,g+h)\leq\lambda^{n}\leq\varepsilon$.
2. The case for $\varepsilon=0$ holds immediately through the same line of
reasoning as before, relying on the well-definedness of the $\diamond$ (concatenation)
operation on languages. We focus on the remaining case, making the same
simplification as before: we assume that $\llbracket e\rrbracket$ and
$\llbracket g\rrbracket$ (as well as $\llbracket f\rrbracket$ and $\llbracket
h\rrbracket$) agree on all words of length strictly below $n$. We show that
$\llbracket e\mathbin{;}f\rrbracket$ and $\llbracket g\mathbin{;}h\rrbracket$
also agree on all words of length strictly less than $n$. Let $w\in
A^{\ast}$ such that $|w|<n$. We have that:
$\displaystyle w\in\llbracket e\mathbin{;}f\rrbracket$ $\displaystyle\iff
w\in\llbracket e\rrbracket\diamond\llbracket f\rrbracket$
$\displaystyle\iff\left(\exists u,v\in A^{*}.w=uv\wedge u\in\llbracket
e\rrbracket\wedge v\in\llbracket f\rrbracket\right)$
$\displaystyle\iff\left(\exists u,v\in A^{*}.w=uv\wedge u\in\llbracket
g\rrbracket\wedge v\in\llbracket h\rrbracket\right)$ ($|u|<n$ and $|v|<n$)
$\displaystyle\iff w\in\llbracket g\rrbracket\diamond\llbracket
h\rrbracket\iff w\in\llbracket g\mathbin{;}h\rrbracket$
3. We use the same line of reasoning as before. Assume that $\llbracket
e\rrbracket$ and $\llbracket g\rrbracket$ agree on all words of length below
$n$. Let $w\in A^{*}$ such that $|w|<n$. We have the following:
$\displaystyle w\in\llbracket e^{*}\rrbracket$ $\displaystyle\iff
w\in\llbracket e\rrbracket^{*}$ $\displaystyle\iff
w=\epsilon\vee\left(\exists k\geq 1\ldotp\exists u_{1},\dots,u_{k}\in
A^{*}\ldotp w=u_{1}\dots u_{k}\right.$ $\displaystyle\left.\qquad\qquad\wedge
u_{1}\in\llbracket e\rrbracket\wedge\dots\wedge u_{k}\in\llbracket
e\rrbracket\right)$ $\displaystyle\iff w=\epsilon\vee\left(\exists k\geq
1\ldotp\exists u_{1},\dots,u_{k}\in A^{*}\ldotp w=u_{1}\dots u_{k}\right.$
$\displaystyle\left.\qquad\qquad\wedge u_{1}\in\llbracket
g\rrbracket\wedge\dots\wedge u_{k}\in\llbracket g\rrbracket\right)$
($|u_{1}|<n,\dots,|u_{k}|<n$) $\displaystyle\iff w\in\llbracket
g\rrbracket^{*}\iff w\in\llbracket g^{*}\rrbracket$
See 4.6
###### Proof C.2.
By induction on $e\in\mathsf{Exp}$. The cases when $e=1$ and $e=(e_{1})^{*}$
are not possible, because of the assumption that $\epsilon\notin\llbracket
e\rrbracket$.
$e=0$ Because of the $(\mathsf{0S})$ axiom, we can derive that
$e\mathbin{;}f\equiv_{0}0\equiv_{0}0\mathbin{;}g\equiv_{0}e\mathbin{;}g$. We
can then show the desired conclusion using the $(\textsf{Max})$ rule.
$e=a$ Holds immediately, because of the $(\textsf{$\lambda$-Pref})$ axiom.
$e=e_{1}+e_{2}$ Because of the assumption, both $\epsilon\notin\llbracket
e_{1}\rrbracket$ and $\epsilon\notin\llbracket e_{2}\rrbracket$. Using the
induction hypothesis, we can derive that $\vdash
e_{1}\mathbin{;}f\equiv_{\varepsilon^{\prime}}e_{1}\mathbin{;}g$ and
$e_{2}\mathbin{;}f\equiv_{\varepsilon^{\prime}}e_{2}\mathbin{;}g$. We can
apply the $(\mathsf{SL5})$ axiom to derive that $\vdash
e_{1}\mathbin{;}f+e_{2}\mathbin{;}f\equiv_{\varepsilon^{\prime}}e_{1}\mathbin{;}g+e_{2}\mathbin{;}g$.
Finally, we can apply the $(\mathsf{D2})$ axiom to both sides through (Triang)
and derive
$\vdash(e_{1}+e_{2})\mathbin{;}f\equiv_{\varepsilon^{\prime}}(e_{1}+e_{2})\mathbin{;}g$
as desired.
$e=e_{1}\mathbin{;}e_{2}$ Because of the assumption, $\epsilon\notin\llbracket
e_{1}\rrbracket$ or $\epsilon\notin\llbracket e_{2}\rrbracket$. First, let’s
consider the subcase when both $\epsilon\notin\llbracket e_{1}\rrbracket$ and
$\epsilon\notin\llbracket e_{2}\rrbracket$. By induction hypothesis, we have
that $\vdash e_{2}\mathbin{;}f\equiv_{\varepsilon^{\prime}}e_{2}\mathbin{;}g$.
Since $\lambda\in]0,1[$, we have that
$\lambda\cdot\varepsilon^{\prime}<\varepsilon^{\prime}$. Because of that, we
can apply the induction hypothesis again and obtain $\vdash
e_{1}\mathbin{;}e_{2}\mathbin{;}f\equiv_{\varepsilon^{\prime}}e_{1}\mathbin{;}e_{2}\mathbin{;}g$.
Now, let’s consider the subcase when $\epsilon\notin\llbracket
e_{1}\rrbracket$, but $\epsilon\in\llbracket e_{2}\rrbracket$. Using
$(\textsf{NExp})$, we can obtain $\vdash
e_{2}\mathbin{;}f\equiv_{\varepsilon}e_{2}\mathbin{;}g$. Then, since
$\epsilon\notin\llbracket e_{1}\rrbracket$, we can apply the induction
hypothesis and obtain $\vdash
e_{1}\mathbin{;}e_{2}\mathbin{;}f\equiv_{\varepsilon^{\prime}}e_{1}\mathbin{;}e_{2}\mathbin{;}g$
as desired. The remaining subcase, when $\epsilon\notin\llbracket
e_{2}\rrbracket$ but $\epsilon\in\llbracket e_{1}\rrbracket$ is symmetric and
therefore omitted.
See 4.7
###### Proof C.3.
By induction. If $n=0$, then using $(\textsf{Top})$, we can immediately
conclude that $\vdash g\equiv_{1}e^{*}\mathbin{;}f$. Since by the assumption
$\varepsilon\geq\lambda^{0}=1$, we can apply (Max) and obtain $\vdash
g\equiv_{\varepsilon}e^{*}\mathbin{;}f$.
For the inductive case, we have that $\varepsilon\geq\lambda^{n+1}$ and hence
$\varepsilon\cdot\lambda^{-1}\geq\lambda^{n}$. We cannot immediately apply the
induction hypothesis, as $\varepsilon\cdot\lambda^{-1}$ is not guaranteed to
be rational. Instead, we will use (Cont) of quantitative deduction systems.
Let $\varepsilon^{\prime}$ be an arbitrary rational number strictly greater
than $\varepsilon$ and let $\\{r_{n}\\}_{n\in\mathbb{N}}$ be any decreasing
sequence of rationals that converges to $\lambda^{-1}$. Pick element $r_{N}$
of that sequence that satisfies that
$\varepsilon^{\prime}\geq\varepsilon\cdot\lambda\cdot r_{N}$. We can always
pick such an element, as $\\{r_{n}\\}_{n\in\mathbb{N}}$ gets arbitrarily close
to $\lambda^{-1}$, so $\\{\lambda\cdot r_{n}\\}_{n\in\mathbb{N}}$ is a
decreasing sequence that converges to $1$ and additionally we have that
$\varepsilon^{\prime}>\varepsilon$, so
$\frac{\varepsilon^{\prime}}{\varepsilon}>1$. From the definition of the
limit, we know that there exists large enough $N\in\mathbb{N}$, such that
$|\lambda\cdot r_{N}-1|\leq\frac{\varepsilon^{\prime}}{\varepsilon}-1$. We can
simplify the above relying on the fact that $\lambda\cdot r_{n}\geq 1$ for all
$n\in\mathbb{N}$ and obtain that indeed
$\varepsilon^{\prime}\geq\varepsilon\cdot\lambda\cdot r_{N}$ as desired.
Since $\varepsilon\cdot r_{N}\geq\varepsilon\cdot\lambda^{-1}\geq\lambda^{n}$,
we can apply the induction hypothesis and obtain that $\vdash
g\equiv_{\varepsilon\cdot r_{N}}e^{*}\mathbin{;}f$. Since
$\epsilon\notin\llbracket e\rrbracket$, we can now use Lemma 4.6 to derive
that
$e\mathbin{;}g\equiv_{\varepsilon^{\prime}}e\mathbin{;}e^{*}\mathbin{;}f$.
Since we have shown it for arbitrary $\varepsilon^{\prime}>\varepsilon$, we
can use (Cont) rule of the quantitative deduction systems and conclude that
$\vdash e\mathbin{;}g\equiv_{\varepsilon}e\mathbin{;}e^{*}\mathbin{;}f$, as
desired.
Then, because of $(\textsf{Refl})$, we have that $\vdash f\equiv_{0}f$. We can
combine those two quantitative inferences using the $(\mathsf{SL5})$ axiom in
order to get $\vdash
e\mathbin{;}g+f\equiv_{\varepsilon}e\mathbin{;}e^{*}\mathbin{;}f+f$. By
assumption, the left-hand side satisfies $\vdash
g\equiv_{0}e\mathbin{;}g+f$. Now, consider the right-hand side of that
quantitative inference:
$\displaystyle\vdash e\mathbin{;}e^{*}\mathbin{;}f+f$
$\displaystyle\equiv_{0}e\mathbin{;}e^{*}\mathbin{;}f+1\mathbin{;}f$
($\mathsf{1S}$) $\displaystyle\equiv_{0}(e\mathbin{;}e^{*}+1)\mathbin{;}f$
($\mathsf{D2}$) $\displaystyle\equiv_{0}e^{*}\mathbin{;}f$ ($\mathsf{Unroll}$)
We can combine the reasoning above and conclude (using Triang) that $\vdash
g\equiv_{\varepsilon}e^{*}\mathbin{;}f$.
## Appendix D Completeness
See 5.10
###### Proof D.1.
Recall that
$d^{\mathcal{B}}=d_{\mathcal{L}}\circ(\llbracket-\rrbracket\times\llbracket-\rrbracket)$.
Moreover, the canonical quotient map
$[-]_{{\mathrel{\dot{\equiv}}}}:\mathsf{Exp}\to{\mathsf{Exp}}/{{\mathrel{\dot{\equiv}}}}$
is an automaton homomorphism from $\mathcal{R}$ to $\mathcal{Q}$. Composing it
with the language-assigning homomorphism
$L_{\mathcal{Q}}:{\mathsf{Exp}}/{{\mathrel{\dot{\equiv}}}}\to\mathcal{P}(A^{\ast})$
yields an automaton homomorphism
$L_{\mathcal{Q}}\circ[-]_{{\mathrel{\dot{\equiv}}}}:\mathsf{Exp}\to\mathcal{P}(A^{\ast})$,
which by finality must be the same as
$L_{\mathcal{R}}:\mathsf{Exp}\to\mathcal{P}(A^{\ast})$, and thus (by Lemma
2.2) the same as $\llbracket-\rrbracket$. Using the fact that automata
homomorphisms are isometric mappings between pseudometric spaces obtained
through constructing behavioural pseudometrics (Proposition 3.1), we can
derive the following:
$\displaystyle d^{\mathcal{B}}$
$\displaystyle\quad=d_{\mathcal{L}}\circ(\llbracket-\rrbracket\times\llbracket-\rrbracket)$
$\displaystyle\quad=d_{\mathcal{L}}\circ\left((L_{\mathcal{Q}}\circ[-]_{{\mathrel{\dot{\equiv}}}})\times(L_{\mathcal{Q}}\circ[-]_{{\mathrel{\dot{\equiv}}}})\right)$
$\displaystyle\quad=d_{\mathcal{L}}\circ(L_{\mathcal{Q}}\times L_{\mathcal{Q}})\circ([-]_{{\mathrel{\dot{\equiv}}}}\times[-]_{{\mathrel{\dot{\equiv}}}})$
$\displaystyle\quad=d_{\mathcal{Q}}\circ([-]_{{\mathrel{\dot{\equiv}}}}\times[-]_{{\mathrel{\dot{\equiv}}}})$ (Proposition 3.1)
Additionally, since
$\langle[e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}}\rangle_{{\mathcal{Q}}}$
is the subautomaton of $\mathcal{Q}$ containing all the derivatives (modulo
${\mathrel{\dot{\equiv}}}$) of $e$ and $f$, the canonical inclusion map
$\iota:\langle[e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}}\rangle_{{\mathcal{Q}}}\hookrightarrow\mathcal{Q}$
is an automaton homomorphism. Because
$\iota([e]_{{\mathrel{\dot{\equiv}}}})=[e]_{{\mathrel{\dot{\equiv}}}}$ and
$\iota([f]_{{\mathrel{\dot{\equiv}}}})=[f]_{{\mathrel{\dot{\equiv}}}}$, we can
again use Proposition 3.1 to show that
$d^{\mathcal{B}}(e,f)=d_{\mathcal{Q}}([e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}})=d_{\langle[e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}}\rangle_{{\mathcal{Q}}}}([e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}})$
Since
$\langle[e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}}\rangle_{{\mathcal{Q}}}$
has finitely many states (Lemma 2.3), by Lemma 5.3, Lemma 5.8 and Theorem 5.5 one can
use the simplified iterative formula to calculate the behavioural pseudometric
on
$\langle[e]_{{\mathrel{\dot{\equiv}}}},[f]_{{\mathrel{\dot{\equiv}}}}\rangle_{{\mathcal{Q}}}$.
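To make the last step concrete, the following is a minimal Haskell sketch of such an iterative computation, under the assumption that the simplified iterative formula takes the usual discounted shape $d_{k+1}(x,y)=\max\left(|o(x)-o(y)|,\;\lambda\cdot\max_{a\in A}d_{k}(x_{a},y_{a})\right)$, starting from the constantly-zero pseudometric; the types and names (`Auto`, `step`, `behDist`) are illustrative and not the paper's.

```haskell
import qualified Data.Map as M

-- A finite deterministic automaton with 0/1 outputs and one successor
-- per letter (the shape of the expression automata considered here).
data Auto s a = Auto
  { states   :: [s]          -- finitely many states (cf. Lemma 2.3)
  , alphabet :: [a]          -- assumed non-empty
  , out      :: s -> Double  -- output: 1.0 if the state accepts, else 0.0
  , delta    :: s -> a -> s  -- transition function
  }

-- One application of the assumed iterative formula:
--   d'(x,y) = max( |o(x) - o(y)| , lambda * max_a d(x_a, y_a) )
step :: Ord s => Double -> Auto s a -> M.Map (s, s) Double -> M.Map (s, s) Double
step lam aut d =
  M.fromList [ ((x, y), value x y) | x <- states aut, y <- states aut ]
  where
    value x y =
      max (abs (out aut x - out aut y))
          (lam * maximum [ d M.! (delta aut x a, delta aut y a)
                         | a <- alphabet aut ])

-- Iterate from the constantly-zero pseudometric until successive
-- approximations differ by at most a chosen tolerance.
behDist :: Ord s => Double -> Double -> Auto s a -> M.Map (s, s) Double
behDist lam tol aut = go d0
  where
    d0 = M.fromList [ ((x, y), 0) | x <- states aut, y <- states aut ]
    go d
      | change <= tol = d'
      | otherwise     = go d'
      where
        d'     = step lam aut d
        change = maximum (M.elems (M.unionWith (\u v -> abs (u - v)) d d'))
```

Under this assumed shape, with $\lambda<1$ the iterates on a finite automaton stabilize after finitely many steps (each entry is $0$ or $\lambda^{k}$ for the length $k$ of a shortest distinguishing word), so one could equally stop as soon as `d' == d` rather than using a tolerance.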
See 5.11
###### Proof D.2.
We proceed by induction on $e\in\mathsf{Exp}$. The base cases for $0$ and $1$
are trivial, so among the base cases we demonstrate only $e=a$ for $a\in A$.
$e=a$
$\displaystyle\vdash a$
$\displaystyle\quad\equiv_{0}a\mathbin{;}1$ ($\mathsf{S1}$)
$\displaystyle\quad\equiv_{0}a\mathbin{;}1+0$ ($\mathsf{SL4}$)
$\displaystyle\quad\equiv_{0}a\mathbin{;}(a)_{a}+o_{\mathcal{R}}(a)$ (Def. of derivatives)
Now, observe that for all $a^{\prime}\in A\setminus\{a\}$, we have that
$(a)_{a^{\prime}}=0$. Using axiom ($\mathsf{S0}$),
$a^{\prime}\mathbin{;}(a)_{a^{\prime}}\equiv_{0}0$. Through induction on the
size of $A\setminus\{a\}$, using axioms ($\mathsf{SL1}$) and
($\mathsf{SL4}$), one can show that $\sum_{a^{\prime}\in
A\setminus\{a\}}a^{\prime}\mathbin{;}(a)_{a^{\prime}}\equiv_{0}0$. We can
now combine the intermediate results into the following:
$\displaystyle\vdash a$
$\displaystyle\quad\equiv_{0}a\mathbin{;}(a)_{a}+o_{\mathcal{R}}(a)$ (Previous derivations)
$\displaystyle\quad\equiv_{0}a\mathbin{;}(a)_{a}+o_{\mathcal{R}}(a)+0$ ($\mathsf{SL1}$)
$\displaystyle\quad\equiv_{0}a\mathbin{;}(a)_{a}+0+o_{\mathcal{R}}(a)$ ($\mathsf{SL2}$)
$\displaystyle\quad\equiv_{0}a\mathbin{;}(a)_{a}+\sum_{a^{\prime}\in A\setminus\{a\}}a^{\prime}\mathbin{;}(a)_{a^{\prime}}+o_{\mathcal{R}}(a)$ (Previous inductive argument)
$\displaystyle\quad\equiv_{0}\sum_{a^{\prime}\in A}a^{\prime}\mathbin{;}(a)_{a^{\prime}}+o_{\mathcal{R}}(a)$ (Def. of $n$-ary sum)
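For instance, over the (illustrative) alphabet $A=\{a,b\}$ this derivation specializes to $\vdash a\equiv_{0}a\mathbin{;}1+b\mathbin{;}0+0$, since $(a)_{a}=1$, $(a)_{b}=0$ and $o_{\mathcal{R}}(a)=0$.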
$e=f+g$
$\displaystyle\vdash f+g$
$\displaystyle\quad\equiv_{0}\left(\sum_{a\in A}a\mathbin{;}(f)_{a}+o_{\mathcal{R}}(f)\right)+\left(\sum_{a\in A}a\mathbin{;}(g)_{a}+o_{\mathcal{R}}(g)\right)$ (Induction hypothesis)
$\displaystyle\quad\equiv_{0}\sum_{a\in A}\left(a\mathbin{;}(f)_{a}+a\mathbin{;}(g)_{a}\right)+\left(o_{\mathcal{R}}(f)+o_{\mathcal{R}}(g)\right)$ ($\mathsf{S3}$)
$\displaystyle\quad\equiv_{0}\sum_{a\in A}\left(a\mathbin{;}\left((f)_{a}+(g)_{a}\right)\right)+\left(o_{\mathcal{R}}(f)+o_{\mathcal{R}}(g)\right)$ ($\mathsf{D1}$)
$\displaystyle\quad\equiv_{0}\sum_{a\in A}\left(a\mathbin{;}(f+g)_{a}\right)+o_{\mathcal{R}}(f+g)$ (Def. of derivatives)
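As another sanity check over the illustrative alphabet $A=\{a,b\}$, instantiating the sum case with $f=a$ and $g=b$ gives $\vdash a+b\equiv_{0}a\mathbin{;}(1+0)+b\mathbin{;}(0+1)+0$, since $(a+b)_{a}=(a)_{a}+(b)_{a}=1+0$, $(a+b)_{b}=(a)_{b}+(b)_{b}=0+1$, and $o_{\mathcal{R}}(a+b)=0$.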
$e=f\mathbin{;}g$
$\displaystyle\vdash f\mathbin{;}g$
$\displaystyle\quad\equiv_{0}\left(\sum_{a\in A}a\mathbin{;}(f)_{a}+o_{\mathcal{R}}(f)\right)\mathbin{;}g$ (Induction hypothesis)
$\displaystyle\quad\equiv_{0}\sum_{a\in A}a\mathbin{;}(f)_{a}\mathbin{;}g+o_{\mathcal{R}}(f)\mathbin{;}g$ ($\mathsf{D2}$)
$\displaystyle\quad\equiv_{0}\sum_{a\in A}a\mathbin{;}(f)_{a}\mathbin{;}g+o_{\mathcal{R}}(f)\mathbin{;}\left(\sum_{a\in A}a\mathbin{;}(g)_{a}+o_{\mathcal{R}}(g)\right)$ (Induction hypothesis)
$\displaystyle\quad\equiv_{0}\sum_{a\in A}a\mathbin{;}(f)_{a}\mathbin{;}g+\left(\sum_{a\in A}o_{\mathcal{R}}(f)\mathbin{;}a\mathbin{;}(g)_{a}+o_{\mathcal{R}}(f)\mathbin{;}o_{\mathcal{R}}(g)\right)$ ($\mathsf{D1}$)
$\displaystyle\quad\equiv_{0}\sum_{a\in A}a\mathbin{;}(f)_{a}\mathbin{;}g+\left(\sum_{a\in A}a\mathbin{;}o_{\mathcal{R}}(f)\mathbin{;}(g)_{a}+o_{\mathcal{R}}(f)\mathbin{;}o_{\mathcal{R}}(g)\right)$ (($\mathsf{1S}$) and ($\mathsf{S1}$) if $o_{\mathcal{R}}(f)=1$, or ($\mathsf{0S}$) and ($\mathsf{S0}$) if $o_{\mathcal{R}}(f)=0$)
$\displaystyle\quad\equiv_{0}\sum_{a\in A}\left(a\mathbin{;}(f)_{a}\mathbin{;}g+a\mathbin{;}o_{\mathcal{R}}(f)\mathbin{;}(g)_{a}\right)+o_{\mathcal{R}}(f)\mathbin{;}o_{\mathcal{R}}(g)$ ($\mathsf{SL3}$)
$\displaystyle\quad\equiv_{0}\sum_{a\in A}a\mathbin{;}\left((f)_{a}\mathbin{;}g+o_{\mathcal{R}}(f)\mathbin{;}(g)_{a}\right)+o_{\mathcal{R}}(f)\mathbin{;}o_{\mathcal{R}}(g)$ ($\mathsf{D1}$)
$\displaystyle\quad\equiv_{0}\sum_{a\in A}a\mathbin{;}\left(f\mathbin{;}g\right)_{a}+o_{\mathcal{R}}(f\mathbin{;}g)$ (Def. of derivatives)
$e=f^{\ast}$
$\displaystyle\vdash f^{\ast}$
$\displaystyle\quad\equiv_{0}\left(\sum_{a\in A}a\mathbin{;}(f)_{a}+o_{\mathcal{R}}(f)\right)^{\ast}$ (Induction hypothesis)
$\displaystyle\quad\equiv_{0}\left(\sum_{a\in A}a\mathbin{;}(f)_{a}\right)^{\ast}$ (($\mathsf{Tight}$) if $o_{\mathcal{R}}(f)=1$ or ($\mathsf{SL4}$) if $o_{\mathcal{R}}(f)=0$)
$\displaystyle\quad\equiv_{0}\left(\sum_{a\in A}a\mathbin{;}(f)_{a}\right)\mathbin{;}\left(\sum_{a\in A}a\mathbin{;}(f)_{a}\right)^{\ast}+1$ ($\mathsf{Unroll}$)
$\displaystyle\quad\equiv_{0}\left(\sum_{a\in A}a\mathbin{;}(f)_{a}\right)\mathbin{;}f^{\ast}+1$ (Steps 1-2 and $(\textsf{NExp})$)
$\displaystyle\quad\equiv_{0}\left(\sum_{a\in A}a\mathbin{;}(f)_{a}\mathbin{;}f^{\ast}\right)+1$ ($\mathsf{D2}$)
$\displaystyle\quad\equiv_{0}\left(\sum_{a\in A}a\mathbin{;}\left(f^{\ast}\right)_{a}\right)+o_{\mathcal{R}}\left(f^{\ast}\right)$ (Def. of derivatives)
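The derivative clauses that the "Def. of derivatives" steps appeal to admit a direct functional reading. Below is a minimal Haskell sketch of those clauses, assuming the standard Brzozowski-style definitions that the proof manipulates; the datatype and the names `nullable`, `outE`, `deriv` and `expand` are illustrative, not the paper's.

```haskell
-- Expression syntax, mirroring e ::= 0 | 1 | a | e+f | e;f | e*
data Exp a
  = Zero | One | Atom a
  | Plus (Exp a) (Exp a)
  | Seq  (Exp a) (Exp a)
  | Star (Exp a)
  deriving Show

-- Does e accept the empty word?
nullable :: Exp a -> Bool
nullable Zero       = False
nullable One        = True
nullable (Atom _)   = False
nullable (Plus f g) = nullable f || nullable g
nullable (Seq f g)  = nullable f && nullable g
nullable (Star _)   = True

-- Output o_R(e), read back as the expression 0 or 1.
outE :: Exp a -> Exp a
outE e = if nullable e then One else Zero

-- Derivative (e)_a, following the clauses used in the proof:
--   (f+g)_a = (f)_a + (g)_a
--   (f;g)_a = (f)_a ; g + o_R(f) ; (g)_a
--   (f*)_a  = (f)_a ; f*
deriv :: Eq a => a -> Exp a -> Exp a
deriv _ Zero       = Zero
deriv _ One        = Zero
deriv x (Atom b)   = if x == b then One else Zero
deriv x (Plus f g) = Plus (deriv x f) (deriv x g)
deriv x (Seq f g)  = Plus (Seq (deriv x f) g) (Seq (outE f) (deriv x g))
deriv x (Star f)   = Seq (deriv x f) (Star f)

-- The expansion proved above: e = sum_{a in A} a;(e)_a + o_R(e).
expand :: Eq a => [a] -> Exp a -> Exp a
expand as e = foldr Plus (outE e) [ Seq (Atom x) (deriv x e) | x <- as ]
```

For example, `expand "ab" (Atom 'a')` builds the term corresponding to $a\mathbin{;}1+b\mathbin{;}0+0$ (up to associativity of $+$), matching the base case worked out above.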